GENERALIZATION OF BACKPROPAGATION TO RECURRENT AND HIGHER ORDER NEURAL NETWORKS

Fernando J. Pineda
Applied Physics Laboratory, Johns Hopkins University
Johns Hopkins Rd., Laurel MD 20707

Abstract

A general method for deriving backpropagation algorithms for networks with recurrent and higher order connections is introduced. The propagation of activation in these networks is determined by dissipative differential equations. The error signal is backpropagated by integrating an associated differential equation. The method is introduced by applying it to the recurrent generalization of the feedforward backpropagation network. The method is extended to the case of higher order networks and to a constrained dynamical system for training a content addressable memory. The essential feature of the adaptive algorithms is that the adaptive equation has a simple outer product form. Preliminary experiments suggest that learning can occur very rapidly in networks with recurrent connections. The continuous formalism makes the new approach more suitable for implementation in VLSI.

Introduction

One interesting class of neural networks, typified by the Hopfield networks (1,2) or the networks studied by Amari (3,4), are dynamical systems with three salient properties. First, they possess very many degrees of freedom; second, their dynamics are nonlinear; and third, their dynamics are dissipative. Systems with these properties can have complicated attractor structures and can exhibit computational abilities. The identification of attractors with computational objects, e.g. memories and rules, is one of the foundations of the neural network paradigm. In this paradigm, programming becomes an exercise in manipulating attractors. A learning algorithm is a rule or dynamical equation which changes the locations of fixed points to encode information. One way of doing this is to minimize, by gradient descent, some function of the system parameters. This general approach is reviewed by Amari (4) and forms the basis of many learning algorithms. The formalism described here is a specific case of this general approach.

The purpose of this paper is to introduce a formalism for obtaining adaptive dynamical systems which are based on backpropagation (5,6,7). These dynamical systems are expressed as systems of coupled first order differential equations. The formalism will be illustrated by deriving adaptive equations for a recurrent network with first order neurons, a recurrent network with higher order neurons and, finally, a recurrent first order associative memory.

Example 1: Recurrent backpropagation with first order units

Consider a dynamical system whose state vector x evolves according to the following set of coupled differential equations:

\frac{dx_i}{dt} = -x_i + g_i\Big(\sum_j w_{ij} x_j\Big) + I_i \qquad (1)

where i = 1, ..., N. The functions g_i are assumed to be differentiable and may have different forms for various populations of neurons. In this paper we shall make no other requirements on g_i. In the neural network literature it is common to take these functions to be sigmoid shaped. A commonly used form is the logistic function,

g(\xi) = \frac{1}{1 + e^{-\xi}}. \qquad (2)

This form is biologically motivated since it attempts to account for the refractory phase of real neurons. However, it is important to stress that there is nothing in the mathematical content of this paper which requires this form -- any differentiable function will suffice in the formalism presented here. For example, a choice which may be of use in signal processing is g(\xi) = \sin(\xi).
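As a concrete illustration of the forward dynamics, the following is a minimal Python sketch (not from the paper; the network size, weight scale and step size are arbitrary illustrative choices) that relaxes equation (1) to a fixed point by forward Euler integration.

```python
import numpy as np

def relax_forward(w, I, g, x0, dt=0.1, tol=1e-6, max_steps=10000):
    """Integrate dx/dt = -x + g(w @ x) + I (eq. 1) until a fixed point is reached."""
    x = x0.copy()
    for _ in range(max_steps):
        dx = -x + g(w @ x) + I
        x += dt * dx
        if np.max(np.abs(dx)) < tol:
            break
    return x

rng = np.random.default_rng(0)
N = 10
w = 0.5 * rng.standard_normal((N, N))            # illustrative weights
I = rng.standard_normal(N)                       # external input (nonzero on input units)
logistic = lambda xi: 1.0 / (1.0 + np.exp(-xi))  # the logistic function, eq. (2)

x_inf = relax_forward(w, I, logistic, x0=np.full(N, 0.5))
```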
A necessary condition for the learning algorithms discussed here to exist is that the system possesses stable isolated attractors, i.e. fixed points. The attractor structure of (1) is the same as that of the more commonly used equation

\frac{du_i}{dt} = -u_i + \sum_j w_{ij}\, g(u_j) + K_i, \qquad (3)

because (1) and (3) are related by a simple linear transformation; therefore results concerning the stability of (3) are applicable to (1). Amari (3) studied the dynamics of equation (3) in networks with random connections. He found that collective variables corresponding to the mean activation and its second moment must exhibit either stable or bistable behaviour. More recently, Hopfield (2) has shown how to construct content addressable memories from symmetrically connected networks with this same dynamical equation. The symmetric connections in the network guarantee global stability. The solution of equation (1) is also globally asymptotically stable if w can be transformed into a lower triangular matrix by row and column exchange operations, because in such a case the network is simply a feedforward network and the output can be expressed as an explicit function of the input. No Liapunov function exists for arbitrary weights, as can be demonstrated by constructing a set of weights which leads to oscillation. In practice, it is found that oscillations are not a problem and that the system converges to fixed points unless special weights are chosen. Therefore it shall be assumed, for the purposes of deriving the backpropagation equations, that the system ultimately settles down to a fixed point.

Consider a system of N neurons, or units, whose dynamics is determined by equation (1). Of all the units in the network we will arbitrarily define some subset of them (A) as input units and some other subset of them (Ω) as output units. Units which are neither members of A nor Ω are denoted hidden units. A unit may be simultaneously an input unit and an output unit. The external environment influences the system through the source term I. If a unit is an input unit, the corresponding component of I is nonzero. To make this more precise it is useful to introduce a notational convention. Suppose that Φ represents some subset of units in the network; then the function θ_{iΦ} is defined by

\theta_{i\Phi} = \begin{cases} 1 & \text{if the } i\text{-th unit is a member of } \Phi \\ 0 & \text{otherwise.} \end{cases} \qquad (4)

In terms of this function, the components of the I vector are given by

I_i = \xi_i\, \theta_{iA}, \qquad (5)

where ξ_i is determined by the external environment.

Our goal will be to find a local algorithm which adjusts the weight matrix w so that a given initial state x^0 = x(t_0) and a given input I result in a fixed point x^∞ = x(t_∞) whose components have a desired set of values T_i along the output units. This will be accomplished by minimizing a function E which measures the distance between the desired fixed point and the actual fixed point, i.e.

E = \frac{1}{2} \sum_{i=1}^{N} J_i^2 \qquad (6)

where

J_i = (T_i - x_i^\infty)\, \theta_{i\Omega}. \qquad (7)

E depends on the weight matrix w through the fixed point x^∞(w). A learning algorithm drives the fixed points towards the manifolds which satisfy x_i^∞ = T_i on the output units. One way of accomplishing this with dynamics is to let the system evolve in the weight space along trajectories which are antiparallel to the gradient of E. In other words,

\frac{dw_{ij}}{dt} = -\eta\, \frac{\partial E}{\partial w_{ij}}, \qquad (8)

where η is a numerical constant which defines the (slow) time scale on which w changes.
η must be small so that x is always essentially at steady state, i.e. x(t) ≈ x^∞. It is important to stress that the choice of gradient descent for the learning dynamics is by no means unique, nor is it necessarily the best choice. Other learning dynamics which employ second order time derivatives (e.g. the momentum method (5)) or which employ second order space derivatives (e.g. second order backpropagation (8)) may be more useful in particular applications. However, equation (8) does have the virtue of being the simplest dynamics which minimizes E.

On performing the differentiations in equation (8), one immediately obtains

\frac{dw_{rs}}{dt} = \eta \sum_k J_k \frac{\partial x_k^\infty}{\partial w_{rs}}. \qquad (9)

The derivative of x_k^∞ with respect to w_rs is obtained by first noting that the fixed points of equation (1) satisfy the nonlinear algebraic equation

x_i^\infty = g_i\Big(\sum_j w_{ij}\, x_j^\infty\Big) + I_i, \qquad (10)

differentiating both sides of this equation with respect to w_rs and finally solving for ∂x_k^∞/∂w_rs. The result is

\frac{\partial x_k^\infty}{\partial w_{rs}} = (L^{-1})_{kr}\, g_r'(u_r)\, x_s^\infty, \qquad (11)

where g_r' is the derivative of g_r and where the matrix L is given by

L_{ij} = \delta_{ij} - g_i'(u_i)\, w_{ij}. \qquad (12)

δ_ij is the Kronecker δ function (δ_ij = 1 if i = j, otherwise δ_ij = 0), and u_i ≡ Σ_j w_ij x_j^∞. On substituting (11) into (9) one obtains the remarkably simple form

\frac{dw_{rs}}{dt} = \eta\, y_r\, x_s^\infty, \qquad (13)

where

y_r = g_r'(u_r) \sum_k J_k\, (L^{-1})_{kr}. \qquad (14)

Equations (13) and (14) specify a formal learning rule. Unfortunately, equation (14) requires a matrix inversion to calculate the error signals y_k. Direct matrix inversions are necessarily nonlocal calculations and therefore this learning algorithm is not suitable for implementation as a neural network. Fortunately, a local method for calculating y_r can be obtained by the introduction of an associated dynamical system. To obtain this dynamical system, first rewrite equation (14) as

\sum_r L_{rk}\, \big( y_r / g_r'(u_r) \big) = J_k. \qquad (15)

Then multiply both sides by g_k'(u_k), substitute the explicit form for L and finally sum over r. The result is

0 = -y_k + g_k'(u_k)\Big( \sum_r w_{rk}\, y_r + J_k \Big). \qquad (16)

One now makes the observation that the solutions of this linear equation are the fixed points of the dynamical system given by

\frac{dy_k}{dt} = -y_k + g_k'(u_k)\Big( \sum_r w_{rk}\, y_r + J_k \Big). \qquad (17)

This last step is not unique; equation (16) could be transformed in various ways leading to related differential equations, cf. Pineda (9). It is not difficult to show that the first order finite difference approximation (with a time step Δt = 1) of equations (1), (13) and (17) has the same form as the conventional backpropagation algorithm.

Equations (1), (13) and (17) completely specify the dynamics for an adaptive neural network, provided that (1) and (17) converge to stable fixed points and provided that both quantities on the right hand side of equation (13) are the steady state solutions of (1) and (17). It was pointed out by Almeida (10) that the local stability of (1) is a sufficient condition for the local stability of (17). To prove this it suffices to linearize equation (1) about a stable fixed point. The resulting linearized equation depends on the same matrix L whose transpose appears in the derivation of equation (17), cf. equation (15). But L and L^T have the same eigenvalues; hence it follows that the fixed points of (17) must also be locally stable if the fixed points of (1) are locally stable.

Learning multiple associations

It is important to stress that up to this point the entire discussion has assumed that I and T are constant in time; thus no mechanism has been obtained for learning multiple input/output associations.
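The three coupled dynamical systems (1), (13) and (17) translate directly into a simple training step. The sketch below is an illustrative reconstruction, not the author's code (network size, gain and step sizes are arbitrary): it relaxes the forward and error networks to their fixed points and applies the outer-product weight update.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta, dt = 8, 0.2, 0.1
logistic = lambda u: 1.0 / (1.0 + np.exp(-u))
dlogistic = lambda u: logistic(u) * (1.0 - logistic(u))

def train_step(w, I, T, out_mask, steps=2000):
    # Relax the forward dynamics, eq. (1): dx/dt = -x + g(u) + I, with u = w @ x.
    x = np.full(N, 0.5)
    for _ in range(steps):
        u = w @ x
        x += dt * (-x + logistic(u) + I)
    J = (T - x) * out_mask                       # error on output units, eq. (7)
    # Relax the error dynamics, eq. (17): dy/dt = -y + g'(u) (w.T @ y + J).
    y = np.zeros(N)
    for _ in range(steps):
        y += dt * (-y + dlogistic(u) * (w.T @ y + J))
    # Outer-product weight update, eq. (13): dw_rs/dt = eta * y_r * x_s.
    w += eta * np.outer(y, x)
    return w, 0.5 * np.sum(J**2)                 # updated weights and E, eq. (6)

w = 0.5 * rng.standard_normal((N, N))
I = np.zeros(N); I[:3] = rng.standard_normal(3)          # units 0-2 are inputs
out_mask = (np.arange(N) >= N - 3).astype(float)         # units 5-7 are outputs
T = np.where(out_mask > 0, 0.8, 0.0)
for _ in range(50):
    w, E = train_step(w, I, T, out_mask)
```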
Two methods for training the network to learn multiple associations are now discussed. These methods lead to qualitatively different learning behaviour. Suppose that each input/output pair is labeled by a pattern label α, i.e. {I^α, T^α}. Then the energy function which is minimized in the above discussion must also depend on this label, since it is an implicit function of the I^α, T^α pairs. In order to learn multiple input/output associations it is necessary to minimize all the E[α] simultaneously. In other words, the function to minimize is

E_{total} = \sum_\alpha E[\alpha], \qquad (18)

where the sum is over all input/output associations. From (18) it follows that the gradient of E_total is simply the sum of the gradients for each association; hence the corresponding gradient descent equation has the form

\frac{dw_{ij}}{dt} = \eta \sum_\alpha y_i^\infty[\alpha]\, x_j^\infty[\alpha]. \qquad (19)

In numerical simulations, each time step of (19) requires relaxing (1) and (17) for each pattern and accumulating the gradient over all the patterns. This form of the algorithm is deterministic and is guaranteed to converge because, by construction, E_total is a Liapunov function for equation (19). However, the system may get stuck in a local minimum. This method is similar to the master/slave approach of Lapedes and Farber (11). Their adaptive equation, which plays the same role as equation (19), also has a gradient form, although it is not strictly descent along the gradient. For a randomly or fully connected network it can be shown that the number of operations required per weight update in the master/slave formalism is proportional to N^3, where N is the number of units: there are O(N^2) update equations and each equation requires O(N) operations (assuming some precomputation). On the other hand, in the backpropagation formalism each update equation requires only O(1) operations because of its trivial outer product form, and O(N^2) operations are required to precompute x^∞ and y^∞. The result is that each weight update requires only O(N^2) operations. It is not possible to conclude from this argument that one or the other approach will be more efficient in a particular application, because there are other factors to consider, such as the number of patterns and the number of time steps required for x and y to converge. A detailed comparison of the two methods is in preparation.

A second approach to learning multiple patterns is to use (13) and to change the patterns randomly on each time step. The system therefore receives a sequence of random impulses, each of which attempts to minimize E[α] for a single pattern. One can then define L(w) to be the mean E[α] (averaged over the distribution of patterns):

L(w) = \big\langle E[w, I^\alpha, T^\alpha] \big\rangle. \qquad (20)

Amari (4) has pointed out that if the sequence of random patterns is stationary and if L(w) has a unique minimum, then the theory of stochastic approximation guarantees that the solution w(t) of (13) will converge to the minimum point w_min of L(w), to within a small fluctuating term which vanishes as η tends to zero. Evidently η is analogous to the temperature parameter in simulated annealing. This second approach generally converges more slowly than the first, but it will ultimately converge (in a statistical sense) to the global minimum.
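The two schedules translate into different update loops. The toy sketch below is illustrative only: it substitutes a simple linear least-squares gradient for the relaxation-based gradient of equations (1) and (17), but it shows how the deterministic accumulation of (19) and the stochastic per-pattern updates would be organized.

```python
import numpy as np

rng = np.random.default_rng(2)
patterns = [(rng.standard_normal(4), rng.standard_normal(4)) for _ in range(5)]

def pattern_gradient(w, I, T):
    # Stand-in for the true outer-product gradient y^inf x^inf.T obtained by
    # relaxing eqs. (1) and (17); here a linear least-squares gradient.
    return np.outer(T - w @ I, I)

w, eta = np.zeros((4, 4)), 0.05

# Deterministic schedule, eq. (19): accumulate over all patterns per step.
for _ in range(200):
    w += eta * sum(pattern_gradient(w, I, T) for I, T in patterns)

# Stochastic schedule, eq. (20): one randomly chosen pattern per step.
for _ in range(1000):
    I, T = patterns[rng.integers(len(patterns))]
    w += eta * pattern_gradient(w, I, T)
```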
In principle the fixed points, to which the solutions of (1) and (17) eventually converge, depend on the initial states. Indeed, Amari's (3) results imply that equation (1) is bistable for certain choices of weights. Therefore the presentation of multiple patterns might seem problematical, since in both approaches the final state of the previous pattern becomes the initial state of the new pattern. The safest approach is to reinitialize the network to the same initial state each time a new pattern is presented, e.g. x_i(t_0) = 0.5 for all i. In practice the system learns robustly even if the initial conditions are chosen randomly.

Example 2: Recurrent higher order networks

It is straightforward to apply the technique of the previous section to a dynamical system with higher order units. Higher order systems have been studied by Sejnowski (12) and Lee et al. (13). Higher order networks may have definite advantages over networks with first order units alone. A detailed discussion of the backpropagation formalism applied to higher order networks is beyond the scope of this paper. Instead, the adaptive equations for a network with purely n-th order units will be presented as an example of the formalism. To this end, consider a dynamical system of the form

\frac{dx_i}{dt} = -x_i + g_i(u_i) + I_i, \qquad (21)

where

u_i = \sum_{jk \cdots l} w^{(n)}_{ijk \cdots l}\, f(x_j)\, f(x_k) \cdots f(x_l), \qquad (22)

and where there are n+1 indices and the summations are over all indices except i. The superscript on the weight tensor indicates the order of the correlation. Note that an additional nonlinear function f has been added to illustrate a further generalization. Both f and g must be differentiable and may be chosen to be sigmoids. It is not difficult, although somewhat tedious, to repeat the steps of the previous example to derive the adaptive equations for this system. The objective function in this case is the same as was used in the first example, i.e. equation (6). The n-th order gradient descent equation has the form

\frac{dw^{(n)}_{rs \cdots t}}{dt} = \eta\, y^{(n)\infty}_r\, f(x_s^\infty) \cdots f(x_t^\infty). \qquad (23)

Equation (23) illustrates the major feature of backpropagation which distinguishes it from other gradient descent algorithms or similar algorithms which make use of a gradient: namely, that the gradient of the objective function has a very trivial outer product form. y^{(n)∞} is the steady state solution of

\frac{dy^{(n)}_k}{dt} = -y^{(n)}_k + g_k'(u_k)\Big( f_k'(x_k) \sum_r v^{(n)}_{rk}\, y^{(n)}_r + J_k \Big). \qquad (24)

The matrix v^{(n)} plays the role of w in the previous example; however, v^{(n)} now depends on the state of the network according to

v^{(n)}_{ij} = \sum_k \cdots \sum_l s^{(n)}_{ijk \cdots l}\, f(x_k) \cdots f(x_l), \qquad (25)

where s^{(n)} is a tensor which is symmetric with respect to the exchange of the second index and all the indices to the right, i.e.

s^{(n)}_{ijk \cdots l} = w^{(n)}_{ijk \cdots l} + w^{(n)}_{ikj \cdots l} + \cdots + w^{(n)}_{ijl \cdots k}. \qquad (26)

Finally, it should be noted that: 1) if the polynomial u_i is not homogeneous, the adaptive equations are more complicated and involve cross terms between the various orders; and 2) the local stability of the n-th order backpropagation equations now depends on the eigenvalues of the matrix

L_{ij} = \delta_{ij} - g_i'(u_i)\, f_j'(x_j)\, v^{(n)}_{ij}. \qquad (27)

As before, if the forward propagation converges, so will the backward propagation.
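To make the higher order units concrete, the sketch below (an illustrative reconstruction; sizes and weight scale are arbitrary, with f = tanh and g logistic assumed) relaxes the purely second-order case (n = 2) of equations (21) and (22).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
w2 = 0.1 * rng.standard_normal((N, N, N))   # w^(2)_{ijk}: second-order weights
f = np.tanh
g = lambda u: 1.0 / (1.0 + np.exp(-u))

def relax_second_order(w2, I, x0, dt=0.1, steps=3000):
    """Integrate eq. (21) with u_i = sum_{jk} w2[i,j,k] f(x_j) f(x_k), eq. (22)."""
    x = x0.copy()
    for _ in range(steps):
        fx = f(x)
        u = np.einsum('ijk,j,k->i', w2, fx, fx)
        x += dt * (-x + g(u) + I)
    return x

x_inf = relax_second_order(w2, I=rng.standard_normal(N), x0=np.full(N, 0.5))
```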
Example 3: Adaptive content addressable memory

In this section the adaptive equations for a content addressable memory (CAM) are derived as a final illustration of the generality of the formalism. Perhaps the best known (and best studied) examples of dynamical systems which exhibit CAM behaviour are the systems discussed by Hopfield (1). Hopfield used a nonadaptive method for programming the symmetric weight matrix. More recently, Lapedes and Farber (11) have demonstrated how to construct a master dynamical system which can be used to train the weights of a slave system which has the Hopfield form. This slave system then performs the CAM operation. The resulting weights are not symmetric. The learning procedure presented in this section is most closely related to the method of Lapedes and Farber in that a master network is used to adjust the weights of a slave network. In contrast to the aforementioned formalism, which requires a very large associated weight matrix for the master network, both the master and slave networks of the following approach make use of the same weight matrix.

The CAM under consideration is based on equation (1). However, the interpretation of the dynamics will be somewhat different from the first section. The main difference is that the dynamics in the learning phase is constrained. The constrained dynamical system is denoted the master network; the unconstrained system is denoted the slave network. The units in the network are divided into only two sets: the set of visible units (V) and the set of internal or hidden units (H). There will be no distinction made between input and output units. Thus, I will generally be zero unless an input bias is needed in some application. The dynamical system will be used as an autoassociative memory; thus memory recall is performed by starting the network at a particular initial state which represents partial information about a stored memory. More precisely, suppose that there exists a subset K of the visible units whose states are known to have values T_i. Then the initial state of the network is

x_i(t_0) = T_i\, \theta_{iK} + b_i\, (1 - \theta_{iK}), \qquad (28)

where the b_i are arbitrary. The CAM relaxes to the previously stored memory whose basin of attraction contains this partial state. Memories are stored by a master network whose topology is exactly the same as the slave network, but whose dynamics is somewhat modified. The state vector z of the master network evolves according to the equation

\frac{dz_i}{dt} = -z_i + g_i\Big(\sum_{k=1}^{N} w_{ik} Z_k\Big) + I_i, \qquad (29)

where Z is defined by

Z_i = T_i\, \theta_{iV} + z_i\, \theta_{iH}. \qquad (30)

The components of Z along the visible units are just the target values specified by T. This equation is useful as a master equation because if the weights can be chosen so that the z_i of the visible units relax to the target values T_i, then a fixed point of (29) is also a fixed point of (1). It can be concluded, therefore, that by training the weights of the master network one is also training the weights of the slave network. Note that the form of Z implies that equation (29) can be rewritten as

\frac{dz_i}{dt} = -z_i + g_i\Big(\sum_{k \in H} w_{ik} z_k - \theta_i\Big) + I_i, \qquad (31)

where

\theta_i = -\sum_{k \in V} w_{ik} T_k. \qquad (32)

From equations (31) and (32) it is clear that the dynamics of the master system is driven by the thresholds, which depend on the targets.

To derive the adaptive equations, consider the objective function

E_{master} = \frac{1}{2} \sum_{i=1}^{N} J_i^2, \qquad (33)

where

J_i = (T_i - z_i^\infty)\, \theta_{iV}. \qquad (34)

It is straightforward to apply the steps discussed in previous sections to E_master. This results in adaptive equations for the weights. The mathematical details will be omitted since they are essentially the same as before; the gradient descent equation is

\frac{dw_{ij}}{dt} = \eta\, y_i^\infty\, Z_j^\infty, \qquad (35)

where y^∞ is the steady state solution of

\frac{dy_k}{dt} = -y_k + g_k'(v_k)\Big( \theta_{kH} \sum_r w_{rk}\, y_r + J_k \Big), \qquad (36)

where

v_i = \sum_k w_{ik}\, Z_k^\infty. \qquad (37)

Equations (31) and (35)-(37) define the dynamics of the master network. To train the slave network to be an autoassociative memory it is necessary to use the stored memories as the initial states of the master network, i.e.

z_i(t_0) = T_i\, \theta_{iV} + b_i\, \theta_{iH}, \qquad (39)

where b_i is an arbitrary value as before.
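The clamped master dynamics (29)-(30) amount to replacing the visible components of the state by their targets inside the nonlinearity at every integration step. The following sketch is an illustrative reconstruction (the 10 visible / 5 hidden split and the 0.1-0.9 target range follow the simulation described below; the weights and step size are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(4)
n_vis, n_hid = 10, 5
N = n_vis + n_hid
vis = np.arange(N) < n_vis                  # theta_iV as a boolean mask
w = 0.3 * rng.standard_normal((N, N))
g = lambda u: 1.0 / (1.0 + np.exp(-u))

def relax_master(w, T, dt=0.1, steps=4000):
    """Relax dz/dt = -z + g(w @ Z), with Z clamped to the targets T on visible units."""
    z = np.full(N, 0.5)
    for _ in range(steps):
        Z = np.where(vis, T, z)             # eq. (30)
        z += dt * (-z + g(w @ Z))           # eq. (29), with I = 0
    return z

T = np.where(vis, rng.uniform(0.1, 0.9, N), 0.0)   # targets on the visible units
z_inf = relax_master(w, T)
```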
The previous discussions concerning the stability of the three equations (1), (13) and (17) apply to equations (31), (35) and (36) as well. It is also possible to derive the adaptive equations for a higher order associative network, but this will not be done here.

Only preliminary computer simulations have been performed with this algorithm to verify its validity, but more extensive experiments are in progress. The first simulation was with a fully connected network with 10 visible units and 5 hidden units. The training set consisted of four random binary vectors, with the magnitudes of the vectors adjusted so that 0.1 ≤ T_i ≤ 0.9. The equations were approximated by first order finite difference equations with Δt = 1 and η = 1. The training was performed with the deterministic method for learning multiple associations. Figure 1 shows E_total as a function of the number of updates for both the master and slave networks. E_total for the slave exhibits discontinuous behaviour because the trajectory through the weight space causes x(t_0) to cut across the basins of attraction of the fixed points of equation (1). The number of updates required for the network to learn the patterns is relatively modest and can be reduced further by increasing η. This suggests that learning can occur very rapidly in this type of network.

Discussion

The algorithms presented here by no means exhaust the class of possible adaptive algorithms which can be obtained with this formalism. Nor is the choice of gradient descent a crucial feature in this formalism. The key idea is that it is possible to express the gradient of an objective function as the outer product of vectors which can be calculated by dynamical systems. This outer product form is also responsible for the fact that the gradient can be calculated with only O(N^2) operations in a fully connected or randomly connected network. In fact, the number of operations per weight update is proportional to the number of connections in the network. The methods used here will generalize to calculate higher order derivatives of the objective function as well. The fact that the algorithms are expressed as differential equations suggests that they may be implemented in analog electronic or optical hardware.

Figure 1: E_total as a function of the number of updates, for the master and slave networks.

References

(1) J. J. Hopfield, Neural networks as physical systems with emergent collective computational abilities, Proc. Nat. Acad. Sci. USA, Bio. 79, 2554-2558 (1982).
(2) J. J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Nat. Acad. Sci. USA, Bio. 81, 3088-3092 (1984).
(3) Shun-Ichi Amari, IEEE Trans. on Systems, Man and Cybernetics 2, 643-657 (1972).
(4) Shun-Ichi Amari, in Systems Neuroscience, ed. Jacqueline Metzler, Academic Press (1977).
(5) D. E. Rumelhart, G. E. Hinton and R. J. Williams, in Parallel Distributed Processing, edited by D. E. Rumelhart and J. L. McClelland, M.I.T. Press (1986).
(6) David B. Parker, Learning-Logic, Invention Report S81-64, File 1, Office of Technology Licensing, Stanford University, October 1982.
(7) Y. LeCun, Proceedings of Cognitiva 85, p. 599 (1985).
(8) David B. Parker, Second Order Backpropagation: Implementing an Optimal O(n) Approximation to Newton's Method as an Artificial Neural Network, submitted to Computer (1987).
(9) Fernando J. Pineda, Generalization of backpropagation to recurrent neural networks, Phys. Rev. Lett. 59, 2229-2232 (1987).
(10) Luis B. Almeida, in Proceedings of the IEEE First Annual International Conference on Neural Networks, San Diego, California, June 1987, edited by M. Caudill and C. Butler (to be published). This is a discrete version of the algorithm presented as the first example.
(11) Alan Lapedes and Robert Farber, A self-optimizing, nonsymmetrical neural net for content addressable memory and pattern recognition, Physica D22, 247-259 (1986); see also, Programming a Massively Parallel, Computation Universal System: Static Behaviour, in Neural Networks for Computing, Snowbird, UT 1986, AIP Conference Proceedings 151 (1986), edited by John S. Denker.
(12) Terrence J. Sejnowski, Higher-order Boltzmann Machines, draft preprint obtained from the author.
(13) Y. C. Lee, Gary Doolen, H. H. Chen, G. Z. Sun, Tom Maxwell, H. Y. Lee and C. Lee Giles, Machine learning using a higher order correlation network, Physica D22, 276-306 (1986).
A Model of Feedback to the Lateral Geniculate Nucleus

Carlos D. Brody
Computation and Neural Systems Program
California Institute of Technology
Pasadena, CA 91125

Abstract

Simplified models of the lateral geniculate nucleus (LGN) and striate cortex illustrate the possibility that feedback to the LGN may be used for robust, low-level pattern analysis. The information fed back to the LGN is rebroadcast to cortex using the LGN's full fan-out, so the cortex-LGN-cortex pathway mediates extensive cortico-cortical communication while keeping the number of necessary connections small.

1 INTRODUCTION

The lateral geniculate nucleus (LGN) in the thalamus is often considered as just a relay station on the way from the retina to visual cortex, since receptive field properties of neurons in the LGN are very similar to retinal ganglion cell receptive field properties. However, there is a massive projection from cortex back to the LGN: it is estimated that 3-4 times more synapses in the LGN are due to corticogeniculate connections than those due to retinogeniculate connections [12]. This suggests some important processing role for the LGN, but the nature of the computation performed has remained far from clear. I will first briefly summarize some anatomical facts and physiological results concerning the corticogeniculate loop, and then present a simplified model in which its function is to (usefully) mediate communication between cortical cells.

1.1 SOME ANATOMY AND PHYSIOLOGY

The LGN contains both principal cells, which project to cortex, and inhibitory interneurons. The projection to cortex sends collaterals into a sheet of inhibitory cells called the perigeniculate nucleus (PGN). PGN cells, in turn, project back to the LGN. The geniculocortical projection then proceeds into cortex, terminating principally in layers 4 and 6 in the cat [11, 12]. Areas 17, 18, and to a lesser extent, 19 are all innervated. Layer 6 cells in area 17 of the cat have particularly long, non-end-stopped receptive fields [2]. It is from layer 6 that the corticogeniculate projection back originates.¹ It, too, passes through the PGN, sending collaterals into it, and then contacts both principal cells and interneurons in the LGN, mostly in the more distal parts of their dendrites [10, 13]. Both the forward and the backward projection are retinotopically ordered. Thus there is the possibility of both excitatory and inhibitory effects in the corticogeniculate projection, which is principally what shall be used in the model.

¹ In all areas innervated by the LGN.

The first attempts to study the physiology of the corticogeniculate projection involved inactivating cortex in some way (often cooling cortex) while observing geniculate responses to simple visual stimuli. The results were somewhat inconclusive: some investigators reported that the projection was excitatory, some that it was inhibitory, and still others that it had no observable effect at all [1, 5, 9]. Later studies have emphasized the need for using stimuli which optimally excite the cortical cells which project to the LGN; inactivating cortex should then make a significant difference in the inputs to geniculate cells. This has helped to reveal some effects: for example, LGN cells with corticogeniculate feedback are end-stopped (that is, respond much less to long bars than to short bars), while the end-stopping is quite clearly reduced when the cortical input is removed [8].
One study [13] has used cross-correlation analysis between cortical and geniculate cells to suggest that there is spatial structure in the corticogeniculate projection: an excitatory corticogeniculate interaction was found if cells had receptive field centers that were close to each other, while an inhibitory interaction was found if the centers were farther apart. However, the precise spatial structure of the projection remains unknown.

2 A FEEDBACK MODEL

I will now describe a simplified model of the LGN and the corticogeniculate loop. The very simple connection scheme shown in Figure 1 originated in a suggestion by Christof Koch [3] that the long receptive fields in layer 6 might be used to facilitate contour completion at the LGN level. In the model, then, striate cortex simple cells feed back positively to the LGN, enhancing the conditions which gave rise to their firing. This reinforces, or completes, the oriented bar or edge patterns to which they are tuned. Assuming that the visual features of interest are for the most part oriented, while much of the noise in images may be isotropic and unoriented, enhancing the oriented features improves the signal-to-noise ratio.

Figure 1: Basic model connectivity: a schematic diagram showing the connections between the retina, LGN cells and V1 cells in the single spatial frequency channel model. LGN cells first filter the image linearly through a center-surround filter (∇²G), the result of which is then passed through a sigmoid nonlinearity (tanh). (In the simulations presented here G was a Gaussian with standard deviation 1.4 pixels.) V1 cells then provide oriented filtering, which is also passed through a nonlinearity (logistic; but see details in text) and fed back positively to the LGN to reinforce detected oriented edges. V1 cells excite LGN cells which have excitatory connections to them, and inhibit those that have inhibitory connections to them. Inhibition is implicitly assumed to be mediated by interneurons. (Note that there are no intracortical or intrageniculate connections: communication takes place entirely through the feedback loop.) See text for further details.

For simplicity, only striate cortex simple "edge-detecting" cells were modeled. Two models are presented. In the first one, all cortical cells have the same spatial frequency characteristics. In the second one, two channels, a high frequency channel and a low frequency channel, interact simultaneously.

2.1 SINGLE SPATIAL FREQUENCY CHANNEL MODEL

A schematic diagram of the model is shown in Figure 1. The retina is used simply as an input layer. To each input position (pixel) in the retina there corresponds one LGN unit. Linear weights from the retina to the LGN implement a ∇²G filter, where G(x, y) is a two-dimensional Gaussian. The LGN units then project to eight different pools of "orientation-tuned" cells in V1. Each of these pools has as many units as there are pixels in the input "retina". The weights in the projection forward to V1 represent eight rotations of the template shown in Figure 2(a), covering 360 degrees. This simulates basic orientation tuning in V1. Cortical cells then feed back positively to the geniculus, using rotations of the template shown in Figure 2(b).
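The retina-to-LGN stage is thus a Laplacian-of-Gaussian (∇²G) center-surround filter followed by a tanh nonlinearity. The sketch below is an illustrative reconstruction, not the author's code: σ = 1.4 pixels follows the Figure 1 caption, while the kernel size, the test image and the way feedback is injected are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def log_kernel(sigma=1.4, size=11):
    """Laplacian-of-Gaussian (del^2 G) center-surround kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()                 # zero net response to uniform input

def lgn_response(image, feedback=0.0):
    """LGN stage: linear del^2 G filtering plus (assumed additive) cortical
    feedback, passed through the tanh nonlinearity."""
    return np.tanh(convolve2d(image, log_kernel(), mode='same') + feedback)

image = np.zeros((32, 32)); image[16:, :] = 1.0   # a horizontal step edge
L = lgn_response(image)
```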
The precise dynamics of the model are as follows. R_i are real-valued retinal inputs, L_i are geniculate unit outputs, and V_i are cortical cell outputs. G_{ij} are weights from retina to LGN, F_{ij} forward weights from LGN to V1, and B_{ij} backward weights from V1 to LGN. α, β, γ, T_{C1} and T_{C2} are all constants. For geniculate units:

\frac{dl_i}{dt} = -\gamma\, l_i + \sum_j G_{ij} R_j + \sum_k B_{ik} V_k, \qquad L_i = \tanh(l_i),

while for cortical cell units:

\frac{dv_j}{dt} = -\alpha\, v_j + \sum_i F_{ji} L_i - \beta \Big( \sum_i |F_{ji}|\, L_i \Big)^{2},

where

V_j = \begin{cases} g(v_j - T_{C1}) & \text{if } v_j > T_{C2} \\ 0 & \text{otherwise,} \end{cases}

and g is the logistic function.
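Read as a discrete-time simulation, these equations alternate geniculate and cortical updates with the thresholded cortical output fed back into the LGN. The sketch below is an illustrative reconstruction under assumed values for α, β, γ and the thresholds, and with random stand-in weight matrices in place of the ∇²G and oriented templates of Figure 2.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix, n_cortex = 64, 64
G = 0.1 * rng.standard_normal((n_pix, n_pix))     # retina -> LGN (stand-in)
F = 0.1 * rng.standard_normal((n_cortex, n_pix))  # LGN -> V1 (stand-in)
B = 0.5 * F.T                                     # V1 -> LGN feedback (stand-in)
alpha = gamma = 1.0
beta = 0.01
Tc1, Tc2 = 0.0, 0.2
logistic = lambda x: 1.0 / (1.0 + np.exp(-x))

def run_loop(R, dt=0.1, steps=500):
    l = np.zeros(n_pix); v = np.zeros(n_cortex); V = np.zeros(n_cortex)
    for _ in range(steps):
        l += dt * (-gamma * l + G @ R + B @ V)         # geniculate units
        L = np.tanh(l)
        v += dt * (-alpha * v + F @ L - beta * (np.abs(F) @ L) ** 2)
        V = np.where(v > Tc2, logistic(v - Tc1), 0.0)  # thresholded cortical output
    return L, V

L, V = run_loop(rng.standard_normal(n_pix))
```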
Figure 2: Weights between the LGN and V1. Figure 2(a): forward weights, from the LGN to V1. Each circle represents the weight from a cell in the LGN; dark circles represent positive weights, light circles negative weights (assumed mediated by interneurons). The radius of each circle represents the strength of the corresponding weight. These weights create "edge-detecting" neurons in V1. Figure 2(b): backward weights, from V1 back to the LGN. Only cells close to the contrast edge receive strong feedback.

In the scheme described above many cortical cells have overlapping receptive fields, both in the forward projection from the geniculus and in the backward projection from cortex. A cell which is reinforcing an edge within its receptive field will also partially reinforce the edge for retinotopically nearby cortical cells. For nearby cells with similar orientation tuning, the reinforcement will enhance their own firing; they will then enhance the firing of other, similar, cells farther along; and so on. That is, the overlapping feedback fields allow the edge detection process to follow contours (note that the process is tempered at the geniculate level by actual input from the retina). This is illustrated in Figure 3.

Figure 3: Following contours: this figure shows the effect on the LGN of the feedback enhancement. The image on the left is the retinal input: a very weak, noisy horizontal edge. The center image is the LGN after two iterations of the simulation. Note that initially only certain sectors of the edge are detected (and hence enhanced). The rightmost image is the LGN after 8 iterations: the enhanced region has spread to cover the entire edge through the effect of horizontally oriented, overlapping receptive fields. This is the final stable point of the dynamics.

2.2 MULTIPLE SPATIAL FREQUENCY CHANNELS MODEL

In the model described above the LGN is integrating and summarizing the information provided by each of the orientation-tuned pools of cortical cells.² It does so in a way which would easily extend to cover other types of cortical cells (bar or grating "detectors", or varying spatial frequency channels). To experiment simply with this possibility, an extra set of eight pools of orientation-tuned "edge-detecting" cortical cells was added. The new set's weights were similar to the original weights described above, except that they had a "receptive field length" (see Figure 2) of 3 pixels; the original set had a "receptive field length" of 9 pixels. Thus one set was tuned for detecting short edges, while the other was tuned for detecting long edges. The effect of using both of these sets is illustrated in Figure 4. Both sets interact nonlinearly to produce edge detection rather more robust than either set used alone: the image produced using both simultaneously is not a linear addition of those produced using each set separately. Note how little noise is accepted as an edge. The same model, running with the same parameters but more pixels, was also tested on a real image. This is shown in Figure 5.

² A function not unlike that suggested by Mumford [7], except that here the "experts" are extremely low-level orientation-tuned channels.

Figure 4: Combined spatial frequency channels: the leftmost image is the retinal input, a weak noisy edge. (The other three images are "summary outputs", obtained as follows: the model produces activations in many pools of cortical cell units; the activations from all V1 units corresponding to a particular retinotopic position are added together to form a real-valued number corresponding to that position; and this is then displayed as a grey-scale pixel. Since only "edge-detecting" units were used, this provides a rough estimate of the certainty of there being an edge at that point.) Second from left we see the summary output of the model after 20 iterations (by which time it has stabilized), using only the low spatial frequency channel. Only a single segment of the edge is detected. Third from left is the output after 20 iterations using only the high frequency channel. Only isolated, short segments of the edge are detected. The rightmost image is the output using both channels simultaneously. Now the segments detected by the high frequency channel can combine with the original image to provide edges long enough for the low frequency channel to detect and complete into a single, long continuous edge.

3 DISCUSSION ON CONNECTIVITY

A major function fulfilled by the LGN in this model is that of providing a communications pathway between cortical cells, both between cells of similar orientation but different location or spatial frequency tuning, and between cells of different orientation tuning: for example, these last compete to reinforce their particular orientation preference on the geniculus. The model qualitatively shows that such a pathway, while mediated by a low-level representation like that of the LGN, can nevertheless be used effectively, producing contour-following and robust edge-detection.

We must now ask whether such a function could not be performed without feedback. Clearly, it could be done without feedback to the LGN, purely through intracortical connections, since any feedback network can in principle be "unfolded in time" into a feedforward network which performs the same computation, provided we have enough units and connections available. In other words, any suggested functional role for corticogeniculate feedback must not only include an account of the proposed computation performed, but also an account of why it is preferable to perform that computation through a feedback process, in terms of some efficiency measure (like the number of cells or synapses necessary, for example). There can be no other rationale, apart from fortuitous coincidence, for constructing an elaborate feedback mechanism to perform a computation that could just as well be done without it.

With this view in mind, it is worth re-stating that in this model any two cortical cells whose receptive fields overlap are connected (disynaptically) through the LGN. How many connections would we require in order to achieve similar communication if we only used direct connections between cortical orientation-tuned cells instead? In monkey, each cell's receptive field overlaps with approximately 10^6 others [4]; thus, any cortical cell would need to synapse onto at least 10^6 cells.
How many connections would we require in order to achieve similar communication if we only used direct connections between cortical orientation-tuned cells instead? In monkey, each cell's receptive field overlaps with approximately 10 6 others [4]- thus, A Model of Feedback to the Lateral Geniculate Nucleus ~?1~ . . '\ ,. ,, .~ .... ~ ..::~:.:- :o~, ;...=:: ... ,?;:~Hk.;~.~ i SF' __ ......" ,..... ff',.P.-..5*1 .~ .. II., . '. ?;romw i.;\'"W.,.,-W,...gg-,........... e '. il }: / .:.-r. ,6, t ~\ :: fi w~~\ r?~:,,~~,? ?? .-' ... ','., ......... '.'CrT5S , '.,,. ., ~ 1:' '........~., .. " .... ;..~ ? ??...;y;;..~ _ . .. .......-.??? .r?????? ~ ::' """'" .'::~?~:t;\' :I:~i~J:~~:?::;.:;~:;:;:;~~;~:~:~I:~:; :;:';:.~: _ :', :l~~s't]~k.il~:~~~~~~: Figure 5: A real image: The top image is the retinal input. Stippling is due to printing only. The center image is that obtained through detecting the zero-crossings of v 2 c. To reduce spurious edges. a minimum slope threshold was placed on the point of the zero-crossing below which edges were not accepted. The image shown here was the best that could be obtained through varying both the width of the Gaussian G and the slope threshold value. The last image shows the summary output from the model, using two simultaneous spatial frequency cha.nnels. Note how noise is reduced compared to the center image, straight lines are smoother, and resolution is not impaired, but is better in places (group of people at lower left. or "smoke stacks" atop launcher). 415 416 Brody any cortical cell would need to synapse onto at least 106 cells. If the information can be sent via the LGN, geniculate cell fan-out can reduce the number of necessary synapses by a significant factor. It is estimated that geniculate cells (in the cat) synapse onto at least 200 cortical cells (probably more) [6], reducing the number of necessary connections considerably. 4 BIOLOGY AND CONCLUSIONS In section 1.1 I noted one important study [8J which established that corticogeniculate input reduces firing of geniculate cells for long bars; this is in direct contradiction to the prediction which would be made by this model, where the feedback enhances firing for long features (here, edges). Thus, the model does not agree with known physiology. However, the model's value lies simply in clearly illustrating the possibility that feedback in a hierarchical processing scheme like the corticogeniculate loop can be utilized for robust, low-level pattern analysis, through the use of the cortex-+LGN-+cortex communications pathway. The possibility that a great deal of different types of information could be flowing through this pathway for this purpose should not be left unconsidered. Acknowledgements The author is supported by fellowships from the Parsons Foundation and from CONACYT (Mexico). Thanks are due to Michael Lyons for careful reading of the manuscript. References [IJ Baker, F. H. and Malpeli, J. G. 1977 Exp. Brain Res. 29 pp. 433-444 [2J Gilbert, C.D. 1977, J. Physiol., 268, pp. 391-421 [3J Koch, C. 1992, personal communication. [4J Hubel, D.H. and Wiesel, T. N. 1977, Proc. R. Soc. Lond. (B) 198 pp. 1-59 [5] Kalil, R. E. and Chase, R. 1970, J. Neurophysiol. 33 pp. 459-474 [6] Martin, K.A.C. 1988, Q. J. Exp. Phy. 73 pp. 637-702 [7] Mumford, D. 1991 Bioi. Cybern. 65 pp. 135-145 [8] Murphy, P.C. and Sillito, A.M. 1987, Nature 329 pp. 727-729 [9] Richard. D. et. al. 1975, Exp. Brain Res. 22 pp. 235-242 [10] Robson, J. A. 1983. J. Compo Neurol. 216 pp. 89-103 [11] Sherman, S. M. 1985. Prog. 
[12] Sherman, S. M. and Koch, C. 1986, Exp. Brain Res. 63, pp. 1-20
[13] Tsumoto, T. et al. 1978, Exp. Brain Res. 32, pp. 345-364
SchNet: A continuous-filter convolutional neural network for modeling quantum interactions

K. T. Schütt¹*, P.-J. Kindermans¹, H. E. Sauceda², S. Chmiela¹, A. Tkatchenko³, K.-R. Müller¹,⁴,⁵†
¹ Machine Learning Group, Technische Universität Berlin, Germany
² Theory Department, Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin, Germany
³ Physics and Materials Science Research Unit, University of Luxembourg, Luxembourg
⁴ Max-Planck-Institut für Informatik, Saarbrücken, Germany
⁵ Dept. of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
* [email protected]  † [email protected]

Abstract

Deep learning has the potential to revolutionize quantum chemistry as it is ideally suited to learn representations for structured data and speed up the exploration of chemical space. While convolutional neural networks have proven to be the first choice for images, audio and video data, the atoms in molecules are not restricted to a grid. Instead, their precise locations contain essential physical information that would get lost if discretized. Thus, we propose to use continuous-filter convolutional layers to be able to model local correlations without requiring the data to lie on a grid. We apply those layers in SchNet: a novel deep learning architecture modeling quantum interactions in molecules. We obtain a joint model for the total energy and interatomic forces that follows fundamental quantum-chemical principles. Our architecture achieves state-of-the-art performance for benchmarks of equilibrium molecules and molecular dynamics trajectories. Finally, we introduce a more challenging benchmark with chemical and structural variations that suggests the path for further work.

1 Introduction

The discovery of novel molecules and materials with desired properties is crucial for applications such as batteries, catalysis and drug design. However, the vastness of chemical compound space and the computational cost of accurate quantum-chemical calculations prevent an exhaustive exploration. In recent years, there have been increased efforts to use machine learning for the accelerated discovery of molecules and materials with desired properties [1-7]. However, these methods are only applied to stable systems in so-called equilibrium, i.e., local minima of the potential energy surface E(r_1, ..., r_n), where r_i is the position of atom i. Data sets such as the established QM9 benchmark [8] contain only equilibrium molecules. Predicting stable atom arrangements is in itself an important challenge in quantum chemistry and material science.

In general, it is not clear how to obtain equilibrium conformations without optimizing the atom positions. Therefore, we need to compute both the total energy E(r_1, ..., r_n) and the forces acting on the atoms:

F_i(r_1, \ldots, r_n) = -\frac{\partial E}{\partial r_i}(r_1, \ldots, r_n). \qquad (1)

One possibility is to use a less computationally costly, but also less accurate, quantum-chemical approximation. Instead, we choose to extend the domain of our machine learning model to both compositional (chemical) and configurational (structural) degrees of freedom.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this work, we aim to learn a representation for molecules using equilibrium and non-equilibrium conformations. Such a general representation for atomistic systems should follow fundamental quantum-mechanical principles. Most importantly, the predicted force field has to be curl-free. Otherwise, it would be possible to follow a circular trajectory of atom positions such that the energy keeps increasing, i.e., breaking the law of energy conservation.
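A standard way to obtain such a curl-free force field is to model the energy and differentiate it, so that equation (1) holds by construction. The sketch below illustrates this principle with automatic differentiation on a toy pairwise energy; it is an illustration, not the paper's architecture.

```python
import torch

def toy_energy(pos):
    """Stand-in energy model: a smooth function of interatomic distances."""
    d = torch.cdist(pos, pos)                   # (n, n) pairwise distances
    iu = torch.triu_indices(len(pos), len(pos), offset=1)
    return torch.sum(torch.exp(-d[iu[0], iu[1]]))

def energy_and_forces(energy_fn, pos):
    pos = pos.detach().requires_grad_(True)
    E = energy_fn(pos)
    # Forces are the negative gradient of the energy (eq. 1); any force field
    # obtained this way is conservative (curl-free) by construction.
    (grad,) = torch.autograd.grad(E, pos)
    return E, -grad

pos = torch.randn(5, 3)
E, F = energy_and_forces(toy_energy, pos)
```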
Otherwise, it would be possible to follow a circular trajectory of atom positions such that the energy keeps increasing, i.e., breaking the law of energy conservation. Furthermore, the potential energy surface as well as its partial derivatives have to be smooth, e.g., in order to be able to perform geometry optimization. Beyond that, it is beneficial that the model incorporates the invariance of the molecular energy with respect to rotation, translation and atom indexing. Being able to model both chemical and conformational variations constitutes an important step towards ML-driven quantum-chemical exploration.

This work provides the following key contributions:

- We propose continuous-filter convolutional (cfconv) layers as a means to move beyond grid-bound data such as images or audio towards modeling objects with arbitrary positions, such as astronomical observations or atoms in molecules and materials.
- We propose SchNet: a neural network specifically designed to respect essential quantum-chemical constraints. In particular, we use the proposed cfconv layers in R^3 to model interactions of atoms at arbitrary positions in the molecule. SchNet delivers both rotationally invariant energy predictions and rotationally equivariant force predictions. We obtain a smooth potential energy surface, and the resulting force field is guaranteed to be energy-conserving.
- We present a new, challenging benchmark, ISO17, including both chemical and conformational changes (the dataset is publicly available at www.quantum-machine.org). We show that training with forces improves generalization in this setting as well.

2 Related work

Previous work has used neural networks and Gaussian processes applied to hand-crafted features to fit potential energy surfaces [9-14]. Graph convolutional networks for circular fingerprints [15] and molecular graph convolutions [16] learn representations for molecules of arbitrary size. They encode the molecular structure using neighborhood relationships as well as bond features, e.g., one-hot encodings of single, double and triple bonds. In the following, we briefly review the related work that will be used in our empirical evaluation: gradient-domain machine learning (GDML), deep tensor neural networks (DTNN) and enn-s2s.

Gradient-domain machine learning (GDML). Chmiela et al. [17] proposed GDML as a method to construct force fields that explicitly obey the law of energy conservation. GDML captures the relationship between energy and interatomic forces (see Eq. 1) by training the gradient of the energy estimator. The functional relationship between atomic coordinates and interatomic forces is thus learned directly, and energy predictions are obtained by re-integration. However, GDML does not scale well, due to its kernel matrix growing quadratically with both the number of atoms and the number of examples. Beyond that, it is not designed to represent different compositions of atom types, unlike SchNet, DTNN and enn-s2s.

Deep tensor neural networks (DTNN). Schütt et al. [18] proposed the DTNN for molecules, inspired by the many-body Hamiltonian applied to the interactions of atoms. DTNNs have been shown to reach chemical accuracy on a small set of molecular dynamics trajectories as well as on QM9. Even though the DTNN shares its invariances with our proposed architecture, its interaction layers lack the continuous-filter convolution interpretation. It falls behind in accuracy compared to SchNet and enn-s2s.
enn-s2s. Gilmer et al. [19] proposed enn-s2s, a variant of message-passing neural networks that uses bond type features in addition to interatomic distances. It achieves state-of-the-art performance on all properties of the QM9 benchmark [19]. Unfortunately, it cannot be used for molecular dynamics predictions (MD17). This is caused by discontinuities in its potential energy surface due to the discreteness of the one-hot encodings in its input. In contrast, SchNet does not use such features and yields a continuous potential energy surface by using continuous-filter convolutional layers.

Figure 1: The discrete filter (left) is not able to capture the subtle positional changes of the atoms, resulting in discontinuous energy predictions Ê (bottom left). The continuous filter captures these changes and yields smooth energy predictions (bottom right).

3 Continuous-filter convolutions

In deep learning, convolutional layers operate on discretized signals such as image pixels [20, 21], video frames [22] or digital audio data [23]. While it is sufficient to define the filter on the same grid in these cases, this is not possible for unevenly spaced inputs such as the atom positions of a molecule (see Fig. 1). Other examples include astronomical observations [24], climate data [25] and the financial market [26]. Commonly, this can be solved by a re-sampling approach that defines a representation on a grid [7, 27, 28]. However, choosing an appropriate interpolation scheme is a challenge in its own right and possibly requires a large number of grid points. Therefore, various extensions of convolutional layers exist even beyond Euclidean space, e.g., for graphs [29, 30] and 3d shapes [31]. Analogously, we propose to use continuous filters that are able to handle unevenly spaced data, in particular atoms at arbitrary positions.

Given the feature representations of n objects X^l = (x_1^l, \dots, x_n^l) with x_i^l \in R^F at locations R = (r_1, \dots, r_n) with r_i \in R^D, the continuous-filter convolutional layer l requires a filter-generating function

W^l : R^D \to R^F

that maps from a position to the corresponding filter values. This constitutes a generalization of the filter tensor in discrete convolutional layers. As in dynamic filter networks [32], this filter-generating function is modeled with a neural network. While dynamic filter networks generate weights restricted to a grid structure, our approach generalizes this to arbitrary positions and numbers of objects. The output x_i^{l+1} of the convolutional layer at position r_i is then given by

x_i^{l+1} = (X^l * W^l)_i = \sum_j x_j^l \circ W^l(r_i - r_j),   (2)

where "\circ" denotes element-wise multiplication. We apply these convolutions feature-wise for computational efficiency [33]. The interactions between feature maps are handled by separate object-wise or, specifically, atom-wise layers in SchNet.

4 SchNet

SchNet is designed to learn a representation for the prediction of molecular energies and atomic forces. It reflects fundamental physical laws, including invariance to atom indexing and translation, a smooth energy prediction with respect to atom positions, and energy conservation of the predicted force fields. The energy and force predictions are rotationally invariant and equivariant, respectively.

Figure 2: Illustration of SchNet with an architectural overview (left), the interaction block (middle) and the continuous-filter convolution with its filter-generating network (right).
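To make Eq. (2) concrete, the following is a minimal PyTorch sketch of a continuous-filter convolution. The class name, the toy filter-generating network and all shapes are illustrative assumptions, not the authors' reference implementation (which, as described in Section 4.1, feeds expanded interatomic distances to the filter network).

```python
import torch
import torch.nn as nn

class CFConv(nn.Module):
    """Minimal continuous-filter convolution (Eq. 2); a sketch, not reference code."""
    def __init__(self, filter_net):
        super().__init__()
        # filter_net maps a position difference in R^D to filter values in R^F
        self.filter_net = filter_net

    def forward(self, x, r):
        # x: (n, F) atom features, r: (n, D) atom positions
        diff = r.unsqueeze(1) - r.unsqueeze(0)    # (n, n, D), diff[i, j] = r_i - r_j
        w = self.filter_net(diff)                 # (n, n, F), W(r_i - r_j)
        # element-wise product with x_j and sum over neighbors j (Eq. 2)
        return (x.unsqueeze(0) * w).sum(dim=1)    # (n, F)

# a toy filter-generating network (assumption: two dense layers with softplus)
filter_net = nn.Sequential(nn.Linear(3, 64), nn.Softplus(), nn.Linear(64, 64))
cfconv = CFConv(filter_net)
x = torch.randn(5, 64)    # 5 atoms, F = 64 feature maps
r = torch.randn(5, 3)     # positions in R^3
out = cfconv(x, r)        # (5, 64)
```

Because the filter is a function of the continuous position difference rather than a weight tensor indexed by grid offsets, the same layer handles atoms at arbitrary positions.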
The shifted softplus is defined as ssp(x) = ln(0.5 e^x + 0.5).

4.1 Architecture

Fig. 2 shows an overview of the SchNet architecture. At each layer, the molecule is represented atom-wise, analogous to pixels in an image. Interactions between atoms are modeled by the three interaction blocks. The final prediction is obtained after atom-wise updates of the feature representation and pooling of the resulting atom-wise energies. In the following, we discuss the different components of the network.

Molecular representation. A molecule in a certain conformation can be described uniquely by a set of n atoms with nuclear charges Z = (Z_1, \dots, Z_n) and atomic positions R = (r_1, \dots, r_n). Through the layers of the neural network, we represent the atoms using a tuple of features X^l = (x_1^l, \dots, x_n^l), with x_i^l \in R^F, where F is the number of feature maps, n the number of atoms and l the current layer. The representation of atom i is initialized using an embedding dependent on the atom type Z_i:

x_i^0 = a_{Z_i}.   (3)

The atom type embeddings a_Z are initialized randomly and optimized during training.

Atom-wise layers. A recurring building block in our architecture are atom-wise layers. These are dense layers that are applied separately to the representation x_i^l of each atom i:

x_i^{l+1} = W^l x_i^l + b^l.

These layers are responsible for the recombination of feature maps. Since the weights are shared across atoms, our architecture remains scalable with respect to the size of the molecule.

Interaction. The interaction blocks, shown in Fig. 2 (middle), are responsible for updating the atomic representations based on the molecular geometry R = (r_1, \dots, r_n). We keep the number of feature maps constant at F = 64 throughout the interaction part of the network. In contrast to MPNN and DTNN, we do not use weight sharing across multiple interaction blocks. The blocks use a residual connection inspired by ResNet [34]:

x_i^{l+1} = x_i^l + v_i^l.

As shown in the interaction block in Fig. 2, the residual v_i^l is computed through an atom-wise layer and an interatomic continuous-filter convolution (cfconv), followed by two more atom-wise layers with a softplus non-linearity in between. This allows for a flexible residual that incorporates interactions between atoms and feature maps.

Figure 3: 10x10 Å cuts through all 64 radial, three-dimensional filters in each interaction block of SchNet ((a) 1st, (b) 2nd, (c) 3rd interaction block) trained on molecular dynamics of ethanol. Negative values are blue, positive values are red.

Filter-generating networks. The cfconv layer, including its filter-generating network, is depicted in the right panel of Fig. 2. In order to satisfy the requirements for modeling molecular energies, we restrict the filters of the cfconv layers to be rotationally invariant. The rotational invariance is obtained by using the interatomic distances

d_{ij} = \|r_i - r_j\|

as input to the filter network. Without further processing, the filters would be highly correlated, since a neural network is close to linear after initialization. This leads to a plateau at the beginning of training that is hard to overcome. We avoid this by expanding the distances with radial basis functions

e_k(r_i - r_j) = \exp(-\gamma \|d_{ij} - \mu_k\|^2)

located at centers 0 Å ≤ μ_k ≤ 30 Å, spaced every 0.1 Å, with γ = 10 Å. The centers are chosen such that all distances occurring in the data sets are covered by the filters. Due to this additional non-linearity, the initial filters are less correlated, leading to faster training.
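As a concrete illustration of the distance expansion, the sketch below computes the radial basis features with the centers and γ quoted above; the function name and usage are assumptions for illustration, not the published code.

```python
import torch

def rbf_expand(distances, mu_min=0.0, mu_max=30.0, spacing=0.1, gamma=10.0):
    """Expand interatomic distances d_ij with Gaussian radial basis functions.

    distances: tensor of shape (...,) in Angstrom.
    Returns shape (..., K), here K = 300 centers spaced 0.1 A apart.
    """
    centers = torch.arange(mu_min, mu_max, spacing)      # mu_k
    d = distances.unsqueeze(-1)                          # (..., 1)
    return torch.exp(-gamma * (d - centers) ** 2)        # exp(-gamma (d - mu_k)^2)

d_ij = torch.tensor([0.96, 1.51, 2.43])   # example distances in Angstrom
features = rbf_expand(d_ij)               # (3, 300), peaked near each d_ij
```

Each distance activates only the few basis functions near it, which decorrelates the initial filters exactly as the text describes.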
Choosing fewer centers corresponds to reducing the resolution of the filter, while restricting the range of the centers corresponds to the filter size in a usual convolutional layer. An extensive evaluation of the impact of these variables is left for future work. We feed the expanded distances into two dense layers with softplus activations to compute the filter weight W(r_i - r_j), as shown in Fig. 2 (right).

Fig. 3 shows 2d cuts through the generated filters for all three interaction blocks of SchNet trained on an ethanol molecular dynamics trajectory. We observe how each filter emphasizes certain ranges of interatomic distances. This enables its interaction block to update the representations according to the radial environment of each atom. The sequential updates from three interaction blocks allow SchNet to construct highly complex many-body representations in the spirit of DTNNs [18], while keeping rotational invariance due to the radial filters.

4.2 Training with energies and forces

As described above, the interatomic forces are related to the molecular energy, so we can obtain an energy-conserving force model by differentiating the energy model with respect to the atom positions:

\hat{F}_i(Z_1, \dots, Z_n, r_1, \dots, r_n) = -\frac{\partial \hat{E}}{\partial r_i}(Z_1, \dots, Z_n, r_1, \dots, r_n).   (4)

Chmiela et al. [17] pointed out that this leads to an energy-conserving force field by construction. As SchNet yields rotationally invariant energy predictions, the force predictions are rotationally equivariant by construction. The model has to be at least twice differentiable to allow for gradient descent on the force loss. We chose the shifted softplus ssp(x) = ln(0.5 e^x + 0.5) as the non-linearity throughout the network in order to obtain a smooth potential energy surface. The shifting ensures that ssp(0) = 0 and improves the convergence of the network. This activation function is similar to ELUs [35], while having an infinite order of continuity.

Table 1: Mean absolute errors for energy predictions in kcal/mol on the QM9 data set with given training set size N. Best model in bold.

N       | SchNet | DTNN [18] | enn-s2s [19] | enn-s2s-ens5 [19]
50,000  | 0.59   | 0.94      | --           | --
100,000 | 0.34   | 0.84      | --           | --
110,462 | 0.31   | --        | 0.45         | 0.33

We include the total energy E as well as the forces F_i in the training loss to train a neural network that performs well on both properties:

\ell(\hat{E}, (E, F_1, \dots, F_n)) = \|E - \hat{E}\|^2 + \frac{\rho}{n} \sum_{i=0}^{n} \left\| F_i - \left(-\frac{\partial \hat{E}}{\partial R_i}\right) \right\|^2.   (5)

This kind of loss has been used before for fitting a restricted potential energy surface with MLPs [36]. In our experiments, we use ρ = 0 in Eq. 5 for pure energy-based training and ρ = 100 for combined energy and force training. The value of ρ was optimized empirically to account for the different scales of energies and forces.

Due to the relation of energies and forces reflected in the model, we expect to see improved generalization, however, at a computational cost. As we need to perform a full forward and backward pass on the energy model to obtain the forces, the resulting force model is twice as deep and hence requires about twice the amount of computation time.

Even though the GDML model captures this relationship between energies and forces, it is explicitly optimized to predict the force field, while the energy prediction is a by-product. Models such as circular fingerprints [15], molecular graph convolutions or message-passing neural networks [19] for property prediction across chemical compound space are only concerned with equilibrium molecules, i.e., the special case where the forces vanish. They cannot be trained with forces in a similar manner, as they include discontinuities in their predicted potential energy surface caused by discrete binning or the use of one-hot encoded bond type information.
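The energy-conserving force model of Eq. (4) and the combined loss of Eq. (5) map naturally onto automatic differentiation. Below is a minimal PyTorch sketch; `energy_model` is a hypothetical stand-in for SchNet, and everything apart from the form of the loss and ρ = 100 is an illustrative assumption.

```python
import torch

def energy_and_forces(energy_model, Z, R):
    """Predict the energy and derive forces as its negative gradient (Eq. 4)."""
    R = R.detach().requires_grad_(True)
    E = energy_model(Z, R)                       # scalar predicted energy
    # F_i = -dE/dr_i; create_graph=True keeps the graph so the force
    # error itself can be backpropagated through during training
    (dE_dR,) = torch.autograd.grad(E, R, create_graph=True)
    return E, -dE_dR

def combined_loss(E_pred, F_pred, E_true, F_true, rho=100.0):
    """Energy term plus rho-weighted mean squared force error (Eq. 5)."""
    energy_term = (E_true - E_pred) ** 2
    force_term = rho * ((F_true - F_pred) ** 2).sum(dim=-1).mean()
    return energy_term + force_term
```

Setting rho=0.0 recovers pure energy training; because the forces are the exact gradient of the predicted energy, the resulting force field is energy-conserving by construction.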
5 Experiments and results

In this section, we apply SchNet to three different quantum chemistry datasets: QM9, MD17 and ISO17. We designed the experiments such that each adds another aspect of modeling chemical space. While QM9 only contains equilibrium molecules, for MD17 we predict conformational changes in the molecular dynamics of single molecules. Finally, we present ISO17, combining both chemical and structural changes. For all datasets, we report mean absolute errors in kcal/mol for the energies and in kcal/mol/Å for the forces. The architecture of the network was fixed after an evaluation on the MD17 data sets for benzene and ethanol (see supplement).

In each experiment, we split the data into a training set of given size N and use a validation set of 1,000 examples for early stopping. The remaining data is used as the test set. All models are trained with mini-batch SGD using the ADAM optimizer [37] with 32 molecules per mini-batch. We use an initial learning rate of 10^{-3} and an exponential learning rate decay with ratio 0.96 every 100,000 steps. The model used for testing is obtained using an exponential moving average over the weights with decay rate 0.99.

5.1 QM9 – chemical degrees of freedom

QM9 is a widely used benchmark for the prediction of various molecular properties in equilibrium [8, 38, 39]. Therefore, the forces are zero by definition and do not need to be predicted. In this setting, we train a single model that generalizes across different compositions and sizes. QM9 consists of about 130k organic molecules with up to 9 heavy atoms of the types {C, O, N, F}. As the size of the training set varies across previous work, we trained our model in each of these experimental settings.

Table 1 shows the performance of various competing methods for predicting the total energy (property U0 in QM9). We provide comparisons to the DTNN [18] and the best-performing MPNN configuration, denoted enn-s2s, as well as an ensemble of MPNNs (enn-s2s-ens5) [19]. SchNet consistently obtains state-of-the-art performance, with an MAE of 0.31 kcal/mol at 110k training examples.

Table 2: Mean absolute errors for energy and force predictions in kcal/mol and kcal/mol/Å, respectively. GDML and SchNet test errors are shown for training with 1,000 and 50,000 examples of molecular dynamics simulations of small, organic molecules. SchNets were trained on energies only as well as on energies and forces combined. Best results in bold.

                      |           N = 1,000            |        N = 50,000
                      | GDML [17] | SchNet   | SchNet  | DTNN [18] | SchNet   | SchNet
                      | (forces)  | (energy) | (both)  | (energy)  | (energy) | (both)
Benzene        energy | 0.07      | 1.19     | 0.08    | 0.04      | 0.08     | 0.07
               forces | 0.23      | 14.12    | 0.31    | --        | 1.23     | 0.17
Toluene        energy | 0.12      | 2.95     | 0.12    | 0.18      | 0.16     | 0.09
               forces | 0.24      | 22.31    | 0.57    | --        | 1.79     | 0.09
Malonaldehyde  energy | 0.16      | 2.03     | 0.13    | 0.19      | 0.13     | 0.08
               forces | 0.80      | 20.41    | 0.66    | --        | 1.51     | 0.08
Salicylic acid energy | 0.12      | 3.27     | 0.20    | 0.41      | 0.25     | 0.10
               forces | 0.28      | 23.21    | 0.85    | --        | 3.72     | 0.19
Aspirin        energy | 0.27      | 4.20     | 0.37    | --        | 0.25     | 0.12
               forces | 0.99      | 23.54    | 1.35    | --        | 7.36     | 0.33
Ethanol        energy | 0.15      | 0.93     | 0.08    | --        | 0.07     | 0.05
               forces | 0.79      | 6.56     | 0.39    | --        | 0.76     | 0.05
Uracil         energy | 0.11      | 2.26     | 0.14    | --        | 0.13     | 0.10
               forces | 0.24      | 20.08    | 0.56    | --        | 3.28     | 0.11
Naphthalene    energy | 0.12      | 3.58     | 0.16    | --        | 0.20     | 0.11
               forces | 0.23      | 25.36    | 0.58    | --        | 2.58     | 0.11

5.2 MD17 – conformational degrees of freedom

MD17 is a collection of eight molecular dynamics simulations for small organic molecules.
These data sets were introduced by Chmiela et al. [17] for the prediction of energy-conserving force fields using GDML. Each of them consists of a trajectory of a single molecule covering a large variety of conformations. Here, the task is to predict energies and forces using a separate model for each trajectory. This molecule-wise training is motivated by the need for highly accurate force predictions when running molecular dynamics.

Table 2 shows the performance of SchNet using 1,000 and 50,000 training examples in comparison with GDML and DTNN. Using the smaller data set, GDML achieves remarkably accurate energy and force predictions despite being trained only on forces; the energies are only used to fit the integration constant. As mentioned before, GDML does not scale well with the number of atoms and training examples, so it cannot be trained on 50,000 examples. The DTNN was evaluated only on four of these MD trajectories, using the larger training set [18]. Note that enn-s2s cannot be used on this dataset due to discontinuities in its inferred potential energy surface.

We trained SchNet using energies only and using both energies and forces. While the energy-only model shows high errors for the small training set, the model including forces achieves energy predictions comparable to GDML. In particular, we observe that SchNet outperforms GDML on the more flexible molecules malonaldehyde and ethanol, while GDML reaches much lower force errors on the remaining MD trajectories, all of which include aromatic rings. The real strength of SchNet is its scalability: it outperforms the DTNN on three of the four data sets with 50,000 training examples while using only energies in training. Including force information, SchNet consistently obtains accurate energies and forces, with errors below 0.12 kcal/mol and 0.33 kcal/mol/Å, respectively. Remarkably, when training on energies and forces with 1,000 training examples, SchNet performs better than the same model trained on energies alone with 50,000 examples.

5.3 ISO17 – chemical and conformational degrees of freedom

As the next step towards quantum-chemical exploration, we demonstrate the capability of SchNet to represent a complex potential energy surface including conformational and chemical changes. We present a new dataset, ISO17, in which we consider short MD trajectories of 129 isomers, i.e., chemically different molecules with the same number and types of atoms. In contrast to MD17, we train a joint model across different molecules. We calculate energies and interatomic forces from short MD trajectories of 129 molecules drawn randomly from the largest set of isomers in QM9. While the composition of all included molecules is C7O2H10, the chemical structures are fundamentally different. With each trajectory consisting of 5,000 conformations, the data set contains 645,000 labeled examples.

We consider two scenarios with this dataset. In the first variant, the molecular graph structures present in training are also present in the test data. This demonstrates how well our model can represent a complex potential energy surface with chemical and conformational changes. In the more challenging scenario, the test data contains a different subset of molecules. Here we evaluate the generalization of our model to previously unseen chemical structures. We predict forces and energies in both cases and compare to the mean predictor as a baseline. We draw a subset of 4,000 steps from 80% of the MD trajectories for training and validation. This leaves us with a separate test set for each scenario: (1) the unseen 1,000 conformations of the molecule trajectories included in the training set, and (2) all 5,000 conformations of the remaining 20% of molecules not included in training.
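A minimal sketch of this molecule-wise split, assuming NumPy and hypothetical index arrays; the published ISO17 release defines its own splits, so this only illustrates the logic described above.

```python
import numpy as np

rng = np.random.default_rng(0)

n_molecules, steps_per_traj = 129, 5000
mol_ids = rng.permutation(n_molecules)
n_train_mols = int(0.8 * n_molecules)
train_mols = mol_ids[:n_train_mols]            # 80% of trajectories
test_mols = mol_ids[n_train_mols:]             # held-out molecules

# scenario (2): all 5,000 conformations of unseen molecules
test_unknown = [(m, t) for m in test_mols for t in range(steps_per_traj)]

# scenario (1): for known molecules, draw 4,000 steps for train/validation
# and keep the remaining 1,000 conformations as the "known molecules" test set
train_pairs, test_known = [], []
for m in train_mols:
    steps = rng.permutation(steps_per_traj)
    train_pairs += [(m, t) for t in steps[:4000]]
    test_known += [(m, t) for t in steps[4000:]]
```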
Table 3 shows the performance of SchNet on both test sets.

Table 3: Mean absolute errors on C7O2H10 isomers in kcal/mol.

                                          |        | mean predictor | SchNet (energy) | SchNet (energy+forces)
known molecules / unknown conformation    | energy | 14.89          | 0.52            | 0.36
                                          | forces | 19.56          | 4.13            | 1.00
unknown molecules / unknown conformation  | energy | 15.54          | 3.11            | 2.40
                                          | forces | 19.15          | 5.71            | 2.18

Our proposed model reaches chemical accuracy for the prediction of energies and forces on the test set of known molecules. Including forces in the training improves the performance here, as well as on the set of unseen molecules. This shows that using force information does not only help to accurately predict nearby conformations of a single molecule, but indeed helps to generalize across chemical compound space.

6 Conclusions

We have proposed continuous-filter convolutional layers as a novel building block for deep neural networks. In contrast to the usual convolutional layers, they can model unevenly spaced data as it occurs in astronomy, climate research and, in particular, quantum chemistry. We have developed SchNet to demonstrate the capabilities of continuous-filter convolutional layers in the context of modeling quantum interactions in molecules. Our architecture respects quantum-chemical constraints such as rotationally invariant energy predictions as well as rotationally equivariant, energy-conserving force predictions.

We have evaluated our model in three increasingly challenging experimental settings. Each brings us one step closer to practical chemical exploration driven by machine learning. SchNet improves the state of the art in predicting energies for molecules in equilibrium on the QM9 benchmark. Beyond that, it achieves accurate predictions for energies and forces for all molecular dynamics trajectories in MD17. Finally, we have introduced ISO17, consisting of 645,000 conformations of various C7O2H10 isomers. While we achieve promising results on this new benchmark, modeling chemical and conformational variations remains difficult and needs further improvement. For this reason, we expect that ISO17 will become a new standard benchmark for modeling quantum interactions with machine learning.

Acknowledgments

This work was supported by the Federal Ministry of Education and Research (BMBF) through the Berlin Big Data Center BBDC (01IS14013A). Additional support was provided by the DFG (MU 987/20-1) and by the European Union's Horizon 2020 research and innovation program under Marie Sklodowska-Curie grant agreement no. 657679. K.R.M. gratefully acknowledges the BK21 program funded by the Korean National Research Foundation (grant no. 2012-005741) and the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean government (no. 2017-0-00451).

References

[1] M. Rupp, A. Tkatchenko, K.-R. Müller, and O. A. von Lilienfeld. Fast and accurate modeling of molecular atomization energies with machine learning. Phys. Rev. Lett., 108(5):058301, 2012.

[2] G. Montavon, M. Rupp, V. Gobre, A. Vazquez-Mayagoitia, K. Hansen, A. Tkatchenko, K.-R. Müller, and O. A. von Lilienfeld. Machine learning of molecular electronic properties in chemical compound space. New J. Phys., 15(9):095003, 2013.
[3] K. Hansen, G. Montavon, F. Biegler, S. Fazli, M. Rupp, M. Scheffler, O. A. von Lilienfeld, A. Tkatchenko, and K.-R. Müller. Assessment and validation of machine learning methods for predicting molecular atomization energies. J. Chem. Theory Comput., 9(8):3404-3419, 2013.

[4] K. T. Schütt, H. Glawe, F. Brockherde, A. Sanna, K.-R. Müller, and E. K. U. Gross. How to represent crystal structures for machine learning: Towards fast prediction of electronic properties. Phys. Rev. B, 89(20):205118, 2014.

[5] K. Hansen, F. Biegler, R. Ramakrishnan, W. Pronobis, O. A. von Lilienfeld, K.-R. Müller, and A. Tkatchenko. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space. J. Phys. Chem. Lett., 6:2326, 2015.

[6] F. A. Faber, L. Hutchison, B. Huang, J. Gilmer, S. S. Schoenholz, G. E. Dahl, O. Vinyals, S. Kearnes, P. F. Riley, and O. A. von Lilienfeld. Fast machine learning models of electronic and energetic properties consistently reach approximation errors better than DFT accuracy. arXiv preprint arXiv:1702.05532, 2017.

[7] F. Brockherde, L. Voigt, L. Li, M. E. Tuckerman, K. Burke, and K.-R. Müller. Bypassing the Kohn-Sham equations with machine learning. Nature Communications, 8(872), 2017.

[8] R. Ramakrishnan, P. O. Dral, M. Rupp, and O. A. von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data, 1, 2014.

[9] S. Manzhos and T. Carrington Jr. A random-sampling high dimensional model representation neural network for building potential energy surfaces. J. Chem. Phys., 125(8):084109, 2006.

[10] M. Malshe, R. Narulkar, L. M. Raff, M. Hagan, S. Bukkapatnam, P. M. Agrawal, and R. Komanduri. Development of generalized potential-energy surfaces using many-body expansions, neural networks, and moiety energy approximations. J. Chem. Phys., 130(18):184102, 2009.

[11] J. Behler and M. Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett., 98(14):146401, 2007.

[12] A. P. Bartók, M. C. Payne, R. Kondor, and G. Csányi. Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons. Phys. Rev. Lett., 104(13):136403, 2010.

[13] J. Behler. Atom-centered symmetry functions for constructing high-dimensional neural network potentials. J. Chem. Phys., 134(7):074106, 2011.

[14] A. P. Bartók, R. Kondor, and G. Csányi. On representing chemical environments. Phys. Rev. B, 87(18):184115, 2013.

[15] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, NIPS, pages 2224-2232, 2015.

[16] S. Kearnes, K. McCloskey, M. Berndl, V. Pande, and P. F. Riley. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 30(8):595-608, 2016.

[17] S. Chmiela, A. Tkatchenko, H. E. Sauceda, I. Poltavsky, K. T. Schütt, and K.-R. Müller. Machine learning of accurate energy-conserving molecular force fields. Science Advances, 3(5):e1603015, 2017.

[18] K. T. Schütt, F. Arbabzadah, S. Chmiela, K.-R. Müller, and A. Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nature Communications, 8(13890), 2017.

[19] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, pages 1263-1272, 2017.
[20] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.

[21] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.

[22] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1725-1732, 2014.

[23] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. In 9th ISCA Speech Synthesis Workshop, pages 125-125, 2016.

[24] W. Max-Moerbeck, J. L. Richards, T. Hovatta, V. Pavlidou, T. J. Pearson, and A. C. S. Readhead. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series. Monthly Notices of the Royal Astronomical Society, 445(1):437-459, 2014.

[25] K. B. Ólafsdóttir, M. Schulz, and M. Mudelsee. REDFIT-X: Cross-spectral analysis of unevenly spaced paleoclimate time series. Computers & Geosciences, 91:11-18, 2016.

[26] L. E. Nieto-Barajas and T. Sinha. Bayesian interpolation of unequally spaced time series. Stochastic Environmental Research and Risk Assessment, 29(2):577-587, 2015.

[27] J. C. Snyder, M. Rupp, K. Hansen, K.-R. Müller, and K. Burke. Finding density functionals with machine learning. Physical Review Letters, 108(25):253002, 2012.

[28] M. Hirn, S. Mallat, and N. Poilvert. Wavelet scattering regression of quantum chemical energies. Multiscale Modeling & Simulation, 15(2):827-863, 2017.

[29] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. In ICLR, 2014.

[30] M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.

[31] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks on Riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 37-45, 2015.

[32] X. Jia, B. De Brabandere, T. Tuytelaars, and L. V. Gool. Dynamic filter networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 667-675, 2016.

[33] F. Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.

[34] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.

[35] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

[36] A. Pukrittayakamee, M. Malshe, M. Hagan, L. M. Raff, R. Narulkar, S. Bukkapatnum, and R. Komanduri. Simultaneous fitting of a potential-energy surface and its corresponding force fields using feedforward neural networks. The Journal of Chemical Physics, 130(13):134101, 2009.

[37] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

[38] L. C. Blum and J.-L. Reymond. 970 million druglike small molecules for virtual screening in the chemical universe database GDB-13. J. Am. Chem. Soc., 131:8732, 2009.
[39] J.-L. Reymond. The chemical space project. Acc. Chem. Res., 48(3):722-730, 2015.
Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples

Haw-Shiuan Chang, Erik Learned-Miller, Andrew McCallum
University of Massachusetts, Amherst
140 Governors Dr., Amherst, MA 01003
{hschang,elm,mccallum}@cs.umass.edu

Abstract

Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in the predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.

1 Introduction

Learning easier material before harder material is often beneficial to human learning. Inspired by this observation, curriculum learning [5] has shown that learning from easier instances first can also improve neural network training. When it is not known a priori which samples are easy, examples with lower loss on the current model can be inferred to be easier and used in early training. This strategy has been referred to as self-paced learning [25]. By decreasing the weight of difficult examples in the loss function, the model may become more robust to outliers [33], and this method has proven useful in several applications, especially with noisy labels [36]. Nevertheless, selecting easier examples for training often slows down the training process, because easier samples usually contribute smaller gradients and the current model has already learned how to make correct predictions on them.

On the other hand, and somewhat ironically, the opposite strategy (i.e., sampling harder instances more often) has been shown to accelerate (mini-batch) stochastic gradient descent (SGD) in some cases, where the difficulty of an example can be defined by its loss [18, 29, 44] or be proportional to the magnitude of its gradient [51, 1, 12, 13]. This strategy is sometimes referred to as hard example mining [44].

In the literature, we can see that these two opposing strategies work well in different situations. Preferring easier examples may be effective when either machines or humans try to solve a challenging task containing more label noise or outliers. On the other hand, focusing on harder samples may accelerate and stabilize SGD on cleaner data by minimizing the variance of the gradients [1, 12]. However, we often do not know how noisy our training dataset is. Motivated by this practical need, this paper explores new methods of re-weighting training examples that are effective in both scenarios.

Intuitively, if a model has already predicted some examples correctly with high confidence, those samples may be too easy to contain useful information for improving the model further. Similarly, if some examples are always predicted incorrectly over many iterations of training, these examples may just be too difficult or noisy and may degrade the model. This suggests that we should somehow prefer uncertain samples that are predicted incorrectly sometimes during training and correctly at other times, as illustrated in Figure 1.

Figure 1: The proposed methods emphasize uncertain samples based on previous prediction history (samples range from easy, favored by self-paced learning, through uncertain, favored by active learning, to hard, favored by hard example mining).
This preference is consistent with common variance reduction strategies in active learning [43]. Previous studies suggest that finding informative unlabeled samples to label is related to selecting already-labeled samples to optimize the model parameters [14]. As reported in previous studies [42, 6], models can sometimes achieve lower generalization error after being trained with only a subset of actively selected training data. In other words, focusing on informative samples can be beneficial even when all labels are available.

We propose two lightweight methods that actively emphasize uncertain samples to improve mini-batch SGD for classification. One method measures the variance of the prediction probabilities, while the other estimates the closeness between the prediction probabilities and the decision threshold. For logistic regression, both methods can be proven to reduce the uncertainty in the model parameters under reasonable approximations. We present extensive experiments on CIFAR 10, CIFAR 100, MNIST (image classification), Question Type (sentence classification), CoNLL 2003, and OntoNote 5.0 (named entity recognition), as well as on different architectures, including multi-class logistic regression, fully-connected networks, convolutional neural networks (CNNs) [26], and residual networks [16]. The results show that active bias makes neural networks more robust without prior knowledge of noise, and reduces the generalization error by 1%-18% even on training sets having few (if any) annotation errors.

2 Related work

As (deep) neural networks become more widespread, many methods have been proposed to improve SGD training. When using (mini-batch) SGD, the randomness of the gradient sometimes slows down the optimization, so one common approach is to use the gradients computed in previous iterations to stabilize the process. Examples include momentum [38], stochastic variance reduced gradient (SVRG) [21], and proximal stochastic variance reduced gradient (Prox-SVRG) [49]. Other work proposes variants of semi-stochastic algorithms to approximate the exact gradient direction and reduce the gradient variance [47, 34]. More recently, supervised optimization methods such as learning to learn [3] have also shown great potential for this problem.

In addition to the high variance of the gradient, another issue with SGD is the difficulty of tuning the learning rate. Like Quasi-Newton methods, several methods adaptively adjust learning rates based on local curvature [2, 40], while ADAGRAD [11] applies different learning rates to different dimensions. ADAM [23] combines several of these techniques and is widely used in practice. More recently, some studies accelerate SGD by weighting each class differently [13] or weighting each sample differently, as we do [18, 51, 29, 12, 1, 44], and their experiments suggest that these methods are often compatible with other techniques such as Prox-SVRG, ADAGRAD, or ADAM [29, 13]. Notice that Gao et al. [12] discuss the idea of selecting uncertain examples for SGD based on active learning, but their proposed methods choose each sample according to the magnitude of its gradient, as in ISSGD [1], which actually prefers more difficult examples.
The aforementioned methods focus on accelerating the optimization of a fixed loss function for a fixed model. Many of them adopt importance sampling; that is, if the method prefers to select harder examples, the learning rate corresponding to those examples will be lower. This makes the gradient estimation unbiased [18, 51, 1, 12, 13], which guarantees convergence [51, 13]. On the other hand, to make models more robust to outliers, some approaches inject bias into the loss function in order to emphasize easier examples [37, 48, 27, 35]. Some variants of this strategy gradually increase the loss of hard examples [32], as in self-paced learning [25]. To alleviate the local minimum problem during training, other techniques that smooth the loss function have been proposed recently [8, 15]. Nevertheless, to our knowledge, it remains an unsolved challenge to balance easy and difficult training examples so as to facilitate training while remaining robust to outliers.

3 Methods

In this section, we first discuss the baseline methods against which we shall compare and introduce some notation which we will use later on. We then present our two active bias methods, based on prediction variance and on closeness to the decision threshold.

3.1 Baselines

Due to its simplicity and generally good performance, the most widely used version of SGD samples each training instance uniformly. This basic strategy has two variants. The first samples with replacement. Let D = (x_i, y_i)_i denote the training dataset. The probability of selecting each sample is equal (i.e., P_s(i|D) = 1/|D|), so we call it SGD Uniform (SGD-Uni). The second samples without replacement. Let S_e be the set of samples we have already used in the current epoch. The sampling probability P_s(i|S_e, D) then becomes \frac{1}{|D| - |S_e|} \mathbb{1}_{i \notin S_e}, where \mathbb{1} is an indicator function. This version scans through all of the examples in each epoch, so we call it SGD-Scan.

We propose a simple baseline which selects harder examples with higher probability, as done by Loshchilov and Hutter [29]. Specifically, we let P_s(i|H, S_e, D) \propto 1 - \bar{p}_{H_i^{t-1}}(y_i|x_i) + \epsilon_D, where H_i^{t-1} is the history of prediction probabilities, which stores all values p(y_i|x_i) computed when x_i was selected to train the network before the current iteration t, H = \cup_i H_i^{t-1}, \bar{p}_{H_i^{t-1}}(y_i|x_i) is the average probability of classifying sample i into its correct class y_i over all the stored p(y_i|x_i) in H_i^{t-1}, and \epsilon_D is a smoothness constant. Notice that by only considering the p(y_i|x_i) stored in H_i^{t-1}, we do not need to perform extra forward passes. We refer to this simple baseline as SGD Sampled by Difficulty (SGD-SD).

In practice, SGD-Scan often works better than SGD-Uni because it ensures that the model sees all of the training examples in each epoch. To emphasize difficult examples while applying SGD-Scan, we weight each sample differently in the loss function. That is, the loss function is modified as L = \sum_i v_i \cdot \mathrm{loss}_i(W) + \lambda R(W), where W are the parameters of the model, \mathrm{loss}_i(W) is the prediction loss, and \lambda R(W) is the regularization term. The weight of the ith sample, v_i, can be set to \frac{1}{N_D}(1 - \bar{p}_{H_i^{t-1}}(y_i|x_i) + \epsilon_D), where N_D is a normalization constant making the average of the v_i equal to 1. We want to keep the average of the v_i fixed so that we do not change the global learning rate. We denote this method SGD Weighted by Difficulty (SGD-WD). Models usually cannot fit outliers well, so SGD-SD and SGD-WD would not be robust to noise.
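As a concrete illustration, the difficulty-based baselines reduce to a few lines once the running averages p̄ are available. The variable names and the value of the smoothing constant ε_D below are assumptions for illustration only.

```python
import numpy as np

def sgd_sd_probs(p_bar, eps_d=0.05):
    """Sampling probabilities proportional to difficulty (SGD-SD)."""
    scores = 1.0 - p_bar + eps_d
    return scores / scores.sum()

def sgd_wd_weights(p_bar, eps_d=0.05):
    """Per-sample loss weights v_i with unit mean (SGD-WD)."""
    v = 1.0 - p_bar + eps_d
    return v / v.mean()

p_bar = np.array([0.95, 0.60, 0.10])   # average correct-class probability so far
print(sgd_sd_probs(p_bar))             # the hard example (0.10) is sampled most often
print(sgd_wd_weights(p_bar))           # and receives the largest loss weight
```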
To make a model unbiased, importance sampling can be used. That is, we can let P_s(i|H, S_e, D) \propto 1 - \bar{p}_{H_i^{t-1}}(y_i|x_i) + \epsilon_D and v_i \propto (1 - \bar{p}_{H_i^{t-1}}(y_i|x_i) + \epsilon_D)^{-1}, which is similar to an approach used by Hinton [18]. We refer to this as SGD Importance-Sampled by Difficulty (SGD-ISD). In addition, we propose two simple baselines that emphasize easy examples, as in self-paced learning. Following the same naming convention, SGD Sampled by Easiness (SGD-SE) denotes P_s(i|H, S_e, D) \propto \bar{p}_{H_i^{t-1}}(y_i|x_i) + \epsilon_E, while SGD Weighted by Easiness (SGD-WE) sets v_i = \frac{1}{N_E}(\bar{p}_{H_i^{t-1}}(y_i|x_i) + \epsilon_E), where N_E normalizes the v_i to have unit mean.

3.2 Prediction Variance

In the active learning setting, the prediction variance can be used to measure the uncertainty of each sample for either a regression or a classification problem [41]. In order to gain more information at each SGD iteration, we choose samples with high prediction variances.

Since the prediction variances are estimated on the fly, we would like to balance exploration and exploitation. Adopting the optimism-in-the-face-of-uncertainty heuristic from bandit problems [7], we draw the next sample based on the estimated prediction variance plus its confidence interval. Specifically, for SGD Sampled by Prediction Variance (SGD-SPV), we let

P_s(i|H, S_e, D) \propto \widehat{\mathrm{std}}^{\,\mathrm{conf}}_i(H) + \epsilon_V, \quad \text{where} \quad \widehat{\mathrm{std}}^{\,\mathrm{conf}}_i(H) = \sqrt{ \widehat{\mathrm{var}}\big(p_{H_i^{t-1}}(y_i|x_i)\big) + \sqrt{ \frac{2\, \widehat{\mathrm{var}}\big(p_{H_i^{t-1}}(y_i|x_i)\big)^2}{|H_i^{t-1}| - 1} } },   (1)

\widehat{\mathrm{var}}\big(p_{H_i^{t-1}}(y_i|x_i)\big) is the prediction variance estimated from the history H_i^{t-1}, and |H_i^{t-1}| is the number of stored prediction probabilities. Assuming p_{H_i^{t-1}}(y_i|x_i) is normally distributed under the uncertainty of the model parameters w, the variance of the prediction variance estimate can itself be estimated by 2\, \widehat{\mathrm{var}}\big(p_{H_i^{t-1}}(y_i|x_i)\big)^2 (|H_i^{t-1}| - 1)^{-1}. As in the baselines, adding the smoothness constant \epsilon_V prevents low-variance instances from never being selected again. Similarly, another variant of the method sets v_i = \frac{1}{N_V}\big(\widehat{\mathrm{std}}^{\,\mathrm{conf}}_i(H) + \epsilon_V\big), where N_V normalizes the v_i like the other weighted methods; we call this SGD Weighted by Prediction Variance (SGD-WPV).

As in SGD-WD, SGD-WE, or self-paced learning [4], we train an unbiased model for several burn-in epochs at the beginning so as to judge the sampling uncertainty reasonably and stably. Other implementation details are described in the first section of the supplementary material. Using a low learning rate, the model parameters w will be close to a good local minimum after sufficient burn-in epochs, and thus the posterior distribution of w can be locally approximated by a Gaussian distribution. Furthermore, the prediction distribution p(y_i|x_i, w) is often locally smooth with respect to the model parameters w (i.e., small changes of the model parameters only induce small changes in the prediction distribution), so a Gaussian tends to approximate the distribution of p_{H_i^{t-1}}(y_i|x_i) well in practice.
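A sketch of the per-sample uncertainty score of Eq. (1): from the history of stored probabilities of a sample, estimate the prediction variance, add the square root of the variance-of-variance term given above, and take the overall square root. The history container and the smoothing constant ε_V are illustrative assumptions.

```python
import numpy as np

def spv_score(history, eps_v=1e-2):
    """Uncertainty score sqrt(var + sqrt(var-of-var)) + eps_v for one sample (Eq. 1)."""
    h = np.asarray(history)                   # stored p(y_i | x_i) from past iterations
    var = h.var()                             # estimated prediction variance
    # variance of the variance estimate, assuming approximately normal p's
    var_of_var = 2.0 * var ** 2 / max(len(h) - 1, 1)
    return np.sqrt(var + np.sqrt(var_of_var)) + eps_v

histories = [[0.90, 0.92, 0.91],              # confidently correct: low score
             [0.20, 0.80, 0.50]]              # unstable prediction: high score
scores = np.array([spv_score(h) for h in histories])
probs = scores / scores.sum()                 # SGD-SPV sampling distribution
```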
Example: logistic regression. Given a Gaussian prior Pr(W = w) = N(w | 0, s_0 I) on the parameters, consider the probabilistic interpretation of logistic regression:

-\log(\Pr(Y, W = w | X)) = -\sum_i \log(p(y_i|x_i, w)) + \frac{c}{s_0}\|w\|^2,   (2)

where p(y_i|x_i, w) = \frac{1}{1 + \exp(-y_i w^T x_i)} and y_i \in \{1, -1\}. Since the posterior distribution of W is log-concave [39], we can use Pr(W = w | Y, X) \approx N(w | w_N, S_N), where w_N is the maximum a posteriori (MAP) estimate and

S_N^{-1} = \nabla_w \nabla_w \big(-\log \Pr(Y, W | X)\big) = \sum_i p(y_i|x_i)\big(1 - p(y_i|x_i)\big)\, x_i x_i^T + \frac{2c}{s_0} I.   (3)

Then, we further approximate p(y_i|x_i, W) using the first-order Taylor expansion p(y_i|x_i, W) \approx p(y_i|x_i, w) + g_i(w)^T (W - w), where g_i(w) = p(y_i|x_i, w)\big(1 - p(y_i|x_i, w)\big)\, x_i. We can then compute the prediction variance [41] with respect to the uncertainty of W:

\mathrm{Var}(p(y_i|x_i, W)) \approx g_i(w)^T S_N \, g_i(w).   (4)

These approximations tell us several things. First, Var(p(y_i|x_i, W)) is proportional to p(y_i|x_i, w)^2 (1 - p(y_i|x_i, w))^2, so the prediction variance is larger when sample i is closer to the decision boundary. Second, when we have more sample points close to the boundary, the variance of the parameters S_N is lower. That is, when we emphasize samples with high prediction variances, the uncertainty of the parameters tends to be reduced, akin to the variance reduction strategy in active learning [30]. Third, with a Gaussian assumption on the posterior distribution Pr(W = w | Y, X) and the Taylor expansion, the distribution of p(y_i|x_i, W) in logistic regression becomes Gaussian, which justifies our earlier assumption on p_{H_i^{t-1}}(y_i|x_i) for the confidence estimation of the prediction variance. Notice that there are other methods that can measure prediction uncertainty, such as the mutual information between labels and parameters [19], but we found that the prediction variance works better in our experiments.

Figure 2: A toy example which compares different methods on a two-class logistic regression model. Panels: (a) sampling distribution, (b) training samples, (c) SGD-Scan parameter space, (d) SGD-Scan boundaries, (e) SGD-WD parameter space, (f) SGD-WD sample weights and boundaries, (g) SGD-WPV parameter space, (h) SGD-WPV sample weights and boundaries. To visualize the optimization path for the classifier parameters (the red paths in (c), (e), and (g)) in two dimensions, we fix the weight corresponding to the x-axis to 0.5 and only show the weight for the y-axis, w[1], and the bias term b. The ith sample's size in (f) and (h) is proportional to v_i. The toy example shows that SGD-WPV can train a more accurate model on a noisy dataset.

Figure 2 illustrates a toy example. Given the same learning rate, we can see that normal SGD in Figures 2c and 2d will have higher uncertainty when there are many outliers, and emphasizing difficult examples in Figures 2e and 2f makes it worse. On the other hand, the samples near the boundaries have higher prediction variances (i.e., larger circles or crosses in Figure 2h) and thus a higher impact on the loss function in SGD-WPV.

After the burn-in epochs, w becomes close to a local minimum under SGD. The parameters estimated in each iteration can then be viewed, approximately, as samples drawn from the posterior distribution of the parameters Pr(W = w | Y, X) [31]. Therefore, after running SGD long enough, \widehat{\mathrm{var}}\big(p_{H_i^{t-1}}(y_i|x_i)\big) can be used to approximate Var(p(y_i|x_i, W)). Notice that if we directly applied the bias at the beginning, without burn-in epochs, incorrect examples might be emphasized, which is also known as the local minimum problem in active learning [14]. For instance, in Figure 2, if burn-in epochs are not applied and the initial w is a vertical line on the left, the outliers close to the initial boundary would be emphasized, which slows down convergence. In this simple example, we can also see that the gradient magnitude is proportional to the difficulty, because \nabla_w \log(p(y_i|x_i, w)) = (1 - p(y_i|x_i, w))\, y_i x_i. This is why we believe the SGD acceleration methods based on gradient magnitude [1, 13] can be categorized as variants of preferring difficult examples, and are thus more vulnerable to outliers (like the samples on the left or right of Figure 2).
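The logistic-regression prediction variance of Eqs. (3)-(4) can be sketched in a few lines of NumPy. This is an illustration around a given MAP estimate w, not the authors' code; the hyper-parameter defaults are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prediction_variances(X, y, w, c=1.0, s0=1.0):
    """Approximate Var(p(y_i | x_i, W)) around the MAP estimate w (Eqs. 3-4).

    X: (n, d) features, y: (n,) labels in {1, -1}, w: (d,) MAP parameters.
    """
    p = sigmoid(y * (X @ w))                      # p(y_i | x_i, w)
    g = (p * (1.0 - p))[:, None] * X              # g_i(w) = p (1 - p) x_i
    d = X.shape[1]
    # inverse posterior covariance S_N^{-1} (Eq. 3)
    S_inv = (X * (p * (1.0 - p))[:, None]).T @ X + (2.0 * c / s0) * np.eye(d)
    S = np.linalg.inv(S_inv)                      # posterior covariance S_N
    return np.einsum('id,de,ie->i', g, S, g)      # Eq. 4: g_i^T S_N g_i
```

Samples with p near 0.5 dominate g_i and therefore receive the largest variances, matching the first observation above.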
[Figure 2 shows eight panels: (a) sampling distribution; (b) training samples; (c) SGD-Scan parameter space; (d) SGD-Scan boundaries; (e) SGD-WD parameter space; (f) SGD-WD sample weights and boundaries; (g) SGD-WPV parameter space; (h) SGD-WPV sample weights and boundaries.]

Figure 2: A toy example which compares different methods in a two-class logistic regression model. To visualize the optimization path for the classifier parameters (the red paths in (c), (e), and (g)) in two dimensions, we fix the weight corresponding to the x-axis to 0.5 and only show the weight for the y-axis $w[1]$ and the bias term $b$. The $i$th sample size in (f) and (h) is proportional to $v_i$. The toy example shows that SGD-WPV can train a more accurate model on a noisy dataset.

Figure 2 illustrates a toy example. Given the same learning rate, we can see that the normal SGD in Figures 2c and 2d has higher uncertainty when there are many outliers, and emphasizing difficult examples in Figures 2e and 2f makes it worse. On the other hand, the samples near the boundaries have higher prediction variances (i.e., larger circles or crosses in Figure 2h) and thus a higher impact on the loss function in SGD-WPV.

After the burn-in epochs, $w$ becomes close to a local minimum using SGD. Then, the parameters estimated in each iteration can be viewed, approximately, as samples drawn from the posterior distribution of the parameters $\Pr(W = w \mid Y, X)$ [31]. Therefore, after running SGD long enough, $\widehat{\mathrm{var}}\left(p_{H_i^{t-1}}(y_i \mid x_i)\right)$ can be used to approximate $\mathrm{Var}\left(p(y_i \mid x_i, W)\right)$. Notice that if we directly apply the bias at the beginning without running burn-in epochs, incorrect examples might be emphasized, which is also known as the local minimum problem in active learning [14]. For instance, in Figure 2, if burn-in epochs are not applied and the initial $w$ is a vertical line on the left, the outliers close to the initial boundary would be emphasized, which slows down the convergence speed.

In this simple example, we can also see that the gradient magnitude is proportional to the difficulty, because $\left\|\nabla_w \log\left(p(y_i \mid x_i, w)\right)\right\| = \left(1 - p(y_i \mid x_i, w)\right)\|x_i\|$. This is why we believe the SGD acceleration methods based on gradient magnitude [1, 13] can be categorized as variants of preferring difficult examples, and are thus more vulnerable to outliers (like the samples on the left or right in Figure 2).

3.3 Threshold Closeness

Motivated by the previous analysis, we propose a simpler and more direct approach that selects samples whose correct-class probability is close to the decision threshold. SGD Sampled by Threshold Closeness (SGD-STC) makes $P_s(i \mid H, S_e, \mathcal{D}) \propto \bar{p}_{H_i^{t-1}}(y_i \mid x_i)\left(1 - \bar{p}_{H_i^{t-1}}(y_i \mid x_i)\right) + \epsilon_T$, where $\bar{p}_{H_i^{t-1}}(y_i \mid x_i)$ is the average probability of classifying sample $i$ into its correct class $y_i$ over all the $p(y_i \mid x_i)$ stored in $H_i^{t-1}$. When there are multiple classes, this measures the closeness to the threshold for distinguishing the correct class from the union of the rest of the classes (i.e., one-versus-rest). The method is similar to an approximation of the optimal allocation in stratified sampling proposed by Druck and McCallum [10]. Similarly, SGD Weighted by Threshold Closeness (SGD-WTC) chooses the weight of the $i$th sample as $v_i = \frac{1}{N_T}\left(\bar{p}_{H_i^{t-1}}(y_i \mid x_i)\left(1 - \bar{p}_{H_i^{t-1}}(y_i \mid x_i)\right) + \epsilon_T\right)$, where $N_T = \frac{1}{|\mathcal{D}|}\sum_j \left(\bar{p}_{H_j^{t-1}}(y_j \mid x_j)\left(1 - \bar{p}_{H_j^{t-1}}(y_j \mid x_j)\right) + \epsilon_T\right)$. This weighting can be viewed as combining SGD-WD and SGD-WE by multiplying their weights together. Although other uncertainty estimates such as entropy are widely used in active learning and can also be viewed as measures of boundary closeness, we found that the proposed formula works better in our experiments.

When using logistic regression, after injecting the bias $v_i$ into the loss function, approximating the prediction probability based on the previous history, and removing the regularization and smoothness constants (i.e., $p(y_i \mid x_i, w) \approx \bar{p}_{H_i^{t-1}}(y_i \mid x_i)$, $1/s_0 = 0$, and $\epsilon_T = 0$), we can show that

$$\sum_i \mathrm{Var}\left(p(y_i \mid x_i, W)\right) \approx \sum_i g_i(w)^T S_N g_i(w) \approx N_T \cdot \dim(w), \qquad (5)$$

where $\dim(w)$ is the dimension of the parameters $w$. This ensures that the average prediction variance drops linearly as the number of training instances increases. The derivation can be seen in the supplementary materials.
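A minimal sketch of the SGD-WTC weights, assuming the averaged correct-class probabilities are already available; the names are illustrative only.

```python
import numpy as np

def wtc_weights(p_bar, eps_t=1e-2):
    """Per-sample weights v_i for SGD-WTC.

    p_bar[i] is the averaged correct-class probability
    \bar{p}_{H_i^{t-1}}(y_i | x_i) over the stored history of sample i.
    """
    closeness = p_bar * (1.0 - p_bar) + eps_t   # peaks at the 0.5 threshold
    return closeness / closeness.mean()          # N_T normalizes to unit mean
```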
4 Experiments

We test our methods on six different datasets. The results show that the active bias techniques consistently outperform standard uniform sampling (i.e., SGD-Uni and SGD-Scan) in the deep models as well as the shallow models. For each dataset, we use an existing, publicly available implementation for the problem and emphasize samples using the different methods. The architectures and hyper-parameters are summarized in Table 1. All neural networks use a softmax and cross-entropy loss at the last layer. The optimization and experiment setups are listed in Table 2. As shown in the second column of that table, SGD in the CNNs and residual networks actually refers to momentum or ADAM instead of vanilla SGD. All experiments use mini-batches.

Table 1: Model architectures. Dropout and L2 reg (regularization) are only applied to the fully-connected (FC) layer(s).

| Dataset | # Conv layers | Filter size | Filter number | # Pooling layers | # FC layers | Dropout keep probs | L2 reg | # BN layers |
| MNIST | 2 | 5x5 | 32, 64 | 2 | 2 | 0.5 | 0.0005 | 0 |
| CIFAR 10 | 0 | N/A | N/A | 0 | 1 | 1 | 0.01 | 0 |
| CIFAR 100 | 26 or 62 | 3x3 | 16, 32, 64 | 1 | 1 | 1 | 0 | 13 or 31 |
| Question Type | 1 | (2,3,4)x1 | 64 | 1 | 1 | 0.5 | 0.01 | 0 |
| CoNLL 2003 / OntoNote 5.0 | 3 | 3x1 | 100 | 0 | 0 | 1 | 0.001 | 0 |
| MNIST (distill) | 0 | N/A | N/A | 0 | 2 | 0.5, 0.75 | 0 | 0 |

Table 2: Optimization hyper-parameters and experiment settings.

| Dataset | Optimizer | Batch size | Learning rate | Learning rate decay | # Burn-in epochs | # Epochs | # Trials |
| MNIST | Momentum | 64 | 0.01 | 0.95 | 2 | 80 | 20 |
| CIFAR 10 | SGD | 100 | 1e-6 | 0.5 (per 5 epochs) | 10 | 30 | 30 |
| CIFAR 100 | Momentum | 128 | 0.1 | 0.1 (at 80, 100, 120 epochs) | 90 or 50 | 150 | 20 |
| Question Type | ADAM | 64 | 0.001 | 1 | 20 | 200 | 100 |
| CoNLL 2003 | ADAM | 128 | 0.0005 | 1 | 1 | 30 | 10 |
| OntoNote 5.0 | SGD | 128 | 0.1 | 1 | 1 | 60 | 10 |
| MNIST (distill) | - | - | - | - | 20 | 150 | 10 |

Like most of the widely used neural network training techniques, the proposed techniques are not applicable to every scenario. For all the datasets we tried, we found that the proposed methods are not sensitive to the hyper-parameter setup except when applying a very complicated model to a relatively small dataset. If a complicated model achieves 100% training accuracy within a few epochs, the most uncertain examples would often be outliers, biasing the model towards overfitting. To avoid this scenario, we modify the default hyper-parameter setup in the implementations of the text classifiers in Sections 4.3 and 4.4 to achieve similar performance using simplified models. For all other models and datasets, we use the default hyper-parameters of the existing implementations, which should favor the SGD-Uni or SGD-Scan methods, since the default hyper-parameters are optimized for these cases. To show the reliability of the proposed methods, we do not optimize the hyper-parameters for the proposed methods or the baselines.

Table 3: The average of the best testing error rates for different sampling methods and datasets (%). The confidence intervals are standard errors. LR means logistic regression.

| Method | MNIST (CNN) | Noisy MNIST (CNN) | CIFAR 10 (LR) | QT (CNN) |
| SGD-Uni | 0.55±0.01 | 0.83±0.01 | 62.49±0.06 | 2.19±0.02 |
| SGD-SD | 0.52±0.01 | 1.00±0.01 | 63.14±0.06 | 2.03±0.02 |
| SGD-ISD | 0.57±0.01 | 0.84±0.01 | 62.48±0.07 | 2.20±0.02 |
| SGD-SE | 0.54±0.01 | 0.69±0.01 | 60.87±0.06 | 2.28±0.02 |
| SGD-SPV | 0.51±0.01 | 0.64±0.01 | 60.66±0.06 | 2.08±0.02 |
| SGD-STC | 0.51±0.01 | 0.63±0.01 | 61.00±0.06 | 2.08±0.02 |

Due to the randomness in all the SGD variants, we repeat the experiments and list the number of trials in Table 2. At the beginning of each trial, the network weights are trained with uniform-sampling SGD until the validation performance starts to saturate. After these burn-in epochs, we apply the different sampling/weighting methods and compare performance. The number of burn-in epochs is determined by cross-validation, and the number of epochs in each trial is set large enough to let the testing error of most methods converge. In Tables 3 and 4, we evaluate the testing performance of each method after each epoch and report the best testing performance among epochs within each trial.
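The following sketch summarizes this protocol, a uniform burn-in phase followed by biased sampling with per-sample prediction histories; `model`, `data`, and `method` are placeholders standing in for the dataset-specific implementations summarized in Tables 1 and 2, not the actual experiment code.

```python
import numpy as np

def run_trial(model, data, method, n_burn_in, n_epochs):
    """One trial: uniform-sampling warm-up, then active bias sampling."""
    n = len(data)
    histories = [[] for _ in range(n)]          # H_i for every sample
    for epoch in range(n_epochs):
        if epoch < n_burn_in:
            probs = np.full(n, 1.0 / n)          # unbiased SGD-Uni burn-in
        else:                                    # e.g. SGD-SPV or SGD-STC
            probs = method.sampling_distribution(histories)
        for _ in range(n):
            i = np.random.choice(n, p=probs)     # sample with replacement
            p_correct = model.sgd_step(data[i])  # assumed to return p(y_i|x_i)
            histories[i].append(p_correct)       # grow the history H_i
```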
As previously discussed, there are various versions of preferring easy or difficult examples. Some of them require extra time to collect necessary statistics such as the gradient magnitude of each sample [12, 1], change the network architecture [15, 44], or involve an annealing schedule like self-paced learning [25, 32]. We tried self-paced learning on CIFAR 10 but found that performance usually remains the same and is sometimes sensitive to the hyper-parameters of the annealing schedule. This finding is consistent with the results from [4]. To simplify the comparison, we focus on testing the effects of a steady bias based on sample difficulty (e.g., comparing with SGD-SE and SGD-SD) and do not gradually change the preference during training like self-paced learning.

It is not always easy to change the sampling procedure because of model or implementation constraints. For example, in the sequence labeling tasks (CoNLL 2003 and OntoNote 5.0), the words in the same sentence need to be trained together. Thus, we only compare the methods which modify the loss function (SGD-W*) with SGD-Scan for some models. For the other experiments, re-weighting examples (SGD-W*) generally gives us better performance than changing the sampling distribution (SGD-S*). This might be because we can better estimate the statistics of each sample.

4.1 MNIST

We apply our method to a CNN [26] for MNIST¹ using one of the Tensorflow tutorials.² The dataset has high testing accuracy, so most of the examples are too easy for the model after a few epochs. Selecting more difficult instances can accelerate learning or improve testing accuracy [18, 29, 13]. The results from SGD-SD and SGD-WD confirm this finding, while selecting uncertain examples can give us a similar or larger boost. Furthermore, we test the robustness of our methods by randomly reassigning the labels of 10% of the images, and the results indicate that SGD-WPV improves the performance of SGD-Scan even more, while SGD-SD overfits the data seriously.

4.2 CIFAR 10 and CIFAR 100

We test a simple multi-class logistic regression³ on CIFAR 10 [24].⁴ Images are down-sampled significantly to 32 × 32 × 3, so many examples are difficult, even for humans. SGD-SPV and SGD-SE perform significantly better than SGD-Uni here, consistent with the idea that avoiding difficult examples increases robustness to outliers.

For CIFAR 100 [24], we demonstrate that the proposed approaches can also work in very deep residual networks [16].⁵ To show that the method is not sensitive to the network depth and the number of burn-in epochs, we present results from the network with 27 layers and 90 burn-in epochs as well as the network with 63 layers and 50 burn-in epochs.

1 http://yann.lecun.com/exdb/mnist/
2 https://github.com/tensorflow/models/blob/master/tutorials/image/mnist
3 https://cs231n.github.io/assignments2016/assignment2/
4 https://www.cs.toronto.edu/~kriz/cifar.html
5 https://github.com/tensorflow/models/tree/master/resnet

Table 4: The average of the best testing error rates and their standard errors for different weighting methods (%). For CoNLL 2003 and OntoNote 5.0, the values are 1 − (F1 score). CNN, LR, RN 27, RN 63 and FC mean convolutional neural network, logistic regression, residual network with 27 layers, residual network with 63 layers, and fully-connected network, respectively.

| Method | MNIST (CNN) | Noisy MNIST (CNN) | CIFAR 10 (LR) | CIFAR 100 (RN 27) | CIFAR 100 (RN 63) | QT (CNN) | CoNLL 2003 (CNN) | OntoNote 5.0 (CNN) | MNIST (FC) | MNIST distill (FC) |
| SGD-Scan | 0.54±0.01 | 0.81±0.01 | 62.48±0.06 | 34.04±0.06 | 30.70±0.06 | 2.24±0.02 | 11.62±0.04 | 17.80±0.05 | 2.85±0.03 | 2.27±0.01 |
| SGD-WD | 0.48±0.01 | 0.92±0.01 | 63.10±0.06 | 34.55±0.06 | 31.57±0.09 | 1.93±0.02 | 11.50±0.05 | 17.65±0.06 | 2.17±0.01 | 2.13±0.02 |
| SGD-WE | 0.56±0.01 | 0.72±0.01 | 60.88±0.06 | 33.65±0.07 | 29.92±0.09 | 2.30±0.02 | 11.73±0.04 | 18.40±0.05 | 3.08±0.03 | 2.35±0.01 |
| SGD-WPV | 0.48±0.01 | 0.61±0.02 | 60.61±0.06 | 33.69±0.07 | 30.02±0.08 | 1.99±0.02 | 11.24±0.06 | 17.82±0.03 | 2.68±0.02 | 2.18±0.02 |
| SGD-WTC | 0.48±0.01 | 0.63±0.01 | 61.02±0.06 | 33.64±0.07 | 30.16±0.09 | 2.02±0.02 | 11.18±0.03 | 17.51±0.05 | 2.34±0.03 | 2.07±0.02 |
Without changing the architectures, emphasizing uncertain or easy examples gains around 0.5% in both settings, which is significant considering that the much deeper network shows only a 3% improvement here.

When training a neural network, gradually reducing the learning rate (i.e., the magnitude of the gradients) usually improves performance. When difficult examples are sampled less often, the magnitude of the gradients is reduced. Thus, some of the improvement of SGD-SPV and SGD-SE might come from using a lower effective learning rate. Nevertheless, since we apply an aggressive learning rate decay in the CIFAR 10 and CIFAR 100 experiments, we know that the improvements from SGD-SPV and SGD-SE cannot be entirely explained by a lower effective learning rate.

4.3 Question Type

To investigate whether our methods are effective for smaller text datasets, we apply them to a sentence classification task, which we refer to as the Question Type (QT) dataset [28].⁶ We use the CNN architecture proposed by Kim [22].⁷ Like many other NLP tasks, the dataset is relatively small, and this CNN classifier does not inject noise into the inputs like the implementation of residual networks for CIFAR 100, so this complicated model reaches 100% training accuracy within a few epochs. To address this, we reduced the model complexity by (i) decreasing the number of filters from 128 to 64, (ii) decreasing the convolutional filter widths from 3,4,5 to 2,3,4, (iii) adding L2 regularization with scale 0.01, and (iv) performing PCA to reduce the dimension of the pre-trained word embedding from 300 to 50 and fixing the word embedding during training. The smaller model achieves better performance compared with the results from the original paper [22]. As with MNIST, most examples are too easy for the model, so preferring hard examples is effective, while the proposed active bias methods achieve comparable performance and are better than SGD-Uni and SGD-Scan.

4.4 Sequence Tagging Tasks

We also test our methods on Named Entity Recognition (NER) in the CoNLL 2003 [46] and OntoNote 5.0 [20] datasets using the CNN from Strubell et al. [45].⁸ Similar to Question Type, the model is too complex for our approaches, so we (i) only use 3 layers instead of 4, (ii) reduce the number of filters from 300 to 100, (iii) add 0.001 L2 regularization, and (iv) make the 50-dimensional word embedding from Collobert et al. [9] non-trainable. The micro F1 of this smaller model only drops around 1%-2% from the original big model. Table 4 shows that our methods achieve the lowest error rate (1 − F1) in both benchmarks.

6 https://cogcomp.cs.illinois.edu/Data/QA/QC/
7 https://github.com/dennybritz/cnn-text-classification-tf
8 https://github.com/iesl/dilated-cnn-ner

4.5 Distillation

Although state-of-the-art neural networks in many applications memorize examples easily [50], much simpler models can usually achieve similar performance, like those in the previous two experiments. In practice, such models are often preferable due to their low computation and memory requirements. We have shown that the proposed methods can improve these smaller models, as distillation does [17], so it is natural to check whether our methods can work well together with distillation. We use an implementation⁹ that distills a shallow CNN with 3 convolution layers into a 2-layer fully-connected network on MNIST. The teacher network achieves 0.8% testing error, and the temperature of the softmax is set to 1. Our approaches and baselines simply apply the sample-dependent weights $v_i$ to the final loss function (i.e., the cross-entropy of the true labels plus the cross-entropy of the prediction probability from the teacher network). On MNIST, SGD-WTC and SGD-WD achieve similar or better improvements compared with adding distillation to SGD-Scan. Furthermore, the best performance comes from distillation plus SGD-WTC, which shows that active bias is compatible with distillation on this dataset.

9 https://github.com/akamaus/mnist-distill
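A minimal sketch of how such sample-dependent weights can be injected into a distillation objective, assuming the teacher's soft targets are given as probabilities; this mirrors the setup described above, not the referenced implementation.

```python
import torch
import torch.nn.functional as F

def weighted_distill_loss(logits, labels, teacher_probs, v):
    """Sample-weighted distillation loss.

    `v` are the active-bias weights (e.g., SGD-WTC); the objective is the
    cross-entropy of the true labels plus the cross-entropy of the teacher's
    prediction probabilities, each term scaled per sample by v.
    """
    ce_true = F.cross_entropy(logits, labels, reduction='none')
    log_q = F.log_softmax(logits, dim=1)               # student log-probs
    ce_teacher = -(teacher_probs * log_q).sum(dim=1)   # soft-target term
    return (v * (ce_true + ce_teacher)).mean()
```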
5 Conclusion

Deep learning researchers often gain accuracy by employing training techniques such as momentum, dropout, batch normalization, and distillation. This paper presents a new compatible sibling to these methods, which we recommend for wide use. Our relatively simple and computationally lightweight techniques emphasize the uncertain examples (i.e., SGD-*PV and SGD-*TC). The experiments confirm that a proper bias can be beneficial to generalization performance. When the task is relatively easy (both training and testing accuracy are high), preferring more difficult examples works well. On the contrary, when the dataset is challenging or noisy (both training and testing accuracy are low), emphasizing easier samples often leads to better performance. In both cases, the active bias techniques consistently lead to more accurate and robust neural networks as long as the classifier does not memorize all the training samples easily (i.e., training accuracy is high but testing accuracy is low).

Acknowledgements

This material is based on research sponsored by the National Science Foundation under Grant No. 1514053 and by DARPA under agreement numbers FA8750-13-2-0020 and HR0011-15-2-0036. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government.

References

[1] G. Alain, A. Lamb, C. Sankar, A. Courville, and Y. Bengio. Variance reduction in SGD by distributed importance sampling. arXiv preprint arXiv:1511.06481, 2015.
[2] S.-I. Amari, H. Park, and K. Fukumizu. Adaptive method of realizing natural gradient learning for multilayer perceptrons. Neural Computation, 12(6):1399–1409, 2000.
[3] M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas. Learning to learn by gradient descent by gradient descent. In NIPS, 2016.
[4] V. Avramova. Curriculum learning with deep convolutional neural networks, 2015.
[5] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
[6] A. Bordes, S. Ertekin, J. Weston, and L. Bottou. Fast kernel classifiers with online and active learning. Journal of Machine Learning Research, 6(Sep):1579–1619, 2005.
[7] S. Bubeck, N. Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[8] P. Chaudhari, A. Choromanska, S. Soatto, and Y. LeCun. Entropy-SGD: Biasing gradient descent into wide valleys. In ICLR, 2017.
[9] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537, 2011.
[10] G. Druck and A. McCallum.
Toward interactive training and evaluation. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, pages 947–956. ACM, 2011.
[11] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
[12] J. Gao, H. Jagadish, and B. C. Ooi. Active sampler: Light-weight accelerator for complex data analytics at scale. arXiv preprint arXiv:1512.03880, 2015.
[13] S. Gopal. Adaptive sampling for SGD by exploiting side information. In ICML, 2016.
[14] A. Guillory, E. Chastain, and J. A. Bilmes. Active learning as non-convex optimization. In AISTATS, 2009.
[15] C. Gulcehre, M. Moczulski, F. Visin, and Y. Bengio. Mollifying networks. In ICLR, 2017.
[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[17] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop, 2014.
[18] G. E. Hinton. To recognize shapes, first learn to generate images. Progress in Brain Research, 165:535–547, 2007.
[19] N. Houlsby, F. Huszár, Z. Ghahramani, and M. Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011.
[20] E. Hovy, M. Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. OntoNotes: the 90% solution. In HLT-NAACL, 2006.
[21] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.
[22] Y. Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
[23] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[24] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
[25] M. P. Kumar, B. Packer, and D. Koller. Self-paced learning for latent variable models. In NIPS, 2010.
[26] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[27] G.-H. Lee, S.-W. Yang, and S.-D. Lin. Toward implicit sample noise modeling: Deviation-driven matrix factorization. arXiv preprint arXiv:1610.09274, 2016.
[28] X. Li and D. Roth. Learning question classifiers. In COLING, 2002.
[29] I. Loshchilov and F. Hutter. Online batch selection for faster training of neural networks. arXiv preprint arXiv:1511.06343, 2015.
[30] D. J. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.
[31] S. Mandt, M. D. Hoffman, and D. M. Blei. A variational analysis of stochastic gradient algorithms. In ICML, 2016.
[32] S. Mandt, J. McInerney, F. Abrol, R. Ranganath, and D. Blei. Variational tempering. In AISTATS, 2016.
[33] D. Meng, Q. Zhao, and L. Jiang. What objective does self-paced learning indeed optimize? arXiv preprint arXiv:1511.06049, 2015.
[34] Y. Mu, W. Liu, X. Liu, and W. Fan. Stochastic gradient made stable: A manifold propagation approach for large-scale optimization. IEEE Transactions on Knowledge and Data Engineering, 2016.
[35] C. G. Northcutt, T. Wu, and I. L. Chuang. Learning with confident examples: Rank pruning for robust classification with noisy labels. arXiv preprint arXiv:1705.01936, 2017.
[36] T. Pi, X. Li, Z. Zhang, D. Meng, F. Wu, J. Xiao, and Y. Zhuang. Self-paced boost learning for classification. In IJCAI, 2016.
[37] D. Pregibon. Resistant fits for some commonly used logistic models with medical applications. Biometrics, pages 485–498, 1982.
[38] N. Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145–151, 1999.
[39] J. D. Rennie. Regularized logistic regression is strictly convex. Unpublished manuscript. URL: people.csail.mit.edu/jrennie/writing/convexLR.pdf, 2005.
[40] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. In ICML, 2013.
[41] A. I. Schein and L. H. Ungar. Active learning for logistic regression: an evaluation. Machine Learning, 68(3):235–265, 2007.
[42] G. Schohn and D. Cohn. Less is more: Active learning with support vector machines. In ICML, 2000.
[43] B. Settles. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11, 2010.
[44] A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example mining. In CVPR, 2016.
[45] E. Strubell, P. Verga, D. Belanger, and A. McCallum. Fast and accurate sequence labeling with iterated dilated convolutions. arXiv preprint arXiv:1702.02098, 2017.
[46] E. F. Tjong Kim Sang and F. De Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In HLT-NAACL, 2003.
[47] C. Wang, X. Chen, A. J. Smola, and E. P. Xing. Variance reduction for stochastic gradient optimization. In NIPS, 2013.
[48] Y. Wang, A. Kucukelbir, and D. M. Blei. Reweighted data for robust probabilistic models. arXiv preprint arXiv:1606.03860, 2016.
[49] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[50] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017.
[51] P. Zhao and T. Zhang. Stochastic optimization with importance sampling. arXiv preprint arXiv:1412.2753, 2014.
6,304
6,702
Differentiable Learning of Submodular Models

Josip Djolonga
Department of Computer Science
ETH Zurich
[email protected]

Andreas Krause
Department of Computer Science
ETH Zurich
[email protected]

Abstract

Can we incorporate discrete optimization algorithms within modern machine learning models? For example, is it possible to incorporate in deep architectures a layer whose output is the minimal cut of a parametrized graph? Given that these models are trained end-to-end by leveraging gradient information, the introduction of such layers seems very challenging due to their non-continuous output. In this paper we focus on the problem of submodular minimization, for which we show that such layers are indeed possible. The key idea is that we can continuously relax the output without sacrificing guarantees. We provide an easily computable approximation to the Jacobian, complemented with a complete theoretical analysis. Finally, these contributions let us experimentally learn probabilistic log-supermodular models via a bi-level variational inference formulation.

1 Introduction

Discrete optimization problems are ubiquitous in machine learning. While the majority of them are provably hard, a commonly exploitable trait that renders some of them tractable is that of submodularity [1, 2]. In addition to capturing many useful phenomena, submodular functions can be minimized in polynomial time and also enjoy a powerful connection to convex optimization [3]. Both of these properties have been used to great effect in both computer vision and machine learning, to e.g. compute the MAP configuration in undirected graphical models with long-reaching interactions [4] and higher-order factors [5], for clustering [6], to perform variational inference in log-supermodular models [7, 8], or to design norms useful for structured sparsity problems [9, 10].

Despite all the benefits of submodular functions, the question of how to learn them in a practical manner remains open. Moreover, if we want to open the toolbox of submodular optimization to modern practitioners, an intriguing question is how to use them in conjunction with deep learning networks. For instance, we need to develop mechanisms that would enable them to be trained together in a fully end-to-end fashion.

As a concrete example from the computer vision domain, consider the problem of image segmentation. Namely, we are given as input an RGB representation $x \in \mathbb{R}^{n \times 3}$ of an image captured by, say, a dashboard camera, and the goal is to identify the set of pixels $A \subseteq \{1, 2, \ldots, n\}$ that are occupied by pedestrians. While we could train a network mapping $x$ to per-pixel scores $v \in \mathbb{R}^n$, it would be helpful, especially in domains with limited data, to bias the predictions by encoding some prior beliefs about the expected output. For example, we might prefer segmentations that are spatially consistent. One common approach to encourage such configurations is to first define a graph over the image $G = (V, E)$ by connecting neighbouring pixels, specify weights $w$ over the edges, and then solve the following graph-cut problem

$$A^\star(w, v) = \operatorname*{arg\,min}_{A \subseteq V} F(A) = \operatorname*{arg\,min}_{A \subseteq V} \sum_{\{i,j\} \in E} w_{i,j}\, \underbrace{[\![\, |A \cap \{i,j\}| = 1 \,]\!]}_{1 \text{ iff the predictions disagree}} + \underbrace{\sum_{i \in A} v_i}_{\text{pixel scores}}. \qquad (1)$$
While this can easily be seen as a module computing the best configuration as a function of the edge weights and per-pixel scores, incorporating it as a layer in a deep network seems at first glance to be a futile task. Even though the output is easily computable, it is discontinuous and has no Jacobian, which is necessary for backpropagation. However, as the above problem is an instance of submodular minimization, we can leverage its relationship to convexity and relax it to

$$y^\star(w, v) = \operatorname*{arg\,min}_{y \in \mathbb{R}^n} f(y) + \frac{1}{2}\|y\|^2 = \operatorname*{arg\,min}_{y \in \mathbb{R}^n} \sum_{\{i,j\} \in E} w_{i,j} |y_i - y_j| + v^T y + \frac{1}{2}\|y\|^2. \qquad (2)$$

In addition to having a continuous output, this relaxation has a very strong connection with the discrete problem, as the discrete optimizer can be obtained by thresholding $y^\star$ as $A^\star = \{i \in V \mid y_i^\star > 0\}$. Moreover, as explained in §2, for every submodular function $F$ there exists an easily computable convex function $f$ so that this relationship holds. For general submodular functions, the negation of the solution to the relaxed problem (2) is known as the min-norm point [11]. In this paper we consider the problem of designing such modules that solve discrete optimization problems by leveraging this continuous formulation. To this end, our key technical contribution is to analyze the sensitivity of the min-norm point as a function of the parametrization of the function $f$. For the specific case above, we will show how to compute $\partial y^\star / \partial w$ and $\partial y^\star / \partial v$.

Continuing with the segmentation example, we might want to train a conditional model $P(A \mid x)$ that can capture the uncertainty in the predictions, to be used in downstream decision making. A rich class of models are log-supermodular models, i.e., those of the form $P(A \mid x) = \exp(-F_{\theta(x)}(A))/Z(\theta)$ for some parametric submodular function $F_\theta$. While they can capture very useful interactions, they are very hard to train in the maximum-likelihood setting due to the presence of the intractable normalizer $Z(\theta)$. However, Djolonga and Krause [8] have shown that for any such distribution we can find the closest fully factorized distribution $Q(\cdot \mid x)$ minimizing a specific information-theoretic divergence $D_\infty$. In other words, we can exactly compute $Q(\cdot \mid x) = \arg\min_{Q \in \mathcal{Q}} D_\infty(P(\cdot \mid x) \,\|\, Q)$, where $\mathcal{Q}$ is the family of fully factorized distributions. Most importantly, the optimal $Q$ can also be computed from the min-norm point. Thus, a reasonable objective would be to learn a model $\theta(x)$ so that the best approximate distribution $Q(\cdot \mid x)$ gives high likelihood to the training data points. This is a complicated bi-level optimization problem (as $Q$ implicitly depends on $\theta$) with an inner variational inference procedure, which we can again train end-to-end using our results. In other words, we can optimize the following algorithm end-to-end with respect to $\theta$:

$$x_i \xrightarrow{\theta(x)} \theta_i \longmapsto P = \exp(-F_{\theta_i}(A))/Z(\theta_i) \longmapsto Q = \operatorname*{arg\,min}_{Q \in \mathcal{Q}} D_\infty(P \,\|\, Q) \longmapsto Q(A_i \mid x_i). \qquad (3)$$
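As a rough illustration of pipeline (3), the following pytorch-style sketch composes a network, a hypothetical differentiable min-norm layer, and the factorized marginals; per Lemma 2 in Section 2, the marginal of each element under the best factorized $Q$ is the sigmoid of the negated min-norm point. The `min_norm_layer` is an assumed component standing in for the layers developed in this paper.

```python
import torch
import torch.nn.functional as F

def bilevel_loss(net, min_norm_layer, x, labels):
    """End-to-end training objective for the pipeline of eq. (3)."""
    theta = net(x)                       # parameters of F_theta
    y_star = min_norm_layer(theta)       # differentiable min-norm point
    marginals = torch.sigmoid(-y_star)   # Q(i in A) for each element i
    return F.binary_cross_entropy(marginals, labels.float())
```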
Related work. Sensitivity analysis of the set of optimal solutions has a long history in optimization theory [12]. The problem of argmin-differentiation of the specific case resulting from graph cuts (i.e. eq. (2)) has been considered in the computer vision literature, either by smoothing the objective [13], or by unrolling iterative methods [14]. The idea of training probabilistic models by evaluating them on the marginals produced by an approximate inference algorithm has been studied by Domke [15] for tree-reweighted belief propagation and mean field, and for continuous models by Tappen [16]. These methods either use the implicit function theorem, or unroll iterative optimization algorithms. The benefits of using an inconsistent estimator, which is what we do by optimizing eq. (3), in exchange for computationally tractable inference have been discussed by Wainwright [17]. Amos and Kolter [18] discuss how to efficiently argmin-differentiate quadratic programs by perturbing the KKT conditions, an idea that goes back to Boot [19]. We make an explicit connection to their work in Theorem 4. In Section 4 we harness the connection between the min-norm problem and isotonic regression, which has been exploited to obtain better duality certificates [2], and by Kumar and Bach [20] to design an active-set algorithm for the min-norm problem. Chakravarti [21] analyzes the sensitivity of the optimal isotonic regression point with respect to perturbations of the input, but does not discuss the directional derivatives of the problem. Recently, Dolhansky and Bilmes [22] have used deep networks to parametrize submodular functions. Discrete optimization is also used in structured prediction [23, 24] for the computation of the loss function, which is closely related to our work if we use discrete optimization only at the last layer. However, in this case we have the advantage that we allow for arbitrary loss functions to be applied to the solution of the relaxation.

Contributions. We develop a very efficient approximate method (§4) for the computation of the Jacobian of the min-norm problem, inspired by our analysis of isotonic regression in §3, where we derive results that might be of independent interest. Even more importantly from a practical perspective, this Jacobian has a very nice structure and we can multiply with it in linear time. This means that we can efficiently perform back-propagation if we use these layers in modern deep architectures. In §5 we show how to compute directional derivatives exactly in polynomial time, and give conditions under which our approximation is correct. This is also an interesting theoretical result, as it quantifies the stability of the min-norm point with respect to the model parameters. Lastly, we use our results to learn log-supermodular models in §6.

2 Background on Submodular Minimization

Let us introduce the necessary background on submodular functions. They are defined over subsets of some ground set, which in the remainder of this paper we will w.l.o.g. assume to be $V = \{1, 2, \ldots, n\}$. Then, a function $F : 2^V \to \mathbb{R}$ is said to be submodular iff for all $A, B \subseteq V$ it holds that

$$F(A \cup B) + F(A \cap B) \le F(A) + F(B). \qquad (4)$$

We will furthermore w.l.o.g. assume that $F$ is normalized so that $F(\emptyset) = 0$. A very simple family of submodular functions are modular functions. These, seen as discrete analogues of linear functions, satisfy the above with equality and are given as $F(A) = \sum_{i \in A} m_i$ for some real numbers $m_i$. As is common practice in combinatorial optimization, we will treat any vector $m \in \mathbb{R}^n$ as a modular function $2^V \to \mathbb{R}$ defined as $m(A) = \sum_{i \in A} m_i$. In addition to the graph cuts from the introduction (eq. (1)), another widely used class of functions are the concave-of-cardinality functions, i.e. those of the form $F(A) = h(|A|)$ for some concave $h : \mathbb{R} \to \mathbb{R}$ [5]. From eq. (4) we see that if we want to define a submodular function over a collection $\mathcal{D} \subsetneq 2^V$, it has to be closed under union and intersection. Such collections are known as lattices, and two examples that we will use are the simple lattice $2^V$ and the trivial lattice $\{\emptyset, V\}$. In the theory of submodular minimization, a critical object defined by a pair consisting of a submodular function $F$ and a lattice $\mathcal{D} \supseteq \{\emptyset, V\}$ is the base polytope

$$B(F \mid \mathcal{D}) = \{x \in \mathbb{R}^n \mid x(A) \le F(A) \text{ for all } A \in \mathcal{D}\} \cap \{x \in \mathbb{R}^n \mid x(V) = F(V)\}. \qquad (5)$$

We will also use the shorthand $B(F) = B(F \mid 2^V)$.
Using the result of Edmonds [25], we know how to maximize a linear function over $B(F)$ in $O(n \log n)$ time with $n$ function evaluations of $F$. Specifically, to compute $\max_{y \in B(F)} z^T y$, we first choose a permutation $\sigma : V \to V$ that sorts $z$, i.e. so that $z_{\sigma(1)} \ge z_{\sigma(2)} \ge \cdots \ge z_{\sigma(n)}$. Then, a maximizer $f(\sigma) \in B(F)$ can be computed as

$$[f(\sigma)]_{\sigma(i)} = F(\{\sigma(i)\} \mid \{\sigma(1), \ldots, \sigma(i-1)\}), \qquad (6)$$

where the marginal gain of $A$ given $B$ is defined as $F(A \mid B) = F(A \cup B) - F(B)$. Hence, we know how to compute the support function $f(z) = \sup_{y \in B(F)} z^T y$, which is known as the Lovász extension [3]. First, this function is indeed an extension, as $f(1_A) = F(A)$ for all $A \subseteq V$, where $1_A \in \{0,1\}^n$ is the indicator vector of the set $A$. Second, it is convex, as it is a supremum of linear functions. Finally, and most importantly, it lets us minimize submodular functions in polynomial time with convex optimization, because $\min_{A \in 2^V} F(A) = \min_{z \in [0,1]^n} f(z)$ and we can efficiently round the optimal continuous point to a discrete optimizer. Another problem, with a smooth objective, which is also explicitly tied to the problem of minimizing $F$, is that of computing the min-norm point, which can be defined in two different ways as

$$y^\star = \operatorname*{arg\,min}_{y \in B(F)} \frac{1}{2}\|y\|^2, \quad \text{or equivalently as} \quad -y^\star = \operatorname*{arg\,min}_{y} f(y) + \frac{1}{2}\|y\|^2, \qquad (7)$$

where the equivalence comes from strong Fenchel duality [2]. The connection with submodular minimization comes from the following lemma, which we have already hinted at in the introduction.

Lemma 1 ([1, Lem. 7.4]). Define $A^- = \{i \mid y_i^\star < 0\}$ and $A^0 = \{i \mid y_i^\star \le 0\}$. Then $A^-$ ($A^0$) is the unique smallest (largest) minimizer of $F$.

Moreover, if instead of hard-thresholding we send the min-norm point through a sigmoid, the result has the following variational inference interpretation, which lets us optimize the pipeline in eq. (3).

Lemma 2 ([8, Thm. 3]). Define the infinite Rényi divergence between any distributions $P$ and $Q$ over $2^V$ as $D_\infty(P \,\|\, Q) = \sup_{A \subseteq V} \log\left(P(A)/Q(A)\right)$. For $P(A) \propto \exp(-F(A))$, the distribution $Q^\star$ minimizing $D_\infty$ over all fully factorized distributions $Q$ is given as

$$Q(A) = \prod_{i \in A} \sigma(-y_i^\star) \prod_{i \notin A} \sigma(y_i^\star),$$

where $\sigma(u) = 1/(1 + \exp(-u))$ is the sigmoid function.
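Since the greedy evaluation of eq. (6) underlies everything that follows, here is a short self-contained sketch; the set-function interface is our own choice of representation.

```python
import numpy as np

def greedy_base_vertex(F, z):
    """Edmonds' greedy algorithm, eq. (6).

    Returns the maximizer of z^T y over the base polytope B(F), where F is a
    normalized set function taking a Python set; z @ y then equals the
    Lovász extension f(z). Runs in O(n log n) plus n evaluations of F.
    """
    n = len(z)
    order = np.argsort(-np.asarray(z))   # sigma: sorts z in decreasing order
    y = np.zeros(n)
    prefix = set()
    prev = 0.0
    for i in order:
        prefix.add(i)
        cur = F(prefix)
        y[i] = cur - prev                # marginal gain F({sigma(i)} | prefix)
        prev = cur
    return y

# Example: the cut of a single edge {0, 1} with weight 1 on V = {0, 1}.
cut = lambda A: 1.0 if len(A) == 1 else 0.0
y = greedy_base_vertex(cut, np.array([0.3, -0.2]))   # -> [1., -1.]
print(np.dot([0.3, -0.2], y))                        # f(z) = |0.3 - (-0.2)|
```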
3 Argmin-Differentiation of Isotonic Regression

We will first analyze a simpler problem, namely that of isotonic regression, defined as

$$y(x) = \operatorname*{arg\,min}_{y \in \mathcal{O}} \frac{1}{2}\|y - x\|^2, \qquad (8)$$

where $\mathcal{O} = \{y \in \mathbb{R}^n \mid y_i \ge y_{i+1} \text{ for } i = 1, 2, \ldots, n-1\}$. The connection to our problem will be made clear in Section 4, and it essentially follows from the fact that the Lovász extension is linear on $\mathcal{O}$. In this section we will be interested in computing the Jacobian $\partial y / \partial x$, i.e., in understanding how the solution $y$ changes with respect to the input $x$. The function is well-defined because of the strict convexity of the objective and the non-empty convex feasible set. Moreover, it can be easily computed in $O(n)$ time using the pool adjacent violators algorithm (PAVA) [26]. This is a well-studied problem in statistics, see e.g. [27]. To understand the behaviour of $y(x)$, we will start by stating the optimality conditions of problem (8). To simplify the notation, for any $A \subseteq V$ we define $\mathrm{Mean}_x(A) = \frac{1}{|A|}\sum_{i \in A} x_i$. The optimality conditions can be stated via ordered partitions $\pi = (B_1, B_2, \ldots, B_m)$ of $V$, meaning that the sets $B_j$ are disjoint, $\cup_{j=1}^m B_j = V$, and $\pi$ is ordered so that $1 + \max_{i \in B_j} i = \min_{i \in B_{j+1}} i$. Specifically, for any such partition we define $y_\pi = (y_1, y_2, \ldots, y_m)$, where $y_j = \mathrm{Mean}_x(B_j) 1_{|B_j|}$ and $1_k = \{1\}^k$ is the vector of all ones. In other words, $y_\pi$ is defined by taking block-wise averages of $x$ with respect to $\pi$. By analyzing the KKT conditions of problem (8), we obtain the following well-known condition.

Lemma 3 ([26]). An ordered partition $\pi = (B_1, B_2, \ldots, B_m)$ is optimal iff the following hold:

1. (Primal feasibility) For any two blocks $B_j$ and $B_{j+1}$ we have
$$\mathrm{Mean}_x(B_j) \ge \mathrm{Mean}_x(B_{j+1}). \qquad (9)$$

2. (Dual feasibility) For every block $B \in \pi$ and each $i \in B$ define $\mathrm{Pre}_B(i) = \{j \in B \mid j \le i\}$. Then, the condition reads
$$\mathrm{Mean}_x(\mathrm{Pre}_B(i)) - \mathrm{Mean}_x(B) \ge 0. \qquad (10)$$

Points where eq. (9) is satisfied with equality are of special interest, because of the following result.

Lemma 4. If for some $B_j$ and $B_{j+1}$ the first condition is satisfied with equality, we can merge the two sets so that the new, coarser partition $\pi'$ will also be optimal.

Thus, in the remainder of this section we will assume that the sets $B_j$ are chosen maximally. We will now introduce a notion that will be crucial in the subsequent analysis.

Definition 1. For any block $B$, we say that $i \in B$ is a breakpoint if $\mathrm{Mean}_x(\mathrm{Pre}_B(i)) = \mathrm{Mean}_x(B)$ and it is not the right end-point of $B$ (i.e., $i < \max_{i' \in B} i'$).

From an optimization perspective, any breakpoint is equivalent to non-strict complementarity of the corresponding Lagrange multiplier. From a combinatorial perspective, breakpoints correspond to positions where we can refine $\pi$ into a finer partition $\pi'$ that gives rise to the same point, i.e., $y_\pi = y_{\pi'}$ (if we merge blocks using Lemma 4, the point where we merge them becomes a breakpoint). We can now discuss the differentiability of $y(x)$. Because projecting onto convex sets is a proximal operator and thus non-expansive, we have the following as an immediate consequence of Rademacher's theorem.

Lemma 5. The function $y(x)$ is 1-Lipschitz continuous and differentiable almost everywhere.

We will denote by $\partial_{x_k}^-$ and $\partial_{x_k}^+$ the left and right partial derivatives with respect to $x_k$. For any index $k$ we will denote by $u(k)$ ($l(k)$) the breakpoint with the smallest (largest) coordinate larger (smaller) than $k$, defined as $+\infty$ ($-\infty$) if no such point exists. Moreover, we denote by $\pi(z)$ the collection of index sets on which $z$ takes its distinct values, i.e., $\pi(z) = \cup_{i=1}^n \{\{i' \in V \mid z_{i'} = z_i\}\}$.

Theorem 1. Let $k$ be any coordinate and let $B \in \pi(y(x))$ be the block containing it. Also define $B_+ = \{i \in B \mid i > u(k)\}$ and $B_- = \{i \in B \mid i \le l(k)\}$. Then, for any $i \in B$,

$$\partial_{x_k}^+(y_i) = [\![\, i \in B \setminus B_- \,]\!] / |B \setminus B_-|, \quad \text{and} \quad \partial_{x_k}^-(y_i) = [\![\, i \in B \setminus B_+ \,]\!] / |B \setminus B_+|.$$

Note that all of these derivatives agree iff there are no breakpoints, which means that the existence of breakpoints is an isolated phenomenon due to Lemma 5. In this case the Jacobian exists and has a very simple block-diagonal form. Namely, it is equal to

$$\frac{\partial y}{\partial x} = \Pi(y(x)) \equiv \mathrm{blkdiag}(C_{|B_1|}, C_{|B_2|}, \ldots, C_{|B_m|}), \qquad (11)$$

where $C_k = 1_{k \times k}/k$ is the averaging matrix with elements $1/k$. We will use $\Pi(z)$ for the matrix taking block-wise averages with respect to the blocks $\pi(z)$. As promised in the introduction, Jacobian multiplication $\Pi(y(x))u$ takes linear time, as we only have to perform block-wise averages.
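To make the preceding results concrete, here is a small numpy sketch of PAVA together with the linear-time multiplication by the block-diagonal Jacobian of eq. (11); the representation of blocks as (start, end, mean) triples is an illustrative choice.

```python
import numpy as np

def isotonic_blocks(x):
    """Pool adjacent violators (PAVA) for the decreasing cone O.

    Returns the optimal ordered partition as [start, end, mean] triples over
    half-open index ranges; y(x) is recovered by block-wise averaging.
    """
    blocks = []                                  # stack of [start, end, mean]
    for i, xi in enumerate(x):
        blocks.append([i, i + 1, float(xi)])
        # merge while primal feasibility (9), Mean(B_j) >= Mean(B_{j+1}), fails
        # (<= also merges equal blocks, keeping the partition maximal)
        while len(blocks) > 1 and blocks[-2][2] <= blocks[-1][2]:
            s, e = blocks[-2][0], blocks[-1][1]
            n1 = blocks[-2][1] - blocks[-2][0]
            n2 = blocks[-1][1] - blocks[-1][0]
            m = (n1 * blocks[-2][2] + n2 * blocks[-1][2]) / (n1 + n2)
            blocks[-2:] = [[s, e, m]]
    return blocks

def jacobian_vec_product(blocks, u):
    """Multiply the block-diagonal Jacobian of eq. (11) with u in O(n)."""
    out = np.empty_like(u, dtype=float)
    for s, e, _ in blocks:
        out[s:e] = u[s:e].mean()                 # each block simply averages
    return out

x = np.array([1.0, 3.0, 2.0, 0.5])
blocks = isotonic_blocks(x)                      # [[0, 3, 2.0], [3, 4, 0.5]]
print(jacobian_vec_product(blocks, np.array([1.0, 0.0, 0.0, 0.0])))
```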
In this case, to ensure that the resulting function is submodular we also want to enforce ?j ? 0 unless Gj is modular. We would like to note that the discussion in this section goes beyond such models. Remember that the min-norm point is defined as 1 y? = ? arg min f? (y) + kyk2 , (13) 2 y where f? is the Lov?sz extension of F? . Hence, we want to compute ?y/??. To make the connection with isotonic regression, remember how we evaluate the Lov?sz extension at y. First, we pick a permutation ? that sorts y, and then evaluate it as f? (y) = f? (?)T y, where f? (?) is defined in eq. (6). Hence, the Lov?sz extension is linear on the set of all vectors that are sorted by ?. Formally, for any permutation ? the Lov?sz extension is equal to f? (?)T y on the order cone O(?) = {y | y?(n) ? y?(n?1) ? . . . ? y?(1) }. Given a permutation ?, if we constrain eq. (13) to O(?) we can replace f? (y) by the linear function f? (?)T , so that the problem reduces to 1 y? (?) = ? arg min ky + f? (?)k2 , (14) y?O(?) 2 which is an instance of isotonic regression if we relabel the elements of V using ?. Roughly, the idea is to instead differentiate eq. (14) with f? (?) computed at the optimal point y? . However, because we can choose an arbitrary order among the elements with equal values, there may be multiple permutations that sort y? , and this extra choice we have seems very problematic. Nevertheless, let us continue with this strategy and analyze the resulting approximations to the Jacobian. We propose the following approximation to the Jacobian ?y? ?f? (?) ? Jb? ? ?(y? ) ? = ?(y? ) ? [??1 f? (?) | ??2 f? (?) | | {z } ?? ?? ??? | ??d f? (?)] , ?y (?) ? ?f ?(?) ? where ?(y? ) is used as an approximation of a Jacobian which might not exist. Fortunately, due to the special structure of the linearizations, we have the following result that the gradient obtained using the above strategy does not depend on the specific permutation ? that was chosen. Theorem 2. If ??k F (A) exists for all A ? V the approximate Jacobians Jb? are equal and do not depend on the choice of ?. Specifically, the j-th block of any element i ? B ? ?(y? ) is equal to 1 ?? F? (B | {i0 | [y? ]i0 < [y? ]i }). (15) |B| j Proof sketch, details in supplement. Remember that ?(y? ) averages f? (?) within each B ? ?(y? ). Moreover, as ? sorts y? , the elements in B must be placed consecutively. The coordinates of f? (?) are marginal gains (6) and they will telescope inside the mean, which yields the claimed quantity. 5 Graph cuts. As a special, but important case, let us analyze how the approximate Jacobian looks like for a cut function (eq. (1)), in which case eq. (13) reduces to eq. (2). Let ?(y(w, v)) = (B1 , B2 , . . . , Bm ). For any element i ? V we will denote by ?(i) ? {1, 2, . . . , m} the index of the block where it belongs to. Then, the approximate Jacobian Jb at ? = (w, v) has entries ?bvj (yi ) = J?(i) = ?(j)K/|B?(i) |, ? 1 ? ?sign(yi ? yj ) |B?(k) | 1 ?bwi,j (yk ) = sign(yj ? yi ) |B?(k) | ? ? 0 and if ?(k) = ?(i), or if ?(k) = ?(j), and otherwise, where the sign function is defined to be zero if the argument is zero. Intuitively, increasing the modular term vi by ? will increase all the coordinates B in y that are in the same segment by ?/|B|. On the other hand, increasing the weight of an edge wi,j will have no effect if i and j are already on y in the same segment, and otherwise it will pull the segments containing i and j together by increasing the smaller one and decreasing the larger one. 
In the supplementary we provide a pytorch module that executes the backpropagation pass in $O(|V| + |E|)$ time in about 10 lines of code, and we also derive the approximate Jacobians for concave-of-cardinality and facility location functions.

5 Analysis

We will now theoretically analyze the conditions under which our approximation is correct, and then give a characterization of the exact directional derivative together with a polynomial-time algorithm that computes it. The first notion that will have implications for our analysis is that of (in)separability.

Definition 2. The function $F : 2^V \to \mathbb{R}$ is said to be separable if there exists some $B \subseteq V$ such that $B \notin \{\emptyset, V\}$ and $F(V) = F(B) + F(V \setminus B)$.

The term separable is indeed appropriate, as it implies that $F(A) = F(A \cap B) + F((V \setminus B) \cap A)$ for all $A \subseteq V$ [2, Prop. 4.3], i.e., the function splits as a sum of two functions on disjoint domains. Hence, we can split the problem into two (on $B$ and $V \setminus B$) and analyze them independently. We would like to point out that separability is checkable in cubic time using the algorithm of Queyranne [28]. To simplify the notation, we will assume that we want to compute the derivative at a point $\theta^0 \in \mathbb{R}^d$ which results in the min-norm point $y^0 = y^\star \in \mathbb{R}^n$. We will furthermore assume that $y^0$ takes on unique values $\alpha_1 < \alpha_2 < \cdots < \alpha_k$ on sets $B_1, B_2, \ldots, B_k$ respectively, and we will define the chain $\emptyset = A_0 \subset A_1 \subset A_2 \subset \cdots \subset A_k = V$ by $A_j = \cup_{j'=1}^j B_{j'}$. A central role in the analysis will be played by the set of constraints in $B(F_\theta)$ (see (5)) that are active at $y^\star$, which makes sense given that we expect small perturbations of $\theta^0$ to result in small changes of $y^{\theta^0}$ as well.

Definition 3. For any submodular function $F : 2^V \to \mathbb{R}$ and any point $z \in B(F)$ we shall denote by $\mathcal{D}_F(z)$ the lattice of tight sets of $z$ on $B(F)$, i.e. $\mathcal{D}_F(z) = \{A \subseteq V \mid z(A) = F(A)\}$.

The fact that the above set is indeed a lattice is well-known [1]. Moreover, note that $\mathcal{D}_F(z) \supseteq \{\emptyset, V\}$. We will also define $\mathcal{D}_0 = \mathcal{D}_{F_{\theta^0}}(y^0)$, i.e., the lattice of tight sets at the min-norm point.

5.1 When will the approximate approach work?

We will analyze sufficient conditions so that, irrespective of the choice of $\sigma$, the isotonic regression problem eq. (14) has no breakpoints and the left and right derivatives agree. To this end, for any $j \in \{1, 2, \ldots, k\}$ we define the submodular function $F_j : 2^{B_j} \to \mathbb{R}$ as $F_j(H) = F_{\theta^0}(H \mid A_{j-1})$, where we have dropped the dependence on $\theta^0$ as it remains fixed throughout this section.

Theorem 3. The approximate problem (14) is argmin-continuously-differentiable irrespective of the chosen permutation $\sigma$ sorting $y^\star$ if and only if any of the following equivalent conditions hold:

(a) $\operatorname*{arg\,min}_{H \subseteq B_j} \left[F_j(H) - F_j(B_j)\,|H|/|B_j|\right] = \{\emptyset, B_j\}$.

(b) $y^0_{B_j} \in \mathrm{relint}(B(F_j))$, i.e. $\mathcal{D}_{F_j}(y^0_{B_j}) = \{\emptyset, B_j\}$, which is only possible if $F_j$ is inseparable.

In other words, we can equivalently say that the optimum has to lie in the interior of the face. Moreover, if $\theta \mapsto y^\star$ is continuous¹, this result implies that the min-norm point is locally defined by averaging within the same blocks using (15), so that the approximate Jacobian is exact. We would like to point out that one can obtain the same derivatives as the ones suggested in §4 by perturbing the KKT conditions, as done by Amos and Kolter [18]. If we use that approach, in addition to the computational challenges, there is the problem of the non-uniqueness of the Lagrange multiplier; moreover, some valid multipliers might be zero for some of the active constraints.
This would render the resulting linear system rank-deficient, and it is not clear how to proceed. Remember that when we analyzed the isotonic regression problem in §3 we had non-differentiability for exactly the same reason: zero multipliers for active constraints, which in that case correspond to the breakpoints.

Theorem 4. For a specific Lagrange multiplier there exists a solution to the perturbed KKT conditions derived by [18] that gives rise to the approximate Jacobians from Section 4. Moreover, this multiplier is unique if the conditions of Theorem 3 are satisfied.

5.2 Exact computation

Unfortunately, computing the gradients exactly seems very complicated for arbitrary parametrizations $F_\theta$, and we will focus our attention on mixture models of the form given in eq. (12). The directions $v$ in which we will compute the directional derivatives will in general have non-negative components $v_j$, unless $G_j$ is modular. By leveraging the theory of Shapiro [29], and exploiting the structure of both the min-norm point and the polyhedron $B(F_v \mid \mathcal{D}_0)$, we arrive at the following result.

Theorem 5. Assume that $F_{\theta^0}$ is inseparable and let $v$ be any direction so that $F_v$ is submodular. The directional derivative $\partial y / \partial \theta_j$ at $\theta^0$ in direction $v$ is given by the solution of the following problem:

$$\operatorname*{minimize}_{d} \ \frac{1}{2}\|d\|^2, \quad \text{subject to} \ d \in B(F_v \mid \mathcal{D}_0), \ \text{and} \ d(A_j) = F_v(A_j) \ \text{for} \ j \in \{1, 2, \ldots, k\}. \qquad (16)$$

First, note that this is again a min-norm problem, but now defined over a reduced lattice $\mathcal{D}_0$ with $k$ additional equality constraints. Fortunately, due to these additional equalities, we can split the above problem into $k$ separate min-norm problems. Namely, for each $j \in \{1, 2, \ldots, k\}$ collect the lattice of tight sets intersected with $B_j$ as $\mathcal{D}_j^0 = \{H \cap B_j \mid H \in \mathcal{D}_0\}$, and define the function $G_j : 2^{B_j} \to \mathbb{R}$ as $G_j(A) = F_v(A \mid A_{j-1})$, where note that the parameter vector $\theta$ is taken to be the direction $v$ in which we want to compute the derivative. Then, the block of the optimal solution of problem (16) corresponding to $B_j$ is equal to

$$d^\star_{B_j} = \operatorname*{arg\,min}_{y_j \in B(G_j \mid \mathcal{D}_j^0)} \frac{1}{2}\|y_j\|^2, \qquad (17)$$

which is again a min-norm point problem, where the base polytope is defined using the lattice $\mathcal{D}_j^0$. We can then immediately draw a connection with the results from the previous subsection.

Corollary 1. If all lattices are trivial, the solution of (17) agrees with the approximate Jacobian (15).

How can we solve problem (16)? Fortunately, the divide-and-conquer algorithm of Groenevelt [30] can be used to find the min-norm point over arbitrary lattices. To do this, we have to compute for each $i \in B_j$ the unique smallest set $H_i^\star$ in $\operatorname*{arg\,min}_{H \ni i} F_j(H) - y^0(H)$, which can be done using submodular minimization after applying the reduction of Schrijver [31]. To highlight the difference with the approximation from Section 4, let us consider a very simple case.

Lemma 6. Assume that $G_j$ is equal to $G_j(A) = [\![\, i \in A \,]\!]$ for some $i \in B_j$. Then, the directional derivative is equal to $1_D / |D|$, where $D = \{i' \mid i \in H_{i'}^\star\}$.

Hence, while the approximate directional derivative would average over all elements in $B_j$, the true one averages only over a subset $D \subseteq B_j$ and is possibly sparser. Lemma 6 gives us the exact directional derivatives for the graph cuts, as each component $G_j$ will be either a cut function on two vertices, or a function of the form in Lemma 6. In the first case the directional derivative is zero, because $0 \in B(G_j) \subseteq B(G_j \mid \mathcal{D}_j^0)$.

1 For example if the correspondence $\theta \rightrightarrows B(F_\theta)$ is hemicontinuous, due to Berge's theorem.
To highlight the difference with the approximation from Section 4, let us consider a very simple case.

Lemma 6. Assume that G_j is equal to G_j(A) = ⟦i ∈ A⟧ for some i ∈ B_j. Then, the directional derivative is equal to 1_D / |D|, where D = {i' | i ∈ H*_{i'}}.

Hence, while the approximate directional derivative would average over all elements in B_j, the true one averages only over a subset D ⊆ B_j and is possibly sparser. Lemma 6 gives us the exact directional derivatives for graph cuts, as each component G_j will be either a cut function on two vertices, or a function of the form in Lemma 6. In the first case the directional derivative is zero, because 0 ∈ B(G_j) ⊆ B(G_j | D_j^0). In the second case, we can either solve exactly using Lemma 6, or use a more sophisticated approximation generalizing the result from [32]: given that F_j is separable over 2^{B_j} iff the graph is disconnected, we can first separate the graph into connected components, and then take averages within each connected component instead of over all of B_j.

5.3 Structured attention and constraints

Recently, there has been interest in the design of structured attention mechanisms, which, as we will now show, can be derived and furthermore generalized using the results in this paper. The first such mechanism is the sparsemax of Martins and Astudillo [33]. It takes a vector as input and projects it onto the probability simplex, which is the base polytope corresponding to G(A) = min{|A|, 1}. Concurrently with this work, Niculae and Blondel [32] have analyzed the problem

  y* = argmin_{y ∈ B(G)} f(y) + ½‖y − z‖²,   (18)

for the special case when B(G) is the simplex and f is the Lovász extension of one of two specific submodular functions. We consider the general case when G can be any concave-of-cardinality function and F is an arbitrary submodular function (with f its Lovász extension). Note that, if either f(y) or the constraint were not present in problem (18), we could simply leverage the theory we have developed so far to differentiate it. Fortunately, as done by Niculae and Blondel [32], we can utilize the result of Yu [34] to significantly simplify (18). Namely, because the projection onto B(G) preserves the order of the coordinates [35, Lemma 1], we can write the optimal solution y* of (18) as

  y* = argmin_{x ∈ B(G)} ½‖x − y^0‖²,  where  y^0 = argmin_y f(y) + ½‖y − z‖².

We can hence split problem (18) into two subtasks: first compute y^0, and then project it onto B(G). As each operation reduces to a minimum-norm problem, we can differentiate each of them separately, and thus solve (18) by stacking two submodular layers one after the other.
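As a minimal illustration of the projection step, the sketch below handles the sparsemax case, where G(A) = min{|A|, 1} and B(G) is the probability simplex; it uses the standard sorting-based simplex projection, not our general min-norm machinery.

```python
import torch

def sparsemax(z: torch.Tensor) -> torch.Tensor:
    """Euclidean projection of z onto the probability simplex,
    i.e. the base polytope of G(A) = min{|A|, 1} [33]."""
    zs, _ = torch.sort(z, descending=True)
    cumsum = torch.cumsum(zs, dim=0) - 1.0
    ks = torch.arange(1, z.numel() + 1, dtype=z.dtype)
    k = int((zs - cumsum / ks > 0).sum())    # size of the support
    tau = cumsum[k - 1] / k                  # threshold
    return torch.clamp(z - tau, min=0.0)

print(sparsemax(torch.tensor([2.0, 1.2, 0.1])))  # tensor([0.9000, 0.1000, 0.0000])
```

The first subtask, computing y^0, is again a min-norm problem, so both layers expose the derivatives analyzed in the previous sections.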
6 Experiments

We consider the image segmentation task from the introduction, where we are given an RGB image x ∈ ℝ^{n×3} and are supposed to predict those pixels y ∈ {0, 1}^n containing the foreground object. We used the Weizmann horse segmentation dataset [36], which we split into training, validation and test splits of sizes 180, 50 and 98 respectively. The implementation was done in PyTorch², and to compute the min-norm point we used the algorithm from [37]. To make the problem more challenging, at training time we randomly selected and revealed only 0.1% of the training set labels. We first trained a convolutional neural network with two hidden layers that directly predicts the per-pixel labels, which we refer to as CNN. Then, we added a second model, which we call CNN+GC, that has the same architecture as the first one, but with an additional graph cut layer whose weights are parametrized by a convolutional neural network with one hidden layer. Details about the architectures can be found in the supplementary. We train the models by maximizing the log-likelihood of the revealed pixels, which corresponds to the variational bi-level strategy (eq. (3)) due to Lemma 2. We trained using SGD, Adagrad [38] and Adam [39], and chose the model with the best validation score.

² The code will be made available at https://www.github.com/josipd/nips-17-experiments.

Figure 1: Test set results. We see that incorporating a graph cut solver improves both the accuracy and the negative log-likelihood (NLL), while yielding consistent segmentations with fewer foreground objects.

                   CNN                   CNN+GC
                   Mean      Std. Dev.   Mean      Std. Dev.
    Accuracy       0.8103    0.1391      0.9121    0.1034
    NLL            0.3919    0.1911      0.2681    0.2696
    # Fg. Objs.    96.9      65.8        25.3      30.6

As evident from the results in Figure 1, adding the discrete layer improves not only the accuracy (after thresholding the marginals at 0.5) and the log-likelihood, but it also gives more coherent results, as it makes predictions with fewer connected components (i.e., foreground objects). Moreover, if we look at the predictions themselves in Figure 2, we can observe that the optimization layer not only removes spurious predictions, but that there is also a qualitative difference in the marginals, as they are spatially more consistent.

Figure 2: Comparison of results from both models on four instances from the test set (top: CNN, bottom: CNN+GC). We can see that adding the graph-cut layer helps not only quantitatively, but also qualitatively, as the predictions are more spatially regular and vary smoothly inside the segments.

7 Conclusion

We have analyzed the sensitivity of the min-norm point for parametric submodular functions, and provided both a very easy-to-implement practical approximate algorithm for general objectives, and a strong theoretical result characterizing the true directional derivatives for mixtures. These results allow the use of submodular minimization inside modern deep architectures, and they are also immediately applicable to bi-level variational learning of log-supermodular models of arbitrarily high order. Moreover, we believe that the theoretical results open the new problem of developing algorithms that can compute not only the min-norm point, but also solve for the associated derivatives.

Acknowledgements. The research was partially supported by ERC StG 307036 and a Google European PhD Fellowship.

References
[1] S. Fujishige. Submodular Functions and Optimization. Annals of Discrete Mathematics, vol. 58. 2005.
[2] F. Bach. "Learning with submodular functions: a convex optimization perspective". Foundations and Trends in Machine Learning 6.2-3 (2013).
[3] L. Lovász. "Submodular functions and convexity". Mathematical Programming: The State of the Art. Springer, 1983, pp. 235–257.
[4] Y. Boykov and V. Kolmogorov. "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision". IEEE Transactions on Pattern Analysis and Machine Intelligence 26.9 (2004), pp. 1124–1137.
[5] P. Kohli, L. Ladicky, and P. H. Torr. "Robust higher order potentials for enforcing label consistency". Computer Vision and Pattern Recognition (CVPR). 2008.
[6] M. Narasimhan, N. Jojic, and J. A. Bilmes. "Q-clustering". Advances in Neural Information Processing Systems (NIPS). 2006, pp. 979–986.
[7] J. Djolonga and A. Krause. "From MAP to Marginals: Variational Inference in Bayesian Submodular Models". Advances in Neural Information Processing Systems (NIPS). 2014.
[8] J. Djolonga and A. Krause. "Scalable Variational Inference in Log-supermodular Models". International Conference on Machine Learning (ICML). 2015.
[9] F. R. Bach. "Shaping level sets with submodular functions". Advances in Neural Information Processing Systems (NIPS). 2011.
[10] F. R. Bach. "Structured sparsity-inducing norms through submodular functions". Advances in Neural Information Processing Systems (NIPS). 2010.
[11] S. Fujishige, T. Hayashi, and S. Isotani. The minimum-norm-point algorithm applied to submodular function minimization and linear programming. Kyoto University, Research Institute for Mathematical Sciences (RIMS), 2006.
[12] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis. Vol. 317. Springer Science & Business Media, 2009.
[13] K. Kunisch and T. Pock. "A bilevel optimization approach for parameter learning in variational models". SIAM Journal on Imaging Sciences 6.2 (2013), pp. 938–983.
[14] P. Ochs, R. Ranftl, T. Brox, and T. Pock. "Bilevel optimization with nonsmooth lower level problems". International Conference on Scale Space and Variational Methods in Computer Vision. Springer, 2015, pp. 654–665.
[15] J. Domke. "Learning graphical model parameters with approximate marginal inference". IEEE Transactions on Pattern Analysis and Machine Intelligence 35.10 (2013), pp. 2454–2467.
[16] M. F. Tappen. "Utilizing variational optimization to learn Markov random fields". Computer Vision and Pattern Recognition (CVPR). 2007.
[17] M. J. Wainwright. "Estimating the wrong graphical model: Benefits in the computation-limited setting". Journal of Machine Learning Research (JMLR) 7 (2006).
[18] B. Amos and J. Z. Kolter. "OptNet: Differentiable Optimization as a Layer in Neural Networks". International Conference on Machine Learning (ICML). 2017.
[19] J. C. Boot. "On sensitivity analysis in convex quadratic programming problems". Operations Research 11.5 (1963), pp. 771–786.
[20] K. Kumar and F. Bach. "Active-set Methods for Submodular Minimization Problems". arXiv preprint arXiv:1506.02852 (2015).
[21] N. Chakravarti. "Sensitivity analysis in isotonic regression". Discrete Applied Mathematics 45.3 (1993), pp. 183–196.
[22] B. Dolhansky and J. Bilmes. "Deep Submodular Functions: Definitions and Learning". Advances in Neural Information Processing Systems (NIPS). Barcelona, Spain, Dec. 2016.
[23] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. "Large margin methods for structured and interdependent output variables". Journal of Machine Learning Research (JMLR) 6.Sep (2005), pp. 1453–1484.
[24] B. Taskar, C. Guestrin, and D. Koller. "Max-margin Markov networks". Advances in Neural Information Processing Systems (NIPS). 2004, pp. 25–32.
[25] J. Edmonds. "Submodular functions, matroids, and certain polyhedra". Combinatorial Structures and Their Applications (1970), pp. 69–87.
[26] M. J. Best and N. Chakravarti. "Active set algorithms for isotonic regression; a unifying framework". Mathematical Programming 47.1-3 (1990), pp. 425–439.
[27] T. Robertson and T. Robertson. Order Restricted Statistical Inference. Tech. rep. 1988.
[28] M. Queyranne. "Minimizing symmetric submodular functions". Mathematical Programming 82.1-2 (1998), pp. 3–12.
[29] A. Shapiro. "Sensitivity Analysis of Nonlinear Programs and Differentiability Properties of Metric Projections". SIAM Journal on Control and Optimization 26.3 (1988), pp. 628–645.
[30] H. Groenevelt. "Two algorithms for maximizing a separable concave function over a polymatroid feasible region". European Journal of Operational Research 54.2 (1991).
[31] A. Schrijver. "A combinatorial algorithm minimizing submodular functions in strongly polynomial time". Journal of Combinatorial Theory, Series B 80.2 (2000), pp. 346–355.
[32] V. Niculae and M. Blondel. "A Regularized Framework for Sparse and Structured Neural Attention". arXiv preprint arXiv:1705.07704 (2017).
[33] A. Martins and R. Astudillo. "From softmax to sparsemax: A sparse model of attention and multi-label classification". International Conference on Machine Learning (ICML). 2016.
[34] Y.-L. Yu. "On decomposing the proximal map". Advances in Neural Information Processing Systems (NIPS). 2013, pp. 91–99.
[35] D. Suehiro, K. Hatano, S. Kijima, E. Takimoto, and K. Nagano. "Online prediction under submodular constraints". International Conference on Algorithmic Learning Theory. 2012.
[36] E. Borenstein and S. Ullman. "Combined top-down/bottom-up segmentation". IEEE Transactions on Pattern Analysis and Machine Intelligence 30.12 (2008), pp. 2109–2125.
[37] A. Barbero and S. Sra. "Modular proximal optimization for multidimensional total-variation regularization". arXiv preprint arXiv:1411.0589 (2014).
[38] J. Duchi, E. Hazan, and Y. Singer. "Adaptive subgradient methods for online learning and stochastic optimization". Journal of Machine Learning Research (JMLR) 12.Jul (2011), pp. 2121–2159.
[39] D. Kingma and J. Ba. "Adam: A method for stochastic optimization". arXiv preprint arXiv:1412.6980 (2014).
Inductive Representation Learning on Large Graphs

William L. Hamilton*  [email protected]
Rex Ying*  [email protected]
Jure Leskovec  [email protected]

Department of Computer Science, Stanford University, Stanford, CA, 94305

Abstract

Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.

1 Introduction

Low-dimensional vector embeddings of nodes in large graphs¹ have proved extremely useful as feature inputs for a wide variety of prediction and graph analysis tasks [5, 11, 28, 35, 36]. The basic idea behind node embedding approaches is to use dimensionality reduction techniques to distill the high-dimensional information about a node's neighborhood into a dense vector embedding. These node embeddings can then be fed to downstream machine learning systems and aid in tasks such as node classification, clustering, and link prediction [11, 28, 35].

However, previous works have focused on embedding nodes from a single fixed graph, and many real-world applications require embeddings to be quickly generated for unseen nodes, or entirely new (sub)graphs. This inductive capability is essential for high-throughput, production machine learning systems, which operate on evolving graphs and constantly encounter unseen nodes (e.g., posts on Reddit, users and videos on YouTube). An inductive approach to generating node embeddings also facilitates generalization across graphs with the same form of features: for example, one could train an embedding generator on protein-protein interaction graphs derived from a model organism, and then easily produce node embeddings for data collected on new organisms using the trained model.

The inductive node embedding problem is especially difficult, compared to the transductive setting, because generalizing to unseen nodes requires "aligning" newly observed subgraphs to the node embeddings that the algorithm has already optimized on. An inductive framework must learn to recognize structural properties of a node's neighborhood that reveal both the node's local role in the graph, as well as its global position.

* The two first authors made equal contributions.
¹ While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Visual illustration of the GraphSAGE sample and aggregate approach.
Most existing approaches to generating node embeddings are inherently transductive. The majority of these approaches directly optimize the embeddings for each node using matrix-factorization-based objectives, and do not naturally generalize to unseen data, since they make predictions on nodes in a single, fixed graph [5, 11, 23, 28, 35, 36, 37, 39]. These approaches can be modified to operate in an inductive setting (e.g., [28]), but these modifications tend to be computationally expensive, requiring additional rounds of gradient descent before new predictions can be made. There are also recent approaches to learning over graph structures using convolution operators that offer promise as an embedding methodology [17]. So far, graph convolutional networks (GCNs) have only been applied in the transductive setting with fixed graphs [17, 18]. In this work we both extend GCNs to the task of inductive unsupervised learning and propose a framework that generalizes the GCN approach to use trainable aggregation functions (beyond simple convolutions).

Present work. We propose a general framework, called GraphSAGE (SAmple and aggreGatE), for inductive node embedding. Unlike embedding approaches that are based on matrix factorization, we leverage node features (e.g., text attributes, node profile information, node degrees) in order to learn an embedding function that generalizes to unseen nodes. By incorporating node features in the learning algorithm, we simultaneously learn the topological structure of each node's neighborhood as well as the distribution of node features in the neighborhood. While we focus on feature-rich graphs (e.g., citation data with text attributes, biological data with functional/molecular markers), our approach can also make use of structural features that are present in all graphs (e.g., node degrees). Thus, our algorithm can also be applied to graphs without node features.

Instead of training a distinct embedding vector for each node, we train a set of aggregator functions that learn to aggregate feature information from a node's local neighborhood (Figure 1). Each aggregator function aggregates information from a different number of hops, or search depth, away from a given node. At test, or inference, time, we use our trained system to generate embeddings for entirely unseen nodes by applying the learned aggregation functions. Following previous work on generating node embeddings, we design an unsupervised loss function that allows GraphSAGE to be trained without task-specific supervision. We also show that GraphSAGE can be trained in a fully supervised manner.

We evaluate our algorithm on three node-classification benchmarks, which test GraphSAGE's ability to generate useful embeddings on unseen data. We use two evolving document graphs based on citation data and Reddit post data (predicting paper and post categories, respectively), and a multi-graph generalization experiment based on a dataset of protein-protein interactions (predicting protein functions). Using these benchmarks, we show that our approach is able to effectively generate representations for unseen nodes and outperform relevant baselines by a significant margin: across domains, our supervised approach improves classification F1-scores by an average of 51% compared to using node features alone, and GraphSAGE consistently outperforms a strong, transductive baseline [28], despite this baseline taking ≈100× longer to run on unseen nodes.
We also show that the new aggregator architectures we propose provide significant gains (7.4% on average) compared to an aggregator inspired by graph convolutional networks [17]. Lastly, we probe the expressive capability of our approach and show, through theoretical analysis, that GraphSAGE is capable of learning structural information about a node's role in a graph, despite the fact that it is inherently based on features (Section 5).

2 Related work

Our algorithm is conceptually related to previous node embedding approaches, general supervised approaches to learning over graphs, and recent advancements in applying convolutional neural networks to graph-structured data.²

Factorization-based embedding approaches. There are a number of recent node embedding approaches that learn low-dimensional embeddings using random walk statistics and matrix factorization-based learning objectives [5, 11, 28, 35, 36]. These methods also bear close relationships to more classic approaches to spectral clustering [23], multi-dimensional scaling [19], as well as the PageRank algorithm [25]. Since these embedding algorithms directly train node embeddings for individual nodes, they are inherently transductive and, at the very least, require expensive additional training (e.g., via stochastic gradient descent) to make predictions on new nodes. In addition, for many of these approaches (e.g., [11, 28, 35, 36]) the objective function is invariant to orthogonal transformations of the embeddings, which means that the embedding space does not naturally generalize between graphs and can drift during re-training. One notable exception to this trend is the Planetoid-I algorithm introduced by Yang et al. [40], which is an inductive, embedding-based approach to semi-supervised learning. However, Planetoid-I does not use any graph structural information during inference; instead, it uses the graph structure as a form of regularization during training. Unlike these previous approaches, we leverage feature information in order to train a model to produce embeddings for unseen nodes.

Supervised learning over graphs. Beyond node embedding approaches, there is a rich literature on supervised learning over graph-structured data. This includes a wide variety of kernel-based approaches, where feature vectors for graphs are derived from various graph kernels (see [32] and references therein). There are also a number of recent neural network approaches to supervised learning over graph structures [7, 10, 21, 31]. Our approach is conceptually inspired by a number of these algorithms. However, whereas these previous approaches attempt to classify entire graphs (or subgraphs), the focus of this work is generating useful representations for individual nodes.

Graph convolutional networks. In recent years, several convolutional neural network architectures for learning over graphs have been proposed (e.g., [4, 9, 8, 17, 24]). The majority of these methods do not scale to large graphs or are designed for whole-graph classification (or both) [4, 9, 8, 24]. However, our approach is closely related to the graph convolutional network (GCN), introduced by Kipf et al. [17, 18]. The original GCN algorithm [17] is designed for semi-supervised learning in a transductive setting, and the exact algorithm requires that the full graph Laplacian is known during training. A simple variant of our algorithm can be viewed as an extension of the GCN framework to the inductive setting, a point which we revisit in Section 3.3.

² In the time between this paper's original submission to NIPS 2017 and the submission of the final, accepted (i.e., "camera-ready") version, a number of closely related (e.g., follow-up) works have been published on pre-print servers. For temporal clarity, we do not review or compare against these papers in detail.
3 Proposed method: GraphSAGE

The key idea behind our approach is that we learn how to aggregate feature information from a node's local neighborhood (e.g., the degrees or text attributes of nearby nodes). We first describe the GraphSAGE embedding generation (i.e., forward propagation) algorithm, which generates embeddings for nodes assuming that the GraphSAGE model parameters are already learned (Section 3.1). We then describe how the GraphSAGE model parameters can be learned using standard stochastic gradient descent and backpropagation techniques (Section 3.2).

3.1 Embedding generation (i.e., forward propagation) algorithm

In this section, we describe the embedding generation, or forward propagation algorithm (Algorithm 1), which assumes that the model has already been trained and that the parameters are fixed. In particular, we assume that we have learned the parameters of K aggregator functions (denoted AGGREGATE_k, ∀k ∈ {1, ..., K}), which aggregate information from node neighbors, as well as a set of weight matrices W^k, ∀k ∈ {1, ..., K}, which are used to propagate information between different layers of the model or "search depths". Section 3.2 describes how we train these parameters.

Algorithm 1: GraphSAGE embedding generation (i.e., forward propagation) algorithm
Input: graph G(V, E); input features {x_v, ∀v ∈ V}; depth K; weight matrices W^k, ∀k ∈ {1, ..., K}; non-linearity σ; differentiable aggregator functions AGGREGATE_k, ∀k ∈ {1, ..., K}; neighborhood function N: v → 2^V
Output: vector representations z_v for all v ∈ V
1: h_v^0 ← x_v, ∀v ∈ V
2: for k = 1 ... K do
3:   for v ∈ V do
4:     h_{N(v)}^k ← AGGREGATE_k({h_u^{k−1}, ∀u ∈ N(v)})
5:     h_v^k ← σ(W^k · CONCAT(h_v^{k−1}, h_{N(v)}^k))
6:   end
7:   h_v^k ← h_v^k / ‖h_v^k‖_2, ∀v ∈ V
8: end
9: z_v ← h_v^K, ∀v ∈ V

The intuition behind Algorithm 1 is that at each iteration, or search depth, nodes aggregate information from their local neighbors, and as this process iterates, nodes incrementally gain more and more information from further reaches of the graph.

Algorithm 1 describes the embedding generation process in the case where the entire graph, G = (V, E), and features for all nodes x_v, ∀v ∈ V, are provided as input. We describe how to generalize this to the minibatch setting below. Each step in the outer loop of Algorithm 1 proceeds as follows, where k denotes the current step in the outer loop (or the depth of the search) and h^k denotes a node's representation at this step: First, each node v ∈ V aggregates the representations of the nodes in its immediate neighborhood, {h_u^{k−1}, ∀u ∈ N(v)}, into a single vector h_{N(v)}^k. Note that this aggregation step depends on the representations generated at the previous iteration of the outer loop (i.e., k − 1), and the k = 0 ("base case") representations are defined as the input node features. After aggregating the neighboring feature vectors, GraphSAGE then concatenates the node's current representation, h_v^{k−1}, with the aggregated neighborhood vector, h_{N(v)}^k, and this concatenated vector is fed through a fully connected layer with nonlinear activation function σ, which transforms the representations to be used at the next step of the algorithm (i.e., h_v^k, ∀v ∈ V). For notational convenience, we denote the final representations output at depth K as z_v ≡ h_v^K, ∀v ∈ V.
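To make the data flow concrete, here is a minimal PyTorch sketch of one depth of Algorithm 1 with a mean aggregator, together with the outer loop; the tensor shapes, the ReLU non-linearity, and the fixed per-node neighbor lists are illustrative simplifications (in particular, Algorithm 1 redraws the sampled neighborhoods at each depth, while this sketch reuses them).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SageLayer(nn.Module):
    """One depth (lines 4-5 and 7 of Algorithm 1) with a mean aggregator."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)  # acts on CONCAT(h_v, h_N(v))

    def forward(self, h, neigh):
        # AGGREGATE_k: elementwise mean over each node's sampled neighbors
        agg = torch.stack([h[nb].mean(dim=0) for nb in neigh])
        z = F.relu(self.lin(torch.cat([h, agg], dim=1)))          # line 5
        return z / z.norm(dim=1, keepdim=True).clamp_min(1e-12)   # line 7

def graphsage_forward(x, neigh, layers):
    """Outer loop of Algorithm 1; returns z_v = h_v^K for all v."""
    h = x
    for layer in layers:  # k = 1 ... K
        h = layer(h, neigh)
    return h

# toy usage: 4 nodes, feature dimension 8, K = 2
x = torch.randn(4, 8)
neigh = [torch.tensor([1, 2]), torch.tensor([0]), torch.tensor([0, 3]), torch.tensor([2])]
z = graphsage_forward(x, neigh, [SageLayer(8, 16), SageLayer(16, 16)])
```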
The aggregation of the neighbor representations can be done by a variety of aggregator architectures (denoted by the AGGREGATE placeholder in Algorithm 1), and we discuss different architecture choices in Section 3.3 below.

To extend Algorithm 1 to the minibatch setting, given a set of input nodes, we first forward sample the required neighborhood sets (up to depth K) and then we run the inner loop (line 3 in Algorithm 1), but instead of iterating over all nodes, we compute only the representations that are necessary to satisfy the recursion at each depth (Appendix A contains complete minibatch pseudocode).

Relation to the Weisfeiler-Lehman isomorphism test. The GraphSAGE algorithm is conceptually inspired by a classic algorithm for testing graph isomorphism. If, in Algorithm 1, we (i) set K = |V|, (ii) set the weight matrices as the identity, and (iii) use an appropriate hash function as an aggregator (with no non-linearity), then Algorithm 1 is an instance of the Weisfeiler-Lehman (WL) isomorphism test, also known as "naive vertex refinement" [32]. If the sets of representations {z_v, ∀v ∈ V} output by Algorithm 1 for two subgraphs are identical, then the WL test declares the two subgraphs to be isomorphic. This test is known to fail in some cases, but is valid for a broad class of graphs [32]. GraphSAGE is a continuous approximation to the WL test, where we replace the hash function with trainable neural network aggregators. Of course, we use GraphSAGE to generate useful node representations, not to test graph isomorphism. Nevertheless, the connection between GraphSAGE and the classic WL test provides theoretical context for our algorithm design to learn the topological structure of node neighborhoods.

Neighborhood definition. In this work, we uniformly sample a fixed-size set of neighbors, instead of using full neighborhood sets in Algorithm 1, in order to keep the computational footprint of each batch fixed.³ That is, using overloaded notation, we define N(v) as a fixed-size, uniform draw from the set {u ∈ V : (u, v) ∈ E}, and we draw different uniform samples at each iteration, k, in Algorithm 1. Without this sampling, the memory and expected runtime of a single batch are unpredictable and in the worst case O(|V|). In contrast, the per-batch space and time complexity for GraphSAGE is fixed at O(∏_{i=1}^{K} S_i), where S_i, i ∈ {1, ..., K}, and K are user-specified constants. Practically speaking, we found that our approach could achieve high performance with K = 2 and S_1 · S_2 ≤ 500 (see Section 4.4 for details).

³ Exploring non-uniform samplers is an important direction for future work.
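A minimal sketch of this fixed-size uniform draw, assuming an adjacency-list representation (a dict mapping each node to the list of its neighbors); sampling with replacement when a node has fewer than S_i neighbors is one simple convention chosen here for illustration, not a detail fixed by the algorithm.

```python
import random

def sample_neighbors(adj, v, size):
    """N(v): fixed-size uniform draw from {u : (u, v) in E}; samples
    with replacement when the true neighborhood is smaller than `size`."""
    nbrs = adj[v]
    if len(nbrs) >= size:
        return random.sample(nbrs, size)
    return [random.choice(nbrs) for _ in range(size)]

adj = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2]}
print(sample_neighbors(adj, 0, 2))  # e.g. [3, 1]; redrawn at every iteration k
```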
Importantly, unlike previous embedding approaches, the representations zu that we feed into this loss function are generated from the features contained within a node?s local neighborhood, rather than training a unique embedding for each node (via an embedding look-up). This unsupervised setting emulates situations where node features are provided to downstream machine learning applications, as a service or in a static repository. In cases where representations are to be used only on a specific downstream task, the unsupervised loss (Equation 1) can simply be replaced, or augmented, by a task-specific objective (e.g., cross-entropy loss). 3.3 Aggregator Architectures Unlike machine learning over N-D lattices (e.g., sentences, images, or 3-D volumes), a node?s neighbors have no natural ordering; thus, the aggregator functions in Algorithm 1 must operate over an unordered set of vectors. Ideally, an aggregator function would be symmetric (i.e., invariant to permutations of its inputs) while still being trainable and maintaining high representational capacity. The symmetry property of the aggregation function ensures that our neural network model can be trained and applied to arbitrarily ordered node neighborhood feature sets. We examined three candidate aggregator functions: Mean aggregator. Our first candidate aggregator function is the mean operator, where we simply take the elementwise mean of the vectors in {hk?1 u , ?u ? N (v)}. The mean aggregator is nearly equivalent to the convolutional propagation rule used in the transductive GCN framework [17]. In particular, we can derive an inductive variant of the GCN approach by replacing lines 4 and 5 in Algorithm 1 with the following:4 hkv ? ?(W ? MEAN({hk?1 } ? {hk?1 v u , ?u ? N (v)}). (2) We call this modified mean-based aggregator convolutional since it is a rough, linear approximation of a localized spectral convolution [17]. An important distinction between this convolutional aggregator and our other proposed aggregators is that it does not perform the concatenation operation in line 5 of Algorithm 1?i.e., the convolutional aggregator does concatenate the node?s previous layer representation hk?1 with the aggregated neighborhood vector hkN (v) . This concatenation can be v viewed as a simple form of a ?skip connection? [13] between the different ?search depths?, or ?layers? of the GraphSAGE algorithm, and it leads to significant gains in performance (Section 4). LSTM aggregator. We also examined a more complex aggregator based on an LSTM architecture [14]. Compared to the mean aggregator, LSTMs have the advantage of larger expressive capability. However, it is important to note that LSTMs are not inherently symmetric (i.e., they are not permutation invariant), since they process their inputs in a sequential manner. We adapt LSTMs to operate on an unordered set by simply applying the LSTMs to a random permutation of the node?s neighbors. 3 4 Exploring non-uniform samplers is an important direction for future work. Note that this differs from Kipf et al?s exact equation by a minor normalization constant [17]. 5 Pooling aggregator. The final aggregator we examine is both symmetric and trainable. In this pooling approach, each neighbor?s vector is independently fed through a fully-connected neural network; following this transformation, an elementwise max-pooling operation is applied to aggregate information across the neighbor set:  pool k AGGREGATE k = max({? Wpool hui + b , ?ui ? 
N (v)}), (3) where max denotes the element-wise max operator and ? is a nonlinear activation function. In principle, the function applied before the max pooling can be an arbitrarily deep multi-layer perceptron, but we focus on simple single-layer architectures in this work. This approach is inspired by recent advancements in applying neural network architectures to learn over general point sets [29]. Intuitively, the multi-layer perceptron can be thought of as a set of functions that compute features for each of the node representations in the neighbor set. By applying the max-pooling operator to each of the computed features, the model effectively captures different aspects of the neighborhood set. Note also that, in principle, any symmetric vector function could be used in place of the max operator (e.g., an element-wise mean). We found no significant difference between max- and mean-pooling in developments test and thus focused on max-pooling for the rest of our experiments. 4 Experiments We test the performance of GraphSAGE on three benchmark tasks: (i) classifying academic papers into different subjects using the Web of Science citation dataset, (ii) classifying Reddit posts as belonging to different communities, and (iii) classifying protein functions across various biological protein-protein interaction (PPI) graphs. Sections 4.1 and 4.2 summarize the datasets, and the supplementary material contains additional information. In all these experiments, we perform predictions on nodes that are not seen during training, and, in the case of the PPI dataset, we test on entirely unseen graphs. Experimental set-up. To contextualize the empirical results on our inductive benchmarks, we compare against four baselines: a random classifer, a logistic regression feature-based classifier (that ignores graph structure), the DeepWalk algorithm [28] as a representative factorization-based approach, and a concatenation of the raw features and DeepWalk embeddings. We also compare four variants of GraphSAGE that use the different aggregator functions (Section 3.3). Since, the ?convolutional? variant of GraphSAGE is an extended, inductive version of Kipf et al?s semi-supervised GCN [17], we term this variant GraphSAGE-GCN. We test unsupervised variants of GraphSAGE trained according to the loss in Equation (1), as well as supervised variants that are trained directly on classification cross-entropy loss. For all the GraphSAGE variants we used rectified linear units as the non-linearity and set K = 2 with neighborhood sample sizes S1 = 25 and S2 = 10 (see Section 4.4 for sensitivity analyses). For the Reddit and citation datasets, we use ?online? training for DeepWalk as described in Perozzi et al. [28], where we run a new round of SGD optimization to embed the new test nodes before making predictions (see the Appendix for details). In the multi-graph setting, we cannot apply DeepWalk, since the embedding spaces generated by running the DeepWalk algorithm on different disjoint graphs can be arbitrarily rotated with respect to each other (Appendix D). All models were implemented in TensorFlow [1] with the Adam optimizer [16] (except DeepWalk, which performed better with the vanilla gradient descent optimizer). We designed our experiments with the goals of (i) verifying the improvement of GraphSAGE over the baseline approaches (i.e., raw features and DeepWalk) and (ii) providing a rigorous comparison of the different GraphSAGE aggregator architectures. 
In order to provide a fair comparison, all models share an identical implementation of their minibatch iterators, loss function and neighborhood sampler (when applicable). Moreover, in order to guard against unintentional ?hyperparameter hacking? in the comparisons between GraphSAGE aggregators, we sweep over the same set of hyperparameters for all GraphSAGE variants (choosing the best setting for each variant according to performance on a validation set). The set of possible hyperparameter values was determined on early validation tests using subsets of the citation and Reddit data that we then discarded from our analyses. The appendix contains further implementation details.5 5 Code and links to the datasets: http://snap.stanford.edu/graphsage/ 6 Table 1: Prediction results for the three datasets (micro-averaged F1 scores). Results for unsupervised and fully supervised GraphSAGE are shown. Analogous trends hold for macro-averaged scores. Citation Name Reddit PPI Unsup. F1 Sup. F1 Unsup. F1 Sup. F1 Unsup. F1 Sup. F1 Random Raw features DeepWalk DeepWalk + features GraphSAGE-GCN GraphSAGE-mean GraphSAGE-LSTM GraphSAGE-pool 0.206 0.575 0.565 0.701 0.742 0.778 0.788 0.798 0.206 0.575 0.565 0.701 0.772 0.820 0.832 0.839 0.043 0.585 0.324 0.691 0.908 0.897 0.907 0.892 0.042 0.585 0.324 0.691 0.930 0.950 0.954 0.948 0.396 0.422 ? ? 0.465 0.486 0.482 0.502 0.396 0.422 ? ? 0.500 0.598 0.612 0.600 % gain over feat. 39% 46% 55% 63% 19% 45% Figure 2: A: Timing experiments on Reddit data, with training batches of size 512 and inference on the full test set (79,534 nodes). B: Model performance with respect to the size of the sampled neighborhood, where the ?neighborhood sample size? refers to the number of neighbors sampled at each depth for K = 2 with S1 = S2 (on the citation data using GraphSAGE-mean). 4.1 Inductive learning on evolving graphs: Citation and Reddit data Our first two experiments are on classifying nodes in evolving information graphs, a task that is especially relevant to high-throughput production systems, which constantly encounter unseen data. Citation data. Our first task is predicting paper subject categories on a large citation dataset. We use an undirected citation graph dataset derived from the Thomson Reuters Web of Science Core Collection, corresponding to all papers in six biology-related fields for the years 2000-2005. The node labels for this dataset correspond to the six different field labels. In total, this is dataset contains 302,424 nodes with an average degree of 9.15. We train all the algorithms on the 2000-2004 data and use the 2005 data for testing (with 30% used for validation). For features, we used node degrees and processed the paper abstracts according Arora et al.?s [2] sentence embedding approach, with 300-dimensional word vectors trained using the GenSim word2vec implementation [30]. Reddit data. In our second task, we predict which community different Reddit posts belong to. Reddit is a large online discussion forum where users post and comment on content in different topical communities. We constructed a graph dataset from Reddit posts made in the month of September, 2014. The node label in this case is the community, or ?subreddit?, that a post belongs to. We sampled 50 large communities and built a post-to-post graph, connecting posts if the same user comments on both. In total this dataset contains 232,965 posts with an average degree of 492. We use the first 20 days for training and the remaining days for testing (with 30% used for validation). 
For features, we use off-the-shelf 300-dimensional GloVe CommonCrawl word vectors [27]; for each post, we concatenated (i) the average embedding of the post title, (ii) the average embedding of all the post's comments, (iii) the post's score, and (iv) the number of comments made on the post.

The first four columns of Table 1 summarize the performance of GraphSAGE as well as the baseline approaches on these two datasets. We find that GraphSAGE outperforms all the baselines by a significant margin, and the trainable, neural network aggregators provide significant gains compared to the GCN approach. For example, the unsupervised variant GraphSAGE-pool outperforms the concatenation of the DeepWalk embeddings and the raw features by 13.8% on the citation data and 29.1% on the Reddit data, while the supervised version provides a gain of 19.7% and 37.2%, respectively. Interestingly, the LSTM-based aggregator shows strong performance, despite the fact that it is designed for sequential data and not unordered sets. Lastly, we see that the performance of unsupervised GraphSAGE is reasonably competitive with the fully supervised version, indicating that our framework can achieve strong performance without task-specific fine-tuning.

4.2 Generalizing across graphs: protein-protein interactions

We now consider the task of generalizing across graphs, which requires learning about node roles rather than community structure. We classify protein roles, in terms of their cellular functions from gene ontology, in various protein-protein interaction (PPI) graphs, with each graph corresponding to a different human tissue [41]. We use positional gene sets, motif gene sets and immunological signatures as features, and gene ontology sets as labels (121 in total), collected from the Molecular Signatures Database [34]. The average graph contains 2373 nodes, with an average degree of 28.8. We train all algorithms on 20 graphs and then average prediction F1 scores on two test graphs (with two other graphs used for validation).

The final two columns of Table 1 summarize the accuracies of the various approaches on this data. Again we see that GraphSAGE significantly outperforms the baseline approaches, with the LSTM- and pooling-based aggregators providing substantial gains over the mean- and GCN-based aggregators.⁶

⁶ Note that in very recent follow-up work, Chen and Zhu [6] achieve superior performance by optimizing the GraphSAGE hyperparameters specifically for the PPI task and implementing new training techniques (e.g., dropout, layer normalization, and a new sampling scheme). We refer the reader to their work for the current state-of-the-art numbers on the PPI dataset that are possible using a variant of the GraphSAGE approach.

4.3 Runtime and parameter sensitivity

Figure 2.A summarizes the training and test runtimes for the different approaches. The training times for the methods are comparable (with GraphSAGE-LSTM being the slowest). However, the need to sample new random walks and run new rounds of SGD to embed unseen nodes makes DeepWalk 100-500× slower at test time. For the GraphSAGE variants, we found that setting K = 2 provided a consistent boost in accuracy of around 10-15%, on average, compared to K = 1; however, increasing K beyond 2 gave marginal returns in performance (0-5%) while increasing the runtime by a prohibitively large factor of 10-100×, depending on the neighborhood sample size. We also found diminishing returns for sampling large neighborhoods (Figure 2.B). Thus, despite the higher variance induced by sub-sampling neighborhoods, GraphSAGE is still able to maintain strong predictive accuracy, while significantly improving the runtime.
4.4 Summary comparison between the different aggregator architectures

Overall, we found that the LSTM- and pool-based aggregators performed the best, in terms of both average performance and the number of experimental settings where they were the top-performing method (Table 1). To give more quantitative insight into these trends, we consider each of the six different experimental settings (i.e., (3 datasets) × (unsupervised vs. supervised)) as trials and consider what performance trends are likely to generalize. In particular, we use the non-parametric Wilcoxon signed-rank test [33] to quantify the differences between the different aggregators across trials, reporting the T-statistic and p-value where applicable. Note that this method is rank-based and essentially tests whether we would expect one particular approach to outperform another in a new experimental setting. Given our small sample size of only 6 different settings, this significance test is somewhat underpowered; nonetheless, the T-statistic and associated p-values are useful quantitative measures to assess the aggregators' relative performances.

We see that LSTM-, pool- and mean-based aggregators all provide statistically significant gains over the GCN-based approach (T = 1.0, p = 0.02 for all three). However, the gains of the LSTM and pool approaches over the mean-based aggregator are more marginal (T = 1.5, p = 0.03, comparing LSTM to mean; T = 4.5, p = 0.10, comparing pool to mean). There is no significant difference between the LSTM and pool approaches (T = 10.0, p = 0.46). However, GraphSAGE-LSTM is significantly slower than GraphSAGE-pool (by a factor of ≈2×), perhaps giving the pooling-based aggregator a slight edge overall.
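For reference, this protocol is a one-liner with SciPy; the sketch below pairs the micro-F1 values from Table 1 across the six settings for the pool and GCN variants (the exact p-value returned by a given implementation may differ slightly from the values quoted above).

```python
from scipy.stats import wilcoxon

# paired micro-F1 scores across the six settings of Table 1, in the order
# (citation, Reddit, PPI) x (unsupervised, supervised)
pool = [0.798, 0.839, 0.892, 0.948, 0.502, 0.600]
gcn = [0.742, 0.772, 0.908, 0.930, 0.465, 0.500]

t_stat, p_value = wilcoxon(pool, gcn)
print(t_stat)  # 1.0, matching the T-statistic reported above for pool vs. GCN
```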
5 Theoretical analysis

In this section, we probe the expressive capabilities of GraphSAGE in order to provide insight into how GraphSAGE can learn about graph structure, even though it is inherently based on features. As a case study, we consider whether GraphSAGE can learn to predict the clustering coefficient of a node, i.e., the proportion of triangles that are closed within the node's 1-hop neighborhood [38]. The clustering coefficient is a popular measure of how clustered a node's local neighborhood is, and it serves as a building block for many more complicated structural motifs [3]. We can show that Algorithm 1 is capable of approximating clustering coefficients to an arbitrary degree of precision:

Theorem 1. Let x_v ∈ U, ∀v ∈ V denote the feature inputs for Algorithm 1 on graph G = (V, E), where U is any compact subset of ℝ^d. Suppose that there exists a fixed positive constant C ∈ ℝ⁺ such that ‖x_v − x_{v'}‖_2 > C for all pairs of nodes. Then we have that ∀ε > 0 there exists a parameter setting Θ* for Algorithm 1 such that after K = 4 iterations |z_v − c_v| < ε, ∀v ∈ V, where z_v ∈ ℝ are the final output values generated by Algorithm 1 and c_v are the node clustering coefficients.

Theorem 1 states that for any graph there exists a parameter setting for Algorithm 1 such that it can approximate clustering coefficients in that graph to an arbitrary precision, if the features for every node are distinct (and if the model is sufficiently high-dimensional). The full proof of Theorem 1 is in the Appendix. Note that as a corollary of Theorem 1, GraphSAGE can learn about local graph structure, even when the node feature inputs are sampled from an absolutely continuous random distribution (see the Appendix for details). The basic idea behind the proof is that if each node has a unique feature representation, then we can learn to map nodes to indicator vectors and identify node neighborhoods. The proof of Theorem 1 relies on some properties of the pooling aggregator, which also provides insight into why GraphSAGE-pool outperforms the GCN and mean-based aggregators.

6 Conclusion

We introduced a novel approach that allows embeddings to be efficiently generated for unseen nodes. GraphSAGE consistently outperforms state-of-the-art baselines, effectively trades off performance and runtime by sampling node neighborhoods, and our theoretical analysis provides insight into how our approach can learn about local graph structures. A number of extensions and potential improvements are possible, such as extending GraphSAGE to incorporate directed or multi-modal graphs. A particularly interesting direction for future work is exploring non-uniform neighborhood sampling functions, and perhaps even learning these functions as part of the GraphSAGE optimization.

Acknowledgments

The authors thank Austin Benson, Aditya Grover, Bryan He, Dan Jurafsky, Alex Ratner, Marinka Zitnik, and Daniel Selsam for their helpful discussions and comments on early drafts. The authors would also like to thank Ben Johnson for his many useful questions and comments on our code. This research has been supported in part by NSF IIS-1149837, DARPA SIMPLEX, Stanford Data Science Initiative, Huawei, and Chan Zuckerberg Biohub. W.L.H. was also supported by the SAP Stanford Graduate Fellowship and an NSERC PGS-D grant. The views and conclusions expressed in this material are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the above funding agencies, corporations, or the U.S. and Canadian governments.

References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint, 2016.
[2] S. Arora, Y. Liang, and T. Ma. A simple but tough-to-beat baseline for sentence embeddings. In ICLR, 2017.
[3] A. R. Benson, D. F. Gleich, and J. Leskovec. Higher-order organization of complex networks. Science, 353(6295):163–166, 2016.
[4] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. In ICLR, 2014.
[5] S. Cao, W. Lu, and Q. Xu. Grarep: Learning graph representations with global structural information. In KDD, 2015.
[6] J. Chen and J. Zhu. Stochastic training of graph convolutional networks. arXiv preprint arXiv:1710.10568, 2017.
[7] H. Dai, B. Dai, and L. Song. Discriminative embeddings of latent variable models for structured data. In ICML, 2016.
[8] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS, 2016.
[9] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In NIPS, 2015.
[10] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains.
In IEEE International Joint Conference on Neural Networks, volume 2, pages 729–734, 2005.
[11] A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. In KDD, 2016.
[12] W. L. Hamilton, J. Leskovec, and D. Jurafsky. Diachronic word embeddings reveal statistical laws of semantic change. In ACL, 2016.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.
[14] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[15] K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257, 1991.
[16] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[17] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2016.
[18] T. N. Kipf and M. Welling. Variational graph auto-encoders. In NIPS Workshop on Bayesian Deep Learning, 2016.
[19] J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1–27, 1964.
[20] O. Levy and Y. Goldberg. Neural word embedding as implicit matrix factorization. In NIPS, 2014.
[21] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. In ICLR, 2015.
[22] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
[23] A. Y. Ng, M. I. Jordan, Y. Weiss, et al. On spectral clustering: Analysis and an algorithm. In NIPS, 2001.
[24] M. Niepert, M. Ahmed, and K. Kutzkov. Learning convolutional neural networks for graphs. In ICML, 2016.
[25] L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
[26] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[27] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
[28] B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: Online learning of social representations. In KDD, 2014.
[29] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017.
[30] R. Řehůřek and P. Sojka. Software Framework for Topic Modelling with Large Corpora. In LREC, 2010.
[31] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
[32] N. Shervashidze, P. Schweitzer, E. J. v. Leeuwen, K. Mehlhorn, and K. M. Borgwardt. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research, 12:2539–2561, 2011.
[33] S. Siegal. Nonparametric statistics for the behavioral sciences. McGraw-Hill, 1956.
[34] A. Subramanian, P. Tamayo, V. K. Mootha, S. Mukherjee, B. L. Ebert, M. A. Gillette, A. Paulovich, S. L. Pomeroy, T. R. Golub, E. S. Lander, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proceedings of the National Academy of Sciences, 102(43):15545–15550, 2005.
[35] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. Line: Large-scale information network embedding. In WWW, 2015.
[36] D. Wang, P. Cui, and W. Zhu.
Structural deep network embedding. In KDD, 2016.
[37] X. Wang, P. Cui, J. Wang, J. Pei, W. Zhu, and S. Yang. Community preserving network embedding. In AAAI, 2017.
[38] D. J. Watts and S. H. Strogatz. Collective dynamics of "small-world" networks. Nature, 393(6684):440–442, 1998.
[39] L. Xu, X. Wei, J. Cao, and P. S. Yu. Embedding identity and interest for social networks. In WWW, 2017.
[40] Z. Yang, W. Cohen, and R. Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In ICML, 2016.
[41] M. Zitnik and J. Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics, 33(14):190–198, 2017.
Subset Selection and Summarization in Sequential Data

Ehsan Elhamifar
Computer and Information Science College
Northeastern University
Boston, MA 02115
[email protected]

M. Clara De Paolis Kaluza
Computer and Information Science College
Northeastern University
Boston, MA 02115
[email protected]

Abstract

Subset selection, which is the task of finding a small subset of representative items from a large ground set, finds numerous applications in different areas. Sequential data, including time-series and ordered data, contain important structural relationships among items, imposed by underlying dynamic models of data, that should play a vital role in the selection of representatives. However, nearly all existing subset selection techniques ignore underlying dynamics of data and treat items independently, leading to incompatible sets of representatives. In this paper, we develop a new framework for sequential subset selection that finds a set of representatives compatible with the dynamic models of data. To do so, we equip items with transition dynamic models and pose the problem as an integer binary optimization over assignments of sequential items to representatives, that leads to high encoding, diversity and transition potentials. Our formulation generalizes the well-known facility location objective to deal with sequential data, incorporating transition dynamics among facilities. As the proposed formulation is non-convex, we derive a max-sum message passing algorithm to solve the problem efficiently. Experiments on synthetic and real data, including instructional video summarization, show that our sequential subset selection framework not only achieves better encoding and diversity than the state of the art, but also successfully incorporates dynamics of data, leading to compatible representatives.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

Subset selection is the task of finding a small subset of most informative items from a ground set. Besides helping to reduce the computational time and memory of algorithms, due to working on a much smaller representative set [1], it has found numerous applications, including image and video summarization [2, 3, 4], speech and document summarization [5, 6, 7], clustering [8, 9, 10, 11, 12], feature and model selection [13, 14, 15, 16], sensor placement [17, 18], social network marketing [19] and product recommendation [20]. Compared to dictionary learning methods such as Kmeans [21], KSVD [22] and HMMs [23], that learn centers/atoms in the input space, subset selection methods choose centers/atoms from the given set of items.

Sequential data, including time-series such as video, speech, audio and sensor measurements, as well as ordered data such as text, form an important large part of modern datasets, requiring effective subset selection techniques. Such datasets contain important structural relationships among items, often imposed by underlying dynamic models, that should play a vital role in the selection of representatives. For example, there exists a logical way in which segments of a video or sentences of a document are connected together, and treating segments/sentences as a bag of randomly permutable items results in losing the semantic content of the video/document.

Figure 1: We propose a framework for the summarization of sequential data. Given a source set of items $\{x_1, \ldots, x_M\}$, with a dynamic transition model, and a target set of sequential items $(y^1, \ldots, y^T)$, we find a sequence of representatives from the source set with high transition probability that well encode the target set.

However, existing subset selection methods
ignore these relationships and treat items independently of each other. Thus, there is a need for sequential subset selection methods that, instead of treating items independently, use the underlying dynamic models of data to select high-quality, diverse and compatible representatives.

Prior Work: A subset selection framework consists of three main components: i) the inputs to the algorithm; ii) the objective function to optimize, characterizing the informativeness and diversity of selected items; iii) the algorithm to optimize the objective function. The inputs to subset selection algorithms are in the form of either feature vector representations or pairwise similarities between items. Several subset selection criteria have been studied in the literature, including the maximum cut objective [24, 25], maximum marginal relevance [26], capacitated and uncapacitated facility location objectives [27, 28], multi-linear coding [29, 30] and maximum volume subset [6, 31], which all try to characterize the informativeness/value of a subset of items in terms of the ability to represent the entire distribution and/or having minimum information overlap among selected items. On the other hand, optimizing almost all subset selection criteria is, in general, NP-hard and non-convex [25, 32, 33, 34], which has motivated the development and study of approximate methods for optimizing these criteria. This includes greedy approximate algorithms [28] for maximizing submodular functions, such as graph-cuts and facility location, which have worst-case approximation guarantees, as well as sampling methods from the Determinantal Point Process (DPP) [6, 31], a probability measure on the set of all subsets of a ground set, for approximately finding the maximum volume subset. Motivated by the maturity of convex optimization and advances in sparse and low-rank recovery, recent methods have focused on convex relaxation-based methods for subset selection [8, 9, 2, 35, 36]. When it comes to sequential data, however, the majority of subset selection methods ignore the underlying dynamics and relationships among items and treat items independently of each other. Recent results in [37, 3] have developed interesting extensions to DPP-based subset selection, by capturing representatives in a sequential order such that newly selected representatives are diverse with respect to the previously selected ones. However, sequential diversity by itself is generally insufficient, especially when the sequence of diverse selected items is unlikely to follow each other according to underlying dynamic models. For example, in a video/document on a specific topic with intermediate scenes/sentences irrelevant to the topic, promoting sequential diversity results in selecting irrelevant scenes/sentences. [38] extends submodular functions to capture ordered preferences among items, where ordered preferences are represented by a directed acyclic graph over items, and presents a greedy algorithm to pick edges instead of items. The method, however, cannot deal with arbitrary graphs, such as Markov chains with cycles.
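For concreteness, here is a minimal sketch of the greedy facility-location scheme [28] mentioned above, assuming a precomputed source-to-target similarity matrix; the function and variable names are illustrative, not any paper's released code.

```python
# Minimal sketch of greedy facility-location selection, assuming a
# precomputed similarity matrix sim (sim[i, t] = similarity of source
# item i to target item t). This is the classic greedy scheme of
# Nemhauser et al. [28], not the authors' implementation.
import numpy as np

def greedy_facility_location(sim: np.ndarray, k: int) -> list:
    """Pick k rows (facilities) maximizing sum_t max_{i in S} sim[i, t]."""
    selected, best_cover = [], np.zeros(sim.shape[1])
    for _ in range(k):
        # Marginal gain of adding each candidate facility.
        gains = np.maximum(sim, best_cover).sum(axis=1) - best_cover.sum()
        i = int(np.argmax(gains))
        selected.append(i)
        best_cover = np.maximum(best_cover, sim[i])
    return selected
```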
On the other hand, while Hidden Markov Models (HMMs) [23, 39] and dynamical systems [40, 41] have been extensively studied for modeling sequential data, they have not been properly exploited in the context of subset selection.

Paper Contributions: In this paper we develop a new framework for sequential subset selection that incorporates the dynamic model of sequential data into subset selection. We develop a new class of objective functions that promotes selecting not only high-quality and diverse items, but also a sequence of representatives that are compatible with the dynamic model of data. To do so, we propose a dynamic subset selection framework, where we equip items with transition probabilities and design objective functions to select representatives that well capture the data distribution with a high overall transition probability in the sequence of representatives, see Figure 1. Our formulation generalizes the facility location objective [27, 28] to sequential data, by incorporating transition dynamics among facilities. Since our proposed integer binary optimization is non-convex, we develop a max-sum message passing framework to solve the problem efficiently. By experiments on synthetic and real data, including instructional video summarization, we show that our method outperforms the state of the art in terms of selecting representatives with better encoding, diversity and dynamic compatibility.

2 Subset Selection for Sequential Data

Sequential data, including time-series and ordered data, contain important structural relationships among items, often imposed by underlying dynamic models of data, that should play a vital role in the selection of representatives. In this section, we develop a new framework for sequential subset selection that incorporates underlying dynamic models and relationships among items into subset selection. More specifically, we propose a dynamic subset selection framework, where we equip items with transition probabilities and design objectives to select representatives that capture the data distribution with a high transition probability in the sequence of representatives. In the next section, we develop an efficient algorithm to solve the proposed optimization problem.

2.1 Sequential Subset Selection Formulation

Assume we have a source set of items $\mathbb{X} = \{x_1, \ldots, x_M\}$, equipped with a transition model, $p(x_{i_0} \mid x_{i_1}, \ldots, x_{i_n})$, between items, and a target set of sequential items $\mathbb{Y} = (y^1, \ldots, y^T)$. Our goal is to find a small representative subset of $\mathbb{X}$ that well encodes $\mathbb{Y}$, while the set of representatives is compatible according to the dynamic model of $\mathbb{X}$. Let $x_{r_t}$ be the representative of $y^t$ for $t \in \{1, \ldots, T\}$. We propose a potential function $\psi(r_1, \ldots, r_T)$ whose maximization over all possible assignments $(r_1, \ldots, r_T) \in \{1, \ldots, M\}^T$, i.e.,

$$\max_{(r_1, \ldots, r_T) \in \{1, \ldots, M\}^T} \psi(r_1, \ldots, r_T), \qquad (1)$$

achieves the three goals of i) minimizing the encoding cost of $\mathbb{Y}$ via the representative set; ii) selecting a small set of representatives from $\mathbb{X}$; iii) selecting an ordered set of representatives $(x_{r_1}, \ldots, x_{r_T})$ that are compatible with the dynamics on $\mathbb{X}$. To tackle the problem, we consider a decomposition of the potential function $\psi$ into the product of three potentials, corresponding to the three aforementioned objectives, as

$$\psi(r_1, \ldots, r_T) \triangleq \psi_{\mathrm{enc}}(r_1, \ldots, r_T) \cdot \psi_{\mathrm{card}}(r_1, \ldots, r_T) \cdot \psi_{\mathrm{dyn}}(r_1, \ldots, r_T), \qquad (2)$$
where $\psi_{\mathrm{enc}}(r_1, \ldots, r_T)$ denotes the encoding potential that favors selecting a representative set from $\mathbb{X}$ that well encodes $\mathbb{Y}$, $\psi_{\mathrm{card}}(r_1, \ldots, r_T)$ denotes the cardinality potential that favors selecting a small number of distinct representatives, and finally $\psi_{\mathrm{dyn}}(r_1, \ldots, r_T)$ denotes the dynamic potential that favors selecting an ordered set of representatives that are likely to be generated by the underlying dynamic model on $\mathbb{X}$. Next, we study each of the three potentials.

Encoding Potential: Since the encoding of each item of $\mathbb{Y}$ depends on its own representative, we assume that the encoding potential function factorizes as

$$\psi_{\mathrm{enc}}(r_1, \ldots, r_T) = \prod_{t=1}^{T} \psi_{\mathrm{enc},t}(r_t), \qquad (3)$$

where $\psi_{\mathrm{enc},t}(i)$ characterizes how well $x_i$ encodes $y^t$ and becomes larger when $x_i$ better represents $y^t$. In this paper, we assume that $\psi_{\mathrm{enc},t}(i) = \exp(-d_{i,t})$, where $d_{i,t}$ indicates the dissimilarity of $x_i$ to $y^t$. A lower dissimilarity $d_{i,t}$ means that $x_i$ better encodes/represents $y^t$.

Cardinality Potential: Notice that maximizing the encoding potential alone results in selecting many representatives. Hence, we consider a cardinality potential to restrict the total number of representatives. Denoting the number of representatives by $|\{r_1, \ldots, r_T\}|$, we consider

$$\psi_{\mathrm{card}}(r_1, \ldots, r_T) = \exp(-\lambda \, |\{r_1, \ldots, r_T\}|), \qquad (4)$$

which promotes selecting a small number of representatives. The parameter $\lambda > 0$ controls the effect of the cardinality on the global potential $\psi$, where a $\lambda$ close to zero ignores the effect of the cardinality potential, resulting in many representatives, and a larger $\lambda$ results in a smaller representative set.

Dynamic Potential: While encoding and cardinality potentials together promote selecting a few representatives from $\mathbb{X}$ that well encode $\mathbb{Y}$, there is no guarantee that the sequence of representatives $(x_{r_1}, \ldots, x_{r_T})$ is compatible with the underlying dynamics of $\mathbb{X}$. Thus, we introduce a dynamic potential that measures the compatibility of the sequence of representatives. To do so, we consider an $n$-th order Markov model to represent the dynamic relationships among the items in $\mathbb{X}$, where the selection of the representative $x_{r_t}$ depends on the $n$ previously selected representatives, i.e., $x_{r_{t-1}}, \ldots, x_{r_{t-n}}$. More precisely, we consider

$$\psi_{\mathrm{dyn}}(r_1, \ldots, r_T) = \left( \prod_{t=1}^{n} p_t(x_{r_t}) \cdot \prod_{t=n+1}^{T} p_t(x_{r_t} \mid x_{r_{t-1}}, \ldots, x_{r_{t-n}}) \right)^{\beta}, \qquad (5)$$

where $p_t(x_i)$ indicates the probability of selecting $x_i$ as the representative of $y^t$ and $p_t(x_{i_0} \mid x_{i_1}, \ldots, x_{i_n})$ denotes the probability of selecting $x_{i_0}$ as the representative of $y^t$ given that $x_{i_1}, \ldots, x_{i_n}$ have been selected as the representatives of $y^{t-1}, \ldots, y^{t-n}$, respectively. The regularization parameter $\beta > 0$ determines the effect of the dynamic potential on the overall potential $\psi$, where a $\beta$ close to zero results in discounting the effect of the dynamics of $\mathbb{X}$. As a result, maximizing the dynamic potential promotes selecting a sequence of representatives that are highly likely to follow the dynamic model on the source set.

2.2 Optimization Framework for Sequential Subset Selection

In the rest of the paper, we consider a first-order Markov model, which performs well in the application studied in the paper (our proposed optimization can be generalized to $n$-th order Markov models as well). Putting all three potentials together, we consider maximization of the global potential function

$$\psi = \prod_{t=1}^{T} \psi_{\mathrm{enc},t}(r_t) \cdot \psi_{\mathrm{card}}(r_1, \ldots, r_T) \cdot \left( p_1(x_{r_1}) \cdot \prod_{t=2}^{T} p_t(x_{r_t} \mid x_{r_{t-1}}) \right)^{\beta} \qquad (6)$$

over all possible assignments $(r_1, \ldots, r_T) \in \{1, \ldots, M\}^T$.
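As a concrete reading of (6), here is a minimal sketch that scores one candidate assignment by its log-potential; the array layout and the names lam/beta follow the notation above and are illustrative assumptions, not released code.

```python
# Minimal sketch: log of the global potential in (6) for one assignment
# r = (r_1, ..., r_T). D[i, t] are dissimilarities d_{i,t}; p1 and P hold
# the initial and transition probabilities of the source-set Markov model
# (P[j, i] = p(x_j | x_i)); lam and beta play the roles of lambda and beta.
import numpy as np

def log_potential(r, D, p1, P, lam, beta):
    T = len(r)
    log_enc = -sum(D[r[t], t] for t in range(T))             # log psi_enc
    log_card = -lam * len(set(r))                            # log psi_card
    log_dyn = beta * (np.log(p1[r[0]])                       # log psi_dyn
                      + sum(np.log(P[r[t], r[t - 1]]) for t in range(1, T)))
    return log_enc + log_card + log_dyn
```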
To do so, we cast the problem as an integer binary optimization. We define binary assignment variables $\{z_{i,t}\}_{i=1,\ldots,M}^{t=1,\ldots,T}$, where $z_{i,t} \in \{0, 1\}$ indicates if $x_i$ is a representative of $y^t$. Since each item $y^t$ is associated with only a single representative, we have $\sum_{i=1}^{M} z_{i,t} = 1$. Also, we define variables $\{\nu_i\}_{i=1,\ldots,M}$ and $\{u^t_{i',i}\}_{i,i'=1,\ldots,M}^{t=1,\ldots,T}$, where $\nu_i \in \{0, 1\}$ indicates if $x_i$ is a representative of $y^1$ and $u^t_{i',i} \in \{0, 1\}$ indicates if $x_{i'}$ is a representative of $y^t$ given that $x_i$ is a representative of $y^{t-1}$. As we will show, $\{\nu_i\}$ and $\{u^t_{i',i}\}$ are related to $\{z_{i,t}\}$, hence, the final optimization only depends on $\{z_{i,t}\}$. Using the variables defined above, we can rewrite the global potential function in (6) as

$$\psi = \prod_{t=1}^{T} \prod_{i=1}^{M} \psi_{\mathrm{enc},t}(i)^{z_{i,t}} \cdot \psi_{\mathrm{card}}(r_1, \ldots, r_T) \cdot \left( \prod_{i=1}^{M} p_1(x_i)^{\nu_i} \cdot \prod_{t=2}^{T} \prod_{i',i=1}^{M} p_t(x_{i'} \mid x_i)^{u^t_{i',i}} \right)^{\beta}. \qquad (7)$$

We can equivalently maximize the logarithm of $\psi$, which is to maximize

$$-\sum_{t=1}^{T} \sum_{i=1}^{M} z_{i,t} \, d_{i,t} + \log \psi_{\mathrm{card}}(r_1, \ldots, r_T) + \beta \left( \sum_{i=1}^{M} \nu_i \log p_1(x_i) + \sum_{t=2}^{T} \sum_{i,i'=1}^{M} u^t_{i',i} \log p_t(x_{i'} \mid x_i) \right), \qquad (8)$$

where we used $\log \psi_{\mathrm{enc},t}(i) = -d_{i,t}$. Notice that $\{\nu_i\}$ and $\{u^t_{i',i}\}$ can be written as functions of the assignment variables $\{z_{i,t}\}$. Denoting the indicator function by $\mathbb{1}(\cdot)$, which is one when its argument is true and is zero otherwise, we can write $\nu_i = \mathbb{1}(r_1 = i)$ and $u^t_{i',i} = \mathbb{1}(r_t = i', r_{t-1} = i)$. Hence, we have

$$\nu_i = z_{i,1}, \qquad u^t_{i',i} = z_{i,t-1} \, z_{i',t}. \qquad (9)$$

As a result, we can rewrite the maximization in (8) as the equivalent optimization

$$\max_{\{z_{i,t}\}} \; -\sum_{t=1}^{T} \sum_{i=1}^{M} z_{i,t} \, d_{i,t} + \log \psi_{\mathrm{card}}(r_1, \ldots, r_T) + \beta \left( \sum_{i=1}^{M} z_{i,1} \log p_1(x_i) + \sum_{t=2}^{T} \sum_{i,i'=1}^{M} z_{i,t-1} \, z_{i',t} \log p_t(x_{i'} \mid x_i) \right) \quad \text{s.t. } z_{i,t} \in \{0, 1\}, \; \sum_{i=1}^{M} z_{i,t} = 1, \; \forall \, i, t. \qquad (10)$$

Figure 2: Left: Factor graph representing (12). Right: Messages from each factor to a variable node $z_{i,t}$.

It is important to note that if $x_i$ becomes a representative of some items in $\mathbb{Y}$, then $\|[z_{i,1} \cdots z_{i,T}]\|_{\infty}$ would be 1. Hence, the number of representatives is given by $\sum_{i=1}^{M} \|[z_{i,1} \cdots z_{i,T}]\|_{\infty}$. As a result, we can rewrite the cardinality potential in (4) as

$$\psi_{\mathrm{card}}(r_1, \ldots, r_T) = \exp\left( -\lambda \sum_{i=1}^{M} \|[z_{i,1} \cdots z_{i,T}]\|_{\infty} \right). \qquad (11)$$

Finally, considering a homogeneous Markov model on the dynamics of the source set, where $p_t(\cdot \mid \cdot) = p(\cdot \mid \cdot)$, i.e., transitioning from $x_i$ as the representative of $y^{t-1}$ to $x_{i'}$ as the representative of $y^t$ does not depend on $t$, we propose to solve the optimization

$$\max_{\{z_{i,t}\}} \; -\sum_{t=1}^{T} \sum_{i=1}^{M} z_{i,t} \, d_{i,t} - \lambda \sum_{i=1}^{M} \|[z_{i,1} \cdots z_{i,T}]\|_{\infty} + \beta \left( \sum_{i=1}^{M} z_{i,1} \log p_1(x_i) + \sum_{t=2}^{T} \sum_{i,i'=1}^{M} z_{i,t-1} \, z_{i',t} \log p(x_{i'} \mid x_i) \right) \quad \text{s.t. } z_{i,t} \in \{0, 1\}, \; \sum_{i=1}^{M} z_{i,t} = 1, \; \forall \, i, t. \qquad (12)$$

In our proposed formulation above, we assume that the dissimilarities $\{d_{i,t}\}$ and the dynamic models, i.e., the probabilities $p_1(\cdot)$ and $p(\cdot \mid \cdot)$, are known. These models can be given by prior knowledge or by learning from training data, as we show in the experiments.
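The indicator bookkeeping in (9)-(11) is easy to sanity-check numerically; the sketch below, with illustrative names, builds the binary matrix $z$ from an assignment sequence and confirms that the per-row infinity norms count distinct representatives.

```python
# Sketch verifying the indicator bookkeeping in (9) and (11): build the
# binary assignment matrix z from r, then count representatives via the
# per-row infinity norm, which is 1 iff source item i is ever used.
import numpy as np

def assignment_matrix(r, M):
    T = len(r)
    z = np.zeros((M, T), dtype=int)
    z[r, np.arange(T)] = 1            # z[i, t] = 1 iff r_t = i
    return z

r = [2, 2, 0, 3, 3, 3]                # toy assignment, M = 5 source items
z = assignment_matrix(r, M=5)
n_reps = np.abs(z).max(axis=1).sum()  # sum_i ||z_{i,:}||_inf
assert n_reps == len(set(r))          # equals |{r_1, ..., r_T}|
```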
It is important to notice that the optimization in (12) is non-convex, due to the binary optimization variables and the quadratic terms in the objective function, which is not necessarily positive semi-definite (this can be easily seen when $p(x_{i'} \mid x_i) \neq p(x_i \mid x_{i'})$ for some $i, i'$). In the next section, we treat (12) as MAP inference on binary random variables and develop a message passing algorithm to find the hidden values $\{z_{i,t}\}$. Once we solve the optimization in (12), we can obtain the representatives as the items of $\mathbb{X}$ for which $z_{i,t}$ is non-zero for some $t$. Moreover, we can obtain the segmentation of the sequential items in $\mathbb{Y}$ according to their assignments to the representatives. In fact, the sequence of representatives obtained by our proposed optimization in (12) not only corresponds to diverse items that well encode the sequential target data, but also is compatible with the underlying dynamics of the source data.

Remark 1. Without the dynamic potential, i.e., with $\beta = 0$, our proposed optimization in (12) reduces to the uncapacitated facility location objective. Hence, our framework generalizes facility location to sequential data by considering transition dynamics among facilities (source set items).

3 Message Passing for Sequential Subset Selection

In this section, we develop an efficient message passing algorithm to solve the proposed optimization in (12). To do so, we treat the sequential subset selection as MAP inference, where $\{z_{i,t}\}$ correspond to binary random variables whose joint log-likelihood is given by the objective function in (12). We represent the log-likelihood, i.e., the objective function in (12), with a factor graph [42], which is shown in Figure 2. Recall that a factor graph is a bipartite graph that consists of variable nodes and factor nodes, where every factor evaluates a potential function over the variables it is connected to. The log-likelihood is then proportional to the sum of all factor potentials.

To form the factors corresponding to the objective function in (12), we define $m_{i,i'} \triangleq \beta \log p(x_{i'} \mid x_i)$ and $\tilde{d}_{i,t} \triangleq d_{i,t} - \beta \log p_1(x_i)$ if $t = 1$ and $\tilde{d}_{i,t} \triangleq d_{i,t}$ for all other values of $t$. Denoting $z_{i,:} \triangleq [z_{i,1} \cdots z_{i,T}]^\top$ and $z_{:,t} \triangleq [z_{1,t} \cdots z_{M,t}]^\top$, we define factor potentials corresponding to our framework, shown in Figure 2. More specifically, we define the encoding and dynamic potentials, respectively, as $\theta_{i,t}(z_{i,t}) \triangleq -\tilde{d}_{i,t} z_{i,t}$ and $\theta^{D}_{i,t-1;i',t}(z_{i,t-1}, z_{i',t}) \triangleq m_{i,i'} z_{i,t-1} z_{i',t}$. Moreover, we define the cardinality and constraint potentials, respectively, as

$$\theta^{R}_{i}(z_{i,:}) \triangleq \begin{cases} -\lambda, & \|z_{i,:}\|_{\infty} > 0 \\ 0, & \text{otherwise,} \end{cases} \qquad \theta^{C}_{t}(z_{:,t}) \triangleq \begin{cases} 0, & \sum_{i=1}^{M} z_{i,t} = 1 \\ -\infty, & \text{otherwise.} \end{cases}$$

The MAP formulation of our sequential subset selection is then given by

$$\max_{\{z_{i,t}\}} \; \sum_{t=1}^{T} \sum_{i=1}^{M} \theta_{i,t}(z_{i,t}) + \sum_{i=1}^{M} \theta^{R}_{i}(z_{i,:}) + \sum_{t=1}^{T} \theta^{C}_{t}(z_{:,t}) + \sum_{t=1}^{T-1} \sum_{i,i'=1}^{M} \theta^{D}_{i,t;i',t+1}(z_{i,t}, z_{i',t+1}). \qquad (13)$$

To perform MAP inference, we use the max-sum message passing algorithm, which iteratively updates messages between variable and factor nodes in the graph. In our framework, the incoming messages to each variable node $z_{i,t}$ are illustrated in Figure 2. Messages are computed by max-marginalizing the factor potentials above, yielding the update rules in equations (14)-(20); please see the supplementary materials for the derivations.
The update of messages continues until convergence, when each variable $z_{i,t}$ is assigned to the value that maximizes the sum of its incoming messages. It is important to note that the max-sum algorithm always converges to the optimal MAP assignment on trees, and has shown good performance on graphs with cycles in many applications, including our work. We also use a dampening factor $\gamma \in [0, 1)$ on message updates, so that a message $\mu$ is computed as $\mu^{(\mathrm{new})} = \gamma \, \mu^{(\mathrm{old})} + (1 - \gamma) \, \mu^{(\mathrm{update})}$.

4 Experiments

In this section, we evaluate the performance of our proposed method as well as the state of the art for subset selection on synthetic and real sequential data. For real applications, we consider the task of summarizing instructional videos to learn the key steps of the task described in the videos. In addition to our proposed message passing (MP) algorithm, we have implemented the optimization in (12) using an ADMM framework [43], where we have relaxed the integer binary constraints to $z_{i,t} \in [0, 1]$. In practice both MP and ADMM algorithms achieve similar results. We compare our proposed method, Sequential Facility Location (SeqFL), with several subset selection algorithms. Since we study the performance of methods as a function of the size of the representative set, we use the fixed-size variant of DPP, called kDPP [44]. In addition to kDPP, we evaluate the performance of Markov kDPP (M-kDPP) [37], in which successive representatives are diverse among themselves and with respect to the previously selected representatives, as well as Sequential kDPP (Seq-kDPP) [3], which divides a time-series into multiple windows and successively selects diverse samples from each window conditioned on the previous window.¹ We also compare our method against DS3 [8] and the standard greedy method [28], which optimize the conventional facility location objective, which has no dynamic cost, via convex relaxation and greedy selection, respectively.

Figure 3: Encoding cost, dynamic cost, total cost and diversity score of different algorithms as a function of the number of selected representatives. The size of the source set is M = 50.

Figure 4: Number of representatives, encoding cost, dynamic cost and diversity score of our proposed method (SeqFL) as a function of the parameters $(\lambda, \beta)$.

To compare the performance of different methods, we evaluate several costs and scores that demonstrate the effectiveness of each method in terms of encoding, diversity and dynamic compatibility of the set of selected representatives.
More specifically, given dissimilarities $\{d_{i,t}\}$, the dynamic model $p_1(\cdot)$ and $p(\cdot \mid \cdot)$, representative set $\Omega$, and the assignment of points to representatives $\{z^*_{i,t}\}$, we compute the encoding cost as $\sum_{t=1}^{T} \sum_{i=1}^{M} d_{i,t} \, z^*_{i,t}$, the dynamic cost as $-\sum_{i=1}^{M} \log p_1(x_i) \, z^*_{i,1} - \sum_{t=2}^{T} \sum_{i,i'=1}^{M} \log p(x_{i'} \mid x_i) \, z^*_{i,t-1} z^*_{i',t}$, and the diversity score as $\det(K_\Omega)$. Here, $K$ corresponds to the kernel matrix, used in DPP and its variants, and $K_\Omega$ denotes the submatrix of $K$ indexed by $\Omega$. In this paper, we use Euclidean distances as dissimilarities and compute the corresponding inner-product kernel to run DPPs. Notice that the diversity score, which is the volume of the parallelotope spanned by the representatives, is what DPP methods aim to (approximately) maximize. As DPP methods only find representatives and not the assignment of points, we compute the $z^*_{i,t}$'s by assigning each point to the closest representative in $\Omega$, according to the kernel.

4.1 Synthetic Data

To demonstrate the effectiveness of our proposed method for sequential subset selection, we generate synthetic data where, for a source set $\mathbb{X}$ with $M$ items corresponding to the means of $M$ Gaussians, we generate a transition probability matrix among items and an initial probability vector. We draw a sequence of length $T$ from the corresponding Markov model to form the target set $\mathbb{Y}$ and run different algorithms to generate $k$ representatives. We then compute the average encoding and transition costs as well as the diversity scores for sequences drawn from the Markov model, as a function of $k \in \{1, 2, \ldots, M\}$. In the experiments we set $M = 50$, $T = 100$. For a fixed $\beta$, we run SeqFL for different values of $\lambda$ to select different numbers of representatives. Figure 3 illustrates the encoding and transition costs and the diversity scores of different methods, where for SeqFL we have set $\beta = 0.02$. Notice that our proposed method consistently obtains lower encoding, dynamic and total costs for all numbers of representatives, demonstrating its effectiveness for obtaining a sequence of high-quality and compatible representatives according to the dynamics. It is important to notice that although our method does not maximize the diversity score, used in kDPP and its variants, it achieves slightly better diversity scores (higher is better) than kDPP and M-kDPP. Figure 4 demonstrates the effect of the parameters $(\lambda, \beta)$ on the solution of our proposed method. Notice that for a fixed $\beta$, as $\lambda$ increases, we select a smaller number of representatives, hence the encoding cost increases. Also, for a fixed $\lambda$, as $\beta$ increases, we put more emphasis on the dynamic compatibility of representatives, hence the dynamic cost decreases. On the other hand, the diversity score decreases for smaller $\lambda$, as we select more representatives, which become more redundant. The results in Figure 4 also demonstrate the robustness of our method to the change of parameters.

¹ To have a fair comparison and to select a fixed number of representatives, we modify the SeqDPP method [3] and implement Seq-kDPP where k representatives are chosen in each window.
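As a concrete rendering of these evaluation quantities, the sketch below computes the encoding cost, dynamic cost, and diversity score from a binary assignment matrix; the inputs and names are assumptions mirroring the notation above, not the authors' evaluation code.

```python
# Sketch of the evaluation metrics above: encoding cost, dynamic cost,
# and the DPP-style diversity score det(K_Omega). z is the binary M x T
# assignment matrix; P[j, i] = p(x_j | x_i); K is an inner-product kernel.
import numpy as np

def evaluate(z, D, p1, P, K):
    encoding_cost = float((D * z).sum())
    dynamic_cost = -float(np.log(p1) @ z[:, 0])
    for t in range(1, z.shape[1]):
        i, j = int(z[:, t - 1].argmax()), int(z[:, t].argmax())
        dynamic_cost -= float(np.log(P[j, i]))
    omega = np.flatnonzero(z.max(axis=1))     # indices of used items
    diversity = float(np.linalg.det(K[np.ix_(omega, omega)]))
    return encoding_cost, dynamic_cost, diversity
```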
Table 1: Precision (P), Recall (R) and F-score for summarization of instructional videos for five tasks. Each cell shows (P, R) followed by the F-score.

Method   | Change tire       | Make coffee       | CPR               | Jump car          | Repot plant       | All tasks
kDPP     | (0.56, 0.50) 0.53 | (0.38, 0.33) 0.35 | (0.71, 0.71) 0.71 | (0.50, 0.50) 0.50 | (0.57, 0.67) 0.62 | (0.54, 0.54) 0.54
M-kDPP   | (0.55, 0.60) 0.57 | (0.50, 0.44) 0.47 | (0.71, 0.71) 0.71 | (0.56, 0.50) 0.53 | (0.60, 0.50) 0.55 | (0.58, 0.55) 0.57
Seq-kDPP | (0.44, 0.40) 0.42 | (0.63, 0.56) 0.59 | (0.71, 0.71) 0.71 | (0.56, 0.50) 0.53 | (0.57, 0.67) 0.62 | (0.58, 0.57) 0.57
DS3      | (0.56, 0.50) 0.53 | (0.50, 0.56) 0.53 | (0.71, 0.71) 0.71 | (0.50, 0.50) 0.50 | (0.57, 0.67) 0.62 | (0.57, 0.59) 0.58
SeqFL    | (0.60, 0.60) 0.60 | (0.50, 0.56) 0.53 | (0.83, 0.71) 0.77 | (0.60, 0.60) 0.60 | (0.80, 0.67) 0.73 | (0.67, 0.63) 0.65

Figure 5: Ground-truth and automatic summarization results of our method for the CPR task. (The figure shows two timelines of action labels, Ground Truth vs. SeqFL, with actions such as Open Airway, Check Breathing, Give Breath, Give Compression, and Check Response.)

4.2 Instructional Video Summarization

We apply SeqFL to the task of summarization of instructional videos to automatically learn the sequence of key actions to perform a task. We use videos from the instructional video dataset [45], which consists of 30 instructional videos for each of five activities. The dataset also provides labels for frames which contain the main steps required to perform that task. We preprocess the videos by segmenting each video into superframes [46] and obtain features using a deep neural network that we have constructed for feature extraction for summarization tasks. We use 60% of the videos from each task as the training set to build an HMM model whose states form the source set, $\mathbb{X}$. Using the learned dynamic model, we apply our method to summarize each of the remaining videos. The summaries for each video are sets of elements of $\mathbb{X}$, states in the HMM model. For evaluation, we map the representative states into actions in the ground truth by using the labels of the five nearest neighbors in the training set to each state. The summary for each video is an assignment of each superframe in the video to one of the action labels in the training set. Since each video may have shown each action performed for a different length of time, we remove consecutive repeated labels to form a list of actions performed, removing the length of time each action was performed. To construct the final summary for a task, we align the lists of actions obtained by each method for all the test videos, following the alignment method of [45] for several numbers of slots. For each method, we choose the number of HMM states and the alignment of videos that give the best performance (see the supplementary materials for more details). Table 1 shows the precision, recall and the F-score of various methods. Notice that existing methods perform similarly to each other for most tasks, suggesting that the sequential diversity promoted by Seq-kDPP and M-kDPP is not sufficient for capturing the important steps of tasks. On the other hand, for most tasks and over the entire dataset, our method (SeqFL) significantly outperforms other algorithms, better producing the sequence of important steps to perform a task, thanks to the ability of our framework to incorporate the underlying dynamics of the data.
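The label de-duplication step described above (collapsing consecutive repeated action labels into an ordered action list, discarding durations) can be illustrated in a few lines; the labels below are illustrative placeholders.

```python
# Sketch of collapsing per-superframe labels into an ordered action list,
# removing consecutive repeats (durations are intentionally discarded).
from itertools import groupby

frame_labels = ["open airway", "check breathing", "check breathing",
                "give breath", "give breath", "give compression"]
actions = [label for label, _ in groupby(frame_labels)]
print(actions)  # ['open airway', 'check breathing', 'give breath', 'give compression']
```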
Figure 5 shows the ground-truth and the automatic summary produced by our method for the CPR task, demonstrating that we can sufficiently well capture the main steps and the sequence of steps to perform the task.

5 Conclusions

We developed a new framework for sequential subset selection that takes advantage of the underlying dynamic models of data, promoting the selection of a set of representatives that are compatible according to the dynamic models of data. By experiments on synthetic and real data, we showed the effectiveness of our method for summarization of sequential data. Our ongoing research includes the development of fast greedy algorithms for our sequential subset selection objective that can also deal with n-th order dynamic models, as well as investigation of the theoretical guarantees of our proposed formulations.

Acknowledgements

This work is supported by NSF IIS-1657197 award and startup funds from the Northeastern University, College of Computer and Information Science.

References

[1] S. Garcia, J. Derrac, J. R. Cano, and F. Herrera, "Prototype selection for nearest neighbor classification: Taxonomy and empirical study," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 417–435, 2012.
[2] E. Elhamifar and M. C. D. P. Kaluza, "Online summarization via submodular and convex optimization," in IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[3] B. Gong, W. Chao, K. Grauman, and F. Sha, "Diverse sequential subset selection for supervised video summarization," in Neural Information Processing Systems, 2014.
[4] I. Simon, N. Snavely, and S. M. Seitz, "Scene summarization for online image collections," in IEEE International Conference on Computer Vision, 2007.
[5] H. Lin and J. Bilmes, "Learning mixtures of submodular shells with application to document summarization," in Conference on Uncertainty in Artificial Intelligence, 2012.
[6] A. Kulesza and B. Taskar, "Determinantal point processes for machine learning," Foundations and Trends in Machine Learning, vol. 5, 2012.
[7] B. J. Frey and D. Dueck, "Clustering by passing messages between data points," Science, vol. 315, 2007.
[8] E. Elhamifar, G. Sapiro, and S. S. Sastry, "Dissimilarity-based sparse subset selection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
[9] E. Elhamifar, G. Sapiro, and R. Vidal, "Finding exemplars from pairwise dissimilarities via simultaneous sparse recovery," Neural Information Processing Systems, 2012.
[10] G. Kim, E. Xing, L. Fei-Fei, and T. Kanade, "Distributed cosegmentation via submodular optimization on anisotropic diffusion," in International Conference on Computer Vision, 2011.
[11] A. Shah and Z. Ghahramani, "Determinantal clustering process - a nonparametric bayesian approach to kernel based semi-supervised clustering," in Conference on Uncertainty in Artificial Intelligence, 2013.
[12] R. Reichart and A. Korhonen, "Improved lexical acquisition through dpp-based verb clustering," in Conference of the Association for Computational Linguistics, 2013.
[13] E. Elhamifar, S. Burden, and S. S. Sastry, "Adaptive piecewise-affine inverse modeling of hybrid dynamical systems," in World Congress of the International Federation of Automatic Control (IFAC), 2014.
[14] E. Elhamifar and S. S. Sastry, "Energy disaggregation via learning 'powerlets' and sparse coding," in AAAI Conference on Artificial Intelligence, 2015.
[15] I. Guyon and A. Elisseeff, "An introduction to variable and feature selection,"
Journal of Machine Learning Research, 2003.
[16] I. Misra, A. Shrivastava, and M. Hebert, "Data-driven exemplar model selection," in Winter Conference on Applications of Computer Vision, 2014.
[17] A. Krause, H. B. McMahan, C. Guestrin, and A. Gupta, "Robust submodular observation selection," Journal of Machine Learning Research, vol. 9, 2008.
[18] S. Joshi and S. Boyd, "Sensor selection via convex optimization," IEEE Transactions on Signal Processing, vol. 57, 2009.
[19] J. Hartline, V. S. Mirrokni, and M. Sundararajan, "Optimal marketing strategies over social networks," in World Wide Web Conference, 2008.
[20] D. McSherry, "Diversity-conscious retrieval," in Advances in Case-Based Reasoning, 2002.
[21] R. Duda, P. Hart, and D. Stork, Pattern Classification. Wiley-Interscience, October 2004.
[22] M. Aharon, M. Elad, and A. M. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Trans. on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[23] L. Rabiner, "A tutorial on hidden markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, 1989.
[24] F. Hadlock, "Finding a maximum cut of a planar graph in polynomial time," SIAM Journal on Computing, vol. 4, 1975.
[25] R. Motwani and P. Raghavan, "Randomized algorithms," Cambridge University Press, New York, 1995.
[26] J. Carbonell and J. Goldstein, "The use of mmr, diversity-based reranking for reordering documents and producing summaries," in SIGIR, 1998.
[27] P. B. Mirchandani and R. L. Francis, Discrete Location Theory. Wiley, 1990.
[28] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, "An analysis of approximations for maximizing submodular set functions," Mathematical Programming, vol. 14, 1978.
[29] E. Elhamifar, G. Sapiro, and R. Vidal, "See all by looking at a few: Sparse modeling for finding representative objects," in IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[30] E. Esser, M. Moller, S. Osher, G. Sapiro, and J. Xin, "A convex model for non-negative matrix factorization and dimensionality reduction on physical space," IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3239–3252, 2012.
[31] A. Borodin and G. Olshanski, "Distributions on partitions, point processes, and the hypergeometric kernel," Communications in Mathematical Physics, vol. 211, 2000.
[32] U. Feige, "A threshold of ln n for approximating set cover," Journal of the ACM, 1998.
[33] T. Gonzalez, "Clustering to minimize the maximum intercluster distance," Theoretical Computer Science, vol. 38, 1985.
[34] A. Civril and M. Magdon-Ismail, "On selecting a maximum volume sub-matrix of a matrix and related problems," Theoretical Computer Science, vol. 410, 2009.
[35] P. Awasthi, A. S. Bandeira, M. Charikar, R. Krishnaswamy, S. Villar, and R. Ward, "Relax, no need to round: Integrality of clustering formulations," in Conference on Innovations in Theoretical Computer Science (ITCS), 2015.
[36] A. Nellore and R. Ward, "Recovery guarantees for exemplar-based clustering," in Information and Computation, 2015.
[37] R. H. Affandi, A. Kulesza, and E. B. Fox, "Markov determinantal point processes," in Conference on Uncertainty in Artificial Intelligence, 2012.
[38] S. Tschiatschek, A. Singla, and A. Krause, "Selecting sequences of items via submodular maximization," AAAI, 2017.
[39] Z. Ghahramani and M. I. Jordan, "Factorial hidden markov models," Machine Learning, vol. 29, no. 2-3, 1997.
[40] Z. Ghahramani and S.
Roweis, "Learning nonlinear dynamical systems using an em algorithm," NIPS, 2008.
[41] C. Bishop, Pattern Recognition and Machine Learning. New York: Springer, 2007.
[42] F. Kschischang, B. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 498–519, 2001.
[43] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010.
[44] A. Kulesza and B. Taskar, "k-dpps: Fixed-size determinantal point processes," in International Conference on Machine Learning, 2011.
[45] J.-B. Alayrac, P. Bojanowski, N. Agrawal, I. Laptev, J. Sivic, and S. Lacoste-Julien, "Unsupervised learning from narrated instruction videos," in Computer Vision and Pattern Recognition (CVPR), 2016.
[46] M. Gygli, H. Grabner, H. Riemenschneider, and L. V. Gool, "Creating summaries from user videos," in European Conference on Computer Vision, 2014.
precision:2 sub:1 airway:1 mcmahan:1 uncapacitated:2 northeastern:3 removing:1 transitioning:1 specific:1 bishop:1 list:2 gupta:1 incorporating:2 exists:1 burden:1 sequential:39 dissimilarity:7 conditioned:1 elhamifar:7 illustrates:1 boston:2 tc:1 garcia:1 likely:2 ordered:7 recommendation:1 springer:1 corresponds:2 truth:4 determines:1 acm:1 ma:2 shell:1 intercluster:1 slot:1 goal:2 kmeans:1 z21:1 admm:2 content:1 hard:1 change:2 fisher:1 specifically:3 korhonen:1 total:3 called:1 svd:1 xin:4 select:11 college:3 brevity:1 relevance:1 ongoing:1 incorporate:1 evaluate:3 audio:1
Question Asking as Program Generation

Anselm Rothe¹ ([email protected])  Brenden M. Lake¹,² ([email protected])  Todd M. Gureckis¹ ([email protected])
¹Department of Psychology, ²Center for Data Science, New York University

Abstract

A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing human-like questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions.

1 Introduction

In active machine learning, a learner is able to query an oracle in order to obtain information that is expected to improve performance. Theoretical and empirical results show that active learning can speed acquisition for a variety of learning tasks [see 21, for a review]. Although impressive, most work on active machine learning has focused on relatively simple types of information requests (most often a request for a supervised label). In contrast, humans often learn by asking far richer questions which more directly target the critical parameters in a learning task. A human child might ask "Do all dogs have long tails?" or "What is the difference between cats and dogs?" [2]. A long-term goal of artificial intelligence (AI) is to develop algorithms with a similar capacity to learn by asking rich questions. Our premise is that we can make progress toward this goal by better understanding human question asking abilities in computational terms [cf. 8]. To that end, in this paper, we propose a new computational framework that explains how people construct rich and interesting queries within a particular domain. A key insight is to model questions as programs that, when executed on the state of a possible world, output an answer. For example, a program corresponding to "Does John prefer coffee to tea?" would return True for all possible world states where this is the correct answer and False for all others. Other questions may return different types of answers. For example, "How many sugars does John take in his coffee?" would return a number 0, 1, 2, etc. depending on the world state. Thinking of questions as syntactically well-formed programs recasts the problem of question asking as one of program synthesis. We show that this powerful formalism offers a new approach to modeling question asking in humans and may eventually enable more human-like question asking in machines. We evaluate our model using a data set containing natural language questions asked by human participants in an information-search game [19]. Given an ambiguous situation or context, our model can predict what questions human learners will ask by capturing constraints in how humans construct semantically meaningful questions. The method successfully predicts the frequencies of
human questions given a game context, and can also synthesize novel human-like questions that were not present in the training set.

2 Related work

Contemporary active learning algorithms can query for labels or causal interventions [21], but they lack the representational capacity to consider a richer range of queries, including those expressed in natural language. AI dialog systems are designed to ask questions, yet these systems are still far from achieving human-like question asking. Goal-directed dialog systems [25, 1], applied to tasks such as booking a table at a restaurant, typically choose between a relatively small set of canned questions (e.g., "How can I help you?", "What type of food are you looking for?"), with little genuine flexibility or creativity. Deep learning systems have also been developed for visual "20 questions" style tasks [22]; although these models can produce new questions, the questions typically take a stereotyped form ("Is it a person?", "Is it a glove?", etc.). More open-ended question asking can be achieved by non-goal-driven systems trained on large amounts of natural language dialog, such as the recent progress demonstrated in [20]. However, these approaches cannot capture intentional, goal-directed forms of human question asking. Recent work has probed other aspects of question asking. The Visual Question Generation (VQG) data set [16] contains images paired with interesting, human-generated questions. For instance, an image of a car wreck might be paired with the question, "What caused the accident?" Deep neural networks, similar to those used for image captioning, are capable of producing these types of questions after extensive training [16, 23, 11]. However, they require large datasets of images paired with questions, whereas people can ask intelligent questions in a novel scenario with no (or very limited) practice, as shown in our task below. Moreover, human question asking is robust to changes in task and goals, while state-of-the-art neural networks do not generalize flexibly in these ways.

3 The question data set

Our goal was to develop a model of context-sensitive, goal-directed question asking in humans, which falls outside the capabilities of the systems described above. We focused our analysis on a data set we collected in [19], which consists of 605 natural language questions asked by 40 human players to resolve an ambiguous game situation (similar to "Battleship"; the data set is available at https://github.com/anselmrothe/question_dataset). Players were individually presented with a game board consisting of a 6×6 grid of tiles. The tiles were initially turned over but each could be flipped to reveal an underlying color. The player's goal was to identify as quickly as possible the size, orientation, and position of "ships" (i.e., objects composed of multiple adjacent tiles of the same color) [7]. Every board had exactly three ships which were placed in non-overlapping but otherwise random locations. The ships were identified by their color S = {Blue, Red, Purple}. All ships had a width of 1, a length of N = {2, 3, 4} and an orientation O = {Horizontal, Vertical}. Any tile that did not overlap with a ship displayed a null "water" color (light gray) when flipped. After extensive instructions about the rules and purpose of the game and a number of practice rounds [see 19], on each of 18 target contexts players were presented with a partly revealed game board (similar to Figure 1B and 1C) that provided ambiguous information about the actual shape and location of the ships.
They were then given the chance to ask a natural-language question about the configuration. The player's goal was to use this question asking opportunity to gain as much information as possible about the hidden game board configuration. The only rule given to players about questions was that they must be answerable using one word (e.g., true/false, a number, a color, a coordinate like A1, or a row or column number) and no combination of questions was allowed. The questions were recorded via an HTML text box in which people typed what they wanted to ask. A good question for the context in Figure 1B is "Do the purple and the red ship touch?", while "What is the color of tile A1?" is not helpful because it can be inferred from the revealed game board and the rules of the game (ship sizes, etc.) that the answer is "Water" (see Figure 3 for additional example questions). Each player completed 18 contexts where each presented a different underlying game board and partially revealed pattern.

Figure 1: The Battleship game used to obtain the question data set by Rothe et al. [19]. (A) The hidden positions of three ships S = {Blue, Red, Purple} on a game board that players sought to identify. (B) After observing the partly revealed board, players were allowed to ask a natural language question. (C) The partly revealed board in context 4.

Since the usefulness of asking a question depends on the context, the data set consists of 605 question-context pairs ⟨q, c⟩, with 26 to 39 questions per context (although each of the 40 players asked a question for each context, a small number of questions were excluded from the data set for being ambiguous or extremely difficult to address computationally [see 19]). The basic challenge for our active learning method is to predict which question q a human will ask from the given context c and the overall rules of the game. This is a particularly challenging data set to model because of the subtle differences between contexts that determine if a question is potentially useful, along with the open-ended nature of human question asking.

4 A probabilistic model of question generation

Here we describe the components of our probabilistic model of question generation. Section 4.1 describes two key elements of our approach, compositionality and computability, as reflected in the choice to model questions as programs. Section 4.2 describes a grammar that defines the space of allowable questions/programs. Section 4.3 specifies a probabilistic generative model for sampling context-sensitive, relevant programs from this space. The remaining sections cover optimization, the program features, and alternative models (Sections 4.4-4.6).

4.1 Compositionality and computability

The analysis of the data set [19] revealed that many of the questions in the data set share similar concepts organized in different ways. For example, the concept of ship size appeared in various ways across questions:

- "How long is the blue ship?"
- "Does the blue ship have 3 tiles?"
- "Are there any ships with 4 tiles?"
- "Is the blue ship less then 4 blocks?"
- "Are all 3 ships the same size?"
- "Does the red ship have more blocks than the blue ship?"

As a result, the first key element of modeling question generation was to recognize the compositionality of these questions. In other words, there are conceptual building blocks (predicates like size(x) and plus(x,y)) that can be put together to create the meaning of other questions (plus(size(Red), size(Purple))).
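To make this concrete, here is a minimal Python sketch (our illustration, not the paper's implementation; the dictionary encoding of a board state and the three functions shown are assumptions) that executes such composed programs on one hypothetical world state:

# A world state (hypothetical encoding): each ship color -> its size.
world = {"Blue": 3, "Red": 2, "Purple": 4}

def evaluate(expr, world):
    """Execute a question program written as nested prefix tuples."""
    if isinstance(expr, str):                 # a literal, e.g. a ship color
        return expr
    op, *args = expr
    vals = [evaluate(a, world) for a in args]
    if op == "size":
        return world[vals[0]]
    if op == ">":
        return vals[0] > vals[1]
    if op == "plus":
        return vals[0] + vals[1]
    raise ValueError("unknown function: " + op)

print(evaluate(("size", "Blue"), world))                              # 3
print(evaluate((">", ("size", "Red"), ("size", "Blue")), world))      # False
print(evaluate(("plus", ("size", "Red"), ("size", "Purple")), world)) # 6

Running the same program against many candidate world states is what lets a question be scored for informativeness, as developed below.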
Combining meaningful parts to give meaning to larger expressions is a prominent approach in linguistics [10], and compositionality more generally has been an influential idea in cognitive science [4, 15, 14]. The second key element is the computability of questions. We propose that human questions are like programs that when executed on the state of a world output an answer. For example, a program that when executed looks up the number of blue tiles on a hypothesized or imagined Battleship game board and returns said number corresponds to the question "How long is the blue ship?". In this way, programs can be used to evaluate the potential for useful information from a question by executing the program over a set of possible or likely worlds and preferring questions that are informative for identifying the true world state. This approach to modeling questions is closely related to formalizing question meaning as a partition over possible worlds [6], a notion used in previous studies in linguistics [18] and psychology [9]. Machine systems for question answering have also fruitfully modeled questions as programs [24, 12], and computational work in cognitive science has modeled various kinds of concepts as programs [17, 5, 13]. An important contribution of our work here is that it tackles question asking and provides a method for generating meaningful questions/programs from scratch.

4.2 A grammar for producing questions

To capture both compositionality and computability, we represent questions in a simple programming language, based on lambda calculus and LISP. Every unit of computation in that language is surrounded by parentheses, with the first element being a function and all following elements being arguments to that function (i.e., using prefix notation). For instance, the question "How long is the blue ship?" would be represented by the small program (size Blue). More examples will be discussed below. With this step we abstracted the question representation from the exact choice of words while maintaining its meaning. As such the questions can be thought of as being represented in a "language of thought" [3]. Programs in this language can be combined, as in the example (> (size Red) (size Blue)), asking whether the red ship is larger than the blue ship. To compute an answer, first the inner parentheses are evaluated, each returning a number corresponding to the number of red or blue tiles on the game board, respectively. Then these numbers are used as arguments to the > function, which returns either True or False. A final property of interest is the generativity of questions, that is, the ability to construct novel expressions that are useful in a given context. To have a system that can generate expressions in this language we designed a grammar that is context-free with a few exceptions, inspired by [17]. The grammar consists of a set of rewrite rules, which are recursively applied to grow expressions. An expression that cannot be further grown (because no rewrite rules are applicable) is guaranteed to be an interpretable program in our language. To create a question, our grammar begins with an expression that contains the start symbol A and then rewrites the symbols in the expression by applying appropriate grammatical rules until no symbol can be rewritten.
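A minimal sketch of this rewrite process follows (the rule set below is a tiny assumed fragment for illustration, not the paper's grammar from Tables SI-1/SI-2, which is larger and recursive):

import random

# A tiny assumed fragment of the grammar: nonterminal -> possible
# right-hand sides; tuples denote function application.
RULES = {
    "A": ["B", "N"],
    "B": [(">", "N", "N"), ("=", "N", "N")],
    "N": [("size", "S")],
    "S": ["Blue", "Red", "Purple"],
}

def expand(symbol):
    """Rewrite `symbol` until no nonterminal remains (cf. Section 4.2)."""
    if symbol in RULES:
        rhs = random.choice(RULES[symbol])
        if isinstance(rhs, tuple):            # function application
            op, *args = rhs
            return (op,) + tuple(expand(a) for a in args)
        return expand(rhs)                    # unit rewrite, e.g. A -> N
    return symbol                             # terminal symbol

random.seed(0)
print(expand("A"))  # e.g. ('size', 'Red') or ('>', ('size', 'Blue'), ('size', 'Purple'))

With uniform choices over the applicable rules, the same expansion procedure doubles as the proposal distribution q(x) used for importance sampling later in the paper.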
For example, by applying the rules A → N, N → (size S), and S → Red, we arrive at the expression (size Red). Table SI-1 (supplementary materials) shows the core rewrite rules of the grammar. This set of rules is sufficient to represent all 605 questions in the human data set. To enrich the expressiveness and conciseness of our language we added lambda expressions, mapping, and set operators (Table SI-2, supplementary material). Their use can be seen in the question "Are all ships the same size?", which can be conveniently represented by (= (map (λ x (size x)) (set Blue Red Purple))). During evaluation, map sequentially assigns each element from the set to x in the λ-part and ultimately returns a vector of the three ship sizes. The three ship sizes are then compared by the = function. Of course, the same question could also be represented as (= (= (size Blue) (size Red)) (size Purple)).

4.3 Probabilistic generative model

An artificial agent using our grammar is able to express a wide range of questions. To decide which question to ask, the agent needs a measure of question usefulness. This is because not all syntactically well-formed programs are informative or useful. For instance, the program (> (size Blue) (size Blue)) representing the question "Is the blue ship larger than itself?" is syntactically coherent. However, it is not a useful question to ask (and is unlikely to be asked by a human) because the answer will always be False ("no"), no matter the true size of the blue ship. We propose a probabilistic generative model that aims to predict which questions people will ask and which not. Parameters of the model can be fit to predict the frequency that humans ask particular questions in particular contexts in the data set by [19]. Formally, fitting the generative model is a problem of density estimation in the space of question-like programs, where the space is defined by the grammar. We define the probability of question x (i.e., the probability that question x is asked) with a log-linear model. First, the energy of question x is the weighted sum of question features

E(x) = θ₁f₁(x) + θ₂f₂(x) + ... + θ_K f_K(x),   (1)

where θ_k is the weight of feature f_k of question x. We will describe all features below. Model variants will differ in the features they use. Second, the energy is related to the probability by

p(x; θ) = exp(−E(x)) / Z = exp(−E(x)) / Σ_{x′∈X} exp(−E(x′)),   (2)

where θ is the vector of feature weights, highlighting the fact that the probability is dependent on a parameterization of these weights, Z is the normalizing constant, and X is the set of all possible questions that can be generated by the grammar in Tables SI-1 and SI-2, up to a limit on question length. (We define X to be the set of questions with 100 or fewer functions. We also had to remove the rule L → (draw C) from the grammar, and the corresponding 14 questions from the data set that asked for a demonstration of a colored tile: although it is straightforward to represent those questions with this rule, the probabilistic nature of draw led to exponentially complex computations of the set of possible-world answers.) The normalizing constant needs to be approximated since X is too large to enumerate.

4.4 Optimization

The objective is to find feature weights that maximize the likelihood of asking the human-produced questions. Thus, we want to optimize

argmax_θ Σ_{i=1}^N log p(d⁽ⁱ⁾; θ),   (3)

where D = {d⁽¹⁾, ..., d⁽ᴺ⁾} are the questions (translated into programs) in the human data set. To optimize via gradient ascent, we need the gradient of the log-likelihood with respect to each θ_k, which is given by

∂ log p(D; θ) / ∂θ_k = N E_{x∼D}[f_k(x)] − N E_{x∼P_θ}[f_k(x)].   (4)

The term E_{x∼D}[f_k(x)] = (1/N) Σ_{i=1}^N f_k(d⁽ⁱ⁾) is the expected (average) feature value given the empirical set of human questions. The term E_{x∼P_θ}[f_k(x)] = Σ_{x∈X} f_k(x) p(x; θ) is the expected feature value given the model.
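A compact sketch of this moment-matching scheme follows (our illustration with made-up feature values and a uniform stand-in proposal; the real model's features and grammar-based proposal are described in Sections 4.5 and 4.4, and we carry the energy's minus sign through the gradient, so the sign may differ from the display above depending on convention):

import numpy as np

# E(x) = sum_k theta_k f_k(x) (Eq. 1); rows of a feature matrix give f(x),
# so `feats @ theta` computes the energies of many questions at once.
def grad_log_lik(data_feats, prop_feats, prop_logq, theta):
    """Gradient of the log-likelihood (Eq. 3) under p(x) proportional to
    exp(-E(x)). The model expectation E_{x~P_theta}[f(x)] is approximated
    by self-normalized importance sampling: proposal samples have feature
    rows `prop_feats` and log-proposal probabilities `prop_logq`."""
    n = data_feats.shape[0]
    logw = -(prop_feats @ theta) - prop_logq   # log( exp(-E(x)) / q(x) )
    w = np.exp(logw - logw.max())
    w /= w.sum()
    model_expect = w @ prop_feats              # approx. E_{x~P_theta}[f(x)]
    # At the optimum the two expectations match, which is the
    # moment-matching condition of Eq. 4; with the exp(-E) convention
    # the ascent direction is N * (E_model[f] - E_data[f]).
    return n * (model_expect - data_feats.mean(axis=0))

# Toy run with 3 made-up features: 50 "human questions", 1000 proposals.
rng = np.random.default_rng(0)
data_feats = rng.normal(size=(50, 3))
prop_feats = rng.normal(size=(1000, 3))
prop_logq = np.full(1000, -np.log(1000.0))     # pretend-uniform proposal
theta = np.zeros(3)
for _ in range(1000):                           # plain gradient ascent
    theta += 1e-3 * grad_log_lik(data_feats, prop_feats, prop_logq, theta)

# Eq. 2's normalizer via the same trick: Z ~= E_{x~q}[exp(-E(x)) / q(x)].
Z_hat = np.mean(np.exp(-(prop_feats @ theta) - prop_logq))
print(theta, Z_hat)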
Thus, when the gradient is zero, the model has perfectly matched the data in terms of the average values of the features. Computing the exact expected feature values from the model is intractable, since there is a very large number of possible questions (as with the normalizing constant in Equation 2). We use importance sampling to approximate this expectation. To create a proposal distribution, denoted as q(x), we use the question grammar as a probabilistic context-free grammar with uniform distributions for choosing the rewrite rules. The details of optimization are as follows. First, a large set of 150,000 questions is sampled in order to approximate the gradient at each step via importance sampling. Second, to run the procedure for a given model and training set, we ran 100,000 iterations of gradient ascent at a learning rate of 0.1. Last, for the purpose of evaluating the model (computing log-likelihood), the importance sampler is also used to approximate the normalizing constant in Eq. 2 via the estimator Z ≈ E_{x∼q}[exp(−E(x))/q(x)].

4.5 Question features

We now turn to describe the question features we considered (cf. Equation 1), namely two features for informativeness, one for length, and four for the answer type.

Informativeness. Perhaps the most important feature is a question's informativeness, which we model through a combination of Bayesian belief updating and Expected Information Gain (EIG). To compute informativeness, our agent needs to represent several components: a belief about the current world state, a way to update its belief once it receives an answer, and a sense of all possible answers to the question (we assume here that the agent's goal is to accurately identify the current world state; in a more general setting, the agent would require a cost function that defines the helpfulness of an answer as a reduced distance to the goal).

Figure 2: Out-of-sample model predictions regarding the frequency of asking a particular question. The y-axis shows the empirical question frequency, and the x-axis shows the model's energy for the question (Eq. 1, based on the full model). The rank correlation ρ is shown for each context.

In the Battleship game, an agent must identify a single hypothesis h (i.e., a hidden game board configuration) in the space of possible configurations H (i.e., possible board games). The agent can ask a question x and receive the answer d, updating its hypothesis space by applying Bayes' rule, p(h|d; x) ∝ p(d|h; x)p(h).
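This belief update, combined with the EIG score defined next (Eq. 5), can be sketched as follows; the toy hypothesis space and question encodings are our assumptions, standing in for the full space of Battleship boards:

import numpy as np

def eig(question, hypotheses, prior):
    """EIG(x) = sum_d p(d; x) * (I[p(h)] - I[p(h|d; x)]), cf. Eq. 5,
    where `question` maps a hypothesis to its one-word answer."""
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    answers = np.array([question(h) for h in hypotheses])
    score = 0.0
    for d in set(answers.tolist()):
        post = prior * (answers == d)     # Bayes' rule with a 0/1 likelihood
        p_d = post.sum()                  # p(d; x) under the prior
        score += p_d * (entropy(prior) - entropy(post / p_d))
    return score

# Toy hypothesis space: (blue, red) ship sizes with a uniform prior.
hyps = [(b, r) for b in (2, 3, 4) for r in (2, 3, 4)]
prior = np.full(len(hyps), 1.0 / len(hyps))
print(eig(lambda h: h[0], hyps, prior))         # "How long is the blue ship?" -> log2(3) bits
print(eig(lambda h: h[0] > h[0], hyps, prior))  # "Is blue larger than itself?" -> 0.0

The second call illustrates the point made above: a syntactically coherent but self-comparing question induces a single possible answer and therefore earns zero expected information gain.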
The prior p(h) is specified first by a uniform choice over the ship sizes, and second by a uniform choice over all possible configurations given those sizes. The likelihood p(d|h; x) ∝ 1 if d is a valid output of the question program x when executed on h, and zero otherwise. The Expected Information Gain (EIG) value of a question x is the expected reduction in uncertainty about the true hypothesis h, averaged across all possible answers A_x of the question

EIG(x) = Σ_{d∈A_x} p(d; x) [ I[p(h)] − I[p(h|d; x)] ],   (5)

where I[·] is the Shannon entropy. Complete details about the Bayesian ideal observer follow the approach we used in [19]. Figure 3 shows the EIG scores for the top two human questions for selected contexts. In addition to the feature f_EIG(x) = EIG(x), we added a second feature f_EIG=0(x), which is 1 if EIG is zero and 0 otherwise, to provide an offset to the linear EIG feature. Note that the EIG value of a question always depends on the game context. The remaining features described below are independent of the context.

Complexity. Purely maximizing EIG often favors long and complicated programs (e.g., polynomial questions such as size(Red)+10*size(Blue)+100*size(Purple)+...). Although a machine would not have a problem with answering such questions, it poses a problem for a human answerer. Generally speaking, people prefer concise questions, and the rather short questions in the data set reflect this. The probabilistic context-free grammar provides a measure of complexity that favors shorter programs, and we use the log probability under the grammar f_comp(x) = −log q(x) as the complexity feature.

Answer type. We added four features for the answer types Boolean, Number, Color, and Location. Each question program belongs to exactly one of these answer types (see Table SI-1). The type Orientation was subsumed in Boolean, with Horizontal as True and Vertical as False. This allows the model to capture differences in the base rates of question types (e.g., if people prefer true/false questions over other types).

Relevance. Finally, we added one auxiliary feature to deal with the fact that the grammar can produce syntactically coherent programs that have no reference to the game board at all (thus are not really questions about the game; e.g., (+ 1 1)). The "filter" feature f_∅(x) marks questions that refer to the Battleship game board with a value of 1 (see the b marker in Table SI-1) and 0 otherwise. (The features f_∅(x) and f_EIG=0(x) are not identical: questions like (size Blue) do refer to the board but will have zero EIG if the size of the blue ship is already known.)

4.6 Alternative models

To evaluate which features are important for human-like question generation, we tested the full model that uses all features, as well as variants in which we respectively lesioned one key property. The information-agnostic model did not use f_EIG(x) and f_EIG=0(x) and thus ignored the informativeness of questions. The complexity-agnostic model ignored the complexity feature. The type-agnostic model ignored the answer type features.

5 Results and Discussion

The probabilistic model of question generation was evaluated in two main ways. First, it was tasked with predicting the distribution of questions people asked in novel scenarios, which we evaluate quantitatively.

Table 1: Log likelihoods of model variants, averaged across held-out contexts.

Model                   LL
Full                    -1400.06
Information-agnostic    -1464.65
Complexity-agnostic     -22993.38
Type-agnostic           -1419.26
Second, it was tasked with generating genuinely novel questions that were not present in the data set, which we evaluate qualitatively. To make predictions, the different candidate models were fit to 15 contexts and asked to predict the remaining one (i.e., leave-one-out cross-validation; for computational reasons we had to drop contexts 1 and 2, which had especially large hypothesis spaces, but we made sure that the grammar was designed based on the full set of contexts, i.e., it could express all questions in the human question data set). This results in 64 different model fits (i.e., 4 models × 16 fits).

First, we verify that compositionality is an essential ingredient in an account of human question asking. For any given context, about 15% of the human questions did not appear in any of the other contexts. Any model that attempts to simply reuse/reweight past questions will be unable to account for this productivity (effectively achieving a log-likelihood of −∞), at least not without a much larger training set of questions. The grammar over programs provides one account of the productivity of the human behavior.

Second, we compared different models on their ability to quantitatively predict the distribution of human questions. Table 1 summarizes the model predictions based on the log-likelihood of the questions asked in the held-out contexts. The full model, with learned features for informativeness, complexity, answer type, and relevance, provides the best account of the data. In each case, lesioning its key components resulted in lower quality predictions. The complexity-agnostic model performed far worse than the others, highlighting the important role of complexity (as opposed to pure informativeness) in understanding which questions people choose to ask. The full model also outperformed the information-agnostic and type-agnostic models, suggesting that people also optimize for information gain and prefer certain question types (e.g., true/false questions are very common). Because the log-likelihood values are approximate, we bootstrapped the estimate of the normalizing constant Z and compared the full model and each alternative. The full model's log-likelihood advantage over the complexity-agnostic model held in 100% of the bootstrap samples, over the information-agnostic model in 81% of samples, and over the type-agnostic model in 88%.

Third, we considered the overall match between the best-fit model and the human question frequencies. Figure 2 shows the correlations between the energy values according to the held-out predictions of the full model (Eq. 1) and the frequencies of human questions (e.g., how often participants asked "What is the size of the red ship?" in a particular context). The results show very strong agreement for some contexts along with more modest alignment for others, with an average Spearman's rank correlation coefficient of 0.64. In comparison, the information-agnostic model achieved 0.65, the complexity-agnostic model achieved -0.36, and the type-agnostic model achieved 0.55. One limitation is that the human data is sparse (many questions were only asked once), and thus correlations are limited as a measure of fit.
However, there is, surprisingly, no correlation at all between question generation frequency and EIG alone [19], again suggesting a key role of question complexity and the other features.

Last, the model was tasked with generating novel, "human-like" questions that were not part of the human data set. Figure 3 shows five novel questions that were sampled from the model, across four different game contexts. Questions were produced by taking five weighted samples from the set of programs produced in Section 4.4 for approximate inference, with weights determined by their energy (Eq. 2). To ensure novelty, samples were rejected if they were equivalent to any human question in the training data set or to an already sampled question. Equivalence between any two questions was determined by the mutual information of their answer distributions (i.e., their partitions over possible hypotheses), or if the programs differed only through their arguments (e.g., (size Blue) is equivalent to (size Red)). The generated questions in Figure 3 demonstrate that the model is capable of asking novel (and clever) human-like questions that are useful in their respective contexts. Interesting new questions that were not observed in the human data include "Are all the ships horizontal?" (Context 7), "What is the top left of all the ship tiles?" (Context 9), "Are blue and purple ships touching and red and purple not touching (or vice versa)?" (Context 9), and "What is the column of the top left of the tiles that have the color of the bottom right corner of the board?" (Context 15). The four contexts were selected to illustrate the creative range of the model, and the complete set of contexts is shown in the supplementary materials.

6 Conclusions

People use question asking as a cognitive tool to gain information about the world. Although people ask rich and interesting questions, most active learning algorithms make only focused requests for supervised labels. Here we formalize computational aspects of the rich and productive way that people inquire about the world. Our central hypothesis is that active machine learning concepts can be generalized to operate over a complex, compositional space of programs that are evaluated over possible worlds. To that end, this project represents a step toward more capable active learning machines. There are also a number of limitations of our current approach. First, our system operates on semantic representations rather than on natural language text directly, although it is possible that such a system can interface with recent tools in computational linguistics to bridge this gap [e.g., 24]. Second, some aspects of our grammar are specific to the Battleship domain. It is often said that some knowledge is needed to ask a good question, but critics of our approach will point out that the model begins with substantial domain knowledge and special purpose structures. On the other hand, many aspects of our grammar are domain general rather than domain specific, including very general functions and programming constructs such as logical connectives, set operations, arithmetic, and mapping. To extend this approach to new domains, it is unclear exactly how much new knowledge engineering will be needed, and how much can be preserved from the current architecture. Future work will bring additional clarity as we extend our approach to different domains.
From the perspective of computational cognitive science, our results show how people balance informativeness and complexity when producing semantically coherent questions. By formulating question asking as program generation, we provide the first predictive model to date of open-ended human question asking.

Acknowledgments

We thank Chris Barker, Sam Bowman, Noah Goodman, and Doug Markant for feedback and advice. This research was supported by NSF grant BCS-1255538, the John Templeton Foundation Varieties of Understanding project, a John S. McDonnell Foundation Scholar Award to TMG, and the Moore-Sloan Data Science Environment at NYU.

Figure 3: Novel questions generated by the probabilistic model. Across four contexts, five model questions are displayed, next to the two most informative human questions for comparison. Model questions were sampled such that they are not equivalent to any in the training set. The natural language translations of the question programs are provided for interpretation. Questions with lower energy are more likely according to the model.

References

[1] A. Bordes and J. Weston. Learning End-to-End Goal-Oriented Dialog. arXiv preprint, 2016.
[2] M. M. Chouinard. Children's Questions: A Mechanism for Cognitive Development. Monographs of the Society for Research in Child Development, 72(1):1–129, 2007.
[3] J. A. Fodor. The Language of Thought. Harvard University Press, 1975.
[4] J. A. Fodor and Z. W. Pylyshyn. Connectionism and cognitive architecture: A critical analysis. Cognition, 28:3–71, 1988.
[5] N. D. Goodman, J. B. Tenenbaum, and T. Gerstenberg. Concepts in a probabilistic language of thought. In E. Margolis and S. Laurence, editors, Concepts: New Directions. MIT Press, Cambridge, MA, 2015.
[6] J. Groenendijk and M. Stokhof. On the Semantics of Questions and the Pragmatics of Answers. PhD thesis, University of Amsterdam, 1984.
[7] T. M. Gureckis and D. B. Markant. Active Learning Strategies in a Spatial Concept Learning Game. In Proceedings of the 31st Annual Conference of the Cognitive Science Society, 2009.
[8] T. M. Gureckis and D. B. Markant. Self-Directed Learning: A Cognitive and Computational Perspective. Perspectives on Psychological Science, 7(5):464–481, 2012.
[9] R. X. D. Hawkins, A. Stuhlmuller, J. Degen, and N. D. Goodman. Why do you ask? Good questions provoke informative answers. In Proceedings of the 37th Annual Conference of the Cognitive Science Society, 2015.
[10] P. Jacobson. Compositional Semantics. Oxford University Press, 2014.
[11] U. Jain, Z. Zhang, and A. Schwing. Creativity: Generating Diverse Questions using Variational Autoencoders. arXiv preprint, 2017.
[12] J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman, L. Fei-Fei, C. L. Zitnick, and R. Girshick. Inferring and Executing Programs for Visual Reasoning. In International Conference on Computer Vision, 2017.
[13] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
[14] B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 2017.
[15] G. F. Marcus. The Algebraic Mind: Integrating Connectionism and Cognitive Science. MIT Press, Cambridge, MA, 2003.
[16] N. Mostafazadeh, I. Misra, J. Devlin, M. Mitchell, X. He, and L. Vanderwende. Generating Natural Questions About an Image.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1802–1813, 2016.
[17] S. T. Piantadosi, J. B. Tenenbaum, and N. D. Goodman. Bootstrapping in a language of thought: A formal model of numerical concept learning. Cognition, 123(2):199–217, 2012.
[18] C. Roberts. Information structure in discourse: Towards an integrated formal theory of pragmatics. Working Papers in Linguistics, Ohio State University Department of Linguistics, pages 91–136, 1996.
[19] A. Rothe, B. M. Lake, and T. M. Gureckis. Asking and evaluating natural language questions. In A. Papafragou, D. Grodner, D. Mirman, and J. Trueswell, editors, Proceedings of the 38th Annual Conference of the Cognitive Science Society, Austin, TX, 2016.
[20] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016.
[21] B. Settles. Active Learning. Morgan & Claypool Publishers, 2012.
[22] F. Strub, H. de Vries, J. Mary, B. Piot, A. Courville, and O. Pietquin. End-to-end optimization of goal-driven and visually grounded dialogue systems. In International Joint Conference on Artificial Intelligence (IJCAI), 2017.
[23] A. K. Vijayakumar, M. Cogswell, R. R. Selvaraju, Q. Sun, S. Lee, D. Crandall, and D. Batra. Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models. arXiv preprint, 2016.
[24] Y. Wang, J. Berant, and P. Liang. Building a Semantic Parser Overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332–1342, 2015.
[25] S. Young, M. Gašić, B. Thomson, and J. D. Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179, 2013.
Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces

Songbai Yan (UC San Diego, La Jolla, CA) [email protected]
Chicheng Zhang* (Microsoft Research, New York, NY) [email protected]
*Work done while at UC San Diego.

Abstract

It has been a long-standing problem to efficiently learn a halfspace using as few labels as possible in the presence of noise. In this work, we propose an efficient Perceptron-based algorithm for actively learning homogeneous halfspaces under the uniform distribution over the unit sphere. Under the bounded noise condition [49], where each label is flipped with probability at most η < 1/2, our algorithm achieves a near-optimal label complexity of Õ(d/(1−2η)² · ln(1/ε)) in time Õ(d²/(ε(1−2η)³)). Under the adversarial noise condition [6, 45, 42], where at most an Ω̃(ε) fraction of labels can be flipped, our algorithm achieves a near-optimal label complexity of Õ(d · ln(1/ε)) in time Õ(d²/ε). Furthermore, we show that our active learning algorithm can be converted to an efficient passive learning algorithm that has near-optimal sample complexities with respect to ε and d.

(Notation: Õ(f(·)) := O(f(·) ln f(·)) and Ω̃(f(·)) := Ω(f(·)/ln f(·)); we say f(·) = Θ̃(g(·)) if f(·) = Õ(g(·)) and f(·) = Ω̃(g(·)).)

1 Introduction

We study the problem of designing efficient noise-tolerant algorithms for actively learning homogeneous halfspaces in the streaming setting. We are given access to a data distribution from which we can draw unlabeled examples, and a noisy labeling oracle O that we can query for labels. The goal is to find a computationally efficient algorithm to learn a halfspace that best classifies the data while making as few queries to the labeling oracle as possible. Active learning arises naturally in many machine learning applications where unlabeled examples are abundant and cheap, but labeling requires human effort and is expensive. For those applications, one natural question is whether we can learn an accurate classifier using as few labels as possible. Active learning addresses this question by allowing the learning algorithm to sequentially select examples to query for labels, and avoid requesting labels which are less informative, or can be inferred from previously-observed examples. There has been a large body of work on the theory of active learning, showing sharp distribution-dependent label complexity bounds [21, 11, 34, 27, 35, 46, 60, 41]. However, most of these general active learning algorithms rely on solving empirical risk minimization problems, which are computationally hard in the presence of noise [5]. On the other hand, existing computationally efficient algorithms for learning halfspaces [17, 29, 42, 45, 6, 23, 7, 8] are not optimal in terms of label requirements. These algorithms have different degrees of noise tolerance (e.g. adversarial noise [6], malicious noise [43], random classification noise [3], bounded noise [49], etc.), and run in time polynomial in 1/ε and d. Some of them naturally exploit the utility of active learning [6, 7, 8], but they do not achieve the sharpest label complexity bounds in contrast to those computationally-inefficient active learning algorithms [10, 9, 60]. Therefore, a natural question is: is there any active learning halfspace algorithm that is computationally efficient, and has a minimum label requirement? This has been posed as an open problem in [50].
In the realizable setting, [26, 10, 9, 56] give efficient algorithms that have an optimal label complexity of Õ(d ln(1/ε)) under some distributional assumptions. However, the challenge still remains open in the nonrealizable setting. It has been shown that learning halfspaces with agnostic noise even under a Gaussian unlabeled distribution is hard [44]. Nonetheless, we give an affirmative answer to this question under two moderate noise settings: bounded noise and adversarial noise.

1.1 Our Results

We propose a Perceptron-based algorithm, ACTIVE-PERCEPTRON, for actively learning homogeneous halfspaces under the uniform distribution over the unit sphere. It works under two noise settings: bounded noise and adversarial noise. Our work answers an open question by [26] on whether Perceptron-based active learning algorithms can be modified to tolerate label noise.

In the η-bounded noise setting (also known as the Massart noise model [49]), the label of an example x ∈ R^d is generated by sign(u · x) for some underlying halfspace u, and flipped with probability η(x) ≤ η < 1/2. Our algorithm runs in time Õ(d²/((1−2η)³ε)), and requires Õ(d/(1−2η)² · ln(1/ε)) labels. We show that this label complexity is nearly optimal by providing an almost matching information-theoretic lower bound of Ω̃(d/(1−2η)² · ln(1/ε)). Our time and label complexities substantially improve over the state of the art result of [8], which runs in time Õ(d^{O(1/(1−2η)⁴)} · 1/ε) and requires Õ(d^{O(1/(1−2η)⁴)} · ln(1/ε)) labels.

Our main theorem on learning under bounded noise is as follows:

Theorem 2 (Informal). Suppose the labeling oracle O satisfies the η-bounded noise condition with respect to u. Then for ACTIVE-PERCEPTRON, with probability at least 1−δ: (1) The output halfspace v is such that P[sign(v · X) ≠ sign(u · X)] ≤ ε; (2) The number of label queries to oracle O is at most Õ(d/(1−2η)² · ln(1/ε)); (3) The number of unlabeled examples drawn is at most Õ(d/((1−2η)³ε)); (4) The algorithm runs in time Õ(d²/((1−2η)³ε)).

In addition, we show that our algorithm also works in a more challenging setting, the ν-adversarial noise setting [6, 42, 45] (note that the adversarial noise model is not the same as that in online learning [18], where each example can be chosen adversarially). In this setting, the examples still come i.i.d. from a distribution, but the assumption on the labels is just that P[sign(u · X) ≠ Y] ≤ ν for some halfspace u. Under this assumption, the Bayes classifier may not be a halfspace. We show that our algorithm achieves an error of ε while tolerating a noise level of ν = Ω(ε/(ln d + ln ln(1/ε))). It runs in time Õ(d²/ε), and requires only Õ(d · ln(1/ε)) labels, which is near-optimal. ACTIVE-PERCEPTRON has a label complexity bound that matches the state of the art result of [39] (the bound is implicit in [39] via a refined analysis of the algorithm of [6]; see their Lemma 8 for details), while having a lower running time.

Our main theorem on learning under adversarial noise is as follows:

Theorem 3 (Informal). Suppose the labeling oracle O satisfies the ν-adversarial noise condition with respect to u, where ν ≤ Ω(ε/(ln d + ln ln(1/ε))). Then for ACTIVE-PERCEPTRON, with probability at least 1−δ: (1) The output halfspace v is such that P[sign(v · X) ≠ sign(u · X)] ≤ ε; (2) The number of label queries to oracle O is at most Õ(d · ln(1/ε)); (3) The number of unlabeled examples drawn is at most Õ(d/ε); (4) The algorithm runs in time Õ(d²/ε).
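To convey the flavor of such a Perceptron-based active learner, here is a minimal sketch. It is our simplification, not ACTIVE-PERCEPTRON itself: the fixed query band b, the sample budget, and the use of the modified Perceptron update from the realizable-case algorithms [26] are placeholder choices; the actual algorithm's epoch schedule and noise handling follow the paper.

import numpy as np

def sample_sphere(d, rng):
    x = rng.standard_normal(d)
    return x / np.linalg.norm(x)

def active_perceptron_sketch(oracle, d, n_queries, b=0.1, seed=0):
    """Toy margin-based active Perceptron: query labels only inside the
    band |w.x| < b and apply the modified Perceptron update
    w <- w - 2(w.x)x on mistakes, which keeps ||w|| = 1 (cf. [26])."""
    rng = np.random.default_rng(seed)
    w = sample_sphere(d, rng)
    queries = 0
    while queries < n_queries:
        x = sample_sphere(d, rng)       # unlabeled draw from D_X
        if abs(w @ x) >= b:             # confident region: skip the label
            continue
        y = oracle(x)                   # label query to O
        queries += 1
        if np.sign(w @ x) != y:
            w = w - 2 * (w @ x) * x     # norm-preserving update
    return w

# Toy run with a bounded-noise oracle: flip sign(u.x) with probability eta.
d, eta = 10, 0.1
rng = np.random.default_rng(1)
u = sample_sphere(d, rng)
oracle = lambda x: int(np.sign(u @ x)) * (1 if rng.random() > eta else -1)
w = active_perceptron_sketch(oracle, d, n_queries=2000)
print("angle to u (rad):", np.arccos(np.clip(u @ w, -1, 1)))

The two ingredients, querying only near the current decision boundary and a norm-preserving update, are what allow the label count to scale with ln(1/ε) rather than 1/ε in the analyses discussed here.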
Table 1: A comparison of algorithms for active learning of halfspaces under the uniform distribution, in the η-bounded noise model.

Algorithm | Label Complexity | Time Complexity
[10, 9, 60] | Õ(d/(1−2η)² · ln(1/ε)) | superpoly(d, 1/ε)*
[8] | Õ(d^{O(1/(1−2η)⁴)} · ln(1/ε)) | Õ(d^{O(1/(1−2η)⁴)}/ε)
Our Work | Õ(d/(1−2η)² · ln(1/ε)) | Õ(d²/((1−2η)³ε))

(*These algorithms need to minimize the 0-1 loss, the best known method for which requires superpolynomial time.)

Table 2: A comparison of algorithms for active learning of halfspaces under the uniform distribution, in the ν-adversarial noise model.

Algorithm | Noise Tolerance | Label Complexity | Time Complexity
[60] | ν = Ω(ε) | Õ(d ln(1/ε)) | superpoly(d, 1/ε)
[39] | ν = Ω(ε) | Õ(d ln(1/ε)) | poly(d, 1/ε)
Our Work | ν = Ω(ε/(ln d + ln ln(1/ε))) | Õ(d ln(1/ε)) | Õ(d²/ε)

Throughout the paper, ACTIVE-PERCEPTRON is shown to work if the unlabeled examples are drawn uniformly from the unit sphere. The algorithm and analysis can be easily generalized to any spherically symmetric distribution, for example isotropic Gaussian distributions. They can also be generalized to distributions whose densities with respect to the uniform distribution are bounded away from 0. In addition, we show in Section 6 that ACTIVE-PERCEPTRON can be converted to a passive learning algorithm, PASSIVE-PERCEPTRON, that has near-optimal sample complexities with respect to ε and d under the two noise settings. We defer the discussion to the end of the paper.

2 Related Work

Active Learning. The recent decades have seen much success in both the theory and practice of active learning; see the excellent surveys by [54, 37, 25]. On the theory side, many label-efficient active learning algorithms have been proposed and analyzed; an incomplete list includes [21, 11, 34, 27, 35, 46, 60, 41]. Most of these algorithms rely on solving empirical risk minimization problems, which are computationally hard in the presence of noise [5].

Computational Hardness of Learning Halfspaces. Efficient learning of halfspaces is one of the central problems in machine learning [22]. In the realizable case, it is well known that linear programming will find a consistent hypothesis over the data efficiently. In the nonrealizable setting, however, the problem is much more challenging. A series of papers have shown the hardness of learning halfspaces with agnostic noise [5, 30, 33, 44, 23]. The state of the art result [23] shows that under standard complexity-theoretic assumptions, there exists a data distribution such that the best linear classifier has error o(1), but no polynomial time algorithm can achieve an error of at most 1/2 − 1/d^c for every c > 0, even with improper learning. [44] shows that under standard assumptions, even if the unlabeled distribution is Gaussian, any agnostic halfspace learning algorithm must run in time (1/ε)^{Ω(ln d)} to achieve an excess error of ε. These results indicate that, to have nontrivial guarantees for learning halfspaces with noise in polynomial time, one has to make additional assumptions on the data distribution over instances and labels.

Efficient Active Learning of Halfspaces. Despite considerable effort, there are only a few halfspace learning algorithms that are both computationally efficient and label efficient, even under the uniform distribution. In the realizable setting, [26, 10, 9] propose computationally efficient active learning algorithms which have an optimal label complexity of Õ(d ln(1/ε)). Since it is believed to be hard to learn halfspaces in the general agnostic setting, it is natural to consider algorithms that work under more moderate noise conditions.
Under the bounded noise setting [49], the only known algorithms that are both label-efficient and computationally efficient are [7, 8]. [7] uses a margin-based framework which queries the labels of examples near the decision boundary. To achieve computational efficiency, it adaptively chooses a sequence of hinge loss minimization problems to optimize, as opposed to directly optimizing the 0-1 loss. It works only when the label flipping probability upper bound η is small (η ≤ 1.8 × 10⁻⁶). [8] improves over [7] by adapting a polynomial regression procedure into the margin-based framework. It works for any η < 1/2, but its label complexity is Õ(d^{O(1/(1−2η)⁴)} ln(1/ε)), which is far worse than the information-theoretic lower bound Ω(d/(1−2η)² · ln(1/ε)). Recently, [20] gave an efficient algorithm with a near-optimal label complexity under the membership query model, where the learner can query on synthesized points. In contrast, in our stream-based model the learner can only query on points drawn from the data distribution. We note that learning in the stream-based model is harder than in the membership query model, and it is unclear how to transform the DC algorithm in [20] into a computationally efficient stream-based active learning algorithm.

Under the more challenging ν-adversarial noise setting, [6] proposes a margin-based algorithm that reduces the problem to a sequence of hinge loss minimization problems. Their algorithm achieves an error of ε in polynomial time when ν = Ω(ε), but requires Õ(d² ln(1/ε)) labels. Later, [39] performs a refined analysis to achieve a near-optimal label complexity of Õ(d ln(1/ε)), but the time complexity of the algorithm is still an unspecified high-order polynomial.

Tables 1 and 2 present comparisons between our results and the results most closely related to ours in the literature. Due to space limitations, discussions of additional related work are deferred to Appendix A.

3 Definitions and Settings

We consider learning homogeneous halfspaces under the uniform distribution. The instance space X is the unit sphere in R^d, which we denote by S^{d−1} := {x ∈ R^d : ‖x‖ = 1}. We assume d ≥ 3 throughout this paper. The label space is Y = {+1, −1}. We assume all data points (x, y) are drawn i.i.d. from an underlying distribution D over X × Y. We denote by D_X the marginal of D over X (which is uniform over S^{d−1}), and by D_{Y|X} the conditional distribution of Y given X. Our algorithm is allowed to draw unlabeled examples x ∈ X from D_X, and to make queries to a labeling oracle O for labels. Upon query x, O returns a label y drawn from D_{Y|X=x}. The hypothesis class of interest is the set of homogeneous halfspaces H := {h_w(x) = sign(w · x) | w ∈ S^{d−1}}. For any hypothesis h ∈ H, we define its error rate err(h) := P_D[h(X) ≠ Y]; we drop the subscript D in P_D when it is clear from the context. Given a dataset S = {(X₁, Y₁), ..., (X_m, Y_m)}, we define the empirical error rate of h over S as err_S(h) := (1/m) · Σ_{i=1}^m 1{h(Xᵢ) ≠ Yᵢ}.

Definition 1 (Bounded Noise [49]). We say that the labeling oracle O satisfies the η-bounded noise condition for some η ∈ [0, 1/2) with respect to u if, for any x, P[Y ≠ sign(u · x) | X = x] ≤ η.

It can be seen that under the η-bounded noise condition, h_u is the Bayes classifier.
Definition 2 (Adversarial Noise [6]). We say that the labeling oracle O satisfies the ν-adversarial noise condition for some ν ∈ [0, 1] with respect to u if P[Y ≠ sign(u · X)] ≤ ν.

For two unit vectors v₁, v₂, denote by θ(v₁, v₂) = arccos(v₁ · v₂) the angle between them. The following lemma gives relationships between errors and angles (see also Lemma 1 in [8]).

Lemma 1. For any v₁, v₂ ∈ S^{d−1}, |err(h_{v₁}) − err(h_{v₂})| ≤ P[h_{v₁}(X) ≠ h_{v₂}(X)] = θ(v₁, v₂)/π. Additionally, if the labeling oracle satisfies the η-bounded noise condition with respect to u, then for any vector v, |err(h_v) − err(h_u)| ≥ (1 − 2η) · P[h_v(X) ≠ h_u(X)] = ((1 − 2η)/π) · θ(v, u).

Given access to unlabeled examples drawn from D_X and a labeling oracle O, our goal is to find a polynomial time algorithm A such that, with probability at least 1 − δ, A outputs a halfspace h_v ∈ H with P[sign(v · X) ≠ sign(u · X)] ≤ ε, for some target accuracy ε and confidence δ. (By Lemma 1, this guarantees that the excess error of h_v is at most ε, namely err(h_v) − err(h_u) ≤ ε.) The desired algorithm should make as few queries to the labeling oracle O as possible.

We say an algorithm A achieves a label complexity of Λ(ε, δ) if, for any target halfspace h_u ∈ H, with probability at least 1 − δ, A outputs a halfspace h_v ∈ H such that err(h_v) ≤ err(h_u) + ε, and requests at most Λ(ε, δ) labels from the oracle O.
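The equality in Lemma 1 is easy to verify empirically. The following quick Monte Carlo check (our own, assuming NumPy) estimates the disagreement probability of two random halfspaces under the uniform distribution on S^{d−1} and compares it to θ(v₁, v₂)/π.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 20, 200_000

# two random unit vectors defining homogeneous halfspaces
v1 = rng.standard_normal(d); v1 /= np.linalg.norm(v1)
v2 = rng.standard_normal(d); v2 /= np.linalg.norm(v2)

# sample n points uniformly from the unit sphere
x = rng.standard_normal((n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)

disagree = np.mean(np.sign(x @ v1) != np.sign(x @ v2))
theta = np.arccos(np.clip(v1 @ v2, -1.0, 1.0))
print(f"empirical {disagree:.4f}  vs  theta/pi {theta / np.pi:.4f}")
```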
4 Main Algorithm

Our main algorithm, ACTIVE-PERCEPTRON (Algorithm 1), works in epochs. It works under the bounded and the adversarial noise models, provided its sample schedule {m_k} and band widths {b_k} are set appropriately with respect to each noise model. At the beginning of epoch k, it assumes an upper bound of π · 2^{−k} on θ(v_{k−1}, u), the angle between the current iterate v_{k−1} and the underlying halfspace u; as we will see, this can be shown to hold with high probability inductively. It then calls the procedure MODIFIED-PERCEPTRON (Algorithm 2) to find a new iterate v_k, which can be shown to have angle with u at most π · 2^{−(k+1)} with high probability. The algorithm ends when a total of k₀ = ⌈log₂(1/ε)⌉ epochs have passed. For simplicity, we assume for the rest of the paper that the angle between the initial halfspace v₀ and the underlying halfspace u is acute, that is, θ(v₀, u) ≤ π/2; Appendix F shows that this assumption can be removed with a constant overhead in label and time complexities.

Algorithm 1 ACTIVE-PERCEPTRON
Input: labeling oracle O, initial halfspace v₀, target error ε, confidence δ, sample schedule {m_k}, band widths {b_k}.
Output: learned halfspace v.
1: Let k₀ = ⌈log₂(1/ε)⌉.
2: for k = 1, 2, ..., k₀ do
3:   v_k ← MODIFIED-PERCEPTRON(O, v_{k−1}, π · 2^{−k}, δ/(k(k+1)), m_k, b_k).
4: end for
5: return v_{k₀}.

The procedure MODIFIED-PERCEPTRON (Algorithm 2) is the core component of ACTIVE-PERCEPTRON. It sequentially performs a modified Perceptron update rule on selected new examples (x_t, y_t) [51, 17, 26]:

w_{t+1} ← w_t − 2 · 1{y_t (w_t · x_t) < 0} · (w_t · x_t) · x_t.   (1)

Define θ_t := θ(w_t, u). Update rule (1) implies the following relationship between θ_{t+1} and θ_t (see Lemma 8 in Appendix E for its proof):

cos θ_{t+1} − cos θ_t = −2 · 1{y_t (w_t · x_t) < 0} · (w_t · x_t) · (u · x_t).   (2)

This motivates us to take cos θ_t as our measure of progress; we would like to drive cos θ_t up to 1 (so that θ_t goes down to 0) as fast as possible. To this end, MODIFIED-PERCEPTRON samples new points x_t under the time-varying distributions D_X|R_t, where R_t = {x ∈ S^{d−1} : b/2 ≤ w_t · x ≤ b} is a band inside the unit sphere, and queries for their labels. The rationale behind the choice of R_t is twofold:

1. We set R_t to have a probability mass of Ω̃(ε), so that the time complexity of rejection sampling is at most Õ(1/ε) per example. Moreover, in the adversarial noise setting, we set R_t large enough that its probability mass dominates the noise of magnitude ν.
2. Unlike the active Perceptron algorithm in [26] or other margin-based approaches (for example [55, 10]), where examples with small margin are queried, we query the labels of examples with margin in the range [b/2, b]. From a technical perspective, this ensures that θ_t decreases by a decent amount in expectation (see Lemmas 9 and 10 for details).

Following the insight of [32], we remark that the modified Perceptron update (1) on the distribution D_X|R_t can alternatively be viewed as performing stochastic gradient descent on the special non-convex loss function ℓ(w, (x, y)) = min(1, max(0, 1 − (2/b) · y (w · x))). It is an interesting open question whether optimizing this new loss function can lead to improved empirical results for learning halfspaces.

Algorithm 2 MODIFIED-PERCEPTRON
Input: labeling oracle O, initial halfspace w₀, angle upper bound θ, confidence δ, number of iterations m, band width b.
Output: improved halfspace w_m.
1: for t = 0, 1, 2, ..., m − 1 do
2:   Define the region R_t = {x ∈ S^{d−1} : b/2 ≤ w_t · x ≤ b}.
3:   Rejection sample x_t ~ D_X|R_t; in other words, draw x_t from D_X until x_t is in R_t. Query O for its label y_t.
4:   w_{t+1} ← w_t − 2 · 1{y_t (w_t · x_t) < 0} · (w_t · x_t) · x_t.
5: end for
6: return w_m.
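To make the pseudocode concrete, here is a compact Python sketch of our own of Algorithms 1 and 2, with the oracle simulated by constant-probability label flips. The schedules m_k and b_k used in the demo are illustrative placeholders, not the paper's exact choices (which are set in Appendix C).

```python
import numpy as np

def unif_sphere(d, rng):
    g = rng.standard_normal(d)
    return g / np.linalg.norm(g)

def modified_perceptron(oracle, w, m, b, d, rng):
    """Algorithm 2: m modified Perceptron updates on band-sampled points."""
    for _ in range(m):
        while True:                        # rejection sample x in the band R_t
            x = unif_sphere(d, rng)
            if b / 2 <= w @ x <= b:
                break
        y = oracle(x)
        if y * (w @ x) < 0:                # mistake: reflect w across x
            w = w - 2 * (w @ x) * x        # norm-preserving since ||x|| = 1
            w = w / np.linalg.norm(w)      # guard against float drift
    return w

def active_perceptron(oracle, v0, eps, d, rng, m_k, b_k):
    """Algorithm 1: run Algorithm 2 for k0 = ceil(log2(1/eps)) epochs."""
    v = v0
    for k in range(1, int(np.ceil(np.log2(1 / eps))) + 1):
        v = modified_perceptron(oracle, v, m_k(k), b_k(k), d, rng)
    return v

# Demo: eta-bounded noise simulated with a constant flip probability.
rng = np.random.default_rng(0)
d, eta, eps = 10, 0.1, 0.05
u = np.zeros(d); u[0] = 1.0
oracle = lambda x: np.sign(u @ x) * (1 if rng.random() > eta else -1)
v0 = unif_sphere(d, rng)
if v0 @ u < 0:
    v0 = -v0                               # acute initial angle, as assumed
v = active_perceptron(oracle, v0, eps, d, rng,
                      m_k=lambda k: 300, b_k=lambda k: 2.0 ** -k / np.sqrt(d))
print("final angle:", np.arccos(np.clip(v @ u, -1, 1)))
```

The band width shrinking as 2^{−k}/√d mirrors the role of b_k in the theorems: margins w · x under the uniform sphere distribution are typically of order 1/√d, so the band's probability mass scales roughly like 2^{−k}.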
5 Performance Guarantees

We show that ACTIVE-PERCEPTRON works in the bounded and the adversarial noise models, achieving computational efficiency and near-optimal label complexities. To this end, we first give a lower bound on the label complexity under bounded noise, and then give computational and label complexity upper bounds under the two noise conditions respectively. We defer all proofs to the Appendix.

5.1 A Lower Bound under Bounded Noise

We first present an information-theoretic lower bound on the label complexity in the bounded noise setting under the uniform distribution. This extends the distribution-free lower bounds of [53, 37] and generalizes the realizable-case lower bound of [47] to the bounded noise setting. Our lower bound can also be viewed as an extension of Theorem 3 of [59]; specifically, it addresses the hardness under the Tsybakov noise condition with exponent κ = 0, while Theorem 3 of [59] provides lower bounds for κ ∈ (0, 1).

Theorem 1. For any d > 4, 0 ≤ η < 1/2, 0 < ε ≤ 1/(4π), 0 < δ ≤ 1/4, and any active learning algorithm A, there is a u ∈ S^{d−1} and a labeling oracle O satisfying the η-bounded noise condition with respect to u, such that if, with probability at least 1 − δ, A makes at most n label queries to O and outputs v ∈ S^{d−1} with P[sign(v · X) ≠ sign(u · X)] ≤ ε, then n = Ω( d · log(1/ε)/(1−2η)² + log(1/δ)/(1−2η)² ).

5.2 Bounded Noise

We establish Theorem 2 in the bounded noise setting. The theorem implies that, with appropriate settings of the input parameters, ACTIVE-PERCEPTRON efficiently learns a halfspace of excess error at most ε with probability at least 1 − δ, under the assumption that D_X is uniform over the unit sphere and O has bounded noise. In addition, it queries at most Õ(d/(1−2η)² · ln(1/ε)) labels. This matches the lower bound of Theorem 1 and improves over the state of the art result of [8], where a label complexity of Õ(d^{O(1/(1−2η)⁴)} ln(1/ε)) is shown using a different algorithm. The proof and the precise setting of the parameters (m_k and b_k) are given in Appendix C.

Theorem 2 (ACTIVE-PERCEPTRON under Bounded Noise). Suppose Algorithm 1 has inputs: a labeling oracle O that satisfies the η-bounded noise condition with respect to halfspace u; an initial halfspace v₀ such that θ(v₀, u) ∈ [0, π/2]; target error ε; confidence δ; sample schedule {m_k} with m_k = Θ( d/(1−2η)² · (ln(d/(1−2η)²) + ln(k/δ)) ); and band widths {b_k} with b_k = Θ( 2^{−k}(1−2η)/√(d · ln(k·m_k/δ)) ). Then with probability at least 1 − δ:
1. The output halfspace v is such that P[sign(v · X) ≠ sign(u · X)] ≤ ε.
2. The number of label queries is O( d/(1−2η)² · ln(1/ε) · (ln(d/(1−2η)²) + ln(1/δ) + ln ln(1/ε)) ).
3. The number of unlabeled examples drawn is O( d/((1−2η)³ε) · (ln(d/(1−2η)²) + ln(1/δ) + ln ln(1/ε))² · ln(1/ε) ).
4. The algorithm runs in time O( d²/((1−2η)³ε) · (ln(d/(1−2η)²) + ln(1/δ) + ln ln(1/ε))² · ln(1/ε) ).

The theorem follows from Lemma 2 below. The key ingredient of the lemma is a delicate analysis of the dynamics of the angles {θ_t}_{t=0}^{m}, where θ_t = θ(w_t, u) is the angle between the iterate w_t and the halfspace u. Since x_t is randomly sampled and y_t is noisy, we are only able to show that θ_t decreases by a decent amount in expectation. To remedy the stochastic fluctuations, we apply martingale concentration inequalities to carefully control the upper envelope of the sequence {θ_t}_{t=0}^{m}.

Lemma 2 (MODIFIED-PERCEPTRON under Bounded Noise). Suppose Algorithm 2 has inputs: a labeling oracle O that satisfies the η-bounded noise condition with respect to halfspace u; an initial halfspace w₀ and an angle upper bound θ ∈ (0, π/2] such that θ(w₀, u) ≤ θ; confidence δ; number of iterations m = Θ( d/(1−2η)² · (ln(d/(1−2η)²) + ln(1/δ)) ); and band width b = Θ( θ(1−2η)/√(d · ln(m/δ)) ). Then with probability at least 1 − δ:
1. The output halfspace w_m is such that θ(w_m, u) ≤ θ/2.
2. The number of label queries is O( d/(1−2η)² · (ln(d/(1−2η)²) + ln(1/δ)) ).
3. The number of unlabeled examples drawn is O( d/((1−2η)³θ) · (ln(d/(1−2η)²) + ln(1/δ))² ).
4. The algorithm runs in time O( d²/((1−2η)³θ) · (ln(d/(1−2η)²) + ln(1/δ))² ).

5.3 Adversarial Noise

We establish Theorem 3 in the adversarial noise setting. The theorem implies that, with appropriate settings of the input parameters, ACTIVE-PERCEPTRON efficiently learns a halfspace of excess error at most ε with probability at least 1 − δ, under the assumption that D_X is uniform over the unit sphere and O has adversarial noise of magnitude ν = Ω(ε/(ln d + ln ln(1/ε))). In addition, it queries at most Õ(d ln(1/ε)) labels. Our label complexity bound is information-theoretically optimal [47] and matches the state of the art result of [39]. The benefit of our approach is computational: it has a running time of Õ(d²/ε), while [39] needs to solve a convex optimization problem whose running time is some polynomial over d and 1/ε with an unspecified degree. The proof and the precise setting of the parameters (m_k and b_k) are given in Appendix C.

Theorem 3 (ACTIVE-PERCEPTRON under Adversarial Noise). Suppose Algorithm 1 has inputs: a labeling oracle O that satisfies the ν-adversarial noise condition with respect to halfspace u, where ν = O(ε/(ln d + ln ln(1/ε))); an initial halfspace v₀ such that θ(v₀, u) ≤ π/2; target error ε; confidence δ; sample schedule {m_k} with m_k = Θ( d(ln d + ln(k/δ)) ); and band widths {b_k} with b_k = Θ( 2^{−k}/√(d · ln(k·m_k/δ)) ). Then with probability at least 1 − δ:
1. The output halfspace v is such that P[sign(v · X) ≠ sign(u · X)] ≤ ε.
2. The number of label queries is O( d · ln(1/ε) · (ln d + ln(1/δ) + ln ln(1/ε)) ).
3. The number of unlabeled examples drawn is at most O( d · (ln d + ln(1/δ) + ln ln(1/ε))² · (1/ε) · ln(1/ε) ).
4. The algorithm runs in time O( d² · (ln d + ln(1/δ) + ln ln(1/ε))² · (1/ε) · ln(1/ε) ).

The theorem follows from Lemma 3 below, whose proof is similar to that of Lemma 2.

Lemma 3 (MODIFIED-PERCEPTRON under Adversarial Noise). Suppose Algorithm 2 has inputs: a labeling oracle O that satisfies the ν-adversarial noise condition with respect to halfspace u, where additionally ν = O( θ/√(ln(m/δ)) ); an initial halfspace w₀ and an angle upper bound θ ∈ (0, π/2] such that θ(w₀, u) ≤ θ; confidence δ; number of iterations m = Θ( d(ln d + ln(1/δ)) ); and band width b = Θ( θ/√(d · ln(m/δ)) ). Then with probability at least 1 − δ:
1. The output halfspace w_m is such that θ(w_m, u) ≤ θ/2.
2. The number of label queries is O( d · (ln d + ln(1/δ)) ).
3. The number of unlabeled examples drawn is O( d · (ln d + ln(1/δ))² · (1/θ) ).
4. The algorithm runs in time O( d² · (ln d + ln(1/δ))² · (1/θ) ).

6 Implications to Passive Learning

ACTIVE-PERCEPTRON can be converted to a passive learning algorithm, PASSIVE-PERCEPTRON, for learning homogeneous halfspaces under the uniform distribution over the unit sphere. PASSIVE-PERCEPTRON has PAC sample complexities close to the lower bounds under the two noise models. We give a formal description of PASSIVE-PERCEPTRON in Appendix B, and its formal guarantees in the corollaries below, which are immediate consequences of Theorems 2 and 3.

In the η-bounded noise model, the sample complexity of PASSIVE-PERCEPTRON improves over the state of the art result of [8], where a sample complexity of Õ(d^{O(1/(1−2η)⁴)}/ε) is obtained. The bound has the same dependency on ε and d as the minimax upper bound of Θ̃(d/(ε(1−2η))) by [49], which is achieved by a computationally inefficient ERM algorithm.

Corollary 1 (PASSIVE-PERCEPTRON under Bounded Noise). Suppose PASSIVE-PERCEPTRON has inputs: a distribution D that satisfies the η-bounded noise condition with respect to u; an initial halfspace v₀; target error ε; confidence δ; sample schedule {m_k} with m_k = Θ( d/(1−2η)² · (ln(d/(1−2η)²) + ln(k/δ)) ); and band widths {b_k} with b_k = Θ( 2^{−k}(1−2η)/√(d · ln(k·m_k/δ)) ). Then with probability at least 1 − δ: (1) the output halfspace v is such that err(h_v) ≤ err(h_u) + ε; (2) the number of labeled examples drawn is Õ( d/((1−2η)³ε) ); (3) the algorithm runs in time Õ( d²/((1−2η)³ε) ).

In the ν-adversarial noise model, the sample complexity of PASSIVE-PERCEPTRON matches the minimax optimal sample complexity upper bound of Θ̃(d/ε) obtained in [39]. As in active learning, our algorithm has a faster running time than [39].

Corollary 2 (PASSIVE-PERCEPTRON under Adversarial Noise). Suppose PASSIVE-PERCEPTRON has inputs: a distribution D that satisfies the ν-adversarial noise condition with respect to u, where ν = O( ε/(ln ln(1/ε) + ln d) ); an initial halfspace v₀; target error ε; confidence δ; sample schedule {m_k} with m_k = Θ( d(ln d + ln(k/δ)) ); and band widths {b_k} with b_k = Θ( 2^{−k}/√(d · ln(k·m_k/δ)) ). Then with probability at least 1 − δ: (1) the output halfspace v is such that err(h_v) ≤ err(h_u) + ε; (2) the number of labeled examples drawn is Õ(d/ε); (3) the algorithm runs in time Õ(d²/ε).

Tables 3 and 4 present comparisons between our results and the results most closely related to ours.

Acknowledgments. The authors thank Kamalika Chaudhuri for help and support, Hongyang Zhang for thought-provoking initial conversations, Jiapeng Zhang for helpful discussions, and the anonymous reviewers for their insightful feedback.
Much of this work is supported by NSF IIS-1167157 and IIS-1162581.

Table 3: A comparison of algorithms for PAC learning halfspaces under the uniform distribution, in the η-bounded noise model.

Algorithm | Sample Complexity | Time Complexity
[8] | Õ(d^{O(1/(1−2η)⁴)}/ε) | Õ(d^{O(1/(1−2η)⁴)}/ε)
ERM [49] | Õ(d/((1−2η)ε)) | superpoly(d, 1/ε)
Our Work | Õ(d/((1−2η)³ε)) | Õ(d²/((1−2η)³ε))

Table 4: A comparison of algorithms for PAC learning halfspaces under the uniform distribution, in the ν-adversarial noise model where ν = Ω(ε/(ln ln(1/ε) + ln d)).

Algorithm | Sample Complexity | Time Complexity
[39] | Õ(d/ε) | poly(d, 1/ε)
ERM [57] | Õ(d/ε) | superpoly(d, 1/ε)
Our Work | Õ(d/ε) | Õ(d²/ε)

References

[1] Alekh Agarwal. Selective sampling algorithms for cost-sensitive multiclass prediction. ICML (3), 28:1220–1228, 2013.
[2] Nir Ailon, Ron Begleiter, and Esther Ezra. Active learning using smooth relative regret approximations with applications. Journal of Machine Learning Research, 15(1):885–920, 2014.
[3] Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343–370, Apr 1988. ISSN 1573-0565. doi: 10.1023/A:1022873112823. URL https://doi.org/10.1023/A:1022873112823.
[4] Martin Anthony and Peter L. Bartlett. Neural network learning: Theoretical foundations. Cambridge University Press, 2009.
[5] Sanjeev Arora, László Babai, Jacques Stern, and Z. Sweedyk. The hardness of approximate optima in lattices, codes, and systems of linear equations. In Foundations of Computer Science, 1993. Proceedings., 34th Annual Symposium on, pages 724–733. IEEE, 1993.
[6] Pranjal Awasthi, Maria Florina Balcan, and Philip M. Long. The power of localization for efficiently learning linear separators with noise. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 449–458. ACM, 2014.
[7] Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, and Ruth Urner. Efficient learning of linear separators under bounded noise. In COLT, pages 167–190, 2015.
[8] Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, and Hongyang Zhang. Learning and 1-bit compressed sensing under asymmetric noise. In Proceedings of The 28th Conference on Learning Theory, COLT 2016, 2016.
[9] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In COLT, 2013.
[10] M.-F. Balcan, A. Z. Broder, and T. Zhang. Margin based active learning. In COLT, 2007.
[11] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. J. Comput. Syst. Sci., 75(1):78–89, 2009.
[12] Maria-Florina Balcan and Vitaly Feldman. Statistical active learning algorithms. In NIPS, pages 1295–1303, 2013.
[13] Maria-Florina Balcan and Hongyang Zhang. S-concave distributions: Towards broader distributions for noise-tolerant and sample-efficient learning algorithms. arXiv preprint arXiv:1703.07758, 2017.
[14] Maria-Florina Balcan, Steve Hanneke, and Jennifer Wortman Vaughan. The true sample complexity of active learning. Machine Learning, 80(2-3):111–139, 2010.
[15] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS, 2010.
[16] Alina Beygelzimer, Sanjoy Dasgupta, and John Langford. Importance weighted active learning. In Twenty-Sixth International Conference on Machine Learning, 2009.
[17] Avrim Blum, Alan M. Frieze, Ravi Kannan, and Santosh Vempala. A polynomial-time algorithm for learning noisy linear threshold functions. Algorithmica, 22(1/2):35–52, 1998.
[18] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
[19] Nicolò Cesa-Bianchi, Claudio Gentile, and Francesco Orabona. Robust bounds for classification via selective sampling. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, pages 121–128, 2009.
[20] Lin Chen, Hamed Hassani, and Amin Karbasi. Near-optimal active learning of halfspaces via query synthesis in the noisy setting. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[21] David A. Cohn, Les E. Atlas, and Richard E. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[22] Nello Cristianini and John Shawe-Taylor. An introduction to support vector machines and other kernel-based learning methods. 2000.
[23] Amit Daniely. Complexity theoretic limitations on learning halfspaces. arXiv preprint arXiv:1505.05800, 2015.
[24] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS, 2005.
[25] Sanjoy Dasgupta. Two faces of active learning. Theoretical Computer Science, 412(19):1767–1781, 2011.
[26] Sanjoy Dasgupta, Adam Tauman Kalai, and Claire Monteleoni. Analysis of perceptron-based active learning. In Learning Theory, 18th Annual Conference on Learning Theory, COLT 2005, Bertinoro, Italy, June 27-30, 2005, Proceedings, pages 249–263, 2005.
[27] Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems 20, 2007.
[28] Ofer Dekel, Claudio Gentile, and Karthik Sridharan. Selective sampling and active learning from single and multiple teachers. Journal of Machine Learning Research, 13(Sep):2655–2697, 2012.
[29] John Dunagan and Santosh Vempala. A simple polynomial-time rescaling algorithm for solving linear programs. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, pages 315–320. ACM, 2004.
[30] Vitaly Feldman, Parikshit Gopalan, Subhash Khot, and Ashok Kumar Ponnuswami. New results for learning noisy parities and halfspaces. In Foundations of Computer Science, 2006. FOCS'06. 47th Annual IEEE Symposium on, pages 563–574. IEEE, 2006.
[31] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2-3):133–168, 1997.
[32] Andrew Guillory, Erick Chastain, and Jeff Bilmes. Active learning as non-convex optimization. In International Conference on Artificial Intelligence and Statistics, pages 201–208, 2009.
[33] Venkatesan Guruswami and Prasad Raghavendra. Hardness of learning halfspaces with noise. SIAM Journal on Computing, 39(2):742–765, 2009.
[34] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, 2007.
[35] S. Hanneke. Theoretical Foundations of Active Learning. PhD thesis, Carnegie Mellon University, 2009.
[36] Steve Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333–361, 2011.
[37] Steve Hanneke. Theory of disagreement-based active learning. Foundations and Trends in Machine Learning, 7(2-3):131–309, 2014.
[38] Steve Hanneke and Liu Yang. Surrogate losses in passive and active learning. arXiv preprint arXiv:1207.3772, 2012.
[39] Steve Hanneke, Varun Kanade, and Liu Yang. Learning with a drifting target concept. In International Conference on Algorithmic Learning Theory, pages 149–164. Springer, 2015.
[40] D. Hsu. Algorithms for Active Learning. PhD thesis, UC San Diego, 2010.
[41] Tzu-Kuo Huang, Alekh Agarwal, Daniel Hsu, John Langford, and Robert E. Schapire. Efficient and parsimonious agnostic active learning. CoRR, abs/1506.08669, 2015.
[42] Adam Tauman Kalai, Adam R. Klivans, Yishay Mansour, and Rocco A. Servedio. Agnostically learning halfspaces. SIAM Journal on Computing, 37(6):1777–1805, 2008.
[43] Michael Kearns and Ming Li. Learning in the presence of malicious errors. SIAM Journal on Computing, 22(4):807–837, 1993.
[44] Adam Klivans and Pravesh Kothari. Embedding hard learning problems into Gaussian space. In APPROX/RANDOM 2014, pages 793–809, 2014.
[45] Adam R. Klivans, Philip M. Long, and Rocco A. Servedio. Learning halfspaces with malicious noise. Journal of Machine Learning Research, 10(Dec):2715–2740, 2009.
[46] V. Koltchinskii. Rademacher complexities and bounding the excess risk in active learning. JMLR, 2010.
[47] Sanjeev R. Kulkarni, Sanjoy K. Mitter, and John N. Tsitsiklis. Active learning using arbitrary binary valued queries. Machine Learning, 11(1):23–35, 1993.
[48] Philip M. Long. On the sample complexity of PAC learning half-spaces against the uniform distribution. IEEE Transactions on Neural Networks, 6(6):1556–1559, 1995.
[49] Pascal Massart and Élodie Nédélec. Risk bounds for statistical learning. The Annals of Statistics, pages 2326–2366, 2006.
[50] Claire Monteleoni. Efficient algorithms for general active learning. In International Conference on Computational Learning Theory, pages 650–652. Springer, 2006.
[51] T. S. Motzkin and I. J. Schoenberg. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6(3):393–404, 1954.
[52] Francesco Orabona and Nicolò Cesa-Bianchi. Better algorithms for selective sampling. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 433–440, 2011.
[53] Maxim Raginsky and Alexander Rakhlin. Lower bounds for passive and active learning. In Advances in Neural Information Processing Systems, pages 1026–1034, 2011.
[54] Burr Settles. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11, 2010.
[55] Simon Tong and Daphne Koller. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, 2(Nov):45–66, 2001.
[56] Christopher Tosh and Sanjoy Dasgupta. Diameter-based active learning. In ICML, pages 3444–3452, 2017.
[57] Vladimir N. Vapnik and Alexey Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, 16(2):264–280, 1971.
[58] Liwei Wang. Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. Journal of Machine Learning Research, 12(Jul):2269–2292, 2011.
[59] Yining Wang and Aarti Singh. Noise-adaptive margin-based active learning and lower bounds under Tsybakov noise condition. In AAAI, 2016.
[60] Chicheng Zhang and Kamalika Chaudhuri. Beyond disagreement-based agnostic active learning. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 442–450, 2014.
[61] Yuchen Zhang, Percy Liang, and Moses Charikar. A hitting time analysis of stochastic gradient Langevin dynamics. In COLT, pages 1980–2022, 2017.
Gradient Descent Can Take Exponential Time to Escape Saddle Points

Chi Jin (University of California, Berkeley; chijin@cs.berkeley.edu), Simon S. Du (Carnegie Mellon University; ssdu@cs.cmu.edu), Jason D. Lee (University of Southern California; jasonlee@marshall.usc.edu), Michael I. Jordan (University of California, Berkeley; jordan@cs.berkeley.edu), Aarti Singh (Carnegie Mellon University; aarti@cs.cmu.edu), and Barnabás Póczos (Carnegie Mellon University; bapoczos@cs.cmu.edu)

Abstract

Although gradient descent (GD) almost always escapes saddle points asymptotically [Lee et al., 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape. On the other hand, gradient descent with perturbations [Ge et al., 2015, Jin et al., 2017] is not slowed down by saddle points: it can find an approximate local minimizer in polynomial time. This result implies that GD is inherently slower than perturbed GD, and justifies the importance of adding perturbations for efficient non-convex optimization. While our focus is theoretical, we also present experiments that illustrate our theoretical findings.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

Gradient Descent (GD) and its myriad variants provide the core optimization methodology in machine learning problems. Given a function f(x), the basic GD method can be written as

x^{(t+1)} ← x^{(t)} − η∇f(x^{(t)}),   (1)

where η is a step size, assumed fixed in the current paper. While precise characterizations of the rate of convergence of GD are available for convex problems, there is far less understanding of GD for non-convex problems. Indeed, for general non-convex problems, GD is only known to find a stationary point (i.e., a point where the gradient equals zero) in polynomial time [Nesterov, 2013]. A stationary point can be a local minimizer, a saddle point, or a local maximizer. In recent years, there has been an increasing focus on conditions under which it is possible to escape saddle points (more specifically, strict saddle points as in Definition 2.4) and converge to a local minimizer. Moreover, stronger statements can be made when the following two key properties hold: 1) all local minima are global minima, and 2) all saddle points are strict. These properties hold for a variety of machine learning problems, including tensor decomposition [Ge et al., 2015], dictionary learning [Sun et al., 2017], phase retrieval [Sun et al., 2016], matrix sensing [Bhojanapalli et al., 2016, Park et al., 2017], matrix completion [Ge et al., 2016, 2017], and matrix factorization [Li et al., 2016]. For these problems, any algorithm that is capable of escaping strict saddle points will converge to a global minimizer from an arbitrary initialization point.

Recent work has analyzed variations of GD that include stochastic perturbations. It has been shown that when perturbations are incorporated into GD at each step, the resulting algorithm can escape strict saddle points in polynomial time [Ge et al., 2015]. It has also been shown that episodic perturbations suffice; in particular, Jin et al. [2017] analyzed an algorithm that occasionally adds a perturbation to GD (see Algorithm 1), and proved that not only does the algorithm escape saddle points in polynomial time, but additionally the number of iterations to escape saddle points is nearly dimension-independent.¹

¹Assuming that the smoothness parameters (see Definitions 2.1–2.3) are all independent of dimension.
These papers in essence provide sufficient conditions under which a variant of GD has favorable convergence properties for non-convex functions. This leaves open the question as to whether such perturbations are in fact necessary. If not, we might prefer to avoid the perturbations if possible, as they involve additional hyper-parameters. The current understanding of gradient descent is silent on this issue. The major existing result is provided by Lee et al. [2016], who show that gradient descent, with any reasonable random initialization, will always escape strict saddle points eventually, but without any guarantee on the number of steps required. This motivates the following question:

Does randomly initialized gradient descent generally escape saddle points in polynomial time?

In this paper, perhaps surprisingly, we give a strong negative answer to this question. We show that even under a fairly natural initialization scheme (e.g., uniform initialization over a unit cube, or Gaussian initialization) and for non-pathological functions satisfying smoothness properties considered in previous work, GD can take exponentially long to escape saddle points and reach local minima, while perturbed GD (Algorithm 1) needs only polynomial time. This result shows that GD is fundamentally slower in escaping saddle points than its perturbed variant, and justifies the necessity of adding perturbations for efficient non-convex optimization.

The counter-example that supports this conclusion is a smooth function defined on R^d, where GD with random initialization will visit the vicinity of d saddle points before reaching a local minimum. While perturbed GD takes a constant amount of time to escape each saddle point, GD gets closer and closer to the saddle points it encounters later, and thus takes an increasing amount of time to escape. Eventually, GD requires time that is exponential in the number of saddle points it needs to escape, thus e^{Ω(d)} steps.

1.1 Related Work

Over the past few years, there have been many problem-dependent convergence analyses of non-convex optimization problems. One line of work shows that with smart initialization, assumed to yield a coarse estimate lying inside a neighborhood of a local minimum, local search algorithms such as gradient descent or alternating minimization enjoy fast local convergence; see, e.g., [Netrapalli et al., 2013, Du et al., 2017, Hardt, 2014, Candes et al., 2015, Sun and Luo, 2016, Bhojanapalli et al., 2016, Yi et al., 2016, Zhang et al., 2017]. On the other hand, Jain et al. [2017] show that gradient descent can stay away from saddle points, and provide global convergence rates for matrix square-root problems, even without smart initialization. Although these results give relatively strong guarantees in terms of rate, their analyses are heavily tailored to specific problems, and it is unclear how to generalize them to a wider class of non-convex functions.

For general non-convex problems, the study of optimization algorithms converging to minimizers dates back to the study of Morse theory and continuous dynamical systems [Palis and De Melo, 2012, Yin and Kushner, 2003]; a classical result states that gradient flow with random initialization always converges to a minimizer. For stochastic gradient, this was shown by Pemantle [1990], although without explicit running time guarantees. Lee et al. [2016] established that randomly initialized gradient descent with a fixed stepsize also converges to minimizers almost surely.
However, these results are all asymptotic in nature, and it is unclear how they might be extended to deliver explicit convergence rates. Moreover, it is unclear whether polynomial convergence rates can be obtained for these methods.

Next, we review algorithms that can provably find approximate local minimizers in polynomial time. The classical cubic-regularization [Nesterov and Polyak, 2006] and trust-region [Curtis et al., 2014] algorithms require access to the full Hessian matrix. A recent line of work [Carmon et al., 2016, Agarwal et al., 2017, Carmon and Duchi, 2016] shows that the requirement of full Hessian access can be relaxed to Hessian-vector products, which can be computed efficiently in many machine learning applications. For pure gradient-based algorithms without access to Hessian information, Ge et al. [2015] show that adding a perturbation in each iteration suffices to escape saddle points in polynomial time. When the smoothness parameters are all dimension independent, Levy [2016] analyzed a normalized form of gradient descent with perturbation and improved the dimension dependence to O(d³). This dependence has been further improved in recent work [Jin et al., 2017] to polylog(d) via perturbed gradient descent (Algorithm 1).

1.2 Organization

This paper is organized as follows. In Section 2 we introduce the formal problem setting and background. In Section 3 we discuss some pathological examples and "un-natural" initialization schemes under which gradient descent fails to escape strict saddle points in polynomial time. In Section 4 we show that even under a fairly natural initialization scheme, gradient descent still needs exponential time to escape all saddle points, whereas perturbed gradient descent is able to do so in polynomial time. We provide empirical illustrations in Section 5 and conclude in Section 6. We place most of our detailed proofs in the Appendix.

2 Preliminaries

Let ‖·‖₂ denote the Euclidean norm of a finite-dimensional vector in R^d. For a symmetric matrix A, let ‖A‖_op denote its operator norm and λ_min(A) its smallest eigenvalue. For a function f: R^d → R, let ∇f(·) and ∇²f(·) denote its gradient vector and Hessian matrix. Let B_x(r) denote the d-dimensional ℓ₂ ball centered at x with radius r, let [−1, 1]^d denote the d-dimensional cube centered at 0 with side length 2, and let B_∞(x, R) = x + [−R, R]^d denote the d-dimensional cube centered at x with side length 2R. We also use O(·) and Ω(·) as standard big-O and big-Omega notation, hiding only absolute constants.

Throughout the paper we consider functions that satisfy the following smoothness assumptions.

Definition 2.1. A function f(·) is B-bounded if for any x ∈ R^d: |f(x)| ≤ B.

Definition 2.2. A differentiable function f(·) is ℓ-gradient Lipschitz if for any x, y ∈ R^d: ‖∇f(x) − ∇f(y)‖₂ ≤ ℓ‖x − y‖₂.

Definition 2.3. A twice-differentiable function f(·) is ρ-Hessian Lipschitz if for any x, y ∈ R^d: ‖∇²f(x) − ∇²f(y)‖_op ≤ ρ‖x − y‖₂.

Intuitively, Definition 2.1 says the function value is both upper and lower bounded; Definitions 2.2 and 2.3 say that the gradients and Hessians of the function cannot change dramatically if two points are close to each other. Definition 2.2 is a standard assumption in the optimization literature, and Definition 2.3 is also commonly assumed when studying saddle points and local minima. Our goal is to escape saddle points.
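The quantities appearing in these definitions, and in the saddle-point definitions that follow, can be checked numerically at a given point. The following small utility (our own, assuming NumPy; not from the paper) estimates the gradient norm and the smallest Hessian eigenvalue of f at x by central finite differences.

```python
import numpy as np

def grad_and_min_eig(f, x, h=1e-5):
    """Estimate ||grad f(x)|| and lambda_min(Hessian f(x)) by central
    finite differences; small values of both indicate an approximate
    second-order stationary point, while a clearly negative eigenvalue
    indicates a strict saddle."""
    d = len(x)
    g = np.zeros(d)
    H = np.zeros((d, d))
    I = np.eye(d)
    for i in range(d):
        g[i] = (f(x + h * I[i]) - f(x - h * I[i])) / (2 * h)
        for j in range(d):
            H[i, j] = (f(x + h * I[i] + h * I[j]) - f(x + h * I[i] - h * I[j])
                       - f(x - h * I[i] + h * I[j]) + f(x - h * I[i] - h * I[j])) / (4 * h**2)
    return np.linalg.norm(g), np.linalg.eigvalsh(H).min()

f = lambda x: x[0]**2 - x[1]**2            # saddle at the origin
print(grad_and_min_eig(f, np.zeros(2)))    # ~ (0.0, -2.0): a strict saddle
```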
The saddle points discussed in this paper are assumed to be "strict" [Ge et al., 2015]:

Definition 2.4. A saddle point x* is called an α-strict saddle point if there exists some α > 0 such that ‖∇f(x*)‖₂ = 0 and λ_min(∇²f(x*)) ≤ −α.

That is, a strict saddle point must have an escaping direction such that the eigenvalue of the Hessian along that direction is strictly negative. It turns out that for many non-convex problems studied in machine learning, all saddle points are strict (see Section 1 for more details).

To escape strict saddle points and converge to local minima, we can equivalently study the approximation of second-order stationary points. For ρ-Hessian Lipschitz functions, such points are defined as follows by Nesterov and Polyak [2006]:

Algorithm 1 Perturbed Gradient Descent [Jin et al., 2017]
Input: x^{(0)}, step size η, perturbation radius r, time interval t_thres, gradient threshold g_thres.
1: t_noise ← −t_thres − 1.
2: for t = 1, 2, ... do
3:   if ‖∇f(x^{(t)})‖₂ ≤ g_thres and t − t_noise > t_thres then
4:     x^{(t)} ← x^{(t)} + ξ_t, ξ_t ~ unif(B_0(r)), t_noise ← t.
5:   end if
6:   x^{(t+1)} ← x^{(t)} − η∇f(x^{(t)}).
7: end for

Definition 2.5. A point x is called a second-order stationary point if ‖∇f(x)‖₂ = 0 and λ_min(∇²f(x)) ≥ 0. We also define its ε-version: a point x is an ε-second-order stationary point, for some ε > 0, if it satisfies ‖∇f(x)‖₂ ≤ ε and λ_min(∇²f(x)) ≥ −√(ρε).

Second-order stationary points must have a positive semi-definite Hessian in addition to a vanishing gradient. Note that if all saddle points x* are strict, then second-order stationary points are exactly equivalent to local minima.

In this paper, we compare gradient descent and one of its variants, the perturbed gradient descent algorithm (Algorithm 1) proposed by Jin et al. [2017]. We focus on the case where the step size satisfies η < 1/ℓ, which is commonly required for finding a minimum even in the convex setting [Nesterov, 2013]. The following theorem shows that if GD with random initialization converges, then it will converge to a second-order stationary point almost surely.

Theorem 2.6 ([Lee et al., 2016]). Suppose that f is ℓ-gradient Lipschitz, has continuous Hessian, and the step size satisfies η < 1/ℓ. Furthermore, assume that gradient descent converges, meaning lim_{t→∞} x^{(t)} exists, and that the initialization distribution ν is absolutely continuous with respect to Lebesgue measure. Then lim_{t→∞} x^{(t)} = x* with probability one, where x* is a second-order stationary point.

The assumption that gradient descent converges holds for many non-convex functions (including all the examples considered in this paper). This assumption is used to avoid the case when ‖x^{(t)}‖₂ goes to infinity, so that lim_{t→∞} x^{(t)} is undefined. Note that Theorem 2.6 only provides limiting behavior without specifying the convergence rate. On the other hand, if we are willing to add perturbations, the following theorem not only establishes convergence but also provides a sharp convergence rate:

Theorem 2.7 ([Jin et al., 2017]). Suppose f is B-bounded, ℓ-gradient Lipschitz, and ρ-Hessian Lipschitz. For any ε > 0 with ε ≤ ℓ²/ρ, there exists a proper choice of η, r, t_thres, g_thres (depending on B, ℓ, ρ, ε, δ) such that Algorithm 1 will find an ε-second-order stationary point, with probability at least 1 − δ, in the following number of iterations: O( (ℓB/ε²) · log⁴( dℓB/(ε²δ) ) ).
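To make Algorithm 1 concrete, here is a minimal Python sketch of our own reading of perturbed gradient descent. The hyperparameter values in the demo are illustrative placeholders, not the tuned choices from Jin et al. [2017].

```python
import numpy as np

def perturbed_gd(grad, x0, eta, r, t_thres, g_thres, n_steps, rng):
    """Algorithm 1: GD that adds a perturbation drawn uniformly from the
    ball B_0(r) whenever the gradient is small and no perturbation was
    added in the last t_thres steps."""
    x = x0.astype(float).copy()
    t_noise = -t_thres - 1
    for t in range(n_steps):
        if np.linalg.norm(grad(x)) <= g_thres and t - t_noise > t_thres:
            xi = rng.standard_normal(x.shape)
            xi *= r * rng.random() ** (1.0 / len(x)) / np.linalg.norm(xi)
            x = x + xi                     # xi ~ uniform over B_0(r)
            t_noise = t
        x = x - eta * grad(x)
    return x

# Demo on f(x) = x1^2 - x2^2, which has a strict saddle at the origin.
# Plain GD started exactly on the x1-axis converges to the saddle; PGD's
# perturbation pushes the iterate off the axis, after which the unstable
# x2-direction grows and the iterate escapes (f is unbounded below here,
# so the iterate keeps moving along the escape direction).
grad = lambda x: np.array([2 * x[0], -2 * x[1]])
rng = np.random.default_rng(0)
x = perturbed_gd(grad, np.array([1.0, 0.0]), eta=0.1, r=1e-3,
                 t_thres=10, g_thres=1e-2, n_steps=100, rng=rng)
print(x)   # |x[1]| is now large: the saddle has been escaped
```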
This theorem states that with a proper choice of hyperparameters, perturbed gradient descent can consistently escape strict saddle points and converge to a second-order stationary point in a polynomial number of iterations.

3 Warmup: Examples with "Un-natural" Initialization

The convergence result of Theorem 2.6 raises the following question: can gradient descent find a second-order stationary point in a polynomial number of iterations? In this section, we discuss two very simple and intuitive counter-examples for which gradient descent with random initialization requires an exponential number of steps to escape strict saddle points. We will also explain, however, that these examples are unnatural and pathological in certain ways, and thus unlikely to arise in practice. A more sophisticated counter-example with natural initialization and non-pathological behavior will be given in Section 4.

[Figure 1: (a) Negative gradient field of f(x) = x₁² − x₂². (b) Negative gradient field for the function defined in Equation (2). If the initialization point is in the red rectangle, it takes GD a long time to escape the neighborhood of the saddle point (0, 0).]

Initialize uniformly within an extremely thin band. Consider a two-dimensional function f with a strict saddle point at (0, 0). Suppose that inside the neighborhood U = [−1, 1]² of the saddle point, the function is locally quadratic, f(x₁, x₂) = x₁² − x₂². For GD with η = 1/4, the update equation can be written as

x₁^{(t+1)} = x₁^{(t)}/2 and x₂^{(t+1)} = 3x₂^{(t)}/2.

If we initialize uniformly within [−1, 1] × [−(3/2)^{−exp(1/η)}, (3/2)^{−exp(1/η)}], then GD requires at least exp(1/η) steps to get out of the neighborhood U, and thereby escape the saddle point. See Figure 1a for an illustration. Note that in this case the initialization region is exponentially thin (only of width 2 · (3/2)^{−exp(1/η)}). We would seldom use such an initialization scheme in practice.

Initialize far away. Consider again a two-dimensional function with a strict saddle point at (0, 0). This time, instead of initializing in an extremely thin band, we construct a very long slope so that a relatively large initialization region necessarily converges to this extremely thin band. Specifically, consider a function on the domain (−∞, 1] × [−1, 1] that is defined as follows:

f(x₁, x₂) = { x₁² − x₂² if −1 < x₁ < 1;  −4x₁ + x₂² if x₁ < −2;  h(x₁, x₂) otherwise },   (2)

where h(x₁, x₂) is a smooth function connecting the regions (−∞, −2] × [−1, 1] and [−1, 1] × [−1, 1] while making f have continuous second derivatives and ensuring that x₂ does not suddenly increase when x₁ ∈ [−2, −1].²
5 4 Main Result In the previous section we have shown that gradient descent takes exponential time to escape saddle points under ?un-natural" initialization schemes. Is it possible for the same statement to hold even under ?natural? initialization schemes and non-pathological functions? The following theorem con?rms this: Theorem 4.1 (Uniform initialization over a unit cube). Suppose the initialization point is uniformly sampled from [?1, 1]d . There exists a function f de?ned on Rd that is B-bounded, ?-gradient Lipschitz and ?-Hessian Lipschitz with parameters B, ?, ? at most poly(d) such that: 1. with probability one, gradient descent with step size ? ? 1/? will be ?(1) distance away from any local minima for any T ? e?(d) . 2. for any ? > 0, with probability 1 ? e?d , perturbed gradient descent (Algorithm 1) will ?nd a point x such that ?x ? x? ?2 ? ? for some local minimum x? in poly(d, 1? ) iterations. Remark: As will be apparent in the next section, in the example we constructed, there are 2d symmetric local minima at locations (?c . . . , ?c), where c is some constant. The saddle points are of the form (?c, . . . , ?c, 0, . . . , 0). Both algorithms will travel across d neighborhoods of saddle points before reaching a local minimum. For GD, the number of iterations to escape the i-th saddle point increases as ?i (? is a multiplicative factor larger than 1), and thus GD requires exponential time to escape d saddle points. On the other hand, PGD takes about the same number of iterations to escape each saddle point, and so escapes the d saddle points in polynomial time. Notice that B, ?, ? = O(poly(d)), so this does not contradict Theorem 2.7. We also note that in our construction, the local minimizers are outside the initialization region. We note this is common especially for unconstrained optimization problems, where the initialization is usually uniform on a rectangle or isotropic Gaussian. Due to isoperimetry, the initialization concentrates in a thin shell, but frequently the ?nal point obtained by the optimization algorithm is not in this shell. It turns out in our construction, the only second-order stationary points in the path are the ?nal local minima. Therefore, we can also strengthen Theorem 4.1 to provide a negative result for approximating ?-second-order stationary points as well. Corollary 4.2. Under the same initialization as in Theorem 4.1, there exists a function f satisfying the requirements of Theorem 4.1 such that for some ? = 1/poly(d), with probability one, gradient descent with step size ? ? 1/? will not visit any ?-second-order stationary point in T ? e?(d) . The corresponding positive result that PGD to ?nd ?-second-order stationary point in polynomial time immediately follows from Theorem 2.7. The next result shows that gradient descent does not fail due to the special choice of initializing uniformly in [?1, 1]d . For a large class of initialization distributions ?, we can generalize Theorem 4.1 to show that gradient descent with random initialization ? requires exponential time, and perturbed gradient only requires polynomial time. Corollary 4.3. Let B? (z, R) = {z} + [?R, R]d be the ?? ball of radius R centered at z. Then for any initialization distribution ? that satis?es ?(B? (z, R)) ? 1 ? ? for any ? > 0, the conclusion of Theorem 4.1 holds with probability at least 1 ? ?. That is, as long as most of the mass of the initialization distribution ? lies in some ?? ball, a similar conclusion to that of Theorem 4.1 holds with high probability. 
4.1 Proof Sketch

In this section we present a sketch of the proof of Theorem 4.1; the full proof is presented in the Appendix. Since the polynomial-time guarantee for PGD is straightforward to derive from Jin et al. [2017], we focus on showing that GD needs an exponential number of steps. We rely on the following key observation.

Key observation: escaping two saddle points sequentially. Consider, for L > γ > 0,

    f(x_1, x_2) = -γ x_1^2 + L x_2^2                  if x_1 ∈ [0, 1], x_2 ∈ [0, 1]
                  L (x_1 - 2)^2 - γ x_2^2             if x_1 ∈ [1, 3], x_2 ∈ [0, 1]      (3)
                  L (x_1 - 2)^2 + L (x_2 - 2)^2       if x_1 ∈ [1, 3], x_2 ∈ [1, 3].

Note that this function is not continuous. In the next paragraph we will modify it to make it smooth and satisfy the assumptions of the theorem, but useful intuition is obtained from this discontinuous version. The function has an optimum at (2, 2) and saddle points at (0, 0) and (2, 0). We call [0, 1] × [0, 1] the neighborhood of (0, 0) and [1, 3] × [0, 1] the neighborhood of (2, 0). Suppose the initialization (x_1^(0), x_2^(0)) lies in [0, 1] × [0, 1]. Define t_1 = min_{x_1^(t) ≥ 1} t to be the time of first departure from the neighborhood of (0, 0) (thereby escaping the first saddle point). By the dynamics of gradient descent, we have

    x_1^(t_1) = (1 + 2ηγ)^(t_1) x_1^(0),    x_2^(t_1) = (1 - 2ηL)^(t_1) x_2^(0).

Next we calculate the number of iterations until x_2 ≥ 1, at which point the algorithm leaves the neighborhood of the saddle point (2, 0) (thus escaping the second saddle point). Letting t_2 = min_{x_2^(t) ≥ 1} t, we have:

    x_2^(t_1) (1 + 2ηγ)^(t_2 - t_1) = (1 + 2ηγ)^(t_2 - t_1) (1 - 2ηL)^(t_1) x_2^(0) ≥ 1.

We can lower bound t_2 by

    t_2 ≥ [ 2η(L + γ) t_1 + log(1 / x_2^(0)) ] / (2ηγ) ≥ ((L + γ)/γ) · t_1.

The key observation is that the number of steps needed to escape the second saddle point is (L + γ)/γ times the number of steps needed to escape the first one.

Spline: connecting quadratic regions. To make our function smooth, we create buffer regions and use splines to interpolate between the discontinuous pieces of Equation (3). Formally, we consider the following function, for some fixed constant τ > 1:

    f(x_1, x_2) = -γ x_1^2 + L x_2^2                        if x_1 ∈ [0, τ],   x_2 ∈ [0, τ]
                  g(x_1, x_2)                               if x_1 ∈ [τ, 2τ],  x_2 ∈ [0, τ]
                  L (x_1 - 4τ)^2 - γ x_2^2 - ν              if x_1 ∈ [2τ, 6τ], x_2 ∈ [0, τ]       (4)
                  L (x_1 - 4τ)^2 + g_1(x_2) - ν             if x_1 ∈ [2τ, 6τ], x_2 ∈ [τ, 2τ]
                  L (x_1 - 4τ)^2 + L (x_2 - 4τ)^2 - 2ν      if x_1 ∈ [2τ, 6τ], x_2 ∈ [2τ, 6τ],

where g, g_1 are spline polynomials and ν > 0 is a constant defined in Lemma B.2. In this case, there are saddle points at (0, 0) and (4τ, 0), and the optimum is at (4τ, 4τ). Intuitively, [τ, 2τ] × [0, τ] and [2τ, 6τ] × [τ, 2τ] are buffer regions where we use the splines g and g_1 to transition between regimes and make f a smooth function. These buffer regions contain no stationary points, and the smoothness assumptions of the theorem are still satisfied there. Figure 2a shows the surface and stationary points of this function. We call the union of the regions defined in Equation (4) a tube.

From two saddle points to d saddle points. We can readily adapt our construction of the tube to d dimensions, such that the function is smooth, the saddle points are located at (0, . . . , 0), (4τ, 0, . . . , 0), . . . , (4τ, . . . , 4τ, 0), and the optimum is at (4τ, . . . , 4τ). Let t_i be the number of steps needed to escape the neighborhood of the i-th saddle point. We generalize our key observation to this case and obtain t_{i+1} ≥ ((L + γ)/γ) · t_i for all i. This gives t_d ≥ ((L + γ)/γ)^d, which is exponential time. Figure 2b shows the tube and the trajectory of GD.
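The escape-time recursion above is easy to check numerically. The following sketch (our own illustration, not the paper's code) iterates the exact GD updates for the discontinuous function in Equation (3) and compares the measured ratio t_2/t_1 with the predicted lower-bound factor (L + γ)/γ; the ratio also reflects the log(1/x_2^(0)) term, so it can exceed the bound.

    import numpy as np

    def escape_times(L, gamma, eta, x0=(1e-3, 1e-3)):
        """Return (t1, t2) for GD on the piecewise-quadratic f of Equation (3)."""
        x1, x2 = x0
        t, t1 = 0, None
        while x2 < 1.0:                   # still inside a saddle neighborhood
            if x1 < 1.0:                  # near (0, 0): grad = (-2*gamma*x1, 2*L*x2)
                x1, x2 = (1 + 2 * eta * gamma) * x1, (1 - 2 * eta * L) * x2
            else:                         # near (2, 0): grad = (2*L*(x1-2), -2*gamma*x2)
                if t1 is None:
                    t1 = t                # first departure from [0, 1] x [0, 1]
                x1, x2 = x1 - 2 * eta * L * (x1 - 2), (1 + 2 * eta * gamma) * x2
            t += 1
        return t1, t

    L, gamma = 2.0, 1.0
    t1, t2 = escape_times(L, gamma, eta=1 / (4 * L))
    print(t2 / t1, (L + gamma) / gamma)   # measured ratio vs. predicted (L+gamma)/gamma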
Mirroring trick: from tube to octopus. In the construction so far, the saddle points all lie on the boundary of the tube. To avoid the difficulties of constrained non-convex optimization, we would like all saddle points to be interior points of the domain. We use a simple mirroring trick: for every coordinate x_i, we reflect f along its axis. See Figure 2c for an illustration in the case d = 2.

Figure 2: Graphical illustrations of our counter-example with τ = e. (a) Contour plot of the objective function and tube defined in 2D. (b) Trajectory of gradient descent in the tube for d = 3. (c) Octopus defined in 2D. The blue points are saddle points and the red point is the minimum; the pink line is the trajectory of gradient descent.

Figure 3: Performance of GD and PGD on our counter-example with d = 5 (objective function value vs. epochs). (a) L = 1, γ = 1. (b) L = 1.5, γ = 1. (c) L = 2, γ = 1. (d) L = 3, γ = 1.

Figure 4: Performance of GD and PGD on our counter-example with d = 10 (objective function value vs. epochs). (a) L = 1, γ = 1. (b) L = 1.5, γ = 1. (c) L = 2, γ = 1. (d) L = 3, γ = 1.

Extension: from octopus to R^d. Up to now we have constructed a function defined on a closed subset of R^d. The last step is to extend this function to the entire Euclidean space. Here we apply the classical Whitney Extension Theorem (Theorem B.3) to finish our construction. We remark that the Whitney extension may introduce additional stationary points. However, we show in the proof that GD and PGD stay within the interior of the "octopus" defined above, and hence cannot converge to any other stationary point.

5 Experiments

In this section we use simulations to verify our theoretical findings. The objective function is defined in (14) and (15) in the Appendix. In Figures 3 and 4, GD stands for gradient descent and PGD stands for Algorithm 1. For both GD and PGD we set the step size η = 1/(4L). For PGD, we choose t_thres = 1, g_thres = e/100 and r = e/100. In Figure 3 we fix the dimension d = 5 and vary L as considered in Section 4.1; similarly, in Figure 4 we choose d = 10 and vary L. First, notice that in all experiments PGD converges faster than GD, as suggested by our theorems. Second, observe that the "horizontal" segment in each plot represents the number of iterations needed to escape a saddle point. For GD, the length of this segment grows at a fixed rate, which coincides with the result mentioned at the beginning of Section 4.1 (the number of iterations to escape a saddle point increases each time by the multiplicative factor (L + γ)/γ). This is also verified in the figures by the fact that, as the ratio (L + γ)/γ becomes larger, the growth rate of the number of iterations to escape increases. On the other hand, the number of iterations for PGD to escape each saddle point is approximately constant (≈ 1/(ηγ)).
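As a rough illustration of the experimental setup (not the authors' code), a comparison driver might look as follows, reusing the pgd sketch above and the stated hyperparameters η = 1/(4L) and t_thres = 1, g_thres = r = e/100. The octopus objective of Equations (14)-(15) requires the spline construction of Appendix B, so toy_grad below is a hypothetical stand-in gradient with one strict-saddle direction, used only so the script runs end to end.

    import numpy as np

    def gd(grad, x, eta, n_steps):
        """Plain gradient descent baseline."""
        for _ in range(n_steps):
            x = x - eta * grad(x)
        return x

    def toy_grad(x):
        # Stand-in for the octopus gradient: f = -x_0^2 + sum_{i>0} x_i^2,
        # so coordinate 0 is a negative-curvature (escape) direction.
        g = 2.0 * x
        g[0] = -2.0 * x[0]
        return g

    L, d, n_steps = 2.0, 5, 1000
    eta, g_thres, r, t_thres = 1 / (4 * L), np.e / 100, np.e / 100, 1
    x0 = np.random.default_rng(1).uniform(-1, 1, size=d)   # init in [-1, 1]^d
    x_gd = gd(toy_grad, x0.copy(), eta, n_steps)
    x_pgd = pgd(toy_grad, x0.copy(), eta, g_thres, t_thres, r, n_steps)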
6 Conclusion

In this paper we established the failure of gradient descent to efficiently escape saddle points for general non-convex smooth functions. We showed that even under a very natural initialization scheme, gradient descent can require exponential time to converge to a local minimum, whereas perturbed gradient descent converges in polynomial time. Our results demonstrate the necessity of adding perturbations for efficient non-convex optimization.

We expect that our results and constructions will extend naturally to the stochastic setting. In particular, we expect that with random initialization, general stochastic gradient descent will need exponential time to escape saddle points in the worst case. However, if we add perturbations at each iteration, or if the inherent randomness is non-degenerate in every direction (so that the covariance of the noise is lower bounded), then polynomial time is known to suffice [Ge et al., 2015].

One open problem is whether GD is inherently slow when the local optimum lies inside the initialization region, in contrast to the initialization assumptions we used in Theorem 4.1 and Corollary 4.3. We believe that a similar construction in which GD passes through the neighborhoods of d saddle points will likely still apply, but more work is needed. Another interesting direction is to use our counter-example as a building block to prove a computational lower bound under an oracle model [Nesterov, 2013, Woodworth and Srebro, 2016].

This paper does not rule out the possibility that gradient descent performs well for some non-convex functions with special structure. Indeed, for the matrix square-root problem, Jain et al. [2017] show that with reasonable random initialization, gradient updates stay away from all saddle points and thus converge to a local minimum efficiently. It is an interesting future direction to identify other classes of non-convex functions that gradient descent can optimize efficiently, without suffering from the negative results described in this paper.

7 Acknowledgements

S.S.D. and B.P. were supported by NSF grant IIS1563887 and the ARPA-E Terra program. C.J. and M.I.J. were supported by the Mathematical Data Science program of the Office of Naval Research under grant number N00014-15-1-2670. J.D.L. was supported by ARO W911NF-17-1-0304. A.S. was supported by DARPA grant D17AP00001, AFRL grant FA8750-17-2-0212 and a CMU ProSEED/BrainHub Seed Grant. The authors thank Rong Ge, Qing Qu, John Wright, Elad Hazan, Sham Kakade, Benjamin Recht, Nathan Srebro, and Lin Xiao for useful discussions. The authors thank Stephen Wright and Michael O'Neill for pointing out calculation errors in an older version.

References

Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma. Finding approximate local minima faster than gradient descent. In STOC, 2017. Full version available at http://arxiv.org/abs/1611.01146.

Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Global optimality of local search for low rank matrix recovery. In Advances in Neural Information Processing Systems, pages 3873–3881, 2016.

Emmanuel J. Candes, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.

Yair Carmon and John C. Duchi. Gradient descent efficiently finds the cubic-regularized non-convex Newton step. arXiv preprint arXiv:1612.00547, 2016.

Yair Carmon, John C. Duchi, Oliver Hinder, and Aaron Sidford. Accelerated methods for non-convex optimization.
arXiv preprint arXiv:1611.00756, 2016.

Alan Chang. The Whitney extension theorem in high dimensions. arXiv preprint arXiv:1508.01779, 2015.

Frank E. Curtis, Daniel P. Robinson, and Mohammadreza Samadi. A trust region algorithm with a worst-case iteration complexity of O(ε^(-3/2)) for nonconvex optimization. Mathematical Programming, pages 1–32, 2014.

Randall L. Dougherty, Alan S. Edelman, and James M. Hyman. Nonnegativity-, monotonicity-, or convexity-preserving cubic and quintic Hermite interpolation. Mathematics of Computation, 52(186):471–494, 1989.

Simon S. Du, Jason D. Lee, and Yuandong Tian. When is a convolutional filter easy to learn? arXiv preprint arXiv:1709.06129, 2017.

Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points: online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pages 797–842, 2015.

Rong Ge, Jason D. Lee, and Tengyu Ma. Matrix completion has no spurious local minimum. In Advances in Neural Information Processing Systems, pages 2973–2981, 2016.

Rong Ge, Chi Jin, and Yi Zheng. No spurious local minima in nonconvex low rank problems: A unified geometric analysis. In Proceedings of the 34th International Conference on Machine Learning, pages 1233–1242, 2017.

Moritz Hardt. Understanding alternating minimization for matrix completion. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 651–660. IEEE, 2014.

Prateek Jain, Chi Jin, Sham Kakade, and Praneeth Netrapalli. Global convergence of non-convex gradient descent for computing matrix squareroot. In Artificial Intelligence and Statistics, pages 479–488, 2017.

Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, and Michael I. Jordan. How to escape saddle points efficiently. In Proceedings of the 34th International Conference on Machine Learning, pages 1724–1732, 2017.

Jason D. Lee, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. Gradient descent only converges to minimizers. In Conference on Learning Theory, pages 1246–1257, 2016.

Kfir Y. Levy. The power of normalization: Faster evasion of saddle points. arXiv preprint arXiv:1611.04831, 2016.

Xingguo Li, Zhaoran Wang, Junwei Lu, Raman Arora, Jarvis Haupt, Han Liu, and Tuo Zhao. Symmetry, saddle points, and global geometry of nonconvex matrix factorization. arXiv preprint arXiv:1612.09296, 2016.

Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer Science & Business Media, 2013.

Yurii Nesterov and Boris T. Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, 108(1):177–205, 2006.

Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi. Phase retrieval using alternating minimization. In Advances in Neural Information Processing Systems, pages 2796–2804, 2013.

J. Palis Jr. and Welington De Melo. Geometric Theory of Dynamical Systems: An Introduction. Springer Science & Business Media, 2012.

Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, and Sujay Sanghavi. Non-square matrix sensing without spurious local minima via the Burer-Monteiro approach. In Artificial Intelligence and Statistics, pages 65–74, 2017.

Robin Pemantle. Nonconvergence to unstable points in urn models and stochastic approximations. The Annals of Probability, pages 698–712, 1990.

Ju Sun, Qing Qu, and John Wright. A geometric analysis of phase retrieval. In Information Theory (ISIT), 2016 IEEE International Symposium on, pages 2379–2383. IEEE, 2016.

Ju Sun, Qing Qu, and John Wright.
Complete dictionary recovery over the sphere I: Overview and the geometric picture. IEEE Transactions on Information Theory, 63(2):853–884, 2017.

Ruoyu Sun and Zhi-Quan Luo. Guaranteed matrix completion via non-convex factorization. IEEE Transactions on Information Theory, 62(11):6535–6579, 2016.

Hassler Whitney. Analytic extensions of differentiable functions defined in closed sets. Transactions of the American Mathematical Society, 36(1):63–89, 1934.

Blake E. Woodworth and Nati Srebro. Tight complexity bounds for optimizing composite objectives. In Advances in Neural Information Processing Systems, pages 3639–3647, 2016.

Xinyang Yi, Dohyung Park, Yudong Chen, and Constantine Caramanis. Fast algorithms for robust PCA via gradient descent. In Advances in Neural Information Processing Systems, pages 4152–4160, 2016.

G. George Yin and Harold J. Kushner. Stochastic Approximation and Recursive Algorithms and Applications, volume 35. Springer, 2003.

Xiao Zhang, Lingxiao Wang, and Quanquan Gu. Stochastic variance-reduced gradient descent for low-rank matrix recovery from linear measurements. arXiv preprint arXiv:1701.00481, 2017.
Union of Intersections (UoI) for Interpretable Data Driven Discovery and Prediction

Kristofer E. Bouchard, Alejandro F. Bujan, Shashanka Ubaru, Prabhat, Edward F. Chang, Michael W. Mahoney, Farbod Roosta-Khorasani, Antoine M. Snijders, Jian-Hua Mao, Sharmodeep Bhattacharyya

Abstract

The increasing size and complexity of scientific data could dramatically enhance discovery and prediction for basic scientific applications. Realizing this potential, however, requires novel statistical analysis methods that are both interpretable and predictive. We introduce Union of Intersections (UoI), a flexible, modular, and scalable framework for enhanced model selection and estimation. Methods based on UoI perform model selection and model estimation through intersection and union operations, respectively. We show that UoI-based methods achieve low-variance and nearly unbiased estimation of a small number of interpretable features, while maintaining high-quality prediction accuracy. We perform extensive numerical investigation to evaluate a UoI algorithm (UoI-Lasso) on synthetic and real data. In doing so, we demonstrate the extraction of interpretable functional networks from human electrophysiology recordings as well as accurate prediction of phenotypes from genotype-phenotype data with reduced features. We also show (with the UoI-L1Logistic and UoI-CUR variants of the basic framework) improved prediction parsimony for classification and matrix factorization on several benchmark biomedical data sets. These results suggest that methods based on the UoI framework could improve interpretation and prediction in data-driven discovery across scientific fields.

1 Introduction

A central goal of data-driven science is to identify a small number of features (i.e., predictor variables; X in Fig. 1(a)) that generate a response variable of interest (y in Fig. 1(a)) and then to estimate the relative contributions of these features as the parameters in the generative process relating the predictor variables to the response variable (Fig. 1(a)). A common characteristic of many modern massive data sets is that they have a large number of features (i.e., high-dimensional data), while also exhibiting a high degree of sparsity and/or redundancy [2, 19, 11]. That is, while formally high-dimensional, most of the useful information in the data features for tasks such as reconstruction, regression, and classification can be restricted or compressed into a much smaller number of important features.

Author affiliations: Biological Systems and Engineering Division, LBNL ([email protected]); Redwood Center, UC Berkeley ([email protected]); ICSI and Department of Statistics, UC Berkeley ({farbod,mmahoney}@icsi.berkeley.edu); Department of Computer Science and Engineering, University of Minnesota ([email protected]); NERSC, LBNL ([email protected]); Biological Systems and Engineering Division, LBNL ({AMSnijders,jhmao}@lbl.gov); Department of Neurological Surgery, UC San Francisco ([email protected]); Department of Statistics, Oregon State University ([email protected]).

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: The basic UoI framework. (a) Schematic of regularization and ensemble methods for regression. (b) Schematic of the Union of Intersections (UoI) framework. (c) A data-distributed version of the UoI-Lasso algorithm. (d) Dependence of false positives, false negatives, and estimation variability on the number of bootstraps in the selection (B1) and estimation (B2) modules.
In regression and classification, it is common to employ sparsity-inducing regularization to attempt to achieve simultaneously two related but quite different goals: to identify the features important for prediction (i.e., model selection) and to estimate the associated model parameters (i.e., model estimation) [2, 19]. For example, the Lasso algorithm in linear regression uses L1 regularization to penalize the total magnitude of the model parameters, and this often results in feature compression by setting some parameters exactly to zero [18] (see Fig. 1(a), pure white elements in the right-hand vectors). It is well known that this type of regularization implies a prior assumption about the distribution of the parameters (e.g., L1 regularization implicitly assumes a Laplacian prior distribution) [12]. However, strong sparsity-inducing regularization, which is common when there are many more potential features than data samples (i.e., the so-called small n/p regime), can severely hinder the interpretation of model parameters (Fig. 1(a), indicated by less saturated colors between top and bottom vectors on the right-hand side). For example, while sparsity may be achieved, incorrect features may be chosen and parameter estimates may be biased. In addition, it can impede model selection and estimation when the true model distribution deviates from the assumed distribution [2, 10]. This may not matter for prediction quality, but it clearly has negative consequences for interpretability, an admittedly not completely-well-defined property of algorithms that is crucial in many scientific applications [9]. In this context, interpretability reflects the degree to which an algorithm returns a small number of physically meaningful features with unbiased and low-variance estimates of their contributions.

On the other hand, another common characteristic of many state-of-the-art methods is to combine several related models for a given task. In statistical data analysis, this is often formalized by so-called ensemble methods, which improve prediction accuracy by combining parameter estimates [12]. In particular, by combining several different models, ensemble methods often include more features to predict the response variables, and thus the number of data features is expanded relative to the individuals in the ensemble. For example, estimating an ensemble of model parameters by randomly resampling the data many times (e.g., bootstrapping) and then averaging the parameter estimates (e.g., bagging) can yield improved prediction accuracy by reducing estimation variability [8, 12] (see Fig. 1(a), bottom). However, by averaging estimates from a large ensemble, this process often results in many non-zero parameters, which can hinder interpretability and the identification of the true model support (compare top and bottom vectors on the right-hand side of Fig. 1(a)). Taken together, these observations suggest that explicit and more precise control of feature compression and expansion may result in an algorithm with improved interpretative and predictive properties.

In this paper, we introduce Union of Intersections (UoI), a flexible, modular, and scalable framework to enhance both the identification of features (model selection) and the estimation of the contributions of these features (model estimation).
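As a concrete (and purely illustrative) rendering of the two baseline behaviors just described, the sketch below fits a single Lasso and a bagged ensemble of Lassos with scikit-learn; note how L1 regularization zeroes out coefficients while bootstrap averaging re-populates them. The synthetic data and hyperparameters are our own arbitrary choices, not from the paper.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, k = 100, 50, 5                       # n samples, p features, k true features
    beta = np.zeros(p)
    beta[:k] = rng.normal(size=k)
    X = rng.normal(size=(n, p))
    y = X @ beta + 0.1 * rng.normal(size=n)

    # Sparsity-inducing regularization: many coefficients set exactly to zero.
    lasso = Lasso(alpha=0.1).fit(X, y)
    print("Lasso nonzeros:", np.sum(lasso.coef_ != 0))

    # Bagging: average Lasso fits over bootstrap resamples; the union of supports
    # typically re-expands the number of nonzero coefficients.
    B = 50
    coefs = np.zeros((B, p))
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        coefs[b] = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
    print("Bagged nonzeros:", np.sum(coefs.mean(axis=0) != 0))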
We have found that the UoI framework permits us to explore the interpretability-predictivity trade-off space without imposing an explicit prior on the model distribution and without formulating a non-convex problem, thereby often leading to improved interpretability and prediction. Ideally, data analysis methods in many scientific applications should be selective (only features that influence the response variable are selected), accurate (estimated parameters in the model are as close to the true values as possible), predictive (allowing prediction of the response variable), stable (e.g., the variability of the estimated parameters is small), and scalable (able to return an answer in a reasonable amount of time on very large data sets) [17, 2, 15, 10]. We show empirically that UoI-based methods can simultaneously achieve these goals, results supported by preliminary theory. We primarily demonstrate the power of UoI-based methods in the context of sparse linear regression (UoI-Lasso), as it is the canonical statistical/machine learning problem, it is theoretically tractable, and it is widely used in virtually every field of scientific inquiry. However, our framework is very general, and we demonstrate this by extending UoI to classification (UoI-L1Logistic) and matrix factorization (UoI-CUR) problems. While our main focus is on neuroscience (broadly speaking) applications, our results also highlight the power of UoI across a broad range of synthetic and real scientific data sets.^1

^1 More details, including both empirical and theoretical results, are in the associated technical report [4].

2 Union of Intersections (UoI)

For concreteness, we consider an application of UoI in the context of linear regression. Specifically, we consider the problem of estimating the parameters β ∈ R^p that map a p-dimensional vector of predictor variables x ∈ R^p to the observation variable y ∈ R, when there are n paired samples of x and y corrupted by i.i.d. Gaussian noise:

    y = β^T x + ε,                                                      (1)

where ε ~ N(0, σ²) independently for each sample. When the true β is thought to be sparse (i.e., in the L0-norm sense), then an estimate of β (call it β̂) can be found by solving a constrained optimization problem of the form:

    β̂ ∈ argmin_{β ∈ R^p}  Σ_{i=1}^{n} (y_i − β^T x_i)² + λ R(β).        (2)

Here, R(β) is a regularization term that typically penalizes the overall magnitude of the parameter vector β (e.g., R(β) = ||β||_1 is the target of the Lasso algorithm).

The Basic UoI Framework. The key mathematical idea underlying UoI is to perform model selection through intersection (compressive) operations and model estimation through union (expansive) operations, in that order. This is schematized in Fig. 1(b), which plots a hypothetical range of selected features (x_1 : x_p, abscissa) for different values of the regularization parameter (λ, ordinate); see [4] for a more detailed description. In particular, UoI first performs feature compression (Fig. 1(b), Step 1) through intersection operations (intersection of supports across bootstrap samples) to construct a family S of candidate model supports (Fig. 1(b): e.g., S_{j−1}, the opaque red region, is the intersection of the abutting pink regions). UoI then performs feature expansion (Fig. 1(b), Step 2) through a union of (potentially) different model supports: for each bootstrap sample, the best model estimate (across different supports) is chosen, and then a new model is generated by averaging the estimates (i.e., taking the union) across bootstrap samples
(Fig. 1(b), the dashed vertical black line indicates the union of features from S_j and S_{j+1}). Both feature compression and expansion are performed across all regularization strengths. In UoI, feature compression via intersections and feature expansion via unions are balanced to maximize prediction accuracy of the sparsely estimated model parameters for the response variable y.

Innovations in Union of Intersections. UoI has three central innovations: (1) calculate model supports (S_j) using an intersection operation for a range of regularization parameters (increases in λ shrink all values of β̂ towards 0), efficiently constructing a family of potential model supports {S : S_j ⊆ S_{j−k}, for k sufficiently large}; (2) use a novel form of model averaging in the union step to directly optimize prediction accuracy (this can be thought of as a hybrid of bagging [8] and boosting [16]); and (3) combine pure model selection using an intersection operation with model selection/estimation using a union operation, in that order (which controls both false negatives and false positives in model selection). Together, these innovations often lead to better selection, estimation, and prediction accuracy. Importantly, this is done without explicitly imposing a prior on the distribution of parameter values, and without formulating a non-convex optimization problem.

The UoI-Lasso Algorithm. Since the basic UoI framework, as described in Fig. 1(c), has two main computational modules (one for model selection and one for model estimation), UoI is a framework into which many existing algorithms can be inserted. Here, for simplicity, we primarily demonstrate UoI in the context of linear regression with the UoI-Lasso algorithm, although we also apply it to classification with the UoI-L1Logistic algorithm and to matrix factorization with the UoI-CUR algorithm. UoI-Lasso expands on the BoLasso method for the model selection module [1], and it performs a novel model averaging in the estimation module based on averaging ordinary least squares (OLS) estimates with potentially different model supports. UoI-Lasso (and UoI in general) has a high degree of natural algorithmic parallelism that we have exploited in a distributed Python-MPI implementation. (Fig. 1(c) schematizes a simplified distributed implementation of the algorithm; see [4] for more details.) This parallelized UoI-Lasso algorithm distributes bootstrap data samples and regularization parameters (in Map) for independent computations involving convex optimizations (Lasso and OLS, in Solve), and it then combines results (in Reduce) with intersection operations (model selection module) and union operations (model estimation module). By solving independent convex optimization problems (e.g., Lasso, OLS) with distributed data resampling, our UoI-Lasso algorithm efficiently constructs a family of model supports, and it then averages nearly unbiased model estimates, potentially with different supports, to maximize prediction accuracy while minimizing the number of features to aid interpretability.

3 Results

3.1 Methods

All numerical results used 100 random sub-samplings with replacement of 80-10-10 cross-validation to estimate model parameters (80%), choose optimal meta-parameters (e.g., λ; 10%), and determine prediction quality (10%). Below, β denotes the values of the true model parameters, β̂ denotes the estimated values of the model parameters from some algorithm (e.g., UoI-Lasso), and S_β
is the support of the true model (i.e., the set of non-zero parameter indices), and S_β̂ is the support of the estimated model. We calculated several metrics of model selection, model estimation, and prediction accuracy:

(1) Selection accuracy (set overlap): 1 − |S_β Δ S_β̂| / (|S_β|_0 + |S_β̂|_0), where Δ is the symmetric set difference operator. This metric ranges in [0, 1], taking a value of 0 if S_β and S_β̂ have no elements in common, and taking a value of 1 if and only if they are identical.

(2) Estimation error (r.m.s.): sqrt( (1/p) Σ_i (β_i − β̂_i)² ).

(3) Estimation variability (parameter variance): E[β̂²] − (E[β̂])².

(4) Prediction accuracy (R²): 1 − Σ_{i=1}^{n} (y_i − ŷ_i)² / Σ_{i=1}^{n} (y_i − E[y])².

(5) Prediction parsimony (BIC): n · log( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² ) + ||β̂||_0 · log(n).

For the experimental data, as the true model size is unknown, the selection ratio ||β̂||_0 / p is a measure of the overall size of the estimated model relative to the total number of parameters. For the classification task using UoI-L1Logistic, BIC was calculated as −2ℓ + |S_β̂| log N, where ℓ is the log-likelihood on the validation set. For the matrix factorization task using UoI-CUR, reconstruction accuracy was the Frobenius norm of the difference between the data matrix A and the low-rank approximation A′ constructed from A(:, c), the reduced column matrix of A: ||A − A′||_F, where c is the set of k selected columns.

3.2 Model Selection and Stability: Explicit Control of False Positives, False Negatives, and Estimate Stability

Due to the form of the basic UoI framework, we can control both false negative and false positive discoveries, as well as the stability of the estimates. For any regularized regression method as in (2), a decrease in the penalization parameter (λ) tends to increase the number of false positives, and an increase in λ tends to increase false negatives. Preliminary analysis of the UoI framework shows that, for false positives, a large number of bootstrap resamples in the intersection step (B1) increases the probability of no false positive discoveries, while a large number of bootstraps in the union step (B2) decreases that probability. Conversely, for false negatives, a large number of bootstrap resamples in the union step (B2) increases the probability of no false negative discoveries, while a large number of bootstraps in the intersection step (B1) decreases that probability. Also, a large number of bootstrap samples in the union step (B2) gives a more stable estimate. These properties were confirmed numerically for UoI-Lasso and are displayed in Fig. 1(d), which plots the average normalized false negatives, false positives, and standard deviation of model estimates from running UoI-Lasso with ranges of B1 and B2 on four different models. These results are supported by preliminary theoretical analysis of a variant of UoI-Lasso (see [4]). Thus, the relative values of B1 and B2 express the fundamental balance between the two basic operations of intersection (which compresses the feature space) and union (which expands the feature space). Model selection through intersection often excludes true parameters (i.e., false negatives), and, conversely, model estimation using unions often includes erroneous parameters (i.e., false positives); a schematic sketch of this two-module balance is given below.
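The following skeleton is a minimal sketch assembled from the description above and in [4] (it is not the authors' released Python-MPI implementation): bootstrapped Lasso supports are intersected per regularization value, then bootstrapped OLS estimates are unioned by averaging the best-predicting support per resample. The hyperparameter values are placeholders.

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    def uoi_lasso_sketch(X, y, lambdas, B1=20, B2=20, rng=np.random.default_rng(0)):
        n, p = X.shape

        # Module 1 (selection): intersect Lasso supports across B1 bootstraps per lambda.
        supports = []
        for lam in lambdas:
            common = np.ones(p, dtype=bool)
            for _ in range(B1):
                idx = rng.integers(0, n, size=n)
                coef = Lasso(alpha=lam, fit_intercept=False).fit(X[idx], y[idx]).coef_
                common &= (coef != 0)               # intersection of supports
            if common.any():
                supports.append(np.flatnonzero(common))

        # Module 2 (estimation): for each bootstrap, fit OLS on every candidate
        # support, keep the best on held-out data, then average over bootstraps.
        betas = np.zeros((B2, p))
        for b in range(B2):
            idx = rng.integers(0, n, size=n)
            heldout = np.setdiff1d(np.arange(n), idx)
            best_err, best_beta = np.inf, np.zeros(p)
            for s in supports:
                beta = np.zeros(p)
                beta[s] = LinearRegression(fit_intercept=False).fit(X[idx][:, s], y[idx]).coef_
                err = np.mean((y[heldout] - X[heldout] @ beta) ** 2)
                if err < best_err:
                    best_err, best_beta = err, beta
            betas[b] = best_beta
        return betas.mean(axis=0)                   # union step: model averaging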
By using stochastic resampling, combined with model selection through intersections followed by model estimation through unions, UoI permits us to mitigate the feature inclusion/exclusion inherent in either operation alone. Essentially, the limitations of selection by intersection are counteracted by the union of estimates, and vice versa.

3.3 UoI-Lasso has Superior Performance on Simulated Data Sets

To explore the performance of the UoI-Lasso algorithm, we performed extensive numerical investigations on simulated data sets, where we can control key properties of the data.

Figure 2: Range of observed results, in comparison with existing algorithms. (a) True β distribution (grey histograms) and estimated values (colored lines). (b) Scatter plot of true and estimated values of the observation variable on held-out samples. (c) Metrics of algorithm performance.

There is a large number of algorithms available for linear regression; we picked some of the most popular (e.g., Lasso) as well as less common but more powerful algorithms (e.g., SCAD, a non-convex method). Specifically, we compared UoI-Lasso to five other model selection/estimation
This highlights the improved power of U oILasso to extract sparse graphs with functionally meaningful features relative to even some non-convex methods. 3.4 We calculated connectivity graphs during the production of 9 consonant-vowel syllables. Fig. 3(e) displays a summary of prediction accuracy for U oILasso networks (red) and SCAD networks (black) 6 Figure 3: Application of UoI to neuroscience and genetics data. (a)-(f): Functional connectivity networks from ECoG recordings during speech production. (g)-(h): Parsimonious prediction of complex phenotypes form genotype and phenotype data. as a function of time. The average relative prediction accuracy (compared to baseline times) for the U oILasso network was generally greater during the time of peak phoneme encoding [T = -100:200] compared to the SCAD network. Fig. 3(f) plots the time course of the parameter selection ratio for the U oILasso network (red) and SCAD network (black). The U oILasso network was consistently ? 5? sparser than the SCAD network. These results demonstrate that U oILasso extracts sparser graphs from noisy neural signals with a modest increase in prediction accuracy compared to SCAD. We next investigated whether U oILasso would improve the identification of a small number of highly predictive features from genotype-phenotype data. To do so, we analyzed data from n = 365 mice (173 female, 192 male) that are part of the genetically diverse Collaborative Cross cohort. We analyzed single-nucleotide polymorphisms (SNPs) from across the entire genome of each mouse (p = 11, 563 SNPs). For each animal, we measured two continuous, quantitative phenotypes: weight and behavioral performance on the rotorod task (see [14] for details). We focused on predicting these phenotypes from a small number of geneotype-phenotype features. We found that U oILasso identified and estimated a small number of features that were sufficient to explain large amounts of variability in these complex behavioral and physiological phenotypes. Fig. 3(g) displays the non-zero values estimated for the different features (e.g., location of loci on the genome) contributing to the weight (black) and speed (red) phenotype. Here, non-opaque points correspond to the mean ? s.d. across cross-validation samples, while the opaque points are the medians. Importantly, for both speed and weight phenotypes, we confirmed that several identified predictor features had been reported in the literature, though by different studies, e.g., genes coding for Kif1b, Rrm2b/Ubr5, and Dloc2. (See [4] for more details.) Accurate prediction of phenotypic variability with a small number of factors was a unique property of models found by U oILasso . For both weight and rotorod performance, models fit by U oILasso had marginally increased prediction accuracy compared to other methods (+1%), but they did so with far fewer parameters (lower selection ratios). This results in prediction parsimony (BIC) that was several orders of magnitude better (Fig. 3(h)). Together, these results demonstrate that U oILasso can identify a small number of genetic/physiological factors that are highly predictive of complex physiological and behavioral phenotypes. 7 Figure 4: Extension of UoI to classification and matrix decomposition. (a) UoI for classification (U oIL1Logistic ). (b) UoI for matrix decomposition (U oICU R ); solid and dashed lines are for PAH and dashed SORCH data sets, respectively. 
3.5 UoI-L1Logistic and UoI-CUR: Application of UoI to Classification and Matrix Decomposition

As noted, UoI is a framework into which other methods can be inserted. While we have primarily demonstrated UoI in the context of linear regression, it is much more general than that. To illustrate this, we implemented a classification algorithm (UoI-L1Logistic) and a matrix decomposition algorithm (UoI-CUR), and we compared them to the base methods on several data sets (see [4] for details). In classification, UoI resulted in either equal or improved prediction accuracy with 2x-10x fewer parameters for a variety of biomedical classification tasks (Fig. 4(a)). For matrix decomposition (in this case, column subset selection), for a given dimensionality, UoI resulted in reconstruction errors that were consistently lower than the base method (BasicCUR) and quickly approached those of an unscalable greedy algorithm (GreedyCUR) on two genetics data sets (Fig. 4(b)). In both cases, UoI improved prediction parsimony relative to the base (classification or decomposition) method.

4 Discussion

UoI-based methods leverage stochastic data resampling and a range of sparsity-inducing regularization parameters/dimensions to build families of potential features, and they then average nearly unbiased parameter estimates of selected features to maximize predictive accuracy. Thus, UoI separates model selection with intersection operations from model estimation with union operations: the limitations of selection by intersection are counteracted by the union of estimates, and vice versa. Stochastic data resampling can be viewed as a perturbation of the data, and UoI efficiently identifies and robustly estimates features that are stable to these perturbations.

A unique property of UoI-based methods is the ability to control both false positives and false negatives. Initial theoretical work (see [4]) shows that increasing the number of bootstraps in the selection module (B1) increases the amount of feature compression (the primary controller of false positives), while increasing the number of bootstraps in the estimation module (B2) increases feature expansion (the primary controller of false negatives), and we observe this empirically. Thus, neither should be too large, and their relative values express the balance between feature compression and expansion. This tension is seen in many places in machine learning and data analysis: local nearest-neighbor methods vs. global latent factor models; local spectral methods that tend to expand due to their diffusion-based properties vs. flow-based methods that tend to contract; and sparse L1 vs. dense L2 penalties/priors more generally. Interestingly, an analogous balance of compressive and expansive forces contributes to neural learning algorithms based on Hebbian synaptic plasticity [6]. Our results highlight how revisiting popular methods in light of new data science demands can lead to still further-improved methods, and they suggest several directions for theoretical and empirical work.

References

[1] F. R. Bach. Bolasso: model consistent Lasso estimation through the bootstrap. In Proceedings of the 25th International Conference on Machine Learning, pages 33–40, 2008.

[2] P. Bickel and B. Li. Regularization in statistics. TEST, 15(2):271–344, 2006.

[3] K. E. Bouchard. Bootstrapped adaptive threshold selection for statistical model selection and estimation. Technical report, 2015. Preprint: arXiv:1505.03511.

[4] K. E. Bouchard, A. F. Bujan, F. Roosta-Khorasani, S.
Ubaru, Prabhat, A. M. Snijders, J.-H. Mao, E. F. Chang, M. W. Mahoney, and S. Bhattacharyya. Union of Intersections (UoI) for interpretable data driven discovery and prediction. Technical report, 2017. Preprint: arXiv:1705.07585 (also available as Supplementary Material).

[5] K. E. Bouchard and E. F. Chang. Control of spoken vowel acoustics and the influence of phonetic context in human speech sensorimotor cortex. Journal of Neuroscience, 34(38):12662–12677, 2014.

[6] K. E. Bouchard, S. Ganguli, and M. S. Brainard. Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences. Frontiers in Computational Neuroscience, 9(92), 2015.

[7] K. E. Bouchard, N. Mesgarani, K. Johnson, and E. F. Chang. Functional organization of human sensorimotor cortex for speech articulation. Nature, 495(7441):327–332, 2013.

[8] L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.

[9] National Research Council. Frontiers in Massive Data Analysis. The National Academies Press, Washington, D.C., 2013.

[10] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.

[11] S. Ganguli and H. Sompolinsky. Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annual Review of Neuroscience, 35(1):485–508, 2012.

[12] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, New York, 2003.

[13] A. Javanmard and A. Montanari. Confidence intervals and hypothesis testing for high-dimensional regression. Journal of Machine Learning Research, 15:2869–2909, 2014.

[14] J.-H. Mao, S. A. Langley, Y. Huang, M. Hang, K. E. Bouchard, S. E. Celniker, J. B. Brown, J. K. Jansson, G. H. Karpen, and A. M. Snijders. Identification of genetic factors that modify motor performance and body weight using Collaborative Cross mice. Scientific Reports, 5:16247, 2015.

[15] V. Marx. Biology: The big challenges of big data. Nature, 498(7453):255–260, 2013.

[16] R. E. Schapire and Y. Freund. Boosting: Foundations and Algorithms. MIT Press, Cambridge, MA, 2012.

[17] T. J. Sejnowski, P. S. Churchland, and J. A. Movshon. Putting big data to good use in neuroscience. Nature Neuroscience, 17(11):1440–1441, 2014.

[18] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58(1):267–288, 1996.

[19] M. J. Wainwright. Structured regularizers for high-dimensional problems: Statistical and computational issues. Annual Review of Statistics and Its Application, 1:233–253, 2014.
One-Shot Imitation Learning

Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, Wojciech Zaremba
Berkeley AI Research Lab and OpenAI (work done while at OpenAI)
{rockyduan, jonathanho, pabbeel}@eecs.berkeley.edu
{marcin, bstadie, jonas, ilyasu, woj}@openai.com

Abstract

Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Specifically, we consider the setting where there is a very large (maybe infinite) set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained such that when it takes as input the first demonstration and a state sampled from the second demonstration, it should predict the action corresponding to the sampled state. At test time, a full demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. Our experiments show that the use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

We are interested in robotic systems that are able to perform a variety of complex useful tasks, e.g. tidying up a home or preparing a meal. The robot should be able to learn new tasks without long system interaction time. To accomplish this, we must solve two broad problems. The first problem is that of dexterity: robots should learn how to approach, grasp and pick up complex objects, and how to place or arrange them into a desired configuration. The second problem is that of communication: how to communicate the intent of the task at hand, so that the robot can replicate it in a broader set of initial conditions.

Demonstrations are an extremely convenient form of information we can use to teach robots to overcome these two challenges. Using demonstrations, we can unambiguously communicate essentially any manipulation task, and simultaneously provide clues about the specific motor skills required to perform the task. We can compare this with an alternative form of communication, namely natural language. Although language is highly versatile, effective, and efficient, natural language processing
systems are not yet at a level where we could easily use language to precisely describe a complex task to a robot. Compared to language, using demonstrations has two fundamental advantages: first, it does not require knowledge of language, as it is possible to communicate complex tasks to humans who don't speak one's language. And second, there are many tasks that are extremely difficult to explain in words, even if we assume perfect linguistic abilities: for example, explaining how to swim without demonstration and experience seems to be, at the very least, an extremely challenging task.

Figure 1: (a) Traditionally, policies are task-specific. For example, a policy might have been trained through an imitation learning algorithm to stack blocks into towers of height 3, and then another policy would be trained to stack blocks into towers of height 2, etc. (b) In this paper, we are interested in training networks that are not specific to one task, but rather can be told (through a single demonstration) what the current new task is, and be successful at this new task. For example, when it is conditioned on a single demonstration for task F, it should behave like a good policy for task F. (c) We can phrase this as a supervised learning problem, where we train this network on a set of training tasks, and with enough examples it should generalize to unseen, but related tasks. To train this network, in each iteration we sample a demonstration from one of the training tasks, and feed it to the network. Then, we sample another pair of observation and action from a second demonstration of the same task. When conditioned on both the first demonstration and this observation, the network is trained to output the corresponding action.

Indeed, learning from demonstrations has had many successful applications. However, so far these applications have either required careful feature engineering, or a significant amount of system interaction time. This is far from what we desire: ideally, we hope to demonstrate a certain task only once or a few times to the robot, and have it instantly generalize to new situations of the same task, without long system interaction time or domain knowledge about individual tasks.

In this paper we explore the one-shot imitation learning setting illustrated in Fig. 1, where the objective is to maximize the expected performance of the learned policy when faced with a new, previously unseen, task, and having received as input only one demonstration of that task. For the tasks we consider, the policy is expected to achieve good performance without any additional system interaction, once it has received the demonstration.

We train a policy on a broad distribution over tasks, where the number of tasks is potentially infinite.
For each training task we assume the availability of a set of successful demonstrations. Our learned policy takes as input: (i) the current observation, and (ii) one demonstration that successfully solves a different instance of the same task (this demonstration is fixed for the duration of the episode). The policy outputs the current controls. We note that any pair of demonstrations for the same task provides a supervised training example for the neural net policy, where one demonstration is treated as the input, while the other as the output.

To make this model work, we made essential use of soft attention [6] for processing both the (potentially long) sequence of states and actions that correspond to the demonstration, and for processing the components of the vector specifying the locations of the various blocks in our environment. The use of soft attention over both types of inputs made strong generalization possible. In particular, on a family of block stacking tasks, our neural network policy was able to perform well on novel block configurations which were not present in any training data. Videos of our experiments are available at http://bit.ly/nips2017-oneshot.

2 Related Work

Imitation learning considers the problem of acquiring skills from observing demonstrations. Survey articles include [48, 11, 3]. Two main lines of work within imitation learning are behavioral cloning, which performs supervised learning from observations to actions (e.g., [41, 44]), and inverse reinforcement learning [37], in which a reward function [1, 66, 29, 18, 22] that explains the demonstrations as (near) optimal behavior is estimated. While this past work has led to a wide range of impressive robotics results, it considers each skill separately, and having learned to imitate one skill does not accelerate learning to imitate the next skill.

One-shot and few-shot learning have been studied for image recognition [61, 26, 47, 42], generative modeling [17, 43], and learning "fast" reinforcement learning agents with recurrent policies [16, 62]. Fast adaptation has also been achieved through fast-weights [5]. Like our algorithm, many of the aforementioned approaches are a form of meta-learning [58, 49, 36], where the algorithm itself is being learned. Meta-learning has also been studied to discover neural network weight optimization algorithms [8, 9, 23, 50, 2, 31]. This prior work on one-shot learning and meta-learning, however, is tailored to its respective domains (image recognition, generative models, reinforcement learning, optimization) and is not directly applicable in the imitation learning setting. Recently, [19] proposed a generic framework for meta-learning across several of the aforementioned domains; however, they do not consider the imitation learning setting.

Reinforcement learning [56, 10] provides an alternative route to skill acquisition, by learning through trial and error. Reinforcement learning has had many successes, including Backgammon [57], helicopter control [39], Atari [35], Go [52], continuous control in simulation [51, 21, 32] and on real robots [40, 30]. However, reinforcement learning tends to require a large number of trials and requires specifying a reward function to define the task at hand. The former can be time-consuming and the latter can often be significantly more difficult than providing a demonstration [37].

Multi-task and transfer learning considers the problem of learning policies with applicability and re-use beyond a single task.
Success stories include domain adaptation in computer vision [64, 34, 28, 4, 15, 24, 33, 59, 14] and control [60, 45, 46, 20, 54]. However, while these approaches acquire a multitude of skills faster than it would take to acquire each skill independently, they do not provide the ability to readily pick up a new skill from a single demonstration.

Our approach heavily relies on an attention model over the demonstration and an attention model over the current observation. We use the soft attention model proposed in [6] for machine translation, which has also been successful in image captioning [63]. The interaction networks proposed in [7, 12] also leverage locality of physical interaction in learning. Our model is also related to the sequence-to-sequence model [55, 13], as in both cases we consume a very long demonstration sequence and, effectively, emit a long sequence of actions.

3 One-Shot Imitation Learning

3.1 Problem Formalization

We denote a distribution of tasks by T, an individual task by t ∈ T, and a distribution of demonstrations for the task t by D(t). A policy is symbolized by π_θ(a|o, d), where a is an action, o is an observation, d is a demonstration, and θ are the parameters of the policy. A demonstration d ∈ D(t) is a sequence of observations and actions: d = [(o_1, a_1), (o_2, a_2), ..., (o_T, a_T)]. We assume that the distribution of tasks T is given, and that we can obtain successful demonstrations for each task. We assume that there is some scalar-valued evaluation function R_t(d) (e.g. a binary value indicating success) for each task, although this is not required during training. The objective is to maximize the expected performance of the policy, where the expectation is taken over tasks t ∈ T and demonstrations d ∈ D(t).

3.2 Block Stacking Tasks

To clarify the problem setting, we describe a concrete example of a distribution of block stacking tasks, which we will also later study in the experiments. The compositional structure shared among these tasks allows us to investigate nontrivial generalization to unseen tasks. For each task, the goal is to control a 7-DOF Fetch robotic arm to stack various numbers of cube-shaped blocks into a specific configuration specified by the user. Each configuration consists of a list of blocks arranged into towers of different heights, and can be identified by a string. For example, ab cd ef gh means that we want to stack 4 towers, each with two blocks: block A on top of block B, block C on top of block D, block E on top of block F, and block G on top of block H. Each of these configurations corresponds to a different task. Furthermore, in each episode the starting positions of the blocks may vary, which requires the learned policy to generalize even within the training tasks. In a typical task, an observation is a list of (x, y, z) object positions relative to the gripper, together with information about whether the gripper is opened or closed. The number of objects may vary across different task instances. We define a stage as a single operation of stacking one block on top of another; for example, the task ab cd ef gh has 4 stages.

3.3 Algorithm

In order to train the neural network policy, we make use of imitation learning algorithms such as behavioral cloning and DAGGER [44], which only require demonstrations rather than reward functions to be specified. This has the potential to be more scalable, since it is often easier to demonstrate a task than to specify a well-shaped reward function [38].
We start by collecting a set of demonstrations for each task, where we add noise to the actions in order to have wider coverage in the trajectory space. In each training iteration, we sample a list of tasks (with replacement). For each sampled task, we sample a demonstration as well as a small batch of observation-action pairs. The policy is trained to regress against the desired actions when conditioned on the current observation and the demonstration, by minimizing an ℓ2 or cross-entropy loss based on whether actions are continuous or discrete. A high-level illustration of the training procedure is given in Fig. 1(c). Across all experiments, we use Adamax [25] to perform the optimization with a learning rate of 0.001. A minimal sketch of one such training iteration follows.
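To make the sampling scheme concrete, here is a minimal sketch of this training loop for the continuous-action (ℓ2) case. The dataset layout, the mean-pooled demonstration summary, the linear stand-in policy, and the plain SGD update (used here in place of Adamax) are illustrative assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset layout: demos[task] is a list of demonstrations, and
# each demonstration is a list of (observation, action) pairs.
OBS_DIM, ACT_DIM, N_TASKS = 8, 4, 5
demos = {t: [[(rng.normal(size=OBS_DIM), rng.normal(size=ACT_DIM))
              for _ in range(50)] for _ in range(10)]
         for t in range(N_TASKS)}

# Stand-in policy: one linear map from (demo summary, observation) to action.
# The real model is the attention architecture of Section 4.
W = rng.normal(scale=0.1, size=(ACT_DIM, 2 * OBS_DIM))

def embed(demo):
    # Crude summary of the conditioning demonstration; the paper instead uses
    # temporal dropout, dilated temporal convolution, and neighborhood attention.
    return np.mean([obs for obs, _ in demo], axis=0)

lr = 1e-3  # the paper optimizes with Adamax at this rate; plain SGD here
for _ in range(1000):
    t = int(rng.integers(N_TASKS))               # sample a task (with replacement)
    i_cond, i_sup = rng.choice(len(demos[t]), size=2, replace=False)
    context = embed(demos[t][i_cond])            # conditioning demonstration
    batch = rng.choice(len(demos[t][i_sup]), size=8, replace=False)
    for j in batch:                              # small batch of (obs, action) pairs
        obs, act = demos[t][i_sup][j]
        x = np.concatenate([context, obs])
        err = W @ x - act                        # l2 regression onto the desired action
        W -= lr * np.outer(err, x)               # gradient step on 0.5 * ||err||^2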
4 Architecture

While, in principle, a generic neural network could learn the mapping from demonstration and current observation to appropriate action, we found it important to use an appropriate architecture. Our architecture for learning block stacking is one of the main contributions of this paper, and we believe it is representative of what architectures for one-shot imitation learning could look like in the future, when considering more complex tasks.

Our proposed architecture consists of three modules: the demonstration network, the context network, and the manipulation network. An illustration of the architecture is shown in Fig. 2. We will describe the main operations performed in each module below; a full specification is available in the Appendix.

4.1 Demonstration Network

The demonstration network receives a demonstration trajectory as input, and produces an embedding of the demonstration to be used by the policy. The size of this embedding grows linearly as a function of the length of the demonstration as well as the number of blocks in the environment.

Temporal Dropout: For block stacking, the demonstrations can span hundreds to thousands of time steps, and training with such long sequences can be demanding in both time and memory usage. Hence, we randomly discard a subset of time steps during training, an operation we call temporal dropout, analogous to [53, 27]. We denote by p the proportion of time steps that are thrown away. In our experiments, we use p = 0.95, which reduces the length of demonstrations by a factor of 20. During test time, we can sample multiple downsampled trajectories, use each of them to compute downstream results, and average these results to produce an ensemble estimate. In our experience, this consistently improves the performance of the policy.

Figure 2: Illustration of the network architecture. [Diagram: the demonstration passes through temporal dropout, dilated temporal convolution, and neighborhood attention in the demonstration network; the context network applies attention over the demonstration and attention over the current state (blocks A-J) to produce a context embedding; hidden layers in the manipulation network map the context embedding to an action.]

Neighborhood Attention: After downsampling the demonstration, we apply a sequence of operations, composed of dilated temporal convolution [65] and neighborhood attention. We now describe this second operation in more detail. Since our neural network needs to handle demonstrations with variable numbers of blocks, it must have modules that can process variable-dimensional inputs. Soft attention is a natural operation which maps variable-dimensional inputs to fixed-dimensional outputs. However, by doing so, it may lose information compared to its input. This is undesirable, since the amount of information contained in a demonstration grows as the number of blocks increases. Therefore, we need an operation that can map variable-dimensional inputs to outputs with comparable dimensions. Intuitively, rather than having a single output as a result of attending to all inputs, we have as many outputs as inputs, and have each output attend to all other inputs in relation to its own corresponding input.

We start by describing the soft attention module as specified in [6]. The input to the attention includes a query q, a list of context vectors {c_j}, and a list of memory vectors {m_j}. The i-th attention weight is given by w_i ∝ v^T tanh(q + c_i), where v is a learned weight vector. The output of attention is a weighted combination of the memory content, where the weights are given by a softmax operation over the attention weights. Formally, we have

    output ← Σ_i m_i exp(w_i) / Σ_j exp(w_j).

Note that the output has the same dimension as a memory vector. The attention operation can be generalized to multiple query heads, in which case there are as many output vectors as there are queries.

Now we turn to neighborhood attention. We assume there are B blocks in the environment. We denote the robot's state by s_robot, and the coordinates of each block by (x_1, y_1, z_1), ..., (x_B, y_B, z_B). The input to neighborhood attention is a list of embeddings h_1^in, ..., h_B^in of the same dimension, which can be the result of a projection operation over a list of block positions, or the output of a previous neighborhood attention operation. Given this list of embeddings, we use two separate linear layers to compute a query vector and a context embedding for each block: q_i ← Linear(h_i^in) and c_i ← Linear(h_i^in). The memory content to be extracted consists of the coordinates of each block, concatenated with the input embedding. The i-th query result is given by the following soft attention operation:

    result_i ← SoftAttn(query: q_i, context: {c_j}_{j=1..B}, memory: {((x_j, y_j, z_j), h_j)}_{j=1..B}).

Intuitively, this operation allows each block to query other blocks in relation to itself (e.g. find the closest block), and to extract the queried information. The gathered results are then combined with each block's own information to produce the output embedding per block. Concretely, we have

    output_i ← Linear(concat(h_i^in, result_i, (x_i, y_i, z_i), s_robot)).

In practice, we use multiple query heads per block, so that the size of each result_i is proportional to the number of query heads. A minimal single-head sketch of these two operations follows.
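The following is a minimal numpy sketch of the two operations just described, for a single query head. The weight shapes and the random parameters are illustrative assumptions, not the paper's specification.

import numpy as np

rng = np.random.default_rng(0)

def soft_attention(q, C, M, v):
    # Soft attention of [6]: w_i = v . tanh(q + c_i), output = softmax-weighted
    # sum of memory rows. q: (d,), C: (n, d) contexts, M: (n, m) memory, v: (d,).
    w = np.tanh(q + C) @ v                   # unnormalized attention weights, (n,)
    a = np.exp(w - w.max()); a /= a.sum()    # softmax over the n inputs
    return a @ M                             # (m,), same dimension as a memory row

def neighborhood_attention(H, pos, s_robot, params):
    # One neighborhood-attention step over B block embeddings H: (B, d).
    # pos: (B, 3) block coordinates; s_robot: robot state vector.
    Wq, Wc, Wo, v = params
    Q, C = H @ Wq, H @ Wc                    # per-block queries and contexts
    M = np.concatenate([pos, H], axis=1)     # memory: coordinates + embeddings
    out = []
    for i in range(len(H)):                  # each block attends over all blocks
        r = soft_attention(Q[i], C, M, v)
        out.append(np.concatenate([H[i], r, pos[i], s_robot]) @ Wo)
    return np.stack(out)                     # (B, d_out): one output per block

B, d, d_out = 5, 8, 8
H = rng.normal(size=(B, d))
pos = rng.normal(size=(B, 3))
s_robot = rng.normal(size=4)
params = (rng.normal(size=(d, d)), rng.normal(size=(d, d)),
          rng.normal(size=(2 * d + 3 + 3 + 4, d_out)), rng.normal(size=d))
out = neighborhood_attention(H, pos, s_robot, params)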
4.2 Context Network

The context network is the crux of our model. It processes both the current state and the embedding produced by the demonstration network, and outputs a context embedding, whose dimension does not depend on the length of the demonstration or the number of blocks in the environment. Hence, it is forced to capture only the relevant information, which will be used by the manipulation network.

Attention over demonstration: The context network starts by computing a query vector as a function of the current state, which is then used to attend over the different time steps in the demonstration embedding. The attention weights over different blocks within the same time step are summed together, to produce a single weight per time step. The result of this temporal attention is a vector whose size is proportional to the number of blocks in the environment. We then apply neighborhood attention to propagate the information across the embeddings of each block. This process is repeated multiple times, where the state is advanced using an LSTM cell with untied weights (a sketch of this temporal attention step appears at the end of this section).

Attention over current state: The previous operations produce an embedding whose size is independent of the length of the demonstration, but still dependent on the number of blocks. We then apply standard soft attention over the current state to produce fixed-dimensional vectors, where the memory content only consists of the positions of each block; together with the robot's state, this forms the context embedding, which is then passed to the manipulation network.

Intuitively, although the number of objects in the environment may vary, at each stage of the manipulation operation the number of relevant objects is small and usually fixed. For the block stacking environment specifically, the robot should only need to pay attention to the position of the block it is trying to pick up (the source block), as well as the position of the block it is trying to place it on top of (the target block). Therefore, a properly trained network can learn to match the current state with the corresponding stage in the demonstration, and to infer the identities of the source and target blocks, expressed as soft attention weights over the different blocks, which are then used to extract the corresponding positions to be passed to the manipulation network. Although we do not enforce this interpretation in training, our experiment analysis supports this interpretation of how the learned policy works internally.

4.3 Manipulation Network

The manipulation network is the simplest component. After extracting the information of the source and target blocks, it computes the action needed to complete the current stage of stacking one block on top of another, using a simple MLP network. (In principle, one can replace this module with an RNN module; we did not find this necessary for the tasks we consider.) This division of labor opens up the possibility of modular training: the manipulation network may be trained to complete this simple procedure, without knowing about demonstrations or about more than two blocks being present in the environment. We leave this possibility for future work.
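As a rough sketch of the temporal attention step referenced above: the per-block attention weights within a time step are summed into a single weight per step, and the softmax-weighted sum over time returns one vector per block. The tensor layout and the additive scoring function are assumptions carried over from the attention module of Section 4.1, not the paper's exact specification.

import numpy as np

rng = np.random.default_rng(0)

def attend_over_demo(q, demo_emb, v):
    # demo_emb: (T, B, d) embedding of a downsampled demonstration with T time
    # steps and B blocks; q: (d,) query computed from the current state.
    w = np.tanh(q + demo_emb) @ v               # (T, B) per-block weights
    w_t = w.sum(axis=1)                         # summed to one weight per time step
    a = np.exp(w_t - w_t.max()); a /= a.sum()   # softmax over time steps
    return np.einsum('t,tbd->bd', a, demo_emb)  # (B, d): one vector per block

T, B, d = 50, 5, 8
per_block = attend_over_demo(rng.normal(size=d),
                             rng.normal(size=(T, B, d)),
                             rng.normal(size=d))
# per_block would then pass through neighborhood attention and, eventually,
# attention over the current state to form the fixed-size context embedding.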
5 Experiments

We conduct experiments with the block stacking tasks described in Section 3.2. (Additional experiment results are available in the Appendix, including a simple illustrative example of particle reaching tasks and further analysis of block stacking.) These experiments are designed to answer the following questions:

- How does training with behavioral cloning compare with DAGGER?
- How does conditioning on the entire demonstration compare to conditioning on the final state, even when the latter already has enough information to fully specify the task?
- How does conditioning on the entire demonstration compare to conditioning on a "snapshot" of the trajectory, i.e. a small subset of frames that are most informative?
- Can our framework generalize to tasks that it has never seen during training?

To answer these questions, we compare the performance of the following architectures:

- BC: We use the same architecture as above, but train the policy using behavioral cloning.
- DAGGER: We use the architecture described in the previous section, and train the policy using DAGGER.
- Final state: This architecture conditions on the final state rather than on the entire demonstration trajectory. For the block stacking task family, the final state uniquely identifies the task, and there is no need for additional information. However, a full trajectory, one which contains information about intermediate stages of the task's solution, can make it easier to train the optimal policy, because the policy can learn to rely on the demonstration directly, without needing to memorize the intermediate steps into its parameters. This is related to the way in which reward shaping can significantly affect performance in reinforcement learning [38]. A comparison between the two conditioning strategies will tell us whether this hypothesis is valid. We train this policy using DAGGER.
- Snapshot: This architecture conditions on a "snapshot" of the trajectory, which includes the last frame of each stage along the demonstration trajectory. This assumes that a segmentation of the demonstration into multiple stages is available at test time, which gives it an unfair advantage compared to the other conditioning strategies. Hence, it may perform better than conditioning on the full trajectory, and serves as a reference, to inform us whether the policy conditioned on the entire trajectory can perform as well as if the demonstration were clearly segmented. Again, we train this policy using DAGGER.

We evaluate the policy on tasks seen during training, as well as on tasks unseen during training. Note that generalization is evaluated at multiple levels: the learned policy not only needs to generalize to new configurations and new demonstrations of tasks seen already, but also needs to generalize to new tasks. Concretely, we collect 140 training tasks and 43 test tasks, each with a different desired layout of the blocks. The number of blocks in each task can vary between 2 and 10. We collect 1000 trajectories per task for training, and maintain a separate set of trajectories and initial configurations to be used for evaluation. The trajectories are collected using a hard-coded policy.

5.1 Performance Evaluation

Figure 3: Comparison of different conditioning strategies ((a) performance on training tasks; (b) performance on test tasks; bars show average success rate as a function of the number of stages, for the hard-coded demo policy, BC, DAGGER, Snapshot, and Final state). The darkest bar shows the performance of the hard-coded policy, which unsurprisingly performs the best most of the time.

For architectures that use temporal dropout, we use an ensemble of 10 different downsampled demonstrations and average the action distributions. Then, for all architectures, we use the greedy action for evaluation. Fig. 3 shows the performance of the various architectures. Results for training and test tasks are presented separately, where we group tasks by the number of stages required to complete them. This is because tasks that require more stages to complete are typically more challenging. In fact, even our scripted policy frequently fails on the hardest tasks. We measure success rate per task by executing the greedy policy (taking the most confident action at every time step) in 100 different configurations, each conditioned on a different demonstration unseen during training. We report the average success rate over all tasks within the same group.

From the figure, we can observe that for the easier tasks with fewer stages, all of the different conditioning strategies perform equally well and almost perfectly.
As the difficulty (number of stages) increases, however, conditioning on the entire demonstration starts to outperform conditioning on the final state. One possible explanation is that, when conditioned only on the final state, the policy may struggle to decide which block it should stack first, a piece of information that is readily accessible from the demonstration, which not only communicates the task but also provides valuable information to help accomplish it. More surprisingly, conditioning on the entire demonstration also seems to outperform conditioning on the snapshot, which we originally expected to perform the best. We suspect that this is due to the regularization effect introduced by temporal dropout, which effectively augments the set of demonstrations seen by the policy during training. Another interesting finding was that training with behavioral cloning has the same level of performance as training with DAGGER, which suggests that the entire training procedure could work without requiring interactive supervision. In our preliminary experiments, we found that injecting noise into the trajectory collection process was important for behavioral cloning to work well; hence, in all experiments reported here we use noise injection. In practice, such noise can come from natural human-induced noise through tele-operation, or from artificially injecting additional noise before applying the action on the physical robot.

5.2 Visualization

We visualize the attention mechanisms underlying the main policy architecture to gain a better understanding of how it operates. There are two kinds of attention we are mainly interested in: one where the policy attends to different time steps in the demonstration, and the other where the policy attends to different blocks in the current state. Fig. 4 shows some of the attention heatmaps.

Figure 4: Visualizing the attentions performed by the policy during an entire execution ((a) attention over blocks in the current state; (b) attention over the downsampled demonstration). The task being performed is ab cde fg hij. Note that the policy has multiple query heads for each type of attention, and only one query head per type is visualized. (a) We can observe that the policy almost always focuses on a small subset of the block positions in the current state, which allows the manipulation network to generalize to operations over different blocks. (b) We can observe a sparse pattern of time steps with high attention weights. This suggests that the policy has essentially learned to segment the demonstration, attending only to important key frames. Note that there are roughly 6 regions of high attention weights, which corresponds nicely to the 6 stages required to complete the task.

6 Conclusions

In this work, we presented a simple model that maps a single successful demonstration of a task to an effective policy that solves said task in a new situation. We demonstrated the effectiveness of this approach on a family of block stacking tasks. There are many exciting directions for future work. We plan to extend the framework to demonstrations in the form of image data, which will allow more end-to-end learning without requiring a separate perception module. We are also interested in enabling the policy to condition on multiple demonstrations, in cases where one demonstration does not fully resolve ambiguity in the objective.
Furthermore, and most importantly, we hope to scale up our method on a much larger and broader distribution of tasks, and to explore its potential towards a general robotics imitation learning system that would be able to achieve an overwhelming variety of tasks.

7 Acknowledgement

We would like to thank our colleagues at UC Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Huawei Fellowship. Jonathan Ho was also supported by an NSF Fellowship.

References

[1] Pieter Abbeel and Andrew Ng. Apprenticeship learning via inverse reinforcement learning. In International Conference on Machine Learning (ICML), 2004.
[2] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Neural Information Processing Systems (NIPS), 2016.
[3] Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.
[4] Yusuf Aytar and Andrew Zisserman. Tabula rasa: Model transfer for object category detection. In 2011 International Conference on Computer Vision, pages 2252–2259. IEEE, 2011.
[5] Jimmy Ba, Geoffrey E Hinton, Volodymyr Mnih, Joel Z Leibo, and Catalin Ionescu. Using fast weights to attend to the recent past. In Neural Information Processing Systems (NIPS), 2016.
[6] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[7] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, pages 4502–4510, 2016.
[8] Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. In Optimality in Artificial and Biological Neural Networks, pages 6–8, 1992.
[9] Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.
[10] Dimitri P Bertsekas and John N Tsitsiklis. Neuro-dynamic programming: an overview. In Decision and Control, 1995, Proceedings of the 34th IEEE Conference on, volume 1, pages 560–564. IEEE, 1995.
[11] Sylvain Calinon. Robot programming by demonstration. EPFL Press, 2009.
[12] Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. In Int. Conf. on Learning Representations (ICLR), 2017.
[13] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[14] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, pages 647–655, 2014.
[15] Lixin Duan, Dong Xu, and Ivor Tsang. Learning with augmented features for heterogeneous domain adaptation. arXiv preprint arXiv:1206.4660, 2012.
[16] Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
[17] Harrison Edwards and Amos Storkey. Towards a neural statistician. International Conference on Learning Representations (ICLR), 2017.
[18] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In Proceedings of the 33rd International Conference on Machine Learning, volume 48, 2016.
[19] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017.
[20] Abhishek Gupta, Coline Devin, YuXuan Liu, Pieter Abbeel, and Sergey Levine. Learning invariant feature spaces to transfer skills with reinforcement learning. In Int. Conf. on Learning Representations (ICLR), 2017.
[21] Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pages 2944–2952, 2015.
[22] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565–4573, 2016.
[23] Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks. Springer, 2001.
[24] Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, and Kate Saenko. Efficient learning of domain-invariant image representations. arXiv preprint arXiv:1301.3224, 2013.
[25] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2014.
[26] Gregory Koch. Siamese neural networks for one-shot image recognition. ICML Deep Learning Workshop, 2015.
[27] David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
[28] Brian Kulis, Kate Saenko, and Trevor Darrell. What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1785–1792. IEEE, 2011.
[29] S. Levine, Z. Popovic, and V. Koltun. Nonlinear inverse reinforcement learning with Gaussian processes. In Advances in Neural Information Processing Systems (NIPS), 2011.
[30] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.
[31] Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
[32] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[33] Mingsheng Long and Jianmin Wang. Learning transferable features with deep adaptation networks. CoRR, abs/1502.02791, 1:2, 2015.
[34] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430, 2009.
[35] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[36] Devang K Naik and RJ Mammone.
Meta-neural networks that learn by learning. In International Joint Conference on Neural Networks (IJCNN), 1992.
[37] Andrew Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In International Conference on Machine Learning (ICML), 2000.
[38] Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278–287, 1999.
[39] Andrew Y Ng, H Jin Kim, Michael I Jordan, Shankar Sastry, and Shiv Ballianda. Autonomous helicopter flight via reinforcement learning. In NIPS, volume 16, 2003.
[40] Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008.
[41] Dean A Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems, pages 305–313, 1989.
[42] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In Under Review, ICLR, 2017.
[43] Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. International Conference on Machine Learning (ICML), 2016.
[44] Stéphane Ross, Geoffrey J Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, volume 1, page 6, 2011.
[45] Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
[46] Fereshteh Sadeghi and Sergey Levine. (CAD)2RL: Real single-image flight without a single real image. 2016.
[47] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning (ICML), 2016.
[48] Stefan Schaal. Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences, 3(6):233–242, 1999.
[49] Jurgen Schmidhuber. Evolutionary principles in self-referential learning. (On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
[50] Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 1992.
[51] John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015.
[52] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[53] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[54] Bradly Stadie, Pieter Abbeel, and Ilya Sutskever. Third person imitation learning. In Int. Conf. on Learning Representations (ICLR), 2017.
[55] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[56] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.
[57] Gerald Tesauro.
Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.
[58] Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 1998.
[59] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
[60] Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Xingchao Peng, Pieter Abbeel, Sergey Levine, Kate Saenko, and Trevor Darrell. Towards adapting deep visuomotor representations from simulated to real environments. arXiv preprint arXiv:1511.07111, 2015.
[61] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Neural Information Processing Systems (NIPS), 2016.
[62] Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
[63] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, volume 14, pages 77–81, 2015.
[64] Jun Yang, Rong Yan, and Alexander G Hauptmann. Cross-domain video concept detection using adaptive SVMs. In Proceedings of the 15th ACM International Conference on Multimedia, pages 188–197. ACM, 2007.
[65] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representations (ICLR), 2016.
[66] B. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI Conference on Artificial Intelligence, 2008.
An Information-Theoretic Approach to Deciphering the Hippocampal Code

William E. Skaggs, Bruce L. McNaughton, Katalin M. Gothard, Etan J. Markus
Center for Neural Systems, Memory, and Aging
344 Life Sciences North, University of Arizona, Tucson AZ 85724
[email protected]

Abstract

Information theory is used to derive a simple formula for the amount of information conveyed by the firing rate of a neuron about any experimentally measured variable or combination of variables (e.g. running speed, head direction, location of the animal, etc.). The derivation treats the cell as a communication channel whose input is the measured variable and whose output is the cell's spike train. Applying the formula, we find systematic differences in the information content of hippocampal "place cells" in different experimental conditions.

1 INTRODUCTION

Almost any neuron will respond to some manipulation or other by changing its firing rate, and this change in firing can convey information to downstream neurons. The aim of this article is to introduce a very simple formula for the average rate at which a cell conveys information in this way, and to show how the formula is helpful in the study of the firing properties of cells in the rat hippocampus. This is by no means the first application of information theory to the study of neural coding; see especially Richmond and Optican (1990). The thing that particularly distinguishes our approach is its simplemindedness.

To get the basic idea, imagine we are recording the activity of a neuron in the brain of a rat, while the rat is wandering around randomly on a circular platform. Suppose we observe that the cell fires only when the rat is on the left half of the platform, and that it fires at a constant rate everywhere on the left half; and suppose that on the whole the rat spends half of its time on the left half of the platform. In this case, if we are prevented from seeing where the rat is, but are informed that the neuron has just this very moment fired a spike, we obtain thereby one bit of information about the current location of the rat. Suppose we have a second cell, which fires only in the southwest quarter of the platform; in this case a spike would give us two bits of information. If there were in addition a small amount of background firing, the information would be slightly less than two bits. And so on.

Going back to the cell that fires everywhere on the left half of the platform, suppose that when it is active, it fires at a mean rate of 10 spikes per second. Since it is active half the time, it fires at an overall mean rate of 5 spikes per second. Since a spike conveys one bit of information about the rat's location, the cell's spike train conveys information at an average rate of 5 bits per second. This does not mean that if the cell is observed for one second, on average 5 bits will be obtained; rather it means that if the cell is observed for a sufficiently short time interval dt, on average 5 dt bits will be obtained. In 20 milliseconds, for example, the expected information conveyed by the cell about the location of the rat will be very nearly 0.1 bits. The longer the time interval over which the cell is observed, the more redundancy in the spike train, and hence the farther below 5 dt the total information falls.
The formula that leads to these numbers is

    I = \int_x \lambda(x) \log_2 \frac{\lambda(x)}{\bar\lambda} \, p(x) \, dx,    (1)

where I is the information rate of the cell in bits per second, x is spatial location, p(x) is the probability density for the rat being at location x, λ(x) is the mean firing rate when the rat is at location x, and λ̄ = ∫_x λ(x) p(x) dx is the overall mean firing rate of the cell. The derivation of this formula appears in the final section. (To our knowledge the formula, though very simple, has not previously been published.) Note that, as far as the formula is concerned, there is nothing special about spatial location: the formula can equally well be used to define the rate at which a cell conveys information about any aspect of the rat's state, or any combination of aspects. The only mathematical requirement (other than obvious requirements of integrability that are sure to be fulfilled in natural situations) is that the rat's state x and the spike train of the cell both be stationary random variables, so that the probability density p(x) and the expected firing rate λ(x) are well-defined.

The information rate given by formula (1) is measured in bits per second. If it is divided by the overall mean firing rate λ̄ of the cell (expressed in spikes per second), then a different kind of information rate is obtained, in units of bits per spike; let us call it the information per spike. This is a measure of the specificity of the cell: the more "grandmotherish" the cell, the more information per spike. For a population of cells, then, a highly distributed representation equates to little information per spike.

[Figure 1 appears here.] Figure 1: "Spot plot" of the activity of a single pyramidal cell in the hippocampus of a rat, recorded while the rat foraged for food pellets inside a small cylinder. The dots show locations visited by the rat, and the circles show points where the cell fired; large circles mean that several spikes occurred within a short time. The lines indicate which direction the rat was facing when the cell fired. The plot represents 29 minutes of data, during which the cell fired at an overall mean rate of 1.319 Hz.
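As a concrete check of formula (1), here is a minimal numerical sketch (our own illustration, not part of the original paper) that evaluates the discrete approximation of (1) from a binned firing-rate map and an occupancy distribution; the function name and array layout are hypothetical.

```python
import numpy as np

def spatial_information(rate_map, occupancy):
    """Discrete approximation of formula (1).

    rate_map  : array of mean firing rates lambda(x) per spatial bin (Hz)
    occupancy : array of occupancy weights p(x) per bin

    Returns (bits per second, bits per spike).
    """
    p = occupancy / occupancy.sum()        # normalize p(x)
    mean_rate = np.sum(rate_map * p)       # overall mean rate, lambda-bar
    # Bins where the cell never fires contribute 0 (limit of r*log r as r -> 0).
    nz = rate_map > 0
    bits_per_sec = np.sum(
        rate_map[nz] * np.log2(rate_map[nz] / mean_rate) * p[nz]
    )
    return bits_per_sec, bits_per_sec / mean_rate

# Toy cell from the introduction: 10 Hz on the left half of the platform,
# silent on the right, uniform occupancy.
rates = np.array([10.0, 10.0, 0.0, 0.0])
occ = np.array([0.25, 0.25, 0.25, 0.25])
print(spatial_information(rates, occ))     # (5.0, 1.0)
```

Running it on the toy cell returns 5 bits per second and 1 bit per spike, matching the hand calculation in the introduction.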
Consider, as an example, a typical "place cell" (actually an especially nice place cell) from the CA1 layer of the hippocampus of a rat. Figure 1 shows a "spot plot" of the activity of the cell as the rat moves around inside a 76 cm diameter cylinder with high, opaque walls, foraging for randomly scattered food pellets. This cell, like most pyramidal cells in CA1, fires at a relatively high rate (above 10 Hz) when the rat is in a specific small portion of the environment (the "place field" of the cell) but at a much lower rate elsewhere. Different cells have place fields in different locations; there are no systematic rules for their arrangement, except that there may be a tendency for neighboring cells to have nearby place fields. The activity of place cells is known to be related to more than just place: in some circumstances it is sensitive to the direction the rat is facing, and it can also be modulated by running speed, alertness, or other aspects of behavioral state. The dependence on head direction has given rise to a certain amount of controversy, because in some types of environment it is very strong, while in others it is virtually absent.

Table 1 gives statistics for the amount of information conveyed by this cell about spatial location, head direction, running speed, and combinations of these variables. Note that the information conveyed about spatial location and head direction is hardly more than the information conveyed about spatial location alone; the difference is well within the error bounds of the calculation. Thus this cell has no detectable directionality. This seems to be typical of cells recorded in unstructured environments.

Table 1: Information conveyed by the cell whose activity is plotted in Figure 1.

    VARIABLES                      INFO            INFO PER SPIKE
    Location                       2.40 bits/sec   1.82 bits
    Head Direction                 0.48 bits/sec   0.37 bits
    Running Speed                  0.03 bits/sec   0.02 bits
    Location and Head Direction    2.53 bits/sec   1.92 bits
    Location and Running Speed     2.36 bits/sec   1.79 bits

The information-rate measure may be helpful in understanding the computations performed by neural populations. Consider an example. Cells in the CA3 and CA1 regions of the rat hippocampal formation have long been known to convey information about a rat's spatial location (this is discussed in more detail below). Data from our lab suggest that, in a given environment, an average CA3 cell conveys something in the neighborhood of 0.1 bits per second about the rat's position; some cells convey a good deal more information than this, but many are virtually silent. Cells in CA1 receive most of their input from cells in CA3; each gets on the order of 10,000 such inputs. Question: How long must the integration time of a CA1 cell be in order for it to form a good estimate of the rat's location? Answer: With 10,000 inputs, each conveying on average 0.1 bits per second, the cell receives information at a rate of 1000 bits per second, or 1 bit per millisecond, so in 5-10 msec the cell receives enough information to form a moderately precise estimate of location.

2 APPLICATIONS

We now very briefly describe two experimental studies that have found differences in the spatial information content of rat hippocampal activity under different conditions.
The methods used for recording the cells are described in detail in McNaughton et al. (1989a). To summarize, the cells were recorded with stereotrodes, which are twisted pairs of electrodes, separated by about 15 microns at the tips, that pick up the extracellular electric fields generated when cells fire. A single stereotrode can detect the activity of as many as six or seven distinct hippocampal cells; spikes from different cells can be separated on the basis of their amplitudes on the two electrodes, as well as other differences in wave shape. The location of the rat was tracked using arrays of LEDs attached to the rats' heads and a video camera on the ceiling. Spatial firing rate maps for each cell were constructed using an adaptive binning technique designed to minimize error (Skaggs and McNaughton, submitted), and information rates were calculated using these firing rate maps. As a control, the spike train was randomly time-shifted relative to the sequence of locations; this was done 100 times, and the cell was deemed to have significant spatial dependence if its information rate was more than 2.29 standard deviations above the mean of the 100 control information rates.

2.1 EXPERIMENT: PROXIMAL VERSUS DISTAL VISUAL CUES

In this study (a preliminary account of which appears in Gothard et al. (1992)), the activity of place cells was recorded successively in two environments, the first a 76 cm diameter cylinder with four patterned cue-cards on the high, opaque gray wall, the second a cylinder of the same shape, but with a low, transparent plexiglass wall and four patterned cue-cards on the distant black walls of the recording room. The two environments thus had the same shape, and from any given point were visually quite similar; the difference is that in one all of the visual cues were proximal to the rat, while in the other many of them were distal.

[Figure 2 appears here.] Figure 2: Firing rate maps of four simultaneously recorded cells, in the distal cue environment (top) and proximal cue environment (bottom). The scale is identical for all plots; black ~ 5 Hz.

Fifty cells were recorded with robust place-dependent firing in one or the other cylinder. There was no discernable relationship between place fields in the two environments: a cell having a place field in the proximal cue environment might be nearly silent in the distal cue environment, and even if it did fire, its place field would be in a different location. (Figure 2 shows firing rate maps for four of the cells.) A substantially higher fraction of the cells had place fields in the proximal cue environment, and overall the average information per second was almost 50% higher in the proximal cue environment. For the cells possessing fields, the information per spike was significantly higher in the proximal cue environment, meaning that place fields were more compact. These results indicate that in the proximal cue environment, spatial location is represented by the hippocampal population more precisely, and by a larger pool of cells, than in the distal cue environment. The most likely explanation is that, at least in the absence of local cues, the configuration of visual landmarks controls the activity of the place cell population.
2.2 EXPERIMENT: LIGHT VERSUS DARK

Visual cues have a great deal of influence on place fields, but they are not the only important factor; in fact, some hippocampal cells maintain place fields even in complete darkness (McNaughton et al., 1989b; Quirk et al., 1990). This experiment (Markus et al., 1992) was designed to examine how lack of visual cues changes the properties of place fields. Rats traversed an eight-arm radial maze for chocolate milk reward, with the room lights being turned on and off on alternate trials. (A trial consisted of one visit to each of the eight arms of the maze.) Figure 3 shows firing rate maps for four simultaneously recorded cells.

[Figure 3 appears here.] Figure 3: Firing rate maps of four simultaneously recorded cells, with room lights turned on (top) and off (bottom). The scale is identical for all plots; black ~ 5 Hz. (The loops at the ends of the arms are caused by the rat turning around there.)

The most salient effect was that a much larger fraction of cells showed significant spatially selective firing in the light than in the dark: 35% as opposed to 20%. However, the average information per second decreased only by 15% in the dark as compared to the light, from 0.326 bits per second in the light to 0.278 bits per second in the dark. (These are overestimates of the population averages, because cells silent in both light and dark were not included in the sample.) Interestingly, the drop in information content from light to dark seemed to be much smaller than the drop from proximal cues to distal cues in the previous experiment. A major difference between the two experiments is that, in the eight-arm maze, tactile cues potentially give a great deal of information about spatial location, but in a cylinder they serve only to distinguish the center from the wall. While it is dangerous to compare the two experiments, which differed methodologically in several ways, the results suggest that tactile cues can have a very strong influence on hippocampal firing, at least when visual cues are absent.

3 THEORY

The information-rate formula (1) is derived by considering a neuron as a "channel" (in the information-theoretic sense) whose input is the spatial location of the rat, and whose output is the spike train. During a sufficiently short time interval the spike train is effectively a binary random variable (i.e. the only possibilities are to spike once or not at all), and the probability of spiking is determined by the spatial location. The event of spiking may be indicated by a random variable S whose value is 1 if the cell spikes and 0 otherwise. If the environment is partitioned into a set of nonoverlapping bins, then spatial location may be represented by an integer-valued random variable X giving the index of the currently occupied bin. In information theory, the information conveyed by a discrete random variable X about another discrete random variable Y, which is identical to the mutual information of X and Y, is given by

    I(X; Y) = \sum_{i,j} P(x_i, y_j) \log_2 \frac{P(x_i, y_j)}{P(x_i)\, P(y_j)},

where x_i and y_j are the possible values of X and Y, and P(·) denotes probability. If λ_j is the mean firing rate when the rat is in bin j, then the probability of a spike during a brief time interval Δt is P(S = 1 | X = j) = λ_j Δt. Also, the overall probability of a spike is P(S = 1) = λ̄ Δt, where

    \bar\lambda = \sum_j p_j \lambda_j, \quad \text{with } p_j = P(X = j).
After these expressions are plugged in to the equation for I(X; Y) above, it is a matter of straightforward algebra, using power series expansions of logarithms and keeping only lower order terms, to derive a discrete approximation of equation (1), namely

    I \approx \sum_j \lambda_j \log_2 \frac{\lambda_j}{\bar\lambda} \, p_j .

4 DISCUSSION

In many situations, neurons must decide whether to fire on the basis of relatively brief samples of input, often 100 milliseconds or less. A cell cannot get much information from a single input in such a short time, so to achieve precision it needs to integrate many inputs. Formula (1) provides a measure of how much information a single input conveys about a given variable in such a brief time interval. The formula can be applied to any type of cell that uses firing rate to convey information. The only requirement is to have enough data to get good, stable estimates of firing rates. In practice, for a hippocampal cell having a mean firing rate of around 0.5 Hz in an environment, twenty minutes of data is adequate for measuring position-dependence; and for a "theta cell" (an interneuron, firing at a considerably higher rate), very clean measurements are possible.

We have used the measure in the study of hippocampal place cells, but it might actually work better for some other types. The problem with place cells is that they fire at low overall rates, so it is time-consuming to get an adequate sample. Cortical pyramidal cells often have mean rates at least ten times faster, so it ought to be easier to get accurate numbers for them. The information measure might naturally be applied to study, for example, the changes in information content of visual cortical cells as a visual stimulus is blurred or dimmed.

Supported by NIMH grant MH46823

References

Gothard, K. M., Skaggs, W. E., McNaughton, B. L., Barnes, C. A., and Youngs, S. P. (1992). Place field specificity depends on proximity of visual cues. Soc Neurosci Abstr, 18:1216. 508.10.

Markus, E. J., Barnes, C. A., McNaughton, B. L., Gladden, V., Abel, T. W., and Skaggs, W. E. (1992). Decrease in the information content of hippocampal CA1 cell spatial firing patterns in the dark. Soc Neurosci Abstr, 18:1216. 508.12.

McNaughton, B. L., Leonard, B., and Chen, L. (1989b). Cortical-hippocampal interactions and cognitive mapping: A hypothesis based on reintegration of the parietal and inferotemporal pathways for visual processing. Psychobiology, 17:230-235.

McNaughton, B. L., Barnes, C. A., Meltzer, J., and Sutherland, R. J. (1989a). Hippocampal granule cells are necessary for normal spatial learning but not for spatially selective pyramidal cell discharge. Exp Brain Res, 76:485-496.

Quirk, G. J., Muller, R. U., and Kubie, J. L. (1990). The firing of hippocampal place cells in the dark depends on the rat's previous experience. J Neurosci, 10:2008-2017.

Richmond, B. J. and Optican, L. M. (1990). Temporal encoding of two-dimensional patterns by single units in primate primary visual cortex: II. Information transmission. J Neurophysiol, 64:370-380.
Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding

Mainak Jas (1), Tom Dupré La Tour (1), Umut Şimşekli (1), Alexandre Gramfort (1,2)
1: LTCI, Telecom ParisTech, Université Paris-Saclay, Paris, France
2: INRIA, Université Paris-Saclay, Saclay, France

Abstract

Neural time-series data contain a wide variety of prototypical signal waveforms (atoms) that are of significant importance in clinical and cognitive research. One of the goals for analyzing such data is hence to extract such "shift-invariant" atoms. Even though some success has been reported with existing algorithms, they are limited in applicability due to their heuristic nature. Moreover, they are often vulnerable to artifacts and impulsive noise, which are typically present in raw neural recordings. In this study, we address these issues and propose a novel probabilistic convolutional sparse coding (CSC) model for learning shift-invariant atoms from raw neural signals containing potentially severe artifacts. In the core of our model, which we call αCSC, lies a family of heavy-tailed distributions called α-stable distributions. We develop a novel, computationally efficient Monte Carlo expectation-maximization algorithm for inference. The maximization step boils down to a weighted CSC problem, for which we develop a computationally efficient optimization algorithm. Our results show that the proposed algorithm achieves state-of-the-art convergence speeds. Besides, αCSC is significantly more robust to artifacts when compared to three competing algorithms: it can extract spike bursts, oscillations, and even reveal more subtle phenomena such as cross-frequency coupling when applied to noisy neural time series.

1 Introduction

Neural time series data, either non-invasive such as electroencephalography (EEG) or invasive such as electrocorticography (ECoG) and local field potentials (LFP), are fundamental to modern experimental neuroscience. Such recordings contain a wide variety of "prototypical signals" that range from beta rhythms (12-30 Hz) in motor imagery tasks and alpha oscillations (8-12 Hz) involved in attention mechanisms, to spindles in sleep studies, and the classical P300 event related potential, a biomarker for surprise. These prototypical waveforms are considered critical in clinical and cognitive research [1], thereby motivating the development of computational tools for learning such signals from data.

Despite the underlying complexity in the morphology of neural signals, the majority of the computational tools in the community are based on representing the signals with rather simple, predefined bases, such as the Fourier or wavelet bases [2]. While such bases lead to computationally efficient algorithms, they often fall short at capturing the precise morphology of signal waveforms, as demonstrated by a number of recent studies [3, 4]. An example of such a failure is the disambiguation of the alpha rhythm from the mu rhythm [5], both of which have a component around 10 Hz but with different morphologies that cannot be captured by Fourier- or wavelet-based representations. Recently, there have been several attempts for extracting more realistic and precise morphologies directly from unfiltered electrophysiology signals, via dictionary learning approaches [6-9]. These methods all aim to extract certain shift-invariant prototypical waveforms (called "atoms" in this context) to better capture the temporal structure of the signals.
As opposed to using generic bases that have predefined shapes, such as the Fourier or the wavelet bases, these atoms provide a more meaningful representation of the data and are not restricted to narrow frequency bands.

In this line of research, Jost et al. [6] proposed the MoTIF algorithm, which uses an iterative strategy based on generalized eigenvalue decompositions, where the atoms are assumed to be orthogonal to each other and learnt one by one in a greedy way. More recently, the "sliding window matching" (SWM) algorithm [9] was proposed for learning time-varying atoms by using a correlation-based approach that aims to identify the recurring patterns. Even though some success has been reported with these algorithms, they have several limitations: SWM uses a slow stochastic search inspired by simulated annealing, and MoTIF poorly handles correlated atoms, simultaneously activated atoms, or activations with varying amplitudes, cases which often occur in practical applications.

A natural way to cast the problem of learning a dictionary of shift-invariant atoms into an optimization problem is a convolutional sparse coding (CSC) approach [10]. This approach has gained popularity in computer vision [11-15], biomedical imaging [16] and audio signal processing [10, 17], due to its ability to obtain compact representations of the signals and to incorporate the temporal structure of the signals via convolution. In the neuroscience context, Barthélemy et al. [18] used an extension of the K-SVD algorithm using convolutions on EEG data. In a similar spirit, Brockmeier and Príncipe [7] used the matching pursuit algorithm combined with a rather heuristic dictionary update, which is similar to the MoTIF algorithm. In a very recent study, Hitziger et al. [8] proposed the AWL algorithm, which presents a mathematically more principled CSC approach for modeling neural signals. Yet, as opposed to classical CSC approaches, the AWL algorithm imposes additional combinatorial constraints, which limit its scope to certain data that contain spike-like atoms. Also, since these constraints increase the complexity of the optimization problem, the authors had to resort to dataset-specific initializations and many heuristics in their inference procedure.

While the current state-of-the-art CSC methods have a strong potential for modeling neural signals, they might also be limited as they consider an ℓ2 reconstruction error, which corresponds to assuming an additive Gaussian noise distribution. While this assumption could be reasonable for several signal processing tasks, it turns out to be very restrictive for neural signals, which often contain heavy noise bursts and have low signal-to-noise ratio. In this study, we aim to address the aforementioned concerns and propose a novel probabilistic CSC model called αCSC, which is better-suited for neural signals. αCSC is based on a family of heavy-tailed distributions called α-stable distributions [19] whose rich structure covers a broad range of noise distributions. The heavy-tailed nature of the α-stable distributions renders our model robust to impulsive observations. We develop a Monte Carlo expectation-maximization (MCEM) algorithm for inference, with a weighted CSC model for the maximization step. We propose efficient optimization strategies that are specifically designed for neural time series. We illustrate the benefits of the proposed approach on both synthetic and real datasets.
2 Preliminaries

Notation: For a vector v ∈ R^n we denote the ℓp norm by ‖v‖_p = (Σ_i |v_i|^p)^{1/p}. The convolution of two vectors v1 ∈ R^N and v2 ∈ R^M is denoted by v1 * v2 ∈ R^{N+M−1}. We denote by x the observed signals, d the temporal atoms, and z the sparse vector of activations. The symbols U, E, N, S denote the univariate uniform, exponential, Gaussian, and α-stable distributions, respectively.

Convolutional sparse coding: The CSC problem formulation adopted in this work follows the Shift Invariant Sparse Coding (SISC) model from [10]. It is defined as follows:

    \min_{d,z} \sum_{n=1}^{N} \Big( \frac{1}{2} \Big\| x_n - \sum_{k=1}^{K} d_k * z_n^k \Big\|_2^2 + \lambda \sum_{k=1}^{K} \| z_n^k \|_1 \Big), \quad \text{s.t. } \|d_k\|_2^2 \le 1 \text{ and } z_n^k \ge 0, \ \forall n, k,    (1)

where x_n ∈ R^T denotes one of the N observed segments of signals, also referred to as trials in this paper. We denote by T the length of a trial, and by K the number of atoms. The aim in this model is to approximate the signals x_n by the convolution of certain atoms and their respective activations, which are sparse. Here, d_k ∈ R^L denotes the k-th atom of the dictionary d ≜ {d_k}_k, and z_n^k ∈ R_+^{T−L+1} denotes the activation of the k-th atom in the n-th trial. We denote z ≜ {z_n^k}_{n,k}. The objective function (1) has two terms, an ℓ2 data fitting term that corresponds to assuming an additive Gaussian noise model, and a regularization term that promotes sparsity with an ℓ1 norm. The regularization parameter is called λ > 0. Two constraints are also imposed. First, we ensure that d_k lies within the unit sphere, which prevents the scale ambiguity between d and z. Second, a positivity constraint on z is imposed to obtain physically meaningful activations and to avoid sign ambiguities between d and z. This positivity constraint is not present in the original SISC model [10].

[Figure 1 appears here.] Figure 1: (a) PDFs of α-stable distributions. (b) Illustration of two trials from the striatal LFP data, which contain severe artifacts. The artifacts are illustrated with dashed rectangles.

α-Stable distributions: The α-stable distributions have become increasingly popular in modeling signals that might incur large variations [20-24] and have a particular importance in statistics since they appear as the limiting distributions in the generalized central limit theorem [19]. They are characterized by four parameters, α, β, σ, and µ: (i) α ∈ (0, 2] is the characteristic exponent and determines the tail thickness of the distribution: the distribution becomes heavier-tailed as α gets smaller. (ii) β ∈ [−1, 1] is the skewness parameter; if β = 0, the distribution is symmetric. (iii) σ ∈ (0, ∞) is the scale parameter and measures the spread of the random variable around its mode (similar to the standard deviation of a Gaussian distribution). Finally, (iv) µ ∈ (−∞, ∞) is the location parameter (for α > 1, it is simply the mean). The probability density function of an α-stable distribution cannot be written in closed form except for certain special cases; however, the characteristic function can be written as follows:

    x \sim \mathcal{S}(\alpha, \beta, \sigma, \mu) \iff \mathbb{E}[\exp(i\omega x)] = \exp\big( -|\sigma\omega|^{\alpha}\, [1 + i\, \mathrm{sign}(\omega)\, \beta\, \psi_\alpha(\omega)] + i\mu\omega \big),

where ψ_α(ω) = log|ω| for α = 1, ψ_α(ω) = tan(πα/2) for α ≠ 1, and i = √−1. As an important special case of the α-stable distributions, we obtain the Gaussian distribution when α = 2 and β = 0, i.e. S(2, 0, σ, µ) = N(µ, 2σ²).
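Although the stable density has no closed form, sampling is easy in practice. For instance (an illustration of ours, not from the paper, using scipy's levy_stable; for symmetric draws with β = 0 the choice of stable parameterization does not matter):

```python
import numpy as np
from scipy.stats import levy_stable

# Draw samples from S(alpha, beta=0); alpha = 2 recovers the Gaussian,
# smaller alpha gives heavier tails (cf. Fig. 1(a)).
for alpha in [2.0, 1.9, 1.2]:
    x = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=1.0,
                        size=100_000, random_state=0)
    # Fraction of samples more than 10 scale units from the mode:
    # essentially zero for alpha = 2, non-negligible for alpha < 2.
    print(alpha, np.mean(np.abs(x) > 10.0))
```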
In Fig. 1(a), we illustrate the (approximately computed) probability density functions (PDFs) of the α-stable distribution for different values of α and β. The distribution becomes heavier-tailed as we decrease α, whereas the tails vanish quickly when α = 2. The moments of the α-stable distributions can only be defined up to the order α, i.e. E[|x|^p] < ∞ if and only if p < α, which implies that the distribution has infinite variance when α < 2. Furthermore, despite the fact that the PDFs of α-stable distributions do not admit an analytical form, it is straightforward to draw random samples from them [25].

3 Alpha-Stable Convolutional Sparse Coding

3.1 The Model

From a probabilistic perspective, the CSC problem can also be formulated as a maximum a-posteriori (MAP) estimation problem on the following probabilistic generative model:

    z_{n,t}^k \sim \mathcal{E}(\lambda), \qquad x_{n,t} \mid z, d \sim \mathcal{N}(\hat{x}_{n,t}, 1), \qquad \text{where } \hat{x}_n \triangleq \sum_{k=1}^{K} d_k * z_n^k .    (2)

Here, z_{n,t}^k denotes the t-th element of z_n^k; we use the same notation for x_{n,t} and x̂_{n,t}. It is easy to verify that the MAP estimate for this probabilistic model, i.e. max_{d,z} log p(d, z | x), is identical to the original optimization problem defined in (1). (Note that the positivity constraint on the activations is equivalent to an exponential prior for the regularization term, rather than the more common Laplacian prior.) It has long been known that, due to their light-tailed nature, Gaussian models often fail at handling noisy high-amplitude observations or outliers [26]. As a result, the "vanilla" CSC model turns out to be highly sensitive to outliers and impulsive noise that frequently occur in electrophysiological recordings, as illustrated in Fig. 1(b).
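To make the contrast concrete, here is a hypothetical simulation of the generative models (2) and (3); the atom shape, activation sparsity, and noise scales are arbitrary illustrative choices of ours, and we reuse scipy's levy_stable for the stable draws:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
T, L = 512, 64
lam = 0.3  # rate of the exponential prior on activations

# One atom: a decaying oscillation (arbitrary illustrative shape).
t = np.arange(L)
d = np.sin(2 * np.pi * t / 16) * np.exp(-t / 32)
d /= np.linalg.norm(d)

# Sparse non-negative activations and the noiseless signal x_hat.
z = rng.exponential(1 / lam, size=T - L + 1) * (rng.random(T - L + 1) < 0.02)
x_hat = np.convolve(z, d)  # full convolution, length T

# Model (2): Gaussian noise.  Model (3): alpha-stable noise, which
# occasionally produces large artifact-like excursions as in Fig. 1(b).
x_gauss = x_hat + rng.normal(scale=0.05, size=T)
x_alpha = x_hat + 0.05 * levy_stable.rvs(1.2, 0.0, size=T, random_state=1)
```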
It can be shown that both formulations of the ?CSC model are identical by marginalizing the joint distribution p(x, d, z, ?) over ? [19, Proposition 1.3.1]. The impulsive structure of the ?CSC model becomes more prominent in this formulation: the variances of the Gaussian observations are modulated by stable random variables with infinite variance, where the impulsiveness depends on the value of ?. It is also worth noting that when ? = 2, ?n,t becomes deterministic and we can again verify that ?CSC coincides with the vanilla CSC. The conditionally Gaussian structure of the augmented model has a crucial practical implication: if the impulse variable ? were to be known, then the MAP estimation problem over d and z in this model would turn into a ?weighted? CSC problem, which is a much easier task compared to the original problem. In order to be able to exploit this property, we propose an expectation-maximization (EM) algorithm, which iteratively maximizes a lower bound of the log-posterior log p(d, z|x), and algorithmically boils down to computing the following steps in an iterative manner: E-Step: B (i) (d, z) = E [log p(x, ?, z|d)]p(?|x,z(i) ,d(i) ) , (6) M-Step: (d(i+1) , z (i+1) ) = arg maxd,z B (i) (d, z). (7) where E[f (x)]q(x) denotes the expectation of a function f under the distribution q, i denotes the iterations, and B (i) is a lower bound to log p(d, z|x) and it is tight at the current iterates z (i) , d(i) . The E-Step: In the first step of our algorithm, we need to compute the EM lower bound B that has the following form: N  q K K  X X X (i) (i) + B (d, z) = ? k wn (xn ? dk ? znk )k22 + ? kznk k1 , (8) n=1 k=1 k=1 where =+ denotes equality up to additive constants, denotes the Hadamard (element-wise) product, (i) and the square-root operator is also defined element-wise. Here, wn ? RT+ are the weights that are (i) defined as follows: wn,t , E [1/?n,t ]p(?|x,z(i) ,d(i) ) . As the variables ?n,t are expected to be large when x ?n,t cannot explain the observation xn,t ? typically due to a corruption or a high noise ? the weights will accordingly suppress the importance of the particular point xn,t . Therefore, the overall approach will be more robust to corrupted data than the Gaussian models where all weights would be deterministic and equal to 0.5. 4 Unfortunately, the weights w(i) cannot be Algorithm 1 ?-stable Convolutional Sparse Coding computed analytically, therefore we need to resort to approximate methods. In this Require: Regularization: ? ? R+ , Num. atoms: K, Atom length: L, Num. iterations: I , J, M study, we develop a Markov chain Monte 1: for i = 1 to I do Carlo (MCMC) method to approximately compute the weights, where we approxi- 2: /* E-step: */ mate the intractable expectations with a finite 3: for j = 1 to J do (i,j) (i) Draw ?n,t via MCMC (9) sample average, given as follows: wn,t ? 4: PJ 5: end for (i,j) (i,j) PJ (i) (i,j) (1/J) j=1 1/?n,t , where ?n,t are some 6: wn,t ? (1/J) j=1 1/?n,t samples that are ideally drawn from the pos- 7: /* M-step: */ terior distribution p(?|x, z (i) , d(i) ). Unfor- 8: for m = 1 to M do tunately, directly drawing samples from the 9: z (i) = L-BFGS-B on (10) posterior distribution of ? is not tractable ei- 10: d(i) = L-BFGS-B on the dual of (11) ther, and therefore, we develop a Metropolis- 11: end for Hastings algorithm [28], that asymptotically 12: end for generates samples from the target distribution 13: return w(I) , d(I) , z (I) p(?|?) in two steps. 
Unfortunately, the weights w^{(i)} cannot be computed analytically, so we need to resort to approximate methods. In this study, we develop a Markov chain Monte Carlo (MCMC) method to approximately compute the weights, where we approximate the intractable expectations with a finite sample average, given as follows: w_{n,t}^{(i)} ≈ (1/J) Σ_{j=1}^J 1/φ_{n,t}^{(i,j)}, where the φ_{n,t}^{(i,j)} are samples ideally drawn from the posterior distribution p(φ | x, z^{(i)}, d^{(i)}). Unfortunately, directly drawing samples from the posterior distribution of φ is not tractable either, and therefore we develop a Metropolis-Hastings algorithm [28] that asymptotically generates samples from the target distribution in two steps. In the j-th iteration of this algorithm, we first draw a random sample for each n and t from the prior distribution (cf. (5)), i.e. φ'_{n,t} ~ p(φ_{n,t}). We then compute an acceptance probability for each φ'_{n,t}, defined as follows:

    \mathrm{acc}\big(\phi_{n,t}^{(i,j)} \to \phi'_{n,t}\big) \triangleq \min\Big\{1,\ \frac{p(x_{n,t} \mid d^{(i)}, z^{(i)}, \phi'_{n,t})}{p(x_{n,t} \mid d^{(i)}, z^{(i)}, \phi_{n,t}^{(i,j)})}\Big\},    (9)

where j denotes the iteration number of the MCMC algorithm. Finally, we draw a uniform random number u_{n,t} ~ U([0, 1]) for each n and t. If u_{n,t} < acc(φ_{n,t}^{(i,j)} → φ'_{n,t}), we accept the sample and set φ_{n,t}^{(i,j+1)} = φ'_{n,t}; otherwise we reject the sample and set φ_{n,t}^{(i,j+1)} = φ_{n,t}^{(i,j)}. This procedure forms a Markov chain that leaves the target distribution p(φ | ·) invariant, and under mild ergodicity conditions it can be shown that the finite-sample averages converge to their true values when J goes to infinity [29]. A more detailed explanation of this procedure is given in the supplementary document.

Algorithm 1: α-stable Convolutional Sparse Coding
Require: regularization λ ∈ R+, number of atoms K, atom length L, numbers of iterations I, J, M
 1: for i = 1 to I do
 2:   /* E-step: */
 3:   for j = 1 to J do
 4:     draw φ_{n,t}^{(i,j)} via MCMC (9)
 5:   end for
 6:   w_{n,t}^{(i)} ← (1/J) Σ_j 1/φ_{n,t}^{(i,j)}
 7:   /* M-step: */
 8:   for m = 1 to M do
 9:     z^{(i)} = L-BFGS-B on (10)
10:     d^{(i)} = L-BFGS-B on the dual of (11)
11:   end for
12: end for
13: return w^{(I)}, d^{(I)}, z^{(I)}
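A compact sketch of this E-step follows (hypothetical code; the function name, the burn-in handling, and the use of scipy's levy_stable for the positive stable prior are our own choices, and scipy's stable parameterization is assumed to be close enough to (5) for illustration). Since proposals come from the prior, the intractable stable density cancels in the acceptance ratio (9), leaving only the conditional Gaussian likelihood:

```python
import numpy as np
from scipy.stats import levy_stable

def estimate_weights(x, x_hat, alpha, J=10, burn_in=5, seed=0):
    """Metropolis-Hastings sketch for the E-step weights w = E[1/phi]."""
    rng = np.random.default_rng(seed)
    resid2 = (x - x_hat) ** 2
    scale = 2.0 * np.cos(np.pi * alpha / 4.0) ** (2.0 / alpha)

    def sample_prior(shape):
        # Positive stable prior from (5): S(alpha/2, beta=1, scale, 0).
        # np.abs is a crude safeguard against parameterization offsets.
        return np.abs(levy_stable.rvs(alpha / 2.0, 1.0, loc=0.0,
                                      scale=scale, size=shape,
                                      random_state=rng))

    def loglik(phi):
        # log N(x; x_hat, phi), dropping constants
        return -0.5 * np.log(phi) - resid2 / (2.0 * phi)

    phi = sample_prior(x.shape)              # initialize from the prior
    inv_phi_sum = np.zeros_like(x)
    kept = 0
    for j in range(J):
        prop = sample_prior(x.shape)
        accept = np.log(rng.random(x.shape)) < loglik(prop) - loglik(phi)
        phi = np.where(accept, prop, phi)
        if j >= burn_in:                     # discard burn-in samples
            inv_phi_sum += 1.0 / phi
            kept += 1
    return inv_phi_sum / kept
```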
The M-Step: Given the weights w_n estimated during the E-step, the objective of the M-step (7) is to solve a weighted CSC problem, which is much easier than our original problem. This objective function is not jointly convex in d and z, yet it is convex if one fixes either d or z. Here, similarly to the vanilla CSC approaches [9, 10], we develop a block coordinate descent strategy, where we solve the problem in (7) for either d or z, keeping respectively z or d fixed. We first focus on solving the problem for z while keeping d fixed, given as follows:

    \min_{z} \sum_{n=1}^{N} \Big( \big\| \sqrt{w_n} \odot \big(x_n - \sum_{k} D_k \tilde{z}_n^k\big) \big\|_2^2 + \lambda \sum_{k} \|z_n^k\|_1 \Big), \quad \text{s.t. } z_n^k \ge 0, \ \forall n, k.    (10)

Here, we expressed the convolution of d_k and z_n^k as the product of the zero-padded activations z̃_n^k ≜ [(z_n^k)^⊤, 0 ⋯ 0]^⊤ ∈ R_+^T with a Toeplitz matrix D_k ∈ R^{T×T} constructed from d_k. The matrices D_k are never constructed in practice, and all operations are carried out using convolutions. This problem can be solved by various constrained optimization algorithms. Here, we choose the quasi-Newton L-BFGS-B algorithm [30] with a box constraint 0 ≤ z_{n,t}^k ≤ ∞. This approach only requires the simple computation of the gradient of the objective function with respect to z (cf. supplementary material; a minimal sketch of this update is given at the end of this subsection). Note that, since the trials are independent of each other, we can solve this problem for each z_n in parallel.

We then solve the problem for the atoms d while keeping z fixed. This optimization problem turns out to be a constrained weighted least-squares problem. In the non-weighted case, this problem can be solved either in the time domain or in the Fourier domain [10-12]. The Fourier transform simplifies the convolutions that appear in the least-squares problem, but it also induces several difficulties, such as the requirement that the atoms d_k have a finite support L, an important issue ignored in the seminal work of [10] and addressed with an ADMM solver in [11, 12]. In the weighted case, it is not clear how to solve this problem in the Fourier domain, so we perform all computations in the time domain. Following the traditional filter identification approach [31], we embed the one-dimensional signals z_n^k into a matrix of delayed signals Z_n^k ∈ R^{T×L}, where (Z_n^k)_{i,j} = z_{n,i+j−L+1}^k if L − 1 ≤ i + j < T, and 0 elsewhere. Equation (1) then becomes:

    \min_{d} \sum_{n=1}^{N} \big\| \sqrt{w_n} \odot \big(x_n - \sum_{k=1}^{K} Z_n^k d_k\big) \big\|_2^2, \quad \text{s.t. } \|d_k\|_2^2 \le 1.    (11)

Due to the constraint, we must resort to an iterative approach. The options are to use (accelerated) projected gradient methods such as FISTA [32] applied to (11), or to solve a dual problem as done in [10]. The dual is also a smooth constrained problem, yet with a simpler positivity box constraint (cf. supplementary material). The dual can therefore be optimized with L-BFGS-B. Using such a quasi-Newton solver turned out to be more efficient than any accelerated first-order method in either the primal or the dual (cf. benchmarks in supplementary material). Our entire EM approach is summarized in Algorithm 1. Note that during the alternating minimization, thanks to convexity, we can warm start the d update and the z update using the solution from the previous update. This significantly speeds up the convergence of the L-BFGS-B algorithm, particularly in the later iterations of the overall algorithm.

[Figure 2 appears here.] Figure 2: Comparison of state-of-the-art methods with our approach. (a) Convergence plot with the objective function relative to the obtained minimum, as a function of computational time (K = 10, L = 32). (b) Time taken to reach a relative precision of 10⁻², for different settings of K and L.

4 Experiments

In order to evaluate our approach, we conduct several experiments on both synthetic and real data. First, we show that our proposed optimization scheme for the M-step provides significant improvements in convergence speed over the state-of-the-art CSC methods. Then, we provide empirical evidence that our algorithm is more robust to artifacts and outliers than three competing CSC methods [6, 7, 12]. Finally, we consider LFP data, where we illustrate that our algorithm can reveal interesting properties in electrophysiological signals without supervision, even in the presence of severe artifacts. The source code is publicly available at https://alphacsc.github.io/.

Synthetic simulation setup: In our synthetic data experiments, we simulate N trials of length T by first generating K zero-mean, unit-norm atoms of length L. The activation instants are integers drawn from a uniform distribution on {0, ..., T − L}. The amplitudes of the activations are drawn from a uniform distribution on [0, 1]. Atoms are activated only once per trial and are allowed to overlap. The activations are then convolved with the generated atoms and summed up as in (1).

M-step performance: In our first set of synthetic experiments, we illustrate the benefits of our M-step optimization approach over state-of-the-art CSC solvers. We set N = 100, T = 2000 and λ = 1, and use different values for K and L. To be comparable, we set α = 2 and add Gaussian noise with standard deviation 0.01 to the synthesized signals. In this setting, we have w_{n,t} = 1/2 for all n, t, which reduces the problem to a standard CSC setup. We monitor the convergence of the ADMM-based methods of Heide et al. [11] and Wohlberg [12] against our M-step algorithm, using both a single-threaded and a parallel version for the z-update. As the problem is non-convex, even if two algorithms start from the same point, they are not guaranteed to reach the same local minimum. (The M-step can be viewed as a biconvex problem, for which global convergence guarantees can be shown under certain assumptions [33, 34]; however, we observed that multiple restarts are required even for vanilla CSC, implying that these assumptions are not satisfied in this particular problem.) Hence, for a fair comparison, we use a multiple restart strategy with averaging across 24 random seeds.
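Returning to the z-update in (10), here is the minimal single-trial, single-atom sketch promised above (our own illustration, not the authors' implementation; the multi-atom, multi-trial case adds bookkeeping but no new ideas). The key point is that on the feasible set z ≥ 0 the ℓ1 term reduces to a linear term, so the objective is smooth and L-BFGS-B with box constraints applies directly:

```python
import numpy as np
from scipy.optimize import minimize

def update_activations(x, d, w, lam, z0=None):
    """Weighted z-update (10) for one trial and one atom, via L-BFGS-B.

    Because z >= 0, the l1 penalty reduces to lam * z.sum(), so the
    objective is differentiable on the feasible set.
    """
    T, L = len(x), len(d)
    n_z = T - L + 1

    def objective(z):
        resid = x - np.convolve(z, d)          # full convolution, length T
        f = np.sum(w * resid ** 2) + lam * z.sum()
        # Gradient of the smooth part: correlate the weighted residual
        # with the atom (the adjoint of the convolution), plus lam.
        g = -2.0 * np.correlate(w * resid, d, mode="valid") + lam
        return f, g

    z0 = np.zeros(n_z) if z0 is None else z0   # warm start if available
    res = minimize(objective, z0, jac=True, method="L-BFGS-B",
                   bounds=[(0.0, None)] * n_z)
    return res.x
```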
[Figure 3 appears here.] Figure 3: Simulation to compare state-of-the-art methods against αCSC: estimated atoms (atom 1, atom 2) and ground truth, for (a) no corruption, (b) 10% corruption, and (c) 20% corruption.

During our experiments we have observed that the ADMM-based methods do not guarantee the feasibility of the iterates. In other words, the norms of the estimated atoms might be greater than 1 during the iterations. To keep the algorithms comparable, when computing the objective value, we project the atoms onto the unit ball and scale the activations accordingly. To be strictly comparable, we also imposed a positivity constraint on these algorithms. This is easily done by modifying the soft-thresholding operator to be a rectified linear function. In the benchmarks, all algorithms use a single thread, except "M-step - 4 parallel", which uses 4 threads during the z update.

In Fig. 2, we illustrate the convergence behaviors of the different methods. Note that the y-axis is the precision relative to the objective value obtained upon convergence. In other words, each curve is relative to its own local minimum (see the supplementary document for details). In the right subplot, we show how long it takes for the algorithms to reach a relative precision of 0.01 for different settings (cf. supplementary material for more benchmarks). Our method consistently performs better, and the difference is even more striking for more challenging setups. This speed improvement on the M-step is crucial for us, as this step is executed repeatedly.

Robustness to corrupted data: In our second synthetic data experiment, we illustrate the robustness of αCSC in the presence of corrupted observations. In order to simulate the likely presence of high-amplitude artifacts, one way would be to directly simulate the generative model in (3). However, this would give us an unfair advantage, since αCSC is specifically designed for such data. Here, we take an alternative approach, where we corrupt a randomly chosen fraction of the trials (10% or 20%) with strong Gaussian noise of standard deviation 0.1, i.e. one order of magnitude higher than in a regular trial; a small simulation of this protocol is sketched below. We used a regularization parameter of λ = 0.1. In these experiments, by CSC we refer to αCSC with α = 2, which resembles using only the M-step of our algorithm with deterministic weights w_{n,t} = 1/2 for all n, t. We used a simpler setup where we set N = 100, T = 512, and L = 64. We used K = 2 atoms, shown in dashed lines in Fig. 3. For αCSC, we set the number of outer iterations I = 5, the number of iterations of the M-step to M = 50, and the number of iterations of the MCMC algorithm to J = 10. We discard the first 5 samples of the MCMC algorithm as burn-in. To enable a fair comparison, we run the standard CSC algorithm for I × M iterations, i.e. the total number of M-step iterations in αCSC. We also compared αCSC against competing state-of-the-art methods previously applied to neural time series: Brockmeier and Príncipe [7] and MoTIF [6]. Starting from multiple random initializations, the estimated atoms with the smallest ℓ2 distance to the true atoms are shown in Fig. 3.
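For concreteness, here is a hypothetical version of this data-generation and corruption protocol (the exact atom waveforms are unspecified in the text, so random unit-norm atoms stand in for them):

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, L, K = 100, 512, 64, 2

# Synthetic setup from the paper: zero-mean, unit-norm atoms, uniform
# activation instants and amplitudes, one activation per atom and trial.
atoms = rng.standard_normal((K, L))
atoms -= atoms.mean(axis=1, keepdims=True)
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

X = np.zeros((N, T))
for n in range(N):
    for k in range(K):
        t0 = rng.integers(0, T - L + 1)
        X[n, t0:t0 + L] += rng.random() * atoms[k]

# Regular trials get Gaussian noise of std 0.01; a randomly chosen 10%
# of trials are corrupted with noise one order of magnitude larger.
X += 0.01 * rng.standard_normal((N, T))
corrupted = rng.choice(N, size=N // 10, replace=False)
X[corrupted] += 0.1 * rng.standard_normal((len(corrupted), T))
```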
In the artifact-free scenario, all algorithms perform equally well, except for MoTIF, which suffers from the presence of activations with varying amplitudes. This is because it aligns the data using correlations before performing the eigenvalue decomposition, without taking into account the strength of activations in each trial. The performance of Brockmeier and Príncipe [7] and of CSC degrades as the level of corruption increases. On the other hand, αCSC is clearly more robust to the increasing level of corruption and recovers reasonable atoms even when 20% of the trials are corrupted.

Results on LFP data: In our last set of experiments, we consider real neural data from two different datasets. We first applied αCSC to an LFP dataset previously used in [8] and containing epileptiform spikes, as shown in Fig. 4(a). The data was recorded in the rat cortex and is free of artifacts. Therefore, we used the standard CSC with our optimization scheme (i.e. αCSC with α = 2). As a standard preprocessing procedure, we applied a high-pass filter at 1 Hz in order to remove drifts in the signal, and then applied a tapered cosine window to down-weight the samples near the edges; a sketch of this preprocessing is given below. We set λ = 6, N = 300, T = 2500, L = 350, and K = 3. The atoms recovered by our algorithm are shown in Fig. 4(b). We can observe that the estimated atoms resemble the spikes in Fig. 4(a). These results show that, without using any heuristics, our approach can recover atoms similar to the ones reported in [8], even though it does not make any assumptions on the shapes of the waveforms, nor does it initialize the atoms with template spikes in order to ease the optimization.

[Figure 4 appears here.] Figure 4: Atoms learnt by αCSC on LFP data containing epileptiform spikes, with α = 2. (a) LFP spike data from [8]. (b) Estimated atoms.
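The preprocessing just described can be sketched as follows (assumed details: the filter order and the Tukey taper fraction are not specified in the paper, so the values below are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.signal.windows import tukey

def preprocess_trials(trials, sfreq, hp_freq=1.0, taper_alpha=0.1):
    """High-pass filter at hp_freq Hz, then apply a tapered cosine
    (Tukey) window to down-weight samples near the trial edges.

    trials : array of shape (n_trials, T)
    sfreq  : sampling frequency in Hz
    """
    b, a = butter(N=4, Wn=hp_freq / (sfreq / 2.0), btype="highpass")
    filtered = filtfilt(b, a, trials, axis=1)  # zero-phase filtering
    window = tukey(trials.shape[1], alpha=taper_alpha)
    return filtered * window
```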
In particular, the high-frequency oscillations around 80 Hz are modulated in amplitude by the low-frequency oscillation around 3 Hz, a phenomenon known as cross-frequency coupling (CFC) [36]. We can observe this by computing a comodulogram [37] on the entire signal (Fig. 5(b)), which measures the correlation between the amplitude of the high-frequency band and the phase of the low-frequency band. Even though CSC is able to provide these excellent results on the cleaned data set, its performance heavily relies on the manual removal of the artifacts. Finally, we repeated the previous experiment on the full data, without removing the artifacts, and compared CSC with αCSC, where we set α = 1.2. The results are shown in the middle and right sub-figures of Fig. 5(a). It can be observed that, in the presence of strong artifacts, CSC is no longer able to recover the atoms. On the contrary, we observe that αCSC can still recover atoms as observed in the artifact-free regime. In particular, the cross-frequency coupling phenomenon is still visible.

5 Conclusion

We address the present need in the neuroscience community to better capture the complex morphology of brain waves. Our approach is based on a probabilistic formulation of a CSC model. We propose an inference strategy based on MCEM to deal efficiently with heavy-tailed noise and take into account the polarity of neural activations with a positivity constraint. Our problem formulation allows the use of fast quasi-Newton methods that outperform previously proposed state-of-the-art ADMM-based algorithms, even when not making use of our parallel implementation. Results on LFP data demonstrate that such algorithms can be robust to the presence of transient artifacts in data and reveal insights on neural time series without supervision.

6 Acknowledgements

The work was supported by the French National Research Agency grants ANR-14-NEUC-0002-01, ANR-13-CORD-0008-02, and ANR-16-CE23-0014 (FBIMATRIX), as well as the ERC Starting Grant SLAB ERC-YStG-676943.

References

[1] S. R. Cole and B. Voytek. Brain oscillations and the importance of waveform shape. Trends Cogn. Sci., 2017.
[2] M. X. Cohen. Analyzing neural time series data: Theory and practice. MIT Press, 2014. ISBN 9780262319560.
[3] S. R. Jones. When brain rhythms aren't 'rhythmic': implication for their mechanisms and meaning. Curr. Opin. Neurobiol., 40:72-80, 2016.
[4] A. Mazaheri and O. Jensen. Asymmetric amplitude modulations of brain oscillations generate slow evoked responses. The Journal of Neuroscience, 28(31):7781-7787, 2008.
[5] R. Hari and A. Puce. MEG-EEG Primer. Oxford University Press, 2017.
[6] P. Jost, P. Vandergheynst, S. Lesage, and R. Gribonval. MoTIF: an efficient algorithm for learning translation invariant dictionaries. In Acoustics, Speech and Signal Processing, ICASSP, volume 5. IEEE, 2006.
[7] A. J. Brockmeier and J. C. Príncipe. Learning recurrent waveforms within EEGs. IEEE Transactions on Biomedical Engineering, 63(1):43-54, 2016.
[8] S. Hitziger, M. Clerc, S. Saillet, C. Benar, and T. Papadopoulo. Adaptive Waveform Learning: A Framework for Modeling Variability in Neurophysiological Signals. IEEE Transactions on Signal Processing, 2017.
[9] B. Gips, A. Bahramisharif, E. Lowet, M. Roberts, P. de Weerd, O. Jensen, and J. van der Eerden. Discovering recurring patterns in electrophysiological recordings. J. Neurosci. Methods, 275:66-79, 2017.
[10] R. Grosse, R. Raina, H. Kwong, and A. Y. Ng. Shift-invariant sparse coding for audio classification.
In 23rd Conference on Uncertainty in Artificial Intelligence, UAI'07, pages 149-158. AUAI Press, 2007. ISBN 0-9749039-3-0.
[11] F. Heide, W. Heidrich, and G. Wetzstein. Fast and flexible convolutional sparse coding. In Computer Vision and Pattern Recognition (CVPR), pages 5135-5143. IEEE, 2015.
[12] B. Wohlberg. Efficient algorithms for convolutional sparse representations. IEEE Transactions on Image Processing, 25(1):301-315, 2016.
[13] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), pages 2528-2535. IEEE, 2010.
[14] M. Šorel and F. Šroubek. Fast convolutional sparse coding using matrix inversion lemma. Digital Signal Processing, 2016.
[15] K. Kavukcuoglu, P. Sermanet, Y-L. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. Learning convolutional feature hierarchies for visual recognition. In Advances in Neural Information Processing Systems (NIPS), pages 1090-1098, 2010.
[16] M. Pachitariu, A. M. Packer, N. Pettit, H. Dalgleish, M. Hausser, and M. Sahani. Extracting regions of interest from biological images with convolutional sparse block coding. In Advances in Neural Information Processing Systems (NIPS), pages 1745-1753, 2013.
[17] B. Mailhé, S. Lesage, R. Gribonval, F. Bimbot, and P. Vandergheynst. Shift-invariant dictionary learning for sparse representations: extending K-SVD. In 16th Eur. Signal Process. Conf., pages 1-5. IEEE, 2008.
[18] Q. Barthélemy, C. Gouy-Pailler, Y. Isaac, A. Souloumiac, A. Larue, and J. I. Mars. Multivariate temporal dictionary learning for EEG. J. Neurosci. Methods, 215(1):19-28, 2013.
[19] G. Samorodnitsky and M. S. Taqqu. Stable non-Gaussian random processes: stochastic models with infinite variance, volume 1. CRC Press, 1994.
[20] E. E. Kuruoglu. Signal processing in α-stable noise environments: a least Lp-norm approach. PhD thesis, University of Cambridge, 1999.
[21] B. B. Mandelbrot. Fractals and scaling in finance: Discontinuity, concentration, risk. Selecta volume E. Springer Science & Business Media, 2013.
[22] U. Şimşekli, A. Liutkus, and A. T. Cemgil. Alpha-stable matrix factorization. IEEE SPL, 22(12):2289-2293, 2015.
[23] Y. Wang, Y. Qi, Y. Wang, Z. Lei, X. Zheng, and G. Pan. Delving into α-stable distribution in noise suppression for seizure detection from scalp EEG. J. Neural. Eng., 13(5):056009, 2016.
[24] S. Leglaive, U. Şimşekli, A. Liutkus, R. Badeau, and G. Richard. Alpha-stable multichannel audio source separation. In ICASSP, pages 576-580, 2017.
[25] J. M. Chambers, C. L. Mallows, and B. W. Stuck. A method for simulating stable random variables. Journal of the American Statistical Association, 71(354):340-344, 1976.
[26] P. J. Huber. Robust Statistics. Wiley, 1981.
[27] S. Godsill and E. Kuruoglu. Bayesian inference for time series with heavy-tailed symmetric α-stable noise processes. Proc. Applications of Heavy Tailed Distributions in Economics, Eng. and Stat., 1999.
[28] S. Chib and E. Greenberg. Understanding the Metropolis-Hastings algorithm. The American Statistician, 49(4):327-335, 1995.
[29] J. S. Liu. Monte Carlo strategies in scientific computing. Springer, 2008.
[30] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190-1208, 1995.
[31] E. Moulines, P. Duhamel, J-F. Cardoso, and S. Mayrargue. Subspace methods for the blind identification of multichannel FIR filters. IEEE Transactions on Signal Processing, 43(2):516-525, 1995.
[32] A.
Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[33] A. Agarwal, A. Anandkumar, P. Jain, P. Netrapalli, and R. Tandon. Learning sparsely used overcomplete dictionaries. In Conference on Learning Theory, pages 123-137, 2014.
[34] J. Gorski, F. Pfeuffer, and K. Klamroth. Biconvex sets and optimization with biconvex functions: a survey and extensions. Mathematical Methods of Operations Research, 66(3):373-407, 2007.
[35] G. Dallérac, M. Graupner, J. Knippenberg, R. C. R. Martinez, T. F. Tavares, L. Tallot, N. El Massioui, A. Verschueren, S. Höhn, J. B. Bertolus, et al. Updating temporal expectancy of an aversive event engages striatal plasticity under amygdala control. Nature Communications, 8:13920, 2017.
[36] O. Jensen and L. L. Colgin. Cross-frequency coupling between neuronal oscillations. Trends in Cognitive Sciences, 11(7):267-269, 2007.
[37] A. B. L. Tort, R. Komorowski, H. Eichenbaum, and N. Kopell. Measuring phase-amplitude coupling between neuronal oscillations of different frequencies. J. Neurophysiol., 104(2):1195-1210, 2010.
Integration Methods and Optimization Algorithms

Vincent Roulet
INRIA, ENS, PSL Research University, Paris, France
[email protected]

Damien Scieur
INRIA, ENS, PSL Research University, Paris, France
[email protected]

Francis Bach
INRIA, ENS, PSL Research University, Paris, France
[email protected]

Alexandre d'Aspremont
CNRS, ENS, PSL Research University, Paris, France
[email protected]

Abstract

We show that accelerated optimization methods can be seen as particular instances of multi-step integration schemes from numerical analysis, applied to the gradient flow equation. Compared with recent advances in this vein, the differential equation considered here is the basic gradient flow, and we derive a class of multi-step schemes which includes accelerated algorithms, using classical conditions from numerical analysis. Multi-step schemes integrate the differential equation using larger step sizes, which intuitively explains the acceleration phenomenon.

Introduction

Applying the gradient descent algorithm to minimize a function f has a simple numerical interpretation as the integration of the gradient flow equation, written

    ẋ(t) = -∇f(x(t)),   x(0) = x_0,   (Gradient Flow)

using Euler's method. This appears to be a somewhat unique connection between optimization and numerical methods, since these two fields have inherently different goals. On one hand, numerical methods aim to get a precise discrete approximation of the solution x(t) on a finite time interval. On the other hand, optimization algorithms seek to find the minimizer of a function, which corresponds to the infinite time horizon of the gradient flow equation. More sophisticated methods than Euler's were developed to get better consistency with the continuous-time solution, but still focus on a finite time horizon [see e.g. Süli and Mayers, 2003]. Similarly, structural assumptions on f lead to more sophisticated optimization algorithms than the gradient method, such as the mirror gradient method [see e.g. Ben-Tal and Nemirovski, 2001; Beck and Teboulle, 2003], the proximal gradient method [Nesterov, 2007] or a combination thereof [Duchi et al., 2010; Nesterov, 2015]. Among them, Nesterov's accelerated gradient algorithm [Nesterov, 1983] is proven to be optimal on the class of smooth convex or strongly convex functions. This latter method was designed with optimal complexity in mind, but the proof relies on purely algebraic arguments and the key mechanism behind acceleration remains elusive, with various interpretations discussed in e.g. [Bubeck et al., 2015; Allen Zhu and Orecchia, 2017; Lessard et al., 2016]. Another recent stream of papers used differential equations to model the acceleration behavior and offer another interpretation of Nesterov's algorithm [Su et al., 2014; Krichene et al., 2015; Wibisono et al., 2016; Wilson et al., 2016]. However, the differential equation is often quite complex, being reverse-engineered from Nesterov's method itself, thus losing the intuition. Moreover, integration methods for these differential equations are often ignored or are not derived from standard numerical integration schemes. Here, we take another approach. Rather than using a complicated differential equation, we use advanced multistep discretization methods on the basic gradient flow equation in (Gradient Flow).
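To make the connection in the opening paragraph concrete, the following minimal sketch integrates (Gradient Flow) with Euler's explicit method; with step size h, each integration step is exactly a gradient descent step. The quadratic test function is our own example, not one from the paper.

```python
import numpy as np

def euler_gradient_flow(grad_f, x0, h, n_steps):
    """Explicit Euler integration of the gradient flow x'(t) = -grad_f(x(t)).
    Each step x_{k+1} = x_k - h * grad_f(x_k) is exactly gradient descent."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - h * grad_f(x)
    return x

# Quadratic example f(x) = 0.5 * x^T A x: the flow is x'(t) = -A x(t), whose
# exact solution decays to the minimizer x* = 0.
A = np.diag([1.0, 10.0])  # eigenvalues mu = 1, L = 10
x_end = euler_gradient_flow(lambda x: A @ x, x0=[1.0, 1.0],
                            h=1.0 / 10.0, n_steps=200)
```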
Ensuring that the methods effectively integrate this equation for infinitesimal step sizes is essential for the continuous-time interpretation and leads to a family of integration methods which contains various well-known optimization algorithms. A full analysis is carried out for linear gradient flows (quadratic optimization) and provides compelling explanations for the acceleration phenomenon. In particular, Nesterov's method can be seen as a stable and consistent gradient flow discretization scheme that allows bigger step sizes in integration, leading to faster convergence.

1 Gradient flow

We seek to minimize an L-smooth, µ-strongly convex function defined on R^d. We discretize the gradient flow equation (Gradient Flow), given by the following ordinary differential equation

    ẋ(t) = g(x(t)),   x(0) = x_0,   (ODE)

where g comes from a potential -f, meaning g = -∇f. Smoothness of f means Lipschitz continuity of g, i.e.

    ‖g(x) - g(y)‖ ≤ L ‖x - y‖,  for every x, y ∈ R^d,

where ‖·‖ is the Euclidean norm. This property ensures existence and uniqueness of the solution of (ODE) (see [Süli and Mayers, 2003, Theorem 12.1]). Strong convexity of f also means strong monotonicity of -g, i.e.,

    µ ‖x - y‖² ≤ -⟨x - y, g(x) - g(y)⟩,  for every x, y ∈ R^d,

and ensures that (ODE) has a unique point x* such that g(x*) = 0, called the equilibrium. This is the minimizer of f and the limit point of the solution, i.e. x(∞) = x*. Finally, this assumption allows us to control the convergence rate of the potential f and the solution x(t), as follows.

Proposition 1.1. Let f be an L-smooth and µ-strongly convex function and x_0 ∈ dom(f). Writing x* the minimizer of f, the solution x(t) of (Gradient Flow) satisfies

    f(x(t)) - f(x*) ≤ (f(x_0) - f(x*)) e^{-2µt},   ‖x(t) - x*‖ ≤ ‖x_0 - x*‖ e^{-µt}.   (1)

A proof of this last result is recalled in the Supplementary Material. We now focus on numerical methods to integrate (ODE).

2 Numerical integration of differential equations

2.1 Discretization schemes

In general, we do not have access to an explicit solution x(t) of (ODE). We thus use integration algorithms to approximate the curve (t, x(t)) by a grid (t_k, x_k) ≈ (t_k, x(t_k)) on a finite interval [0, t_max]. For simplicity here, we assume the step size h_k = t_k - t_{k-1} is constant, i.e., h_k = h and t_k = kh. The goal is then to minimize the approximation error ‖x_k - x(t_k)‖ for k ∈ [0, t_max/h]. We first introduce Euler's method to illustrate this on a basic example.

Euler's explicit method. Euler's (explicit) method is one of the oldest and simplest schemes for integrating the curve x(t). The idea stems from a Taylor expansion of x(t), which reads

    x(t + h) = x(t) + h ẋ(t) + O(h²).

When t = kh, Euler's method approximates x(t + h) by x_{k+1}, neglecting the second order term,

    x_{k+1} = x_k + h g(x_k).

In optimization terms, we recognize the gradient descent algorithm used to minimize f. Approximation errors in an integration method accumulate with iterations, and as Euler's method uses only the last point to compute the next one, it has only limited control over the accumulated error.

Linear multistep methods. Multi-step methods use a combination of past iterates to improve convergence. Throughout the paper, we focus on linear s-step methods whose recurrence can be written

    x_{k+s} = -∑_{i=0}^{s-1} ρ_i x_{k+i} + h ∑_{i=0}^{s} σ_i g(x_{k+i}),  for k ≥ 0,

where ρ_i, σ_i ∈ R are the parameters of the multistep method and h is again the step size. Each new point x_{k+s} is a function of the information given by the s previous points.
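The recurrence above is straightforward to implement for explicit methods (σ_s = 0, as defined next). Below is a minimal sketch, with polynomial coefficients stored lowest-degree first; Euler's method is recovered as the 1-step case ρ = (-1, 1), σ = (1, 0). This is our own illustration under those conventions.

```python
import numpy as np

def linear_multistep(grad_f, starts, rho, sigma, h, n_steps):
    """Explicit linear s-step method rho(E) x_k = h * sigma(E) g_k applied to
    the gradient flow, i.e. with g = -grad_f. Coefficients are stored
    lowest-degree first; rho is monic (rho[s] = 1) and sigma[s] = 0."""
    s = len(rho) - 1
    assert rho[s] == 1 and sigma[s] == 0, "method must be monic and explicit"
    xs = [np.asarray(x, dtype=float) for x in starts]  # s starting values
    gs = [-grad_f(x) for x in xs]
    for _ in range(n_steps):
        # x_{k+s} = -sum_i rho_i x_{k+i} + h * sum_i sigma_i g(x_{k+i})
        x_new = sum(h * sigma[i] * gs[i] - rho[i] * xs[i] for i in range(s))
        xs, gs = xs[1:] + [x_new], gs[1:] + [-grad_f(x_new)]
    return xs[-1]

# Euler's method is the 1-step case rho(z) = z - 1, sigma(z) = 1:
A = np.diag([1.0, 10.0])
x = linear_multistep(lambda x: A @ x, [np.ones(2)], rho=[-1.0, 1.0],
                     sigma=[1.0, 0.0], h=0.1, n_steps=100)
```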
If σ_s = 0, each new point is given explicitly by the s previous points and the method is called explicit. Otherwise, each new point requires solving an implicit equation and the method is called implicit. To simplify notations we use the shift operator E, which maps E x_k → x_{k+1}. Moreover, if we write g_k = g(x_k), then the shift operator also maps E g_k → g_{k+1}. Recall that a univariate polynomial is called monic if its leading coefficient is equal to 1. We now give the following concise definition of s-step linear methods.

Definition 2.1. Given an (ODE) defined by g, x_0, a step size h and initial points x_1, ..., x_{s-1}, a linear s-step method generates a sequence (t_k, x_k) which satisfies

    ρ(E) x_k = h σ(E) g_k,  for every k ≥ 0,   (2)

where ρ is a monic polynomial of degree s with coefficients ρ_i, and σ a polynomial of degree s with coefficients σ_i.

A linear s-step method is uniquely defined by the polynomials (ρ, σ). The sequence generated by the method then depends on the initial points and the step size. We now recall a few results describing the performance of multistep methods.

2.2 Stability

Stability is a key concept for integration methods. First of all, consider two curves x(t) and y(t), both solutions of (ODE), but starting from different points x(0) and y(0). If the function g is Lipschitz-continuous, it is possible to show that the distance between x(t) and y(t) is bounded on a finite interval, i.e.

    ‖x(t) - y(t)‖ ≤ C ‖x(0) - y(0)‖,  for all t ∈ [0, t_max],

where C may depend exponentially on t_max. We would like to have a similar behavior for our sequences x_k and y_k, approximating x(t_k) and y(t_k), i.e.

    ‖x_k - y_k‖ ≈ ‖x(t_k) - y(t_k)‖ ≤ C ‖x(0) - y(0)‖,  for k ∈ [0, t_max/h],   (3)

when h → 0, so k → ∞. Two issues quickly arise. First, for a linear s-step method, we need s starting values x_0, ..., x_{s-1}. Condition (3) will therefore depend on all these starting values and not only x_0. Secondly, any discretization scheme introduces at each step an approximation error, called the local error, which accumulates over time. We write this error ε_loc(x_{k+s}) and define it as

    ε_loc(x_{k+s}) ≜ x_{k+s} - x(t_{k+s}),

where x_{k+s} is computed using the real solution x(t_k), ..., x(t_{k+s-1}). In other words, the difference between x_k and y_k can be described as follows:

    ‖x_k - y_k‖ ≤ Error in the initial condition + Accumulation of local errors.

We now write a complete definition of stability, inspired by Definition 6.3.1 from Gautschi [2011].

Definition 2.2 (Stability). A linear multistep method is stable iff, for two sequences x_k, y_k generated by (ρ, σ) using a sufficiently small step size h > 0, from the starting values x_0, ..., x_{s-1} and y_0, ..., y_{s-1}, we have

    ‖x_k - y_k‖ ≤ C ( max_{i ∈ {0,...,s-1}} ‖x_i - y_i‖ + ∑_{i=1}^{t_max/h} ( ‖ε_loc(x_{i+s})‖ + ‖ε_loc(y_{i+s})‖ ) ),   (4)

for any k ∈ [0, t_max/h]. Here, the constant C may depend on t_max but is independent of h.

When h tends to zero, we may recover equation (3) only if the accumulated local error also tends to zero. We thus need

    lim_{h→0} (1/h) ‖ε_loc(x_{i+s})‖ = 0,  for all i ∈ [0, t_max/h].

This condition is called consistency. The following proposition shows there exist simple conditions to check consistency, which rely on comparing a Taylor expansion of the solution with the coefficients of the method. Its proof and further details are given in the Supplementary Material.

Proposition 2.3 (Consistency). A linear multistep method defined by polynomials (ρ, σ) is consistent if and only if

    ρ(1) = 0  and  ρ'(1) = σ(1).   (5)
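Conditions (5) are trivial to verify numerically for a given method. A minimal sketch, with coefficients again stored lowest-degree first; the tolerance is an arbitrary choice.

```python
import numpy as np

def is_consistent(rho, sigma, tol=1e-12):
    """Check the consistency conditions (5): rho(1) = 0 and rho'(1) = sigma(1).
    Coefficients are stored lowest-degree first."""
    rho = np.polynomial.Polynomial(rho)
    sigma = np.polynomial.Polynomial(sigma)
    return abs(rho(1.0)) <= tol and abs(rho.deriv()(1.0) - sigma(1.0)) <= tol

# Euler's method, rho(z) = z - 1 and sigma(z) = 1, is consistent:
assert is_consistent([-1.0, 1.0], [1.0, 0.0])
```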
Assuming consistency, we still need to control sensitivity to initial conditions, written

    ‖x_k - y_k‖ ≤ C max_{i ∈ {0,...,s-1}} ‖x_i - y_i‖.   (6)

Interestingly, analyzing the special case where g = 0 is completely equivalent to the general case, and this condition is therefore called zero-stability. This reduces to standard linear algebra results, as we only need to look at the solution of the homogeneous difference equation ρ(E) x_k = 0. This is captured in the following theorem, whose technical proof can be found in [Gautschi, 2011, Theorem 6.3.4].

Theorem 2.4 (Root condition). Consider a linear multistep method (ρ, σ). The method is zero-stable if and only if all roots of ρ(z) are in the unit disk, and the roots on the unit circle are simple.

2.3 Convergence of the global error

Numerical analysis focuses on integrating an ODE on a finite interval of time [0, t_max]. It studies the behavior of the global error, defined by x(t_k) - x_k, as a function of the step size h. If the global error converges to 0 with the step size, the method is guaranteed to approximate the ODE correctly on the time interval, for h small enough. We now state Dahlquist's equivalence theorem, which shows that the global error converges to zero when h does if the method is stable, i.e. when the method is consistent and zero-stable. This naturally needs the additional assumption that the starting values x_0, ..., x_{s-1} are computed such that they converge to the solution (x(0), ..., x(t_{s-1})). The proof of the theorem can be found in Gautschi [2011].

Theorem 2.5 (Dahlquist's equivalence theorem). Given an (ODE) defined by g and x_0 and a consistent linear multistep method (ρ, σ), whose starting values are computed such that lim_{h→0} x_i = x(t_i) for any i ∈ {0, ..., s-1}, zero-stability is necessary and sufficient for convergence, i.e. to ensure x(t_k) - x_k → 0 for any k when the step size h goes to zero.

2.4 Region of absolute stability

The results above ensure stability and global error bounds on finite time intervals. Solving optimization problems, however, requires looking at infinite time horizons. We start by finding conditions ensuring that the numerical solution does not diverge when the time interval increases, i.e. that the numerical solution is stable with a constant C which does not depend on t_max. Formally, for a fixed step size h, we want to ensure

    ‖x_k‖ ≤ C max_{i ∈ {0,...,s-1}} ‖x_i‖,  for all k ∈ [0, t_max/h] and t_max > 0.   (7)

This is not possible without further assumptions on the function g, as in the general case the solution x(t) itself may diverge. We begin with the simple scalar linear case which, given λ > 0, reads

    ẋ(t) = -λ x(t),   x(0) = x_0.   (Scalar Linear ODE)

The recurrence of a linear multistep method with parameters (ρ, σ) applied to (Scalar Linear ODE) then reads

    ρ(E) x_k = -λh σ(E) x_k  ⇔  [ρ + λh σ](E) x_k = 0,

where we recognize a homogeneous recurrence equation. Condition (7) is then controlled by the step size h and the constant λ, ensuring that this homogeneous recurrence equation produces bounded solutions. This leads us to the definition of the region of absolute stability, also called A-stability.

Definition 2.6 (Absolute stability). The region of absolute stability of a linear multistep method defined by (ρ, σ) is the set of values λh such that the characteristic polynomial

    π_{λh}(z) ≜ ρ(z) + λh σ(z)   (8)

of the homogeneous recurrence equation π_{λh}(E) x_k = 0 produces bounded solutions.
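Definition 2.6 suggests a direct numerical test: for a given λh, compute the roots of π_{λh}(z) and check the root conditions made precise in Proposition 2.7 below. A sketch using numpy.roots (which expects coefficients highest-degree first, hence the reversal); the tolerance is an arbitrary choice.

```python
import numpy as np

def in_absolute_stability_region(rho, sigma, lam_h, tol=1e-9):
    """True if lam_h = lambda * h lies in the region of absolute stability of
    (rho, sigma): all roots of pi(z) = rho(z) + lam_h * sigma(z) lie in the
    closed unit disk and the roots on the unit circle are simple.
    Coefficients are stored lowest-degree first."""
    coeffs = (np.asarray(rho, float) + lam_h * np.asarray(sigma, float))[::-1]
    roots = np.roots(coeffs)
    if np.any(np.abs(roots) > 1 + tol):
        return False
    on_circle = roots[np.abs(np.abs(roots) - 1) <= tol]
    for i, r in enumerate(on_circle):
        if np.any(np.abs(np.delete(on_circle, i) - r) <= tol):
            return False  # repeated root on the unit circle
    return True

# Euler's method has pi(z) = z - (1 - lam_h), so it is stable for lam_h in ]0, 2]:
assert in_absolute_stability_region([-1, 1], [1, 0], 0.5)
assert not in_absolute_stability_region([-1, 1], [1, 0], 2.5)
```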
Standard linear algebra links this condition to the roots of the characteristic polynomial, as recalled in the next proposition (see e.g. Lemma 12.1 of Süli and Mayers [2003]).

Proposition 2.7. Let π be a polynomial and write x_k a solution of the homogeneous recurrence equation π(E) x_k = 0 with arbitrary initial values. If all roots of π are inside the unit disk and the ones on the unit circle have a multiplicity exactly equal to one, then the sequence ‖x_k‖ is bounded.

Absolute stability of a linear multistep method determines its ability to integrate a linear ODE defined by

    ẋ(t) = -A x(t),   x(0) = x_0,   (Linear ODE)

where A is a positive symmetric matrix whose eigenvalues belong to [µ, L] for 0 < µ ≤ L. In this case the step size h must indeed be chosen such that, for any λ ∈ [µ, L], λh belongs to the region of absolute stability of the method. This (Linear ODE) is a special instance of (Gradient Flow) where f is a quadratic function. Therefore, absolute stability gives a necessary (but not sufficient) condition to integrate (Gradient Flow) on L-smooth, µ-strongly convex functions.

2.5 Convergence analysis in the linear case

By construction, absolute stability also gives hints on the convergence of x_k to the equilibrium in the linear case. More precisely, it allows us to control the rate of convergence of x_k, approximating the solution x(t) of (Linear ODE), as shown in the following proposition, whose proof can be found in the Supplementary Material.

Proposition 2.8. Given a (Linear ODE) defined by x_0 and a positive symmetric matrix A whose eigenvalues belong to [µ, L] with 0 < µ ≤ L, using a linear multistep method defined by (ρ, σ) and applying a fixed step size h, we define r_max as

    r_max = max_{λ ∈ [µ, L]}  max_{r ∈ roots(π_{λh}(z))} |r|,

where π_{λh} is defined in (8). If r_max < 1 and its multiplicity is equal to m, then the speed of convergence of the sequence x_k produced by the linear multistep method to the equilibrium x* of the differential equation is given by

    ‖x_k - x*‖ = O(k^{m-1} r_max^k).   (9)

We can now use these properties to analyze and design multistep methods.

3 Analysis and design of multi-step methods

As shown previously, we want to integrate (Gradient Flow), and Proposition 1.1 gives a rate of convergence in the continuous case. If the method tracks x(t) with sufficient accuracy, then the rate of the method will be close to the rate of convergence of x(kh). So, larger values of h yield faster convergence of x(t) to the equilibrium x*. However, h cannot be too large, as the method may be too inaccurate and/or unstable as h increases. Convergence rates of optimization algorithms are thus controlled by our ability to discretize the gradient flow equation using large step sizes. We recall the different conditions that proper linear multistep methods should satisfy:

- Monic polynomial (Section 2.1). Without loss of generality (dividing both sides of the difference equation (2) of the multistep method by ρ_s does not change the method).
- Explicit method (Section 2.1). We assume that the scheme is explicit in order to avoid solving a non-linear system at each step.
- Consistency (Section 2.2). If the method is not consistent, then the local error does not converge when the step size goes to zero.
- Zero-stability (Section 2.2). Zero-stability ensures convergence of the global error (Section 2.3) when the method is also consistent.
- Region of absolute stability (Section 2.4). If λh is not inside the region of absolute stability for some λ ∈ [µ, L], then the method is divergent when t_max increases.
Using the remaining degrees of freedom, we can tune the algorithm to improve the convergence rate on (Linear ODE), which corresponds to the optimization of a quadratic function. Indeed, as shown in Proposition 2.8, the largest root of π_{λh}(z) gives us the rate of convergence on quadratic functions (when λ ∈ [µ, L]). Since smooth and strongly convex functions are close to quadratic (being sandwiched between two quadratics), this will also give us a good idea of the rate of convergence on these functions. We do not derive a proof of convergence of the sequence for general smooth and (strongly) convex functions (but convergence is proved by Nesterov [2013] or using Lyapunov techniques by Wilson et al. [2016]). Still, our results provide intuition on why accelerated methods converge faster.

3.1 Analysis of two-step methods

We now analyze the convergence of two-step methods (an analysis of Euler's method is provided in the Supplementary Material). We first translate the conditions for multistep methods, listed at the beginning of this section, into constraints on the coefficients:

    ρ_2 = 1                          (Monic polynomial)
    σ_2 = 0                          (Explicit method)
    ρ_0 + ρ_1 + ρ_2 = 0              (Consistency)
    σ_0 + σ_1 + σ_2 = ρ_1 + 2ρ_2     (Consistency)
    |Roots(ρ)| ≤ 1                   (Zero-stability)

The equality constraints yield three linear constraints, defining the set L such that

    L = { ρ_0, ρ_1, σ_0, σ_1 : ρ_1 = -(1 + ρ_0), σ_1 = 1 - ρ_0 - σ_0, |ρ_0| < 1 }.   (10)

We now seek conditions on the remaining parameters to produce a stable method. Absolute stability requires that all roots of the polynomial π_{λh}(z) in (8) are inside the unit circle, which translates here into conditions on the roots of second order equations. The following proposition gives the values of the roots of π_{λh}(z) as a function of the parameters ρ_i and σ_i.

Proposition 3.1. Given constants 0 < µ ≤ L, a step size h > 0 and a linear two-step method defined by (ρ, σ), under the conditions

    (ρ_1 + µhσ_1)² ≤ 4(ρ_0 + µhσ_0),   (ρ_1 + Lhσ_1)² ≤ 4(ρ_0 + Lhσ_0),   (11)

the roots r_±(λ) of π_{λh}, defined in (8), are complex conjugate for any λ ∈ [µ, L]. Moreover, the largest root modulus is equal to

    max_{λ ∈ [µ, L]} |r_±(λ)|² = max{ ρ_0 + µhσ_0, ρ_0 + Lhσ_0 }.   (12)

The proof can be found in the Supplementary Material. The next step is to minimize the largest modulus (12) in the coefficients ρ_i and σ_i to get the best rate of convergence, assuming the roots are complex (the case where the roots are real leads to weaker results).

3.2 Design of a family of two-step methods for quadratics

We now have all the ingredients to build a two-step method for which the sequence x_k converges quickly to x* for quadratic functions. Optimizing the convergence rate means solving the following problem,

    min   max{ ρ_0 + µhσ_0, ρ_0 + Lhσ_0 }
    s.t.  (ρ_0, ρ_1, σ_0, σ_1) ∈ L,
          (ρ_1 + µhσ_1)² ≤ 4(ρ_0 + µhσ_0),
          (ρ_1 + Lhσ_1)² ≤ 4(ρ_0 + Lhσ_0),

in the variables ρ_0, ρ_1, σ_0, σ_1, h > 0, where L is defined in (10). If we use the equality constraints in (10) and make the following change of variables,

    h̄ = h(1 - ρ_0),   c_µ = ρ_0 + µhσ_0,   c_L = ρ_0 + Lhσ_0,   (13)

the problem can be solved, for fixed h̄, in the variables c_µ, c_L. In that case, the optimal solution is given by

    c*_µ = (1 - √(µh̄))²,   c*_L = (1 - √(Lh̄))²,   (14)

obtained by tightening the two first inequalities, for h̄ ∈ ]0, (1 + √(µ/L))²/L[. Now, if we fix h̄, we can recover a two-step linear method defined by (ρ, σ) and a step size h by using the equations in (13). We define the quantity β = (1 - √(µ/L)) / (1 + √(µ/L)).
A suboptimal two-step method. Setting h̄ = 1/L for example, the parameters of the corresponding two-step method, called method M1, are

    M1 = { ρ(z) = β - (1+β)z + z²,  σ(z) = -β(1-β) + (1-β²)z,  h = 1/(L(1-β)) },   (15)

and its largest root modulus (12) is given by

    rate(M1) = √(max{c_µ, c_L}) = √(c_µ) = 1 - √(µ/L).

Optimal two-step method for quadratics. We can compute the optimal h̄ which minimizes the maximum of the two roots c*_µ and c*_L defined in (14). The solution simply balances the two terms in the maximum, with h̄* = (1+β)²/L. This choice of h̄ leads to the method M2, described by

    M2 = { ρ(z) = β² - (1+β²)z + z²,  σ(z) = (1-β²)z,  h = 1/√(µL) },   (16)

with convergence rate

    rate(M2) = √(c_µ) = √(c_L) = β = (1 - √(µ/L)) / (1 + √(µ/L)) < rate(M1).

We will now see that methods M1 and M2 are actually related to Nesterov's accelerated method and Polyak's heavy ball algorithm.

4 On the link between integration and optimization

In the previous section, we derived a family of linear multistep methods, parametrized by h̄. We will now compare these methods to common optimization algorithms used to minimize L-smooth, µ-strongly convex functions.

4.1 Polyak's heavy ball method

The heavy ball method was proposed by Polyak [1964]. It adds a momentum term to the gradient step,

    x_{k+2} = x_{k+1} - c_1 ∇f(x_{k+1}) + c_2 (x_{k+1} - x_k),

where c_1 = 4/(√L + √µ)² and c_2 = β². We can organize the terms in the sequence to match the general structure of linear multistep methods, to get

    β² x_k - (1+β²) x_{k+1} + x_{k+2} = c_1 (-∇f(x_{k+1})).

We easily identify ρ(z) = β² - (1+β²)z + z² and hσ(z) = c_1 z. To extract h, we will assume that the method is consistent (see conditions (5)). All computations done, we can identify the corresponding linear multistep method as

    M_Polyak = { ρ(z) = β² - (1+β²)z + z²,  σ(z) = (1-β²)z,  h = 1/√(µL) }.   (17)

This shows that M_Polyak = M2. In fact, this result was expected, since Polyak's method is known to be optimal for quadratic functions. However, it is also known that Polyak's algorithm does not converge for a general smooth and strongly convex function [Lessard et al., 2016].
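The identification above is easy to check numerically. The sketch below builds M1 and M2 from (15)-(16) and evaluates r_max from Proposition 2.8 on a grid of λ ∈ [µ, L]; the grid size and test values (µ, L) are arbitrary choices of ours.

```python
import numpy as np

def two_step_method(mu, L, which="M2"):
    """Coefficients (rho, sigma, step size h) of the two-step methods (15)-(16):
    M1 matches Nesterov's accelerated gradient, M2 Polyak's heavy ball."""
    beta = (1 - np.sqrt(mu / L)) / (1 + np.sqrt(mu / L))
    if which == "M1":
        rho = np.array([beta, -(1 + beta), 1.0])
        sigma = np.array([-beta * (1 - beta), 1 - beta ** 2, 0.0])
        h = 1.0 / (L * (1 - beta))
    else:
        rho = np.array([beta ** 2, -(1 + beta ** 2), 1.0])
        sigma = np.array([0.0, 1 - beta ** 2, 0.0])
        h = 1.0 / np.sqrt(mu * L)
    return rho, sigma, h

def rate_on_quadratics(rho, sigma, h, mu, L, n_grid=200):
    """r_max of Proposition 2.8: largest root modulus of
    pi(z) = rho(z) + lambda * h * sigma(z) over a grid of lambda in [mu, L]."""
    return max(np.abs(np.roots((rho + lam * h * sigma)[::-1])).max()
               for lam in np.linspace(mu, L, n_grid))

mu, L = 1.0, 100.0
for name in ("M1", "M2"):
    print(name, rate_on_quadratics(*two_step_method(mu, L, name), mu, L))
# Expected: rate(M1) ~ 1 - sqrt(mu/L) = 0.9 and rate(M2) ~ beta ~ 0.818.
```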
4.2 Nesterov's accelerated gradient

Nesterov's accelerated method in its simplest form is described by two sequences x_k and y_k, with

    y_{k+1} = x_k - (1/L) ∇f(x_k),
    x_{k+1} = y_{k+1} + β (y_{k+1} - y_k).

As above, we will write Nesterov's accelerated gradient as a linear multistep method by expanding y_k in the definition of x_k, to get

    β x_k - (1+β) x_{k+1} + x_{k+2} = (1/L) ( -β(-∇f(x_k)) + (1+β)(-∇f(x_{k+1})) ).

Again, assuming as above that the method is consistent in order to extract h, we identify the linear multistep method associated to Nesterov's algorithm. After identification,

    M_Nest = { ρ(z) = β - (1+β)z + z²,  σ(z) = -β(1-β) + (1-β²)z,  h = 1/(L(1-β)) },

which means that M1 = M_Nest.

4.3 The convergence rate of Nesterov's method

Pushing the analysis a little bit further, we have a simple intuitive argument that explains why Nesterov's algorithm is faster than the gradient method. There is of course a complete proof of its rate of convergence [Nesterov, 2013], even using differential equations arguments [Wibisono et al., 2016; Wilson et al., 2016], but we take a simpler approach here. The key parameter is the step size h. If we compare it with the step size in the classical gradient method, Nesterov's method uses a step size which is (1 - β)^{-1} ≈ √(L/µ) times larger. Recall that, in continuous time, the rate of convergence of x(t) to x* is given by

    f(x(t)) - f(x*) ≤ e^{-2µt} (f(x_0) - f(x*)).

The gradient method tries to approximate x(t) using an Euler scheme with step size h = 1/L, which means x_k^{grad} ≈ x(k/L), so

    f(x_k^{grad}) - f(x*) ≈ f(x(k/L)) - f(x*) ≤ (f(x_0) - f(x*)) e^{-2kµ/L}.

However, Nesterov's method has a step size equal to

    h_Nest = 1/(L(1-β)) = (1 + √(µ/L)) / (2√(µL)) ≈ 1/√(4µL),

while maintaining stability, which means x_k^{Nest} ≈ x(k/√(4µL)). In that case, the estimated rate of convergence becomes

    f(x_k^{Nest}) - f(x*) ≈ f(x(k/√(4µL))) - f(x*) ≤ (f(x_0) - f(x*)) e^{-k√(µ/L)},

which is approximately the rate of convergence of Nesterov's algorithm in discrete time, and we recover the accelerated rate in √(µ/L) versus µ/L for gradient descent. Overall, the accelerated method is more efficient because it integrates the gradient flow faster than simple gradient descent, making longer steps. The numerical simulation in Figure 1 makes this argument more visual.

5 Generalization and Future Work

We showed that accelerated optimization methods can be seen as multistep integration schemes applied to the basic gradient flow equation. Our results give a natural interpretation of acceleration: multistep schemes allow for larger steps, which speeds up convergence. In the Supplementary Material, we detail further links between integration methods and other well-known optimization algorithms, such as proximal point methods, mirror gradient descent and proximal gradient descent, and discuss the weakly convex case.

[Figure 1: Integration of a linear ODE with optimal (left) and small (right) step sizes; the plots compare the exact solution with the Euler, Nesterov and Polyak iterates between x_0 and x*.]

The extra-gradient algorithm and its recent accelerated version [Diakonikolas and Orecchia, 2017] can also be linked to another family of integration methods called Runge-Kutta, which notably includes predictor-corrector methods. Our stability analysis is limited to the quadratic case, the definition of A-stability being too restrictive for the class of smooth and strongly convex functions. A more appropriate condition would be G-stability, which extends A-stability to non-linear ODEs, but this condition requires strict monotonicity of the error (which is not the case with accelerated algorithms). Stability may also be tackled by recent advances in lower bound theory provided by Taylor [2017], but these yield numerical rather than analytical convergence bounds. Our next objective is thus to derive a new stability condition in between A-stability and G-stability.

Acknowledgments

The authors would like to acknowledge support from a starting grant from the European Research Council (ERC project SIPA), from the European Union's Seventh Framework Programme (FP7-PEOPLE-2013-ITN) under grant agreement number 607290 SpaRTaN, an AMX fellowship, as well as support from the chaire Économie des nouvelles données with the data science joint research initiative with the fonds AXA pour la recherche and a gift from Société Générale Cross Asset Quantitative Research.

References

Allen Zhu, Z. and Orecchia, L. [2017], Linear coupling: An ultimate unification of gradient and mirror descent, in 'Proceedings of the 8th Innovations in Theoretical Computer Science', ITCS 17.
Beck, A. and Teboulle, M. [2003], 'Mirror descent and nonlinear projected subgradient methods for convex optimization', Operations Research Letters 31(3), 167-175.
Ben-Tal, A. and Nemirovski, A. [2001], Lectures on modern convex optimization: analysis, algorithms, and engineering applications, SIAM.
Bubeck, S., Lee, Y. T. and Singh, M. [2015], 'A geometric alternative to Nesterov's accelerated gradient descent', ArXiv e-prints.
Diakonikolas, J. and Orecchia, L. [2017], 'Accelerated extra-gradient descent: A novel accelerated first-order method', arXiv preprint arXiv:1706.04680.
Duchi, J. C., Shalev-Shwartz, S., Singer, Y. and Tewari, A. [2010], Composite objective mirror descent, in 'COLT', pp. 14-26.
Gautschi, W. [2011], Numerical analysis, Springer Science & Business Media.
Krichene, W., Bayen, A. and Bartlett, P. L. [2015], Accelerated mirror descent in continuous and discrete time, in 'Advances in Neural Information Processing Systems', pp. 2845-2853.
Lessard, L., Recht, B. and Packard, A. [2016], 'Analysis and design of optimization algorithms via integral quadratic constraints', SIAM Journal on Optimization 26(1), 57-95.
Nesterov, Y. [1983], A method of solving a convex programming problem with convergence rate O(1/k²), in 'Soviet Mathematics Doklady', Vol. 27, pp. 372-376.
Nesterov, Y. [2007], 'Gradient methods for minimizing composite objective function'.
Nesterov, Y. [2013], Introductory lectures on convex optimization: A basic course, Vol. 87, Springer Science & Business Media.
Nesterov, Y. [2015], 'Universal gradient methods for convex optimization problems', Mathematical Programming 152(1-2), 381-404.
Polyak, B. T. [1964], 'Some methods of speeding up the convergence of iteration methods', USSR Computational Mathematics and Mathematical Physics 4(5), 1-17.
Su, W., Boyd, S. and Candes, E. [2014], A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights, in 'Advances in Neural Information Processing Systems', pp. 2510-2518.
Süli, E. and Mayers, D. F. [2003], An introduction to numerical analysis, Cambridge University Press.
Taylor, A. [2017], Convex Interpolation and Performance Estimation of First-order Methods for Convex Optimization, PhD thesis, Université catholique de Louvain.
Wibisono, A., Wilson, A. C. and Jordan, M. I. [2016], 'A variational perspective on accelerated methods in optimization', Proceedings of the National Academy of Sciences p. 201614734.
Wilson, A. C., Recht, B. and Jordan, M. I. [2016], 'A Lyapunov analysis of momentum methods in optimization', arXiv preprint arXiv:1611.02635.
Sharpness, Restart and Acceleration

Vincent Roulet
INRIA, ENS, Paris, France
[email protected]

Alexandre d'Aspremont
CNRS, ENS, Paris, France
[email protected]

Abstract

The Łojasiewicz inequality shows that sharpness bounds on the minimum of convex optimization problems hold almost generically. Sharpness directly controls the performance of restart schemes, as observed by Nemirovskii and Nesterov [1985]. The constants quantifying error bounds are of course unobservable, but we show that optimal restart strategies are robust, and searching for the best scheme only increases the complexity by a logarithmic factor compared to the optimal bound. Overall then, restart schemes generically accelerate accelerated methods.

Introduction

We study convex optimization problems of the form

    minimize f(x)   (P)

where f is a convex function defined on R^n. The complexity of these problems using first order methods is generically controlled by smoothness assumptions on f, such as Lipschitz continuity of its gradient. Additional assumptions such as strong convexity or uniform convexity provide respectively linear [Nesterov, 2013b] and faster polynomial [Juditski and Nesterov, 2014] rates of convergence. However, these assumptions are often too restrictive to be applied. Here, we make a much weaker and generic assumption that describes the sharpness of the function around its minimizers by constants µ ≥ 0 and r ≥ 1 such that

    (µ/r) d(x, X*)^r ≤ f(x) - f*,  for every x ∈ K,   (Sharp)

where f* is the minimum of f, K ⊂ R^n is a compact set, and d(x, X*) = min_{y ∈ X*} ‖x - y‖ is the distance from x to the set X* ⊂ K of minimizers of f (we assume the problem feasible, i.e. X* ≠ ∅) for the Euclidean norm ‖·‖. This defines a lower bound on the function around its minimizers: for r = 1, f shows a kink around its minimizers, and the larger r is, the flatter the function is around its minimizers. We exploit this property using restart schemes for classical convex optimization algorithms (see the generic sketch after this introduction).

The sharpness assumption (Sharp) is better known as a Hölderian error bound on the distance to the set of minimizers. Hoffman [1952] first introduced error bounds to study systems of linear inequalities. Natural extensions were then developed for convex optimization [Robinson, 1975; Mangasarian, 1985; Auslender and Crouzeix, 1988], notably through the concept of sharp minima [Polyak, 1979; Burke and Ferris, 1993; Burke and Deng, 2002]. But the most striking discovery was made by Łojasiewicz [1963, 1993], who proved inequality (Sharp) for real analytic and subanalytic functions. It has since been extended to non-smooth subanalytic convex functions by Bolte et al. [2007]. Overall, since (Sharp) essentially measures the sharpness of minimizers, it holds somewhat generically. On the other hand, this inequality is purely descriptive, as we have no hope of ever observing either r or µ, and deriving adaptive schemes is crucial to ensure practical relevance.

Łojasiewicz inequalities, either in the form of (Sharp) or as gradient dominated properties [Polyak, 1979], led to new simple convergence results [Karimi et al., 2016], in particular for alternating and splitting methods [Attouch et al., 2010; Frankel et al., 2015], even in the non-convex case [Bolte et al., 2014]. Here we focus on Hölderian error bounds, as they offer a simple explanation of the accelerated rates of restart schemes.
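The restart schemes analyzed below share a very simple generic structure, sketched here. The inner accelerated method and the schedule are placeholders: choosing the schedule as a function of (r, µ), and adapting when these constants are unknown, is precisely the subject of this paper.

```python
import numpy as np

def scheduled_restart(accel_method, x0, schedule):
    """Generic scheduled restart: run an accelerated method for t iterations,
    then restart it from the last iterate (dropping its internal momentum).
    accel_method(x, t) should return the point A(x, t) after t inner steps."""
    x = np.asarray(x0, dtype=float)
    for t in schedule:
        x = accel_method(x, t)
    return x

# e.g. a fixed schedule restarting every 100 iterations, 10 times:
# x_hat = scheduled_restart(nesterov, x0, [100] * 10)  # 'nesterov' is a placeholder
```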
Restart schemes were already studied for strongly or uniformly convex functions [Nemirovskii and Nesterov, 1985; Nesterov, 2013a; Juditski and Nesterov, 2014; Lin and Xiao, 2014]. In particular, Nemirovskii and Nesterov [1985] link a "strict minimum" condition akin to (Sharp) with faster convergence rates using restart schemes, which form the basis of our results, but they do not study the cost of adaptation and do not tackle the non-smooth case. In a similar spirit, weaker versions of this strict minimum condition were used more recently to study the performance of restart schemes in [Renegar, 2014; Freund and Lu, 2015; Roulet et al., 2015]. The fundamental question for a restart scheme is naturally to know when an algorithm must be stopped and relaunched. Several heuristics [O'Donoghue and Candes, 2015; Su et al., 2014; Giselsson and Boyd, 2014] studied adaptive restart schemes to speed up convergence of optimal methods. The robustness of restart schemes was then theoretically studied by Fercoq and Qu [2016] for quadratic error bounds, i.e., (Sharp) with r = 2, which the LASSO problem satisfies for example. Fercoq and Qu [2017] recently extended their work to produce adaptive restarts with theoretical guarantees of optimal performance, still for quadratic error bounds. The previous references focus on smooth problems, but error bounds also appear for non-smooth ones: Gilpin et al. [2012] prove, for example, linear convergence of restart schemes in bilinear matrix games, where the minimum is sharp, i.e., (Sharp) with r = 1.

Our contribution here is to derive optimal scheduled restart schemes for general convex optimization problems with smooth, non-smooth or Hölder smooth functions satisfying the sharpness assumption. We then show that for smooth functions these schemes can be made adaptive with nearly optimal complexity (up to a squared log term) for a wide array of sharpness assumptions. We also analyze a restart criterion based on a sufficient decrease of the gap to the minimum value of the problem, when the latter is known in advance. In that case, restart schemes are shown to be optimal without requiring any additional information on the function.

1 Problem assumptions

1.1 Smoothness

Convex optimization problems (P) are generally divided in two classes: smooth problems, for which f has Lipschitz continuous gradients, and non-smooth problems, for which f is not differentiable. Nesterov [2015] proposed to unify both points of view by assuming generally that there exist constants 1 ≤ s ≤ 2 and L > 0 such that

‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖^{s−1}, for all x, y ∈ R^n, (Smooth)

where ∇f(x) is any sub-gradient of f at x if s = 1 (otherwise this implies differentiability of f). For s = 2, we retrieve the classical definition of smoothness [Nesterov, 2013b]. For s = 1, we get a classical assumption made in non-smooth convex optimization, i.e., that sub-gradients of the function are bounded. For 1 < s < 2, this assumes the gradient of f to be Hölder Lipschitz continuous. In a first step, we analyze restart schemes for smooth convex optimization problems, then generalize to the general smoothness assumption (Smooth) using the appropriate accelerated algorithms developed by Nesterov [2015].

1.2 Error bounds

In general, an error bound is an inequality of the form d(x, X*) ≤ γ(f(x) − f*), where γ is an increasing function at 0, called the residual function, and x may evolve either in the whole space or in a bounded set; see Bolte et al. [2015] for more details.
We focus on Hölderian error bounds (Sharp), as they are the most common in practice. They are notably satisfied by analytic and subanalytic functions, but the proof (see, e.g., Bierstone and Milman [1988, Theorem 6.4]) relies on topological arguments that are far from constructive. Hence, outside of some particular cases (e.g., strong convexity), we cannot assume that the constants in (Sharp) are known, even approximately. Error bounds can generically be linked to the Łojasiewicz inequality, which upper bounds the magnitude of the gradient by values of the function [Bolte et al., 2015]. This property paved the way to many recent results in optimization [Attouch et al., 2010; Frankel et al., 2015; Bolte et al., 2014]. Here, we will see that (Sharp) is sufficient for acceleration of convex optimization algorithms by their restart. Note finally that in most cases error bounds are local properties, hence the convergence results that follow will generally be local.

1.3 Sharpness and smoothness

Let f be a convex function on R^n satisfying (Smooth) with parameters (s, L). This property ensures that f(x) ≤ f* + (L/s)‖x − y‖^s for given x ∈ R^n and y ∈ X*. Setting y to be the projection of x onto X*, this yields the following upper bound on suboptimality:

f(x) − f* ≤ (L/s) d(x, X*)^s. (1)

Now, assume that f satisfies the error bound (Sharp) on a set K with parameters (r, μ). Combining (1) and (Sharp) leads, for every x ∈ K, to

sμ/(rL) ≤ d(x, X*)^{s−r}.

This means that necessarily s ≤ r, by taking x close to X*. Moreover, if s < r, this last inequality can only be valid on a bounded set, i.e., either smoothness or the error bound, or both, are valid only on a bounded set. In the following, we write

κ ≜ L^{2/s} / μ^{2/r} and τ ≜ 1 − s/r, (2)

respectively a generalized condition number for the function f and a condition number based on the ratio of powers in inequalities (Smooth) and (Sharp). If r = s = 2, κ matches the classical condition number of the function.

2 Scheduled restarts for smooth convex problems

In this section, we seek to solve (P) assuming that the function f is smooth, i.e., satisfies (Smooth) with s = 2 and L > 0. Without further assumptions on f, an optimal algorithm to solve the smooth convex optimization problem (P) is Nesterov's accelerated gradient method [Nesterov, 1983]. Given an initial point x0, this algorithm outputs, after t iterations, a point x = A(x0, t) such that

f(x) − f* ≤ cL d(x0, X*)² / t², (3)

where c > 0 denotes a universal constant (whose value will be allowed to vary in what follows, with c = 4 here). We assume without loss of generality that f(x) ≤ f(x0). More details about Nesterov's algorithm are given in the Supplementary Material. In what follows, we also assume that f satisfies (Sharp) with parameters (r, μ) on a set K ⊇ X*, which means

(μ/r) d(x, X*)^r ≤ f(x) − f*, for every x ∈ K. (Sharp)

As mentioned before, if r > s = 2 this property is necessarily local, i.e., K is bounded. We assume then that, given a starting point x0 ∈ R^n, sharpness is satisfied on the sublevel set {x | f(x) ≤ f(x0)}. Remark that if this property is valid on an open set K ⊇ X*, it will also be valid on any compact set K′ ⊇ K with the same exponent r but a potentially lower constant μ. The scheduled restart schemes we present here rely on a global sharpness hypothesis on the sublevel set defined by the initial point and are not adaptive to the constant μ on smaller sublevel sets. On the other hand, the restarts on a criterion that we present in Section 4, assuming that f* is known, adapt to the value of μ.
We now describe a restart scheme exploiting this extra regularity assumption to improve the computational complexity of solving problem (P) using accelerated methods.

2.1 Scheduled restarts

Here, we schedule the number of iterations t_k made by Nesterov's algorithm between restarts, with t_k the number of (inner) iterations at the k-th algorithm run (outer iteration). Our scheme is described in Algorithm 1 below.

Algorithm 1 Scheduled restarts for smooth convex minimization
Inputs: x0 ∈ R^n and a sequence t_k for k = 1, …, R.
for k = 1, …, R do
  x_k := A(x_{k−1}, t_k)
end for
Output: x̂ := x_R

The analysis of this scheme and of the following ones relies on two steps. We first choose schedules that ensure linear convergence of the iterates x_k at a given rate. We then adjust this linear rate to minimize the complexity in terms of the total number of iterations. We begin with a technical lemma which assumes linear convergence holds, and connects the growth of t_k, the precision reached and the total number of inner iterations N.

Lemma 2.1. Let x_k be a sequence whose k-th iterate is generated from the previous one by an algorithm that runs t_k iterations, and write N = Σ_{k=1}^R t_k the total number of iterations to output a point x_R. Suppose that setting t_k = Ce^{αk}, k = 1, …, R, for some C > 0 and α ≥ 0 ensures that the outer iterations satisfy

f(x_k) − f* ≤ νe^{−γk}, (4)

for all k ≥ 0 with ν ≥ 0 and γ ≥ 0. Then the precision at the output is given by

f(x_R) − f* ≤ ν exp(−γN/C), when α = 0,

and

f(x_R) − f* ≤ ν / (αe^{−α}C^{−1}N + 1)^{γ/α}, when α > 0.

Proof. When α = 0, N = RC, and inserting this in (4) at the last point x_R yields the desired result. On the other hand, when α > 0, we have N = Σ_{k=1}^R Ce^{αk} = Ce^α(e^{αR} − 1)/(e^α − 1), which gives R = log((e^α − 1)N/(e^α C) + 1)/α. Inserting this in (4) at the last point, we get

f(x_R) − f* ≤ ν exp(−(γ/α) log((e^α − 1)N/(e^α C) + 1)) ≤ ν / (αe^{−α}C^{−1}N + 1)^{γ/α},

where we used e^x − 1 ≥ x. This yields the second part of the result.

The last approximation in the case α > 0 simplifies the analysis that follows without significantly affecting the bounds. We also show in the Supplementary Material that using t̂_k = ⌈t_k⌉ does not significantly affect the bounds above. Remark that convergence bounds are generally linear or polynomial, so that we can extract a subsequence that converges linearly. Therefore our approach does not restrict the analysis of our scheme; it simplifies it and can be used for other algorithms, such as gradient descent, as detailed in the Supplementary Material.

We now analyze restart schedules t_k that ensure linear convergence. Our choice of t_k will heavily depend on the ratio between r and s (with s = 2 for smooth functions here), incorporated in the parameter τ = 1 − s/r defined in (2). Below, we show that if τ = 0, a constant schedule is sufficient to ensure linear convergence. When τ > 0, we need a geometrically increasing number of iterations for each cycle.

Proposition 2.2. Let f be a smooth convex function satisfying (Smooth) with parameters (2, L) and (Sharp) with parameters (r, μ) on a set K. Assume that we are given x0 ∈ R^n such that {x | f(x) ≤ f(x0)} ⊆ K. Run Algorithm 1 from x0 with iteration schedule t_k = C_{κ,τ} e^{τk}, for k = 1, …, R, where

C_{κ,τ} ≜ e^{1−τ}(cκ)^{1/2}(f(x0) − f*)^{−τ/2}, (5)

with κ and τ defined in (2) and c = 4e^{2/e} here. The precision reached at the last point x̂ is given by

f(x̂) − f* ≤ exp(−2e^{−1}(cκ)^{−1/2}N)(f(x0) − f*) = O(exp(−κ^{−1/2}N)), when τ = 0, (6)

while

f(x̂) − f* ≤ (f(x0) − f*) / (τe^{−1}(f(x0) − f*)^{τ/2}(cκ)^{−1/2}N + 1)^{2/τ} = O(N^{−2/τ}), when τ > 0, (7)

where N = Σ_{k=1}^R t_k is the total number of iterations.
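A minimal sketch of Algorithm 1 and of schedule (5) in Python; here accel stands for an accelerated-gradient oracle A(x, t), assumed supplied by the user, and the unobservable quantities kappa, tau and f(x0) − f* are taken as inputs for illustration only:

```python
import numpy as np

def scheduled_restart(accel, x0, schedule):
    """Algorithm 1: restart the accelerated method accel(x, t) on a fixed
    iteration schedule and return the last restart point."""
    x = x0
    for t_k in schedule:
        x = accel(x, t_k)
    return x

def sharp_schedule(kappa, tau, gap0, R):
    """Schedule t_k = C_{kappa,tau} e^{tau k} of Proposition 2.2, with
    c = 4 e^{2/e}; gap0 plays the role of f(x0) - f*."""
    c = 4.0 * np.exp(2.0 / np.e)
    C = np.exp(1.0 - tau) * np.sqrt(c * kappa) * gap0 ** (-tau / 2.0)
    return [int(np.ceil(C * np.exp(tau * k))) for k in range(1, R + 1)]
```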
Proof. Our strategy is to choose t_k such that the objective decreases linearly, i.e.,

f(x_k) − f* ≤ e^{−γk}(f(x0) − f*), (8)

for some γ ≥ 0 depending on the choice of t_k. This directly holds for k = 0 and any γ ≥ 0. Combining (Sharp) with the complexity bound in (3), we get

f(x_k) − f* ≤ (cκ/t_k²)(f(x_{k−1}) − f*)^{2/r},

where c = 4e^{2/e}, using that r^{2/r} ≤ e^{2/e}. Assuming recursively that (8) is satisfied at iteration k − 1 for a given γ, we have

f(x_k) − f* ≤ (cκ/t_k²) e^{−(2γ/r)(k−1)} (f(x0) − f*)^{2/r},

and to ensure (8) at iteration k, we impose

(cκ/t_k²) e^{−(2γ/r)(k−1)} (f(x0) − f*)^{2/r} ≤ e^{−γk}(f(x0) − f*).

Rearranging terms in this last inequality, using τ defined in (2), we get

t_k ≥ e^{γ(1−τ)/2}(cκ)^{1/2}(f(x0) − f*)^{−τ/2} e^{τγk/2}. (9)

For a given γ ≥ 0, we can set t_k = Ce^{αk} where

C = e^{γ(1−τ)/2}(cκ)^{1/2}(f(x0) − f*)^{−τ/2} and α = τγ/2, (10)

and Lemma 2.1 then yields

f(x̂) − f* ≤ exp(−γe^{−γ/2}(cκ)^{−1/2}N)(f(x0) − f*), when τ = 0,

while

f(x̂) − f* ≤ (f(x0) − f*) / ((τγ/2)e^{−γ/2}(cκ)^{−1/2}(f(x0) − f*)^{τ/2}N + 1)^{2/τ}, when τ > 0.

These bounds are minimal for γ = 2, which yields the desired result.

When τ = 0, bound (6) matches the classical complexity bound for smooth strongly convex functions [Nesterov, 2013b]. When τ > 0, on the other hand, bound (7) highlights a much faster convergence rate than that of accelerated gradient methods. The sharper the function (i.e., the smaller r), the faster the convergence. This matches the lower bounds for optimizing smooth and sharp functions [Arjevani and Shamir, 2016; Nemirovskii and Nesterov, 1985, Page 6] up to constant factors. Also, setting t_k = C_{κ,τ} e^{τk} yields continuous bounds on precision: when τ → 0, bound (7) converges to bound (6), which also shows that for τ near zero, constant restart schemes are almost optimal.

2.2 Adaptive scheduled restart

The previous restart schedules depend on the sharpness parameters (r, μ) in (Sharp). In general, of course, these values are neither observed nor known a priori. Making our restart scheme adaptive is thus crucial to its practical performance. Fortunately, we show below that a simple logarithmic grid search strategy on these parameters is enough to guarantee nearly optimal performance. We run several schemes with a fixed number of inner iterations N to perform a log-scale grid search on τ and C. We define these schemes as follows:

S_{i,0}: Algorithm 1 with t_k = C_i,
S_{i,j}: Algorithm 1 with t_k = C_i e^{τ_j k}, (11)

where C_i = 2^i and τ_j = 2^{−j}. We stop these schemes when the total number of inner algorithm iterations has exceeded N, i.e., at the smallest R such that Σ_{k=1}^R t_k ≥ N. The size of the grid search in C_i is naturally bounded, as we cannot restart the algorithm after more than N total inner iterations, so i ∈ [1, …, ⌊log₂ N⌋]. We will also show that when τ is smaller than 1/N, a constant schedule performs as well as the optimal geometrically increasing schedule, which crucially means that we can also choose j ∈ [1, …, ⌈log₂ N⌉], limiting the cost of the grid search. The following result details the convergence of this method; its notations are the same as in Proposition 2.2 and its technical proof can be found in the Supplementary Material.
Proposition 2.3. Let f be a smooth convex function satisfying (Smooth) with parameters (2, L) and (Sharp) with parameters (r, μ) on a set K. Assume that we are given x0 ∈ R^n such that {x | f(x) ≤ f(x0)} ⊆ K, and denote by N a given number of iterations. Run the schemes S_{i,j} defined in (11) to solve (P) for i ∈ [1, …, ⌊log₂ N⌋] and j ∈ [0, …, ⌈log₂ N⌉], stopping each time after N total inner algorithm iterations, i.e., for R such that Σ_{k=1}^R t_k ≥ N. Assume N is large enough, so N ≥ 2C_{κ,τ}, and that C_{κ,τ}^τ > 1 if 1/N > τ > 0.

If τ = 0, there exists i ∈ [1, …, ⌊log₂ N⌋] such that scheme S_{i,0} achieves a precision given by

f(x̂) − f* ≤ exp(−e^{−1}(cκ)^{−1/2}N)(f(x0) − f*).

If τ > 0, there exist i ∈ [1, …, ⌊log₂ N⌋] and j ∈ [1, …, ⌈log₂ N⌉] such that scheme S_{i,j} achieves a precision given by

f(x̂) − f* ≤ (f(x0) − f*) / (τe^{−1}(cκ)^{−1/2}(f(x0) − f*)^{τ/2}(N − 1)/4 + 1)^{2/τ}.

Overall then, running the logarithmic grid search has a complexity (log₂ N)² times higher than running N iterations using the optimal (oracle) scheme. As shown in the Supplementary Material, scheduled restart schemes are theoretically efficient only if the algorithm itself makes a sufficient number of iterations to decrease the objective value, so we need N large enough to ensure the efficiency of the adaptive method. If τ = 0, we naturally have C_{κ,0} ≥ 1; therefore, if 1/N > τ > 0 and N is large, assuming C_{κ,τ} ≈ C_{κ,0}, we get C_{κ,τ}^τ ≈ 1. This adaptive bound is similar to the one of Nesterov [2013b] for optimizing smooth strongly convex functions, in the sense that we lose approximately a log factor of the condition number of the function. However, our assumptions are weaker and we are able to tackle all regimes of the sharpness property, i.e., any exponent r ∈ [2, +∞], not just the strongly convex case.

In the Supplementary Material we also analyze the simple gradient descent method under the sharpness assumption (Sharp). It shows that simple gradient descent achieves a complexity of order O(ε^{−τ}) for a given accuracy ε, so restarting accelerated gradient methods reduces the complexity to O(ε^{−τ/2}) compared to simple gradient descent. This result is analogous to the acceleration of gradient descent. We now extend this restart scheme to solve non-smooth or Hölder smooth convex optimization problems under the sharpness assumption.
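The grid search of Section 2.2 is easy to sketch; in the Python fragment below, accel is again an assumed accelerated-gradient oracle and the objective f is used only to pick the best candidate among the schemes S_{i,j} of (11):

```python
import numpy as np

def adaptive_restart(accel, f, x0, N):
    """Run every scheme S_{i,j} of (11) for about N total inner iterations
    and return the best final point found."""
    best = x0
    for i in range(1, int(np.floor(np.log2(N))) + 1):
        for j in range(0, int(np.ceil(np.log2(N))) + 1):
            C, tau = 2.0 ** i, 0.0 if j == 0 else 2.0 ** (-j)
            x, used, k = x0, 0, 1
            while used < N:                       # stop once sum of t_k >= N
                t_k = int(np.ceil(C * np.exp(tau * k)))
                x = accel(x, t_k)
                used, k = used + t_k, k + 1
            if f(x) < f(best):
                best = x
    return best
```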
3 Universal scheduled restarts for convex problems

In this section, we use the framework introduced by Nesterov [2015] to describe the smoothness of a convex function f; namely, we assume that there exist s ∈ [1, 2] and L > 0 such that, on a set J ⊆ R^n,

‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖^{s−1}, for every x, y ∈ J.

Without further assumptions on f, the optimal rate of convergence for this class of functions is bounded as O(1/N^ρ), where N is the total number of iterations and

ρ = 3s/2 − 1, (12)

which gives ρ = 2 for smooth functions and ρ = 1/2 for non-smooth functions. The universal fast gradient method [Nesterov, 2015] achieves this rate by requiring only a target accuracy ε and a starting point x0. It outputs, after t iterations, a point x ≜ U(x0, ε, t) such that

f(x) − f* ≤ ε/2 + cL^{2/s} d(x0, X*)² / (ε^{2/s−1} t^{2ρ/s}), (13)

where c is a constant (c = 2^{(4s−2)/s}). More details about the universal fast gradient method are given in the Supplementary Material.

We will again assume that f is sharp with parameters (r, μ) on a set K ⊇ X*, i.e.,

(μ/r) d(x, X*)^r ≤ f(x) − f*, for every x ∈ K. (Sharp)

As mentioned in Section 1.2, if r > s, smoothness or sharpness are local properties, i.e., either J or K, or both, are bounded, and our analysis is therefore local. In the following we assume for simplicity, given an initial point x0, that smoothness and sharpness are satisfied simultaneously on the sublevel set {x | f(x) ≤ f(x0)}. The key difference with the smooth case described in the previous section is that here we schedule both the target accuracy ε_k used by the algorithm and the number of iterations t_k made at the k-th run of the algorithm. Our scheme is described in Algorithm 2.

Algorithm 2 Universal scheduled restarts for convex minimization
Inputs: x0 ∈ R^n, ε0 ≥ f(x0) − f*, γ ≥ 0 and a sequence t_k for k = 1, …, R.
for k = 1, …, R do
  ε_k := e^{−γ}ε_{k−1}, x_k := U(x_{k−1}, ε_k, t_k)
end for
Output: x̂ := x_R

Our strategy is to choose a sequence t_k that ensures f(x_k) − f* ≤ ε_k for the geometrically decreasing sequence ε_k. The overall complexity of our method will then depend on the growth of t_k, as described in Lemma 2.1. The proof is similar to the smooth case and can be found in the Supplementary Material.

Proposition 3.1. Let f be a convex function satisfying (Smooth) with parameters (s, L) on a set J and (Sharp) with parameters (r, μ) on a set K. Given x0 ∈ R^n, assume that {x | f(x) ≤ f(x0)} ⊆ J ∩ K. Run Algorithm 2 from x0 for a given ε0 ≥ f(x0) − f* with

γ = ρ, t_k = C_{κ,τ,ε} e^{τk}, where C_{κ,τ,ε} ≜ e^{1−τ}(cκ)^{s/(2ρ)} ε0^{−τ/ρ},

where ρ is defined in (12), κ and τ are defined in (2) and c = 8e^{2/e} here. The precision reached at the last point x̂ is given by

f(x̂) − f* ≤ exp(−ρe^{−1}(cκ)^{−s/(2ρ)}N) ε0 = O(exp(−κ^{−s/(2ρ)}N)), when τ = 0,

while

f(x̂) − f* ≤ ε0 / (τe^{−1}(cκ)^{−s/(2ρ)} ε0^{τ/ρ} N + 1)^{ρ/τ} = O(κ^{s/(2τ)} N^{−ρ/τ}), when τ > 0,

where N = Σ_{k=1}^R t_k is the total number of iterations.

This bound matches the lower bounds for optimizing smooth and sharp functions [Nemirovskii and Nesterov, 1985, Page 6] up to constant factors. Notice that, compared to Nemirovskii and Nesterov [1985], we can tackle non-smooth convex optimization by using the universal fast gradient algorithm of Nesterov [2015]. The rate of convergence in Proposition 3.1 is controlled by the ratio between τ and ρ. If these are unknown, a log-scale grid search will not be able to reach the optimal rate, even if ρ is known, since we would miss the optimal rate by a constant factor. If both are known, as in the case of non-smooth strongly convex functions for example, a grid search on C recovers nearly the optimal bound. We will now see that if f* is known, restart produces adaptive optimal rates.
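A minimal sketch of Algorithm 2, where U(x, eps, t) stands for the universal fast gradient method, assumed given:

```python
import numpy as np

def universal_scheduled_restart(U, x0, eps0, gamma, schedule):
    """Algorithm 2: jointly schedule the target accuracy eps_k and the
    inner iteration counts t_k of the universal method U."""
    x, eps = x0, eps0
    for t_k in schedule:
        eps = np.exp(-gamma) * eps
        x = U(x, eps, t_k)
    return x
```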
4 Restart with termination criterion

Here, we assume that we know the optimum f* of (P), or have an exact termination criterion. This is the case, for example, in zero-sum matrix game problems or in non-degenerate least-squares problems without regularization. We assume again that f satisfies (Smooth) with parameters (s, L) on a set J and (Sharp) with parameters (r, μ) on a set K, and that, given an initial point x0, smoothness and sharpness are satisfied simultaneously on the sublevel set {x | f(x) ≤ f(x0)}. We use again the universal fast gradient method U. Here, however, we can stop the algorithm when it reaches the target accuracy, as we know the optimum f*; i.e., we stop after t_ε inner iterations such that x = U(x0, ε, t_ε) satisfies f(x) − f* ≤ ε, and we write x ≜ C(x0, ε) the output of this method. We then simply restart this method and decrease the target accuracy by a constant factor after each restart. Our scheme is described in Algorithm 3.

Algorithm 3 Restart on criterion
Inputs: x0 ∈ R^n, f*, γ ≥ 0, ε0 = f(x0) − f*
for k = 1, …, R do
  ε_k := e^{−γ}ε_{k−1}, x_k := C(x_{k−1}, ε_k)
end for
Output: x̂ := x_R

The following result describes the convergence of this method. It relies on the idea that it cannot do more iterations than the best scheduled restart to achieve the target accuracy at each iteration. Its proof can be found in the Supplementary Material.

Proposition 4.1. Let f be a convex function satisfying (Smooth) with parameters (s, L) on a set J and (Sharp) with parameters (r, μ) on a set K. Given x0 ∈ R^n, assume that {x | f(x) ≤ f(x0)} ⊆ J ∩ K. Run Algorithm 3 from x0 with parameter γ = ρ. The precision reached at the last point x̂ is given by

f(x̂) − f* ≤ exp(−ρe^{−1}(cκ)^{−s/(2ρ)}N)(f(x0) − f*) = O(exp(−κ^{−s/(2ρ)}N)), when τ = 0,

while

f(x̂) − f* ≤ (f(x0) − f*) / (τe^{−1}(cκ)^{−s/(2ρ)}(f(x0) − f*)^{τ/ρ} N + 1)^{ρ/τ} = O(κ^{s/(2τ)} N^{−ρ/τ}), when τ > 0,

where N is the total number of iterations, ρ is defined in (12), κ and τ are defined in (2) and c = 8e^{2/e} here.

Therefore, if f* is known, this method is adaptive, contrary to the general case in Proposition 3.1. It can even adapt to the local values of L or μ, as we use a criterion instead of a preset schedule. Here, stopping when f(x_k) − f* ≤ ε_k implicitly yields optimal choices of C and τ. A closer look at the proof shows that the dependency on γ of this restart scheme is a factor h(γ) = γe^{−γ/ρ} of the number of iterations. Taking γ = 1 then leads to a suboptimal constant factor of at most h(ρ)/h(1) ≤ e/2 ≈ 1.3 for ρ ∈ [1/2, 2], so running this scheme with γ = 1 makes it parameter-free while retaining nearly optimal bounds.
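A minimal sketch of Algorithm 3; C_oracle(x, eps) stands for the universal method run until the (observable, since f* is known) criterion f(x) − f* ≤ eps holds, and gamma = 1 is the parameter-free choice discussed above:

```python
import numpy as np

def restart_on_criterion(C_oracle, x0, gap0, gamma=1.0, R=20):
    """Algorithm 3: geometrically decrease the target gap and restart;
    gap0 = f(x0) - f*, computable here because f* is assumed known."""
    x, eps = x0, gap0
    for _ in range(R):
        eps = np.exp(-gamma) * eps
        x = C_oracle(x, eps)
    return x
```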
5 Numerical Results

We illustrate our results by testing our adaptive restart methods, denoted Adap and Crit, introduced respectively in Sections 2.2 and 4, on several problems, and compare them against simple gradient descent (Grad), accelerated gradient methods (Acc), and the restart heuristic enforcing monotonicity (Mono, in [O'Donoghue and Candes, 2015]). For Adap we plot the convergence of the best method found by grid search, to compare with the restart heuristic. This implicitly assumes that the grid search is run in parallel with enough servers. For Crit we use the optimal value f* found by another solver. This gives an overview of its performance, in order to potentially approximate f* along the iterations in future work, as done with Polyak steps [Polyak, 1987]. All restart schemes were run using the accelerated gradient method with backtracking line search detailed in the Supplementary Material, with large dots representing restart iterations. The results focus on unconstrained problems, but our approach can be directly extended to composite problems by using the proximal variants of the gradient, accelerated gradient and universal fast gradient methods [Nesterov, 2015], as detailed in the Supplementary Material. This includes constrained optimization as a particular case, by adding the indicator function of the constraint set to the objective (as in the SVM example below).

In Figure 1, we solve classification problems with various losses on the UCI Sonar dataset [Asuncion and Newman, 2007]. For the least-squares loss on the Sonar dataset, we observe much faster convergence of the restart schemes compared to the accelerated method. These results were already observed by O'Donoghue and Candes [2015]. For the logistic loss, we observe that restart does not provide much improvement; the backtracking line search on the Lipschitz constant may be sufficient to capture the geometry of the problem. For the hinge loss, we regularized by a squared norm and optimized the dual, which means solving a quadratic problem with box constraints. We observe here that the scheduled restart scheme converges much faster, while restart heuristics may be activated too late. We observe similar results for the LASSO problem. In general, Crit ensures the theoretical accelerated rate, but Adap exhibits more consistent behavior. This highlights the benefits of a sharpness assumption for these last two problems. Precisely quantifying sharpness from data/problem structure is a key open problem.

Figure 1: From left to right: least-squares loss, logistic loss, dual SVM problem and LASSO. We use adaptive restarts (Adap), gradient descent (Grad), accelerated gradient (Acc) and the restart heuristic enforcing monotonicity (Mono). Large dots represent the restart iterations. Regularization parameters for dual SVM and LASSO were set to one.

Acknowledgments

The authors would like to acknowledge support from the chaire Économie des nouvelles données with the data science joint research initiative with the fonds AXA pour la recherche, a gift from Société Générale Cross Asset Quantitative Research, and an AMX fellowship. The authors are affiliated with PSL Research University, Paris, France.

References

Arjevani, Y. and Shamir, O. [2016], On the iteration complexity of oblivious first-order optimization algorithms, in 'International Conference on Machine Learning', pp. 908-916.
Asuncion, A. and Newman, D. [2007], 'UCI machine learning repository'.
Attouch, H., Bolte, J., Redont, P. and Soubeyran, A. [2010], 'Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality', Mathematics of Operations Research 35(2), 438-457.
Auslender, A. and Crouzeix, J.-P. [1988], 'Global regularity theorems', Mathematics of Operations Research 13(2), 243-253.
Bierstone, E. and Milman, P. D. [1988], 'Semianalytic and subanalytic sets', Publications Mathématiques de l'IHÉS 67, 5-42.
Bolte, J., Daniilidis, A. and Lewis, A. [2007], 'The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems', SIAM Journal on Optimization 17(4), 1205-1223.
Bolte, J., Nguyen, T. P., Peypouquet, J. and Suter, B. W. [2015], 'From error bounds to the complexity of first-order descent methods for convex functions', Mathematical Programming pp. 1-37.
Bolte, J., Sabach, S. and Teboulle, M. [2014], 'Proximal alternating linearized minimization for nonconvex and nonsmooth problems', Mathematical Programming 146(1-2), 459-494.
Burke, J. and Deng, S. [2002], 'Weak sharp minima revisited, part I: basic theory', Control and Cybernetics 31, 439-469.
Burke, J. and Ferris, M. C. [1993], 'Weak sharp minima in mathematical programming', SIAM Journal on Control and Optimization 31(5), 1340-1359.
Fercoq, O. and Qu, Z. [2016], 'Restarting accelerated gradient methods with a rough strong convexity estimate', arXiv preprint arXiv:1609.07358.
Fercoq, O. and Qu, Z. [2017], 'Adaptive restart of accelerated gradient methods under local quadratic growth condition', arXiv preprint arXiv:1709.02300.
Frankel, P., Garrigos, G. and Peypouquet, J.
[2015], 'Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates', Journal of Optimization Theory and Applications 165(3), 874-900.
Freund, R. M. and Lu, H. [2015], 'New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure', arXiv preprint arXiv:1511.02974.
Gilpin, A., Pena, J. and Sandholm, T. [2012], 'First-order algorithm with O(log 1/ε) convergence for ε-equilibrium in two-person zero-sum games', Mathematical Programming 133(1-2), 279-298.
Giselsson, P. and Boyd, S. [2014], Monotonicity and restart in fast gradient methods, in '53rd IEEE Conference on Decision and Control', IEEE, pp. 5058-5063.
Hoffman, A. J. [1952], 'On approximate solutions of systems of linear inequalities', Journal of Research of the National Bureau of Standards 49(4).
Juditski, A. and Nesterov, Y. [2014], 'Primal-dual subgradient methods for minimizing uniformly convex functions', arXiv preprint arXiv:1401.1792.
Karimi, H., Nutini, J. and Schmidt, M. [2016], Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition, in 'Joint European Conference on Machine Learning and Knowledge Discovery in Databases', Springer, pp. 795-811.
Lin, Q. and Xiao, L. [2014], An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization, in 'ICML', pp. 73-81.
Łojasiewicz, S. [1963], 'Une propriété topologique des sous-ensembles analytiques réels', Les équations aux dérivées partielles pp. 87-89.
Łojasiewicz, S. [1993], 'Sur la géométrie semi- et sous-analytique', Annales de l'institut Fourier 43(5), 1575-1595.
Mangasarian, O. L. [1985], 'A condition number for differentiable convex inequalities', Mathematics of Operations Research 10(2), 175-179.
Nemirovskii, A. and Nesterov, Y. [1985], 'Optimal methods of smooth convex minimization', USSR Computational Mathematics and Mathematical Physics 25(2), 21-30.
Nesterov, Y. [1983], 'A method of solving a convex programming problem with convergence rate O(1/k²)', Soviet Mathematics Doklady 27(2), 372-376.
Nesterov, Y. [2013a], 'Gradient methods for minimizing composite functions', Mathematical Programming 140(1), 125-161.
Nesterov, Y. [2013b], Introductory Lectures on Convex Optimization: A Basic Course, Vol. 87, Springer Science & Business Media.
Nesterov, Y. [2015], 'Universal gradient methods for convex optimization problems', Mathematical Programming 152(1-2), 381-404.
O'Donoghue, B. and Candes, E. [2015], 'Adaptive restart for accelerated gradient schemes', Foundations of Computational Mathematics 15(3), 715-732.
Polyak, B. [1979], Sharp minima, Institute of Control Sciences lecture notes, Moscow, USSR; presented at the IIASA workshop on generalized Lagrangians and their applications, IIASA, Laxenburg, Austria.
Polyak, B. [1987], Introduction to Optimization, Optimization Software.
Renegar, J. [2014], 'Efficient first-order methods for linear programming and semidefinite programming', arXiv preprint arXiv:1409.5832.
Robinson, S. M. [1975], 'An application of error bounds for convex programming in a linear space', SIAM Journal on Control 13(2), 271-273.
Roulet, V., Boumal, N. and d'Aspremont, A. [2015], 'Renegar's condition number, sharpness and compressed sensing performance', arXiv preprint arXiv:1506.03295.
Su, W., Boyd, S. and Candes, E.
[2014], A differential equation for modeling Nesterov's accelerated gradient method: theory and insights, in 'Advances in Neural Information Processing Systems', pp. 2510-2518.
Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition

Naoya Takeishi†, Yoshinobu Kawahara‡,§, Takehisa Yairi†
† Department of Aeronautics and Astronautics, The University of Tokyo
‡ The Institute of Scientific and Industrial Research, Osaka University
§ RIKEN Center for Advanced Intelligence Project
{takeishi,yairi}@ailab.t.u-tokyo.ac.jp, [email protected]

Abstract

Spectral decomposition of the Koopman operator is attracting attention as a tool for the analysis of nonlinear dynamical systems. Dynamic mode decomposition is a popular numerical algorithm for Koopman spectral analysis; however, we often need to prepare nonlinear observables manually according to the underlying dynamics, which is not always possible since we may not have any a priori knowledge about them. In this paper, we propose a fully data-driven method for Koopman spectral analysis based on the principle of learning Koopman invariant subspaces from observed data. To this end, we propose minimization of the residual sum of squares of linear least-squares regression to estimate a set of functions that transforms data into a form in which the linear regression fits well. We introduce an implementation with neural networks and evaluate performance empirically using nonlinear dynamical systems and applications.

1 Introduction

A variety of time-series data are generated from nonlinear dynamical systems, in which a state evolves according to a nonlinear map or differential equation. In summarization, regression, or classification of such time-series data, precise analysis of the underlying dynamical systems provides valuable information to generate appropriate features and to select an appropriate computation method. In applied mathematics and physics, the analysis of nonlinear dynamical systems has received significant interest because a wide range of complex phenomena, such as fluid flows and neural signals, can be described in terms of nonlinear dynamics. A classical but popular view of dynamical systems is based on state space models, wherein the behavior of the trajectories of a vector in state space is discussed (see, e.g., [1]). Time-series modeling based on a state space is also common in machine learning. However, when the dynamics are highly nonlinear, analysis based on state space models becomes challenging compared to the case of linear dynamics.

Recently, there is growing interest in operator-theoretic approaches for the analysis of dynamical systems. Operator-theoretic approaches are based on the Perron-Frobenius operator [2] or its adjoint, i.e., the Koopman operator (composition operator) [3], [4]. The Koopman operator defines the evolution of observation functions (observables) in a function space, rather than of state vectors in a state space. Based on the Koopman operator, the analysis of nonlinear dynamical systems can be lifted to a linear (but infinite-dimensional) regime. Consequently, we can consider modal decomposition, with which the global characteristics of nonlinear dynamics can be inspected [4], [5]. Such modal decomposition has been intensively used for scientific purposes to understand complex phenomena (e.g., [6]-[9]) and also for engineering tasks, such as signal processing and machine learning. In fact, modal decomposition based on the Koopman operator has been utilized in various engineering tasks, including robotic control [10], image processing [11], and nonlinear system identification [12].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
One of the most popular algorithms for modal decomposition based on the Koopman operator is dynamic mode decomposition (DMD) [6], [7], [13]. An important premise of DMD is that the target dataset is generated from a set of observables that spans a function space invariant to the Koopman operator (referred to as a Koopman invariant subspace). However, when only the original state vectors are available as the dataset, we must prepare appropriate observables manually according to the underlying nonlinear dynamics. Several methods have been proposed to utilize such observables, including the use of basis functions [14] and reproducing kernels [15]. Note that these methods work well only if appropriate basis functions or kernels are prepared; however, it is not always possible to prepare such functions if we have no a priori knowledge about the underlying dynamics.

In this paper, we propose a fully data-driven method for modal decomposition via the Koopman operator, based on the principle of learning Koopman invariant subspaces (LKIS) from scratch using observed data. To this end, we estimate a set of parametric functions by minimizing the residual sum of squares (RSS) of linear least-squares regression, so that the estimated set of functions transforms the original data into a form in which the linear regression fits well. In addition to the principle of LKIS, an implementation using neural networks is described. Moreover, we report the empirical performance of DMD based on the LKIS framework on several nonlinear dynamical systems and applications, which demonstrates the feasibility of LKIS-based DMD as a fully data-driven method for modal decomposition via the Koopman operator.

2 Background

2.1 Koopman spectral analysis

We focus on a (possibly nonlinear) discrete-time autonomous dynamical system

x_{t+1} = f(x_t), x ∈ M, t ∈ T = {0} ∪ N, (1)

where M denotes the state space and (M, Σ, μ) represents the associated probability space. In dynamical system (1), the Koopman operator K [4], [5] is defined as an infinite-dimensional linear operator that acts on observables g: M → R (or C), i.e.,

Kg(x) = g(f(x)), (2)

with which the analysis of nonlinear dynamics (1) can be lifted to a linear (but infinite-dimensional) regime. Since K is linear, let us consider a set of eigenfunctions {ϕ1, ϕ2, …} of K with eigenvalues {λ1, λ2, …}, i.e., Kϕi = λiϕi for i ∈ N, where ϕ: M → C and λ ∈ C. Further, suppose that g can be expressed as a linear combination of this infinite number of eigenfunctions, i.e., g(x) = Σ_{i=1}^∞ ϕi(x)ci with a set of coefficients {c1, c2, …}. By repeatedly applying K to both sides of this equation, we obtain the following modal decomposition:

g(x_t) = Σ_{i=1}^∞ λi^t ϕi(x0) ci. (3)

Here, the value of g is decomposed into a sum of Koopman modes wi = ϕi(x0)ci, each of which evolves over time with its frequency and decay rate respectively given by ∠λi and |λi|, since λi is a complex value. The Koopman modes and their eigenvalues can be investigated to understand the dominant characteristics of complex phenomena that follow nonlinear dynamics. The above discussion can also be applied straightforwardly to continuous-time dynamical systems [4], [5]. Modal decomposition based on K, often referred to as Koopman spectral analysis, has been receiving attention in nonlinear physics and applied mathematics.
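To make decomposition (3) concrete, here is a toy check (ours, not from the paper) for a linear map f(x) = Ax with g(x) = x, where the eigenfunctions are ϕ_i(x) = z_i^H x for the left eigenvectors z_i of A and the Koopman modes are the right eigenvectors:

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.0, 0.5]])
lam, W = np.linalg.eig(A)          # eigenvalues and right eigenvectors (modes)
Z = np.linalg.inv(W)               # rows are z_i^H, normalized so z_i^H w_j = delta_ij
x0 = np.array([1.0, 1.0])
t = 7
x_t = np.linalg.matrix_power(A, t) @ x0
x_modal = sum(lam[i] ** t * (Z[i] @ x0) * W[:, i] for i in range(2))
assert np.allclose(x_t, x_modal)   # x_t = sum_i lam_i^t phi_i(x0) w_i, as in (3)
```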
In addition, it is a useful tool for engineering tasks, including machine learning and pattern recognition: the spectra (eigenvalues) of K can be used as features of dynamical systems, the eigenfunctions are a useful representation of time-series for various tasks such as regression and visualization, and K itself can be used for prediction and optimal control. Several methods have been proposed to compute modal decomposition based on K, such as generalized Laplace analysis [5], [16], the Ulam-Galerkin method [17], and DMD [6], [7], [13]. DMD, which is reviewed in more detail in the next subsection, has received significant attention and been utilized in various data analysis scenarios (e.g., [6]-[9]). Note that the Koopman operator and modal decomposition based on it can be extended to random dynamical systems actuated by process noise [4], [14], [18]. In addition, Proctor et al. [19], [20] discussed Koopman analysis of systems with control signals. In this paper, we primarily target autonomous deterministic dynamics (e.g., Eq. (1)) for the sake of presentation clarity.

2.2 Dynamic mode decomposition and Koopman invariant subspace

Let us review DMD, an algorithm for Koopman spectral analysis (further details are in the supplementary). Consider a set of observables {g1, …, gn} and let g = [g1 ⋯ gn]^T be a vector-valued observable. In addition, define two matrices Y0, Y1 ∈ R^{n×m} generated by x0, f and g, i.e.,

Y0 = [g(x0) ⋯ g(x_{m−1})] and Y1 = [g(f(x0)) ⋯ g(f(x_{m−1}))], (4)

where m + 1 is the number of snapshots in the dataset. The core functionality of DMD algorithms is computing the eigendecomposition of the matrix A = Y1Y0† [13], [21], where Y0† is the Moore-Penrose pseudoinverse of Y0. The eigenvectors of A are referred to as dynamic modes, and they coincide with the Koopman modes if the corresponding eigenfunctions of K are in span{g1, …, gn} [21]. Alternatively (but nearly equivalently), the condition under which DMD works as a numerical realization of Koopman spectral analysis can be described as follows. Rather than calculating the infinite-dimensional K directly, we can consider the restriction of K to a finite-dimensional subspace. Assume the observables are elements of L²(M, μ). A Koopman invariant subspace is defined as a G ⊆ L²(M, μ) such that ∀g ∈ G, Kg ∈ G. If G is spanned by a finite number of functions, then the restriction of K to G, which we denote K, becomes a finite-dimensional linear operator. In the sequel, we assume the existence of such a G. If {g1, …, gn} spans G, then DMD's matrix A = Y1Y0† coincides with K ∈ R^{n×n} asymptotically, wherein K is the realization of K with regard to the frame (or basis) {g1, …, gn}. For modal decomposition (3), the (vector-valued) Koopman modes are given by w, and the values of the eigenfunctions are obtained by ϕ = z^H g, where w and z are the right- and left-eigenvectors of K normalized such that w_i^H z_j = δ_{i,j} [14], [21], and z^H denotes the conjugate transpose of z.
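The core DMD computation reviewed above fits in a few lines; a minimal sketch:

```python
import numpy as np

def dmd(Y0, Y1):
    """Eigendecomposition of A = Y1 Y0^+. Returns the eigenvalues, the
    dynamic modes w (right eigenvectors), and left eigenvectors z scaled
    so that z_i^H w_j = delta_ij, giving eigenfunction values phi = z^H g."""
    A = Y1 @ np.linalg.pinv(Y0)
    lam, W = np.linalg.eig(A)
    Z = np.linalg.inv(W).conj().T   # columns z_i; rows of W^{-1} are z_i^H
    return lam, W, Z
```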
Here, an important problem arises in the practice of DMD: we often have no access to a g that spans a Koopman invariant subspace G. In this case, for nonlinear dynamics, we must manually prepare adequate observables. Several researchers have addressed this issue; Williams et al. [14] leveraged a dictionary of predefined basis functions to transform the original data, and Kawahara [15] defined Koopman spectral analysis in a reproducing kernel Hilbert space. Brunton et al. [22] proposed the use of observables selected in a data-driven manner [23] from a function dictionary. Note that, for these methods, we must select an appropriate function dictionary or kernel function according to the target dynamics. However, if we have no a priori knowledge about the dynamics, which is often the case, such existing methods cannot necessarily be applied successfully.

3 Learning Koopman invariant subspaces

3.1 Minimizing the residual sum of squares of linear least-squares regression

In this paper, we propose a method to learn a set of observables {g1, …, gn} that spans a Koopman invariant subspace G, given a sequence of measurements as the dataset. In the following, we summarize desirable properties of such observables, upon which the proposed method is constructed.

Theorem 1. Consider a set of square-integrable observables {g1, …, gn}, and define a vector-valued observable g = [g1 ⋯ gn]^T. In addition, define a linear operator G whose matrix form is given as G = (∫_M (g∘f) g^H dμ)(∫_M g g^H dμ)†. Then, ∀x ∈ M, g(f(x)) = Gg(x) if and only if {g1, …, gn} spans a Koopman invariant subspace.

Proof. If ∀x ∈ M, g(f(x)) = Gg(x), then for any ĝ = Σ_{i=1}^n a_i g_i ∈ span{g1, …, gn},

Kĝ = Σ_{i=1}^n a_i g_i(f(x)) = Σ_{i=1}^n a_i Σ_{j=1}^n G_{i,j} g_j(x) ∈ span{g1, …, gn},

where G_{i,j} denotes the (i, j)-element of G; thus, span{g1, …, gn} is a Koopman invariant subspace. On the other hand, if {g1, …, gn} spans a Koopman invariant subspace, there exists a linear operator K such that ∀x ∈ M, g(f(x)) = Kg(x); thus, ∫_M (g∘f) g^H dμ = ∫_M Kgg^H dμ. Therefore, an instance of the matrix form of K is obtained in the form of G. ∎

According to Theorem 1, we should obtain a g that makes g∘f − Gg zero. However, such a problem cannot be solved with finite data, because g is a function.
If we implement a smooth parametric model on g, the local minima of LRSS can be found using gradient descent. We adopt g that achieves a local minimum of LRSS as a set of observables that spans (approximately) a Koopman invariant subspace. 3.2 Linear delay embedder for state space reconstruction In the previous subsection, we have presented an important part of the principle of LKIS, i.e., minimization of the RSS of linear least-squares regression. Note that, to define RSS loss (5), we need access to a sequence of the original states, i.e., (x0 , . . . , xm ) ? Mm+1 , as a dataset. In practice, however, we cannot necessarily observe full states x due to limited memory and sensor capabilities. In this case, only transformed (and possibly degenerated) measurements are available, which we denote y = ?(x) with a measurement function ? : M ? Rr . To define RSS loss (5) given only degenerated measurements, we must reconstruct the original states x from the actual observations y. Here, we utilize delay-coordinate embedding, which has been widely used for state space reconstruction in the analysis of nonlinear dynamics. Consider a univariate time-series (. . . , yt?1 , yt , yt+1 , . . . ), which is a sequence of degenerated measurements yt = ?(xt ). According to the well-known Taken?s theorem [25], [26], a faithful representation of xt that preserves the structure of the state  T ? t = yt yt?? ? ? ? yt?(d?1)? with some lag parameter ? and space can be obtained by x embedding dimension d if d is greater than 2 dim(x). For a multivariate time-series, embedding with non-uniform lags provides better reconstruction [27]. For example, when we have a twoT dimensional time-series yt = [y1,t y2,t ] , an embedding with non-uniform lags is similar to  T ? t = y1,t y1,t??11 ? ? ? y1,t??1d1 y2,t y2,t??21 ? ? ? y2,t??2d2 with each value of ? and x d. Several methods have been proposed for selection of ? and d [27]?[29]; however, appropriate values may depend on the given application (attractor inspection, prediction, etc.). In this paper, we propose to surrogate the parameter selection of the delay-coordinate embedding by learning a linear delay embedder from data. Formally, we learn embedder ? such that  T (k) T T ? t = ?(yt ) = W? ytT yt?1 ? ? ? yt?k+1 x , W? ? Rp?kr , (6) ? r = dim(y), and k is a hyperparameter of maximum lag. We estimate weight where p = dim(x), ? instead W? as well as the parameters of g by minimizing RSS loss (5), which is now defined using x of x. Learning ? from data yields an embedding that is suitable for learning a Koopman invariant subspace. Moreover, we can impose L1 regularization on weight W? to make it highly interpretable if necessary according to the given application. 4 g ?t x . . . , yt ?t) g(x k+1 , yt k+2 , . . . , yt , yt+1 , . . . original time-series h Lrec yt h Lrec yt+1 LRSS g ? t+1 x ? t+1 ) g(x Figure 1: An instance of LKIS framework, in which g and h are implemented by MLPs. 3.3 Reconstruction of original measurements Simple minimization of LRSS may yield trivial g, such as constant values. We should impose some constraints to prevent such trivial solutions. In the proposed framework, modal decomposition is first obtained in terms of learned observables g; thus, the values of g must be back-projected to the space of the original measurements y to obtain a physically meaningful representation of the dynamic modes. Therefore, we modify the loss function by employing an additional term such that the original ? 
measurements y can be reconstructed from the values of g by a reconstructor h, i.e., y ? h(g(x)). Such term is given as follows: m X ?0, . . . , x ? m )) = ? j ))k2 , Lrec (h, g; (x kyj ? h(g(x (7) j=0 and, if h is a smooth parametric model, this term can also be reduced using gradient descent. Finally, the objective function to be minimized becomes ? k?1 , . . . , x ? m )) + ?Lrec (h, g; (x ? k?1 , . . . , x ? m )), (8) L(?, g, h; (y0 , . . . , ym )) = LRSS (g, ?; (x where ? is a parameter that controls the balance between LRSS and Lrec . 3.4 Implementation using neural networks In Sections 3.1?3.3, we introduced the main concepts for the LKIS framework, i.e., RSS loss minimization, learning the linear delay embedder, and reconstruction of the original measurements. Here, we demonstrate an implementation of the LKIS framework using neural networks. Figure 1 shows a schematic diagram of the implementation of the framework. We model g and h using multi-layer perceptrons (MLPs) with a parametric ReLU activation function [30]. Here, the sizes of the hidden layer of MLPs are defined by the arithmetic means of the sizes of the input and output layers of the MLPs. Thus, the remaining tunable hyperparameters are k (maximum delay ? and n (dimensionality of g). To obtain g with dimensionality much of ?), p (dimensionality of x), greater than that of the original measurements, we found that it was useful to set k > 1 even when full-state measurements (e.g., y = x) were available. After estimating the parameters of ?, g, and h, DMD can be performed normally by using the values of the learned g, defining the data matrices in Eq. (4), and computing the eigendecomposition of A = Y1 Y0? ; the dynamic modes are obtained by w, and the values of the eigenfunctions are obtained by ? = z H g, where w and z are the right- and left-eigenvectors of A. See Section 2.2 for details. In the numerical experiments described in Sections 5 and 6, we performed optimization using firstorder gradient descent. To stabilize optimization, batch normalization [31] was imposed on the inputs of hidden layers. Note that, since RSS loss function (5) is not decomposable with regard to data points, convergence of stochastic gradient descent (SGD) cannot be shown straightforwardly. However, we empirically found that the non-decomposable RSS loss was often reduced successfully, even with mini-batch SGD. Let us show an example; the full-batch RSS loss (denoted L?RSS ) under the updates of the mini-batch SGD are plotted in the rightmost panel of Figure 4. Here, L?RSS decreases rapidly and remains small. For SGD on non-decomposable losses, Kar et al. [32] provided guarantees for some cases; however, examining the behavior of more general non-decomposable losses under mini-batch updates remains an open problem. 4 Related work The proposed framework is motivated by the operator-theoretic view of nonlinear dynamical systems. In contrast, learning a generative (state-space) model for nonlinear dynamical systems directly has 5 12 12 8 LKIS linear Hankel basis exp. truth 0.2 Im(6) 6 4 2 0.3 noisy x1 noisy x2 10 8 LKIS linear Hankel basis exp. truth 0.2 6 0.1 Im(6) 0.3 x1 x2 10 4 2 0 0 0.1 0 0 -0.1 -2 -4 20 40 60 80 100 -0.1 -2 -0.2 -0.6 -4 -0.4 -0.2 0 0.2 0.4 0.6 0.8 1 20 40 60 80 100 -0.2 -0.6 -0.4 -0.2 0 Re(6) 0.2 0.4 0.6 0.8 1 Re(6) Figure 2: (left) Data generated from system (9) and (right) the estimated Koopman eigenvalues. 
Figure 2: (left) Data generated from system (9) and (right) the estimated Koopman eigenvalues. While linear Hankel DMD produces an inconsistent eigenvalue, LKIS-DMD successfully identifies λ, ν, λ², and λ⁰ν⁰ = 1.

Figure 3: (left) Data generated from system (9) with white Gaussian observation noise and (right) the estimated Koopman eigenvalues. LKIS-DMD successfully identifies the eigenvalues even with the observation noise.

4 Related work

The proposed framework is motivated by the operator-theoretic view of nonlinear dynamical systems. In contrast, learning a generative (state-space) model for nonlinear dynamical systems directly has been actively studied in the machine learning and optimal control communities, of which we mention a few examples. A classical but popular method for learning nonlinear dynamical systems is the expectation-maximization algorithm with Bayesian filtering/smoothing (see, e.g., [33]). Recently, approximate Bayesian inference with the variational autoencoder (VAE) technique [34] for learning generative dynamical models has been actively researched. Chung et al. [35] proposed a recurrent neural network with random latent variables, Gao et al. [36] utilized VAE-based inference for neural population models, and Johnson et al. [37] and Krishnan et al. [38] developed inference methods for structured models based on inference with a VAE. In addition, Karl et al. [39] proposed a method to obtain a more consistent estimation of nonlinear state space models. Moreover, Watter et al. [40] proposed a similar approach in the context of optimal control. Since generative models are intrinsically aware of process and observation noise, incorporating the methodologies developed in such studies into the operator-theoretic perspective is an important open challenge for explicitly dealing with uncertainty.

5 Numerical examples

In this section, we provide numerical examples of DMD based on the LKIS framework (LKIS-DMD) implemented using neural networks. We conducted experiments on three typical nonlinear dynamical systems: a fixed-point attractor, a limit-cycle attractor, and a system with multiple basins of attraction. We show the results of comparisons with other recent DMD algorithms, i.e., Hankel DMD [41], [42], extended DMD [14], and DMD with reproducing kernels [15]. The detailed setups of the experiments discussed in this section and the next section are described in the supplementary.

Fixed-point attractor  Consider a two-dimensional nonlinear map on x_t = [x_{1,t}  x_{2,t}]^T:

x_{1,t+1} = λ x_{1,t},  x_{2,t+1} = ν x_{2,t} + (λ² − ν) x_{1,t}²,  (9)

which has a stable equilibrium at the origin if λ, ν < 1. The Koopman eigenvalues of system (9) include λ and ν, and the corresponding eigenfunctions are ϕ_λ(x) = x_1 and ϕ_ν(x) = x_2 − x_1², respectively. λ^i ν^j is also an eigenvalue, with corresponding eigenfunction ϕ_λ^i ϕ_ν^j. A minimal Koopman invariant subspace of system (9) is span{x_1, x_2, x_1²}, and the eigenvalues of the Koopman operator restricted to this subspace include λ, ν, and λ². We generated a dataset using system (9) with λ = 0.9 and ν = 0.5 and applied LKIS-DMD (n = 4), linear Hankel DMD [41], [42] (delay 2), and DMD with basis expansion by {x_1, x_2, x_1²}, which corresponds to extended DMD [14] with a right and minimal observable dictionary. The estimated Koopman eigenvalues are shown in Figure 2, wherein LKIS-DMD successfully identifies the eigenvalues of the target invariant subspace. In Figure 3, we show eigenvalues estimated using data contaminated with white Gaussian observation noise (σ = 0.1). The eigenvalues estimated by LKIS-DMD coincide with the true values even with the observation noise, whereas the results of DMD with basis expansion (i.e., extended DMD) are directly affected by the observation noise.
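A few lines of NumPy reproduce the eigenvalues of this example exactly, using the minimal dictionary {x₁, x₂, x₁²} as in extended DMD (our illustrative sketch; λ and ν as in the text):

import numpy as np

lam, nu, T = 0.9, 0.5, 50
x = np.zeros((T, 2)); x[0] = [1.0, 0.5]
for t in range(T - 1):                                  # simulate system (9)
    x[t + 1] = [lam * x[t, 0], nu * x[t, 1] + (lam**2 - nu) * x[t, 0] ** 2]

g = np.stack([x[:, 0], x[:, 1], x[:, 0] ** 2], axis=0)  # dictionary {x1, x2, x1^2}
Y0, Y1 = g[:, :-1], g[:, 1:]
A = Y1 @ np.linalg.pinv(Y0)                             # DMD matrix on observables
print(np.sort(np.linalg.eigvals(A).real))               # ~ {nu, lam^2, lam} = {0.5, 0.81, 0.9}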
Limit-cycle attractor  We generated data from the limit cycle of the FitzHugh–Nagumo equation

ẋ_1 = −x_1³/3 + x_1 − x_2 + I,  ẋ_2 = c(x_1 − b x_2 + a),  (10)

where a = 0.7, b = 0.8, c = 0.08, and I = 0.8. Since trajectories in a limit cycle are periodic, the (discrete-time) Koopman eigenvalues should lie near the unit circle. Figure 4 shows the eigenvalues estimated by LKIS-DMD (n = 16), linear Hankel DMD [41], [42] (delay 8), and DMDs with reproducing kernels [15] (polynomial kernel of degree 4 and RBF kernel of width 1). The eigenvalues produced by LKIS-DMD agree well with those produced by the kernel DMDs, whereas linear Hankel DMD produces eigenvalues that would correspond to rapidly decaying modes.

Figure 4: The left four panels show the Koopman eigenvalues estimated on the limit cycle of the FitzHugh–Nagumo equation by LKIS-DMD, linear Hankel DMD, and kernel DMDs with polynomial and RBF kernels. The hyperparameters of each DMD are set to produce 16 eigenvalues. The rightmost plot shows the full-batch (size 2,000) loss under mini-batch (size 200) SGD updates along iterations. The non-decomposable part L̄_RSS decreases rapidly and remains small, even under SGD.

Multiple basins of attraction  Consider the unforced Duffing equation

ẍ = −δ ẋ − x(β + α x²),  x = [x  ẋ]^T,  (11)

where δ = 1, β = −1, and α = 0.5. States x following (11) evolve toward [1  0]^T or [−1  0]^T depending on which basin of attraction the initial value belongs to, unless the initial state is on the stable manifold of the saddle. Generally, a Koopman eigenfunction whose continuous-time eigenvalue is zero takes a constant value in each basin of attraction [14]; thus, the contour plot of such an eigenfunction shows the boundary of the basins of attraction. We generated 1,000 episodes of time-series starting at different initial values uniformly sampled from [−2, 2]². The left plot in Figure 5 shows the continuous-time Koopman eigenvalues estimated by LKIS-DMD (n = 100), all of which correspond to decaying modes (i.e., negative real parts) and agree with this property of the data. The center plot in Figure 5 shows the true basins of attraction of (11), and the right plot shows the estimated values of the eigenfunction corresponding to the eigenvalue of the smallest magnitude. The surface of the estimated eigenfunction agrees qualitatively with the true boundary of the basins of attraction, which indicates that LKIS-DMD successfully identifies the Koopman eigenfunction.

Figure 5: (left) The continuous-time Koopman eigenvalues estimated by LKIS-DMD on the Duffing equation. (center) The true basins of attraction of the Duffing equation, wherein points in the blue region evolve toward (1, 0) and points in the red region evolve toward (−1, 0). Note that the stable manifold of the saddle point is not drawn precisely. (right) The values of the Koopman eigenfunction with a nearly zero eigenvalue computed by LKIS-DMD, whose level sets should correspond to the basins of attraction. There is rough agreement between the true boundary of the basins of attraction and the numerically computed boundary. The right two plots are best viewed in color.
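For reference, the discrete-time DMD eigenvalues λ are mapped to the continuous-time eigenvalues shown in Figure 5 via log(λ)/Δt; a minimal helper (ours; dt denotes the sampling interval, which we assume known):

import numpy as np

def continuous_time_eigs(discrete_eigs, dt):
    # Map discrete-time Koopman eigenvalues to continuous time: log(lambda)/dt.
    # Decaying modes then show up as eigenvalues with negative real part.
    return np.log(discrete_eigs.astype(complex)) / dt

print(continuous_time_eigs(np.array([0.9, 0.5]), dt=0.1))  # real parts < 0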
6 Applications

The numerical experiments in the previous section demonstrated the feasibility of the proposed method as a fully data-driven method for Koopman spectral analysis. Here, we introduce practical applications of LKIS-DMD.

Chaotic time-series prediction  Prediction of a chaotic time-series has received significant interest in nonlinear physics. We would like to perform prediction of a chaotic time-series using DMD, since DMD can be naturally utilized for prediction, as follows. Since g(x_t) is decomposed as g(x_t) = Σ_{i=1}^{n} ϕ_i(x_t) c_i and ϕ is obtained by ϕ_i(x_t) = z_i^H g(x_t), where z_i is a left-eigenvector of K, the next step of g can be described in terms of the current step, i.e., g(x_{t+1}) = Σ_{i=1}^{n} λ_i (z_i^H g(x_t)) c_i. In addition, in the case of LKIS-DMD, the values of g must be back-projected to y using the learned h.

We generated two types of univariate time-series by extracting the {x} series of the Lorenz attractor [43] and the Rössler attractor [44]. We simulated 25,000 steps for each attractor and used the first 10,000 steps for training, the next 5,000 steps for validation, and the last 10,000 steps for testing prediction accuracy. We examined the prediction accuracy of LKIS-DMD, a simple LSTM network, and linear Hankel DMD [41], [42], all of whose hyperparameters were tuned using the validation set. The prediction accuracy of every method and an example of the series predicted on the test set by LKIS-DMD are shown in Figure 6. As can be seen, the proposed LKIS-DMD achieves the smallest root-mean-square (RMS) errors in the 30-step prediction.

Figure 6: The left plots show RMS errors from 1- to 30-step predictions, and the right plots show a part of the 30-step prediction obtained by LKIS-DMD on (upper) the Lorenz-x series and (lower) the Rössler-x series.
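The prediction rule above is straightforward to implement. The sketch below (ours; eigs, left_vecs, and modes are assumed to come from a prior eigendecomposition of A, and h is the learned reconstructor) advances g by several steps and back-projects to measurements:

import numpy as np

def predict(g_t, eigs, left_vecs, modes, h, steps):
    # One eigen-step: g(x_{t+1}) = sum_i eigs[i] * (z_i^H g(x_t)) * c_i,
    # applied `steps` times, then mapped back to measurements via h.
    g = g_t.astype(complex)
    for _ in range(steps):
        coeffs = left_vecs.conj().T @ g    # eigenfunction values z_i^H g
        g = modes @ (eigs * coeffs)        # recombine with dynamic modes c_i
    return h(g.real)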
Unstable phenomena detection  One of the most popular applications of DMD is the investigation of the global characteristics of dynamics by inspecting the spatial distribution of the dynamic modes. In addition to the spatial distribution, we can investigate the temporal profiles of mode activations by examining the values of the corresponding eigenfunctions. For example, assume there is an eigenfunction ϕ_{λ₁} that corresponds to a discrete-time eigenvalue λ₁ whose magnitude is considerably smaller than one. Such a small eigenvalue indicates a rapidly decaying (i.e., unstable) mode; thus, we can detect occurrences of unstable phenomena by observing the values of ϕ_{λ₁}. We applied LKIS-DMD (n = 10) to a time-series generated by a far-infrared laser, which was obtained from the Santa Fe Time Series Competition Data [45]. We investigated the values of the eigenfunction ϕ_{λ₁} corresponding to the eigenvalue of the smallest magnitude. The original time-series and the values of ϕ_{λ₁} obtained by LKIS-DMD are shown in Figure 7. As can be seen, the activations of ϕ_{λ₁} coincide with sudden decays of the pulsation amplitudes.

Figure 7: The top plot shows the raw time-series obtained by a far-infrared laser [45]. The other plots show the results of unstable phenomena detection, wherein the peaks should correspond to the occurrences of unstable phenomena.

For comparison, we applied the novelty/change-point detection technique using a one-class support vector machine (OC-SVM) [46] and direct density-ratio estimation by relative unconstrained least-squares importance fitting (RuLSIF) [47]. We computed the AUC, defining the sudden decays of the amplitudes as the points to be detected, which was 0.924, 0.799, and 0.803 for LKIS, OC-SVM, and RuLSIF, respectively.

7 Conclusion

In this paper, we have proposed a framework for learning Koopman invariant subspaces, which is a fully data-driven numerical algorithm for Koopman spectral analysis. In contrast to existing approaches, the proposed method learns (approximately) a Koopman invariant subspace entirely from the available data, based on the minimization of RSS loss. We have shown empirical results for several typical nonlinear dynamics and application examples. We have also introduced an implementation using multi-layer perceptrons; however, one possible drawback of such an implementation is the local optima of the objective function, which makes it difficult to assess the adequacy of the obtained results. Rather than using neural networks, the observables to be learned could be modeled by a sparse combination of basis functions as in [23], but still utilizing optimization based on RSS loss. Another possible future research direction could be incorporating approximate Bayesian inference methods, such as the VAE [34]. The proposed framework is based on a discriminative viewpoint, but inference methodologies for generative models could be used to modify the proposed framework to explicitly consider uncertainty in data.

Acknowledgments

This work was supported by JSPS KAKENHI Grant No. JP15J09172, JP26280086, JP16H01548, and JP26289320.

References

[1] M. W. Hirsch, S. Smale, and R. L. Devaney, Differential equations, dynamical systems, and an introduction to chaos, 3rd ed. Academic Press, 2013.
[2] A. Lasota and M. C. Mackey, Chaos, fractals, and noise: Stochastic aspects of dynamics, 2nd ed. Springer, 1994.
[3] B. O. Koopman, "Hamiltonian systems and transformation in Hilbert space," Proceedings of the National Academy of Sciences of the United States of America, vol. 17, no. 5, pp. 315–318, 1931.
[4] I. Mezić, "Spectral properties of dynamical systems, model reduction and decompositions," Nonlinear Dynamics, vol. 41, no. 1-3, pp. 309–325, 2005.
[5] M. Budišić, R. Mohr, and I. Mezić, "Applied Koopmanism," Chaos, vol. 22, p. 047510, 2012.
[6] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. S. Henningson, "Spectral analysis of nonlinear flows," Journal of Fluid Mechanics, vol. 641, pp. 115–127, 2009.
[7] P. J. Schmid, "Dynamic mode decomposition of numerical and experimental data," Journal of Fluid Mechanics, vol. 656, pp. 5–28, 2010.
[8] J. L. Proctor and P. A. Eckhoff, "Discovering dynamic patterns from infectious disease data using dynamic mode decomposition," International Health, vol. 7, no. 2, pp. 139–145, 2015.
[9] B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz, "Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition," Journal of Neuroscience Methods, vol. 258, pp. 1–15, 2016.
[10] E. Berger, M. Sastuba, D. Vogt, B. Jung, and H. B. Amor, "Estimation of perturbations in robotic behavior using dynamic mode decomposition," Advanced Robotics, vol. 29, no. 5, pp. 331–343, 2015.
[11] J. N. Kutz, X. Fu, and S. L. Brunton, "Multiresolution dynamic mode decomposition," SIAM Journal on Applied Dynamical Systems, vol. 15, no. 2, pp. 713–735, 2016.
[12] A. Mauroy and J. Goncalves, "Linear identification of nonlinear systems: A lifting technique based on the Koopman operator," in Proceedings of the 2016 IEEE 55th Conference on Decision and Control, 2016, pp. 6500–6505.
[13] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor, Dynamic mode decomposition: Data-driven modeling of complex systems. SIAM, 2016.
[14] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, "A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition," Journal of Nonlinear Science, vol. 25, no. 6, pp. 1307–1346, 2015.
[15] Y. Kawahara, "Dynamic mode decomposition with reproducing kernels for Koopman spectral analysis," in Advances in Neural Information Processing Systems, vol. 29, 2016, pp. 911–919.
[16] I. Mezić, "Analysis of fluid flows via spectral properties of the Koopman operator," Annual Review of Fluid Mechanics, vol. 45, pp. 357–378, 2013.
[17] G. Froyland, G. A. Gottwald, and A. Hammerlindl, "A computational method to extract macroscopic variables and their dynamics in multiscale systems," SIAM Journal on Applied Dynamical Systems, vol. 13, no. 4, pp. 1816–1846, 2014.
[18] N. Takeishi, Y. Kawahara, and T. Yairi, "Subspace dynamic mode decomposition for stochastic Koopman analysis," Physical Review E, vol. 96, no. 3, p. 033310, 2017.
[19] J. L. Proctor, S. L. Brunton, and J. N. Kutz, "Dynamic mode decomposition with control," SIAM Journal on Applied Dynamical Systems, vol. 15, no. 1, pp. 142–161, 2016.
[20] ——, "Generalizing Koopman theory to allow for inputs and control," arXiv:1602.07647, 2016.
[21] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz, "On dynamic mode decomposition: Theory and applications," Journal of Computational Dynamics, vol. 1, no. 2, pp. 391–421, 2014.
[22] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz, "Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control," PLoS ONE, vol. 11, no. 2, e0150171, 2016.
[23] S. L. Brunton, J. L. Proctor, and J. N. Kutz, "Discovering governing equations from data by sparse identification of nonlinear dynamical systems," Proceedings of the National Academy of Sciences of the United States of America, vol. 113, no. 15, pp. 3932–3937, 2016.
[24] V. Rakočević, "On continuity of the Moore–Penrose and Drazin inverses," Matematički Vesnik, vol. 49, no. 3-4, pp. 163–172, 1997.
[25] F. Takens, "Detecting strange attractors in turbulence," in Dynamical Systems and Turbulence, Warwick 1980, ser. Lecture Notes in Mathematics, vol. 898, 1981, pp. 366–381.
[26] T. Sauer, J. A. Yorke, and M. Casdagli, "Embedology," Journal of Statistical Physics, vol. 65, no. 3-4, pp. 579–616, 1991.
[27] S. P. Garcia and J. S. Almeida, "Multivariate phase space reconstruction by nearest neighbor embedding with different time delays," Physical Review E, vol. 72, no. 2, p. 027205, 2005.
[28] Y. Hirata, H. Suzuki, and K. Aihara, "Reconstructing state spaces from multivariate data using variable delays," Physical Review E, vol. 74, no. 2, p. 026202, 2006.
[29] I. Vlachos and D. Kugiumtzis, "Nonuniform state-space reconstruction and coupling detection," Physical Review E, vol. 82, no. 1, p. 016207, 2010.
[30] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in Proceedings of the 2015 IEEE International Conference on Computer Vision, 2015, pp. 1026–1034.
[31] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proceedings of the 32nd International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 37, 2015, pp. 448–456.
[32] P. Kar, H. Narasimhan, and P. Jain, "Online and stochastic gradient methods for non-decomposable loss functions," in Advances in Neural Information Processing Systems, vol. 27, 2014, pp. 694–702.
[33] Z. Ghahramani and S. T. Roweis, "Learning nonlinear dynamical systems using an EM algorithm," in Advances in Neural Information Processing Systems, vol. 11, 1999, pp. 431–437.
[34] D. P. Kingma and M. Welling, "Stochastic gradient VB and the variational auto-encoder," in Proceedings of the 2nd International Conference on Learning Representations, 2014.
[35] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio, "A recurrent latent variable model for sequential data," in Advances in Neural Information Processing Systems, vol. 28, 2015, pp. 2980–2988.
[36] Y. Gao, E. W. Archer, L. Paninski, and J. P. Cunningham, "Linear dynamical neural population models through nonlinear embeddings," in Advances in Neural Information Processing Systems, vol. 29, 2016, pp. 163–171.
[37] M. Johnson, D. K. Duvenaud, A. Wiltschko, R. P. Adams, and S. R. Datta, "Composing graphical models with neural networks for structured representations and fast inference," in Advances in Neural Information Processing Systems, vol. 29, 2016, pp. 2946–2954.
[38] R. G. Krishnan, U. Shalit, and D. Sontag, "Structured inference networks for nonlinear state space models," in Proceedings of the 31st AAAI Conference on Artificial Intelligence, 2017, pp. 2101–2109.
[39] M. Karl, M. Soelch, J. Bayer, and P. van der Smagt, "Deep variational Bayes filters: Unsupervised learning of state space models from raw data," in Proceedings of the 5th International Conference on Learning Representations, 2017.
[40] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller, "Embed to control: A locally linear latent dynamics model for control from raw images," in Advances in Neural Information Processing Systems, vol. 28, 2015, pp. 2746–2754.
[41] H. Arbabi and I. Mezić, "Ergodic theory, dynamic mode decomposition and computation of spectral properties of the Koopman operator," arXiv:1611.06664, 2016.
[42] Y. Susuki and I. Mezić, "A Prony approximation of Koopman mode decomposition," in Proceedings of the 2015 IEEE 54th Conference on Decision and Control, 2015, pp. 7022–7027.
[43] E. N. Lorenz, "Deterministic nonperiodic flow," Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130–141, 1963.
[44] O. E. Rössler, "An equation for continuous chaos," Physics Letters A, vol. 57, no. 5, pp. 397–398, 1976.
[45] A. S. Weigend and N. A. Gershenfeld, Eds., Time series prediction: Forecasting the future and understanding the past, ser. Santa Fe Institute Series. Westview Press, 1993.
[46] S. Canu and A. Smola, "Kernel methods and the exponential family," Neurocomputing, vol. 69, no. 7-9, pp. 714–720, 2006.
[47] S. Liu, M. Yamada, N. Collier, and M. Sugiyama, "Change-point detection in time-series data by relative density-ratio estimation," Neural Networks, vol. 43, pp. 72–83, 2013.
Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations

Eirikur Agustsson (ETH Zurich), Fabian Mentzer (ETH Zurich), Michael Tschannen (ETH Zurich), Lukas Cavigelli (ETH Zurich), Radu Timofte (ETH Zurich & Merantix), Luca Benini (ETH Zurich), Luc Van Gool (KU Leuven & ETH Zurich)

Abstract

We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both.

1 Introduction

In recent years, deep neural networks (DNNs) have led to many breakthrough results in machine learning and computer vision [20, 28, 10], and are now widely deployed in industry. Modern DNN models often have millions or tens of millions of parameters, leading to highly redundant structures, both in the intermediate feature representations they generate and in the model itself. Although overparametrization of DNN models can have a favorable effect on training, in practice it is often desirable to compress DNN models for inference, e.g., when deploying them on mobile or embedded devices with limited memory. The ability to learn compressible feature representations, on the other hand, has a large potential for the development of (data-adaptive) compression algorithms for various data types such as images, audio, video, and text, for all of which various DNN architectures are now available.

DNN model compression and lossy image compression using DNNs have both independently attracted a lot of attention lately. In order to compress a set of continuous model parameters or features, we need to approximate each parameter or feature by one representative from a set of quantization levels (or vectors, in the multi-dimensional case), each associated with a symbol, and then store the assignments (symbols) of the parameters or features, as well as the quantization levels. Representing each parameter of a DNN model or each feature in a feature representation by the corresponding quantization level will come at the cost of a distortion D, i.e., a loss in performance (e.g., in classification accuracy for a classification DNN with quantized model parameters, or in reconstruction error in the context of autoencoders with quantized intermediate feature representations). The rate R, i.e., the entropy of the symbol stream, determines the cost of encoding the model or features in a bitstream.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

To learn a compressible DNN model or feature representation we need to minimize D + βR, where β > 0 controls the rate-distortion trade-off. Including the entropy into the learning cost function can be seen as adding a regularizer that promotes a compressible representation of the network or feature representation.
However, two major challenges arise when minimizing D + βR for DNNs: i) coping with the non-differentiability (due to quantization operations) of the cost function D + βR, and ii) obtaining an accurate and differentiable estimate of the entropy (i.e., R). To tackle i), various methods have been proposed. Among the most popular ones are stochastic approximations [39, 19, 7, 32, 5] and rounding with a smooth derivative approximation [15, 30]. To address ii), a common approach is to assume the symbol stream to be i.i.d. and to model the marginal symbol distribution with a parametric model, such as a Gaussian mixture model [30, 34], a piecewise linear model [5], or a Bernoulli distribution [33] (in the case of binary symbols).

In this paper, we propose a unified end-to-end learning framework for learning compressible representations, jointly optimizing the model parameters, the quantization levels, and the entropy of the resulting symbol stream to compress either a subset of feature representations in the network or the model itself (see inset figure).

[Inset figure: for DNN model compression, the vector to be compressed is z = [w_1, w_2, …, w_K]; for data compression, it is the bottleneck feature z = x^(b) produced by F_b ∘ ⋯ ∘ F_1 from the input x, with F_K ∘ ⋯ ∘ F_{b+1} mapping it to the output x^(K).]

We address both challenges i) and ii) above with methods that are novel in the context of DNN model and feature compression. Our main contributions are:

- We provide the first unified view on end-to-end learned compression of feature representations and DNN models. These two problems have been studied largely independently in the literature so far.
- Our method is simple and intuitively appealing, relying on soft assignments of a given scalar or vector to be quantized to quantization levels. A parameter controls the "hardness" of the assignments and allows to gradually transition from soft to hard assignments during training. In contrast to rounding-based or stochastic quantization schemes, our coding scheme is directly differentiable, and thus trainable end-to-end.
- Our method does not force the network to adapt to specific (given) quantization outputs (e.g., integers) but learns the quantization levels jointly with the weights, enabling application to a wider set of problems. In particular, we explore vector quantization for the first time in the context of learned compression and demonstrate its benefits over scalar quantization.
- Unlike essentially all previous works, we make no assumption on the marginal distribution of the features or model parameters to be quantized, by relying on a histogram of the assignment probabilities rather than the parametric models commonly used in the literature.
- We apply our method to DNN model compression for a 32-layer ResNet model [13] and full-resolution image compression using a variant of the compressive autoencoder proposed recently in [30]. In both cases, we obtain performance competitive with the state-of-the-art, while making fewer model assumptions and significantly simplifying the training procedure compared to the original works [30, 6].

The remainder of the paper is organized as follows. Section 2 reviews related work, before our soft-to-hard vector quantization method is introduced in Section 3. Then we apply it to a compressive autoencoder for image compression and to ResNet for DNN compression in Sections 4 and 5, respectively. Section 6 concludes the paper.
2 Related Work

There has been a surge of interest in DNN models for full-resolution image compression, most notably [32, 33, 4, 5, 30], all of which outperform JPEG [35] and some even JPEG 2000 [29]. The pioneering work [32, 33] showed that progressive image compression can be learned with convolutional recurrent neural networks (RNNs), employing a stochastic quantization method during training. [4, 30] both rely on convolutional autoencoder architectures. These works are discussed in more detail in Section 4.

In the context of DNN model compression, the line of works [12, 11, 6] adopts a multi-step procedure in which the weights of a pretrained DNN are first pruned and the remaining parameters are quantized using a k-means like algorithm, the DNN is then retrained, and finally the quantized DNN model is encoded using entropy coding. A notably different approach is taken by [34], where the DNN compression task is tackled using the minimum description length principle, which has a solid information-theoretic foundation.

It is worth noting that many recent works target quantization of the DNN model parameters and possibly the feature representation to speed up DNN evaluation on hardware with low-precision arithmetic, see, e.g., [15, 23, 38, 43]. However, most of these works do not specifically train the DNN such that the quantized parameters are compressible in an information-theoretic sense.

Gradually moving from an easy (convex or differentiable) problem to the actual harder problem during optimization, as done in our soft-to-hard quantization framework, has been studied in various contexts and falls under the umbrella of continuation methods (see [3] for an overview). Formally related, but motivated from a probabilistic perspective, are deterministic annealing methods for maximum entropy clustering/vector quantization, see, e.g., [24, 42]. Arguably most related to our approach is [41], which also employs continuation for nearest neighbor assignments, but in the context of learning a supervised prototype classifier. To the best of our knowledge, continuation methods have not been employed before in an end-to-end learning framework for neural network-based image compression or DNN compression.

3 Proposed Soft-to-Hard Vector Quantization

3.1 Problem Formulation

Preliminaries and Notations. We consider the standard model for DNNs, where we have an architecture F : R^{d_1} → R^{d_{K+1}} composed of K layers, F = F_K ∘ ⋯ ∘ F_1, where layer F_i maps R^{d_i} → R^{d_{i+1}} and has parameters w_i ∈ R^{m_i}. We refer to W = [w_1, ⋯, w_K] as the parameters of the network, and we denote the intermediate layer outputs of the network as x^(0) := x and x^(i) := F_i(x^(i−1)), such that F(x) = x^(K) and x^(i) is the feature vector produced by layer F_i. The parameters of the network are learned w.r.t. training data X = {x_1, ⋯, x_N} ⊂ R^{d_1} and labels Y = {y_1, ⋯, y_N} ⊂ R^{d_{K+1}}, by minimizing a real-valued loss L(X, Y; F). Typically, the loss can be decomposed as a sum over the training data plus a regularization term,

L(X, Y; F) = (1/N) Σ_{i=1}^{N} ℓ(F(x_i), y_i) + λR(W),  (1)

where ℓ(F(x), y) is the sample loss, λ > 0 sets the regularization strength, and R(W) is a regularizer (e.g., R(W) = Σ_i ‖w_i‖² for l2 regularization). In this case, the parameters of the network can be learned using stochastic gradient descent over mini-batches.
Assuming that the data X, Y on which the network is trained is drawn from some distribution P_{X,Y}, the loss (1) can be thought of as an estimator of the expected loss E[ℓ(F(X), Y) + λR(W)]. In the context of image classification, R^{d_1} would correspond to the input image space and R^{d_{K+1}} to the classification probabilities, and ℓ would be the categorical cross entropy.

We say that the deep architecture is an autoencoder when the network maps back into the input space, with the goal of reproducing the input. In this case, d_1 = d_{K+1} and F(x) is trained to approximate x, e.g., with a mean squared error loss ℓ(F(x), y) = ‖F(x) − y‖². Autoencoders typically condense the dimensionality of the input into some smaller dimensionality inside the network, i.e., the layer with the smallest output dimension, x^(b) ∈ R^{d_b}, has d_b ≪ d_1, which we refer to as the "bottleneck".

Compressible representations. We say that a weight parameter w_i or a feature x^(i) has a compressible representation if it can be serialized to a binary stream using few bits. For DNN compression, we want the entire network parameters W to be compressible. For image compression via an autoencoder, we just need the features in the bottleneck, x^(b), to be compressible.

Suppose we want to compress a feature representation z ∈ R^d in our network (e.g., x^(b) of an autoencoder) given an input x. Assuming that the data X, Y is drawn from some distribution P_{X,Y}, z will be a sample from a continuous random variable Z. To store z with a finite number of bits, we need to map it to a discrete space. Specifically, we map z to a sequence of m symbols using a (symbol) encoder E : R^d → [L]^m, where each symbol is an index ranging from 1 to L, i.e., [L] := {1, …, L}. The reconstruction of z is then produced by a (symbol) decoder D : [L]^m → R^d, which maps the symbols back to ẑ = D(E(z)) ∈ R^d. Since z is a sample from Z, the symbol stream E(z) is drawn from the discrete probability distribution P_{E(Z)}. Thus, given the encoder E, according to Shannon's source coding theorem [8], the correct metric for compressibility is the entropy of E(Z):

H(E(Z)) = − Σ_{e∈[L]^m} P(E(Z) = e) log(P(E(Z) = e)).  (2)

Our generic goal is hence to optimize the rate-distortion trade-off between the expected loss and the entropy of E(Z):

min_{E,D,W}  E_{X,Y}[ℓ(F̂(X), Y) + λR(W)] + βH(E(Z)),  (3)

where F̂ is the architecture in which z has been replaced with ẑ, and β > 0 controls the trade-off between compressibility of z and the distortion it imposes on F̂.

However, we cannot optimize (3) directly. First, we do not know the distribution of X and Y. Second, the distribution of Z depends in a complex manner on the network parameters W and the distribution of X. Third, the encoder E is a discrete mapping and thus not differentiable.

For our first approximation we consider the sample entropy instead of H(E(Z)). That is, given the data X and some fixed network parameters W, we can estimate the probabilities P(E(Z) = e) for e ∈ [L]^m via a histogram. For this estimate to be accurate, we would however need |X| ≫ L^m. If z is the bottleneck of an autoencoder, this would correspond to trying to learn a single histogram for the entire discretized data space. We relax this by assuming the entries of E(Z) are i.i.d., such that we can instead compute the histogram over the L distinct values.
More precisely, we assume that for e = (e_1, ⋯, e_m) ∈ [L]^m we can approximate P(E(Z) = e) ≈ ∏_{l=1}^{m} p_{e_l}, where p_j is the histogram estimate

p_j := |{e_l(z_i) | l ∈ [m], i ∈ [N], e_l(z_i) = j}| / (mN),  (4)

where we denote the entries of E(z) = (e_1(z), ⋯, e_m(z)) and z_i is the output feature z for training data point x_i ∈ X. We then obtain an estimate of the entropy of Z by substituting the approximation (3.1) into (2),

H(E(Z)) ≈ − Σ_{e∈[L]^m} (∏_{l=1}^{m} p_{e_l}) log(∏_{l=1}^{m} p_{e_l}) = −m Σ_{j=1}^{L} p_j log p_j = mH(p),  (5)

where the first (exact) equality is due to [8], Thm. 2.6.6, and H(p) := −Σ_{j=1}^{L} p_j log p_j is the sample entropy for the (i.i.d., by assumption) components of E(Z).¹

We now can simplify the ideal objective of (3) by replacing the expected loss with the sample mean over ℓ and the entropy with the sample entropy H(p), obtaining

(1/N) Σ_{i=1}^{N} ℓ(F(x_i), y_i) + λR(W) + βmH(p).  (6)

We note that so far we have assumed that z is a feature output in F, i.e., z = x^(k) for some k ∈ [K]. However, the above treatment would stay the same if z is the concatenation of multiple feature outputs. One can also obtain a separate sample entropy term for separate feature outputs and add them to the objective in (6).

In case z is composed of one or more parameter vectors, such as in DNN compression where z = W, z and ẑ cease to be random variables, since W is a parameter of the model. That is, opposed to the case where we have a source X that produces another source Z which we want to be compressible, we want the discretization of a single parameter vector W to be compressible. This is analogous to compressing a single document, instead of learning a model that can compress a stream of documents. In this case, (3) is not the appropriate objective, but our simplified objective in (6) remains appropriate. This is because a standard technique in compression is to build a statistical model of the (finite) data, which has a small sample entropy. The only difference is that now the histogram probabilities in (4) are taken over W instead of the dataset X, i.e., N = 1 and z_i = W in (4), and they count towards storage as well as the encoder E and decoder D.

¹ In fact, from [8], Thm. 2.6.6, it follows that if the histogram estimates p_j are exact, (5) is an upper bound for the true H(E(Z)) (i.e., without the i.i.d. assumption).

Challenges. Eq. (6) gives us a unified objective that can well describe the trade-off between compressible representations in a deep architecture and the original training objective of the architecture. However, the problem of finding a good encoder E, a corresponding decoder D, and parameters W that minimize the objective remains. First, we need to impose a form for the encoder and decoder, and second we need an approach that can optimize (6) w.r.t. the parameters W. Independently of the choice of E, (6) is challenging since E is a mapping to a finite set and, therefore, not differentiable. This implies that neither H(p) is differentiable nor F̂ is differentiable w.r.t. the parameters of z and the layers that feed into z. For example, if F̂ is an autoencoder and z = x^(b), the output of the network will not be differentiable w.r.t. w_1, ⋯, w_b and x^(0), ⋯, x^(b−1). These challenges motivate the design decisions of our soft-to-hard annealing approach, described in the next section.
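To make Eqs. (4)–(6) concrete, a small NumPy sketch (ours, for illustration) computes the histogram p over the symbol assignments and the resulting entropy term βmH(p):

import numpy as np

def sample_entropy_term(symbols, L, beta):
    # symbols: (N, m) integer assignments e_l(z_i) in {0, ..., L-1}.
    p = np.bincount(symbols.ravel(), minlength=L) / symbols.size   # Eq. (4)
    H = -np.sum(p[p > 0] * np.log2(p[p > 0]))                      # H(p), in bits
    return beta * symbols.shape[1] * H                             # beta * m * H(p)

symbols = np.random.default_rng(0).integers(0, 4, size=(10, 6))
print(sample_entropy_term(symbols, L=4, beta=0.1))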
3.2 Our Method

Encoder and decoder form. For the encoder E : R^d → [L]^m we assume that we have L center vectors C = {c_1, ⋯, c_L} ⊂ R^{d/m}. The encoding of z ∈ R^d is then performed by reshaping it into a matrix Z = [z̄^(1), ⋯, z̄^(m)] ∈ R^{(d/m)×m}, and assigning each column z̄^(l) to the index of its nearest neighbor in C. That is, we assume the feature z ∈ R^d can be modeled as a sequence of m points in R^{d/m}, which we partition into the Voronoi tessellation over the centers C. The decoder D : [L]^m → R^d then simply constructs Ẑ = [c_{e_1}, ⋯, c_{e_m}] ∈ R^{(d/m)×m} from a symbol sequence (e_1, ⋯, e_m) by picking the corresponding centers, from which ẑ = D(E(z)) is formed by reshaping Ẑ back into R^d. We will interchangeably write Ẑ = D(E(Z)) and ẑ = D(E(z)). The idea is then to relax E and D into continuous mappings via soft assignments instead of the hard nearest neighbor assignment of E.

Soft assignments. We define the soft assignment of z̄ ∈ R^{d/m} to C as

φ(z̄) := softmax(−σ[‖z̄ − c_1‖², …, ‖z̄ − c_L‖²]) ∈ R^L,  (7)

where softmax(y_1, ⋯, y_L)_j := e^{y_j} / (e^{y_1} + ⋯ + e^{y_L}) is the standard softmax operator, such that φ(z̄) has positive entries and ‖φ(z̄)‖_1 = 1. We denote the j-th entry of φ(z̄) with φ_j(z̄) and note that

lim_{σ→∞} φ_j(z̄) = 1 if j = argmin_{j′∈[L]} ‖z̄ − c_{j′}‖, and 0 otherwise,

such that φ̂(z̄) := lim_{σ→∞} φ(z̄) converges to a one-hot encoding of the nearest center to z̄ in C. We therefore refer to φ̂(z̄) as the hard assignment of z̄ to C, and to the parameter σ > 0 as the hardness of the soft assignment φ(z̄).

Using soft assignments, we define the soft quantization of z̄ as

Q̃(z̄) := Σ_{j=1}^{L} c_j φ_j(z̄) = Cφ(z̄),

where we write the centers as a matrix C = [c_1, ⋯, c_L] ∈ R^{d/m×L}. The corresponding hard assignment is taken with Q̂(z̄) := lim_{σ→∞} Q̃(z̄) = c_{e(z̄)}, where e(z̄) is the index of the center in C nearest to z̄. Therefore, we can now write

Ẑ = D(E(Z)) = [Q̂(z̄^(1)), ⋯, Q̂(z̄^(m))] = C[φ̂(z̄^(1)), ⋯, φ̂(z̄^(m))].

Now, instead of computing Ẑ via hard nearest neighbor assignments, we can approximate it with a smooth relaxation Z̃ := C[φ(z̄^(1)), ⋯, φ(z̄^(m))] by using the soft assignments instead of the hard assignments. Denoting the corresponding vector form by z̃, this gives us a differentiable approximation F̃ of the quantized architecture F̂, by replacing ẑ in the network with z̃.
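The soft assignment of Eq. (7), the soft quantization Q̃, and the hard nearest-neighbor encoder they converge to can be written in a few lines of NumPy (our sketch; the toy centers are arbitrary):

import numpy as np

def soft_assign(z_bar, C, sigma):
    # Eq. (7): softmax over -sigma * squared distances to the centers.
    d2 = np.sum((C - z_bar[:, None]) ** 2, axis=0)
    u = -sigma * d2
    e = np.exp(u - u.max())                          # shift for numerical stability
    return e / e.sum()

def soft_quantize(z_bar, C, sigma):
    return C @ soft_assign(z_bar, C, sigma)          # Q_tilde(z_bar) = C phi(z_bar)

def hard_encode(z_bar, C):
    return np.argmin(np.sum((C - z_bar[:, None]) ** 2, axis=0))  # sigma -> inf limit

C = np.array([[0.0, 1.0, 2.0]])                      # L = 3 scalar centers (d/m = 1)
z = np.array([0.8])
print(soft_quantize(z, C, sigma=1.0), soft_quantize(z, C, sigma=100.0), hard_encode(z, C))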
zi ) log pj , (8) mN j=1 i=1 j=1 l=1 such that we get an additive loss over the samples xi ? X and the components l ? [m]. Soft-to-hard deterministic annealing. Our soft assignment scheme gives us differentiable ap? proximations F? and H(?) of the discretized network F? and the sample entropy H(p), respectively. However, our objective is to learn network parameters W that minimize (6) when using the encoder and decoder with hard assignments, such that we obtain a compressible symbol stream E(z) which we can compress using, e.g., arithmetic coding [40]. To this end, we anneal ? from some initial value ?0 to infinity during training, such that the soft approximation gradually becomes a better approximation of the final hard quantization we will use. Choosing the annealing schedule is crucial as annealing too slowly may allow the network to invert the soft assignments (resulting in large weights), and annealing too fast leads to vanishing gradients too early, thereby preventing learning. In practice, one can either parametrize ? as a function of the iteration, or tie it to an auxiliary target such as the difference between the network losses incurred by soft quantization and hard quantization (see Section 4 for details). For a simple initialization of ?0 and the centers C, we can sample the centers from the set Z := P (l) ? z)k2 z ? Q(? {? zi |i ? [N ], l ? [m]} and then cluster Z by minimizing the cluster energy z??Z k? using SGD. 4 Image Compression We now show how we can use our framework to realize a simple image compression system. For the architecture, we use a variant of the convolutional autoencoder proposed recently in [30] (see Appendix A.1 for details). We note that while we use the architecture of [30], we train it using our soft-to-hard entropy minimization method, which differs significantly from their approach, see below. Our goal is to learn a compressible representation of the features in the bottleneck of the autoencoder. Because we do not expect the features from different bottleneck channels to be identically distributed, we model each channel?s distribution with a different histogram and entropy loss, adding each entropy term to the total loss using the same ? parameter. To encode a channel into symbols, we separate the channel matrix into a sequence of pw ? ph -dimensional patches. These patches (vectorized) form the columns of Z ? Rd/m?m , where m = d/(pw ph ), such that Z contains m (pw ph )-dimensional points. Having ph or pw greater than one allows symbols to capture local correlations in the bottleneck, which is desirable since we model the symbols as i.i.d. random variables for entropy coding. At test time, the symbol encoder E then determines the symbols in the channel by performing a nearest ? as described above. During neighbor assignment over a set of L centers C ? Rpw ph , resulting in Z, ? training we instead use the soft quantized Z, also w.r.t. the centers C. 
Figure 1: Top: MS-SSIM as a function of rate for SHVQ (ours), BPG, JPEG 2000, and JPEG on ImageNET100, B100, Urban100, and Kodak. Bottom: A visual example from the Kodak data set along with rate / MS-SSIM / SSIM / PSNR.

We trained different models using Adam [17], see Appendix A.2. Our training set is composed similarly to that described in [4]. We used a subset of 90,000 images from ImageNET [9], which we downsampled by a factor 0.7 and trained on crops of 128 × 128 pixels, with a batch size of 15. To estimate the probability distribution p for optimizing (8), we maintain a histogram over 5,000 images, which we update every 10 iterations with the images from the current batch. Details about other hyperparameters can be found in Appendix A.2.

The training of our autoencoder network takes place in two stages, where we move from an identity function in the bottleneck to hard quantization. In the first stage, we train the autoencoder without any quantization. Similar to [30], we gradually unfreeze the channels in the bottleneck during training (this gives a slight improvement over learning all channels jointly from the start). This yields an efficient weight initialization and enables us to then initialize σ_0 and C as described above.

In the second stage, we minimize (6), jointly learning network weights and quantization levels. We anneal σ by letting the gap between the soft and hard quantization errors go to zero as the number of iterations t goes to infinity. Let e_S = ‖F̃(x) − x‖² be the soft error and e_H = ‖F̂(x) − x‖² the hard error. With gap(t) = e_H − e_S, we can denote the error between the actual and the desired gap with e_G(t) = gap(t) − T/(T + t) · gap(0), such that the gap is halved after T iterations. We update σ according to σ(t + 1) = σ(t) + K_G e_G(t), where σ(t) denotes σ at iteration t. Fig. 3 in Appendix A.4 shows the evolution of the gap, the soft loss, and the hard loss as σ grows during training.

We observed that both vector quantization and the entropy loss lead to higher compression rates at a given reconstruction MSE, compared to scalar quantization and training without entropy loss, respectively (see Appendix A.3 for details).
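The gap-based annealing controller described above amounts to a proportional update of σ; a sketch (ours; the gain K_G is an assumed hyperparameter, not a value from the paper):

def anneal_sigma(sigma, e_soft, e_hard, gap0, t, T, K_G=0.01):
    # Drive the soft/hard error gap toward T/(T+t) * gap(0): the target gap
    # decays with the iteration t, so sigma grows and quantization hardens.
    gap = e_hard - e_soft
    e_G = gap - (T / (T + t)) * gap0
    return sigma + K_G * e_G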
Evaluation. To evaluate the image compression performance of our Soft-to-Hard Vector Quantization Autoencoder (SHVQ) method we use four datasets, namely Kodak [2], B100 [31], Urban100 [14], and ImageNET100 (100 randomly selected images from ImageNET [25]), and three standard quality measures, namely peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) [37], and multi-scale SSIM (MS-SSIM); see Appendix A.5 for details. We compare our SHVQ with the standard JPEG, JPEG 2000, and BPG [1], focusing on compression rates < 1 bits per pixel (bpp), i.e., the regime where traditional integral transform-based compression algorithms are most challenged. As shown in Fig. 1, for high compression rates (< 0.4 bpp), our SHVQ outperforms JPEG and JPEG 2000 in terms of MS-SSIM and is competitive with BPG. A similar trend can be observed for SSIM (see Fig. 4 in Appendix A.6 for plots of SSIM and PSNR as a function of bpp). SHVQ performs best on ImageNET100 and is most challenged on Kodak when compared with JPEG 2000. Visually, SHVQ-compressed images have fewer artifacts than those compressed by JPEG 2000 (see Fig. 1, and Figs. 5–12 in Appendix A.7).

Related methods and discussion. JPEG 2000 [29] uses wavelet-based transformations and adaptive EBCOT coding. BPG [1], based on a subset of the HEVC video compression standard, is the current state-of-the-art for image compression. It uses context-adaptive binary arithmetic coding (CABAC) [21]. The recent works of [30, 5] also showed competitive performance with JPEG 2000. While we use the architecture of [30], there are stark differences between the works, summarized in the inset table:

                       SHVQ (ours)                Theis et al. [30]
  Quantization         vector quantization        rounding to integers
  Backpropagation      grad. of soft relaxation   grad. of identity mapping
  Entropy estimation   (soft) histogram           Gaussian scale mixtures
  Training material    ImageNET                   high quality Flickr images
  Operating points     single model               ensemble

The work of [5] builds a deep model using multiple generalized divisive normalization (GDN) layers and their inverses (IGDN), which are specialized layers designed to capture local joint statistics of natural images. Furthermore, they model marginals for entropy estimation using linear splines and also use CABAC [21] coding. Concurrent to our work, the method of [16] builds on the architecture proposed in [33], and shows that impressive performance in terms of the MS-SSIM metric can be obtained by incorporating it into the optimization (instead of just minimizing the MSE). In contrast to the domain-specific techniques adopted by these state-of-the-art methods, our framework for learning compressible representations can realize a competitive image compression system, only using a convolutional autoencoder and simple entropy coding.

Table 1: Accuracies and compression factors for different DNN compression techniques, using a 32-layer ResNet on CIFAR-10. FT. denotes fine-tuning, I.C. denotes index coding, and H.C. and A.C. denote Huffman and arithmetic coding, respectively. The pruning-based results are from [6].

  Method                                                           Acc [%]   Comp. ratio
  Original model                                                   92.6      1.00
  Pruning + FT. + index coding + H.C. [12]                         92.6      4.52
  Pruning + FT. + k-means + FT. + I.C. + H.C. [11]                 92.6      18.25
  Pruning + FT. + Hessian-weighted k-means + FT. + I.C. + H.C.     92.7      20.51
  Pruning + FT. + uniform quantization + FT. + I.C. + H.C.         92.7      22.17
  Pruning + FT. + iterative ECSQ + FT. + I.C. + H.C.               92.7      21.01
  Soft-to-hard annealing + FT. + H.C. (ours)                       92.1      19.15
  Soft-to-hard annealing + FT. + A.C. (ours)                       92.1      20.15

5 DNN Compression

For DNN compression, we investigate the ResNet [13] architecture for image classification. We adopt the same setting as [6] and consider a 32-layer architecture trained for CIFAR-10 [18]. As in [6], our goal is to learn a compressible representation for all 464,154 trainable parameters of the model. We concatenate the parameters into a vector W ∈ R^{464,154} and employ scalar quantization (m = d), such that Z^T = z = W. We started from the pre-trained original model, which obtains a 92.6% accuracy on the test set. We implemented the entropy minimization by using L = 75 centers and chose β = 0.1 such that the converged entropy would give a compression factor of ≈ 20, i.e., ≈ 32/20 = 1.6 bits per weight.
The training was performed with the same learning parameters as the original model (SGD with momentum 0.9). The annealing schedule used was a simple exponential one, σ(t + 1) = 1.001 · σ(t) with σ(0) = 0.4. After 4 epochs of training, when σ(t) had increased by a factor ≈ 20, we switched to hard assignments and continued fine-tuning at a 10× lower learning rate. (We switch to hard assignments since we can get large gradients for weights that are equally close to two centers as the soft assignments converge to hard nearest-neighbor assignments; one could also employ simple gradient clipping.) Adhering to the benchmark of [6, 12, 11], we obtain the compression factor by dividing the bit cost of storing the uncompressed weights as floats (464,154 × 32 bits) by the total encoding cost of the compressed weights (i.e., L × 32 bits for the centers plus the size of the compressed index stream). Our compressible model achieves a comparable test accuracy of 92.1% while compressing the DNN by a factor 19.15 with Huffman coding and 20.15 with arithmetic coding. Table 1 compares our results with state-of-the-art approaches reported by [6]. We note that while the top methods from the literature also achieve accuracies above 92% and compression factors above 20×, they employ a considerable amount of hand-designed steps, such as pruning, retraining, various types of weight clustering, special encoding of the sparse weight matrices into an index-difference based format, and then finally entropy coding. In contrast, we directly minimize the entropy of the weights during training, obtaining a highly compressible representation using standard entropy coding. In Fig. 13 in Appendix A.8, we show how the sample entropy H(p) decays and the index histograms develop during training, as the network learns to condense most of the weights to a couple of centers when optimizing (6). In contrast, the methods of [12, 11, 6] manually impose 0 as the most frequent center by pruning ≈ 80% of the network weights. We note that the recent work of [34] also manages to tackle the problem in a single training procedure, using the minimum description length principle. In contrast to our framework, they take a Bayesian perspective and rely on a parametric assumption on the symbol distribution.

6 Conclusions

In this paper we proposed a unified framework for end-to-end learning of compressed representations for deep architectures. By training with a soft-to-hard annealing scheme, gradually transferring from a soft relaxation of the sample entropy and network discretization process to the actual non-differentiable quantization process, we manage to optimize the rate-distortion trade-off between the original network loss and the entropy. Our framework can elegantly capture diverse compression tasks, obtaining results competitive with the state-of-the-art for both image compression and DNN compression. The simplicity of our approach opens up various directions for future work, since our framework can be easily adapted to other tasks where a compressible representation is desired.

Acknowledgments

This work was supported by the EU's Horizon 2020 programme under grant agreement No. 687757 (REPLICATE), by NVIDIA Corporation through the Academic Hardware Grant, by ETH Zurich, and by Armasuisse.

References

[1] BPG Image format. https://bellard.org/bpg/.
[2] Kodak PhotoCD dataset. http://r0k.us/graphics/kodak/.
Springer Science & Business Media, 2012. [4] Johannes Ball?, Valero Laparra, and Eero P Simoncelli. End-to-end optimization of nonlinear transform codes for perceptual quality. arXiv preprint arXiv:1607.05006, 2016. [5] Johannes Ball?, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704, 2016. [6] Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee. Towards the limit of network quantization. arXiv preprint arXiv:1612.01543, 2016. [7] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pages 3123?3131, 2015. [8] Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012. [9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009. [10] Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115?118, 2017. [11] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. [12] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135?1143, 2015. [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 9 [14] Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5197?5206, 2015. [15] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016. [16] Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, and George Toderici. Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks. arXiv preprint arXiv:1703.10114, 2017. [17] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. [18] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. [19] Alex Krizhevsky and Geoffrey E Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011. [20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097?1105, 2012. [21] Detlev Marpe, Heiko Schwarz, and Thomas Wiegand. Context-based adaptive binary arithmetic coding in the h. 264/avc video compression standard. IEEE Transactions on circuits and systems for video technology, 13(7):620?636, 2003. [22] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. Int?l Conf. Computer Vision, volume 2, pages 416?423, July 2001. 
[23] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016.
[24] Kenneth Rose, Eitan Gurewitz, and Geoffrey C. Fox. Vector quantization by deterministic annealing. IEEE Transactions on Information Theory, 38(4):1249–1257, 1992.
[25] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
[26] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1874–1883, 2016.
[27] Wenzhe Shi, Jose Caballero, Lucas Theis, Ferenc Huszár, Andrew Aitken, Christian Ledig, and Zehan Wang. Is the deconvolution layer the same as a convolutional layer? arXiv preprint arXiv:1609.07009, 2016.
[28] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[29] David S. Taubman and Michael W. Marcellin. JPEG 2000: Image Compression Fundamentals, Standards and Practice. Kluwer Academic Publishers, Norwell, MA, USA, 2001.
[30] Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár. Lossy image compression with compressive autoencoders. In ICLR 2017, 2017.
[31] Radu Timofte, Vincent De Smet, and Luc Van Gool. A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution, pages 111–126. Springer International Publishing, Cham, 2015.
[32] George Toderici, Sean M. O'Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrent neural networks. arXiv preprint arXiv:1511.06085, 2015.
[33] George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell. Full resolution image compression with recurrent neural networks. arXiv preprint arXiv:1608.05148, 2016.
[34] Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.
[35] Gregory K. Wallace. The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics, 38(1):xviii–xxxiv, 1992.
[36] Z. Wang, E. P. Simoncelli, and A. C. Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conference on Signals, Systems and Computers, 2003, volume 2, pages 1398–1402, Nov 2003.
[37] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, April 2004.
[38] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074–2082, 2016.
[39] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
[40] Ian H. Witten, Radford M. Neal, and John G. Cleary. Arithmetic coding for data compression. Commun. ACM, 30(6):520–540, June 1987.
[41] Paul Wohlhart, Martin Kostinger, Michael Donoser, Peter M. Roth, and Horst Bischof. Optimizing 1-nearest prototype classifiers. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2013.
[42] Eyal Yair, Kenneth Zeger, and Allen Gersho. Competitive learning and soft competition for vector quantizer design. IEEE Transactions on Signal Processing, 40(2):294–309, 1992.
[43] Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless CNNs with low-precision weights. arXiv preprint arXiv:1702.03044, 2017.
Learning spatiotemporal piecewise-geodesic trajectories from longitudinal manifold-valued data

Juliette Chevallier, CMAP, École polytechnique
Pr Stéphane Oudard, Oncology Department, USPC, AP-HP, HEGP
Stéphanie Allassonnière, CRC, Université Paris Descartes

Abstract

We introduce a hierarchical model which allows us to estimate a group-average piecewise-geodesic trajectory in the Riemannian space of measurements together with the individual variability. This model falls into the well-defined class of mixed-effects models. The subject-specific trajectories are defined through spatial and temporal transformations of the group-average piecewise-geodesic path, component by component. Thus we can apply our model to a wide variety of situations. Due to the non-linearity of the model, we use the Stochastic Approximation Expectation-Maximization algorithm to estimate the model parameters. Experiments on synthetic data validate this choice. The model is then applied to the monitoring of metastatic renal cancer chemotherapy: we run estimations on RECIST scores of treated patients and estimate the time at which they escape from the treatment. Experiments highlight the role of the different parameters in the response to treatment.

1 Introduction

During the past few years, the way we treat metastatic renal cancer has profoundly changed: a new class of anti-angiogenic therapies targeting the tumor vessels instead of the tumor cells has emerged and drastically improved survival, by a factor of three (Escudier et al., 2016). These new drugs, however, do not cure the cancer; they only succeed in delaying the tumor growth, requiring the use of successive therapies which must be continued or interrupted at the appropriate moment according to the patient's response.

This new treatment process has also created a new scientific challenge: how to choose the most efficient drug therapy. This means that one has to properly understand how the patient reacts to the possible treatments. Actually, there are several strategies and taking the right decision is a contested issue (Rothermundt et al., 2015, 2017). To achieve that goal, physicians took an interest in mathematical modeling. Mathematics has already demonstrated its efficiency and played a role in the change of stop-criteria for a given treatment (Burotto et al., 2014). However, to the best of our knowledge, there exists only one model which was designed by medical practitioners. Although very basic mathematically, it seems to show that this point of view may produce interesting results. Introduced by Stein et al. in 2008, the model performs a non-linear regression using the least-squares method to fit an increasing and/or decreasing exponential curve (a sketch of such a fit is given below). This model is still used but suffers from limitations. First, as the profiles are fitted individual by individual, independently, the model cannot explain a global dynamic. Then, the choice of exponential growth prevents the emergence of plateau effects, which are often observed in practice. This opens the way to new models which would explain both a population and each individual, with other constraints on the shape of the response.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Learning models of disease progression from such databases raises great methodological challenges. We propose here a very generic model which can be adapted to a large number of situations.
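To make the baseline concrete, here is a least-squares fit of the kind performed by that model. The decay-plus-regrowth exponential form y(t) = y_0(e^{−dt} + e^{gt} − 1) used below is the one commonly associated with Stein et al.; the data points and starting values are invented for illustration, and this is our own sketch rather than the original implementation.

    import numpy as np
    from scipy.optimize import curve_fit

    def regression_growth(t, y0, d, g):
        # Tumor burden: exponential decay (rate d) plus exponential regrowth (rate g).
        return y0 * (np.exp(-d * t) + np.exp(g * t) - 1.0)

    # One patient's (synthetic) RECIST-like measurements, fitted independently,
    # which is precisely the per-individual limitation discussed above.
    t = np.array([0.0, 60.0, 120.0, 240.0, 360.0])
    y = np.array([100.0, 72.0, 58.0, 61.0, 79.0])
    params, _ = curve_fit(regression_growth, t, y, p0=(y[0], 1e-2, 1e-3))
    y0_hat, d_hat, g_hat = params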
For a given population, our model amounts to estimating an average trajectory in the set of measurements together with the individual variability. We can then define continuous subject-specific trajectories in view of the population progression. Trajectories need to be registered in space and time, to allow for anatomical variability (such as different tumor sizes), different paces of progression and different sensitivities to treatments. The framework of mixed-effects models is well suited to deal with this hierarchical problem. Mixed-effects models for longitudinal measurements were introduced in the seminal paper of Laird and Ware (1982) and have been widely developed since then. The recent generic approach of Schiratti et al. (2015) to align patients is even more suitable. First, anatomical data are naturally modeled as points on Riemannian manifolds, while the usual mixed-effects models are defined for Euclidean data. Secondly, the model was built with the aim of granting individual temporal and spatial variability, through individual variations of a common time-line and parallel shifting of the average trajectory. However, Schiratti et al. (2015) made a strong hypothesis when building their model, as they consider that the mean evolution is a geodesic. In our targeted situation this would mean that the cancer either goes on evolving or is always sensitive to the treatment. Unfortunately, the anti-angiogenic treatments may be inefficient, efficient or only temporarily efficient, leading to a re-progression of the metastases. Therefore, we want to relax this assumption of the model.

In this paper, we propose a generic statistical framework for the definition and estimation of spatiotemporal piecewise-geodesic trajectories from longitudinal manifold-valued data. Riemannian geometry allows us to derive a method that makes few assumptions about the data and applications dealt with. We first introduce our model in its most generic formulation and then make it explicit for RECIST (Therasse et al., 2000) score monitoring, i.e. for one-dimensional manifolds. Experimental results on those scores are given in section 4.2. The introduction of a more general model is a deliberate choice, as we expect to apply our model to the corresponding medical images. Because of the non-linearity of the model, we have to use a stochastic version of the Expectation-Maximization algorithm (Dempster et al., 1977), namely the MCMC-SAEM algorithm, for which theoretical results regarding the convergence have been proved in Delyon et al. (1999) and Allassonnière et al. (2010) and numerical efficiency has been demonstrated for these types of models (Schiratti et al. (2015), MONOLIX: MOdèles NOn LInéaires à effets miXtes).

2 Mixed-effects model for piecewise-geodesically distributed data

We consider a longitudinal dataset obtained by repeated measurements of n ∈ ℕ* individuals, where each individual i ∈ ⟦1, n⟧ is observed k_i ∈ ℕ* times, at the time points t_i = (t_{i,j})_{1≤j≤k_i}, and where y_i = (y_{i,j})_{1≤j≤k_i} denotes the sequence of observations for this individual. We also denote k = Σ_{i=1}^n k_i the total number of observations. We assume that each observation y_{i,j} is a point on a d-dimensional geodesically complete Riemannian manifold (M, g), so that y = (y_{i,j})_{1≤i≤n, 1≤j≤k_i} ∈ M^k. We generalize the idea of Schiratti et al. (2015) and build our model in a hierarchical way. We see our data points as samples along trajectories and suppose that each individual trajectory derives from a group-average scenario through spatiotemporal transformations.
Key to our model is that the group-average trajectory is no longer assumed to be geodesic but piecewise-geodesic.

2.1 Generic piecewise-geodesic curves model

Let m ∈ ℕ* and let t_R = (−∞ < t_R^1 < … < t_R^{m−1} < +∞) be a subdivision of ℝ, called the breaking-up times sequence. Let M_0 be a d-dimensional geodesically complete manifold and (γ̄_0^ℓ)_{1≤ℓ≤m} a family of geodesics on M_0. To completely define our average trajectory, we introduce m isometries φ_0^ℓ : M_0 → M_0^ℓ := φ_0^ℓ(M_0). This defines m new geodesics on the corresponding spaces M_0^ℓ by setting γ_0^ℓ = φ_0^ℓ ∘ γ̄_0^ℓ. The isometric nature of the mappings φ_0^ℓ ensures that the manifolds M_0^ℓ remain Riemannian and that the curves γ_0^ℓ remain geodesic. In particular, each γ_0^ℓ remains parametrizable (Gallot et al., 2004). We define the average trajectory by

∀t ∈ ℝ,   γ_0(t) = γ_0^1(t) 1_{]−∞, t_R^1]}(t) + Σ_{ℓ=2}^{m−1} γ_0^ℓ(t) 1_{]t_R^{ℓ−1}, t_R^ℓ]}(t) + γ_0^m(t) 1_{]t_R^{m−1}, +∞[}(t).

In this framework, M_0 may be understood as a manifold-template of the geodesic components of the curve γ_0. Because of the piecewise nature of our average trajectory γ_0, constraints have to be formulated on each interval of the subdivision t_R. Following the formulation of the local existence and uniqueness theorem (Gallot et al., 2004), constraints on geodesics are generally formulated by forcing a value and a tangent vector at a given time-point. However, such an approach cannot ensure the curve γ_0 to be at least continuous. That is why we re-formulate these constraints in our model as boundary conditions. Let Ā = (Ā_0, …, Ā_m) ∈ (M_0)^{m+1} be a sequence, t_0 ∈ ℝ an initial time and t_1 ∈ ℝ a final time. We impose that γ̄_0^1(t_0) = Ā_0, γ̄_0^m(t_1) = Ā_m and, for all ℓ ∈ ⟦1, m−1⟧, γ̄_0^ℓ(t_R^ℓ) = Ā_ℓ and γ̄_0^{ℓ+1}(t_R^ℓ) = Ā_ℓ. Notably, the 2m constraints are defined step by step. In one dimension (cf. section 2.2), the geodesics can be written explicitly and such constraints do not complicate the model much. In higher dimension, we have to use shooting or matching methods to enforce them. In practice, the choice of the isometries φ_0^ℓ and the geodesics γ̄_0^ℓ has to be made with the aim of being "as regular as possible" (at least continuous, as said above) at the rupture points t_R^ℓ. In one dimension for instance, we build trajectories that are continuous, not differentiable, but with a very similar slope on each side of the breaking-up points.

We want the individual trajectories to represent a wide variety of behaviors and to derive from the group-average path by spatiotemporal transformations. To do that, we define for each component ℓ of the piecewise-geodesic curve γ_0 a couple of transformations (φ_i^ℓ, ψ_i^ℓ). These transformations, namely the diffeomorphic component deformations and the time component reparametrizations, characterize respectively the spatial and the temporal variability of propagation among the population. Thus, individual trajectories may be written in the form

∀t ∈ ℝ,   γ_i(t) = γ_i^1(t) 1_{]−∞, t_{R,i}^1]}(t) + Σ_{ℓ=2}^{m−1} γ_i^ℓ(t) 1_{]t_{R,i}^{ℓ−1}, t_{R,i}^ℓ]}(t) + γ_i^m(t) 1_{]t_{R,i}^{m−1}, +∞[}(t)   (⋆)

where the functions γ_i^ℓ are obtained from γ_0^ℓ through the application of the two transformations φ_i^ℓ and ψ_i^ℓ described below (the assembly of such a piecewise curve is sketched in code below). Note that, in particular, each individual possesses his own sequence of rupture times t_{R,i} = (t_{R,i}^ℓ)_{1≤ℓ<m}. Moreover, we require the fewest constraints possible in the construction: at least continuity and control of the slopes at these breaking-up points. For compactness, we will from now on abusively denote t_R^0 for t_0 and t_R^m for t_1.
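As a minimal illustration of (⋆), the sketch below assembles a piecewise curve from its m components and a sorted sequence of breaking-up times; the left-open, right-closed intervals of the indicator functions correspond to searchsorted with side='left'. This is our own illustration, with names of our choosing.

    import numpy as np

    def piecewise_trajectory(components, breaking_times):
        """Build gamma(t) from m callables and m - 1 sorted rupture times.

        components     : [gamma^1, ..., gamma^m], one callable per interval
        breaking_times : [t_R^1, ..., t_R^{m-1}], strictly increasing
        """
        t_R = np.asarray(breaking_times)

        def gamma(t):
            # index l such that t lies in the l-th left-open, right-closed interval
            l = int(np.searchsorted(t_R, t, side="left"))
            return components[l](t)

        return gamma

    # Example: two affine components glued at t_R = 1.0
    gamma = piecewise_trajectory([lambda t: 1.0 - t, lambda t: t - 1.0], [1.0])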
To allow different paces in the progression and different breaking-up times for each individual, we introduce temporal transformations ψ_i^ℓ, called time-warps, defined for the subject i ∈ ⟦1, n⟧ and the geodesic component ℓ ∈ ⟦1, m⟧ by

ψ_i^ℓ(t) = α_i^ℓ (t − t_R^{ℓ−1} − τ_i^ℓ) + t_R^{ℓ−1}.

The parameters τ_i^ℓ correspond to the time-shifts between the mean and the individual progression onsets, and the α_i^ℓ are the acceleration factors that describe the pace of individuals, being faster or slower than the average. To ensure good adjunction at the rupture points, we demand the individual breaking-up times t_{R,i}^ℓ and the time-warps to satisfy ψ_i^ℓ(t_{R,i}^ℓ) = t_R^ℓ and ψ_i^ℓ(t_{R,i}^{ℓ−1}) = t_R^{ℓ−1}. Hence the subdivision t_{R,i} is constrained by the time reparametrizations, which are themselves constrained. Only the acceleration factors α_i^ℓ and the first time-shift τ_i^1 are free: for all ℓ ∈ ⟦1, m⟧, the constraints rewrite step by step as

t_{R,i}^ℓ = t_R^{ℓ−1} + τ_i^ℓ + (t_R^ℓ − t_R^{ℓ−1}) / α_i^ℓ   and   τ_i^ℓ = t_{R,i}^{ℓ−1} − t_R^{ℓ−1},

a recursion made explicit in the sketch below.
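In this sketch (our notation, not the authors' code), t_R includes the abusive endpoints t_R^0 = t_0 and t_R^m = t_1:

    def time_warp(t, alpha, tau, t_prev):
        # psi_i^l(t) = alpha_i^l * (t - t_R^{l-1} - tau_i^l) + t_R^{l-1}
        return alpha * (t - t_prev - tau) + t_prev

    def individual_subdivision(t_R, alphas, tau_1):
        """Individual breaking-up times from the population ones.

        t_R    : [t_R^0, t_R^1, ..., t_R^m]   (population subdivision)
        alphas : [alpha_i^1, ..., alpha_i^m]  (acceleration factors)
        tau_1  : first time shift tau_i^1; the others follow from the constraints
        """
        tau = tau_1
        subdivision = [t_R[0] + tau]          # individual onset t_{R,i}^0
        for l in range(1, len(t_R)):
            t_next = t_R[l - 1] + tau + (t_R[l] - t_R[l - 1]) / alphas[l - 1]
            subdivision.append(t_next)
            tau = t_next - t_R[l]             # tau_i^{l+1} = t_{R,i}^l - t_R^l
        return subdivision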
Concerning the space variability, we introduce m diffeomorphic deformations φ_i^ℓ, which enable the different components of the individual trajectories to vary more irrespectively of each other. We just enforce the adjunction to be at least continuous, so the diffeomorphisms φ_i^ℓ have to satisfy φ_i^ℓ ∘ γ_0^ℓ(t_R^ℓ) = φ_i^{ℓ+1} ∘ γ_0^{ℓ+1}(t_R^ℓ). Note that the mappings φ_i^ℓ do not need to be isometric anymore, as the individual trajectories are no longer required to be geodesic.

Finally, for all i ∈ ⟦1, n⟧ and ℓ ∈ ⟦1, m⟧, we set γ_i^ℓ = φ_i^ℓ ∘ γ_0^ℓ ∘ ψ_i^ℓ and define γ_i as in (⋆). The observations y_i = (y_{i,j}) are assumed to be distributed along the curve γ_i and perturbed by an additive Gaussian noise ε_i ∼ N(0, σ² I_{k_i}):

∀(i, j) ∈ ⟦1, n⟧ × ⟦1, k_i⟧,   y_{i,j} = γ_i(t_{i,j}) + ε_{i,j}   where   ε_{i,j} ∼ N(0, σ²).

By defining A_ℓ = φ_0^ℓ(Ā_ℓ) for each ℓ, we can apply the constraints to γ_0^ℓ instead of γ̄_0^ℓ.

The choice of the isometries φ_0^ℓ and the diffeomorphisms φ_i^ℓ induces a large panel of piecewise-geodesic models. For example, if m = 1, φ_0^1 = Id and φ_i^1 denotes the application that maps the curve γ_0 onto its parallel curve for a given non-zero tangent vector w_i, we recover the model proposed by Schiratti et al. (2015). In the following paragraph we propose another specific model, which can be used for chemotherapy monitoring for instance (see section 4.2).

2.2 Piecewise-logistic curve model

We focus in the following on the piecewise-logistic model, which is of real interest for our target application (cf. section 4.2). We assume that m = 2 and d = 1 and we set M_0 = ]0, 1[, equipped with the logistic metric. Given three real numbers γ_0^init, γ_0^escap and γ_0^fin, we set φ_0^1 : x ↦ (γ_0^init − γ_0^escap) x + γ_0^escap and φ_0^2 : x ↦ (γ_0^fin − γ_0^escap) x + γ_0^escap. Thus, we can map M_0 onto the intervals ]γ_0^escap, γ_0^init[ and ]γ_0^escap, γ_0^fin[ respectively: if γ̄_0 refers to the sigmoid function, φ_0^1 ∘ γ̄_0 is a logistic curve, growing from γ_0^escap to γ_0^init. In this way, there is essentially a single breaking-up time, which we denote t_R at the population level and t_R^i at the individual one. Moreover, due to our target applications, we force the first logistic to be decreasing and the second one increasing (this condition may be relaxed).

Logistics are defined on open intervals, with asymptotic constraints. As we want to formulate our constraints at non-infinite time-points, as explained in the previous paragraph, we set a positive threshold ν close to zero and demand the logistics γ_0^1 and γ_0^2 to be ν-near their corresponding asymptotes. More precisely, we impose the average trajectory γ_0 to be of the form γ_0 = γ_0^1 1_{]−∞, t_R]} + γ_0^2 1_{]t_R, +∞[}, where

γ_0^1 : ℝ → ]γ_0^escap, γ_0^init[,   t ↦ (γ_0^init + γ_0^escap e^{at+b}) / (1 + e^{at+b}),   with γ_0^escap + 2ν ≤ γ_0^init,
γ_0^2 : ℝ → ]γ_0^escap, γ_0^fin[,   t ↦ (γ_0^fin + γ_0^escap e^{−(ct+d)}) / (1 + e^{−(ct+d)}),   with γ_0^escap + 2ν ≤ γ_0^fin,

and a, b, c and d are positive numbers given by the following constraints (solved in closed form in the sketch at the end of this section):

γ_0^1(t_0) = γ_0^init − ν,   γ_0^1(t_R) = γ_0^2(t_R) = γ_0^escap + ν   and   γ_0^2(t_1) = γ_0^fin − ν.

In our context, the initial time of the process is known: it is the beginning of the treatment. So, we assume that the average initial time t_0 is equal to zero; in particular, t_0 is no longer a variable. Moreover, for each individual i ∈ ⟦1, n⟧, the time-warps write ψ_i^1(t) = α_i^1 (t − t_0 − τ_i^1) + t_0 and ψ_i^2(t) = α_i^2 (t − t_R − τ_i^2) + t_R, where τ_i^2 = τ_i^1 + ((1 − α_i^1)/α_i^1)(t_R − t_0). From now on, we write τ_i for τ_i^1.

In the same way as for the time-warps, the diffeomorphisms φ_i^1 and φ_i^2 are chosen to allow different amplitudes and rupture values: for each subject i ∈ ⟦1, n⟧, given the two scaling factors r_i^1 and r_i^2 and the space-shift δ_i, we define φ_i^ℓ(x) = r_i^ℓ (x − γ_0(t_R)) + γ_0(t_R) + δ_i, ℓ ∈ {1, 2}. Other choices are conceivable but, in the context of our target applications, this one is appropriate. Mathematically, any regular and injective function defined on ]γ_0^escap, γ_0^init[ (respectively ]γ_0^escap, γ_0^fin[) is suitable.

[Figure 1, panel (a) "Diversity of individual trajectories": RECIST scores (dimensionless, 0 to 400) of an average trajectory γ_0 and individuals γ_1 to γ_7 against time in days (0 to 2,000); panel (b) "From average to individual trajectory": schematic of the constraints γ_0^init − ν and γ_0^escap + ν and of the warps mapping t_0, t_R, t_1 to t_0^i, t_R^i, t_1^i.]

Figure 1: Model description. Figure 1a represents a typical average trajectory and several individual ones, for different vectors P_i. The rupture times are represented by diamonds and the initial/final times by stars. Figure 1b illustrates the non-standard constraints on γ_0 and the transition from the average trajectory to an individual one: the trajectory γ_i is subject to a temporal and a spatial warp. In other "words", γ_i = φ_i^1 ∘ γ_0^1 ∘ ψ_i^1 1_{]−∞, t_R^i]} + φ_i^2 ∘ γ_0^2 ∘ ψ_i^2 1_{]t_R^i, +∞[}.

To sum up, each individual trajectory γ_i depends on the average curve γ_0 through the fixed effects z_pop = (γ_0^init, γ_0^escap, γ_0^fin, t_R, t_1) and the random effects z_i = (α_i^1, α_i^2, τ_i, r_i^1, r_i^2, δ_i). This leads to a non-linear mixed-effects model. More precisely, for all (i, j) ∈ ⟦1, n⟧ × ⟦1, k_i⟧,

y_{i,j} = [ r_i^1 (γ_i^1(t_{i,j}) − γ_0(t_R)) + γ_0(t_R) + δ_i ] 1_{]−∞, t_R^i]}(t_{i,j}) + [ r_i^2 (γ_i^2(t_{i,j}) − γ_0(t_R)) + γ_0(t_R) + δ_i ] 1_{]t_R^i, +∞[}(t_{i,j}) + ε_{i,j},

where γ_i^1 = γ_0^1 ∘ ψ_i^1, γ_i^2 = γ_0^2 ∘ ψ_i^2 and t_R^i = t_0 + τ_i + (t_R − t_0)/α_i^1. Figure 1 provides an illustration of the model: on each subfigure, the bold black curve represents the average trajectory γ_0 and the coloured curves several individual trajectories.

The acceleration and scaling parameters have to be positive and equal to one on average, while the time and space shifts may be of any sign and must be zero on average. For these reasons, we set α_i^ℓ = e^{ξ_i^ℓ} and r_i^ℓ = e^{ρ_i^ℓ} for ℓ ∈ {1, 2}, leading to P_i = (ξ_i^1, ξ_i^2, τ_i, ρ_i^1, ρ_i^2, δ_i)^T. We assume that P_i ∼ N(0, Σ), where Σ ∈ S_p(ℝ), p = 6. This assumption is important in view of the applications: usually, the random effects are studied independently, whereas here we are interested in correlations between the two phases of the patient's response to treatment (see section 4.2).
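As announced above, the branch parameters can be solved in closed form from the ν-near boundary conditions. The sketch below uses a parametrization of our own, writing each branch as g_escap + (g_asym − g_escap)·sigmoid(u t + v), so that u = −a, v = −b for the decreasing branch and u = c, v = d for the increasing one:

    import numpy as np

    def logit(p):
        return np.log(p / (1.0 - p))

    def branch_params(g_asym, g_escap, t_asym, t_escap, nu):
        """Slope/offset (u, v) of g_escap + (g_asym - g_escap) * sigmoid(u t + v),
        nu-near g_asym at t_asym and nu-near g_escap at t_escap."""
        s_asym = (g_asym - nu - g_escap) / (g_asym - g_escap)
        s_escap = nu / (g_asym - g_escap)
        u = (logit(s_asym) - logit(s_escap)) / (t_asym - t_escap)
        v = logit(s_asym) - u * t_asym
        return u, v

    def average_trajectory(g_init, g_escap, g_fin, t_0, t_R, t_1, nu):
        p1 = branch_params(g_init, g_escap, t_0, t_R, nu)   # decreasing: u < 0
        p2 = branch_params(g_fin, g_escap, t_1, t_R, nu)    # increasing: u > 0
        sig = lambda x: 1.0 / (1.0 + np.exp(-x))

        def gamma_0(t):
            b1 = g_escap + (g_init - g_escap) * sig(p1[0] * t + p1[1])
            b2 = g_escap + (g_fin - g_escap) * sig(p2[0] * t + p2[1])
            return np.where(t <= t_R, b1, b2)

        return gamma_0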
3 Parameter estimation with the MCMC-SAEM algorithm

In this section, we explain how to use a stochastic version of the EM algorithm to produce maximum a posteriori estimates of the parameters.

3.1 Statistical analysis of the piecewise-logistic curve model

We want to estimate (z_pop, Σ, σ). The theoretical convergence of the EM algorithm, and a fortiori of the SAEM algorithm (Delyon et al., 1999), is proved only if the model belongs to the curved exponential family; this framework also matters for numerical performance. Without further hypotheses, the piecewise-logistic model does not satisfy this constraint. We proceed as in Kuhn and Lavielle (2005): we assume that z_pop is the realization of independent Gaussian random variables with fixed small variances, and estimate the means of those variables. So, the parameters we want to estimate are from now on θ = (γ̄_0^init, γ̄_0^escap, γ̄_0^fin, t̄_R, t̄_1, Σ, σ). The fixed and random effects z = (z_pop, (z_i)_{1≤i≤n}) are considered as latent variables. Our model writes in a hierarchical way as

y | z, θ ∼ ⊗_{i=1}^n ⊗_{j=1}^{k_i} N(γ_i(t_{i,j}), σ²),
z | θ ∼ N(γ̄_0^init, σ_init²) ⊗ N(γ̄_0^escap, σ_escap²) ⊗ N(γ̄_0^fin, σ_fin²) ⊗ N(t̄_R, σ_R²) ⊗ N(t̄_1, σ_1²) ⊗ ( ⊗_{i=1}^n N(0, Σ) ),

where σ_init, σ_escap, σ_fin, σ_R and σ_1 are hyperparameters of the model. The product measures ⊗ mean that the corresponding entries are considered independent in our model. Of course, this is not the case for the observations, which are obtained by repeated measurements on the same individuals, but this assumption leads to a more computationally tractable algorithm.

In this context, the EM algorithm is very efficient to compute the maximum likelihood estimate of θ. Due to the non-linearity of our model, a stochastic version of the EM algorithm is adopted, namely the Stochastic Approximation Expectation-Maximization (SAEM) algorithm. As the conditional distribution q(z|y, θ) is unknown, the Expectation step is replaced by a Monte-Carlo Markov Chain (MCMC) sampling step, leading to the MCMC-SAEM algorithm introduced in Kuhn and Lavielle (2005) and Allassonnière et al. (2010). It alternates between a simulation step, a stochastic approximation step and a maximization step until convergence. The simulation step is achieved using a symmetric random-walk Hastings-Metropolis within Gibbs sampler (Robert and Casella, 1999). See the supplementary material for details about the algorithmics; the structure of the loop is sketched below.

To ensure the existence of the maximum a posteriori (Theorem 1), we use a "partial" Bayesian formalism, i.e. we assume the prior (Σ, σ) ∼ W⁻¹(V, m_Σ) ⊗ W⁻¹(v, m_σ), where V ∈ S_p(ℝ), v, m_Σ, m_σ ∈ ℝ, and W⁻¹(V, m_Σ) denotes the inverse Wishart distribution with scale matrix V and m_Σ degrees of freedom. In order for the inverse Wishart to be non-degenerate, the degrees m_Σ and m_σ must satisfy m_Σ > 2p and m_σ > 2. In practice we do use degenerate priors, but with correct posteriors. To be consistent with the one-dimensional inverse Wishart distribution, we define the density of the higher-dimensional distribution as

f_{W⁻¹(V, m_Σ)}(Σ) = ( √(|V| / |Σ|) )^{m_Σ} · exp( −½ tr(V Σ⁻¹) ) / ( 2^{p m_Σ / 2} Γ_p(m_Σ / 2) ),

where Γ_p is the multivariate gamma function. The maximization step is straightforward given the sufficient statistics of our exponential model: we update the parameters by taking a barycenter between the corresponding sufficient statistic and the prior. See the supplementary material for explicit equations.
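The following skeleton shows the structure of the loop (a schematic of our own, not the authors' implementation; the callables are assumed to be supplied by the model):

    def mcmc_saem(y, simulate_z, suff_stats, maximize, z0, theta0,
                  n_iter=600, burn_in=150):
        """Schematic MCMC-SAEM: simulation, stochastic approximation, maximization.

        simulate_z : one sweep of symmetric random-walk Metropolis within Gibbs
                     targeting q(z | y, theta)
        suff_stats : sufficient statistics S(z, y) of the exponential model
        maximize   : closed-form parameter update given the statistics
        """
        z, theta, S = z0, theta0, None
        for k in range(n_iter):
            z = simulate_z(z, theta, y)                    # simulation step
            s = suff_stats(z, y)
            # step sizes: constant during burn-in, then ~1/k (Delyon et al., 1999)
            eps = 1.0 if k < burn_in else 1.0 / (k - burn_in + 1)
            S = s if S is None else S + eps * (s - S)      # stochastic approximation
            theta = maximize(S)                            # maximization step
        return theta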
3.2 Existence of the maximum a posteriori

The next theorem ensures that the model is well-posed and that the maximum we are looking for through the MCMC-SAEM algorithm exists. Let Θ be the space of admissible parameters:

Θ = { (γ̄_0^init, γ̄_0^escap, γ̄_0^fin, t̄_R, t̄_1, Σ, σ) ∈ ℝ^5 × S_p(ℝ) × ℝ_+ | Σ positive-definite }.

Theorem 1 (Existence of the MAP). Given the piecewise-logistic model and the choice of probability distributions for the parameters and latent variables of the model, for any dataset (t_{i,j}, y_{i,j})_{i∈⟦1,n⟧, j∈⟦1,k_i⟧}, there exists θ̂_MAP ∈ argmax_{θ∈Θ} q(θ|y).

A detailed proof is postponed to the supplementary material.

4 Experimental results

The piecewise-logistic model has been designed for chemotherapy monitoring. More specifically, we have met radiologists of the Hôpital Européen Georges-Pompidou (HEGP, the Georges Pompidou European Hospital) to design our model. In practice, patients suffering from metastatic kidney cancer take a drug each day and come regularly to the HEGP so that the tumor evolution can be checked. The response to a given treatment generally has two distinct phases: first, the tumor size reduces; then, the tumor grows again. A practical question is to quantify the correlation between both phases and to determine as accurately as possible the individual rupture times t_R^i, which correspond to an escape from the patient's response to treatment.

4.1 Synthetic data

In order to validate our model and numerical scheme, we first run experiments on synthetic data. We understood that the covariance matrix Σ gives a lot of information on the health status of a patient: pace and amplitude of tumor progression, individual rupture times, and so on. Therefore, we pay special attention to the estimation of Σ in this paragraph. An important point was to allow many different individual behaviors; in our synthetic example, Figure 1a illustrates this variability. From a single average trajectory (γ_0, in bold plain line), we can generate individuals who are cured at the end (dot-dashed lines: γ_3 and γ_4), some whose response to the treatment is bad (dashed lines: γ_5 and γ_6), and some who only escape, with no positive response to the treatments (dotted lines: γ_7). Likewise, we can generate "patients" with only positive responses or no response at all. The case of individual 4 is interesting in practice: the tumor still grows, but so slowly that the growth is negligible, at least in the short run. (A sketch of such a generator is given below.)

Figure 2 illustrates the qualitative performance of the estimation. We are notably able to capture various behaviors and fit subjects which are far from the average path, such as the orange and the green curves. We represent only five individuals, but 200 subjects have been used to perform the estimation. To measure the influence of the sample size on our model/algorithm, we generate synthetic datasets of various sizes and perform the estimation 50 times for each dataset.

[Figure 2: two panels of RECIST score (dimensionless, 0 to 200) against time in days (0 to 1,500): (a) initialisation, (b) after 600 iterations.]

Figure 2: Initialisation and "results". On both figures, the estimated trajectories are in plain lines and the target curves in dashed lines. The (noisy) observations are represented by crosses. The average path is the bold black line, the individuals are in color. Figure 2a: the population parameters z_pop and the corresponding latent variables are initialized at the empirical mean of the observations; individual trajectories are initialized on the average trajectory (P_i = 0, Σ = 0.1 I_p, σ = 1). Figure 2b: after 600 iterations, sometimes fewer, the estimated curves fit the observations very well. As the algorithm is stochastic, the estimated curves (and effectively the individuals) still wave around the target curves.
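A synthetic subject can be drawn in a few lines. The sketch below is our reconstruction of the generative model of section 2.2 (names of our choosing); it samples P_i ∼ N(0, Σ) and produces noisy observations along the warped trajectory:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_subject(Sigma, gamma_0, t_0, t_R, t_obs, sigma=3.0):
        """One synthetic individual of the piecewise-logistic model."""
        xi1, xi2, tau, rho1, rho2, delta = rng.multivariate_normal(np.zeros(6), Sigma)
        a1, a2 = np.exp(xi1), np.exp(xi2)        # acceleration factors
        r1, r2 = np.exp(rho1), np.exp(rho2)      # amplitude factors
        t_Ri = t_0 + tau + (t_R - t_0) / a1      # individual rupture time

        def gamma_i(t):
            early = t <= t_Ri
            # psi_i^1 before the rupture; psi_i^2 (with tau_i^2 = t_Ri - t_R) after
            psi = np.where(early, a1 * (t - t_0 - tau) + t_0,
                                  a2 * (t - t_Ri) + t_R)
            r = np.where(early, r1, r2)
            return r * (gamma_0(psi) - gamma_0(t_R)) + gamma_0(t_R) + delta

        return gamma_i(t_obs) + rng.normal(0.0, sigma, size=t_obs.shape)

Combined with the average_trajectory sketch above, this reproduces the kind of variability shown in Figure 1a.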
Means and standard deviations of the relative errors for the real parameters, namely γ_0^init, γ_0^escap, γ_0^fin, t_R, t_1 and σ, are compiled in Table 1. To compare comparable runs, we generated a single dataset of size 200 and shortened it to the desired sizes. Moreover, to put the algorithm in a more realistic situation, the synthetic observation times are non-periodically spaced, the numbers of observations per individual vary between 12 and 18, and the observed values are noisy (σ = 3). We remark that our algorithm is stable and that the bigger the sample size, the better we learn the residual standard deviation σ. The parameters t_R and γ_0^escap are quite difficult to learn, as they act on the flat section of the trajectory. However, the resulting error is not crippling, as what matters most to clinicians is the dynamic along both phases. As the algorithm estimates both the mean trajectory and the individual dynamics, it succeeds in estimating the inter-individual variability. This results in a good estimate of the covariance matrix Σ (see Figure 4).

Table 1: Mean (standard deviation) of the relative error (expressed as a percentage) for the population parameters z_pop and the residual standard deviation σ, over 50 runs, according to the sample size n.

    n     γ_0^init      γ_0^escap      γ_0^fin       t_R            t_1           σ
    50    1.63 (1.46)   9.45 (5.40)    6.23 (2.25)   11.58 (1.64)   4.41 (0.75)   25.24 (12.84)
    100   2.42 (1.50)   9.07 (5.19)    7.82 (2.43)   13.62 (1.31)   5.27 (0.60)   10.35 (3.96)
    150   2.14 (1.17)   11.40 (5.72)   5.82 (2.55)   9.24 (1.63)    3.42 (0.71)   2.83 (2.31)

4.2 Chemotherapy monitoring: RECIST scores of treated patients

We now run our estimation algorithm on real data from the HEGP. The RECIST (Response Evaluation Criteria In Solid Tumors) score (Therasse et al., 2000) measures tumoral growth and is a key indicator of patient survival. We have performed the estimation over a cohort of 176 patients of the HEGP, with an average of 7 visits per subject (min: 3, max: 22) and an average duration of 90 days between consecutive visits. We have run the algorithm several times, with different proposal laws for the sampler (a symmetric random-walk Hastings-Metropolis within Gibbs one) and different priors. We present here a run with a residual standard deviation that is low with respect to the amplitude of the trajectories and the complexity of the dataset: σ = 14.50, versus max(γ_0^init, γ_0^fin) − γ_0^escap = 452.4.

Figure 3a illustrates the performance of the model on the first eight patients. Although we cannot explain all the paths of progression, the algorithm succeeds in fitting various types of curves: from the yellow curve γ_3, which is rather flat and only escapes, to the red γ_7, which is spiky. From Figure 3b, it seems that the rupture times occur early in the progression on average. Nevertheless, this result is to be considered with some reserve: the rupture time generally occurs on a stable phase of the disease, where its estimation may be difficult.
[Figure 3: (a) RECIST scores (dimensionless, 0 to 400) of the average trajectory γ_0 and patients γ_1 to γ_8 against time in days (0 to 500), after 600 iterations; (b) histogram of the individual rupture times in days (0 to 5,000).]

Figure 3: RECIST score. We keep the conventions of the previous figures. Figure 3a is the result of a 600-iteration run; we represent here only the first 8 patients among the 176. Figure 3b is the histogram of the rupture times t_R^i for this run. The bold black line marks the estimated average rupture time t_R, which is a good estimate of the average of the individual rupture times, although there exists a large range of escape times.

[Figure 4: scatter plots of the individual random effects, colored by the individual rupture time t_R^i (in days, 0 to 4,000): (a) the time warp, i.e. first and second acceleration factors ξ_i^1, ξ_i^2 against time shifts τ_i, with axes annotated from "slow response / early onset" to "fast response / late onset"; (b) the space warp, i.e. first and second amplitude factors ρ_i^1, ρ_i^2 against space shifts δ_i, with axes annotated from "low score / low step" to "high score / high step".]

Figure 4: Individual random effects. Figure 4a: log-acceleration factors ξ_i^1 and ξ_i^2 against time shifts τ_i. Figure 4b: log-amplitude factors ρ_i^1 and ρ_i^2 against space shifts δ_i. In both figures, the color corresponds to the individual rupture time t_R^i. These estimations hold for the same run as Figure 3.

In Figure 4, we plot the individual estimates of the random effects (obtained from the last iteration) against the individual rupture times. Even though the parameters which drive the space warp, i.e. ρ_i^1, ρ_i^2 and δ_i, are correlated with one another, their correlation with the rupture time is not clear. In other words, the volume of the tumors does not seem to be relevant for evaluating the escape of a patient. On the contrary, and logically, the time warp strongly impacts the rupture time.

4.3 Discussion and perspective

We propose here a generic spatiotemporal model to analyze longitudinal manifold-valued measurements. Contrary to Schiratti et al. (2015), the average trajectory is no longer assumed to be geodesic. This allows us to apply our model to more complex situations: in chemotherapy monitoring for example, where the patients are treated and tumors may respond, stabilize or progress during the treatment, with different behaviors in each phase. In the age of personalized medicine, giving physicians decision-support systems is really important; therefore, learning correlations between both phases is crucial, and this has been taken into account here. With a view to working with more complicated data, medical images for instance, we have first presented our model in a very generic version. Then we made it explicit for RECIST score monitoring and performed experiments on data from the HEGP. However, we have studied that dataset as if all patients behaved similarly, which is not true in practice. We believe that a possible improvement of our model is to embed it into a mixture model. Lastly, the SAEM algorithm is really sensitive to initial conditions. This phenomenon is emphasized by the non-independence between the individual variables: actually, the average trajectory γ_0 is not exactly the trajectory of the average parameters. Fortunately, the larger the sample size n, the closer γ_0 gets to the trajectory of the average parameters.

Acknowledgments
This work was supported by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, and was performed as part of a project funded by the Fondation pour la Recherche Médicale, grant number DBI20131228564.

References

Stéphanie Allassonnière, Estelle Kuhn, and Alain Trouvé. Construction of Bayesian deformable models via a stochastic approximation algorithm: A convergence study. Bernoulli, 16(3):641–678, 08 2010.

Mauricio Burotto, Julia Wilkerson, Wilfred Stein, Robert Motzer, Susan Bates, and Tito Fojo. Continuing a cancer treatment despite tumor growth may be valuable: Sunitinib in renal cell carcinoma as example. PLoS ONE, 9(5):e96316, 2014.

Bernard Delyon, Marc Lavielle, and Eric Moulines. Convergence of a stochastic approximation version of the EM algorithm. The Annals of Statistics, 27(1):94–128, 1999.

Arthur Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1–38, 1977.

Bernard Escudier, Camillo Porta, Manuela Schmidinger, Nathalie Rioux-Leclercq, Axel Bex, Vincent S. Khoo, Viktor Gruenvald, and Alan Horwich. Renal cell carcinoma: ESMO clinical practice guidelines for diagnosis, treatment and follow-up. Annals of Oncology, 27(suppl 5):v58–v68, 2016.

Sylvestre Gallot, Dominique Hulin, and Jacques Lafontaine. Riemannian Geometry. Universitext. Springer-Verlag Berlin Heidelberg, 3rd edition, 2004.

Estelle Kuhn and Marc Lavielle. Maximum likelihood estimation in nonlinear mixed effects models. Computational Statistics & Data Analysis, 49(4):1020–1038, 2005.

Nan M. Laird and James H. Ware. Random-effects models for longitudinal data. Biometrics, 38(4):963–974, 1982.

Christian P. Robert and George Casella. Monte Carlo Statistical Methods. Springer Texts in Statistics. Springer-Verlag New York, 1999.

Christian Rothermundt, Alexandra Bailey, Linda Cerbone, Tim Eisen, Bernard Escudier, Silke Gillessen, Viktor Grünwald, James Larkin, David McDermott, Jan Oldenburg, Camillo Porta, Brian Rini, Manuela Schmidinger, Cora N. Sternberg, and Paul M. Putora. Algorithms in the first-line treatment of metastatic clear cell renal cell carcinoma: analysis using diagnostic nodes. The Oncologist, 20(9):1028–1035, 2015.

Christian Rothermundt, Joscha Von Rappard, Tim Eisen, Bernard Escudier, Viktor Grünwald, James Larkin, David McDermott, Jan Oldenburg, Camillo Porta, Brian Rini, Manuela Schmidinger, Cora N. Sternberg, and Paul M. Putora. Second-line treatment for metastatic clear cell renal cell cancer: experts' consensus algorithms. World Journal of Urology, 35(4):641–648, 2017.

Jean-Baptiste Schiratti, Stéphanie Allassonnière, Olivier Colliot, and Stanley Durrleman. Learning spatiotemporal trajectories from manifold-valued longitudinal data. In Advances in Neural Information Processing Systems, number 28, 2015.

Wilfred D. Stein, William Doug Figg, William Dahut, Aryeh D. Stein, Moshe B. Hoshen, Doug Price, Susan E. Bates, and Tito Fojo. Tumor growth rates derived from data for patients in a clinical trial correlate strongly with patient survival: A novel strategy for evaluation of clinical trial data. The Oncologist, 13(10):1046–1054, 2008.

Patrick Therasse, Susan G. Arbuck, Elizabeth A. Eisenhauer, Jantien Wanders, Richard S. Kaplan, Larry Rubinstein, Jaap Verweij, Martine Van Glabbeke, Allan T.
van Oosterom, Michaele C. Christian, and Steve G. Gwyther. New guidelines to evaluate the response to treatment in solid tumors. Journal of the National Cancer Institute, 92(3):205–216, 2000.
6,319
6,716
Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications

Qinshi Wang, Princeton University, Princeton, NJ 08544, qinshiw@princeton.edu
Wei Chen, Microsoft Research, Beijing, China, weic@microsoft.com

Abstract

We study combinatorial multi-armed bandit with probabilistically triggered arms and semi-bandit feedback (CMAB-T). We resolve a serious issue in the prior CMAB-T studies where the regret bounds contain a possibly exponentially large factor of 1/p*, where p* is the minimum positive probability that an arm is triggered by any action. We address this issue by introducing a triggering probability modulated (TPM) bounded smoothness condition into the general CMAB-T framework, and show that many applications such as influence maximization bandits and combinatorial cascading bandits satisfy this TPM condition. As a result, we completely remove the factor of 1/p* from the regret bounds, achieving significantly better regret bounds for influence maximization and cascading bandits than before. Finally, we provide lower bound results showing that the factor 1/p* is unavoidable for general CMAB-T problems, suggesting that the TPM condition is crucial in removing this factor.

1 Introduction

Stochastic multi-armed bandit (MAB) is a classical online learning framework modeled as a game between a player and an environment with m arms. In each round, the player selects one arm and the environment generates a reward of the arm from a distribution unknown to the player. The player observes the reward and uses it as feedback to the player's algorithm (or policy) to select arms in future rounds. The goal of the player is to cumulate as much reward as possible over time. MAB models the classical dilemma between exploration and exploitation: whether the player should keep exploring arms in search of a better arm, or should stick to the best arm observed so far to collect rewards. The standard performance measure of the player's algorithm is the (expected) regret, which is the difference in expected cumulative reward between always playing the best arm in expectation and playing according to the player's algorithm.

In recent years, stochastic combinatorial multi-armed bandit (CMAB) has received much attention (e.g. [9, 7, 6, 10, 13, 15, 14, 16, 8]), because it has wide applications in wireless networking, online advertising and recommendation, viral marketing in social networks, etc. In the typical setting of CMAB, the player selects a combinatorial action to play in each round, which triggers the play of a set of arms, and the outcomes of these triggered arms are observed as the feedback (called semi-bandit feedback). Besides the exploration-exploitation tradeoff, CMAB also needs to deal with the exponential explosion of the possible actions, which makes exploring all actions infeasible.

One class of the above CMAB problems involves probabilistically triggered arms [7, 14, 16], in which actions may trigger arms probabilistically. We denote it as CMAB-T in this paper. Chen et al. [7] provide such a general model and apply it to the influence maximization bandit, which models stochastic influence diffusion in social networks and sequentially selecting seed sets to maximize the cumulative influence spread over time. Kveton et al.
[14, 16] study cascading bandits, in which arms are probabilistically triggered following a sequential order selected by the player as the action. However, in both studies, the regret bounds contain an undesirable factor of 1/p*, where p* is the minimum positive probability that any arm can be triggered by any action,(1) and this factor could be exponentially large for both influence maximization and cascading bandits.

In this paper, we adapt the general CMAB framework of [7] in a systematic way to completely remove the factor of 1/p* for a large class of CMAB-T problems, including both influence maximization and combinatorial cascading bandits. The key observation is that for these problems, a harder-to-trigger arm has less impact on the expected reward and thus we do not need to observe it as often. We turn this key observation into a triggering probability modulated (TPM) bounded smoothness condition, adapted from the original bounded smoothness condition in [7]. We eliminate the 1/p* factor from the regret bounds for all CMAB-T problems satisfying the TPM condition, and show that the influence maximization bandit and the conjunctive/disjunctive cascading bandits all satisfy the TPM condition. Moreover, for general CMAB-T without the TPM condition, we show a lower bound result that 1/p* is unavoidable, because the hard-to-trigger arms are crucial in determining the best arm and have to be observed enough times. Besides removing the exponential factor, our analysis is also tighter in other regret factors or constants compared to the existing influence maximization bandit results [7, 25], combinatorial cascading bandit results [16], and linear bandit results without probabilistically triggered arms [15]. Both the regret analysis based on the TPM condition and the proof that the influence maximization bandit satisfies the TPM condition are technically involved and nontrivial, but due to the space constraint, we have to move the complete proofs to the supplementary material. Instead, we introduce the key techniques in the main text.

Related Work. The multi-armed bandit problem was originally formulated by Robbins [20], and has been extensively studied in the literature [cf. 3, 21, 4]. Our study belongs to stochastic bandit research, while there is another line of research on adversarial bandits [2], for which we refer to a survey like [4] for further information. For stochastic MABs, an important approach is the Upper Confidence Bound (UCB) approach [1], on which most CMAB studies are based. As already mentioned in the introduction, stochastic CMAB has received much attention in recent years. Among these studies, we improve (a) the general framework with probabilistically triggered arms of [7], (b) the influence maximization bandit results in [7] and [25], (c) the combinatorial cascading bandit results in [16], and (d) the linear bandit results in [15]. We defer the technical comparison with these studies to Section 4.3. Other CMAB studies do not deal with probabilistically triggered arms. Among them, [9] is the first study on linear stochastic bandits, but its regret bound has since been improved by Chen et al. [7] and Kveton et al. [15]. Combes et al. [8] improve the regret bound of [15] for linear bandits in a special case where arms are mutually independent. Most of the studies above are based on the UCB-style CUCB algorithm or its minor variants, and differ in their assumptions and regret analysis. Gopalan et al.
[10] study Thompson sampling for complex actions, which is based on the Thompson sampling approach [22] and can be applied to CMAB, but their regret bound has a large exponential constant term. Influence maximization was first formulated as a discrete optimization problem by Kempe et al. [12], and has been extensively studied since (cf. [5]). Variants of the influence maximization bandit have also been studied [18, 23, 24]. Lei et al. [18] use a different objective of maximizing the expected size of the union of the influenced nodes over time. Vaswani et al. [23] discuss how to transfer node-level feedback to edge-level feedback, and then apply the result of [7]. Vaswani et al. [24] replace the original maximization objective of influence spread with a heuristic surrogate function, avoiding the issue of probabilistically triggered arms. But their regret is defined against a weaker benchmark relaxed by the approximation ratio of the surrogate function, and thus their theoretical result is weaker than ours.

(1) The factor of 1/f* used for the combinatorial disjunctive cascading bandits in [16] is essentially 1/p*.

2 General Framework

In this section we present the general framework of combinatorial multi-armed bandit with probabilistically triggered arms originally proposed in [7], with a slight adaptation, and denote it as CMAB-T. We illustrate that the influence maximization bandit [7] and combinatorial cascading bandits [14, 16] are example instances of CMAB-T.

CMAB-T is described as a learning game between a learning agent (or player) and the environment. The environment consists of m random variables X_1, ..., X_m called base arms (or arms) following a joint distribution D over [0,1]^m. Distribution D is picked by the environment from a class of distributions 𝒟 before the game starts. The player knows 𝒟 but not the actual distribution D.

The learning process proceeds in discrete rounds. In round t ≥ 1, the player selects an action S_t from an action space S based on the feedback history from the previous rounds, and the environment draws from the joint distribution D an independent sample X^(t) = (X_1^(t), ..., X_m^(t)). When action S_t is played on the environment outcome X^(t), a random subset of arms τ_t ⊆ [m] is triggered, and the outcomes X_i^(t) for all i ∈ τ_t are observed as the feedback to the player. The player also obtains a nonnegative reward R(S_t, X^(t), τ_t) fully determined by S_t, X^(t), and τ_t. A learning algorithm aims at properly selecting actions S_t over time, based on the past feedback, to cumulate as much reward as possible. Different from [7], we allow the action space S to be infinite. In the supplementary material, we discuss an example of continuous influence maximization [26] that uses a continuous and infinite action space while the number of base arms is still finite.

We now describe the triggered set τ_t in more detail, which is not explicit in [7]. In general, τ_t may have additional randomness beyond the randomness of X^(t). Let D_trig(S, X) denote a distribution over the triggered subsets of [m] for a given action S and an environment outcome X. We assume that τ_t is drawn independently from D_trig(S_t, X^(t)). We refer to D_trig as the probabilistic triggering function.

To summarize, a CMAB-T problem instance is a tuple ([m], S, 𝒟, D_trig, R), with elements as described above. These elements are known to the player, and hence establish the problem input to the player. In contrast, the environment instance is the actual distribution D ∈
𝒟 picked by the environment, and it is unknown to the player. The problem instance and the environment instance together form the (learning) game instance, in which the learning process unfolds. In this paper, we fix the environment instance D, unless we need to refer to more than one environment instance.

For each arm i, let μ_i = E_{X∼D}[X_i]. Let the vector μ = (μ_1, ..., μ_m) denote the expectation vector of the arms. Note that μ is determined by D. Same as in [7], we assume that the expected reward E[R(S, X, τ)], where the expectation is taken over X ∼ D and τ ∼ D_trig(S, X), is a function of the action S and the expectation vector μ of the arms. Henceforth, we denote r_S(μ) := E[R(S, X, τ)]. We remark that Chen et al. [6] relax the above assumption and consider the case where the entire distribution D, not just the mean of D, is needed to determine the expected reward. However, they need to assume that arm outcomes are mutually independent, and they do not consider probabilistically triggered arms. It might be interesting to incorporate probabilistically triggered arms into their setting, but this is out of the scope of the current paper.

The performance of a learning algorithm A is measured by its (expected) regret, which is the difference in expected cumulative reward between always playing the best action and playing actions selected by algorithm A. Formally, let opt_μ = sup_{S∈S} r_S(μ), where μ = E_{X∼D}[X], and we assume that opt_μ is finite. Same as in [7], we assume that the learning algorithm has access to an offline (α, β)-approximation oracle O, which takes μ = (μ_1, ..., μ_m) as input and outputs an action S^O such that Pr{r_μ(S^O) ≥ α · opt_μ} ≥ β, where α is the approximation ratio and β is the success probability. Under the (α, β)-approximation oracle, the benchmark cumulative reward should be the αβ fraction of the optimal reward, and thus we use the following (α, β)-approximation regret:

Definition 1 ((α, β)-approximation Regret). The T-round (α, β)-approximation regret of a learning algorithm A (using an (α, β)-approximation oracle) for a CMAB-T game instance ([m], S, 𝒟, D_trig, R, D) with μ = E_{X∼D}[X] is

Reg^A_{μ,α,β}(T) = T · αβ · opt_μ − E[ Σ_{t=1}^{T} R(S_t^A, X^(t), τ_t) ] = T · αβ · opt_μ − E[ Σ_{t=1}^{T} r_{S_t^A}(μ) ],

where S_t^A is the action A selects in round t, and the expectation is taken over the randomness of the environment outcomes X^(1), ..., X^(T), the triggered sets τ_1, ..., τ_T, as well as the possible randomness of algorithm A itself.

We remark that because probabilistically triggered arms may strongly impact the determination of the best action but may be hard to trigger and observe, the regret could be worse, and the regret analysis is in general harder than for CMAB without probabilistically triggered arms. The above framework essentially follows [7], but we decouple actions from subsets of arms, allow the action space to be infinite, and explicitly model the triggered-set distribution, which makes the framework more powerful in modeling certain applications (see the supplementary material for more discussion).

2.1 Examples of CMAB-T: Influence Maximization and Cascading Bandits

In social influence maximization [12], we are given a weighted directed graph G = (V, E, p), where V and E are sets of vertices and edges respectively, and each edge (u, v) is associated with a probability p(u, v). Starting from a seed set S ⊆ V, influence propagates in G as follows:
nodes in S are activated at time 0, and at time t ≥ 1, a node u activated in step t − 1 has one chance to activate its inactive out-neighbor v with an independent probability p(u, v). The influence spread of seed set S, σ(S), is the expected number of activated nodes after the propagation ends. The offline problem of influence maximization is to find at most k seed nodes in G such that the influence spread is maximized. Kempe et al. [12] provide a greedy algorithm with approximation ratio 1 − 1/e − ε and success probability 1 − 1/|V|, for any ε > 0.

For the online influence maximization bandit [7], the edge probabilities p(u, v) are unknown and need to be learned over time through repeated influence maximization tasks: in each round t, k seed nodes S_t are selected, the influence propagation from S_t is observed, the reward is the number of nodes activated in this round, and one wants to repeat this process to cumulate as much reward as possible. Putting it into the CMAB-T framework, the set of edges E is the set of arms [m], and their outcome distribution D is the joint distribution of m independent Bernoulli distributions with means p(u, v) for all (u, v) ∈ E. Any seed set S ⊆ V with at most k nodes is an action. The triggered arm set τ_t is the set of edges (u, v) reached by the propagation, that is, such that u can be reached from S_t by passing through only edges e ∈ E with X_e^(t) = 1. In this case, the distribution D_trig(S_t, X^(t)) degenerates to a deterministic triggered set. The reward R(S_t, X^(t), τ_t) equals the number of nodes in V that are reachable from S through only edges e ∈ E with X_e^(t) = 1, and the expected reward is exactly the influence spread σ(S_t). The offline oracle is a (1 − 1/e − ε, 1/|V|)-approximation greedy algorithm. We remark that the general triggered-set distribution D_trig(S_t, X^(t)) (together with an infinite action space) can be used to model extended versions of influence maximization, such as randomly selected seed sets in general marketing actions [12] and continuous influence maximization [26] (see the supplementary material).

Now let us consider combinatorial cascading bandits [14, 16]. In this case, we have m independent Bernoulli random variables X_1, ..., X_m as base arms. An action is to select an ordered sequence from a subset of these arms satisfying certain constraints. Playing this action means that the player reveals the outcomes of the arms one by one following the sequence order, until a certain stopping condition is satisfied. The feedback is the outcomes of the revealed arms, and the reward is a function of these outcomes. In particular, in the disjunctive form the player stops when the first 1 is revealed and gains a reward of 1, or she reaches the end and gains reward 0. In the conjunctive form, the player stops when the first 0 is revealed (and receives reward 0) or she reaches the end with all 1 outcomes (and receives reward 1). Cascading bandits can be used to model online recommendation and advertising (in the disjunctive form, with outcome 1 as a click) or network routing reliability (in the conjunctive form, with outcome 0 as the routing edge being broken). It is straightforward to see that cascading bandits fit into the CMAB-T framework: the m variables are base arms, ordered sequences are actions, and the triggered set is the prefix set of arms revealed until the stopping condition holds.
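To make the triggered-set semantics concrete, the following is a minimal Python sketch, our own illustration rather than anything from the paper, of a disjunctive cascading bandit viewed as a CMAB-T environment. The class and method names are hypothetical; the environment returns the reward, the triggered prefix τ_t, and the semi-bandit feedback exactly as described above.

```python
import random

class DisjunctiveCascadeEnv:
    """Toy CMAB-T environment: arms are independent Bernoullis; an action is
    an ordered list of arm indices; arms are revealed until the first 1."""

    def __init__(self, mu):
        self.mu = mu  # true arm means (unknown to the learner)

    def play(self, action):
        """Play an ordered sequence of arm indices.
        Returns (reward, triggered_arms, feedback) in the semi-bandit sense."""
        triggered, feedback = [], {}
        for i in action:
            x = 1 if random.random() < self.mu[i] else 0
            triggered.append(i)      # arm i was triggered, so it is observed
            feedback[i] = x
            if x == 1:               # disjunctive stop: first success ends the scan
                return 1.0, triggered, feedback
        return 0.0, triggered, feedback  # reached the end with all zeros

# Arms later in the sequence are triggered only if all earlier arms fail, so
# the triggering probability of the j-th arm is prod over earlier arms of (1 - mu).
env = DisjunctiveCascadeEnv(mu=[0.2, 0.5, 0.1])
print(env.play([0, 1, 2]))
```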
3 Triggering-Probability Modulated Conditions

Chen et al. [7] use two conditions to guarantee the theoretical regret bounds. The first one is monotonicity, which we also use in this paper, and is restated below.

Condition 1 (Monotonicity). We say that a CMAB-T problem instance satisfies monotonicity if, for any action S ∈ S and any two distributions D, D′ ∈ 𝒟 with expectation vectors μ = (μ_1, ..., μ_m) and μ′ = (μ′_1, ..., μ′_m), we have r_S(μ) ≤ r_S(μ′) whenever μ_i ≤ μ′_i for all i ∈ [m].

The second condition is bounded smoothness. One key contribution of our paper is to properly strengthen the original bounded smoothness condition in [7], so that we can both get rid of the undesired 1/p* term in the regret bound and guarantee that many CMAB problems still satisfy the condition. Our important change is to use triggering probabilities to modulate the condition, and thus we call such conditions triggering-probability modulated (TPM) conditions. The key point of TPM conditions is including the triggering probability in the condition. We use p_i^{D,S} to denote the probability that action S triggers arm i when the environment instance is D. With this definition, we can also technically define p* as p* = inf_{i∈[m], S∈S, p_i^{D,S}>0} p_i^{D,S}. In this section, we further use 1-norm based conditions instead of the infinity-norm based condition in [7], since they lead to better regret bounds for the influence maximization and cascading bandits.

Condition 2 (1-Norm TPM Bounded Smoothness). We say that a CMAB-T problem instance satisfies 1-norm TPM bounded smoothness if there exists B ∈ R+ (referred to as the bounded smoothness constant) such that, for any two distributions D, D′ ∈ 𝒟 with expectation vectors μ and μ′, and any action S, we have |r_S(μ) − r_S(μ′)| ≤ B · Σ_{i∈[m]} p_i^{D,S} |μ_i − μ′_i|.

Note that the corresponding non-TPM version of the above condition would remove p_i^{D,S} from the condition, which is a generalization of the linear condition used in linear bandits [15]. Thus, the TPM version is clearly stronger than the non-TPM version (when the bounded smoothness constants are the same). The intuition for incorporating the triggering probability p_i^{D,S} to modulate the 1-norm condition is that, when an arm i is unlikely to be triggered by action S (small p_i^{D,S}), the importance of arm i also diminishes, in that a large change in μ_i only causes a small change in the expected reward r_S(μ). This property sounds natural in many applications, and it is important for bandit learning: although an arm i may be difficult to observe when playing S, it is also not important to the expected reward of S and thus does not need to be learned as accurately as other arms more easily triggered by S.

4 CUCB Algorithm and Regret Bound with TPM Bounded Smoothness

We use the same CUCB algorithm as in [7] (Algorithm 1). The algorithm maintains the empirical estimate μ̂_i for the true mean μ_i, and feeds the upper confidence bound μ̄_i to the offline oracle to obtain the next action S to play.

Algorithm 1 CUCB with computation oracle.
Input: m, Oracle
1: For each arm i, T_i ← 0 {maintain the total number of times arm i is played so far}
2: For each arm i, μ̂_i ← 1 {maintain the empirical mean of X_i}
3: for t = 1, 2, 3, ... do
4:   For each arm i ∈ [m], ρ_i ← sqrt(3 ln t / (2 T_i)) {the confidence radius; ρ_i = +∞ if T_i = 0}
5:   For each arm i ∈ [m], μ̄_i ← min{μ̂_i + ρ_i, 1} {the upper confidence bound}
6:   S ← Oracle(μ̄_1, ..., μ̄_m)
7:   Play action S, which triggers a set τ ⊆ [m] of base arms with feedback X_i^(t), i ∈ τ
8:   For every i ∈ τ, update T_i and μ̂_i: T_i = T_i + 1, μ̂_i = μ̂_i + (X_i^(t) − μ̂_i)/T_i
9: end for
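As a concrete reading of Algorithm 1, here is a minimal Python sketch, our own illustration and not the authors' code, of CUCB run on the DisjunctiveCascadeEnv class from the earlier sketch. The confidence radius follows the pseudocode above; the stand-in oracle simply takes the k arms with the largest upper confidence bounds, which is exact for a disjunctive cascade of fixed length, since the expected reward 1 − prod(1 − μ_i) depends only on which arms are chosen.

```python
import math

def cucb(env, m, oracle, num_rounds):
    """CUCB with a computation oracle (a sketch of Algorithm 1).
    env.play(action) must return (reward, triggered_arms, feedback)."""
    T = [0] * m            # play counts
    mu_hat = [1.0] * m     # empirical means, optimistically initialized
    total_reward = 0.0
    for t in range(1, num_rounds + 1):
        # Upper confidence bounds with radius sqrt(3 ln t / (2 T_i))
        mu_bar = []
        for i in range(m):
            rho = math.inf if T[i] == 0 else math.sqrt(3 * math.log(t) / (2 * T[i]))
            mu_bar.append(min(mu_hat[i] + rho, 1.0))
        action = oracle(mu_bar)                     # offline (alpha, beta)-oracle
        reward, triggered, feedback = env.play(action)
        total_reward += reward
        for i in triggered:                         # update only triggered arms
            T[i] += 1
            mu_hat[i] += (feedback[i] - mu_hat[i]) / T[i]
    return total_reward, mu_hat

def top_k_oracle(mu_bar, k=2):
    """Exact oracle for disjunctive cascades of length k: pick the k largest UCBs."""
    order = sorted(range(len(mu_bar)), key=lambda i: -mu_bar[i])
    return order[:k]

env = DisjunctiveCascadeEnv(mu=[0.2, 0.5, 0.1])   # from the previous sketch
print(cucb(env, m=3, oracle=lambda mb: top_k_oracle(mb, k=2), num_rounds=2000))
```

In a real CMAB-T application, the oracle would be the problem-specific (α, β)-approximation algorithm, e.g., the greedy algorithm for influence maximization.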
The upper confidence bound μ̄_i is large if arm i is not triggered often (T_i is small), providing optimistic estimates for less observed arms. We next provide its regret bound.

Definition 2 (Gap). Fix a distribution D and its expectation vector μ. For each action S, we define the gap Δ_S = max(0, α · opt_μ − r_S(μ)). For each arm i, we define

Δ_i^min = inf_{S∈S: p_i^{D,S} > 0, Δ_S > 0} Δ_S,    Δ_i^max = sup_{S∈S: p_i^{D,S} > 0, Δ_S > 0} Δ_S.

As a convention, if there is no action S such that p_i^{D,S} > 0 and Δ_S > 0, we define Δ_i^min = +∞ and Δ_i^max = 0. We define Δ_min = min_{i∈[m]} Δ_i^min and Δ_max = max_{i∈[m]} Δ_i^max.

Let S̃ = {i ∈ [m] | p_i^{μ,S} > 0} be the set of arms that could be triggered by S. Let K̃ = max_{S∈S} |S̃|. For convenience, we use ⌈x⌉₀ to denote max{⌈x⌉, 0} for any real number x.

Theorem 1. For the CUCB algorithm on a CMAB-T problem instance that satisfies monotonicity (Condition 1) and 1-norm TPM bounded smoothness (Condition 2) with bounded smoothness constant B:

(1) if Δ_min > 0, we have the distribution-dependent bound

Reg_{μ,α,β}(T) ≤ Σ_{i∈[m]} (576 B² K̃ ln T) / Δ_i^min + Σ_{i∈[m]} (⌈log₂(2BK̃/Δ_min)⌉₀ + 2) · (π²/6) · Δ_max + 4Bm;  (1)

(2) we have the distribution-independent bound

Reg_{μ,α,β}(T) ≤ 12B √(m K̃ T ln T) + (⌈log₂(T/(18 ln T))⌉₀ + 2) · m · (π²/6) · Δ_max + 2Bm.  (2)

For the above theorem, we remark that the regret bounds are tight (up to a O(√(log T)) factor in the case of the distribution-independent bound), based on a lower bound result in [15]. More specifically, Kveton et al. [15] show that for linear bandits (a special class of CMAB-T without probabilistic triggering), the distribution-dependent regret is lower bounded by Ω((m−K)(K/Δ) log T), and the distribution-independent regret is lower bounded by Ω(√(mKT)) when T ≥ m/K, for some instance where Δ_i^min = Δ < ∞ for all i ∈ [m]. Comparing with our regret upper bound in the above theorem: (a) for the distribution-dependent bound, we have the regret upper bound O((m−K)(K/Δ) log T), since for that instance B = 1 and there are K arms with Δ_i^min = ∞, so our bound is tight with the lower bound in [15]; and (b) for the distribution-independent bound, we have the regret upper bound O(√(mKT log T)), tight to the lower bound up to a O(√(log T)) factor, the same as the upper bound for linear bandits in [15]. This indicates that the parameters m and K appearing in the above regret bounds are all needed. As for parameter B, we can view it simply as a scaling parameter. If we scale the rewards of an instance to be B times larger than before, certainly the regret is B times larger. Looking at the distribution-dependent regret bound (Eq. (1)), Δ_i^min would also be scaled by a factor of B, canceling one B factor from B², and Δ_max is also scaled by a factor of B, so the regret bound in Eq. (1) is also scaled by a factor of B. In the distribution-independent regret bound (Eq. (2)), the scaling by B is more direct. Therefore, we can see that all parameters m, K, and B appearing in the above regret bounds are needed.

4.1 Novel Ideas in the Regret Analysis

Due to the space limit, the full proof of Theorem 1 is moved to the supplementary material. Here we briefly explain the novel aspects of our analysis that allow us to achieve new regret bounds and differentiate us from previous analyses such as the ones in [7] and [16, 15]. We first give an intuitive explanation of how to incorporate the TPM bounded smoothness condition to remove the factor 1/p* from the regret bound.

Consider a simple illustrative example of two actions S_0 and S, where S_0 has a fixed reward r_0 as a reference action, and S has a stochastic reward depending on the outcomes of its triggered base arms. Let S̃ be the set of arms that can be triggered by S. Suppose i can be triggered by action S with probability p_i^S, its true mean is μ_i, and its empirical mean at the end of round t is μ̂_{i,t}. The analysis in [7] would need the property that, if |μ̂_{i,t} − μ_i| ≤ δ_i for all i ∈ S̃ for some properly defined δ_i, then S no longer generates regret. The analysis would conclude that arm i needs to be triggered Ω(log T / δ_i²) times for the above condition to hold. Since arm i is only triggered with probability p_i^S, this means action S may need to be played Ω(log T / (p_i^S δ_i²)) times. This is the essential reason why the factor 1/p* appears in the regret bound. Now with the TPM bounded smoothness, we know that the impact of |μ̂_{i,t} − μ_i| ≤ δ_i on the difference in expected reward is only p_i^S δ_i; equivalently, we can relax the requirement to |μ̂_{i,t} − μ_i| ≤ δ_i / p_i^S to achieve the same effect as in the previous analysis. This translates to the result that action S generates regret in at most O(log T / (p_i^S (δ_i/p_i^S)²)) = O(p_i^S log T / δ_i²) rounds.

We then need to handle the case where multiple actions can trigger arm i. The simple addition of Σ_{S: p_i^S > 0} p_i^S log T / δ_i² is not feasible, since there may be exponentially or even infinitely many such actions. Instead, we introduce the key idea of triggering probability groups, such that the above actions are divided into groups by putting their triggering probabilities p_i^S
Consider a simple illustrative example of two actions S0 and S, where S0 has a fixed reward r0 as a reference action, and S has a stochastic reward depending on the outcomes of its triggered base arms. Let S? be the set of arms that can be triggered ? suppose i can be triggered by action S with probability pS , and its true mean is ?i by S. For i ? S, i and its empirical mean at the end of round t is ? ?i,t . The analysis in [7] would need a property that, if for all i ? S? |? ?i,t ? ?i | ? ?i for some properly defined ?i , then S no longer generates regrets. The analysis would conclude that arm i needs to be triggered ?(log T /?i2 ) times for the above condition to happen. Since arm i is only triggered with probability pSi , it means action S may need to be played ?(log T /(pSi ?i2 )) times. This is the essential reason why the factor 1/p? appears in the regret bound. Now with the TPM bounded smoothness, we know that the impact of |? ?i,t ? ?i | ? ?i to the difference in the expected reward is only pSi ?i , or equivalently, we could relax the requirement to |? ?i,t ? ?i | ? ?i /pSi to achieve the same effect as in the previous analysis. This translates to the result that action S would generate regret in at most O(log T /(pSi (?i /pSi )2 )) = O(pSi log T /?i2 ) rounds. We then need to handle P the case when we have multiple actions that could trigger arm i. The simple addition of S:pS >0 pSi log T /?i2 is not feasible since we may have exponentially or even i infinitely many such actions. Instead, we introduce the key idea of triggering probability groups, such that the above actions are divided into groups by putting their triggering probabilities pSi 6 into geometrically separated bins: (1/2, 1], (1/4, 1/2] . . . , (2?j , 2?j+1 ], . . . The actions in the same group would generate regret in at most O(2?j+1 log T /?i2 ) rounds with a similar argument, and P summing up together, they could generate regret in at most O( j 2?j+1 log T /?i2 ) = O(log T /?i2 ) rounds. Therefore, the factor of 1/pSi or 1/p? is completely removed from the regret bound. Next, we briefly explain our idea to achieve the improved bound over the linear bandit result in [15]. The key step is to bound regret ?St generated in round t. By a derivation similar to [15, 7] together with the 1-norm TPM bounded smoothness condition, we would obtain that ?St ? P t B i?S?t pD,S (? ?i,t ? ?i ) with high probability. The analysis in [15] would analyze the errors i |? ?i,t ? ?i | by a cascade of infinitely many sub-cases of whether there are xj arms with errors larger than yj with decreasing yj , but it may still be loose. Instead we directly work on the above summation. Naive bounding the about error summation would not give a O(log T ) bound because there could be too many arms with small errors. Our trick is to use a reverse amortization: we cumulate small errors on many sufficiently sampled arms and treat them as errors of insufficiently sample arms, such that an arm sampled O(log T ) times would not contribute toward the regret. This trick tightens our analysis and leads to significantly improved constant factors. 4.2 Applications to Influence Maximization and Combinatorial Cascading Bandits The following two lemmas show that both the cascading bandits and the influence maximization bandit satisfy the TPM condition. Lemma 1. For both disjunctive and conjunctive cascading bandit problem instances, 1-norm TPM bounded smoothness (Condition 2) holds with bounded smoothness constant B = 1. Lemma 2. 
For the influence maximization bandit problem instances, 1-norm TPM bounded smoothness (Condition 2) holds with bounded smoothness constant B = C̃, where C̃ is the largest number of nodes any node can reach in the directed graph G = (V, E).

The proof of Lemma 1 involves a technique called bottom-up modification. Each action in cascading bandits can be viewed as a chain from top to bottom. When changing the means of arms below, the triggering probability of arms above is not changed. Thus, if we change μ to μ′ backwards, the triggering probability of each arm is unaffected before its expectation is changed, and when changing the mean of an arm i, the expected reward of the action changes by at most p_i^{D,S} |μ′_i − μ_i|.

The proof of Lemma 2 is more complex, since the bottom-up modification does not work directly on graphs with cycles. To circumvent this problem, we develop an influence tree decomposition technique as follows. First, we order all influence paths from the seed set S to a target v. Second, each edge is independently sampled based on its edge probability to form a random live-edge graph. Third, we divide the reward portion of activating v among all paths from S to v: for each live-edge graph L in which v is reachable from S, assign the probability of L to the first path from S to v in L according to the path total order. Finally, we compose all the paths from S to v into a tree with S as the root and copies of v as the leaves, so that we can do bottom-up modification on this tree and properly trace the reward changes based on the reward division we made among the paths.

4.3 Discussions and Comparisons

We now discuss the implications of Theorem 1 together with Lemmas 1 and 2 by comparing them with several existing results.

Comparison with [7] and CMAB with ∞-norm bounded smoothness conditions. Our work is a direct adaptation of the study in [7]. Compared with [7], the regret bounds in Theorem 1 do not depend on the inverse of the triggering probabilities, which is the main issue in [7]. When applied to the influence maximization bandit, our result is strictly stronger than that of [7] in two aspects: (a) we remove the factor of 1/p* by using the TPM condition; (b) we reduce a factor of |E| and √|E| in the dominant terms of the distribution-dependent and -independent bounds, respectively, due to our use of 1-norm instead of the ∞-norm conditions used in Chen et al. [7]. In the supplementary material, we further provide the corresponding ∞-norm TPM bounded smoothness conditions and the regret bound results, since in general the two sets of results do not imply each other.

Comparison with [25] on influence maximization bandits. Let G = (V, E) be the social graph we consider. By Lemma 2, our Theorem 1 can be applied to the influence maximization bandit with B = C̃, which gives concrete O(log T) distribution-dependent and O(√(T log T)) distribution-independent bounds for the influence maximization bandit. Wen et al. [25] also study the influence maximization bandit aiming at eliminating the exponential factor 1/p*, but they only provide distribution-independent regret bounds when the graph is a forest. Therefore, our result on the influence maximization bandit is much more general than theirs. Even limiting our result to the forest case, our result is still better, as we now explain. When the graph is a bidirectional forest, C̃ is the size of the largest connected component in the forest. Then we can apply Theorem 1 to obtain
the distribution-independent regret bound as O(|E| √(C̃ T log T)). In contrast, their regret bound is Õ(|E| C_∗ √T), where C_∗ is a parameter with complicated dependency on the graph topology and edge influence probabilities. Clearly, our regret bound is O(√(log T)) better than theirs in terms of the parameter T. When comparing C̃ with C_∗, we have C̃ ≤ |V| and C_∗ ≥ |V|, and thus our worst-case bound is better than their worst-case bound by an additional O(√|V|) factor. They do not provide simple analytical properties of C_∗, but for the three simple graph examples they gave, our parameter C̃ is either comparable to or better than C_∗. Wen et al. [25] also study a generalization with linear transformation of edge probabilities, which is orthogonal to our current discussion and could potentially be incorporated into the general CMAB-T framework.

Comparison with [16] on combinatorial cascading bandits. By Lemma 1, we can apply Theorem 1 to combinatorial conjunctive and disjunctive cascading bandits with bounded smoothness constant B = 1, achieving O(Σ_i (1/Δ_i^min) K log T) distribution-dependent and O(√(mKT log T)) distribution-independent regret. In contrast, besides having exactly these terms, the results in [16] have an extra factor of 1/f*, where f* = Π_{i∈S*} p(i) for conjunctive cascades and f* = Π_{i∈S*} (1 − p(i)) for disjunctive cascades, with S* being the optimal solution and p(i) the success probability of item (arm) i. For conjunctive cascades, f* could be reasonably close to 1 in practice, as argued in [16], but for disjunctive cascades, f* could be exponentially small, since items in optimal solutions typically have large p(i) values. Therefore, our result completely removes the dependency on 1/f* and is better than their result. Moreover, we also have much smaller constant factors, owing to the new reverse amortization method described in Section 4.1.

Comparison with [15] on linear bandits. When there are no probabilistically triggered arms (i.e., p* = 1), Theorem 1 yields tighter bounds, since some of the analysis dealing with probabilistic triggering is not needed. In particular, in Eq. (1) the leading constant 624 would be reduced to 48, the ⌈log₂ x⌉₀ term is gone, and 6Bm becomes 2Bm; in Eq. (2) the leading constant 50 is reduced to 14, and the other changes are the same as above (see the supplementary material). The result itself is also a new contribution, since it generalizes the linear bandit results of [15] to general 1-norm conditions with matching regret bounds, while significantly reducing the leading constants (their constants are 534 and 47 for the distribution-dependent and -independent bounds, respectively). This improvement comes from the new reverse amortization method described in Section 4.1.

5 Lower Bound of the General CMAB-T Model

In this section, we show that there exists a CMAB-T problem instance such that the regret bound in [7] is tight, i.e., the factor 1/p* in the distribution-dependent bound and √(1/p*) in the distribution-independent bound are unavoidable, where p* is the minimum positive probability that any base arm i is triggered by any action S. This also implies that the TPM bounded smoothness condition may not apply to all CMAB-T instances. For our purposes, we only need a simplified version of the bounded smoothness condition of [7], as follows: there exists a bounded smoothness constant B such that, for every action S and every pair of mean outcome vectors μ and μ′, we have |r_S(μ) − r_S(μ′)| ≤ B max_{i∈S̃} |μ_i − μ′_i|, where S̃
is the set of arms that could possibly be triggered by S.

We prove the lower bounds using the following CMAB-T problem instance ([m], S, 𝒟, D_trig, R). For each base arm i ∈ [m], we define an action S_i, with the set of actions S = {S_1, ..., S_m}. The family of distributions 𝒟 consists of the distributions generated by every μ ∈ [0,1]^m such that the arms are independent Bernoulli variables. When playing action S_i in round t, with a fixed probability p, arm i is triggered and its outcome X_i^(t) is observed, and the reward of playing S_i is p^{-1} X_i^(t); otherwise, with probability 1 − p, no arm is triggered, no feedback is observed, and the reward is 0. In the CMAB-T framework, this means that D_trig(S_i, X), as a distribution over the subsets of [m], is either {i} with probability p or ∅ with probability 1 − p, and the reward is R(S_i, X, τ) = p^{-1} X_i · I{τ = {i}}. The expected reward is r_{S_i}(μ) = p · p^{-1} μ_i = μ_i. So this instance satisfies the above bounded smoothness condition with constant B = 1. We denote the above instance as FTP(p), standing for fixed triggering probability instance. This instance is similar to the position-based model [17] with only one position, but the feedback is different.

For the FTP(p) instance, we have p* = p and r_{S_i}(μ) = μ_i. Then applying the result in [7], we have the distribution-dependent upper bound O(Σ_i p^{-1} log T / Δ_i^min) and the distribution-independent upper bound O(√(p^{-1} m T log T)).

We first provide the distribution-independent lower bound result.

Theorem 2. Let p be a real number with 0 < p < 1. Then for any CMAB-T algorithm A, if T ≥ 6 p^{-1}, there exists a CMAB-T environment instance D with mean μ such that on instance FTP(p),

Reg_μ^A(T) ≥ (1/170) · √(mT/p).

The proofs of the above and the next theorem are both based on results for classical MAB problems. Comparing with the upper bound O(√(p^{-1} m T log T)) obtained from [7], Theorem 2 implies that the regret upper bound of CUCB in [7] is tight up to a O(√(log T)) factor. This means that the 1/p* factor in the regret bound of [7] cannot be avoided in the general class of CMAB-T problems.

Next, we give the distribution-dependent lower bound. For a learning algorithm, we say that it is consistent if, for every μ, every non-optimal arm is played o(T^a) times in expectation, for any real number a > 0. Then we have the following distribution-dependent lower bound.

Theorem 3. For any consistent algorithm A running on instance FTP(p) with μ_i < 1 for every arm i, we have

liminf_{T→+∞} Reg_μ^A(T) / ln T ≥ Σ_{i: μ_i < μ*} p^{-1} Δ_i / kl(μ_i, μ*),

where μ* = max_i μ_i, Δ_i = μ* − μ_i, and kl(·, ·) is the Kullback-Leibler divergence function.

Again we see that the distribution-dependent upper bound obtained from [7] asymptotically matches the lower bound above. Finally, we remark that even if we rescale the reward from [0, 1/p] back to [0, 1], the corresponding scaling factor B would become p, and thus we would still obtain the conclusion that the regret bounds in [7] are tight (up to a O(√(log T)) factor), and thus 1/p* is in general needed in those bounds.
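To make the FTP(p) construction tangible, here is a minimal Python sketch, our own illustration with hypothetical names, of the lower-bound instance. It also checks numerically that the expected reward of playing S_i is μ_i even though arm i is observed only in a p fraction of the plays, which is exactly why small p makes learning slow.

```python
import random

class FTPInstance:
    """Sketch of the FTP(p) instance: playing S_i triggers arm i with
    probability p (reward X_i / p), and otherwise triggers nothing (reward 0)."""

    def __init__(self, mu, p):
        self.mu, self.p = mu, p

    def play(self, i):
        if random.random() < self.p:                    # arm i is triggered
            x = 1 if random.random() < self.mu[i] else 0
            return x / self.p, [i], {i: x}
        return 0.0, [], {}                              # nothing observed

# Sanity check: E[reward of S_1] = p * (mu_1 / p) = mu_1 = 0.7,
# while arm 1 is observed in only about 5% of the plays.
inst = FTPInstance(mu=[0.3, 0.7], p=0.05)
est = sum(inst.play(1)[0] for _ in range(200000)) / 200000
print(est)  # approximately 0.7
```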
6 Conclusion and Future Work

In this paper, we propose the TPM bounded smoothness condition, which conveys the intuition that an arm that is difficult to trigger is also less important in determining the optimal solution. We show that this condition is essential to guarantee low regret, and prove that important applications, such as influence maximization bandits and combinatorial cascading bandits, all satisfy this condition. There are several directions one may further pursue. One is to improve the regret bound for some specific problems. For example, for the influence maximization bandit, can we give a better algorithm or analysis to achieve a better regret bound than the one provided by the general TPM condition? Another direction is to look into other applications with probabilistically triggered arms that may not satisfy the TPM condition or need other conditions to guarantee low regret. Combining the current CMAB-T framework with the linear generalization as in [25] to achieve a scalable learning result is also an interesting direction.

Acknowledgment

Wei Chen is partially supported by the National Natural Science Foundation of China (Grant No. 61433014).

References
[1] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[2] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.
[3] Donald A. Berry and Bert Fristedt. Bandit Problems: Sequential Allocation of Experiments. Chapman and Hall, 1985.
[4] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[5] Wei Chen, Laks V. S. Lakshmanan, and Carlos Castillo. Information and Influence Propagation in Social Networks. Morgan & Claypool Publishers, 2013.
[6] Wei Chen, Wei Hu, Fu Li, Jian Li, Yu Liu, and Pinyan Lu. Combinatorial multi-armed bandit with general reward functions. In NIPS, 2016.
[7] Wei Chen, Yajun Wang, Yang Yuan, and Qinshi Wang. Combinatorial multi-armed bandit and its extension to probabilistically triggered arms. Journal of Machine Learning Research, 17(50):1–33, 2016. A preliminary version appeared as Chen, Wang, and Yuan, "Combinatorial multi-armed bandit: General framework, results and applications", ICML 2013.
[8] Richard Combes, M. Sadegh Talebi, Alexandre Proutiere, and Marc Lelarge. Combinatorial bandits revisited. In NIPS, 2015.
[9] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. Combinatorial network optimization with unknown variables: Multi-armed bandits with linear rewards and individual observations. IEEE/ACM Transactions on Networking, 20, 2012.
[10] Aditya Gopalan, Shie Mannor, and Yishay Mansour. Thompson sampling for complex online problems. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.
[11] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
[12] David Kempe, Jon M. Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 137–146, 2003.
[13] Branislav Kveton, Zheng Wen, Azin Ashkan, Hoda Eydgahi, and Brian Eriksson. Matroid bandits: Fast combinatorial optimization with learning. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence (UAI), 2014.
[14] Branislav Kveton, Csaba Szepesvári, Zheng Wen, and Azin Ashkan. Cascading bandits: Learning to rank in the cascade model. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
[15] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári. Tight regret bounds for stochastic combinatorial semi-bandits.
In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS), 2015.
[16] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári. Combinatorial cascading bandits. In Advances in Neural Information Processing Systems, 2015.
[17] Paul Lagrée, Claire Vernade, and Olivier Cappé. Multiple-play bandits in the position-based model. In Advances in Neural Information Processing Systems, pages 1597–1605, 2016.
[18] Siyu Lei, Silviu Maniu, Luyi Mo, Reynold Cheng, and Pierre Senellart. Online influence maximization. In KDD, 2015.
[19] Michael Mitzenmacher and Eli Upfal. Probability and Computing. Cambridge University Press, 2005.
[20] Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 55:527–535, 1952.
[21] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[22] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
[23] Sharan Vaswani, Laks V. S. Lakshmanan, and Mark Schmidt. Influence maximization with bandits. In NIPS Workshop on Networks in the Social and Information Sciences, 2015.
[24] Sharan Vaswani, Branislav Kveton, Zheng Wen, Mohammad Ghavamzadeh, Laks V. S. Lakshmanan, and Mark Schmidt. Diffusion independent semi-bandit influence maximization. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017. To appear.
[25] Zheng Wen, Branislav Kveton, and Michal Valko. Influence maximization with semi-bandit feedback. CoRR, abs/1605.06593v1, 2016.
[26] Yu Yang, Xiangbo Mao, Jian Pei, and Xiaofei He. Continuous influence maximization: What discounts should we offer to social network users? In Proceedings of the 2016 International Conference on Management of Data (SIGMOD), 2016.
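As a small numerical companion to Lemma 1, the following sketch, our own illustration rather than the authors' proof, checks the 1-norm TPM condition with B = 1 on random disjunctive cascades. For a sequence of arms with means μ, the exact expected reward is r_S(μ) = 1 − Π_i (1 − μ_i), and the triggering probability of the j-th arm is Π_{k<j} (1 − μ_k), since it is revealed only if all earlier arms fail.

```python
import random

def expected_reward(mu):                 # disjunctive cascade on the full sequence
    prod = 1.0
    for m in mu:
        prod *= (1.0 - m)
    return 1.0 - prod

def trigger_probs(mu):                   # arm j is revealed iff all earlier arms fail
    probs, prefix = [], 1.0
    for m in mu:
        probs.append(prefix)
        prefix *= (1.0 - m)
    return probs

random.seed(0)
for _ in range(10000):
    k = random.randint(1, 6)
    mu1 = [random.random() for _ in range(k)]
    mu2 = [random.random() for _ in range(k)]
    lhs = abs(expected_reward(mu1) - expected_reward(mu2))
    rhs = sum(p * abs(a - b) for p, a, b in zip(trigger_probs(mu1), mu1, mu2))
    assert lhs <= rhs + 1e-12             # 1-norm TPM condition with B = 1
print("TPM condition held on all random trials")
```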
Predictive-State Decoders: Encoding the Future into Recurrent Networks

Arun Venkatraman(1)*, Nicholas Rhinehart(1)*, Wen Sun(1), Lerrel Pinto(1), Martial Hebert(1), Byron Boots(2), Kris M. Kitani(1), J. Andrew Bagnell(1)
(1) The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
(2) School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA

Abstract

Recurrent neural networks (RNNs) are a vital modeling technique that relies on internal states learned indirectly by optimization of a supervised, unsupervised, or reinforcement training loss. RNNs are used to model dynamic processes that are characterized by underlying latent states whose form is often unknown, precluding its analytic representation inside an RNN. In the Predictive-State Representation (PSR) literature, latent state processes are modeled by an internal state representation that directly models the distribution of future observations, and most recent work in this area has relied on explicitly representing and targeting sufficient statistics of this probability distribution. We seek to combine the advantages of RNNs and PSRs by augmenting existing state-of-the-art recurrent neural networks with Predictive-State Decoders (PSDs), which add supervision to the network's internal state representation to target predicting future observations. PSDs are simple to implement and easily incorporated into existing training pipelines via additional loss regularization. We demonstrate the effectiveness of PSDs with experimental results in three different domains: probabilistic filtering, Imitation Learning, and Reinforcement Learning. In each, our method improves statistical performance of state-of-the-art recurrent baselines, and does so with fewer iterations and less data.

1 Introduction

Despite their wide success in a variety of domains, recurrent neural networks (RNNs) are often inhibited by the difficulty of learning an internal state representation. Internal state is a unifying characteristic of RNNs, as it serves as an RNN's memory. Learning these internal states is challenging, because optimization is guided only by the indirect signal of the RNN's target task, such as maximizing the cost-to-go for reinforcement learning or maximizing the likelihood of a sequence of words. These target tasks have a latent state sequence that characterizes the underlying sequential data-generating process. Unfortunately, most settings do not afford a parametric model of latent state that is available to the learner. However, recent work has shown that in certain settings, latent states can be characterized by observations alone [8, 24, 26], which are almost always available to recurrent models. In such partially-observable problems (e.g. Fig. 1a), a single observation is not guaranteed to contain enough information to fully represent the system's latent state. For example, a single image of a robot is insufficient to characterize its latent velocity and acceleration. While a latent state parametrization may be known in some domains (e.g., a simple pendulum can be sufficiently modeled by its angle and angular velocity (θ, θ̇)), data from most domains cannot be explicitly parametrized.

* Equally contributed to this work. Direct correspondence to: {arunvenk,nrhineha}@cs.cmu.edu

(a) The process generating sequential data has latent state s_t, which generates the next latent state s_{t+1}.
s_t is usually unknown but generates the observations x_t, which are used to learn a model for the system. (b) An overview of our approach for modelling the process from Fig. 1a. We attach a decoder to the internal state of an RNN to predict statistics of future observations x_t to x_{t+k} observed at training time.

Figure 1: Data generation process and proposed model

In lieu of ground-truth access to latent states, recurrent neural networks [32, 47] employ internal states to summarize previous data, serving as a learner's memory. We avoid the terminology "hidden state", as it refers to the internal state in the RNN literature but to the latent state in the HMM, PSR, and related literature. Internal states are modified towards the target application's loss, e.g., minimizing observation loss in filtering or maximizing cumulative reward in reinforcement learning. The target application's loss is not directly defined over the internal states: they are updated via the chain rule (backpropagation) through the global loss. Although this modeling is indirect, recurrent networks nonetheless can achieve state-of-the-art results on many robotics [18, 23], vision [34, 50], and natural language tasks [15, 20, 38] when training succeeds. However, recurrent model optimization is hampered by two main difficulties: 1) non-convexity, and 2) the loss does not directly encourage the internal state to model the latent state. A poor internal state representation can yield poor task performance, but rarely does the task objective directly measure the quality of the internal state.

Predictive-State Representations (PSRs) [8, 24, 26] offer an alternative internal state representation to that of RNNs in terms of the available observations. Spectral learning methods for PSRs provide theoretical guarantees on discovering the global optimum for the model and internal state parameters under the assumptions of infinite training data and realizability. However, in the non-realizable setting, i.e. under model mismatch (e.g., using learned parameters of a linear system model for a non-linear system), these algorithms lose any performance guarantees on using the learned model for the target inference tasks. Extensions to handle nonlinear systems rely on RKHS embeddings [43], which themselves can be computationally infeasible to use with large datasets. Nevertheless, when these models are trainable, they often achieve strong performance [24, 45]; the structure they impose significantly simplifies the learning problem.

We leverage ideas from both the RNN and PSR paradigms, resulting in a marriage of two orthogonal sequential modeling approaches. When training an RNN, PREDICTIVE-STATE DECODERS (Fig. 1b) provide direct supervision on the internal state, aiding the training problem. The proposed method can be viewed as an instance of Multi-Task Learning (MTL) [13] and self-supervision [27], using the inputs to the learner to form a secondary unsupervised objective. Our contribution is a general method that improves performance of learning RNNs for sequential prediction problems. The approach is easy to implement as a regularizer on traditional RNN loss functions with little overhead and can thus be incorporated into a variety of existing recurrent models.
In our experiments, we examine three domains where recurrent models are used to model temporal dependencies: probabilistic filtering, where we predict the future observation given past observations; Imitation Learning, where the learner attempts to mimic an expert's actions; and Reinforcement Learning, where a policy is trained to maximize cumulative reward. We observe that our method improves loss convergence rates and results in higher-quality final objectives in these domains.

2 Latent State Space Models

To model sequential prediction problems, it is common to cast the problem into the Markov Process framework. Predictive distributions in this framework satisfy the Markov property:

P(s_{t+1} | s_t, s_{t-1}, ..., s_0) = P(s_{t+1} | s_t)    (1)

Figure 2: Learning recurrent models consists of learning a function f that updates the internal state h_t given the latest observation x_t. The internal state may also be used to predict targets y_t, such as control actions for imitation and reinforcement learning. These are then inputs to a loss function ℓ which accumulates as the multi-step loss L over all timesteps.

where s_t is the latent state of the system at timestep t. Intuitively, this property tells us that the future s_{t+1} is only dependent on the current state s_t (in Markov Decision Processes (MDPs), P(s_{t+1} | s_t) may also depend on an action taken at s_t) and does not depend on any previous state s_0, ..., s_{t-1}. As s_t is latent, the learner only has access to observations x_t, which are produced by s_t. For example, in robotics, x_t may be joint angles from sensors or a scene observed as an image. A common graphical model representation is shown in Fig. 1a.

The machine learning problem is to find a model f that uses the latest observation x_t to recursively update an internal state, denoted h_t, illustrated in Fig. 2. Note that h_t is distinct from s_t. h_t is the learner's internal state, and s_t is the underlying configuration of the data-generating Markov Process. For example, the internal state in the Bayesian filtering/POMDP setup is represented as a belief state [49], as a "memory" unit in neural networks, or as a distribution over observations for PSRs. Unlike traditional supervised machine learning problems, learning models for latent state problems must be accomplished without ground-truth supervision of the internal states themselves.

Two distinct paradigms for latent state modeling exist. The first are discriminative approaches based on RNNs, and the second is a set of theoretically well-studied approaches based on Predictive-State Representations. In the following sections we provide a brief overview of each class of approach.

2.1 Recurrent Models and RNNs

A classical supervised machine learning approach for learning internal models involves choosing an explicit parametrization for the internal states and assuming ground-truth access to these states and observations at training time [17, 29, 33, 37]. These models focus on learning only the recursive model f in Fig. 2, assuming access to the s_t (Fig. 1a) at training time. Another class of approaches drops the assumption of access to ground truth but still assumes a parametrization of the internal state. These models set up a multi-step prediction error and use expectation maximization to alternate between optimizing over the model's parameters and the internal state values [2, 19, 16]. While imposing a fixed representation on the internal state adds structure to the learning problem, it can limit performance.
For many problems such as speech recognition [20] or text generation [48], it is difficult to fully represent a latent state inside the model's internal state. Instead, typical machine learning solutions rely on the Recurrent Neural Network architecture. The RNN model (Fig. 2) uses the internal state to make predictions y_t = f(h_t, x_t) and is trained by minimizing a series of loss functions ℓ_t over each prediction, as shown in the following optimization problem:

min_f L = min_f Σ_t ℓ_t(f(h_t, x_t))    (2)

The loss functions ℓ_t are usually application- and domain-specific. For example, in a probabilistic filtering problem, the objective may be to minimize the negative log-likelihood of the observations [4, 52] or the prediction of the next observation [34]. For imitation learning, this objective function will penalize deviation of the prediction from the expert's action [39], and for policy-gradient reinforcement learning methods, the objective includes the log-likelihood of choosing actions weighted by their observed returns. In general, the task objective optimized by the network does not directly specify a loss over the values of the internal state h_t.

The general difficulty with the objective in Eq. (2) is that the recurrence with f results in a highly non-convex and difficult optimization [2]. RNN models are thus often trained with backpropagation-through-time (BPTT) [55]. BPTT allows future losses incurred at timestep t to be back-propagated and affect the parameter updates to f. These updates to f then change the distribution of internal states computed during the next forward pass through time. The difficulty is then that small updates to f can drastically change the distribution of h_t, sometimes resulting in error exponential in the time horizon [53]. This "diffusion problem" can yield an unstable training procedure with exponentially exploding or vanishing gradients [7]. While techniques such as truncated gradients [47] or gradient clipping [35] can alleviate some of these problems, each of these techniques yields stability by discarding information about how future observations and predictions should backpropagate through the current internal state.

A significant innovation in training internal states with long-term dependence was the LSTM [25]. Many variants on LSTMs exist (e.g. GRUs [14]), yet in the domains evaluated by Greff et al. [21], none consistently exhibit statistically significant improvements over LSTMs. In the next section, we discuss a different paradigm for learning temporal models. In contrast with the open-ended internal state learned by RNNs, Predictive-State methods do not parameterize a specific representation of the internal state but use certain assumptions to construct a mathematical structure in terms of the observations to find a globally optimal representation.

2.2 Predictive-State Models

Predictive-State Representations (PSRs) address the problem of finding an internal state by formulating the representation directly in terms of observable quantities. Instead of targeting a prediction loss as with RNNs, PSRs define a belief over the distribution of k future observations, g_t = [x_t^T, ..., x_{t+k-1}^T]^T ∈ R^{kn}, given all the past observations p_t = [x_0, ..., x_{t-1}] [10]. In the case of linear systems, this k is similar to the rank of the observability matrix [6].
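Stepping back to the recurrent objective of Eq. (2) above, the following is a minimal numpy sketch of how that multi-step loss accumulates over a sequence. This is our illustration, not the authors' code: the tanh cell, the linear readout, and the squared-error loss are assumed placeholder choices, and in practice the gradients of this loss would be computed with BPTT in an autodiff framework.

```python
import numpy as np

def rnn_cell(h, x, W_h, W_x, b):
    # Placeholder recurrent update f(h, x); real systems use GRU/LSTM cells.
    return np.tanh(W_h @ h + W_x @ x + b)

def multistep_loss(xs, ys, params):
    # Accumulates L = sum_t l_t(f(h_t, x_t)) as in Eq. (2).
    W_h, W_x, W_y, b = params
    h = np.zeros(W_h.shape[0])
    total = 0.0
    for x, y in zip(xs, ys):
        h = rnn_cell(h, x, W_h, W_x, b)    # internal state update
        pred = W_y @ h                     # prediction y_t from the internal state
        total += np.sum((pred - y) ** 2)   # task-specific per-step loss l_t
    return total
```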
The key assumption in PSRs is that the definition of state is equivalent to having sufficient information to predict everything about g_t at time-step t [42], i.e. there is a bijective function that maps P(s_t | p_{t-1}), the distribution of latent state given the past, to P(g_t | p_{t-1}), the belief over future observations.

Spectral learning approaches were developed to find a globally optimal internal state representation and the transition model f for these Predictive-State models. In the controls literature, these approaches were developed as subspace identification [51], and in the ML literature as spectral approaches for partially-observed systems [9, 8, 26, 56]. A significant improvement in model learning was developed by Boots et al. [10], Hefny et al. [24], where sufficient feature functions φ (e.g., moments) map distributions P(g_t | p_t) to points in feature space E[φ(g_t) | p_t]. For example, E[φ(g_t) | p_t] = E[(g_t, g_t g_t^T) | p_t] are the sufficient statistics for a Gaussian distribution. With this representation, learning latent state prediction models can be reduced to supervised learning. Hefny et al. [24] used this along with Instrumental Variable Regression [11] to develop a procedure that, in the limit of infinite data, and under a linear-system realizability assumption, would converge to the globally optimal solution.

Sun et al. [45] extended this setup to create a practical algorithm, Predictive-State Inference Machines (PSIMs) [44, 45, 54], based on the concept of inference machines [31, 40]. Unlike Hefny et al. [24], which attempted to find a generative observation model and transition model, PSIMs directly learn the filter function: an operator f that can deterministically pass the predictive states forward in time conditioned on the latest observation, by minimizing the following loss over f:

ℓ_p = Σ_t ||φ(g_{t+1}) − f(h_t, x_t)||²,   h_{t+1} = f(h_t, x_t)    (3)

This loss function, which we call the predictive-state loss, forms the basis of our PREDICTIVE-STATE DECODERS. By minimizing this supervised loss function, PSIM assigns statistical meaning to internal states: it forces the internal state h_t to match sufficient statistics of future observations E[φ(g_t) | p_t] at every timestep t. We observe an empirical sample of the future g_t = [x_t, ..., x_{t+k}] at each timestep by looking into the future in the training dataset or by waiting for streaming future observations. Whereas [45] primarily studied algorithms for minimizing the predictive-state loss, we adapt it to augment general recurrent models such as LSTMs and for a wider variety of applications such as imitation and reinforcement learning.

Figure 3: Predictive-State Decoders Architecture. We augment the RNN from Fig. 2 with an additional objective function R which targets decoding of the internal state through F at each time step to the predictive state, which is represented as statistics over the future observations.

3 Predictive-State Decoders

Our PREDICTIVE-STATE DECODERS architecture extends the Predictive-State Representation idea to general recurrent architectures. We hypothesize that by encouraging the internal states to encode information sufficient for reconstructing the predictive state, the resulting internal states better capture the underlying dynamics and learning can be improved. The result is a simple-to-implement objective function which is coupled with the existing RNN loss.
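As a concrete reading of the PSIM loss in Eq. (3), the sketch below builds the empirical future windows g_t from a training sequence and compares the featurization φ(g_{t+1}) against the filter's next predictive state. All function names and shapes are illustrative assumptions, not code from the paper; with order=2, the filter f is assumed to output vectors in the same feature space as φ.

```python
import numpy as np

def future_windows(xs, k):
    # Empirical futures g_t = [x_t^T, ..., x_{t+k-1}^T]^T for every valid t.
    return [np.concatenate(xs[t:t + k]) for t in range(len(xs) - k + 1)]

def phi(g, order=1):
    # Sufficient statistics of the future: first moment, optionally with
    # second moments [g, vec(g g^T)], as for a Gaussian predictive state.
    return g if order == 1 else np.concatenate([g, np.outer(g, g).ravel()])

def psim_loss(f, xs, h0, k):
    # l_p = sum_t ||phi(g_{t+1}) - f(h_t, x_t)||^2 with h_{t+1} = f(h_t, x_t), Eq. (3).
    gs, h, loss = future_windows(xs, k), h0, 0.0
    for t in range(len(gs) - 1):
        h = f(h, xs[t])                           # filter update; h is the predictive state
        loss += np.sum((phi(gs[t + 1]) - h) ** 2)
    return loss
```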
To represent arbitrary sizes and values of PSRs with a fixed-size internal state in the recurrent network, we attach a decoding module F(·) to the internal states to produce the resulting PSR estimates. Figure 3 illustrates our approach. Our PSD objective R is the predictive-state loss:

R = Σ_t ||F(h_t) − φ([x_{t+1}, x_{t+2}, ...])||²,   h_t = f(h_{t−1}, x_{t−1}),    (4)

where F is a decoder that maps from the internal state h_t to an empirical sample of the predictive state, computed from a sequence of observed future observations available at training. The network is optimized by minimizing the weighted total loss function L + λR, where λ is the weighting on the predictive-state objective R. This penalty encourages the internal states to encode information sufficient for directly predicting sufficient future observations. Unlike more standard regularization techniques, R does not regularize the parameters of the network but instead regularizes the output variables, the internal states predicted by the network.

Our method may be interpreted as an instance of Multi-Task Learning (MTL) [13]. MTL has found use in recent deep neural networks [5, 27, 30]. The idea of MTL is to employ a shared representation to perform complementary or similar tasks. When the learner exhibits good performance on one task, some of its understanding can be transferred to a related task. In our case, forcing RNNs to be able to more explicitly reason about the future they will encounter is an intuitive and general method. Endowing RNNs with a theoretically-motivated representation of the future better enables them to serve their purpose of making sequential predictions, resulting in more effective learning. This difference is pronounced in applications such as imitation and reinforcement learning (Sections 4.2 and 4.3), where the primary objective is to find a control policy to maximize accumulated future reward while receiving only observations from the system. MTL with PSDs supervises the network to predict the future and, implicitly, the consequences of the learned policy. Finally, our PSD objective can be considered an instance of self-supervision [27], as it uses the inputs to the learner to form a secondary unsupervised objective.

As discussed in Section 2.1, the purpose of the internal state in recurrent network models (RNNs, LSTMs, deep, or otherwise) is to capture a quantity similar to that of state. Ideally, the learner would be able to back-propagate through the primary objective function L and discover the best representation of the latent state of the system towards minimizing the objective. However, as this problem is highly non-convex, BPTT often yields a locally-optimal solution in a basin determined by the initialization of the parameters and the dataset. By introducing R, the space of feasible models is reduced. We observe next how this objective leads our method to find better models.

[Figure 4 plots: observation loss vs. training iteration on Pendulum, Helicopter, and Hopper, for GRU networks (top) and LSTM networks (bottom), comparing the Baseline against PSDs with k ∈ {2, 5, 10} and various λ.]

Figure 4: Loss over predicting future observations during filtering. For both RNNs with GRU cells (top) and with LSTM cells (bottom), adding PSDs to the RNN networks can often improve performance and convergence rate.

4 Experiments

We present results on problems of increasing complexity for recurrent models: probabilistic filtering, Imitation Learning (IL), and Reinforcement Learning (RL). The first is easiest, as the goal is to predict the next future observation given the current observation and internal state. For imitation learning, the recurrent model is given training-time expert guidance with the goal of choosing actions to maximize the sequence of future rewards. Finally, we analyze the challenging domain of reinforcement learning, where the goal is the same as imitation learning but expert guidance is unavailable.

PREDICTIVE-STATE DECODERS require two hyperparameters: k, the number of observations to characterize the predictive state, and λ, the regularization trade-off factor. In most cases, we primarily tune λ, and set k to one of {2, ..., 10}. For each domain, for each k, there were λ values for which the performance was worse than the baseline. However, for many sets of hyperparameters, the performance exceeded the baselines. Most notably, for many experiments, the convergence rate was significantly better using PSDs, implying that PSDs allow for more efficient data utilization for learning recurrent models. PSDs also require a specification of two other parameters in the architecture: the featurization function φ and the decoding module F. For simplicity, we use an affine function as the decoder F in Eq. (4). The results presented below use an identity featurization φ, but we include a short discussion of second-order featurization.

We find that in each domain, we are able to improve the performance of the state-of-the-art baselines. We observe improvements with both GRU and LSTM cells across a range of k and λ. In IL with PSDs, we come significantly closer to, and occasionally eclipse, the expert's performance, whereas the baselines never do. In our RL experiments, our method achieves statistically significant improvements over the state-of-the-art approach of [18, 41] on the 5 different settings we tested.

4.1 Probabilistic Filtering

In the probabilistic filtering problem, the goal is to predict the future from the current internal state. Recurrent models for filtering use a multi-step objective function that maximizes the likelihood of the future observations over the internal states and the parameters of the dynamics model f. Under a Gaussian assumption (e.g. like a Kalman filter [22]), the equivalent objective that minimizes the negative log-likelihood is given as L = Σ_t ||x_{t+1} − f(x_t, h_t)||². While traditional methods would explicitly solve for parametric internal states h_t using an EM-style approach, we use BPTT to implicitly find a non-parametric internal state. We optimize the end-to-end filtering performance through the PSD joint objective min_{f,F} L + λR.
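A minimal sketch of that joint objective, assuming the identity featurization φ and an affine decoder F as used in the experiments; the function and variable names are ours, chosen for illustration.

```python
import numpy as np

def psd_joint_objective(hs, xs, ys, predict, F_W, F_b, k, lam):
    # hs: internal states h_t from the RNN forward pass; predict(h_t) gives the
    # task prediction y_t; F(h) = F_W @ h + F_b decodes h_t to a predictive state.
    task_loss, psd_reg = 0.0, 0.0
    for t in range(len(xs) - k):
        task_loss += np.sum((predict(hs[t]) - ys[t]) ** 2)   # task loss L
        g = np.concatenate(xs[t + 1:t + 1 + k])              # future window, identity phi
        psd_reg += np.sum((F_W @ hs[t] + F_b - g) ** 2)      # regularizer R, Eq. (4)
    return task_loss + lam * psd_reg                         # minimize L + lambda * R
```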
We optimize the 6 Figure 5: Cumulative rewards for AggreVaTeD and AggreVaTeD+P REDICTIVE -S TATE D ECODERS on partially observable Acrobot and CartPole with both LSTM cells and GRU cells averaged over 15 runs with different random seeds. end-to-end filtering performance through the P SD joint objective minf,F L + ?R. Our experimental results are shown in Fig. 4. The experiments were run with ? as the identity, capturing statistics representing the first moment. We tested ? as second-order statistics and found while the performance improved over the baseline, it was outperformed by the first moment. In all environments, a dataset was collected using a preset control policy. In the Pendulum experiments, we predict the pendulum?s angle ?. The LQR controlled Helicopter experiments [3] use a noisy state as the observation, and the Hopper dataset was generated using the OpenAI simulation [12] with robust policy optimization algorithm [36] as the controller. We test each environment with Tensorflow?s built-in GRU and LSTM cells [1]. We sweep over various k and ? hyperparameters and present the average results and standard deviations from runs with different random seeds. Fig. 4 baselines are recurrent models equivalent to P SDs with ? = 0. 4.2 Imitation Learning We experiment with the partially observable CartPole and Acrobot domains3 from OpenAI Gym [12]. We applied the method of AggreVaTeD [46], a policy-gradient method, to train our expert models. AggreVaTeD uses access to a cost-to-go oracle in order to train a policy that is sensitive to the value of the expert?s actions, providing an advantage over behavior cloning IL approaches. The experts have access to the full state of the robots, unlike the learned recurrent policies. We tune the parameters of LSTM and GRU agents (e.g., learning rate, number of internal units) and afterwards only tune ? for P SDs. In Fig. 5, we observe that P SDs improve performance for both GRUand LSTM-based agents and increasing the predictive-state horizon k yields better results. Notably, P SDs achieves 73% relative improvement over baseline LSTM and 42% over GRU on Cartpole. Difference random seeds were used. The cumulative reward of the current best policy is shown. 4.3 Reinforcement Learning Reinforcement learning (RL) increases the problem complexity from imitation learning by removing expert guidance. The latent state of the system is heavily influenced by the RL agent itself and changes as the policy improves. We use [18]?s implementation of TRPO [41], a Natural Policy 3 The observation function only provides positional information (including joint angles), excluding velocities. 7 TRPO TRPO+PSD Figure 6: Walker Cumulative Rewards and Sorted Percentiles. N = 15, 5e4 TRPO steps per iteration. Table 1: Top: Mean Average Returns ? one standard deviation, with N = 15 for Walker2d? and N = 30 otherwise. Bottom: Relative improvement of on the means. ? indicates p < 0.05 and ?? indicates p < 0.005 on Wilcoxon?s signed-rank test for significance of improvement. All runs computed with 5e3 transitions per iteration, except Walker2d? , with 5e4. Swimmer HalfCheetah Hopper Walker2d Walker2d? [41] [41]+P SDs 91.3 ? 25.5 97.0 ? 19.4 330 ? 158 372 ? 143 1103 ? 264 1195 ? 272 383 ? 96 416 ? 88 1396 ? 396 1611 ? 436 Rel. ? 6.30%? 13.0%? 9.06%? 8.59%? 15.4%?? Gradient method [28]. 
Although [41] defines a KL-constraint on policy parameters that affect actions, our implementation of PSDs introduces parameters (those of the decoder) that are unaffected by the constraint, as the decoder does not directly govern the agent's actions. In these experiments, results are highly stochastic due to both environment randomness and nondeterministic parallelization of rllab [18]. We therefore repeat each experiment at least 15 times with paired random seeds. We use k = 2 for most experiments (k = 4 for Hopper), the identity featurization for φ, vary λ in {10^1, 10^0, ..., 10^{-6}}, and employ the LSTM cell and other default parameters of TRPO. We report the same metric as [18]: per-TRPO-batch average return across learning iterations. Additionally, we report per-run performance by plotting the sorted average TRPO batch returns (each item is a number representing a method's performance for a single seed). Figs. 6 and 7 demonstrate that our method generally produces higher-quality results than the baseline. These results are further summarized by their means and standard deviations in Table 1. In Figure 6, 40% of our method's models are better than the best baseline model. In Figure 7c, 25% of our method's models are better than the second-best (98th percentile) baseline model. We compare various RNN cells in Table 2, and find our method can improve Basic (linear + tanh nonlinearity), GRU, and LSTM RNNs, and usually reduces the performance variance. We used Tensorflow [1] and passed both the "hidden" and "cell" components of an LSTM's internal state to the decoder. We also conducted preliminary additional experiments with second-order featurization (φ(x) = [x, vec(xx^T)]). Corresponding to Tab. 2, column 1, for the inverted pendulum, second-order features yielded 861 ± 41, a 4.9% improvement in the mean and a large reduction in variance.

5 Conclusion

We introduced a theoretically-motivated method for improving the training of RNNs. Our method stems from previous literature that assigns statistical meaning to a learner's internal state for modelling the latent state of the data-generating processes. Our approach uses the objective in PSIMs and applies it to more complicated recurrent models such as LSTMs and GRUs and to objectives beyond probabilistic filtering, such as imitation and reinforcement learning. We show that our straightforward method improves performance across all domains with which we experimented.

[Figure 7 panels: (a) Swimmer, N=30; (b) HalfCheetah, N=30; (c) Hopper, N=40, comparing TRPO and TRPO+PSD.]

Figure 7: Top: Per-iteration average returns for TRPO and TRPO+PREDICTIVE-STATE DECODERS vs. batch iteration, with 5e3 steps per iteration. Bottom: Sorted per-run mean (across iterations) average returns. Our method generally produces better models.

Table 2: Variations of RNN units. Mean Average Returns ± one standard deviation, with N = 20. 1e3 transitions per iteration are used. Our method can improve each recurrent unit we tested.

            InvertedPendulum                           Swimmer
            Basic        GRU          LSTM            Basic         GRU           LSTM
[41]        820 ± 139    673 ± 268    640 ± 265       66.0 ± 21.4   64.6 ± 55.3   56.5 ± 23.8
[41]+PSDs   820 ± 118    782 ± 183    784 ± 215       71.4 ± 26.9   75.1 ± 28.8   61.0 ± 23.8
Rel. Δ      -0.08%       20.4%        22.6%           8.21%         16.1%         7.94%

Acknowledgements

This material is based upon work supported in part by: Office of Naval Research (ONR) contract N000141512365, and National Science Foundation NRI award number 1637758.
References

[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Pieter Abbeel and Andrew Y Ng. Learning first-order markov models for control. In NIPS, pages 1–8, 2005.
[3] Pieter Abbeel and Andrew Y Ng. Exploration and apprenticeship learning in reinforcement learning. In ICML, pages 1–8. ACM, 2005.
[4] Pieter Abbeel, Adam Coates, Michael Montemerlo, Andrew Y Ng, and Sebastian Thrun. Discriminative training of kalman filters. In Robotics: Science and Systems (RSS), 2005.
[5] Pulkit Agrawal, Ashvin V Nair, Pieter Abbeel, Jitendra Malik, and Sergey Levine. Learning to poke by poking: Experiential learning of intuitive physics. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 5074–5082. Curran Associates, Inc., 2016.
[6] Karl Johan Åström and Richard M Murray. Feedback systems: an introduction for scientists and engineers. Princeton University Press, 2010.
[7] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994.
[8] Byron Boots. Spectral Approaches to Learning Predictive Representations. PhD thesis, Carnegie Mellon University, December 2012.
[9] Byron Boots, Sajid M Siddiqi, and Geoffrey J Gordon. Closing the learning-planning loop with predictive state representations. The International Journal of Robotics Research, 30(7):954–966, 2011.
[10] Byron Boots, Arthur Gretton, and Geoffrey J. Gordon. Hilbert space embeddings of predictive state representations. In UAI-2013, 2013.
[11] Roger J Bowden and Darrell A Turkington. Instrumental variables. Number 8. Cambridge University Press, 1990.
[12] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
[13] Rich Caruana. Multitask learning. In Learning to learn, pages 95–133. Springer, 1998.
[14] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[15] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2980–2988, 2015.
[16] Adam Coates, Pieter Abbeel, and Andrew Y. Ng. Learning for control from multiple demonstrations. In ICML, pages 144–151, New York, NY, USA, 2008. ACM.
[17] Marc Peter Deisenroth, Marco F Huber, and Uwe D Hanebeck. Analytic moment-based gaussian process filtering. In International Conference on Machine Learning, pages 225–232. ACM, 2009.
[18] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
[19] Zoubin Ghahramani and Sam T Roweis. Learning nonlinear dynamical systems using an EM algorithm. pages 431–437, 1999.
[20] Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In ICML, volume 14, pages 1764–1772, 2014.
[21] Klaus Greff, Rupesh K Srivastava, Jan Koutník, Bas R Steunebrink, and Jürgen Schmidhuber. Lstm: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, 2016.
[22] Tuomas Haarnoja, Anurag Ajay, Sergey Levine, and Pieter Abbeel. Backprop kf: Learning discriminative deterministic state estimators. NIPS, 2016.
[23] Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. arXiv preprint arXiv:1507.06527, 2015.
[24] Ahmed Hefny, Carlton Downey, and Geoffrey J Gordon. Supervised learning for dynamical system learning. In NIPS, 2015.
[25] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[26] Daniel Hsu, Sham M. Kakade, and Tong Zhang. A spectral algorithm for learning hidden markov models. In COLT, 2009.
[27] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. CoRR, abs/1611.05397, 2016. URL http://arxiv.org/abs/1611.05397.
[28] Sham Kakade. A natural policy gradient. Advances in Neural Information Processing Systems, 2:1531–1538, 2002.
[29] J Ko, D J Klein, D Fox, and D Haehnel. GP-UKF: Unscented kalman filters with Gaussian process prediction and observation models. pages 1901–1907, 2007.
[30] Iasonas Kokkinos. Ubernet: Training a "universal" convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. CoRR, abs/1609.02132, 2016.
[31] John Langford, Ruslan Salakhutdinov, and Tong Zhang. Learning nonlinear dynamic models. In ICML, pages 593–600. ACM, 2009.
[32] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436–444, 2015.
[33] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.
[34] Peter Ondruska and Ingmar Posner. Deep tracking: Seeing beyond seeing using recurrent neural networks. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
[35] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML, 28:1310–1318, 2013.
[36] Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. arXiv preprint arXiv:1703.02702, 2017.
[37] Liva Ralaivola and Florence d'Alché-Buc. Dynamical modeling with kernels for nonlinear time series prediction. NIPS, 2004.
[38] Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. ICLR, 2016.
[39] Stéphane Ross, Geoffrey J Gordon, and J Andrew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. AISTATS, 2011.
[40] Stephane Ross, Daniel Munoz, Martial Hebert, and J Andrew Bagnell. Learning message-passing inference machines for structured prediction. In CVPR. IEEE, 2011.
[41] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1889–1897, 2015.
[42] Satinder Singh, Michael R. James, and Matthew R. Rudary. Predictive state representations: A new theory for modeling dynamical systems. In UAI, 2004.
[43] Le Song, Byron Boots, Sajid M Siddiqi, Geoffrey J Gordon, and Alex J Smola. Hilbert space embeddings of hidden markov models. In ICML, pages 991–998, 2010.
[44] Wen Sun, Roberto Capobianco, Geoffrey J. Gordon, J. Andrew Bagnell, and Byron Boots. Learning to smooth with bidirectional predictive state inference machines. In Proceedings of The International Conference on Uncertainty in Artificial Intelligence (UAI), 2016.
[45] Wen Sun, Arun Venkatraman, Byron Boots, and J Andrew Bagnell. Learning to filter with predictive state inference machines. In Proceedings of The 33rd International Conference on Machine Learning, pages 1197–1205, 2016.
[46] Wen Sun, Arun Venkatraman, Geoffrey J Gordon, Byron Boots, and J Andrew Bagnell. Deeply aggrevated: Differentiable imitation learning for sequential prediction. In ICML, 2017.
[47] Ilya Sutskever. Training recurrent neural networks. PhD thesis, University of Toronto, 2013.
[48] Ilya Sutskever, James Martens, and Geoffrey E Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024, 2011.
[49] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. MIT Press, 2005.
[50] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[51] Peter Van Overschee and BL De Moor. Subspace identification for linear systems: Theory-Implementation-Applications. Springer Science & Business Media, 2012.
[52] William Vega-Brown and Nicholas Roy. Cello-em: Adaptive sensor models without ground truth. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1907–1914. IEEE, 2013.
[53] Arun Venkatraman, Martial Hebert, and J Andrew Bagnell. Improving multi-step prediction of learned time series models. In AAAI, pages 3024–3030, 2015.
[54] Arun Venkatraman, Wen Sun, Martial Hebert, Byron Boots, and J. Andrew (Drew) Bagnell. Inference machines for nonparametric filter learning. In 25th International Joint Conference on Artificial Intelligence (IJCAI-16), July 2016.
[55] Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560, 1990.
[56] David Wingate and Satinder Singh. Kernel predictive linear gaussian models for nonlinear stochastic dynamical systems. In International Conference on Machine Learning, pages 1017–1024. ACM, 2006.
6,321
6,718
Optimistic posterior sampling for reinforcement learning: worst-case regret bounds

Shipra Agrawal, Columbia University, [email protected]
Randy Jia, Columbia University, [email protected]

Abstract

We present an algorithm based on posterior sampling (aka Thompson sampling) that achieves near-optimal worst-case regret bounds when the underlying Markov Decision Process (MDP) is communicating with a finite, though unknown, diameter. Our main result is a high probability regret upper bound of Õ(D√(SAT)) for any communicating MDP with S states, A actions and diameter D, when T ≥ S^5 A. Here, regret compares the total reward achieved by the algorithm to the total expected reward of an optimal infinite-horizon undiscounted average reward policy, in time horizon T. This result improves over the best previously known upper bound of Õ(DS√(AT)) achieved by any algorithm in this setting, and matches the dependence on S in the established lower bound of Ω(√(DSAT)) for this problem. Our techniques involve proving some novel results about the anti-concentration of the Dirichlet distribution, which may be of independent interest.

1 Introduction

Reinforcement Learning (RL) refers to the problem of learning and planning in sequential decision making systems when the underlying system dynamics are unknown, and may need to be learned by trying out different options and observing their outcomes. A typical model for the sequential decision making problem is a Markov Decision Process (MDP), which proceeds in discrete time steps. At each time step, the system is in some state s, and the decision maker may take any available action a to obtain a (possibly stochastic) reward. The system then transitions to the next state according to a fixed state transition distribution. The reward and the next state depend on the current state s and the action a, but are independent of all the previous states and actions. In the reinforcement learning problem, the underlying state transition distributions and/or reward distributions are unknown, and need to be learned using the observed rewards and state transitions, while aiming to maximize the cumulative reward. This requires the algorithm to manage the tradeoff between exploration and exploitation, i.e., exploring different actions in different states in order to learn the model more accurately vs. taking actions that currently seem to be reward maximizing.

The exploration-exploitation tradeoff has been studied extensively in the context of stochastic multi-armed bandit (MAB) problems, which are essentially MDPs with a single state. The performance of MAB algorithms is typically measured through regret, which compares the total reward obtained by the algorithm to the total expected reward of an optimal action. Optimal regret bounds have been established for many variations of MAB (see Bubeck et al. [2012] for a survey), with a large majority of results obtained using the Upper Confidence Bound (UCB) algorithm, or more generally, the optimism in the face of uncertainty principle. Under this principle, the learning algorithm maintains tight over-estimates (or optimistic estimates) of the expected rewards for individual actions, and at any given step, picks the action with the highest optimistic estimate.
More recently, posterior sampling, aka Thompson Sampling [Thompson, 1933], has emerged as another popular algorithm design principle in MAB, owing its popularity to a simple and extendible algorithmic structure, an attractive empirical performance [Chapelle and Li, 2011, Kaufmann et al., 2012], as well as provably optimal performance bounds that have been recently obtained for many variations of MAB [Agrawal and Goyal, 2012, 2013b,a, Russo and Van Roy, 2015, 2014, Bubeck and Liu, 2013]. In this approach, the algorithm maintains a Bayesian posterior distribution for the expected reward of every action; then at any given step, it generates an independent sample from each of these posteriors and takes the action with the highest sample value.

We consider the reinforcement learning problem with finite states S and finite actions A in a similar regret based framework, where the total reward of the reinforcement learning algorithm is compared to the total expected reward achieved by a single benchmark policy over a time horizon T. In our setting, the benchmark policy is the infinite-horizon undiscounted average reward optimal policy for the underlying MDP, under the assumption that the MDP is communicating with (unknown) finite diameter D. The diameter D is an upper bound on the time it takes to move from any state s to any other state s' using an appropriate policy, for each pair s, s'. A finite diameter is understood to be necessary for interesting bounds on the regret of any algorithm in this setting [Jaksch et al., 2010].

The UCRL2 algorithm of Jaksch et al. [2010], which is based on the optimism principle, achieved the best previously known upper bound of Õ(DS√(AT)) for this problem. A similar bound was achieved by Bartlett and Tewari [2009], though assuming knowledge of the diameter D. Jaksch et al. [2010] also established a worst-case lower bound of Ω(√(DSAT)) on the regret of any algorithm for this problem.

Our main contribution is a posterior sampling based algorithm with a high probability worst-case regret upper bound of Õ(D√(SAT) + D S^{7/4} A^{3/4} T^{1/4}), which is Õ(D√(SAT)) when T ≥ S^5 A. This improves the previously best known upper bound for this problem by a factor of √S, and matches the dependence on S in the lower bound, for large enough T. Our algorithm uses an "optimistic version" of the posterior sampling heuristic, while utilizing several ideas from the algorithm design structure in Jaksch et al. [2010], such as an epoch based execution and the extended MDP construction. The algorithm proceeds in epochs, where in the beginning of every epoch, it generates ψ = Õ(S) sample transition probability vectors from a posterior distribution for every state and action, and solves an extended MDP with ψA actions and S states formed using these samples. The optimal policy computed for this extended MDP is used throughout the epoch.

The Posterior Sampling for Reinforcement Learning (PSRL) approach has been used previously in Osband et al. [2013], Abbasi-Yadkori and Szepesvari [2014], Osband and Van Roy [2016], but in a Bayesian regret framework. Bayesian regret is defined as the expected regret over a known prior on the transition probability matrix. Osband and Van Roy [2016] demonstrate an Õ(H√(SAT)) bound on the expected Bayesian regret for PSRL in finite-horizon episodic Markov decision processes, when the episode length is H.
In this paper, we consider the stronger notion of worst-case regret, aka minimax regret, which requires bounding the maximum regret for any instance of the problem.¹ Further, we consider a non-episodic communicating MDP setting, and produce a comparable bound of Õ(D√(SAT)) for large T, where D is the unknown diameter of the communicating MDP. In comparison to a single sample from the posterior in PSRL, our algorithm is slightly inefficient as it uses multiple (Õ(S)) samples. It is not entirely clear if the extra samples are only an artifact of the analysis. In an empirical study of a multiple-sample version of posterior sampling for RL, Fonteneau et al. [2013] show that multiple samples can potentially improve the performance of posterior sampling in terms of the probability of taking the optimal decision.

Our analysis utilizes some ideas from the Bayesian regret analysis, most importantly the technique of stochastic optimism from Osband et al. [2014] for deriving tighter deviation bounds. However, bounding the worst-case regret requires several new technical ideas, in particular, for proving "optimism" of the gain of the sampled MDP. Further discussion is provided in Section 4.

We should also compare our result with the very recent result of Azar et al. [2017], which provides an optimistic version of the value-iteration algorithm with a minimax (i.e., worst-case) regret bound of Õ(√(HSAT)) when T ≥ H^3 S^3 A. However, the setting considered in Azar et al. [2017] is that of an episodic MDP, where the learning agent interacts with the system in episodes of fixed and known length H. The initial state of each episode can be arbitrary, but importantly, the sequence of these initial states is shared by the algorithm and any benchmark policy. In contrast, in the non-episodic setting considered in this paper, the state trajectory of the benchmark policy over T time steps can be completely different from the algorithm's trajectory. To the best of our understanding, the shared sequence of initial states and the fixed known length H of episodes seem to form crucial components of the analysis in Azar et al. [2017], making it difficult to extend their analysis to the non-episodic communicating MDP setting considered in this paper.

Among other related work, Burnetas and Katehakis [1997] and Tewari and Bartlett [2008] present optimistic linear programming approaches that achieve logarithmic regret bounds with problem dependent constants. Strong PAC bounds have been provided in Kearns and Singh [1999], Brafman and Tennenholtz [2002], Kakade et al. [2003], Asmuth et al. [2009], Dann and Brunskill [2015]. There, the aim is to bound the performance of the policy learned at the end of the learning horizon, and not the performance during learning as quantified by regret.

¹ Worst-case regret is a strictly stronger notion of regret in case the reward distribution function is known and only the transition probability distribution is unknown, as we will assume here for the most part. In case of unknown reward distribution, extending our worst-case regret bounds would require an assumption of bounded rewards, whereas the Bayesian regret bounds in the above-mentioned literature allow more general (known) priors on the reward distributions with possibly unbounded support. Bayesian regret bounds in those more general settings are incomparable to the worst-case regret bounds presented here.
Strehl and Littman [2005], Strehl and Littman [2008] provide an optimistic algorithm for bounding regret in a discounted reward setting, but the definition of regret is slightly different in that it measures the difference between the rewards of an optimal policy and the rewards of the learning algorithm along the trajectory taken by the learning algorithm.

2 Preliminaries and Problem Definition

2.1 Markov Decision Process (MDP)

We consider a Markov Decision Process M defined by the tuple $\{S, A, P, r, s_1\}$, where S is a finite state-space of size S, A is a finite action-space of size A, $P: S \times A \rightarrow \Delta^S$ is the transition model, $r: S \times A \rightarrow [0, 1]$ is the reward function, and $s_1$ is the starting state. When an action $a \in A$ is taken in a state $s \in S$, a reward $r_{s,a}$ is generated and the system transitions to the next state $s' \in S$ with probability $P_{s,a}(s')$, where $\sum_{s' \in S} P_{s,a}(s') = 1$. We consider "communicating" MDPs with finite "diameter" (see Bartlett and Tewari [2009] for an in-depth discussion). Below we define communicating MDPs, and recall some useful known results for such MDPs.

Definition 1 (Policy). A deterministic policy $\pi: S \rightarrow A$ is a mapping from state space to action space.

Definition 2 (Diameter D(M)). The diameter D(M) of an MDP M is defined as the minimum time required to go from one state to another in the MDP using some deterministic policy:
$$D(M) = \max_{s \neq s',\; s, s' \in S} \;\; \min_{\pi: S \rightarrow A} \; T^{\pi}_{s \rightarrow s'},$$
where $T^{\pi}_{s \rightarrow s'}$ is the expected number of steps it takes to reach state $s'$ when starting from state $s$ and using policy $\pi$.

Definition 3 (Communicating MDP). An MDP M is communicating if and only if it has a finite diameter. That is, for any two states $s \neq s'$, there exists a policy $\pi$ such that the expected number of steps to reach $s'$ from $s$, $T^{\pi}_{s \rightarrow s'}$, is at most D, for some finite $D \ge 0$.

Definition 4 (Gain of a policy). The gain $\lambda^{\pi}(s)$ of a policy $\pi$, from starting state $s_1 = s$, is defined as the infinite horizon undiscounted average reward, given by
$$\lambda^{\pi}(s) = \mathbb{E}\Big[ \lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T} r_{s_t, \pi(s_t)} \,\Big|\, s_1 = s \Big],$$
where $s_t$ is the state reached at time t.

Lemma 2.1 (Optimal gain for communicating MDPs). For a communicating MDP M with diameter D:

(a) (Puterman [2014], Theorem 8.1.2, Theorem 8.3.2) The optimal (maximum) gain $\lambda^*$ is state independent and is achieved by a deterministic stationary policy $\pi^*$, i.e., there exists a deterministic policy $\pi^*$ such that
$$\lambda^* := \max_{\pi} \max_{s' \in S} \lambda^{\pi}(s') = \lambda^{\pi^*}(s), \quad \forall s \in S.$$
Here, $\pi^*$ is referred to as an optimal policy for MDP M.

(b) (Tewari and Bartlett [2008], Theorem 4) The optimal gain $\lambda^*$ satisfies the following equations:
$$\lambda^* = \min_{h \in \mathbb{R}^S} \max_{s,a} \Big( r_{s,a} + P_{s,a}^{T} h - h_s \Big) = \max_{a} \Big( r_{s,a} + P_{s,a}^{T} h^* - h^*_s \Big), \quad \forall s \qquad (1)$$
where $h^*$, referred to as the bias vector of MDP M, satisfies $\max_s h^*_s - \min_s h^*_s \le D$.

Given the above definitions and results, we can now define the reinforcement learning problem studied in this paper.

2.2 The reinforcement learning problem

The reinforcement learning problem proceeds in rounds $t = 1, \ldots, T$. The learning agent starts from a state $s_1$ at round $t = 1$. In the beginning of every round t, the agent takes an action $a_t \in A$ and observes the reward $r_{s_t, a_t}$ as well as the next state $s_{t+1} \sim P_{s_t, a_t}$, where r and P are the reward function and the transition model, respectively, for a communicating MDP M with diameter D. The learning agent knows the state-space S, the action space A, as well as the rewards $r_{s,a}$, $\forall s \in S, a \in A$, for the underlying MDP, but not the transition model P or the diameter D. (The assumption of known and deterministic rewards has been made here only for simplicity of exposition, since the unknown transition model is the main source of difficulty in this problem. Our algorithm and results can be extended to bounded stochastic rewards with unknown distributions using standard Thompson Sampling for MAB, e.g., using the techniques in Agrawal and Goyal [2013b].)
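As an aside on Lemma 2.1: when the transition model is known, the optimality equations (1) can be solved numerically, and this is the subroutine that our algorithm later applies to the sampled extended MDP. Below is a minimal sketch of relative value iteration, a standard method (see Puterman [2014]); the toy two-state MDP, the reference-state normalization, and the tolerance are illustrative choices, not from this paper, and convergence assumes an aperiodic unichain structure.

```python
# Minimal relative value iteration sketch for a *known* MDP, iterating
# towards the optimality equations (1). Illustrative only.
import numpy as np

def relative_value_iteration(P, r, tol=1e-9, max_iter=100000):
    """P: (S, A, S) transition tensor, r: (S, A) rewards.
    Returns (gain, bias, greedy policy)."""
    S, A = r.shape
    h = np.zeros(S)
    for _ in range(max_iter):
        q = r + P @ h            # q[s, a] = r(s, a) + sum_i P(i | s, a) h(i)
        v = q.max(axis=1)
        gain = v[0]              # at the fixed point, v at the reference state equals lambda*
        h_new = v - gain         # anchor the bias at the reference state
        if np.abs(h_new - h).max() < tol:
            break
        h = h_new
    return gain, h_new, q.argmax(axis=1)

# Toy two-state MDP (made up): action 0 stays put, action 1 moves w.p. 0.9.
P = np.array([[[1.0, 0.0], [0.1, 0.9]],
              [[0.0, 1.0], [0.9, 0.1]]])
r = np.array([[0.0, 0.0],
              [1.0, 0.5]])
print(relative_value_iteration(P, r))   # gain approaches 1.0 for this example
```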
The agent can use the past observations to learn the underlying MDP model and decide future actions. The goal is to maximize the total reward $\sum_{t=1}^{T} r_{s_t, a_t}$, or equivalently, minimize the total regret over a time horizon T, defined as
$$R(T, M) := T\lambda^* - \sum_{t=1}^{T} r_{s_t, a_t} \qquad (2)$$
where $\lambda^*$ is the optimal gain of MDP M. We present an algorithm for the learning agent with a near-optimal upper bound on the regret R(T, M) for any communicating MDP M with diameter D, thus bounding the worst-case regret over this class of MDPs.

3 Algorithm Description

Our algorithm combines the ideas of posterior sampling (aka Thompson Sampling) with the extended MDP construction used in Jaksch et al. [2010]. Below we describe the main components of our algorithm.

Some notation: $N^t_{s,a}$ denotes the total number of times the algorithm visited state s and played action a before time t, and $N^t_{s,a}(i)$ denotes the number of time steps among these $N^t_{s,a}$ steps where the next state was i, i.e., a transition from state s to i was observed. We index the states from 1 to S, so that $\sum_{i=1}^{S} N^t_{s,a}(i) = N^t_{s,a}$ for any t. We use the symbol $\mathbf{1}$ to denote the vector of all 1s, and $\mathbf{1}_i$ to denote the vector with 1 at the i-th coordinate and 0 elsewhere.

Doubling epochs: Our algorithm uses the epoch based execution framework of Jaksch et al. [2010]. An epoch is a group of consecutive rounds. The rounds $t = 1, \ldots, T$ are broken into consecutive epochs as follows: the k-th epoch begins at the round $\tau_k$ immediately after the end of the (k-1)-th epoch, and ends at the first round $\tau$ such that for some state-action pair s, a, $N^{\tau}_{s,a} \ge 2N^{\tau_k}_{s,a}$. The algorithm computes a new policy $\tilde{\pi}_k$ at the beginning of every epoch k, and uses that policy through all the rounds in that epoch. It is easy to observe that, irrespective of how the policy $\tilde{\pi}_k$ is computed, the number of epochs in T rounds is bounded by $SA\log(T)$.

Posterior sampling: We use posterior sampling to compute the policy $\tilde{\pi}_k$ in the beginning of every epoch. The Dirichlet distribution is a convenient choice for maintaining posteriors for the transition probability vectors $P_{s,a}$ for every $s \in S, a \in A$, as it satisfies the following useful property: given a prior Dirichlet$(\alpha_1, \ldots, \alpha_S)$ on $P_{s,a}$, after observing a transition from state s to i (with underlying probability $P_{s,a}(i)$), the posterior distribution is given by Dirichlet$(\alpha_1, \ldots, \alpha_i + 1, \ldots, \alpha_S)$. By this property, for any $s \in S, a \in A$, on starting from the prior Dirichlet$(\mathbf{1})$ for $P_{s,a}$, the posterior at time t is Dirichlet$(\{N^t_{s,a}(i) + 1\}_{i=1,\ldots,S})$.

Our algorithm uses a modified, optimistic version of this approach. At the beginning of every epoch k, for every $s \in S, a \in A$ such that $N^{\tau_k}_{s,a} \ge \eta$, it generates multiple samples for $P_{s,a}$ from a "boosted" posterior. Specifically, it generates $\psi = O(S\log(SA/\rho))$ independent sample probability vectors $Q^{1,k}_{s,a}, \ldots, Q^{\psi,k}_{s,a}$ as
$$Q^{j,k}_{s,a} \sim \text{Dirichlet}(M^{\tau_k}_{s,a}),$$
where $M^t_{s,a}$ denotes the vector $[M^t_{s,a}(i)]_{i=1,\ldots,S}$, with
$$M^t_{s,a}(i) := \kappa^{-1}\big(N^t_{s,a}(i) + \omega\big), \quad \text{for } i = 1, \ldots, S. \qquad (3)$$
Here, $\omega = O(\log(T/\rho))$, $\kappa = O(\log(T/\rho))$, $\eta = \sqrt{TS/A} + 12\omega S^2$, and $\rho \in (0, 1)$ is a parameter of the algorithm.
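As a concrete illustration of the conjugacy property and the boosted sampling of equation (3), here is a minimal sketch. The numeric values of $\omega$, $\kappa$, and $\psi$ are placeholders (the analysis pins them down only up to logarithmic factors), the observation sequence is made up, and the code assumes the reading of (3) given above.

```python
# Minimal sketch of Dirichlet posterior maintenance and the boosted sampling
# of equation (3). Placeholder constants; illustrative, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
S = 4
counts = np.zeros(S)                 # N_{s,a}(i): observed transitions s -> i

# Conjugacy: each observed transition to state i adds one to the i-th parameter.
for next_state in [2, 2, 0, 3, 2]:   # toy observation sequence
    counts[next_state] += 1

# Plain posterior after starting from the Dirichlet(1, ..., 1) prior.
plain_sample = rng.dirichlet(counts + 1)

# Boosted posterior (3): add omega pseudo-observations per state and divide the
# concentration parameters by kappa, which inflates the posterior variance.
omega, kappa, psi = 1.0, 2.0, 3      # stand-ins for the logarithmic terms
M = (counts + omega) / kappa
boosted_samples = rng.dirichlet(M, size=psi)   # psi independent samples Q^{1,k}..Q^{psi,k}

print(plain_sample)
print(boosted_samples)
```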
In the regret analysis, we derive sufficiently large constants that can be used in the definitions of $\psi, \omega, \kappa$ to guarantee the bounds. However, no attempt has been made to optimize those constants, and it is likely that much smaller constants suffice.

For every remaining s, a, i.e., those with small $N^{\tau_k}_{s,a}$ ($N^{\tau_k}_{s,a} < \eta$), the algorithm uses the simple optimistic sampling described in Algorithm 1. This special sampling for s, a with small $N^{\tau_k}_{s,a}$ has been introduced to handle a technical difficulty in analyzing the anti-concentration of Dirichlet posteriors when the parameters are very small. We suspect that with an improved analysis, this may not be required.

Extended MDP: The policy $\tilde{\pi}_k$ to be used in epoch k is computed as the optimal policy of an extended MDP $\tilde{M}_k$ defined by the sampled transition probability vectors, using the construction of Jaksch et al. [2010]. Given sampled vectors $Q^{j,k}_{s,a}$, $j = 1, \ldots, \psi$, for every state-action pair s, a, we define the extended MDP $\tilde{M}_k$ by extending the original action space as follows: for every s, a, create $\psi$ actions for every action $a \in A$, denoting by $a_j$ the action corresponding to action a and sample j; then, in MDP $\tilde{M}_k$, on taking action $a_j$ in state s, the reward is $r_{s,a}$ but the transition to the next state follows the transition probability vector $Q^{j,k}_{s,a}$.

Note that the algorithm uses the optimal policy $\tilde{\pi}_k$ of the extended MDP $\tilde{M}_k$ to take actions in the action space A of MDP M, which is technically different from the action space of $\tilde{M}_k$, where the policy $\tilde{\pi}_k$ is defined. We slightly abuse the notation to say that the algorithm takes action $a_t = \tilde{\pi}_k(s_t)$ to mean that the algorithm takes action $a_t = a \in A$ when $\tilde{\pi}_k(s_t) = a_j$ for some j.

Our algorithm is summarized as Algorithm 1.

4 Regret Bounds

We prove the following bound on the regret of Algorithm 1 for the reinforcement learning problem.

Theorem 1. For any communicating MDP M with S states, A actions, and diameter D, with probability $1 - \rho$, the regret of Algorithm 1 in time $T \ge CDA\log^2(T/\rho)$ is bounded as:
$$R(T, M) \le \tilde{O}\Big( D\sqrt{SAT} + DS^{7/4}A^{3/4}T^{1/4} + DS^{5/2}A \Big)$$
where C is an absolute constant. For $T \ge S^5 A$, this implies a regret bound of
$$R(T, M) \le \tilde{O}\Big( D\sqrt{SAT} \Big).$$
Here $\tilde{O}$ hides logarithmic factors in $S, A, T, \rho$ and absolute constants.

The rest of this section is devoted to proving the above theorem. Here, we provide a sketch of the proof and discuss some of the key lemmas; all missing details are provided in the supplementary material.

Algorithm 1: A posterior sampling based algorithm for the reinforcement learning problem

Inputs: State space S, action space A, starting state $s_1$, reward function r, time horizon T, parameters $\rho \in (0, 1]$, $\psi = O(S\log(SA/\rho))$, $\omega = O(\log(T/\rho))$, $\kappa = O(\log(T/\rho))$, $\eta = \sqrt{TS/A} + 12\omega S^2$.

Initialize: $\tau_1 := 1$, $M^{\tau_1}_{s,a} = \omega\mathbf{1}$.

for all epochs $k = 1, 2, \ldots$ do
    Sample transition probability vectors: For each s, a, generate $\psi$ independent sample probability vectors $Q^{j,k}_{s,a}$, $j = 1, \ldots, \psi$, as follows:
        (Posterior sampling): For s, a such that $N^{\tau_k}_{s,a} \ge \eta$, use samples from the Dirichlet distribution:
        $$Q^{j,k}_{s,a} \sim \text{Dirichlet}(M^{\tau_k}_{s,a}).$$
        (Simple optimistic sampling): For remaining s, a, with $N^{\tau_k}_{s,a} < \eta$, use the following simple optimistic sampling: let $P^-_{s,a} = \hat{P}_{s,a} - \Delta$, where $\hat{P}_{s,a}(i) = \frac{N^{\tau_k}_{s,a}(i)}{N^{\tau_k}_{s,a}}$ and
        $$\Delta_i = \min\left\{ \sqrt{\frac{3\hat{P}_{s,a}(i)\log(4S)}{N^{\tau_k}_{s,a}}} + \frac{3\log(4S)}{N^{\tau_k}_{s,a}},\;\; \hat{P}_{s,a}(i) \right\},$$
        and let z be a random vector picked uniformly at random from $\{\mathbf{1}_1, \ldots, \mathbf{1}_S\}$; set
        $$Q^{j,k}_{s,a} = P^-_{s,a} + \Big( 1 - \sum_{i=1}^{S} P^-_{s,a}(i) \Big) z.$$
    Compute policy $\tilde{\pi}_k$: as the optimal gain policy for the extended MDP $\tilde{M}_k$ constructed using the sample set $\{Q^{j,k}_{s,a},\; j = 1, \ldots, \psi,\; s \in S, a \in A\}$.
    Execute policy $\tilde{\pi}_k$:
    for all time steps $t = \tau_k, \tau_k + 1, \ldots$, until break epoch do
        Play action $a_t = \tilde{\pi}_k(s_t)$.
        Observe the transition to the next state $s_{t+1}$.
        Set $N^{t+1}_{s,a}(i)$, $M^{t+1}_{s,a}(i)$ for all $a \in A$, $s, i \in S$ as defined (refer to Equation (3)).
        If $N^{t+1}_{s_t,a_t} \ge 2N^{\tau_k}_{s_t,a_t}$, then set $\tau_{k+1} = t + 1$ and break epoch.
    end for
end for
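For concreteness, the simple optimistic sampling branch of Algorithm 1 can be implemented in a few lines. The following sketch mirrors the formulas in the box above; it is illustrative code, not the authors' implementation.

```python
# Minimal sketch of the simple optimistic sampling step in Algorithm 1:
# shrink the empirical transition vector by the slack Delta, then place the
# removed probability mass on one uniformly random coordinate of the simplex.
import numpy as np

def simple_optimistic_sample(counts, rng):
    """counts: length-S vector of transition counts N_{s,a}(i) for one (s, a)."""
    S = len(counts)
    n = counts.sum()
    p_hat = counts / n
    slack = np.sqrt(3.0 * p_hat * np.log(4 * S) / n) + 3.0 * np.log(4 * S) / n
    p_minus = p_hat - np.minimum(slack, p_hat)   # elementwise min keeps p_minus >= 0
    z = np.zeros(S)
    z[rng.integers(S)] = 1.0                     # random corner 1_i of the simplex
    return p_minus + (1.0 - p_minus.sum()) * z   # sums to one by construction

rng = np.random.default_rng(0)
print(simple_optimistic_sample(np.array([3.0, 1.0, 0.0, 2.0]), rng))
```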
4.1 Proof of Theorem 1

As defined in Section 2, the regret R(T, M) is given by $R(T, M) = T\lambda^* - \sum_{t=1}^{T} r_{s_t,a_t}$, where $\lambda^*$ is the optimal gain of MDP M, $a_t$ is the action taken and $s_t$ is the state reached by the algorithm at time t. Algorithm 1 proceeds in epochs $k = 1, 2, \ldots, K$, where $K \le SA\log(T)$. To bound its regret in time T, we first analyze the regret in each epoch k, namely,
$$R_k := (\tau_{k+1} - \tau_k)\lambda^* - \sum_{t=\tau_k}^{\tau_{k+1}-1} r_{s_t,a_t},$$
and bound $R_k$ by roughly
$$\sum_{s,a} D\, \frac{N^{\tau_{k+1}}_{s,a} - N^{\tau_k}_{s,a}}{\sqrt{N^{\tau_k}_{s,a}}},$$
where, by definition, for every s, a, $(N^{\tau_{k+1}}_{s,a} - N^{\tau_k}_{s,a})$ is the number of times this state-action pair is visited in epoch k. The proof of this bound has two main components:

(a) Optimism: The policy $\tilde{\pi}_k$ used by the algorithm in epoch k is computed as an optimal gain policy of the extended MDP $\tilde{M}_k$. The first part of the proof is to show that with high probability, the extended MDP $\tilde{M}_k$ is (i) a communicating MDP with diameter at most 2D, and (ii) optimistic, i.e., has optimal gain at least (close to) $\lambda^*$. Part (i) is stated as Lemma 4.1, with a proof provided in the supplementary material. Now, let $\tilde{\lambda}_k$ be the optimal gain of the extended MDP $\tilde{M}_k$. In Lemma 4.2, which forms one of the main novel technical components of our proof, we show that with probability $1 - \rho$,
$$\tilde{\lambda}_k \ge \lambda^* - \tilde{O}\Big( D\sqrt{\tfrac{SA}{T}} \Big).$$
We first show that the above holds if for every s, a, there exists a sample transition probability vector whose projection on a fixed unknown vector ($h^*$) is optimistic. Then, in Lemma 4.3 we prove this optimism by deriving a fundamental new result on the anti-concentration of any fixed projection of a Dirichlet random vector (Proposition A.1 in the supplementary material). Substituting this upper bound on $\lambda^*$, we have the following bound on $R_k$ with probability $1 - \rho$:
$$R_k \le \sum_{t=\tau_k}^{\tau_{k+1}-1} \Big( \tilde{\lambda}_k - r_{s_t,a_t} + \tilde{O}\Big( D\sqrt{\tfrac{SA}{T}} \Big) \Big). \qquad (4)$$

(b) Deviation bounds: Optimism guarantees that with high probability, the optimal gain $\tilde{\lambda}_k$ of MDP $\tilde{M}_k$ is at least $\lambda^*$. And, by definition of $\tilde{\pi}_k$, $\tilde{\lambda}_k$ is the gain of the chosen policy $\tilde{\pi}_k$ for MDP $\tilde{M}_k$. However, the algorithm executes this policy on the true MDP M. The only difference between the two is the transition model: on taking an action $a_j := \tilde{\pi}_k(s)$ in state s in MDP $\tilde{M}_k$, the next state follows the sampled distribution
$$\tilde{P}_{s,a} := Q^{j,k}_{s,a}, \qquad (5)$$
whereas on taking the corresponding action a in MDP M, the next state follows the distribution $P_{s,a}$. The next step is to bound the difference between $\tilde{\lambda}_k$ and the average reward obtained by the algorithm by bounding the deviation $(\tilde{P}_{s,a} - P_{s,a})$. This line of argument bears similarities to the analysis of UCRL2 in Jaksch et al. [2010], but with tighter deviation bounds that we are able to guarantee due to the use of posterior sampling instead of the deterministic optimistic bias used in UCRL2. Now, since $a_t = \tilde{\pi}_k(s_t)$, using the relation between the gain $\tilde{\lambda}_k$, the bias vector $\tilde{h}$, and the reward vector of the optimal policy $\tilde{\pi}_k$ for the communicating MDP $\tilde{M}_k$ (refer to Lemma 2.1),
$$\sum_{t=\tau_k}^{\tau_{k+1}-1} \big( \tilde{\lambda}_k - r_{s_t,a_t} \big) = \sum_{t=\tau_k}^{\tau_{k+1}-1} (\tilde{P}_{s_t,a_t} - \mathbf{1}_{s_t})^{T}\, \tilde{h} = \sum_{t=\tau_k}^{\tau_{k+1}-1} (\tilde{P}_{s_t,a_t} - P_{s_t,a_t} + P_{s_t,a_t} - \mathbf{1}_{s_t})^{T}\, \tilde{h} \qquad (6)$$
where, with high probability, $\tilde{h} \in \mathbb{R}^S$, the bias vector of MDP $\tilde{M}_k$, satisfies $\max_s \tilde{h}_s - \min_s \tilde{h}_s \le D(\tilde{M}_k) \le 2D$ (refer to Lemma 4.1).

Next, we bound the deviation $(\tilde{P}_{s,a} - P_{s,a})^{T} \tilde{h}$ for all s, a, to bound the first term in the above. Note that $\tilde{h}$ is random and can be arbitrarily correlated with $\tilde{P}$; therefore, we need to bound $\max_{h \in [0,2D]^S} (\tilde{P}_{s,a} - P_{s,a})^{T} h$. (For the above term, w.l.o.g. we can assume $\tilde{h} \in [0, 2D]^S$.)

For s, a such that $N^{\tau_k}_{s,a} > \eta$, $\tilde{P}_{s,a} = Q^{j,k}_{s,a}$ is a sample from the Dirichlet posterior. In Lemma 4.4, we show that with high probability,
$$\max_{h \in [0,2D]^S} (\tilde{P}_{s,a} - P_{s,a})^{T} h \le \tilde{O}\bigg( \frac{D}{\sqrt{N^{\tau_k}_{s,a}}} + \frac{DS}{N^{\tau_k}_{s,a}} \bigg). \qquad (7)$$
This bound is an improvement by a $\sqrt{S}$ factor over the corresponding deviation bound obtainable for the optimistic estimates of $P_{s,a}$ in UCRL2. The derivation of this bound utilizes and extends the stochastic optimism technique from Osband et al. [2014]. For s, a with $N^{\tau_k}_{s,a} \le \eta$, $\tilde{P}_{s,a} = Q^{j,k}_{s,a}$ is a sample from the simple optimistic sampling, where we can only show the following weaker bound; but since this is used only while $N^{\tau_k}_{s,a}$ is small, the total contribution of this deviation will be small:
$$\max_{h \in [0,2D]^S} (\tilde{P}_{s,a} - P_{s,a})^{T} h \le \tilde{O}\bigg( D\sqrt{\frac{S}{N^{\tau_k}_{s,a}}} + \frac{DS}{N^{\tau_k}_{s,a}} \bigg). \qquad (8)$$
Finally, to bound the second term in (6), we observe that $\mathbb{E}[\mathbf{1}_{s_{t+1}}^{T} \tilde{h} \,|\, \tilde{\pi}_k, \tilde{h}, s_t] = P_{s_t,a_t}^{T} \tilde{h}$ and use the Azuma-Hoeffding inequality to obtain, with probability $1 - \frac{\rho}{SA}$:
$$\sum_{t=\tau_k}^{\tau_{k+1}-1} (P_{s_t,a_t} - \mathbf{1}_{s_t})^{T}\, \tilde{h} \le \tilde{O}\Big( \sqrt{(\tau_{k+1} - \tau_k)\log(SA/\rho)} \Big). \qquad (9)$$

Combining the above observations (equations (4), (6), (7), (8), (9)), we obtain the following bound on $R_k$ within logarithmic factors:
$$D\sum_{s,a} \frac{N^{\tau_{k+1}}_{s,a} - N^{\tau_k}_{s,a}}{\sqrt{N^{\tau_k}_{s,a}}}\,\mathbf{1}\big(N^{\tau_k}_{s,a} > \eta\big) + D(\tau_{k+1} - \tau_k)\sqrt{\frac{SA}{T}} + D\sqrt{S}\sum_{s,a} \frac{N^{\tau_{k+1}}_{s,a} - N^{\tau_k}_{s,a}}{\sqrt{N^{\tau_k}_{s,a}}}\,\mathbf{1}\big(N^{\tau_k}_{s,a} \le \eta\big) + D\sqrt{\tau_{k+1} - \tau_k}. \qquad (10)$$

We can finish the proof by observing that (by definition of an epoch) the number of visits of any state-action pair can at most double in an epoch, $N^{\tau_{k+1}}_{s,a} - N^{\tau_k}_{s,a} \le N^{\tau_k}_{s,a}$, and therefore, substituting this observation in (10), we can bound (within logarithmic factors) the total regret $R(T) = \sum_{k=1}^{K} R_k$ as:
$$\sum_{k=1}^{K} \bigg( D(\tau_{k+1} - \tau_k)\sqrt{\frac{SA}{T}} + D\sum_{s,a:\, N^{\tau_k}_{s,a} > \eta} \sqrt{N^{\tau_k}_{s,a}} + D\sum_{s,a:\, N^{\tau_k}_{s,a} < \eta} \sqrt{S\, N^{\tau_k}_{s,a}} + D\sqrt{\tau_{k+1} - \tau_k} \bigg)$$
$$\le D\sqrt{SAT} + D\log(K)\Big( \sum_{s,a} \sqrt{N^{\tau_K}_{s,a}} + SA\sqrt{S\eta} \Big) + D\sqrt{KT},$$
where we used $N^{\tau_{k+1}}_{s,a} \le 2N^{\tau_k}_{s,a}$ and $\sum_k (\tau_{k+1} - \tau_k) = T$. Now, we use that $K \le SA\log(T)$, and $SA\sqrt{S\eta} = O(S^{7/4}A^{3/4}T^{1/4} + S^{5/2}A\sqrt{\log(T/\rho)})$ (using $\eta = \sqrt{TS/A} + 12\omega S^2$). Also, since $\sum_{s,a} N^{\tau_K}_{s,a} \le T$, by a simple worst-case analysis $\sum_{s,a} \sqrt{N^{\tau_K}_{s,a}} \le \sqrt{SAT}$, and we obtain
$$R(T, M) \le \tilde{O}\big( D\sqrt{SAT} + DS^{7/4}A^{3/4}T^{1/4} + DS^{5/2}A \big).$$

4.2 Main lemmas

The following lemmas form the main technical components of our proof. All the missing proofs are provided in the supplementary material.

Lemma 4.1. Assume $T \ge CDA\log^2(T/\rho)$ for a large enough constant C. Then, with probability $1 - \rho$, for every epoch k, the diameter of MDP $\tilde{M}_k$ is bounded by 2D.

Lemma 4.2. With probability $1 - \rho$, for every epoch k, the optimal gain $\tilde{\lambda}_k$ of the extended MDP $\tilde{M}_k$ satisfies:
$$\tilde{\lambda}_k \ge \lambda^* - O\Big( D\log^2(T/\rho)\sqrt{\tfrac{SA}{T}} \Big),$$
where $\lambda^*$ is the optimal gain of MDP M and D is the diameter.

Proof. Let $h^*$ be the bias vector for an optimal policy $\pi^*$ of MDP M (refer to Lemma 2.1 in the preliminaries section). Since $h^*$ is a fixed (though unknown) vector with $|h^*_i - h^*_j| \le D$, we can apply Lemma 4.3 to obtain that with probability $1 - \rho$, for all s, a, there exists a sample vector $Q^{j,k}_{s,a}$ for some $j \in \{1, \ldots, \psi\}$ such that
$$(Q^{j,k}_{s,a})^{T} h^* \ge P_{s,a}^{T} h^* - \Delta, \quad \text{where } \Delta = O\Big( D\log^2(T/\rho)\sqrt{\tfrac{SA}{T}} \Big).$$
Now, consider the policy $\tilde{\pi}$ for MDP $\tilde{M}_k$ which, for any s, takes action $a_j$, with $a = \pi^*(s)$ and j being a sample satisfying the above inequality. Let $Q_{\tilde{\pi}}$ be the transition matrix for this policy, whose rows are formed by the vectors $Q^{j,k}_{s,\pi^*(s)}$, and let $P_{\pi^*}$ be the transition matrix whose rows are formed by the vectors $P_{s,\pi^*(s)}$. The above implies
$$Q_{\tilde{\pi}}\, h^* \ge P_{\pi^*}\, h^* - \Delta\mathbf{1}.$$
We use this inequality along with the known relations between the gain and the bias of an optimal policy in communicating MDPs to obtain that the gain $\lambda(\tilde{\pi})$ of policy $\tilde{\pi}$ in MDP $\tilde{M}_k$ satisfies $\lambda(\tilde{\pi}) \ge \lambda^* - \Delta$ (details provided in the supplementary material), which proves the lemma statement since by optimality $\tilde{\lambda}_k \ge \lambda(\tilde{\pi})$.

Lemma 4.3 (Optimistic Sampling). Fix any vector $h \in \mathbb{R}^S$ such that $|h_i - h_{i'}| \le D$ for any $i, i'$, and any epoch k. Then, for every s, a, with probability $1 - \frac{\rho}{SA}$ there exists at least one j such that
$$(Q^{j,k}_{s,a})^{T} h \ge P_{s,a}^{T} h - O\Big( D\log^2(T/\rho)\sqrt{\tfrac{SA}{T}} \Big).$$

Lemma 4.4 (Deviation bound). With probability $1 - \rho$, for all epochs k, all samples j, and all s, a:
$$\max_{h \in [0,2D]^S} (Q^{j,k}_{s,a} - P_{s,a})^{T} h \le
\begin{cases}
\tilde{O}\Big( D\sqrt{\frac{\log(SAT/\rho)}{N^{\tau_k}_{s,a}}} + D\frac{S\log(SAT/\rho)}{N^{\tau_k}_{s,a}} \Big), & N^{\tau_k}_{s,a} > \eta, \\[2mm]
\tilde{O}\Big( D\sqrt{\frac{S\log(S)}{N^{\tau_k}_{s,a}}} + D\frac{S\log(SAT/\rho)}{N^{\tau_k}_{s,a}} \Big), & N^{\tau_k}_{s,a} \le \eta.
\end{cases}$$

5 Conclusions

We presented an algorithm inspired by posterior sampling that achieves near-optimal worst-case regret bounds for the reinforcement learning problem with communicating MDPs in a non-episodic, undiscounted average reward setting. Our algorithm may be viewed as a more efficient randomized version of the UCRL2 algorithm of Jaksch et al. [2010], with randomization via posterior sampling forming the key to the $\sqrt{S}$ factor improvement in the regret bound provided by our algorithm. Our analysis demonstrates that posterior sampling provides the right amount of uncertainty in the samples, so that an optimistic policy can be obtained without excessive over-estimation. While our work surmounts some important technical difficulties in obtaining worst-case regret bounds for posterior sampling based algorithms for communicating MDPs, the provided bound is tight in its dependence on S and A only for large T (specifically, for $T \ge S^5 A$). Other related results on tight worst-case regret bounds have a similar requirement of large T (Azar et al. [2017] produce an $\tilde{O}(\sqrt{HSAT})$ bound when $T \ge H^3 S^3 A$). Obtaining a cleaner worst-case regret bound that does not require such a condition remains an open question. Other important directions of future work include reducing the number of posterior samples required in every epoch from $\tilde{O}(S)$ to constant or logarithmic in S, and extensions to contextual and continuous state MDPs.

References

Yasin Abbasi-Yadkori and Csaba Szepesvari. Bayesian optimal control of smoothly parameterized systems: The lazy posterior sampling algorithm. arXiv preprint arXiv:1406.3926, 2014.

Milton Abramowitz and Irene A. Stegun. Handbook of mathematical functions: with formulas, graphs, and mathematical tables, volume 55. Courier Corporation, 1964.

Shipra Agrawal and Navin Goyal. Analysis of Thompson Sampling for the Multi-armed Bandit Problem. In Proceedings of the 25th Annual Conference on Learning Theory (COLT), 2012.

Shipra Agrawal and Navin Goyal. Thompson sampling for contextual bandits with linear payoffs. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013a.

Shipra Agrawal and Navin Goyal. Further Optimal Regret Bounds for Thompson Sampling. In AISTATS, pages 99–107, 2013b.
John Asmuth, Lihong Li, Michael L. Littman, Ali Nouri, and David Wingate. A Bayesian sampling approach to exploration in reinforcement learning. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 19–26. AUAI Press, 2009.

Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. arXiv preprint arXiv:1703.05449, 2017.

Peter L. Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35–42. AUAI Press, 2009.

Ronen I. Brafman and Moshe Tennenholtz. R-max — a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213–231, 2002.

Sébastien Bubeck and Che-Yu Liu. Prior-free and prior-dependent regret bounds for Thompson sampling. In Advances in Neural Information Processing Systems, pages 638–646, 2013.

Sébastien Bubeck, Nicolò Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.

Apostolos N. Burnetas and Michael N. Katehakis. Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1):222–255, 1997.

Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pages 2249–2257, 2011.

Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Advances in Neural Information Processing Systems, pages 2818–2826, 2015.

Raphaël Fonteneau, Nathan Korda, and Rémi Munos. An optimistic posterior sampling strategy for Bayesian reinforcement learning. In NIPS 2013 Workshop on Bayesian Optimization (BayesOpt2013), 2013.

Charles Miller Grinstead and James Laurie Snell. Introduction to probability. American Mathematical Society, 2012.

Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563–1600, 2010.

Sham Machandranath Kakade et al. On the sample complexity of reinforcement learning. PhD thesis, University College London, England, 2003.

Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson Sampling: An Optimal Finite Time Analysis. In International Conference on Algorithmic Learning Theory (ALT), 2012.

Michael J. Kearns and Satinder P. Singh. Finite-sample convergence rates for Q-learning and indirect algorithms. In Advances in Neural Information Processing Systems, pages 996–1002, 1999.

Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Multi-armed bandits in metric spaces. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 681–690. ACM, 2008.

Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning? arXiv preprint arXiv:1607.00215, 2016.

Ian Osband, Dan Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, pages 3003–3011, 2013.

Ian Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions. arXiv preprint arXiv:1402.0635, 2014.

Martin L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.

Daniel Russo and Benjamin Van Roy. Learning to Optimize Via Posterior Sampling. Mathematics of Operations Research, 39(4):1221–1243, 2014.
Daniel Russo and Benjamin Van Roy. An Information-Theoretic Analysis of Thompson Sampling. Journal of Machine Learning Research (to appear), 2015.

Yevgeny Seldin, François Laviolette, Nicolò Cesa-Bianchi, John Shawe-Taylor, and Peter Auer. PAC-Bayesian inequalities for martingales. IEEE Transactions on Information Theory, 58(12):7086–7093, 2012.

I. G. Shevtsova. An improvement of convergence rate estimates in the Lyapunov theorem. 82(3):862–864, 2010.

Alexander L. Strehl and Michael L. Littman. A theoretical analysis of model-based interval estimation. In Proceedings of the 22nd International Conference on Machine Learning, pages 856–863. ACM, 2005.

Alexander L. Strehl and Michael L. Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008.

Ambuj Tewari and Peter L. Bartlett. Optimistic linear programming gives logarithmic regret for irreducible MDPs. In Advances in Neural Information Processing Systems, pages 1505–1512, 2008.

William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
6,322
6,719
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results

Antti Tarvainen
The Curious AI Company
[email protected]

Harri Valpola
The Curious AI Company
[email protected]

Abstract

The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Using the same network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, better than Temporal Ensembling does with 1000 labels. We show that Mean Teacher is compatible with residual networks, and improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%. Our preliminary experiments also suggest a large improvement over the state of the art on semi-supervised ImageNet 2012.

1 Introduction

Deep learning has seen tremendous success in areas such as image and speech recognition. In order to learn useful abstractions, deep learning models require a large number of parameters, thus making them prone to over-fitting (Figure 1a). Moreover, adding high-quality labels to training data manually is often expensive. Therefore, it is desirable to use regularization methods that exploit unlabeled data effectively to reduce over-fitting in semi-supervised learning.

When a percept is changed slightly, a human typically still considers it to be the same object. Correspondingly, a classification model should favor functions that give consistent output for similar data points. One approach for achieving this is to add noise to the input of the model. To enable the model to learn more abstract invariances, the noise may be added to intermediate representations, an insight that has motivated many regularization techniques, such as Dropout [27]. Rather than minimizing the classification cost at the zero-dimensional data points of the input space, the regularized model minimizes the cost on a manifold around each data point, thus pushing decision boundaries away from the labeled data points (Figure 1b).

Since the classification cost is undefined for unlabeled examples, the noise regularization by itself does not aid in semi-supervised learning. To overcome this, the Γ model [20] evaluates each data point with and without noise, and then applies a consistency cost between the two predictions. In this case, the model assumes a dual role as a teacher and a student. As a student, it learns as before; as a teacher, it generates targets, which are then used by itself as a student for learning. Since the model itself generates targets, they may very well be incorrect. If too much weight is given to the generated targets, the cost of inconsistency outweighs that of misclassification, preventing the learning of new information.
Figure 1: A sketch of a binary classification task with two labeled examples (large blue dots) and one unlabeled example, demonstrating how the choice of the unlabeled target (black circle) affects the fitted function (gray curve). (a) A model with no regularization is free to fit any function that predicts the labeled training examples well. (b) A model trained with noisy labeled data (small dots) learns to give consistent predictions around labeled data points. (c) Consistency to noise around unlabeled examples provides additional smoothing. For the clarity of illustration, the teacher model (gray curve) is first fitted to the labeled examples, and then left unchanged during the training of the student model. Also for clarity, we will omit the small dots in figures d and e. (d) Noise on the teacher model reduces the bias of the targets without additional training. The expected direction of stochastic gradient descent is towards the mean (large blue circle) of individual noisy targets (small blue circles). (e) An ensemble of models gives an even better expected target. Both Temporal Ensembling and the Mean Teacher method use this approach.

In effect, the model suffers from confirmation bias (Figure 1c), a hazard that can be mitigated by improving the quality of the targets.

There are at least two ways to improve the target quality. One approach is to choose the perturbation of the representations carefully instead of barely applying additive or multiplicative noise. Another approach is to choose the teacher model carefully instead of barely replicating the student model. Concurrently to our research, Miyato et al. [15] have taken the first approach and shown that Virtual Adversarial Training can yield impressive results. We take the second approach and will show that it too provides significant benefits. To our understanding, these two approaches are compatible, and their combination may produce even better outcomes. However, the analysis of their combined effects is outside the scope of this paper.

Our goal, then, is to form a better teacher model from the student model without additional training. As the first step, consider that the softmax output of a model does not usually provide accurate predictions outside training data. This can be partly alleviated by adding noise to the model at inference time [4], and consequently a noisy teacher can yield more accurate targets (Figure 1d). This approach was used in Pseudo-Ensemble Agreement [2] and has lately been shown to work well on semi-supervised image classification [13, 22]. Laine & Aila [13] named the method the Π model; we will use this name for it, and their version of it forms the basis of our experiments.

The Π model can be further improved by Temporal Ensembling [13], which maintains an exponential moving average (EMA) prediction for each of the training examples. At each training step, all the EMA predictions of the examples in that minibatch are updated based on the new predictions. Consequently, the EMA prediction of each example is formed by an ensemble of the model's current version and those earlier versions that evaluated the same example. This ensembling improves the quality of the predictions, and using them as the teacher predictions improves results. However, since each target is updated only once per epoch, the learned information is incorporated into the training process at a slow pace.
The larger the dataset, the longer the span of the updates, and in the case of on-line learning, it is unclear how Temporal Ensembling can be used at all. (One could evaluate all the targets periodically more than once per epoch, but keeping the evaluation span constant would require $O(n^2)$ evaluations per epoch, where n is the number of training examples.)

2 Mean Teacher

To overcome the limitations of Temporal Ensembling, we propose averaging model weights instead of predictions. Since the teacher model is an average of consecutive student models, we call this the Mean Teacher method (Figure 2).

Figure 2: The Mean Teacher method. The figure depicts a training batch with a single labeled example. Both the student and the teacher model evaluate the input, applying noise ($\eta$, $\eta'$) within their computation. The softmax output of the student model is compared with the one-hot label using the classification cost and with the teacher output using the consistency cost. After the weights of the student model have been updated with gradient descent, the teacher model weights are updated as an exponential moving average of the student weights. Both model outputs can be used for prediction, but at the end of the training the teacher prediction is more likely to be correct. A training step with an unlabeled example would be similar, except no classification cost would be applied.

Averaging model weights over training steps tends to produce a more accurate model than using the final weights directly [18]. We can take advantage of this during training to construct better targets. Instead of sharing the weights with the student model, the teacher model uses the EMA weights of the student model. Now it can aggregate information after every step instead of every epoch. In addition, since the weight averages improve all layer outputs, not just the top output, the target model has better intermediate representations. These aspects lead to two practical advantages over Temporal Ensembling: First, the more accurate target labels lead to a faster feedback loop between the student and the teacher models, resulting in better test accuracy. Second, the approach scales to large datasets and on-line learning.

More formally, we define the consistency cost J as the expected distance between the prediction of the student model (with weights $\theta$ and noise $\eta$) and the prediction of the teacher model (with weights $\theta'$ and noise $\eta'$):
$$J(\theta) = \mathbb{E}_{x,\eta',\eta}\Big[ \big\| f(x, \theta', \eta') - f(x, \theta, \eta) \big\|^2 \Big].$$
The difference between the Π model, Temporal Ensembling, and Mean Teacher is how the teacher predictions are generated. Whereas the Π model uses $\theta' = \theta$, and Temporal Ensembling approximates $f(x, \theta', \eta')$ with a weighted average of successive predictions, we define $\theta'_t$ at training step t as the EMA of successive $\theta$ weights:
$$\theta'_t = \alpha\theta'_{t-1} + (1 - \alpha)\theta_t,$$
where $\alpha$ is a smoothing coefficient hyperparameter. An additional difference between the three algorithms is that the Π model applies training to $\theta'$ whereas Temporal Ensembling and Mean Teacher treat it as a constant with regards to optimization.

We can approximate the consistency cost function J by sampling noise $\eta, \eta'$ at each training step with stochastic gradient descent. Following Laine & Aila [13], we use mean squared error (MSE) as our consistency cost function. In the experiments section, we will also explore the use of other cost functions.
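To make the update rule concrete, here is a minimal PyTorch sketch of a single Mean Teacher training step. It is an illustration, not our actual implementation (see Appendix B.1): the tiny fully connected network, the Gaussian input noise, the batch composition, and the constant consistency weight are stand-ins for the 13-layer CNN, its augmentations, and the ramp-up described in Section 3.

```python
# Minimal Mean Teacher step: student trained on classification + consistency,
# teacher maintained as an EMA of the student's weights. Illustrative only.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)            # the teacher is never trained directly

optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

def ema_update(teacher, student, alpha=0.999):
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(alpha).add_(sp, alpha=1.0 - alpha)   # theta' <- a*theta' + (1-a)*theta

x = torch.randn(32, 20)                # toy batch; first 8 examples are labeled
y = torch.randint(0, 10, (8,))
consistency_weight = 1.0               # in practice ramped up from 0

# Student and teacher see independently noised inputs (eta, eta').
student_logits = student(x + 0.1 * torch.randn_like(x))
with torch.no_grad():
    teacher_logits = teacher(x + 0.1 * torch.randn_like(x))

class_loss = F.cross_entropy(student_logits[:8], y)
consistency_loss = F.mse_loss(F.softmax(student_logits, dim=1),
                              F.softmax(teacher_logits, dim=1))
loss = class_loss + consistency_weight * consistency_loss

optimizer.zero_grad()
loss.backward()
optimizer.step()
ema_update(teacher, student)           # EMA update after the gradient step
```

Note that the consistency term is computed over the whole batch, labeled and unlabeled alike, while the classification term uses only the labeled examples.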
3 Experiments

To test our hypotheses, we first replicated the Π model [13] in TensorFlow [1] as our baseline. We then modified the baseline model to use weight-averaged consistency targets. The model architecture is a 13-layer convolutional neural network (CNN) with three types of noise: random translations and horizontal flips of the input images, Gaussian noise on the input layer, and dropout applied within the network. We use mean squared error as the consistency cost and ramp up its weight from 0 to its final value during the first 80 epochs. The details of the model and the training procedure are described in Appendix B.1.

3.1 Comparison to other methods on SVHN and CIFAR-10

We ran experiments using the Street View House Numbers (SVHN) and CIFAR-10 benchmarks [16]. Both datasets contain 32x32 pixel RGB images belonging to ten different classes. In SVHN, each example is a close-up of a house number, and the class represents the identity of the digit at the center of the image. In CIFAR-10, each example is a natural image belonging to a class such as horses, cats, cars and airplanes. SVHN contains 73257 training samples and 26032 test samples. CIFAR-10 consists of 50000 training samples and 10000 test samples.

Tables 1 and 2 compare the results against recent state-of-the-art methods. All the methods in the comparison use a similar 13-layer CNN architecture. Mean Teacher improves test accuracy over the Π model and Temporal Ensembling on semi-supervised SVHN tasks. Mean Teacher also improves results on CIFAR-10 over our baseline Π model. The recently published version of Virtual Adversarial Training by Miyato et al. [15] performs even better than Mean Teacher on the 1000-label SVHN and the 4000-label CIFAR-10. As discussed in the introduction, VAT and Mean Teacher are complementary approaches. Their combination may yield better accuracy than either of them alone, but that investigation is beyond the scope of this paper.

Table 1: Error rate percentage on SVHN over 10 runs (4 runs when using all labels). We use exponential moving average weights in the evaluation of all our models. All the methods use a similar 13-layer CNN architecture. See Table 5 in the Appendix for results without input augmentation. All columns use the full set of 73257 images.

                           250 labels     500 labels     1000 labels    73257 labels
GAN [24]                                  18.44 ± 4.8    8.11 ± 1.3
Π model [13]                              6.65 ± 0.53    4.82 ± 0.17    2.54 ± 0.04
Temporal Ensembling [13]                  5.12 ± 0.13    4.42 ± 0.16    2.74 ± 0.06
VAT+EntMin [15]                                          3.86
Supervised-only            27.77 ± 3.18   16.88 ± 1.30   12.32 ± 0.95   2.75 ± 0.10
Π model (ours)             9.69 ± 0.92    6.83 ± 0.66    4.95 ± 0.26    2.50 ± 0.07
Mean Teacher               4.35 ± 0.50    4.18 ± 0.27    3.95 ± 0.19    2.50 ± 0.05

Table 2: Error rate percentage on CIFAR-10 over 10 runs (4 runs when using all labels). All columns use the full set of 50000 images.

                           1000 labels    2000 labels    4000 labels    50000 labels
GAN [24]                                                 18.63 ± 2.32
Π model [13]                                             12.36 ± 0.31   5.56 ± 0.10
Temporal Ensembling [13]                                 12.16 ± 0.31   5.60 ± 0.10
VAT+EntMin [15]                                          10.55
Supervised-only            46.43 ± 1.21   33.94 ± 0.73   20.66 ± 0.57   5.82 ± 0.15
Π model (ours)             27.36 ± 1.20   18.02 ± 0.60   13.20 ± 0.27   6.06 ± 0.11
Mean Teacher               21.55 ± 1.48   15.73 ± 0.31   12.31 ± 0.28   5.94 ± 0.15

Table 3: Error percentage over 10 runs on SVHN with extra unlabeled training data.

                  500 labels       500 labels       500 labels
                  73257 images     173257 images    573257 images
Π model (ours)    6.83 ± 0.66      4.49 ± 0.27      3.26 ± 0.14
Mean Teacher      4.18 ± 0.27      3.02 ± 0.16      2.46 ± 0.06
Figure 3: Smoothed classification cost (top) and classification error (bottom) of Mean Teacher and our baseline Π model on SVHN over the first 100000 training steps. In the upper row, the training classification costs are measured using only labeled data.

3.2 SVHN with extra unlabeled data

Above, we suggested that Mean Teacher scales well to large datasets and on-line learning. In addition, the SVHN and CIFAR-10 results indicate that it uses unlabeled examples efficiently. Therefore, we wanted to test whether we had reached the limits of our approach.

Besides the primary training data, SVHN also includes an extra dataset of 531131 examples. We picked 500 samples from the primary training set as our labeled training examples. We used the rest of the primary training set together with the extra training set as unlabeled examples. We ran experiments with Mean Teacher and our baseline Π model, and used either 0, 100000 or 500000 extra examples. Table 3 shows the results.

3.3 Analysis of the training curves

The training curves in Figure 3 help us understand the effects of using Mean Teacher. As expected, the EMA-weighted models (blue and dark gray curves in the bottom row) give more accurate predictions than the bare student models (orange and light gray) after an initial period.

Using the EMA-weighted model as the teacher improves results in the semi-supervised settings. There appears to be a virtuous feedback cycle of the teacher (blue curve) improving the student (orange) via the consistency cost, and the student improving the teacher via exponential moving averaging. If this feedback cycle is detached, the learning is slower, and the model starts to overfit earlier (dark gray and light gray).

Mean Teacher helps when labels are scarce. When using 500 labels (middle column), Mean Teacher learns faster, and continues training after the Π model stops improving. On the other hand, in the all-labeled case (left column), Mean Teacher and the Π model behave virtually identically.

Figure 4: Validation error on 250-label SVHN over four runs per hyperparameter setting and their means. In each experiment, we varied one hyperparameter, and used the evaluation run hyperparameters of Table 1 for the rest. The hyperparameter settings used in the evaluation runs are marked with a bold font weight. See the text for details.

Mean Teacher uses unlabeled training data more efficiently than the Π model, as seen in the middle column. On the other hand, with 500k extra unlabeled examples (right column), the Π model keeps improving for longer. Mean Teacher learns faster, and eventually converges to a better result, but the sheer amount of data appears to offset the Π model's worse predictions.

3.4 Ablation experiments

To assess the importance of various aspects of the model, we ran experiments on SVHN with 250 labels, varying one (or a few) hyperparameters at a time while keeping the others fixed.

Removal of noise (Figures 4(a) and 4(b)). In the introduction and Figure 1, we presented the hypothesis that the Π model produces better predictions by adding noise to the model on both sides. But after the addition of Mean Teacher, is noise still needed? Yes. We can see that either input augmentation or dropout is necessary for passable performance. On the other hand, input noise does not help when augmentation is in use. Dropout on the teacher side provides only a marginal benefit over just having it on the student side, at least when input augmentation is in use.
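The next two ablations vary the EMA decay and the consistency cost weight; for reference, here is a small sketch of the schedules used in our evaluation runs. The Gaussian-shaped ramp $e^{-5(1-x)^2}$ follows Laine & Aila [13], and the two-phase EMA decay is the one described under "Sensitivity to EMA decay" below; the exact constants live in Appendix B.1, so treat these values as illustrative.

```python
# Sketch of the training schedules: consistency weight ramped up over the
# first 80 epochs, EMA decay switched from 0.99 to 0.999 after the ramp-up.
import math

def consistency_weight(epoch, max_weight, rampup_epochs=80):
    x = min(epoch / rampup_epochs, 1.0)
    return max_weight * math.exp(-5.0 * (1.0 - x) ** 2)   # ramps from ~0 to max_weight

def ema_decay(epoch, rampup_epochs=80):
    return 0.99 if epoch < rampup_epochs else 0.999       # forget fast early, slowly later

for epoch in (0, 20, 40, 80, 120):
    print(epoch, round(consistency_weight(epoch, 100.0), 3), ema_decay(epoch))
```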
Sensitivity to EMA decay and consistency weight (Figures 4(c) and 4(d)). The essential hyperparameters of the Mean Teacher algorithm are the consistency cost weight and the EMA decay $\alpha$. How sensitive is the algorithm to their values? We can see that in each case the good values span roughly an order of magnitude, and outside these ranges the performance degrades quickly. Note that EMA decay $\alpha = 0$ makes the model a variation of the Π model, although a somewhat inefficient one, because the gradients are propagated through only the student path. Note also that in the evaluation runs we used EMA decay $\alpha = 0.99$ during the ramp-up phase, and $\alpha = 0.999$ for the rest of the training. We chose this strategy because the student improves quickly early in the training, and thus the teacher should forget the old, inaccurate student weights quickly. Later the student improvement slows, and the teacher benefits from a longer memory.

Decoupling classification and consistency (Figure 4(e)). The consistency to teacher predictions may not necessarily be a good proxy for the classification task, especially early in the training. So far our model has strongly coupled these two tasks by using the same output for both. How would decoupling the tasks change the performance of the algorithm? To investigate, we changed the model to have two top layers and produce two outputs. We then trained one of the outputs for classification and the other for consistency. We also added a mean squared error cost between the output logits, and then varied the weight of this cost, allowing us to control the strength of the coupling. Looking at the results (reported using the EMA version of the classification output), we can see that the strongly coupled version performs well and the too loosely coupled versions do not. On the other hand, a moderate decoupling seems to have the benefit of making the consistency ramp-up redundant.

Changing from MSE to KL-divergence (Figure 4(f)). Following Laine & Aila [13], we use mean squared error (MSE) as our consistency cost function, but KL-divergence would seem a more natural choice. Which one works better? We can see that MSE works better than KL-divergence or their intermediate versions. See Appendix C for details of the cost function family we used for this experiment and for our intuition about why MSE may perform so well.

3.5 Mean Teacher with residual networks on CIFAR-10 and ImageNet

In the experiments above, we used a traditional 13-layer convolutional architecture, which has the benefit of making comparisons to earlier work easy. In order to explore the effect of the model architecture, we ran experiments using a 12-block Residual Network [8] with Shake-Shake regularization [5] on CIFAR-10. The details of the model and the training procedure are described in Appendix B.2. As shown in Table 4, the results improve remarkably with the better network architecture.

Table 4: Error rate percentage of ResNet Mean Teacher. We report the test results from 10 runs for CIFAR-10 and validation results from 2 runs for ImageNet.

                        CIFAR-10           CIFAR-10           ImageNet 2012
                        1000 labels        4000 labels        128000 labels
                        50000 images       50000 images       1280000 images
State of the art                           10.55 [15]         35.24 ± 0.90 [19]
CNN Mean Teacher        21.55 ± 1.48       12.31 ± 0.28
ResNet Mean Teacher     10.08 ± 0.41       6.28 ± 0.15        19.76 ± 0.05

To test the scaling of the method to realistic images, we ran experiments on the ImageNet 2012 dataset [21] with 10% of the labels.
We used a residual network with squeeze-and-excitation blocks [10] and Shake-Shake regularization, and saw a clear improvement over the state of the art. However, the ImageNet results come with caveats: as the test set is not publicly available, we measured the results using the validation set. We also used a very small 8-block architecture, and probably did not use the best hyperparameters. Regardless, the results suggest that Mean Teacher is useful also on large natural images.

4 Related work

Noise regularization of neural networks was proposed by Sietsma & Dow [25]. More recently, several types of perturbations have been shown to regularize intermediate representations effectively in deep learning. Adversarial Training [6] changes the input slightly to give predictions that are as different as possible from the original predictions. Dropout [27] zeroes random dimensions of layer outputs. Dropconnect [30] generalizes Dropout by zeroing individual weights instead of activations. Stochastic Depth [11] drops entire layers of residual networks, and Swapout [26] generalizes Dropout and Stochastic Depth. Shake-shake regularization [5] duplicates residual paths and samples a linear combination of their outputs independently during forward and backward passes.

Several semi-supervised methods are based on training the model predictions to be consistent under perturbation. The Denoising Source Separation framework (DSS) [28] uses denoising of latent variables to learn their likelihood estimate. The Γ variant of the Ladder Network [20] implements DSS with a deep learning model for classification tasks. It produces noisy student predictions and clean teacher predictions, and applies a denoising layer to predict the teacher predictions from the student predictions. The Π model [13] improves the model by removing the explicit denoising layer and applying noise also to the teacher predictions. Similar methods had been proposed already earlier for linear models [29] and deep learning [2]. Virtual Adversarial Training [15] is similar to the Π model but uses adversarial perturbation instead of independent noise.

The idea of a teacher model training a student is related to model compression [3] and distillation [9]. The knowledge of a complicated model can be transferred to a simpler model by training the simpler model with the softmax outputs of the complicated model. The softmax outputs contain more information about the task than the one-hot outputs, and the requirement of representing this knowledge regularizes the simpler model. Besides its use in model compression, distillation can be used to harden trained models against adversarial attacks [17]. The difference between distillation and consistency regularization is that distillation is performed after training whereas consistency regularization is performed at training time.

Consistency regularization can be seen as a form of label propagation [32]. Training samples that resemble each other are more likely to belong to the same class. Label propagation takes advantage of this assumption by pushing label information from each example to examples that are near it according to some metric. Label propagation can also be applied to deep learning models [31]. However, ordinary label propagation requires a predefined distance metric in the input space. In contrast, consistency targets employ a learned distance metric implied by the abstract representations of the model. As the model learns new features, the distance metric changes to accommodate these features.
Therefore, consistency targets guide learning in two ways. On the one hand, they spread the labels according to the current distance metric, and on the other hand, they help the network learn a better distance metric.

5 Conclusion

Temporal Ensembling, Virtual Adversarial Training and other forms of consistency regularization have recently shown their strength in semi-supervised learning. In this paper, we propose Mean Teacher, a method that averages model weights to form a target-generating teacher model. Unlike Temporal Ensembling, Mean Teacher works with large datasets and on-line learning. Our experiments suggest that it improves the speed of learning and the classification accuracy of the trained network. In addition, it scales well to state-of-the-art architectures and large image sizes.

The success of consistency regularization depends on the quality of teacher-generated targets. If the targets can be improved, they should be. Mean Teacher and Virtual Adversarial Training represent two ways of exploiting this principle. Their combination may yield even better targets. There are probably additional methods to be uncovered that improve targets and trained models even further.

Acknowledgements

We thank Samuli Laine and Timo Aila for fruitful discussions about their work, and Phil Bachman and Colin Raffel for corrections to the pre-print version of this paper. We also thank everyone at The Curious AI Company for their help, encouragement, and ideas.

References

[1] Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015.

[2] Bachman, Philip, Alsharif, Ouais, and Precup, Doina. Learning with Pseudo-Ensembles. arXiv:1412.4864 [cs, stat], December 2014.

[3] Buciluǎ, Cristian, Caruana, Rich, and Niculescu-Mizil, Alexandru. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 535–541. ACM, 2006.

[4] Gal, Yarin and Ghahramani, Zoubin. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1050–1059, 2016.

[5] Gastaldi, Xavier. Shake-Shake regularization. arXiv:1705.07485 [cs], May 2017.

[6] Goodfellow, Ian J., Shlens, Jonathon, and Szegedy, Christian. Explaining and Harnessing Adversarial Examples. arXiv:1412.6572, December 2014.

[7] Guo, Chuan, Pleiss, Geoff, Sun, Yu, and Weinberger, Kilian Q. On Calibration of Modern Neural Networks. arXiv:1706.04599 [cs], June 2017.

[8] He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs], December 2015.

[9] Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the Knowledge in a Neural Network. arXiv:1503.02531 [cs, stat], March 2015.
[10] Hu, Jie, Shen, Li, and Sun, Gang. Squeeze-and-Excitation Networks. arXiv:1709.01507 [cs], September 2017.
[11] Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian. Deep Networks with Stochastic Depth. arXiv:1603.09382 [cs], March 2016.
[12] Kingma, Diederik and Ba, Jimmy. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], December 2014.
[13] Laine, Samuli and Aila, Timo. Temporal Ensembling for Semi-Supervised Learning. arXiv:1610.02242 [cs], October 2016.
[14] Maas, Andrew L., Hannun, Awni Y., and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
[15] Miyato, Takeru, Maeda, Shin-ichi, Koyama, Masanori, and Ishii, Shin. Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning. arXiv:1704.03976 [cs, stat], April 2017.
[16] Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[17] Papernot, Nicolas, McDaniel, Patrick, Wu, Xi, Jha, Somesh, and Swami, Ananthram. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. arXiv:1511.04508 [cs, stat], November 2015.
[18] Polyak, B. T. and Juditsky, A. B. Acceleration of Stochastic Approximation by Averaging. SIAM J. Control Optim., 30(4):838–855, July 1992. ISSN 0363-0129. doi: 10.1137/0330046.
[19] Pu, Yunchen, Gan, Zhe, Henao, Ricardo, Yuan, Xin, Li, Chunyuan, Stevens, Andrew, and Carin, Lawrence. Variational Autoencoder for Deep Learning of Images, Labels and Captions. arXiv:1609.08976 [cs, stat], September 2016.
[20] Rasmus, Antti, Berglund, Mathias, Honkala, Mikko, Valpola, Harri, and Raiko, Tapani. Semi-supervised Learning with Ladder Networks. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 28, pp. 3546–3554. Curran Associates, Inc., 2015.
[21] Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575 [cs], September 2014.
[22] Sajjadi, Mehdi, Javanmardi, Mehran, and Tasdizen, Tolga. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 1163–1171. Curran Associates, Inc., 2016.
[23] Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–909, 2016.
[24] Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2226–2234, 2016.
[25] Sietsma, Jocelyn and Dow, Robert JF. Creating artificial neural networks that generalize. Neural Networks, 4(1):67–79, 1991.
[26] Singh, Saurabh, Hoiem, Derek, and Forsyth, David. Swapout: Learning an ensemble of deep architectures. arXiv:1605.06465 [cs], May 2016.
[27] Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res., 15(1):1929–1958, January 2014. ISSN 1532-4435.
[28] Särelä, Jaakko and Valpola, Harri. Denoising Source Separation. Journal of Machine Learning Research, 6(Mar):233–272, 2005. ISSN 1533-7928.
[29] Wager, Stefan, Wang, Sida, and Liang, Percy. Dropout Training as Adaptive Regularization. arXiv:1307.1493 [cs, stat], July 2013.
[30] Wan, Li, Zeiler, Matthew, Zhang, Sixin, LeCun, Yann, and Fergus, Rob. Regularization of Neural Networks using DropConnect. In Proc. ICML, pp. 1058–1066, 2013.
[31] Weston, Jason, Ratle, Frédéric, Mobahi, Hossein, and Collobert, Ronan. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639–655. Springer, 2012.
[32] Zhu, Xiaojin and Ghahramani, Zoubin. Learning from labeled and unlabeled data with label propagation. Technical report, Carnegie Mellon University, 2002.
Object-Based Analog VLSI Vision Circuits

Christof Koch, Computation and Neural Systems, California Institute of Technology, Pasadena, CA
Bimal Mathur, Shih-Chii Liu, Rockwell International Science Center, Thousand Oaks, CA
John G. Harris, MIT Artificial Intelligence Laboratory, Cambridge, MA
Jin Luo, Massimo Sivilotti, Tanner Research, Inc., Pasadena, CA

Abstract

We describe two successfully working, analog VLSI vision circuits that move beyond pixel-based early vision algorithms. One circuit, implementing the dynamic wires model, provides for dedicated lines of communication among groups of pixels that share a common property. The chip uses the dynamic wires model to compute the arclength of visual contours. Another circuit labels all points inside a given contour with one voltage and all others with another voltage. Its behavior is very robust, since small breaks in contours are automatically sealed, providing for Figure-Ground segregation in a noisy environment. Both chips are implemented using networks of resistors and switches and represent a step towards object-level processing, since a single voltage value encodes the property of an ensemble of pixels.

1 CONTOUR-LENGTH CHIP

Contour length computation is useful for further processing such as structural saliency (Shaashua and Ullman, 1988), which is thought to be an important stage before object recognition. This computation is impossible on an analog chip if we are restricted to pure pixel- or image-based operations. The dynamic wire methodology provides dedicated lines of communication among groups of pixels of an image which share common properties (Liu and Harris, 1992). In simple applications, object regions can be grouped together to compute the area or the center of mass of each object. Alternatively, object boundaries may be used to compute curvature or contour length. These ideas are not limited to sets of simple electrical wires; resistive networks can also be configured on the fly. The problem of smoothing object contours using resistive dynamic wires has been previously studied (Liu and Harris, 1992).

In the contour-length application, pixels along image contours are electrically connected by a reconfigurable dynamic wire. The first step of processing requires that each contour choose an arbitrary but unique leader pixel. The top of Fig. 2 shows several examples of contours and indicates which pixels were chosen as leaders by the chip. The leader is responsible for connecting a shunting resistor between the shared dynamic wire and ground. If each pixel on the contour supplies a constant amount of current to the dynamic wire, all of the current must flow through the shunting resistor. Therefore, the voltage on the wire will encode the contour length. Fig. 1 shows the linear relationship between the measured voltage and the contour length. The bottom half of Fig. 2 shows the length of several example contours using an intensity coding; the brighter contours indicate a higher voltage and therefore a longer contour. The contour-length chip was fabricated through MOSIS using 2 µm CMOS technology. The prototype 2×2 mm² chip contains a 7×7 pixel array.

Figure 1: Plot of measured voltage vs. contour length from 30 different contours scanned into the contour-length chip. The voltage is a linear function of contour length.
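To make the voltage readout concrete, the following is a hypothetical software sketch of the dynamic-wire length encoding: every pixel of a contour injects a fixed current into a shared wire, all of that current flows through the leader's shunting resistor, and Ohm's law turns pixel count into voltage. The current and resistance values below are made-up illustrative parameters, not the chip's actual ones.

```python
import numpy as np
from scipy.ndimage import label

def contour_lengths_to_voltages(contour_image, i_pixel=1e-6, r_shunt=1e5):
    # Label the 8-connected contours, then compute V = N * i_pixel * r_shunt
    # per contour, where N is the number of pixels on the contour.
    labels, n = label(contour_image, structure=np.ones((3, 3)))
    return {k: int((labels == k).sum()) * i_pixel * r_shunt
            for k in range(1, n + 1)}

# Two contours of lengths 4 and 8 pixels read out as roughly 0.4 V and 0.8 V.
img = np.zeros((7, 7), dtype=int)
img[1, 1:5] = 1
img[4:6, 1:5] = 1
print(contour_lengths_to_voltages(img))
```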
Figure 2: Four binary contour images were scanned into the contour-length chip and are shown in the top figure. The highlighted pixel in each contour was chosen by the chip to be the leader. The bottom figure shows the measured voltages (indicated by intensity) from the contour-length chip for the four images. Since the intensity of each pixel encodes its length, the longer contours are brighter.

The most challenging aspect of the design of the contour-length chip is the circuitry to uniquely select a pixel from each contour to be the leader. The leader is selected by entering all the pixels along each contour in a competition. The winner of this competition will be the leader. This competition requires each node to charge up its own capacitor once a reset line has been triggered. The first node that charges its capacitor above the trip point of a digital inverter will pull down a global precharge wire which connects all the pixels along the contour. This wire will in turn latch the states of the winner and losers. One of the pixels will normally toggle first because of the inherent offsets and component mismatches in silicon.

2 FIGURE-GROUND CHIP

Ullman (1984) proposed that a visual routine is used in human vision to determine if a specified point in the visual field is inside or outside of one (or more) closed visual contours. We describe such a chip that labels all points inside a given, possibly incomplete and broken, contour. We assume that the presence of an edge in the image causes switches at the corresponding grid point within a rectangular resistive network to open (Fig. 3). A closed edge contour will then correspond to a series of open switches on this grid. We assume that the visual contour will always encompass the central grid point in the array. At this point, the resistive grid is connected to the battery Vfig, while the periphery of the array is grounded to Vgnd. If the voltage at all other grid points is left floating and the contour is complete, that is, the central grid point is completely isolated from the periphery of the chip by a series of open grid points, the voltage at all points inside the contour rises to Vfig, while the voltage at grid points outside the contour will settle to Vgnd. Thus, the figure will be labeled by one voltage level and ground by another.

Contours in real images are frequently incomplete, but instead have broken segments of one or more pixels. This will enable the current to flow through these holes in the contour, smearing out the voltage level between inside and outside. We exploit a property of Mead's (1989) HRes circuit, used to implement the resistances, to achieve contour completion. While the current flowing through HRes is linear in the voltage gradient for small voltage differences, it saturates for large voltage gradients. At those locations where the contour is broken, the saturating resistances limit the current flow, preventing the voltage profile from being smoothed out.

Figure 4 shows the responses of the Figure-Ground chip to different input patterns collected with a fixed bias: Vfig = 3.5 V and Vgnd = 2 V. The two-dimensional data is presented as pairs of images.
The input patterns are located on the left, while the corresponding voltage outputs are presented next to the input on the right. The black-white patterns are used to represent the binary input data encoding object boundaries. Thus, at all locations marked in black, the associated switches shown in Fig. 3a are opened. The gray scale on the right denotes output voltage levels, where the darkest value corresponds to Vfig and the brightest to Vgnd. The center pixel of the view field is always set to Vfig. Notice that at every node where a boundary input signal (in black) appears and the switches are opened, the output voltage at that node is tied to Vgnd. This can be seen best in (e; white outline).

Figure 3: (a) The Figure-Ground network is made up of resistors and switches. The input to the chip is a binary edge map. At every grid point in the rectangular array where edges have been found, four switches are opened, isolating that node from its four neighbors (the shaded edge contour corresponds to a series of isolated nodes). We assume that the central point in the array is always enclosed by the contour. This point is connected to a voltage source Vfig, while the periphery is connected to the voltage Vgnd. If the contour is unbroken, the voltage at each interior point will then rise to Vfig, while all outside grid points will settle to Vgnd. Thus, the object is rapidly segregated from the background. If the contour is not complete, the saturating resistors (indicated with simple resistors) will limit the current flowing through these holes in the contour and partially seal off the boundary. (b) A conceptual view of how an object (figure) is segregated from the background in the two-dimensional view field, in terms of two distinct voltage levels (Vfig labels the object and Vgnd labels the background). The circuit has 48 by 48 nodes on a 4.6 by 6.8 mm² die and was implemented using MOSIS 2 µm CMOS technology.

Figure 4: (a) The input consists of a completely enclosed box. The network is therefore broken into two isolated segments, the inside and the outside of the box, which are labeled by two very different voltage values, Vfig and Vgnd. (b) The object boundary has a break equal to one pixel at the center of the left and right edges. Due to the large voltage difference across these two leaks, the saturated horizontal resistances, HRes, saturate, thereby helping to "seal" off these breaks using a very simple algorithm. (c) The width of the breaks in the contour increases to three pixels each. Yet HRes still acts to effectively seal the two holes, and the "Figure" is segregated from the "Surround". (d) The width of the breaks increases to five pixels each. Due to the much smaller voltage gradient across this wider gap in the contour, the voltage spreads outside the figure. (e) A total of four breaks, each five pixels wide, prevents the "Figure" from being segregated. The system cannot decide whether a single object with wide breaks at its sides or four separate objects are present.

To evaluate the ability of our circuit to perform Figure-Ground segregation in the presence of breaks in the contour, more and wider breaks are introduced into a simple square contour.
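The following is a software sketch of the network in Figure 3, assuming unit conductances and a crude current-saturation rule standing in for HRes; the component values, update rule, and iteration count are illustrative choices of ours, not the chip's parameters.

```python
import numpy as np

def figure_ground(edges, v_fig=3.5, v_gnd=2.0, g=1.0, i_sat=0.05,
                  eta=0.2, n_iters=2000):
    # Relax the resistive grid: every horizontal/vertical resistor carries
    # i = clip(g * dV, -i_sat, +i_sat), mimicking HRes saturation; a switch
    # is open whenever either of its endpoints lies on the contour.
    edges = np.asarray(edges, dtype=bool)
    h, w = edges.shape
    v = np.full((h, w), v_gnd)
    v[h // 2, w // 2] = v_fig
    for _ in range(n_iters):
        total = np.zeros_like(v)  # net current flowing into each node
        for dy, dx in [(0, 1), (1, 0)]:
            a, b = v[:h - dy, :w - dx], v[dy:, dx:]
            i = np.clip(g * (b - a), -i_sat, i_sat)
            i = i * ~(edges[:h - dy, :w - dx] | edges[dy:, dx:])  # open switches
            total[:h - dy, :w - dx] += i
            total[dy:, dx:] -= i
        v += eta * total
        v[h // 2, w // 2] = v_fig                         # center tied to Vfig
        v[0, :] = v[-1, :] = v[:, 0] = v[:, -1] = v_gnd   # grounded periphery
    return v
```

With a closed square contour around the center, the interior settles near v_fig and the exterior near v_gnd; with small breaks, the clipped currents limit the leakage, qualitatively reproducing the sealing behavior of panels (b) and (c) in Figure 4.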
Figure 5: The Figure-Ground response to a noisy and incomplete contour outlining a hand (the binary image shown in (a) is scanned in from off-chip). The output voltage is shown as intensity in (b) and as a 3-D plot in (c). The center node is tied to 3.5 V and marked as black in (c). The shaded area labels all pixels whose voltage is above 2.4 V. Notice the voltage decay along the little finger, due to an incomplete contour at the finger tip.

The box and break points on the sides are center-row symmetrical, and the breaks are respectively one, three and five pixels wide. In (e), two additional five-pixel-wide breaks have been included. For small enough breaks, our circuit has excellent boundary-completion capabilities. This is important for machine vision, since real images rarely have complete boundaries. The performance of the chip is illustrated in Fig. 4. If the contour is unbroken, the voltage inside the figure rises to Vfig, segregating it from the surround. If a small gap appears in the contour, it can be partially sealed off by the action of the saturating resistance HRes, which limits the current flowing through this gap, inhibiting full voltage equalization from occurring. As the break in the contour becomes larger, the voltage gradient across the illusory contour between the upper and the lower part of the figure becomes smaller and smaller. If HRes is set to a low conductance, the gradient becomes larger again (Fig. 5c); now, however, the chip fails to discriminate between very small and large gaps. Note that inside and outside are strictly defined only for a closed contour. Thus, it is somewhat arbitrary at what distance two edges are considered to be part of the same or separate contours (e.g., Fig. 5). If the output voltage is thresholded at 3.0 V (in the case of Fig. 5b), the contour with one or two pixel breaks would be considered a single Figure, while the two larger breaks would not be.

3 CONCLUSION

Most analog vision chips are restricted to work either at the local, pixel level or at the global, image level. The dynamic wire and figure-ground chips discussed in this paper allow data-dependent neighborhoods to form. With these configured neighborhoods, analog chips can now perform object-level processing.

Acknowledgements

This work is supported by the National Science Foundation, the Office of Naval Research and the Rockwell International Science Center. We thank MOSIS for all chip fabrication. JGH is supported by an NSF postdoctoral fellowship.

References

Liu, S. and Harris, J.G. (1992), Dynamic wires: an analog VLSI model for object processing, International Journal of Computer Vision, 8, pp. 231–239.
Luo, J., Koch, C. and Mathur, B. (1992), Figure-Ground segregation using an analog VLSI chip, IEEE Micro, Vol. 12, pp. 46–57.
Shaashua, A. and Ullman, S. (1988), Structural saliency: The detection of globally salient structures using a locally connected network. In Proceedings of the IEEE Computer Vision and Pattern Recognition Conference.
Ullman, S. (1984), Visual routines, Cognition, Vol. 18, pp. 97–159.
Matching neural paths: transfer from recognition to correspondence search

Nikolay Savinov¹, Lubor Ladicky¹, Marc Pollefeys¹,²
¹Department of Computer Science at ETH Zurich, ²Microsoft
{nikolay.savinov,lubor.ladicky,marc.pollefeys}@inf.ethz.ch

Abstract

Many machine learning tasks require finding per-part correspondences between objects. In this work we focus on low-level correspondences, a highly ambiguous matching problem. We propose to use a hierarchical semantic representation of the objects, coming from a convolutional neural network, to solve this ambiguity. Training it for low-level correspondence prediction directly might not be an option in some domains where the ground-truth correspondences are hard to obtain. We show how transfer from recognition can be used to avoid such training. Our idea is to mark parts as "matching" if their features are close to each other at all levels of the convolutional feature hierarchy (neural paths). Although the overall number of such paths is exponential in the number of layers, we propose a polynomial algorithm for aggregating all of them in a single backward pass. The empirical validation is done on the task of stereo correspondence and demonstrates that we achieve competitive results among the methods which do not use labeled target domain data.

1 Introduction

Finding per-part correspondences between objects is a long-standing problem in machine learning. The level at which correspondences are established can go as low as pixels for images or millisecond timestamps for sound signals. Typically, it is highly ambiguous to match at such a low level: a pixel or a timestamp just does not contain enough information to be discriminative, and many false positives will follow.

A hierarchical semantic representation could help to solve the ambiguity: we could choose the low-level match which also matches at the higher levels. For example, a car contains a wheel which contains a bolt. If we want to check if this bolt matches the bolt in another view of the car, we should check if the wheel and the car match as well.

One possible hierarchical semantic representation could be computed by a convolutional neural network. The features in such a network are composed in a hierarchical manner: the lower-level features are used to compute higher-level features by applying convolutions, max-poolings and non-linear activation functions on them. Nevertheless, training such a convolutional neural network for correspondence prediction directly (e.g., [25], [2]) might not be an option in some domains where the ground-truth correspondences are hard and expensive to obtain. This raises the question of scalability of such approaches and motivates the search for methods which do not require training correspondence data.

To address the training data problem, we could transfer the knowledge from the source domain where the labels are present to the target domain where no labels or few labeled data are present. The most common form of transfer is from classification tasks. Its promise is two-fold. First, classification labels are among the easiest to obtain, as labeling is a natural task for humans; this has made it possible to create huge recognition datasets like ImageNet [18]. Second, the features from the low to mid levels have been shown to transfer well to a variety of tasks [22], [3], [15].
Although there has been huge progress in transfer from classification to detection [7], [17], [19], [16], segmentation [12], [1] and other semantic reasoning tasks like single-image depth prediction [4], the transfer to correspondence search has been limited [13], [10], [8].

We propose a general solution to unsupervised transfer from recognition to correspondence search at the lowest level (pixels, sound millisecond timestamps). Our approach is to match paths of activations coming from a convolutional neural network applied on the two objects to be matched. More precisely, to establish matching at the lowest level, we require the features to match at all the different levels of the convolutional feature hierarchy. These different-level features form paths. One such path consists of the neural activations reachable from the lowest-level feature to the highest-level feature in the network topology (in other words, the lowest-level feature lies in the receptive field of the highest-level one). Since every lowest-level feature belongs to many paths, we do voting based on all of them. Although the overall number of such paths is exponential in the number of layers and thus infeasible to compute naively, we prove that the voting is possible in polynomial time in a single backward pass through the network. The algorithm is based on dynamic programming and is similar to the backward pass for gradient computation in the neural network.

Empirical validation is done on the task of stereo correspondence on two datasets: KITTI 2012 [6] and KITTI 2015 [14]. We quantitatively show that our method is competitive among the methods which do not require labeled target domain data. We also qualitatively show that even dramatic changes in low-level structure can be handled reasonably by our method due to the robustness of the recognition hierarchy: we apply different style transfers [5] to corresponding images in KITTI 2015 and still successfully find correspondences.

2 Notation

Our method is generally applicable to cases where the input data has a multi-dimensional grid topology layout. We will assume input objects o to be from the set of B-dimensional grids Ω ⊂ R^B and run convolutional neural networks on those grids. The per-layer activations from those networks will be contained in the set of (B + 1)-dimensional grids Λ ⊂ R^(B+1). Both the input data and the activations will be indexed by a (B + 1)-dimensional vector x = (x, y, …, c) ∈ N^(B+1), where x is a column index, y is a row index, etc., and c ∈ {1, …, C} is the channel index (we will assume C = 1 for the input data, which is a non-restrictive assumption, as we will explain later). We will search for correspondences between those grids, thus our goal will be to estimate shifts d ∈ D ⊂ Z^(B+1) for all elements in the grid. The choice of the shift set D is task-dependent. For example, for sound B = 1 and only 1D shifts can be considered. For images, B = 2 and D could be a set of 1D shifts (usually called a stereo task) or a set of 2D shifts (usually called an optical flow task).

In this work, we will be dealing with convolutional neural network architectures consisting of convolutions, max-poolings and non-linear activation functions (one example of such an architecture is a VGG-net [20], if we omit the softmax, which we will not use for the transfer). We assume every convolutional layer to be followed by a non-linear activation function throughout the paper and will not specify those functions explicitly.
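To pin down the notation, here is a tiny illustrative snippet instantiating the grids and the shift set; the sizes and values are arbitrary examples of ours, not taken from the paper.

```python
import numpy as np

# A toy instance of the notation: a B = 2 input grid o with C = 1, a stack of
# (B + 1)-dimensional activation grids indexed by (x, y, c), and a shift set D.
B = 2
o = np.random.rand(8, 8, 1)                # input object, channel index c = 0
activations = [np.random.rand(8, 8, 64),   # layer-1 activations
               np.random.rand(4, 4, 128)]  # layer-2, after one 2x2 max-pooling
D = [(d, 0, 0) for d in range(5)]          # horizontal 1D shifts: the stereo case
```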
The computational graph of these architectures is a directed acyclic graph G = {A, E}, where A = {a_1, …, a_|A|} is a set of nodes corresponding to neuron activations (|A| denotes the size of this set) and E = {e_1, …, e_|E|} is a set of arcs corresponding to computational dependencies (|E| denotes the size of this set). Each arc is represented as a tuple (a_i, a_j), where a_i is the input (origin) and a_j is the output (endpoint). The node set consists of disjoint layers, A = ∪_{ℓ=0}^{L} A_ℓ. The arcs are only allowed to go from the previous layer to the next one. We will use the notation A_ℓ(x) for the node in the ℓ-th layer at position x; in(x_ℓ) for the set of origins x_{ℓ−1} of arcs entering layer ℓ at position x_ℓ of the reference object; and x_{ℓ+1} ∈ out(x_ℓ) for the set of endpoints of arcs exiting layer ℓ at position x_ℓ of the reference object. Let f_ℓ ∈ F = {maxpool, conv} be the mathematical operator which corresponds to the forward computation in layer ℓ as a ← f_ℓ(in(a)), a ∈ A_ℓ (with a slight abuse of notation, we use a both for the nodes in the computational graph and for the activation values computed in those nodes).

Figure 1: Four siamese paths are shown. Two of them (red) have the same origin and support the hypothesis of the shift d = 3 for this origin. The other two (green and pink) have different origins and support the hypotheses d = 3 and d = 2 for their respective origins. (In the diagram, the reference and searched graphs are stacked as input, convolution, max-pooling, convolution, max-pooling layers; for d = 3 the per-layer shifts evolve as k_0 = k_1 = 3, k_2 = k_3 = 1, k_4 = 0.)

3 Correspondence via path matching

We will consider two objects, reference o ∈ Ω and searched o′ ∈ Ω, for which we want to find correspondences. After applying a CNN on them, we get graphs G and G′ of activations. The goal is to establish correspondences between the input-data layers A_0 and A′_0. That is, every cell A_0(x) in the reference object o ∈ Ω has a certain shift d ∈ D in the searched object o′ ∈ Ω, and we want to estimate d. Here comes the cornerstone idea of our method: we establish the matching of A_0(x) with A′_0(x − d) for a shift d if there is a pair of "parallel" paths (we call this pair a siamese path), originating at those nodes and ending at the last layers A_L, A′_L, which match. This pair of paths must have the same spatial shift with respect to each other at all layers, up to subsampling, and go through the same feature channels with respect to each other. We take the subsampling into account by the per-layer functions

$$k_0(\mathbf{d}) = \mathbf{d}, \qquad k_\ell(\mathbf{d}) = \pi_\ell(k_{\ell-1}(\mathbf{d})), \qquad \pi_\ell(\mathbf{d}) = \left\lfloor \mathbf{d} / q_\ell \right\rfloor, \qquad \ell = 1, \ldots, L, \tag{1}$$

where k_ℓ(d) is how the zero-layer shift d transforms at layer ℓ and q_ℓ is the ℓ-th layer spatial subsampling factor (note that rounding and division on vectors are done element-wise). Then a siamese path P can be represented as

$$P = (p, p'), \qquad p = \left(A_0(\mathbf{x}^P_0), \ldots, A_L(\mathbf{x}^P_L)\right), \qquad p' = \left(A'_0(\mathbf{x}^P_0 - k_0(\mathbf{d})), \ldots, A'_L(\mathbf{x}^P_L - k_L(\mathbf{d}))\right), \tag{2}$$

where x^P_0 = x and x^P_ℓ denotes the position at which the path P intersects layer ℓ of the reference activation graph. Such paths are illustrated in Fig. 1. The logic is simple: matching in a siamese path means that the recognition hierarchy detects the same features at different perception levels with the same shifts (up to subsampling) with respect to the currently estimated position x, which allows for a confident prediction of a match.
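As a quick sketch of Eq. (1), the snippet below propagates a zero-layer shift through the layer schedule of Fig. 1 (stride-1 convolutions have q = 1, 2x2 max-poolings have q = 2); the function name and the example schedule are our own illustrative choices.

```python
def propagate_shift(d, subsampling_factors):
    # Eq. (1): k_0(d) = d and k_l(d) = floor(k_{l-1}(d) / q_l), element-wise.
    shifts = [tuple(d)]
    for q in subsampling_factors:
        shifts.append(tuple(c // q for c in shifts[-1]))
    return shifts

# The layer schedule of Fig. 1: conv, pool, conv, pool with q = 1, 2, 1, 2.
print(propagate_shift((3,), [1, 2, 1, 2]))  # [(3,), (3,), (1,), (1,), (0,)]
print(propagate_shift((2,), [1, 2, 1, 2]))  # [(2,), (2,), (1,), (1,), (0,)]
```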
The fact that a siamese path is "matched" can be established by computing the matching function (high if it matches, low if not)

$$M(P) = \bigotimes_{\ell=0}^{L} m_\ell\!\left(A_\ell(\mathbf{x}^P_\ell),\, A'_\ell(\mathbf{x}^P_\ell - k_\ell(\mathbf{d}))\right), \tag{3}$$

where m_ℓ(·, ·) is a matching function for individual neurons (it prefers them both to be similar and non-zero at the same time) and ⊗ is a logical-and-like operator. Both will be discussed later.

Since we want to estimate the shift for a node A_0(x), we will consider all possible shifts and vote for each of them. Let us denote the set of siamese paths starting at A_ℓ(x) and A′_ℓ(x − d) and ending at the last layer as P_ℓ(x, d). For every shift d ∈ D we introduce U(x, d) as the log-likelihood of the event that d is the correct shift, i.e. that A_0(x) matches A′_0(x − d). To collect the evidence from all possible paths, we "sum up" the matching functions for all individual paths, leading to

$$U(\mathbf{x}, \mathbf{d}) = \bigoplus_{P \in P_0(\mathbf{x}, \mathbf{d})} M(P) = \bigoplus_{P \in P_0(\mathbf{x}, \mathbf{d})} \bigotimes_{\ell=0}^{L} m_\ell\!\left(A_\ell(\mathbf{x}^P_\ell),\, A'_\ell(\mathbf{x}^P_\ell - k_\ell(\mathbf{d}))\right), \tag{4}$$

where the sum-like operator ⊕ will be discussed later. The distribution U(x, d) can be used either to obtain the solution as d*(x) = arg max_{d ∈ D} U(x, d), or to post-process the distribution with some kind of spatial smoothing optimization and then again take the best-cost solution. The obvious obstacle to using the distribution U(x, d) is stated in the following observation.

Observation 1. If K is the minimal number of activation channels over all the layers of the network and L is the number of layers, the number of paths considered in the computation of U(x, d) for a single originating node is Θ(K^L), i.e. at least exponential in the number of layers.

In practice, it is thus infeasible to compute U(x, d) naively. In this work, we prove that it is possible to compute U(x, d) in O(|A| + |E|), thus linear in the number of layers, using the algorithm introduced in the next section.

4 Linear-time backward algorithm

Theorem 1. For any m_ℓ(·, ·) and any pair of operators ⟨⊕, ⊗⟩ such that ⊗ is left-distributive over ⊕, i.e. a ⊗ (b ⊕ c) = a ⊗ b ⊕ a ⊗ c, we can compute U(x, d) for all x and d in O(|A| + |E|).

Proof. Since we have distributivity, we can use a dynamic programming approach similar to the one developed for gradient backpropagation.

First, let us introduce the subsampling functions k^ℓ_s(d) = π_s(k^ℓ_{s−1}(d)) with k^ℓ_ℓ(d) = d, for s ≥ ℓ. Note that k^0_s = k_s as introduced in Eq. (1). Then, let us introduce auxiliary variables U_ℓ(x_ℓ, d) for each layer ℓ = 0, …, L, which have the same definition as U(x, d) except for the fact that the paths considered in them start from the later layer ℓ:

$$U_\ell(\mathbf{x}_\ell, \mathbf{d}) = \bigoplus_{P \in P_\ell(\mathbf{x}_\ell, \mathbf{d})} M(P) = \bigoplus_{P \in P_\ell(\mathbf{x}_\ell, \mathbf{d})} \bigotimes_{s=\ell}^{L} m_s\!\left(A_s(\mathbf{x}^P_s),\, A'_s(\mathbf{x}^P_s - k^\ell_s(\mathbf{d}))\right). \tag{5}$$

Note that U(x, d) = U_0(x, d). The idea is to iteratively recompute U_ℓ(x_ℓ, d) based on the known U_{ℓ+1}(x_{ℓ+1}, π_{ℓ+1}(d)) for all x_{ℓ+1}. Eventually, we will get to the desired U_0(x, d).

The first step is to notice that all the paths share the same prefix and to write it out explicitly:

$$U_\ell(\mathbf{x}_\ell, \mathbf{d}) = \bigoplus_{P \in P_\ell(\mathbf{x}_\ell, \mathbf{d})} \left[ m_\ell\!\left(A_\ell(\mathbf{x}_\ell), A'_\ell(\mathbf{x}_\ell - \mathbf{d})\right) \otimes \bigotimes_{s=\ell+1}^{L} m_s\!\left(A_s(\mathbf{x}^P_s), A'_s(\mathbf{x}^P_s - k^\ell_s(\mathbf{d}))\right) \right]. \tag{6}$$

Now, we want to pull the prefix m_ℓ(A_ℓ(x_ℓ), A′_ℓ(x_ℓ − d)) out of the "sum". For that purpose, we will need the set of endpoints out(x_ℓ) introduced in the notation in Section 2. The "sum" can be re-written in terms of those endpoints as

$$U_\ell(\mathbf{x}_\ell, \mathbf{d}) = \bigoplus_{\mathbf{x}_{\ell+1} \in \mathrm{out}(\mathbf{x}_\ell)}\; \bigoplus_{P \in P_{\ell+1}(\mathbf{x}_{\ell+1}, \pi_{\ell+1}(\mathbf{d}))} \left[ m_\ell\!\left(A_\ell(\mathbf{x}_\ell), A'_\ell(\mathbf{x}_\ell - \mathbf{d})\right) \otimes \bigotimes_{s=\ell+1}^{L} m_s\!\left(A_s(\mathbf{x}^P_s), A'_s(\mathbf{x}^P_s - k^\ell_s(\mathbf{d}))\right) \right]. \tag{7}$$
The last step is to use the left-distributivity of ⊗ over ⊕ to pull the prefix out of the "sum":

$$U_\ell(\mathbf{x}_\ell, \mathbf{d}) = m_\ell\!\left(A_\ell(\mathbf{x}_\ell), A'_\ell(\mathbf{x}_\ell - \mathbf{d})\right) \otimes \bigoplus_{\mathbf{x}_{\ell+1} \in \mathrm{out}(\mathbf{x}_\ell)} U_{\ell+1}(\mathbf{x}_{\ell+1}, \pi_{\ell+1}(\mathbf{d})). \tag{8}$$

The detailed procedure is listed in Algorithm 1. We use the notation k_ℓ(D) for the set of subsampled shifts which is the result of applying the function k_ℓ to every element of the set of initial shifts D.

Algorithm 1 Backward pass
 1: procedure BACKWARD(G, G′)
 2:   for A_L(x_L) ∈ A_L do
 3:     for d ∈ k_L(D) do
 4:       U_L(x_L, d) ← m_L(A_L(x_L), A′_L(x_L − d))      ▷ Initialize the last layer.
 5:     end for
 6:   end for
 7:   for ℓ = L − 1, …, 0 do
 8:     for A_ℓ(x_ℓ) ∈ A_ℓ do
 9:       for d ∈ k_ℓ(D) do
10:         S ← 0
11:         for x_{ℓ+1} ∈ out(x_ℓ) do
12:           S ← S ⊕ U_{ℓ+1}(x_{ℓ+1}, π_{ℓ+1}(d))
13:         end for
14:         U_ℓ(x_ℓ, d) ← m_ℓ(A_ℓ(x_ℓ), A′_ℓ(x_ℓ − d)) ⊗ S
15:       end for
16:     end for
17:   end for
18:   return U_0                                          ▷ Return the distribution for the first layer.
19: end procedure

5 Choice of neuron matching function m and operators ⊕, ⊗

For the convolutional layers, we use the matching function

$$m_{\mathrm{conv}}(w, v) = \begin{cases} 0 & \text{if } w = 0,\ v = 0, \\ \dfrac{\min(w, v)}{\max(w, v)} & \text{otherwise.} \end{cases} \tag{9}$$

For the max-pooling layers, the computational graph can be truncated to just one active connection (as only one element influences higher-level features). Moreover, max-pooling does not create any additional features; it only passes/subsamples the existing ones. Thus it does not make sense to take the pre-activations into account for those layers, as they are the same as the activations (up to subsampling). For these reasons, we use

$$m_{\mathrm{maxpool}}(w, v) = \delta(w = \arg\max N_w) \cdot \delta(v = \arg\max N_v), \tag{10}$$

where N_w is the neighborhood of the max-pooling covering node w and δ(·) is the indicator function (1 if the condition holds, 0 otherwise).

In this paper, we use sum as ⊕ and product as ⊗. Another possible choice would be max for ⊕ and min or product for ⊗: theoretically, those combinations satisfy the conditions in Theorem 1. Nevertheless, we found the sum/product combination to work better than the others. This could be explained by the fact that max as ⊕ would be taken over a huge set of paths, which is not robust in practice.

6 Experiments

We validate our approach in the field of computer vision, as our method requires a convolutional neural network trained on a large recognition dataset. Out of the vision correspondence tasks, we chose stereo matching to validate our method. For this task, the input data dimensionality is B = 2 and the shift set is represented by the horizontal shifts D = {(0, 0, 0), …, (D_max, 0, 0)}. We always convert images to grayscale before running CNNs, following the observation by [25] that color does not help. For the pre-trained recognition CNN, we chose the VGG-16 network [20]. This network is summarized in Table 1. We will further refer to layer indexes from this table. It is important to mention that we have not used the whole range of layers in our experiments. In particular, we usually started from layer 2 and finished at layer 8. As such, it is still necessary to consider multi-channel input. To extend our algorithm to this case, we create a virtual input layer with C = 1 and virtual per-pixel arcs to all the real input channels.

Table 1: Summary of the convolutional neural network VGG-16. We only show the part up to the 8th layer, as we do not use higher activations (they are not pixel-related enough). In the layer type row, c stands for 3x3 convolution with stride 1 followed by the ReLU non-linear activation function [11], and p for 2x2 max-pooling with stride 2. The input to the convolution is padded with the "same as boundary" rule.

Layer index       1    2    3    4    5    6    7    8
Layer type        c    c    p    c    c    p    c    c
Output channels   64   64   64   128  128  128  256  256
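The sketch below is a toy instantiation of Algorithm 1 for a 1D network of stride-1 convolution-type layers, with (⊕, ⊗) = (sum, product) and the m_conv of Eq. (9). Max-pooling layers and subsampling are omitted for brevity, so all π_ℓ are identities; the neighborhood radius and all names are our own illustrative choices.

```python
import numpy as np

def m_conv(w, v):
    # Eq. (9), assuming non-negative (post-ReLU) activations.
    if w == 0.0 and v == 0.0:
        return 0.0
    return min(w, v) / max(w, v)

def backward_pass(ref_layers, srch_layers, shifts, radius=1):
    # Toy Algorithm 1: ref_layers / srch_layers are lists of equal-length 1D
    # activation arrays, layer 0 first. Returns U_0 indexed by [position, shift].
    L = len(ref_layers) - 1
    n = len(ref_layers[L])
    U = np.zeros((n, len(shifts)))
    for x in range(n):                          # initialize the last layer
        for j, d in enumerate(shifts):
            if 0 <= x - d < n:
                U[x, j] = m_conv(ref_layers[L][x], srch_layers[L][x - d])
    for l in range(L - 1, -1, -1):              # walk back towards layer 0
        U_new = np.zeros_like(U)
        for x in range(n):
            for j, d in enumerate(shifts):
                if not 0 <= x - d < n:
                    continue
                outs = range(max(0, x - radius), min(n, x + radius + 1))
                s = sum(U[x1, j] for x1 in outs)          # the (+)-aggregation
                U_new[x, j] = m_conv(ref_layers[l][x],
                                     srch_layers[l][x - d]) * s
        U = U_new
    return U  # per position, the argmax over the shift axis predicts d
```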
While starting from a later layer is an empirical observation which improves the results for our method, the advantage of finishing at an earlier layer was discovered by other researchers as well [5] (starting from some layer, the network activations stop being related to individual pixels). We will thus abbreviate our methods as "ours(s, t)", where "s" is the starting layer and "t" is the last layer.

6.1 Experimental setup

For stereo matching, we chose the largest available datasets, KITTI 2012 and KITTI 2015. All image pairs in these datasets are rectified, so correspondences can be searched within the same row. For each training pair, the ground-truth shift is measured densely per pixel. This ground truth was obtained by projecting the point cloud from a LIDAR onto the reference image. The quality measure is the percentage Err_t of pixels whose predicted shift error is bigger than a threshold of t pixels. We considered a range of thresholds t = 1, …, 5, while the main benchmark measure is Err_3. This measure is only computed for the pixels which are visible in both images of the stereo pair.

For comparison with the baselines, we used the setup proposed in [25], the seminal work which introduced deep learning for stereo matching and which currently remains one of the best methods on the KITTI datasets. [24] is an extensive study which contains a representative comparison of learning-based and non-learning-based methods under the same setup, together with open-source code for this setup.

The whole pipeline works as follows. First, we obtain the raw scores U(x, d) from Algorithm 1 for the shifts up to D_max = 228. Then we normalize the scores U(x, ·) per pixel by dividing them by the maximal score, thus turning them into the range [0, 1], suitable for running the post-processing code [24]. Finally, we run the post-processing code with exactly the same parameters as the original method [25] and measure the quality on the same 40 validation images.

6.2 Baselines

We have two kinds of baselines in our evaluation: those coming from [25], and our simpler versions of deep feature transfer similar to [13], which do not consider paths. The first group of baselines from [25] are the following: the sum of absolute differences "sad", the census transform "cens" [23], and the normalized cross-correlation "ncc". We also included the learning-based methods "fst" and "acrt" [25] for completeness, although they use training data to learn features while our method does not.

For the second group of baselines, we stack up the activation volumes for the given layer range and up-sample the layer volumes if they have reduced resolution. Then we compute the normalized cross-correlation of the stacked features. Those baselines are denoted "corr(s, t)", where "s" is the starting layer and "t" is the last layer. Note that we correlate the features before applying ReLU, following what [25] does for the last layer; thus we use the input to the ReLU inside the layers.
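As a side note, the Err_t measure defined in Section 6.1 is simple enough to state in code. The sketch below assumes dense predicted and ground-truth disparity maps and a boolean visibility mask; all names are ours.

```python
import numpy as np

def err_t(pred, gt, valid, t=3):
    # Percentage of valid pixels whose predicted shift is off by more than t.
    bad = np.abs(pred - gt) > t
    return 100.0 * np.count_nonzero(bad & valid) / np.count_nonzero(valid)
```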
All the methods, including ours, undergo the same post-processing pipeline. This pipeline consists of semi-global matching [9], a left-right consistency check, sub-pixel enhancement by fitting a quadratic curve, and median and bilateral filtering. We refer the reader to [25] for the full description. While the first group of baselines was tuned by [25] and we take the results from that paper, we had to tune the post-processing hyper-parameters of the second group of baselines to obtain the best results.

6.3 KITTI 2012

The dataset consists of 194 training image pairs and 195 test image pairs. Reflective surfaces like windshields were excluded from the ground truth. The results in Table 2 show that our method "ours(2, 8)" performs better than the baselines. At the same time, its performance is lower than that of the learning-based methods from [25]. The main promise of our method is scalability: while we test it on a task where huge effort was invested into collecting the training data, there are other important tasks without such extensive datasets.

Table 2: Percentages of erroneous pixels Err_t for thresholds t = 1, …, 5 on the KITTI 2012 validation set from [25]. Our method is denoted "ours(2, 8)". The two rightmost columns, "fst" and "acrt", correspond to learning-based methods from [25]; we give them for completeness, as all the other methods, including ours, do not use learning.

Threshold   sad    cens   ncc    corr(1,2)  corr(2,2)  corr(2,8)  ours(2,8)  fst    acrt
1           -      -      -      20.6       20.4       20.7       17.4       -      -
2           -      -      -      10.5       10.4       8.14       6.40       -      -
3           8.16   4.90   8.93   7.58       7.52       5.23       3.94       3.02   2.61
4           -      -      -      6.19       6.13       4.02       2.99       -      -
5           -      -      -      5.40       5.36       3.42       2.49       -      -

6.4 Ablation study on KITTI 2012

The goal of this section is to understand how important the deep hierarchy of features is compared to using only one or a few layers. We compared the following setups: "ours(2, 2)" uses only the second layer; "ours(2, 3)" uses only the range from layer 2 to layer 3; "central(2, 8)" considers the full range of layers but takes only the central arcs of the convolutions (connecting the same pixel positions between activations) into account in the backward pass; "ours(2, 8)" is the full method. The result in Table 3 shows that it is profitable to use the full hierarchy, both in terms of depth and of coverage of the receptive field.

Table 3: KITTI 2012 ablation study.

Threshold   ours(2,2)  ours(2,3)  central(2,8)  ours(2,8)
1           17.7       18.4       17.3          17.4
2           7.90       8.16       6.58          6.40
3           5.28       5.41       4.02          3.94
4           4.08       4.05       3.04          2.99
5           3.41       3.32       2.53          2.49

6.5 KITTI 2015

The stereo dataset consists of 200 training image pairs and 200 test image pairs. The main difference to KITTI 2012 is that the images are colored and reflective surfaces are present in the evaluation. Similar conclusions to KITTI 2012 can be drawn from the experimental results: our method provides a reasonable transfer, being inferior only to the learning-based methods; see Table 4. We show our depth map results in Fig. 2.
The depth is visualized in the standard KITTI color coding (from close to far: yellow, green, purple, red, blue). 6.6 Style transfer experiment on KITTI 2015 The goal of this experiment is to show the robustness of recognition hierarchy for the transfer to correspondence search ? something we advocated in the introduction as the advantage of our approach. We apply the style transfer method [5], implemented in the Prisma app. We ran different style transfers on the left and right images. While now very different at the pixel level, the higher level descriptions of the images remain the same which allows to successfully run our method. The qualitative results show the robustness of our path-based method in Fig. 3 (see also Fig. 2 for visual comparison to normal data). Figure 3: Results for the style transfer on KITTI 2015. Top to bottom: reference image, searched image, our depth result. The depth is visualized in the standard KITTI color coding (from close to far: yellow, green, purple, red, blue). 8 7 Conclusion In this work, we have presented a method for transfer from recognition to correspondence search at the lowest level. For that, we re-use activation paths from deep convolutional neural networks and propose an efficient polynomial algorithm to aggregate an exponential number of such paths. The empirical results on the stereo matching task show that our method is competitive among methods which do not use labeled data from the target domain. It would be interesting to apply this technique to sound, which should become possible once a high-quality deep convolutional model becomes accessible to the public (e.g., [21]). Acknowledgements We would like to thank Dmitry Laptev, Alina Kuznetsova and Andrea Cohen for their comments about the manuscript. We also thank Valery Vishnevskiy for running our code while our own cluster was down. This work is partially funded by the Swiss NSF project 163910 ?Efficient Object-Centric Detection?. References [1] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint arXiv:1511.00561, 2015. [2] Christopher B Choy, JunYoung Gwak, Silvio Savarese, and Manmohan Chandraker. Universal correspondence network. In Advances in Neural Information Processing Systems, pages 2414?2422, 2016. [3] J Donahue, Y Jia, O Vinyals, J Hoffman, N Zhang, E Tzeng, and T Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. corr abs/1310.1531 (2013), 2013. [4] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE International Conference on Computer Vision, pages 2650?2658, 2015. [5] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015. [6] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [7] Ross Girshick. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 1440?1448, 2015. [8] Bumsub Ham, Minsu Cho, Cordelia Schmid, and Jean Ponce. Proposal flow. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3475?3484, 2016. [9] Heiko Hirschmuller. Accurate and efficient stereo processing by semi-global matching and mutual information. 
In Computer Vision and Pattern Recognition, 2005 (CVPR 2005), IEEE Computer Society Conference on, volume 2, pages 807-814. IEEE, 2005.
[10] Seungryong Kim, Dongbo Min, Bumsub Ham, Sangryul Jeon, Stephen Lin, and Kwanghoon Sohn. FCSS: Fully convolutional self-similarity for dense semantic correspondence. arXiv preprint arXiv:1702.00926, 2017.
[11] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[12] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
[13] Jonathan L. Long, Ning Zhang, and Trevor Darrell. Do convnets learn correspondence? In Advances in Neural Information Processing Systems, pages 1601-1609, 2014.
[14] Moritz Menze and Andreas Geiger. Object scene flow for autonomous vehicles. In Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[15] Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. arXiv preprint arXiv:1403.6382, 2014.
[16] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 779-788, 2016.
[17] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91-99, 2015.
[18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 115(3):211-252, 2015.
[19] Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.
[20] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[21] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. CoRR, abs/1609.03499, 2016.
[22] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320-3328, 2014.
[23] Ramin Zabih and John Woodfill. Non-parametric local transforms for computing visual correspondence. In European Conference on Computer Vision, pages 151-158. Springer, 1994.
[24] Jure Zbontar and Yann LeCun. MC-CNN GitHub repository. https://github.com/jzbontar/mc-cnn, 2016.
[25] Jure Zbontar and Yann LeCun. Stereo matching by training a convolutional neural network to compare image patches. Journal of Machine Learning Research, 17(65):1-32, 2016.
Linearly constrained Gaussian processes

Carl Jidling, Department of Information Technology, Uppsala University, Sweden, [email protected]
Niklas Wahlström, Department of Information Technology, Uppsala University, Sweden, [email protected]
Adrian Wills, School of Engineering, University of Newcastle, Australia, [email protected]
Thomas B. Schön, Department of Information Technology, Uppsala University, Sweden, [email protected]

Abstract
We consider a modification of the covariance function in Gaussian processes to correctly account for known linear operator constraints. By modeling the target function as a transformation of an underlying function, the constraints are explicitly incorporated in the model such that they are guaranteed to be fulfilled by any sample drawn or prediction made. We also propose a constructive procedure for designing the transformation operator and illustrate the result on both simulated and real-data examples.

1 Introduction
Bayesian non-parametric modeling has had a profound impact in machine learning due, in no small part, to the flexibility of these model structures in combination with the ability to encode prior knowledge in a principled manner [6]. These properties have been exploited within the class of Bayesian non-parametric models known as Gaussian processes (GPs), which have received significant research attention and have demonstrated utility across a very large range of real-world applications [16].

Abstracting from the myriad number of these applications, it has been observed that the efficacy of GP modeling is often intimately dependent on the appropriate choice of mean and covariance functions, and the appropriate tuning of their associated hyper-parameters. Often, the most appropriate mean and covariance functions are connected to prior knowledge of the underlying problem. For example, [10] uses functional expectation constraints to consider the problem of gene-disease association, and [13] employs a multivariate generalized von Mises distribution to produce a GP-like regression that handles circular variable problems.

Figure 1: Predicted strength of a magnetic field at three heights, given measured data sampled from the trajectory shown (blue curve). The three components (x1, x2, x3) denote the Cartesian coordinates, where the x3-coordinate is the height above the floor. The magnetic field is curl-free, which can be formulated in terms of three linear constraints. The method proposed in this paper can exploit these constraints to improve the predictions. See Section 5.2 for details.

At the same time, it is not always obvious how one might construct a GP model that obeys underlying principles, such as equilibrium conditions and conservation "laws". One straightforward approach to this problem is to add fictitious measurements that observe the constraints at a finite number of points of interest. This has the benefit of being relatively straightforward to implement, but has the sometimes significant drawback of increasing the problem dimension and at the same time not enforcing the constraints between the points of interest. A different approach to constraining the GP model is to construct mean and covariance functions that obey the constraints.
For example, curl- and divergence-free covariance functions are used in [22] to improve the accuracy for regression problems. The main benefit of this approach is that the problem dimension does not grow, and the constraints are enforced everywhere, not pointwise. However, it is not obvious how these approaches can be scaled to an arbitrary set of linear operator constraints.

The contribution of this paper is a new way to include constraints into multivariate GPs. In particular, we develop a method that transforms a given GP into a new, derived, one that satisfies the constraints. The procedure relies upon the fact that GPs are closed under linear operators, and we propose an algorithm capable of constructing the required transformation. We will demonstrate the utility of this new method on both simulated examples and on a real-world application, the latter in the form of predicting the components of a magnetic field, as illustrated in Figure 1.

To make these ideas more concrete, we present a simple example that will serve as a focal point several times throughout the paper. To that end, assume that we have a two-dimensional function f(x) : R² → R² on which we put a GP prior

    f(x) ∼ GP(μ(x), K(x, x′)).

We further know that f(x) should obey the differential equation

    ∂f1/∂x1 + ∂f2/∂x2 = 0.    (1)

In this paper we show how to modify K(x, x′) and μ(x) such that any sample from the new GP is guaranteed to obey constraints like (1), for any kind of linear operator constraint.

2 Problem formulation
Assume that we are given a data set of N observations {x_k, y_k}_{k=1}^N, where x_k denotes the input and y_k the output. Both the input and output are potentially vector-valued, with x_k ∈ R^D and y_k ∈ R^K. We consider the regression problem where the data can be described by a non-parametric model y_k = f(x_k) + e_k, where e_k is zero-mean white noise representing the measurement uncertainty. In this work, we place a vector-valued GP prior on f,

    f(x) ∼ GP(μ(x), K(x, x′)),    (2)

with the mean function and the covariance function

    μ(·) : R^D → R^K,    K(·, ·) : R^D × R^D → R^K × R^K.    (3)

Based on the data {x_k, y_k}_{k=1}^N, we would now like to find a posterior over the function f(x). In addition to the data, we know that the function f should fulfill certain constraints

    F_x[f] = 0,    (4)

where F_x is an operator mapping the function f(x) to another function g(x) as F_x[f] = g(x). We further require F_x to be a linear operator, meaning that

    F_x[λ1 f1 + λ2 f2] = λ1 F_x[f1] + λ2 F_x[f2],

where λ1, λ2 ∈ R. The operator F_x can for example be a linear transform F_x[f] = Cf(x), which together with the constraint (4) forces a certain linear combination of the outputs to be linearly dependent. The operator F_x could also include other linear operations on the function f(x). For example, we might know that the function f(x) : R² → R² should obey a certain partial differential equation, F_x[f] = ∂f1/∂x1 + ∂f2/∂x2. A few more linear operators are listed in Section 1 of the Supplementary material, including integration as one of the most well-known. The constraints (4) can either come from known physical laws or from other prior knowledge of the process generating the data. Our objective is to encode these constraints in the mean and covariance functions (3) such that any sample from the corresponding GP prior (2) always obeys the constraint (4).
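As a quick numerical illustration of why (1) does not come for free, the sketch below draws a sample from an unconstrained GP prior with independent squared-exponential components and estimates its divergence by finite differences; the kernel choice, grid and jitter term are our own assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Regular grid over [0, 4]^2; each component gets an independent SE prior.
n, h = 30, 4.0 / 29
g1, g2 = np.meshgrid(np.linspace(0, 4, n), np.linspace(0, 4, n))
X = np.column_stack([g1.ravel(), g2.ravel()])

def se_kernel(A, B, ell=1.0, sf=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

K = se_kernel(X, X) + 1e-6 * np.eye(n * n)   # jitter for the Cholesky
L = np.linalg.cholesky(K)
f1 = (L @ rng.standard_normal(n * n)).reshape(n, n)
f2 = (L @ rng.standard_normal(n * n)).reshape(n, n)

# Central-difference estimate of the divergence df1/dx1 + df2/dx2.
# With 'xy' meshgrid indexing, x1 varies along axis 1 and x2 along axis 0.
div = np.gradient(f1, h, axis=1) + np.gradient(f2, h, axis=0)
print("mean |divergence| of the unconstrained sample:", np.abs(div).mean())
```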
3 Building a constrained Gaussian process

3.1 Approach based on artificial observations
Just as Gaussian distributions are closed under linear transformations, so are GPs closed under linear operations (see Section 2 in the Supplementary material). This can be used for a straightforward way of embedding linear operator constraints of the form (4) into GP regression. The idea is to treat the constraints as noise-free artificial observations {x̃_k, ỹ_k}_{k=1}^Ñ with ỹ_k = 0 for all k = 1, ..., Ñ. The regression is then performed on the model ỹ_k = F_{x̃_k}[f], where the x̃_k are input points in the domain of interest. For example, one could let these artificial inputs x̃_k coincide with the points of prediction.

An advantage of this approach is that it allows constraints of the type (4) with a non-zero right-hand side. Furthermore, there is no theoretical limit on how many constraints we can include (i.e., on the number of rows in F_x), although in practice, of course, there is. However, this approach is problematic mainly for two reasons. First of all, it makes the problem size grow. This increases memory requirements and execution time, and the numerical stability is worsened due to an increased condition number. This is especially clear from the fact that we want these observations to be noise-free, since the noise usually has a regularizing effect. Secondly, the constraints are only enforced pointwise, so a sample drawn from the posterior fulfills the constraint only in our chosen points. The obvious way of compensating for this is to increase the number of points in which the constraints are observed, but that exacerbates the first problem. Clearly, the challenge grows quickly with the dimension of the inferred function. Embedding the constraints in the covariance function removes these issues: it makes the enforcement continuous while the problem size is left unchanged. We will now address the question of how to design such a covariance function.

3.2 A new construction
We want to find a GP prior (2) such that any sample f(x) from that prior obeys the constraints (4). In turn, this leads to constraints on the mean and covariance functions (3) of that prior. However, instead of posing these constraints on the mean and covariance functions directly, we consider f(x) to be related to another function g(x) via some operator G_x:

    f(x) = G_x[g].    (5)

The constraints (4) then amount to

    F_x[G_x[g]] = 0.    (6)

We would like this relation to be true for any function g(x). To do that, we will interpret F_x and G_x as matrices and use a similar procedure to that of solving systems of linear equations. Since F_x and G_x are linear operators, we can think of F_x[f] and G_x[g] as matrix-vector multiplications, where F_x[f] = F_x f with (F_x f)_i = Σ_{j=1}^K (F_x)_{ij} f_j, and where each element (F_x)_{ij} in the operator matrix F_x is a scalar operator. With this notation, (6) can be written as

    F_x G_x = 0.    (7)

This reformulation imposes constraints on the operator G_x rather than on the GP prior for f(x) directly. We can now proceed by designing a GP prior for g(x) and transforming it using the mapping (5). We further know that GPs are closed under linear operations. More specifically, if g(x) is modeled as a GP with mean μ_g(x) and covariance K_g(x, x′), then f(x) is also a GP with

    f(x) = G_x g ∼ GP(G_x μ_g, G_x K_g G_{x′}^T).    (8)

We use (G_x K_g G_{x′}^T)_{ij} to denote (G_x K_g G_{x′}^T)_{ij} = (G_x)_{ik} (G_{x′})_{jl} (K_g)_{kl}, where G_x and G_{x′} act on the first and second argument of K_g(x, x′), respectively.
See Section 2 in the Supplementary material for further details on linear operations on GPs. The procedure to find the desired GP prior for f can now be divided into the following three steps:
1. Find an operator G_x that fulfills the condition (6).
2. Choose a mean and covariance function for g(x).
3. Find the mean and covariance functions for f(x) according to (8).

In addition to being resistant to the disadvantages of the approach described in Section 3.1, there are some additional strengths worth pointing out with this method. First of all, we have separated the task of encoding the constraints from the task of encoding other desired properties of the kernel. The constraints are encoded in F_x, and the remaining properties are determined by the prior for g(x), such as smoothness assumptions. Hence, satisfying the constraints does not sacrifice any desired behavior of the target function. Secondly, K(x, x′) is guaranteed to be a valid covariance function provided that K_g(x, x′) is, since GPs are closed under linear functional transformations.

From (8), it is clear that each column of K must fulfill all constraints encoded in F_x. Possibly K could be constructed with this knowledge alone, by assuming a general form and solving the resulting equation system. However, a solution may not just be hard to find; one must also make sure that it is indeed a valid covariance function. Furthermore, our approach provides a simple and straightforward way of constructing the covariance function even if the constraints have a complicated form. It makes no difference whether the linear operators relate the components of the target function explicitly or implicitly; the procedure remains the same.

3.3 Illustrating example
We will now illustrate the method using the example (1) introduced in the introduction. Consider a function f(x) : R² → R² satisfying

    ∂f1/∂x1 + ∂f2/∂x2 = 0,

where x = [x1, x2]^T and f(x) = [f1(x), f2(x)]^T. This equation describes all two-dimensional divergence-free vector fields. The constraint can be written as a linear constraint of the form (4), where F_x = [∂/∂x1  ∂/∂x2]. Modeling this function with a GP and building the covariance structure as described above, we first need to find the transformation G_x such that (7) is fulfilled. For example, we could pick

    G_x = [−∂/∂x2, ∂/∂x1]^T.    (9)

If the underlying function g(x) : R² → R is given by g(x) ∼ GP(0, k_g(x, x′)), then we can make use of (8) to obtain f(x) ∼ GP(0, K(x, x′)), where

    K(x, x′) = G_x k_g(x, x′) G_{x′}^T =
        [  ∂²/(∂x2 ∂x2′)   −∂²/(∂x2 ∂x1′) ]
        [ −∂²/(∂x1 ∂x2′)    ∂²/(∂x1 ∂x1′) ]  k_g(x, x′).

Using a covariance function with this structure, we know that the constraint will be fulfilled by any function generated from the corresponding GP.
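The 2×2 kernel above can also be derived mechanically with a computer algebra system, which is convenient for more complicated operators. A minimal sketch, assuming a squared-exponential k_g; all symbol names are ours.

```python
import sympy as sp

x1, x2, y1, y2, ell, sf = sp.symbols('x1 x2 y1 y2 ell sigma_f', positive=True)

# Underlying scalar kernel k_g(x, x'); squared exponential as an example.
kg = sf**2 * sp.exp(-((x1 - y1)**2 + (x2 - y2)**2) / (2 * ell**2))

# G_x = [-d/dx2, d/dx1]^T acts on the first argument,
# G_x' = [-d/dx2', d/dx1']^T on the second (primed variables are y here).
Gx  = lambda f: [-sp.diff(f, x2), sp.diff(f, x1)]
Gxp = lambda f: [-sp.diff(f, y2), sp.diff(f, y1)]

# K(x, x') = G_x k_g G_x'^T, a 2x2 matrix-valued covariance function.
K = sp.Matrix([[Gxp(gi)[j] for j in range(2)] for gi in Gx(kg)])

# Sanity check: every column of K is divergence-free in x, as (7) demands.
for j in range(2):
    assert sp.simplify(sp.diff(K[0, j], x1) + sp.diff(K[1, j], x2)) == 0
print(sp.simplify(K[0, 0]))   # equals d^2 k_g / (dx2 dx2')
```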
4 Finding the operator G_x
In a general setting it might be hard to find an operator G_x that fulfills the constraint (7). Ultimately, we want an algorithm that can construct G_x from a given F_x. In more formal terms, the function G_x g forms the nullspace of F_x. The concept of nullspaces for linear operators is well-established [11], and it relates in many ways to real-number linear algebra. However, an important difference is illustrated by considering a one-dimensional function f(x) subject to the constraint F_x f = 0, where F_x = ∂/∂x. The solution to this differential equation cannot be expressed in terms of an arbitrary underlying function; it requires f(x) to be constant. Hence, the nullspace of ∂/∂x consists of the set of horizontal lines. Compare this with the real-number equation ab = 0, a ≠ 0, which is true only if b = 0. Since the nullspace differs between operators, we must be careful when discussing the properties of F_x and G_x based on knowledge from real-number algebra.

Let us denote the rows in F_x by f_1^T, ..., f_L^T. We now want to find all solutions g such that

    F_x g = 0  ⟺  f_i^T g = 0,  ∀ i = 1, ..., L.    (10)

The solutions g_1, ..., g_P to (10) will then be the columns of G_x. Each row vector f_i can be written as f_i = ξ_i Φ^f, where ξ_i ∈ R^{K×M_f} and Φ^f = [φ_1, ..., φ_{M_f}]^T is a vector of the M_f scalar operators included in F_x.
?x 2 1 5 4.2 Generalization Although there are no conceptual problems with the algorithm introduced above, the procedure of expanding and collecting terms appears a bit informal. In a general form, the algorithm is reformulated such that the operators are completely left out from the solution process. The drawback of this is a more cumbersome notation, and we have therefore limited the presentation to this simplified version. However, the general algorithm is found in the Supplementary material of this paper. 5 5.1 Experimental results Simulated divergence-free function f1 (x1 , x2 ) = e ?ax1 x2 f2 (x1 , x2 ) = e?ax1 x2 ?f1 ?x1 +  ax1 sin(x1 x2 ) ? x1 cos(x1 x2 ) ,  x2 cos(x1 x2 ) ? ax2 sin(x1 x2 ) , Consider the example in Section 3.3. An example of a function fulfilling ?f2 ?x2 = 0 is (16) where a denotes a constant. We will now study how the regression of this function differs when using the covariance function found in Section 3.3 as compared to a diagonal covariance function K(x, x0 ) = k(x, x0 )I. The measurements generated are corrupted with Gaussian noise such that yk = f (x ek , where ek ? N (0, ? 2 I). The squared exponential covariance function k(x, x0 ) =  k1) + 2 ?2 ?f exp ? 2 l kx ? x0 k2 has been used for kg and k with hyperparameters chosen by maximizing the marginal likelihood. We have used the value a = 0.01 in (16). We have used 50 measurements randomly picked over the domain [0 4] ? [0 4], generated with the noise level ? = 10?4 . The points for prediction corresponds to a discretization using 20 uniformly distributed points in each direction, and hence a total of NP = 202 = 400. We have included the approach described is Section 3.1 for comparison. The number of artificial observations have been chosen as random subsets of the prediction points, up to and including the full set. q 1 ?T ? The comparison is made with regard to the root mean squared error erms = NP f ? f ? , where ?f ? = ? ?f ? ?f and ?f is a concatenated vector storing the true function values in all prediction points and ? ?f denotes the reconstructed equivalent. To decrease the impact of randomness, each error value has been formed as an average over 50 reconstructions given different sets of measurements. An example of the true field, measured values and reconstruction errors using the different methods is seen in Figure 2. The result from the experiment is seen in Figure 3a. Note that the error from the approach with artificial observations is decreasing as the number of observations is increased, but only to a certain point. Have in mind, however, that the Gram matrix is growing, making the problem larger and worse conditioned. The result from our approach is clearly better, while the problem size is kept small and numerical problems are therefore avoided. Figure 2: Left: Example of field plots illustrating the measurements (red arrows) and the true field (gray arrows). Remaining three plots: reconstructed fields subtracted from the true field. The artificial observations of the constraint have been made in the same points as the predictions are made. 5.2 Real data experiment Magnetic fields can mathematically be considered as a vector field mapping a 3D position to a 3D magnetic field strength. 
Based on the magnetostatic equations, this can be modeled as a curl-free 6 0.9 0.038 Our approach Diagonal Artificial obs erms erms Our approach Diagonal 0.7 Artificial obs 0.036 0.5 25 100 0.034 10 1 400 Nc 10 2 10 3 Nc (a) Simulated experiment (b) Real-data experiment Figure 3: Accuracy of the different approaches as the number of artificial observations Nc is increased. vector field. Following Section 3.1 in the Supplementary material, our method can be used to encode the constraints in the following covariance function (which also has been presented elsewhere [22])   ! 0 0 T kx?x0 k2 x ? x x ? x Kcurl (x, x0 ) = ?f2 e? 2l2 I3 ? . (17) l l With a magnetic sensor and an optical positioning system, both position and magnetic field data have been collected in a magnetically distorted indoor environment, see the Supplementary material for details about the experimental details. In Figure 1 the predicted magnitude of the magnetic field over a two-dimensional domain for three different heights above the floor is displayed. The predictions have been made based on 500 measurements sampled from the trajectory given by the blue curve. Similar to the simulated experiment in Section 5.1, we compare the predictions of the curl-free covariance function (17) with the diagonal covariance function and the diagonal covariance function using artificial observations. The results have been formed by averaging the error over 50 reconstructions. In each iteration, training data and test data were randomly selected from the data set collected in the experiment. 500 train data points and 1 000 test data points were used. The result is seen in Figure 3b. We recognize the same behavior as we saw for the simulated experiment in Figure 3a. Note that the accuracy of the artificial observation approach gets very close to our approach for a large number of artificial observations. However, in the last step of increasing the artificial observations, the accuracy decreases. This is probably caused by the numerical errors that follows from an ill-conditioned Gram matrix. 6 Related work Many problems in which GPs are used contain some kind of constraint that could be well exploited to improve the quality of the solution. Since there are a variety of ways in which constraints may appear and take form, there is also a variety of methods to deal with them. The treatment of inequality constraints in GP regression have been considered for instance in [1] and [5], based on local representations in a limited set of points. The paper [12] proposes a finite-dimensional GP-approximation to allow for inequality constraints in the entire domain. It has been shown that linear constraints satisfied by the training data will be satisfied by the GP prediction as well [19]. The same paper shows how this result can be extended to quadratic forms through a parametric reformulation and minimization of the Frobenious norm, with application demonstrated for pose estimation. Another approach on capturing human body features is described in [18], where a face-shape model is included in the GP framework to imply anatomic correctness. A rigorous theoretical analysis of degeneracy and invariance properties of Gaussian random fields is found in [7], including application examples for one-dimensional GP problems. The concept of learning the covariance function with respect to algebraic invariances is explored in [9]. 
Although constraints in most situations are formulated on the outputs of the GP, there are also situations in which they are acting on the inputs. An example of this is given in [21], describing a method of benefit from ordering constraints on the input to reduce the negative impact of input noise. Applications within medicine include gene-disease association through functional expectation constraints [10] and lung disease sub-type identification using a mixture of GPs and constraints encoded with Markov random fields [17]. Another way of viewing constraints is as modified prior distributions. By making use of the so-called multivariate generalized von Mises distribution, [13] ends up in a version of GP regression customized for circular variable problems. Other fields of interest include using GPs in approximately solving one-dimensional partial differential equations [8, 14, 15]. 7 Generally speaking, the papers mentioned above consider problems in which the constraints are dealt with using some kind of external enforcement ? that is, they are not explicitly incorporated into the model, but rely on approximations or finite representations. Therefore, the constraints may just be approximately satisfied and not necessarily in a continuous manner, which differs from the method proposed in this paper. Of course, comparisons can not be done directly between methods that have been developed for different kinds of constraints. The interest in this paper is multivariate problems where the constraints are linear combinations of the outputs that are known to equal zero. For multivariate problems, constructing the covariance function is particularly challenging due to the correlation between the output components. We refer to [2] for a very useful review. The basic idea behind the so-called separable kernels is to separate the process of modeling the covariance function for each component and the process of modeling the correlation between them. The final covariance function is chosen for example according to some method of regularization. Another class of covariance functions is the invariant kernels. Here, the correlation is inherited from a known mathematical relation. The curl- and divergence free covariance functions are such examples where the structure follows directly from the underlying physics, and has been shown to improve the accuracy notably for regression problems [22]. Another example is the method proposed in [4], where the Taylor expansion is used to construct a covariance model given a known relationship between the outputs. A very useful property on linear transformations is given in [20], based on the GPs natural inheritance of features imposed by linear operators. This fact has for example been used in developing a method for monitoring infectious diseases [3]. The method proposed in this work is exploiting the transformation property to build a covariance function of the invariant kind for a multivariate GP. We show how this property can be exploited to incorporate knowledge of linear constraints into the covariance function. Moreover, we present an algorithm of constructing the required transformation. This way, the constraints are built into the prior and are guaranteed to be fulfilled in the entire domain. 7 Conclusion and future work We have presented a method for designing the covariance function of a multivariate Gaussian process subject to known linear operator constraints on the target function. 
The method will by construction guarantee that any sample drawn from the resulting process obeys the constraints in all points. Numerical simulations show the benefits of this method as compared to alternative approaches, and it has also been demonstrated to improve the performance on real data.

As mentioned in Section 4, it would be desirable to describe the requirements on G_x more rigorously. That might allow us to reformulate the construction algorithm for G_x in a way that allows for a more straightforward approach than the parametric ansatz that we have proposed. In particular, our method relies upon the requirement that the target function can be expressed in terms of an underlying potential function g. This leads to the intriguing and nontrivial question: is it possible to mathematically guarantee the existence of such a potential? If the answer to this question is yes, the next question will of course be what it looks like and how it relates to the target function. Another possible topic of further research is the extension to constraints including nonlinear operators, which might for example rely upon a linearization in the domain of interest. Furthermore, it may be of potential interest to study the extension to a non-zero right-hand side of (4).

Acknowledgements
This research is financially supported by the Swedish Foundation for Strategic Research (SSF) via the project ASSEMBLE (contract number: RIT 15-0012). The work is also supported by the Swedish Research Council (VR) via the project Probabilistic modeling of dynamical systems (contract number: 621-2013-5524). We are grateful for the help and equipment provided by the UAS Technologies Lab, Artificial Intelligence and Integrated Computer Systems Division (AIICS) at the Department of Computer and Information Science (IDA), Linköping University, Sweden. The real data set used in this paper has been collected by some of the authors together with Manon Kok, Arno Solin, and Simo Särkkä. We thank them for allowing us to use this data. We also thank Manon Kok for supporting us with the data processing. Furthermore, we would like to thank Carl Rasmussen and Marc Deisenroth for fruitful discussions on constrained GPs.

References
[1] Petter Abrahamsen and Fred Espen Benth. Kriging with inequality constraints. Math. Geol., 33(6):719-744, 2001.
[2] Mauricio A. Álvarez, Lorenzo Rosasco, and Neil D. Lawrence. Kernels for vector-valued functions: A review. Foundations and Trends in Machine Learning, 4(3):195-266, March 2012.
[3] Ricardo Andrade-Pacheco, Martin Mubangizi, John Quinn, and Neil Lawrence. Monitoring Short Term Changes of Infectious Diseases in Uganda with Gaussian Processes, pages 95-110. Springer International Publishing, 2016.
[4] Emil M. Constantinescu and Mihai Anitescu. Physics-based covariance models for Gaussian processes with multiple outputs. International Journal for Uncertainty Quantification, 3(1):47-71, 2013.
[5] Sébastien Da Veiga and Amandine Marrel. Gaussian process modeling with inequality constraints. Annales de la faculté des sciences de Toulouse Mathématiques, 21(3):529-555, 2012.
[6] Zoubin Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521:452-459, 2015.
[7] David Ginsbourger, Olivier Roustant, and Nicolas Durrande. On degeneracy and invariances of random fields paths with applications in Gaussian process modelling. Journal of Statistical Planning and Inference, 170:117-128, 2016.
[8] Thore Graepel.
Solving noisy linear operator equations by Gaussian processes: Application to ordinary and partial differential equations. In Proceedings of the Twentieth International Conference on Machine Learning (ICML), August 2003.
[9] Franz J. Király, Andreas Ziehe, and Klaus-Robert Müller. Learning with algebraic invariances, and the invariant kernel trick. Technical report, arXiv:1411.7817, November 2014.
[10] Oluwasanmi Koyejo, Cheng Lee, and Joydeep Ghosh. Constrained Gaussian process regression for gene-disease association. In Proceedings of the IEEE 13th International Conference on Data Mining Workshops, pages 72-79, 2013.
[11] David G. Luenberger. Optimization by Vector Space Methods. John Wiley & Sons, Inc., 1969.
[12] Hassan Maatouk and Xavier Bay. Gaussian process emulators for computer experiments with inequality constraints. Mathematical Geosciences, 49(5):557-582, 2017.
[13] Alexandre K. W. Navarro, Jes Frellsen, and Richard E. Turner. The multivariate generalised von Mises distribution: inference and applications. Technical report, arXiv:1602.05003, February 2016.
[14] Ngoc Cuong Nguyen and Jaime Peraire. Gaussian functional regression for linear partial differential equations. Computer Methods in Applied Mechanics and Engineering, 287:69-89, 2015.
[15] Ngoc Cuong Nguyen and Jaime Peraire. Gaussian functional regression for output prediction: Model assimilation and experimental design. Journal of Computational Physics, 309:52-68, 2016.
[16] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
[17] James Ross and Jennifer Dy. Nonparametric mixture of Gaussian processes with constraints. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1346-1354. JMLR Workshop and Conference Proceedings, 2013.
[18] Ognjen Rudovic and Maja Pantic. Shape-constrained Gaussian process regression for facial-point-based head-pose normalization. In Proceedings of the International Conference on Computer Vision (ICCV), 2011.
[19] Mathieu Salzmann and Raquel Urtasun. Implicitly constrained Gaussian process regression for monocular non-rigid pose estimation. In Neural Information Processing Systems (NIPS), 2010.
[20] Simo Särkkä. Linear operators and stochastic partial differential equations in Gaussian process regression. In Proceedings of the Artificial Neural Networks and Machine Learning (ICANN), pages 151-158. Springer, 2011.
[21] Cuong Tran, Vladimir Pavlovic, and Robert Kopp. Gaussian process for noisy inputs with ordering constraints. Technical report, arXiv:1507.00052, July 2015.
[22] Niklas Wahlström. Modeling of Magnetic Fields and Extended Objects for Localization Applications. PhD thesis, Division of Automatic Control, Linköping University, 2015.
Fixed-Rank Approximation of a Positive-Semidefinite Matrix from Streaming Data

Joel A. Tropp (Caltech, [email protected]), Alp Yurtsever (EPFL, [email protected]), Madeleine Udell (Cornell, [email protected]), Volkan Cevher (EPFL, [email protected])

Abstract
Several important applications, such as streaming PCA and semidefinite programming, involve a large-scale positive-semidefinite (psd) matrix that is presented as a sequence of linear updates. Because of storage limitations, it may only be possible to retain a sketch of the psd matrix. This paper develops a new algorithm for fixed-rank psd approximation from a sketch. The approach combines the Nyström approximation with a novel mechanism for rank truncation. Theoretical analysis establishes that the proposed method can achieve any prescribed relative error in the Schatten 1-norm and that it exploits the spectral decay of the input matrix. Computer experiments show that the proposed method dominates alternative techniques for fixed-rank psd matrix approximation across a wide range of examples.

1 Motivation
In recent years, researchers have studied many applications where a large positive-semidefinite (psd) matrix is presented as a series of linear updates. A recurring theme is that we only have space to store a small summary of the psd matrix, and we must use this information to construct an accurate psd approximation with specified rank. Here are two important cases where this problem arises.

Streaming Covariance Estimation. Suppose that we receive a stream h_1, h_2, h_3, ... ∈ R^n of high-dimensional vectors. The psd sample covariance matrix of these vectors has the linear dynamics

    A^(0) ← 0  and  A^(i) ← (1 − i⁻¹) A^(i−1) + i⁻¹ h_i h_i^*.

When the dimension n and the number of vectors are both large, it is not possible to store the vectors or the sample covariance matrix. Instead, we wish to maintain a small summary that allows us to compute the rank-r psd approximation of the sample covariance matrix A^(i) at a specified instant i. This problem and its variants are often called streaming PCA [3, 12, 14, 15, 25, 32].

Convex Low-Rank Matrix Optimization with Optimal Storage. A primary application of semidefinite programming (SDP) is to search for a rank-r psd matrix that satisfies additional constraints. Because of storage costs, SDPs are difficult to solve when the matrix variable is large. Recently, Yurtsever et al. [44] exhibited the first provable algorithm, called SketchyCGM, that produces a rank-r approximate solution to an SDP using optimal storage. Implicitly, SketchyCGM forms a sequence of approximate psd solutions to the SDP via the iteration

    A^(0) ← 0  and  A^(i) ← (1 − η_i) A^(i−1) + η_i h_i h_i^*,

with step size η_i = 2/(i + 2); the vectors h_i do not depend on the matrices A^(i). In fact, SketchyCGM only maintains a small summary of the evolving solution A^(i). When the iteration terminates, SketchyCGM computes a rank-r psd approximation of the final iterate using the method described by Tropp et al. [37, Alg. 9].
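To make the storage pressure concrete, here is a minimal sketch of the covariance recursion above; it stores the full n × n matrix, which is exactly what the linear sketch of Section 2 avoids. The function name and the driver loop are illustrative.

```python
import numpy as np

def covariance_update(A, h, i):
    """One step of the stream: A_i = (1 - 1/i) A_{i-1} + (1/i) h h^*.

    Maintaining A explicitly costs O(n^2) storage; the sketch introduced
    in Section 2 replaces this with O(kn) for a sketch size k.
    """
    return (1.0 - 1.0 / i) * A + np.outer(h, h.conj()) / i

n = 1000
rng = np.random.default_rng(0)
A = np.zeros((n, n))
for i in range(1, 101):          # a short stream of 100 vectors
    A = covariance_update(A, rng.standard_normal(n), i)
```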
1.1 Notation and Background
The scalar field F = R or F = C. Define α(R) = 1 and α(C) = 0. The asterisk * denotes the (conjugate) transpose, and the dagger † denotes the Moore-Penrose pseudoinverse. The notation A^{1/2} refers to the unique psd square root of a psd matrix A. For p ∈ [1, ∞], the Schatten p-norm ‖·‖_p returns the ℓ_p norm of the singular values of a matrix. As usual, σ_r refers to the r-th largest singular value. For a nonnegative integer r, the phrase "rank-r" and its variants mean "rank at most r." For a matrix M, the symbol ⟦M⟧_r denotes a (simultaneous) best rank-r approximation of the matrix M with respect to any Schatten p-norm. We can take ⟦M⟧_r to be any r-truncated singular value decomposition (SVD) of M [24, Sec. 6]. Every best rank-r approximation of a psd matrix is psd.

2 Sketching and Fixed-Rank PSD Approximation
We begin with a streaming data model for a psd matrix that evolves via a sequence of general linear updates, and we describe a randomized linear sketch for tracking the psd matrix. To compute a fixed-rank psd approximation, we develop an algorithm based on the Nyström method [40], a technique from the literature on kernel methods. In contrast to previous approaches, our algorithm uses a distinct mechanism to truncate the rank of the approximation.

The Streaming Model. Fix a rank parameter r in the range 1 ≤ r ≤ n. Initially, the psd matrix A ∈ F^{n×n} equals a known psd matrix A_init ∈ F^{n×n}. Then A evolves via a series of linear updates:

    A ← θ₁A + θ₂H,  where θ_i ∈ R and H ∈ F^{n×n} is (conjugate) symmetric.    (2.1)

In many applications, the innovation H is low-rank and/or sparse. We assume that the evolving matrix A always remains psd. At one given instant, we must produce an accurate rank-r approximation of the psd matrix A induced by the stream of linear updates.

The Sketch. Fix a sketch size parameter k in the range r ≤ k ≤ n. Independent from A, we draw and fix a random test matrix

    Ω ∈ F^{n×k}.    (2.2)

See Sec. 3 for a discussion of possible distributions. The sketch of the matrix A takes the form

    Y = AΩ ∈ F^{n×k}.    (2.3)

The sketch (2.3) supports updates of the form (2.1):

    Y ← θ₁Y + θ₂HΩ.    (2.4)

To find a good rank-r approximation, we must set the sketch size k larger than r. But storage costs and computation also increase with k. One of our main contributions is to clarify the role of k. Under the model (2.1), it is more or less necessary to use a randomized linear sketch to track A [28]. For psd matrices, sketches of the form (2.2)-(2.3) appear explicitly in Gittens's work [16, 17, 19]. Tropp et al. [37] relies on a more complicated sketch developed in [7, 42].

The Nyström Approximation. The Nyström method is a general technique for low-rank psd matrix approximation. Various instantiations appear in the papers [5, 11, 13, 16, 17, 19, 22, 27, 34, 40]. Here is the application to the present situation. Given the test matrix Ω and the sketch Y = AΩ, the Nyström method constructs a rank-k psd approximation of the psd matrix A via the formula

    Â_nys = Y (Ω^*Y)^† Y^*.    (2.5)

In most work on the Nyström method, the test matrix Ω depends adaptively on A, so those approaches are not valid in the streaming setting. Gittens's framework [16, 17, 19] covers the streaming case.

Fixed-Rank Nyström Approximation: Prior Art. To construct a Nyström approximation with exact rank r from a sketch of size k, the standard approach is to truncate the center matrix to rank r:

    Â_r^nysfix = Y (⟦Ω^*Y⟧_r)^† Y^*.    (2.6)

The truncated Nyström approximation (2.6) appears in many papers, including [5, 11, 18, 34]. We have found (Sec. 5) that the truncation method (2.6) performs poorly in the present setting. This observation motivated us to search for more effective techniques.
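Equations (2.3)-(2.6) translate directly into a few lines of linear algebra. The sketch below is a naive NumPy rendering of the single-pass Nyström approximation (2.5) and of the prior-art truncation (2.6); it uses an explicit pseudoinverse for clarity, whereas a numerically careful implementation is the subject of Section 3.

```python
import numpy as np

def nystrom(Y, Omega):
    """Rank-k Nystrom approximation (2.5): Y (Omega^* Y)^+ Y^*."""
    return Y @ np.linalg.pinv(Omega.conj().T @ Y) @ Y.conj().T

def nystrom_fixed_rank_prior(Y, Omega, r):
    """Prior-art fixed-rank variant (2.6): truncate the center matrix."""
    C = Omega.conj().T @ Y
    U, s, Vh = np.linalg.svd(C, full_matrices=False)
    # Pseudoinverse of the best rank-r approximation of C (s[:r] > 0 assumed).
    Cr_pinv = (Vh[:r].conj().T / s[:r]) @ U[:, :r].conj().T
    return Y @ Cr_pinv @ Y.conj().T

# Toy check on an exactly rank-r psd matrix: with k >= r, the sketch
# Y = A Omega recovers A up to floating-point error.
n, k, r = 200, 20, 5
rng = np.random.default_rng(1)
G = rng.standard_normal((n, r))
A = G @ G.T                              # psd, rank r
Omega = rng.standard_normal((n, k))      # Gaussian test matrix (2.2)
Y = A @ Omega                            # the sketch (2.3)
print(np.linalg.norm(A - nystrom(Y, Omega)) / np.linalg.norm(A))
```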
Fixed-Rank Nyström Approximation: Proposal. The purpose of this paper is to develop, analyze, and evaluate a new approach for fixed-rank approximation of a psd matrix under the streaming model. We propose a more intuitive rank-r approximation:

Â_r = ⟦Â_nys⟧_r.   (2.7)

That is, we report a best rank-r approximation of the full Nyström approximation (2.5). This "matrix nearness" approach to fixed-rank approximation appears in the papers [21, 22, 37]. The combination with the Nyström method (2.5) is totally natural. Let us emphasize that the approach (2.7) also applies to Nyström approximations outside the streaming setting.

Summary of Contributions. This paper contains a number of advances over the prior art:

1. We propose a new technique (2.7) for truncating the Nyström approximation to rank r. This formulation differs from the published literature on fixed-rank Nyström approximations.
2. We present a stable numerical implementation of (2.7) based on the best practices outlined in the paper [27]. This approach is essential for achieving high precision! (Sec. 3)
3. We establish informative error bounds for the method (2.7). In particular, we prove that it attains (1 + ε)-relative error in the Schatten 1-norm when k = Θ(r/ε). (Sec. 4)
4. We document numerical experiments on real and synthetic data to demonstrate that our method dominates existing techniques [18, 37] for fixed-rank psd approximation. (Sec. 5)

Psd matrix approximation is a ubiquitous problem, so we expect these results to have a broad impact.

Related Work. Randomized algorithms for low-rank matrix approximation were proposed in the late 1990s and developed into a technology in the 2000s; see [22, 30, 41]. In the absence of constraints, such as streaming, we recommend the general-purpose methods from [22, 23, 27]. Algorithms for low-rank matrix approximation in the important streaming data setting are discussed in [4, 7, 8, 15, 22, 37, 41, 42]. Few of these methods are designed for psd matrices. Nyström methods for low-rank psd matrix approximation appear in [11, 13, 16, 17, 19, 22, 26, 34, 37, 40, 43]. These works mostly concern kernel matrices; they do not focus on the streaming model. We are only aware of a few papers [16, 17, 19, 37] on algorithms for psd matrix approximation that operate under the streaming model (2.1). These papers form the comparison group.

After this paper was submitted, we learned about two contemporary works [35, 39] that propose the fixed-rank approximation (2.7) in the context of kernel methods. Our research is distinctive because we focus on the streaming setting, we obtain precise error bounds, we address numerical stability, and we include an exhaustive empirical evaluation.

Finally, let us mention two very recent theoretical papers [6, 33] that present existential results on algorithms for fixed-rank psd matrix approximation. The approach in [6] is only appropriate for sparse input matrices, while the work [33] is not valid in the streaming setting.
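Before turning to the implementation, it may help to see a conceptually direct, but numerically naive, rendering of the proposal (2.7); the stable procedure the paper actually recommends appears as Algorithm 3 in Sec. 3. This sketch is our illustration, not the authors' code:

```python
import numpy as np

def fixed_rank_nystrom_naive(Y, Omega, r):
    """Eq. (2.7) computed literally: best rank-r approximation of the full
    Nystrom approximation (2.5). Since A_nys is psd, [[A_nys]]_r comes from
    its r largest eigenpairs."""
    A_nys = Y @ np.linalg.pinv(Omega.conj().T @ Y) @ Y.conj().T
    A_nys = (A_nys + A_nys.conj().T) / 2             # force (conjugate) symmetry
    vals, vecs = np.linalg.eigh(A_nys)               # eigenvalues in ascending order
    U, lam = vecs[:, -r:], np.maximum(vals[-r:], 0)  # top-r eigenpairs, clip roundoff
    return U @ np.diag(lam) @ U.conj().T             # [[A_nys]]_r, still psd
```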
3 Implementation

Distributions for the Test Matrix. To ensure that the sketch is informative, we must draw the test matrix (2.2) at random from a suitable distribution. The choice of distribution determines the computational requirements for the sketch (2.3), the linear updates (2.4), and the matrix approximation (2.7). It also affects the quality of the approximation (2.7). Let us outline some of the most useful distributions. A full discussion is outside the scope of our work, but see [17, 19, 22, 29, 30, 37, 41].

Isotropic Models. Mathematically, the most natural model is to construct a test matrix Ω ∈ F^{n×k} whose range is a uniformly random k-dimensional subspace in F^n. There are two approaches:

1. Gaussian. Draw each entry of the matrix Ω ∈ F^{n×k} independently at random from the standard normal distribution on F.
2. Orthonormal. Draw a Gaussian matrix G ∈ F^{n×k}, as above. Compute a thin orthogonal–triangular factorization G = ΩR to obtain the test matrix Ω ∈ F^{n×k}. Discard R.

Gaussian and orthonormal test matrices both require storage of kn floating-point numbers in F for the test matrix Ω and another kn floating-point numbers for the sketch Y. In both cases, the cost of multiplying a vector in F^n into Ω is Θ(kn) floating-point operations.

Algorithm 1 Sketch Initialization. Implements (2.2)–(2.3) with a random orthonormal test matrix.
Input: psd input matrix A ∈ F^{n×n}; sketch size parameter k
Output: constructs test matrix Ω ∈ F^{n×k} and sketch Y = AΩ ∈ F^{n×k}
local: Ω, Y  ▷ internal variables for NystromSketch
function NystromSketch(A; k)  ▷ constructor
1  if F = R then Ω ← randn(n, k)
2  if F = C then Ω ← randn(n, k) + i · randn(n, k)
3  Ω ← orth(Ω)  ▷ improve numerical stability
4  Y ← AΩ

Algorithm 2 Linear Update. Implements (2.4).
Input: scalars θ₁, θ₂ ∈ R and conjugate symmetric H ∈ F^{n×n}
Output: updates sketch to reflect linear innovation A ← θ₁A + θ₂H
local: Ω, Y  ▷ internal variables for NystromSketch
function LinearUpdate(θ₁, θ₂, H)
1  Y ← θ₁Y + θ₂HΩ

For isotropic models, we can analyze the approximation (2.7) in detail. In exact arithmetic, Gaussian and orthonormal test matrices yield identical Nyström approximations (Supplement). In floating-point arithmetic, orthonormal matrices are more stable for large k, but we can generate Gaussian matrices with less arithmetic and communication. References for isotropic test matrices include [21, 22, 31].

Subsampled Scrambled Fourier Transform (SSFT). One shortcoming of the isotropic models is the cost of storing the test matrix and the cost of multiplying a vector into the test matrix. We can often reduce these costs using an SSFT test matrix. An SSFT takes the form

Ω = Π₁ F Π₂ F R ∈ F^{n×k}.   (3.1)

The Π_i ∈ F^{n×n} are independent, signed permutation matrices,¹ chosen uniformly at random. The matrix F ∈ F^{n×n} is a discrete Fourier transform (F = C) or a discrete cosine transform (F = R). The matrix R ∈ F^{n×k} is a restriction to k coordinates, chosen uniformly at random.

An SSFT Ω requires only Θ(n) storage, but the sketch Y still requires storage of kn numbers. We can multiply a vector in F^n into Ω using Θ(n log n) arithmetic operations via an FFT or FCT algorithm. Thus, for most choices of sketch size k, the SSFT improves over the isotropic models. In practice, the SSFT yields matrix approximations whose quality is identical to those we obtain with an isotropic test matrix (Sec. 5). Although the analysis for SSFTs is less complete, the empirical evidence confirms that the theory for isotropic models also offers excellent guidance for SSFTs. References for SSFTs and related test matrices include [1, 2, 9, 22, 29, 36, 42].

¹ A signed permutation has exactly one nonzero entry in each row and column; the nonzero has modulus one.
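For concreteness, Algorithms 1 and 2 above translate into NumPy roughly as follows for the real field F = R. This is our sketch under one reading of the pseudocode, not the reference implementation:

```python
import numpy as np

class NystromSketch:
    """Algorithms 1 and 2 for F = R with a random orthonormal test matrix."""

    def __init__(self, n, k, rng=None):
        rng = rng or np.random.default_rng()
        G = rng.standard_normal((n, k))
        self.Omega, _ = np.linalg.qr(G)   # orth(G); improves numerical stability
        self.Y = np.zeros((n, k))         # sketch of A_init = 0

    def linear_update(self, theta1, theta2, H):
        # Tracks A <- theta1 * A + theta2 * H without ever storing A  -- eq. (2.4)
        self.Y = theta1 * self.Y + theta2 * (H @ self.Omega)
```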
Numerically Stable Implementation. It requires care to compute the fixed-rank approximation (2.7). The supplement shows that a poor implementation may produce an approximation with 100% error! Let us outline a numerically stable and very accurate implementation of (2.7), based on an idea from [27, 38]. Fix a small parameter ν > 0. Instead of approximating the psd matrix A directly, we approximate the shifted matrix A_ν = A + νI and then remove the shift. Here are the steps:

1. Construct the shifted sketch Y_ν = Y + νΩ.
2. Form the matrix B = Ω* Y_ν.
3. Compute a Cholesky decomposition B = CC*.
4. Compute E = Y_ν C^{−1} by back-substitution.
5. Compute the (thin) singular value decomposition E = UΣV*.
6. Form Â_r = U ⟦Σ² − νI⟧_r U*.

Algorithm 3 Fixed-Rank PSD Approximation. Implements (2.7).
Input: matrix A in sketch must be psd; rank parameter 1 ≤ r ≤ k
Output: returns factors U ∈ F^{n×r} with orthonormal columns and nonnegative, diagonal Σ ∈ F^{r×r} that form a rank-r psd approximation Â_r = UΣU* of the sketched matrix A
local: Ω, Y  ▷ internal variables for NystromSketch
function FixedRankPSDApprox(r)
1  ν ← ε · norm(Y)  ▷ ε = 2.2 × 10⁻¹⁶ in double precision
2  Y ← Y + νΩ  ▷ sketch of shifted matrix A + νI
3  B ← Ω* Y
4  C ← chol((B + B*)/2)  ▷ force symmetry
5  (U, Σ, ∼) ← svd(Y/C, 'econ')  ▷ solve least-squares problem; form thin SVD
6  U ← U(:, 1:r) and Σ ← Σ(1:r, 1:r)  ▷ truncate to rank r
7  Σ ← max{0, Σ² − νI}  ▷ square to get eigenvalues; remove shift
8  return (U, Σ)

The pseudocode addresses some additional implementation details. Related, but distinct, methods were proposed by Williams & Seeger [40] and analyzed in Gittens's thesis [17].

Pseudocode. We present detailed pseudocode for the sketch (2.2)–(2.4) and the implementation of the fixed-rank psd approximation (2.7) described above. For simplicity, we only elaborate the case of a random orthonormal test matrix; we have also developed an SSFT implementation for empirical testing. The pseudocode uses both mathematical notation and MATLAB 2017a functions.

Algorithms and Computational Costs. Algorithm 1 constructs a random orthonormal test matrix and computes the sketch (2.3) of an input matrix. The test matrix and sketch require the storage of 2kn floating-point numbers. Owing to the orthogonalization step, the construction of the test matrix requires Θ(k²n) floating-point operations. For a general input matrix, the sketch requires Θ(kn²) floating-point operations; this cost can be removed by initializing the input matrix to zero. Algorithm 2 implements the linear update (2.4) to the sketch. Nominally, the computation requires Θ(kn²) arithmetic operations, but this cost can be reduced when H has structure (e.g., low rank). Using the SSFT test matrix (3.1) also reduces this cost. Algorithm 3 computes the rank-r psd approximation (2.7). This method requires additional storage of Θ(kn). The arithmetic cost is Θ(k²n) operations, which is dominated by the SVD of the matrix E.
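Under our reading of the pseudocode, Algorithm 3 becomes the following NumPy/SciPy routine for F = R (a sketch, not the authors' reference code):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def fixed_rank_psd_approx(Omega, Y, r):
    nu = np.finfo(Y.dtype).eps * np.linalg.norm(Y, 2)   # shift nu = eps * ||Y||
    Y_nu = Y + nu * Omega                               # sketch of A + nu * I
    B = Omega.T @ Y_nu
    C = cholesky((B + B.T) / 2)                         # upper triangular, B = C^T C
    E = solve_triangular(C, Y_nu.T, trans='T').T        # E = Y_nu C^{-1} by substitution
    U, s, _ = np.linalg.svd(E, full_matrices=False)     # thin SVD, E = U S V^T
    U, s = U[:, :r], s[:r]                              # truncate to rank r
    lam = np.maximum(s**2 - nu, 0)                      # square; remove the shift
    return U, lam                                       # A_hat_r = U diag(lam) U^T
```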
4 Theoretical Results

Relative Error Bound. Our first result is an accurate bound for the expected Schatten 1-norm error in the fixed-rank psd approximation (2.7).

Theorem 4.1 (Fixed-Rank Nyström: Relative Error). Assume 1 ≤ r < k ≤ n. Let A ∈ F^{n×n} be a psd matrix. Draw a test matrix Ω ∈ F^{n×k} from the Gaussian or orthonormal distribution, and form the sketch Y = AΩ. Then the approximation Â_r given by (2.5) and (2.7) satisfies

E ‖A − Â_r‖₁ ≤ (1 + r/(k − r − α)) · ‖A − ⟦A⟧_r‖₁;   (4.1)
E ‖A − Â_r‖_∞ ≤ ‖A − ⟦A⟧_r‖_∞ + r/(k − r − α) · ‖A − ⟦A⟧_r‖₁.   (4.2)

The quantities α(R) = 1 and α(C) = 0. Similar results hold with high probability.

The proof appears in the supplement. In contrast to all previous analyses of randomized Nyström methods, Theorem 4.1 yields explicit, sharp constants. (The contemporary work [39, Thm. 1] contains only a less precise variant of (4.1).) As a consequence, the formulae (4.1)–(4.2) offer an a priori mechanism for selecting the sketch size k to achieve a desired error bound. In particular, for each ε > 0,

k = (1 + ε^{−1}) r + α implies E ‖A − Â_r‖₁ ≤ (1 + ε) · ‖A − ⟦A⟧_r‖₁.

Thus, we can attain an arbitrarily small relative error in the Schatten 1-norm. In the streaming setting, the scaling k = Θ(r/ε) is optimal for this result [14, Thm. 4.2]. Furthermore, it is impossible [41, Sec. 6.2] to obtain "pure" relative error bounds in the Schatten ∞-norm unless k = Ω(n).

The Role of Spectral Decay. To circumvent these limitations, it is necessary to develop a different kind of error bound. Our second result shows that the fixed-rank psd approximation (2.7) automatically exploits decay in the spectrum of the input matrix.

Theorem 4.2 (Fixed-Rank Nyström: Spectral Decay). Instate the notation and assumptions of Theorem 4.1. Then

E ‖A − Â_r‖₁ ≤ ‖A − ⟦A⟧_r‖₁ + 2 min_{ρ < k−α} (1 + ρ/(k − ρ − α)) · ‖A − ⟦A⟧_ρ‖₁;   (4.3)
E ‖A − Â_r‖_∞ ≤ ‖A − ⟦A⟧_r‖_∞ + 2 min_{ρ < k−α} (1 + ρ/(k − ρ − α)) · ‖A − ⟦A⟧_ρ‖₁.   (4.4)

The index ρ ranges over the natural numbers.

The proof of Theorem 4.2 appears in the supplement. Here is one way to understand this result. As the index ρ increases, the quantity ρ/(k − ρ − α) increases while the rank-ρ approximation error decreases. Theorem 4.2 states that the approximation (2.7) automatically achieves the best tradeoff between these two terms. When the spectrum of A decays, the rank-ρ approximation error may be far smaller than the rank-r approximation error. In this case, Theorem 4.2 is tighter than Theorem 4.1, although the prediction is more qualitative.

Additional Results. The proofs can be extended to obtain high-probability bounds, as well as results for other Schatten norms or for other test matrices (Supplement).

5 Numerical Performance

Experimental Setup. In many streaming applications, such as [44], it is essential that the sketch uses as little memory as possible and that the psd approximation achieves the best possible error. For the methods we consider, the arithmetic costs of linear updates and psd approximation are roughly comparable. Therefore, we only assess storage and accuracy.

For the numerical experiments, the field F = C except when noted explicitly. Choose a psd input matrix A ∈ F^{n×n} and a target rank r. Then fix a sketch size parameter k with r ≤ k ≤ n. For each trial, draw the test matrix Ω from the orthonormal or the SSFT distribution, and form the sketch Y = AΩ of the input matrix. Using Algorithm 3, compute the rank-r psd approximation Â_r defined in (2.7). We evaluate the performance using the relative error metric:

Schatten p-norm relative error = ‖A − Â_r‖_p / ‖A − ⟦A⟧_r‖_p − 1.   (5.1)

We perform 20 independent trials and report the average error.

We compare our method (2.7) with the standard truncated Nyström approximation (2.6); the best reference for this type of approach is [18, Sec. 2.2]. The approximation (2.6) is constructed from the same sketch as (2.7), so the experimental procedure is identical. We also consider the sketching method and psd approximation algorithm [37, Alg. 9] based on earlier work from [7, 22, 42]. We implemented this sketch with orthonormal matrices and also with SSFT matrices. The sketch has two different parameters (k, ℓ), so we select the parameters that result in the minimum relative error. Otherwise, the experimental procedure is the same.

We apply the methods to representative input matrices; see the Supplement for plots of the spectra.
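Theorem 4.1 and the error metric (5.1) are both easy to operationalize. The helpers below are our sketch: one picks a sketch size for a target relative error, the other evaluates the Schatten 1-norm metric:

```python
import math
import numpy as np

def sketch_size(r, eps, alpha=1):
    # Theorem 4.1: k = (1 + 1/eps) * r + alpha yields (1 + eps)-relative
    # Schatten 1-norm error; alpha = 1 for F = R and alpha = 0 for F = C.
    return math.ceil((1 + 1.0 / eps) * r + alpha)

def schatten1_relative_error(A, A_hat, r):
    # Metric (5.1) for p = 1: ||A - A_hat||_1 / ||A - [[A]]_r||_1 - 1.
    s = np.linalg.svd(A, compute_uv=False)
    baseline = s[r:].sum()                 # Schatten-1 error of the best rank-r approx
    error = np.linalg.svd(A - A_hat, compute_uv=False).sum()
    return error / baseline - 1

print(sketch_size(r=10, eps=0.5))          # k = 31 guarantees 1.5x the optimal error
```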
Synthetic Examples. The synthetic examples are diagonal with dimension n = 10³; results for larger and non-diagonal matrices are similar. These matrices are parameterized by an effective rank parameter R, which takes values in {5, 10, 20}. We compute approximations with rank r = 10.

1. Low-Rank + PSD Noise. These matrices take the form

A = diag(1, …, 1, 0, …, 0) + ξ n^{−1} W ∈ F^{n×n},

where the number of ones equals R. The matrix W ∈ F^{n×n} has the WISHART(n, n; F) distribution; that is, W = GG* where G ∈ F^{n×n} is standard normal. The parameter ξ controls the signal-to-noise ratio. We consider three examples: LowRankLowNoise (ξ = 10⁻⁴), LowRankMedNoise (ξ = 10⁻²), LowRankHiNoise (ξ = 10⁻¹).

2. Polynomial Decay. These matrices take the form

A = diag(1, …, 1, 2^{−p}, 3^{−p}, …, (n − R + 1)^{−p}) ∈ F^{n×n},

where the number of ones equals R. The parameter p > 0 controls the rate of polynomial decay. We consider three examples: PolyDecaySlow (p = 0.5), PolyDecayMed (p = 1), PolyDecayFast (p = 2).

3. Exponential Decay. These matrices take the form

A = diag(1, …, 1, 10^{−q}, 10^{−2q}, …, 10^{−(n−R)q}) ∈ F^{n×n},

where the number of ones equals R. The parameter q > 0 controls the rate of exponential decay. We consider three examples: ExpDecaySlow (q = 0.1), ExpDecayMed (q = 0.25), ExpDecayFast (q = 1).

Application Examples. We also consider non-diagonal matrices inspired by the SDP algorithm [44].

1. MaxCut: This is a real-valued psd matrix with dimension n = 2 000, and its effective rank R = 14. We form approximations with rank r ∈ {1, 14}. The matrix is an approximate solution to the MAXCUT SDP [20] for the sparse graph G40 [10].

2. PhaseRetrieval: This is a psd matrix with dimension n = 25 921. It has exact rank 250, but its effective rank R = 5. We form approximations with rank r ∈ {1, 5}. The matrix is an approximate solution to a phase retrieval SDP; the data is drawn from our paper [44].
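The three synthetic families above are simple to reproduce; the generators below follow the stated formulas (our code, using the parameter names from the text):

```python
import numpy as np

def low_rank_plus_noise(n, R, xi, rng):
    # A = diag(1, ..., 1, 0, ..., 0) + xi * n^{-1} * W with R ones, where
    # W = G G^T has the WISHART(n, n; R) distribution (G standard normal).
    G = rng.standard_normal((n, n))
    return np.diag(np.r_[np.ones(R), np.zeros(n - R)]) + (xi / n) * (G @ G.T)

def poly_decay(n, R, p):
    # A = diag(1, ..., 1, 2^{-p}, 3^{-p}, ..., (n - R + 1)^{-p}) with R ones.
    tail = np.arange(2, n - R + 2, dtype=float) ** -p
    return np.diag(np.r_[np.ones(R), tail])

def exp_decay(n, R, q):
    # A = diag(1, ..., 1, 10^{-q}, 10^{-2q}, ..., 10^{-(n-R)q}) with R ones.
    tail = 10.0 ** (-q * np.arange(1, n - R + 1))
    return np.diag(np.r_[np.ones(R), tail])
```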
[Figure 5.1: Application Examples, Approximation Rank r, Schatten 1-Norm Error. Panels: (a) PhaseRetrieval (r = 1); (b) PhaseRetrieval (r = 5); (c) MaxCut (r = 1); (d) MaxCut (r = 14). The data series show the performance of three algorithms for rank-r psd approximation: [TYUC17, Alg. 9], Standard (2.6), and Proposed (2.7). Solid lines are generated from the Gaussian sketch; dashed lines are from the SSFT sketch. Each panel displays the Schatten 1-norm relative error (5.1) as a function of storage cost T. See Sec. 5 for details.]

[Figure 5.2: Synthetic Examples with Effective Rank R = 10, Approximation Rank r = 10, Schatten 1-Norm Error. Panels: (a) LowRankLowNoise; (b) LowRankMedNoise; (c) LowRankHiNoise; (d) PolyDecayFast; (e) PolyDecayMed; (f) PolyDecaySlow; (g) ExpDecayFast; (h) ExpDecayMed; (i) ExpDecaySlow. The data series show the performance of three algorithms for rank-r psd approximation with r = 10. Solid lines are generated from the Gaussian sketch; dashed lines are from the SSFT sketch. Each panel displays the Schatten 1-norm relative error (5.1) as a function of storage cost T.]

Experimental Results. Figures 5.1–5.2 display the performance of the three fixed-rank psd approximation methods for a subcollection of the input matrices. The vertical axis is the Schatten 1-norm relative error (5.1). The variable T on the horizontal axis is proportional to the storage required for the sketch only. For the Nyström-based approximations (2.6)–(2.7), we have the correspondence T = k. For the approximation [37, Alg. 9], we set T = k + ℓ.

The experiments demonstrate that the proposed method (2.7) has a significant benefit over the alternatives for input matrices that admit a good low-rank approximation. It equals or improves on the competitors for almost all other examples and storage budgets. The supplement contains additional numerical results; these experiments only reinforce the message of Figures 5.1–5.2.

Conclusions. This paper makes the case for using the proposed fixed-rank psd approximation (2.7) in lieu of the alternatives (2.6) or [37, Alg. 9]. Theorem 4.1 shows that the proposed fixed-rank psd approximation (2.7) can attain any prescribed relative error, and Theorem 4.2 shows that it can exploit spectral decay. Furthermore, our numerical work demonstrates that the proposed approximation improves (almost) uniformly over the competitors for a range of examples. These results are timely because of the recent arrival of compelling applications, such as [44], for sketching psd matrices.

Acknowledgments. The authors wish to thank Mark Tygert and Alex Gittens for helpful feedback on preliminary versions of this work. JAT gratefully acknowledges partial support from ONR Award N00014-17-1-2146 and the Gordon & Betty Moore Foundation. VC and AY were supported in part by the European Commission under Grant ERC Future Proof, SNF 200021-146750, and SNF CRSII2-147633. MU was supported in part by DARPA Award FA8750-17-2-0101.

References

[1] N. Ailon and B. Chazelle. The fast Johnson–Lindenstrauss transform and approximate nearest neighbors. SIAM J. Comput., 39(1):302–322, 2009.
[2] C. Boutsidis and A. Gittens. Improved matrix algorithms via the subsampled randomized Hadamard transform. SIAM J. Matrix Anal. Appl., 34(3):1301–1340, 2013.
[3] C. Boutsidis, D. Garber, Z. Karnin, and E. Liberty. Online principal components analysis. In Proc. 26th Ann. ACM–SIAM Symp. Discrete Algorithms (SODA), pages 887–901, 2015.
[4] C. Boutsidis, D. Woodruff, and P. Zhong. Optimal principal component analysis in distributed and streaming models. In Proc. 48th ACM Symp. Theory of Computing (STOC), 2016.
[5] J. Chiu and L. Demanet. Sublinear randomized algorithms for skeleton decompositions. SIAM J. Matrix Anal. Appl., 34(3):1361–1383, 2013.
[6] K. Clarkson and D. Woodruff. Low-rank PSD approximation in input-sparsity time. In Proc. 28th Ann. ACM–SIAM Symp. Discrete Algorithms (SODA), pages 2061–2072, Jan. 2017.
[7] K. L. Clarkson and D. P. Woodruff. Numerical linear algebra in the streaming model. In Proc. 41st ACM Symp. Theory of Computing (STOC), 2009.
[8] M. B. Cohen, S. Elder, C. Musco, C. Musco, and M. Persu. Dimensionality reduction for k-means clustering and low rank approximation. In Proc. 47th ACM Symp. Theory of Computing (STOC), pages 163–172. ACM, New York, 2015.
[9] M. B. Cohen, J. Nelson, and D. P. Woodruff. Optimal approximate matrix product in terms of stable rank. In 43rd Int. Coll. Automata, Languages, and Programming (ICALP), volume 55, pages 11:1–11:14, 2016.
[10] T. A. Davis and Y. Hu.
The University of Florida sparse matrix collection. ACM Trans. Math. Softw., 38(1):1:1–1:25, 2011.
[11] P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. J. Mach. Learn. Res., 6:2153–2175, 2005.
[12] D. Feldman, M. Volkov, and D. Rus. Dimensionality reduction of massive sparse datasets using coresets. In Adv. Neural Information Processing Systems 29 (NIPS), 2016.
[13] C. Fowlkes, S. Belongie, F. Chung, and J. Malik. Spectral grouping using the Nyström method. IEEE Trans. Pattern Anal. Mach. Intell., 26(2):214–225, Jan. 2004.
[14] M. Ghashami, E. Liberty, J. M. Phillips, and D. P. Woodruff. Frequent directions: Simple and deterministic matrix sketching. SIAM J. Comput., 45(5):1762–1792, 2016.
[15] A. C. Gilbert, J. Y. Park, and M. B. Wakin. Sketched SVD: Recovering spectral features from compressed measurements. Available at http://arXiv.org/abs/1211.0361, Nov. 2012.
[16] A. Gittens. The spectral norm error of the naïve Nyström extension. Available at http://arXiv.org/abs/1110.5305, Oct. 2011.
[17] A. Gittens. Topics in Randomized Numerical Linear Algebra. PhD thesis, California Institute of Technology, 2013.
[18] A. Gittens and M. W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. Available at http://arXiv.org/abs/1303.1849, Mar. 2013.
[19] A. Gittens and M. W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. J. Mach. Learn. Res., 17:Paper No. 117, 65, 2016.
[20] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. Assoc. Comput. Mach., 42(6):1115–1145, 1995.
[21] M. Gu. Subspace iteration randomization and singular value problems. SIAM J. Sci. Comput., 37(3):A1139–A1173, 2015.
[22] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53(2):217–288, 2011.
[23] N. Halko, P.-G. Martinsson, Y. Shkolnisky, and M. Tygert. An algorithm for the principal component analysis of large data sets. SIAM J. Sci. Comput., 33(5):2580–2594, 2011. doi: 10.1137/100804139.
[24] N. J. Higham. Matrix nearness problems and applications. In Applications of Matrix Theory (Bradford, 1988), pages 1–27. Oxford Univ. Press, New York, 1989.
[25] P. Jain, C. Jin, S. M. Kakade, P. Netrapalli, and A. Sidford. Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm. In 29th Ann. Conf. Learning Theory (COLT), pages 1147–1164, 2016.
[26] S. Kumar, M. Mohri, and A. Talwalkar. Sampling methods for the Nyström method. J. Mach. Learn. Res., 13:981–1006, Apr. 2012.
[27] H. Li, G. C. Linderman, A. Szlam, K. P. Stanton, Y. Kluger, and M. Tygert. Algorithm 971: An implementation of a randomized algorithm for principal component analysis. ACM Trans. Math. Softw., 43(3):28:1–28:14, Jan. 2017.
[28] Y. Li, H. L. Nguyen, and D. P. Woodruff. Turnstile streaming algorithms might as well be linear sketches. In Proc. 2014 ACM Symp. Theory of Computing (STOC), pages 174–183. ACM, 2014.
[29] E. Liberty. Accelerated dense random projections. PhD thesis, Yale Univ., New Haven, 2009.
[30] M. W. Mahoney. Randomized algorithms for matrices and data. Found. Trends Mach. Learn., 3(2):123–224, 2011.
[31] P.-G. Martinsson, V. Rokhlin, and M. Tygert.
A randomized algorithm for the decomposition of matrices. Appl. Comput. Harmon. Anal., 30(1):47–68, 2011.
[32] I. Mitliagkas, C. Caramanis, and P. Jain. Memory limited, streaming PCA. In Adv. Neural Information Processing Systems 26 (NIPS), pages 2886–2894, 2013.
[33] C. Musco and D. Woodruff. Sublinear time low-rank approximation of positive semidefinite matrices. Available at http://arXiv.org/abs/1704.03371, Apr. 2017.
[34] J. C. Platt. FastMap, MetricMap, and Landmark MDS are all Nyström algorithms. In Proc. 10th Int. Workshop Artificial Intelligence and Statistics (AISTATS), pages 261–268, 2005.
[35] F. Pourkamali-Anaraki and S. Becker. Randomized clustered Nyström for large-scale kernel machines. Available at http://arXiv.org/abs/1612.06470, Dec. 2016.
[36] J. A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. Adv. Adapt. Data Anal., 3(1-2):115–126, 2011.
[37] J. A. Tropp, A. Yurtsever, M. Udell, and V. Cevher. Randomized single-view algorithms for low-rank matrix approximation. ACM Report 2017-01, Caltech, Pasadena, Jan. 2017. Available at http://arXiv.org/abs/1609.00048, v1.
[38] M. Tygert. Beta versions of Matlab routines for principal component analysis. Available at http://tygert.com/software.html, 2014.
[39] S. Wang, A. Gittens, and M. W. Mahoney. Scalable kernel K-means clustering with Nyström approximation: relative-error bounds. Available at http://arXiv.org/abs/1706.02803, June 2017.
[40] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Adv. Neural Information Processing Systems 13 (NIPS), 2000.
[41] D. P. Woodruff. Sketching as a tool for numerical linear algebra. Found. Trends Theor. Comput. Sci., 10(1-2):iv+157, 2014.
[42] F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert. A fast randomized algorithm for the approximation of matrices. Appl. Comput. Harmon. Anal., 25(3):335–366, 2008.
[43] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In Adv. Neural Information Processing Systems 25 (NIPS), pages 476–484, 2012.
[44] A. Yurtsever, M. Udell, J. A. Tropp, and V. Cevher. Sketchy decisions: Convex low-rank matrix optimization with optimal storage. In Proc. 20th Int. Conf. Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, May 2017.
Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets

Karol Hausman*†, Yevgen Chebotar*†‡, Stefan Schaal†‡, Gaurav Sukhatme†, Joseph J. Lim†
† University of Southern California, Los Angeles, CA, USA
‡ Max-Planck-Institute for Intelligent Systems, Tübingen, Germany
{hausman, ychebota, sschaal, gaurav, limjj}@usc.edu
* Equal contribution

Abstract

Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation learning jointly. The extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy. The video of our experiments is available at http://sites.google.com/view/nips17intentiongan.

1 Introduction

One of the key factors to enable deployment of robots in unstructured real-world environments is their ability to learn from data. In recent years, there have been multiple examples of robot learning frameworks that present promising results. These include reinforcement learning [31], where a robot learns a skill based on its interaction with the environment, and imitation learning [2, 5], where a robot is presented with a demonstration of a skill that it should imitate. In this work, we focus on the latter learning setup.

Traditionally, imitation learning has focused on using isolated demonstrations of a particular skill [29]. The demonstration is usually provided in the form of kinesthetic teaching, which requires the user to spend sufficient time to provide the right training data. This constrained setup for imitation learning is difficult to scale to real-world scenarios, where robots have to be able to execute a combination of different skills. To learn these skills, the robots would require a large number of robot-tailored demonstrations, since at least one isolated demonstration has to be provided for every individual skill.

In order to improve the scalability of imitation learning, we propose a framework that can learn to imitate skills from a set of unstructured and unlabeled demonstrations of various tasks. As a motivating example, consider a highly unstructured data source, e.g. a video of a person cooking a meal. A complex activity, such as cooking, involves a set of simpler skills such as grasping, reaching, cutting, pouring, etc. In order to learn from such data, three components are required: i) the ability to map the image stream to state-action pairs that can be executed by a robot, ii) the ability to segment the data into simple skills, and iii) the ability to imitate each of the segmented skills. In this work, we tackle the latter two components, leaving the first one for future work. We believe that the capability proposed here of learning from unstructured, unlabeled demonstrations is an important step towards scalable robot learning systems.
In this paper, we present a novel imitation learning method that learns a multi-modal stochastic policy, which is able to imitate a number of automatically segmented tasks using a set of unstructured and unlabeled demonstrations. Our results indicate that the presented technique can separate the demonstrations into sensible individual skills and imitate these skills using a learned multi-modal policy. We show applications of the presented method to the tasks of skill segmentation, hierarchical reinforcement learning and multi-modal policy learning.

2 Related Work

Imitation learning is concerned with learning skills from demonstrations. Approaches that are suitable for this setting can be split into two categories: i) behavioral cloning [27], and ii) inverse reinforcement learning (IRL) [24]. While behavioral cloning aims at replicating the demonstrations exactly, it suffers from covariate shift [28]. IRL alleviates this problem by learning a reward function that explains the behavior shown in the demonstrations. The majority of IRL works [16, 35, 1, 12, 20] introduce algorithms that can imitate a single skill from demonstrations thereof, but they do not readily generalize to learning a multi-task policy from a set of unstructured demonstrations of various tasks.

More recently, there has been work that tackles a problem similar to the one presented in this paper, where the authors consider a setting where there is a large set of tasks with many instantiations [10]. In their work, the authors assume a way of communicating a new task through a single demonstration. We follow the idea of segmenting and learning different skills jointly so that learning of one skill can accelerate learning to imitate the next skill. In our case, however, the goal is to separate the mix of expert demonstrations into single skills and learn a policy that can imitate all of them, which eliminates the need for new demonstrations at test time.

The method presented here belongs to the field of multi-task inverse reinforcement learning. Examples from this field include [9] and [4]. In [9], the authors present a Bayesian approach to the problem, while the method in [4] is based on an EM approach that clusters observed demonstrations. Both of these methods show promising results on relatively low-dimensional problems, whereas our approach scales well to higher-dimensional domains due to the representational power of neural networks. There has also been a separate line of work on learning from demonstration, which is then iteratively improved through reinforcement learning [17, 6, 23]. In contrast, we do not assume access to the expert reward function, which is required to perform reinforcement learning in the later stages of the above algorithms.

There has been much work on the problem of skill segmentation and option discovery for hierarchical tasks. Examples include [25, 19, 14, 33, 13]. In this work, we consider a possibility to discover different skills that can all start from the same initial state, as opposed to hierarchical reinforcement learning where the goal is to segment a task into a set of consecutive subtasks. We demonstrate, however, that our method may be used to discover the hierarchical structure of a task similarly to the hierarchical reinforcement learning approaches. In [13], the authors explore similar ideas to discover useful skills. In this work, we apply some of these ideas to the imitation learning setup as opposed to the reinforcement learning scenario.
Generative Adversarial Networks (GANs) [15] have enjoyed success in various domains including image generation [8], image-image translation [34, 18] and video prediction [22]. More recently, there have been works connecting GANs and other reinforcement learning and IRL methods [26, 11, 16]. In this work, we expand on some of the ideas presented in these works and provide a novel framework that exploits this connection. The works that are most closely related to this paper are [16], [7] and [21]. In [7], the authors show a method that is able to learn disentangled representations and apply it to the problem of image generation. In this work, we provide an alternative derivation of our method that extends their work and applies it to multi-modal policies. In [16], the authors present an imitation learning GAN approach that serves as a basis for the development of our method. We provide an extensive evaluation of the hereby presented approach compared to the work in [16], which shows that our method, as opposed to [16], can handle unstructured demonstrations of different skills. A concurrent work [21] introduces a method similar to ours and applies it to detecting driving styles from unlabelled human data.

3 Preliminaries

Let M = (S, A, P, R, p₀, γ, T) be a finite-horizon Markov Decision Process (MDP), where S and A are state and action spaces, P : S × A × S → R₊ is a state-transition probability function or system dynamics, R : S × A → R a reward function, p₀ : S → R₊ an initial state distribution, γ a reward discount factor, and T a horizon. Let τ = (s₀, a₀, …, s_T, a_T) be a trajectory of states and actions and R(τ) = Σ_{t=0}^{T} γ^t R(s_t, a_t) the trajectory reward. The goal of reinforcement learning methods is to find parameters θ of a policy π_θ(a|s) that maximizes the expected discounted reward over trajectories induced by the policy: E_{π_θ}[R(τ)], where s₀ ∼ p₀, s_{t+1} ∼ P(s_{t+1}|s_t, a_t) and a_t ∼ π_θ(a_t|s_t).

In an imitation learning scenario, the reward function is unknown. However, we are given a set of demonstrated trajectories, which presumably originate from some optimal expert policy distribution π_E1 that optimizes an unknown reward function R_E1. Thus, by trying to estimate the reward function R_E1 and optimizing the policy π_θ with respect to it, we can recover the expert policy. This approach is known as inverse reinforcement learning (IRL) [1]. In order to model a variety of behaviors, it is beneficial to find a policy with the highest possible entropy that optimizes R_E1. We will refer to this approach as the maximum-entropy IRL [35] with the optimization objective

min_R max_{π_θ} H(π_θ) + E_{π_θ} R(s, a) − E_{π_E1} R(s, a),   (1)

where H(π_θ) is the entropy of the policy π_θ.

Ho and Ermon [16] showed that it is possible to redefine the maximum-entropy IRL problem with multiple demonstrations sampled from a single expert policy π_E1 as an optimization of GANs [15]. In this framework, the policy π_θ(a|s) plays the role of a generator, whose goal is to make it difficult for a discriminator network D_w(s, a) (parameterized by w) to differentiate between imitated samples from π_θ (labeled 0) and demonstrated samples from π_E1 (labeled 1). Accordingly, the joint optimization goal can be defined as

max_θ min_w E_{(s,a)∼π_θ}[log D_w(s, a)] + E_{(s,a)∼π_E1}[log(1 − D_w(s, a))] + λ_H H(π_θ).   (2)

The discriminator and the generator policy are both represented as neural networks and optimized by repeatedly performing alternating gradient updates. The discriminator is trained on the mixed set of expert and generator samples and outputs probabilities that a particular sample has originated from the generator or the expert policies. This serves as a reward signal for the generator policy that tries to maximize the probability of the discriminator confusing it with an expert policy. The generator can be trained using the trust region policy optimization (TRPO) algorithm [30] with the cost function log D_w(s, a). At each iteration, TRPO takes the following gradient step:

E_{(s,a)∼π_θ}[∇_θ log π_θ(a|s) log D_w(s, a)] + λ_H ∇_θ H(π_θ),   (3)

which corresponds to optimizing the objective in Eq. (2) with respect to the policy π_θ.

4 Multi-modal Imitation Learning

The traditional imitation learning scenario described in Sec. 3 considers a problem of learning to imitate one skill from demonstrations. The demonstrations represent samples from a single expert policy π_E1. In this work, we focus on an imitation learning setup where we learn from unstructured and unlabelled demonstrations of various tasks. In this case, the demonstrations come from a set of expert policies π_E1, π_E2, …, π_Ek, where k can be unknown, that optimize different reward functions/tasks. We will refer to this set of unstructured expert policies as a mixture of policies π_E. We aim to segment the demonstrations of these policies into separate tasks and learn a multi-modal policy that will be able to imitate all of the segmented tasks.

In order to be able to learn multi-modal policy distributions, we augment the policy input with a latent intention i distributed by a categorical or uniform distribution p(i), similar to [7]. The goal of the intention variable is to select a specific mode of the policy, which corresponds to one of the skills presented in the demonstrations. The resulting policy can be expressed as:

π(a|s, i) = p(i|s, a) π(a|s) / p(i).   (4)
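To make eq. (4) concrete: sampling from the intention-augmented policy means first drawing an intention from the prior p(i) and then drawing an action from the selected mode. The toy sketch below is our illustration only; the Gaussian modes are hypothetical and not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
num_intentions = 3
p_i = np.full(num_intentions, 1.0 / num_intentions)   # categorical prior p(i)

def policy(state, intention, noise_scale=0.1):
    # pi(a|s, i): one Gaussian mode per intention value (hypothetical form).
    mode_gain = np.array([-1.0, 0.0, 1.0])[intention]
    return mode_gain * state + noise_scale * rng.standard_normal()

i = rng.choice(num_intentions, p=p_i)    # the latent intention selects the skill
a = policy(state=0.5, intention=i)       # action from the chosen mode of pi(a|s, i)
```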
The discriminator is trained on the mixed set of expert and generator samples and outputs probabilities that a particular sample has originated from the generator or the expert policies. This serves as a reward signal for the generator policy that tries to maximize the probability of the discriminator confusing it with an expert policy. The generator can be trained using the trust region policy optimization (TRPO) algorithm [30] with the cost function log(Dw (s, a)). At each iteration, TRPO takes the following gradient step: E(s,a)??? [?? log ?? (a|s) log(Dw (s, a))] + ?H ?? H(?? ), (3) which corresponds to minimizing the objective in Eq. (2) with respect to the policy ?? . 4 Multi-modal Imitation Learning The traditional imitation learning scenario described in Sec. 3 considers a problem of learning to imitate one skill from demonstrations. The demonstrations represent samples from a single expert policy ?E 1 . In this work, we focus on an imitation learning setup where we learn from unstructured and unlabelled demonstrations of various tasks. In this case, the demonstrations come from a set of expert policies ?E1 , ?E2 , . . . , ?Ek , where k can be unknown, that optimize different reward functions/tasks. We will refer to this set of unstructured expert policies as a mixture of policies ?E . We aim to segment the demonstrations of these policies into separate tasks and learn a multi-modal policy that will be able to imitate all of the segmented tasks. In order to be able to learn multi-modal policy distributions, we augment the policy input with a latent intention i distributed by a categorical or uniform distribution p(i), similar to [7]. The goal of the intention variable is to select a specific mode of the policy, which corresponds to one of the skills presented in the demonstrations. The resulting policy can be expressed as: ?(a|s, i) = p(i|s, a) 3 ?(a|s) . p(i) (4) We augment the trajectory to include the latent intention as ?i = (s0 , a0 , i0 , ...sT , aT , iT ). The PT resulting reward of the trajectory with the latent intention is R(?i ) = t=0 ? t R(st , at , it ). R(a, s, i) is a reward function that depends on the latent intention i as we have multiple demonstrations that optimize different expected discounted reward is equal to: R reward functions for different tasks. The QT ?1 E?? [R(?i )] = R(?i )?? (?i )d?i where ?? (?i ) = p0 (s0 ) t=0 P (st+1 |st , at )?? (at |st , it )p(it ). Here, we show an extension of the derivation presented in [16] (Eqs. (1, 2)) for a policy ?(a|s, i) augmented with the latent intention variable i, which uses demonstrations from a set of expert policies ?E , rather than a single expert policy ?E1 . We are aiming at maximum entropy policies that can be determined from the latent intention variable i. Accordingly, we transform the original IRL problem to reflect this goal:   min max H(?(a|s)) ? H(?(a|s, i)) + E? R(s, a, i) ? E?E R(s, a, i), (5) ? R where ?(a|s) = P ?(a|s, i)p(i), which results in the policy averaged over intentions (since the i p(i) is constant). This goal reflects our objective: we aim to obtain a multi-modal policy that has a high entropy without any given intention, but it collapses to a particular task when the intention is specified. Analogously to the solution for a single expert policy, this optimization objective results in the optimization goal of the generative adversarial imitation learning network, with the exception that the state-action pairs (s, a) are sampled from a set of expert policies ?E : max min Ei?p(i),(s,a)??? 
[log(Dw (s, a))] + E(s,a)??E [1 ? log(Dw (s, a))] w ? (6) +?H H(?? (a|s)) ? ?I H(?? (a|s, i)), where ?I , ?H correspond to the weighting parameters on the respective objectives. The resulting entropy H(?? (a|s, i)) term can be expressed as: H(?? (a|s, i)) = Ei?p(i),(s,a)??? (? log(?? (a|s, i)) (7)   ?? (a|s) = ?Ei?p(i),(s,a)??? log p(i|s, a) p(i) = ?Ei?p(i),(s,a)??? log(p(i|s, a)) ? Ei?p(i),(s,a)??? log ?? (a|s) + Ei?p(i) log p(i) = ?Ei?p(i),(s,a)??? log(p(i|s, a)) + H(?? (a|s)) ? H(i), which results in the final objective: max min Ei?p(i),(s,a)??? [log(Dw (s, a))] + E(s,a)??E [1 ? log(Dw (s, a))] ? w (8) +(?H ? ?I )H(?? (a|s)) + ?I Ei?p(i),(s,a)??? log(p(i|s, a)) + ?I H(i), where H(i) is a constant that does not influence the optimization. This results in the same optimization objective as for the single expert policy (see Eq. (2)) with an additional term ?I Ei?p(i),(s,a)??? log(p(i|s, a)) responsible for rewarding state-action pairs that make the latent intention inference easier. We refer to this cost as the latent intention cost and represent p(i|s, a) with a neural network. The final reward function for the generator is: Ei?p(i),(s,a)??? [log(Dw (s, a))] + ?I Ei?p(i),(s,a)??? log(p(i|s, a)) + ?H 0 H(?? (a|s)). 4.1 (9) Relation to InfoGAN In this section, we provide an alternative derivation of the optimization goal in Eq. (8) by extending the InfoGAN approach presented in [7]. Following [7], we introduce the latent variable c as a means to capture the semantic features of the data distribution. In this case, however, the latent variables are used in the imitation learning scenario, rather than the traditional GAN setup, which prevents us from using additional noise variables (z in the InfoGAN approach) that are used as noise samples to generate the data from. Similarly to [7], to prevent collapsing to a single mode, the policy optimization objective is augmented with mutual information I(c; G(??c , c)) between the latent variable and the state-action pairs generator G dependent on the policy distribution ??c . This encourages the policy to produce behaviors that are 4 interpretable from the latent code, and given a larger number of possible latent code values leads to an increase in the diversity of policy behaviors. The corresponding generator goal can be expressed as: Ec?p(c),(s,a)???c [log(Dw (s, a))] + ?I I(c; G(??c , c)) + ?H H(??c ) (10) In order to compute I(c; G(??c , c)), we follow the derivation from [7] that introduces a lower bound: I(c; G(??c , c)) = H(c) ? H(c|G(??c , c)) (11) 0 = E(s,a)?G(??c ,c) [Ec0 ?P (c|s,a) [log P (c |s, a)]] + H(c) = E(s,a)?G(??c ,c) [DKL (P (?|s, a)||Q(?|s, a)) + Ec0 ?P (c|s,a) [log Q(c0 |s, a)]] + H(c) ? E(s,a)?G(??c ,c) [Ec0 ?P (c|s,a) [log Q(c0 |s, a)]] + H(c) = Ec?P (c),(s,a)?G(??c ,c) [log Q(c|s, a)] + H(c) By maximizing this lower bound we maximize I(c; G(??c , c)). The auxiliary distribution Q(c|s, a) can be parametrized by a neural network. The resulting optimization goal is max min Ec?p(c),(s,a)???c [log(Dw (s, a))] + E(s,a)??E [1 ? log(Dw (s, a))] ? w + ?I Ec?P (c),(s,a)?G(??c ,c) [log Q(c|s, a)] + (12) ?H H(??c ) which results in the generator reward function: Ec?p(c),(s,a)???c [log(Dw (s, a))] + ?I Ec?P (c),(s,a)?G(??c ,c) [log Q(c|s, a)] + ?H H(??c ). (13) This corresponds to the same objective that was derived in Section 4. The auxiliary distribution over the latent variables Q(c|s, a) is analogous to the intention distribution p(i|s, a). 
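In practice, eq. (9) reduces to a per-sample surrogate reward that is handed to the policy optimizer (the entropy bonus is typically handled by TRPO itself). A minimal sketch, assuming the discriminator and intention-prediction network outputs have already been computed on a batch of generator samples:

```python
import numpy as np

def generator_reward(log_D, log_p_i_given_sa, lambda_I):
    # Eq. (9) per sample: log D_w(s, a) + lambda_I * log p(i|s, a), where i is
    # the intention that was actually sampled when generating (s, a).
    return log_D + lambda_I * log_p_i_given_sa

# Toy usage with made-up network outputs for a batch of three samples.
log_D = np.log(np.array([0.4, 0.6, 0.7]))    # D_w(s, a) in (0, 1)
log_p = np.log(np.array([0.9, 0.2, 0.8]))    # p(i|s, a) for the sampled intentions
rewards = generator_reward(log_D, log_p, lambda_I=0.1)
```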
5 Implementation

In this section, we discuss implementation details that can alleviate instability of the training procedure of our model. The first indicator that the training has become unstable is a high classification accuracy of the discriminator. In this case, it is difficult for the generator to produce a meaningful policy as the reward signal from the discriminator is flat and the TRPO gradient of the generator vanishes. In an extreme case, the discriminator assigns all the generator samples to the same class and it is impossible for TRPO to provide a useful gradient, as all generator samples receive the same reward. Previous work suggests several ways to avoid this behavior. These include leveraging the Wasserstein distance metric to improve the convergence behavior [3] and adding instance noise to the inputs of the discriminator to avoid degenerate generative distributions [32]. We find that adding Gaussian noise helped us the most to control the performance of the discriminator and to produce a smooth reward signal for the generator policy. During our experiments, we anneal the noise similar to [32], as the generator policy improves towards the end of the training.

An important indicator that the generator policy distribution has collapsed to a uni-modal policy is a high or increasing loss of the intention-prediction network p(i|s, a). This means that the prediction of the latent variable i is difficult and, consequently, the policy behavior cannot be categorized into separate skills. Hence, the policy executes the same skill for different values of the latent variable. To prevent this, one can increase the weight of the latent intention cost λ_I in the generator loss or add more instance noise to the discriminator, which makes its reward signal relatively weaker.

In this work, we employ both categorical and continuous latent variables to represent the latent intention. The advantage of using a continuous variable is that we do not have to specify the number of possible values in advance, as with the categorical variable, and it leaves more room for interpolation between different skills. We use a softmax layer to represent categorical latent variables, and use a uniform distribution for continuous latent variables as proposed in [7].
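One way to realize the annealed instance noise described above is a linear schedule on the standard deviation of Gaussian noise added to both expert and generator inputs of the discriminator. The value of σ₀ and the schedule below are illustrative choices of ours, not values from the paper:

```python
import numpy as np

def add_instance_noise(batch, iteration, total_iters, sigma0=0.3, rng=None):
    # Anneal the noise to zero as the generator policy improves, so the
    # discriminator reward stays smooth early on and sharpens late in training.
    rng = rng or np.random.default_rng()
    sigma = sigma0 * max(0.0, 1.0 - iteration / total_iters)
    return batch + sigma * rng.standard_normal(batch.shape)

# Apply the same noise level to expert and generator (s, a) batches so the
# discriminator cannot separate them by their noise statistics alone.
```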
6.1 Task setup Reacher The Reacher environment is depicted in Fig. 2 (left). The actuator is a 2-DoF arm attached at the center of the scene. There are several targets placed at random positions throughout the environment. The goal of the task is, given a data set of reaching motions to random targets, to discover the dependency of the target selection on the intention and learn a policy that is capable of reaching different targets based on the specified intention input. We evaluate the performance of our framework on environments with 1, 2 and 4 targets. Walker-2D The Walker-2D (Fig. 1 left) is a 6-DoF bipedal robot consisting of two legs and feet attached to a common base. The goal of this task is to learn a policy that can switch between three different behaviors dependent on the discovered intentions: running forward, running backward and jumping. We use TRPO to train single expert policies and create a combined data set of all three behaviors that is used to train a multi-modal policy using our imitation framework. Humanoid Humanoid (Fig. 1 right) is a high-dimensional robot with 17 degrees of freedom. Similar to Walker-2D the goal of the task is to be able to discover three different policies: running forward, running backward and balancing, from the combined expert demonstrations of all of them. Gripper-pusher This task involves controlling a 4-DoF arm with an actuated gripper to push a sliding block to a specified goal area (Fig. 2 right). We provide separate expert demonstrations of grasping the object, and pushing it towards the goal starting from the object already being inside the hand. The initial positions of the arm, block and the goal area are randomly sampled at the beginning of each episode. The goal of our framework is to discover both intentions and the hierarchical structure of the task from a combined set of demonstrations. 6.2 Multi-Target Imitation Learning Our goal here is to analyze the ability of our method to segment and imitate policies that perform the same task for different targets. To this end, we first evaluate the influence of the latent intention cost on the Reacher task with 2 and 4 targets. For both experiments, we use either a categorical intention distribution with the number of categories equal to the number of targets or a continuous, 2 http://sites.google.com/view/nips17intentiongan 6 Figure 3: Results of the imitation GAN with (top row) and without (bottom row) the latent intention cost. Left: Reacher with 2 targets(crosses): final positions of the reacher (circles) for categorical (1) and continuous (2) latent intention variable. Right: Reacher with 4 targets(crosses): final positions of the reacher (circles) for categorical (3) and continuous (4) latent intention variable. Figure 4: Left: Rewards of different Reacher policies for 2 targets for different intention values over the training iterations with (1) and without (2) the latent intention cost. Right: Two examples of a heatmap for 1 target Reacher using two latent intentions each. uniformly-distributed intention variable, which means that the network has to discover the number of intentions autonomously. Fig. 3 top shows the results of the reaching tasks using the latent intention cost for 2 and 4 targets with different latent intention distributions. For the continuous latent variable, we show a span of different intentions between -1 and 1 in the 0.2 intervals. The colors indicate the intention ?value?. 
In the categorical distribution case, we are able to learn a multi-modal policy that can reach all the targets dependent on the given latent intention (Fig. 3-1 and Fig. 3-3, top). The continuous latent intention is able to discover two modes in the case of two targets (Fig. 3-2, top), but it collapses to only two modes in the four-target case (Fig. 3-4, top), as this is a significantly more difficult task.

As a baseline, we present the results of the Reacher task achieved by the standard GAN imitation learning presented in [16], without the latent intention cost. The obtained results are presented in Fig. 3 (bottom). Since the network is not encouraged to discover different skills through the intention learning cost, it collapses to a single target for 2 targets with both the continuous and discrete latent intention variables. In the case of 4 targets, the network collapses to 2 modes, which can be explained by the fact that even without the latent intention cost the imitation network tries to imitate most of the presented demonstrations. Since the demonstration set is very diverse in this case, the network learned two modes without the explicit instruction (the latent intention cost) to do so.

To demonstrate the development of different intentions, in Fig. 4 (left) we present the Reacher rewards over training iterations for different intention variables. When the latent intention cost is included (Fig. 4-1), the separation of different skills for different intentions starts to emerge around the 1000th iteration and leads to a multi-modal policy that, given the intention value, consistently reaches the target associated with that intention. In the case of the standard imitation learning GAN setup (Fig. 4-2), the network learns how to imitate reaching only one of the targets for both intention values.

In order to analyze the ability to discover different ways to accomplish the same task, we use our framework with the categorical latent intention in the Reacher environment with a single target.

Figure 5: Top: Rewards of Walker-2D policies for different intention values over the training iterations with (left) and without (right) the latent intention cost. Bottom: Rewards of Humanoid policies for different intention values over the training iterations with (left) and without (right) the latent intention cost.

Since we only have a single set of expert trajectories that reach the goal in one, consistent manner, we subsample the expert state-action pairs to ease the intention learning process for the generator. Fig. 4 (right) shows two examples of a heatmap of the visited end-effector states, accumulated for two different values of the intention variable. In both cases, the task is executed correctly (the robot reaches the target), but it is achieved using different trajectories. These trajectories naturally emerged through the latent intention cost, as it encourages different behaviors for different latent intentions. It is worth noting that the presented behavior can also be replicated for multiple targets if the number of categories in the categorical distribution of the latent intention exceeds the number of targets.

6.3 Multi-Task Imitation Learning

We also seek to further understand whether our model extends to segmenting and imitating policies that perform different tasks. In particular, we evaluate whether our framework is able to learn a multi-modal policy on the Walker-2D task. We mix three different policies (running backwards, running forwards, and jumping)
into one expert policy π_E and try to recover all of them through our method. The results are depicted in Fig. 5 (top). The additional latent intention cost results in a policy that is able to autonomously segment and mimic all three behaviors and achieve performance similar to the expert policies (Fig. 5, top-left). Different intention variable values correspond to different expert policies: 0 - running forwards, 1 - jumping, and 2 - running backwards. The imitation learning GAN method is shown as a baseline in Fig. 5 (top-right). The results show that the policy collapses to a single mode, where all intention variable values correspond to the jumping behavior, ignoring the demonstrations of the other two skills.

To test whether our multi-modal imitation learning framework scales to high-dimensional tasks, we evaluate it in the Humanoid environment. The expert policy is constructed using three expert policies: running backwards, running forwards, and balancing while standing upright. Fig. 5 (bottom) shows the rewards obtained for different values of the intention variable. Similarly to Walker-2D, the latent intention cost enables the neural network to segment the tasks and learn a multi-modal imitation policy. In this case, however, due to the high dimensionality of the task, the resulting policy is able to mimic the running forwards and balancing policies almost as well as the experts, but it achieves suboptimal performance on the running backwards task (Fig. 5, bottom-left). The imitation learning GAN baseline collapses to a uni-modal policy that maps all intention values to a balancing behavior (Fig. 5, bottom-right).

Figure 6: Time-lapse of the learned Gripper-pusher policy. The intention variable is changed manually in the fifth screenshot, once the grasping policy has grasped the block.

Finally, we evaluate the ability of our method to discover options in hierarchical IRL tasks. To test this, we collect expert policies in the Gripper-pusher environment that consist of demonstrations of grasping and of pushing when the object is grasped. The goal of this task is to check whether our method can segment the mix of expert policies into separate grasping and pushing-when-grasped skills. Since the two sub-tasks start from different initial conditions, we cannot present the results in the same form as for the previous tasks. Instead, we present a time-lapse of the learned multi-modal policy (see Fig. 6) that demonstrates the ability to change the intention during execution. The categorical intention variable is manually changed after the block is grasped, as in the rollout sketch below. The intention change results in switching to a pushing policy that brings the block into the goal region. We present this setup as an example of extracting different options from the expert policies that can be further used in a hierarchical reinforcement learning task to learn the best switching strategy.
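As a small illustration of the switching behavior in Figure 6, the rollout sketch below executes the learned multi-modal policy and flips the categorical intention from grasping to pushing once the block has been grasped. All names are hypothetical (a `policy(obs, intention)` callable, a Gym-style `env`, and a `block_grasped` signal); in the paper the intention is switched manually rather than by an automatic test.

```python
import numpy as np

def rollout_with_intention_switch(env, policy, max_steps=500):
    # One-hot categorical intentions; two categories assumed here.
    grasp, push = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    obs, intention = env.reset(), grasp
    for _ in range(max_steps):
        action = policy(obs, intention)          # multi-modal policy
        obs, reward, done, info = env.step(action)
        if info.get("block_grasped", False):     # assumed env signal
            intention = push                     # switch skill mid-episode
        if done:
            break
    return obs
```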
7 Conclusions

We present a novel imitation learning method that learns a multi-modal stochastic policy, which is able to imitate a number of automatically segmented tasks using a set of unstructured and unlabeled demonstrations. The presented approach learns the notion of intention and is able to perform different tasks based on the policy's intention input. We evaluated our method on a set of simulation scenarios, where we show that it is able to segment the demonstrations into different tasks and to learn a multi-modal policy that imitates all of the segmented skills. We also compared our method to a baseline approach that performs imitation learning without explicitly separating the tasks. In future work, we plan to focus on autonomous discovery of the number of tasks in a given pool of demonstrations, as well as on evaluating this method on real robots. We also plan to learn an additional hierarchical policy over the discovered intentions as an extension of this work.

Acknowledgements

This research was supported in part by National Science Foundation grants IIS-1205249, IIS-1017134, EECS-0926052, the Office of Naval Research, the Okawa Foundation, and the Max-Planck-Society. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding organizations.

References

[1] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proc. ICML, 2004.
[2] Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, 2009.
[3] Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017.
[4] Monica Babes, Vukosi Marivate, Kaushik Subramanian, and Michael L. Littman. Apprenticeship learning about multiple intentions. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 897-904, 2011.
[5] Aude Billard, Sylvain Calinon, Ruediger Dillmann, and Stefan Schaal. Robot programming by demonstration. In Springer Handbook of Robotics, pages 1371-1394. Springer, 2008.
[6] Yevgen Chebotar, Mrinal Kalakrishnan, Ali Yahya, Adrian Li, Stefan Schaal, and Sergey Levine. Path integral guided policy search. arXiv preprint arXiv:1610.00529, 2016.
[7] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets, 2016.
[8] Emily L. Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pages 1486-1494, 2015.
[9] Christos Dimitrakakis and Constantin A. Rothkopf. Bayesian multitask inverse reinforcement learning. In European Workshop on Reinforcement Learning, pages 273-284. Springer, 2011.
[10] Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. arXiv preprint arXiv:1703.07326, 2017.
[11] Chelsea Finn, Paul Christiano, Pieter Abbeel, and Sergey Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. arXiv preprint arXiv:1611.03852, 2016.
[12] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In Proceedings of the 33rd International Conference on Machine Learning, volume 48, 2016.
[13] Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic neural networks for hierarchical reinforcement learning. arXiv preprint arXiv:1704.03012, 2017.
[14] Roy Fox, Sanjay Krishnan, Ion Stoica, and Ken Goldberg. Multi-level discovery of deep options. arXiv preprint arXiv:1703.08294, 2017.
[15] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D.
Lawrence, and Kilian Q. Weinberger, editors, NIPS, pages 2672-2680, 2014.
[16] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. CoRR, abs/1606.03476, 2016.
[17] Mrinal Kalakrishnan, Ludovic Righetti, Peter Pastor, and Stefan Schaal. Learning force control policies for compliant manipulation. In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pages 4639-4644. IEEE, 2011.
[18] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1857-1865, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR.
[19] Oliver Kroemer, Christian Daniel, Gerhard Neumann, Herke Van Hoof, and Jan Peters. Towards learning hierarchical skills for multi-phase manipulation tasks. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 1503-1510. IEEE, 2015.
[20] Sergey Levine, Zoran Popovic, and Vladlen Koltun. Nonlinear inverse reinforcement learning with Gaussian processes. In Advances in Neural Information Processing Systems, pages 19-27, 2011.
[21] Yunzhu Li, Jiaming Song, and Stefano Ermon. Inferring the latent structure of human decision-making from raw visual inputs. CoRR, abs/1703.08840, 2017.
[22] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
[23] Katharina Mülling, Jens Kober, Oliver Kroemer, and Jan Peters. Learning to select and generalize striking movements in robot table tennis. The International Journal of Robotics Research, 32(3):263-279, 2013.
[24] Andrew Y. Ng, Stuart J. Russell, et al. Algorithms for inverse reinforcement learning. In ICML, pages 663-670, 2000.
[25] Scott Niekum, Sachin Chitta, Andrew G. Barto, Bhaskara Marthi, and Sarah Osentoski. Incremental semantically grounded learning from demonstration. In Robotics: Science and Systems, volume 9, 2013.
[26] David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. arXiv preprint arXiv:1610.01945, 2016.
[27] Dean A. Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1):88-97, 1991.
[28] Stéphane Ross and Drew Bagnell. Efficient reductions for imitation learning. In AISTATS, volume 3, pages 3-5, 2010.
[29] Stefan Schaal. Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences, 3(6):233-242, 1999.
[30] John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In Francis R. Bach and David M. Blei, editors, ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 1889-1897. JMLR.org, 2015.
[31] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction, 1998.
[32] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. CoRR, abs/1610.04490, 2016.
[33] Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. FeUdal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161, 2017.
[34] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros.
Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
[35] Brian D. Ziebart, Andrew L. Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In Dieter Fox and Carla P. Gomes, editors, AAAI, pages 1433-1438. AAAI Press, 2008.
Learning to Inpaint for Image Compression

Mohammad Haris Baig*
Department of Computer Science, Dartmouth College, Hanover, NH

Vladlen Koltun
Intel Labs, Santa Clara, CA

Lorenzo Torresani
Dartmouth College, Hanover, NH

* http://www.cs.dartmouth.edu/~haris/compression

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

We study the design of deep architectures for lossy image compression. We present two architectural recipes in the context of multi-stage progressive encoders and empirically demonstrate their importance on compression performance. Specifically, we show that: (a) predicting the original image data from residuals in a multi-stage progressive architecture facilitates learning and leads to improved performance at approximating the original content, and (b) learning to inpaint (from neighboring image pixels) before performing compression reduces the amount of information that must be stored to achieve a high-quality approximation. Incorporating these design choices in a baseline progressive encoder yields an average reduction of over 60% in file size with similar quality compared to the original residual encoder.

1 Introduction

Visual data constitutes most of the total information created and shared on the Web every day, and it forms the bulk of the demand for storage and network bandwidth [13]. It is customary to compress image data as much as possible as long as there is no perceptible loss in content. In recent years, deep learning has made it possible to design deep models for learning compact representations for image data [2, 16, 18, 19, 20]. Deep learning based approaches, such as the work of Rippel and Bourdev [16], significantly outperform traditional methods of lossy image compression.

In this paper, we show how to improve the performance of deep models trained for lossy image compression. We focus on the design of models that produce progressive codes. Progressive codes are a sequence of representations that can be transmitted to improve the quality of an existing estimate (from a previously sent code) by adding missing detail. This is in contrast to non-progressive codes, whereby the entire data for a certain quality approximation must be transmitted before the image can be viewed. Progressive codes improve the user's browsing experience by reducing the loading time of pages that are rich in images. Our main contributions in this paper are two-fold.

1. While traditional progressive encoders are optimized to compress residual errors in each stage of their architecture (residual-in, residual-out), we instead propose a model that is trained to predict at each stage the original image data from the residual of the previous stage (residual-in, image-out). We demonstrate that this leads to an easier optimization, resulting in better image compression. The resulting architecture reduces the amount of information that must be stored for reproducing images at similar quality by 18% compared to a traditional residual encoder.

2. Existing deep architectures do not exploit the high degree of spatial coherence exhibited by neighboring patches. We show how to design and train a model that can exploit dependences between adjacent regions by learning to inpaint from the available content. We introduce multi-scale convolutions that sample content at multiple scales to assist with inpainting. We jointly train our proposed inpainting and compression models and show that inpainting reduces the amount of information that must be stored by an additional 42%.
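To make the notion of a progressive code concrete, the sketch below shows a decoder loop in which each stage's bits refine the running estimate, so a viewable approximation exists after every transmitted chunk. This is a schematic Python sketch, not this paper's model: the `stage_decoders` interface is hypothetical, and whether each refinement is added to or replaces the running estimate depends on the architecture (Sections 2 and 2.1 contrast exactly these choices).

```python
def progressive_decode(stage_decoders, codes):
    # codes[s] is the binary representation produced by stage s; each
    # decoder may also look at the current estimate when the
    # architecture has inter-stage connections.
    estimate = None
    for decoder, bits in zip(stage_decoders, codes):
        refinement = decoder(bits, estimate)
        estimate = refinement if estimate is None else estimate + refinement
        yield estimate  # usable image approximation after every stage
```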
2 Approach

We begin by reviewing the architecture and the learning objective of a progressive multi-stage encoder-decoder with S stages. We adopt the convolutional-deconvolutional residual encoder proposed by Toderici et al. [19] as our reference model. The model extracts a compact binary representation B from an image patch P. This binary representation, used to reconstruct an approximation of the original patch, consists of the sequence of representations extracted by the S stages of the model, B = [B_1, B_2, ..., B_S]. The first stage of the model extracts a binary code B_1 from the input patch P. Each of the subsequent stages learns to extract a representation B_s that models the compression residuals R_{s-1} from the previous stage. The compression residuals R_s are defined as R_s = R_{s-1} - M_s(R_{s-1} | \theta_s), where M_s(R_{s-1} | \theta_s) represents the reconstruction obtained by stage s when modeling the residuals R_{s-1}. The model at each stage is split into an encoder B_s = E_s(R_{s-1} | \theta_s^E) and a decoder D_s(B_s | \theta_s^D), such that M_s(R_{s-1} | \theta_s) = D_s(E_s(R_{s-1} | \theta_s^E) | \theta_s^D) and \theta_s = \{\theta_s^E, \theta_s^D\}. The parameters for the s-th stage of the model are denoted by \theta_s. The residual encoder-decoder is trained on a dataset \mathcal{P}, consisting of N image patches, according to the following objective:

L(\mathcal{P}; \theta_{1:S}) = \sum_{s=1}^{S} \sum_{i=1}^{N} \| R_{s-1}^{(i)} - M_s(R_{s-1}^{(i)} | \theta_s) \|_2^2.   (1)

Here R_s^{(i)} represents the compression residual for the i-th patch P^{(i)} after stage s, and R_0^{(i)} = P^{(i)}. Residual encoders are difficult to optimize, as gradients have to traverse long paths from later stages to affect change in the earlier stages; when moving along longer paths, gradients tend to decrease in magnitude before they reach the earlier stages. We address this shortcoming of residual encoders by studying a class of architectures we refer to as "Residual-to-Image" (R2I).

2.1 Residual-to-Image (R2I)

To address the issue of vanishing gradients, we add connections between subsequent stages and restate the loss to predict the original data at the end of each stage, thus performing residual-to-image prediction. This leads to the new objective shown below:

L(\mathcal{P}; \theta_{1:S}) = \sum_{s=1}^{S} \sum_{i=1}^{N} \| P^{(i)} - M_s(R_{s-1}^{(i)} | \theta_s) \|_2^2.   (2)

Stage s of this model takes as input the compression residuals R_{s-1} computed with respect to the original data, R_{s-1} = P - M_{s-1}(R_{s-2} | \theta_{s-1}), where M_{s-1}(R_{s-2} | \theta_{s-1}) now approximates the reconstruction of the original data P at stage s-1. To allow complete image reconstructions to be produced at each stage while only feeding in residuals, we introduce connections between the layers of adjacent stages. These connections allow later stages to incorporate information that has been recovered by earlier stages into their estimate of the original image data. Consequently, these connections (between subsequent stages) allow for better optimization of the model.

In addition to assisting with modeling the original image, these connections play two key roles. First, they create residual blocks [10], which encourage explicit learning of how to reproduce information that could not be generated by the previous stage. Second, they reduce the length of the path along which information has to travel from later stages to impact the earlier stages, leading to a better joint optimization. This leads us to the question of where such connections should be introduced and how information should be propagated.
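A compact way to see the difference between the two objectives is to write out the training loops they induce. The sketch below is a PyTorch-style illustration of Eqs. (1) and (2) for a single patch, with each `stage` standing in for the encoder-binarizer-decoder pipeline M_s (inter-stage connections are folded into the stage modules and omitted here); it sketches the objectives, not the full training code.

```python
import torch

def residual_encoder_loss(stages, patch):
    # Eq. (1): residual-in, residual-out. Each stage models the
    # residual left over by the previous stage.
    loss, residual = 0.0, patch                  # R_0 = P
    for stage in stages:
        recon = stage(residual)                  # M_s(R_{s-1})
        loss = loss + ((residual - recon) ** 2).sum()
        residual = residual - recon              # R_s = R_{s-1} - M_s(R_{s-1})
    return loss

def r2i_loss(stages, patch):
    # Eq. (2): residual-in, image-out. Each stage predicts the original
    # patch; residuals are always taken with respect to the data.
    loss, residual = 0.0, patch                  # R_0 = P
    for stage in stages:
        recon = stage(residual)                  # an estimate of P itself
        loss = loss + ((patch - recon) ** 2).sum()
        residual = patch - recon                 # R_s = P - M_s(R_{s-1})
    return loss
```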
We consider two types of connections to propagate information between successive stages. 1) Prediction connections are analogous to the identity shortcuts introduced by He et al. [10] for residual learning. They act as parameter-free additive connections: the output of each stage is produced by simply adding together the residual predictions of the current stage and all preceding stages (see Figure 1(b)) before applying a final non-linearity. 2) Parametric connections are referred to as projection shortcuts by He et al. [10]. Here we use them to connect corresponding layers in two consecutive stages of the compression model. The features of each layer from the previous stage are convolved with learned filters before being added to the features of the same layer in the current stage. A non-linearity is then applied on top.

Figure 1: Multiple approaches for introducing connections between successive stages. These designs for progressive architectures allow for varying degrees of information to be shared. Architectures (b)-(d) do not reconstruct residuals, but the original data at every stage. We call these architectures "residual-to-image" (R2I).

The prediction connections only yield the benefit of creating residual blocks, albeit very large ones that are thus difficult to optimize. In contrast, parametric connections allow the intermediate representations from previous stages to be passed to the subsequent stages. They also create a denser connectivity pattern, with gradients now moving along corresponding layers in adjacent stages. We consider two variants of parametric connections: "full" connections, which link all the layers in two successive stages (see Figure 1(c)), and "decoding" connections, which link only corresponding decoding layers (i.e., there are no connections between encoding layers of adjacent stages; see Figure 1(d)). We note that the LSTM-based model of Toderici et al. [20] represents a particular instance of an R2I network with full connections. In Section 3 we demonstrate that R2I models with decoding connections outperform those with full connections, and we provide an intuitive explanation for this result. A sketch contrasting the two connection types follows.
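This is a minimal PyTorch-style sketch of the two connection types under stated assumptions: the 1x1 projection kernel and the tanh output non-linearity are illustrative choices, since the text does not pin down either.

```python
import torch
import torch.nn as nn

class ParametricConnection(nn.Module):
    # Projection shortcut between corresponding layers of consecutive
    # stages: the previous stage's features are convolved with learned
    # filters, added to the current stage's features, then passed
    # through a non-linearity.
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, current_feat, prev_stage_feat):
        return torch.relu(current_feat + self.proj(prev_stage_feat))

def prediction_connection(stage_predictions):
    # Parameter-free additive connection: sum the residual predictions
    # of the current and all preceding stages, then apply a final
    # non-linearity (tanh assumed here).
    return torch.tanh(torch.stack(stage_predictions).sum(dim=0))
```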
We consider the scenario where images are encoded and decoded block-by-block moving from left to right and going from top to bottom (similar to how traditional codecs process images [1, 21]). Now, at decoding time only content above and to the left of each patch will have been reconstructed (see Figure 2(a)). This gives rise to the problem of ?partial-context inpainting?. We propose a model that, given an input region C, attempts to predict the content of the current patch P . We denote by P? the dataset which contains all the patches from the dataset P 3 Content available for inpainting C? Region to inpaint C P? P Full-context Inpainting Partial-context Inpainting (b) Multi-scale convolutional layer (a) Variations of the inpainting problem Figure 2: (a) The two kinds of inpainting problems. (b) A multi-scale convolutional layer with 3 dilation factors. The colored boxes represent pixels from which the content is sampled. and the respective context regions C for each patch. The loss function used to train our inpainting network is: ? ?I ) = Linp (P; N X kP (i) ? MI (C (i) |?I )k22. (3) i=1 The output of the inpainting network is denoted by MI (C (i) |?I ), where ?I refers to the parameters of the inpainting network. 2.2.1 Architecture of the Partial-Context Inpainting Network Our inpainting network has a feed-forward architecture which propagates information from the context region C to the region being inpainted, P . To improve the ability of our model at predicting content, we use a multi-scale convolutional layer as the basic building block of our inpainting network. We make use of the dilated convolutions described by Yu and Koltun [23] to allow for sampling at various scales. Each multi-scale convolutional layer is composed of k filters for each dilation factor being considered. Varying the dilation factor of filters gives us the ability to analyze content at various scales. This structure of filters provides two benefits. First, it allows for a substantially denser and more diverse sampling of data from context and second it allows for better propagation of content at different spatial scales. A similarly designed layer was also used by Chen et al. [5] for sampling content at multiple scales for semantic segmentation. Figure 2(b) shows the structure of a multi-scale convolutional layer. The multi-scale convolutional layer also gives us the freedom to propagate content at full resolution (no striding or pooling) as only a few multi-scale layers suffice to cover the entire region. This allows us to train a relatively shallow yet highly expressive architecture which can propagate fine-grained information that might otherwise be lost due to sub-sampling. This light-weight and efficient design is needed to allow for joint training with a multi-stage compression model. 2.2.2 Connecting the Inpainting Network with the R2I Compression model Next, we describe how to use the prediction of the inpainting network for assisting with compression. Whereas the inpainting network learns to predict the data as accurately as possible, we note that this is not sufficient to achieve good performance on compression, where it is also necessary that the ?inpainting residuals? be easy to compress. We describe the inpainting residuals as R0 = P ? MI (C|?I ), where MI (C|?I ) denotes the inpainting estimate. As we wanted to train our model to always predict the data, we add the inpainting estimate to the final prediction of each stage of our compression model. 
This allows us to (a) produce the original content at each stage and (b) to 4 discover an inpainting that is beneficial for all stages of the model because of joint training. We now train our complete model as ? ?I , ?1:S ) = Linp (P; ? ?I ) + LC (P; N X S X (i) kP (i) ? [Ms (Rs?1 |?s ) + MI (C (i) |?I )] k22. (4) i=1 s=1 (i) In this new objective LC , the first term Linp corresponds to the original inpainting loss, R0 corresponds to the inpainting residual for example i. We note that each stage of this inpainting-based progressive coder directly affects what is learned by the inpainting network. We refer to the model trained with this joint objective as ?Inpainting for Residual-to-Image Compression? (IR2I). Whereas we train our model to perform inpainting from the original image content, we use a lossy approximation of the context region C when encoding images with IR2I. This is done because at decoding time our model does not have access to the original image data. We use the approximation from stage 2 of our model for performing inpainting at encoding and decoding time, and transmit the binary codes for the first two stages as a larger first code. This strategy allows us to leverage inpainting while performing progressive image compression. 2.3 Implementation Details Our models were trained on 6,507 images from the ImageNet dataset [7], as proposed by Ball? et al. [2] to train their single-stage encoder-decoder architectures. A full description of the R2I models and the inpainting network is provided in the supplementary material. We use the Caffe library [11] to train our models. The residual encoder and R2I models were trained for 60,000 iterations whereas the joint inpainting network was trained for 110,000 iterations. We used the Adam optimizer [12] for training our models and the MSRA initialization [9] for initializing all stages. We used initial learning rates of 0.001 and the learning rate was dropped after 30K and 45K for the R2I models. For the IR2I model, the learning rate was dropped after 30K, 65K, and 90K iterations by a factor of 10 each time. All of our models were trained to reproduce the content of 32 ? 32 image patches. Each of our models has 8 stages, with each stage contributing 0.125 bits-per-pixel (bpp) to the total representation of a patch. Our models handle binary optimization by employing the biased estimators approach proposed by Raiko et al. [15] as was done by Toderici et al. [19, 20]. Our inpainting network has 8 multi-scale convolutional layers for content propagation and one standard convolutional layer for performing the final prediction. Each multi-scale convolutional layer consists of 24 filters each for dilation factors 1, 2, 4, 8. Our inpainting network takes as input a context region C of size 64 ? 64, where the bottom right 32 ? 32 region is zeroed out and represents the region to be inpainted. 3 Results We investigate the improvement brought about by the presented techniques. We are interested in studying the reduction in bit-rate, for varying quality of reconstruction, achieved after adaptation from the residual encoder proposed by Toderici et al. [19]. To evaluate performance, we perform compression with our models on images from the Kodak dataset [8]. The dataset consists of 24 uncompressed color images of size 512 ? 768. The quality is measured according to the MSSSIM [22] metric (higher values indicate better quality). We use the Bjontegaard-Delta metric [4] to compute the average reduction in bit-rate across all quality settings. 
3 Results

We investigate the improvement brought about by the presented techniques. We are interested in studying the reduction in bit-rate, for varying quality of reconstruction, achieved relative to the residual encoder proposed by Toderici et al. [19]. To evaluate performance, we perform compression with our models on images from the Kodak dataset [8]. The dataset consists of 24 uncompressed color images of size 512 × 768. Quality is measured according to the MS-SSIM [22] metric (higher values indicate better quality). We use the Bjontegaard-Delta metric [4] to compute the average reduction in bit-rate across all quality settings.

3.1 R2I - Design and Performance

The table in Figure 3(a) shows the percentage reduction in bit-rate achieved by the three variants of the Residual-to-Image models. As can be seen, adding side-connections and training for the more desirable objective (i.e., approximating the original data) at each stage helps each of our models. That said, having connections in the decoder only helps more than using a "full" connection approach or only sharing the final prediction.

Figure 3: (a) Average rate savings for each of the three R2I variants compared to the residual encoder proposed by Toderici et al. [19]:

Approach         Rate savings, SSIM (%)   Rate savings, MS-SSIM (%)
R2I Prediction   4.483                    5.177
R2I Full         10.015                   7.652
R2I Decoding     20.002                   17.951

(b) The quality (MS-SSIM, dB) of images produced by each of the three R2I variants and the residual encoder across a range of bit-rates (0.1-1 bpp).

Figure 4: The R2I training loss (MSE) from 3 different stages (start, middle, end) viewed as a function of training iterations (×10^4), for the "full" and the "decoding" connections models. The decoding-connections model converges faster, to a lower value, and shows less variance.

The model which shares only the prediction between stages performs poorly in comparison to the other two designs, as it does not allow features from earlier stages to be altered as efficiently as the full or decoding connections do. The model with decoding connections does better than the architecture with full connections because, with connections at decoding only, the binarization layer in each stage extracts a representation from the relevant information only (the residuals with respect to the data). In contrast, when connections are established in both the encoder and the decoder, the binary representation may include information that has already been captured by a previous stage, thereby burdening each stage with identifying the information pertinent to improving the reconstruction and leading to a tougher optimization. Figure 4 shows that the model with full connections struggles to minimize the training error compared to the model with decoding connections. This difference in training error indicates that connections in the encoder make it harder for the model to do well at training time. This difficulty of optimization amplifies with the number of stages, as can be seen from the gap between the full and decoding architectures (shown in Figure 3(b)), because the residuals become harder to compress.

We note that R2I models significantly improve the quality of reconstruction at higher bit-rates but do not improve the estimates at lower bit-rates as much (see Figure 3(b)). This tells us that overall performance can be improved by focusing on approaches that yield a significant improvement at lower bit-rates, such as inpainting, which is analyzed next.
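For reference, a compact numpy sketch of the Bjontegaard-Delta bit-rate computation used throughout this section is given below. It assumes at least four (rate, quality) points per method, with quality taken here as MS-SSIM in dB, and follows the standard cubic-fit-and-integrate procedure; it is not code released with [4].

```python
import numpy as np

def bd_rate_savings(rate_a, qual_a, rate_b, qual_b):
    # Fit a cubic polynomial of log-rate as a function of quality for
    # each method, integrate over the overlapping quality interval, and
    # report the average rate change of method B relative to method A
    # in percent (negative values mean B saves bits over A).
    la, lb = np.log(rate_a), np.log(rate_b)
    pa, pb = np.polyfit(qual_a, la, 3), np.polyfit(qual_b, lb, 3)
    lo = max(min(qual_a), min(qual_b))
    hi = min(max(qual_a), max(qual_b))
    ia, ib = np.polyint(pa), np.polyint(pb)
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_b = (np.polyval(ib, hi) - np.polyval(ib, lo)) / (hi - lo)
    return (np.exp(avg_b - avg_a) - 1.0) * 100.0
```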
3.2 Impact of Inpainting

We begin by analyzing the performance of the inpainting network and other approaches on partial-context inpainting. We compare the inpainting network with both traditional approaches and a learning-based baseline. Table 1 shows the average SSIM achieved by each approach when inpainting all non-overlapping patches in the Kodak dataset.

Table 1: Average SSIM for partial-context inpainting on the Kodak dataset [8]. The vanilla model is a feed-forward CNN with no multi-scale convolutions.

Approach                              SSIM
PDE-based [3]                         0.4574
Exemplar-based [6]                    0.4611
Learning-based: Vanilla network       0.4545
Learning-based: Inpainting network    0.5165

The vanilla network corresponds to a 32-layer model (4 times as deep as the inpainting network) that does not use multi-scale convolutions (all filters have a dilation factor of 1), has the same number of parameters, and also operates at full resolution (like our inpainting network). This points to the fact that the improvement of the inpainting network over the vanilla model is a consequence of using multi-scale convolutions. The inpainting network improves over traditional approaches because our model learns the best strategy for propagating content, as opposed to using hand-engineered principles of content propagation. The low performance of the vanilla network shows that learning by itself is not superior to traditional approaches; multi-scale convolutions play a key role in achieving better performance.

Whereas inpainting provides an initial estimate of the content within the region, it by no means generates a perfect reconstruction. This leads us to the question of whether this initial estimate is better than no estimate at all. The table in Figure 5(a) shows the performance on the compression task with and without inpainting. These results show that the greatest reduction in file size is achieved when the inpainting network is jointly trained with the R2I model.

Figure 5: (a) Average rate savings with varying forms of inpainting; all bit-rate savings are reported with respect to the residual encoder by Toderici et al. [19]:

Approach                  Rate savings, SSIM (%)   Rate savings, MS-SSIM (%)
R2I Decoding              20.002                   17.951
R2I Decoding Sep-Inp      27.379                   27.794
R2I Decoding Joint-Inp    63.353                   60.446

(b) The quality (MS-SSIM, dB) of images with each of our proposed approaches at varying bit-rates.

We note (from Figure 5(b)) that inpainting greatly improves the quality of results obtained at both lower and higher bit-rates. The baseline in which the inpainting network is trained separately from the compression network is presented here to emphasize the role of joint training. Traditional codecs [1] use simple non-learning-based inpainting approaches, and their predefined methods of representing data are unable to compactly encode the inpainting residuals. Learning to inpaint separately improves performance, as the inpainted estimate is better than no estimate. But given that the compression model has not been trained to optimize the compression residuals, the reduction in bit-rate at high quality levels is low. We show that with joint training, we can not only train a model that does better inpainting, but also ensure that the inpainting residuals can be represented compactly.
3.3 Comparison with Existing Approaches

Table 2 shows the performance of various approaches relative to JPEG [21] in the 0.125-1 bits-per-pixel (bpp) range. We select this range because images from our models toward the end of this range show no perceptible compression artifacts. The first part of the table evaluates the performance of learning-based progressive approaches. Our proposed model outperforms the multi-stage residual encoder of Toderici et al. [19] (trained on the same 6.5K dataset) by 17.9%, and IR2I outperforms the residual encoder by reducing file sizes by 60.4%. The Residual-GRU, while similar in architecture to our "full" connections model, does not do better even when trained on a dataset that is 1000 times bigger and for 10 times longer. The results shown here do not make use of entropy coding, as the goal of this work is to study how to improve the performance of deep networks for progressive image compression, and entropy coding makes it harder to understand where the performance improvements come from. As various approaches use different entropy coding methods, this further obfuscates the source of the improvements.

The second part of the table shows the performance of existing codecs. Existing codecs use entropy coding and rate-distortion optimization. Even without using either of these powerful post-processing techniques, our final "IR2I" model is competitive with traditional methods for compression, which use both. A comparison with recent non-progressive approaches [2, 18], which also use these post-processing techniques for image compression, is provided in the supplementary material.

Table 2: Average rate savings compared to JPEG [21]. The savings are computed on the Kodak [8] dataset with rate-distortion profiles measuring MS-SSIM in the 0-1 bpp range.

Approach                     Number of Training Images   Progressive   Rate Savings (%)
Residual Encoder [19]        6.5K                        Yes           2.56
Residual-GRU [20]            6M                          Yes           33.26
R2I (Decoding connections)   6.5K                        Yes           18.53
IR2I                         6.5K                        Yes           51.25
JPEG-2000 [17]               N/A                         No            63.01
WebP [1]                     N/A                         No            64.98

We observe that a naive implementation of IR2I creates a linear dependence in content (as all regions used as context have to be decoded before being used for inpainting) and thus may be substantially slower. In practice, this slowdown would be negligible, as one can use a diagonal scan pattern (similar to traditional codecs) to ensure high parallelism and thereby reduce run times. Furthermore, we perform inpainting using predictions from the first step only; the dependence therefore only exists when generating the first progressive code. For all subsequent stages, there is no dependence in content, and our approach is comparable in run time to similar approaches.

4 Conclusion and Future Work

We study a class of "Residual-to-Image" models and show that, within this class, architectures with decoding connections perform better at approximating image data than designs with other forms of connectivity. We observe that our R2I decoding-connections model struggles at low bit-rates, and we show how to exploit the spatial coherence between the content of adjacent patches via inpainting to improve performance at approximating image content at low bit-rates. We design a new model for partial-context inpainting using multi-scale convolutions and show that the best way to leverage inpainting is to jointly train the inpainting network with our R2I Decoding model.
One interesting extension of this work would be to incorporate entropy coding within our progressive compression framework, to train models that produce binary codes with low entropy that can be represented even more compactly. Another possible direction would be to extend our proposed framework to video data, where the gains from the presented recipes for improving compression may be even greater.

5 Acknowledgements

This work was funded in part by Intel Labs and NSF award CNS-120552. We gratefully acknowledge NVIDIA and Facebook for the donation of GPUs used for portions of this work. We would like to thank George Toderici, Nick Johnston, and Johannes Balle for providing us with information needed for accurate assessment. We are grateful to members of the Visual Computing Lab at Intel Labs, and members of the Visual Learning Group at Dartmouth College, for their feedback.

References

[1] WebP: a new image format for the web. https://developers.google.com/speed/webp/. Accessed: 2017-04-29.
[2] Johannes Ballé, Valero Laparra, and Eero P. Simoncelli. End-to-end optimized image compression. In ICLR, 2017.
[3] Marcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, and Coloma Ballester. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pages 417-424. ACM Press/Addison-Wesley Publishing Co., 2000.
[4] Gisle Bjontegaard. Improvements of the BD-PSNR model. In ITU-T SG16/Q6, 35th VCEG Meeting, Berlin, Germany, July 2008.
[5] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915, 2016.
[6] Antonio Criminisi, Patrick Pérez, and Kentaro Toyama. Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on Image Processing, 13(9):1200-1212, 2004.
[7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248-255. IEEE, 2009.
[8] Eastman Kodak Company. Kodak lossless true color image suite, 1999. http://r0k.us/graphics/kodak/.
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026-1034, 2015.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[11] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[12] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.
[13] Mary Meeker. Internet Trends Report 2016. KPCB, 2016.
[14] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2536-2544, 2016.
[15] Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. In ICLR, 2015.
[16] Oren Rippel and Lubomir Bourdev. Real-time adaptive image compression. In International Conference on Machine Learning, 2017.
[17] Athanassios Skodras, Charilaos Christopoulos, and Touradj Ebrahimi. The JPEG 2000 still image compression standard. IEEE Signal Processing Magazine, 18(5):36-58, 2001.
[18] L. Theis, W. Shi, A. Cunningham, and F. Huszár. Lossy image compression with compressive autoencoders. In ICLR, 2017.
[19] George Toderici, Sean M. O'Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrent neural networks. In ICLR, 2016.
[20] George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell. Full resolution image compression with recurrent neural networks. arXiv preprint arXiv:1608.05148, 2016.
[21] Gregory K. Wallace. The JPEG still picture compression standard. Communications of the ACM, 34(4), 1991.
[22] Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. Multiscale structural similarity for image quality assessment. In Signals, Systems and Computers, 2004. Conference Record of the Thirty-Seventh Asilomar Conference on, volume 2, pages 1398-1402. IEEE, 2003.
[23] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
Adaptive Bayesian Sampling with Monte Carlo EM
Anirban Roychowdhury, Srinivasan Parthasarathy
Department of Computer Science and Engineering, The Ohio State University
[email protected], [email protected]

Abstract
We present a novel technique for learning the mass matrices in samplers obtained from discretized dynamics that preserve some energy function. Existing adaptive samplers use Riemannian preconditioning techniques, where the mass matrices are functions of the parameters being sampled. This leads to significant complexities in the energy reformulations and resultant dynamics, often leading to implicit systems of equations and requiring inversion of high-dimensional matrices in the leapfrog steps. Our approach provides a simpler alternative, by using existing dynamics in the sampling step of a Monte Carlo EM framework, and learning the mass matrices in the M step with a novel online technique. We also propose a way to adaptively set the number of samples gathered in the E step, using sampling error estimates from the leapfrog dynamics. Along with a novel stochastic sampler based on Nosé-Poincaré dynamics, we use this framework with standard Hamiltonian Monte Carlo (HMC) as well as newer stochastic algorithms such as SGHMC and SGNHT, and show strong performance on synthetic and real high-dimensional sampling scenarios; we achieve sampling accuracies comparable to Riemannian samplers while being significantly faster.

1 Introduction
Markov Chain Monte Carlo sampling is a well-known set of techniques for learning complex Bayesian probabilistic models that arise in machine learning. Typically used in cases where computing the posterior distributions of parameters in closed form is not feasible, MCMC techniques that converge reliably to the target distributions offer a provably correct way (in an asymptotic sense) to draw samples of target parameters from arbitrarily complex probability distributions. A recently proposed method in this domain is Hamiltonian Monte Carlo (HMC) [1, 2], which formulates the target density as an "energy function" augmented with auxiliary "momentum" parameters, and uses discretized Hamiltonian dynamics to sample the parameters while preserving the energy function. The resulting samplers perform noticeably better than random walk-based methods in terms of sampling efficiency and accuracy [1, 3]. For use in stochastic settings, where one uses random minibatches of the data to calculate the gradients of likelihoods for better scalability, researchers have used Fokker-Planck correction steps to preserve the energy in the face of stochastic noise [4], as well as auxiliary "thermostat" variables to control the effect of this noise on the momentum terms [5, 6]. As with the batch setting, these methods have exploited energy-preserving dynamics to sample more efficiently than random walk-based stochastic samplers [4, 7, 8]. A primary (hyper-)parameter of interest in these augmented energy function-based samplers is the "mass" matrix of the kinetic energy term; as noted by various researchers [1, 3, 6, 8, 9], this matrix plays an important role in the trajectories taken by the samplers in the parameter space of interest, thereby affecting the overall efficiency.
While prior efforts have set this to the identity matrix or some other pre-calculated value [4, 5, 7], recent work has shown that there are significant gains to be had in efficiency as well as convergent accuracy by reformulating the mass in terms of the target parameters to be sampled [3, 6, 8], thereby making the sampler sensitive to the underlying geometry. This is done by imposing a positive definite constraint on the adaptive mass, and using it as the metric of the Riemannian manifold of probability distributions parametrized by the target parameters. This constraint also satisfies the condition that the momenta be sampled from a Gaussian with the mass as the covariance. Often called Riemannian preconditioning, this idea has been applied in both batch [3] as well as stochastic settings [6, 8] to derive HMC-based samplers that adaptively learn the critically important mass matrix from the data. Although robust, these reformulations often lead to significant complexities in the resultant dynamics; one can end up solving an implicit system of equations in each half-step of the leapfrog dynamics [3, 6], along with inverting large O(D²) matrices. This is sometimes sidestepped by performing fixed point updates at the cost of additional error, or restricting oneself to simpler formulations that honor the symmetric positive definite constraint, such as a diagonal matrix [8]. While this latter choice ameliorates a lot of the added complexity, it is clearly suboptimal in the context of adapting to the underlying geometry of the parameter space. Thus we would ideally need a mechanism to robustly learn this critical mass hyperparameter from the data without significantly adding to the computational burden.

We address this issue in this work with the Monte Carlo EM (MCEM) [10, 11, 12, 13] framework. An alternative to the venerable EM technique, MCEM is used to locally optimize maximum likelihood problems where the posterior probabilities required in the E step of EM cannot be computed in closed form. In this work, we perform existing dynamics derived from energy functions in the Monte Carlo E step while holding the mass fixed, and use the stored samples of the momentum term to learn the mass in the M step. We address the important issue of selecting appropriate E-step sampling iterations, using error estimates to gradually increase the sample sizes as the Markov chain progresses towards convergence. Combined with an online method to update the mass using sample covariance estimates in the M step, this gives a clean and scalable adaptive sampling algorithm that performs favorably compared to the Riemannian samplers. In both our synthetic experiments and a high dimensional topic modeling problem with a complex Bayesian nonparametric construction [14], our samplers match or beat the Riemannian variants in sampling efficiency and accuracy, while being close to an order of magnitude faster.

2 Preliminaries
2.1 MCMC with Energy-Preserving Dynamics
In Hamiltonian Monte Carlo, the energy function is written as
H(θ, p) = −L(θ) + ½ pᵀM⁻¹p.   (1)
Here X is the observed data, and θ denotes the model parameters. L(θ) = log p(X|θ) + log p(θ) denotes the log likelihood of the data given the parameters along with the Bayesian prior, and p denotes the auxiliary "momentum" mentioned above.
Note that the second term in the energy function, the kinetic energy, is simply the kernel of a Gaussian with the mass matrix M acting as covariance. Hamilton's equations of motion are then applied to this energy function to derive the following differential equations, with the dot accent denoting a time derivative: θ̇ = M⁻¹p, ṗ = ∇L(θ). These are discretized using the generalized leapfrog algorithm [1, 15] to create a sampler that is both symplectic and time-reversible, up to a discretization error that is quadratic in the stepsize. Machine learning applications typically see the use of very large datasets for which computing the gradients of the likelihoods in every leapfrog step followed by a Metropolis-Hastings correction ratio is prohibitively expensive. To address this, one uses random "minibatches" of the dataset in each iteration [16], allowing some stochastic noise for improved scalability, and removes the Metropolis-Hastings (M-H) correction steps [4, 7]. To preserve the system energy in this context one has to additionally apply Fokker-Planck corrections to the dynamics [17]. The stochastic sampler in [4] uses these techniques to preserve the canonical Gibbs energy above (1). Researchers have also used the notion of "thermostats" from the molecular dynamics literature [9, 18, 19, 20] to further control the behavior of the momentum terms in the face of stochastic noise; the resulting algorithm [5] preserves an energy of its own [21] as well.
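As a concrete illustration of these dynamics, the following is a minimal sketch of one leapfrog-integrated HMC transition for the energy (1) with a fixed mass matrix M. This is our illustration, not the authors' implementation (which additionally manages the MCEM bookkeeping); `grad_L` and `log_L` are assumed to return the gradient and value of the log-density L(θ).

```python
import numpy as np

def hmc_step(theta, grad_L, log_L, M, eps, n_leapfrog, rng):
    """One HMC transition for H(theta, p) = -L(theta) + 0.5 * p^T M^{-1} p.

    theta: current state (np.ndarray); M: fixed mass matrix, i.e. the
    covariance of the momentum Gaussian; eps: leapfrog stepsize.
    """
    M_inv = np.linalg.inv(M)
    p = rng.multivariate_normal(np.zeros(len(theta)), M)  # p ~ N(0, M)
    theta_new, p_new = theta.copy(), p.copy()
    for _ in range(n_leapfrog):
        p_new += 0.5 * eps * grad_L(theta_new)   # half step: p - (eps/2) grad_theta H
        theta_new += eps * M_inv @ p_new          # full step: theta + eps * grad_p H
        p_new += 0.5 * eps * grad_L(theta_new)   # closing half step for the momentum
    # Metropolis-Hastings correction on the energy difference
    H_old = -log_L(theta) + 0.5 * p @ M_inv @ p
    H_new = -log_L(theta_new) + 0.5 * p_new @ M_inv @ p_new
    if rng.random() < np.exp(min(0.0, H_old - H_new)):
        return theta_new, p_new   # accepted; p_new can feed the M step below
    return theta, p
```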
While applying this framework, one typically has to carefully tune the number of samples gathered in the E step, since the potential distance from the stationary distribution in the early phases would necessitate drawing relatively fewer samples, and progressively more as the sampler nears convergence. In this work we leverage this MCEM framework to learn M in (1) and similar energies using samples of p; the discretized dynamics constitute the E step of the MCEM framework, with suitable updates to M performed in the corresponding M step. We also use a novel mechanism to dynamically adjust the sample count by using sampling errors estimated from the gathered samples, as described next. 3 3.1 Mass-Adaptive Sampling with Monte Carlo EM The Basic Framework Riemannian samplers start off by reformulating the energy function, making the mass a function of ? and adding suitable terms to ensure constancy of the marginal distributions. Our approach is fundamentally different: we cast the task of learning the mass as a maximum likelihood problem over the space of symmetric positive definite matrices. For instance, we can construct the following problem for standard HMC: 1 1 max L(?) ? pT M ?1 p ? log |M |. (3) M 0 2 2 Recall that the joint likelihood is p(?, p) ? exp(?H(?, p)), H(?, ?) being the energy from (1). Then, we use correct samplers that preserve the desired densities in the E step of a Monte Carlo EM (MCEM) framework, and use the obtained samples of p in the corresponding M step to perform suitable updates for the mass M . Specifically, to wrap the standard HMC sampler in our framework, we perform the generalized leapfrog steps [1, 15] to obtain proposal updates for ?, p followed by Metropolis-Hastings corrections in the E step, and use the obtained p values in the M step. The resultant adaptive sampling method is shown in Alg. 1. Note that this framework can also be applied to stochastic samplers that preserve the energy, upto standard discretization errors. We can wrap the SGHMC sampler [4] in our framework as well, since it uses Fokker-Planck corrections to approximately preserve the energy (1) in the presence of stochastic noise. We call the resulting method SGHMC-EM, and specify it in Alg. 3 in the supplementary. As another example, the SGNHT sampler [5] is known to preserve a modified Gibbs energy [21]; therefore we can propose the following max-likelihood problem for learning the mass: 1 1 ? 2 /2, max L(?) ? pT M ?1 p ? log |M | + ?(? ? ?) (4) M 0 2 2 3 where ? is the thermostat variable, and ?, ?? are constants chosen to preserve correct marginals. The SGNHT dynamics can used in the E step to maintain the above energy, and we can use the collected p samples in the M step as before. We call the resultant method SGNHT-EM, as shown in Alg. 2. Note that, unlike standard HMC above, we do not perform Metropolis-Hastings corrections steps on the gathered samples for these cases. As shown in the algorithms, we collect one set of momenta samples per epoch, after the leapfrog iterations. We use S_count to denote the number of such samples collected before running an M-step update. The advantage of this MCEM approach over the parameter-dependent Riemannian variants is twofold: 1. The existing Riemannian adaptive algorithms in the literature [3, 6, 8] all start by modifying the energy function, whereas our framework does not have any such requirement. 
As long as one uses a sampling mechanism that preserves some energy with correct marginals for θ, in a stochastic sense or otherwise, it can be used in the E step of our framework.
2. The primary disadvantage of the Riemannian algorithms is the added complexity in the dynamics derived from the modified energy functions. One typically ends up using generalized leapfrog dynamics [3, 6], which can lead to implicit systems of equations; to solve these one either has to use standard solvers that have complexity at least cubic in the dimensionality [23, 24], with scalability issues in high dimensional datasets, or use fixed point updates with worsened error guarantees. An alternative approach is to use diagonal covariance matrices, as mentioned earlier, which ignores the coordinate correlations. Our MCEM approach sidesteps all these issues by keeping the existing dynamics of the desired E step sampler unchanged. As shown in the experiments, we can match or beat the Riemannian samplers in accuracy and efficiency by using suitable sample sizes and M step updates, with significantly improved sampling complexities and runtimes.

3.2 Dynamic Updates for the E-step Sample Size
We now turn our attention to the task of learning the sample size in the E step from the data. The nontriviality of this issue is due to the following reasons: first, we cannot let the sampling dynamics run to convergence in each E step without making the whole process prohibitively slow; second, we have to account for the correlation among successive samples, especially early on in the process when the Markov chain is far from convergence, possibly with "thinning" techniques; and third, we may want to increase the sample count as the chain matures and gets closer to the stationary distribution, and use relatively fewer samples early on.

Algorithm 1 HMC-EM
Input: θ^(0), ε, LP_S, S_count
  • Initialize M;
  repeat
    • Sample p^(t) ∼ N(0, M);
    for i = 1 to LP_S do
      • p^(i) ← p^(i−1+ε), θ^(i) ← θ^(i−1+ε);
      • p^(i+ε/2) ← p^(i) − (ε/2) ∇_θ H(θ^(i), p^(i));
      • θ^(i+ε) ← θ^(i) + ε ∇_p H(θ^(i), p^(i+ε/2));
      • p^(i+ε) ← p^(i+ε/2) − (ε/2) ∇_θ H(θ^(i+ε), p^(i+ε/2));
    end for
    • Set (θ^(t+1), p^(t+1)) from (θ^(LP_S+ε), p^(LP_S+ε)) using Metropolis-Hastings;
    • Store MC-EM sample p^(t+1);
    if (t+1) mod S_count = 0 then
      • Update M using MC-EM samples;
    end if
    • Update S_count as described in the text;
  until forever

To this end, we leverage techniques derived from the MCEM literature in statistics [11, 13, 25] to first evaluate a suitable "test" function of the target parameters at certain subsampled steps, using the gathered samples and current M step estimates. We then use confidence intervals created around these evaluations to gauge the relative effect of successive MCEM estimates over the Monte Carlo error. If the updated values of these functions using newer M-step estimates lie in these intervals, we increase the number of samples collected in the next MCEM loop. Specifically, similar to [13], we start off with the following test function for HMC-EM (Alg. 1): q(θ) = ⟨M⁻¹p, ∇L(θ)⟩. We then subsample some timesteps as mentioned below, evaluate q at those steps, and create confidence intervals using sample means and variances:
m_S = (1/S) Σ_{s=1}^S q_s,   v_S = (1/S) Σ_{s=1}^S q_s² − m_S²,   C_S := m_S ± z_{1−α/2} v_S,
where S denotes the subsample count, z_{1−α/2} is the (1−α) critical value of a standard Gaussian, and C_S the confidence interval mentioned earlier.
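To make the sample-size rule concrete, here is a small sketch of the confidence-interval check, assuming the test-function values q_s have already been evaluated at the subsampled steps. The quantities and the S ← S + S/S_I growth rule follow the description here and just below; the scipy call for the Gaussian critical value is our own scaffolding.

```python
import numpy as np
from scipy.stats import norm

def update_sample_count(q_vals, q_new, S_count, S_I, alpha=0.05):
    """Grow the E-step sample count when the test function evaluated with
    the updated M-step estimate falls inside the Monte Carlo interval C_S.

    q_vals: values q_s at the S subsampled steps (old M-step estimate).
    q_new:  test-function value recomputed with the new M-step estimate.
    """
    m_S = np.mean(q_vals)                       # sample mean
    v_S = np.mean(np.square(q_vals)) - m_S**2   # sample variance
    z = norm.ppf(1.0 - alpha / 2.0)             # (1 - alpha) critical value
    lo, hi = m_S - z * v_S, m_S + z * v_S       # C_S := m_S +/- z_{1-alpha/2} v_S
    if lo <= q_new <= hi:
        # MCEM progress is swamped by Monte Carlo noise: gather more samples
        S_count = S_count + S_count // S_I
    return S_count
```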
For SGNHT-EM (Alg. 2), we use the following test function, derived from the SGNHT dynamics: q(θ) = ⟨M⁻¹p, ∇L(θ)⟩ + ξ pᵀM⁻¹p. One can adopt the following method, described in [25], for choosing the subsampling offsets {t_1, ..., t_S}: set t_s = Σ_{i=1}^s x_i, where x_i − 1 ∼ Poisson(λiᵈ), with suitably chosen λ ≥ 1 and d > 0. We found both this and a fixed set of S offsets to work well in our experiments. With the subsamples collected using this mechanism, we calculate the confidence intervals as described earlier. The assumption is that this interval provides an estimate of the spread of q due to the Monte Carlo error. We then perform the M-step, and evaluate q using the updated M-step estimates. If this value lies in the previously calculated confidence bound, we increase S as S = S + S/S_I in the following iteration to overcome the Monte Carlo noise. See [11, 13] for details on these procedures. Values for the constants λ, α, d, S_I, as well as initial estimates for S, are given in the supplementary. Running values for S are denoted S_count hereafter.

Algorithm 2 SGNHT-EM
Input: θ^(0), ε, A, LP_S, S_count
  • Initialize θ^(0), p^(0) and M;
  repeat
    for i = 1 to LP_S do
      • p^(i+1) ← p^(i) − ξ^(i) M⁻¹ p^(i) ε + ∇̃L(θ^(i)) ε + √(2Aε) N(0, I);
      • θ^(i+1) ← θ^(i) + M⁻¹ p^(i+1) ε;
      • ξ^(i+1) ← ξ^(i) + ((1/D) p^(i+1)ᵀ M⁻¹ p^(i+1) − 1) ε;
    end for
    • Set (θ^(t+1), p^(t+1), ξ^(t+1)) = (θ^(LP_S+1), p^(LP_S+1), ξ^(LP_S+1));
    • Store MC-EM sample p^(t+1);
    if (t+1) mod S_count = 0 then
      • Update M using MC-EM samples;
    end if
    • Update S_count as described in the text;
  until forever

3.3 An Online Update for the M-Step
Next we turn our attention to the task of updating the mass matrices using the collected momenta samples. As shown in the energy functions above, the momenta are sampled from zero-mean normal distributions, enabling us to use standard covariance estimation techniques from the literature. However, since we are using discretized MCMC to obtain these samples, we have to address the variance arising from the Monte Carlo error, especially during the burn-in phase. To that end, we found a running average of the updates to work well in our experiments; in particular, we update the inverse mass matrix, denoted M_I, at the k-th M-step as
M_I^(k) = (1 − γ^(k)) M_I^(k−1) + γ^(k) M_I^(k,est),   (5)
where M_I^(k,est) is a suitable estimate computed from the gathered samples in the k-th M-step, and {γ^(k)} is a step sequence satisfying some standard assumptions, as described below. Note that the M_I's correspond to the precision matrix of the Gaussian distribution of the momenta; updating this quantity during the M-step also removes the need to invert the mass matrices during the leapfrog iterations. Curiously, we found the inverse of the empirical covariance matrix to work quite well as M_I^(k,est) in our experiments.
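A minimal sketch of this M-step, assuming the momenta samples are stacked in an (S_count × D) array: the choice of the empirical precision as M_I^(k,est) and the running average (5) follow the text, while the ridge term added before inversion is our own numerical safeguard, not part of the paper's description.

```python
import numpy as np

def m_step_update(M_inv, p_samples, k, gamma0=1.0, ridge=1e-6):
    """Running-average update (5) for the inverse mass (precision) matrix.

    M_inv:     current precision estimate M_I^(k-1), shape (D, D).
    p_samples: momenta gathered in the k-th E step, shape (S_count, D).
    """
    D = p_samples.shape[1]
    # Empirical covariance of the zero-mean momenta, with a small ridge
    # for numerical stability (our addition).
    cov = p_samples.T @ p_samples / p_samples.shape[0] + ridge * np.eye(D)
    M_inv_est = np.linalg.inv(cov)   # M_I^(k,est): inverse empirical covariance
    gamma_k = gamma0 / (k + 1)       # step sequence: sum diverges, sum of squares converges
    return (1.0 - gamma_k) * M_inv + gamma_k * M_inv_est
```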
These updates also induce a fresh perspective on the convergence of the overall MCEM procedure. Existing convergence analyses in the statistics literature fall into three broad categories: a) the almost sure convergence presented in [26] as t → ∞ with increasing sample sizes, b) the asymptotic angle presented in [27], where the sequence of MCEM updates is analyzed as an approximation to the standard EM sequence as the sample size, referred to as S_count above, tends to infinity, and c) the asymptotic consistency results obtained from multiple Gibbs chains in [28], by letting the chain counts and iterations tend to ∞. Our analysis differs from all of these, by focusing on the maximum likelihood problems noted above as convex optimization problems, and using SGD convergence techniques [29] for the sequence of iterates M_I^(k).

Proposition 1. Assume the M_I^(k,est) provide an unbiased estimate of ∇J and have bounded eigenvalues. Let inf_{‖M_I − M_I*‖₂ > ε} ‖∇J(M_I)‖ > 0 for all ε > 0. Further, let the sequence {γ^(k)} satisfy Σ_k γ^(k) = ∞ and Σ_k (γ^(k))² < ∞. Then the sequence {M_I^(k)} converges to the MLE of the precision almost surely.

Recall that the (negative) precision is a natural parameter of the normal distribution written in exponential family notation, and that the log-likelihood is a concave function of the natural parameters for this family; this makes maximum likelihood a convex optimization problem over the precision, even in the presence of linear constraints [30, 31]. Therefore, this implies that the problems (3), (4) have a unique maximum, denoted by M_I* above. Also note that the update (5) corresponds to a first order update on the iterates with an L2-regularized objective, with unit regularization parameter; this is denoted by J(M_I) in the proposition. That is, J is the energy preserved by our sampler(s), as a function of the mass (precision), augmented with an L2 regularization term. The resultant strongly convex optimization problem can be analyzed using SGD techniques under the assumptions noted above; we provide a proof in the supplementary for completeness. We should note here that the "stochasticity" in the proof does not refer to the stochastic gradients of L(θ) used in the leapfrog dynamics of Algorithms 2 through 5; instead we think of the collected momenta samples as a stochastic minibatch used to compute the gradient of the regularized energy, as a function of the covariance (mass), allowing us to deal with the Monte Carlo error indirectly. Also note that our assumption on the unbiasedness of the M_I^(k,est) estimates is similar to [26], and distinct from assuming that the MCEM samples of θ are unbiased; indeed, it would be difficult to make this latter claim, since stochastic samplers in general are known to have a convergent bias.

3.4 Nosé-Poincaré Variants
We next develop a stochastic version of the dynamics derived from the Nosé-Poincaré Hamiltonian, followed by an MCEM variant. This allows for a direct comparison of the Riemann manifold formulation and our MCEM framework for learning the kinetic masses, in a stochastic setting with thermostat controls on the momentum terms and desired properties like reversibility and symplecticness provided by generalized leapfrog discretizations. The Nosé-Poincaré energy function can be written as [6, 9]
H_NP = s( −L(θ) + ½ (p/s)ᵀ M⁻¹ (p/s) + q²/(2Q) + gkT log s − H₀ ),   (6)
where L(θ) is the joint log-likelihood, s is the thermostat control, p and q are the momentum terms corresponding to θ and s respectively, and M and Q are the respective mass terms. See [6, 9] for descriptions of the other constants. Our goal is to learn both M and Q using the MCEM framework, as opposed to [6], where both were formulated in terms of θ.
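For reference, evaluating the energy (6) is straightforward; the sketch below is our illustration, assuming `log_L` returns the (possibly minibatch-estimated) joint log-likelihood and that `gkT` and `H0` are the constants referenced above.

```python
import numpy as np

def nose_poincare_energy(theta, p, s, q, M_inv, Q, gkT, H0, log_L):
    """Nose-Poincare energy (6): H_NP = s * (H_Nose - H0)."""
    p_scaled = p / s
    kinetic = 0.5 * p_scaled @ M_inv @ p_scaled   # 0.5 (p/s)^T M^{-1} (p/s)
    h_nose = -log_L(theta) + kinetic + q**2 / (2.0 * Q) + gkT * np.log(s)
    return s * (h_nose - H0)
```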
To that end, we propose the following system of equations for the stochastic scenario:
p^(t+ε/2) = p^t + (ε/2) [ s^t ∇̃L(θ^t) − (B̂(θ^t)/2) M⁻¹ p^(t+ε/2) ],
(ε/(4Q)) (q^(t+ε/2))² + q^(t+ε/2) − q^t − (ε/2) [ L̃(θ^t) + ½ (p^(t+ε/2)/s^t)ᵀ M⁻¹ (p^(t+ε/2)/s^t) − gkT(1 + log s^t) + H₀ ] = 0,
θ^(t+ε) = θ^t + (ε/2) M⁻¹ p^(t+ε/2) (1/s^t + 1/s^(t+ε)),
s^(t+ε) = s^t + (ε/2) (q^(t+ε/2)/Q) (s^t + s^(t+ε)),
p^(t+ε) = p^(t+ε/2) + (ε/2) [ s^(t+ε) ∇̃L(θ^(t+ε)) − (B̂(θ^(t+ε))/2) M⁻¹ p^(t+ε/2) ],
q^(t+ε) = q^(t+ε/2) + (ε/2) [ H₀ + L̃(θ^(t+ε)) − gkT(1 + log s^(t+ε)) + ½ (p^(t+ε/2)/s^(t+ε))ᵀ M⁻¹ (p^(t+ε/2)/s^(t+ε)) − (q^(t+ε/2))²/(2Q) − Â(θ) s^(t+ε) q^(t+ε/2)/(2Q) ],   (7)
where t + ε/2 denotes the half-step dynamics, ∇̃ and L̃ signify noisy stochastic estimates, and Â(θ) and B̂(θ) denote the stochastic noise terms, necessary for the Fokker-Planck corrections [6]. Note that we only have to solve a quadratic equation for q^(t+ε/2), with the other updates also being closed-form, as opposed to the implicit system of equations in [6].

Proposition 2. The dynamics (7) preserve the Nosé-Poincaré energy (6).

The proof is a straightforward application of the Fokker-Planck corrections for stochastic noise to the Hamiltonian dynamics derived from (6), and is provided in the supplementary. With these dynamics, we first develop the SG-NPHMC algorithm (Alg. 4 in the supplementary) as a counterpart to SGHMC and SGNHT, and wrap it in our MCEM framework to create SG-NPHMC-EM (Alg. 5 in the supplementary). As we shall demonstrate shortly, this EM variant performs comparably to SGR-NPHMC from [6], while being significantly faster.

4 Experiments
In this section we compare the performance of the MCEM-augmented variants of HMC, SGHMC as well as SGNHT with their standard counterparts, where the mass matrices are set to the identity matrix. We call these augmented versions HMC-EM, SGHMC-EM, and SGNHT-EM respectively. As baselines for the synthetic experiments, in addition to the standard samplers mentioned above, we also evaluate RHMC [3] and SGR-NPHMC [6], two recent algorithms based on dynamic Riemann manifold formulations for learning the mass matrices. In the topic modeling experiment, for scalability reasons we evaluate only the stochastic algorithms, including the recently proposed SGR-NPHMC, and omit HMC, HMC-EM and RHMC. Since we restrict the discussions in this paper to samplers with second-order dynamics, we do not compare our methods with SGLD [7] or SGRLD [8].

4.1 Parameter Estimation of a 1D Standard Normal Distribution
In this experiment we aim to learn the parameters of a unidimensional standard normal distribution in both batch and stochastic settings, using 5,000 data points generated from N(0, 1), analyzing the impact of our MC-EM framework along the way. We compare all the algorithms mentioned so far: HMC, HMC-EM, SGHMC, SGHMC-EM, SGNHT, SGNHT-EM, SG-NPHMC, SG-NPHMC-EM, along with RHMC and SGR-NPHMC. The generative model consists of normal-Wishart priors on the mean μ and precision τ, with posterior distribution p(μ, τ|X) ∝ N(X|μ, τ)W(τ|1, 1), where W denotes the Wishart distribution. We run all the algorithms for the same number of iterations, discarding the first 5,000 as "burn-in". Batch sizes were fixed to 100 for all the stochastic algorithms, along with 10 leapfrog iterations across the board. For SGR-NPHMC and RHMC, we used the observed Fisher information plus the negative Hessian of the prior as the tensor, with one fixed point iteration on the implicit system of equations arising from the dynamics of both. For HMC we used a fairly high learning rate of 1e−2. For SGHMC and SGNHT we used A = 10 and A = 1 respectively. For SGR-NPHMC we used A, B = 0.01.

We show the RMSE numbers collected from post-burn-in samples as well as per-iteration runtimes in Table 1. An "iteration" here refers to a complete E step, with the full quota of leapfrog jumps.

Table 1: RMSE of the sampled means, precisions and per-iteration runtimes (in milliseconds) from runs on synthetic Gaussian data.
Method         RMSE (μ)   RMSE (τ)   Time
HMC            0.0196     0.0197     0.417 ms
HMC-EM         0.0115     0.0104     0.423 ms
RHMC           0.0111     0.0089     5.748 ms
SGHMC          0.1590     0.1646     0.133 ms
SGHMC-EM       0.0713     0.2243     0.132 ms
SG-NPHMC       0.0326     0.0433     0.514 ms
SG-NPHMC-EM    0.0274     0.0354     0.498 ms
SGR-NPHMC      0.0240     0.0308     3.145 ms
SGNHT          0.0344     0.0335     0.148 ms
SGNHT-EM       0.0317     0.0289     0.148 ms

The improvements afforded by our MCEM framework are immediately noticeable; HMC-EM matches the errors obtained from RHMC, in effect matching the sample distribution, while being much faster (an order of magnitude) per iteration. The stochastic MCEM algorithms show markedly better performance as well; SGNHT-EM in particular beats SGR-NPHMC in RMSE-τ while being significantly faster due to simpler updates for the mass matrices. Accuracy improvements are particularly noticeable in the high learning rate regimes for HMC, SGHMC and SG-NPHMC.

4.2 Parameter Estimation in 2D Bayesian Logistic Regression
Next we present some results obtained from a Bayesian logistic regression experiment, using both synthetic and real datasets. For the synthetic case, we used the same methodology as [6]; we generated 2,000 observations from a mixture of two normal distributions with means at [1, −1] and [−1, 1], with mixing weights set to (0.5, 0.5) and the covariance set to I. We then classify these points using a linear classifier with weights {W0, W1} = [1, −1], and attempt to learn these weights using our samplers. We put N(0, 10I) priors on the weights, and used the metric tensor described in §7 of [3] for the Riemannian samplers. In the (generalized) leapfrog steps of the Riemannian samplers, we opted to use 2 or 3 fixed point iterations to approximate the solutions to the implicit equations. Along with this synthetic setup, we also fit a Bayesian LR model to the Australian Credit and Heart regression datasets from the UCI database, for additional runtime comparisons. The Australian Credit dataset contains 690 datapoints of dimensionality 14, and the Heart dataset has 270 13-dimensional datapoints.
The improvements SG-NPHMC 0.0326 0.0433 0.514 MS afforded by our MCEM framework SG-NPHMC-EM 0.0274 0.0354 0.498 MS are immediately noticeable; HMCSGR-NPHMC 0.0240 0.0308 3.145 MS EM matches the errors obtained from SGNHT 0.0344 0.0335 0.148 MS RHMC, in effect matching the samSGNHT-EM 0.0317 0.0289 0.148 MS ple distribution, while being much faster (an order of magnitude) per iteration. The stochastic MCEM algo- Table 1: RMSE of the sampled means, precisions and perrithms show markedly better perfor- iteration runtimes (in milliseconds) from runs on synthetic mance as well; SGNHT-EM in partic- Gaussian data. ular beats SGR-NPHMC in RMSE-? while being significantly faster due to simpler updates for the mass matrices. Accuracy improvements are particularly noticeable for the high learning rate regimes for HMC, SGHMC and SG-NPHMC. 4.2 Parameter Estimation in 2D Bayesian Logistic Regression Next we present some results obtained from a Bayesian logistic regression experiment, using both synthetic and real datasets. For the synthetic case, we used the same methodology as [6]; we generated 2, 000 observations from a mixture of two normal distributions with means at [1, ?1] and [?1, 1], with mixing weights set to (0.5, 0.5) and the covariance set to I. We then classify these points using a linear classifier with weights {W0 , W1 } = [1, ?1], and attempt to learn these weights using our samplers. We put N (0, 10I) priors on the weights, and used the metric tensor described in ?7 of [3] for the Riemannian samplers. In the (generalized) leapfrog steps of the Riemannian samplers, we opted to use 2 or 3 fixed point iterations to approximate the solutions to the implicit equations. Along with this synthetic setup, we also fit a Bayesian LR model to the Australian Credit and Heart regression datasets from the UCI database, for additional runtime comparisons. The Australian credit dataset contains 690 datapoints of dimensionality 14, and the Heart dataset has 270 13-dimensional datapoints. 7 For the synthetic case, we discard the first 10, 000 samples as burn-in, and calculate RMSE values from the remaining samples. Learning rates were chosen from {1e ? 2, 1e ? 4, 1e ? 6}, and values of the stochastic noise terms were selected from {0.001, 0.01, 0.1, 1, 10}. Leapfrog steps were chosen from {10, 20, 30}. For the stochastic algorithms we used a batchsize of 100. M ETHOD HMC HMC-EM RHMC SGHMC SGHMC-EM SG-NPHMC SG-NPHMC-EM SGR-NPHMC SGNHT SGNHT-EM RMSE (W0 ) 0.0456 0.0145 0.0091 0.2812 0.2804 0.4945 0.0990 0.1901 0.2035 0.1983 RMSE (W1 ) 0.1290 0.0851 0.0574 0.2717 0.2583 0.4263 0.4229 0.1925 0.1921 0.1729 The RMSE numbers for the synthetic dataset are shown in Table 2, and the per-iteration runtimes for all the datasets are shown in Table 3. Table 2: RMSE of the two regression parameters, We used initialized S_count to 300 for HMC- for the synthetic Bayesian logistic regression exEM, SGHMC-EM, and SGNHT-EM, and 200 periment. See text for details. for SG-NPHMC-EM. The MCEM framework noticeably improves the accuracy in almost all cases, with no computational overhead. Note the improvement for SG-NPHMC in terms of RMSE for W0 . For the runtime calculations, we set all samplers to 10 leapfrog steps, and fixed S_count to the values mentioned above. 
The comparisons with the Riemannian algorithms tell a clear story: though we do get somewhat better accuracy with these samplers, they are orders of magnitude slower. In our synthetic case, for instance, each iteration of RHMC (consisting of all the leapfrog steps and the M-H ratio calculation) takes more than a second, using 10 leapfrog steps and 2 fixed point iterations for the implicit leapfrog equations, whereas both HMC and HMC-EM are simpler and much faster. Also note that the M-step calculations for our MCEM framework involve a single-step closed form update for the precision matrix, using the collected samples of p once every S_count sampling steps; thus we can amortize the cost of the M-step over the previous S_count iterations, leading to negligible changes to the per-sample runtimes.

Table 3: Per-iteration runtimes (in milliseconds) for Bayesian logistic regression experiments, on both synthetic and real datasets.
Method         Time (Synth)   Time (Aus)   Time (Heart)
HMC            1.435 ms       0.987 ms     0.791 ms
HMC-EM         1.428 ms       0.970 ms     0.799 ms
RHMC           1550 ms        367 ms       209 ms
SGHMC          0.200 ms       0.136 ms     0.112 ms
SGHMC-EM       0.203 ms       0.141 ms     0.131 ms
SG-NPHMC       0.731 ms       0.512 ms     0.403 ms
SG-NPHMC-EM    0.803 ms       0.525 ms     0.426 ms
SGR-NPHMC      6.720 ms       4.568 ms     3.676 ms
SGNHT          0.302 ms       0.270 ms     0.166 ms
SGNHT-EM       0.306 ms       0.251 ms     0.175 ms

4.3 Topic Modeling using a Nonparametric Gamma Process Construction
Next we turn our attention to a high-dimensional topic modeling experiment using a nonparametric Gamma process construction. We elect to follow the experimental setup described in [6]. Specifically, we use the Poisson factor analysis framework of [32]. Denoting the vocabulary as V, and the documents in the corpus as D, we model the observed counts of the vocabulary terms as D_{V×N} = Poi(ΦΘ), where Θ_{K×N} models the counts of K latent topics in the documents, and Φ_{V×K} denotes the factor load matrix, which encodes the relative importance of the vocabulary terms in the latent topics. Following standard Bayesian convention, we model the columns of Φ as Φ_{·,k} ∼ Dirichlet(α), using normalized Gamma variables: Φ_{v,k} = ρ_{v,k} / Σ_v ρ_{v,k}, with ρ_{v,k} ∼ Γ(α, 1). Then we have Θ_{n,k} ∼ Γ(r_k, p_j/(1−p_j)); we put Γ(a₀, b₀) priors on the document-specific mixing probabilities p_j. We then set the r_k's to the atom weights generated by the constructive Gamma process definition of [14]; we refer the reader to that paper for the details of the formulation. It leads to a rich nonparametric construction of this Poisson factor analysis model for which closed-form Gibbs updates are infeasible, thereby providing a testing application area for the stochastic MCMC algorithms. We omit the Metropolis-Hastings correction-based HMC and RHMC samplers in this evaluation due to poor scalability.
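The normalized-Gamma construction of the topic columns is easy to verify numerically; the following small sketch is our own illustration (not the authors' code) of drawing Φ_{·,k} ∼ Dirichlet(α) via ρ_{v,k} ∼ Γ(α, 1):

```python
import numpy as np

def sample_topic_column(alpha, V, rng):
    """Draw one factor-load column Phi[:, k] ~ Dirichlet(alpha) by
    normalizing independent Gamma(alpha, 1) variables rho_v."""
    rho = rng.gamma(shape=alpha, scale=1.0, size=V)   # rho_v ~ Gamma(alpha, 1)
    return rho / rho.sum()                            # Phi_{v,k} = rho_v / sum_v rho_v

rng = np.random.default_rng(0)
phi_k = sample_topic_column(alpha=0.1, V=2000, rng=rng)  # 20-Newsgroups-sized vocabulary
assert np.isclose(phi_k.sum(), 1.0)
```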
As noted in [14], the atom weights have three sets of components: the Ek s, Tk s and the hyperparameters ?, ? and c. As in [6], we ran three parallel chains for these parameters, collecting samples of the momenta from the Tk and hyperparameter chains for the MCEM mass updates. We kept the mass of the Ek chain fixed to IK , and chose K = 100 as number of latent topics. We initialized S_count, the E-step sample size in our algorithms, to 50 for NPHMC-EM and 100 for the rest. Increasing S_count over time yielded fairly minor improvements, hence we kept it fixed to the values above for simplicity. Additional details on batch sizes, learning rates, stochastic noise estimates, leapfrog iterations etc are provided in the supplementary. For the 20-Newsgroups dataset we ran all algorithms for 1, 500 burn-in iterations, and collected samples for the next 1, 500 steps thereafter, with a stride of 100, for perplexity calculations. For the Reuters dataset we used 2, 500 burn-in iterations. Note that for all these algorithms, an ?iteration? corresponds to a full E-step with a stochastic minibatch. The numbers obtained at the end M ETHOD 20-N EWS R EUTERS T IME (20-N EWS ) of the runs are shown in Table 2, SGHMC 759 996 0.047 S along with per-iteration runtimes. SGHMC-EM 738 972 0.047 S The post-burnin perplexity-vsSGNHT 757 979 0.045 S SGNHT-EM 719 968 0.045 S iteration plots from the 20SGR-NPHMC 723 952 0.410 S Newsgroups dataset are shown in SG-NPHMC 714 958 0.049 S Figure 1. We can see significant SG-NPHMC-EM 712 947 0.049 S improvements from the MCEM framework for all samplers, with that of SGNHT being highly pro- Table 4: Test perplexities and per-iteration runtimes on 20nounced (719 vs 757); indeed, Newsgroups and Reuters datasets. the SG-NPHMC samplers have lower perplexities (712) than those obtained by SGR-NPHMC (723), while being close to an order of magnitude faster per iteration for 20-Newsgroups even when the latter used diagonalized metric tensors, ostensibly by avoiding implicit systems of equations in the leapfrog steps to learn the kinetic masses. The framework yields nontrivial improvements for the Reuters dataset as well. 5 Conclusion We propose a new theoretically grounded approach to learning the mass matrices in Hamiltonianbased samplers, including both standard HMC and stochastic variants, using a Monte Carlo EM framework. In addition to a newly proposed stochastic sampler, we augment certain existing samplers with this technique to devise a set of new algorithms that learn the kinetic masses dynamically from the data in a flexible and scalable fashion. Experiments conducted on synthetic and real datasets demonstrate the efficacy and efficiency of our framework, when compared to existing Riemannian manifold-based samplers. 9 Acknowledgments We thank the anonymous reviewers for their insightful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1418265. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. References [1] R. M. Neal. MCMC using Hamiltonian dynamics. In S. Brooks, A. Gelman, G. L. Jones, and X.-L. Meng, editors, Handbook of Markov Chain Monte Carlo, pages 113?162. Chapman & Hall / CRC Press, 2011. [2] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216?222, 1987. [3] M. Girolami and B. Calderhead. 
[3] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123-214, 2011.
[4] T. Chen, E. Fox, and C. Guestrin. Stochastic Gradient Hamiltonian Monte Carlo. In Proceedings of The 31st International Conference on Machine Learning (ICML), pages 1683-1691, 2014.
[5] N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven. Bayesian Sampling using Stochastic Gradient Thermostats. In Advances in Neural Information Processing Systems (NIPS) 27, pages 3203-3211, 2014.
[6] A. Roychowdhury, B. Kulis, and S. Parthasarathy. Robust Monte Carlo Sampling using Riemannian Nosé-Poincaré Hamiltonian Dynamics. In Proceedings of The 33rd International Conference on Machine Learning (ICML), pages 2673-2681, 2016.
[7] M. Welling and Y. W. Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In Proceedings of The 28th International Conference on Machine Learning (ICML), pages 681-688, 2011.
[8] S. Patterson and Y. W. Teh. Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex. In Advances in Neural Information Processing Systems (NIPS) 26, pages 3102-3110, 2013.
[9] S. D. Bond, B. J. Leimkuhler, and B. B. Laird. The Nosé-Poincaré Method for Constant Temperature Molecular Dynamics. J. Comput. Phys., 151:114-134, 1999.
[10] G. C. G. Wei and M. A. Tanner. A Monte Carlo Implementation of the EM Algorithm and the Poor Man's Data Augmentation Algorithms. Journal of the American Statistical Association, 85:699-704, 1990.
[11] J. G. Booth and J. P. Hobert. Maximizing Generalized Linear Mixed Model Likelihoods with an Automated Monte Carlo EM Algorithm. Journal of the Royal Statistical Society Series B, 61(1):265-285, 1999.
[12] C. E. McCulloch. Maximum Likelihood Algorithms for Generalized Linear Mixed Models. Journal of the American Statistical Association, 92(437):162-170, 1997.
[13] R. A. Levine and G. Casella. Implementations of the Monte Carlo EM Algorithm. Journal of Computational and Graphical Statistics, 10(3):422-439, 2001.
[14] A. Roychowdhury and B. Kulis. Gamma Processes, Stick-Breaking, and Variational Inference. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 800-808, 2015.
[15] B. Leimkuhler and S. Reich. Simulating Hamiltonian Dynamics. Cambridge University Press, 2004.
[16] H. Robbins and S. Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400-407, 1951.
[17] L. Yin and P. Ao. Existence and Construction of Dynamical Potential in Nonequilibrium Processes without Detailed Balance. Journal of Physics A: Mathematical and General, 39(27):8593, 2006.
[18] D. Frenkel and B. Smit. Understanding Molecular Simulations: From Algorithms to Applications, 2nd Edition. Academic Press, 2001.
[19] B. Leimkuhler and C. Matthews. Molecular Dynamics: With Deterministic and Stochastic Numerical Methods. Springer, 2015.
[20] W. G. Hoover. Canonical dynamics: Equilibrium phase-space distributions. Physical Review A (General Physics), 31(3):1695-1697, 1985.
[21] A. Jones and B. Leimkuhler. Adaptive stochastic methods for sampling driven molecular systems. Journal of Chemical Physics, 135(8):084125, 2011.
[22] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society Series B, 39(1):1-38, 1977.
[23] J. D. Dixon. Exact solution of linear equations using P-adic expansions. Numerische Mathematik, 40(1):137-141, 1982.
[24] W. Eberly, M. Giesbrecht, P. Giorgi, A. Storjohann, and G. Villard. Solving sparse rational linear systems. In Proceedings of the 2006 International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 63-70, 2006.
[25] C. P. Robert, T. Rydén, and D. M. Titterington. Convergence Controls for MCMC Algorithms, With Applications to Hidden Markov Chains. Journal of Statistical Computation and Simulation, 64:327-355, 1999.
[26] G. Fort and E. Moulines. Convergence of the Monte Carlo Expectation Maximization for Curved Exponential Families. The Annals of Statistics, 31(4):1220-1259, 2003.
[27] K. S. Chan and J. Ledolter. Monte Carlo EM Estimation for Time Series Models Involving Counts. Journal of the American Statistical Association, 90(429):242-252, 1995.
[28] R. P. Sherman, Y.-Y. K. Ho, and S. R. Dalal. Conditions for convergence of Monte Carlo EM sequences with an application to product diffusion modeling. The Econometrics Journal, 2(2):248-267, 1999.
[29] L. Bottou. On-line Learning and Stochastic Approximations. In On-line Learning in Neural Networks, pages 9-42. Cambridge University Press, 1998.
[30] C. Uhler. Geometry of maximum likelihood estimation in Gaussian graphical models. Annals of Statistics, 40:238-261, 2012.
[31] A. P. Dempster. Covariance selection. Biometrics, 28:157-175, 1972.
[32] M. Zhou and L. Carin. Negative Binomial Process Count and Mixture Modeling. IEEE Trans. Pattern Anal. Mach. Intell., 37(2):307-320, 2015.
[33] N. Srivastava, R. Salakhutdinov, and G. E. Hinton. Modeling documents with deep Boltzmann machines. In Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI), pages 616-624, 2013.
ADMM without a Fixed Penalty Parameter: Faster Convergence with New Adaptive Penalization
Yi Xu†, Mingrui Liu†, Qihang Lin‡, Tianbao Yang†
† Department of Computer Science, The University of Iowa, Iowa City, IA 52242, USA
‡ Department of Management Sciences, The University of Iowa, Iowa City, IA 52242, USA
{yi-xu, mingrui-liu, qihang-lin, tianbao-yang}@uiowa.edu

Abstract
Alternating direction method of multipliers (ADMM) has received tremendous interest for solving numerous problems in machine learning, statistics and signal processing. However, it is known that the performance of ADMM and many of its variants is very sensitive to the penalty parameter of a quadratic penalty applied to the equality constraints. Although several approaches have been proposed for dynamically changing this parameter during the course of optimization, they do not yield theoretical improvement in the convergence rate and are not directly applicable to stochastic ADMM. In this paper, we develop a new ADMM and its linearized variant with a new adaptive scheme to update the penalty parameter. Our methods can be applied under both deterministic and stochastic optimization settings for structured non-smooth objective functions. The novelty of the proposed scheme lies in its adaptivity to a local sharpness property of the objective function, which marks the key difference from previous adaptive schemes that adjust the penalty parameter per iteration based on certain conditions on the iterates. On the theoretical side, given the local sharpness characterized by an exponent θ ∈ (0, 1], we show that the proposed ADMM enjoys an improved iteration complexity of Õ(1/ε^(1−θ)) in the deterministic setting and an iteration complexity of Õ(1/ε^(2(1−θ))) in the stochastic setting, without smoothness and strong convexity assumptions.¹ The complexity in either setting improves that of the standard ADMM, which only uses a fixed penalty parameter. On the practical side, we demonstrate that the proposed algorithms converge comparably to, if not much faster than, ADMM with a fine-tuned fixed penalty parameter.

¹ Õ(·) suppresses a logarithmic factor.

1 Introduction
Our problem of interest is the following convex optimization problem that commonly arises in machine learning, statistics and signal processing:
min_{x∈Ω} F(x) ≜ f(x) + ψ(Ax),   (1)
where Ω ⊆ Rᵈ is a closed convex set, f : Rᵈ → R and ψ : Rᵐ → R are proper lower-semicontinuous convex functions, and A ∈ R^(m×d) is a matrix. In this paper, we consider solving (1) by alternating direction method of multipliers (ADMM) in two paradigms, namely deterministic optimization and stochastic optimization. In both paradigms, ADMM has been employed widely for solving regularized statistical learning problems like (1) due to its capability of tackling the sophisticated structured regularization term ψ(Ax) in (1) (e.g., the generalized lasso ‖Ax‖₁), which is often an obstacle for applying other methods such as the proximal gradient method. In what follows, we describe the standard ADMM and its variants for solving (1) in different optimization paradigms. It is worth mentioning that all algorithms presented in this paper can be easily extended to handle a more general term ψ(A(x) + c), where A is a linear mapping. To apply ADMM, the original problem (1) is first cast into an equivalent constrained optimization problem via decoupling:
min_{x∈Ω, y∈Rᵐ} f(x) + ψ(y),   s.t.   y = Ax.   (2)
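As one concrete instance of the structured term ψ(Ax) in (1), consider the fused-lasso / total-variation regularizer, where ψ = μ‖·‖₁ and A is the first-order differencing matrix. The small sketch below is our own illustration of how such an A is built:

```python
import numpy as np

def differencing_matrix(d):
    """First-order differencing matrix A in R^{(d-1) x d}, so that
    psi(A x) = mu * ||A x||_1 is the fused-lasso (total-variation)
    instance of the structured regularizer in problem (1)."""
    A = np.zeros((d - 1, d))
    for i in range(d - 1):
        A[i, i], A[i, i + 1] = 1.0, -1.0   # (A x)_i = x_i - x_{i+1}
    return A

A = differencing_matrix(5)
x = np.array([1.0, 1.0, 3.0, 3.0, 3.0])
print(np.abs(A @ x).sum())   # TV penalty = 2.0: one jump of size 2
```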
An augmented Lagrangian function for (2) is defined as
$$L(x,y,\lambda) = f(x) + \psi(y) - \lambda^\top(Ax - y) + \frac{\beta}{2}\|Ax - y\|_2^2, \qquad (3)$$
where $\beta$ is a constant called the penalty parameter and $\lambda\in\mathbb{R}^m$ is a dual variable. The standard ADMM then solves problem (1) by executing the following three steps in each iteration:
$$x_{\tau+1} = \arg\min_{x\in\Omega} L(x, y_\tau, \lambda_\tau) = \arg\min_{x\in\Omega}\ f(x) + \frac{\beta}{2}\Big\|(Ax - y_\tau) - \frac{\lambda_\tau}{\beta}\Big\|_2^2, \qquad (4)$$
$$y_{\tau+1} = \arg\min_{y\in\mathbb{R}^m} L(x_{\tau+1}, y, \lambda_\tau) = \arg\min_{y\in\mathbb{R}^m}\ \psi(y) + \frac{\beta}{2}\Big\|(Ax_{\tau+1} - y) - \frac{\lambda_\tau}{\beta}\Big\|_2^2, \qquad (5)$$
$$\lambda_{\tau+1} = \lambda_\tau - \beta(Ax_{\tau+1} - y_{\tau+1}). \qquad (6)$$
When $A$ is not an identity matrix, solving the subproblem (4) above for $x_{\tau+1}$ might be difficult. To alleviate the issue, linearized ADMM [33, 34, 8] has been proposed, which solves the following problem instead of (4):
$$x_{\tau+1} = \arg\min_{x\in\Omega}\ f(x) + \frac{\beta}{2}\Big\|(Ax - y_\tau) - \frac{\lambda_\tau}{\beta}\Big\|_2^2 + \frac{1}{2}\|x - x_\tau\|_G^2, \qquad (7)$$
where $\|x\|_G = \sqrt{x^\top G x}$ and $G\in\mathbb{R}^{d\times d}$ is a positive semi-definite matrix. By setting $G = \gamma I - \beta A^\top A \succeq 0$, the term $x^\top A^\top A x$ in (7) vanishes. It has been established that both standard ADMM and linearized ADMM have an $O(1/t)$ convergence rate for solving (2) [8], where $t$ is the number of iterations. Under a minor condition, this result implies an $O(1/\epsilon)$ iteration complexity for solving the original problem (1) (see Corollary 1). In addition, we consider ADMM for solving (1) in stochastic optimization with
$$f(x) = \mathbb{E}_\xi[f(x;\xi)] \qquad (8)$$
where $\xi$ is a random variable. This formulation captures many risk minimization problems in machine learning, where $\xi$ denotes a data point sampled from a distribution and $f(x;\xi)$ denotes the loss of the model $x$ on the data $\xi$. It also covers as a special case the empirical loss, where $f(x) = \frac{1}{n}\sum_{i=1}^n f(x;\xi_i)$ with $n$ the number of samples. For these problems, computing $f(x)$ itself might be prohibitive (e.g., when $n$ is very large) or even impossible. To address this issue, one usually considers the stochastic optimization paradigm, where it is assumed that $f(x;\xi)$ and its subgradient $\partial f(x;\xi)$ can be efficiently computed. To solve the stochastic optimization problem, stochastic ADMM algorithms have been proposed [21, 23], which update $y_{\tau+1}$ and $\lambda_{\tau+1}$ in the same way as (5) and (6), respectively, but update $x_{\tau+1}$ as
$$x_{\tau+1} = \arg\min_{x\in\Omega}\ f(x_\tau;\xi_\tau) + \partial f(x_\tau;\xi_\tau)^\top(x - x_\tau) + \frac{\beta}{2}\Big\|(Ax - y_\tau) - \frac{\lambda_\tau}{\beta}\Big\|_2^2 + \frac{\|x - x_\tau\|_{G_\tau}^2}{2\eta_\tau} \qquad (9)$$
where $\xi_\tau$ is a random sample, $\eta_\tau$ is a stepsize and $G_\tau = \gamma I - \eta_\tau\beta A^\top A \succeq I$ [23] or $G_\tau = I$ [21]. Other stochastic variants of ADMM for general convex optimization were also proposed in [23, 35]. These works have established an $O(1/\sqrt{t})$ convergence rate of stochastic ADMM for solving (2) with $f(x)$ given by (8). Under a minor condition, we can also show that these stochastic ADMM algorithms suffer from a higher iteration complexity of $O(1/\epsilon^2)$ for finding an $\epsilon$-optimal solution to the original problem (1) (see Corollary 3). Although variants of ADMM with fast convergence rates have been developed under smoothness, strong convexity and other regularity conditions (e.g., the matrix $A$ having full rank), the best iteration complexities of deterministic ADMM and stochastic ADMM for general convex optimization remain $O(1/\epsilon)$ and $O(1/\epsilon^2)$, respectively. On the other hand, many studies have reported that the performance of ADMM is very sensitive to the penalty parameter $\beta$. How to address or alleviate this issue has attracted many studies and remains an active topic. In particular, it remains an open question how to quantify the improvement in ADMM's theoretical convergence from using adaptive penalty parameters.
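To make the updates above concrete, here is a minimal Python sketch of linearized ADMM for an instance with $\psi(y)=\lambda\|y\|_1$ (so the y-update (5) is soft-thresholding). This is our own illustration, not the authors' code: `grad_f` is an assumed gradient oracle for $f$, $\Omega=\mathbb{R}^d$, and with $G=\gamma I-\beta A^\top A$, $\gamma=\beta\|A\|_2^2$, the x-update (7) is further simplified to a single gradient step (with a proximable $f$ one would use a proximal step instead).

```python
import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of kappa * ||.||_1: the closed-form y-update (5).
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def linearized_admm(A, grad_f, x0, beta, lam, t):
    # Linearized ADMM for min_x f(x) + lam * ||A x||_1, following (5)-(7).
    m, _ = A.shape
    gamma = beta * np.linalg.norm(A, 2) ** 2   # smallest gamma keeping G PSD
    x, y, dual = x0.copy(), A @ x0, np.zeros(m)
    x_sum = np.zeros_like(x0)
    for _ in range(t):
        # x-update (7), simplified to a gradient step on the linearized model.
        g = grad_f(x) + beta * (A.T @ (A @ x - y - dual / beta))
        x = x - g / gamma
        # y-update (5): prox of (lam / beta) * ||.||_1.
        y = soft_threshold(A @ x - dual / beta, lam / beta)
        # Dual update (6).
        dual = dual - beta * (A @ x - y)
        x_sum += x
    return x_sum / t   # averaged iterate \hat{x}_t
```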
Of course, the answer to this question depends on the adaptive scheme being used. Almost all previous works focus on self-adaptive schemes that update the penalty parameter during the course of optimization according to the historical iterates (e.g., by balancing the primal residual and the dual residual). However, there is hitherto no quantifiable improvement in terms of convergence rate or iteration complexity for these self-adaptive schemes. In this paper, we focus on the design of adaptive penalization for both deterministic and stochastic ADMM and show that, with the proposed adaptive updating scheme for the penalty parameter, the theoretical convergence properties of ADMM can be improved without imposing any smoothness or strong convexity assumptions on the objective function. The key difference between the proposed adaptive scheme and previous self-adaptive schemes is that the proposed penalty parameter is adaptive to a local sharpness property of the objective function, namely the local error bound (see Definition 1). Given the exponent constant $\theta\in(0,1]$ that characterizes this local sharpness property, we show that the proposed deterministic ADMM enjoys an improved iteration complexity of $\widetilde O(1/\epsilon^{1-\theta})$ and the proposed stochastic ADMM enjoys an iteration complexity of $\widetilde O(1/\epsilon^{2(1-\theta)})$, both of which improve on the complexities of their standard counterparts, which only use a fixed penalty parameter. To the best of our knowledge, this is the first evidence that an adaptive penalty parameter used in ADMM can lead to provably lower iteration complexities. We call the proposed ADMM algorithms locally adaptive ADMM because of their adaptivity to the problem's local property.

2 Related Work

Since there is a tremendous amount of work on ADMM, the review below mainly focuses on ADMM variants with a variable penalty parameter. A convergence rate of $O(1/t)$ was first shown for both the standard and linearized variants of ADMM [8, 19, 9] on general non-smooth and non-strongly convex problems. Later, smoothness and strong convexity assumptions were introduced to develop faster convergence rates of ADMMs [22, 3, 11, 6]. Stochastic ADMM was considered in [21, 23] with a convergence rate of $O(1/\sqrt{t})$ for general convex problems and $\widetilde O(1/t)$ for strongly convex problems. Recently, many variance reduction techniques have been borrowed into stochastic ADMM to achieve improved convergence rates for finite-sum optimization problems, where $f(x)=\frac{1}{n}\sum_{i=1}^n f_i(x)$, under smoothness and strong convexity assumptions [37, 36, 24]. Nevertheless, most of these works focus on a constant penalty parameter. He et al. [10] analyzed ADMM with self-adaptive penalty parameters. The motivation for their self-adaptive penalty is to balance the order of the primal residual and the dual residual. However, the convergence of ADMM with a self-adaptive penalty is not guaranteed unless the adaptive scheme is turned off after a number of iterations. Additionally, their self-adaptive rule requires computing the deterministic subgradient of $f(x)$, so it is not appropriate for stochastic optimization. Tian & Yuan [25] proposed a variant of ADMM with variable penalty parameters; their analysis and algorithm require smoothness of $\psi(Ax)$ and full column rank of the matrix $A$. Lin et al. [15] focused on solving low-rank representation by linearized ADMM and also proposed a non-decreasing self-adaptive penalty scheme.
However, their scheme is only applicable to an equality constraint $Ax + By = c$ with $c\neq 0$. Recently, Xu et al. [31] proposed a self-adaptive penalty scheme for ADMM based on Barzilai–Borwein gradient methods. The convergence of their ADMM relies on the analysis of He et al. [10] and thus requires the penalty parameter to be fixed after a number of iterations. In contrast, our adaptive scheme for the penalty parameter differs from the previous methods in the following aspects: (i) it is adaptive to the local sharpness property of the problem; (ii) it allows the penalty parameter to increase to infinity as the algorithm proceeds; (iii) it can be employed for both deterministic and stochastic ADMMs as well as their linearized versions. It is also notable that the presented algorithms and their convergence theory share many similarities with recent developments leveraging the local error bound condition [32, 30, 29], where similar iteration complexities have been established. However, we would like to emphasize that the newly proposed ADMM algorithms are more effective for tackling problems with structured regularizers (e.g., the generalized lasso) than the methods in [32, 30, 29], and have the additional unique feature of using an adaptive penalty parameter.

3 Preliminaries

Recall the problem of interest:
$$\min_{x\in\Omega} F(x) \triangleq f(x) + \psi(Ax), \qquad (10)$$
where $\Omega\subseteq\mathbb{R}^d$ is a closed convex set, $f:\mathbb{R}^d\to(-\infty,+\infty]$ and $\psi:\mathbb{R}^m\to(-\infty,+\infty]$ are proper lower-semicontinuous convex functions, and $A\in\mathbb{R}^{m\times d}$ is a matrix. Let $\Omega_*$ and $F_*$ denote the optimal set of (10) and the optimal value, respectively. We state the assumptions that will be used in the paper.

Assumption 1. For the convex optimization problem (10), we assume (a) there exist a known $x_0\in\Omega$ and $\epsilon_0\geq 0$ such that $F(x_0)-F_*\leq\epsilon_0$; (b) $\Omega_*$ is a non-empty convex compact set; (c) there exists a constant $\rho$ such that $\|\partial\psi(y)\|_2\leq\rho$ for all $y$; (d) $\psi$ is defined everywhere.

For a positive semi-definite matrix $G$, the $G$-norm is defined as $\|x\|_G=\sqrt{x^\top Gx}$. Let $B(x,r)=\{u\in\mathbb{R}^d: \|u-x\|_2\leq r\}$ denote the Euclidean ball centered at $x$ with radius $r$. We denote by $\mathrm{dist}(x,\Omega_*)$ the distance between $x$ and the set $\Omega_*$, i.e., $\mathrm{dist}(x,\Omega_*)=\min_{v\in\Omega_*}\|x-v\|_2$. We denote by $S_\epsilon$ the $\epsilon$-sublevel set of $F(x)$, i.e., $S_\epsilon=\{x\in\Omega: F(x)\leq F_*+\epsilon\}$.

Local Sharpness. Below, we introduce a condition, namely the local error bound condition, to characterize the local sharpness property of the objective function.

Definition 1 (Local error bound (LEB)). A function $F(x)$ satisfies a local error bound condition on the $\epsilon$-sublevel set if there exist $\theta\in(0,1]$ and $c>0$ such that for any $x\in S_\epsilon$,
$$\mathrm{dist}(x,\Omega_*)\leq c\,(F(x)-F_*)^\theta. \qquad (11)$$

Remark: We will refer to $\theta$ as the local sharpness parameter. A recent study [1] has shown that the local error bound condition is equivalent to the famous Kurdyka–Łojasiewicz (KL) property [13], which states that under a transformation $\varphi(s)=cs^\theta$, the function $F(x)$ can be made sharp around the optimal solutions, i.e., the norm of the subgradient of the transformed function $\varphi(F(x)-F_*)$ is lower bounded by the constant 1. Note that by allowing $\theta=0$ in the above condition we can capture a full spectrum of functions. However, a broad family of functions admits a sharper upper bound, i.e., a non-zero constant $\theta$ in the above condition. For example, for functions that are semi-algebraic and continuous, the above inequality is known to hold on any compact set (c.f. [1] and references therein).
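As a small worked example of Definition 1 (our own illustration, not from the paper), consider $F(x)=|x|^p$ in one dimension:

```latex
% Take F(x) = |x|^p on \Omega = \mathbb{R}, p \ge 1, so \Omega_* = \{0\} and F_* = 0.
% Then the LEB (11) holds with \theta = 1/p and c = 1:
\[
  \mathrm{dist}(x,\Omega_*) \;=\; |x| \;=\; \bigl(F(x) - F_*\bigr)^{1/p}
  \;\le\; c\,\bigl(F(x)-F_*\bigr)^{\theta},\qquad \theta = \tfrac{1}{p},\; c = 1.
\]
% p = 1 (polyhedral growth) gives \theta = 1; p = 2 (quadratic growth) gives \theta = 1/2,
% matching the piecewise-linear and piecewise-quadratic losses discussed in Section 6.
```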
The value of $\theta$ has been revealed for many functions (c.f. [18, 14, 20, 1, 32]).

4 Locally Adaptive ADMM for Deterministic Optimization

Since the proposed locally adaptive ADMM algorithm builds upon the standard ADMM, we first present the detailed steps of ADMM in Algorithm 1. Note that setting $G=0\in\mathbb{R}^{d\times d}$ gives the standard ADMM, while $G=\gamma I-\beta A^\top A\succeq 0$ gives the linearized variant, which can make the computation of $x_{\tau+1}$ easier. To ensure $G\succeq 0$, the minimum valid value of $\gamma$ in the linearized variant is $\beta\|A\|_2^2$.

Algorithm 1: ADMM$(x_0,\beta,t)$
1: Input: $x_0\in\Omega$, the penalty parameter $\beta$, and the number of iterations $t$.
2: Initialize: $x_1=x_0$, $y_1=Ax_1$, $\lambda_1=0$, $\gamma=\beta\|A\|_2^2$, and $G=\gamma I-\beta A^\top A$ or $G=0$.
3: for $\tau=1,\dots,t$ do
4:   Update $x_{\tau+1}$ by (7) and $y_{\tau+1}$ by (5).
5:   Update $\lambda_{\tau+1}$ by (6).
6: end for
7: Output: $\hat x_t=\sum_{\tau=1}^t x_\tau/t$.

Algorithm 2: LA-ADMM$(x_0,\beta_1,K,t)$
1: Input: $x_0\in\Omega$, the number of stages $K$, the number of iterations $t$ per stage, and the initial value $\beta_1$ of the penalty parameter.
2: for $k=1,\dots,K$ do
3:   Let $x_k=\text{ADMM}(x_{k-1},\beta_k,t)$.
4:   Update $\beta_{k+1}=2\beta_k$.
5: end for
6: Output: $x_K$.

To present the convergence result of ADMM (Algorithm 1), we first introduce some notation:
$$u=\begin{pmatrix}x\\ y\\ \lambda\end{pmatrix},\quad F(u)=\begin{pmatrix}-A^\top\lambda\\ \lambda\\ Ax-y\end{pmatrix},\quad \hat u_t=\frac{1}{t}\sum_{\tau=1}^t u_\tau,\quad \hat x_t=\frac{1}{t}\sum_{\tau=1}^t x_\tau,\quad \hat y_t=\frac{1}{t}\sum_{\tau=1}^t y_\tau,\quad \hat\lambda_t=\frac{1}{t}\sum_{\tau=1}^t \lambda_\tau.$$
We recall the convergence result of [8] for the equality constrained problem (2), which does not assume any smoothness, strong convexity, or other regularity conditions.

Proposition 1 (Theorem 4.1 in [8]). For any $x\in\Omega$, $y\in\mathbb{R}^m$ and $\lambda\in\mathbb{R}^m$, we have
$$f(\hat x_t)+\psi(\hat y_t)-[f(x)+\psi(y)]+(\hat u_t-u)^\top F(u)\leq \frac{\|x-x_1\|_G^2}{2t}+\frac{\beta\|y-y_1\|_2^2}{2t}+\frac{\|\lambda-\lambda_1\|_2^2}{2\beta t}.$$

Remark: The above result establishes a convergence rate for the variational inequality pertaining to (2). When $t\to\infty$, $(\hat x_t,\hat y_t)$ converges to the optimal solutions of (2) at a rate of $O(1/t)$.

Since our goal is to solve problem (1), we next present a corollary exhibiting the convergence of ADMM for the original problem (1). All omitted proofs can be found in the supplement.

Corollary 1. Suppose Assumptions 1.c and 1.d hold. Let $\hat x_t$ be the output of ADMM. For any $x\in\Omega$, we have
$$F(\hat x_t)-F(x)\leq \frac{\|x-x_0\|_G^2}{2t}+\frac{\beta\|A\|_2^2\|x-x_0\|_2^2}{2t}+\frac{\rho^2}{2\beta t}.$$

Remark: For the standard ADMM with $G=0$, the first term on the right-hand side vanishes. For the linearized ADMM with $G=\gamma I-\beta A^\top A\succeq 0$, we can bound $\|x-x_0\|_G^2\leq\gamma\|x-x_0\|_2^2$. One can also derive a theoretically optimal value of $\beta$ by setting $x=x_*\in\Omega_*$ and minimizing the upper bound, which results in $\beta=\frac{\rho}{\|A\|_2\|x_*-x_0\|_2}$ for the standard ADMM or $\beta=\frac{\rho}{\sqrt 2\,\|A\|_2\|x_*-x_0\|_2}$ for the linearized ADMM. Finally, the above result implies that the iteration complexity of standard and linearized ADMM for finding an $\epsilon$-optimal solution of (1) is $O\big(\frac{\rho\|A\|_2\|x-x_0\|_2}{\epsilon}\big)$.

Next, we present our locally adaptive ADMM and our main result in this section regarding its iteration complexity. The proposed algorithm is described in Algorithm 2 and is referred to as LA-ADMM. The algorithm runs in multiple stages, calling ADMM at each stage with a warm start and a constant number of iterations $t$. The penalty parameter $\beta_k$ is increased by a constant factor larger than 1 (e.g., 2) after each stage, and its initial value depends on $\rho$, $\|A\|_2$, $\epsilon_0$, $\theta$ and the target accuracy $\epsilon$. The convergence result of LA-ADMM employing $G=\gamma I-\beta A^\top A$ is established below; a slightly better result in terms of a constant factor can be established for $G=0$.
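A minimal sketch of the multi-stage wrapper in Algorithm 2, reusing the `linearized_admm` sketch above (again illustrative, not the authors' code):

```python
def la_admm(A, grad_f, x0, beta1, K, t, lam):
    # LA-ADMM (Algorithm 2): warm-start ADMM for t iterations per stage and
    # double the penalty parameter after every stage.
    x, beta = x0, beta1
    for _ in range(K):
        x = linearized_admm(A, grad_f, x, beta, lam, t)
        beta *= 2.0
    return x
```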
Theorem 2. Suppose Assumption 1 holds and $F(x)$ obeys the local error bound condition on the $\epsilon$-sublevel set. Let $\beta_1=\frac{2\rho}{\|A\|_2\,\epsilon_0^{1-\theta}}$, $K=\lceil\log_2(\epsilon_0/\epsilon)\rceil$ and $t=\Big\lceil\frac{8\rho\|A\|_2\max(1,c^2)}{\epsilon^{1-\theta}}\Big\rceil$. Then $F(x_K)-F_*\leq 2\epsilon$, and the iteration complexity of LA-ADMM for achieving a $2\epsilon$-optimal solution is $\widetilde O(1/\epsilon^{1-\theta})$.

Remark: The penalty parameter adapts to the local sharpness at two levels. First, the initial value $\beta_1$ in Algorithm 2 depends on the local sharpness parameter $\theta$. Second, the time interval at which the penalty parameter is increased is determined by the value of $t$, which also depends on $\theta$. Compared to the $O(1/\epsilon)$ iteration complexity of vanilla ADMM, LA-ADMM thus enjoys a lower iteration complexity.

5 Locally Adaptive ADMM for Stochastic Optimization

In this section, we consider the stochastic optimization problem
$$\min_{x\in\Omega} F(x)\triangleq \mathbb{E}_\xi[f(x;\xi)]+\psi(Ax), \qquad (12)$$
where $\xi$ is a random variable and $f(x;\xi):\mathbb{R}^d\to(-\infty,+\infty]$ is a proper lower-semicontinuous convex function for each realization of $\xi$. For this problem, in addition to Assumption 1, we make the following assumption, which is standard for many stochastic gradient methods.

Assumption 2. For the stochastic optimization problem (12), we assume that there exists a constant $R$ such that $\|\partial f(x;\xi)\|_2\leq R$ almost surely for any $x\in\Omega$.

Algorithm 3: SADMM$(x_0,\eta,\beta,t,\Omega)$
1: Input: $x_0\in\mathbb{R}^d$, a step size $\eta$, a penalty parameter $\beta$, the number of iterations $t$, and a domain $\Omega$.
2: Initialize: $x_1=x_0$, $y_1=Ax_1$, $\lambda_1=0$.
3: for $\tau=1,\dots,t$ do
4:   Update $x_{\tau+1}$ by (9) and $y_{\tau+1}$ by (5).
5:   Update $\lambda_{\tau+1}$ by (6).
6: end for
7: Output: $\hat x_t=\sum_{\tau=1}^t x_\tau/t$.

Algorithm 4: LA-SADMM$(x_0,\eta_1,\beta_1,D_1,K,t)$
1: Input: $x_0\in\mathbb{R}^d$, the number of stages $K$, the number of iterations $t$ per stage, the initial step size $\eta_1$, the initial penalty parameter $\beta_1$, and the initial radius $D_1$.
2: for $k=1,\dots,K$ do
3:   Let $x_k=\text{SADMM}(x_{k-1},\eta_k,\beta_k,t,B_k\cap\Omega)$.
4:   Update $\eta_{k+1}=\eta_k/2$, $\beta_{k+1}=2\beta_k$, and $D_{k+1}=D_k/2$.
5: end for
6: Output: $x_K$.

We present a framework of stochastic ADMM (SADMM) in Algorithm 3. The convergence results of stochastic ADMM for the equivalent constrained optimization problem with different choices of $G_\tau$ have been established in [21, 23, 35]. Below, we focus on $G_\tau=\gamma I-\eta\beta A^\top A\succeq I$ because it leads to a computationally more efficient update for $x_{\tau+1}$ than the other two choices for high-dimensional problems. Using $G_\tau=I$ yields a similar convergence bound up to a constant term, and using the idea of AdaGrad to compute $G_\tau$ leads to the same order of convergence in the worst case; we postpone the latter to future work. The corollary below will be used in our analysis.

Corollary 3. Suppose Assumptions 1.c, 1.d and Assumption 2 hold. Let $G_\tau=\gamma I-\eta\beta A^\top A\succeq I$ in Algorithm 3 and let $g_\tau=\partial f(x_\tau;\xi_\tau)$ denote the sampled subgradient. For any $x\in\Omega$,
$$F(\hat x_t)-F(x)\leq \frac{\eta R^2}{2}+\frac{\|x_1-x\|_2^2}{2\eta t}+\frac{\beta\|A\|_2^2\|x_1-x\|_2^2}{2t}+\frac{\rho^2}{2\beta t}+\frac{\rho\|A\|_2\|x_1-x_{t+1}\|_2}{t}+\frac{1}{t}\sum_{\tau=1}^t(\mathbb{E}[g_\tau]-g_\tau)^\top(x_\tau-x).$$

Remark: Taking expectation on both sides yields the convergence bound in expectation. One can also bound the last term via a large-deviation analysis to obtain convergence with high probability. In particular, by setting $\eta\propto 1/\sqrt t$, the above result implies an $O(1/\sqrt t)$ convergence rate, i.e., an $O(1/\epsilon^2)$ iteration complexity of stochastic ADMM.

Next, we discuss our locally adaptive stochastic ADMM (LA-SADMM), described in Algorithm 4. The key idea is similar to LA-ADMM, i.e., calling SADMM in multiple stages with a warm start.
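The inner stochastic solver (Algorithm 3) differs from the deterministic loop only in the x-update (9). Below is a hedged sketch for the same $\psi(y)=\lambda\|y\|_1$ instance with $G_\tau=\gamma I-\eta\beta A^\top A\succeq I$, under which (9) reduces to an explicit projected step; `stoch_subgrad` and `project` are assumed oracles, and this is a simplification rather than the paper's implementation.

```python
def sadmm(A, stoch_subgrad, x0, eta, beta, lam, t, project=lambda z: z):
    # Stochastic ADMM (Algorithm 3): x-update (9) with a sampled subgradient,
    # then the usual y- and dual updates (5)-(6).
    m, _ = A.shape
    gamma = 1.0 + eta * beta * np.linalg.norm(A, 2) ** 2  # keeps G_tau >= I
    x, y, dual = x0.copy(), A @ x0, np.zeros(m)
    x_sum = np.zeros_like(x0)
    for _ in range(t):
        g = stoch_subgrad(x) + beta * (A.T @ (A @ x - y - dual / beta))
        x = project(x - (eta / gamma) * g)                # x-update (9)
        y = soft_threshold(A @ x - dual / beta, lam / beta)
        dual = dual - beta * (A @ x - y)
        x_sum += x
    return x_sum / t
```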
The step size $\eta_k$ in each call of SADMM is fixed, and it decreases by a constant fraction after each stage. The penalty parameter is updated similarly to LA-ADMM but with a different initial value. A key difference from LA-ADMM is that we employ a domain-shrinking approach to restrict the domain of the solutions $x_{\tau+1}$ at each stage. For the $k$-th stage, the domain for $x$ is the intersection of $\Omega$ and $B_k=B(x_{k-1},D_k)$, where the latter is a ball of radius $D_k$ centered at $x_{k-1}$ (the initial solution of the $k$-th stage). The radius $D_k$ decreases geometrically between stages. The purpose of domain shrinking is to control the last term in the upper bound of Corollary 3 so that it decreases geometrically as the stage number increases. A similar idea has been adopted in [29, 7, 5]. Note that within each call of SADMM we can use any of the three choices of $G_\tau$ mentioned before; below we only present the convergence result for the variant with $G_\tau=\gamma I-\eta_k\beta_k A^\top A$.

Theorem 4. Suppose Assumptions 1 and 2 hold and $F(x)$ obeys the local error bound condition on $S_\epsilon$. Given $\delta\in(0,1)$, let $\hat\delta=\delta/K$, $K=\lceil\log_2(\epsilon_0/\epsilon)\rceil$, $\eta_1=\frac{\epsilon_0}{6R^2}$, $\beta_1=\frac{6R^2}{\|A\|_2^2\epsilon_0}$, $D_1\geq c\,\epsilon_0^\theta$, let $t$ be the smallest integer such that
$$t\geq\max\Big\{\frac{12\rho\|A\|_2D_1}{\epsilon_0},\ \frac{\rho^2\|A\|_2^2D_1^2}{\epsilon_0^2},\ \frac{6912R^2\log(1/\hat\delta)D_1^2}{\epsilon_0^2}\Big\},$$
and let $G_\tau=2I-\eta_1\beta_1A^\top A\succeq I$. Then LA-SADMM guarantees that, with probability $1-\delta$, $F(x_K)-F_*\leq 2\epsilon$. The iteration complexity of LA-SADMM for achieving a $2\epsilon$-optimal solution with high probability $1-\delta$ is $\widetilde O(\log(1/\delta)/\epsilon^{2(1-\theta)})$, provided $D_1=O(c\,\epsilon_0^\theta)$.

Algorithm 5: LA-ADMM with Restarting
1: Input: $t_1$ and $\beta_1^{(1)}$.
2: Initialization: $x^{(0)}$.
3: for $s=1,2,\dots$ do
4:   $x^{(s)}=\text{LA-ADMM}(x^{(s-1)},\beta_1^{(s)},K,t_s)$.
5:   $t_{s+1}=t_s\,2^{1-\theta}$, $\beta_1^{(s+1)}=\beta_1^{(s)}/2^{1-\theta}$.
6: end for
7: Output: $x^{(S)}$.

Algorithm 6: LA-SADMM with Restarting
1: Input: $t_1$, $D_1^{(1)}$ and $\epsilon\leq\epsilon_0/2$.
2: Initialization: $x^{(0)}$, $\eta_1=\frac{\epsilon_0}{6R^2}$, $\beta_1=\frac{6R^2}{\|A\|_2^2\epsilon_0}$.
3: for $s=1,2,\dots$ do
4:   $x^{(s)}=\text{LA-SADMM}(x^{(s-1)},\eta_1,\beta_1,D_1^{(s)},K,t_s)$.
5:   $t_{s+1}=t_s\,2^{2(1-\theta)}$, $D_1^{(s+1)}=D_1^{(s)}\,2^{1-\theta}$.
6: end for
7: Output: $x^{(S)}$.

Remark: Interestingly, unlike in LA-ADMM, the initial value $\beta_1$ here does not depend on $\theta$; the adaptivity of the penalty parameter lies in the time interval $t$, which determines when the value of $\beta$ is increased. This difference comes from the first two terms in Corollary 3.

Before ending this section, we discuss two points. First, both Theorem 2 and Theorem 4 exhibit the dependence of the two algorithms on the parameter $c$ (e.g., through $t$ in Algorithm 2 and $D_1$ in Algorithm 4), which is usually unknown. Nevertheless, this issue can be addressed by adding another level of restarting with increasing sequences of $t$ and $D_1$, similar to the practical variants in [29, 32]. Due to space limits, we only present these variants in Algorithm 5 and Algorithm 6, with their formal guarantees given in the supplement. The conclusion is that, under mild conditions, as long as $\beta_1^{(1)}$ in Algorithm 5 is sufficiently small and $t_1$ and $D_1^{(1)}$ in Algorithm 6 are sufficiently large, the iteration complexities remain $\widetilde O(1/\epsilon^{1-\theta})$ and $\widetilde O(1/\epsilon^{2(1-\theta)})$ when $\theta$ in the LEB condition is known. Second, these variants can be employed even when the local sharpness parameter $\theta$ is unknown, by simply setting it to 0, and they still enjoy iteration complexities reduced by a multiplicative factor compared to vanilla ADMMs. Detailed results are included in the supplement.
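The only non-standard step of Algorithm 4 is optimizing over $B_k\cap\Omega$. Assuming $\Omega=\mathbb{R}^d$ for simplicity (for a general $\Omega$ one would compose this with the projection onto $\Omega$, e.g., via alternating projections), the required projection is one line; stage $k$ can then pass `project=lambda z: project_ball(z, x_prev, D_k)` to the `sadmm` sketch above.

```python
def project_ball(x, center, radius):
    # Euclidean projection onto the ball B(center, radius) used for
    # domain shrinking in LA-SADMM.
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + (radius / n) * d
```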
6 Applications and Experiments

In this section, we present experimental results of the proposed algorithms on three tasks, namely the generalized lasso, robust regression with a low-rank regularizer (RR-LR), and learning a low-rank representation. For the generalized lasso, our experiments focus on comparing the proposed LA-SADMM with SADMM. For the latter two tasks, we focus on comparing the proposed LA-ADMM with previous linearized ADMM with and without self-adaptive penalty parameters.

We first consider the generalized lasso, which finds applications in many problems in statistics and machine learning [28]. Its objective can be expressed as
$$\min_{x\in\mathbb{R}^d} F(x) = \frac{1}{n}\sum_{i=1}^n \ell(x^\top a_i, b_i) + \lambda\|Ax\|_1 \qquad (13)$$
where $(a_i,b_i)$, $i=1,\dots,n$, is a set of training pairs, $\lambda\geq 0$ is a regularization parameter, $A\in\mathbb{R}^{m\times d}$ is a specified matrix, and $\ell(z,b)$ is a convex loss function in $z$. This formulation includes many problems as special cases, e.g., the standard lasso, where $A=I\in\mathbb{R}^{d\times d}$ [26]; the fused lasso, which penalizes the $\ell_1$ norm of both the coefficients and their successive differences [27]; the graph-guided fused lasso (GGLASSO), where $A=F\in\mathbb{R}^{m\times d}$ encodes graph information about the features [12]; and the sparse graph-guided fused lasso (S-GGLASSO), where $\|Ax\|_1=\lambda_2\|x\|_1+\lambda_1\|Fx\|_1$ [21].

Let us first discuss the local sharpness parameter of the generalized lasso with different loss functions. Consider first piecewise linear losses such as the hinge loss $\ell(z,b)=\max(0,1-bz)$, the absolute loss $\ell(z,b)=|z-b|$, and the $\epsilon$-insensitive loss $\ell(z,b)=\max(|z-b|-\epsilon,0)$. Then the objective is a polyhedral function and, according to the results in [32], the local sharpness parameter is $\theta=1$. This implies that both LA-ADMM and LA-SADMM enjoy linear convergence for solving problem (13) with a piecewise linear loss function. To the best of our knowledge, these are the first linear convergence results of ADMM without smoothness and strong convexity conditions. One can also consider piecewise quadratic losses such as the square loss $\ell(z,b)=(z-b)^2$ for $b\in\mathbb{R}$ and the squared hinge loss $\ell(z,b)=\max(0,1-bz)^2$ for $b\in\{1,-1\}$. According to [14], a problem with a convex piecewise quadratic loss has local sharpness parameter $\theta=1/2$, implying $\widetilde O(1/\sqrt\epsilon)$ and $\widetilde O(1/\epsilon)$ complexities for LA-ADMM and LA-SADMM, respectively. For more examples with different values of $\theta$, we refer readers to [32, 30, 29, 17].

Figure 1: Comparison of different algorithms for solving different tasks; each panel plots the objective value against the number of iterations. Panels (a, b) show SVM + GGLASSO and (d, e) show SVM + S-GGLASSO on the w8a and gisette data; (c) shows RR + LR, robust regression with a low-rank regularizer, on synthetic data; (f) shows LRR, low-rank representation, on the shape image.
SVM Classification with GGLASSO and S-GGLASSO Regularizers To generate the $A$ matrix, we first need to construct a dependency graph over the features. Following [21], we generate the dependency graph by sparse inverse covariance selection [4]. Specifically, we obtain an estimator of the inverse covariance matrix, denoted by $\hat\Sigma^{-1}$, via sparse inverse covariance estimation with the graphical lasso [4]. For each nonzero entry $\hat\Sigma^{-1}_{ij}$, where $i,j\in\{1,\dots,d\}$ and $i\neq j$, an edge between $i$ and $j$ is created. If we denote by $G=\{V,E\}$ the resulting graph, where $V$ is a set of $d$ vertices corresponding to the $d$ features in the data and $E=\{e_1,\dots,e_m\}$ denotes the set of $m$ edges between elements of $V$, with each $e_i$ a tuple of two elements, then the $k$-th row of $A$ has two non-zero elements corresponding to the $k$-th edge $e_k=(i,j)\in E$, namely $A_{k,i}=1$ and $A_{k,j}=-1$. We choose two medium-scale datasets from the LIBSVM website, namely the w8a data ($n=49749$, $d=300$) and the gisette data ($n=6000$, $d=5000$). When estimating the inverse covariance matrix, we set the graphical lasso penalty parameter to 0.01, which renders the percentage of non-zero elements of the $A$ matrix around 3% for the w8a data and 1% for the gisette data.

We compare the performance of LA-SADMM with SADMM [23], where in SADMM we use $G_\tau=\gamma I-\eta_\tau\beta A^\top A\succeq I$ with $\eta_\tau\propto\eta_1/\sqrt\tau$. For fairness, we use the same all-zero initial solution for both. We fix the regularization parameters ($\lambda$ in GGLASSO and $\lambda_1,\lambda_2$ in S-GGLASSO) to $1/n$, where $n$ is the number of samples. For SADMM, we tune both $\eta_1$ and $\beta$ over $\{10^{-5},10^{-4},\dots,10^{5}\}$. For LA-SADMM, we set the initial step size and penalty parameter to their theoretical values in Theorem 4, and select $D_1$ from $\{100, 1000\}$. The value of $t$ in LA-SADMM is set to $10^5$ and $5\times 10^4$ for w8a and gisette, respectively. The objective value versus the number of iterations is presented in Figure 1 (a, b, d, e). We can see that LA-SADMM exhibits much faster convergence than SADMM.

Robust Regression with a Low-rank Regularizer The objective function is $F(X)=\lambda\|X\|_*+\|AX-C\|_1$. We can form an equality constraint $Y=AX-C$ and solve the problem by linearized ADMM. The value of the local sharpness parameter for this problem is still open. We compare the proposed LA-ADMM, the vanilla linearized ADMM with a fixed penalty parameter (ADMM), the linearized ADMM with the self-adaptive penalty proposed in [15] (ADMM-AP), and the linearized ADMM with residual balancing [10, 2] (ADMM-RB). We construct a synthetic dataset where $A\in\mathbb{R}^{1000\times 100}$ is generated from a Gaussian distribution with mean 0 and standard deviation 1. To construct $C\in\mathbb{R}^{1000\times 50}$, we first generate $X\in\mathbb{R}^{100\times 50}$ and retain only its top 20 components, denoted by $\widetilde X$, and then let $C=A\widetilde X+E$, where $E$ is a Gaussian noise matrix with mean zero and standard deviation 0.01. We set $\lambda=100$. For the vanilla linearized ADMM, we try penalty parameters from $\{10^{-3},\dots,10^{3}\}$ and report the best performance (using $\beta=0.01$) and the worst performance (using $\beta=0.001$). To demonstrate the capability of adaptive ADMM, we choose $\beta=0.001$ as the initial penalty parameter for LA-ADMM and ADMM-AP. The other parameters of ADMM-AP are set as suggested in the original paper. For LA-ADMM, we implement its restarting variant (Algorithm 5): we start with $t=2$ inner iterations and increase $t$ by a factor of 2 after every 10 stages, and we also increase the value of $\beta$ by a factor of 10 after each stage.
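For concreteness, a sketch of the $A$-matrix construction described above, using scikit-learn's `GraphicalLasso` as a stand-in for the graphical lasso of [4]; the function name, the threshold, and the sparse representation are our own choices:

```python
import numpy as np
from scipy.sparse import lil_matrix
from sklearn.covariance import GraphicalLasso

def build_gglasso_matrix(X, alpha=0.01, tol=1e-8):
    # Estimate a sparse precision (inverse covariance) matrix, create one
    # edge per nonzero off-diagonal entry, and encode edge (i, j) as a row
    # of A with A[k, i] = 1 and A[k, j] = -1.
    prec = GraphicalLasso(alpha=alpha).fit(X).precision_
    d = prec.shape[0]
    edges = [(i, j) for i in range(d) for j in range(i + 1, d)
             if abs(prec[i, j]) > tol]
    A = lil_matrix((len(edges), d))
    for k, (i, j) in enumerate(edges):
        A[k, i], A[k, j] = 1.0, -1.0
    return A.tocsr()
```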
The results are reported in Figure 1 (c), from which we can see that LA-ADMM performs comparably with ADMM under the best penalty parameter and better than ADMM-AP. We also include results in terms of running time in the supplement.

Low-rank Representation [16] The objective function is $F(X)=\lambda\|X\|_*+\|AX-A\|_{2,1}$, where $A\in\mathbb{R}^{n\times d}$ is a data matrix. We used the shape image (http://pages.cs.wisc.edu/~swright/TVdenoising/) and set $\lambda=10$. For the vanilla linearized ADMM, we try penalty parameters from $\{10^{-3},\dots,10^{3}\}$ and report the best performance (using $\beta=0.1$) and the worst performance (using $\beta=0.01$). To demonstrate the capability of adaptive ADMM, we choose $\beta=0.01$ as the initial penalty parameter for LA-ADMM and ADMM-AP. The other parameters of ADMM-AP are set as suggested in the original paper. For LA-ADMM, we start with $t=20$ inner iterations and increase $t$ by a factor of 2 after every 2 stages, and we also increase the value of $\beta$ by a factor of 2 after each stage. The results are reported in Figure 1 (f), from which we can see that LA-ADMM performs comparably with ADMM under the best penalty parameter and better than ADMM-AP. We can also see from the figure that the results of ADMM-worst and ADMM-AP are quite similar. We include results in terms of running time in the supplement.

7 Conclusion

In this paper, we have presented a new theory of (linearized) ADMM for both deterministic and stochastic optimization with adaptive penalty parameters. The new adaptive scheme differs from previous self-adaptive schemes in that it is adaptive to the local sharpness of the problem. We have established faster convergence of the proposed ADMM algorithms whose penalty parameters adapt to the local sharpness parameter. Experimental results have demonstrated the superior performance of the proposed stochastic and deterministic adaptive ADMM.

Acknowledgements We thank the anonymous reviewers for their helpful comments. Y. Xu, M. Liu and T. Yang are partially supported by the National Science Foundation (IIS-1463988, IIS-1545995). Y. Xu would like to thank Yan Yan for useful discussions on the low-rank representation experiments.

References

[1] J. Bolte, T. P. Nguyen, J. Peypouquet, and B. Suter. From error bounds to the complexity of first-order descent methods for convex functions. CoRR, abs/1510.08234, 2015.
[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[3] W. Deng and W. Yin. On the global and linear convergence of the generalized alternating direction method of multipliers. Journal of Scientific Computing, 66(3):889–916, 2016.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9, 2008.
[5] S. Ghadimi and G. Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, II: shrinking procedures and optimal algorithms. SIAM Journal on Optimization, 23(4):2061–2089, 2013.
[6] T. Goldstein, B. O'Donoghue, S. Setzer, and R. Baraniuk. Fast alternating direction optimization methods. SIAM Journal on Imaging Sciences, 7(3):1588–1623, 2014.
[7] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Proceedings of the 24th Annual Conference on Learning Theory (COLT), pages 421–436, 2011.
[8] B. He and X. Yuan.
On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method. SIAM Journal on Numerical Analysis, 50(2):700–709, 2012.
[9] B. He and X. Yuan. On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. Numerische Mathematik, 130(3):567–577, 2015.
[10] B. S. He, H. Yang, and S. L. Wang. Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities. Journal of Optimization Theory and Applications, 106(2):337–356, 2000.
[11] M. Hong and Z.-Q. Luo. On the linear convergence of the alternating direction method of multipliers. Mathematical Programming, pages 1–35, 2016.
[12] S. Kim, K.-A. Sohn, and E. P. Xing. A multivariate regression approach to association analysis of a quantitative trait network. Bioinformatics, 25(12):i204–i212, 2009.
[13] K. Kurdyka. On gradients of functions definable in o-minimal structures. Annales de l'institut Fourier, 48(3):769–783, 1998.
[14] G. Li. Global error bounds for piecewise convex polynomials. Math. Program., 137(1-2):37–64, 2013.
[15] Z. Lin, R. Liu, and Z. Su. Linearized alternating direction method with adaptive penalty for low-rank representation. In Advances in Neural Information Processing Systems (NIPS), pages 612–620, 2011.
[16] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In Proceedings of the 27th International Conference on Machine Learning (ICML), pages 663–670, 2010.
[17] M. Liu and T. Yang. Adaptive accelerated gradient converging methods under Hölderian error bound condition. CoRR, abs/1611.07609, 2017.
[18] Z.-Q. Luo and J. F. Sturm. Error bound for quadratic systems. Applied Optimization, 33:383–404, 2000.
[19] R. D. Monteiro and B. F. Svaiter. Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers. SIAM Journal on Optimization, 23(1):475–507, 2013.
[20] I. Necoara, Y. Nesterov, and F. Glineur. Linear convergence of first order methods for non-strongly convex optimization. CoRR, abs/1504.06298, 2015.
[21] H. Ouyang, N. He, L. Tran, and A. G. Gray. Stochastic alternating direction method of multipliers. In Proceedings of the 30th International Conference on Machine Learning (ICML), 28:80–88, 2013.
[22] Y. Ouyang, Y. Chen, G. Lan, and E. Pasiliao Jr. An accelerated linearized alternating direction method of multipliers. SIAM Journal on Imaging Sciences, 8(1):644–681, 2015.
[23] T. Suzuki. Dual averaging and proximal gradient descent for online alternating direction multiplier method. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 392–400, 2013.
[24] T. Suzuki. Stochastic dual coordinate ascent with alternating direction method of multipliers. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 736–744, 2014.
[25] W. Tian and X. Yuan. Faster alternating direction method of multipliers with a worst-case O(1/n²) convergence rate. 2016.
[26] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society (Series B), 58:267–288, 1996.
[27] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91–108, 2005.
[28] R. J. Tibshirani, J. Taylor, et al. The solution path of the generalized lasso. The Annals of Statistics, 39(3):1335–1371, 2011.
[29] Y. Xu, Q. Lin, and T. Yang.
Stochastic convex optimization: faster local growth implies faster global convergence. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 3821–3830, 2017.
[30] Y. Xu, Y. Yan, Q. Lin, and T. Yang. Homotopy smoothing for non-smooth problems with lower complexity than O(1/ε). In Advances in Neural Information Processing Systems 29 (NIPS), pages 1208–1216, 2016.
[31] Z. Xu, M. A. T. Figueiredo, and T. Goldstein. Adaptive ADMM with spectral penalty parameter selection. CoRR, abs/1605.07246, 2016.
[32] T. Yang and Q. Lin. RSG: beating subgradient method without smoothness and strong convexity. CoRR, abs/1512.03107, 2016.
[33] X. Zhang, M. Burger, X. Bresson, and S. Osher. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM Journal on Imaging Sciences, 3(3):253–276, 2010.
[34] X. Zhang, M. Burger, and S. Osher. A unified primal-dual algorithm framework based on Bregman iteration. Journal of Scientific Computing, 46(1):20–46, 2011.
[35] P. Zhao, J. Yang, T. Zhang, and P. Li. Adaptive stochastic alternating direction method of multipliers. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 69–77, 2015.
[36] S. Zheng and J. T. Kwok. Fast-and-light stochastic ADMM. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI), 2016.
[37] W. Zhong and J. T.-Y. Kwok. Fast stochastic alternating direction method of multipliers. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 46–54, 2014.
Shape and Material from Sound

Zhoutong Zhang (MIT), Jiajun Wu (MIT), Qiujia Li (University of Cambridge), Zhengjia Huang (ShanghaiTech University), Joshua B. Tenenbaum (MIT), William T. Freeman (MIT, Google Research)

Abstract

What can we infer from hearing an object falling onto the ground? Based on knowledge of the physical world, humans can infer rich information from such limited data: the rough shape of the object, its material, the height of the fall, etc. In this paper, we aim to approximate such competence. We first mimic human knowledge about the physical world using a fast physics-based generative model. Then, we present an analysis-by-synthesis approach to infer the properties of the falling object. We further approximate human past experience by directly mapping audio to object properties using deep learning with self-supervision. We evaluate our method through behavioral studies, in which we compare human predictions with ours on inferring object shape, material, and initial height of falling. Results show that our method achieves near-human performance, without any annotations. We further test our model on real world data, illustrating its potential for inference from real recordings.

1 Introduction

Given a short audio clip of interacting objects, humans, even young children, can recover rich information about materials, surface smoothness, and the quantity of objects involved [Zwicker and Fastl, 2013, Kunkler-Peck and Turvey, 2000, Siegel et al., 2014]. How does our cognitive system recover so much content from the audio clip? What is the role of past experience in understanding auditory inputs?

For physical scene understanding from visual input, recent behavioral and computational studies suggest that human judgments can be well explained as approximate, probabilistic simulations of a mental physics engine [Battaglia et al., 2013, Sanborn et al., 2013]. These studies suggest that the brain encodes rich but noisy knowledge of the physical properties of objects and the basic laws of physical interaction between objects. To understand, reason, and predict about a physical scene, humans seem to rely on simulations from this mental physics engine.

In this paper, we develop a computational system to interpret audio clips of falling objects, inspired by the idea that humans may use a physics engine as part of a generative model to understand the physical world. The first component of our generative model is the representation of a rigid object, which includes its 3D geometric shape, its position in space, and its physical properties, including mass, Young's modulus, Rayleigh damping coefficients, and restitution. All of these object attributes are treated as latent variables in our model, which we aim to infer from auditory inputs. The second component is an efficient physics-based audio synthesis engine. Given the initial conditions and properties of an object, which serve as the hypothesis of our generative model, the engine first simulates the rigid body motion of the object and generates the object's trajectory with the corresponding collision profile under rigid body physics. The object's trajectory and collision profile, along with its pre-computed sound statistics, are then used to generate the sound of the entire physical process.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Given an audio clip of a single falling object, we use our generative model to infer latent variables that best reproduce the sound.
With such an efficient forward model, we can infer the prescribed latent variables in an analysis-by-synthesis fashion: given an audio clip, we aim to find the set of latent variables that best reproduces it. One challenge toward this goal is to design a likelihood function that measures the perceptual distance between two sounds. To address this challenge, we exploit the observation that a simple feature space, such as the spectrogram, can be effective if the degrees of freedom of the latent variables are restricted. This observation allows us to infer latent variables via methods like Gibbs sampling, where we need only approximate the conditional probability of a single variable given the others.

To further accelerate our inference procedure, we incorporate past experience via a supervised learning system that uses unlabeled data with inferred labels. To this end, we propose a self-supervised learning algorithm inspired by the wake/sleep phases in Helmholtz machines [Dayan et al., 1995]. A deep neural network is trained as the recognition model (i.e., the sleep cycle), where the labels are generated by our inference algorithm. For any future audio clip, the output of the recognition model can then be used as a good initialization, accelerating the inference procedure.

We evaluate our models on a range of perception tasks: inferring object shape, material, and initial height from sound. We also collect human responses for each of these tasks and compare them with model estimates. Our results indicate that humans are quite successful at these tasks; our model not only closely matches human successes, but also makes errors similar to those humans make. Because quality audio data with rich labels are hard to acquire, we use synthetic data to evaluate our models on the above tasks. To show that our model is capable of such inference given real world audio, we also tested it on real captured data under constrained settings.

Our work makes three contributions. First, we propose a novel model for estimating the physical properties of objects from auditory inputs by incorporating the feedback of a physics engine and an audio engine into the inference process. Second, we train a deep-learning-based recognition model that leads to efficient inference in the generative model. Third, we test our model and compare it to humans on a variety of judgment tasks, and demonstrate the correlation between human responses and model estimates.

2 Related Work

Human visual and auditory perception In the field of auditory perception, or psychoacoustics, researchers have explored over the past decades how humans infer object properties, including shape, material, and size, from audio [Zwicker and Fastl, 2013, Kunkler-Peck and Turvey, 2000, Rocchesso and Fontana, 2003, Klatzky et al., 2000, Siegel et al., 2014]. Recently, McDermott et al. [2013] proposed compact sound representations that capture semantic information and are informative of human auditory perception.

Figure 2: Our inference pipeline. We use Gibbs sampling over the latent variables (shape, rotation, height, material); the conditional probability is approximated using the likelihood between the reconstructed sound and the target audio.
Sound simulation Our sound synthesis engine builds upon and extends existing sound simulation systems in computer graphics and computer vision [O'Brien et al., 2001, 2002, James et al., 2006, Bonneel et al., 2008, Van den Doel and Pai, 1998, Zhang et al., 2017]. Van den Doel and Pai [1998] simulated object vibration using the finite element method and approximated the vibrating object as a single point source. O'Brien et al. [2001, 2002] used the Rayleigh method to approximate wave equation solutions for better synthesis quality. James et al. [2006] proposed to solve the Helmholtz equation using the boundary element method, where each object's vibration mode is approximated by a set of vibrating points. Recently, Zhang et al. [2017] built a framework for synthesizing large-scale audio-visual data. In this paper, we accelerate the framework of Zhang et al. [2017] to achieve near real-time rendering, and we explore learning object representations from sound with the synthesis engine in the loop.

Physical object perception There has been growing interest in understanding physical object properties, like mass and friction, from visual input or scene dynamics [Chang et al., 2017, Battaglia et al., 2016, Wu et al., 2015, 2016, 2017]. Most existing research focuses on recovering object properties from visual data. Recently, researchers have started to explore learning object representations from sound. Owens et al. [2016a] attempted to infer material properties from audio, focusing on the scenario of hitting objects with a drumstick. Owens et al. [2016b] further demonstrated that audio signals can be used as supervision for learning object concepts from visual data, and Aytar et al. [2016] proposed to learn sound representations from corresponding video frames. Zhang et al. [2017] discussed the complementary roles of auditory and visual data in recovering both geometric and physical object properties. In this paper, we propose to learn physical object representations through a combination of powerful deep recognition models and analysis-by-synthesis inference methods.

Analysis-by-synthesis Our framework also relates to the field of analysis-by-synthesis, or generative models with data-driven proposals [Yuille and Kersten, 2006, Zhu and Mumford, 2007, Wu et al., 2015], as we incorporate a graphics engine as a black-box synthesizer. Unlike earlier methods that focus mostly on explaining visual data, our work aims to infer latent parameters from auditory data. Please refer to Bever and Poeppel [2010] for a review of analysis-by-synthesis methods.

3 An Efficient, Physics-Based Audio Engine

At the core of our inference pipeline is an efficient audio synthesis engine. In this section, we first give a brief overview of existing synthesis engines, and then present our technical innovations for accelerating existing systems to real-time rendering for our inference algorithm.

3.1 Audio Synthesis Engine

Audio synthesis engines generate realistic sound by following fundamental physical laws. First, the interaction between an object and the environment is generated using rigid body simulation, where Newton's laws dictate the object's motion and collisions over time. According to vibration analysis, each collision causes the object to vibrate in certain patterns, changing the air pressure around its surface. Such turbulence then propagates through the air to the recording position and creates the sound of this physical process.

Figure 3: Our 1D deep convolutional network (SoundNet-8). Its architecture follows that in Aytar et al. [2016], where raw audio waves are forwarded through consecutive conv-pool layers (conv1/pool1 through conv7/pool7), and then passed to a fully connected layer to produce the output.

Settings             Time (s)
Original algorithm   30.4
Amplitude cutoff     24.5
Principal modes      12.7
Multi-threading      1.5
All                  0.8

Table 1: Acceleration breakdown for each technique we adopt. Timing is evaluated by synthesizing an audio clip with 200 collisions. The last row reports the final timing after adopting all techniques.

Rigid body simulation Given an object's 3D position and orientation together with its mass and restitution, a physics engine can simulate the physical process and output the object's position, orientation, and collision information over time. In our implementation, an open-source physics engine, Bullet [Coumans, 2010], is used to simulate this process. To achieve accurate results, we use a time step of 1/300 second for the simulation. Specifically, we record the 3D pose and position of the object over time, as well as the collision locations, magnitudes, and directions. The object's sound can then be approximated by accumulating the sounds caused by these discrete impulse collisions on its surface.

Audio synthesis The audio synthesis procedure is built upon previous work on simulating realistic sounds [James et al., 2006, Bonneel et al., 2008, O'Brien et al., 2001]. To facilitate fast synthesis, the process is decomposed into two parts: offline and online. The offline part first uses the finite element method (FEM) to obtain the object's vibration modes, which depend on the object's shape and its Young's modulus. The vibration modes are then used as Neumann boundary conditions of the Helmholtz equation, which can be solved using the boundary element method (BEM). The solution is then approximated using the techniques reported by James et al. [2006], where the resulting pressure field is approximated by a sparse set of vibrating points. Note that the computation above depends only on the object's shape and Young's modulus, not on the physical process the object undergoes. This allows us to pre-compute a number of shape-modulus configurations before simulation; only minimal computation is needed at simulation time. The online part of the audio engine loads the pre-computed approximation and decomposes the impulses on the object's surface mesh onto its modal bases. Summing the pressure changes at the observation point induced by the vibrations of each mode produces the desired sound. An evaluation of its authenticity can be found in Zhang et al. [2017].

3.2 Accelerating Audio Synthesis

The prerequisite for efficient inference via analysis-by-synthesis is fast audio synthesis. Unfortunately, the simulation procedure described above is expensive to compute. We therefore present three ways of accelerating the computation to near real-time. First, we pick out the most significant modes excited by each impulse: by setting a threshold at a 90% energy cutoff, we shorten the computation by ignoring sound components generated by insignificant modes, which account for about one-half of all modes on average. Second, we stop synthesizing once the amplitude of the sound has damped below a small threshold that humans can hardly perceive. Third, we parallelize the synthesis process by treating each collision independently, so that each can be computed on an independent thread; we then join the completed threads into a shared buffer according to their time stamps.
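To illustrate the first two acceleration tricks, the sketch below synthesizes one collision's contribution as a sum of exponentially damped sinusoids (one per mode), keeping only the modes carrying roughly 90% of the excited energy and truncating once the envelope becomes inaudible. The modal frequencies, dampings, and gains are assumed inputs (in the full system they come from the FEM/BEM precomputation); this is a simplified stand-in for, not a reproduction of, the paper's engine.

```python
import numpy as np

def synthesize_impulse(freqs, damps, gains, sr=44100,
                       keep_energy=0.9, amp_floor=1e-4):
    # Rough per-mode energy proxy for a damped sinusoid g * exp(-d t) * sin(2 pi f t).
    energy = gains ** 2 / (2.0 * damps)
    order = np.argsort(energy)[::-1]
    cum = np.cumsum(energy[order]) / energy.sum()
    kept = order[:np.searchsorted(cum, keep_energy) + 1]  # ~90% energy cutoff
    # Stop once the slowest-decaying kept mode falls below the audibility floor.
    t_end = max(np.max(np.log(np.abs(gains[kept]) / amp_floor) / damps[kept]), 0.0)
    t = np.arange(0.0, t_end, 1.0 / sr)
    out = np.zeros_like(t)
    for k in kept:
        out += gains[k] * np.exp(-damps[k] * t) * np.sin(2 * np.pi * freqs[k] * t)
    return out
```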
The effect of acceleration is shown in Table 1. Online sound synthesis involves only variables that are fully decoupled from the offline stage, which enables us to freely manipulate these variables at little computational cost during simulation.

Variable                    Range             C/D
Primitive shape (s)         14 classes        D
Specific modulus (E/ρ)      [1, 30] × 10^6    D
Height (z)                  [1, 2]            C
Restitution (e)             [0.6, 0.9]        C
Rotation axis (i, j, k)     S^2               C
Rotation angle (ω)          [−π, π)           C
Rayleigh damping (α)        10^[−8, −5]       C
Rayleigh damping (β)        2^[0, 5]          C

Table 2: Variables in our generative model, where the C/D column indicates whether sampling takes place in a continuous (C) or discrete (D) domain, and the values inside parentheses are the ranges we uniformly sample from. Rotation is defined in quaternions.

3.3 Generating Stimuli

Because real audio recordings with rich labels are hard to acquire, we synthesize random audio clips using our physics-based simulation in order to test and evaluate our models. Specifically, we restrict ourselves to primitive objects falling onto the ground. We first construct a sound statistics data set that includes 14 primitives (partly shown in Table 2), each with 10 different specific moduli (defined as Young's modulus over density). With this pre-computed data, we are able to generate synthetic audio clips in near real-time. Since the process of an object falling onto the ground is relatively fast, we set the total simulation time of each scenario to 3 seconds. A detailed synthesis setup can be found in Table 2.

4 Inference

In this section, we investigate four models for inferring object properties, each corresponding to a different scenario. We start from an unsupervised model where the input is only a single test case with no annotation. Inspired by how humans can infer scene information using a mental physics engine [Battaglia et al., 2013, Sanborn et al., 2013], we adopt Gibbs sampling over latent variables to find the combination that best reproduces the given audio. We develop this model further with help from past experience, where a deep neural network is trained using data with inferred labels as supervision. Such a self-supervised scheme is able to approximate the most probable configurations faster. We further investigate the case where labels can be acquired but are extremely coarse. We first train a recognition model with weak labels, then randomly pick candidates from those labels as an initialization for our analysis-by-synthesis inference. Lastly, in order to understand the limit of this inference task, we train a deep neural network with fully labeled data, which yields the upper-bound performance.

4.1 Models

Unsupervised Given an audio clip S, we would like to recover all latent variables x, so that the reproduced sound g(x) is most similar to S. Suppose L(·, ·) is a likelihood function that measures the perceptual distance between two sounds; then the goal is to maximize L(g(x), S). We denote L(g(x), S) as p(x) for brevity. In order to find x that maximizes p(x), p(x) can be treated as a distribution p̃(x) scaled by an unknown partition function Z. Since we do not have an exact form for p(·) or p̃(x), Gibbs sampling is applied to draw samples from p(x) using conditional probabilities. Specifically, at sweep round t, we update each variable x_i by drawing samples from

p̃(x_i | x_1^t, x_2^t, ..., x_{i−1}^t, x_{i+1}^{t−1}, ..., x_n^{t−1}).   (1)

Such conditional probabilities, however, are much easier to approximate.
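For concreteness, a minimal sketch of the Gibbs sweep implied by Eq. (1) follows; the per-variable conditional sampler is left abstract, and its callback interface is our assumption rather than the paper's.

```python
def gibbs_sampling(x, variables, sample_conditional, n_sweeps=80):
    """Gibbs sampling over latent variables.

    x                  -- dict mapping variable names to current values
    variables          -- names of the 7 latent variables
    sample_conditional -- callback drawing x_i from its (approximate)
                          conditional given all other variables; also
                          receives the sweep index t for annealing
    Each variable is visited twice per sweep, matching the sampling
    setup described in Section 4.2.
    """
    for t in range(n_sweeps):
        for _ in range(2):
            for name in variables:
                x[name] = sample_conditional(name, x, t)
    return x
```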
For example, to sample Young's modulus conditioned on the other variables, we could simply use the spectrogram as features and measure the l2 distance between two sounds, since Young's modulus only affects the frequency at each collision. Based on this observation, we use the spectrogram as features for all variables except height. Since the height can be inferred from the time of the first collision, a simple likelihood function can be designed by measuring the time difference between the first impacts in the two sounds. Note that this is only an approximate measure: the object's shape and orientation also affect the time of the first impact. Nonetheless, since we are only concerned with conditional probabilities, such a measure can be very effective.

To sample from the conditional probabilities, we adopt the Metropolis-Hastings algorithm, where samples are drawn from a Gaussian distribution and are accepted by flipping a biased coin according to their likelihood compared with the previous sample. Specifically, we calculate the l2 distance d_t in feature space between g(x_t) and S. For a new sample x_{t+1}, we also calculate the l2 distance d_{t+1} in feature space between g(x_{t+1}) and S. The new sample is accepted if d_{t+1} is smaller than d_t; otherwise, x_{t+1} is accepted with probability exp(−(d_{t+1} − d_t)/T), where T is a time-varying function inspired by the simulated annealing algorithm. In our implementation, T is set as a quadratic function of the current MCMC sweep number t.

Self-supervised Learning To accelerate the above sampling process, we propose a self-supervised model, which is analogous to a Helmholtz machine trained by the wake-sleep algorithm. We first train a deep neural network whose labels are generated by running the unsupervised inference model suggested above for a limited number of iterations. For a new audio clip, our self-supervised model uses the result from the neural network as an initialization, and then runs our analysis-by-synthesis algorithm to make further inferences. By making use of past experience, the sampling process is expected to start from a better position, and to take far fewer iterations to converge than the unsupervised model.

Weakly-supervised Learning We further investigate the case where weak supervision might help accelerate the inference process. Since the latent variables we aim to recover are hard to obtain in real-world settings, it is more realistic to assume that we can acquire very coarse labels, such as the type of material, rough attributes of the object's shape, the height of the fall, etc. Based on these assumptions, we coarsen the ground-truth labels for all variables. For our primitive shapes, three attributes are defined, namely "with edge," "with curved surface," and "pointy." The material parameters, i.e., specific modulus, Rayleigh damping coefficients and restitution, are mapped to steel, ceramic, polystyrene and wood by finding the nearest neighbor among those real material parameters. Height is divided into "low" and "high" categories. A deep convolutional neural network is trained on our synthesized dataset with these coarse labels. As shown in Figure 4, even when trained using coarse labels, our network learns features very similar to those learned by the fully supervised network. To go beyond the coarse labels, the unsupervised model is applied using the initialization suggested by the neural network.

Fully-supervised Learning To investigate the limit of this inference task, we train an oracle model with ground-truth labels. To visualize what kinds of abstractions and characteristic features are learned by the oracle model, we plot the inputs that maximally activate some hidden units in the last layer of the network. Figure 4 illustrates some of the most interesting waveforms: a selection of units learned to recognize specific temporal patterns, while others are most sensitive to specific frequencies. Similar patterns are found between the weakly and fully supervised models.
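Returning to the sampler of Section 4.1, a sketch of the Metropolis-Hastings acceptance rule described above follows. The paper states only that T is a quadratic function of the sweep index, so the concrete schedule below is our assumption.

```python
import numpy as np

def mh_accept(d_old, d_new, t, T0=1.0):
    """Metropolis-Hastings acceptance on feature-space distances.

    A better sample (smaller distance to the target sound) is always
    accepted; a worse one is accepted with probability
    exp(-(d_new - d_old) / T), where the temperature T varies
    quadratically with the MCMC sweep index t (illustrative schedule).
    """
    T = T0 / (1.0 + t) ** 2
    if d_new < d_old:
        return True
    return np.random.rand() < np.exp(-(d_new - d_old) / T)
```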
4.2 Contrasting Model Performance

We would like to evaluate how well our model performs under different settings, and especially how past experience or coarse labeling improves over the unsupervised model. We first present the implementation details of all four models, then compare their quantitative results on all inference tasks.

Sampling Setup We performed 80 sweeps of MCMC sampling over all 7 latent variables; in every sweep, each variable is sampled twice. Shape, specific modulus and rotation are sampled from uniform distributions across their corresponding dimensions. For the other continuous variables, we define an auxiliary Gaussian variable x_i ~ N(μ_i, σ_i²) for sampling, where the mean μ_i is based on the current state. To evaluate the likelihood function between the input and the sampled audio (both with a sample rate of 44.1 kHz), we compute the spectrogram of the signal using a Tukey window of length 5,000 with a 2,000-sample overlap. For each window, a 10,000-point Fourier transform is applied.

Deep Learning Setup Our fully supervised and self-supervised recognition models inherit the architecture of SoundNet-8 [Aytar et al., 2016] shown in Figure 3, which takes an arbitrarily long raw audio wave as input and produces a 1024-dim feature vector. We append a fully connected layer at the end to produce a 28-dim vector as the final output of the neural network. The first 14 dimensions are the one-hot encoding of the primitive shapes, and the following 10 dimensions are encodings of the specific modulus. The last 4 dimensions regress the initial height, the two Rayleigh damping coefficients and the restitution, respectively. All regression dimensions are normalized to a [−1, 1] range. The weakly supervised model preserves the structure of the fully supervised one, but with an 8-dim final output: 3 for shape attributes, 1 for height, and 4 for materials. We used stochastic gradient descent for training, with a learning rate of 0.001, a momentum of 0.9 and a batch size of 16. Mean Square Error (MSE) loss is used for back-propagation.

Figure 4: Visualization of the top two sound waves that most significantly activate a hidden unit, in the temporal and spectral domains (time-domain panels: small damping, multiple collisions, low frequency; frequency-domain panels: mid frequency, high frequency; rows: weakly supervised, fully supervised). Their common characteristics reflect the values of some latent variables, e.g. Rayleigh damping, restitution and specific modulus from left to right. Both weakly and fully supervised models capture similar features.

Inference Model      shape          mod.           height          α               β
                     init / infer   init / infer   init / infer    init / infer    init / infer
Unsupervised         8% / 54%       10% / 56%      0.179 / 0.003   0.144 / 0.069   0.161 / 0.173
Self-supervised      14% / 52%      16% / 62%      0.060 / 0.005   0.092 / 0.061   0.096 / 0.117
Weakly supervised    18% / 62%      12% / 66%      0.018 / 0.005   0.077 / 0.055   0.095 / 0.153
Fully supervised     -- / 98%       -- / 100%      -- / 0.001      -- / 0.001      -- / 0.051

Table 3: Initial (init) and final (infer) classification accuracies and label MSE errors of three different inference models after 80 iterations of MCMC, upper-bounded by the fully supervised model.
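To make the output encoding concrete, here is a minimal sketch of how the 28-dimensional output described above decomposes; the helper function and its name are ours, not part of the released code.

```python
import numpy as np

def decode_output(y):
    """Split the 28-dim network output into its parts: 14-way shape
    logits, 10-way specific-modulus logits, and 4 regressed values
    (height, two Rayleigh damping coefficients, restitution), each
    regression target normalized to [-1, 1]."""
    assert len(y) == 28
    shape = int(np.argmax(y[:14]))          # one of 14 primitive shapes
    modulus = int(np.argmax(y[14:24]))      # one of 10 specific moduli
    height, alpha, beta, restitution = y[24:28]
    return shape, modulus, height, alpha, beta, restitution
```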
We implemented our framework in Torch7 [Collobert et al., 2011], and trained all models from scratch.

Results Results for the four inference models proposed above are shown in Table 3. For shapes and specific modulus, we evaluate the results as classification accuracies; for height, Rayleigh damping coefficients, and restitution, results are evaluated as MSE. Before calculating the MSE, we normalize the values of each latent variable to the [−1, 1] interval, so that the MSE scores are comparable across variables. From Table 3, we can conclude that the self-supervised and weakly supervised models provide better initializations for our analysis-by-synthesis algorithm, especially on the last four continuous latent variables. One can also observe that their final inference accuracies and MSEs are marginally better than in the unsupervised case. To illustrate the rate of convergence, we plot the likelihood value, exp(−kd), where d is the distance between sound features, over iterations of MCMC in Figure 5. The mean curve of the self-supervised model meets our expectation, i.e., it converges much faster than the unsupervised model, and reaches a slightly higher likelihood at the end of 30 iterations. The fully supervised model, which is trained on 200,000 audio clips with the full set of ground-truth labels, yields near-perfect results for all latent variables.

Figure 5: Left and middle: confusion matrices of material classification (steel, ceramic, polystyrene, wood) performed by humans and by our unsupervised model. Right: mean likelihood curve over the number of MCMC sweeps, for random and self-supervised initializations.

Figure 6: Human performance and unsupervised-model performance comparison on five tasks (has edge, has curved surface, is pointy, height, material), with accuracy plotted against the number of MCMC iterations (10, 30, 50, 80). The horizontal line represents human performance for each task. Our algorithm closely matches human performance.

5 Evaluations

We first evaluate the performance of our inference procedure by comparing it with human performance. The evaluation is conducted using synthetic audio with ground-truth labels. Then, we investigate whether our inference algorithm performs well on real-world recordings. Given audio recorded under our experimental settings, our algorithm is able to distinguish the object's shape among several candidates.

5.1 Human Studies

We aim to understand how capable our inference model is relative to humans. We designed three tasks on inferring an object's shape, material and height of fall, the most intuitive attributes when hearing a falling object. These tasks are designed as classification problems, where the labels are in accordance with the coarse labels used by our weakly-supervised model. The study was conducted on Amazon Mechanical Turk. For each experiment (shape, material, height), we randomly selected 52 test cases. Before answering test questions, each subject is shown 4 training examples with ground truths to familiarize them with the setup.
We collected 192 responses for the experiment on inferring shape, 566 for material, and 492 for height, resulting in a total of 1,250 responses.

Inferring Shapes After being familiarized with the experiment, participants are asked to make three binary judgments about the shape by listening to our synthesized audio clip. Prior examples are given so that people understand the distinctions between the "with edge," "with curved surface," and "pointy" attributes. Due to material variations, humans mostly make these decisions based on temporal rather than spectral information, i.e. the time sequence of collisions. As shown in Figure 6, humans are relatively good at recognizing shape attributes from sound, and the unsupervised algorithm reaches a similar level of competency after running for 10-30 iterations.

Inferring Materials We sampled audio clips whose physical properties (density, Young's modulus and damping coefficients) are in the vicinity of the true parameters of steel, ceramic, polystyrene and wood. Participants are required to choose one out of the four possible materials. However, it can still be challenging to distinguish between materials, especially when the sampled ones have similar damping and specific modulus. Our algorithm occasionally confuses steel with ceramic and ceramic with polystyrene, which is in accordance with human performance, as shown in Figure 5.

Figure 7: Results of inference on real-world data. The test recording is made by dropping the metal die in (a). Our inferred shape and reproduced sound are shown in (b). The normalized likelihood over iterations is plotted in (c).

Inferring Heights In this task, we ask participants to choose whether the object is dropped from a high position or a low one. We provided example videos and audio to help people anchor the reference height. Under our scene setup, the touchdown times of the two extremes of the height range differ by 0.2 s. To address the potential bias that algorithms may be better at exploiting falling time, we took several measures. First, we explicitly told humans that the silence at the beginning is informative. Second, we made sure that the anchoring example is always available during the test, so participants can compare against and refer to it. Third, each participant has to play each test clip manually, and therefore has control over when the audio begins. Last, we tested on different object shapes: because the time of first impact is shape-dependent, differently shaped objects dropped from the same height have their first impacts at different times, making it harder for the machine to exploit this cue.

5.2 Transferring to Real Scenes

In addition to the synthetic data, we designed real-world experiments to test our unsupervised model. We select three candidate shapes: tetrahedron, octahedron, and dodecahedron. We record the sound of a metal octahedron dropping onto a table and use our unsupervised model to recover the latent variables. Because real-world scenarios may introduce highly complex factors that cannot be exactly modeled in our simulation, a more robust feature and metric are needed. For every audio clip, we use its temporal energy distribution as the feature, which is derived from the spectrogram. A window of 2,000 samples with a 1,500-sample overlap is used to calculate the energy distribution.
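As an illustration, here is a minimal sketch of this temporal energy-distribution feature, assuming a mono waveform stored as a NumPy array; the function name is ours.

```python
import numpy as np

def energy_distribution(wave, win=2000, hop=500):
    """Temporal energy distribution of an audio clip: windowed signal
    energy (window of 2,000 samples with 1,500-sample overlap, i.e. a
    hop of 500), normalized to sum to one so it can be compared as a
    distribution, e.g. with the earth mover's distance below."""
    energies = []
    for start in range(0, len(wave) - win + 1, hop):
        seg = wave[start:start + win]
        energies.append(float(np.sum(seg ** 2)))
    e = np.asarray(energies)
    return e / (e.sum() + 1e-12)
```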
Then, we use the earth mover's distance (EMD) [Rubner et al., 2000] as the metric, a natural choice for measuring distances between distributions. The inference result is illustrated in Figure 7. Using the energy distribution with the EMD measure, our generated sound aligns its energy at major collision events with the real audio, which greatly reduces ambiguities among the three candidate shapes. We also plot our normalized likelihood function over time to show that our sampling has converged to produce highly probable samples.

6 Conclusion

In this paper, we propose a novel model for estimating physical properties of objects from auditory inputs, by incorporating the feedback of an efficient audio synthesis engine in the loop. We demonstrate the possibility of accelerating inference with fast recognition models. We compare our model predictions with human responses on a variety of judgment tasks and demonstrate the correlation between human responses and model estimates. We also show that our model generalizes to constrained real data.

Acknowledgements

The authors would like to thank Changxi Zheng, Eitan Grinspun, and Josh H. McDermott for helpful discussions. This work is supported by NSF #1212849 and #1447476, ONR MURI N00014-16-1-2007, the Toyota Research Institute, Samsung, Shell, and the Center for Brains, Minds and Machines (NSF STC award CCF-1231216).

References

Yusuf Aytar, Carl Vondrick, and Antonio Torralba. SoundNet: Learning sound representations from unlabeled video. In NIPS, 2016.
Peter W Battaglia, Jessica B Hamrick, and Joshua B Tenenbaum. Simulation as an engine of physical scene understanding. PNAS, 110(45):18327–18332, 2013.
Peter W Battaglia, Razvan Pascanu, Matthew Lai, Danilo Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In NIPS, 2016.
Thomas G Bever and David Poeppel. Analysis by synthesis: a (re-)emerging program of research for language and vision. Biolinguistics, 4(2-3):174–200, 2010.
Nicolas Bonneel, George Drettakis, Nicolas Tsingos, Isabelle Viaud-Delmon, and Doug James. Fast modal sounds with scalable frequency-domain synthesis. ACM TOG, 27(3):24, 2008.
Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. In ICLR, 2017.
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.
Erwin Coumans. Bullet physics engine. Open Source Software: http://bulletphysics.org, 2010.
Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural Comput., 7(5):889–904, 1995.
Doug L James, Jernej Barbič, and Dinesh K Pai. Precomputed acoustic transfer: output-sensitive, accurate sound generation for geometrically complex vibration sources. ACM TOG, 25(3):987–995, 2006.
Roberta L Klatzky, Dinesh K Pai, and Eric P Krotkov. Perception of material from contact sounds. Presence: Teleoperators and Virtual Environments, 9(4):399–410, 2000.
Andrew J Kunkler-Peck and MT Turvey. Hearing shape. J. Exp. Psychol. Hum. Percept. Perform., 26(1):279, 2000.
Josh H McDermott, Michael Schemitsch, and Eero P Simoncelli. Summary statistics in auditory perception. Nat. Neurosci., 16(4):493–498, 2013.
James F O'Brien, Perry R Cook, and Georg Essl. Synthesizing sounds from physically based motion. In SIGGRAPH, 2001.
James F O'Brien, Chen Shen, and Christine M Gatchalian.
Synthesizing sounds from rigid-body simulations. In SCA, 2002.
Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H Adelson, and William T Freeman. Visually indicated sounds. In CVPR, 2016a.
Andrew Owens, Jiajun Wu, Josh H McDermott, William T Freeman, and Antonio Torralba. Ambient sound provides supervision for visual learning. In ECCV, 2016b.
Davide Rocchesso and Federico Fontana. The Sounding Object. Mondo Estremo, 2003.
Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99–121, 2000.
Adam N Sanborn, Vikash K Mansinghka, and Thomas L Griffiths. Reconciling intuitive physics and Newtonian mechanics for colliding objects. Psychol. Rev., 120(2):411, 2013.
Max Siegel, Rachel Magid, Joshua B Tenenbaum, and Laura Schulz. Black boxes: Hypothesis testing via indirect perceptual evidence. In CogSci, 2014.
Kees van den Doel and Dinesh K Pai. The sounds of physical shapes. Presence: Teleoperators and Virtual Environments, 7(4):382–395, 1998.
Jiajun Wu, Ilker Yildirim, Joseph J Lim, William T Freeman, and Joshua B Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In NIPS, 2015.
Jiajun Wu, Joseph J Lim, Hongyi Zhang, Joshua B Tenenbaum, and William T Freeman. Physics 101: Learning physical object properties from unlabeled videos. In BMVC, 2016.
Jiajun Wu, Erika Lu, Pushmeet Kohli, William T Freeman, and Joshua B Tenenbaum. Learning to see physics via visual de-animation. In NIPS, 2017.
Alan Yuille and Daniel Kersten. Vision as Bayesian inference: analysis by synthesis? TiCS, 10(7):301–308, 2006.
Zhoutong Zhang, Jiajun Wu, Qiujia Li, Zhengjia Huang, James Traer, Josh H. McDermott, Joshua B. Tenenbaum, and William T. Freeman. Generative modeling of audible shapes for object perception. In ICCV, 2017.
Song-Chun Zhu and David Mumford. A stochastic grammar of images. Foundations and Trends in Computer Graphics and Vision, 2(4):259–362, 2007.
Eberhard Zwicker and Hugo Fastl. Psychoacoustics: Facts and Models, volume 22. Springer Science & Business Media, 2013.
Flexible statistical inference for mechanistic models of neural dynamics

Jan-Matthis Lueckmann*1, Pedro J. Gonçalves*1, Giacomo Bassetto1, Kaan Öcal1,2, Marcel Nonnenmacher1, Jakob H. Macke†1
1 research center caesar, an associate of the Max Planck Society, Bonn, Germany
2 Mathematical Institute, University of Bonn, Bonn, Germany
{jan-matthis.lueckmann, pedro.goncalves, giacomo.bassetto, kaan.oecal, marcel.nonnenmacher, jakob.macke}@caesar.de

Abstract

Mechanistic models of single-neuron dynamics have been extensively studied in computational neuroscience. However, identifying which models can quantitatively reproduce empirically measured data has been challenging. We propose to overcome this limitation by using likelihood-free inference approaches (also known as Approximate Bayesian Computation, ABC) to perform full Bayesian inference on single-neuron models. Our approach builds on recent advances in ABC by learning a neural network which maps features of the observed data to the posterior distribution over parameters. We learn a Bayesian mixture-density network approximating the posterior over multiple rounds of adaptively chosen simulations. Furthermore, we propose an efficient approach for handling missing features and parameter settings for which the simulator fails, as well as a strategy for automatically learning relevant features using recurrent neural networks. On synthetic data, our approach efficiently estimates posterior distributions and recovers ground-truth parameters. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, which yield model-predicted voltage traces that accurately match empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical approaches to single-neuron modelling.

1 Introduction

Biophysical models of neuronal dynamics are of central importance for understanding the mechanisms by which neural circuits process information and control behaviour. However, identifying which models of neural dynamics can (or cannot) reproduce electrophysiological or imaging measurements of neural activity has been a major challenge [1]. In particular, many models of interest (such as multi-compartment biophysical models [2], networks of spiking neurons [3] or detailed simulations of brain activity [4]) have intractable or computationally expensive likelihoods, and statistical inference has only been possible in selected cases and using model-specific algorithms [5, 6, 7]. Many models are defined implicitly through simulators, i.e. a set of dynamical equations and possibly a description of sources of stochasticity [1]. In addition, it is often of interest to identify models which can reproduce particular features in the data, e.g. a firing rate or response latency, rather than the full temporal structure of a neural recording.

* Equal contribution
† Current primary affiliation: Centre for Cognitive Science, Technical University Darmstadt

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Flexible likelihood-free inference for models of neural dynamics. A.
We want to flexibly and efficiently infer the posterior over model parameters given observed data, on a wide range of models of neural dynamics. B. Our method approximates the true posterior on θ around the observed data x_o by performing density estimation on data simulated using a proposal prior. C. We train a Bayesian mixture-density network (MDN) for posterior density estimation.

In the absence of likelihoods, the standard approach in neuroscience has been to use heuristic parameter-fitting methods [2, 8, 9]: distance measures are defined on multiple features of interest, and brute-force search [10, 11] or evolutionary algorithms [2, 9, 12, 13] (neither of which scales to high-dimensional parameter spaces) are used to minimise the distances between observed and model-derived features. As it is difficult to trade off distances between different features, state-of-the-art methods optimise multiple objectives and leave the final choice of a model to the user [2, 9]. As this approach is not based on statistical inference, it does not provide estimates of the full posterior distribution: thus, while it has been of great importance for identifying 'best fitting' parameters, it does not allow one to identify the full space of parameters that are consistent with data and prior knowledge, or to incrementally refine and reject models.

Bayesian inference for likelihood-free simulator models, also known as Approximate Bayesian Computation [14, 15, 16], provides an attractive framework for overcoming these limitations: like parameter-fitting approaches in neuroscience [2, 8, 9], it is based on comparing summary features between simulated and empirical data. However, unlike them, it provides a principled framework for full Bayesian inference and can be used to determine how to trade off goodness-of-fit across summary statistics. To the best of our knowledge, however, this potential has not yet been realised, and ABC approaches are not used for linking mechanistic models of neural dynamics with experimental data (for an exception, see [17]).

Here, we propose to use ABC methods for statistical inference of mechanistic models of single neurons. We argue that ABC approaches based on conditional density estimation [18, 19] are particularly suited for neuroscience applications. We present a novel method (Sequential Neural Posterior Estimation, SNPE) in which we sequentially train a mixture-density network across multiple rounds of adaptively chosen simulations.¹ Our approach is directly inspired by prior work [18, 19], but overcomes critical limitations: first, a flexible mixture-density network trained with an importance-weighted loss function enables us to use complex proposal distributions and to approximate complex posteriors. Second, we represent a full posterior over the network parameters of the density estimator (i.e. a 'posterior on posterior-parameters'), which allows us to take uncertainty into account when adjusting weights. This enables us to perform 'continual learning', i.e. to effectively utilise all simulations without explicitly having to store them. Third, we introduce an approach for efficiently dealing with simulations that return missing values, or which break altogether (a common situation in neuroscience and many other applications of simulator-based models), by learning a model that predicts which parameters are likely to lead to breaking simulations, and using this knowledge to modify the proposal distribution.
We demonstrate the practical effectiveness and importance of these innovations on biophysical models of single neurons, on simulated and neurophysiological data. Finally, we show how recurrent neural networks can be used to directly learn relevant features from time-series data.

¹ Code available at https://github.com/mackelab/delfi

1.1 Related work using likelihood-free inference for simulator models

Given experimental data x_o (e.g. intracellular voltage measurements of a single neuron, or extracellular recordings from a neural population), a model p(x|θ) parameterised by θ (e.g. biophysical parameters, or connectivity strengths in a network simulation) and a prior distribution p(θ), our goal is to perform statistical inference, i.e. to find the posterior distribution p̂(θ|x = x_o). We assume that the model p(x|θ) is only defined through a simulator [14, 15]: we can generate samples x_n ~ x|θ from it, but not evaluate p(x|θ) (or its gradients) explicitly. In neural modelling, many models are defined through the specification of a dynamical system with external or intrinsic noise sources, or even through a black-box simulator (e.g. using the NEURON software [20]). In addition, and in line with parameter-fitting approaches in neuroscience and most ABC techniques [14, 15, 21], we are often interested in capturing summary statistics of the experimental data (e.g. firing rate, spike latency, resting potential of a neuron). Therefore, we can think of x as resulting from applying a feature function f to the raw simulator output s, x = f(s), with dim(x) ≪ dim(s).

Classical ABC algorithms simulate from multiple parameters, and reject parameter sets which yield data that are not within a specified distance from the empirically observed features. In their basic form, proposals are drawn from the prior ('rejection-ABC' [22]). More efficient variants make use of Markov-chain Monte Carlo [23, 24] or Sequential Monte Carlo (SMC) samplers [25, 26]. Sampling-based ABC approaches require the design of a distance metric on summary features, as well as a rejection criterion ε, and are exact only in the limit of small ε (i.e. many rejections) [27], implying strong trade-offs between accuracy and scalability. In SMC-ABC, importance sampling is used to sequentially sample from more accurate posteriors while ε is gradually decreased.

Synthetic-likelihood methods [28, 21, 29] approximate the likelihood p(x|θ) using multivariate Gaussians fitted to repeated simulations given θ (see [30, 31] for generalisations). While the Gaussianity assumption is often motivated by the central limit theorem, distributions over features can in practice be complex and highly non-Gaussian [32]. For example, neural simulations sometimes result in systematically missing features (e.g. spike latency is undefined if there are no spikes), or in diverging firing rates.

Finally, methods originating from regression correction [33, 18, 19] simulate multiple data x_n from different θ_n sampled from a proposal distribution p̃(θ), and construct a conditional density estimate q(θ|x) by performing a regression from simulated data x_n to θ_n. Evaluating this density model at the observed data x_o, q(θ|x_o), yields an estimate of the posterior distribution. These approaches do not require parametric assumptions on likelihoods, or the choice of a distance function and a tolerance ε on features. Two approaches are used for correcting the mismatch between prior and proposal distributions: Blum and François [18] proposed the importance weights p(θ)/p̃(θ)
, but restricted themselves to proposals which were truncated priors (i.e. all importance weights were 0 or 1), and did not sequentially optimise proposals over multiple rounds. Papamakarios and Murray [19] recently used stochastic variational inference to optimise the parameters of a mixture-density network, and a post-hoc division step to correct for the effect of the proposal distribution. While highly effective in some cases, this closed-form correction step can be numerically unstable and is restricted to Gaussian and uniform proposals, limiting both the robustness and flexibility of this approach.

SNPE builds on these approaches, but overcomes their limitations by introducing four innovations: a highly flexible proposal distribution parameterised as a mixture-density network, a Bayesian approach for continual learning from multiple rounds of simulations, and a classifier for predicting which parameters will result in aborted simulations or missing features. Fourth, we show how this approach, when applied to time-series data of single-neuron activity, can automatically learn summary features from data.

2 Methods

2.1 Sequential Neural Posterior Estimation for likelihood-free inference

In SNPE, our goal is to learn the parameters φ of a posterior model q_φ(θ|x = f(s)) which, when evaluated at x_o, approximates the true posterior p(θ|x_o) ≈ q_φ(θ|x = x_o). Given a prior p(θ), a proposal prior p̃(θ), pairs of samples (θ_n, x_n) generated from the proposal prior and the simulator, and a calibration kernel K_τ, the posterior model can be trained by minimising the importance-weighted log-loss

L(φ) = −(1/N) Σ_n [ p(θ_n) / p̃(θ_n) ] K_τ(x_n, x_o) log q_φ(θ_n | x_n),   (1)

as is shown by extending the argument in [19] with importance weights p(θ_n)/p̃(θ_n) and a kernel K_τ in Appendix A. Sampling from a proposal prior can be much more effective than sampling from the prior. By including the importance weights in the loss, the analytical correction step of [19] (i.e. division by the proposal prior) becomes unnecessary: SNPE directly estimates the posterior density rather than a conditional density that is reweighted post hoc. The analytical step of [19] has the advantage of side-stepping the additional variance brought about by importance weights, but has the disadvantages of (1) being restricted to Gaussian proposals, and (2) the division being unstable if the proposal prior has higher precision than the estimated conditional density.

The calibration kernel K_τ(x, x_o) can be used to calibrate the loss function by focusing it on simulated data points x which are close to x_o [18]. Calibration kernels K_τ(x, x_o) are chosen such that K_τ(x_o, x_o) = 1 and K_τ decreases with increasing distance ||x − x_o||, given a bandwidth τ.² Here, we only used calibration kernels to exclude bad simulations by assigning them kernel value zero. An additional use of calibration kernels would be to limit the accuracy of the posterior density estimation to a region near x_o. The choice of bandwidth implies a bias-variance trade-off [18]. For the problems we consider here, we assumed our posterior model q_φ(θ|x), based on a multi-layer neural network, to be sufficiently flexible, such that limiting the bandwidth was not necessary.

We sequentially optimise the density estimator q_φ(θ|x) = Σ_k α_k N(θ | μ_k, Σ_k) by training a mixture-density network (MDN) [19] with parameters φ over multiple 'rounds' r with adaptively chosen proposal priors p̃^(r)(θ) (see Fig. 1). We initialise the proposal prior at the prior, p̃^(1)(θ) = p(θ), and subsequently take the posterior of the previous round as the next proposal prior (Appendix B). Our approach is not limited to Gaussian proposals, and in particular can utilise multi-modal and heavy-tailed proposal distributions.
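A minimal sketch of a Monte-Carlo estimate of the loss in Eq. (1), with the density network, prior, proposal, and kernel left as abstract callables; this is our illustration, not the released delfi code.

```python
import torch

def snpe_loss(log_q, theta, x, x_o, log_prior, log_proposal, kernel):
    """Importance-weighted log-loss of Eq. (1).

    log_q        -- callable returning log q_phi(theta_n | x_n) per sample
    theta, x     -- batches of proposal-prior samples and simulator outputs
    log_prior    -- callable returning log p(theta_n) per sample
    log_proposal -- callable returning log p~(theta_n) per sample
    kernel       -- calibration kernel K_tau(x_n, x_o) per sample

    The weights p(theta_n)/p~(theta_n) correct for sampling parameters
    from the proposal prior rather than the prior; the kernel focuses
    the loss on simulations close to the observed data x_o.
    """
    w = torch.exp(log_prior(theta) - log_proposal(theta))  # importance weights
    k = kernel(x, x_o)                                     # calibration values
    return -(w * k * log_q(theta, x)).mean()
```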
2.2 Training the posterior model with stochastic variational inference

To make efficient use of simulation time, we want the posterior network q_φ(θ|x) to use all simulations, including ones from previous rounds. For computational and memory efficiency, it is desirable to avoid having to store all old samples, or having to train a new model at each round. To achieve this goal, we perform Bayesian inference on the weights w of the MDN across rounds. We approximate the distribution over weights as independent Gaussians [34, 35]. Note that the parameters φ of this Bayesian MDN are means and standard deviations per weight, i.e., φ = {φ_m, φ_s}. As an extension to the approach of [19], rather than assuming a zero-centred prior over weights, we use the posterior over weights of the previous round, π_{φ^(r−1)}(w), as a prior for the next round. Using stochastic variational inference, in each round we optimise the modified loss

L(φ^(r)) = −(1/N) Σ_n [ p(θ_n) / p̃^(r)(θ_n) ] K_τ(x_n, x_o) E_{π_{φ^(r)}(w)}[ log q_w(θ_n | x_n) ] + (1/N) D_KL( π_{φ^(r)}(w) || π_{φ^(r−1)}(w) ).   (2)

Here, the distributions π(w) are approximated by multivariate normals with diagonal covariance. The continuity penalty ensures that MDN parameters that are already well constrained by previous rounds are less likely to be updated than parameters with large uncertainty (see Appendix C). In practice, gradients of the expectation over networks are approximated using the local reparameterisation trick [36].

2.3 Dealing with bad simulations and bad features, and learning features from time series

Bad simulations: Simulator-based models, and single-neuron models in particular, frequently generate nonsensical data (which we name 'bad simulations'), especially in early rounds in which the relevant region of parameter space has not yet been found. For example, models of neural dynamics can easily run into self-excitation loops with diverging firing rates [37] (Fig. 4A). We introduce a feature b(s) = 1 to indicate that s and x correspond to a bad simulation. We set K_τ(x_n, x_o) = 0 whenever b(x_n) = 1, since the density estimator should not spend resources on approximating the posterior for bad data. With this choice of calibration kernel, bad simulations are ignored when updating the posterior model; however, this results in inefficient use of simulations. We propose to learn a model ĝ: θ → [0, 1] to predict the probability that a simulation from θ will break. While any probabilistic classifier could be used, we train a binary-output neural network with log-loss on (θ_n, b(s_n)). For each proposed θ, we reject θ with probability ĝ(θ), and do not carry out the expensive simulation.³

² While we did not investigate this here, an attractive idea would be to base the kernel on the divergence between the posteriors associated with x_n and x_o, e.g. K_τ(x_n, x_o) = exp(−(1/τ) D_KL(q^(r−1)(θ|x_n) || q^(r−1)(θ|x_o))): in this case, two data points would be regarded as similar if the current estimate of the density network assigns similar posterior distributions to them, which is a natural measure of similarity in this context.
³ An alternative approach would be to first learn p(θ|b(s) = 0) by applying SNPE to a single feature, f_1(s) = b(s), and to subsequently run SNPE on the full feature set, using p(θ|b(s) = 0) as the prior; however, this would 'waste' simulations on learning p(θ|b(s) = 1).
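A minimal sketch of the rejection step just described, with the proposal sampler and the trained classifier ĝ left as abstract callables; names and interfaces are our assumptions.

```python
import numpy as np

def propose_parameters(sample_proposal, bad_prob, n):
    """Draw parameters from the proposal prior, rejecting each theta
    with probability g(theta) predicted by the bad-simulation
    classifier, so that expensive simulations are skipped where they
    would likely diverge. This is equivalent to multiplying the prior
    by (1 - g(theta))."""
    accepted = []
    while len(accepted) < n:
        theta = sample_proposal()
        if np.random.rand() >= bad_prob(theta):
            accepted.append(theta)
    return accepted
```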
The rejections could be incorporated into the importance weights (which would require estimating the corresponding partition function, or assuming it to be constant across rounds), but as these rejections do not depend on the data x_o, we interpret them as modifying the prior: from an initially specified prior p(θ), we obtain a modified prior excluding those parameters which will likely lead to nonsensical simulations. Therefore, the predictive model ĝ(θ) not only leads to more efficient inference (especially in strongly under-constrained scenarios), but is also useful in identifying an effective prior: the space of parameters deemed plausible a priori, intersected with the space of parameters for which the simulator is well-behaved.

Bad features: It is frequently observed that individual features of interest for fitting single-neuron models cannot be evaluated: for example, the spike latency cannot be evaluated if a simulation does not generate spikes, but the fact that this feature is missing might itself provide valuable information (Fig. 4C). SNPE can be extended to handle 'bad features' by using a carefully designed posterior network. For each feature f_i(s), we introduce a binary feature m_i(s) which indicates whether f_i is missing. We parameterise the input layer of the posterior network with multiplicative terms of the form h_i(s) = f_i(s) · (1 − m_i(s)) + c_i · m_i(s), where the term c_i is to be learned. This approach effectively learns an imputation value c_i for each missing feature. For a more expressive model, one could also include terms which learn interactions across different missing-feature indicators and/or features, but we did not explore this here.

Learning features: Finally, we point out that using a neural network for posterior estimation yields a straightforward way of learning relevant features from data [38, 39, 40]. Rather than feeding summary features f(s) into the network, we directly feed time-series recordings of neural activity into the network. The first layer of the MDN becomes a recurrent layer instead of a fully-connected one. By minimising the variational objective (Eq. 2), the network learns summary features that are informative about posterior densities.

3 Results

While SNPE is in principle applicable to any simulator-based model, we designed it for performing inference on models of neural dynamics. In our applications, we concentrate on single-neuron models. We demonstrate the ability of SNPE to recover ground-truth posteriors in Gaussian mixtures and generalised linear models (GLMs) [41], and apply SNPE to a Hodgkin-Huxley neuron model and an autapse model, which can have parameter regimes of unstable behaviour and missing features.

3.1 Statistical inference on simple models

Gaussian mixtures: We first demonstrate the effectiveness of SNPE for inferring the posterior of mixtures of two Gaussians, for which we can compute true posteriors analytically. We are interested in the numerical stability of the method ('robustness') and in its 'flexibility' to approximate multi-modal posteriors. To illustrate the robustness of SNPE, we apply SNPE and the method proposed by [19] (which we refer to as Conditional Density Estimation for Likelihood-free Inference, CDE-LFI) to infer the common mean of a mixture of two Gaussians, given samples from the mixture distribution (Fig. 2A; details in Appendix D.1).
Whereas SNPE works robustly across multiple algorithmic rounds, CDE-LFI can become unstable: its analytical correction requires a division by a Gaussian, which becomes unstable if the precision of the Gaussian does not increase monotonically across rounds (see Section 2.1). Constraining the precision matrix to be non-decreasing fixes the numerical issue, but leads to biased estimates of the posterior. Second, we apply both SNPE and CDE-LFI to infer the two means of a mixture of two Gaussians, given samples x from the mixture distribution (Fig. 2B; Appendix D.1). While SNPE can use bi-modal proposals, CDE-LFI cannot, implying reduced efficiency of proposals on strongly non-Gaussian or multi-modal problems.

Figure 2: Inference on simple statistical models. A. Robustness of posterior inference on 1-D Gaussian mixtures (GMs). Left: true posterior given an observation at x_o = 0. Middle: percentage of completed runs as a function of the number of rounds; SNPE is robust. Right: Gaussian proposal priors tend to underestimate the tails of the posterior (red). B. Flexibility of posterior inference. Left: true posterior for a 1-D bimodal GM and observation x_o. Middle and right: first-round proposal priors (dotted), second-round proposal priors (dashed) and estimated posteriors (solid) for CDE-LFI and SNPE, respectively (true posterior in red). SNPE allows multi-modal proposals. C, F. Application to a GLM. Posterior means and variances are recovered well by both CDE-LFI and SNPE; for reference, we approximate the posterior using likelihood-based PG-MCMC. D. Covariance matrices for SNPE and PG-MCMC. E. Partial view of the posterior for 3 out of 10 parameters, e.g. b0, h1, h2 (all 10 parameters in Appendix G). Ground-truth parameters in red; 2-D marginals for SNPE (lines) and PG-MCMC (histograms). White and yellow contour lines correspond to 68% and 95% of the mass, respectively.

Generalised linear models: Generalised linear models (GLMs) are commonly used to model neural responses to sensory stimuli. For these models, several techniques are available to estimate the posterior distribution over parameters, making them ideally suited to test SNPE on a single-neuron model. We evaluated the posterior distribution over the parameters of a GLM using a Pólya-Gamma sampler (PG-MCMC, [42, 43]) and compared it to the posterior distributions estimated by SNPE (Appendix D.2 for details). We found good agreement of the posterior means and variances (Fig. 2C), covariances (Fig. 2D), as well as pairwise marginals (Fig. 2E). We note that, since GLMs have close-to-Gaussian posteriors, the CDE-LFI method works extremely well on this problem (Fig. 2F). In summary, SNPE leads to accurate and robust estimation of the posterior in simple models. It works effectively even on multi-modal posteriors, on which CDE-LFI exhibits worse performance.
On a GLM example with an (almost) Gaussian posterior, the CDE-LFI method works extremely well, but SNPE yields very similar posterior estimates (see Appendix F for an additional comparison with SMC-ABC).

3.2 Statistical inference on Hodgkin-Huxley neuron models

Simulated data: The Hodgkin-Huxley equations [44] describe the dynamics of a neuron's membrane potential and ion channels given biophysical parameters (e.g. concentration of sodium and potassium channels) and an injected input current (Fig. 3A; see Appendix D.3). We applied SNPE to a Hodgkin-Huxley model with channel kinetics as in [45] and inferred the posterior over 12 biophysical parameters, given 20 voltage features of the simulated data. The true parameter values are close to the mode of the inferred posterior (Fig. 3B, D) and in a region of high posterior probability. Samples from the posterior lead to voltage traces that are similar to the original data, supporting the correctness of the approach (Fig. 3C).

[Figure 3 graphic omitted: panels A-G; axis tick labels not recoverable.]

Figure 3: Application to Hodgkin-Huxley model. A. Simulation of Hodgkin-Huxley model with current injection. B. Posterior over 3 out of 12 parameters inferred with SNPE (all 12 parameters in Appendix G). True parameters have high posterior probabilities (red). C. Traces for the mode (cyan) of, and samples (orange) from, the inferred posterior match the original data (blue). D. Comparison between SNPE and a standard parameter-fitting procedure based on a genetic algorithm, IBEA: difference between the mode of SNPE or the IBEA best parameter set and the ground-truth parameters, normalised by the standard deviations obtained by SNPE. E-G. Application to real data from the Allen Cell Types Database. Inference over 12 parameters for cell 464212183. Results presented as in A-C.

Biophysical neuron models are typically fit to data with genetic algorithms applied to the distance between simulated and measured data features [2, 8, 9, 46]. We compared the performance of SNPE with a commonly used genetic algorithm (Indicator Based Evolutionary Algorithm, IBEA, from the BluePyOpt package [9]), given the same number of model simulations (Fig. 3D). SNPE is comparable to IBEA in approximating the ground-truth parameters; note that defining an objective measure to compare the two approaches is difficult, as they minimise different criteria. However, unlike IBEA, SNPE also returns a full posterior distribution, i.e. the space of all parameters consistent with the data, rather than just a 'best fit'.

In-vitro recordings: We also applied the approach to in-vitro recordings from the mouse visual cortex (see Appendix D.4, Fig. 3E-G). The posterior mode over 12 parameters of a Hodgkin-Huxley model leads to a voltage trace which is similar to the data, and the posterior distribution shows the space of parameters for which the output of the model is preserved. These posteriors could be used to motivate further experiments for constraining parameters, or to study invariances in the model.
3.3 Dealing with bad simulations and features

Bad simulations: We demonstrate our approach (see Section 2.3) for dealing with 'bad simulations' (e.g. those for which firing rates diverge) using a simple, two-parameter 'autapse' model for which the region of stability is known. During SNPE, we concurrently train a classifier to predict 'bad simulations' and update the prior accordingly (a sampling sketch is given after Fig. 4). This approach not only leads to a more efficient use of simulations, but also identifies the parameter space for which the simulator is well-defined, information that could be used for further model analysis (Fig. 4A, B).

Bad features: Many features of interest in neural models, e.g. the latency to first spike after the injection of a current input, are only well-defined in the presence of other features, e.g. the presence of spikes (Fig. 4C). Given that large parts of the parameter space can lead to non-spiking behaviour, missing features occur frequently and cannot simply be ignored. We enriched our MDN with an extra layer which imputes values to the absent features, values which are optimised alongside the rest of the parameters of the network (Fig. 4D; Appendix E). Such imputation has marginal computational cost and grants us the convenience of not having to hand-tune imputation values, or to reject all simulations for which any individual feature might be missing.

[Figure 4 graphic omitted: panels A-D; axis tick labels not recoverable.]

Figure 4: Inference on neural dynamics has to deal with diverging simulations and missing features. A. Firing rate of a model neuron connected to itself (autapse). If the strength of the self-connection (parameter $J$) is bigger than 1, the dynamics are unstable (orange line: bad simulation). B. Portion of parameter space leading to diverging simulations learned by the classifier (yellow: low probability of bad simulation, blue: high probability), and comparison with analytically computed boundaries (white, see Appendix D.5). C. Illustration of a model neuron in two parameter regimes, spiking (grey trace) and non-spiking (blue). When the neuron does not spike, features that depend on the presence of spiking, such as the latency to first spike, are not defined. D. Our MDN is augmented with a multiplicative layer which imputes values for missing features.
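As a companion to the 'Bad simulations' paragraph above, here is a minimal rejection-sampling sketch of drawing from the classifier-modified prior, assuming NumPy; the function names, the oversampling factor, and the interface of the classifier $\hat{g}(\theta)$ (returning a probability of a bad simulation) are our own assumptions, not the paper's.

```python
import numpy as np

def sample_effective_prior(sample_prior, bad_prob, n_samples, rng=None):
    """Draw parameters from the effective prior proportional to
    p(theta) * (1 - g_hat(theta)).

    sample_prior(n) -> (n, dim) draws from the original prior p(theta).
    bad_prob(theta) -> (n,) classifier estimate g_hat(theta) of the
    probability that theta yields a 'bad' (e.g. diverging) simulation.
    """
    rng = np.random.default_rng() if rng is None else rng
    accepted = []
    while sum(len(a) for a in accepted) < n_samples:
        theta = sample_prior(4 * n_samples)
        # Accept theta with probability 1 - g_hat(theta): parameters likely
        # to produce nonsensical simulations are (stochastically) excluded.
        keep = rng.random(len(theta)) < 1.0 - bad_prob(theta)
        accepted.append(theta[keep])
    return np.concatenate(accepted)[:n_samples]
```

Because the rejections depend only on $\theta$ and not on $x_o$, this is exactly a change of prior rather than a change of likelihood, as argued above.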
Learning features with recurrent neural networks (RNNs): In neural modelling, it is often of interest to work with hand-designed features that are thought to be particularly important or informative for particular analysis questions [2]. For instance, the shape of the action potential is intimately related to the dynamics of sodium and potassium channels in the Hodgkin-Huxley model. However, the space of possible features is immense, and given the highly non-linear nature of many of the neural models in question, it can sometimes be of interest to simply perform statistical inference without having to hand-design features. Our approach provides a straightforward means of doing that: we augment the MDN with an RNN which runs along the recorded voltage trace (and stimulus, here a coloured-noise input) to learn appropriate features to constrain the model parameters.

As illustrated in Figure 5B, the first layer of the network, which previously received pre-computed summary statistics as inputs, is replaced by a recurrent layer that receives full voltage and current traces as inputs. In order to capture long-term dependencies in the sequence input, we use gated recurrent units (GRUs) for the RNN [47]. Since we are using 25 GRU units and only keep the final output of the unrolled RNN (many-to-one), we introduce a bottleneck. The RNN thus transforms the voltage trace and stimulus into a set of 25 features, which allows SNPE to recover the posterior over the 12 parameters (Fig. 5C). As expected, the presence of spikes in the observed data leads to a tighter posterior for parameters associated with the main ion channels involved in spike generation: $E_{Na}$, $E_K$, $g_{Na}$ and $g_K$.

[Figure 5 graphic omitted: panels A-C; axis tick labels not recoverable.]

Figure 5: We can learn informative features using a recurrent mixture-density network (R-MDN). A. We consider a neuron driven by a colored-noise input current. B. Rather than engineering summary features to reduce the dimensionality of observations, we provide the complete voltage trace and input current as input to an R-MDN. The unrolled forward pass is illustrated, where a many-to-one recurrent network reduces the dimensionality of the inputs ($T$ time steps long) to a feature vector of dimensionality $N$. C. Our goal is to infer the posterior density for two different observations: (1) the full 240 ms trace shown in panel A; and (2) the initial 60 ms of its duration, which do not show any spikes. We show the obtained marginal posterior densities for the two observations, using a 25-dimensional feature vector learned by the RNN. In the presence of spikes, the posterior uncertainty gets tighter around the true parameters related to spiking.
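The recurrent front-end just described can be sketched as follows, assuming PyTorch; the module name mirrors the 25-unit many-to-one GRU in the text, while the single-layer architecture and the two input channels (voltage, current) are our assumptions.

```python
import torch
import torch.nn as nn

class RecurrentFeatureExtractor(nn.Module):
    """Many-to-one GRU: compresses a (voltage, current) trace of T steps
    into a fixed-length feature vector fed to the mixture-density network."""

    def __init__(self, n_features: int = 25):
        super().__init__()
        # Input at each time step: (voltage, injected current) -> 2 channels.
        self.gru = nn.GRU(input_size=2, hidden_size=n_features,
                          batch_first=True)

    def forward(self, trace: torch.Tensor) -> torch.Tensor:
        # trace: (batch, T, 2); only the final hidden state is kept,
        # which is the bottleneck turning T time steps into n_features.
        _, h_last = self.gru(trace)      # h_last: (1, batch, n_features)
        return h_last.squeeze(0)         # (batch, n_features)
```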
4 Discussion

Quantitatively linking models of neural dynamics to data is a central problem in computational neuroscience. We showed that likelihood-free inference is at least as general and efficient as 'black-box' parameter-fitting approaches in neuroscience, but provides full statistical inference, suggesting it to be the method of choice for inference on single-neuron models. We argued that ABC approaches based on density estimation are particularly useful for neuroscience, and introduced a novel algorithm (SNPE) for estimating posterior distributions. We can flexibly and robustly estimate posterior distributions, even when large regions of the parameter space correspond to unstable model behaviour, or when features of choice are missing. Furthermore, we have extended our approach with RNNs to automatically define features, thus increasing the potential for better capturing salient aspects of the data with highly non-linear models. SNPE is therefore equipped to estimate posterior distributions under common constraints in neural models.

Our approach directly builds on a recent approach for density estimation ABC (CDE-LFI, [19]). While we found CDE-LFI to work well on problems with unimodal, close-to-Gaussian posteriors and stable simulators, our approach extends the range of possible applications, and these extensions are critical for the application to neuron models. A key component of SNPE is the proposal prior, which guides the sampling on each round of the algorithm. Here, we used the posterior on the previous round as the proposal for the next one, as in CDE-LFI and in many Sequential-MC approaches. Our method could be extended by alternative approaches to designing proposal priors [48, 49], e.g. by exploiting the fact that we also represent a posterior over MDN parameters: for example, one could design proposals that guide sampling towards regions of the parameter space where the uncertainty about the parameters of the posterior model is highest. We note that, while here we concentrated on models of single neurons, ABC methods and our approach will also be applicable to models of populations of neurons. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical models, and enabling theory-driven data analysis [50].

Acknowledgements

We thank Maneesh Sahani, David Greenberg and Balaji Lakshminarayanan for useful comments on the manuscript. This work was supported by SFB 1089 (University of Bonn) and SFB 1233 (University of Tübingen) of the German Research Foundation (DFG) to JHM and by the caesar foundation.

References

[1] W Gerstner, W M Kistler, R Naud, and L Paninski. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press, 2014.
[2] S Druckmann, Y Banitt, A Gidon, F Schürmann, H Markram, and I Segev. A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. Front Neurosci, 1, 2007.
[3] C van Vreeswijk and H Sompolinsky. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293), 1996.
[4] H Markram et al. Reconstruction and simulation of neocortical microcircuitry. Cell, 163(2), 2015.
[5] Q J M Huys and L Paninski. Smoothing of, and parameter estimation from, noisy biophysical recordings. PLoS Comput Biol, 5(5), 2009.
[6] L Meng, M A Kramer, and U T Eden. A sequential Monte Carlo approach to estimate biophysical neural models from spikes. J Neural Eng, 8(6), 2011.
[7] C D Meliza, M Kostuk, H Huang, A Nogaret, D Margoliash, and H D I Abarbanel. Estimating parameters and predicting membrane voltages with conductance-based neuron models. Biol Cybern, 108(4), 2014.
[8] C Rossant, D F M Goodman, B Fontaine, J Platkiewicz, A K Magnusson, and R Brette. Fitting neuron models to spike trains. Front Neurosci, 5:9, 2011.
[9] W Van Geit, M Gevaert, G Chindemi, C Rössert, J Courcol, E B Muller, F Schürmann, I Segev, and H Markram. BluePyOpt: Leveraging open source software and cloud infrastructure to optimise model parameters in neuroscience. Front Neuroinform, 10:17, 2016.
[10] A A Prinz, C P Billimoria, and E Marder. Alternative to hand-tuning conductance-based models: Construction and analysis of databases of model neurons. J Neurophysiol, 90(6), 2003.
[11] C Stringer, M Pachitariu, N A Steinmetz, M Okun, P Bartho, K D Harris, M Sahani, and N A Lesica. Inhibitory control of correlated intrinsic variability in cortical networks. Elife, 5, 2016.
[12] Kristofor D Carlson, Jayram Moorkanikara Nageswaran, Nikil Dutt, and Jeffrey L Krichmar. An efficient automated parameter tuning framework for spiking neural networks. Front Neurosci, 8:10, 2014. doi:10.3389/fnins.2014.00010.
[13] P Friedrich, M Vella, A I Gulyás, T F Freund, and S Káli. A flexible, interactive software tool for fitting the parameters of neuronal models. Frontiers in Neuroinformatics, 8, 2014.
[14] P J Diggle and R J Gratton.
Monte Carlo methods of inference for implicit statistical models. J R Stat Soc B Met, 1984.
[15] F Hartig, J M Calabrese, B Reineking, T Wiegand, and A Huth. Statistical inference for stochastic simulation models - theory and application. Ecol Lett, 14(8), 2011.
[16] J Lintusaari, M U Gutmann, R Dutta, S Kaski, and J Corander. Fundamentals and recent developments in approximate Bayesian computation. Syst Biol, 2016.
[17] Aidan C Daly, David J Gavaghan, Chris Holmes, and Jonathan Cooper. Hodgkin-Huxley revisited: reparametrization and identifiability analysis of the classic action potential model with approximate Bayesian methods. Royal Society Open Science, 2(12):150499, 2015.
[18] M G B Blum and O François. Non-linear regression models for approximate Bayesian computation. Stat Comput, 20(1), 2010.
[19] G Papamakarios and I Murray. Fast epsilon-free inference of simulation models with Bayesian conditional density estimation. In Adv in Neur In, 2017.
[20] N T Carnevale and M L Hines. The NEURON Book. Cambridge University Press, 2009.
[21] E Meeds, M Welling, et al. GPS-ABC: Gaussian process surrogate approximate Bayesian computation. UAI, 2014.
[22] J K Pritchard, M T Seielstad, A Perez-Lezaun, and M W Feldman. Population growth of human Y chromosomes: a study of Y chromosome microsatellites. Mol Biol Evol, 16(12), 1999.
[23] P Marjoram, J Molitor, V Plagnol, and S Tavare. Markov chain Monte Carlo without likelihoods. Proc Natl Acad Sci U S A, 100(26), 2003.
[24] E Meeds, R Leenders, and M Welling. Hamiltonian ABC. arXiv preprint arXiv:1503.01916, 2015.
[25] M A Beaumont, J Cornuet, J Marin, and C P Robert. Adaptive approximate Bayesian computation. Biometrika, 2009.
[26] F V Bonassi, M West, et al. Sequential Monte Carlo with adaptive weights for approximate Bayesian computation. Bayesian Anal, 10(1), 2015.
[27] R Wilkinson. Accelerating ABC methods using Gaussian processes. In AISTATS, 2014.
[28] S N Wood. Statistical inference for noisy nonlinear ecological dynamic systems. Nature, 466(7310), 2010.
[29] V M H Ong, D J Nott, M Tran, S A Sisson, and C C Drovandi. Variational Bayes with synthetic likelihood. arXiv:1608.03069, 2016.
[30] Y Fan, D J Nott, and S A Sisson. Approximate Bayesian computation via regression density estimation. Stat, 2(1), 2013.
[31] B M Turner and P B Sederberg. A generalized, likelihood-free method for posterior estimation. Psychonomic Bulletin & Review, 21(2), 2014.
[32] L F Price, C C Drovandi, A Lee, and D J Nott. Bayesian synthetic likelihood. J Comput Graph Stat, (just-accepted), 2017.
[33] M Beaumont, W Zhang, and D J Balding. Approximate Bayesian computation in population genetics. Genetics, 162(4), 2002.
[34] G E Hinton and D Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, 1993.
[35] A Graves. Practical variational inference for neural networks. In Adv Neur In, 2011.
[36] D P Kingma, T Salimans, and M Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pages 2575-2583, 2015.
[37] F Gerhard, M Deger, and W Truccolo. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs. PLoS Comput Biol, 13(2), 2017.
[38] K Cho, B Van Merriënboer, C Gulcehre, D Bahdanau, F Bougares, H Schwenk, and Y Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation.
arXiv preprint arXiv:1406.1078, 2014.
[39] M G B Blum, M A Nunes, D Prangle, S A Sisson, et al. A comparative review of dimension reduction methods in approximate Bayesian computation. Statistical Science, 28(2), 2013.
[40] B Jiang, T Wu, C Zheng, and W H Wong. Learning summary statistic for approximate Bayesian computation via deep neural network. arXiv preprint arXiv:1510.02175, 2015.
[41] J W Pillow, J Shlens, L Paninski, A Sher, A M Litke, E J Chichilnisky, and E P Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207), 2008.
[42] N G Polson, J G Scott, and J Windle. Bayesian inference for logistic models using Pólya-Gamma latent variables. J Am Stat Assoc, 108(504), 2013.
[43] S Linderman, R P Adams, and J W Pillow. Bayesian latent structure discovery from multi-neuron recordings. In Advances in Neural Information Processing Systems, 2016.
[44] A L Hodgkin and A F Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol, 117(4), 1952.
[45] M Pospischil, M Toledo-Rodriguez, C Monier, Z Piwkowska, T Bal, Y Frégnac, H Markram, and A Destexhe. Minimal Hodgkin-Huxley type models for different classes of cortical and thalamic neurons. Biol Cybern, 99(4-5), 2008.
[46] E Hay, S Hill, F Schürmann, H Markram, and I Segev. Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput Biol, 7(7), 2011.
[47] J Chung, C Gulcehre, K H Cho, and Y Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[48] Marko Järvenpää, Michael U Gutmann, Aki Vehtari, and Pekka Marttinen. Efficient acquisition rules for model-based approximate Bayesian computation. arXiv preprint arXiv:1704.00520, 2017.
[49] S Gu, Z Ghahramani, and R E Turner. Neural adaptive sequential Monte Carlo. In Advances in Neural Information Processing Systems, pages 2629-2637, 2015.
[50] S W Linderman and S J Gershman. Using computational theory to constrain statistical models of neural data. bioRxiv, 2017.
[51] G De Nicolao, G Sparacino, and C Cobelli. Nonparametric input estimation in physiological systems: problems, methods, and case studies. Automatica, 33(5), 1997.
Online Prediction with Selfish Experts

Tim Roughgarden
Department of Computer Science, Stanford University, Stanford, CA 94305
[email protected]

Okke Schrijvers
Department of Computer Science, Stanford University, Stanford, CA 94305
[email protected]

Abstract

We consider the problem of binary prediction with expert advice in settings where experts have agency and seek to maximize their credibility. This paper makes three main contributions. First, it defines a model to reason formally about settings with selfish experts, and demonstrates that 'incentive compatible' (IC) algorithms are closely related to the design of proper scoring rules. Second, we design IC algorithms with good performance guarantees for the absolute loss function. Third, we give a formal separation between the power of online prediction with selfish versus honest experts by proving lower bounds for both IC and non-IC algorithms. In particular, with selfish experts and the absolute loss function, there is no (randomized) algorithm for online prediction, IC or otherwise, with asymptotically vanishing regret.

1 Introduction

In the months leading up to elections and referendums, a plethora of pollsters try to figure out how the electorate is going to vote. Different pollsters use different methodologies, reach different people, and may have sources of random errors, so generally the polls don't fully agree with each other. Aggregators such as Nate Silver's FiveThirtyEight, and The Upshot by the New York Times, consolidate these different reports into a single prediction, and hopefully reduce random errors.1 FiveThirtyEight in particular has a solid track record for their predictions, and as they are transparent about their methodology we use them as a motivating example. To a first-order approximation, they operate as follows: first they take the predictions of all the different pollsters, then they assign a weight to each of the pollsters based on past performance (and other factors), and finally they use the weighted average of the pollsters to run simulations and make their own prediction.2

But could the presence of an institution that rates pollsters inadvertently create perverse incentives for pollsters? The FiveThirtyEight pollster ratings are publicly available.3 They can be interpreted as a reputation, and a low rating can negatively impact future revenue opportunities for a pollster. Moreover, it has been demonstrated in practice that experts do not always report their true beliefs about future events. For example, in weather forecasting there is a known 'wet bias,' where consumer-facing weather forecasters deliberately overestimate low chances of rain (e.g. a 5% chance of rain is reported as a 25% chance of rain) because people don't like to be surprised by rain [Bickel and Kim, 2008].

1 https://fivethirtyeight.com/, https://www.nytimes.com/section/upshot.
2 This is of course a simplification. FiveThirtyEight also uses features like the change in a poll over time, the state of the economy, and correlations between states. See https://fivethirtyeight.com/features/how-fivethirtyeight-calculates-pollster-ratings/ for details. Our goal in this paper is not to accurately model all of the fine details of FiveThirtyEight (which are anyways changing all the time). Rather, it is to formulate a general model of prediction with experts that clearly illustrates why incentives matter.
3 https://projects.fivethirtyeight.com/pollster-ratings/

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
These examples motivate the development of models of aggregating predictions that endow agency to the data sources.4 While there are multiple models in which we can investigate this issue, a natural candidate is the problem of prediction with expert advice. By focusing on a standard model, we abstract away from the fine details of FiveThirtyEight (which are anyways changing all the time), which allows us to formulate a general model of prediction with experts that clearly illustrates why incentives matter.

In the classical model [Littlestone and Warmuth, 1994, Freund and Schapire, 1997], at each time step, several experts make predictions about an unknown event. An online prediction algorithm aggregates experts' opinions and makes its own prediction at each time step. After this prediction, the event at this time step is realized and the algorithm incurs a loss as a function of its prediction and the realization. To compare its performance against individual experts, for each expert the algorithm calculates what its loss would have been had it always followed the expert's prediction. While the problems introduced in this paper are relevant for general online prediction, to focus on the most interesting issues we concentrate on the case of binary events, and real-valued predictions in $[0, 1]$. For different applications, different notions of loss are appropriate, so we parameterize the model by a loss function $\ell$. Thus our formal model is: at each time step $t = 1, 2, \ldots, T$:

1. Each expert $i$ makes a prediction $p_i^{(t)} \in [0, 1]$, representing advocacy for event '1.'
2. The online algorithm commits to a probability $q^{(t)} \in [0, 1]$ as a prediction for event '1.'
3. The outcome $r^{(t)} \in \{0, 1\}$ is realized.
4. The algorithm incurs expected loss $\ell(q^{(t)}, r^{(t)})$; each expert $i$ is assigned loss $\ell(p_i^{(t)}, r^{(t)})$.

The standard goal in this problem is to design an online prediction algorithm that is guaranteed to have expected loss not much larger than that incurred by the best expert in hindsight. The classical solutions maintain a weight for each expert and make a prediction according to which outcome has more expert weight behind it. An expert's weight can be interpreted as a measure of its credibility in light of its past performance. The (deterministic) Weighted Majority (WM) algorithm always chooses the outcome with more expert weight. The Randomized Weighted Majority (RWM) algorithm randomizes between the two outcomes with probability proportional to their total expert weights. The most common method of updating experts' weights is via multiplication by $1 - \eta\,\ell(p_i^{(t)}, r^{(t)})$ after each time step $t$, where $\eta$ is the learning rate. We call this the 'standard' or 'classical' version of the WM and RWM algorithms.

The classical model instills no agency in the experts. To account for this, in this paper we replace Step 1 of the classical model by:

1a. Each expert $i$ formulates a belief $b_i^{(t)} \in [0, 1]$.
1b. Each expert $i$ reports a prediction $p_i^{(t)} \in [0, 1]$ to the algorithm.

Each expert now has two types of loss at each time step: the reported loss $\ell(p_i^{(t)}, r^{(t)})$ with respect to the reported prediction, and the true loss $\ell(b_i^{(t)}, r^{(t)})$ with respect to her true beliefs.5 When experts care about the weight that they are assigned, and with it their reputation and influence in the algorithm, different loss functions can lead to different expert behaviors.
For example, for the quadratic loss function, in the standard WM and RWM algorithms experts have no reason to misreport their beliefs (see Proposition 8). This is not the case for other loss functions, such as the absolute loss function.6 The standard algorithm with the absolute loss function incentivizes extremal reporting, i.e. an expert reports 1 whenever $b_i^{(t)} \ge \tfrac12$ and 0 otherwise. This follows from a simple derivation, or alternatively from results in the property elicitation literature.7 This shows that for the absolute loss function the standard WM algorithm is not 'incentive-compatible' in a sense that we formalize in Section 2. There are similar examples for the other commonly studied weight update rules and for the RWM algorithm. We might care about truthful reporting for its own sake, but additionally the worry is that non-truthful reports will impede our ability to get good regret guarantees (with respect to experts' true losses).

We study several fundamental questions about online prediction with selfish experts:

1. What is the design space of 'incentive-compatible' online prediction algorithms, where every expert is incentivized to report her true beliefs?
2. Given a loss function like absolute loss, are there incentive-compatible algorithms with good regret guarantees?
3. Is online prediction with selfish experts strictly harder than in the classical model with honest experts?

Our Results. The first contribution of this paper is the development of a model for reasoning formally about the design and analysis of weight-based online prediction algorithms when experts are selfish (Section 2), and the definition of an 'incentive-compatible' (IC) such algorithm. Intuitively, an IC algorithm is such that each expert wants to report its true belief at each time step. We demonstrate that the design of IC online prediction algorithms is closely related to the design of strictly proper scoring rules. Using this, we show that for the quadratic loss function, the standard WM and RWM algorithms are IC, whereas these algorithms are not generally IC for other loss functions. Our second contribution is the design of IC prediction algorithms for the absolute loss function with non-trivial performance guarantees.

4 More generally, one can investigate how the presence of machine learning algorithms affects data-generating processes, either during learning or deployment. We discuss some of this work in the related work section.
5 When we speak of the best expert in hindsight, we are always referring to the true losses. Guarantees with respect to reported losses follow from standard results [Littlestone and Warmuth, 1994, Freund and Schapire, 1997, Cesa-Bianchi et al., 2007], but are not immediately meaningful.
6 The loss function is often tied to the particular application. For example, in the current FiveThirtyEight pollster rankings, the performance of a pollster is primarily measured according to an absolute loss function and also whether the candidate with the highest polling numbers ended up winning (see https://github.com/fivethirtyeight/data/tree/master/pollster-ratings). However, in 2008 FiveThirtyEight used the notion of 'pollster introduced error' or PIE, which is the square root of a difference of squares, as the most important feature in calculating the weights; see https://fivethirtyeight.com/features/pollster-ratings-v31/.
For example, our best result for deterministic algorithms is: the WM algorithm, with experts' weights evolving according to the spherical proper scoring rule (see Section 3), is IC and has loss at most $2 + \sqrt{2}$ times the loss of the best expert in hindsight (in the limit as $T \to \infty$). A variant of the RWM algorithm with the Brier scoring rule is IC and has expected loss at most 2.62 times that of the best expert in hindsight (also in the limit; see Section 5).

Our third and most technical contribution is a formal separation between online prediction with selfish experts and the traditional setting with honest experts. Recall that with honest experts, the classical (deterministic) WM algorithm has loss at most twice that of the best expert in hindsight (as $T \to \infty$) [Littlestone and Warmuth, 1994]. We prove in Section 4 that the worst-case loss of every (deterministic) IC algorithm, and every (deterministic) non-IC algorithm satisfying mild technical conditions, is bounded away from twice that of the best expert in hindsight (even as $T \to \infty$). A consequence of our lower bound is that, with selfish experts, there is no natural (randomized) algorithm for online prediction, IC or otherwise, with asymptotically vanishing regret. Finally, in Section 6 we show simulations that indicate that different IC methods show similar regret behavior, and that their regret is substantially better than that of the non-IC standard algorithms, suggesting that the worst-case characterization we prove holds more generally.

Related Work. We believe that our model of online prediction over time with selfish experts is novel. We next survey the multiple other ways in which online learning and incentive issues have been blended, and the other efforts to model incentive issues in machine learning. There is a large literature on prediction and decision markets (e.g. Chen and Pennock [2010], Horn et al. [2014]), which also aim to aggregate information over time from multiple parties and make use of proper scoring rules to do it. However, prediction markets provide incentives through payments, rather than influence, and lack the feedback mechanism to select among experts. While there are strong mathematical connections between cost function-based prediction markets and regularization-based online learning algorithms in the standard (non-IC) model [Abernethy et al., 2013], there do not appear to be any interesting implications for online prediction with selfish experts. There is also an emerging literature on 'incentivizing exploration' in partial feedback models such as the bandit model (e.g. Frazier et al. [2014], Mansour et al. [2016]). Here, the incentive issues concern the learning algorithm itself, rather than the experts (or 'arms') that it makes use of.

7 The absolute loss function is known to elicit the median [Bonin, 1976][Thomson, 1979], and since we have binary realizations, the median is either 0 or 1.

The question of how an expert should report beliefs has been studied before in the literature on strictly proper scoring rules [Brier, 1950, McCarthy, 1956, Savage, 1971, Gneiting and Raftery, 2007], but this literature typically considers the evaluation of a single prediction, rather than low-regret learning. Bayarri and DeGroot [1989] look at correlated settings where strictly proper scoring rules don't suffice, though they also do not consider how an aggregator can achieve low regret. Finally, there are many works that fall under the broader umbrella of incentives in machine learning. Roughly, work in this area can be divided into two genres: incentives during the learning stage, e.g.
[Cai et al., 2015, Shah and Zhou, 2015, Liu and Chen, 2016, Dekel et al., 2010], or incentives during the deployment stage, e.g. Brückner and Scheffer [2011], Hardt et al. [2016]. Finally, Babaioff et al. [2010] consider the problem of no-regret learning with selfish experts in an ad auction setting, where the incentives come from the allocations and payments of the auction, rather than from weights as in our case.

2 Preliminaries and Model

Standard Model. At each time step $t \in \{1, \ldots, T\}$ we want to predict a binary realization $r^{(t)} \in \{0, 1\}$. To help in the prediction, we have access to $n$ experts that for each time step report a prediction $p_i^{(t)} \in [0, 1]$ about the realization. The realizations are determined by an oblivious adversary, and the predictions of the experts may or may not be accurate. The goal is to use the predictions of the experts in such a way that the algorithm performs nearly as well as the best expert in hindsight. Most of the algorithms proposed for this problem fall into the following framework.

Definition 1 (Weight-update Online Prediction Algorithm). A weight-update online prediction algorithm maintains a weight $w_i^{(t)}$ for each expert and makes its prediction $q^{(t)}$ based on $\sum_{i=1}^n w_i^{(t)} p_i^{(t)}$ and $\sum_{i=1}^n w_i^{(t)} (1 - p_i^{(t)})$. After the algorithm makes its prediction, the realization $r^{(t)}$ is revealed, and the algorithm updates the weights of experts using the rule

$$w_i^{(t+1)} = f\left(p_i^{(t)}, r^{(t)}\right) \cdot w_i^{(t)}, \quad (1)$$

where $f : [0, 1] \times \{0, 1\} \to \mathbb{R}_+$ is a positive function on its domain.

The standard WM algorithm has $f(p_i^{(t)}, r^{(t)}) = 1 - \eta\,\ell(p_i^{(t)}, r^{(t)})$, where $\eta \in (0, \tfrac12)$ is the learning rate, and predicts $q^{(t)} = 1$ if and only if $\sum_i w_i^{(t)} p_i^{(t)} \ge \sum_i w_i^{(t)} (1 - p_i^{(t)})$. Let the total loss of the algorithm be $M^{(T)} = \sum_{t=1}^T \ell(q^{(t)}, r^{(t)})$ and let the total loss of expert $i$ be $m_i^{(T)} = \sum_{t=1}^T \ell(p_i^{(t)}, r^{(t)})$. The MW algorithm has the property that $M^{(T)} \le 2(1 + \eta)\, m_i^{(T)} + \frac{2 \ln n}{\eta}$ for each expert $i$, and RWM (where the algorithm picks 1 with probability proportional to $\sum_i w_i^{(t)} p_i^{(t)}$) satisfies $M^{(T)} \le (1 + \eta)\, m_i^{(T)} + \frac{\ln n}{\eta}$ for each expert $i$ [Littlestone and Warmuth, 1994][Freund and Schapire, 1997]. The notion of 'no $\alpha$-regret' [Kakade et al., 2009] captures the idea that the per time-step loss of an algorithm is $\alpha$ times that of the best expert in hindsight, plus a term that goes to 0 as $T$ grows:

Definition 2 ($\alpha$-regret). An algorithm is said to have no $\alpha$-regret if $M^{(T)} \le \alpha \cdot \min_i m_i^{(T)} + o(T)$.

By taking $\eta = O(1/\sqrt{T})$, MW is a no 2-regret algorithm, and RWM is a no 1-regret algorithm.
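The following is a minimal sketch, assuming NumPy, of the framework of Definition 1 instantiated with the standard rule $f(p, r) = 1 - \eta\,|p - r|$ for the absolute loss; the function name and the reporting of RWM's expected loss are our own choices.

```python
import numpy as np

def run_weighted_majority(predictions, realizations, eta=0.1,
                          randomized=False):
    """Weight-update online prediction (Definition 1) with the standard
    update f(p, r) = 1 - eta * |p - r|.

    predictions: (T, n) array with predictions[t, i] = p_i^(t) in [0, 1]
    realizations: (T,) array with r^(t) in {0, 1}
    Returns (total loss of the algorithm, final expert weights).
    """
    T, n = predictions.shape
    w = np.ones(n)
    total_loss = 0.0
    for t in range(T):
        p, r = predictions[t], realizations[t]
        b = np.dot(w, p) / w.sum()        # weighted belief in outcome 1
        if randomized:
            # RWM predicts 1 with probability b; its expected absolute
            # loss at this step is |b - r|.
            total_loss += abs(b - r)
        else:
            q = 1.0 if b >= 0.5 else 0.0  # WM: follow the weighted majority
            total_loss += abs(q - r)
        w *= 1.0 - eta * np.abs(p - r)    # multiplicative weight update
    return total_loss, w
```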
Unless explicitly stated PT (T ) (t) otherwise, in the remainder of this paper mi = t=1 `(bi , r(t) ) refers to the true loss of expert i. (t) (t) For now this motivates restricting the weight update rule f to functions where reporting pi = bi maximizes the expected weight of experts. We call these weight-update rules Incentive Compatible (IC). Definition 3 (Incentive Compatibility). A weight-update function f is incentive compatible (IC) if (t) reporting the true belief bi is always a best response for every expert at every time step. It is strictly (t) (t) IC when pi = bi is the only best response. By a ?best response,? we mean an expected utility-maximizing report, where the expectation is with respect to the expert?s beliefs. Collusion. The definition of IC does not rule out the possibility that experts can collude to jointly misreport to improve their weights. We therefore also consider a stronger notion of incentive compatibility for groups with transferable utility.8 Definition 4 (IC for Groups with Transferable Utility). A weight-update function f is IC for groups with transferable utility (TU-GIC) if for every subset S of players, the total expected weight of the P (t+1) (t) group i2S Eb(t) [wi ] is maximized by each reporting their private belief bi . i Proper Scoring Rules. Incentivizing truthful reporting of beliefs has been studied extensively, and the set of functions that do this is called the set of proper scoring rules. Since we focus on predicting a binary event, we restrict our attention to this class of functions. Definition 5 (Binary Proper Scoring Rule, [Schervish, 1989]). A function f : [0, 1] ? {0, 1} ! R [ {?1} is a binary proper scoring rule if it is finite except possibly on its boundary and whenever for p 2 [0, 1] it holds that p 2 maxq2[0,1] p ? f (q, 1) + (1 p) ? f (q, 0). A function f is a strictly proper scoring rule if p is the only value that maximizes the expectation. The first and perhaps most well-known proper scoring rule is the Brier scoring rule. Example 6 (Brier Scoring Rule, [Brier, 1950]). The Brier score is Br(p, r) = 2pr where pr = pr + (1 p)(1 r) is the report for the event that materialized. (p2 + (1 p)2 ) We will use the Brier scoring rule in Section 5 to construct an incentive-compatible randomized algorithm with good guarantees. The following proposition follows directly from Definitions 3 and 5. Proposition 7. Weight-update rule f is (strictly) IC if and only if f is a (strictly) proper scoring rule. Surprisingly, this result remains true even when experts can collude. While the realizations are obviously correlated, linearity of expectation causes the sum to be maximized exactly when each expert maximizes their own expected weight. Proposition 8. A weight-update rule f is (strictly) incentive compatible for groups with transferable utility if and only if f is a (strictly) proper scoring rule. Thus, for online prediction with selfish experts, we get TU-GIC ?for free.? It is quite uncommon for problems in non-cooperate game theory to admit good TU-GIC solutions. For example, results for auctions (either for revenue or welfare) break down once bidders collude, see e.g. [Goldberg and Hartline, 2005]. In the remainder of the paper we will simply use IC to refer to IC and TU-GIC, as strictly proper scoring rules yield algorithms that satisfy both definitions. Thus, for IC algorithms we are restricted to considering (bounded) proper scoring rules as weightupdate rules. 
Conversely, any bounded scoring rule can be used, possibly after an affine transformation (which preserves properness). Are there any proper scoring rules that give an online prediction algorithm with a good performance guarantee? The standard algorithm for quadratic losses yields a weight-update function that is equivalent to the Brier strictly proper scoring rule, and thus is IC. The standard algorithm with absolute losses is not IC, so in the remainder of this paper we discuss this setting in more detail.

8 Note that TU-GIC is a strictly stronger concept than IC and group IC with nontransferable utility (NTU-GIC) [Moulin, 1999][Jain and Mahdian, 2007].

3 Deterministic Algorithms for Selfish Experts

This section studies the question of whether there are good online prediction algorithms with selfish experts for the absolute loss function. We restrict our attention here to deterministic algorithms; Section 5 gives a randomized algorithm with good guarantees. Proposition 7 tells us that for selfish experts to have a strict incentive to report truthfully, the weight-update rule must be a strictly proper scoring rule. This section gives a deterministic algorithm based on the spherical strictly proper scoring rule that has no $(2 + \sqrt{2})$-regret (Theorem 10). Additionally, we consider the question of whether the non-truthful reports from experts under the standard (non-IC) WM algorithm are harmful. We show that this is the case by proving that it is not a no $\alpha$-regret algorithm for any constant $\alpha$ smaller than 4 (Proposition 11). This shows that, when experts are selfish, the IC online prediction algorithm with the spherical rule outperforms the standard WM algorithm (in the worst case).

Online Prediction using a Spherical Rule. We next give an algorithm that uses a strictly proper scoring rule that is based on the spherical scoring rule.9 Consider the following weight-update rule:

$$f_{sp}\left(p_i^{(t)}, r^{(t)}\right) = 1 - \eta \left(1 - \frac{1 - \left|p_i^{(t)} - r^{(t)}\right|}{\sqrt{p_i^{(t)} \cdot p_i^{(t)} + (1 - p_i^{(t)}) \cdot (1 - p_i^{(t)})}}\right). \quad (2)$$

The following proposition establishes that this is in fact a strictly proper scoring rule. Due to space constraints, all proofs appear in Appendix A of the supplementary material.

Proposition 9. The spherical weight-update rule in (2) is a strictly proper scoring rule.

In addition to incentivizing truthful reporting, the WM algorithm with the update rule $f_{sp}$ does not do much worse than the best expert in hindsight.

Theorem 10. WM with weight-update rule (2) for $\eta = O(1/\sqrt{T}) < \tfrac12$ has no $(2 + \sqrt{2})$-regret.
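A minimal sketch of the update rule in Eq. (2), assuming NumPy; variable names are ours. Note that the score term $p_r / \|(p, 1-p)\|_2$ lies in $[0, 1]$, so the update factor stays in $[1 - \eta, 1]$ and weights remain positive.

```python
import numpy as np

def spherical_update(p, r, eta):
    """IC weight-update factor f_sp(p, r) from Eq. (2).

    p: reported prediction in [0, 1]; r: realized outcome in {0, 1}.
    """
    p_r = 1.0 - np.abs(p - r)                # probability put on outcome r
    norm = np.sqrt(p ** 2 + (1.0 - p) ** 2)  # L2 norm of (p, 1 - p)
    return 1.0 - eta * (1.0 - p_r / norm)

# Example: a confident correct report is penalized less than a hedged one.
assert spherical_update(0.9, 1, 0.1) > spherical_update(0.6, 1, 0.1)
```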
True Loss of the Non-IC Standard Rule. It is instructive to compare the guarantee in Theorem 10 with the performance of the standard (non-IC) WM algorithm. WM with the standard weight update function $f(p_i^{(t)}, r^{(t)}) = 1 - \eta\,|p_i^{(t)} - r^{(t)}|$ for $\eta \in (0, \tfrac12)$ has no 2-regret with respect to the reported loss of experts. However, this algorithm incentivizes extremal reports (for details see Appendix B in the supplementary material), and in the worst case, this algorithm's loss can be as bad as 4 times the true loss of the best expert in hindsight. Theorem 10 shows that a suitable IC algorithm obtains a superior worst-case guarantee.

Proposition 11. The standard WM algorithm with weight-update rule $f(p_i^{(t)}, r^{(t)}) = 1 - \eta\,|p_i^{(t)} - r^{(t)}|$ results in a total worst-case loss no better than $M^{(T)} \ge 4 \cdot \min_i m_i^{(T)} - o(1)$.

4 The Cost of Selfish Experts

We now address the third fundamental question: whether or not online prediction with selfish experts is strictly harder than with honest experts. As there exists a deterministic algorithm for honest experts with no 2-regret, showing a separation between honest and selfish experts boils down to proving that there exists a constant $\delta > 0$ such that the best possible no $\alpha$-regret algorithm has $\alpha = 2 + \delta$. In this section we show that such a $\delta$ exists, and that it is independent of the learning rate. Hence the lower bound also holds for algorithms that, like the classical prediction algorithms, use a time-varying learning rate. Due to space considerations, this section only states the main results; for details and proofs refer to the supplementary materials, where in Appendix D we give the results for IC algorithms, and in Appendix E we give the results for the non-IC algorithms. We extend these results to randomized algorithms in Section 5, where we rule out the existence of a (possibly randomized) no-regret algorithm for selfish experts.

9 In Appendix G in the supplementary materials we give an intuition for why this rule yields better results than other natural candidates, such as the Brier scoring rule.

IC Algorithms. To prove the lower bound, we have to be specific about which set of algorithms we consider. To cover algorithms that have a decreasing learning parameter, we first show that any positive proper scoring rule can be interpreted as having a learning parameter $\eta$.

Proposition 12. Let $f$ be any strictly proper scoring rule. We can write $f$ as $f(p, r) = a + b f'(p, r)$ with $a \in \mathbb{R}$, $b \in \mathbb{R}_+$, and $f'$ a strictly proper scoring rule with $\min(f'(0, 1), f'(1, 0)) = 0$ and $\max(f'(0, 0), f'(1, 1)) = 1$. We call $f' : [0, 1] \times \{0, 1\} \to [0, 1]$ a normalized scoring rule.

Using normalized scoring rules, we can define a family of scoring rules with different learning rates $\eta$. Define $\mathcal{F}$ as the following family of proper scoring rules generated by a normalized strictly proper scoring rule $f$:

$$\mathcal{F} = \{f'(p, r) = a\,(1 + \eta\,(f(p, r) - 1)) : a > 0 \text{ and } \eta \in (0, 1)\}$$

By Proposition 12, the union of families generated by normalized strictly proper scoring rules covers all strictly proper scoring rules. Using this we can now formulate the class of deterministic algorithms that are incentive compatible.

Definition 13 (Deterministic IC Algorithms). Let $\mathcal{A}_d$ be the set of deterministic algorithms that update weights by $w_i^{(t+1)} = a\,(1 + \eta\,(f(p_i^{(t)}, r^{(t)}) - 1))\, w_i^{(t)}$, for a normalized strictly proper scoring rule $f$ and $\eta \in (0, \tfrac12)$, with $\eta$ possibly decreasing over time. For $q = \sum_{i=1}^n w_i^{(t)} p_i^{(t)} / \sum_{i=1}^n w_i^{(t)}$, $A$ picks $q^{(t)} = 0$ if $q < \tfrac12$, $q^{(t)} = 1$ if $q > \tfrac12$, and uses any deterministic tie-breaking rule for $q = \tfrac12$.

Using this definition we can now state our main lower bound result for IC algorithms:

Theorem 14. For the absolute loss function, there does not exist a deterministic and incentive-compatible algorithm $A \in \mathcal{A}_d$ with no 2-regret.

Of particular interest are symmetric scoring rules, which occur often in practice, and which have a relevant parameter that drives the lower bound results:

Definition 15 (Scoring Rule Gap). The scoring rule gap of family $\mathcal{F}$ with generator $f$ is $\gamma = f(\tfrac12) - \tfrac12\,(f(0) + f(1)) = f(\tfrac12) - \tfrac12$.

By definition, the scoring rule gap for strictly proper scoring rules is strictly positive, and it drives the lower bound for symmetric functions:

Lemma 16. Let $\mathcal{F}$ be a family of scoring rules generated by a symmetric strictly proper scoring rule $f$, and let $\gamma$ be the scoring rule gap of $\mathcal{F}$. In the worst case, MW with any scoring rule $f'$ from $\mathcal{F}$ with $\eta \in (0, \tfrac12)$ can do no better than $M^{(T)} \ge \left(2 + \frac{1}{\lceil \gamma^{-1} \rceil}\right) \cdot m_i^{(T)}$.
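Anticipating the computation in the next paragraph, here is a small sketch, assuming NumPy, that computes the scoring rule gap of Definition 15 for the normalized spherical rule and evaluates the bound of Lemma 16; the helper names are ours.

```python
import numpy as np

def spherical_normalized(p, r):
    """Normalized spherical rule: p_r / sqrt(p^2 + (1-p)^2), which maps
    [0, 1] x {0, 1} into [0, 1] as required by Proposition 12."""
    p_r = 1.0 - abs(p - r)
    return p_r / np.sqrt(p ** 2 + (1 - p) ** 2)

def scoring_rule_gap(f):
    # gamma = f(1/2) - 1/2 for a normalized symmetric rule (Definition 15).
    return f(0.5, 1) - 0.5

gamma = scoring_rule_gap(spherical_normalized)  # sqrt(2)/2 - 1/2 ~= 0.207
lemma16_bound = 2 + 1 / np.ceil(1 / gamma)      # 2.2 for the spherical rule
print(gamma, lemma16_bound)
```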
As a consequence of Lemma 16, we can calculate lower bounds for specific strictly proper scoring rules. For example, the spherical rule used in Section 3 is a symmetric strictly proper scoring rule with gap parameter γ = √2/2 − 1/2 ≈ 0.2071, and hence 1/⌈1/γ⌉ = 1/5.

Non-IC Algorithms. What about non-incentive-compatible algorithms? Could it be that, even with experts reporting strategically instead of honestly, there is a deterministic algorithm with loss at most twice that of the best expert in hindsight (or a randomized algorithm with vanishing regret), to match the classical results for honest experts? Under mild technical conditions, the answer is no. The following definition captures how players are incentivized to report differently from their beliefs.

Definition 17 (Rationality Function). For a weight-update function f, let ρ_f : [0, 1] → [0, 1] be the function from beliefs to predictions, such that reporting ρ_f(b) is rational for an expert with belief b.

Under mild technical conditions on the rationality function, we show our main lower bound for (potentially non-IC) algorithms.

Theorem 18. For a weight-update function f with continuous or non-strictly increasing rationality function ρ_f, there is no deterministic no 2-regret algorithm.

Note that Theorem 18 covers the standard algorithm, as well as other common update rules such as the Hedge update rule f_Hedge(p_i^{(t)}, r^{(t)}) = e^{−η|p_i^{(t)} − r^{(t)}|} [Freund and Schapire, 1997], and all IC methods, since they have the identity rationality function (though the bounds in Theorem 14 are stronger).

5 Randomized Algorithms: Upper and Lower Bounds

Impossibility of Vanishing Regret. We now consider randomized online learning algorithms, which can typically achieve better worst-case guarantees than deterministic algorithms. For example, with honest experts, there are randomized algorithms with no 1-regret. Unfortunately, the lower bounds in Section 4 imply that no such result is possible for randomized algorithms (more details in Appendix F).

Corollary 19. Any incentive-compatible randomized weight-update algorithm, or non-IC randomized algorithm with continuous or non-strictly increasing rationality function, cannot be no 1-regret.

An IC Randomized Algorithm. While we cannot hope to achieve a no-regret algorithm for online prediction with selfish experts, we can do better than the deterministic algorithm from Section 3. Consider the following class of randomized algorithms:

Definition 20 (θ-randomized weighted majority). Let A_r be the class of algorithms that maintain expert weights as in Definition 1. Let b^{(t)} = Σ_{i=1}^{n} ( w_i^{(t)} / Σ_{j=1}^{n} w_j^{(t)} ) · p_i^{(t)} be the weighted prediction. For parameter θ ∈ [0, 1/2], the algorithm chooses 1 with probability

p^{(t)} = 0 if b^{(t)} ≤ θ;  p^{(t)} = b^{(t)} if θ < b^{(t)} ≤ 1 − θ;  p^{(t)} = 1 otherwise.

We call algorithms in A_r θ-RWM algorithms. We will use the Brier rule f_Br(p_i^{(t)}, r^{(t)}) = 1 − η ( ((p_i^{(t)})² + (1 − p_i^{(t)})² + 1)/2 − (1 − s_i^{(t)}) ) with s_i^{(t)} = |p_i^{(t)} − r^{(t)}|.

Theorem 21. Let A ∈ A_r be a θ-RWM algorithm with the Brier weight-update rule f_Br, with θ = 0.382 and η = O(1/√T) ∈ (0, 1/2). A has no 2.62-regret.
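The following sketch (our own addition) implements one round of the θ-RWM algorithm of Definition 20 with the Brier update f_Br; the handling of randomness is our own choice.

import numpy as np

rng = np.random.default_rng()

def f_br(p, r, eta):
    # Brier weight-update rule with s = |p - r|
    s = abs(p - r)
    return 1.0 - eta * ((p**2 + (1.0 - p)**2 + 1.0) / 2.0 - (1.0 - s))

def theta_rwm_round(weights, reports, eta, theta, realization):
    b = np.dot(weights, reports) / weights.sum()   # weighted prediction b^(t)
    if b <= theta:
        prob_one = 0.0
    elif b <= 1.0 - theta:
        prob_one = b
    else:
        prob_one = 1.0
    prediction = int(rng.random() < prob_one)      # predict 1 with probability p^(t)
    new_weights = weights * np.array([f_br(p, realization, eta) for p in reports])
    return prediction, new_weights

With θ = 0 this reduces to the regular RWM algorithm used in the simulations below.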
6 Simulations

The theoretical results presented so far indicate that, when faced with selfish experts, one should use an IC weight-update rule, and that rules with a smaller scoring rule gap are better. Two objections to these conclusions are the following. First, the presented results are worst-case and may not represent behavior on typical inputs; it is therefore of particular interest to see whether, on non-worst-case inputs, the non-IC standard weight-update rule does better or worse than the IC methods proposed in this paper. Second, there is a gap between our upper and lower bounds for IC rules, so it is interesting to see what numerical regret is obtained.

Results. In our first simulation, experts are represented by a simple two-state hidden Markov model (HMM) with a "good" state and a "bad" state. The realization r^{(t)} is given by a fair coin. For r^{(t)} = 0 (otherwise beliefs are reversed), in the good state expert i believes b_i^{(t)} ~ min{Exp(1)/5, 1}, and in the bad state b_i^{(t)} ~ U[1/2, 1]. The probability to exit a state is 1/10 for both states. This data-generating process models the fact that experts who have information about the event are more accurate than experts who lack the information. Figure 1a shows the regret as a function of time for the standard (non-IC) algorithm and for IC scoring rules, including one from the Beta family [Buja et al., 2005] with α = β = 1/2. For the IC methods, experts report p_i^{(t)} = b_i^{(t)}; for the standard algorithm, p_i^{(t)} = 1 if b_i^{(t)} ≥ 1/2 and p_i^{(t)} = 0 otherwise. The y-axis is the ratio of the total loss of each algorithm to the loss of the best expert at that time. The plot is for 10 experts, T = 10,000, η = 10⁻², and the randomized versions of the algorithms (footnote 10), averaged over 30 runs. Varying the model parameters, and using the deterministic versions, shows similar results. Each of the IC methods does significantly better than the standard weight-update algorithm, and even at T = 200,000 (not shown in the graph) the IC methods have a regret factor of about 1.003, whereas the standard algorithm still has 1.14. This gives credence to the notion that failing to account for incentive issues is problematic beyond the worst-case bounds presented earlier. Moreover, while there is a worst-case lower bound that rules out no-regret, for natural synthetic data the loss of all the IC algorithms approaches that of the best expert in hindsight, while the standard algorithm fails to do this. This seems to indicate that eliciting the truthful beliefs of the experts is more important than the exact weight-update rule.

Footnote 10: Here we use the regular RWM algorithm, so in the notation of Section 5, we have θ = 0.

Figure 1: Regret for different data-generating processes. (a) The HMM data-generating process. (b) The greedy lower bound instance.

Table 1: Comparison of lower bound results with simulation. The simulation is run for T = 10,000, η = 10⁻⁴, and we report the average of 30 runs. For the lower bounds, the first number is the lower bound from Lemma 16, i.e. 2 + 1/⌈1/γ⌉; the second number (in parentheses) is 2 + γ.

Rule      | Greedy LB | LB Sim | Lem 16 LB
Beta .1   | 2.3708    | 2.4414 | 2.33 (2.441)
Beta .5   | 2.2983    | 2.3186 | 2.25 (2.318)
Beta .7   | 2.2758    | 2.2847 | 2.25 (2.285)
Beta .9   | 2.2584    | 2.2599 | 2.25 (2.260)
Brier     | 2.2507    | 2.2502 | 2.25
Spherical | 2.2071    | 2.2070 | 2.2 (2.207)
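For reproducibility, the following sketch (our own addition) simulates the two-state HMM data-generating process of Figure 1a; the random seed and the choice of initial states are illustrative assumptions not specified in the text.

import numpy as np

rng = np.random.default_rng(0)

def simulate_beliefs(n_experts=10, T=10_000):
    good = rng.random(n_experts) < 0.5          # initial states (our assumption)
    R, B = np.empty(T, int), np.empty((T, n_experts))
    for t in range(T):
        R[t] = rng.integers(0, 2)               # fair-coin realization r^(t)
        b = np.where(good,
                     np.minimum(rng.exponential(1.0, n_experts) / 5.0, 1.0),
                     rng.uniform(0.5, 1.0, n_experts))
        B[t] = b if R[t] == 0 else 1.0 - b      # beliefs are reversed when r^(t) = 1
        good ^= rng.random(n_experts) < 0.1     # exit each state with probability 1/10
    return R, B

The returned realizations R and beliefs B can be fed to the WM and θ-RWM rounds sketched earlier to reproduce curves qualitatively like Figure 1a.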
Comparison of LB Instances. We consider both the lower bound instance described in the proof of Lemma 16, and a greedy version that punishes the algorithm every time w_0^{(t)} is "sufficiently" large (footnote 11). Figure 1b shows the regret for different algorithms on the greedy lower bound instance. Table 1 shows that it very closely traces 2 + γ, as do the numerical results for the lower bound instance from Lemma 16. In fact, for the analysis, we needed to use ⌈1/γ⌉ when determining the first phase of the instance. When we instead use 1/γ numerically, the regret seems to trace 2 + γ quite closely, rather than the weaker proven lower bound of 2 + 1/⌈1/γ⌉. Table 1 shows that the analysis of Lemma 16 is essentially tight (up to the rounding of 1/γ). Closing the gap between the lower and upper bound requires finding a different lower bound instance, or a better analysis for the upper bound.

7 Open Problems

There are a number of interesting questions that this work raises. First of all, our utility model effectively causes experts to optimize their weight independently of other experts. Bayarri and DeGroot [1989] discuss different objective functions for experts, including optimizing relative weight among experts under different informational assumptions. These would impose different constraints as to which algorithms lead to truthful reporting, and it would be interesting to see if no-regret learning is possible in this setting. It also remains an open problem to close the gap between the best known upper and lower bounds that we presented in this paper. The simulations showed that the analysis for the lower bound instances is almost tight, so this requires a novel upper bound and/or a different lower bound instance. Finally, strictly proper scoring rules are also well-defined beyond binary outcomes. It would be interesting to see what bounds can be proved for predictions over more than two outcomes.

Footnote 11: When w_0^{(t)} is sufficiently large, we make e_0 (and thus the algorithm) wrong twice: b_0^{(t)} = 0, b_1^{(t)} = 1, b_2^{(t)} = 1/2, r^{(t)} = 1, and b_0^{(t+1)} = 0, b_1^{(t+1)} = 1/2, b_2^{(t+1)} = 1, r^{(t+1)} = 1. "Sufficiently" here means that the weight of e_0 is high enough for the algorithm to follow its advice during both steps.

References

Jacob Abernethy, Yiling Chen, and Jennifer Wortman Vaughan. Efficient market making via convex optimization, and a connection to online learning. ACM Transactions on Economics and Computation, 1(2):12, 2013.

Moshe Babaioff, Robert D. Kleinberg, and Aleksandrs Slivkins. Truthful mechanisms with implicit payment computation. In Proceedings of the 11th ACM Conference on Electronic Commerce, EC '10, pages 43-52, New York, NY, USA, 2010. ACM. doi: 10.1145/1807342.1807349.

M. J. Bayarri and M. H. DeGroot. Optimal reporting of predictions. Journal of the American Statistical Association, 84(405):214-222, 1989. doi: 10.1080/01621459.1989.10478758.

J. Eric Bickel and Seong Dae Kim. Verification of the weather channel probability of precipitation forecasts. Monthly Weather Review, 136(12):4867-4881, 2008.

John P. Bonin. On the design of managerial incentive structures in a decentralized planning environment. The American Economic Review, 66(4):682-687, 1976.

Craig Boutilier. Eliciting forecasts from self-interested experts: scoring rules for decision makers. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 2, pages 737-744. International Foundation for Autonomous Agents and Multiagent Systems, 2012.

Glenn W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1-3, 1950.

Michael Brückner and Tobias Scheffer. Stackelberg games for adversarial prediction problems. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 547-555. ACM, 2011.

Andreas Buja, Werner Stuetzle, and Yi Shen. Loss functions for binary class probability estimation and classification: Structure and applications. 2005.
Yang Cai, Constantinos Daskalakis, and Christos H. Papadimitriou. Optimum statistical estimation with strategic data sources. In COLT, pages 280-296, 2015.

Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321-352, 2007.

Yiling Chen and David M. Pennock. Designing markets for prediction. AI Magazine, 31(4):42-52, 2010.

Ofer Dekel, Felix Fischer, and Ariel D. Procaccia. Incentive compatible regression learning. Journal of Computer and System Sciences, 76(8):759-777, 2010.

Peter Frazier, David Kempe, Jon Kleinberg, and Robert Kleinberg. Incentivizing exploration. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, pages 5-22. ACM, 2014.

Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.

Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359-378, 2007.

Andrew V. Goldberg and Jason D. Hartline. Collusion-resistant mechanisms for single-parameter agents. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 620-629. Society for Industrial and Applied Mathematics, 2005.

Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. Strategic classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, pages 111-122. ACM, 2016.

Christian Franz Horn, Bjoern Sven Ivens, Michael Ohneberg, and Alexander Brem. Prediction markets: a literature review 2014. The Journal of Prediction Markets, 8(2):89-126, 2014.

Kamal Jain and Mohammad Mahdian. Cost sharing. Algorithmic Game Theory, pages 385-410, 2007.

Victor Richmond R. Jose, Robert F. Nau, and Robert L. Winkler. Scoring rules, generalized entropy, and utility maximization. Operations Research, 56(5):1146-1157, 2008.

Sham M. Kakade, Adam Tauman Kalai, and Katrina Ligett. Playing games with approximation algorithms. SIAM Journal on Computing, 39(3):1088-1106, 2009.

Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.

Yang Liu and Yiling Chen. A bandit framework for strategic regression. In Advances in Neural Information Processing Systems, pages 1813-1821, 2016.

Yishay Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis, and Zhiwei Steven Wu. Bayesian exploration: Incentivizing exploration in Bayesian games. arXiv preprint arXiv:1602.07570, 2016.

John McCarthy. Measures of the value of information. Proceedings of the National Academy of Sciences of the United States of America, 42(9):654, 1956.

Edgar C. Merkle and Mark Steyvers. Choosing a strictly proper scoring rule. Decision Analysis, 10(4):292-304, 2013.

Nolan Miller, Paul Resnick, and Richard Zeckhauser. Eliciting informative feedback: The peer-prediction method. Management Science, 51(9):1359-1373, 2005.

Hervé Moulin. Incremental cost sharing: Characterization by coalition strategy-proofness. Social Choice and Welfare, 16(2):279-320, 1999.

Tim Roughgarden and Éva Tardos. Introduction to the inefficiency of equilibria. Algorithmic Game Theory, 17:443-459, 2007.

Leonard J. Savage. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association, 66(336):783-801, 1971.

Mark J. Schervish. A general method for comparing probability assessors.
The Annals of Statistics, pages 1856-1879, 1989.

Nihar Bhadresh Shah and Denny Zhou. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In Advances in Neural Information Processing Systems, pages 1-9, 2015.

William Thomson. Eliciting production possibilities from a well-informed manager. Journal of Economic Theory, 20(3):360-380, 1979.
A Hybrid Linear/Nonlinear Approach to Channel Equalization Problems

Wei-Tsih Lee
John Pearson
David Sarnoff Research Center
CN5300
Princeton, NJ 08543

Abstract

The channel equalization problem is an important problem in high-speed communications: the sequences of symbols transmitted are distorted by neighboring symbols. Traditionally, the channel equalization problem is treated as a channel-inversion operation. One problem with this approach is that there is no direct correspondence between the error probability and the residual error produced by the channel-inversion operation. In this paper, the optimal equalizer design is formulated as a classification problem. The optimal classifier can be constructed by the Bayes decision rule; in general it is nonlinear. An efficient hybrid linear/nonlinear equalizer approach is proposed to train the equalizer. The error probability of the new linear/nonlinear equalizer is shown to be better than that of a linear equalizer on an experimental channel.

1 INTRODUCTION

In a typical communication system, a sequence of symbols {I_k} is transmitted through a linear time-dispersive channel h(t). Let x(t) be the received signal; it can be written as

x(t) = Σ_j I_j h(t − jT) + w(t)   (1)

where h(t) denotes the elementary pulse waveform and w(t) represents random noise with an i.i.d. Gaussian distribution. In Quadrature Amplitude Modulation (QAM), the symbols {I_k} are represented by complex numbers. During transmission, interference from neighboring symbols may distort the received signals; this is called Intersymbol Interference (ISI). It arises mainly for the following reasons: a nonideal channel which introduces phase or amplitude distortions, phase jitter, and impulse noise. Thus, equalization techniques are used to reduce the ISI.

2 ADAPTIVE LINEAR/RADIAL BASIS FUNCTION APPROACH TO EQUALIZER DESIGN

Traditionally, the channel equalization problem is considered as a channel-inversion operation. The idea is that an equalizer is constructed so as to undo the interference from neighboring symbols as they pass through a linear dispersive channel. This view can be used to explain different equalizer structures (zero-forcing, least mean square, and decision feedback) and their performance [Proakis, 1989]. One problem with this approach is that, in general, there is no direct correspondence between the error probability and the residual error produced by the channel-inversion operation. In [Gibson et al., 1991], the authors proposed a classification viewpoint for equalizer design. They suggested that the optimal equalizer should be a classifier whose decision boundary is constructed according to the Bayes decision rule. Compared with the channel-inversion approach, the outputs of the receiver are used as features for a classifier, and the decision is made solely based on the classifier output and, hence, on the feature distribution. As is well known [Fukunaga, 1978], the optimal decision boundaries can rapidly be computed if the features are Gaussian distributed. However, there is no a priori knowledge of the structure of the optimal equalizer (classifier) for the outputs of a time-dispersive channel. In the next section, we prove that for a linear channel, the optimal equalizer is nonlinear.

2.1 THE OPTIMAL EQUALIZER OF A LINEAR TIME-DISPERSIVE CHANNEL

Let us first consider a two-value equalization problem. Symbols with two possible values (−1, 1) are transmitted. Let the channel be represented in discrete form as an FIR filter with taps (h_i), i = 0, ..., N−1. The output x_i can be written as
x_i = Σ_{j=0}^{N−1} h_j I_{i−j} + w_i.   (2)

The optimal equalizer design is equivalent to the following Bayes decision problem: given {x_i}, decide Î_i by

Î_i = 1 if P(I_i = 1 | x_i, x_{i+1}, ..., x_{i+N−1}) > P(I_i = −1 | x_i, x_{i+1}, ..., x_{i+N−1}), and Î_i = −1 otherwise,   (3)

where P(I_i = 1 | x_i, x_{i+1}, ..., x_{i+N−1}) is the posterior probability of the transmitted symbol I_i being 1 given the channel outputs {x_i}. By Bayes' theorem, this posterior can be expanded into the following form:

P(I_i = 1 | x_i, ..., x_{i+N−1}) ∝   (4)

Σ_{k's ∈ {1,−1}} Π_{j=i}^{i+N−1} P(x_j | I_i = 1, neighboring symbols = k's) · P(I_i = 1, neighboring symbols = k's),   (5)

where the sum runs over all assignments of the neighboring symbols I_{i−N+1}, ..., I_{i−1}, I_{i+1}, ..., I_{i+N−1} that influence the observed window. Since each conditional probability P(x_j | ...) in (5) is Gaussian, the numerator in (5) is a mixture of Gaussian densities. Plugging (5) into (3), the Bayes decision rule determines the optimal decision boundary as the solution of the corresponding equality. Since the denominator is the same on both sides, it can be ignored. Rearranging the equation, it can be written as a summation of exponential functions. The solution of this equation is a nonlinear function of {x_i}; in general, no analytical form can be found, but it can be solved by numerical methods. Thus, the optimal decision boundary can be determined. The result extends to multi-class problems.

Based on the result established above, we have a theoretical justification for a nonlinear equalizer approach to linear time-dispersive channels. A theoretical comparison of the performances of linear and optimal equalizers can be found in [Gibson et al., 1991]. They concluded that the performance of linear equalizers cannot be improved by increasing the tap length, which also suggests that a nonlinear equalizer approach is necessary. Another reason for a nonlinear equalization approach is channels with spectral nulls [Proakis, 1989]: in this case, a linear equalizer cannot achieve the desired performance due to "noise enhancement".

2.2 NONLINEAR EQUALIZER DESIGN PROBLEM

There are several approaches to nonlinear equalizer design. To reduce the least mean square (LMS) error, the Volterra-series approach uses high-order product terms of the input as new features. The tree-structured linear equalizer method [Gelfand et al., 1991] partitions the feature space and makes a piecewise-linear approximation to the optimal nonlinear equalizer. As reported in [Gelfand et al., 1991], the tree-structured linear equalizer approach provides reasonably fast convergence and lower error probability compared with linear and Volterra-series approaches; the problem is that many training samples are needed to achieve good performance. A neural network approach, the multilayer perceptron (MLP) [Gibson et al., 1991], trains 3- or 4-layer interconnected perceptrons to form the nonlinear decision boundary. It is observed in [Gibson et al., 1991] that the performance of an MLP equalizer is close to the optimal Bayes classifier; however, the training time is long and a fine-tuning procedure is used. A nonlinear equalizer approach using radial basis functions is also reported in [Chen et al., 1991]. To put equalizers into practical use, long training times are impractical, and a fine-adjusting procedure is not allowed. Hence, an efficient, automatic procedure for nonlinear equalizer design is desired. To achieve this goal, we propose a hybrid linear and radial basis function approach for automatic nonlinear equalizer design.
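As a concrete numerical illustration of the Bayes decision rule (3)-(5), the following sketch evaluates the Gaussian-mixture posterior ratio for a binary FIR channel. All parameter values here (channel taps, noise level) are illustrative assumptions, not taken from the paper.

import itertools
import numpy as np

h = np.array([0.5, 1.0, 0.5])      # assumed FIR channel taps h_0..h_{N-1}
N = len(h)
sigma = 0.5                        # assumed noise standard deviation

def posterior_ratio(x_window):
    """Return P(I_i=1 | x_i..x_{i+N-1}) / P(I_i=-1 | ...) up to the common
    denominator, by summing the Gaussian mixture in (5) over the 2(N-1)
    unknown neighboring symbols (uniform prior over symbol sequences)."""
    scores = {1: 0.0, -1: 0.0}
    for s in (1, -1):
        for nb in itertools.product([1, -1], repeat=2 * (N - 1)):
            past, future = nb[:N - 1], nb[N - 1:]
            symbols = np.array(list(past) + [s] + list(future))  # I_{i-N+1..i+N-1}
            lik = 1.0
            for j in range(N):     # x_{i+j} depends on I_{i+j-N+1..i+j}
                mean = np.dot(h[::-1], symbols[j:j + N])
                lik *= np.exp(-(x_window[j] - mean) ** 2 / (2 * sigma ** 2))
            scores[s] += lik
    return scores[1] / scores[-1]

def bayes_decision(x_window):
    return 1 if posterior_ratio(x_window) > 1.0 else -1

Plotting the decision regions of bayes_decision over a grid of received windows makes the nonlinearity of the optimal boundary visible directly.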
Although the optimal equalizer should be nonlinear, all these nonlinear design methods require long training times or a large number of training samples. Linear equalizers are not optimal, but they have the following advantages: easy training and fast convergence. It is also reported that the linear equalizer is relatively robust [Fukunaga, 1978]. Hence, it is desirable to combine the advantages of both linear and nonlinear equalizers. The hybrid structure should provide the desired properties: fast convergence, an automatic training procedure, and a low error rate. To satisfy these constraints, we propose a feature-space partitioning approach to hybrid equalizer design.

2.3 FEATURE-SPACE PARTITIONING APPROACH TO HYBRID EQUALIZER DESIGN

To design a hybrid linear/nonlinear equalizer, we adopt the feature-space partitioning concept. The idea is similar to the one developed in [Gelfand et al., 1991]. Here, we consider a partitioning method based on geometrical reasoning for equalization problems. The idea is based on the fact that linear equalizers can recover distorted signals except when strong noise pushes samples toward boundaries where two classes overlap. We consider as "confused" those samples near the decision boundaries. The separation of "confused" samples can be accomplished based on the output values of the linear equalizer: if the distance between the output value and the closest point in the signal constellation [Proakis, 1989] is greater than a threshold, then we consider the current sample "confused", meaning that it is close to a decision boundary. To achieve an accurate classification, we classify it by a nonlinear equalizer, which is constructed to separate the samples near the Bayes decision boundary.

The hybrid structure consists of a linear equalizer followed by a radial basis function (RBF) network, as shown in Fig. 1. An RBF network (Fig. 2) is a two-layered network with radial-basis-function nodes in the first layer and a weighted linear combination of the outputs of these nodes. Each feature vector consists of a collection of consecutive data from the channel; it is assumed that these data are properly time- and carrier-synchronized [Proakis, 1989]. For QAM, a complex-valued linear equalizer is adopted. The distance between the output value of the linear equalizer and the closest constellation point is computed and compared with the threshold as described above; the "confused" samples are classified by a nonlinear RBF equalizer. The output of an RBF network can be written as a weighted summation of node outputs:

f(x) = Σ_{i=1}^{N} w_i exp( −‖x − c_i‖² / σ² )   (6)

where f(x) is the output of the network. The output value of each node is computed according to a bell-shaped function centered at c_i; σ is the width of a node and w_i is the weight associated with the i-th node. In our experiments, the widths of all nodes are fixed, and the first N training samples are assigned to the centers of an N-node network. The weights are adjusted according to the stochastic gradient descent rule:

w_i ← w_i + η (d_k − f(x_k)) exp( −‖x_k − c_i‖² / σ² )   (7)

where η is the learning rate and d_k is the desired output of the network for training sample x_k.

To train a hybrid LE/RBF equalizer, a collection of training samples is used to adjust the parameters of the linear equalizer. The training samples for the RBF network are collected according to the distance rule described above; they are used to adjust the weights of the RBF network only. The classification of an unknown sample follows a similar rule: the output value of the linear equalizer is computed.
If the distance between the output value and the closest point in the signal constellation is smaller than the threshold, then the closest point is taken as the recovered signal. If not, the output of the RBF network is used to classify the sample: the constellation point closest to the RBF network output is used as the sample class. Note, however, that there is a different interpretation for the outputs of the linear and RBF equalizers. The function of the linear equalizer can be considered an approximation of channel inversion; hence, it is similar to a deconvolution computation [Proakis, 1989]. For an RBF network, by contrast, the output is a summation of weighted local Gaussian functions, and for closely located points the network is asked to give the same output by the training procedure. Thus, it is more a classification approach than a deconvolution method.

Fig. 1: System diagram of the hybrid linear/nonlinear equalizer.

Fig. 2: A radial-basis-function network.

This approach provides a design method for hybrid LE/RBF equalizers. The linear equalizers perform the channel inversion or the partitioning of the feature space, depending on the output value. A more complicated tree-structured equalizer [Gelfand et al., 1991] can be adopted for this purpose. The nonlinear RBF networks are used for classifying "confused" samples; they can be replaced by MLPs. Hence, the approach provides a general method for designing hybrid-structure equalizers. However, the trade-off between the complexity and efficiency of these combinations has to be considered. For example, a multilayer tree-structured equalizer can divide the space into smaller regions for finer classification, but the small number of training samples available in practice can be a problem for this method. An MLP network can be used as the nonlinear classifier; nevertheless, convergence time will be a major concern.

3 EXPERIMENT

We have applied our hybrid design method to a 4-QAM system. The channel is modeled by

x_j = 0.406 I_j + 0.814 I_{j−1} + 0.407 I_{j−2} + w_j   (8)

where I_j ∈ {−1−j, −1+j, 1−j, 1+j} (with j denoting the imaginary unit). A 7-tap complex linear equalizer is used for classifying the input, and the threshold for the nonlinear equalizer is 0.1. We use 4,000 training samples and 5,000 testing samples. A 400-node RBF network is used for the nonlinear equalizer; the first 400 "confused" training samples are used as the centers of the network. The network is trained according to (7), with the learning coefficient η chosen to be 0.01 and the width of an RBF node set to 1.0. Fig. 3 shows the symbol error probability vs. SNR, evaluated over the 5,000 testing samples. The hybrid LE/RBF network produces nearly a 10% reduction in error rate compared with the linear equalizer. This shows that a hybrid linear/RBF network equalizer can reduce the error rate by classifying "confused" samples near decision boundaries. No comparison with the Bayes classifier has been made. In our experiments, it is observed that the error rate can be reduced further by increasing the number of RBF nodes. This seems to imply that a large RBF network will in general produce a better classification result. However, since there are always limitations on computational resources (computation time and memory storage), the performance of the hybrid linear/RBF network is limited, especially in the high-signal-constellation case discussed below.
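Before turning to higher constellations, the following sketch (our own addition) simulates the experiment above: the channel (8), an LMS-trained complex linear equalizer, and the RBF stage (6)-(7) trained on "confused" samples. The constellation mapping, LMS step size, and decision delay are illustrative assumptions not specified in the paper.

import numpy as np

rng = np.random.default_rng(0)
CONST = np.array([-1-1j, -1+1j, 1-1j, 1+1j])    # 4-QAM symbols
T_SYM, TAPS, DELAY = 4000, 7, 1                 # training length, taps, decision delay
THRESH, SIGMA, ETA_RBF = 0.1, 1.0, 0.01         # paper's threshold, RBF width, rate

def channel(symbols, snr_db=20.0):
    # channel (8): x_j = 0.406 I_j + 0.814 I_{j-1} + 0.407 I_{j-2} + w_j
    x = 0.406*symbols + 0.814*np.roll(symbols, 1) + 0.407*np.roll(symbols, 2)
    noise_std = np.sqrt(10**(-snr_db/10) / 2)
    return x + noise_std*(rng.standard_normal(len(x)) + 1j*rng.standard_normal(len(x)))

def windows(x):
    return np.array([x[i:i+TAPS] for i in range(len(x) - TAPS + 1)])

sym = rng.choice(CONST, T_SYM)
X = windows(channel(sym))
d = sym[DELAY:DELAY + len(X)]                   # delay-aligned desired symbols

# LMS training of the complex linear equalizer (y = w^H x)
w = np.zeros(TAPS, complex)
for xk, dk in zip(X, d):
    w += 2e-3 * (dk - w.conj() @ xk).conj() * xk

# Collect "confused" samples (far from every constellation point) for the RBF
y_all = X @ w.conj()
dist = np.abs(y_all[:, None] - CONST[None, :]).min(axis=1)
confused = np.where(dist > THRESH)[0][:400]
centers = X[confused]                           # first 400 confused samples as centers
phi = lambda x: np.exp(-np.sum(np.abs(x - centers)**2, axis=1) / SIGMA**2)

w_rbf = np.zeros(len(centers), complex)
for idx in confused:                            # stochastic gradient update (7)
    w_rbf += ETA_RBF * (d[idx] - w_rbf @ phi(X[idx])) * phi(X[idx])

def equalize(xk):
    # hybrid decision rule: linear output if near a constellation point, else RBF
    y = w.conj() @ xk
    if np.abs(y - CONST).min() <= THRESH:
        return CONST[np.argmin(np.abs(y - CONST))]
    return CONST[np.argmin(np.abs(w_rbf @ phi(xk) - CONST))]

Applying equalize to windows of fresh test data and comparing against the transmitted symbols gives a symbol-error-rate curve of the kind shown in Fig. 3.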
Equalization at high signal constellations (16- and 64-QAM) has been tried. The results show no significant improvement, which can be explained by the increase in complexity. Recall that the RBF network is meant to separate the samples near the boundary. To deal with the increase in the number of classes due to a high signal constellation, the number of network nodes must increase proportionally. Since the rate of increase is exponential in the number of classes, a straightforward implementation of the RBF network method cannot be used for high signal constellations. In [Chen et al., 1991], the authors suggest a dynamical RBF network with adjustable center locations and widths. The algorithm runs in batch mode, and it is reported that the size of the network can be reduced dramatically by the dynamical RBF network method. However, for equalizer applications, an on-line version of the algorithm is needed.

Fig. 3: Error probability of the hybrid linear/radial-basis-function network equalizer for the linear channel x_j = 0.406 I_j + 0.814 I_{j−1} + 0.407 I_{j−2} + w_j with 4-QAM.

4 CONCLUSION AND DISCUSSION FOR HYBRID LE/RBF EQUALIZER DESIGN

By combining feature-space partitioning and nonlinear equalizers, we have developed a hybrid linear/nonlinear equalization approach. The major contribution of this research is to provide a theoretical justification for the nonlinear equalization approach for linear time-dispersive channels. A feature-space partitioning method based on a linear equalizer is proposed, and RBF networks for nonlinear equalization are integrated into the design to separate the samples near the decision boundary. The experiments on 4-QAM equalization have demonstrated the feasibility of the approach. For high-signal-constellation modulation, a dynamical RBF network method [Chen et al., 1991] has been suggested to overcome the problem of increasing complexity. The hybrid linear/nonlinear equalization approach combines the strengths of linear and nonlinear equalizers, and it offers a framework to integrate the deconvolution and classification methods. The approach can be generalized to include more complicated partitioning schemes and other nonlinear networks, such as MLPs, as well. More research needs to be conducted to make this approach practical for general use. The relationship between the performance of the hybrid equalizer and the tap length of the linear equalizer, and the width and number of RBF nodes, needs to be investigated. An on-line version of the dynamical RBF network [Chen et al., 1991] needs to be developed.

References

Proakis, J. G., Digital Communications, McGraw-Hill, New York, 1989.

Gibson, G. J., Siu, S., Cowan, C. F. N., "The Application of Nonlinear Structures to the Reconstruction of Binary Signals," IEEE Trans. on Signal Processing, vol. 39, no. 8, pp. 1877-1884, Aug. 1991.

Fukunaga, K., Introduction to Statistical Pattern Recognition, Academic Press, New York, 1978.

Gelfand, S. B., Ravishankar, C. S., and Delp, E. J., "Tree-structured Piecewise Linear Adaptive Equalization," ICC '91, pp. 1383-1386.

Chen, S., Gibson, G. J., Cowan, C. F. N., and Grant, P. M., "Reconstruction of binary signals using an adaptive radial-basis-function equalizer," Signal Processing, 22, pp. 77-93, 1991.

Chen, S., Cowan, C. F. N., and Grant, P. M., "Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks," IEEE Trans. on Neural Networks, vol. 2, no. 2, pp. 302-309, March 1991.
Tensor Biclustering

Soheil Feizi (Stanford University, sfeizi@stanford.edu)
Hamid Javadi (Stanford University, hrhakim@stanford.edu)
David Tse (Stanford University, dntse@stanford.edu)

Abstract

Consider a dataset where data is collected on multiple features of multiple individuals over multiple times. This type of data can be represented as a three-dimensional individual/feature/time tensor and has become increasingly prominent in various areas of science. The tensor biclustering problem computes a subset of individuals and a subset of features whose signal trajectories over time lie in a low-dimensional subspace, modeling similarity among the signal trajectories while allowing different scalings across different individuals or different features. We study the information-theoretic limit of this problem under a generative model. Moreover, we propose an efficient spectral algorithm to solve the tensor biclustering problem and analyze its achievability bound in an asymptotic regime. Finally, we show the efficiency of our proposed method on several synthetic and real datasets.

1 Introduction

Let T ∈ R^{n₁×n₂} be a data matrix whose rows and columns represent individuals and features, respectively. Given T, the matrix biclustering problem aims to find a subset of individuals (i.e., J₁ ⊆ {1, 2, ..., n₁}) which exhibit similar values across a subset of features (i.e., J₂ ⊆ {1, 2, ..., n₂}) (Figure 1-a). The matrix biclustering problem has been studied extensively in machine learning and statistics and is closely related to the problems of sub-matrix localization, planted clique, and community detection [1, 2, 3].

In modern datasets, however, instead of collecting data on every individual-feature pair at a single time, we may collect data at multiple times, so that one can visualize a trajectory over time for each individual-feature pair. This type of dataset has become increasingly prominent in different areas of science. For example, the roadmap epigenomics dataset [4] provides multiple histone modification marks for genome-tissue pairs, the genotype-tissue expression dataset [5] provides expression data on multiple genes for individual-tissue pairs, and there have been recent efforts to collect various omics data from individuals at different times [6].

Suppose we have n₁ individuals, n₂ features, and we collect data for every individual-feature pair at m different times. This data can be represented as a three-dimensional tensor T ∈ R^{n₁×n₂×m} (Figure 1-b). The tensor biclustering problem aims to compute a subset of individuals and a subset of features whose trajectories are highly similar. Similarity is modeled as the trajectories lying in a low-dimensional (say, one-dimensional) subspace (Figure 1-d). This definition allows different scalings across different individuals or features, and it is important in many applications such as omics datasets [6] because individual-feature trajectories often have their own intrinsic scalings. In particular, at each time the individual-feature data matrix may not exhibit a matrix bicluster separately. This means that repeated applications of matrix biclustering cannot solve the tensor biclustering problem. Moreover, owing to the same reason, trajectories in a bicluster can have large distances among themselves (Figure 1-d). Thus, a distance-based clustering of signal trajectories is likely to fail as well.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: (a) The matrix biclustering problem. (b) The tensor biclustering problem. (c) The tensor triclustering problem. (d) A visualization of a bicluster in a three-dimensional tensor. Trajectories in the bicluster (red points) form a low-dimensional subspace.

This problem formulation has two main differences with tensor triclustering, the natural generalization of matrix biclustering to a three-dimensional tensor (Figure 1-c). Firstly, unlike tensor triclustering, tensor biclustering has an asymmetric structure along tensor dimensions, inspired by the aforementioned applications. That is, since a tensor bicluster is defined as a subset of individuals and a subset of features with similar trajectories, the third dimension of the tensor (i.e., the time dimension) plays a different role compared to the other two dimensions. This is in contrast with tensor triclustering, where there is no such difference between the roles of the tensor dimensions in defining the cluster. Secondly, in tensor biclustering the notion of a cluster is defined via trajectories lying in a low-dimensional subspace, while in tensor triclustering a cluster is defined as a sub-cube with similar entries.

Finding statistically significant patterns in multi-dimensional data tensors has been studied in dimensionality reduction [7, 8, 9, 10, 11, 12, 13, 14] and topic modeling [15, 16, 17], among others. One related model is the spiked tensor model [7]. Unlike the tensor biclustering model, which is asymmetric along tensor dimensions, the spiked tensor model has a symmetric structure. Computational and statistical limits for the spiked tensor model have been studied in [8, 9, 10, 14], among others. For more details, see Supplementary Materials (SM) Section 1.3.

In this paper, we study information-theoretic and computational limits for the tensor biclustering problem under a statistical model described in Section 2. From a computational perspective, we present four polynomial-time methods and analyze their asymptotic achievability bounds. In particular, one of our proposed methods, namely tensor folding+spectral, outperforms the other methods both theoretically (under realistic model parameters) and numerically in several synthetic and real data experiments. Moreover, we characterize a fundamental limit under which no algorithm can solve the tensor biclustering problem reliably in a minimax sense. We show that above this limit, a maximum likelihood estimator (MLE), which has exponential computational complexity, can solve this problem with vanishing error probability.

1.1 Notation

We use T, X, and Z to represent the input, signal, and noise tensors, respectively. For any set J, |J| denotes its cardinality. [n] represents the set {1, 2, ..., n}, and J̄ = [n] − J. ‖x‖₂ = (xᵀx)^{1/2} is the second norm of the vector x, and x ⊗ y is the Kronecker product of two vectors x and y. The asymptotic notation a(n) = O(b(n)) means that there exists a universal constant c such that for sufficiently large n we have |a(n)| < c·b(n). If there exists c > 0 such that a(n) = O(b(n) log(n)^c), we use the notation a(n) = Õ(b(n)). The asymptotic notations a(n) = Ω(b(n)) and a(n) = Ω̃(b(n)) are the
same as b(n) = O(a(n)) and b(n) = Õ(a(n)), respectively. Moreover, we write a(n) = Θ(b(n)) iff a(n) = Ω(b(n)) and b(n) = Ω(a(n)); similarly, a(n) = Θ̃(b(n)) iff a(n) = Ω̃(b(n)) and b(n) = Ω̃(a(n)).

2 Problem Formulation

Let T = X + Z, where X is the signal tensor and Z is the noise tensor. Consider

T = X + Z = Σ_{r=1}^{q} σ_r u_r^{(J₁)} ⊗ w_r^{(J₂)} ⊗ v_r + Z,   (1)

where u_r^{(J₁)} and w_r^{(J₂)} have zero entries outside of the index sets J₁ and J₂, respectively. We assume σ₁ ≥ σ₂ ≥ ... ≥ σ_q > 0. Under this model, the trajectories X(J₁, J₂, :) form an at most q-dimensional subspace. We assume q ≪ min(m, |J₁| · |J₂|).

Definition 1 (Tensor Biclustering). The problem of tensor biclustering aims to compute the bicluster index sets (J₁, J₂) given T according to (1).

In this paper, we make the following simplifying assumptions: we assume q = 1, n = n₁ = n₂, and k = |J₁| = |J₂|. To simplify notation, we drop the superscripts (J₁) and (J₂) from u₁^{(J₁)} and w₁^{(J₂)}, respectively. Without loss of generality, we normalize the signal vectors such that ‖u₁‖ = ‖w₁‖ = ‖v₁‖ = 1. Moreover, we assume that for every (j₁, j₂) ∈ J₁ × J₂, δ ≤ u₁(j₁) ≤ cδ and δ ≤ w₁(j₂) ≤ cδ, where c is a constant. Under these assumptions, a signal trajectory can be written as X(j₁, j₂, :) = u₁(j₁) w₁(j₂) v₁; the scaling of this trajectory depends on the row- and column-specific parameters u₁(j₁) and w₁(j₂). Note that our analysis can be extended naturally to the more general setup of multiple embedded biclusters with q > 1; we discuss this in Section 7.

Next we describe the noise model. If (j₁, j₂) ∉ J₁ × J₂, we assume that the entries of the noise trajectory Z(j₁, j₂, :) are i.i.d. and each entry has a standard normal distribution. If (j₁, j₂) ∈ J₁ × J₂, we assume that the entries of Z(j₁, j₂, :) are i.i.d. and each entry has a Gaussian distribution with zero mean and variance σ_z². We analyze the tensor biclustering problem under two models for σ_z²:

- Noise Model I: In this model, we assume σ_z² = 1, i.e., the variance of the noise within and outside of the bicluster is the same. This is the noise model often considered in the analysis of sub-matrix localization [2, 3] and tensor PCA [7, 8, 9, 10, 11, 12, 14]. Although this model simplifies the analysis, it has the following drawback: under this noise model, for every value of σ₁, the average trajectory length in the bicluster is larger than the average trajectory length outside of the bicluster. See SM Section 1.2 for more details.

- Noise Model II: In this model, we assume σ_z² = max(0, 1 − σ₁²/(mk²)), i.e., σ_z² is chosen to minimize the difference between the average trajectory lengths within and outside of the bicluster. If σ₁² < mk², noise is added to make the average trajectory lengths within and outside of the bicluster comparable. See SM Section 1.2 for more details.

3 Computational Limits of the Tensor Biclustering Problem

3.1 Tensor Folding+Spectral

Recall the formulation of the tensor biclustering problem (1). Let

T_{(j₁,1)} ≜ T(j₁, :, :) and T_{(j₂,2)} ≜ T(:, j₂, :)   (2)

be the horizontal (first-mode) and lateral (second-mode) matrix slices of the tensor T, respectively, each oriented as an index-by-time matrix. One way to learn the embedded bicluster in the tensor is to compute the row and column indices whose trajectories are highly correlated with each other. To do that, we compute

C₁ ≜ Σ_{j₂=1}^{n} T_{(j₂,2)} T_{(j₂,2)}ᵀ and C₂ ≜ Σ_{j₁=1}^{n} T_{(j₁,1)} T_{(j₁,1)}ᵀ.   (3)
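As a minimal numpy sketch we have added, the folded covariance matrices of (3) can be computed as follows; the slice orientation is chosen so that, in the noiseless case, C₁ ≈ σ₁² u₁u₁ᵀ and C₂ ≈ σ₁² w₁w₁ᵀ, consistent with the discussion below.

import numpy as np

def folded_covariances(T):
    """T has shape (n1, n2, m)."""
    n1, n2, m = T.shape
    # C1: sum over lateral slices T(:, j2, :) of shape (n1, m)
    C1 = sum(T[:, j2, :] @ T[:, j2, :].T for j2 in range(n2))
    # C2: sum over horizontal slices T(j1, :, :) of shape (n2, m)
    C2 = sum(T[j1, :, :] @ T[j1, :, :].T for j1 in range(n1))
    # equivalently: C1 = np.einsum('ijt,kjt->ik', T, T); C2 = np.einsum('jit,jkt->ik', T, T)
    return C1, C2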
t Spectral Decomposition n 2 Combined Covariance ... t T(1, : , : ) T(1, : , : ) Bicluster Index Set (J2) T(n1, : , : ) T(k+1, : , : ) T(n1, : , : ) T(n1, : , : ) n2 n2 n2 ... ... Figure 2: A visualization of the tensor folding+spectral algorithm 1 to compute the bicluster index set J2 . The bicluster index set J1 can be computed similarly. Algorithm 1 Tensor Folding+Spectral Input: T , k ? 1 , the top eigenvector of C1 Compute u ? 1 , the top eigenvector of C2 Compute w ? 1| Compute J?1 , indices of the k largest values of |w Compute J?2 , indices of the k largest values of |? u1 | Output: J?1 and J?2 C1 represents a combined covariance matrix along the tensor columns (Figure 2). We refer to it as the folded tensor over the columns. If there was no noise, this matrix would be equal to ?12 u1 ut1 . Thus, its eigenvector corresponding to the largest eigenvalue would be equal to u1 . On the other hand, we have u1 (j1 ) = 0 if j1 ? / J1 and |u1 (j1 )| > ?, otherwise. Therefore, selecting k indices of the top eigenvector with largest magnitudes would recover the index set J1 . However, with added noise, the top eigenvector of the folded tensor would be a perturbed version of u1 . Nevertheless one can estimate J1 similarly (Algorithm 1). A similar argument holds for C2 . ? 1 and w ? 1 be top eigenvectors of C1 and C2 , respectively. Under both noise models Theorem 1. Let u I and II, ? ?n), if ? 2 = ?(n), ? - for m < O( 1 ? ? n), if ? 2 = ?( ? ?n max(n, m)), - for m = ?( 1 ? 1 (j2 )| > |w ? 1 (j20 )| for every as n ? ?, with high probability, we have |? u1 (j1 )| > |? u1 (j10 )| and |w j1 ? J1 , j10 ? J?1 , j2 ? J2 and j20 ? J?2 . In the proof of Theorem 1, following the result of [18] for a Wigner noise matrix, we have proved an l? version of the Davis-Kahan Lemma for a Wishart noise matrix. This lemma can be of independent interest for the readers. 3.2 Tensor Unfolding+Spectral 2 Let Tunf olded ? Rm?n be the unfolded tensor T such that Tunf olded (:, (j1 ? 1)n + j2 ) = T (j1 , j2 , :) for 1 ? j1 , j2 ? n. Without noise, the right singular vector of this matrix is u1 ? w1 which corresponds to the singular value ?1 . Therefore, selecting k 2 indices of this singular vector with largest magnitudes would recover the index set J1 ? J2 . With added noise, however, the top singular vector of the unfolded tensor will be perturbed. Nevertheless one can estimate J1 ? J2 similarly (SM Section 2). 4 ? be the top right singular vector of Tunf olded . Under both noise models I and II, Theorem 2. Let x 2 ? if ?12 = ?(max(n , m)), as n ? ?, with high probability, we have |? x(j 0 )| < |? x(j)| for every j in 0 the bicluster and j outside of the bicluster. 3.3 Thresholding Sum of Squared and Individual Trajectory Lengths If the average trajectory lengths in the bicluster is larger than the one outside of the bicluster, methods based on trajectory length statistics can be successful in solving the tensor biclustering problem. One such method is thresholding individual trajectory lengths. In this method, we select k 2 indices (j1 , j2 ) with the largest trajectory length kT (j1 , j2 , :)k (SM Section 2). Theorem 3. As n ? ?, with high probability, J?1 = J1 and J?2 = J2 ? ?mk 2 ), under noise model I. - if ?12 = ?( 2 ? - if ?12 = ?(mk ), under noise model II. Another method to solve the tensor biclustering problem is thresholding sum of squared trajectory lengths. In this method, we select k row indices with the largest sum of squared trajectory lengths along the columns as an estimation of J1 . 
We estimate $J_2$ similarly (SM Section 2).

Theorem 4. As $n \to \infty$, with high probability, $\hat{J}_1 = J_1$ and $\hat{J}_2 = J_2$
- if $\sigma_1^2 = \tilde{\omega}(k\sqrt{nm})$, under noise model I;
- if $\sigma_1^2 = \tilde{\omega}(mk^2 + k\sqrt{nm})$, under noise model II.

4 Statistical (Information-Theoretic) Limits of the Tensor Biclustering Problem

4.1 Coherent Case

In this section, we study a statistical (information-theoretic) boundary for the tensor biclustering problem under the following statistical model: we assume $u_1(j_1) = 1/\sqrt{k}$ for $j_1 \in J_1$. Similarly, we assume $w_1(j_2) = 1/\sqrt{k}$ for $j_2 \in J_2$. Moreover, we assume $v_1$ is a fixed given vector with $\|v_1\| = 1$. In the next section, we consider a non-coherent model where $v_1$ is random and unknown.

Let $T$ be an observed tensor from the tensor biclustering model $(J_1, J_2)$. Let $\mathcal{J}_{all}$ be the set of all possible $(J_1, J_2)$. Thus, $|\mathcal{J}_{all}| = \binom{n}{k}^2$. A maximum likelihood estimator (MLE) for the tensor biclustering problem can be written as:
$$\max_{\hat{J} \in \mathcal{J}_{all}} \;\; v_1^t \sum_{(j_1,j_2) \in \hat{J}_1 \times \hat{J}_2} T(j_1, j_2, :) \; - \; \frac{k(1 - \sigma_z^2)}{2\sigma_1} \sum_{(j_1,j_2) \in \hat{J}_1 \times \hat{J}_2} \|T(j_1, j_2, :)\|^2, \qquad (4)$$
where $(\hat{J}_1, \hat{J}_2) \in \mathcal{J}_{all}$. Note that under noise model I, the second term is zero. To solve this optimization, one needs to compute the likelihood function for $\binom{n}{k}^2$ possible bicluster indices. Thus, the computational complexity of the MLE is exponential in $n$.

Theorem 5. Under noise model I, if $\sigma_1^2 = \tilde{\omega}(k)$, as $n \to \infty$, with high probability, $(J_1, J_2)$ is the optimal solution of optimization (4). A similar result holds under noise model II if $mk = \Omega(\log(n/k))$.

Next, we establish an upper bound on $\sigma_1^2$ under which no computational method can solve the tensor biclustering problem with vanishing probability of error. This upper bound indeed matches the MLE achievability bound of Theorem 5, indicating its tightness.

Theorem 6. Let $T$ be an observed tensor from the tensor biclustering model with bicluster indices $(J_1, J_2)$. Let $A$ be an algorithm that uses $T$ and computes $(\hat{J}_1, \hat{J}_2)$. Under noise model I, for any fixed $0 < \epsilon < 1$, if $\sigma_1^2 < c_\epsilon\, k \log(n/k)$, as $n \to \infty$, we have
$$\inf_{A \in AllAlg} \;\; \sup_{(J_1, J_2) \in \mathcal{J}_{all}} \; \mathbb{P}\left[\hat{J}_1 \neq J_1 \text{ or } \hat{J}_2 \neq J_2\right] \; > \; 1 - \epsilon - \frac{\log(2)}{2k \log(ne/k)}. \qquad (5)$$
A similar result holds under noise model II if $mk = \Omega(\log(n/k))$.

4.2 Non-coherent Case

In this section we consider a setup similar to that of Section 4.1, with the difference that $v_1$ is assumed to be uniformly distributed over a unit sphere. For simplicity, in this section we only consider noise model I. The ML optimization in this setup can be written as follows:
$$\max_{\hat{J} \in \mathcal{J}_{all}} \;\; \sum_{(j_1,j_2) \in \hat{J}_1 \times \hat{J}_2} \|T(j_1, j_2, :)\|^2, \qquad (6)$$
where $(\hat{J}_1, \hat{J}_2) \in \mathcal{J}_{all}$.

Theorem 7. Under noise model I, if $\sigma_1^2 = \tilde{\omega}(\max(k, \sqrt{km}))$, as $n \to \infty$, with high probability, $(J_1, J_2)$ is the optimal solution of optimization (6).

If $k > \Omega(m)$, the achievability bound of Theorem 7 simplifies to the one of Theorem 5. In this case, using the result of Theorem 6, this bound is tight. If $k < O(m)$, the achievability bound of Theorem 7 simplifies to $\tilde{\Theta}(\sqrt{mk})$, which is larger than the one of Theorem 5 (this is the price we pay for not knowing $v_1$). In the following, we show that this bound is also tight. To show the converse of Theorem 7, we consider the detection task, which is presumably easier than the estimation task. Consider two probability distributions: (1) $\mathbb{P}_{\sigma_1}$, under which the observed tensor is $T = \sigma_1 u_1 \otimes w_1 \otimes v_1 + Z$, where $J_1$ and $J_2$ have uniform distributions over $k$-subsets of $[n]$ and $v_1$ is uniform over a unit sphere; (2) $\mathbb{P}_0$, under which the observed tensor is $T = Z$. Noise entries are i.i.d. normal.
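To make the detection setup concrete, the following sketch (illustrative code; the greedy row/column selection is only a stand-in for the exponential search over $\mathcal{J}_{all}$) samples from $\mathbb{P}_{\sigma_1}$ and $\mathbb{P}_0$ and compares the sum-of-squared-lengths statistic of optimization (6):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tensor(n, m, k, sigma1, signal=True):
    """Sample T from P_sigma1 (planted bicluster, coherent u and w,
    random v1 on the unit sphere) or from P_0 (pure noise)."""
    Z = rng.standard_normal((n, n, m))
    if not signal:
        return Z
    J1 = rng.choice(n, size=k, replace=False)
    J2 = rng.choice(n, size=k, replace=False)
    u = np.zeros(n); u[J1] = 1.0 / np.sqrt(k)
    w = np.zeros(n); w[J2] = 1.0 / np.sqrt(k)
    v = rng.standard_normal(m); v /= np.linalg.norm(v)
    return Z + sigma1 * np.einsum('i,j,l->ijl', u, w, v)

def ml_statistic(T, k):
    """Objective (6): sum of squared trajectory lengths over a k x k
    index set; here approximated greedily via heaviest rows/columns."""
    lengths = np.sum(T ** 2, axis=2)           # ||T(j1, j2, :)||^2
    J1 = np.argsort(lengths.sum(axis=1))[-k:]  # heaviest rows
    J2 = np.argsort(lengths.sum(axis=0))[-k:]  # heaviest columns
    return lengths[np.ix_(J1, J2)].sum()

# The statistic separates the two hypotheses once sigma1 is large enough.
for sig in [0.0, 20.0]:
    vals = [ml_statistic(sample_tensor(40, 10, 5, sig, signal=sig > 0), 5)
            for _ in range(20)]
    print(f"sigma1 = {sig:5.1f}: mean statistic {np.mean(vals):10.1f}")
```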
We need the following definition of contiguous distributions ([8]):

Definition 2. For every $n \in \mathbb{N}$, let $\mathbb{P}_{0,n}$ and $\mathbb{P}_{1,n}$ be two probability measures on the same measure space. We say that the sequence $(\mathbb{P}_{1,n})$ is contiguous with respect to $(\mathbb{P}_{0,n})$ if, for any sequence of events $A_n$, we have
$$\lim_{n \to \infty} \mathbb{P}_{0,n}(A_n) = 0 \;\Rightarrow\; \lim_{n \to \infty} \mathbb{P}_{1,n}(A_n) = 0. \qquad (7)$$

Theorem 8. If $\sigma_1^2 < \tilde{O}(\sqrt{mk})$, $\mathbb{P}_{\sigma_1}$ is contiguous with respect to $\mathbb{P}_0$.

This theorem, together with Lemma 2 of [8], establishes the converse of Theorem 7. The proof is based on bounding the second moment of the Radon-Nikodym derivative of $\mathbb{P}_{\sigma_1}$ with respect to $\mathbb{P}_0$ (SM Section 4.9).

5 Summary of Asymptotic Results

Table 1 summarizes asymptotic bounds for the case of $\theta = 1/\sqrt{k}$ and $m = \Theta(n)$. For the MLE we consider the coherent model of Section 4.1. Also in Table 1 we summarize the computational complexity of the different tensor biclustering methods. We discuss analytical and empirical running times of these methods in SM Section 2.2.

Table 1: Comparative analysis of tensor biclustering methods. Results have been simplified for the case of $m = \Theta(n)$ and $\theta = 1/\sqrt{k}$.

| Methods | $\sigma_1^2$, noise model I | $\sigma_1^2$, noise model II | Comp. Complexity |
|---|---|---|---|
| Tensor Folding+Spectral | $\tilde{\Theta}(n^{3/2})$ | $\tilde{\Theta}(n^{3/2})$ | $O(n^4)$ |
| Tensor Unfolding+Spectral | $\tilde{\Theta}(n^2)$ | $\tilde{\Theta}(n^2)$ | $O(n^3)$ |
| Th. Sum of Squared Trajectory Lengths | $\tilde{\omega}(nk)$ | $\tilde{\omega}(nk^2)$ | $O(n^3)$ |
| Th. Individual Trajectory Lengths | $\tilde{\omega}(k^2\sqrt{n})$ | $\tilde{\omega}(nk^2)$ | $O(n^3)$ |
| Maximum Likelihood | $\tilde{\omega}(k)$ | $\tilde{\omega}(k)$ | $\exp(n)$ |
| Statistical Lower Bound | $\tilde{O}(k)$ | $\tilde{O}(k)$ | - |

[Figure 3: Bicluster recovery rate of different tensor biclustering methods (tensor folding+spectral, tensor unfolding+spectral, thresholding the sum of squared trajectory lengths, and thresholding individual trajectory lengths) for various values of $\sigma_1$ (i.e., the signal strength), under both noise models I and II. We consider $n = 200$, $m = 50$, $k = 40$. Experiments have been repeated 10 times for each point.]

In both noise models, the maximum likelihood estimator, which has an exponential computational complexity, leads to the best achievability bound compared to other methods. Below this bound, inference is statistically impossible. The tensor folding+spectral method outperforms the other methods with polynomial computational complexity if $k > \sqrt{n}$ under noise model I, and $k > n^{1/4}$ under noise model II. For smaller values of $k$, thresholding individual trajectory lengths leads to a better achievability bound. This case is part of the high-SNR regime where the average trajectory length within the bicluster is significantly larger than the one outside of the bicluster. Unlike thresholding individual trajectory lengths, the other methods use the entire tensor to solve the tensor biclustering problem. Thus, when $k$ is very small, the accumulated noise can dominate the signal strength. Moreover, the performance of the tensor unfolding method is always worse than that of the tensor folding method. The reason is that the tensor unfolding method merely infers a low dimensional subspace of trajectories, ignoring the block structure that the true low dimensional trajectories form.

6 Numerical Results

6.1 Synthetic Data

In this section we evaluate the performance of different tensor biclustering methods on synthetic datasets. We use the statistical model described in Section 4.1 to generate the input tensor $T$.
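A sketch of this synthetic experiment under the coherent model and noise model I might look as follows (illustrative code reusing the tensor_folding_spectral sketch above; the parameter values mirror the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def recovery_rate(J_hat, J_true):
    """Fraction of correctly recovered bicluster indices."""
    return len(set(J_hat) & set(J_true)) / len(J_true)

def run_trial(n=200, m=50, k=40, sigma1=100.0):
    # Planted bicluster with coherent weights 1/sqrt(k) (Section 4.1).
    J1 = rng.choice(n, size=k, replace=False)
    J2 = rng.choice(n, size=k, replace=False)
    u = np.zeros(n); u[J1] = 1 / np.sqrt(k)
    w = np.zeros(n); w[J2] = 1 / np.sqrt(k)
    v = rng.standard_normal(m); v /= np.linalg.norm(v)
    T = rng.standard_normal((n, n, m))               # noise model I
    T += sigma1 * np.einsum('i,j,l->ijl', u, w, v)
    J1_hat, J2_hat = tensor_folding_spectral(T, k)   # earlier sketch
    return (recovery_rate(J1_hat, J1) + recovery_rate(J2_hat, J2)) / 2

for sigma1 in [50, 100, 200, 400]:
    rates = [run_trial(sigma1=sigma1) for _ in range(10)]
    print(f"sigma1 = {sigma1:4d}: mean recovery rate = {np.mean(rates):.2f}")
```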
Let $(\hat{J}_1, \hat{J}_2)$ be the estimated bicluster indices for $(J_1, J_2)$, where $|\hat{J}_1| = |\hat{J}_2| = k$. To evaluate the inference quality we compute the fraction of correctly recovered bicluster indices (SM Section 3.1). In our simulations we consider $n = 200$, $m = 50$, $k = 40$. Figure 3 shows the performance of the four tensor biclustering methods for different values of $\sigma_1$ (i.e., the signal strength), under both noise models I and II. The tensor folding+spectral algorithm outperforms the other methods in both noise models. The gain is larger in the setup of noise model II compared to that of noise model I.

6.2 Real Data

In this section we apply tensor biclustering methods to the roadmap epigenomics dataset [4], which provides histone mark signal strengths in different segments of the human genome in various tissues and cell types. In this dataset, finding a subset of genome segments and a subset of tissues (cell types) with highly correlated histone mark values can provide insight on tissue-specific functional roles of genome segments [4]. After pre-processing the data (SM Section 3.2), we obtain a data tensor $T \in \mathbb{R}^{n_1 \times n_2 \times m}$, where $n_1 = 49$ is the number of tissues (cell types), $n_2 = 1457$ is the number of genome segments (chromosome 20), and $m = 7$ is the number of histone marks (H3K27ac, H3K27me3, H3K36me3, H3K4me1, H3K4me3, H3K9ac, H3K9me3).

[Figure 4: An application of tensor biclustering methods to the roadmap epigenomics data. Panels (a, b): the largest eigenvalues of $C_1$ and $C_2$; panel (c): the largest singular values of $T_{unfolded}$; panel (d): the inferred bicluster over tissues and genome segments; panel (e): the unfolded subspace of the inferred bicluster across the seven histone marks; panel (f): inferred bicluster quality for tensor folding, tensor unfolding, thresholding the sum of squared trajectory lengths, thresholding individual trajectory lengths, and random selection.]

Note that although in our analytical results we assume $n_1 = n_2$ for simplicity, our proposed methods can be used in a more general case such as the one considered in this section. We form two combined covariance matrices $C_1 \in \mathbb{R}^{n_1 \times n_1}$ and $C_2 \in \mathbb{R}^{n_2 \times n_2}$ according to (3). Figure 4-(a, b) shows the largest eigenvalues of $C_1$ and $C_2$, respectively. As illustrated in these figures, the spectral gaps (i.e., $\lambda_1 - \lambda_2$) of these matrices are large, indicating the existence of a low dimensional signal tensor in the input tensor. We also form an unfolded tensor $T_{unfolded} \in \mathbb{R}^{m \times n_1 n_2}$. Similarly, there is a large gap between the first and second largest singular values of $T_{unfolded}$ (Figure 4-c). We use the tensor folding+spectral algorithm (Algorithm 1) with $|J_1| = 10$ and $|J_2| = 400$ (we consider other values for the bicluster size in SM Section 3.2). The output of the algorithm, $(\hat{J}_1, \hat{J}_2)$, is illustrated in Figure 4-d (note that for visualization purposes, we re-order rows and columns to have the bicluster appear in the corner). Figure 4-e illustrates the unfolded subspace $\{T(j_1, j_2, :) : (j_1, j_2) \in \hat{J}_1 \times \hat{J}_2\}$. In this inferred bicluster, the histone marks H3K4me3, H3K9ac, and H3K27ac have relatively high values. Reference [4] shows that these histone marks indicate a promoter region with increased activation in the genome. To evaluate the quality of the inferred bicluster, we compute the total absolute pairwise correlation among vectors in the inferred bicluster.
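A sketch of this quality computation (our own illustrative code; the paper's exact normalization is described in its SM Section 3.2):

```python
import numpy as np

def bicluster_quality(T, J1, J2):
    """Average absolute pairwise correlation among the trajectories
    T(j1, j2, :) in the inferred bicluster (higher is better)."""
    V = np.array([T[j1, j2, :] for j1 in J1 for j2 in J2])  # |J1||J2| x m
    C = np.corrcoef(V)                    # pairwise correlations
    iu = np.triu_indices_from(C, k=1)     # distinct pairs only
    return np.abs(C[iu]).mean()

def random_baseline(T, k1, k2, trials=100, seed=0):
    """Quality of uniformly random biclusters of the same cardinality."""
    rng = np.random.default_rng(seed)
    n1, n2, _ = T.shape
    return np.mean([bicluster_quality(T,
                                      rng.choice(n1, k1, replace=False),
                                      rng.choice(n2, k2, replace=False))
                    for _ in range(trials)])
```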
As illustrated in Figure 4-f, the quality of the bicluster inferred by the tensor folding+spectral algorithm is larger than that of the other methods. Next, we compute the bicluster quality obtained by choosing bicluster indices uniformly at random with the same cardinality. We repeat this experiment 100 times. There is a significant gap between the quality of these random biclusters and the ones inferred by tensor biclustering methods, indicating the significance of our inferred biclusters. For more details on these experiments, see SM Section 3.2.

7 Discussion

In this paper, we introduced and analyzed the tensor biclustering problem. The goal is to compute a subset of tensor rows and columns whose corresponding trajectories form a low dimensional subspace. To solve this problem, we proposed a method called tensor folding+spectral, which demonstrated improved analytical and empirical performance compared to the other considered methods. Moreover, we characterized computational and statistical (information-theoretic) limits for the tensor biclustering problem in an asymptotic regime, under both coherent and non-coherent statistical models.

Our results consider the case when the rank of the subspace is equal to one (i.e., $q = 1$). When $q > 1$, in both the tensor folding+spectral and tensor unfolding+spectral methods, the embedded subspace in the signal matrix will have a rank of $q > 1$, with singular values $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_q > 0$. In this setup, we need the spectral radius of the noise matrix to be smaller than $\sigma_q$ in order to guarantee the recovery of the subspace. The procedure to characterize asymptotic achievability bounds would follow similar steps to the rank one case, with some technical differences. For example, we would need to extend Lemma 6 to the case where the signal matrix has rank $q > 1$. Moreover, in our problem setup, we assumed that the size of the bicluster $k$ and the rank of its subspace $q$ are known parameters. In practice, these parameters can be learned approximately from the data. For example, in the tensor folding+spectral method, a good choice for the $q$ parameter would be the index where the eigenvalues of the folded matrix decrease significantly. Knowing $q$, one can determine the size of the bicluster similarly, as the number of indices in the top eigenvectors with significantly larger absolute values. Another practical approach to estimate the model parameters would be trial and error plus cross validation.

Some of the developed proof techniques may be of independent interest as well. For example, we proved an $l_\infty$ version of the Davis-Kahan lemma for a Wishart noise matrix. Solving the tensor biclustering problem for the case of multiple overlapping biclusters, for the case of an incomplete tensor, and for the case of a priori unknown bicluster sizes are among future directions.

8 Code

We provide code for tensor biclustering methods at the following link: https://github.com/SoheilFeizi/Tensor-Biclustering.

9 Acknowledgment

We thank Prof. Ofer Zeitouni for the helpful discussion on detectability proof techniques of probability measures.

References

[1] Amos Tanay, Roded Sharan, and Ron Shamir. Biclustering algorithms: A survey. Handbook of computational molecular biology, 9(1-20):122-124, 2005.
[2] Yudong Chen and Jiaming Xu. Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices. arXiv preprint arXiv:1402.1267, 2014.
[3] T. Tony Cai, Tengyuan Liang, and Alexander Rakhlin.
Computational and statistical boundaries for submatrix localization in a large noisy matrix. arXiv preprint arXiv:1502.01988, 2015.
[4] Anshul Kundaje, Wouter Meuleman, Jason Ernst, Misha Bilenky, Angela Yen, Alireza Heravi-Moussavi, Pouya Kheradpour, Zhizhuo Zhang, Jianrong Wang, Michael J. Ziller, et al. Integrative analysis of 111 reference human epigenomes. Nature, 518(7539):317-330, 2015.
[5] GTEx Consortium et al. The genotype-tissue expression (GTEx) pilot analysis: Multitissue gene regulation in humans. Science, 348(6235):648-660, 2015.
[6] Rui Chen, George I. Mias, Jennifer Li-Pook-Than, Lihua Jiang, Hugo Y. K. Lam, Rong Chen, Elana Miriami, Konrad J. Karczewski, Manoj Hariharan, Frederick E. Dewey, et al. Personal omics profiling reveals dynamic molecular and medical phenotypes. Cell, 148(6):1293-1307, 2012.
[7] Emile Richard and Andrea Montanari. A statistical model for tensor PCA. In Advances in Neural Information Processing Systems, pages 2897-2905, 2014.
[8] Andrea Montanari, Daniel Reichman, and Ofer Zeitouni. On the limitation of spectral methods: From the Gaussian hidden clique problem to rank-one perturbations of Gaussian tensors. In Advances in Neural Information Processing Systems, pages 217-225, 2015.
[9] Samuel B. Hopkins, Tselil Schramm, Jonathan Shi, and David Steurer. Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors. arXiv preprint arXiv:1512.02337, 2015.
[10] Samuel B. Hopkins, Jonathan Shi, and David Steurer. Tensor principal component analysis via sum-of-square proofs. In COLT, pages 956-1006, 2015.
[11] Amelia Perry, Alexander S. Wein, and Afonso S. Bandeira. Statistical limits of spiked tensor models. arXiv preprint arXiv:1612.07728, 2016.
[12] Thibault Lesieur, Léo Miolane, Marc Lelarge, Florent Krzakala, and Lenka Zdeborová. Statistical and computational phase transitions in spiked tensor estimation. arXiv preprint arXiv:1701.08010, 2017.
[13] Animashree Anandkumar, Rong Ge, and Majid Janzamin. Guaranteed non-orthogonal tensor decomposition via alternating rank-1 updates. arXiv preprint arXiv:1402.5180, 2014.
[14] Anru Zhang and Dong Xia. Guaranteed tensor PCA with optimality in statistics and computation. arXiv preprint arXiv:1703.02724, 2017.
[15] Animashree Anandkumar, Rong Ge, Daniel J. Hsu, and Sham M. Kakade. A tensor approach to learning mixed membership community models. Journal of Machine Learning Research, 15(1):2239-2312, 2014.
[16] Animashree Anandkumar, Rong Ge, Daniel J. Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):2773-2832, 2014.
[17] Victoria Hore, Ana Viñuela, Alfonso Buil, Julian Knight, Mark I. McCarthy, Kerrin Small, and Jonathan Marchini. Tensor decomposition for multiple-tissue gene expression experiments. Nature Genetics, 48(9):1094-1100, 2016.
[18] Yiqiao Zhong and Nicolas Boumal. Near-optimal bounds for phase synchronization. arXiv preprint arXiv:1703.06605, 2017.
DPSCREEN: Dynamic Personalized Screening

Kartik Ahuja
Electrical and Computer Engineering Department, University of California, Los Angeles
ahujak@ucla.edu

William R. Zame
Economics Department, University of California, Los Angeles
zame@econ.ucla.edu

Mihaela van der Schaar
Engineering Science Department, University of Oxford
Electrical and Computer Engineering Department, University of California, Los Angeles
mihaela.vanderschaar@eng.ox.ac.uk

Abstract

Screening is important for the diagnosis and treatment of a wide variety of diseases. A good screening policy should be personalized to the features of the patient and to the dynamic history of the patient (including the history of screening). The growth of electronic health records data has led to the development of many models to predict the onset and progression of different diseases. However, there has been limited work to address personalized screening for these different diseases. In this work, we develop the first framework to construct screening policies for a large class of disease models. The disease is modeled as a finite state stochastic process with an absorbing disease state. The patient observes an external information process (for instance, self-examinations, discovering comorbidities, etc.) which can trigger the patient to arrive at the clinician earlier than the scheduled screenings. The clinician carries out the tests; based on the test results and the external information, it schedules the next arrival. Computing the exactly optimal screening policy that balances the delay in detection against the frequency of screenings is computationally intractable; this paper provides a computationally tractable construction of an approximately optimal policy. As an illustration, we make use of a large breast cancer data set. The constructed policy screens patients more or less often according to their initial risk (it is personalized to the features of the patient) and according to the results of previous screens (it is personalized to the history of the patient). In comparison with existing clinical policies, the constructed policy leads to large reductions (28-68%) in the number of screens performed while achieving the same expected delays in disease detection.

1 Introduction

Screening plays an important role in the diagnosis and treatment of a wide variety of diseases, including cancer, cardiovascular disease, HIV, diabetes and many others, by leading to early detection of disease [1]-[3]. For some diseases (e.g., breast cancer, pancreatic cancer), the benefit of early detection is enormous [4] [5]. Because screening, especially screening that requires invasive procedures such as mammograms, CT scans, biopsies, angiograms, etc., imposes financial and health costs on the patient and resource costs on society, good screening policies should trade off benefit and cost [6]. The best screening policies should take into account that the trade-off between benefit and cost should be different for different diseases, but also for different patients (patients whose features suggest that they are at high risk should be screened more often; patients whose features suggest that they are at low risk should be screened less often), and even different for the same individual at different points in time, as the perceived risk for that patient changes. Thus the
best screening policies should account for the disease type and be personalized to the features of the patient and to the history of the patient (including the history of screening) [32]. This paper develops the first such personalized screening policies in a very general setting.

A screening policy prescribes what tests should/should not be done and when. Developing personalized screening policies that optimally balance the frequency of testing against the delay in the detection of the disease is extremely difficult, for a number of reasons. (1) The onset and progression of different diseases vary significantly across diseases. For instance, in [7] the development of breast cancer is modeled as a stationary Markov process, in [36] the development of HIV is modeled using a non-stationary survival process, and in [46] the development of colon cancer is modeled as a semi-Markov process. The test outcomes observed over time may follow a non-stationary stochastic process that depends on the disease process up to that time and on the features of the patient [35][36]. Existing works on screening [7] [9] are restricted to Markov disease processes and stationary Markov test outcome models, while this is not the case for many diseases and their test outcomes [10][35]-[37]. (2) The cost of not screening is the delay in detection of disease, which is not known. Hence the decision maker must act on the basis of beliefs about future disease states in addition to beliefs about the current disease state. (3) Patients can arrive at the scheduled time but may also arrive earlier on the basis of external information, so the decision maker's beliefs must take this external information into account. For instance, external information can be the development of lumps on breasts [25][26], or the development of a comorbidity [33][41]. (4) Given models of the progression of the disease and of the external information, solving for that policy is computationally intractable in general.

This paper addresses all of these problems. We provide a computationally effective procedure that solves for an approximately optimal policy, and we provide bounds for the approximation error (loss in performance) that arises from using the approximately optimal policy rather than the exactly optimal policy. Our procedure is applicable to many disease models, such as dynamic survival models [11]-[13][36]-[37] and first hitting time models [7][9][14]-[17].

Evaluating a proposed personalized screening policy using observational data is challenging. Observational data does not contain the counterfactuals: we cannot know what would have happened if a patient had been screened more often or an additional test had been performed. Instead, we follow an alternative route that has become standard in the literature [7]-[10]: we learn the disease progression model from the observational data and then evaluate the screening policy on the basis of the learned model. We also account for the fact that the disease model may be incorrectly estimated. We show that if the estimation error and the approximation error are small, then the policy we construct is very close to the policy for the correctly estimated model.

In this work, we use a large breast cancer data set to illustrate the proposed personalized screening policy. We show that high risk patients are screened more often than low risk patients (personalization to the features of the patient) and that patients with bad test results are screened more often than patients with good test results (personalization to the dynamic history of the patient). The effect of these personalizations is that, in comparison with existing clinical policies, the policy we construct leads to large reductions (28-68%) in screening while achieving the same expected delays in disease detection. To illustrate the impact of the disease on the policy, we carry out a synthetic exercise across diseases, one for which the delay cost is linear and one for which the delay cost is quadratic. We show that the regimes of operation (frequency of tests vs. expected delay in detection) for the policies for the two costs are significantly different, thus highlighting the importance of the choice of costs.

2 Model and Problem Formulation

Time: Time is discrete and the time horizon is finite; we write $\mathcal{T} = \{1, \dots, T\}$ for the set of time slots.

Patient Features: Patients are distinguished by a (fixed) feature $x$. We assume that the features of a patient (age, sex, family history, etc.) are observable and that the set $X$ of all patient features is finite.

Disease Model: We model the disease in terms of the (true physiological) state, where the state space is $S$. The disease follows a finite state stochastic process; $S^T$ is the space of state trajectories. The probability distribution over trajectories depends on the patient's features; for $\vec{s} \in S^T$, $x \in X$ we write $Pr(\vec{s}|x)$ for the probability that the state trajectory is $\vec{s}$ given that the patient's features are $x$. We distinguish one state $D \in S$ as the disease state; the disease state $D$ is absorbing. (The restriction to a single absorbing disease state is only for expositional convenience.) Hence
We show that high risk patients are screened more often than low risk patients (personalization to the features of the patient) and that patients with bad test results are screened more often than patients with good test results (personalization to the dynamic history of the patient). The effect of these personalizations is that, in comparison with existing clinical policies, the policy we construct leads to large reductions (28-68%) in screening while achieving the same expected delays in disease detection. To illustrate the impact of the disease on the policy, we carry out a synthetic exercise across diseases, one for which the delay cost is linear and one for which the delay cost is quadratic. We show that the regime of operation (frequency of tests vs expected delay in detection) for the policies for the two costs are significantly different, thus highlighting the importance of choice of costs. 2 Model and Problem Formulation Time Time is discrete and the time horizon is finite; we write T = {1, ..., T } for the set of time slots. Patient Features Patients are distinguished by a (fixed) feature x. We assume that the features of a patient (age, sex, family history, etc.) are observable and that the set X of all patient features is finite. Disease Model We model the disease in terms of the (true physiological) state, where the state space is S. The disease follows a finite state stochastic process; S T is the space of state trajectories. The probability distribution over trajectories depends on the patient?s features; for ~s ? S T , x ? X we write P r(~s|x) for the probability that the state trajectory is ~s given that the patient?s features are x. We distinguish one state D ? S as the disease state; the disease state D is absorbing.1 Hence 1 The restriction to a single absorbing disease state is only for expositional convenience. 2 P r(s(t) = D, s(t0 ) 6= D) = 0 for every time t and every time t0 > t. The true state is hidden/not observed.2 Our stochastic process model of disease encompasses many of the disease models in the literature, including discrete time survival models. The (discrete time) Cox Proportional Odds model [11], for instance, is the particular case of our model in which there are two states (Healthy H and Disease D) and the probability distribution over state trajectories is determined from the hazard rates. To be precise: if ~s is the state trajectory for which the disease state first occurs at time t0 , so that s(t) = H for t < t0 and s(t) = D for t ? t0 , ?(t|x) is the hazard at time t conditional on x, then P r(~s|x) = [1 ? ?(1|x)] ? ? ? [1 ? ?(t0 ? 1|x)][?(t0 |x)] and P r(~s|x) = 0 for all trajectories not having this form. Similar constructions show that other dynamic survival models [14]-[17] [10][37] also fit in the rubric of the general model presented here.3 External Information The clinician performs tests that are informative about the patient?s true state; in addition, external information may also arrive (for instance, patient self-examines breasts for lumps, patient discovers comorbidities, etc.). The patient observes an external information process modeled by a finite state stochastic process with state space Y; the information at time t is Y (t) ? Y (for instance, Y = {Lump, No Lump}). If the patient visits clinician at time t, then this external information Y (t) arrives to the clinician. 
External Information: The clinician performs tests that are informative about the patient's true state; in addition, external information may also arrive (for instance, the patient self-examines breasts for lumps, or discovers comorbidities, etc.). The patient observes an external information process modeled by a finite state stochastic process with state space $Y$; the information at time $t$ is $Y(t) \in Y$ (for instance, $Y = \{\text{Lump}, \text{No Lump}\}$). If the patient visits the clinician at time $t$, then this external information $Y(t)$ arrives to the clinician. $Y(t)$ may be correlated with the patient's state trajectory through time $t$ and the patient's features; we write $Pr(Y(t) = y \,|\, \vec{s}(t), x)$ for the probability that the external information at time $t$ is $y \in Y$, conditional on the state trajectory through time $t$ and features $x$. We assume that at each time $t$ the external information $Y(t)$ is independent of the past observations, conditional on the state trajectory through time $t$, $\vec{s}(t)$, and features $x$.

Arrival: The patient visits the clinician at time $t$ if either (a) the information process $Y(t)$ exceeds some threshold $\bar{y}$, or (b) $t$ is the time for the next recommended screening (determined by the Screening Policies described below). The first visit of the patient to the hospital depends on the screening policy and the patient's features (see the description below). If the patient visits the clinician at time $t$, the clinician performs a sequence of tests and observes the results. For simplicity of exposition, we assume that the clinician performs only a single test, with a finite set $Z$ of outcomes. We write $Pr(Z(t) = z \,|\, \vec{s}(t), x)$ for the probability that the test performed at time $t$ yields the result $z$, conditional on the (unobserved) state trajectory and the patient's features. We assume that the current test result is independent of past test results, conditional on the state trajectory and patient features. We also assume that the current test result is independent of the external information, conditional on the state trajectory through time $t$ and the patient features. These assumptions are standard [7] [36]. We adopt the convention that $z(t) = \varnothing$ if the patient does not visit the clinician at time $t$, so that no test is performed. If the test outcome $z \in Z^+ \subseteq Z$, then the patient is diagnosed to have the disease. We assume that there are no false positives. If a patient is diagnosed to be in the disease state, then screening ends and treatment begins.

Screening Policies: The history of a patient through time $t$ consists of the trajectories of external information, test results and screening recommendations through time $t$. Write $H(t)$ for the set of histories through time $t$ and $H = \bigcup_{t=0}^{T} H(t)$ for the set of all histories. By convention, $H(0)$ consists only of the empty history. A screening policy is a map $\pi : X \times H \to \{1, \dots, T\} \cup \{D\}$ that specifies, for each feature $x$ and history $h$, either the next screening time $t^+$ or the decision that the patient is in the disease state $D$ and so treatment should begin. A screening policy $\pi$ begins at time 0, when the history is empty, so $\pi(x, \varnothing)$ specifies the first screening time for a patient with features $x$. (For riskier patients, screening should begin earlier.) Write $\Pi$ for the space of all screening policies.

Screening Cost: We normalize so that the cost of each screening is 1. (We can easily generalize to the more general setting in which the clinician decides among multiple tests [50], and different tests have different costs.) The cost of screening is a proxy for some combination of the monetary cost, the resource cost and the health cost to the patient. We discount screening costs over time, so if $T_s$ is the set of times at which the patient is screened, then the screening cost is $\sum_{t \in T_s} \delta^t$, where $\delta \in (0, 1)$. (For many diseases, it seems natural to identify states intermediate between Healthy and Disease.
For instance, because breast lumps [26] or colon polyps [9] that are found to be benign may become malignant, it seems natural to distinguish at least one Risky state, intermediate between the Healthy and Disease states.)

Delay Cost: If the disease first occurs at time $t_D$ (the incidence time) but is detected only at time $t_d > t_D$ (the detection time), then the patient incurs a delay cost $C(t_d - t_D; t_D)$. If the disease is never detected, the delay cost is $C(T - t_D; t_D)$. We assume that the delay cost function $C : \{1, \dots, T\} \times \{1, \dots, T-1\} \to (0, \infty)$ is increasing in the first argument (the lag in detection) and decreasing in the second argument (the incidence time). The cost of delay is 0 if the disease never occurs or occurs only at time $t = T$. Note that as soon as the disease is detected, screening ends and treatment begins; in particular, there is a single unique time of incidence and a single unique time of detection. We allow for general delay costs because the impact of early/late detection on the probability of survival/successful treatment is different for different diseases.

Expected Costs: If the patient features are $x \in X$, then every screening policy $\pi \in \Pi$ induces a probability distribution $Pr(\cdot|x, \pi)$ on the space $H(T)$ of all histories through time $T$, and in particular induces probability distributions $\mu = Pr(\cdot|x, \pi)$ on the families $T_s \in 2^{\{1,\dots,T-1\}}$ of screening times and $\nu = Pr((\cdot,\cdot)|x, \pi)$ on the pairs $(t_D, t_d)$ of incidence time and detection time. The expected screening cost is $E_\mu[\sum_{t \in T_s} \delta^t]$ and the expected delay cost is $E_\nu[C(t_d - t_D, t_D)]$. We provide a graphical model for the entire setup in Appendix B of the Supplementary Materials.

Optimal Screening Policy: The objective of the screening policy is to minimize a weighted sum of the screening cost and the delay cost; i.e., the optimal screening policy is defined by
$$\arg\min_{\pi \in \Pi} \left\{ (1 - w)\, E_\mu\Big[\sum_{t \in T_s} \delta^t\Big] + w\, E_\nu\big[C(t_d - t_D, t_D)\big] \right\} \qquad (1)$$
The weight $w$ reflects social/medical policy; for instance, $w$ might be chosen to minimize cost subject to some accepted tolerance in delay (further discussion is in Section 4).

Comment: The standard decision theory methods [18]-[21] used in screening [7][9] cannot be used to solve the above problem. In standard POMDPs, the interval between two decision epochs (in this case, screening times) is fixed exogenously; in standard POSMDPs, the time between two decision epochs is the sojourn time of the underlying core-state process. In our setting, the time between two decision epochs depends on the action (follow-up date), the external information process, and the state trajectory. In standard POMDPs (POSMDPs), the cost incurred in a decision epoch depends on the current state, while in the above problem the delay cost depends on the state trajectory. Moreover, in our setting the disease state trajectory is not restricted to a Markovian or semi-Markovian process.

3 Proposed Approach

Beliefs: By a belief $b$ we mean a probability distribution over the pairs consisting of state trajectories and a label $l$ for the diagnosis: $l = 1$ if the patient has been diagnosed with the disease, $l = 0$ otherwise. By definition, a belief is a function $b : S^T \times \{0, 1\} \to [0, 1]$ such that $\sum_{\vec{s}, l} b(\vec{s}, l) = 1$, but it is often convenient to view a belief as a vector.
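As an illustration of this vector view, the sketch below stores a belief as an array over a tiny enumerated support (our own example; in the actual method the trajectory space is sampled rather than enumerated, as in Section 3.2):

```python
import numpy as np
from itertools import product

# Tiny example: two states, horizon T = 3, so |S^T x {0,1}| = 16.
STATES, T_HORIZON = ('H', 'D'), 3
SUPPORT = [(traj, l)
           for traj in product(STATES, repeat=T_HORIZON)
           for l in (0, 1)]

def uniform_prior_over_healthy_start():
    """A belief vector b with b(s, l) >= 0 and sum b = 1, putting mass
    only on undiagnosed (l = 0) trajectories that start Healthy."""
    b = np.zeros(len(SUPPORT))
    idx = [i for i, (traj, l) in enumerate(SUPPORT)
           if l == 0 and traj[0] == 'H']
    b[idx] = 1.0 / len(idx)
    return b

b = uniform_prior_over_healthy_start()
assert np.isclose(b.sum(), 1.0)  # a valid probability vector
```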
Beliefs are updated using Bayesian updating every time there is a new observation (test outcomes, patient arrival, external information). Knowledge of beliefs will be sufficient to solve the optimization problem (1); see the Appendix C in the Supplementary Materials. We write B for the space of all beliefs. Bellman Equations To solve (1) we will formulate and solve the Bellman equations. To this end, we begin by defining the various components of the Bellman equations. Fix a time t. The cost C? incurred at time t depends on what happens at that time: i) if the patient (with diagnosis status l = 0 before the test) is tested and found to have acquired the disease, the cost is the sum of the cost of testing and the cost of delay, ii) if the patient has the disease and is not detected, then the cost of delay is incurred in the time slot T , and iii) if the patient does not have the disease, then the cost incurred in time slot t depends on whether a test was done in time slot t or not. We write these cases below. ? t t ? T, l = 0, z ? Z + ?wC(t ? tD ; tD ) + (1 ? w)? I(z 6= ?) ? C(~s, t, z, l) = wC(T ? tD ; tD ) (2) t = T, l = 0 ? (1 ? w)? t I(z 6= ?) otherwise A recommendation plan ? : Z ? T maps the observation z at the end of time slot t to the next scheduled follow-up time. Note that the recommendation plan is defined for a time t and is different than the policy. Denote the probability distribution over the observations (test outcome z, duration to the next arrival ??, and the external information at the next arrival time y) conditional on the current belief b and the current recommendation plan ? by P r(z, y, ?? b, ? , x). The belief b is updated to 4 ? b in the next arrival time ?? based on the observations, current recommended plan and the current beliefs using Bayesian updating as ?b(~s, l) = P r(~s, l|b, ? , y, z, ??, x). The optimal values for the objective in (2) starting from different initial beliefs can be expressed in terms of a value function V : B ? {1, ..., T + 1} ? R. The value function at time t when the patient is screened solves the Bellman equation: hX   X i ? t + ?? (3) ? s, t, z, l) + V (b, t) = max ?b(~s, l)P r(z|~s, x) C(~ P r(z, y, ?? b, ? , x)V b, ? z,? ? ,y ~ s,l,z We define V (b, T + 1) = 0 for all beliefs. Note that the computation of the first term in the RHS of (3) has a worst case computation time of |S|T . Therefore, solving for exact V (b, T ) that satisfies (3) is computationally intractable when T is large. Next, we derive a useful property of the value function. (The proof of this and all other results are in the Appendix D-F of the Supplementary Material.). Lemma 1 For every t, the value function V (b, t) is the maximum of a finite family of functions that are linear in the beliefs b. In particular, the value function is convex and piecewise linear. The above property was shown for POMDPs in [39], we use the same ideas to extend it to our setup. 3.1 Constructing the Exactly Optimal Policy Every linear function of beliefs is of the form ?? b for some vector ?. (We view ?, b as column vectors and write ?? for the transpose.) Hence Lemma 1 tells us that there is a finite set of vectors ?(t) such that V (b, t) = max???(t) ?? b. We refer to ?(t) as the set of alpha vectors. In view of Lemma 1, to determine the value functions we need only determine the sets of alpha vectors. If we substitute the expression V (b, t) = max???(t) ?? b into (3), then we obtain a recursive expression for ?(t) in terms of ?(t + 1). 
By definition, the value function at time T + 1 is identically 0 so ?(T + 1) = {0}, where 0 is the |S T ? {0, 1}| dimensional zero vector, so we have an explicit starting point for this recursive procedure. There is an optimal action associated with each alpha vector. The action corresponding to the optimal alpha vector at a certain belief is the output of the optimal action given that belief, and so constructing the sets of alpha vectors yields the optimal policy; the details of the algorithm are in the Algorithm 3 in the Appendix A of the Supplementary Materials. Unfortunately, the algorithm to compute the sets of alpha vectors is computationally intractable (as expected). We therefore propose an algorithm that is tractable to compute an approximately optimal policy. 3.2 Constructing the Approximately Optimal Policy Point-Based Value Iteration (PBVI) approximation algorithms are known to work well for standard POMDPs [18]. These algorithms rely on choosing a finite set of belief vectors and constructing alpha vectors for these belief vectors and their success depends very much on the efficient construction of the set of belief vectors. The standard approaches [18] for belief construction are not designed to cope with settings like ours when beliefs lie in a very high dimensional space; in our setup belief has |S T ? {0, 1}| dimensions. In Algorithm 1 (pseudo-code in the Appendix A of the Supplementary Materials), we first construct a lower dimensional belief space by sampling trajectories that are more likely to occur for the disease and then sampling the set of beliefs in the lower dimensional space that are likely to occur over the course of various screening policies. The key steps for Algorithm 1 are 1. Sample typical physiological state trajectories Sample a set S? ? S T of K physiological trajectories from the distribution P r(~s|x). 2. Construct the set of reachable belief vectors Say that a belief vector b2 is reachable from the belief vector b1 if it can be derived by Bayesian updating on the basis of some underlying screening policy. We construct the sets of belief vectors that can be reached under different screening policies. For the first time slot, we start with a belief vector that lies in the space S? ? {0, 1} given as ? ? l = 0. For subsequent times, we select the beliefs that are encountered P r(~s|x)/P r(S|x), ?~s ? S, under random exploration of the actions (recommendation of future test dates). In addition to using random exploration, we can choose actions determined from a set of policies such as the clinical policies used in practice [27] [28] [47] to construct the set of reachable belief vectors. 5 ? and the set of all such beliefs as Denote the set of belief vectors constructed at time t by B[t] ? ? ? (see Algorithm B = {B[t], ?t}. We carry out point-based value backups on these beliefs B 2 in the Appendix A of the Supplementary Materials), to construct the alpha vectors and thus the approximately optimal policy. Henceforth, we refer to our approach (Algorithm 1 and 2) as DPSCREEN. Computational Complexity The worst case computation of the policy requires  ? O T (B)2 T 2 K|Y||Z| steps, where B = maxt |B[t]| is the maximum over the number of points sampled by the Algorithm 1 for any time slot t. The complexity can be reduced by restricting the space of actions; e.g. by bounding the amount of time allowed between successive screenings. 
Moreover, the proposed algorithms can be easily parallelized (many operations carried inside the iterations in Algorithm 2 can be done parallel), thus significantly reducing computation time. Approximation Error Because we only sample a finite number of trajectories, the policy we construct is not optimal but we can bound the loss of performance in comparison to the exactly optimal policy and hence justify the term ?approximately optimal policy.? Define the approximation error to be the difference between the value achieved by the exact optimal policy (solution to (1)) and the value achieved by the approximately optimal policy (output from Algorithm 2). As a measure of the density 0 ? = ? maxt?T maxB minb?B[t] of sampling of the belief simplex we set ?(B) ? ||b ? b ||1 , where ? is a constant that measures the maximum expected loss that can occur in one time slot. We make a few assumptions for the proposition to follow. The cost for delay is C(td ? tD ; tD ) = c(td ? tD )? tD , where c(d) is a convex function of d. The test outcome is accurate, i.e. no false positives and no false negatives. The maximum screening interval is bounded by W < T . The time horizon T is sufficiently large. We show that the loss of performance is bounded by the sampling density. ? Proposition 1 The approximation error is bounded above by ?(B). 3.3 Robustness Estimation Error To this point, it has been assumed that the model parameters are known. In practice, the model parameters need to be estimated using the observational data. In the next section, we will give a concrete example of how we estimate these parameters using observational data for breast cancer. Here we discuss the effect of error in estimation. Suppose that the model being estimated 0 (true model) is m ? M , where M is the space of all the possible models (model parametrizations) under consideration. (We assume that the probability distribution of the physiological state transition, the patient?s self-observation outcomes, and the clinician?s observation outcomes are continuous on M .) Write L = M ? B for the joint space of models and beliefs. Let the estimate of the model be ? Let us assume that for every model in M the solution to (1) is unique. Therefore, we can define m. a mapping ? ? : L ? Z ? T ? T |Z| , where ? ? (l, z, t) is the optimal recommended screening time at l, at time t following z. For a fixed model m, ? ? ((m, b), z, t) is the maximizer in (3). Theorem 1. There is a closed lower dimensional set E ? L such that the function ? ? is locally constant on the complement of E. ? and the true model m0 are Theorem 1 implies that, with probability 1, if the model estimate m sufficiently close, then the actions recommended by the exactly optimal policies for both models are identical. Therefore, the impact of estimation error on the exactly optimal policy is minimal. However, we construct approximately optimal policies. We can combine these conditions with Proposition 1 to ? goes to zero, then the approximately optimal policy (for m) say that if the approximation error ?(B) ? 0 will also converge to the exactly optimal policy for true model m . Personalization: Figure 1 provides a graphical representation of the way in which DPSCREEN is personalized to the patients. We consider three Patients. The disease model for each patient is given by the ex ante survival curve (the probability of not becoming diseased by a given time). 
As shown in the graphs, the survival curves for Patients 1 and 2 are the same; the survival curve for Patient 3 begins below the survival curve for Patients 1 and 2 but is flatter, and so eventually crosses the survival curve for Patients 1 and 2. All three patients are screened at date 1; for all three the test outcome is z = Low. Hence the belief (risk assessment) for all three patients decreases. As a result, Patients 1 and 2 are scheduled for their next screening at date 4, but Patient 3, who has a lower ex ante survival probability, is scheduled for the next screening at date 3. Thus, the policy is personalized to the ex ante risk. However, at date 2, all three patients experience an external information shock which causes them to be screened early. The test outcome for Patient 1 is z = Medium, so Patient 1 is assessed to be at higher risk and is scheduled for the next screening at date 3; the test outcome for Patient 2 is z = Low, so Patient 2 is assessed to be at lower risk and is scheduled for the next screening at date 5. Thus the policy is personalized to the dynamic history. The test outcome for Patient 3 is z = Low, and Patient 3's ex ante survival probability is higher, so Patient 3's risk is assessed to be very low, and Patient 3 is scheduled for the next screening at date 6. Thus the policy adjusts to time-varying model parameters.

[Figure 1: Illustration of dynamic personalization. Patients 1 and 2: personalization through histories (same features, different histories lead to different screening). Patients 2 and 3: personalization through features (same history, different hazard rates lead to different screening). Each panel shows the survival probability and the disease belief over time, together with the test outcomes z and the prescribed next arrival times.]

4 Illustrative Experiments

Here we demonstrate the effectiveness of our policy in a real setting: screening for breast cancer.

Description of the dataset: We use a de-identified dataset (from the Athena Health Network [22]) of 45,000 patients aged 60-65 who underwent screening for breast cancer. For most individuals we have the following associated features: age, the number of family members with breast cancer, weight, etc. Each patient had at least one mammogram; some had several. (In total, there are 84,000 mammograms in the dataset.) If a patient had a positive mammogram, a biopsy was carried out. A further description of the mammogram output is in Appendix G of the Supplementary Materials.

Model description: We model the disease progression using a two-state Markov model: $S = \{H, D\}$ ($H$ = Healthy, $D$ = Disease/Cancer). Given patient features $x$, the initial probability of cancer is $p_{in}(x)$ and the probability of transition from $H$ to $D$ is $p_{tr}(x)$. The external information $Y$ is the size (perhaps 0) of a breast lump, based on the patient's own self-examination. In view of the universal growth law for tumors described in [23], we model $Y(t) = g(t) + \epsilon(t)$, where $g(t) = (1 - e^{-\gamma(t - t_s)})\, I(t > t_s)$ is the size of the tumor, $t_s$ is the time at which the patient actually develops cancer (the lump exists), $\epsilon(t)$ is a zero mean white noise process with variance $\sigma^2$, and $I(\cdot)$ is the indicator function. If the lump size $Y$ exceeds the threshold $\bar{y}$, then the patient visits the clinician, where tests are carried out. The set of test outcomes is $Z = \{\varnothing, 1, 2, 3\}$, where $z = \varnothing$ when no test is done, $z = 1$ when the mammogram is negative and no biopsy is done, $z = 2$ when the mammogram is positive and the biopsy is negative, and $z = 3$ when both the mammogram and the biopsy are positive.
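A sketch of this disease and self-examination model (illustrative code; the parameter values follow the estimates reported below where available and are otherwise assumed):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_patient(T, p_in, p_tr, gamma, sigma, y_bar):
    """Simulate the two-state breast cancer model with the lump
    self-examination process.

    Returns the onset time t_s (or None) and the times at which the
    self-examined lump size crosses the threshold y_bar.
    """
    # Two-state Markov disease process with absorbing state D.
    diseased = rng.random() < p_in
    t_s = 1 if diseased else None
    arrivals = []
    for t in range(1, T + 1):
        if not diseased and rng.random() < p_tr:
            diseased, t_s = True, t
        # Lump growth g(t) plus white observation noise epsilon(t).
        g = (1 - np.exp(-gamma * (t - t_s))) if diseased and t > t_s else 0.0
        y = g + sigma * rng.standard_normal()
        if y > y_bar:
            arrivals.append(t)   # self-exam triggers an early visit
    return t_s, arrivals

# p_in and p_tr values here are assumed for illustration only.
t_s, early_visits = simulate_patient(T=120, p_in=0.01, p_tr=0.003,
                                     gamma=0.9, sigma=0.43, y_bar=1.0)
```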
Model Estimation: We use the specificity and sensitivity for the mammogram from [7]. Each patient has a different (initial) risk of developing cancer; we compute the risk scores using the Gail model [24], which we use as the feature $x$. We assume $p_{in}(x)$ and $p_{tr}(x)$ are logistic functions of $x$. We use standard Markov Chain Monte Carlo methods to estimate these functions $p_{in}(x)$ and $p_{tr}(x)$ (further details are in Appendix G of the Supplementary Materials). We assume that each woman performs one self-examination per month [25] [26]. We use the value $\gamma = 0.9$ as stated in [23]. We estimate the parameters for the self-examinations, $\sigma = 0.43$ and $\bar{y} = 1$, on the basis of the values of sensitivity and specificity for self-examination from the literature [43]. In the comparisons to follow, we will also analyze the setting in which there are no self-examinations.
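For illustration, the logistic parameterization might be sketched as follows (the coefficient values are hypothetical; the actual coefficients are estimated by the MCMC procedure described above):

```python
import numpy as np

def logistic(a, b, x):
    """Logistic link used for both model components (illustrative
    parameterization; a and b are what the MCMC procedure estimates)."""
    return 1.0 / (1.0 + np.exp(-(a + b * x)))

# x is the Gail risk score; hypothetical coefficient values for a sketch.
p_in = lambda x: logistic(-6.0, 0.8, x)   # initial probability of cancer
p_tr = lambda x: logistic(-7.5, 0.6, x)   # per-slot H -> D transition

for x in (1.0, 3.0, 5.0):
    print(f"x = {x:.1f}: p_in = {p_in(x):.4f}, p_tr = {p_tr(x):.4f}")
```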
Comment: At this point, we note that existing frameworks [7][9][10] cannot be used to solve for the optimal screening policy in the above setup because: i) the costs incurred (the delay) depend on the state trajectory and not just on the current state, and ii) the lump growth model and the patient's self-examination of the lump are not easy to incorporate into these works.
Comparisons with clinical screening policies: We compare our constructed policies (for the two groups), with and without self-examination, in terms of three metrics: i) E[N|R], the expected number of tests per year, conditional on the risk group; ii) E[Δ|R], the expected delay, conditional on the risk group; and iii) E[Δ|R, D], the expected delay, conditional on the risk group and on the patient actually developing cancer. Because E[Δ|R] is the expected unconditional delay, it accounts for patients who do not develop cancer as well as for patients who do; because most patients do not develop cancer, E[Δ|R] is small. We show the comparisons with the annual policies in Table 1, and the comparisons with biennial screening in Appendix G of the Supplementary Materials. In Table 1 we compare the performance of DPSCREEN (with and without self-examination) for the Low and High risk groups against the current clinical policy of annual screening.
For both risk groups, the proposed policy achieves approximately the same expected delay as the benchmark policy while doing many fewer tests (in expectation). With self-examinations, the expected reduction in the number of screens is 57-68% (depending on the risk group); even without self-examinations, the expected reduction in the number of screens is 28-45% (depending on the risk group). In Table 2 we contrast the behaviour of DPSCREEN across the two risk groups. To keep the comparison fair, we fix the tolerance on the delay at a single value. The proposed policy is personalized: it recommends significantly fewer tests to the low risk patients than to the high risk patients.

Table 1: Comparison of the proposed policy with annual screening for both risk groups. Each cell reports E[N|R], E[Δ|R], E[Δ|R, D].
                                   Low risk          High risk
  DPSCREEN with self-examination   0.32, 0.23, 9.2   0.43, 0.50, 6.7
  DPSCREEN w/o self-examination    0.55, 0.23, 9.2   0.72, 0.52, 7.07
  Annual                           1, 0.24, 9.6      1, 0.52, 7.07

Table 2: Comparison of the proposed policies across the risk groups. Each cell reports E[N|R], E[Δ|R], E[Δ|R, D].
                                   Low risk           High risk
  DPSCREEN with self-examination   0.12, 0.33, 13.7   0.80, 0.35, 4.73
  DPSCREEN w/o self-examination    0.32, 0.33, 13.7   1.09, 0.35, 4.73

Impact of the type of disease: We have so far considered breast cancer as an example and assumed linear delay costs. For some diseases (such as pancreatic cancer [30][5]) the survival probability decreases very quickly with the delay in detection, and it might therefore be reasonable to assume a cost of delay that is strictly convex (such as a quadratic cost) in the delay time. In Figure 2, we show that for a fixed risk group and for the same weights, the policy constructed using quadratic costs is much more aggressive in testing. Moreover, the regime of operation of the policy (the points achieved by the policy in the 2-D plane E[N|R, Cost] vs E[Δ|R, Cost]) can vary a lot depending on the choice of cost function, even though the same weights are used. Therefore, the cost should be chosen based on the disease.
[Figure 2: Impact of the type of disease. Points achieved in the plane E[N|R, Cost] vs E[Δ|R, Cost] (months) for the high and low risk groups under linear and quadratic delay costs, for weights w in {0.3, 0.5, 0.9}.]
5 Related Works
In Section 2, following equation (1), we compared our method with some general frameworks in decision theory [18]-[21]. Next, we compare it with other relevant works.
Screening frameworks for different diseases in operations research: Many works have focused on optimizing population-based screening schedules, which are not personalized (see [42] and the references therein). In [7][9] the authors develop personalized POMDP-based screening models. The underlying disease evolution (breast and colon cancer) is assumed to follow a Markov process, and the external information processes, such as self-exams and the test outcomes over time, are assumed to follow a stationary i.i.d. process given the disease process. In [10] the authors develop personalized screening models based on principles of Bayesian design for maximizing information gain (based on [40]). The underlying disease model (cardiac disease) is a dynamic (two-state) survival model, and the cost of misdetection is a constant that does not depend on the delay.
The test outcomes are modeled using generalized linear mixed effects models, and there is no external information process. To summarize, all of the above methods rely on very specific models for the disease, the test outcomes, and the external information, while our method imposes far fewer restrictions on them.
Screening frameworks for different diseases in the medical literature: The medical research literature on screening (e.g., the Cancer Intervention and Surveillance Modelling Network, the US Preventive Services Task Force, etc.) relies on stochastic simulation-based methods: fix a disease model and a set of screening policies to be compared; for each policy in the set, simulate outcome paths from the model; and compare across the set of policies [44]-[48]. The clinical guidelines for colon cancer screening issued by the US Preventive Services Task Force [47][49] are created on the basis of the MISCAN-COLON model [46] for colon cancer: simulations were carried out to compare different screening policies suggested by experts for that specific disease model. This approach allows more realistically complex models, but it only compares a fixed set of policies, all of which may be far from optimal.
Controlled Sensing: In controlled sensing [21][34][38], the problem of sensor scheduling requires deciding which sensor to use and when; this problem is similar to the personalized screening problem studied here. In these works [21][34][38], the main focus is to exploit (or derive) structural properties of the process being sensed and of the cost functions so that the exactly optimal sensing schedule is easy to characterize and compute. Structural assumptions, such as the sampled process being stationary and Markov, make these works less suited to personalized screening.
6 Conclusion
In this work, we develop a novel methodology for constructing personalized screening policies that balance the cost of screening against the cost of delay in the detection of disease. The disease is modeled as an arbitrary finite-state stochastic process with an absorbing disease state. Our method incorporates the possibility of external information, such as self-examination or the discovery of comorbidities, that may trigger the arrival of the patient at the clinician in advance of a scheduled screening appointment. We use breast cancer data to develop the disease model. In comparison with current clinical policies, our personalized screening policies reduce the number of screenings performed while maintaining the same delay in the detection of disease.
7 Acknowledgements
This work was supported by the Office of Naval Research (ONR) and the National Science Foundation (NSF) (Grant numbers 1533983 and 1407712).
References
[1] Siu, A. L. (2016). Screening for breast cancer: US Preventive Services Task Force recommendation statement. Annals of Internal Medicine, 164(4), pp. 279-296.
[2] Canto, M. et al. (2013). International Cancer of the Pancreas Screening (CAPS) Consortium summit on the management of patients with increased risk for familial pancreatic cancer. Gut, 62(3), pp. 339-347.
[3] Wilson, J. et al. (1968). Principles and practice of screening for disease.
[4] Jemal, A. et al. (2010). Cancer statistics. CA: A Cancer Journal for Clinicians, 60(5), pp. 277-300.
[5] Rulyak, S. J. et al. (2003). Cost-effectiveness of pancreatic cancer screening in familial pancreatic cancer kindreds. Gastrointestinal Endoscopy, 57(1), pp. 23-29.
[6] Pace, L. E., & Keating, N. L. (2014).
A systematic assessment of benefits and risks to guide breast cancer screening decisions. JAMA, 311(13), pp. 1327-1335.
[7] Ayer, T. et al. (2012). OR Forum: a POMDP approach to personalize mammography screening decisions. Operations Research, 60(5), pp. 1019-1034.
[8] Maillart, L. M. et al. (2008). Assessing dynamic breast cancer screening policies. Operations Research, 56(6), pp. 1411-1427.
[9] Erenay, F. S. et al. (2014). Optimizing colonoscopy screening for colorectal cancer prevention and surveillance. Manufacturing & Service Operations Management, 16(3), pp. 381-400.
[10] Rizopoulos, D. et al. (2015). Personalized screening intervals for biomarkers using joint models for longitudinal and survival data. Biostatistics, 17(1), pp. 149-164.
[11] Cox, D. R. (1992). Regression models and life-tables. In Breakthroughs in Statistics. Springer, New York.
[12] Miller Jr, R. G. (2011). Survival Analysis. John Wiley & Sons.
[13] Crowder, M. J. (2001). Classical Competing Risks. CRC Press.
[14] Lee, M. L. T. et al. (2003). First hitting time models for lifetime data. Handbook of Statistics, 23, pp. 537-543.
[15] Cox, D. R. (1992). Regression models and life-tables. In Breakthroughs in Statistics. Springer, New York, pp. 527-541.
[16] Si, X. S. et al. (2011). Remaining useful life estimation: a review on the statistical data driven approaches. European Journal of Operational Research, 213(1), pp. 1-14.
[17] Lee, M. L. T. et al. (2006). Threshold regression for survival analysis: modeling event times by a stochastic process reaching a boundary. Statistical Science, pp. 501-513.
[18] Pineau, J. et al. (2003). Point-based value iteration: an anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence, 3, pp. 1025-1032.
[19] Kim, D. et al. (2011). Point-based value iteration for constrained POMDPs. In International Joint Conference on Artificial Intelligence, pp. 1968-1974.
[20] Yu, H. (2006). Approximate solution methods for partially observable Markov and semi-Markov decision processes. Doctoral dissertation, Massachusetts Institute of Technology.
[21] Krishnamurthy, V. (2016). Partially Observed Markov Decision Processes. Cambridge University Press.
[22] Elson, S. L. et al. (2013). The Athena Breast Health Network: developing a rapid learning system in breast cancer prevention, screening, treatment, and care. Breast Cancer Research and Treatment, 140(2), pp. 417-425.
[23] Guiot, C. et al. (2003). Does tumor growth follow a "universal law"? Journal of Theoretical Biology, 225(2), pp. 147-151.
[24] Gail, M. H. et al. (1989). Projecting individualized probabilities of developing breast cancer for white females who are being examined annually. Journal of the National Cancer Institute, 81(24), pp. 1879-1886.
[25] Baxter, N. (2002). Breast self-examination. Canadian Medical Association Journal, 166(2), pp. 166-168.
[26] Thomas, D. B. et al. (2002). Randomized trial of breast self-examination in Shanghai: final results. Journal of the National Cancer Institute, 94(19), pp. 1445-1457.
[27] Oeffinger, K. C. et al. (2015). Breast cancer screening for women at average risk: 2015 guideline update from the American Cancer Society. JAMA, 314(15), pp. 1599-1614.
[28] Nelson, H. D. et al. (2009). Screening for breast cancer: an update for the US Preventive Services Task Force. Annals of Internal Medicine, 151(10), pp. 727-737.
[29] Klabunde, C. N. et al. (2007). Evaluating population-based screening mammography programs internationally. In Seminars in Breast Disease, International Breast Cancer Screening Network, 10(2), pp.
102-107.
[30] Sener, S. F. et al. (1999). Pancreatic cancer: a report of treatment and survival trends for 100,313 patients diagnosed from 1985-1995, using the National Cancer Database. Journal of the American College of Surgeons, 189(1), pp. 1-7.
[31] Armstrong, K. et al. (2007). Screening mammography in women 40 to 49 years of age: a systematic review for the American College of Physicians. Annals of Internal Medicine, 146(7), pp. 516-526.
[32] Liebman, M. N. (2007). Personalized medicine: a perspective on the patient, disease and causal diagnostics. pp. 171-174.
[33] Mandelblatt, J. S. et al. (1992). Breast cancer screening for elderly women with and without comorbid conditions. Annals of Internal Medicine, 116, pp. 722-730.
[34] Alaa, A. M. et al. (2016). Balancing suspense and surprise: timely decision making with endogenous information acquisition. In Advances in Neural Information Processing Systems (NIPS), pp. 2910-2918.
[35] Schulam, P. et al. (2016). Disease trajectory maps. In Advances in Neural Information Processing Systems (NIPS), pp. 4709-4717.
[36] Rizopoulos, D. (2011). Dynamic predictions and prospective accuracy in joint models for longitudinal and time-to-event data. Biometrics, 67(3), pp. 819-829.
[37] Meira-Machado, L. et al. (2006). Nonparametric estimation of transition probabilities in a non-Markov illness-death model. Lifetime Data Analysis, 12(3), pp. 325-344.
[38] Krishnamurthy, V. (2017). POMDP structural results for controlled sensing. arXiv preprint arXiv:1701.00179.
[39] Smallwood, R. D., & Sondik, E. J. (1973). The optimal control of partially observable Markov processes over a finite horizon. Operations Research, 21(5), pp. 1071-1088.
[40] Verdinelli, I. et al. (1992). Bayesian designs for maximizing information and outcome. Journal of the American Statistical Association, 87(418), pp. 510-515.
[41] Daskivich, T. J. et al. (2011). Overtreatment of men with low-risk prostate cancer and significant comorbidity. Cancer, 117(10), pp. 2058-2066.
[42] Alagoz, O. et al. (2011). Operations research models for cancer screening. Wiley Encyclopedia of Operations Research and Management Science.
[43] Elmore, J. G. et al. (2005). Efficacy of breast cancer screening in the community according to risk level. Journal of the National Cancer Institute, 97(14), pp. 1035-1043.
[44] Vilaprinyo, E. et al. (2014). Cost-effectiveness and harm-benefit analyses of risk-based screening strategies for breast cancer. PLoS ONE, 9(2), e86858.
[45] Trentham-Dietz, A. et al. (2016). Tailoring breast cancer screening intervals by breast density and risk for women aged 50 years or older: collaborative modeling of screening outcomes. Annals of Internal Medicine, 165(10), pp. 700-712.
[46] Loeve, F. et al. (1999). The MISCAN-COLON simulation model for the evaluation of colorectal cancer screening. Computers and Biomedical Research, 32(1), pp. 13-33.
[47] Zauber, A. G. et al. (2009). Evaluating test strategies for colorectal cancer screening: a decision analysis for the US Preventive Services Task Force. Annals of Internal Medicine, 149(9), pp. 659-669.
[48] Frazier, A. L. et al. (2000). Cost-effectiveness of screening for colorectal cancer in the general population. JAMA, 284(15), pp. 1954-1961.
[49] Whitlock, E. P. et al. (2008). Screening for colorectal cancer: a targeted, updated systematic review for the US Preventive Services Task Force. Annals of Internal Medicine, 149(9), pp. 638-658.
[50] Alaa, A. M. et al. (2016).
ConfidentCare: a clinical decision support system for personalized breast cancer screening. Accepted and to appear in IEEE Transactions on Multimedia, Special Issue on Multimedia-based Healthcare, 18(10), pp. 1942-1955.
6,337
6,732
Learning Unknown Markov Decision Processes: A Thompson Sampling Approach
Yi Ouyang, University of California, Berkeley, ouyangyi@berkeley.edu
Mukul Gagrani, University of Southern California, mgagrani@usc.edu
Ashutosh Nayyar, University of Southern California, ashutosn@usc.edu
Rahul Jain, University of Southern California, rahul.jain@usc.edu
Abstract
We consider the problem of learning an unknown Markov Decision Process (MDP) that is weakly communicating in the infinite horizon setting. We propose a Thompson Sampling-based reinforcement learning algorithm with dynamic episodes (TSDE). At the beginning of each episode, the algorithm generates a sample from the posterior distribution over the unknown model parameters. It then follows the optimal stationary policy for the sampled model for the rest of the episode. The duration of each episode is dynamically determined by two stopping criteria. The first stopping criterion controls the growth rate of episode length. The second stopping criterion fires when the number of visits to any state-action pair is doubled. We establish Õ(HS√(AT)) bounds on expected regret under a Bayesian setting, where S and A are the sizes of the state and action spaces, T is time, and H is the bound of the span. This regret bound matches the best available bound for weakly communicating MDPs. Numerical results show it to perform better than existing algorithms for infinite horizon MDPs.
1 Introduction
We consider the problem of reinforcement learning by an agent interacting with an environment while trying to minimize the total cost accumulated over time. The environment is modeled by an infinite horizon Markov Decision Process (MDP) with finite state and action spaces. When the environment is perfectly known, the agent can determine optimal actions by solving a dynamic program for the MDP [1]. In reinforcement learning, however, the agent is uncertain about the true dynamics of the MDP. A naive approach to an unknown model is the certainty equivalence principle. The idea is to estimate the unknown MDP parameters from available information and then choose actions as if the estimates are the true parameters. But it is well known in adaptive control theory that the certainty equivalence principle may lead to suboptimal performance due to the lack of exploration [2]. This issue comes from the fundamental exploitation-exploration trade-off: the agent wants to exploit available information to minimize cost, but it also needs to explore the environment to learn the system dynamics.
One common way to handle the exploitation-exploration trade-off is to use the optimism in the face of uncertainty (OFU) principle [3]. Under this principle, the agent constructs confidence sets for the system parameters at each time, finds the optimistic parameters that are associated with the minimum cost, and then selects an action based on the optimistic parameters. The optimism procedure encourages exploration of rarely visited states and actions. Several optimistic algorithms have been proved to possess strong theoretical performance guarantees [4-10].
An alternative way to incentivize exploration is the Thompson Sampling (TS) or Posterior Sampling method. The idea of TS was first proposed by Thompson in [11] for stochastic bandit problems.
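For intuition, here is a minimal sketch of Thompson's original scheme for Bernoulli bandits, using Beta posteriors; the environment callback `pull` is a hypothetical placeholder, not part of any referenced implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_bandit(pull, n_arms, horizon):
    """Sample a mean reward for each arm from its Beta posterior and
    play the argmax; pull(a) returns a 0/1 reward for arm a."""
    wins = np.ones(n_arms)    # Beta(1, 1) uniform priors
    losses = np.ones(n_arms)
    for _ in range(horizon):
        a = int(np.argmax(rng.beta(wins, losses)))
        r = pull(a)
        wins[a] += r
        losses[a] += 1 - r
    return wins, losses
```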
TS has been applied to MDP environments [12-17], where the agent computes the posterior distribution of the unknown parameters using observed information and a prior distribution. A TS algorithm generally proceeds in episodes: at the beginning of each episode a set of MDP parameters is randomly sampled from the posterior distribution, and actions are then selected based on the sampled model during the episode. TS algorithms have the following advantages over optimistic algorithms. First, TS algorithms can easily incorporate problem structure through the prior distribution. Second, they are more computationally efficient, since a TS algorithm only needs to solve the sampled MDP, while an optimistic algorithm requires solving all MDPs that lie within the confidence sets. Third, empirical studies suggest that TS algorithms outperform optimistic algorithms in bandit problems [18, 19] as well as in MDP environments [13, 16, 17].
Due to the above advantages, we focus on TS algorithms for the MDP learning problem. The main challenge in the design of a TS algorithm is the choice of episode lengths. For finite horizon MDPs under the episodic setting, the length of each episode can be set to the time horizon [13]. When there exists a recurrent state under every stationary policy, the TS algorithm of [15] starts a new episode whenever the system enters the recurrent state. However, these ways of ending an episode cannot be applied to MDPs without such special features. The work of [16] proposed a dynamic episode schedule based on the doubling trick used in [7], but a mistake in their proof of the regret bound was pointed out by [20]. In view of the mistake in [16], to the best of our knowledge there is no TS algorithm with strong performance guarantees for general MDPs.
We consider weakly communicating MDPs, the most general subclass of MDPs in which meaningful finite time regret guarantees can be analyzed. We propose the Thompson Sampling with Dynamic Episodes (TSDE) learning algorithm. In TSDE, there are two stopping criteria for an episode to end. The first stopping criterion controls the growth rate of episode length. The second stopping criterion is the doubling trick, similar to the one in [7-10, 16], which stops when the number of visits to any state-action pair is doubled. Under a Bayesian framework, we show that the expected regret of TSDE accumulated up to time T is bounded by Õ(HS√(AT)), where Õ hides logarithmic factors; here S and A are the sizes of the state and action spaces, T is time, and H is the bound of the span. This regret bound matches the best available bound for weakly communicating MDPs [7], and it matches the theoretical lower bound in its order in T up to logarithmic factors. We present numerical results showing that TSDE actually outperforms current algorithms with known regret bounds of the same order in T, on a benchmark MDP problem as well as on randomly generated MDPs.
2 Problem Formulation
2.1 Preliminaries
An infinite horizon Markov Decision Process (MDP) is described by (S, A, c, θ). Here S is the state space, A is the action space, c : S × A → [0, 1] is the cost function (since S and A are finite, the cost function can be normalized to [0, 1] without loss of generality), and θ : S² × A → [0, 1] represents the transition probabilities, so that θ(s′|s, a) = P(s_{t+1} = s′ | s_t = s, a_t = a), where s_t ∈ S and a_t ∈ A are the state and the action at t = 1, 2, 3, .... We assume that S and A are finite spaces with sizes S ≥ 2 and A ≥ 2, and that the initial state s₁ is a known and fixed state. A stationary policy is a deterministic map π : S → A that maps a state to an action.
The average cost per stage of a stationary policy π is defined as

J_π(θ) = limsup_{T→∞} (1/T) E[ Σ_{t=1}^{T} c(s_t, a_t) ].

Here we use J_π(θ) to explicitly show the dependence of the average cost on θ. To obtain meaningful finite time regret bounds, we consider the subclass of weakly communicating MDPs, defined as follows.
Definition 1. An MDP is weakly communicating (or weakly accessible) if its states can be partitioned into two subsets: in the first subset all states are transient under every stationary policy, and every two states in the second subset can be reached from each other under some stationary policy.
From MDP theory [1], we know that if the MDP is weakly communicating, the optimal average cost per stage J(θ) = min_π J_π(θ) satisfies the Bellman equation

J(θ) + v(s, θ) = min_{a∈A} { c(s, a) + Σ_{s′∈S} θ(s′|s, a) v(s′, θ) }   (1)

for all s ∈ S. The corresponding optimal stationary policy π* is the minimizer of the above optimization, given by

a = π*(s, θ).   (2)

Since the cost function c(s, a) ∈ [0, 1], J(θ) ∈ [0, 1] for all θ. If v satisfies the Bellman equation, v plus any constant also satisfies the Bellman equation. Without loss of generality, let min_{s∈S} v(s, θ) = 0 and define the span of the MDP as sp(θ) = max_{s∈S} v(s, θ) (see [7] for a discussion of the connection of the span with other parameters, such as the diameter appearing in the lower bound on regret). We define Ω to be the set of all θ such that the MDP with transition probabilities θ is weakly communicating and there exists a number H such that sp(θ) ≤ H. We will focus on MDPs with transition probabilities in the set Ω.
2.2 Reinforcement Learning for Weakly Communicating MDPs
We consider the reinforcement learning problem of an agent interacting with a random weakly communicating MDP (S, A, c, θ*). We assume that S, A and the cost function c are completely known to the agent. The actual transition probabilities θ* are randomly generated at the beginning, before the MDP starts interacting with the agent. The value of θ* is then fixed but unknown to the agent. Complete knowledge of the cost is typical, as in [7, 15]; algorithms can generally be extended to the unknown costs/rewards case at the expense of some constant factor in the regret bound.
At each time t, the agent selects an action according to a_t = π_t(h_t), where h_t = (s₁, s₂, ..., s_t, a₁, a₂, ..., a_{t−1}) is the history of states and actions. The collection π = (π₁, π₂, ...) is called a learning algorithm. The functions π_t allow for the possibility of randomization over actions at each time. We focus on a Bayesian framework for the unknown parameter θ*. Let μ₁ be the prior distribution for θ*, i.e., for any set Θ, P(θ* ∈ Θ) = μ₁(Θ). We make the following assumption on μ₁.
Assumption 1. The support of the prior distribution μ₁ is a subset of Ω. That is, the MDP is weakly communicating and sp(θ*) ≤ H.
In this Bayesian framework, we define the expected regret (also called Bayesian regret or Bayes risk) of a learning algorithm π up to time T as

R(T, π) = E[ Σ_{t=1}^{T} ( c(s_t, a_t) − J(θ*) ) ]   (3)

where s_t, a_t, t = 1, ..., T are generated by π and J(θ*) is the optimal per stage cost of the MDP. The above expectation is with respect to the prior distribution μ₁ for θ*, the randomness in state transitions, and the randomized algorithm. The expected regret is an important metric to quantify the performance of a learning algorithm.
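The optimal policy in (1)-(2) can be computed numerically, e.g., by relative value iteration; the sketch below is a standard textbook routine written under the normalization min_s v(s, θ) = 0 used above, not the authors' implementation, and convergence guarantees depend on the structure of the MDP.

```python
import numpy as np

def relative_value_iteration(c, theta, tol=1e-8, max_iter=100_000):
    """Approximately solve J + v(s) = min_a { c(s,a) + sum_s' theta(s'|s,a) v(s') }.

    c: (S, A) cost array; theta: (S, A, S) transition tensor.
    Returns the average cost J, relative values v, and a greedy policy."""
    S, A = c.shape
    v = np.zeros(S)
    for _ in range(max_iter):
        q = c + theta @ v            # q[s, a] = c(s,a) + sum over s' of theta * v
        tv = q.min(axis=1)
        J = tv.min()                 # offset; tends to the optimal average cost
        v_new = tv - J               # keep min_s v(s) = 0, matching the text
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    policy = (c + theta @ v).argmin(axis=1)
    return J, v, policy
```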
3 Thompson Sampling with Dynamic Episodes
In this section, we propose the Thompson Sampling with Dynamic Episodes (TSDE) learning algorithm. The input of TSDE is the prior distribution μ₁. At each time t, given the history h_t, the agent can compute the posterior distribution μ_t given by μ_t(Θ) = P(θ* ∈ Θ | h_t) for any set Θ. Upon applying the action a_t and observing the new state s_{t+1}, the posterior distribution at t + 1 can be updated according to Bayes' rule as

μ_{t+1}(dθ) = θ(s_{t+1}|s_t, a_t) μ_t(dθ) / ∫ θ′(s_{t+1}|s_t, a_t) μ_t(dθ′).   (4)

Let N_t(s, a) be the number of visits to any state-action pair (s, a) before time t. That is,

N_t(s, a) = |{τ < t : (s_τ, a_τ) = (s, a)}|.   (5)

With these notations, TSDE is described as follows.

Algorithm 1: Thompson Sampling with Dynamic Episodes (TSDE)
  Input: μ₁
  Initialization: t ← 1, t_k ← 0
  for episodes k = 1, 2, ... do
    T_{k−1} ← t − t_k
    t_k ← t
    Generate θ_k ∼ μ_{t_k} and compute π_k(·) = π*(·, θ_k) from (1)-(2)
    while t ≤ t_k + T_{k−1} and N_t(s, a) ≤ 2 N_{t_k}(s, a) for all (s, a) ∈ S × A do
      Apply action a_t = π_k(s_t)
      Observe the new state s_{t+1}
      Update μ_{t+1} according to (4)
      t ← t + 1
    end while
  end for

The TSDE algorithm operates in episodes. Let t_k be the start time of the k-th episode and T_k = t_{k+1} − t_k be the length of the episode, with the convention T₀ = 1. From the description of the algorithm, t₁ = 1 and t_{k+1}, k ≥ 1, is given by

t_{k+1} = min{ t > t_k : t > t_k + T_{k−1} or N_t(s, a) > 2 N_{t_k}(s, a) for some (s, a) }.   (6)

At the beginning of episode k, a parameter θ_k is sampled from the posterior distribution μ_{t_k}. During episode k, actions are generated from the optimal stationary policy π_k for the sampled parameter θ_k. One important feature of TSDE is that its episode lengths are not fixed. The length T_k of each episode is dynamically determined according to two stopping criteria: (i) t > t_k + T_{k−1}, and (ii) N_t(s, a) > 2 N_{t_k}(s, a) for some state-action pair (s, a). The first stopping criterion ensures that the episode length grows at a linear rate when the second criterion is not triggered. The second stopping criterion ensures that the number of visits to any state-action pair (s, a) during an episode is no more than the number of visits to that pair before the episode.
Remark 1. Note that TSDE only requires knowledge of S, A, c, and the prior distribution μ₁. TSDE can operate without knowledge of the time horizon T, of the bound H on the span used in [7], and of any property of the actual θ*, such as the recurrent state needed in [15].
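To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch of the episode loop; the callables `sample_posterior`, `update_posterior`, `solve_mdp` and `step` are hypothetical placeholders for the posterior in (4), the planner in (1)-(2), and the environment.

```python
import numpy as np

def tsde(sample_posterior, update_posterior, solve_mdp, step, S, A, T):
    """Episode loop of Algorithm 1: sample theta_k from the posterior,
    follow its optimal policy until either stopping criterion fires.
    step(s, a) applies an action at state s and returns the next state."""
    N = np.zeros((S, A), dtype=int)   # visit counts N_t(s, a)
    t, s = 1, 0                       # time and (fixed) initial state
    T_prev = 1                        # T_0 = 1 by convention
    while t <= T:
        t_k, N_k = t, N.copy()        # episode start time and its counts
        theta_k = sample_posterior()
        pi_k = solve_mdp(theta_k)     # stationary policy, indexed by state
        while t <= T and t <= t_k + T_prev and np.all(N <= 2 * N_k):
            a = pi_k[s]
            s_next = step(s, a)
            update_posterior(s, a, s_next)
            N[s, a] += 1
            s, t = s_next, t + 1
        T_prev = t - t_k              # length of the episode just ended
    return N
```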
3.1 Main Result
Theorem 1. Under Assumption 1,

R(T, TSDE) ≤ (H + 1) √(2SAT log T) + 49 HS √(AT log(AT)).

The proof of Theorem 1 appears in Section 4.
Remark 2. Note that our regret bound has the same order in H, S, A and T as the optimistic algorithm in [7], which is the best available bound for weakly communicating MDPs. Moreover, the bound does not depend on the prior distribution or on other problem-dependent parameters, such as the recurrent time of the optimal policy used in the regret bound of [15].
3.2 Approximation Error
At the beginning of each episode, TSDE computes the optimal stationary policy π_k for the parameter θ_k. This step requires the solution of a fixed finite MDP. Policy iteration or value iteration can be used to solve the sampled MDP, but the resulting stationary policy may be only approximately optimal in practice. We call π̃ an ε-approximate policy if

c(s, π̃(s)) + Σ_{s′∈S} θ(s′|s, π̃(s)) v(s′, θ) ≤ min_{a∈A} { c(s, a) + Σ_{s′∈S} θ(s′|s, a) v(s′, θ) } + ε.

When the algorithm returns an ε_k-approximate policy π̃_k instead of the optimal stationary policy π_k at episode k, we have the following regret bound in the presence of such approximation error.
Theorem 2. If TSDE computes an ε_k-approximate policy π̃_k instead of the optimal stationary policy π_k at each episode k, the expected regret of TSDE satisfies

R(T, TSDE) ≤ Õ(HS√(AT)) + E[ Σ_{k: t_k ≤ T} T_k ε_k ].

Furthermore, if ε_k ≤ 1/√(k+1), then E[ Σ_{k: t_k ≤ T} T_k ε_k ] ≤ √(2SAT log T).

Theorem 2 shows that the approximation error in the computation of the optimal stationary policy is only additive to the regret under TSDE. The regret bound remains Õ(HS√(AT)) if the approximation error satisfies ε_k ≤ 1/√(k+1). The proof of Theorem 2 is in the appendix due to the lack of space.
4 Analysis
4.1 Number of Episodes
To analyze the performance of TSDE over T time steps, define K_T = max{k : t_k ≤ T}, the number of episodes of TSDE until time T. Note that K_T is a random variable because the visit counts N_t(s, a) depend on the random state trajectory. In the analysis for time T we use the convention t_{K_T + 1} = T + 1. We provide an upper bound on K_T as follows.
Lemma 1.

K_T ≤ √(2SAT log T).

Proof. Define macro episodes with start times t_{n_i}, i = 1, 2, ..., where n₁ = 1 and

t_{n_{i+1}} = min{ t_k > t_{n_i} : N_{t_k}(s, a) > 2 N_{t_{k−1}}(s, a) for some (s, a) }.

The idea is that a new macro episode starts whenever the second stopping criterion is triggered. Let M be the number of macro episodes until time T and define n_{M+1} = K_T + 1.
Let T̃_i = Σ_{k=n_i}^{n_{i+1}−1} T_k be the length of the i-th macro episode. By the definition of macro episodes, any episode except the last one in a macro episode must be triggered by the first stopping criterion. Therefore, within the i-th macro episode, T_k = T_{k−1} + 1 for all k = n_i, n_i + 1, ..., n_{i+1} − 2. Hence,

T̃_i = Σ_{k=n_i}^{n_{i+1}−1} T_k = Σ_{j=1}^{n_{i+1}−n_i−1} (T_{n_i−1} + j) + T_{n_{i+1}−1}
    ≥ Σ_{j=1}^{n_{i+1}−n_i−1} (j + 1) + 1 = 0.5 (n_{i+1} − n_i)(n_{i+1} − n_i + 1).   (7)

Consequently, n_{i+1} − n_i ≤ √(2 T̃_i) for all i = 1, ..., M. From this property we obtain

K_T = n_{M+1} − 1 = Σ_{i=1}^{M} (n_{i+1} − n_i) ≤ Σ_{i=1}^{M} √(2 T̃_i).

Using (7) and the fact that Σ_{i=1}^{M} T̃_i = T, we get

K_T ≤ Σ_{i=1}^{M} √(2 T̃_i) ≤ √( M Σ_{i=1}^{M} 2 T̃_i ) = √(2MT)   (8)

where the second inequality is Cauchy-Schwarz. From Lemma 6 in the appendix, the number of macro episodes satisfies M ≤ SA log T. Substituting this bound into (8), we obtain the result of the lemma.
Remark 3. TSDE computes the optimal stationary policy of a finite MDP at each episode. Lemma 1 ensures that such a computation is needed only at a sublinear rate of √(2SAT log T).
4.2 Regret Bound
As discussed in [13, 20, 21], one key property of Thompson/Posterior Sampling algorithms is that, for any function f, E[f(θ_t)] = E[f(θ*)] if θ_t is sampled from the posterior distribution at time t. This property leads to regret bounds for algorithms with fixed sampling episodes, since the start time t_k of each episode is then deterministic. However, our TSDE algorithm has dynamic episodes, which requires a stopping-time version of this property.
Lemma 2. Under TSDE, t_k is a stopping time for any episode k. Then for any measurable function f and any σ(h_{t_k})-measurable random variable X, we have

E[ f(θ_k, X) ] = E[ f(θ*, X) ].

Proof. From the definition (6), the start time t_k is a stopping time, i.e., t_k is σ(h_{t_k})-measurable. Note that θ_k is randomly sampled from the posterior distribution μ_{t_k}. Since t_k is a stopping time, t_k and μ_{t_k} are both measurable with respect to σ(h_{t_k}). By assumption, X is also measurable with respect to σ(h_{t_k}).
Then, conditioned on h_{t_k}, the only randomness in f(θ_k, X) is the random sampling in the algorithm. This gives the following equation:

E[ f(θ_k, X) | h_{t_k} ] = E[ f(θ_k, X) | h_{t_k}, t_k, μ_{t_k} ] = ∫ f(θ, X) μ_{t_k}(dθ) = E[ f(θ*, X) | h_{t_k} ]   (9)

since μ_{t_k} is the posterior distribution of θ* given h_{t_k}. The result now follows by taking the expectation of both sides.
For t_k ≤ t < t_{k+1} in episode k, the Bellman equation (1) holds by Assumption 1 for s = s_t, θ = θ_k and action a_t = π_k(s_t). Then we obtain

c(s_t, a_t) = J(θ_k) + v(s_t, θ_k) − Σ_{s′∈S} θ_k(s′|s_t, a_t) v(s′, θ_k).   (10)

Using (10), the expected regret of TSDE is equal to

E[ Σ_{k=1}^{K_T} Σ_{t=t_k}^{t_{k+1}−1} c(s_t, a_t) ] − T E[ J(θ*) ]
  = E[ Σ_{k=1}^{K_T} T_k J(θ_k) ] − T E[ J(θ*) ] + E[ Σ_{k=1}^{K_T} Σ_{t=t_k}^{t_{k+1}−1} ( v(s_t, θ_k) − Σ_{s′∈S} θ_k(s′|s_t, a_t) v(s′, θ_k) ) ]
  = R₀ + R₁ + R₂,   (11)

where R₀, R₁ and R₂ are given by

R₀ = E[ Σ_{k=1}^{K_T} T_k J(θ_k) ] − T E[ J(θ*) ],
R₁ = E[ Σ_{k=1}^{K_T} Σ_{t=t_k}^{t_{k+1}−1} ( v(s_t, θ_k) − v(s_{t+1}, θ_k) ) ],
R₂ = E[ Σ_{k=1}^{K_T} Σ_{t=t_k}^{t_{k+1}−1} ( v(s_{t+1}, θ_k) − Σ_{s′∈S} θ_k(s′|s_t, a_t) v(s′, θ_k) ) ].

We proceed to derive bounds on R₀, R₁ and R₂. Based on the key property of Lemma 2, we derive an upper bound on R₀.
Lemma 3. The first term R₀ is bounded as R₀ ≤ E[K_T].
Proof. From the monotone convergence theorem we have

R₀ = E[ Σ_{k=1}^{∞} 1{t_k ≤ T} T_k J(θ_k) ] − T E[ J(θ*) ] = Σ_{k=1}^{∞} E[ 1{t_k ≤ T} T_k J(θ_k) ] − T E[ J(θ*) ].

Note that the first stopping criterion of TSDE ensures that T_k ≤ T_{k−1} + 1 for all k. Because J(θ_k) ≥ 0, each term in the first summation satisfies

E[ 1{t_k ≤ T} T_k J(θ_k) ] ≤ E[ 1{t_k ≤ T} (T_{k−1} + 1) J(θ_k) ].

Note that 1{t_k ≤ T} (T_{k−1} + 1) is measurable with respect to σ(h_{t_k}). Then Lemma 2 gives

E[ 1{t_k ≤ T} (T_{k−1} + 1) J(θ_k) ] = E[ 1{t_k ≤ T} (T_{k−1} + 1) J(θ*) ].

Combining the above equations, we get

R₀ ≤ Σ_{k=1}^{∞} E[ 1{t_k ≤ T} (T_{k−1} + 1) J(θ*) ] − T E[ J(θ*) ]
   = E[ Σ_{k=1}^{K_T} (T_{k−1} + 1) J(θ*) ] − T E[ J(θ*) ]
   = E[ K_T J(θ*) ] + E[ ( Σ_{k=1}^{K_T} T_{k−1} − T ) J(θ*) ] ≤ E[ K_T ],

where the last inequality holds because J(θ*) ≤ 1 and Σ_{k=1}^{K_T} T_{k−1} = T₀ + Σ_{k=1}^{K_T−1} T_k ≤ T.
Note that the first stopping criterion of TSDE plays a crucial role in the proof of Lemma 3: it allows us to bound the length of an episode by the length of the previous episode, which is measurable with respect to the information available at the beginning of the episode. The other two terms R₁ and R₂ of the regret are bounded in the following lemmas. Their proofs follow steps similar to those in [13, 16] and are in the appendix due to the lack of space.
Lemma 4. The second term R₁ is bounded as R₁ ≤ E[H K_T].
Lemma 5. The third term R₂ is bounded as R₂ ≤ 49 HS √(AT log(AT)).
We are now ready to prove Theorem 1.
Proof of Theorem 1. From (11),

R(T, TSDE) = R₀ + R₁ + R₂ ≤ E[K_T] + E[H K_T] + R₂,

where the inequality comes from Lemma 3 and Lemma 4. The claim of the theorem then follows directly from Lemma 1 and Lemma 5.
5 Simulations
In this section, we compare through simulations the performance of TSDE with three learning algorithms of the same regret order: UCRL2 [8], TSMDP [15], and Lazy PSRL [16]. UCRL2 is an optimistic algorithm with similar regret bounds. TSMDP and Lazy PSRL are TS algorithms for infinite horizon MDPs. TSMDP has the same regret order in T given a recurrent state for resampling. The original regret analysis for Lazy PSRL is incorrect, but the regret bounds are conjectured to be correct [20]. We chose δ = 0.05 for the implementation of UCRL2 and assume an independent Dirichlet prior with parameters [0.1, ..., 0.1] over the transition probabilities for all TS algorithms.
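With an independent Dirichlet prior on each row θ(·|s, a), conjugacy makes posterior sampling a single Dirichlet draw per state-action pair; a minimal sketch (ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_transitions(counts, alpha=0.1):
    """Draw theta from the posterior under independent Dirichlet(alpha, ..., alpha)
    priors, given observed transition counts of shape (S, A, S)."""
    S, A, _ = counts.shape
    theta = np.empty_like(counts, dtype=float)
    for s in range(S):
        for a in range(A):
            # posterior of row theta(.|s, a) is Dirichlet(alpha + counts)
            theta[s, a] = rng.dirichlet(alpha + counts[s, a])
    return theta
```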
We consider two environments: randomly generated MDPs and the RiverSwim example [22]. For randomly generated MDPs, we use the independent Dirichlet prior over 6 states and 2 actions, but with a fixed cost. We select the resampling state s₀ = 1 for TSMDP here, since all states are recurrent under the Dirichlet prior. The RiverSwim example models an agent swimming in a river who can choose to swim either left or right. The MDP consists of six states arranged in a chain, with the agent starting in the leftmost state (s = 1). If the agent decides to move left, i.e., with the river current, it always succeeds, but if it decides to move right it might fail with some probability. The cost function is given by: c(s, a) = 0.8 if s = 1, a = left; c(s, a) = 0 if s = 6, a = right; and c(s, a) = 1 otherwise. The optimal policy is to swim right to reach the rightmost state, which minimizes the cost. For TSMDP in RiverSwim, we consider two versions, with s₀ = 1 and with s₀ = 3 as the resampling state. We simulate 500 Monte Carlo runs for both examples and run for T = 10⁵.
[Figure 1: Simulation results. (a) Expected regret vs time for random MDPs; (b) expected regret vs time for RiverSwim. Curves shown for UCRL2, TSMDP (with s₀ = 1 and s₀ = 3 in RiverSwim), Lazy PSRL, and TSDE over T = 10⁵ steps.]
From Figure 1(a) we can see that TSDE outperforms all three algorithms on randomly generated MDPs. In particular, there is a significant gap between the regret of TSDE and that of UCRL2 and TSMDP. The poor performance of UCRL2 confirms the motivation for considering TS algorithms. By the specification of TSMDP, its performance hinges on the choice of an appropriate resampling state, which is not possible for a general unknown MDP. This is reflected in the randomly generated MDPs experiment. In the RiverSwim example, Figure 1(b) shows that TSDE significantly outperforms UCRL2, Lazy PSRL, and TSMDP with s₀ = 3. Although TSMDP with s₀ = 1 performs slightly better than TSDE, there is no way to pick this specific s₀ if the MDP is unknown in practice. Since Lazy PSRL is also equipped with the doubling trick criterion, the performance gap between TSDE and Lazy PSRL highlights the importance of the first stopping criterion on the growth rate of episode length. We also like to point out that in this example the MDP is fixed and is not generated from the Dirichlet prior. Therefore, we conjecture that TSDE has the same regret bounds under a non-Bayesian setting as well.
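For completeness, a sketch of the RiverSwim environment described above; the costs follow the text, while the success/slip probabilities for swimming right are illustrative placeholders rather than the exact values in [22].

```python
import numpy as np

def riverswim(p_right=0.6, p_stay=0.35):
    """Six-state RiverSwim chain; action 0 = left, action 1 = right.
    Left always succeeds; right advances, stays, or slips back."""
    S, LEFT, RIGHT = 6, 0, 1
    c = np.ones((S, 2))
    c[0, LEFT] = 0.8            # s = 1 in the paper's 1-based indexing
    c[5, RIGHT] = 0.0           # s = 6
    theta = np.zeros((S, 2, S))
    for s in range(S):
        theta[s, LEFT, max(s - 1, 0)] = 1.0            # left always succeeds
        theta[s, RIGHT, min(s + 1, S - 1)] += p_right  # advance
        theta[s, RIGHT, s] += p_stay                   # stay
        theta[s, RIGHT, max(s - 1, 0)] += 1 - p_right - p_stay  # slip back
    return c, theta
```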
6 Conclusion
We propose the Thompson Sampling with Dynamic Episodes (TSDE) learning algorithm and establish Õ(HS√(AT)) bounds on expected regret for the general subclass of weakly communicating MDPs. Our result fills a gap in the theoretical analysis of Thompson Sampling for MDPs. Numerical results validate that the TSDE algorithm outperforms other learning algorithms for infinite horizon MDPs. The TSDE algorithm determines the end of an episode by two stopping criteria. The second criterion comes from the doubling trick used in many reinforcement learning algorithms, but the first criterion, on the linear growth rate of episode length, appears to be a new idea for episodic learning algorithms. This stopping criterion is crucial in the proof of the regret bound (Lemma 3). The simulation results of TSDE versus Lazy PSRL further show that this criterion is not merely a technical device for the proofs; it indeed helps balance exploitation and exploration.
Acknowledgments
Yi Ouyang would like to thank Yang Liu from Harvard University for helpful discussions. Rahul Jain and Ashutosh Nayyar were supported by NSF Grants 1611574 and 1446901.
References
[1] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 2. Athena Scientific, Belmont, MA, 2012.
[2] P. R. Kumar and P. Varaiya, Stochastic Systems: Estimation, Identification, and Adaptive Control. SIAM, 2015.
[3] T. L. Lai and H. Robbins, "Asymptotically efficient adaptive allocation rules," Advances in Applied Mathematics, vol. 6, no. 1, pp. 4-22, 1985.
[4] A. N. Burnetas and M. N. Katehakis, "Optimal adaptive policies for Markov decision processes," Mathematics of Operations Research, vol. 22, no. 1, pp. 222-255, 1997.
[5] M. Kearns and S. Singh, "Near-optimal reinforcement learning in polynomial time," Machine Learning, vol. 49, no. 2-3, pp. 209-232, 2002.
[6] R. I. Brafman and M. Tennenholtz, "R-max - a general polynomial time algorithm for near-optimal reinforcement learning," Journal of Machine Learning Research, vol. 3, no. Oct, pp. 213-231, 2002.
[7] P. L. Bartlett and A. Tewari, "REGAL: a regularization based algorithm for reinforcement learning in weakly communicating MDPs," in UAI, 2009.
[8] T. Jaksch, R. Ortner, and P. Auer, "Near-optimal regret bounds for reinforcement learning," Journal of Machine Learning Research, vol. 11, no. Apr, pp. 1563-1600, 2010.
[9] S. Filippi, O. Cappé, and A. Garivier, "Optimism in reinforcement learning and Kullback-Leibler divergence," in Allerton, pp. 115-122, 2010.
[10] C. Dann and E. Brunskill, "Sample complexity of episodic fixed-horizon reinforcement learning," in NIPS, 2015.
[11] W. R. Thompson, "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples," Biometrika, vol. 25, no. 3/4, pp. 285-294, 1933.
[12] M. Strens, "A Bayesian framework for reinforcement learning," in ICML, 2000.
[13] I. Osband, D. Russo, and B. Van Roy, "(More) efficient reinforcement learning via posterior sampling," in NIPS, 2013.
[14] R. Fonteneau, N. Korda, and R. Munos, "An optimistic posterior sampling strategy for Bayesian reinforcement learning," in BayesOpt2013, 2013.
[15] A. Gopalan and S. Mannor, "Thompson sampling for learning parameterized Markov decision processes," in COLT, 2015.
[16] Y. Abbasi-Yadkori and C. Szepesvári, "Bayesian optimal control of smoothly parameterized systems," in UAI, 2015.
[17] I. Osband and B. Van Roy, "Why is posterior sampling better than optimism for reinforcement learning," EWRL, 2016.
[18] S. L. Scott, "A modern Bayesian look at the multi-armed bandit," Applied Stochastic Models in Business and Industry, vol. 26, no. 6, pp. 639-658, 2010.
[19] O. Chapelle and L. Li, "An empirical evaluation of Thompson sampling," in NIPS, 2011.
[20] I. Osband and B. Van Roy, "Posterior sampling for reinforcement learning without episodes," arXiv preprint arXiv:1608.02731, 2016.
[21] D. Russo and B. Van Roy, "Learning to optimize via posterior sampling," Mathematics of Operations Research, vol. 39, no. 4, pp. 1221-1243, 2014.
[22] A. L. Strehl and M. L. Littman, "An analysis of model-based interval estimation for Markov decision processes," Journal of Computer and System Sciences, vol. 74, no. 8, pp. 1309-1331, 2008.
6,338
6,733
Testing and Learning on Distributions with Symmetric Noise Invariance
Ho Chung Leon Law, Department of Statistics, University of Oxford
Christopher Yau, Centre for Computational Biology, University of Birmingham
Dino Sejdinovic, Department of Statistics, University of Oxford
Abstract
Kernel embeddings of distributions and the Maximum Mean Discrepancy (MMD), the resulting distance between distributions, are useful tools for fully nonparametric two-sample testing and learning on distributions. However, it is rare that all possible differences between samples are of interest: discovered differences can be due to different types of measurement noise, data collection artefacts or other irrelevant sources of variability. We propose distances between distributions which encode invariance to additive symmetric noise, aimed at testing whether the assumed true underlying processes differ. Moreover, we construct invariant features of distributions, leading to learning algorithms robust to the impairment of the input distributions with symmetric additive noise.
1 Introduction
There are many sources of variability in data, and not all of them are pertinent to the questions that a data analyst may be interested in. Consider, for example, a nonparametric two-sample testing problem, which has recently been attracting significant research interest, especially in the context of kernel embeddings of distributions [2, 5, 7]. We observe samples {X_{1j}}_{j=1}^{N₁} and {X_{2j}}_{j=1}^{N₂} from two data generating processes P₁ and P₂, respectively, and would like to test the null hypothesis that P₁ = P₂ without making any parametric assumptions on these distributions. With a large sample size, the minutiae of the two data generating processes are uncovered (e.g., slightly different calibration of the data collecting equipment, different numerical precision), and we ultimately reject the null hypothesis, even if the sources of variation across the two samples are irrelevant for the analysis. Similarly, we may be interested in learning on distributions [14, 23, 24], where the appropriate level of granularity in the data is distributional. For example, each label y_i in supervised learning is associated with a whole bag of observations B_i = {X_{ij}}_{j=1}^{N_i}, assumed to come from a probability distribution P_i, or we may be interested in clustering such bags of observations. Again, nonparametric distances used in such contexts to facilitate a learning algorithm on distributions, such as the Maximum Mean Discrepancy (MMD) [5], can be sensitive to irrelevant sources of variation and may lead to suboptimal or even misleading results, in which case building predictors which are invariant to noise is of interest.
While it may be tempting to revert to a parametric setup and work with simple, easy-to-interpret models, we argue that a different approach is possible: we stay within a nonparametric framework, exploit the irregular and complicated nature of real-life distributions, and encode invariances to the sources of variation assumed to be irrelevant. In this contribution, we focus on invariances to symmetric additive noise on each of the data generating distributions. Namely, we assume that the i-th sample {X_{ij}}_{j=1}^{N_i} we observe does not follow the distribution P_i of interest but instead its convolution P_i ∗ E_i with some unknown noise distribution E_i, assumed to be symmetric about 0 (we also require that it has a positive characteristic function).
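To make the noise model concrete, the following minimal sketch impairs two bags with additive Laplace noise, which has a strictly positive characteristic function, 1/(1 + b²ω²), and is therefore an admissible symmetric noise component; the bags and noise scales are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(samples, scale):
    """Return a bag from the convolution P * E, where E is Laplace noise."""
    return samples + rng.laplace(0.0, scale, size=samples.shape)

# two bags drawn from the same P, impaired by different noise levels
x1 = corrupt(rng.standard_normal(500), scale=0.5)
x2 = corrupt(rng.standard_normal(500), scale=2.0)
```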
We would like to assess the differences between $P_i$ and $P_{i'}$ while allowing $E_i$ and $E_{i'}$ to differ in an arbitrary way. We investigate two approaches to this problem: (1) measuring the degree of asymmetry of the paired differences $\{X_{ij} - X_{i'j}\}$, and (2) comparing the phase functions of the corresponding samples. While the first approach is simpler and presents a sensible solution for the two-sample testing problem, we demonstrate that phase functions give a much better gauge on the relative comparisons between bags of observations, as required for learning on distributions.

The paper is outlined as follows. In Section 2, we provide an overview of the background. In Section 3, we provide details of the construction and implementation of phase features. In Section 4, we discuss the approach based on asymmetry in paired differences for two-sample testing with invariances. Section 5 provides experiments on synthetic and real data, before concluding in Section 6.

2 Background and Setup

We will say that a random vector $E$ on $\mathbb{R}^d$ is a symmetric positive definite (SPD) component if its characteristic function is positive, i.e. $\varphi_E(\omega) = \mathbb{E}\left[\exp(i\omega^\top E)\right] > 0$ for all $\omega \in \mathbb{R}^d$. This means that $E$ is (1) symmetric about zero, i.e. $E$ and $-E$ have the same distribution, and (2) if it has a density, this density must be a positive definite function [20]. Note that many distributions used to model additive noise, including the spherical zero-mean Gaussian distribution, as well as the multivariate Laplace, Cauchy or Student's t (but not uniform), are all SPD components. Following terminology similar to that of [3], we will say that a random vector $X$ on $\mathbb{R}^d$ is decomposable if its characteristic function can be written as $\varphi_X = \varphi_{X_0}\varphi_E$, with $\varphi_E > 0$. Thus, if $X$ can be written in the form $X = X_0 + E$, where $X_0$ and $E$ are independent and $E$ is an SPD noise component, then $X$ is decomposable. We will say that $X$ is indecomposable if it is not decomposable. In this paper, we will assume that mostly the indecomposable components of distributions are of interest and will construct tools to directly measure differences between these indecomposable components, encoding invariance to other sources of variability. The class of Borel probability measures on $\mathbb{R}^d$ will be denoted $\mathcal{M}_+^1(\mathbb{R}^d)$, while the class of indecomposable probability measures will be denoted by $\mathcal{I}(\mathbb{R}^d) \subset \mathcal{M}_+^1(\mathbb{R}^d)$.

2.1 Kernel Embeddings, Fourier Features and Learning on Distributions

For any positive definite function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, there exists a unique reproducing kernel Hilbert space (RKHS) $\mathcal{H}_k$ of real-valued functions on $\mathcal{X}$. The function $k(\cdot, x)$ is an element of $\mathcal{H}_k$ and represents evaluation at $x$, i.e. $\langle f, k(\cdot, x)\rangle_{\mathcal{H}_k} = f(x)$ for all $f \in \mathcal{H}_k$ and all $x \in \mathcal{X}$. The kernel mean embedding (cf. [15] for a recent review) of a probability measure $P$ is defined by $\mu_P = \mathbb{E}_{X\sim P}[k(\cdot, X)] = \int_{\mathcal{X}} k(\cdot, x)\,dP(x)$. The Maximum Mean Discrepancy (MMD) between probability measures $P$ and $Q$ is then given by $\|\mu_P - \mu_Q\|_{\mathcal{H}_k}$. For shift-invariant kernels on $\mathbb{R}^d$, using Bochner's characterisation of positive definiteness [26, 6.2], the squared MMD can be written as a weighted $L_2$-distance between characteristic functions [22, Corollary 4]:

$$\|\mu_P - \mu_Q\|_{\mathcal{H}_k}^2 = \int_{\mathbb{R}^d} \left|\varphi_P(\omega) - \varphi_Q(\omega)\right|^2 \, d\Lambda(\omega), \qquad (1)$$

where $\Lambda$ is the non-negative spectral measure (inverse Fourier transform) of kernel $k$ as a function of $x - y$, while $\varphi_P(\omega)$ and $\varphi_Q(\omega)$ are the characteristic functions of the probability measures $P$ and $Q$.
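To make the quantity in (1) concrete, the following is a minimal Python sketch of the standard (biased) empirical MMD with a Gaussian kernel; the function names, bandwidth, and toy data are our own illustration, not anything prescribed by the paper.

```python
import numpy as np

def gaussian_gram(X, Y, sigma=1.0):
    # Pairwise squared distances, then the Gaussian kernel k(x, y).
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # ||mu_P - mu_Q||^2 estimated with empirical mean embeddings
    # (V-statistic, hence "biased").
    Kxx = gaussian_gram(X, X, sigma)
    Kyy = gaussian_gram(Y, Y, sigma)
    Kxy = gaussian_gram(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 5))
Y = rng.normal(0.3, 1.0, size=(500, 5))   # mean-shifted sample
print(mmd2_biased(X, Y))
```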
Bochner's theorem is also used to construct random Fourier features (RFF) [19] for fast approximations to kernel methods, in order to approximate a pre-specified shift-invariant kernel by a finite-dimensional explicit feature map. If we can draw samples from its spectral measure $\Lambda$, we can approximate $k$ by

$$\hat{k}(x, y) = \frac{1}{m}\sum_{j=1}^{m}\left[\cos(\omega_j^\top x)\cos(\omega_j^\top y) + \sin(\omega_j^\top x)\sin(\omega_j^\top y)\right] = \langle \phi(x), \phi(y)\rangle_{\mathbb{R}^{2m}},$$

where $\omega_1, \dots, \omega_m \sim \Lambda$ and $\phi(x) := \frac{1}{\sqrt{m}}\left[\cos(\omega_1^\top x), \sin(\omega_1^\top x), \dots, \cos(\omega_m^\top x), \sin(\omega_m^\top x)\right]^\top$. (A complex feature map $\phi(x) = \frac{1}{\sqrt m}\left[\exp(i\omega_1^\top x), \dots, \exp(i\omega_m^\top x)\right]$ can also be used, but we follow the convention of real-valued Fourier features, since kernels of interest are typically real-valued.) Thus, the explicit computation of the kernel matrix is not needed and the computational complexity is reduced. This also allows computation with the approximate, finite-dimensional embeddings $\tilde\mu_P = \varphi(P) = \mathbb{E}_{X\sim P}\,\phi(X) \in \mathbb{R}^{2m}$, which can be understood as the evaluations (real and imaginary parts stacked together) of the characteristic function $\varphi_P$ at frequencies $\omega_1, \dots, \omega_m$. We will refer to the approximate embeddings $\varphi(P)$ as Fourier features of the distribution $P$.

Kernel embeddings can be used for supervised learning on distributions. Assume we have a training set $\{B_i, y_i\}_{i=1}^n$, where the input $B_i = \{x_{ij}\}_{j=1}^{N_i}$ is a bag of samples taking values in $\mathcal{X}$, and $y_i$ is a response. Given a kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, we first map each $B_i$ to the empirical embedding $\hat\mu_{\hat P_i} = \frac{1}{N_i}\sum_{j=1}^{N_i} k(\cdot, x_{ij}) \in \mathcal{H}_k$ and can then apply any positive definite kernel on $\mathcal{H}_k$, e.g. the linear kernel $K(B_i, B_{i'}) = \langle \hat\mu_{\hat P_i}, \hat\mu_{\hat P_{i'}}\rangle_{\mathcal{H}_k}$, in order to perform classification [14] or regression [24] on bag inputs. Approximate kernel embeddings have also been applied in this context [23].

3 Phase Discrepancy and Phase Features

While MMD and kernel embeddings are related to characteristic functions, and indeed the same connection forms a basis for fast approximations to kernel methods using random Fourier features [19], the relevant notion in our context is the phase function of a probability measure, recently used for nonparametric deconvolution by [3]. In this section, we overview this formalism. Based on the empirical phase functions, we will then derive and investigate a hypothesis testing and learning framework using phase features of distributions.

In nonparametric deconvolution [3], the goal is to estimate the density function $f_0$ of a univariate random variable $X_0$, but in general we only have noisy data samples $X_1, \dots, X_n \overset{iid}{\sim} X = X_0 + E$, where $E$ denotes an independent noise term. Even though the distribution of $E$ is unknown, making the assumption that $E$ is an SPD noise component, and that $X_0$ is indecomposable, i.e. $X_0$ itself does not contain any SPD noise components, [3] show that it is possible to obtain consistent estimates of $f_0$. They distinguish between the symmetric noise and the underlying indecomposable component by matching phase functions, defined as

$$\rho_X(\omega) = \frac{\varphi_X(\omega)}{|\varphi_X(\omega)|},$$

where $\varphi_X(\omega)$ denotes the characteristic function of $X$. Observe that $|\rho_X(\omega)| = 1$, and thus we are effectively removing the amplitude information from the characteristic function. For an SPD noise component $E$, the phase function is $\rho_E(\omega) \equiv 1$. But then, since $\varphi_X = \varphi_{X_0}\varphi_E$, we have that $\rho_{X_0} = \rho_X = \varphi_X / |\varphi_X|$, i.e. the phase function is invariant to additive SPD noise components. This motivates us to construct explicit feature maps of distributions with the same property and, similarly to the motivation of [3], we argue that real-world distributions of interest often exhibit a certain amount of irregularity, and it is exactly this irregularity which is exploited in our methodology.
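As a small illustration of the Fourier features $\varphi(\hat P)$ of a bag, here is a sketch in Python under the assumption of a Gaussian kernel (whose spectral measure is a zero-mean Gaussian); `draw_frequencies`, the bandwidth value and all names are our own, not the authors' code.

```python
import numpy as np

def draw_frequencies(d, m, bandwidth, rng):
    # Spectral measure of the Gaussian kernel exp(-||x-y||^2 / (2*bw^2))
    # is N(0, bw^{-2} I).
    return rng.normal(scale=1.0 / bandwidth, size=(m, d))

def fourier_features(X, omegas):
    # X: (n, d) bag of samples. Average the cos/sin evaluations over the
    # bag: the empirical characteristic function at omega_1, ..., omega_m,
    # stacked into the 2m-dimensional approximate mean embedding.
    proj = X @ omegas.T                              # (n, m)
    feats = np.hstack([np.cos(proj), np.sin(proj)])  # (n, 2m)
    return feats.mean(axis=0) / np.sqrt(omegas.shape[0])

rng = np.random.default_rng(0)
omegas = draw_frequencies(d=5, m=200, bandwidth=1.0, rng=rng)
bag = rng.normal(size=(1000, 5))
mu_hat = fourier_features(bag, omegas)   # phi(P_hat) in R^{2m}
```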
In analogy to the MMD, we first define the phase discrepancy (PhD) as a weighted $L_2$-distance between the phase functions:

$$\mathrm{PhD}(X, Y) = \int_{\mathbb{R}^d} \left|\rho_X(\omega) - \rho_Y(\omega)\right|^2 \, d\Lambda(\omega) \qquad (2)$$

for some non-negative measure $\Lambda$ (w.l.o.g. a probability measure). Now suppose we write $X = X_0 + U$, $Y = Y_0 + V$, where $U$ and $V$ are SPD noise components. This then implies $\rho_X = \rho_{X_0}$ and $\rho_Y = \rho_{Y_0}$ $\Lambda$-everywhere, so that $\mathrm{PhD}(X, Y) = \mathrm{PhD}(X_0, Y_0)$. It is clear then that the PhD is not affected by additive SPD noise components, so it captures the desired invariance. However, the PhD for $\Lambda$ supported everywhere is in fact not a proper metric on the indecomposable probability measures $\mathcal{I}(\mathbb{R}^d)$, as one can find indecomposable random variables $X$ and $Y$ such that $\rho_X = \rho_Y$ and thus $\mathrm{PhD}(X, Y) = 0$. An example is given in Appendix A. Since such cases appear contrived, we restrict attention to a subset of indecomposable probability measures $\mathcal{P}(\mathbb{R}^d) \subset \mathcal{I}(\mathbb{R}^d)$ which are uniquely determined by their phase functions, i.e. $\forall P, Q \in \mathcal{P}(\mathbb{R}^d): \rho_P = \rho_Q \Rightarrow P = Q$. We now have the following two propositions (proofs are given in Appendix B).

Proposition 1. $\mathrm{PhD}(X, Y) = 2 - 2\int \left\langle \frac{\mathbb{E}\,\psi_\omega(X)}{\|\mathbb{E}\,\psi_\omega(X)\|}, \frac{\mathbb{E}\,\psi_\omega(Y)}{\|\mathbb{E}\,\psi_\omega(Y)\|} \right\rangle d\Lambda(\omega)$, where $\psi_\omega(x) = \left[\cos(\omega^\top x), \sin(\omega^\top x)\right]^\top$ and $\|\cdot\|$ denotes the standard $L_2$ norm.

Proposition 2. $K(P_X, P_Y) = \int \left\langle \frac{\mathbb{E}\,\psi_\omega(X)}{\|\mathbb{E}\,\psi_\omega(X)\|}, \frac{\mathbb{E}\,\psi_\omega(Y)}{\|\mathbb{E}\,\psi_\omega(Y)\|} \right\rangle d\Lambda(\omega)$ is a positive definite kernel on probability measures.

Now, we can construct an approximate explicit feature map for the kernel $K$. Taking a sample $\{\omega_i\}_{i=1}^m \sim \Lambda$, we define $\psi : P_X \mapsto \mathbb{R}^{2m}$ given by

$$\psi(P_X) = \frac{1}{\sqrt m}\left[\frac{\mathbb{E}\,\psi_{\omega_1}(X)}{\|\mathbb{E}\,\psi_{\omega_1}(X)\|}, \dots, \frac{\mathbb{E}\,\psi_{\omega_m}(X)}{\|\mathbb{E}\,\psi_{\omega_m}(X)\|}\right].$$

We will refer to $\psi(\cdot)$ as the phase features. Note that these are very similar to Fourier features, but the cos, sin pair corresponding to each frequency is normalised to have unit $L_2$ norm. In other words, $\psi(\cdot)$ can be thought of as evaluations of the phase function at the selected frequencies. By construction, phase features are invariant to additive SPD noise components. For an empirical measure, we simply have

$$\psi(\hat P_X) = \frac{1}{\sqrt m}\left[\frac{\hat{\mathbb{E}}\,\psi_{\omega_1}(X)}{\|\hat{\mathbb{E}}\,\psi_{\omega_1}(X)\|}, \dots, \frac{\hat{\mathbb{E}}\,\psi_{\omega_m}(X)}{\|\hat{\mathbb{E}}\,\psi_{\omega_m}(X)\|}\right], \qquad (3)$$

where we have replaced the expectations by their empirical estimates. Because $\|\psi(\hat P_X)\| = 1$, we can construct

$$\widehat{\mathrm{PhD}}(\hat P_X, \hat P_Y) = \left\|\psi(\hat P_X) - \psi(\hat P_Y)\right\|^2 = 2 - 2\,\psi(\hat P_X)^\top \psi(\hat P_Y), \qquad (4)$$

which is a Monte Carlo estimator of $\mathrm{PhD}(\hat P_X, \hat P_Y)$. In summary, $\psi(\hat P) \in \mathbb{R}^{2m}$ is an explicit feature vector of the empirical distribution which encodes invariance to additive SPD noise components present in $P$, as demonstrated in Figure F.1 in the Appendix. (Note that, unlike the population expression $\psi(P)$, the empirical estimator $\psi(\hat P)$ will in general have a distribution affected by the noise components and is thus only approximately invariant, but we observe that it captures invariance very well as long as the signal-to-noise regime remains relatively high; see Section 5.1.) It can now be directly applied to (1) two-sample testing up to SPD components, where the distance between the phase features, i.e. the estimate (4) of the PhD, can be used as a test statistic, with details given in Section 5.1, and (2) learning on distributions, where we use phase features as the explicit feature map for a bag of samples. Although we have assumed an indecomposable underlying distribution so far, this assumption is not strict. For distribution regression, if the indecomposability assumption is invalid, given that the underlying distribution is irregular, it may still be useful to encode invariance as long as the benefit of removing the SPD components irrelevant for learning outweighs the signal in the SPD part of the distribution, i.e. there is a trade-off between SPD noise and SPD signal. In practice, the phase features we propose can be used to encode such invariance where appropriate, or in conjunction with other features which do not encode invariance.
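Below is a minimal sketch of the per-frequency normalisation in (3) and the estimator (4); the small constant guarding against a near-zero amplitude is our own safeguard, and all names are illustrative.

```python
import numpy as np

def phase_features(X, omegas):
    # X: (n, d) bag; omegas: (m, d). Each cos/sin pair is normalised to
    # unit norm, cancelling |phi_hat(omega_j)| and keeping only the phase,
    # as in equation (3).
    proj = X @ omegas.T                          # (n, m)
    c = np.cos(proj).mean(axis=0)                # Re of empirical CF
    s = np.sin(proj).mean(axis=0)                # Im of empirical CF
    norms = np.sqrt(c**2 + s**2) + 1e-12         # |phi_hat(omega_j)|, guarded
    pairs = np.stack([c / norms, s / norms], axis=1)   # (m, 2), unit rows
    return pairs.ravel() / np.sqrt(omegas.shape[0])    # psi(P_hat), norm ~ 1

def phd_estimate(X, Y, omegas):
    # Monte Carlo estimator of the phase discrepancy, equation (4).
    px, py = phase_features(X, omegas), phase_features(Y, omegas)
    return np.sum((px - py)**2)
```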
In order to construct the approximate mean embeddings for learning, we first compute an explicit feature map by taking averages of the Fourier features, as given by $\varphi(\hat P_X) = \frac{1}{\sqrt m}\left[\hat{\mathbb{E}}\,\psi_{\omega_1}(X), \dots, \hat{\mathbb{E}}\,\psi_{\omega_m}(X)\right]$. For phase features, we need to compute an additional normalisation term over each frequency as in (3). To obtain the set of frequencies $\{\omega_i\}_{i=1}^m$, we can draw samples from a probability measure $\Lambda$ corresponding to an inverse Fourier transform of a shift-invariant kernel, e.g. the Gaussian kernel. However, given a supervised signal, we can also optimise a set of frequencies $\{\omega_i\}_{i=1}^m$ that will give us a useful representation and good discriminative performance. In other words, we no longer focus on a specific shift-invariant kernel $k$, but are learning discriminative Fourier/phase features. To do this, we can construct a neural network (NN) with special activation functions and pooling layers, as shown in Algorithm D.1 and Figure D.1 in the Appendix.

4 Asymmetry in Paired Differences

We now consider a separate approach to the nonparametric two-sample test, where we wish to test the null hypothesis $H_0 : P \overset{d}{=} Q$ vs. the general alternative, but we only have i.i.d. samples arising from $X \sim P * E_1$ and $Y \sim Q * E_2$, i.e.

$$X = X_0 + U, \qquad Y = Y_0 + V,$$

where $X_0 \sim P$ and $Y_0 \sim Q$ lie in the space $\mathcal{P}(\mathbb{R}^d)$ of indecomposable distributions uniquely determined by phase functions, and $U$ and $V$ are SPD noise components. With this setting (proof in Appendix B):

Proposition 3. Under the null hypothesis $H_0$, $X - Y$ is SPD $\iff X_0 \overset{d}{=} Y_0$.

This motivates us to simply perform a two-sample test on $X - Y$ and $Y - X$, since its rejection would imply rejection of $X_0 \overset{d}{=} Y_0$, as it tests for symmetry. However, note that this is a test for symmetry only, and that for consistency against all alternatives, positivity of the characteristic function would need to be checked separately. Now, given two i.i.d. samples $\{X_i\}_{i=1}^n$ and $\{Y_i\}_{i=1}^n$ with $n$ even, we split the two samples into two halves and compute $W_i = X_i - Y_i$ on one half and $Z_i = Y_i - X_i$ on the other half, and perform a nonparametric two-sample test on $W$ and $Z$ (which are, by construction, independent of each other). The advantage of this regime is that we can use any two-sample test; in particular, in this paper we will focus on the linear-time mean embedding (ME) test [7], which was found to have performance similar to or better than the original MMD two-sample test [5], and which explicitly formulates a criterion that maximises the test power. We will refer to the resulting test on paired differences as the Symmetric Mean Embedding (SME) test.
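A minimal sketch of the paired-differences construction above; splitting via a random permutation (rather than simply taking the first and second halves) is our own choice, and the downstream two-sample test on $W$ vs $Z$ (e.g. the ME test of [7]) is not reproduced here.

```python
import numpy as np

def paired_differences(X, Y, rng):
    # X, Y: (n, d) with n even. Build two independent difference samples:
    # W_i = X_i - Y_i on one half, Z_i = Y_i - X_i on the other half.
    n = X.shape[0]
    perm = rng.permutation(n)
    a, b = perm[: n // 2], perm[n // 2:]
    W = X[a] - Y[a]
    Z = Y[b] - X[b]
    return W, Z

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
Y = rng.normal(size=(1000, 5))
W, Z = paired_differences(X, Y, rng)   # feed W, Z to any two-sample test
```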
Although we have assumed here that $X_0, Y_0$ lie in the space $\mathcal{P}(\mathbb{R}^d)$ of indecomposable distributions, in practice the SME test would not reject if the underlying distributions of interest differ only in the symmetric components (or in the SPD components for the PhD test). We argue this to be unlikely, due to real-life distributions being complex in nature, with interesting differences often having a degree of asymmetry. In practice, we recommend the use of the ME and SME or PhD tests together, to provide an exploratory tool for understanding the underlying differences, as demonstrated in the Higgs data experiment in Section 5.1.

It is tempting to also consider learning on distributions with invariances using this formalism. However, note that the MMD on paired differences is not invariant to the additive SPD noise components under the alternative, i.e. in general $\mathrm{MMD}(X - Y,\ Y - X) \neq \mathrm{MMD}(X_0 - Y_0,\ Y_0 - X_0)$. This means that the paired-differences approach to learning is sensitive to the actual type and scale of the additive SPD noise components, and hence not suitable for learning. The mathematical details and empirical experiments showing this are presented in Appendices C and F.1.

5 Experimental Results

5.1 Two-Sample Tests with Invariances

In this section, we demonstrate the performance of the SME test and the PhD test on both artificial and real-world data for testing the hypothesis $H_0 : X_0 \overset{d}{=} Y_0$ based on samples $\{X_i\}_{i=1}^N$ from $X_0 + U$ and $\{Y_i\}_{i=1}^N$ from $Y_0 + V$, where $U$ and $V$ are arbitrary SPD noise components (we assume the same number of samples for simplicity). The SME test follows the setup in [7], but applied to $\{X_i - Y_i\}_{i=1}^{N/2}$ and $\{Y_i - X_i\}_{i=N/2+1}^{N}$. For the PhD test, we use as the test statistic the estimate $\widehat{\mathrm{PhD}}(\hat P_X, \hat P_Y)$ of (2). It is unclear what the exact form of the null distribution is, so we use a permutation test, recomputing this statistic on the samples which are first merged and then randomly split in the original proportions.

Figure 1: Type I error and power under various additional symmetric noise in the synthetic $\chi^2$ dataset. Dashed line is the 99% Wald interval. Left: Type I error; $n_{11}$ denotes the noise-to-signal ratio for the first set of samples and $n_{12}$ for the second set. Right: Power; $n_1$ denotes the noise-to-signal ratio for the $X$ set of samples and $n_2$ the noise-to-signal ratio for the $Y$ set of samples.

While we are combining samples with different distributions, the permutation test is still justified since, under the null hypothesis $X_0 \overset{d}{=} Y_0$, the resulting characteristic function $\varphi_{\mathrm{null}}$ of the mixture can be written as

$$\varphi_{\mathrm{null}} = \frac{1}{2}\varphi_{X_0}\varphi_U + \frac{1}{2}\varphi_{X_0}\varphi_V = \varphi_{X_0}\left(\frac{1}{2}\varphi_U + \frac{1}{2}\varphi_V\right),$$

and since the mixture of the SPD noise terms is also SPD, we have that $\rho_{\mathrm{null}} = \rho_{X_0} = \rho_{Y_0}$. For our experiments, we denote by $N$ the sample size and by $d$ the dimension of the samples, and we take $\alpha = 0.05$ to be the significance level. In the SME test, we take the number of test locations $J$ to be 10, and use 20% of the samples to optimise the test locations. All experimental results are averaged over 1000 runs, where each run repeats the simulation or randomly samples without replacement from the dataset.
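A sketch of the permutation test described above, written generically; `stat_fn` is a placeholder for any statistic computed on two samples (for example, the `phd_estimate` from the earlier sketch), and the names and defaults are our own.

```python
import numpy as np

def permutation_test(X, Y, stat_fn, n_perm=500, seed=0):
    # Merge the two samples, re-split in the original proportions, and
    # recompute the statistic to build an approximate null distribution.
    rng = np.random.default_rng(seed)
    observed = stat_fn(X, Y)
    pooled = np.vstack([X, Y])
    n = X.shape[0]
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])
        exceed += stat_fn(pooled[idx[:n]], pooled[idx[n:]]) >= observed
    p_value = (1 + exceed) / (1 + n_perm)
    return observed, p_value
```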
5.1.1 Synthetic Example: Noisy $\chi^2$

We start by demonstrating our tests with invariances on a simulated dataset where $X_0$ and $Y_0$ are random vectors with $d = 5$; each dimension is identically distributed, following $\chi^2(4)/4$ for $X_0$ and $\chi^2(8)/8$ for $Y_0$, i.e. chi-squared random variables with different degrees of freedom, rescaled to have the same mean 1 (but different variances, 1/2 and 1/4 respectively). An illustration of the true and empirical phase and characteristic functions with noise for these two distributions can be found in Appendix F.2. We construct samples $\{X_{n_1,i}\}_{i=1}^N$ and $\{Y_{n_2,i}\}_{i=1}^N$ such that $X_{n_1} \sim X_0 + U$, where $U \sim \mathcal{N}(0, \sigma_1^2 I)$, and similarly $Y_{n_2} \sim Y_0 + V$, where $V \sim \mathcal{N}(0, \sigma_2^2 I)$; here $n_i$ denotes the noise-to-signal ratio given by the ratio of variances in each dimension, i.e. $n_1 = 2\sigma_1^2$ and $n_2 = 4\sigma_2^2$ (a minimal data-generation sketch is given at the end of this subsection).

We first verify that the Type I error is indeed controlled at our design level of $\alpha = 0.05$ under various additive SPD noise components. This is shown in Figure 1 (left), where $X_0 \overset{d}{=} Y_0$, both constructed using $\chi^2(4)/4$, with the noiseless case found in Figure F.6 in the Appendix. It is noted here that the ME test rejects the null hypothesis for even a small difference in noise levels; hence it is unable to let us target the underlying distributions we are concerned with. This is unlike the SME test, which controls the Type I error even for large differences in noise levels. The PhD test, on the other hand, while correctly controlling the Type I error at small noise levels, was found to have inflated Type I error rates for large noise, with more results and explanation provided in Figure F.6 in the Appendix. Namely, the test relies on the invariance to SPD components of the population expression of the PhD, but the estimator of the null distribution of the corresponding test statistic will in general be affected by the differing noise levels.

Next, we investigate the power, shown in Figure 1 (right). For a fair comparison, we have included the PhD test power only for small noise levels, at which the Type I error is controlled at the design level. In these cases, the PhD test has better power than the SME test. This is not surprising, as for the SME we have to halve the sample size in order to construct a valid test. However, recall that the PhD test has an inflated Type I error for large noise, which means that its results should be considered with caution in practice. The ME test rejects at all levels and at all sample sizes, as it picks up all possible differences. SME and PhD are by construction more conservative tests, whose rejection provides a much stronger statement: the two samples differ even when all arbitrary additive SPD components have been stripped off.
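The following is a minimal sketch of the data generation for this experiment, under our reading of the noise-to-signal convention above (noise variance equals the ratio times the signal variance $2/\mathrm{df}$ per dimension); names and defaults are our own.

```python
import numpy as np

def noisy_chi2_samples(n, d, df, noise_to_signal, rng):
    # Each dimension follows chi2(df)/df, which has mean 1 and variance
    # 2/df; Gaussian SPD noise is added so that the variance ratio matches
    # the requested noise-to-signal level.
    X0 = rng.chisquare(df, size=(n, d)) / df
    signal_var = 2.0 / df
    sigma = np.sqrt(noise_to_signal * signal_var)
    return X0 + rng.normal(scale=sigma, size=(n, d))

rng = np.random.default_rng(0)
X = noisy_chi2_samples(2000, 5, df=4, noise_to_signal=0.25, rng=rng)
Y = noisy_chi2_samples(2000, 5, df=8, noise_to_signal=0.50, rng=rng)
```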
5.1.2 Higgs Dataset

The UCI Higgs dataset [1, 11] is a dataset with 11 million observations, where the problem is to distinguish between the signal process in which Higgs bosons are produced and the background process that does not produce them. In particular, we will consider a two-sample test with the ME and SME tests on the high-level features derived by physicists, as well as a two-sample test on four extremely low-level features (azimuthal angular momenta $\varphi$ measured by four particle jets in the detector). The high-level features here (in $\mathbb{R}^7$) have been shown to have good discriminative properties in [1]. Thus, we expect them to have different distributions across the two processes. Denoting by $X$ the high-level features of the process without the Higgs boson, and by $Y$ the corresponding distribution for the processes where Higgs bosons are produced, we test the null hypothesis that the indecomposable parts of $X$ and $Y$ agree. The results can be found in Table F.1 in the Appendix, which shows that the high-level features differ even up to additive SPD components, with high power for the SME and ME tests even at small sample sizes (rejection rate of 0.94 at $N = 500$).

Now we perform the same experiment, but with the low-level features in $\mathbb{R}^4$, commented in [1] to carry very little discriminating information, using the setup from [2]. The results for the ME and SME tests can be found in Figure 2.

Figure 2: Rejection ratio vs. sample size for the extremely low-level features of the Higgs dataset. Dashed line is the 99% Wald interval for 1000 repetitions at $\alpha = 0.05$. Note the PhD test is not used here, due to its expensive computational cost.

Here we observe that while the ME test clearly rejects and finds the difference between the two distributions, there is no evidence that the indecomposable parts of the joint distributions of the angular momenta actually differ. In fact, the test rejection rate remains around the chosen design level of $\alpha = 0.05$ for all sample sizes. This highlights the significance of using the SME test, suggesting that the nature of the difference between the two processes can potentially be explained by some additive symmetric noise components which may be irrelevant for discrimination, providing an insight into the dataset. Furthermore, this also supports the argument that, given two samples from complex data collection and generation processes, a nonparametric two-sample test like ME will likely reject given sufficient sample sizes, even if the discovered difference may not be of interest. With the SME test, however, we can ask a much more subtle question about the differences between the assumed true underlying processes. Figures showing that the Type I error is controlled at the design level of $\alpha = 0.05$ for both low- and high-level features can be found in Figure F.7 in the Appendix.

5.2 Learning with Phase Features

5.2.1 Aerosol Dataset

To demonstrate the phase features' invariance to SPD noise components, we use the Aerosol MISR1 dataset, also studied by [24] and [25], and consider a situation with covariate shift [18] on distribution inputs: the testing data is impaired by additive SPD components different to those in the training data. Here, we have an aerosol optical depth (AOD) multi-instance learning problem with 800 bags, where each bag contains 100 randomly selected multispectral (potentially cloudy) pixels within a 20 km radius around an AOD sensor. The label $y_i$ for each bag is given by the AOD sensor measurements, and each sample $x_i$ is 16-dimensional. This can be understood as a distribution regression problem where each bag is treated as a set of samples from some distribution.
We use 640 bags for training and 160 bags for testing. In the bags for testing only, we add varying levels of Gaussian noise $\epsilon \sim \mathcal{N}(0, Z)$ to each bag, where $Z$ is a diagonal matrix whose diagonal components $z_i$ are drawn uniformly between zero and a chosen noise level times $v_i$, with $v_i$ being the empirical variance in dimension $i$ across all samples, accounting for different scales across dimensions. For comparisons, we consider linear ridge regression on embeddings with respect to a Gaussian kernel, approximated with RFF (GLRR), as described in Section 2.1 (i.e. a linear kernel is applied on approximate embeddings), linear ridge regression on phase features (PLRR) (i.e. the normalisation step is applied to obtain (3)), and also the phase and Fourier neural networks (NN) described in Appendix D, tuning all hyperparameters with 3-fold cross-validation (a minimal sketch of the shared ridge step follows at the end of this subsection). With the same model, we then measure the Root Mean Square Error (RMSE) 100 times with various noise-corrupted test sets; results are shown in Figure 3. It is also noted that a second-level non-linear kernel $\tilde K$ does not improve performance significantly on this problem [24].

Figure 3: RMSE on the Aerosol test set, corrupted by various levels of noise, averaged over 100 runs, with the 5th and the 95th percentiles. The noiseless case is shown with one run. RMSE from the mean is 0.206.

We see that GLRR and PLRR are competitive (see Appendix Table F.2) in the noiseless case, and these clearly outperform both the Fourier NN and the phase NN (likely due to the small size of the dataset). For increasing noise, the performance of GLRR degrades significantly, and while the performance of PLRR also degrades, the model is much more robust under additional SPD noise. In comparison, the phase NN implementation is almost insensitive to covariate shift in the test sets, unlike PLRR, highlighting the importance of learning discriminative frequencies $\omega$ in a very low signal-to-noise setting. It is noted that the Fourier NN performs similarly to the phase NN on this example. Interestingly, discriminative frequencies learnt on the training data correspond to Fourier features that are nearly normalised (i.e. they are close to unit norm; see Figure F.8 in the Appendix). This means that the Fourier NN has learned to be approximately invariant based on the training data, indicating that the original Aerosol data potentially has irrelevant SPD noise components. This is reinforced by the nature of the dataset (each bag contains 100 randomly selected, potentially cloudy pixels, known to be noisy [25]) and by the absence of any loss of performance going from GLRR to PLRR. The results highlight that phase features are stable under additive SPD noise.
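A minimal sketch of the ridge-regression step shared by PLRR and GLRR; the rows of `Phi` would be per-bag features, $\psi(\hat P_i)$ for PLRR or $\varphi(\hat P_i)$ for GLRR, computed as in the earlier sketches, and the regularisation value and names are our own illustration.

```python
import numpy as np

def ridge_predict(Phi_train, y_train, Phi_test, lam=1e-3):
    # Closed-form linear ridge regression on bag-level features:
    # w = (Phi^T Phi + lam I)^{-1} Phi^T y, then predict on test bags.
    p = Phi_train.shape[1]
    w = np.linalg.solve(Phi_train.T @ Phi_train + lam * np.eye(p),
                        Phi_train.T @ y_train)
    return Phi_test @ w

# Hypothetical usage, with features stacked per bag:
# Phi_train = np.vstack([phase_features(B, omegas) for B in train_bags])
# Phi_test  = np.vstack([phase_features(B, omegas) for B in test_bags])
# y_pred = ridge_predict(Phi_train, y_train, Phi_test)
```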
5.2.2 Dark Matter Dataset

We now study the use of phase features on the dark matter dataset, composed of a catalog of galaxy clusters. In this setting, we would like to predict the total mass of galaxy clusters using the dispersion of velocities in the direction along our line of sight. In particular, we will use the "ML1" dataset, as obtained from the authors of [16, 17], who constructed a catalog of massive halos from the MultiDark MDPL simulation [9]. The dataset contains 5028 bags, with each sample consisting of a sub-object velocity, and its mass label in $\mathbb{R}$. By viewing each galaxy cluster along multiple lines of sight, we obtain 15,000 bags, using the same experimental setup as in [10]. For the experiments, we use approximately 9000 bags for training and 3000 bags each for validation and testing, keeping those of multiple lines of sight in the same set. As before, we use GLRR and PLRR, and we also include in the comparisons methods with a second-level Gaussian kernel (with RFF) applied to phase features (PGRR) and to approximate embeddings (GGRR). For a baseline, we also include a first-level linear kernel (equivalent to representing each bag by its mean) before applying a second-level Gaussian kernel (LGRR). We use the same set of randomly sampled frequencies across the methods, tuning for the scale of the frequencies and for the regularisation parameters.

Table 1: Mean Square Error (MSE) on the dark matter dataset for 500 runs, with 5th and 95th percentiles.

Algorithm  MSE
Mean       0.16
PLRR       0.021 (0.018, 0.024)
GLRR       0.033 (0.030, 0.037)
LGRR       0.032 (0.028, 0.036)
PGRR       0.021 (0.017, 0.024)
GGRR       0.018 (0.015, 0.019)

Table 1 shows the results of the methods across 10 different data splits, with 50 sets of randomised frequencies for each data split. We see that PLRR is significantly better than GLRR. This suggests that under this model structure, by removing SPD components from each bag, we can target the underlying signal and obtain superior performance, highlighting the applicability of phase features. Considering a second-level Gaussian kernel, we see that GGRR has a slight advantage over PGRR, with PGRR performing similarly to PLRR. This suggests that the SPD components of the distribution of sub-object velocities may be useful for predicting the mass of a galaxy cluster if an additional nonlinearity is applied to the embeddings, whereas the benefits of removing them outweigh the signal present in them without this additional nonlinearity. To show that the phase features are indeed robust to SPD components, we perform the same covariate shift experiment as on the Aerosol dataset, with results given in Figure 4. Note that LGRR is robust to noise, as each bag is represented by its mean.

Figure 4: MSE with various levels of noise added on the test set, with 5th and 95th percentiles.

6 Conclusion

No dataset is immune from measurement noise, and often this noise differs across different data generation and collection processes. When measuring distances between distributions, can we disentangle the differences in noise from the differences in the signal? We considered two different ways to encode invariances to additive symmetric noise in those distances, each with different strengths: a nonparametric measure of asymmetry in paired sample differences, and a weighted distance between the empirical phase functions. The former was used to construct a hypothesis test on whether the difference between the two generating processes can be explained away by the difference in the postulated noise, whereas the latter allowed us to introduce a flexible framework for invariant feature construction and learning algorithms on distribution inputs which are robust to measurement noise and target the underlying signal distributions.

Acknowledgements

We thank Dougal Sutherland for suggesting the use of the dark matter dataset, Michelle Ntampaka for providing the catalog, as well as Ricardo Silva, Hyunjik Kim and Kaspar Martens for useful discussions. This work was supported by the EPSRC and MRC through the OxWaSP CDT programme (EP/L016710/1). C.Y. and H.C.L.L. also acknowledge the support of the MRC Grant No. MR/L001411/1. The CosmoSim database used in this paper is a service of the Leibniz-Institute for Astrophysics Potsdam (AIP). The MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) and the Partnership for Advanced Supercomputing in Europe (PRACE, www.prace-ri.eu) for funding the MultiDark simulation project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (LRZ, www.lrz.de).

References

[1] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5, 2014.
[2] Kacper P. Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, and Arthur Gretton. Fast two-sample testing with analytic representations of probability measures. In Advances in Neural Information Processing Systems, pages 1981–1989, 2015.
[3] Aurore Delaigle and Peter Hall. Methodology for non-parametric deconvolution when the error distribution is unknown. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(1):231–252, 2016.
[4] Paul Fearnhead and Dennis Prangle. Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(3):419–474, 2012.
[5] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
[6] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pages 448–456, 2015.
[7] Wittawat Jitkrittum, Zoltán Szabó, Kacper P. Chwialkowski, and Arthur Gretton. Interpretable distribution features with maximum testing power. In Advances in Neural Information Processing Systems 29, pages 181–189, 2016.
[8] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[9] Anatoly Klypin, Gustavo Yepes, Stefan Gottlöber, Francisco Prada, and Steffen Hess. MultiDark simulations: the story of dark matter halo concentrations and density profiles. arXiv:1411.4001, 2014.
[10] Ho Chung Leon Law, Dougal J. Sutherland, Dino Sejdinovic, and Seth Flaxman. Bayesian approaches to distribution regression. arXiv preprint arXiv:1705.04293, 2017.
[11] M. Lichman. UCI machine learning repository, 2013.
[12] Yu V. Linnik and I. V. Ostrovskii. Decomposition of random variables and vectors. 1977.
[13] J. Mitrovic, D. Sejdinovic, and Y. W. Teh. DR-ABC: Approximate Bayesian Computation with kernel-based distribution regression. In International Conference on Machine Learning (ICML), pages 1482–1491, 2016.
[14] Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Schölkopf. Learning from distributions via support measure machines. In Advances in Neural Information Processing Systems 25, pages 10–18, 2012.
[15] Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, and Bernhard Schölkopf. Kernel mean embedding of distributions: A review and beyond. arXiv preprint arXiv:1605.09522, 2016.
[16] Michelle Ntampaka, Hy Trac, Dougal J. Sutherland, Nicholas Battaglia, Barnabás Póczos, and Jeff Schneider. A machine learning approach for dynamical mass measurements of galaxy clusters. The Astrophysical Journal, 803(2):50, 2015. arXiv:1410.0686.
[17] Michelle Ntampaka, Hy Trac, Dougal J. Sutherland, S. Fromenteau, B. Póczos, and Jeff Schneider. Dynamical mass measurements of contaminated galaxy clusters using machine learning. The Astrophysical Journal, 831(2):135, 2016. arXiv:1509.05409.
[18] Joaquin Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence. Dataset Shift in Machine Learning. The MIT Press, 2009.
[19] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177–1184, 2007.
[20] H.-J. Rossberg. Positive definite probability densities and probability distributions. Journal of Mathematical Sciences, 76(1):2181–2197, 1995.
[21] Le Song, Kenji Fukumizu, and Arthur Gretton. Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. IEEE Signal Processing Magazine, 30(4):98–111, 2013.
[22] Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517–1561, August 2010.
[23] Dougal J. Sutherland, Junier B. Oliva, Barnabás Póczos, and Jeff G. Schneider. Linear-time learning on distributions with approximate kernel embeddings. In Proc. AAAI Conference on Artificial Intelligence, pages 2073–2079, 2016.
[24] Zoltán Szabó, Arthur Gretton, Barnabás Póczos, and Bharath K. Sriperumbudur. Two-stage sampled learning theory on distributions. In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), 2015.
[25] Z. Wang, L. Lan, and S. Vucetic. Mixture model for multiple instance regression and applications in remote sensing. IEEE Transactions on Geoscience and Remote Sensing, 50(6):2226–2237, June 2012.
[26] H. Wendland. Scattered Data Approximation. Cambridge University Press, Cambridge, UK, 2004.
A Dirichlet Mixture Model of Hawkes Processes for Event Sequence Clustering

Hongteng Xu*
School of ECE, Georgia Institute of Technology
[email protected]

Hongyuan Zha
College of Computing, Georgia Institute of Technology
[email protected]

Abstract

How to cluster event sequences generated via different point processes is an interesting and important problem in statistical machine learning. To solve this problem, we propose and discuss an effective model-based clustering method based on a novel Dirichlet mixture model of a special but significant type of point process: the Hawkes process. The proposed model generates the event sequences of different clusters from Hawkes processes with different parameters, and uses a Dirichlet distribution as the prior distribution of the clusters. We prove the identifiability of our mixture model and propose an effective variational Bayesian inference algorithm to learn our model. An adaptive inner-iteration allocation strategy is designed to accelerate the convergence of our algorithm. Moreover, we investigate the sample complexity and the computational complexity of our learning algorithm in depth. Experiments on both synthetic and real-world data show that the clustering method based on our model can robustly learn structural triggering patterns hidden in asynchronous event sequences and achieve superior performance on clustering purity and consistency compared to existing methods.

1 Introduction

In many practical situations, we need to deal with a huge amount of irregular and asynchronous sequential data. Typical examples include the viewing records of users in an IPTV system and the electronic health records of patients in hospitals, among many others. All of these data are so-called event sequences, each of which contains a series of events of different types in the continuous time domain, e.g., when and which TV program a user watched, or when and to which care unit a patient was transferred. Given a set of event sequences, an important task is learning their clustering structure robustly. Event sequence clustering is meaningful for many practical applications. Take the previous two examples: clustering IPTV users according to their viewing records is beneficial to the program recommendation system and the ad-serving system; clustering patients according to their health records helps hospitals to optimize their medication resources. Event sequence clustering is very challenging. Existing work mainly focuses on clustering synchronous (or aggregated) time series with discrete time-lagged observations [19, 23, 39]. Event sequences, on the contrary, are in the continuous time domain, so it is difficult to find a universal and tractable representation for them. A potential solution is constructing features of event sequences via parametric [22] or nonparametric [18] methods. However, these feature-based methods have a high risk of overfitting because of their large number of parameters. What is worse, these methods decompose the clustering problem into two phases, extracting features and learning clusters, so their clustering results are very sensitive to the quality of the learned (or predefined) features.

* Corresponding author.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

To make concrete progress, we propose a Dirichlet Mixture model of Hawkes Processes (DMHP for short) and study its performance on event sequence clustering in depth.
In this model, the event sequences belonging to different clusters are modeled via different Hawkes processes. The priors on the Hawkes processes' parameters are designed based on their physically meaningful constraints. The prior on the clusters is generated via a Dirichlet distribution. We propose a variational Bayesian inference algorithm to learn the DMHP model in a nested Expectation-Maximization (EM) framework. In particular, we introduce a novel inner-iteration allocation strategy into the algorithm with the help of open-loop control theory, which improves the convergence of the algorithm. We prove the local identifiability of our model and show that our learning algorithm has better sample complexity and computational complexity than its competitors.

The contributions of our work include: 1) We propose a novel Dirichlet mixture model of Hawkes processes and demonstrate its local identifiability. To our knowledge, this is the first systematic study of the identifiability problem in the task of event sequence clustering. 2) We apply an adaptive inner-iteration allocation strategy based on open-loop control theory to our learning algorithm and show its superiority to other strategies. The proposed strategy achieves a trade-off between convergence performance and computational complexity. 3) We propose a DMHP-based clustering method. It requires few parameters, is robust to the problems of overfitting and model misspecification, and achieves encouraging clustering results.

2 Related Work

A temporal point process [4] is a random process whose realization consists of an event sequence $\{(t_i, c_i)\}_{i=1}^M$ with time stamps $t_i \in [0, T]$ and event types $c_i \in \mathcal{C} = \{1, \dots, C\}$. It can be equivalently represented as $C$ counting processes $\{N_c(t)\}_{c=1}^C$, where $N_c(t)$ is the number of type-$c$ events occurring at or before time $t$. A way to characterize point processes is via the intensity function $\lambda_c(t) = \mathbb{E}[dN_c(t) \mid \mathcal{H}_t^{\mathcal{C}}]/dt$, where $\mathcal{H}_t^{\mathcal{C}} = \{(t_i, c_i) \mid t_i < t,\ c_i \in \mathcal{C}\}$ collects the historical events of all types before time $t$. It is the expected instantaneous rate of type-$c$ events given the history, which captures the phenomena of interest, e.g., self-triggering [13] or self-correcting [44] behavior.

Hawkes Processes. A Hawkes process [13] is a kind of point process for modeling complicated event sequences in which historical events have influence on current and future ones. It can also be viewed as a cascade of non-homogeneous Poisson processes [8, 34]. We focus on the clustering problem for event sequences obeying Hawkes processes because Hawkes processes have been proven useful for describing real-world data in many applications, e.g., financial analysis [1], social network analysis [3, 51], system analysis [22], and e-health [30, 42]. Hawkes processes have a particular form of intensity:

$$\lambda_c(t) = \mu_c + \sum_{c'=1}^{C}\int_0^t \phi_{cc'}(s)\, dN_{c'}(t - s), \qquad (1)$$

where $\mu_c$ is the exogenous base intensity, independent of the history, while $\sum_{c'=1}^{C}\int_0^t \phi_{cc'}(s)\, dN_{c'}(t - s)$ is the endogenous intensity capturing peer influence. The decay of the influence of historical type-$c'$ events on subsequent type-$c$ events is captured via the so-called impact function $\phi_{cc'}(t)$, which is nonnegative. Much existing work uses predefined impact functions with known parameters, e.g., the exponential functions in [29, 50] and the power-law functions in [49].
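As a concrete illustration of (1), here is a minimal Python sketch that evaluates a multivariate Hawkes intensity with exponential impact functions $\phi_{cc'}(t) = \alpha_{cc'} e^{-\beta t}$; the exponential choice mirrors the predefined-impact-function setting of [29, 50], and the names and example values are our own, not the basis representation used later in this paper.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    # lambda_c(t) = mu_c + sum over past events (t_i, c_i) of
    # alpha[c, c_i] * exp(-beta * (t - t_i)).
    lam = mu.copy()
    for t_i, c_i in events:
        if t_i < t:
            lam += alpha[:, c_i] * np.exp(-beta * (t - t_i))
    return lam

mu = np.array([0.2, 0.1])                       # base intensities
alpha = np.array([[0.5, 0.1],                   # mutual excitation weights
                  [0.2, 0.4]])
events = [(0.3, 0), (0.9, 1), (1.4, 0)]         # (time stamp, event type)
print(hawkes_intensity(2.0, events, mu, alpha, beta=1.0))
```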
To enhance the flexibility, a nonparametric model of the 1-D Hawkes process was first proposed in [16], based on an ordinary differential equation (ODE), and extended to the multi-dimensional case in [22, 51]. Another nonparametric model is the contrast function-based model in [30], which leads to a Least-Squares (LS) problem [7]. A Bayesian nonparametric model combining Hawkes processes with the infinite relational model is proposed in [3]. Recently, the basis representation of impact functions was used in [6, 15, 41] to avoid discretization.

Sequential Data Clustering and Mixture Models. Traditional methods mainly focus on clustering synchronous (or aggregated) time series with discrete time-lagged variables [19, 23, 39]. These methods rely on probabilistic mixture models [46], extracting features from sequential data and then learning clusters via a Gaussian mixture model (GMM) [25, 28]. Recently, a mixture model of Markov chains was proposed in [21], which learns potential clusters from aggregate data. For asynchronous event sequences, most of the existing clustering methods can be categorized as feature-based methods, clustering event sequences from learned or predefined features. Typical examples
The prior of cluster is a Dirichlet distribution. Therefore, we can describe the proposed Dirichlet mixture model of Hawkes process in a generative way as ? ? Dir(?/K, ..., ?/K), k|? ? Category(?), ? ? Rayleigh(B), A ? Exp(?), s|k, ?, A ? HP(?k , Ak ), Here ? = [?kc ] ? RC?K and A = [akcc0 d ] ? RC?C?D?K are parameters of Hawkes processes, and + 0+ k k {B = [?c ], ? = [?cc0 d ]} are hyper-parameters. Denote the latent variables indicating the labels of clusters as matrix Z ? {0, 1}N ?K . We can factorize the joint distribution of all variables as2 p(S, Z, ?, ?, A) = p(S|Z, ?, A)p(Z|?)p(?)p(?)p(A), where Y Y p(S|Z, ?, A) = HP(sn |?k , Ak )znk , p(Z|?) = (? k )znk , n,k n,k Y Y p(?) = Dir(?|?), p(?) = Rayleigh(?kc |?ck ), p(A) = 0 c,k c,c ,d,k (4) k Exp(akcc0 d |?cc 0 d ). Our mixture model of Hawkes processes are different from the models in [5, 17, 47]. Those models focus on the sub-sequence clustering problem within an event sequence. The intensity function is a weighted sum of multiple intensity functions of different Hawkes processes. Our model, however, aims at finding the clustering structure across different sequences. The intensity of each event is generated via a single Hawkes process, while the likelihood of an event sequence is a mixture of likelihood functions from different Hawkes processes. 2 2 Rayleigh(x|?) = x x ? 2? 2 e ?2 , Exp(x|?) = x 1 ?? e ? , x ? 0. 3 3.2 Local Identifiability One of the most important questions about our mixture model is whether it is identifiable or not. According to the definition of Hawkes process and the work in [26, 31], we can prove that our model is locally identifiable. The proof of the following theorem is given in the supplementary file. Theorem 3.1. When the time of observation goes to infinity, the mixture model ofh the Hawkes proi 1 ?K cesses defined in (3) is locally identifiable, i.e., for each parameter point ? = vec ??1 ... , K ... ? C?C?D where ? k = {?k , Ak } ? RC for k = 1, .., K, there exists an open neighborhood of ? + ? R0+ 0 containing no other ? which makes p(s; ?) = p(s; ?0 ) holds for all possible s. 4 Proposed Learning Algorithm 4.1 Variational Bayesian Inference Instead of using purely MCMC-based learning method like [29], we propose an effective variational Bayesian inference algorithm to learn (4) in a nested EM framework. Specifically, we consider a variational distribution having the following factorization: Y q(Z, ?, ?, A) = q(Z)q(?, ?, A) = q(Z)q(?) q(?k )q(Ak ). (5) k An EM algorithm can be used to optimize (5). Update Responsibility (E-step). The logarithm of the optimized factor q ? (Z) is approximated as log q ? (Z) = E? [log p(Z|?)] + E?,A [log p(S|Z, ?, A)] + C X  = znk E[log ? k ] + E[log HP(sn |?k , Ak )] + C n,k   X X X Z Tn znk E[log ? k ] + E[ log ?kci (ti ) ? ?kc (s)ds] + C = n,k ? X c i  znk E[log ? k ] + n,k | X i log E[?kci (ti )] ? 0 Z Tn  Var[?kci (ti )]  X ? E[ ?kc (s)ds] +C. 2 k c 2E [?ci (ti )] 0 {z } ?nk where C is a constant and Var[?] represents the variance of random variable. Each term E[log ?kc (t)] is approximated via its second-order Taylor expansion log E[?kc (t)] ? responsibility rnk is calculated as X rnk = E[znk ] = ?nk /( ?nj ). Var[?k c (t)] 2E2 [?k c (t)] [37]. Then, the j Denote Nk = P n rnk (6) for all k?s. Update Parameters (M-step). The logarithm of optimal factor q ? (?, ?, A) is log q ? (?, ?, A) X X = log(p(?k )p(Ak )) + EZ [log p(Z|?)] + log p(?) + k n,k rnk log HP(sn |?k , Ak ) + C. We can estimate the parameters of Hawkes processes via: X b = arg max?,A log(p(?)p(A)) + ? 
Update Parameters (M-step). The logarithm of the optimal factor $q^*(\pi, \mu, A)$ is

$$\log q^*(\pi, \mu, A) = \sum_k \log\big(p(\mu^k)\,p(A^k)\big) + \mathbb{E}_Z[\log p(Z|\pi)] + \log p(\pi) + \sum_{n,k} r_{nk} \log \mathrm{HP}(s_n|\mu^k, A^k) + C.$$

We can estimate the parameters of the Hawkes processes via

$$\{\hat{\mu}, \hat{A}\} = \arg\max_{\mu, A}\; \log\big(p(\mu)\,p(A)\big) + \sum_{n,k} r_{nk} \log \mathrm{HP}(s_n|\mu^k, A^k). \qquad (7)$$

Following the work in [41, 47, 50], we need to apply an EM algorithm to solve (7) iteratively. After getting the optimal $\hat{\mu}$ and $\hat{A}$, we update the distributions as

$$B^k = \sqrt{2/\pi}\,\hat{\mu}^k, \qquad \Theta^k = \hat{A}^k. \qquad (8)$$

Update the Number of Clusters K. When the number of clusters $K$ is unknown, we initialize $K$ randomly and update it in the learning phase. There are multiple methods to update the number of clusters. Regarding our Dirichlet distribution as a finite approximation of a Dirichlet process, we set a large initial $K$ as the truncation level. A simple empirical method is discarding empty clusters (i.e., $N_k = 0$) and merging clusters with $N_k$ smaller than a threshold $N_{\min}$ in the learning phase. Besides this, we can apply the MCMC in [11, 48] to update $K$ via merging or splitting clusters.

Repeating the three steps above, our algorithm maximizes the log-likelihood function (i.e., the logarithm of (4)) and achieves the optimal $\{\Theta, B\}$ accordingly. Both the details of our algorithm and its computational complexity are given in the supplementary file.

Figure 1: The data contain 200 event sequences generated via two 5-dimensional Hawkes processes. (a) Convergence curves: each curve is the average of 5 trials' results; in each trial, 100 inner iterations in total are applied. The increasing (decreasing) strategy changes the number of inner iterations from 2 to 8 (from 8 to 2), and the constant strategy fixes the number at 5. (b) Responsibility and ground truth: the black line is the ground truth, the red dots are responsibilities after 15 inner iterations, and the red line is their average.

4.2 Inner Iteration Allocation Strategy and Convergence Analysis

Our algorithm follows a nested EM framework, where the outer iteration corresponds to the loop of E-step and M-step and the inner iteration corresponds to the inner EM within the M-step; a schematic sketch of this structure follows below. The runtime of our algorithm is linearly proportional to the total number of inner iterations. Given a fixed runtime (or total number of inner iterations), both the final achievable log-likelihood and the convergence behavior of the algorithm depend heavily on how the inner iterations are allocated across the outer iterations. In this work, we test three inner iteration allocation strategies. The first strategy is heuristic: it fixes, increases, or decreases the number of inner iterations as the outer iteration progresses. Compared with the constant strategy, the increasing or decreasing strategy might improve the convergence of the algorithm [9].
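The nested structure and the heuristic schedules can be summarized in a short schematic. The callables `e_step` and `m_step_once` are hypothetical placeholders for the updates (6) and (7), not the released implementation.

```python
def nested_em(e_step, m_step_once, n_outer=50, schedule=lambda it: 5):
    """Nested-EM skeleton: the outer loop alternates E- and M-steps, and the
    M-step itself runs schedule(it) inner EM iterations (Eq. (7) is solved
    iteratively).

    e_step()       -> responsibilities r
    m_step_once(r) -> one inner EM update of {mu, A}; returns current NLL
    """
    history = []
    for it in range(n_outer):
        r = e_step()                          # update responsibilities
        nll = None
        for _ in range(max(1, schedule(it))): # allocated inner iterations
            nll = m_step_once(r)
        history.append(nll)
    return history

# Heuristic allocation schedules tested in Section 4.2:
constant   = lambda it: 5
increasing = lambda it: 2 + (6 * it) // 49    # 2 -> 8 over 50 outer iterations
decreasing = lambda it: 8 - (6 * it) // 49    # 8 -> 2
```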
The second strategy is based on open-loop control [27]: in each outer iteration, we compute the objective function in two ways, either updating the parameters directly (i.e., continuing the current M-step and going to the next inner iteration) or first updating the responsibilities and then updating the parameters (i.e., going to a new loop of E-step and M-step and starting a new outer iteration). The parameters corresponding to the smaller negative log-likelihood are preserved. The third strategy applies Bayesian optimization [33, 35] to choose the number of inner iterations per outer iteration by maximizing the expected improvement. We apply these strategies to a synthetic data set and visualize their impact on the convergence of our algorithm in Fig. 1(a). The open-loop control strategy and the Bayesian optimization strategy achieve comparable convergence performance. They outperform the heuristic strategies (i.e., increasing, decreasing, or fixing the number of inner iterations per outer iteration), reducing the negative log-likelihood more rapidly and reaching a lower final value. Although they adjust the number of inner iterations via different methodologies, both strategies tend to increase the number of inner iterations as the outer iterations progress. In the beginning of the algorithm, the open-loop control strategy updates responsibilities frequently and, similarly, the Bayesian optimization strategy assigns a small number of inner iterations. The heuristic strategy that increases the number of inner iterations follows the same tendency, and is therefore only slightly worse than open-loop control and Bayesian optimization. This phenomenon arises because the estimated responsibilities are not reliable in the beginning: too many inner iterations at that stage might make the learning results fall into bad local optima. Fig. 1(b) further verifies this explanation.
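As a concrete illustration of the open-loop rule above, the sketch below makes one accept/reject decision per outer iteration; `continue_m` and `restart_em` are hypothetical stand-ins for the two candidate updates, not functions from the released code.

```python
def open_loop_step(params, resp, continue_m, restart_em):
    """One open-loop control decision: try (a) continuing the current M-step
    and (b) starting a new E-step followed by an M-step, then keep whichever
    yields the smaller negative log-likelihood.

    continue_m(params, resp) -> (params_a, nll_a)   # one more inner iteration
    restart_em(params)       -> (params_b, resp_b, nll_b)  # new outer iteration
    """
    params_a, nll_a = continue_m(params, resp)
    params_b, resp_b, nll_b = restart_em(params)
    if nll_a <= nll_b:
        return params_a, resp, nll_a
    return params_b, resp_b, nll_b
```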
With the help of the increasing strategy, most of the responsibilities converge to the ground truth with high confidence after just 15 inner iterations, because by then the responsibilities have been updated over 5 times. On the contrary, the responsibilities corresponding to the constant and the decreasing strategies have more uncertainty: many responsibilities are around 0.5 and far from the ground truth.

Based on the analysis above, the increasing allocation strategy indeed improves the convergence of our algorithm, and open-loop control and Bayesian optimization are superior to the other competitors. Because the computational complexity of open-loop control is much lower than that of Bayesian optimization, we apply the open-loop control strategy to our learning algorithm in the following experiments. The scheme of our learning algorithm and a more detailed convergence analysis can be found in the supplementary file.

Figure 2: Comparison of various methods on the F1 score of the minor cluster. Each panel maps the F1 score as a function of the sample percentage of the minor cluster and the distance between cluster centers, for 20, 40, and 80 events per sequence. (a) MMHP+DPGMM; (b) DMHP.

4.3 Empirical Analysis of Sample Complexity

Focusing on the task of clustering event sequences, we investigate the sample complexity of our DMHP model and its learning algorithm. In particular, we want to show that the clustering method based on our model requires fewer samples than existing methods to identify clusters successfully. Among existing methods, the main competitor of our method is the clustering method based on the multi-task multi-dimensional Hawkes process (MMHP) model in [22]. It learns a specific Hawkes process for each sequence and clusters the sequences by applying the Dirichlet process Gaussian mixture model (DPGMM) [10, 28] to the parameters of the corresponding Hawkes processes.
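For reference, here is a minimal sketch of such a two-phase, feature-based baseline, using scikit-learn's BayesianGaussianMixture as a stand-in for the DPGMM of [10]; the feature matrix, truncation level, and other settings are illustrative assumptions, not the configuration used in [22].

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def feature_based_clustering(features, max_clusters=20):
    """Two-phase baseline sketch: per-sequence features (e.g., flattened Hawkes
    parameters or infectivity matrices) are clustered with a truncated
    Dirichlet-process Gaussian mixture.

    features : (N, F) array, one row per event sequence.
    """
    dpgmm = BayesianGaussianMixture(
        n_components=max_clusters,                           # truncation level
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",
        max_iter=500,
    )
    return dpgmm.fit_predict(np.asarray(features))

labels = feature_based_clustering(np.random.rand(100, 25))  # toy features
```

As the analysis below shows, decoupling feature learning from clustering in this way makes the result sensitive to feature quality, which is exactly the weakness the model-based DMHP avoids.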
Following the work in [14], we demonstrate the superiority of our DMHP-based clustering method through a comparison of the identifiability of minor clusters given a finite number of samples. Specifically, we consider a binary clustering problem with 500 event sequences. For the $k$-th cluster, $k = 1, 2$, $N_k$ event sequences are generated via a 1-dimensional Hawkes process with parameter $\theta^k = \{\mu^k, A^k\}$. Taking the parameter as a representation of the cluster center, we can calculate the distance between two clusters as $d = \|\theta^1 - \theta^2\|_2$. Assuming $N_1 < N_2$, we denote the first cluster as the "minor" cluster, whose sample percentage is $\pi^1 = \frac{N_1}{N_1 + N_2}$. Applying our DMHP model and its learning algorithm to data generated with different $d$'s and $\pi^1$'s, we can calculate the F1 scores of the minor cluster w.r.t. $\{d, \pi^1\}$. A high F1 score means that the minor cluster is identified with high accuracy. Fig. 2 visualizes the maps of F1 scores generated via different methods w.r.t. the number of events per sequence. We can find that the F1 score obtained via our DMHP-based method is close to 1 in most situations. Its identifiable area (the yellow part) is consistently much larger than that of the MMHP+DPGMM method w.r.t. the number of events per sequence. The unidentifiable cases happen only in the following two situations: the parameters of different clusters are nearly equal (i.e., $d \to 0$), or the minor cluster is extremely small (i.e., $\pi^1 \to 0$). An enlarged version of Fig. 2 is given in the supplementary file.

5 Experiments

To demonstrate the feasibility and the efficiency of our DMHP-based sequence clustering method, we compare it with state-of-the-art methods, including the vector auto-regressive (VAR) method [12], the Least-Squares (LS) method in [7], and the multi-task multi-dimensional Hawkes process (MMHP) in [22]. All three competitors first learn features of the sequences and then apply the DPGMM [10] to cluster them. The VAR method discretizes asynchronous event sequences into time series and learns transition matrices as features. Both the LS and the MMHP methods learn a specific Hawkes process for each event sequence. For each event sequence, we calculate its infectivity matrix $\Psi = [\psi_{cc'}]$, where the element $\psi_{cc'}$ is the integral of the impact function (i.e., $\int_0^{\infty} \phi_{cc'}(t)\,dt$), and use it as the feature.

Table 1: Clustering Purity on Synthetic Data.

                     Sine-like φ(t)                            Piecewise constant φ(t)
 C  K   VAR+       LS+        MMHP+      DMHP      VAR+       LS+        MMHP+      DMHP
        DPGMM      DPGMM      DPGMM                DPGMM      DPGMM      DPGMM
 5  2   0.5235     0.5639     0.5917     0.9898    0.5222     0.5589     0.5913     0.8085
    3   0.3860     0.5278     0.5565     0.9683    0.3618     0.4402     0.4517     0.7715
    4   0.2894     0.4365     0.5112     0.9360    0.2901     0.3365     0.3876     0.7056
    5   0.2543     0.3980     0.4656     0.9055    0.2476     0.2980     0.3245     0.6774

For the synthetic data with clustering labels, we use clustering purity [24] to evaluate the various methods:

$$\mathrm{Purity} = \frac{1}{N} \sum_{k=1}^{K} \max_{j \in \{1,\ldots,K'\}} |\mathcal{W}_k \cap \mathcal{C}_j|,$$

where $\mathcal{W}_k$ is the learned index set of sequences belonging to the $k$-th cluster, $\mathcal{C}_j$ is the real index set of sequences belonging to the $j$-th class, and $N$ is the total number of sequences.
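A direct implementation of this purity metric is short; the label arrays below are hypothetical toy inputs.

```python
import numpy as np

def clustering_purity(pred, truth):
    """Purity = (1/N) * sum_k max_j |W_k ∩ C_j| for predicted clusters W_k
    and ground-truth classes C_j (both given as integer label arrays)."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    total = 0
    for k in np.unique(pred):
        members = truth[pred == k]            # true labels inside cluster k
        total += np.bincount(members).max()   # size of best-matching class
    return total / len(truth)

# Example: two predicted clusters against two true classes.
print(clustering_purity([0, 0, 1, 1, 1], [0, 0, 0, 1, 1]))  # 0.8
```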
For the real-world data, we visualize the infectivity matrix of each cluster and measure the clustering consistency via a cross-validation method [38, 40]. The principle is simple: because random sampling does not change the clustering structure of the data, a clustering method with high consistency should preserve the pairwise relationships of samples across different trials. Specifically, we test each clustering method with $J$ (= 100) trials. In the $j$-th trial, the data is randomly divided into two folds. After learning the corresponding model from the training fold, we apply the method to the testing fold. We enumerate all pairs of sequences within the same cluster in the $j$-th trial and count the pairs preserved in all other trials. The clustering consistency is the minimum proportion of preserved pairs over all trials:

$$\mathrm{Consistency} = \min_{j \in \{1,\ldots,J\}} \frac{\sum_{j' \ne j} \sum_{(n,n') \in \mathcal{M}_j} \mathbf{1}\{k_n^{j'} = k_{n'}^{j'}\}}{(J-1)\,|\mathcal{M}_j|},$$

where $\mathcal{M}_j = \{(n, n') \mid k_n^j = k_{n'}^j\}$ is the set of sequence pairs within the same cluster in the $j$-th trial, and $k_n^j$ is the cluster index of the $n$-th sequence in the $j$-th trial. A sketch of this computation is given below, after the synthetic-data results.

5.1 Synthetic Data

We generate two synthetic data sets with various clusters, using sine-like impact functions and piecewise constant impact functions respectively. In each data set, the number of clusters is set from 2 to 5. Each cluster contains 400 event sequences, and each event sequence contains 50 (= $M_n$) events and 5 (= $C$) event types. The elements of the exogenous base intensity are sampled uniformly from $[0, 1]$. Each sine-like impact function in the $k$-th cluster is formulated as $\phi_{cc'}^k(t) = b_{cc'}^k\big(1 - \cos(\omega_{cc'}^k (t - s_{cc'}^k))\big)$, where $\{b_{cc'}^k, \omega_{cc'}^k, s_{cc'}^k\}$ are sampled randomly from $[\frac{\pi}{5}, \frac{2\pi}{5}]$. Each piecewise constant impact function is the truncation of the corresponding sine-like impact function, i.e., $2 b_{cc'}^k \cdot \mathrm{round}\big(\phi_{cc'}^k / (2 b_{cc'}^k)\big)$.

Table 1 shows the clustering purity of the various methods on the synthetic data. Compared with the three competitors, our DMHP consistently obtains much better clustering purity. The VAR method simply treats asynchronous event sequences as time series, which loses information such as the order of events and the time delays between adjacent events. Both the LS and the MMHP methods learn a Hawkes process for each individual sequence, which may suffer from over-fitting when a sequence contains few events. These competitors decompose sequence clustering into two phases, learning features and applying DPGMM, which makes them very sensitive to the quality of the features. The potential problems above lead to unsatisfying clustering results. Our DMHP method, however, is model-based: it learns the clustering result directly and greatly reduces the number of unknown variables. As a result, our method avoids the problems of these three competitors and obtains superior clustering results. Additionally, the learning results on the synthetic data with piecewise constant impact functions show that our DMHP method is relatively robust to model misspecification: although our Gaussian basis cannot fit piecewise constant impact functions well, our method still greatly outperforms the other methods.
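Before turning to the real-world data, where consistency is the evaluation metric, here is the promised sketch of the consistency computation. For simplicity it assumes each trial assigns cluster labels to the same $N$ test sequences; in the actual protocol the folds are resampled per trial.

```python
import numpy as np
from itertools import combinations

def clustering_consistency(labels):
    """Consistency sketch: labels is a (J, N) array, row j holding the cluster
    indices k_n^j from trial j. For each trial j, take the co-clustered pairs
    M_j and measure how often they stay co-clustered in the other J-1 trials;
    return the minimum over trials."""
    labels = np.asarray(labels)
    J, N = labels.shape
    scores = []
    for j in range(J):
        pairs = [(n, m) for n, m in combinations(range(N), 2)
                 if labels[j, n] == labels[j, m]]          # M_j
        if not pairs:
            continue
        kept = sum(labels[jp, n] == labels[jp, m]
                   for jp in range(J) if jp != j
                   for n, m in pairs)
        scores.append(kept / ((J - 1) * len(pairs)))
    return min(scores)

# Toy usage: 3 trials over 4 sequences.
print(clustering_consistency([[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 0]]))
```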
5.2 Real-world Data

We test our clustering method on two real-world data sets. The first is the ICU patient flow data used in [43], which is extracted from the MIMIC II data set [32]. This data set contains the transition processes of 30,308 patients among different kinds of care units. The patients can be clustered according to their transition processes. The second is the IPTV data set in [20, 22], which contains 7,100 IPTV users' viewing records collected via Shanghai Telecomm Inc. The TV programs are categorized into 16 classes, and viewing behaviors lasting more than 20 minutes are recorded. Similarly, the users can be clustered according to their viewing records. The event sequences in these two data sets have strong but structured triggering patterns, which can be modeled via different Hawkes processes.

Table 2: Clustering Consistency on Real-world Data.

 Method        ICU Patient   IPTV User
 VAR+DPGMM     0.0901        0.0443
 LS+DPGMM      0.1390        0.0389
 MMHP+DPGMM    0.3313        0.1382
 DMHP          0.3778        0.2004

Table 2 shows the performance of the various clustering methods on clustering consistency. We can find that our method clearly outperforms the other methods, which means that the clustering result obtained via our method is more stable and consistent than the other methods' results. In Fig. 3 we visualize the comparison between our method and its main competitor, MMHP+DPGMM, on the ICU patient flow data. Fig. 3(a) shows the histograms of the number of clusters for the two methods. We can find that the MMHP+DPGMM method tends to over-segment the data into too many clusters. Our DMHP method, however, can find a more compact clustering structure. The distribution of the number of clusters concentrates to 6 and 19 for the two data sets, respectively. In our opinion, this phenomenon reflects a drawback of feature-based methods: the clustering performance is highly dependent on the quality of the features, while the clustering structure is not considered sufficiently in the feature-extraction phase. Taking the learned infectivity matrices as representations of the clusters, we compare our DMHP method with MMHP+DPGMM in Figs. 3(b) and 3(c). The infectivity matrices obtained by our DMHP are sparse and have distinguishable structure, while those obtained by MMHP+DPGMM are chaotic: although MMHP also applies a sparse regularizer to each event sequence's infectivity matrix, it cannot guarantee that the average of the infectivity matrices in a cluster is still sparse. The same phenomena can also be observed in the experiments on the IPTV data. More experimental results are given in the supplementary file.

Figure 3: Comparisons on the ICU patient flow data. (a) Histogram of the number of clusters K for DMHP and MMHP+DPGMM; (b) infectivity matrices of the clusters learned by DMHP; (c) infectivity matrices of the clusters learned by MMHP+DPGMM.
6 Conclusion and Future Work

In this paper, we propose and discuss a Dirichlet mixture model of Hawkes processes and achieve a model-based solution to event sequence clustering. We prove the identifiability of our model and analyze the convergence, sample complexity, and computational complexity of our learning algorithm. In terms of methodology, we plan to study other potential priors, e.g., the prior based on determinantal point processes (DPP) in [45], to improve the estimation of the number of clusters, and to further accelerate our learning algorithm by optimizing the inner iteration allocation strategy in the near future. Additionally, our model can be extended to a Dirichlet process mixture model when $K \to \infty$. In that case, we plan to apply Bayesian nonparametrics to develop new learning algorithms. The source code can be found at https://github.com/HongtengXu/Hawkes-Process-Toolkit.

Acknowledgment

This work is supported in part by NSF IIS-1639792, IIS-1717916, and CMMI-1745382.

References

[1] E. Bacry, K. Dayri, and J.-F. Muzy. Non-parametric kernel estimation for symmetric Hawkes processes. Application to high frequency financial data. The European Physical Journal B, 85(5):1–12, 2012.
[2] D. M. Blei and M. I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–143, 2006.
[3] C. Blundell, J. Beck, and K. A. Heller. Modelling reciprocating relationships with Hawkes processes. In NIPS, 2012.
[4] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure, volume 2. Springer Science & Business Media, 2007.
[5] N. Du, M. Farajtabar, A. Ahmed, A. J. Smola, and L. Song. Dirichlet-Hawkes processes with applications to clustering continuous-time document streams. In KDD, 2015.
[6] N. Du, L. Song, M. Yuan, and A. J. Smola. Learning networks of heterogeneous influence. In NIPS, 2012.
[7] M. Eichler, R. Dahlhaus, and J. Dueck. Graphical modeling for multivariate Hawkes processes with nonparametric link functions. Journal of Time Series Analysis, 2016.
[8] M. Farajtabar, N. Du, M. Gomez-Rodriguez, I. Valera, H. Zha, and L. Song. Shaping social activity by incentivizing users. In NIPS, 2014.
[9] G. H. Golub, Z. Zhang, and H. Zha. Large sparse symmetric eigenvalue problems with homogeneous linear constraints: the Lanczos process with inner-outer iterations. Linear Algebra and Its Applications, 309(1):289–306, 2000.
[10] D. Görür and C. E. Rasmussen. Dirichlet process Gaussian mixture models: Choice of the base distribution. Journal of Computer Science and Technology, 25(4):653–664, 2010.
[11] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, pages 711–732, 1995.
[12] F. Han and H. Liu. Transition matrix estimation in high dimensional time series. In ICML, 2013.
[13] A. G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971.
[14] D. Kim.
Mixture inference at the edge of identifiability. Ph.D. thesis, 2008.
[15] R. Lemonnier and N. Vayatis. Nonparametric Markovian learning of triggering kernels for mutually exciting and mutually inhibiting multivariate Hawkes processes. In Machine Learning and Knowledge Discovery in Databases, pages 161–176. 2014.
[16] E. Lewis and G. Mohler. A nonparametric EM algorithm for multiscale Hawkes processes. Journal of Nonparametric Statistics, 2011.
[17] L. Li and H. Zha. Dyadic event attribution in social networks with mixtures of Hawkes processes. In CIKM, 2013.
[18] W. Lian, R. Henao, V. Rao, J. Lucas, and L. Carin. A multitask point process predictive model. In ICML, 2015.
[19] T. W. Liao. Clustering of time series data: a survey. Pattern Recognition, 38(11):1857–1874, 2005.
[20] D. Luo, H. Xu, H. Zha, J. Du, R. Xie, X. Yang, and W. Zhang. You are what you watch and when you watch: Inferring household structures from IPTV viewing data. IEEE Transactions on Broadcasting, 60(1):61–72, 2014.
[21] D. Luo, H. Xu, Y. Zhen, B. Dilkina, H. Zha, X. Yang, and W. Zhang. Learning mixtures of Markov chains from aggregate data with structural constraints. IEEE Transactions on Knowledge and Data Engineering, 28(6):1518–1531, 2016.
[22] D. Luo, H. Xu, Y. Zhen, X. Ning, H. Zha, X. Yang, and W. Zhang. Multi-task multi-dimensional Hawkes processes for modeling event sequences. In IJCAI, 2015.
[23] E. A. Maharaj. Cluster of time series. Journal of Classification, 17(2):297–314, 2000.
[24] C. D. Manning, P. Raghavan, H. Schütze, et al. Introduction to Information Retrieval, volume 1. Cambridge University Press, 2008.
[25] C. Maugis, G. Celeux, and M.-L. Martin-Magniette. Variable selection for clustering with Gaussian mixture models. Biometrics, 65(3):701–709, 2009.
[26] E. Meijer and J. Y. Ypma. A simple identification proof for a mixture of two univariate normal distributions. Journal of Classification, 25(1):113–123, 2008.
[27] B. A. Ogunnaike and W. H. Ray. Process Dynamics, Modeling, and Control. Oxford University Press, USA, 1994.
[28] C. E. Rasmussen. The infinite Gaussian mixture model. In NIPS, 1999.
[29] J. G. Rasmussen. Bayesian inference for Hawkes processes. Methodology and Computing in Applied Probability, 15(3):623–642, 2013.
[30] P. Reynaud-Bouret, S. Schbath, et al. Adaptive estimation for Hawkes processes; application to genome analysis. The Annals of Statistics, 38(5):2781–2822, 2010.
[31] T. J. Rothenberg. Identification in parametric models. Econometrica: Journal of the Econometric Society, pages 577–591, 1971.
[32] M. Saeed, C. Lieu, G. Raber, and R. G. Mark. MIMIC II: a massive temporal ICU patient database to support research in intelligent patient monitoring. In Computers in Cardiology, 2002, pages 641–644. IEEE, 2002.
[33] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016.
[34] A. Simma and M. I. Jordan. Modeling events with cascades of Poisson processes. In UAI, 2010.
[35] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, 2012.
[36] R. Socher, A. L. Maas, and C. D. Manning. Spectral Chinese restaurant processes: Nonparametric clustering based on similarities. In AISTATS, 2011.
[37] Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In NIPS, 2006.
[38] R. Tibshirani and G. Walther.
Cluster validation by prediction strength. Journal of Computational and Graphical Statistics, 14(3):511–528, 2005.
[39] J. J. Van Wijk and E. R. Van Selow. Cluster and calendar based visualization of time series data. In IEEE Symposium on Information Visualization, 1999.
[40] U. Von Luxburg. Clustering Stability. Now Publishers Inc, 2010.
[41] H. Xu, M. Farajtabar, and H. Zha. Learning Granger causality for Hawkes processes. In ICML, 2016.
[42] H. Xu, D. Luo, and H. Zha. Learning Hawkes processes from short doubly-censored event sequences. In ICML, 2017.
[43] H. Xu, W. Wu, S. Nemati, and H. Zha. Patient flow prediction via discriminative learning of mutually-correcting processes. IEEE Transactions on Knowledge and Data Engineering, 29(1):157–171, 2017.
[44] H. Xu, Y. Zhen, and H. Zha. Trailer generation via a point process-based visual attractiveness model. In IJCAI, 2015.
[45] Y. Xu, P. Müller, and D. Telesca. Bayesian inference for latent biologic structure with determinantal point processes (DPP). Biometrics, 2016.
[46] S. J. Yakowitz and J. D. Spragins. On the identifiability of finite mixtures. The Annals of Mathematical Statistics, pages 209–214, 1968.
[47] S.-H. Yang and H. Zha. Mixture of mutually exciting processes for viral diffusion. In ICML, 2013.
[48] Z. Zhang, K. L. Chan, Y. Wu, and C. Chen. Learning a multivariate Gaussian mixture model with the reversible jump MCMC algorithm. Statistics and Computing, 14(4):343–355, 2004.
[49] Q. Zhao, M. A. Erdogdu, H. Y. He, A. Rajaraman, and J. Leskovec. SEISMIC: A self-exciting point process model for predicting tweet popularity. In KDD, 2015.
[50] K. Zhou, H. Zha, and L. Song. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In AISTATS, 2013.
[51] K. Zhou, H. Zha, and L. Song. Learning triggering kernels for multi-dimensional Hawkes processes. In ICML, 2013.
Deanonymization in the Bitcoin P2P Network

Giulia Fanti and Pramod Viswanath

Abstract

Recent attacks on Bitcoin's peer-to-peer (P2P) network demonstrated that its transaction-flooding protocols, which are used to ensure network consistency, may enable user deanonymization, that is, the linkage of a user's IP address with her pseudonym in the Bitcoin network. In 2015, the Bitcoin community responded to these attacks by changing the network's flooding mechanism to a different protocol, known as diffusion. However, it is unclear if diffusion actually improves the system's anonymity. In this paper, we model the Bitcoin networking stack and analyze its anonymity properties, both pre- and post-2015. The core problem is one of epidemic source inference over graphs, where the observational model and spreading mechanisms are informed by Bitcoin's implementation; notably, these models have not been studied in the epidemic source detection literature before. We identify and analyze near-optimal source estimators. This analysis suggests that Bitcoin's networking protocols (both pre- and post-2015) offer poor anonymity properties on networks with a regular-tree topology. We confirm this claim in simulation on a 2015 snapshot of the real Bitcoin P2P network topology.

1 Introduction

The Bitcoin cryptocurrency has seen widespread adoption, due in part to its reputation as a privacy-preserving financial system [17, 22]. In practice, though, Bitcoin exhibits serious privacy vulnerabilities [3, 19, 27, 28, 24]. Most of these vulnerabilities arise because of two key properties: (1) Bitcoin associates each user with a pseudonym, and (2) pseudonyms can be linked to financial transactions through a public transaction ledger, called the blockchain [23]. If an attacker can associate a pseudonym with a human identity, the attacker may learn the user's transaction history.

In practice, there are several ways to link a user to her Bitcoin pseudonym. The most commonly-studied methods analyze transaction patterns in the public blockchain and link those patterns using side information [3, 19, 27, 28, 24]. In this paper, we are interested in a lower-layer vulnerability: the networking stack. Like most cryptocurrencies, Bitcoin nodes communicate over a P2P network [23]. Whenever a user (Alice) generates a transaction (i.e., sends bitcoins to another user, Bob), she first creates a "transaction message" that contains her pseudonym, Bob's pseudonym, and the transaction amount. Alice subsequently floods this transaction message over the P2P network, which enables other users to validate her transaction and incorporate it into the global blockchain.

The anonymity implications of transaction broadcasting were largely ignored until recently, when researchers demonstrated practical deanonymization attacks on the P2P network [6, 15]. These attacks use a "supernode" to connect to all active Bitcoin nodes and listen to the transaction traffic they relay [15, 6, 7]. By using simple estimators to infer the source IP of each transaction broadcast, this eavesdropper adversary was able to link IP addresses to Bitcoin pseudonyms with an accuracy of up to 30% [6]. We refer to such linkage as deanonymization.

Giulia Fanti ([email protected]) is in the ECE Department at Carnegie Mellon University. Pramod Viswanath ([email protected]) is in the ECE Department at the University of Illinois at Urbana-Champaign. This work was funded by NSF grant CCF-1705007.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In 2015, the Bitcoin community responded to these attacks by changing its flooding protocols from a gossip-style protocol known as trickle spreading to a diffusion spreading protocol that spreads content with independent exponential delays [1]. We define these protocols precisely in Section 2. However, no systematic motivation was provided for this shift. Indeed, it is unclear whether the change actually defends against the deanonymization attacks in [6, 15].

Problem and contributions. The main point of our paper is to show that Bitcoin's flooding protocols have poor anonymity properties, and the community's shift from trickle spreading (pre-2015) to diffusion spreading (post-2015) did not help the situation. The problem of deanonymizing a user in this context is mathematically equivalent to inferring the source of a random spreading process over a graph, given partial observations of the spread. The optimal (maximum-likelihood) source-identification algorithms change between spreading protocols; identifying such algorithms and quantifying their accuracy is the primary focus of this work. We find that despite having different maximum-likelihood estimators, trickle and diffusion exhibit roughly the same, poor anonymity properties. Our specific contributions are threefold:

(1) Modeling. We model the Bitcoin P2P network and an eavesdropper adversary whose capabilities reflect recent practical attacks in [6, 15]. Most Bitcoin network protocols are not explicitly documented, so modeling the system requires parsing a combination of documentation, papers, and code. Several of the resulting models are new to the epidemic source detection literature.

(2) Analysis of Trickle (Pre-2015). We analyze the probability of deanonymization by an eavesdropper adversary under trickle propagation. Our analysis is conducted over a regular tree-structured network. Although the Bitcoin network topology is not a regular tree, we show in Section 2 that regular trees are a reasonable first-order model. We consider graph-independent estimators (e.g., the first-timestamp estimator) as well as maximum-likelihood estimators; both are defined precisely in Section 2. Our analysis suggests that although the first-timestamp estimator performs poorly on high-degree trees, maximum-likelihood estimators achieve high probabilities of detection for trees of any degree d.

(3) Analysis of Diffusion (Post-2015). We conduct a similar analysis of diffusion spreading, which was adopted in 2015 as a fix for the anonymity weaknesses observed under trickle propagation [6, 15]. The analysis of diffusion requires different theoretical tools, including nonlinear differential equations and generalized Pólya urns. Although the analysis techniques and attack mechanisms are different, we find that the anonymity properties of diffusion are similar to those of trickle. Namely, the first-timestamp estimator's probability of detection decays to 0 as the degree d grows, but the maximum-likelihood probability of detection remains high (in particular, non-vanishing) even as d → ∞.

2 Model and related work

Network model. We model the P2P network of Bitcoin nodes as a graph G(V, E), where V is the set of all server nodes and E is the set of edges, or connections, between them. Each server is represented by an (IP address, port) tuple; it can establish up to eight outgoing connections to other Bitcoin nodes [6, 2].
The resulting sparse random graph between nodes can be modeled approximately as a 16-regular graph; in practice, the average degree is closer to 8 due to nonhomogeneities across nodes [20]. The graph is locally tree-like and (approximately) regular. For this reason, regular trees are a natural class of graphs to study. In our theoretical analysis, we model G as a d-regular tree. We validate this choice by running simulations on a snapshot of the true Bitcoin network [20] (Section 5).

Spreading protocols. Each transaction must be broadcast over the network; we analyze the spread of a single message originating from source node v* ∈ V. Without loss of generality, we label v* as node "0" when iterating over nodes. At time t = 0, the message starts spreading according to one of two randomized protocols: trickle (pre-2015) or diffusion (post-2015).

Trickle spreading is a gossip-based flooding protocol. Each source or relay chooses a neighboring peer (called the "trickle" node) uniformly at random, every 200 ms. If the trickle node has not yet received the message, the sender forwards the message [6].¹ We model this by considering a canonical, simpler spreading protocol of round-robin gossip. In round-robin gossip, each source or relay randomly orders its neighbors who have not yet seen the message; we call these uninfected neighbors. In each successive (discrete) timestep, the node transmits the message to the next neighbor in its ordering. Thus, if a node has d neighbors, all d neighbors will receive the message within d timesteps. This differs from trickle spreading, where the time-to-infection is a coupon collector's problem and therefore takes Θ(d log d) timesteps in expectation [8]. We will henceforth abuse terminology by referring to round-robin gossip as trickle spreading.

In diffusion, each source or relay node transmits the message to each of its uninfected neighbors with an independent, exponential delay of rate λ. In practice, Bitcoin uses a higher rate on outgoing edges than incoming ones [2]; we omit this distinction in our model. We assume a continuous-time system, with each node starting its exponential clocks upon receipt (or creation) of a message. For both protocols, we let X_v denote the timestamp at which node v ∈ V receives a given message. Note that server nodes cannot be infected more than once. We assume the message originates at time t = 0, so X_{v*} = X_0 = 0. Moreover, we let G_t(V_t, E_t) denote the infected subgraph of G at time t, i.e., the subgraph of nodes who have received the message (but not necessarily reported it to the adversary) by time t.

Adversarial model. The adversary's goal is to link a message with the source (IP address, port), i.e., to identify the source node v* ∈ V. We consider an eavesdropper adversary, whose capabilities are modeled on the practical deanonymization attacks in [6, 15]. These attacks use a supernode that connects to most of the servers in the Bitcoin network. It can make multiple connections to each honest server, with each connection coming from a different (IP address, port). Hence, the honest server does not realize that the supernode's connections are all from the same entity. We model this by assuming that the eavesdropper adversary makes a fixed number θ of connections to each server, where θ ≥ 1.

¹This description omits some details of trickle spreading, which we do not consider in our analysis. For example, with probability 1/4, each relay forwards the message instantaneously to its neighbors without trickling.
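To make the model concrete, here is a minimal Python sketch of diffusion spreading on a finite d-regular tree. The tree builder, function names, and the collapsing of the θ unit-rate adversary connections into a single Exp(θ) reporting delay (the minimum of θ independent unit-rate exponentials is Exp(θ)) are our illustrative choices, not part of the protocol specification.

```python
import heapq
import random

def regular_tree(d, depth):
    """Adjacency dict for a finite d-regular tree: the root has d children and
    every internal node has d - 1 children, down to the given depth."""
    adj, nxt, frontier = {0: []}, 1, [(0, 0)]
    for node, lvl in frontier:
        if lvl == depth:
            continue
        for _ in range(d if node == 0 else d - 1):
            adj[node].append(nxt)
            adj[nxt] = [node]
            frontier.append((nxt, lvl + 1))
            nxt += 1
    return adj

def simulate_diffusion(adj, source=0, lam=1.0, theta=1.0):
    """Diffusion spread from `source`: each infected node infects each
    uninfected neighbor after an independent Exp(lam) delay, and reports to
    the eavesdropper after an independent Exp(theta) delay. Returns infection
    times X_v and first report times tau_v."""
    infection = {source: 0.0}
    reports = {source: random.expovariate(theta)}
    heap = [(random.expovariate(lam), nbr) for nbr in adj[source]]
    heapq.heapify(heap)
    while heap:
        t, v = heapq.heappop(heap)
        if v in infection:      # already infected via an earlier edge
            continue
        infection[v] = t
        reports[v] = t + random.expovariate(theta)
        for w in adj[v]:
            if w not in infection:
                heapq.heappush(heap, (t + random.expovariate(lam), w))
    return infection, reports
```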
We do not include these adversarial connections in the original server graph G, so G remains a d-regular graph (see Figure 1). The supernode can learn the network structure between servers [6], so we assume that G(V, E) is known to the eavesdropper. The supernode in [6, 15] observes the timestamps at which messages are relayed from each honest server, without relaying or transmitting content. If the adversary maintains multiple active connections to each server (θ > 1), it receives the message θ times from each server. We let τ_v denote the time at which the adversary first observes the message from node v ∈ V. We let τ = (τ_v)_{v∈V} denote the set of all observed first-timestamps. We assume timestamps are relative to time t = 0, i.e., the adversary knows when the message started spreading.

Figure 1: The eavesdropper adversary establishes θ links (in red) to each server; honest servers are connected in a d-regular tree topology (edges in black).

Source estimation. The adversary's goal is as follows: given the observed timestamps τ (up to estimation time t) and the graph G, find an estimator M(τ, G) that outputs the true source. Our metric of success for the adversary is the probability of detection, P(M(τ, G) = v*), taken over the random spreading realization (captured by τ) and any randomness in the estimator. In [6, 15], the adversary uses a variant of the first-timestamp estimator M_FT(τ, G) = arg min_{v∈V_t} τ_v, which outputs the first node (prior to estimation time t) to report the message to the adversary. The first-timestamp estimator requires no knowledge of the graph, and it is computationally easy to implement. We begin by analyzing this estimator for both trickle and diffusion propagation. We also consider the maximum-likelihood (ML) estimator: M_ML(τ, G) = arg max_{v∈V} P(τ | G, v* = v). The ML estimator depends on the time of estimation t to the extent that τ only contains timestamps up to time t. Unlike the first-timestamp estimator, the ML estimator differs across spreading protocols, depends on the graph, and may be computationally intractable in general.

Problem statement. Our goal is to understand whether the Bitcoin community's move from trickle spreading to diffusion actually improved the system's anonymity guarantees. The problem at hand is to characterize the maximum-likelihood (ML) probability of detection of the eavesdropper adversary for both trickle and diffusion processes on d-regular trees, as a function of the degree d, the number of corrupted connections θ, and the detection time t. We meet this goal by computing lower bounds derived from the analysis of suboptimal estimators (e.g., the first-timestamp estimator and centrality-based estimators), and upper bounds derived from fundamental limits on detection.
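Combining the simulation sketch above with the first-timestamp rule gives a quick Monte Carlo estimate of the detection probability. The depth truncation (a finite stand-in for the infinite tree assumed in the analysis) and the parameter values are illustrative assumptions.

```python
def first_timestamp_estimator(reports):
    """M_FT: output the node with the smallest observed timestamp tau_v."""
    return min(reports, key=reports.get)

def empirical_detection_probability(d=4, depth=5, theta=1.0, trials=1000):
    """Monte Carlo estimate of P(M_FT(tau, G) = v*) for diffusion spreading on
    a finite d-regular tree rooted at the source (node 0). Reuses regular_tree
    and simulate_diffusion from the sketch above."""
    adj = regular_tree(d, depth)
    hits = 0
    for _ in range(trials):
        _, reports = simulate_diffusion(adj, source=0, lam=1.0, theta=theta)
        hits += first_timestamp_estimator(reports) == 0
    return hits / trials

print(empirical_detection_probability(d=4))
```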
Shah and Zaman first characterized the ML probability of detection for diffusion observed by a snapshot adversary when the underlying graph is a regular tree [29]. These results were later extended to random, irregular trees [31], whereas other authors studied heuristic source detection methods on general graphs [12, 26, 16] and related theoretical limits [32, 21, 14]. The eavesdropper adversary differs in that it eventually observes a noisy timestamp ?v from every node, regardless of when the node is infected. This changes both the analysis and the estimators that one can use. Another common adversarial model is the spy-based adversary, which observes exact timestamps for a corrupted set of nodes that does not include the source [25, 34]. In our notation, for a set of spies S ? V , the spy-based adversary observes {(s, Xs ) : s 2 S}. Prior work on the spy-based adversary does not characterize the ML probability of detection, but researchers have proposed efficient heuristics that perform well in practice [25, 34, 35, 9]. Unlike the spy-based adversary, the eavesdropper only observes delayed timestamps, and it does so for all nodes, including the source. 3 3.1 Analysis of trickle (pre-2015) First-timestamp estimator The analysis of trickle propagation is complicated by its combinatorial, time-dependent nature. As such, we lower-bound the first-timestamp estimator?s probability of detection. Let ?m , min(?1 , ?2 , . . .) denote the minimum observed timestamp among nodes that are not the source. Then we compute P(?0 < ?m ), i.e., the probability that the true source reports the message to the adversary strictly before any of the other nodes. This event (which causes the source to be detected with probability 1) does not include cases where the true source is one of k nodes (k > 1) that report the message to the adversary simultaneously, and before any other node in the system. Nonetheless, for large node degree d, the ?simultaneous reporting? event is rare, so our lower bound is close to the empirical probability of detection of the first-timestamp estimator. Theorem 3.1 (Proof in Appendix C.1) Consider a message that propagates according to trickle spreading over a d-regular tree of servers, where each node additionally has ? connections to an eavesdropping adversary. The first-timestamp estimator?s probability of detection at time t = 1 ? ? ? 1 d satisfies P(MFT (? , G) = v ? ) Ei(2 log ?) Ei (log ?) where ? = d d 1+? , and Ei(x) , d log 2 R 1 e t dt denotes the exponential integral. t x Probability of Detection We prove this bound by conditioning on the time at which the source reports to the adversary. The proof 0.65 Theoretical lower bound then becomes a combinatorial counting problem. The log(d) / (d log(2)) 0.6 Simulation expression in Theorem 3.1 can be simplified by exam0.55 ining its Taylor expansion (see Appendix A). In par0.5 ticular, for the special case of ? = 1 where the adver0.45 sary establishes only one connection per server, ? line ? 0.4 log d (5) simplifies to P(MFT (? , G)) ? d?log 2 +o logd d . 0.35 This suggests that the first-timestamp estimator has 0.3 a probability of detection that decays to zero asymptotically as log(d)/d. Intuitively, the probability of 2 4 6 8 10 detection should decay to zero, because the higher Tree degree, d the degree of the tree, the higher the likelihood that Figure 2: First-timestamp estimator accuracy a node other than the source reports to the adversary before the source does. Nonetheless, this is only a on d-regular trees when ? = 1. 
Simulation. To evaluate the lower bound in Theorem 3.1 and its approximation for θ = 1, we simulate the first-timestamp estimator on regular trees. Figure 2 illustrates the simulation results for θ = 1 compared to the approximation above. Each data point is averaged over 5,000 trials. Empirically, the lower bound appears to be tight, especially as d grows. Figure 2 suggests a natural solution for improving anonymity in the Bitcoin network: increase the degree of each node to reduce the adversary's probability of detection. However, we shall see in the next section that stronger estimators (e.g., the ML estimator) may achieve high probabilities of detection, even for large d.

3.2 Maximum-likelihood estimator

At any time t, if one knew the ground-truth timestamps (i.e., the X_v's), one could arrange the nodes of the infected subgraph G_t in the order they received the message. We call such an arrangement an ordering of nodes. Since propagation is in discrete time, multiple nodes may receive the message simultaneously; such nodes are lumped together in the ordering. Of course, the true ordering is not observed by the adversary, but the observed timestamps (i.e., τ) restrict the set of possible orderings. A feasible ordering is an ordering that respects the rules of trickle propagation over graph G, as well as the observed timestamps τ. In this subsection only, we will abuse notation by using τ to refer to all timestamps observed by the adversary, not just the first timestamp from each server. So if the adversary has θ connections to each server, τ would include θ timestamps per honest server.

We propose an estimator called timestamp rumor centrality, which counts the number of feasible orderings originating from each candidate source. The candidate with the most feasible orderings is chosen as the estimator output. This estimator is similar to rumor centrality, an estimator devised for snapshot adversaries in [29]. However, the presence of timestamps and the lack of knowledge of the infected subgraph increase the estimator's complexity. We first motivate timestamp rumor centrality.

Proposition 3.2 (Proof in Appendix C.2) Consider a trickle process over a d-regular graph, where each node has θ connections to the eavesdropper adversary. Any feasible orderings o₁ and o₂ with respect to observed timestamps τ and graph G have the same likelihood.

Proposition 3.2 implies that at any fixed time, the likelihood of observing τ given a candidate source is proportional to the number of feasible orderings originating from that candidate source. Therefore, an ML estimator (timestamp rumor centrality) counts the number of feasible orderings at estimation time t. Timestamp rumor centrality is a message-passing algorithm that proceeds as follows: for each candidate source, recursively determine the set of feasible times at which each node could have been infected, given the observed timestamps. This is achieved by passing a set of "feasible times of receipt" from the candidate source to the leaves of the largest feasible infected subtree rooted at the candidate source. In each step, nodes prune receipt times that conflict with their observed timestamps. Next, given each node's set of feasible receipt times, they count the number of feasible orderings that obey the rules of trickle propagation. This is achieved by passing sets of partial orderings from the leaves to the candidate source, and pruning infeasible orderings. The timestamp rumor centrality protocol is presented in Appendix A.2, along with minor modifications that reduce its complexity.

In [31], precise analysis of standard rumor centrality was possible because rumor centrality can be reduced to a simple counting problem. Such an analysis is more challenging for timestamp rumor centrality, because timestamps prevent us from using the same counting argument. However, we identify a suboptimal, simplified version of timestamp rumor centrality that approaches optimal probabilities of detection as t grows. We call this estimator ball centrality.

Ball centrality checks whether a candidate source v could have generated each of the observed timestamps, independently. For example, Figure 3 contains a sample spread on a line graph, where the adversary has one connection per server (not shown); therefore, d = 2 and θ = 1. The ground-truth infection time is written as X_v below each node, and the observed timestamps are written above the nodes. In this figure, the estimator is run at time t = 4, so the adversary only sees three timestamps. For each observed timestamp τ_v, the estimator creates a ball of radius τ_v − 1, centered at v.

Figure 3: Example of ball centrality on a line graph with one link to the adversary per server (links not shown). The estimator is run at time t = 4; observed timestamps τ_v are shown above the nodes and true infection times X_v below them.
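Before walking through the example in Figure 3, here is a compact sketch of the ball-intersection step. It assumes integer trickle timestamps and is a simplification of, not a substitute for, the exact Protocol 1 in Appendix A.2.1; it can be fed the `reports` output of the trickle sketch above.

```python
from collections import deque

def ball_centrality(adj, reports):
    """Ball-centrality sketch: a node u is a feasible source for observation
    tau_v only if dist(u, v) <= tau_v - 1, so we intersect the balls of radius
    tau_v - 1 around each reporting node v and return the candidate set."""
    def ball(center, radius):
        seen, frontier = {center}, deque([(center, 0)])
        while frontier:
            u, r = frontier.popleft()
            if r == radius:
                continue
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, r + 1))
        return seen

    candidates = set(adj)
    for v, tau in reports.items():
        candidates &= ball(v, int(tau) - 1)
    return candidates          # the estimator picks uniformly from this set
```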
This is achieved by passing sets of partial orderings from the leaves to the candidate source, and pruning infeasible orderings. The timestamp rumor centrality protocol is presented in Appendix A.2, along with minor modifications that reduce its complexity. In [31], precise analysis of standard rumor centrality was possible because rumor centrality can be reduced to a simple counting problem. Such an analysis is more challenging for timestamp rumor centrality, because timestamps prevent us from using the same counting argument. However, we identify a suboptimal, simplified version of timestamp rumor centrality that approaches optimal probabilities of detection as t grows. We call this estimator ball centrality.

[Figure 3: Example of ball centrality on a line with one link to the adversary per server (these links are not shown). The estimator is run at time t = 4; the true infection time X_v is written below each node and the observed timestamp above it.]

Ball centrality checks whether a candidate source v could have generated each of the observed timestamps, independently. For example, Figure 3 contains a sample spread on a line graph, where the adversary has one connection per server (not shown). Therefore, d = 2 and θ = 1. The ground truth infection time is written as X_v below each node, and the observed timestamps are written above the node. In this figure, the estimator is run at time t = 4, so the adversary only sees three timestamps. For each observed timestamp τ_v, the estimator creates a ball of radius τ_v − 1, centered at v. For example, in our figure, the green node (node 1) has τ_1 = 2. Therefore, the adversary would make a ball of radius 1 centered at node 1; this ball is depicted by the green bubble in our figure. The ball represents the set of nodes that are close enough to node 1 to feasibly report to the adversary from node 1 at time τ_1 = 2. After constructing an analogous ball for every observed timestamp in τ, the protocol outputs a source selected uniformly from the intersection of these balls. In our example, there are exactly two nodes in this intersection. We describe ball centrality precisely in Protocol 1 (Appendix A.2.1). Although ball centrality is not ML for a fixed time t, the following theorem lower bounds the ML probability of detection by analyzing ball centrality and showing that its probability of detection approaches a fundamental upper bound exponentially fast in detection time t.

Theorem 3.3 (Proof in Section C.3) Consider a trickle spreading process over a d-regular graph of honest servers. In addition, each server has θ independent connections to an eavesdropper adversary. The ML probability of detection at time t satisfies the following expression:

    (1 − (d/(θ + d))^t)(1 − d/(2(θ + d)))  ≤(a)  P(M_ML(τ, G) = v*)  ≤(b)  1 − d/(2(θ + d))    (1)

Note that the right-hand side of equation (1) is always greater than 1/2. As such, increasing the graph degree would not significantly reduce the probability of detection; the adversary can still identify the source with probability at least 1/2 given enough time. Second, the ML probability of detection approaches its upper bound exponentially fast in time t. This suggests that the adversary can achieve high probabilities of detection at small times t. These results highlight an important point: estimators that exploit graph structure can reap significant, order-level gains in accuracy.
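The ball-intersection step is simple enough to sketch directly. Below is a minimal implementation of the intersection underlying ball centrality (our sketch of the idea behind Protocol 1, not the authors' code). The line-graph adjacency and the three observed timestamps are hypothetical values chosen to mirror Figure 3, not taken from it.

```python
from collections import deque

def ball(adj, center, radius):
    """All nodes within graph distance `radius` of `center` (BFS)."""
    seen, frontier = {center}, deque([(center, 0)])
    while frontier:
        u, r = frontier.popleft()
        if r < radius:
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, r + 1))
    return seen

def ball_centrality_candidates(adj, taus):
    """Intersect the balls B(v, tau_v - 1) over all observed timestamps;
    ball centrality then outputs a uniform choice from this set."""
    balls = [ball(adj, v, tau - 1) for v, tau in taus.items()]
    return set.intersection(*balls)

# A line graph on nodes -3..3 (so d = 2), with hypothetical timestamps.
line = {i: [j for j in (i - 1, i + 1) if -3 <= j <= 3] for i in range(-3, 4)}
print(ball_centrality_candidates(line, {1: 2, 0: 3, -2: 4}))  # -> {0, 1}
```

As in the worked example above, the intersection contains exactly two candidate nodes, so the estimator picks between them uniformly.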
4 Analysis of diffusion (post-2015)

4.1 First-timestamp estimator

Although the first-timestamp estimator does not use knowledge of the underlying graph, its performance depends on the underlying graph structure. The following theorem exactly characterizes its probability of detection on a regular tree as t → ∞.

Theorem 4.1 (Proof in Appendix C.4) Consider a diffusion process of rate λ = 1 over a d-regular tree, d > 2. Suppose an adversary observes each node's infection time with an independent, exponential delay of rate λ_2 = θ, θ ≥ 1. Then the following expression describes the probability of detection for the first-timestamp estimator at time t = ∞:

    P(M_FT(τ, G) = v*) = (θ/(d − 2)) log((d + θ − 2)/θ).

The proof expresses the probability of detection as a nonlinear differential equation that can be solved exactly. The expression highlights a few points. First, for a fixed degree d, the probability of detection is strictly positive as t → ∞. This is straightforward to see, but under other adversarial models (e.g., snapshot adversaries) it is not trivial to see that the probability of detection is positive as t → ∞. Indeed, several papers are dedicated to making that point [30, 31]. Second, when θ = 1, i.e., the adversary has only one connection per node, the probability of detection equals log(d − 1)/(d − 2), which approaches log(d)/d asymptotically in d. This quantity tends to 0 as d → ∞, and it is order-equal to the probability of detection of the first-timestamp adversary on the trickle protocol when θ = 1 (see Section 3.1). Theorem 4.1 suggests that the Bitcoin community's transition from trickle spreading to diffusion does not provide order-level anonymity gains (asymptotically in the degree of the graph), at least for the first-timestamp adversary. Next, we ask if the same is true for estimators that use the graph structure.

4.2 Centrality-based estimators

We compute a different lower bound on the ML probability of detection by analyzing a centrality-based estimator. Unlike the first-timestamp estimator, this reporting centrality estimator uses the structure of the infected subgraph by selecting a candidate source that is close to the center (on the graph) of the observed timestamps. However, it does not explicitly use the observed timestamps. Also unlike the first-timestamp estimator, this centrality-based estimator improves as the degree d of the underlying tree increases, with a strictly positive probability of detection as d → ∞. Thus the eavesdropper adversary has an ML probability of detection that scales as Θ(1) in d. Intuitively, reporting centrality works as follows: for each candidate source v, the estimator counts the number of nodes that have reported to the adversary from each of node v's adjacent subtrees. It picks a candidate source for which the number of reporting nodes is approximately equal in each subtree. To make this precise, suppose the infected subtree G_t is rooted at w; we use T_vw to denote the subtree of G_t that contains v and all of v's descendants, with respect to root node w. Consider a random variable Y_v(t), which is 1 if node v ∈ V has reported to the adversary by time t, and 0 otherwise. We let Y_{T_vw}(t) = Σ_{u∈T_vw} Y_u(t) denote the number of nodes in T_vw that have reported to the adversary by time t. We use Y(t) = Σ_{v∈V_t} Y_v(t) to denote the total number of reporting nodes in G_t at time t. Similarly, we use N_{T_vw}(t) to denote the number of infected nodes in T_vw (so N_{T_vw}(t) ≥ Y_{T_vw}(t)), and we let N(t) denote the total number of infected nodes at time t (N(t) ≥ Y(t)).
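Before moving on, the closed form in Theorem 4.1 is easy to evaluate numerically. The sketch below (ours) uses the expression as reconstructed above and compares the θ = 1 case against the log(d)/d asymptotic; the degrees are illustrative.

```python
import math

def p_ft_diffusion(d, theta):
    # Theorem 4.1 (as reconstructed above), valid for d > 2:
    # P(M_FT = v*) = theta/(d - 2) * log((d + theta - 2)/theta).
    return theta / (d - 2) * math.log((d + theta - 2) / theta)

for d in (4, 8, 16, 64):
    print(d, round(p_ft_diffusion(d, 1), 4), round(math.log(d) / d, 4))
```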
For each candidate source v, we consider its d neighbors, which comprise the set N(v). We define a node v's reporting centrality at time t, denoted R_v(t), as follows:

    R_v(t) = 1 if max_{u∈N(v)} Y_{T_uv}(t) < Y(t)/2,
    R_v(t) = 0 otherwise.    (2)

That is, a node's reporting centrality is 1 iff each of its adjacent subtrees has fewer than Y(t)/2 reporting nodes. A node is a reporting center iff its reporting centrality is 1. The estimator outputs v̂ chosen uniformly from all reporting centers. In Figure 4, v* is the only reporting center. Reporting centrality does not use the adversary's observed timestamps; it only counts the number of reporting nodes in each of a node's adjacent subtrees. This estimator is inspired by rumor centrality [30], an ML estimator for the source of a diffusion process under a snapshot adversary. Recall that a snapshot adversary sees the infected subgraph G_t at time t, but it does not learn timestamp information.

[Figure 4: Yellow nodes are infected; a red outline means the node has reported. Here Y(t) = 5 and N(t) = 7; R_{v*}(t) = 1 since v*'s adjacent subtrees have ≤ Y(t)/2 = 2.5 reporting nodes, while R_w(t) = 0.]

The next theorem shows that for trees with high degree d, reporting centrality has a strictly higher (in an order sense) probability of detection than the first-timestamp estimator; its probability of detection is strictly positive as d → ∞.

Theorem 4.2 (Proof in Section C.5) Consider a diffusion process of rate λ = 1 over a d-regular tree. Suppose this process is observed by an eavesdropper adversary, which sees each node's timestamp with an independent exponential delay of rate λ_2 = θ, θ ≥ 1. Then the reporting centrality estimator has a (time-dependent) probability of detection P(M_RC(τ, G) = v*) that satisfies

    lim inf_{t→∞} P(M_RC(τ, G) = v*) ≥ C_d > 0,

where C_d = 1 − (d − 1)[1 − I_{1/2}(1/(d − 2), 1 + 1/(d − 2))] is a constant that depends only on degree d, and I_{1/2}(a, b) is the regularized incomplete Beta function, i.e., the probability that a Beta random variable with parameters a and b takes a value in [0, 1/2).

To prove this, we relate two Pólya urn processes: one that represents the diffusion process over the regular tree of honest nodes, and one that describes the full spreading process, which includes both diffusion over the regular tree and random reporting to the adversary. The first urn can be posed as a classic Pólya urn [10], which has been studied in the context of diffusion [31, 14]. The second urn can be described by an unbalanced generalized Pólya urn (GPU) with negative coefficients [4, 13], a class of urns that does not typically appear in the study of diffusion (to the best of our knowledge). As a side note, this approach can be used to analyze other epidemic source-finding problems that have previously evaded analysis, as we show in Appendix B. Notice that the constant C_d in Theorem 4.2 does not depend on θ; this is because the reporting centrality estimator makes no use of timestamp information, so the delays in the timestamps τ do not affect the estimator's asymptotic behavior.

Simulation results. To evaluate the lower bound in Theorem 4.2, we simulate reporting centrality on diffusion over regular trees. Figure 5 illustrates the empirical performance of reporting centrality averaged over 4,000 trials, compared to the theoretical lower bound on the liminf. The estimator is run at time t = d + 2. Our simulations are run up to degree d = 5 due to computational constraints, since the infected subgraph grows exponentially in the degree of the tree.
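The constant C_d can be computed with a standard regularized incomplete Beta routine. The sketch below (ours) follows the reconstruction of C_d above; scipy.special.betainc(a, b, x) returns the probability that a Beta(a, b) variable is at most x. The limit 1 − log 2 ≈ 0.307 is the constant quoted in the Discussion.

```python
import math
from scipy.special import betainc

def reporting_centrality_constant(d):
    """C_d from Theorem 4.2, as reconstructed above (requires d > 2)."""
    a = 1.0 / (d - 2)
    return 1 - (d - 1) * (1 - betainc(a, 1 + a, 0.5))

for d in (5, 10, 100, 10000):
    print(d, round(reporting_centrality_constant(d), 4))
print("d -> inf limit:", round(1 - math.log(2), 4))  # ~0.307
```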
By d = 5, reporting centrality reaches the theoretical lower bound on the limiting detection probability.

[Figure 5: First-timestamp vs. reporting centrality on diffusion over regular trees, theoretical and simulated, as a function of the degree d; θ = 1, t = d + 2. Figure 6: Comparison of trickle and diffusion under the first-timestamp estimator on 4-regular trees, as a function of the number of eavesdropper connections θ (trickle: theoretical lower bound, simulated lower bound, simulated exact; diffusion: theoretical and simulated). Figure 7: Trickle vs. diffusion under the first-timestamp estimator, simulated on a snapshot of the real Bitcoin network [20], as a function of θ.]

Table 1: Probability of detection on a d-regular tree. The adversary has θ connections per server.

                       Trickle                                          Diffusion
First-timestamp
  all θ:   [Ei(2 log η) − Ei(log η)]/(d log 2)  (Thm 3.1)    (θ/(d − 2)) log((d + θ − 2)/θ)  (Thm 4.1)
  θ = 1:   log(d)/(d log 2) + o(log d/d)        (Sec. 3.1)   log(d − 1)/(d − 2)              (Thm 4.1)
Maximum-likelihood
  all θ:   1 − d/(2(θ + d))                     (Thm 3.3)    1 − (d − 1)[1 − I_{1/2}(1/(d − 2), 1 + 1/(d − 2))]  (Thm 4.2)
  θ = 1:   1 − d/(2(d + 1))                     (Thm 3.3)    (same; independent of θ)        (Thm 4.2)

For diffusion, neither the lower bound on the first-timestamp estimator nor that on the reporting centrality estimator strictly outperforms the other. Figure 5 compares the two estimators as a function of degree d. We observe that reporting centrality outstrips first-timestamp estimation for trees of degree 9 and higher; since our theoretical result is only a lower bound on the performance of reporting centrality, the transition may occur at even smaller d. Empirically, the true Bitcoin graph is approximately 8-regular [20], a regime in which we expect reporting centrality to perform similarly to the first-timestamp estimator.

5 Discussion

Table 1 summarizes our theoretical results for trickle and diffusion. The probabilities of detection for trickle and diffusion are similar, particularly when θ = 1. Although the maximum-likelihood results are difficult to compare visually, they both approach a positive constant as d, t → ∞; for trickle propagation, that constant is 1/2, whereas for diffusion, it is approximately 0.307. These results are asymptotic in degree d. In practice, the underlying Bitcoin graph is fixed; the only variable quantity is the adversary's resources, represented by θ. Figure 6 compares analytical expressions and simulations for 4-regular trees under the first-timestamp estimator (as we lack an ML estimator on general graphs), as a function of θ. It suggests nearly identical detection probabilities for diffusion and trickle on regular trees; while our theoretical prediction for diffusion is exact, our lower bound on trickle is loose since d is small. To validate our decision to analyze regular trees, we simulate trickle and diffusion on a 2015 snapshot of the Bitcoin network [20]. Figure 7 compares these results as a function of θ, for the first-timestamp estimator. Unless specified otherwise, theoretical curves are calculated for a regular tree with d = 8, the mean degree of our dataset.
Diffusion performs close to the theoretical prediction; this is because, with high probability, the first-timestamp estimator uses only a local neighborhood to estimate v*, and the Bitcoin graph is locally tree-like. However, our trickle lower bound remains loose. This is partially due to simultaneous reporting events, but the main contributing factor seems to be graph irregularity. Understanding this effect more carefully is an interesting question for future work. In summary, trickle and diffusion have similar probabilities of detection, both in an asymptotic-order sense and numerically. We have analyzed the canonical class of d-regular trees and simulated these protocols on a real Bitcoin graph topology. Our results omit certain details of the spreading protocols (Sec. 2); extending the analysis to include these details is practically relevant.

References

[1] Bitcoin core commit 5400ef6. https://github.com/bitcoin/bitcoin/commit/5400ef6bcb9d243b2b21697775aa6491115420f3.
[2] Bitcoin core integration/staging tree. https://github.com/bitcoin/bitcoin.
[3] Elli Androulaki, Ghassan O. Karame, Marc Roeschlin, Tobias Scherer, and Srdjan Capkun. Evaluating user privacy in Bitcoin. In International Conference on Financial Cryptography and Data Security, pages 34–51. Springer, 2013.
[4] Krishna B. Athreya and Peter E. Ney. Branching Processes, volume 196. Springer Science & Business Media, 2012.
[5] Carl M. Bender and Steven A. Orszag. Advanced Mathematical Methods for Scientists and Engineers I. Springer Science & Business Media, 1999.
[6] Alex Biryukov, Dmitry Khovratovich, and Ivan Pustogarov. Deanonymisation of clients in Bitcoin P2P network. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pages 15–29. ACM, 2014.
[7] Alex Biryukov and Ivan Pustogarov. Bitcoin over Tor isn't a good idea. In 2015 IEEE Symposium on Security and Privacy, pages 122–134. IEEE, 2015.
[8] Arnon Boneh and Micha Hofri. The coupon-collector problem revisited: a survey of engineering problems and computational methods. Stochastic Models, 13(1):39–66, 1997.
[9] Zhen Chen, Kai Zhu, and Lei Ying. Detecting multiple information sources in networks under the SIR model. IEEE Transactions on Network Science and Engineering, 3(1):17–31, 2016.
[10] Florian Eggenberger and George Pólya. Über die Statistik verketteter Vorgänge. ZAMM-Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik, 3(4):279–289, 1923.
[11] G. Fanti, P. Kairouz, S. Oh, K. Ramchandran, and P. Viswanath. Metadata-aware anonymous messaging. In ICML, 2015.
[12] V. Fioriti and M. Chinnici. Predicting the sources of an outbreak with a spectral technique. arXiv:1211.2333, 2012.
[13] Svante Janson. Functional limit theorems for multitype branching processes and generalized Pólya urns. Stochastic Processes and their Applications, 110(2):177–245, 2004.
[14] Justin Khim and Po-Ling Loh. Confidence sets for the source of a diffusion in regular trees. arXiv preprint arXiv:1510.05461, 2015.
[15] Philip Koshy, Diana Koshy, and Patrick McDaniel. An analysis of anonymity in Bitcoin using P2P network traffic. In International Conference on Financial Cryptography and Data Security, pages 469–485. Springer, 2014.
[16] A. Y. Lokhov, M. Mézard, H. Ohta, and L. Zdeborová. Inferring the origin of an epidemic with a dynamic message-passing algorithm. arXiv preprint arXiv:1303.5315, 2013.
[17] Paul Mah. Top 5 VPN services for personal privacy and security, 2016.
http://www.cio.com/article/3152904/security/top-5-vpn-services-for-personal-privacy-and-security.html.
[18] Hosam Mahmoud. Pólya Urn Models. CRC Press, 2008.
[19] Sarah Meiklejohn, Marjori Pomarole, Grant Jordan, Kirill Levchenko, Damon McCoy, Geoffrey M. Voelker, and Stefan Savage. A fistful of bitcoins: characterizing payments among men with no names. In Proceedings of the 2013 Conference on Internet Measurement, pages 127–140. ACM, 2013.
[20] Andrew Miller, James Litton, Andrew Pachulski, Neal Gupta, Dave Levin, Neil Spring, and Bobby Bhattacharjee. Discovering Bitcoin's public topology and influential nodes, 2015.
[21] Chris Milling, Constantine Caramanis, Shie Mannor, and Sanjay Shakkottai. Network forensics: random infection vs spreading epidemic. ACM SIGMETRICS Performance Evaluation Review, 40(1):223–234, 2012.
[22] David Z. Morris. Legal sparring continues in Bitcoin user's battle with IRS tax sweep, 2017. http://fortune.com/2017/01/01/bitcoin-irs-tax-sweep-user-battle/.
[23] Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system, 2008.
[24] Micha Ober, Stefan Katzenbeisser, and Kay Hamacher. Structure and anonymity of the Bitcoin transaction graph. Future Internet, 5(2):237–250, 2013.
[25] P. C. Pinto, P. Thiran, and M. Vetterli. Locating the source of diffusion in large-scale networks. Physical Review Letters, 109(6):068702, 2012.
[26] B. A. Prakash, J. Vreeken, and C. Faloutsos. Spotting culprits in epidemics: How many and which ones? In ICDM, volume 12, pages 11–20, 2012.
[27] Fergal Reid and Martin Harrigan. An analysis of anonymity in the Bitcoin system. In Security and Privacy in Social Networks, pages 197–223. Springer, 2013.
[28] Dorit Ron and Adi Shamir. Quantitative analysis of the full Bitcoin transaction graph. In International Conference on Financial Cryptography and Data Security, pages 6–24. Springer, 2013.
[29] D. Shah and T. Zaman. Detecting sources of computer viruses in networks: theory and experiment. In ACM SIGMETRICS Performance Evaluation Review, volume 38, pages 203–214. ACM, 2010.
[30] D. Shah and T. Zaman. Rumors in a network: Who's the culprit? IEEE Transactions on Information Theory, 57:5163–5181, Aug 2011.
[31] D. Shah and T. Zaman. Rumor centrality: a universal source detector. In ACM SIGMETRICS Performance Evaluation Review, volume 40, pages 199–210. ACM, 2012.
[32] Z. Wang, W. Dong, W. Zhang, and C. W. Tan. Rumor source detection with multiple observations: Fundamental limits and algorithms. In ACM SIGMETRICS, 2014.
[33] Eric W. Weisstein. Euler-Mascheroni constant. 2002.
[34] K. Zhu and L. Ying. A robust information source estimator with sparse observations. arXiv preprint arXiv:1309.4846, 2013.
[35] Kai Zhu and Lei Ying. A robust information source estimator with sparse observations. Computational Social Networks, 1(1):1, 2014.
Accelerated consensus via Min-Sum Splitting

Patrick Rebeschini, Department of Statistics, University of Oxford, [email protected]
Sekhar Tatikonda, Department of Electrical Engineering, Yale University, [email protected]

Abstract

We apply the Min-Sum message-passing protocol to solve the consensus problem in distributed optimization. We show that while the ordinary Min-Sum algorithm does not converge, a modified version of it known as Splitting yields convergence to the problem solution. We prove that a proper choice of the tuning parameters allows Min-Sum Splitting to yield subdiffusive accelerated convergence rates, matching the rates obtained by shift-register methods. The acceleration scheme embodied by Min-Sum Splitting for the consensus problem bears similarities with lifted Markov chains techniques and with multi-step first order methods in convex optimization.

1 Introduction

Min-Sum is a local message-passing algorithm designed to distributedly optimize an objective function that can be written as a sum of component functions, each of which depends on a subset of the decision variables. Due to its simplicity, Min-Sum has emerged as a canonical protocol to address large scale problems in a variety of domains, including signal processing, statistics, and machine learning. For problems supported on tree graphs, the Min-Sum algorithm corresponds to dynamic programming and is guaranteed to converge to the problem solution. For arbitrary graphs, the ordinary Min-Sum algorithm may fail to converge, or it may converge to something different than the problem solution [28]. In the case of strictly convex objective functions, there are known sufficient conditions to guarantee the convergence and correctness of the algorithm. The most general condition requires the Hessian of the objective function to be scaled diagonally dominant [28, 25]. While the Min-Sum scheme can be applied to optimization problems with constraints, by incorporating the constraints into the objective function as hard barriers, the known sufficient conditions do not apply in this case. In [34], a generalization of the traditional Min-Sum scheme has been proposed, based on a reparametrization of the original objective function. This algorithm is called Splitting, as it can be derived by creating equivalent graph representations for the objective function by "splitting" the nodes of the original graph. In the case of unconstrained problems with quadratic objective functions, where Min-Sum is also known as Gaussian Belief Propagation, the algorithm with splitting has been shown to yield convergence in settings where the ordinary Min-Sum does not converge [35]. To date, a theoretical investigation of the rates of convergence of Min-Sum Splitting has not been established. In this paper we establish rates of convergence for the Min-Sum Splitting algorithm applied to solve the consensus problem, which can be formulated as an equality-constrained problem in optimization. The basic version of the consensus problem is the network averaging problem. In this setting, each node in a graph is assigned a real number, and the goal is to design a distributed protocol that allows the nodes to iteratively exchange information with their neighbors so as to arrive at consensus on the average across the network. Early work includes [42, 41]. The design of distributed algorithms to solve the averaging problem has received a lot of attention recently, as consensus represents a widely-used primitive to compute aggregate statistics in a variety of fields.
Applications include, for instance, estimation problems in sensor networks, distributed tracking and localization, multi-agent coordination, and distributed inference [20, 21, 9, 19]. Consensus is typically combined with some form of local optimization over a peer-to-peer network, as in the case of iterative subgradient methods [29, 40, 17, 10, 6, 16, 39]. In large-scale machine learning, consensus is used as a tool to distribute the minimization of a loss function over a large dataset into a network of processors that can exchange and aggregate information, and only have access to a subset of the data [31, 11, 26, 3]. Classical algorithms to solve the network averaging problem involve linear dynamical systems supported on the nodes of the graph. Even when the coefficients that control the dynamics are optimized, these methods are known to suffer from a "diffusive" rate of convergence, which corresponds to the rate of convergence to stationarity exhibited by the "diffusion" random walk naturally associated to a graph [44, 2]. This rate is optimal for graphs with good expansion properties, such as complete graphs or expanders. In this case the convergence time, i.e., the number of iterations required to reach a prescribed level of error accuracy ε > 0 in the ℓ2 norm relative to the initial condition, scales independently of the dimension of the problem, as Θ(log 1/ε). For graphs with geometry this rate is suboptimal [7], and it does not yield a convergence time that matches the lower bound Ω(D log 1/ε), where D is the graph diameter [37, 36]. For example, in both cycle graphs and in grid-like topologies the number of iterations scales like Θ(D² log 1/ε) (if n is the number of nodes, D ∼ n in a cycle and D ∼ √n in a two-dimensional torus). Θ(D² log 1/ε) is also the convergence time exhibited in random geometric graphs, which represent the relevant topologies for many applications in sensor networks [9]. In [7] it was established that for a class of graphs with geometry (polynomial growth or finite doubling dimension), the mixing time of any reversible Markov chain scales at least like D², embodying the fact that symmetric walks on these graphs take D² steps to travel distances of order D.

Min-Sum schemes to solve the consensus problem have been previously investigated in [27]. The authors show that the ordinary Min-Sum algorithm does not converge in graphs with cycles. They investigate a modified version of it that uses a soft barrier function to incorporate the equality constraints into the objective function. In the case of d-regular graphs, upon a proper choice of initial conditions, the authors show that the algorithm they propose reduces to a linear process supported on the directed edges of the graph, and they characterize the convergence time of the algorithm in terms of the Cesàro mixing time of a Markov chain defined on the set of directed edges of the original graph. In the case of cycle graphs (i.e., d = 2), they prove that the mixing time scales like O(D), which yields the convergence time O(D/ε log 1/ε). See Theorem 4 and Theorem 5 in [27]. In the case of (d/2)-dimensional tori (D ∼ n^{2/d}), they conjecture that the mixing time is Θ(D^{2(d−1)/d}), but do not present bounds for the convergence time. See Conjecture 1 in [27]. For other graph topologies, they leave the mixing time (and convergence time) achieved by their method as an open question.
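The diffusive Θ(D² log 1/ε) scaling on a cycle can be checked numerically in a few lines. The sketch below (our illustration, using the Metropolis-Hastings weights reviewed in Section 3) shows that the spectral gap 1 − ρ(W − 11ᵀ/n) of a ring shrinks like 1/n², i.e., like 1/D².

```python
import numpy as np

def metropolis_cycle(n):
    """Metropolis-Hastings matrix on a ring: d_max = 2, so off-diagonal
    weights are 1/(2 d_max) = 1/4 and the diagonal is 1 - d_v/(2 d_max) = 1/2."""
    W = np.zeros((n, n))
    for v in range(n):
        W[v, (v - 1) % n] = W[v, (v + 1) % n] = 0.25
        W[v, v] = 0.5
    return W

for n in (16, 32, 64):
    W = metropolis_cycle(n)
    J = np.ones((n, n)) / n
    gap = 1 - max(abs(np.linalg.eigvals(W - J)))
    print(n, gap, gap * n**2)  # the last column stays roughly constant
```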
In this paper we show that the Min-Sum scheme based on splitting yields convergence to the consensus solution, and we analytically establish rates of convergence for any graph topology. First, we show that a certain parametrization of the Min-Sum protocol for consensus yields a linear message-passing update for any graph and for any choice of initial conditions. Second, we show that the introduction of the splitting parameters is not only fundamental to guarantee the convergence and correctness of the Min-Sum scheme in the consensus problem, but that proper tuning of these parameters yields accelerated (i.e., "subdiffusive") asymptotic rates of convergence. We establish a square-root improvement for the asymptotic convergence time over diffusive methods, which allows Min-Sum Splitting to scale like O(D log(D/ε)) for cycles and tori. Our results show that Min-Sum schemes are competitive and get close to the optimal rate O(D log(1/ε)) recently established for some algorithms based on Nesterov's acceleration [30, 36]. The main tool used for the analysis involves the construction of an auxiliary linear process supported on the nodes of the original graph to track the evolution of the Min-Sum Splitting algorithm, which is instead supported on the directed edges. This construction allows us to relate the convergence time of the Min-Sum scheme to the spectral gap of the matrix describing the dynamics of the auxiliary process, which is easier to analyze than the matrix describing the dynamics on the edges as in [27].

In the literature, overcoming the suboptimal convergence rate of classical algorithms for network averaging consensus has motivated the design of several accelerated methods. Two main lines of research have been developed, and they seem to have evolved independently of each other: one involves lifted Markov chains techniques, see [37] for a review, the other involves accelerated first order methods in convex optimization, see [13] for a review. Another contribution of this paper is to show that Min-Sum Splitting bears similarities with both types of accelerated methods. On the one hand, Min-Sum can be seen as a process on a lifted space, which is the space of directed edges in the original graph. Here, splitting is seen to introduce a directionality in the message exchange of the ordinary Min-Sum protocol that is analogous to the directionality introduced in non-reversible random walks on lifted graphs to achieve faster convergence to stationarity. The advantage of the Min-Sum algorithm over lifted Markov chain methods is that no lifted graph needs to be constructed. On the other hand, the directionality induced on the edges by splitting translates into a memory term for the auxiliary algorithm running on the nodes. This memory term, which allows nodes to remember previous values and incorporate them into the next update, directly relates the Min-Sum Splitting algorithm to accelerated multi-step first order methods in convex optimization. In particular, we show that a proper choice of the splitting parameters recovers the same matrix that supports the evolution of shift-register methods used in numerical analysis for linear solvers, and, as a consequence, we recover the same accelerated rate of convergence for consensus [45, 4, 24]. To summarize, the main contributions of this paper are:
1. First connection of Min-Sum schemes with lifted Markov chains techniques and multi-step methods in convex optimization.
2.
First proof of how the directionality embedded in Belief Propagation protocols can be tuned and exploited to accelerate the convergence rate towards the problem solution.
3. First analysis of convergence rates for Min-Sum Splitting. New proof technique based on the introduction of an auxiliary process to track the evolution of the algorithm on the nodes.
4. Design of a Min-Sum protocol for the consensus problem that achieves better convergence rates than the ones established (and conjectured) for the Min-Sum method in [27].
Our results motivate further studies to generalize the acceleration due to splittings to other problems.

The paper is organized as follows. In Section 2 we introduce the Min-Sum Splitting algorithm in its general form. In Section 3 we describe the consensus problem and review the classical diffusive algorithms. In Section 4 we review the main accelerated methods that have been proposed in the literature. In Section 5 we specialize the Min-Sum Splitting algorithm to the consensus problem, and show that a proper parametrization yields a linear exchange of messages supported on the directed edges of the graph. In Section 6 we derive the auxiliary message-passing algorithm that allows us to track the evolution of the Min-Sum Splitting algorithm via a linear process with memory supported on the nodes of the graph. In Section 7 we state Theorem 1, which shows that a proper choice of the tuning parameters recovers the rates of shift-registers. Proofs are given in the supplementary material.

2 The Min-Sum Splitting algorithm

The Min-Sum algorithm is a distributed routine to optimize a cost function that is the sum of components supported on a given graph structure. Given a simple graph G = (V, E) with n := |V| vertices and m := |E| edges, let us assume that we are given a set of functions φ_v : R → R ∪ {∞}, for each v ∈ V, and φ_vw = φ_wv : R × R → R ∪ {∞}, for each {v, w} ∈ E, and that we want to solve the following problem over the decision variables x = (x_v)_{v∈V} ∈ R^V:

    minimize Σ_{v∈V} φ_v(x_v) + Σ_{{v,w}∈E} φ_vw(x_v, x_w).    (1)

The Min-Sum algorithm describes an iterative exchange of messages (which are functions of the decision variables) associated to each directed edge in G. Let E⃗ := {(v, w) ∈ V × V : {v, w} ∈ E} be the set of directed edges associated to the undirected edges in E (each edge in E corresponds to two edges in E⃗). In this work we consider the synchronous implementation of the Min-Sum algorithm where at any given time step s, each directed edge (v, w) ∈ E⃗ supports two messages, μ̂ˢ_vw, μˢ_vw : R → R ∪ {∞}. Messages are computed iteratively. Given an initial choice of messages μ⁰ = (μ⁰_vw)_{(v,w)∈E⃗}, the Min-Sum scheme that we investigate in this paper is given in Algorithm 1. Henceforth, for each v ∈ V, let N(v) := {w ∈ V : {v, w} ∈ E} denote the neighbors of node v. The formulation of the Min-Sum scheme given in Algorithm 1, which we refer to as Min-Sum Splitting, was introduced in [34]. This formulation admits as tuning parameters the real number δ ∈ R and the symmetric matrix Γ = (Γ_vw)_{v,w∈V} ∈ R^{V×V}. Without loss of generality, we assume that the sparsity of Γ respects the structure of the graph G, in the sense that if {v, w} ∉ E then Γ_vw = 0 (note that Algorithm 1 only involves summations with respect to nearest neighbors in the graph). The choice of δ = 1 and Γ = A, where A is the adjacency matrix defined as A_vw := 1 if {v, w} ∈ E and A_vw := 0 otherwise, yields the ordinary Min-Sum algorithm.

Algorithm 1: Min-Sum Splitting
Input: messages μ⁰ = (μ⁰_vw)_{(v,w)∈E⃗}; parameters δ ∈ R and Γ ∈ R^{V×V} symmetric; time t ≥ 1.
for s ∈ {1, . . . , t} do:
    μ̂ˢ_wv = φ_v/δ − Γ_wv μˢ⁻¹_wv + Σ_{z∈N(v)} Γ_zv μˢ⁻¹_zv,    (w, v) ∈ E⃗;
    μˢ_wv = min_{z∈R} { φ_vw( · , z)/Γ_vw + (δ − 1) μ̂ˢ_wv + μ̂ˢ_vw(z) },    (w, v) ∈ E⃗;
μᵗ_v = φ_v + δ Σ_{w∈N(v)} Γ_wv μᵗ_wv,    v ∈ V;
Output: xᵗ_v = arg min_{z∈R} μᵗ_v(z),    v ∈ V.

For an arbitrary choice of strictly positive integer parameters, Algorithm 1 can be seen to correspond to the ordinary Min-Sum algorithm applied to a new formulation of the original problem, where an equivalent objective function is obtained from the original one in (1) by splitting each term φ_vw into Γ_vw ∈ N \ {0} terms, and each term φ_v into δ ∈ N \ {0} terms. Namely, minimize Σ_{v∈V} Σ_{k=1}^{δ} φ_v^k(x_v) + Σ_{{v,w}∈E} Σ_{k=1}^{Γ_vw} φ_vw^k(x_v, x_w), with φ_v^k := φ_v/δ and φ_vw^k := φ_vw/Γ_vw. Hence the name "splitting" algorithm. Despite this interpretation, Algorithm 1 is defined for any real choice of parameters δ and Γ. In this paper we investigate the convergence behavior of the Min-Sum Splitting algorithm for some choices of δ and Γ, in the case of the consensus problem that we define in the next section.

3 The consensus problem and standard diffusive algorithms

Given a simple graph G = (V, E) with n := |V| nodes, for each v ∈ V let φ_v : R → R ∪ {∞} be a given function. The consensus problem is defined as follows:

    minimize Σ_{v∈V} φ_v(x_v)    subject to x_v = x_w, {v, w} ∈ E.    (2)

We interpret G as a communication graph where each node represents an agent, and each edge represents a communication channel between neighbor agents. Each agent v is given the function φ_v, and agents collaborate by iteratively exchanging information with their neighbors in G with the goal of eventually arriving at the solution of problem (2). The consensus problem amounts to designing distributed algorithms to solve problem (2) that respect the communication constraints encoded by G. A classical setting investigated in the literature is the least-squares case yielding the network averaging problem, where for a given b ∈ R^V we have φ_v(z) := ½z² − b_v z and the solution of problem (2) is b̄ := (1/n) Σ_{v∈V} b_v.² In this setup, each agent v ∈ V is given a number b_v, and agents want to exchange information with their neighbors according to a protocol that allows each of them to eventually reach consensus on the average b̄ across the entire network. Classical algorithms to solve this problem involve a linear exchange of information of the form xᵗ = W xᵗ⁻¹ with x⁰ = b, for a given matrix W ∈ R^{V×V} that respects the topology of the graph G (i.e., W_vw ≠ 0 only if {v, w} ∈ E or v = w), so that Wᵗ → 11ᵀ/n for t → ∞, where 1 is the all-ones vector. This linear iteration allows for a distributed exchange of information among agents, as at any iteration each agent v ∈ V only receives information from its neighbors N(v) via the update xᵗ_v = W_vv xᵗ⁻¹_v + Σ_{w∈N(v)} W_vw xᵗ⁻¹_w. The original literature on this problem investigates the case where the matrix W has non-negative coefficients and represents the transition matrix of a random walk on the nodes of the graph G, so that W_vw is interpreted as the probability that a random walk at node v visits node w in the next time step. A popular choice is given by the Metropolis-Hastings method [37], which involves the doubly stochastic matrix W^MH defined as W^MH_vw := 1/(2 d_max) if {v, w} ∈ E, W^MH_vv := 1 − d_v/(2 d_max), and W^MH_vw := 0 otherwise,
where d_v := |N(v)| is the degree of node v, and d_max := max_{v∈V} d_v is the maximum degree of the graph G.

¹ As mentioned in [34], one can also consider a more general formulation of the splitting algorithm with δ → (δ_v)_{v∈V} ∈ R^V (possibly also with time-varying parameters). The current choice of the algorithm is motivated by the fact that in the present case the output of the algorithm can be tracked by analyzing a linear system on the nodes of the graph, as we will show in Section 5.
² In the literature, the classical choice is φ_v(z) := ½(z − b_v)², which yields the same results as the quadratic function that we define in the main text, as constant terms in the objective function do not alter the optimal point of the problem but only the optimal value of the objective function.

In [44], necessary and sufficient conditions are given for a generic matrix W to satisfy Wᵗ → 11ᵀ/n, namely, 1ᵀW = 1ᵀ, W1 = 1, and ρ(W − 11ᵀ/n) < 1, where ρ(M) denotes the spectral radius of a given matrix M. The authors show that the problem of choosing the optimal symmetric matrix W that minimizes ρ(W − 11ᵀ/n) = ‖W − 11ᵀ/n‖, where ‖M‖ denotes the spectral norm of a matrix M, which coincides with ρ(M) if M is symmetric, is a convex problem and it can be cast as a semi-definite program. Typically, the optimal matrix involves negative coefficients, hence departing from the random walk interpretation. However, even the optimal choice of symmetric matrix is shown to yield a diffusive rate of convergence, which is already attained by the matrix W^MH [7]. This rate corresponds to the speed of convergence to stationarity achieved by the diffusion random walk, defined as the Markov chain with transition matrix diag(d)⁻¹A, where diag(d) ∈ R^{V×V} is the degree matrix, i.e., diagonal with diag(d)_vv := d_v, and A ∈ R^{V×V} is the adjacency matrix, i.e., symmetric with A_vw := 1 if {v, w} ∈ E, and A_vw := 0 otherwise. For instance, the condition ‖W − 11ᵀ/n‖ᵗ ≤ ε, where ‖·‖ is the ℓ2 norm, yields a convergence time that scales like t ∼ Θ(D² log(1/ε)) in cycle graphs and tori [33], where D is the graph diameter. The authors in [7] established that for a class of graphs with geometry (polynomial growth or finite doubling dimension) the mixing time of any reversible Markov chain scales at least like D², and it is achieved by Metropolis-Hastings [37].

4 Accelerated algorithms

To overcome the diffusive behavior typical of classical consensus algorithms, two main types of approaches have been investigated in the literature, which seem to have been developed independently. The first approach involves the construction of a lifted graph Ĝ = (V̂, Ê) and of a linear system supported on the nodes of it, of the form x̂ᵗ = Ŵ x̂ᵗ⁻¹, where Ŵ ∈ R^{V̂×V̂} is the transition matrix of a non-reversible Markov chain on the nodes of Ĝ. This approach has its origins in the work of [8] and [5], where it was observed for the first time that certain non-reversible Markov chains on properly-constructed lifted graphs yield better mixing times than reversible chains on the original graphs. For some simple graph topologies, such as cycle graphs and two-dimensional grids, the construction of the optimal lifted graphs is well-understood already from the works in [8, 5]. A general theory of lifting in the context of Gossip algorithms has been investigated in [18, 37]. However, this construction incurs additional overhead, which yields non-optimal computational complexity, even for cycle graphs and two-dimensional grids. Typically, lifted random walks on arbitrary graph topologies are constructed on a case-by-case basis, exploiting the specifics of the graph at hand. This is the case, for instance, for random geometric graphs [22, 23]. The key property that allows non-reversible lifted Markov chains to achieve subdiffusive rates is the introduction of a directionality in the process to break the diffusive nature of reversible chains. The strength of the directionality depends on global properties of the original graph, such as the number of nodes [8, 5] or the diameter [37]. See Figure 1.

[Figure 1: (a) Symmetric Markov chain W on the nodes of the ring graph G. (b) Non-reversible Markov chain Ŵ on the nodes of the lifted graph Ĝ [8]. (c) Ordinary Min-Sum algorithm on the directed edges E⃗ associated to G (i.e., K̂(δ, Γ), Algorithm 2, with δ = 1 and Γ = A, where A is the adjacency matrix of G). (d) Min-Sum Splitting K̂(δ, Γ), Algorithm 2, with δ = 1, Γ = γW, γ = 2/(1 + √(1 − ρ_W²)) as in Theorem 1. Here, ρ_W is Θ(1 − 1/n²) and γ ≈ 2(1 − 1/n) for n large. The matrix K̂(δ, Γ) has negative entries, departing from the Markov chain interpretation. This is also the case for the optimal tuning in classical consensus schemes [44] and for the ADMM lifting in [12].]

The second approach involves designing linear updates that are supported on the original graph G and keep track of a longer history of previous iterates. This approach relies on the fact that the original consensus update xᵗ = W xᵗ⁻¹ can be interpreted as a primal-dual gradient ascent method to solve problem (2) with a quadratic objective function [32]. This allows the implementation of accelerated
However, this construction incurs additional overhead, which yield non-optimal computational complexity, even for cycle graphs and two-dimensional grids. Typically, lifted random walks on arbitrary graph topologies are constructed on a one-by-one case, exploiting the specifics of the graph at hand. This is the case, for instance, for random geometric graphs [22, 23]. The key property that allows non-reversible lifted Markov chains to achieve subdiffusive rates is the introduction of a directionality in the process to break the diffusive nature of reversible chains. The strength of the directionality depends on global properties of the original graph, such as the number of nodes [8, 5] or the diameter [37]. See Figure 1. 1/2 1/2 1?1/n ? 1?1/n 1 1/n 1/n ? ?1/n 1 1?1/n (a) (b) (c) (d) Figure 1: (a) Symmetric Markov chain W on the nodes of the ring graph G. (b) Non-reversible c on the nodes of the lifted graph G b [8]. (c) Ordinary Min-Sum algorithm on the Markov chain W b ?), Algorithm 2, with ? = 1 and ? = A, where A is directed edges E associated to G (i.e., K(?, b ?), Algorithm 2, with ? = 1, ? = ?W , the adjacency pmatrix of G). (d) Min-Sum Splitting K(?, ? = 2/(1 + 1 ? ?2W ) as in Theorem 1. Here, ?W is ?(1 ? 1/n2 ) and ? ? 2(1 ? 1/n) for n large. b ?) has negative entries, departing from the Markov chain interpretation. This is also The matrix K(?, the case for the optimal tuning in classical consensus schemes [44] and for the ADMM lifting in [12]. The second approach involves designing linear updates that are supported on the original graph G and keep track of a longer history of previous iterates. This approach relies on the fact that the original consensus update xt = W xt?1 can be interpreted as a primal-dual gradient ascent method to solve problem (2) with a quadratic objective function [32]. This allows the implementation of accelerated 5 gradient methods. To the best of our knowledge, this idea was first introduced in [14], and since then it has been investigated in many other papers. We refer to [13, 24], and references in there, for a review and comparison of multi-step accelerated methods for consensus. The simplest multi-step extension of gradient methods is Polyak?s ?heavy ball,? which involves adding a ?momentum? term to the standard update and yields a primal iterate of the form xt = W xt?1 + ?(xt?1 ? xt?2 ). Another popular multi-step method involves Nesterov?s acceleration, and yields xt = (1 + ?)W xt?1 ? ?W xt?2 . Aligned with the idea of adding a momentum term is the idea of adding a shift register term, which yields xt = (1 + ?)W xt?1 ? ?xt?2 . For our purposes, we note that these methods can be written as  xt xt?1   =K xt?1 xt?2  , (3) for a certain matrix K ? R2n?2n . As in the case of lifted Markov chains techniques, also multi-step methods are able to achieve accelerated rates by exploiting some form of global information: the choice of the parameter ? that yields subdiffusive rates depends on the eigenvalues of W . Remark 1. Beyond lifted Markov chains techniques and accelerated first order methods, many other algorithms have been proposed to solve the consensus problem. The literature is vast. As we focus on Min-Sum schemes, an exhaustive literature review on consensus is beyond the scope of our work. Of particular interest for our results is the distributed ADMM approach [3, 43, 38]. 
Recently in [12], for a class of unconstrained problems with quadratic objective functions, it has been shown that message-passing ADMM schemes can be interpreted as lifting of gradient descent techniques. This prompts for further investigation to connect Min-Sum, ADMM, and accelerated first order methods. In the next two sections we show that Min-Sum Splitting bears similarities with both types of accelerated methods described above. On the one hand, in Section 5 we show that the estimates xtv ?s of Algorithm 1 applied to the network averaging problem can be interpreted as the result of a linear process supported on a lifted space, i.e., the space E of directed edges associated to the undirected edges of G. On the other hand, in Section 6 we show that the estimates xtv ?s can be seen as the result of a linear multi-step process supported on the nodes of G, which can be written as in (3). Later on, in Section 7 and Section 8, we will see that the similarities just described go beyond the structure of the processes, and they extend to the acceleration mechanism itself. In particular, the choice of splitting parameters that yields subdiffusive convergence rates, matching the asymptotic rates of shift register methods, is also shown to depend on global information about G. 5 Min-Sum Splitting for consensus We apply Min-Sum Splitting to solve network averaging. We show that in this case the messagepassing protocol is a linear exchange of parameters associated to the directed edges in E. ? ? wv := Given ? ? R and ? ? RV ?V symmetric, let h(?) ? RE be the vector defined as h(?) E?E b bw + (1 ? 1/?)bv , and let K(?, ?) ? R be matrix defined as b ?)wv,zu K(?, ? ? ???zw ? ? ? ??(?vw ? 1) := (? ? 1)?zv ? ? ? (? ? 1)(?wv ? 1) ? ? ?0 if u = w, z ? N (w) \ {v}, if u = w, z = v, if u = v, z ? N (v) \ {w}, if u = v, z = w, otherwise. (4) 0 0 ? 0 = (R ? vw Consider Algorithm 2 with initial conditions R )(v,w)?E ? RE , r?0 = (? rvw )(v,w)?E ? RE . Algorithm 2: Min-Sum Splitting, consensus problem, quadratic case ? 0 , r?0 ? RE ; ? ? R, ? ? RV ?V symmetric; K(?, b ?) defined in (5); t ? 1. Input: R for s ? {1, . . . , t} do ? ? s = (2 ? 1/?)1 + K(?, b ?)R ? s?1 ; b ?)? R r?s = h(?) + K(?, rs?1 ; Output: xtv := P t bv +? w?N (v) ?wv r?wv P ?t , v 1+? w?N (v) ?wv R wv ?V. 6 Proposition 1. Let ? ? R and ? ? RV ?V symmetric be given. Consider Algorithm 1 applied to 0 ? 0 z 2 ?? problem (2) with ?v (z) := 21 z 2 ?bv z and with quadratic initial messages: ? ?0vw (z) = 12 R rvw z, vw 1 ?s s 2 ? 0 > 0 and r?0 ? R. Then, the messages will remain quadratic, i.e., ? for some R ? (z) = R z vw vw vw 2 vw ? P s t ? wv r?vw z for any s ? 1, and the parameters evolve as in Algorithm 2. If 1 + ? w?N (v) ?wv R >0 for any v ? V and t ? 1, then the output of Algorithm 2 coincides with the output of Algorithm 1. 6 Auxiliary message-passing scheme We show that the output of Algorithm 2 can be tracked by a new message-passing scheme that corresponds to a multi-step linear exchange of parameters associated to the nodes of G. This auxiliary algorithm represents the main tool to establish convergence rates for the Min-Sum Splitting protocol, i.e., Theorem 1 below. The intuition behind the auxiliary process is that while Algorithm 1 (hence, Algorithm 2) involves an exchange of messages supported on the directed edges E, the computation of the estimates xtv ?s only involve the belief functions ?tv ?s, which are supported on the nodes of G. 
6 Auxiliary message-passing scheme

We show that the output of Algorithm 2 can be tracked by a new message-passing scheme that corresponds to a multi-step linear exchange of parameters associated to the nodes of G. This auxiliary algorithm represents the main tool to establish convergence rates for the Min-Sum Splitting protocol, i.e., Theorem 1 below. The intuition behind the auxiliary process is that while Algorithm 1 (hence, Algorithm 2) involves an exchange of messages supported on the directed edges E, the computation of the estimates x_v^t's only involves the belief functions β_v^t's, which are supported on the nodes of G. Due to the simple nature of the pairwise equality constraints in the consensus problem, in the present case a reparametrization allows us to track the output of Min-Sum via an algorithm that directly updates the belief functions on the nodes of the graph, which yields Algorithm 3.

Given γ ∈ R and Γ ∈ R^{n×n} symmetric, define the matrix K(γ, Γ) ∈ R^{2n×2n} as

    K(γ, Γ) := [ (1 − γ)I − (1 − γ) diag(Γ1) + γΓ,  γI ;  γI − γ diag(Γ1) + (1 − γ)Γ,  (1 − γ)I ],    (5)

where I ∈ R^{V×V} is the identity matrix and diag(Γ1) ∈ R^{V×V} is diagonal with (diag(Γ1))_{vv} = (Γ1)_v = Σ_{w∈N(v)} Γ_{vw}. Consider Algorithm 3 with initial conditions R⁰, r⁰, Q⁰, q⁰ ∈ R^V.

Algorithm 3: Auxiliary message-passing
Input: R⁰, r⁰, Q⁰, q⁰ ∈ R^V; γ ∈ R, Γ ∈ R^{V×V} symmetric; K(γ, Γ) defined in (5); t ≥ 1.
for s ∈ {1, . . . , t} do
    [R^s; Q^s] = K(γ, Γ) [R^{s−1}; Q^{s−1}];
    [r^s; q^s] = K(γ, Γ) [r^{s−1}; q^{s−1}];
Output: x_v^t := r_v^t / R_v^t, v ∈ V.

Proposition 2. Let γ ∈ R and Γ ∈ R^{V×V} symmetric be given. The output of Algorithm 2 with initial conditions R̂⁰, r̂⁰ ∈ R^E is the output of Algorithm 3 with R_v^0 := 1 + δ Σ_{w∈N(v)} Γ_{wv} R̂⁰_{wv}, Q_v^0 := 1 − δ Σ_{w∈N(v)} Γ_{wv} R̂⁰_{wv}, r_v^0 := b_v + δ Σ_{w∈N(v)} Γ_{wv} r̂⁰_{wv}, and q_v^0 := b_v − δ Σ_{w∈N(v)} Γ_{vw} r̂⁰_{vw}.

Proposition 2 shows that upon proper initialization, the outputs of Algorithm 2 and Algorithm 3 are equivalent. Hence, Algorithm 3 represents a tool to investigate the convergence behavior of the Min-Sum Splitting algorithm. Analytically, the advantage of the formulation given in Algorithm 3 over the one given in Algorithm 2 is that the former involves two coupled systems of n equations whose convergence behavior can be explicitly linked to the spectral properties of the n × n matrix Γ, as we will see in Theorem 1 below. On the contrary, the linear system of 2m equations in Algorithm 2 does not seem to exhibit an immediate link to the spectral properties of Γ. In this respect, we note that the previous paper that investigated Min-Sum schemes for consensus, i.e., [27], characterized the convergence rate of the algorithm under consideration (albeit only in the case of d-regular graphs, and upon initializing the quadratic terms to the fixed point) in terms of the spectral gap of a matrix that controls a linear system of 2m equations. However, the authors only list results on the behavior of this spectral gap in the case of cycle graphs, i.e., d = 2, and present a conjecture for 2d-tori.
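Since Algorithm 3 is a plain linear iteration on node-indexed parameters, it transcribes directly into a few lines. A caveat: the block entries of K(γ, Γ) below follow our reconstruction of (5), which was recovered from a damaged source; treat them as a transcription assumption and check against the original before relying on them.

import numpy as np

def K_matrix(gamma, Gamma):
    # K(gamma, Gamma) as reconstructed in (5); entries are a transcription assumption.
    n = Gamma.shape[0]
    I = np.eye(n)
    D = np.diag(Gamma @ np.ones(n))     # diag(Gamma 1)
    return np.block([
        [(1 - gamma) * I - (1 - gamma) * D + gamma * Gamma, gamma * I],
        [gamma * I - gamma * D + (1 - gamma) * Gamma, (1 - gamma) * I],
    ])

def auxiliary_minsum(W, b, gamma, t):
    # Algorithm 3 with the initializations of Proposition 2 specialized to
    # Rhat0 = rhat0 = 0, i.e., R0 = Q0 = 1 and r0 = q0 = b.
    n = len(b)
    K = K_matrix(gamma, gamma * W)      # Gamma = gamma * W, as used later in Theorem 1
    RQ = np.ones(2 * n)
    rq = np.concatenate([b, b])
    for _ in range(t):
        RQ = K @ RQ
        rq = K @ rq
    return rq[:n] / RQ[:n]              # x_v^t = r_v^t / R_v^t

The point of the reformulation is visible in the code: both stacked systems are driven by the same 2n × 2n matrix, whose spectrum is determined by the n × n matrix Γ.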
7 Accelerated convergence rates for Min-Sum Splitting

We investigate the convergence behavior of the Min-Sum Splitting algorithm to solve problem (2) with quadratic objective functions. Henceforth, without loss of generality, let b ∈ R^V be given with 0 < b_v < 1 for each v ∈ V, and let φ_v(z) := ½ z² − b_v z. Define b̄ := Σ_{v∈V} b_v / n. Recall from [27] that the ordinary Min-Sum algorithm (i.e., Algorithm 2 with δ = 1 and Γ = A, where A is the adjacency matrix of the graph G) does not converge if the graph G has a cycle. We now show that a proper choice of the tuning parameters allows Min-Sum Splitting to converge to the problem solution in a subdiffusive way. The proof of this result, which is contained in the supplementary material, relies on the use of the auxiliary method defined in Algorithm 3 to track the evolution of the Min-Sum Splitting scheme. Here, recall that ‖x‖ denotes the ℓ2 norm of a given vector x, ‖M‖ denotes the ℓ2 matrix norm of the given matrix M, and ρ(M) its spectral radius.

Theorem 1. Let W ∈ R^{V×V} be a symmetric matrix with W1 = 1 and ρ_W := ρ(W − 11^T/n) < 1. Let δ = 1 and Γ = γW, with γ = 2/(1 + √(1 − ρ_W²)). Let x^t be the output at time t of Algorithm 2 with initial conditions R̂⁰ = r̂⁰ = 0. Define

    K := [ γW, I ; (1 − γ)I, 0 ],   K* := (1/((2 − γ)n)) [ 11^T, 11^T ; (1 − γ)11^T, (1 − γ)11^T ].    (6)

Then, for any v ∈ V we have lim_{t→∞} x_v^t = b̄ and ‖x^t − b̄1‖ ≤ (4√(2n)/(2 − γ)) ‖(K − K*)^t‖. The asymptotic rate of convergence is given by

    ρ_K := ρ(K − K*) = lim_{t→∞} ‖(K − K*)^t‖^{1/t} = √[(1 − √(1 − ρ_W²)) / (1 + √(1 − ρ_W²))] < ρ_W < 1,

which satisfies (1/2)√(1/(1 − ρ_W)) ≤ 1/(1 − ρ_K) ≤ √(1/(1 − ρ_W)).

Theorem 1 shows that the choice of splitting parameters δ = 1 and Γ = γW, where γ and W are defined as in the statement of the theorem, allows the Min-Sum Splitting scheme to achieve the asymptotic rate of convergence that is given by the second largest eigenvalue in magnitude of the matrix K defined in (6), i.e., the quantity ρ_K. The matrix K is the same matrix that describes shift-register methods for consensus [45, 4, 24]. In fact, the proof of Theorem 1 relies on the spectral analysis previously established for shift registers, which can be traced back to [15]. See also [13, 24].

Following [27], let us consider the absolute measure of error given by ‖x^t − b̄1‖/√n (recall that we assume 0 < b_v < 1, so that ‖b‖ ≤ √n). From Theorem 1 it follows that, asymptotically, we have ‖x^t − b̄1‖/√n ≲ 4√2 ρ_K^t/(2 − γ). If we define the asymptotic convergence time as the minimum time t so that, asymptotically, ‖x^t − b̄1‖/√n ≲ ε, then the Min-Sum Splitting scheme investigated in Theorem 1 has an asymptotic convergence time that is O(1/(1 − ρ_K) log{[1/(1 − ρ_K)]/ε}). Given the last bound in Theorem 1, this result achieves (modulo logarithmic terms) a square-root improvement over the convergence time of diffusive methods, which scale like Θ(1/(1 − ρ_W) log 1/ε). For cycle graphs and, more generally, for higher-dimensional tori (where 1/(1 − ρ_W) is Θ(D²) so that 1/(1 − ρ_K) is Θ(D) [33, 1]) the convergence time is O(D log D/ε), where D is the graph diameter.

As prescribed by Theorem 1, the choice of γ that makes the Min-Sum scheme achieve a subdiffusive rate depends on global properties of the graph G. Namely, γ depends on the quantity ρ_W, the second largest eigenvalue in magnitude of the matrix W. This fact connects the acceleration mechanism induced by splitting in the Min-Sum scheme to the acceleration mechanism of lifted Markov chains techniques (see Figure 1) and multi-step first order methods, as described in Section 4. It remains to be investigated how choices of splitting parameters different than the ones investigated in Theorem 1 affect the convergence behavior of the Min-Sum Splitting algorithm.

8 Conclusions

The Min-Sum Splitting algorithm has been previously observed to yield convergence in settings where the ordinary Min-Sum protocol does not converge [35]. In this paper we proved that the introduction of splitting parameters is not only fundamental to guarantee the convergence of the Min-Sum scheme applied to the consensus problem, but that proper tuning of these parameters yields accelerated convergence rates. As prescribed by Theorem 1, the choice of splitting parameters that yields subdiffusive rates involves global information, via the spectral gap of a matrix associated to the original graph (see the choice of γ in Theorem 1). The acceleration mechanism exploited by Min-Sum Splitting is analogous to the acceleration mechanism exploited by lifted Markov chain techniques,
where the transition matrix of the lifted random walks is typically chosen to depend on the total number of nodes in the graph [8, 5] or on its diameter [37] (global pieces of information), and to the acceleration mechanism exploited by multi-step gradient methods, where the momentum/shift-register term is chosen as a function of the eigenvalues of a matrix supported on the original graph [13] (again, global information). Prior to our results, this connection seems not to have been established in the literature. Our findings motivate further studies to generalize the acceleration due to splittings to other problem instances, beyond consensus.

Acknowledgements

This work was partially supported by the NSF under Grant EECS-1609484.

References

[1] David Aldous and James Allen Fill. Reversible Markov chains and random walks on graphs, 2002. Unfinished monograph, recompiled 2014, available at http://www.stat.berkeley.edu/~aldous/RWG/book.html.
[2] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Randomized gossip algorithms. IEEE Transactions on Information Theory, 52(6):2508–2530, 2006.
[3] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1):1–122, 2011.
[4] Ming Cao, Daniel A. Spielman, and Edmund M. Yeh. Accelerated gossip algorithms for distributed computation. Proc. 44th Ann. Allerton Conf. Commun., Contr., Comput., pages 952–959, 2006.
[5] Fang Chen, László Lovász, and Igor Pak. Lifting Markov chains to speed up mixing. In Proceedings of the Thirty-first Annual ACM Symposium on Theory of Computing, pages 275–281, 1999.
[6] J. Chen and A. H. Sayed. Diffusion adaptation strategies for distributed optimization and learning over networks. IEEE Transactions on Signal Processing, 60(8):4289–4305, 2012.
[7] P. Diaconis and L. Saloff-Coste. Moderate growth and random walk on finite groups. Geometric & Functional Analysis GAFA, 4(1):1–36, 1994.
[8] Persi Diaconis, Susan Holmes, and Radford M. Neal. Analysis of a nonreversible Markov chain sampler. The Annals of Applied Probability, 10(3):726–752, 2000.
[9] A. G. Dimakis, S. Kar, J. M. F. Moura, M. G. Rabbat, and A. Scaglione. Gossip algorithms for distributed signal processing. Proceedings of the IEEE, 98(11):1847–1864, 2010.
[10] John C. Duchi, Alekh Agarwal, and Martin J. Wainwright. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Trans. Automat. Contr., 57(3):592–606, 2012.
[11] Pedro A. Forero, Alfonso Cano, and Georgios B. Giannakis. Consensus-based distributed support vector machines. J. Mach. Learn. Res., 11:1663–1707, 2010.
[12] G. França and J. Bento. Markov chain lifting and distributed ADMM. IEEE Signal Processing Letters, 24(3):294–298, 2017.
[13] E. Ghadimi, I. Shames, and M. Johansson. Multi-step gradient methods for networked optimization. IEEE Transactions on Signal Processing, 61(21):5417–5429, 2013.
[14] Bhaskar Ghosh, S. Muthukrishnan, and Martin H. Schultz. First and second order diffusive methods for rapid, coarse, distributed load balancing (extended abstract). In Proceedings of the Eighth Annual ACM Symposium on Parallel Algorithms and Architectures, pages 72–81, 1996.
[15] Gene H. Golub and Richard S. Varga. Chebyshev semi-iterative methods, successive overrelaxation iterative methods, and second order Richardson iterative methods. Numer. Math., 3(1):147–156, 1961.
[16] D. Jakovetić, J. Xavier, and J. M. F. Moura.
Fast distributed gradient methods. IEEE Transactions on Automatic Control, 59(5):1131–1146, May 2014.
[17] Björn Johansson, Maben Rabi, and Mikael Johansson. A randomized incremental subgradient method for distributed optimization in networked systems. SIAM Journal on Optimization, 20(3):1157–1170, 2010.
[18] K. Jung, D. Shah, and J. Shin. Distributed averaging via lifted Markov chains. IEEE Transactions on Information Theory, 56(1):634–647, 2010.
[19] S. Kar, S. Aldosari, and J. M. F. Moura. Topology for distributed inference on graphs. IEEE Transactions on Signal Processing, 56(6):2609–2613, 2008.
[20] V. Lesser, C. Ortiz, and M. Tambe, editors. Distributed Sensor Networks: A Multiagent Perspective, volume 9. Kluwer Academic Publishers, 2003.
[21] Dan Li, K. D. Wong, Yu Hen Hu, and A. M. Sayeed. Detection, classification, and tracking of targets. IEEE Signal Processing Magazine, 19(2):17–29, 2002.
[22] W. Li and H. Dai. Accelerating distributed consensus via lifting Markov chains. In 2007 IEEE International Symposium on Information Theory, pages 2881–2885, 2007.
[23] W. Li, H. Dai, and Y. Zhang. Location-aided fast distributed consensus in wireless networks. IEEE Transactions on Information Theory, 56(12):6208–6227, 2010.
[24] Ji Liu, Brian D. O. Anderson, Ming Cao, and A. Stephen Morse. Analysis of accelerated gossip algorithms. Automatica, 49(4):873–883, 2013.
[25] Dmitry M. Malioutov, Jason K. Johnson, and Alan S. Willsky. Walk-sums and belief propagation in Gaussian graphical models. J. Mach. Learn. Res., 7:2031–2064, 2006.
[26] G. Mateos, J. A. Bazerque, and G. B. Giannakis. Distributed sparse linear regression. IEEE Transactions on Signal Processing, 58(10):5262–5276, 2010.
[27] C. C. Moallemi and B. Van Roy. Consensus propagation. IEEE Transactions on Information Theory, 52(11):4753–4766, 2006.
[28] Ciamac C. Moallemi and Benjamin Van Roy. Convergence of min-sum message-passing for convex optimization. IEEE Transactions on Information Theory, 56(4):2041–2050, 2010.
[29] A. Nedic and A. Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54(1):48–61, 2009.
[30] A. Olshevsky. Linear Time Average Consensus on Fixed Graphs and Implications for Decentralized Optimization and Multi-Agent Control. ArXiv e-prints (1411.4186), 2014.
[31] J. B. Predd, S. R. Kulkarni, and H. V. Poor. A collaborative training algorithm for distributed learning. IEEE Transactions on Information Theory, 55(4):1856–1871, 2009.
[32] M. G. Rabbat, R. D. Nowak, and J. A. Bucklew. Generalized consensus computation in networked systems with erasure links. In IEEE 6th Workshop on Signal Processing Advances in Wireless Communications, pages 1088–1092, 2005.
[33] Sébastien Roch. Bounding fastest mixing. Electron. Commun. Probab., 10:282–296, 2005.
[34] N. Ruozzi and S. Tatikonda. Message-passing algorithms: Reparameterizations and splittings. IEEE Transactions on Information Theory, 59(9):5860–5881, 2013.
[35] Nicholas Ruozzi and Sekhar Tatikonda. Message-passing algorithms for quadratic minimization. Journal of Machine Learning Research, 14:2287–2314, 2013.
[36] Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, and Laurent Massoulié. Optimal algorithms for smooth and strongly convex distributed optimization in networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 3027–3036, 2017.
[37] Devavrat Shah. Gossip algorithms. Foundations and Trends in Networking, 3(1):1–125, 2009.
[38] W.
Shi, Q. Ling, K. Yuan, G. Wu, and W. Yin. On the linear convergence of the ADMM in decentralized consensus optimization. IEEE Transactions on Signal Processing, 62(7):1750–1761, 2014.
[39] Wei Shi, Qing Ling, Gang Wu, and Wotao Yin. EXTRA: An exact first-order algorithm for decentralized consensus optimization. SIAM Journal on Optimization, 25(2):944–966, 2015.
[40] S. Sundhar Ram, A. Nedić, and V. V. Veeravalli. Distributed stochastic subgradient projection algorithms for convex optimization. Journal of Optimization Theory and Applications, 147(3):516–545, 2010.
[41] J. Tsitsiklis, D. Bertsekas, and M. Athans. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Transactions on Automatic Control, 31(9):803–812, 1986.
[42] John N. Tsitsiklis. Problems in Decentralized Decision Making and Computation. PhD thesis, Department of EECS, MIT, 1984.
[43] E. Wei and A. Ozdaglar. Distributed alternating direction method of multipliers. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pages 5445–5450, 2012.
[44] Lin Xiao and Stephen Boyd. Fast linear iterations for distributed averaging. Systems & Control Letters, 53(1):65–78, 2004.
[45] David M. Young. Second-degree iterative methods for the solution of large linear systems. Journal of Approximation Theory, 5(2):137–148, 1972.
6,342
6,737
Generalized Linear Model Regression under Distance-to-set Penalties

Jason Xu
University of California, Los Angeles
[email protected]

Eric C. Chi
North Carolina State University
[email protected]

Kenneth Lange
University of California, Los Angeles
[email protected]

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Estimation in generalized linear models (GLM) is complicated by the presence of constraints. One can handle constraints by maximizing a penalized log-likelihood. Penalties such as the lasso are effective in high dimensions, but often lead to unwanted shrinkage. This paper explores instead penalizing the squared distance to constraint sets. Distance penalties are more flexible than algebraic and regularization penalties, and avoid the drawback of shrinkage. To optimize distance penalized objectives, we make use of the majorization-minimization principle. Resulting algorithms constructed within this framework are amenable to acceleration and come with global convergence guarantees. Applications to shape constraints, sparse regression, and rank-restricted matrix regression on synthetic and real data showcase strong empirical performance, even under non-convex constraints.

1 Introduction and Background

In classical linear regression, the response variable y follows a Gaussian distribution whose mean x^t β depends linearly on a parameter vector β through a vector of predictors x. Generalized linear models (GLMs) extend classical linear regression by allowing y to follow any exponential family distribution, and the conditional mean of y to be a nonlinear function h(x^t β) of x^t β [24]. This encompasses a broad class of important models in statistics and machine learning. For instance, count data and binary classification come within the purview of generalized linear regression.

In many settings, it is desirable to impose constraints on the regression coefficients. Sparse regression is a prominent example. In high-dimensional problems where the number of predictors n exceeds the number of cases m, inference is possible provided the regression function lies in a low-dimensional manifold [11]. In this case, the coefficient vector β is sparse, and just a few predictors explain the response y. The goals of sparse regression are to correctly identify the relevant predictors and to estimate their effect sizes. One approach, best subset regression, is known to be NP hard. Penalizing the likelihood by including an ℓ0 penalty ‖β‖0 (the number of nonzero coefficients) is a possibility, but the resulting objective function is nonconvex and discontinuous. The convex relaxation of ℓ0 regression replaces ‖β‖0 by the ℓ1 norm ‖β‖1. This LASSO proxy for ‖β‖0 restores convexity and continuity [31]. While LASSO regression has been a great success, it has the downside of simultaneously inducing both sparsity and parameter shrinkage. Unfortunately, shrinkage often has the undesirable side effect of including spurious predictors (false positives) with the true predictors.

Motivated by sparse regression, we now consider the alternative of penalizing the log-likelihood by the squared distance from the parameter vector β to the constraint set. If there are several constraints, then we add a distance penalty for each constraint set. Our approach is closely related to the proximal
Beyond sparse regression, distance penalization applies to a wide class of statistically relevant constraint sets, including isotonic constraints and matrix rank constraints. To maximize distance penalized loglikelihoods, we advocate the majorization-minimization (MM) principle [2, 18, 19]. MM algorithms are increasingly popular in solving the large-scale optimization problems arising in statistics and machine learning [22]. Although distance penalization preserves convexity when it already exists, neither the objective function nor the constraints sets need be convex to carry out estimation. The capacity to project onto each constraint set is necessary. Fortunately, many projection operators are known. Even in the absence of convexity, we are able to prove that our algorithm converges to a stationary point. In the presence of convexity, the stationary points are global minima. In subsequent sections, we begin by briefly reviewing GLM regression and shrinkage penalties. We then present our distance penalty method and a sample of statistically relevant problems that it can address. Next we lay out in detail our distance penalized GLM algorithm, discuss how it can be accelerated, summarize our convergence results, and compare its performance to that of competing methods on real and simulated data. We close with a summary and a discussion of future directions. GLMs and Exponential Families: In linear regression, the vector of responses y is normally distributed with mean vector E(y) = X? and covariance matrix V(y) = ? 2 I. A GLM preserves the independence of the responses yi but assumes that they are generated from a shared exponential family distribution. The response yi is postulated to have mean ?i (?) = E[yi |?] = h(xti ?), where xi is the ith row of a design matrix X, and the inverse link function h(s) is smooth and strictly increasing [24]. The functional inverse h?1 (s) of h(s) is called the link function. The likelihood of any exponential family can be written in the canonical form   y?i ? ?(?i ) p(yi |?i , ? ) = c1 (yi , ? ) exp . (1) c2 (? ) Here ? is a fixed scale parameter, and the positive functions c1 and c2 are constant with respect to the natural parameter ?i . The function ? is smooth and convex; a brief calculation shows that ?i = ? 0 (?i ). The canonical link function h?1 (s) is defined by the condition h?1 (?i ) = xti ? = ?i . In this case, h(?i ) = ? 0 (?i ), and the log-likelihood ln p(y|?, xj , ? ) is concave in ?. Because c1 and c2 are not functions of ?, we may drop these terms and work with the log-likelihood up to proportionality. We denote this by L(? | y, X) ? ln p(y|?, xj , ? ). The gradient and second differential of L(? | y, X) amount to m m X X ?L = [yi ? ? 0 (xti ?)]xi and d2 L = ? ? 00 (xti ?)xi xti . (2) i=1 i=1 As an example, when ?(?) = ?2 /2 and c2 (? ) = ? 2 , the density (1) is the Gaussian likelihood, and GLM regression under the identity link coincides with standard linear regression. Choosing ?(?) = ln[1 + exp(?)] and c2 (? ) = 1 corresponds to logistic regression under the canonical link s es h?1 (s) = ln 1?s with inverse link h(s) = 1+e s . GLMs unify a range of regression settings, including Poisson, logistic, gamma, and multinomial regression. Shrinkage penalties: The least absolute shrinkage and selection operator (LASSO) [12, 31] solves m h i X ? = argmin ?k?k1 ? 1 ? L(? | yj , xj ) , ? m j=1 (3) where ? > 0 is a tuning constant that controls the strength of the `1 penalty. 
The ℓ1 relaxation is a popular approach to promote a sparse solution, but there is no obvious map between λ and the sparsity level k. In practice, a suitable value of λ is found by cross-validation. Relying on global shrinkage towards zero, LASSO notoriously leads to biased estimates. This bias can be ameliorated by re-estimating under the model containing only the selected variables, known as the relaxed LASSO [25], but the success of this two-stage procedure relies on correct support recovery in the first step. In many cases, LASSO shrinkage is known to introduce false positives [30], resulting in spurious covariates that cannot be corrected. To combat these shortcomings, one may replace the LASSO penalty by a non-convex penalty with milder effects on large coefficients. The smoothly clipped absolute deviation (SCAD) penalty [10] and minimax concave penalty (MCP) [34] are even functions defined through their derivatives

    q′_λ(β_i, γ) = λ [ 1_{|β_i| ≤ λ} + ((γλ − |β_i|)₊ / ((γ − 1)λ)) 1_{|β_i| > λ} ]  and  q′_λ(β_i, γ) = λ (1 − |β_i|/(γλ))₊

for β_i > 0. Both penalties reduce bias, interpolate between hard thresholding and LASSO shrinkage, and significantly outperform the LASSO in some settings, especially in problems with extreme sparsity. SCAD, MCP, as well as the relaxed LASSO come with the disadvantage of requiring an extra tuning parameter γ > 0 to be selected.

2 Regression with distance-to-constraint set penalties

As an alternative to shrinkage, we consider penalizing the distance between the parameter vector β and constraints defined by sets C_i. Penalized estimation seeks the solution

    β̂ = argmin_β { ½ Σ_i v_i dist(β, C_i)² − (1/m) Σ_{j=1}^m L(β | y_j, x_j) } := argmin_β f(β),    (4)

where the v_i are weights on the distance penalty to constraint set C_i. The Euclidean distance can also be written as dist(β, C_i) = ‖β − P_{C_i}(β)‖₂, where P_{C_i}(β) denotes the projection of β onto C_i. The projection operator is uniquely defined when C_i is closed and convex. If C_i is merely closed, then P_{C_i}(β) may be multi-valued for a few unusual external points β. Notice the distance penalty dist(β, C_i)² is 0 precisely when β ∈ C_i. The solution (4) represents a tradeoff between maximizing the log-likelihood and satisfying the constraints. When each C_i is convex, the objective function is convex as a whole. Sending all of the penalty constants v_i to ∞ produces in the limit the constrained maximum likelihood estimate. This is the philosophy behind the proximal distance algorithm [19, 20]. In practice, it often suffices to find the solution (4) under fixed, large v_i.

The reader may wonder why we employ squared distances rather than distances. The advantage is that squaring renders the penalties differentiable. Indeed, ∇½ dist(x, C_i)² = x − P_{C_i}(x) whenever P_{C_i}(x) is single-valued. This is almost always the case. In contrast, dist(x, C_i) is typically nondifferentiable at boundary points of C_i even when C_i is convex. The following examples motivate distance penalization by considering constraint sets and their projections for several important models.

Sparse regression: Sparsity can be imposed directly through the constraint set C_k = {z ∈ R^n : ‖z‖₀ ≤ k}. Projecting a point β onto C_k is trivially accomplished by setting all but the k largest entries in magnitude of β equal to 0, the same operation behind iterative hard thresholding algorithms; a short sketch follows below. Instead of solving the ℓ1 relaxation (3), our algorithm approximately solves the original ℓ0-constrained problem by repeatedly projecting onto the sparsity set C_k.
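The projection onto the sparsity set C_k is a one-liner; the sketch below, written under our own naming conventions, implements it together with the squared distance that enters the penalty in (4).

import numpy as np

def project_sparsity(beta, k):
    # Project beta onto C_k = {z : ||z||_0 <= k} by zeroing all but the
    # k largest entries in magnitude (hard thresholding).
    out = np.zeros_like(beta)
    keep = np.argsort(np.abs(beta))[-k:]    # indices of the k largest magnitudes
    out[keep] = beta[keep]
    return out

def dist_sq(beta, k):
    # dist(beta, C_k)^2 = ||beta - P_{C_k}(beta)||_2^2
    return np.sum((beta - project_sparsity(beta, k))**2)

beta = np.array([0.1, -3.0, 0.7, 2.5, -0.2])
print(project_sparsity(beta, 2))            # [ 0.  -3.   0.   2.5  0. ]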
Unlike LASSO regression, this strategy enables one to directly incorporate prior knowledge of the sparsity level k in an interpretable manner. When no such information is available, k can be selected by cross-validation just as the LASSO tuning constant λ is selected. Distance penalization escapes the NP hard dilemma of best subset regression at the cost of possible convergence to a local minimum.

Shape and order constraints: As an example of shape and order restrictions, consider isotonic regression [1]. For data y ∈ R^n, isotonic regression seeks to minimize ½‖y − β‖₂² subject to the condition that the β_i are non-decreasing. In this case, the relevant constraint set is the isotone convex cone C = {β : β₁ ≤ β₂ ≤ . . . ≤ β_n}. Projection onto C is straightforward and efficiently accomplished using the pooled adjacent violators algorithm [1, 8]. More complicated order constraints can be imposed analogously: for instance, β_i ≤ β_j might be required for all edges i → j in a directed graph model. Notably, isotonic linear regression applies to changepoint problems [32]; our approach allows isotonic constraints in GLM estimation. One noteworthy application is Poisson regression where the intensity parameter is assumed to be non-decreasing with time.

Rank restriction: Consider GLM regression where the predictors X_i and regression coefficients B are matrix-valued. To impose structure in high-dimensional settings, rank restriction serves as an appropriate matrix counterpart to sparsity for vector parameters. Prior work suggests that imposing matrix sparsity is much less effective than restricting the rank of B in achieving model parsimony [37]. The matrix analog of the LASSO penalty is the nuclear norm penalty. The nuclear norm of a matrix B is defined as the sum of its singular values, ‖B‖* = Σ_j σ_j(B) = trace(√(B^t B)). Notice ‖B‖* is a convex relaxation of rank(B). Including a nuclear norm penalty entails shrinkage and induces low-rankness by proxy. Distance penalization of rank involves projecting onto the set C_r = {Z ∈ R^{n×n} : rank(Z) ≤ r} for a given rank r. Despite sacrificing convexity, distance penalization of rank is, in our view, both more natural and more effective than nuclear norm penalization. Avoiding shrinkage works to the advantage of distance penalization, which we will see empirically in Section 4. According to the Eckart-Young theorem, the projection of a matrix B onto C_r is achieved by extracting the singular value decomposition of B and truncating all but the top r singular values (a sketch appears after this list of examples). Truncating the singular value decomposition is a standard numerical task best computed by Krylov subspace methods [14].

Simple box constraints, hyperplanes, and balls: Many relevant set constraints reduce to closed convex sets with trivial projections. For instance, enforcing non-negative parameter values is accomplished by projecting onto the non-negative orthant. This is an example of a box constraint. Specifying linear equality and inequality constraints entails projecting onto a hyperplane or half-space, respectively. A Tikhonov or ridge penalty constraint ‖β‖₂ ≤ r requires spherical projection. Finally, we stress that it is straightforward to consider combinations of the aforementioned constraints. Multiple norm penalties are already in common use. To encourage selection of correlated variables [38], the elastic net includes both ℓ1 and ℓ2 regularization terms.
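Returning to the rank-restriction example above, the Eckart-Young projection onto C_r takes a few lines with a dense SVD; the sketch below uses our own naming, and for the large matrices of Section 4 a truncated Krylov SVD would replace np.linalg.svd.

import numpy as np

def project_rank(B, r):
    # Project B onto C_r = {Z : rank(Z) <= r} by truncating the SVD.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s[r:] = 0.0                   # keep only the top r singular values
    return (U * s[None, :]) @ Vt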
Further examples include matrix fitting subject to both sparse and low-rank matrix constraints [29] and LASSO regression subject to linear equality and inequality constraints [13]. In our setting the relative importance of different constraints can be controlled via the weights v_i.

3 Majorization-minimization

Figure 1: Illustrative example of two MM iterates, with surrogates g(x | x_k) majorizing f(x) = cos(x).

To solve the minimization problem (4), we exploit the principle of majorization-minimization. An MM algorithm successively minimizes a sequence of surrogate functions g(β | β_k) majorizing the objective function f(β) around the current iterate β_k. See Figure 1. Forcing g(β | β_k) downhill automatically drives f(β) downhill as well [19, 22]. Every expectation-maximization (EM) algorithm [9] for maximum likelihood estimation is an MM algorithm. Majorization requires two conditions: tangency at the current iterate, g(β_k | β_k) = f(β_k), and domination, g(β | β_k) ≥ f(β) for all β. The iterates of the MM algorithm are defined by

    β_{k+1} := argmin_β g(β | β_k),

although all that is absolutely necessary is that g(β_{k+1} | β_k) < g(β_k | β_k). Whenever this holds, the descent property

    f(β_{k+1}) ≤ g(β_{k+1} | β_k) ≤ g(β_k | β_k) = f(β_k)
We remark that stationary points are necessarily global minimizers when f (?) is convex. Furthermore, coercivity of f (?) is a very mild assumption, and is satisfied whenever either the distance penalty or the negative log-likelihood is coercive. For instance, the negative log-likelihoods of the Poisson and Gaussian distributions are coercive functions. While this is not the case for the Bernoulli distribution, adding a small `2 penalty ?k?k22 restores coerciveness. Including such a penalty in logistic regression is a common remedy to the well-known problem of numerical instability in parameter estimates caused by a poorly conditioned design matrix X [27]. Since L(?) is concave in ?, the compactness of one or more of the constraint sets Ci is another sufficient condition for coerciveness. Generalization to Bregman divergences: Although we have focused on penalizing GLM likelihoods with Euclidean distance penalties, this approach holds more generally for objectives containing non-Euclidean measures of distance. As reviewed in the Supplement, the Bregman divergence D? (v, u) = ?(v) ? ?(u) ? d?(u)(v ? u) generated by a convex function ?(v) provides a general notion of directed distance [4]. The Bregman divergence associated with the choice ?(v) = 12 kvk22 , for instance, is the squared Euclidean distance. One can rewrite the GLM penalized likelihood as a sum of multiple Bregman divergences m h i X h i X f (?) = vi D? PC?i (?), ? + wj D? y j , e hj (?) . (6) i j=1 5 Algorithm 1 MM algorithm to solve distance-penalized objective (4) 1: Initialize k = 0, starting point ? 0 , initial step size ? ? (0, 1), and halving parameter ? ? (0, 1): 2: repeat Pm P 1 3: ?fk ? i vi [? ? PCi (? k )] ? m j=1 ?L(? | yj , ? j ) P  Pm 2 1 4: Hk ? i vi I n ? m j=1 d L(? | yj , ? j ) 5: v ? ?H ?1 k ?fk 6: ??1 7: while f (? k + ?v) > f (? k ) + ???fkt ? k do 8: ? ? ?? 9: end while 10: ? k+1 ? ? k + ?v 11: k ?k+1 12: until convergence The first sum in equation (6) represents the distance penalty to the constraint sets Ci . The projection PC?i (?) denotes the closest point to ? in Ci measured under D? . The second sum generalizes the GLM log-likelihood term where e hj (?) = h?1 (xtj ?). Every exponential family likelihood uniquely corresponds to a Bregman divergence D? generated by the conjugate of its cumulant function ? = ? ?   Pm 1 ?1 t [28]. Hence, ?L(? | y, X) is proportional to m j=1 D? y j , h (xj ?) . The functional form (6) immediately broadens the class of objectives to include quasi-likelihoods and distances to constraint sets measured under a broad range of divergences. Objective functions of this form are closely related to proximity function minimization in the convex feasibility literature [5, 6, 7, 33]. The MM principle makes possible the extension of the projection algorithms of [7] to minimize this general objective. Our MM algorithm for distance penalized GLM regression is summarized in Algorithm 1. Although for the sake of clarity the algorithm is written for vector-valued arguments, it holds more generally for matrix-variate regression. In this setting the regression coefficients B and predictors X i are matrix valued, and response yj has mean h[trace(X ti B)] = h[vec(X i )t vec(B)]. Here the vec operator stacks the columns of its matrix argument. Thus, the algorithm immediately applies if we replace B by vec(B) and X 1 , . . . , X m by X = [vec(X 1 ), . . . , vec(X m )]t . Projections requiring the matrix structure are performed by reshaping vec(B) into matrix form. 
In contrast to shrinkage approaches, these maneuvers obviate the need for new algorithms in matrix regression [37]. Acceleration: Here we mention two modifications to the MM algorithm that translate to large practical differences in computational cost. Inverting the n-by-n matrix d2 g(? k | ? k ) naively requires O(n3 ) flops. When the number of cases m  n, invoking the Woodbury formula requires solving a substantially smaller m ? m linear system at each iteration. This computational savings is crucial in the analysis of the EEG data of Section 4. The Woodbury formula says ?1 (vI n + U V )?1 = v ?1 I n ? v ?2 U I m + v ?1 V U V when U and V are n ? m and m ? n matrices, respectively. Inspection of equations (2) and (5) shows that d2 g(? k | ? k ) takes the required form. Under Woodbury?s formula the dominant computation is the matrix-matrix product V U , which requires only O(nm2 ) flops. The second modification to the MM algorithm is quasi-Newton acceleration. This technique exploits secant approximations derived from iterates of the algorithm map to approximate the differential of the map. As few as two secant approximations can lead to orders of magnitude reduction in the number of iterations until convergence. We refer the reader to [36] for a detailed description of quasi-Newton acceleration and a summary of its performance on various high-dimensional problems. 4 Results and performance We first compare the performance of our distance penalization method to leading shrinkage methods in sparse regression. Our simulations involve a sparse length n = 2000 coefficient vector ? with 10 nonzero entries. Nonzero coefficients have uniformly random effect sizes. The entries of the design matrix X are N (0, 0.1) Gaussian random deviates. We then recover ? from undersampled responses 6 0.09 0.07 Mean squared error ?0.2 0.0 0.03 ?0.2 MM MCP SCAD logistic poisson 0.05 ?0.6 ?0.4 0.2 Relative Error, Logistic Relative Error, Poisson MM MCP SCAD LASSO 600 Support indices of true coefficients 800 1000 1200 1400 1600 1800 Number of samples Figure 2: The left figure displays relative errors among nonzero predictors in underdetermined Poisson and logistic regression with m = 1000 cases. It is clear that LASSO suffers the most shrinkage and bias, while MM appears to outperform MCP and SCAD. The right figure displays MSE as a function of m, favoring MM most notably for logistic regression. yj following Poisson and Bernoulli distributions with canonical links. Figure 2 compares solutions obtained using our distance penalties (MM) to those obtained under MCP, SCAD, and LASSO penalties. Relative errors (left) with m = 1000 cases clearly show that LASSO suffers the most shrinkage and bias; MM seems to outperform MCP and SCAD. For a more detailed comparison, the right side of the figure plots mean squared error (MSE) as a function of the number of cases averaged over 50 trials. All methods significantly outperform LASSO, which is omitted for scale, with MM achieving lower MSE than competitors, most noticeably in logistic regression. As suggested by an anonymous reviewer, similar results from additional experiments for Gaussian (linear) regression with comparison to relaxed lasso are included in the Supplement. (a) Sparsity constraint (b) Regularize kXk? (c) Restrict rk(X) = 2 (d) Vary rk(X) = 1, . . . , 8 Figure 3: True B 0 in the top left of each set of 9 images has rank 2. The other 8 images in (a)?(c) display solutions as  varies over the set {0, 0.1, . . . , 0.7}. 
Figure (a) applies our MM algorithm with sparsity rather than rank constraints to illustrate how failing to account for matrix structure misses the true signal; Zhou and Li [37] report similar findings comparing spectral regularization to `1 regularization. Figure (b) performs spectral shrinkage [37] and displays solutions under optimal ? values via BIC, while (c) uses our MM algorithm restricting rank(B) = 2. Figure (d) fixes  = 0.1 and uses MM with rank(B) ? {1, . . . , 8} to illustrate robustness to rank over-specification. For underdetermined matrix regression, we compare to the spectral regularization method developed by Zhou and Li [37]. We generate their cross-shaped 32 ? 32 true signal B0 and in all trials sample m = 300 responses yi ? N [tr(X ti , B), ]. Here the design tensor X is generated with standard normal entries. Figure 3 demonstrates that imposing sparsity alone fails to recover Y 0 and that rank-set projections visibly outperform spectral norm shrinkage as  varies. The rightmost panel also shows that our method is robust to over-specification of the rank of the true signal to an extent. We consider two real datasets. We apply our method to count data of global temperature anomalies relative to the 1961-1990 average, collected by the Climate Research Unit [17]. We assume a non7 Global Temperature Anomalies ?0.6 ?0.2 0.2 0.6 50 100 150 200 1850 1900 1950 Year 2000 250 10 20 30 40 50 60 Figure 4: The leftmost plot shows our isotonic fit to temperature anomaly data [17]. The right figures display the estimated coefficient matrix B on EEG alcoholism data using distance penalization, nuclear norm shrinkage [37], and LASSO shrinkage, respectively. decreasing solution, illustrating an instance of isotonic regression. The fitted solution displayed in Figure 4 has mean squared error 0.009, clearly obeys the isotonic constraint, and is consistent with that obtained on a previous version of the data [32]. We next focus on rank constrained matrix regression for electroencephalography (EEG) data, collected by [35] to study the association between alcoholism and voltage patterns over times and channels. The study consists of 77 individuals with alcoholism and 45 controls, providing 122 binary responses yi indicating whether subject i has alcoholism. The EEG measurements are contained in 256 ? 64 predictor matrices X i ; the dimension m is thus greater than 16, 000. Further details about the data appear in the Supplement. Previous studies apply dimension reduction [21] and propose algorithms to seek the optimal rank 1 solution [16]. These methods could not handle the size of the original data directly, and the spectral shrinkage approach proposed in [37] is the first to consider the full EEG data. Figure 4 shows that our regression solution is qualitatively similar to that obtained under nuclear norm penalization [37], revealing similar time-varying patterns among channels 20-30 and 50-60. In contrast, ignoring matrix structure and penalizing the `1 norm of B yields no useful information, consistent with findings in [37]. However, our distance penalization approach achieves a lower misclassification error of 0.1475. The lowest misclassification rate reported in previous analyses is 0.139 by [16]. As their approach is strictly more restrictive than ours in seeking a rank 1 solution, we agree with [37] in concluding that the lower misclassification error can be largely attributed to benefits from data preprocessing and dimension reduction. 
While not visually distinguishable, we also note that shrinking the eigenvalues via nuclear norm penalization [37] fails to produce a low-rank solution on this dataset. We omit detailed timing comparisons throughout since the various methods were run across platforms and depend heavily on implementation. We note that MCP regression relies on the MM principle, and the LQA and LLA algorithms used to fit models with SCAD penalties are also instances of MM algorithms [11]. Almost all MM algorithms share an overall linear rate of convergence. While these require several seconds of compute time on a standard laptop machine, coordinate-descent implementations of LASSO outstrip our algorithm in terms of computational speed. Our MM algorithm required 31 seconds to converge on the EEG data, the largest example we considered. 5 Discussion GLM regression is one of the most widely employed tools in statistics and machine learning. Imposing constraints upon the solution is integral to parameter estimation in many settings. This paper considers GLM regression under distance-to-set penalties when seeking a constrained solution. Such penalties allow a flexible range of constraints, and are competitive with standard shrinkage methods for sparse and low-rank regression in high dimensions. The MM principle yields a reliable solution method with theoretical guarantees and strong empirical results over a number of practical examples. These examples emphasize promising performance under non-convex constraints, and demonstrate how distance penalization avoids the disadvantages of shrinkage approaches. Several avenues for future work may be pursued. The primary computational bottleneck we face is matrix inversion, which limits the algorithm when faced with extremely large and high-dimensional datasets. Further improvements may be possible using modifications of the algorithm tailored to 8 specific problems, such as coordinate or block descent variants. Since the linear systems encountered in our parameter updates are well conditioned, a conjugate gradient algorithm may be preferable to direct methods of solution in such cases. The updates within our algorithm can be recast as weighted least squares minimization, and a re-examination of this classical problem may suggest even better iterative solvers. As the methods apply to a generalized objective comprised of multiple Bregman divergences, it will be fruitful to study penalties under alternate measures of distance, and settings beyond GLM regression such as quasi-likelihood estimation. While our experiments primarily compare against shrinkage approaches, an anonymous referee points us to recent work revisiting best subset selection using modern advances in mixed integer optimization [3]. These exciting developments make best subset regression possible for much larger problems than previously thought possible. As [3] focus on the linear case, it is of interest to consider how ideas in this paper may offer extensions to GLMs, and to compare the performance of such generalizations. Best subsets constitutes a gold standard for sparse estimation in the noiseless setting; whether it outperforms shrinkage methods seems to depend on the noise level and is a topic of much recent discussion [15, 23]. Finally, these studies as well as our present paper focus on estimation, and it will be fruitful to examine variable selection properties in future work. 
Recent work evidences an inevitable trade-off between false and true positives under LASSO shrinkage in the linear sparsity regime [30]. The authors demonstrate that this need not be the case with `0 methods, remarking that computationally efficient methods which also enjoy good model performance would be highly desirable as `0 and `1 approaches possess one property but not the other [30]. Our results suggest that distance penalties, together with the MM principle, seem to enjoy benefits from both worlds on a number of statistical tasks. Acknowledgements: We would like to thank Hua Zhou for helpful discussions about matrix regression and the EEG data. JX was supported by NSF MSPRF #1606177. References [1] Barlow, R. E., Bartholomew, D. J., Bremner, J., and Brunk, H. D. Statistical inference under order restrictions: The theory and application of isotonic regression. Wiley New York, 1972. [2] Becker, M. P., Yang, I., and Lange, K. EM algorithms without missing data. Statistical Methods in Medical Research, 6:38?54, 1997. [3] Bertsimas, D., King, A., and Mazumder, R. Best subset selection via a modern optimization lens. The Annals of Statistics, 44(2):813?852, 2016. [4] Bregman, L. M. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200?217, 1967. [5] Byrne, C. and Censor, Y. Proximity function minimization using multiple Bregman projections, with applications to split feasibility and Kullback?Leibler distance minimization. Annals of Operations Research, 105(1-4):77?98, 2001. [6] Censor, Y. and Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numerical Algorithms, 8(2):221?239, 1994. [7] Censor, Y., Elfving, T., Kopf, N., and Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Problems, 21(6):2071?2084, 2005. [8] Chi, E. C., Zhou, H., and Lange, K. Distance majorization and its applications. Mathematical Programming Series A, 146(1-2):409?436, 2014. [9] Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), pages 1?38, 1977. [10] Fan, J. and Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348?1360, 2001. [11] Fan, J. and Lv, J. A selective overview of variable selection in high dimensional feature space. Statistica Sinica, 20(1):101, 2010. [12] Friedman, J., Hastie, T., and Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1?22, 2010. 9 [13] Gaines, B. R. and Zhou, H. Algorithms for fitting the constrained lasso. arXiv preprint arXiv:1611.01511, 2016. [14] Golub, G. H. and Van Loan, C. F. Matrix computations, volume 3. JHU Press, 2012. [15] Hastie, T., Tibshirani, R., and Tibshirani, R. J. Extended comparisons of best subset selection, forward stepwise selection, and the lasso. arXiv preprint arXiv:1707.08692, 2017. [16] Hung, H. and Wang, C.-C. Matrix variate logistic regression model with application to EEG data. Biostatistics, 14(1):189?202, 2013. [17] Jones, P., Parker, D., Osborn, T., and Briffa, K. Global and hemispheric temperature anomalies? land and marine instrumental records. Trends: a compendium of data on global change, 2016. 
[18] Lange, K., Hunter, D. R., and Yang, I. Optimization transfer using surrogate objective functions (with discussion). Journal of Computational and Graphical Statistics, 9:1–20, 2000.
[19] Lange, K. MM Optimization Algorithms. SIAM, 2016.
[20] Lange, K. and Keys, K. L. The proximal distance algorithm. arXiv preprint arXiv:1507.07598, 2015.
[21] Li, B., Kim, M. K., and Altman, N. On dimension folding of matrix- or array-valued statistical objects. The Annals of Statistics, pages 1094–1121, 2010.
[22] Mairal, J. Incremental majorization-minimization optimization with application to large-scale machine learning. SIAM Journal on Optimization, 25(2):829–855, 2015.
[23] Mazumder, R., Radchenko, P., and Dedieu, A. Subset selection with shrinkage: Sparse linear modeling when the SNR is low. arXiv preprint arXiv:1708.03288, 2017.
[24] McCullagh, P. and Nelder, J. A. Generalized Linear Models, volume 37. CRC Press, 1989.
[25] Meinshausen, N. Relaxed lasso. Computational Statistics & Data Analysis, 52(1):374–393, 2007.
[26] Moré, J. J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis, pages 105–116. Springer, 1978.
[27] Park, M. Y. and Hastie, T. L1-regularization path algorithm for generalized linear models. Journal of the Royal Statistical Society: Series B (Methodological), 69(4):659–677, 2007.
[28] Polson, N. G., Scott, J. G., and Willard, B. T. Proximal algorithms in statistics and machine learning. Statistical Science, 30(4):559–581, 2015.
[29] Richard, E., Savalle, P.-A., and Vayatis, N. Estimation of simultaneously sparse and low rank matrices. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1351–1358, 2012.
[30] Su, W., Bogdan, M., and Candès, E. False discoveries occur early on the lasso path. The Annals of Statistics, 45(5), 2017.
[31] Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), pages 267–288, 1996.
[32] Wu, W. B., Woodroofe, M., and Mentz, G. Isotonic regression: Another look at the changepoint problem. Biometrika, pages 793–804, 2001.
[33] Xu, J., Chi, E. C., Yang, M., and Lange, K. A majorization-minimization algorithm for split feasibility problems. arXiv preprint arXiv:1612.05614, 2016.
[34] Zhang, C.-H. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.
[35] Zhang, X. L., Begleiter, H., Porjesz, B., Wang, W., and Litke, A. Event related potentials during object recognition tasks. Brain Research Bulletin, 38(6):531–538, 1995.
[36] Zhou, H., Alexander, D., and Lange, K. A quasi-Newton acceleration for high-dimensional optimization algorithms. Statistics and Computing, 21:261–273, 2011.
[37] Zhou, H. and Li, L. Regularized matrix regression. Journal of the Royal Statistical Society: Series B (Methodological), 76(2):463–483, 2014.
[38] Zou, H. and Hastie, T. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Methodological), 67(2):301–320, 2005.
Adaptive stimulus selection for optimizing neural population responses

Benjamin R. Cowley (1,2), Ryan C. Williamson (1,2,5), Katerina Acar (2,6), Matthew A. Smith* (2,7), Byron M. Yu* (2,3,4)
(1) Machine Learning Dept., (2) Center for the Neural Basis of Cognition, (3) Dept. of Electrical and Computer Engineering, (4) Dept. of Biomedical Engineering, Carnegie Mellon University; (5) School of Medicine, (6) Dept. of Neuroscience, (7) Dept. of Ophthalmology, University of Pittsburgh
bcowley@andrew.cmu.edu, {rcw30, kac216, smithma}@pitt.edu, byronyu@cmu.edu
* denotes equal contribution.

Abstract

Adaptive stimulus selection methods in neuroscience have primarily focused on maximizing the firing rate of a single recorded neuron. When recording from a population of neurons, it is usually not possible to find a single stimulus that maximizes the firing rates of all neurons. This motivates optimizing an objective function that takes into account the responses of all recorded neurons together. We propose "Adept," an adaptive stimulus selection method that can optimize population objective functions. In simulations, we first confirmed that population objective functions elicited more diverse stimulus responses than single-neuron objective functions. Then, we tested Adept in a closed-loop electrophysiological experiment in which population activity was recorded from macaque V4, a cortical area known for mid-level visual processing. To predict neural responses, we used the outputs of a deep convolutional neural network model as feature embeddings. Images chosen by Adept elicited mean neural responses that were 20% larger than those for randomly-chosen natural images, and also evoked a larger diversity of neural responses. Such adaptive stimulus selection methods can facilitate experiments that involve neurons far from the sensory periphery, for which it is often unclear which stimuli to present.

1 Introduction

A key choice in a neurophysiological experiment is to determine which stimuli to present. Often, it is unknown a priori which stimuli will drive a to-be-recorded neuron, especially in brain areas far from the sensory periphery. Most studies either choose from a class of parameterized stimuli (e.g., sinusoidal gratings or pure tones) or present many randomized stimuli (e.g., white noise) to find the stimulus that maximizes the response of a neuron (i.e., the preferred stimulus) [1, 2]. However, the first approach limits the range of stimuli explored, and the second approach may not converge in a finite amount of recording time [3]. To efficiently find a preferred stimulus, studies have employed adaptive stimulus selection (also known as "adaptive sampling" or "optimal experimental design") to determine the next stimulus to show given the responses to previous stimuli in a closed-loop experiment [4, 5]. Many adaptive methods have been developed to find the smallest number of stimuli needed to fit parameters of a model that predicts the recorded neuron's activity from the stimulus [6, 7, 8, 9, 10, 11]. When no encoding model exists for a neuron (e.g., neurons in higher visual cortical areas), adaptive methods rely on maximizing the neuron's firing rate via genetic algorithms [12, 13, 14] or gradient ascent [15, 16] to home in on the neuron's preferred stimulus. To our knowledge, all current adaptive stimulus selection methods focus solely on optimizing the firing rate of a single neuron.
Figure 1: Responses of two macaque V4 neurons. A. Different neurons prefer different stimuli; the displayed images evoked 5 of the top 25 largest responses (axes: response in spikes/sec vs. sorted image index). B. Images placed according to their responses (axes: neuron 1 vs. neuron 2 response in spikes/sec); gray dots represent responses to other images. Same neurons as in A.

Developments in neural recording technologies now enable the simultaneous recording of tens to hundreds of neurons [17], each of which has its own preferred stimulus. For example, consider two neurons recorded in V4, a mid-level visual cortical area (Fig. 1A). Whereas neuron 1 responds most strongly to teddy bears, neuron 2 responds most strongly to arranged circular fruit. Both neurons moderately respond to images of animals (Fig. 1B). Given that different neurons have different preferred stimuli, how do we select which stimuli to present when simultaneously recording from multiple neurons? This necessitates defining objective functions for adaptive stimulus selection that are based on a population of neurons rather than any single neuron. Importantly, these objective functions can go beyond simply maximizing the firing rates of neurons, and instead can be optimized for other attributes of the population response, such as maximizing the scatter of the responses in a multi-neuronal response space (Fig. 1B).

We propose Adept, an adaptive stimulus selection method for a population of neurons that "adeptly" chooses the next stimulus to show based on a population objective function. Because the neural responses to candidate stimuli are unknown, Adept utilizes feature embeddings of the stimuli to predict to-be-recorded responses. In this work, we use the feature embeddings of a deep convolutional neural network (CNN) for prediction. We first confirmed with simulations that Adept, using a population objective, elicited larger mean responses and a larger diversity of responses than using a single-neuron objective to optimize the response of each neuron in the population. Then, we ran Adept on V4 population activity recorded during a closed-loop electrophysiological experiment. Images chosen by Adept elicited higher mean firing rates and greater scatter of population responses compared to randomly-chosen images. This demonstrates that Adept is effective at finding stimuli to drive a population of neurons in brain areas far from the sensory periphery.

2 Population objective functions

Depending on the desired outcomes of an experiment, one may favor one objective function over another. Here we discuss different objective functions for adaptive stimulus selection and the resulting responses r ∈ R^p, where the ith element r_i is the response of the ith neuron (i = 1, ..., p) and p is the number of neurons recorded simultaneously. To illustrate the effects of different objective functions, we ran an adaptive stimulus selection method on the activity of two simulated neurons (see details in Section 5.1).

We first consider a single-neuron objective function employed by many adaptive methods [12, 13, 14]. Using this objective function f(r) = r_i, which maximizes the response of the ith neuron of the population, the adaptive method for i = 1 chose stimuli that maximized neuron 1's response (Fig. 2A, red dots). However, images that produced large responses for neuron 2 were not chosen (Fig. 2A, top left gray dots).
A natural population-level extension to this objective function is to maximize the responses of all neurons by defining the objective function to be f(r) = ‖r‖₂. This objective function led to choosing stimuli that maximized responses for neurons 1 and 2 individually, as well as large responses for both neurons together (Fig. 2B).

Another possible objective function is to maximize the scatter of the responses. In particular, we would like to choose the next stimulus such that the response vector r is far away from the previously-seen response vectors r_1, ..., r_M after M chosen stimuli. One way to achieve this is to maximize the average Euclidean distance between r and r_1, ..., r_M, which leads to the objective function f(r, r_1, ..., r_M) = (1/M) Σ_{j=1}^{M} ‖r − r_j‖₂. This objective function led to a large scatter in responses for neurons 1 and 2 (Fig. 2C, red dots near and far from the origin). This is because choosing stimuli that yield both small and large responses produces the largest distances between responses.

Figure 2: Different objective functions for adaptive stimulus selection yield different observed population responses (red dots). Blue * denote responses to stimuli used to initialize the adaptive method (the same for each panel).

Finally, we considered an objective function that favors large responses that are far away from one another. To achieve this, we summed the objectives in Fig. 2B and 2C. The objective function f(r, r_1, ..., r_M) = ‖r‖₂ + (1/M) Σ_{j=1}^{M} ‖r − r_j‖₂ was able to uncover large responses for both neurons (Fig. 2D, red dots far from the origin). It also led to a larger scatter than maximizing the norm of r alone (e.g., compare the red dots in the bottom right of Fig. 2B and Fig. 2D). For these reasons, we use this objective function in the remainder of this work. However, the Adept framework is general and can be used with many different objective functions, including all presented in this section.
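As a concrete illustration of this combined objective, the sketch below evaluates it for a pool of candidates, assuming (unrealistically) that the candidate responses are already known; in practice Adept must predict both terms, as described in Section 3. All array names and sizes are illustrative stand-ins.

```python
# Sketch of the combined population objective f(r) = ||r||_2 + mean_j ||r - r_j||_2,
# assuming candidate responses are known. Illustrative only.
import numpy as np

def objective(r_cand, R_prev):
    """r_cand: (p,) candidate response; R_prev: (M, p) previously-seen responses."""
    norm_term = np.linalg.norm(r_cand)
    avg_dist = np.mean(np.linalg.norm(R_prev - r_cand, axis=1))
    return norm_term + avg_dist

rng = np.random.default_rng(0)
R_prev = rng.poisson(20, size=(10, 200)).astype(float)      # 10 shown stimuli, 200 neurons
candidates = rng.poisson(20, size=(50, 200)).astype(float)  # 50 candidate responses
best = max(range(50), key=lambda s: objective(candidates[s], R_prev))
print("next stimulus index:", best)
```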
3 Using feature embeddings to predict distances

We now formulate the optimization problem using the last objective function in Section 2. Consider a pool of N candidate stimuli s_1, ..., s_N. After showing (t − 1) stimuli, we are given previously-recorded response vectors r_{n_1}, ..., r_{n_{t−1}} ∈ R^p, where n_1, ..., n_{t−1} ∈ {1, ..., N}. In other words, r_{n_j} is the vector of responses to the stimulus s_{n_j}. At the tth iteration of adaptive stimulus selection, we choose the index n_t of the next stimulus to show by the following:

  n_t = argmax_{s ∈ {1,...,N} \ {n_1,...,n_{t−1}}}  [ ‖r_s‖₂ + (1/(t−1)) Σ_{j=1}^{t−1} ‖r_s − r_{n_j}‖₂ ]    (1)

where r_s is the unseen population response vector to stimulus s_s. If the r_s were known, we could directly optimize Eqn. 1. However, in an online setting, we do not have access to the r_s. Instead, we can directly predict the norm and average distance terms in Eqn. 1 by relating distances in neural response space to distances in a feature embedding space. The key idea is that if two stimuli have similar feature embeddings, then the corresponding neural responses will have similar norms and average distances. Concretely, consider feature embedding vectors x_1, ..., x_N ∈ R^q corresponding to candidate stimuli s_1, ..., s_N. For example, we can use the activity of q neurons from a CNN as a feature embedding vector for natural images [18]. To predict the norm of the unseen response vector r_s ∈ R^p, we use kernel regression with the previously-recorded response vectors r_{n_1}, ..., r_{n_{t−1}} as training data [19]. To predict the distance between r_s and a previously-recorded response vector r_{n_j}, we extend kernel regression to account for the paired nature of distances. Thus, the norm and average distance in Eqn. 1 for the unseen response vector r_s to the sth candidate stimulus are predicted by the following:

  ‖r_s‖₂ ≈ Σ_k [ K(x_s, x_{n_k}) / Σ_ℓ K(x_s, x_{n_ℓ}) ] ‖r_{n_k}‖₂ ,
  ‖r_s − r_{n_j}‖₂ ≈ Σ_k [ K(x_s, x_{n_k}) / Σ_ℓ K(x_s, x_{n_ℓ}) ] ‖r_{n_k} − r_{n_j}‖₂    (2)

where k, ℓ ∈ {1, ..., t − 1}. Here we use the radial basis function kernel K(x_j, x_k) = exp(−‖x_j − x_k‖₂² / h²) with kernel bandwidth h, although other kernels can be used.

We tested the performance of this approach against three other possible prediction approaches. The first two approaches use linear ridge regression and kernel regression, respectively, to predict r_s directly; the prediction is then used to evaluate the objective in place of r_s. The third approach is a linear ridge regression version of Eqn. 2 that directly predicts ‖r_s‖₂ and ‖r_s − r_{n_j}‖₂. To compare the performance of these approaches, we developed a testbed in which we sampled two distinct populations of neurons from the same CNN, and asked how well one population can predict the responses of the other population using the different approaches described above. Formally, we let x_1, ..., x_N be feature embedding vectors of q = 500 CNN neurons, and let r_{n_1}, ..., r_{n_800} be the responses of p = 200 different CNN neurons to 800 natural images. CNN neurons were from the same GoogLeNet CNN [18] (see CNN details in Results). To compute performance, we took the Pearson's correlation ρ between the predicted and actual objective values on a held-out set of responses not used for training. We also tracked the computation time τ (computed on an Intel Xeon 2.3 GHz CPU with 36 GB RAM), because these computations need to occur between stimulus presentations in an electrophysiological experiment. The approach in Eqn. 2 performed the best (ρ = 0.64) and was the fastest (τ = 0.2 s) compared to the other prediction approaches (ρ = 0.39, 0.41, 0.23 and τ = 12.9 s, 1.5 s, 48.4 s for the three other approaches, respectively). The remarkably faster speed of Eqn. 2 over the other approaches comes from the evaluation of the objective function (fast matrix operations), the fact that no training of linear regression weight vectors is needed, and the fact that distances are directly predicted (unlike the approaches that first predict r_s and then must re-compute distances between the predicted r_s and r_{n_1}, ..., r_{n_{t−1}} for each candidate stimulus s). Due to its performance and fast computation time, we use the prediction approach in Eqn. 2 for the remainder of this work.
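A minimal sketch of the Nadaraya-Watson predictions in Eqn. 2 follows, assuming embeddings and recorded responses are stored as NumPy arrays; variable names and sizes are illustrative, not the actual implementation.

```python
# Sketch of Eqn. 2: kernel-regression prediction of an unseen response's norm and
# its average distance to recorded responses, from feature embeddings. Illustrative.
import numpy as np

def rbf_weights(x_s, X_obs, h=200.0):
    d2 = np.sum((X_obs - x_s) ** 2, axis=1)
    k = np.exp(-d2 / h**2)
    return k / k.sum()                                # normalized kernel weights

def predict_objective(x_s, X_obs, R_obs):
    """X_obs: (t-1, q) embeddings of shown stimuli; R_obs: (t-1, p) responses."""
    w = rbf_weights(x_s, X_obs)
    norms = np.linalg.norm(R_obs, axis=1)             # ||r_{n_k}||_2 for each shown stimulus
    pred_norm = w @ norms                             # predicted ||r_s||_2
    # pairwise distances ||r_{n_k} - r_{n_j}||_2, weighted over k for each j
    D = np.linalg.norm(R_obs[:, None, :] - R_obs[None, :, :], axis=2)
    pred_avg_dist = np.mean(w @ D)                    # predicted mean_j ||r_s - r_{n_j}||_2
    return pred_norm + pred_avg_dist

rng = np.random.default_rng(1)
X_obs = rng.standard_normal((30, 500))
R_obs = rng.poisson(15, size=(30, 96)).astype(float)
print(round(predict_objective(rng.standard_normal(500), X_obs, R_obs), 2))
```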
4 Adept algorithm

We now combine the optimization problem in Eqn. 1 and the prediction approach in Eqn. 2 to formulate the Adept algorithm. We first discuss the adaptive stimulus selection paradigm (Fig. 3, left) and then the Adept algorithm (Fig. 3, right).

For the adaptive stimulus selection paradigm (Fig. 3, left), the experimenter first selects a candidate stimulus pool s_1, ..., s_N from which Adept chooses, where N is large. For a vision experiment, the candidate stimulus pool could comprise natural images, textures, or sinusoidal gratings. For an auditory experiment, the stimulus pool could comprise natural sounds or pure tones. Next, feature embedding vectors x_1, ..., x_N ∈ R^q are computed for each candidate stimulus, and the pre-computed N × N kernel matrix K(x_j, x_k) (i.e., similarity matrix) is input into Adept. For visual neurons, the feature embeddings could come from a bank of Gabor-like filters with different orientations and spatial frequencies [20], or from a more expressive model, such as CNN neurons in a middle layer of a pre-trained CNN. Because Adept only takes as input the kernel matrix K(x_j, x_k) and not the feature embeddings x_1, ..., x_N, one could alternatively use a similarity matrix computed from psychophysical data to define the similarity between stimuli if no model exists. The previously-recorded response vectors r_{n_1}, ..., r_{n_{t−1}} are also input into Adept, which then outputs the next chosen stimulus s_{n_t} to show. While the observer views s_{n_t}, the response vector r_{n_t} is recorded and appended to the previously-recorded response vectors. This procedure is repeated iteratively until the end of the recording session. To show as many stimuli as possible, Adept does not choose the same stimulus more than once.

For the Adept algorithm (Fig. 3, right), we initialize by randomly choosing a small number of stimuli (e.g., N_init = 5) from the large pool of N candidate stimuli and presenting them to the observer. Using the responses to these stimuli R(:, 1:N_init), Adept then adaptively chooses a new stimulus by finding the candidate stimulus that yields the largest objective (in this case, using the objective defined by Eqns. 1 and 2). This search is carried out by evaluating the objective for every candidate stimulus. There are three primary reasons why Adept is computationally fast enough to consider all candidate stimuli. First, the kernel matrix K_X is pre-computed and easily indexed. Second, the prediction of the norm and average distance is computed with fast matrix operations. Third, Adept updates the distance matrix D_R, which contains the pairwise distances between recorded response vectors, instead of re-computing D_R at each iteration.

Algorithm 1: Adept algorithm
Input: N candidate stimuli, feature embeddings X (q × N), kernel bandwidth h (hyperparameter)
Initialization:
  K_X(j, k) = exp(−‖X(:, j) − X(:, k)‖₂² / h²) for all j, k
  R(:, 1:N_init) ← responses to N_init initial stimuli
  D_R(j, k) = ‖R(:, j) − R(:, k)‖₂ for j, k = 1, ..., N_init
  ind_obs ← indices of the N_init observed stimuli
Online algorithm:
for tth stimulus to show do
  for sth candidate stimulus do
    k_X = K_X(ind_obs, s) / Σ_{ℓ ∈ ind_obs} K_X(ℓ, s)
    norms(s) ← predicted ‖r_s‖₂ = k_X^T diag(√(R^T R))   % predict norm from recorded responses
    avgdists(s) ← predicted (1/(t−1)) Σ_ℓ ‖r_s − r_{n_ℓ}‖₂ = mean(k_X^T D_R)   % predict average distance from recorded responses
  end
  ind_obs(N_init + t) ← argmax(norms + avgdists)
  R(:, N_init + t) ← recorded responses to chosen stimulus
  update D_R with ‖R(:, N_init + t) − R(:, ℓ)‖₂ for all ℓ
end

Figure 3: Flowchart of the adaptive sampling paradigm (left: candidate stimulus pool → model feature embeddings → similarity K(x_j, x_k) → Adept → chosen stimulus → observer, e.g., monkey → recorded responses) and the Adept algorithm (right).
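For concreteness, here is a minimal Python sketch of the online loop in Algorithm 1. The function get_response is a hypothetical stand-in for the recorded population response (in an experiment, the observer supplies it), and the sketch recomputes the distance matrix each iteration rather than updating it incrementally; it is an illustration under these assumptions, not the actual implementation.

```python
# Sketch of Algorithm 1's online loop. `get_response` is a purely illustrative
# stand-in for the recorded population response to a chosen stimulus.
import numpy as np

def run_adept(X, get_response, n_select, n_init=5, h=200.0, seed=0):
    """X: (q, N) feature embeddings for N candidate stimuli."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    d2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)
    K = np.exp(-d2 / h**2)                       # precomputed N x N kernel matrix
    ind_obs = list(rng.choice(N, n_init, replace=False))
    R = [get_response(i) for i in ind_obs]       # responses to initial stimuli
    for _ in range(n_select):
        Rm = np.array(R)                         # (t-1, p) recorded responses
        norms = np.linalg.norm(Rm, axis=1)
        D = np.linalg.norm(Rm[:, None] - Rm[None, :], axis=2)
        Kc = K[ind_obs]                          # kernel rows for observed stimuli
        W = Kc / Kc.sum(axis=0, keepdims=True)   # normalized weights per candidate
        score = W.T @ norms + (W.T @ D).mean(axis=1)   # Eqn. 2 predictions of Eqn. 1
        score[ind_obs] = -np.inf                 # never repeat a stimulus
        s = int(np.argmax(score))
        ind_obs.append(s)
        R.append(get_response(s))
    return ind_obs
```

A call might look like run_adept(X, lambda i: responses[i], n_select=100), where responses holds simulated or pre-recorded activity.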
5 Results

We tested Adept in two settings. First, we tested Adept on a surrogate for the brain: a pre-trained CNN. This allowed us to perform comparisons between methods with a noiseless system. Second, in a closed-loop electrophysiological experiment, we performed Adept on population activity recorded in macaque V4. In both settings, we used the same candidate image pool of N ≈ 10,000 natural images from the McGill natural image dataset [21] and Google image search [22]. For the predictive feature embeddings in both settings, we used responses from a pre-trained CNN different from the CNN used as a surrogate for the brain in the first setting. The motivation to use CNNs was inspired by the recent successes of CNNs in predicting neural activity in V4 [23].

5.1 Testing Adept on CNN neurons

The testbed for Adept involved two different CNNs. One CNN is the surrogate for the brain. For this CNN, we took responses of p = 200 neurons in a middle layer of the pre-trained ResNet CNN [24] (layer 25 of 50, named "res3dx"). A second CNN is used for feature embeddings to predict responses of the first CNN. For this CNN, we took responses of q = 750 neurons in a middle layer of the pre-trained GoogLeNet CNN [18] (layer 5 of 10, named "icp4_out"). Both CNNs were trained for image classification but had substantially different architectures. Pre-trained CNNs were downloaded from MatConvNet [25], with the PVT version of GoogLeNet [26].

We ran Adept for 2,000 out of the 10,000 candidate images (with N_init = 5 and kernel bandwidth h = 200; similar results were obtained for different h), and compared the CNN responses to those for 2,000 randomly-chosen images. We asked two questions pertaining to the two terms in the objective function in Eqn. 1. First, are responses larger for Adept than for randomly-chosen images? Second, to what extent does Adept produce larger scatter of responses than if we had chosen images at random? A larger scatter implies a greater diversity in evoked population responses (Fig. 1B).

To address the first question, we computed the mean response across all 2,000 images for each CNN neuron. The mean responses using Adept were on average 15.5% larger than the mean responses to randomly-chosen images (Fig. 4A; the difference in means was significantly greater than zero, p < 10⁻⁴). For the second question, we assessed the amount of response scatter by computing the amount of variance captured by each dimension. We applied PCA separately to the responses to images chosen by Adept and to those for images selected randomly. For each dimension, we computed the ratio of the Adept eigenvalue to the randomly-chosen-image eigenvalue. In this way, we compared the dimensions of greatest variance, followed by the dimensions of second-most variance, and so on. Ratios above 1 indicate that Adept explored a dimension more than the corresponding ordered dimension of random selection. We found that Adept produced larger response scatter than randomly-chosen images for many dimensions (Fig. 4B). Ratios for dimensions of lesser variance (e.g., dimensions 10 to 75) are nearly as meaningful as those of the dimensions of greatest variance (i.e., dimensions 1 to 10), as the top 10 dimensions explained only 16.8% of the total variance (Fig. 4B, inset).
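The eigenvalue-ratio comparison can be sketched as follows, assuming two response matrices are available; the random draws below are stand-ins for actual responses.

```python
# Sketch of the scatter comparison: PCA applied separately to Adept-chosen and
# randomly-chosen responses, then per-dimension eigenvalue ratios. Illustrative only.
import numpy as np

def pca_eigvals(R):
    """R: (n_images, p) responses; returns PC variances in descending order."""
    Rc = R - R.mean(axis=0)
    cov = Rc.T @ Rc / (Rc.shape[0] - 1)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

rng = np.random.default_rng(2)
R_adept = rng.standard_normal((2000, 200)) * 1.2   # stand-in for Adept responses
R_rand = rng.standard_normal((2000, 200))          # stand-in for random-selection responses
ratios = pca_eigvals(R_adept)[:75] / pca_eigvals(R_rand)[:75]
print(ratios[:5])   # ratios > 1 mean Adept explored that dimension more
```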
Figure 4: CNN testbed for Adept. A. Mean responses (arbitrary units) to images chosen by Adept were greater than to randomly-chosen images. B. Adept produced higher response variance for each PC dimension than when randomly choosing images. Inset: percent variance explained. C. Relative to the full objective function in Eqn. 1, population objective functions (green) yielded higher response mean and variance than those of single-neuron objective functions (blue). D. Feature embeddings for all CNN layers were predictive. Error bars are ± s.d. across 10 runs.

Next, we asked to what extent optimizing a population objective function performs better than optimizing a single-neuron objective function. For the single-neuron case, we implemented three different methods. First, we ran Adept to optimize the response of the single CNN neuron with the largest mean response ("Adept-1"). Second, we applied Adept in a sequential manner to optimize the responses of 50 randomly-chosen CNN neurons individually; after optimizing a CNN neuron for 40 images, optimization switched to the next CNN neuron ("Adept-50"). Third, we sequentially optimized 50 randomly-chosen CNN neurons individually using a genetic algorithm ("genetic-50"), similar to the ones proposed in previous studies [12, 13, 14]. We found that Adept produced higher mean responses than the three single-neuron methods (Fig. 4C, blue points in left panel), likely because Adept chose images that evoked large responses across neurons together. All methods produced higher mean responses than randomly choosing images (Fig. 4C, black point above blue points in left panel). Adept also produced higher mean eigenvalue ratios across the top 75 PCA dimensions than the three single-neuron methods (Fig. 4C, blue points in right panel). This indicates that Adept, using a population objective, is better able to optimize population responses than using a single-neuron objective to optimize the response of each neuron in the population.

We then modified the Adept objective function to include only the norm term ("Adept-norm", Fig. 2B) and only the average distance term ("Adept-avgdist", Fig. 2C). Both of these population methods performed better than the single-neuron methods (Fig. 4C, green points below blue points). While their performance was comparable to Adept with the full objective function, upon closer inspection we observed differences in performance that matched our intuition about the objective functions. The mean response ratio for Adept using the full objective function versus Adept-norm was close to 1 (Fig. 4C, left panel, Adept-norm on red-dashed line), but the eigenvalue ratio was greater than 1 (Fig. 4C, right panel, Adept-norm above red-dashed line, p < 0.005). Thus, Adept-norm maximizes mean responses at the expense of less scatter. On the other hand, Adept-avgdist produced a lower mean response than Adept using the full objective function (Fig. 4C, left panel, Adept-avgdist above red-dashed line, p < 10⁻⁴), but an eigenvalue ratio of 1 (Fig. 4C, right panel, Adept-avgdist on red-dashed line). Thus, Adept-avgdist increases the response scatter at the expense of a lower mean response.
The results in this section were based on middle-layer neurons in the GoogLeNet CNN predicting middle-layer neurons in the ResNet CNN. However, it is possible that CNN neurons in other layers may be better predictors than those in a middle layer. To test for this, we asked which layers of the GoogLeNet CNN were most predictive of the objective values of the middle layer of the ResNet CNN. For each layer of increasing depth, we computed the correlation between the predicted objective (using 750 CNN neurons from that layer) and the actual objective of the ResNet responses (200 CNN neurons) (Fig. 4D). We found that all layers were predictive (ρ ≈ 0.6), although there was variation across layers. Middle layers were slightly more predictive than deeper layers, likely because the deeper layers of GoogLeNet have a different embedding of natural images than the middle layer of the ResNet CNN.

5.2 Testing Adept on V4 population recordings

Next, we tested Adept in a closed-loop neurophysiological experiment. We implanted a 96-electrode array in macaque V4, whose neurons respond differently to a wide range of image features, including orientation, spatial frequency, color, shape, texture, and curvature, among others [27]. Currently, no existing parametric encoding model fully captures the stimulus-response relationship of V4 neurons. The current state-of-the-art model for predicting the activity of V4 neurons uses the output of middle-layer neurons in a CNN previously trained without any information about the responses of V4 neurons [23]. Thus, we used a pre-trained CNN (GoogLeNet) to obtain the predictive feature embeddings.

The experimental task flow proceeded as follows. On each trial, a monkey fixated on a central dot while an image flashed four times in the aggregate receptive fields of the recorded V4 neurons. After the fourth flash, the monkey made a saccade to a target dot (whose location was unrelated to the shown image), for which he received a juice reward. During this task, we recorded threshold crossings on each electrode (referred to as "spikes"), where the threshold was defined as a multiple of the RMS voltage set independently for each channel. This yielded 87 to 96 neural units in each session. The spike counts for each neural unit were averaged across the four 100 ms flashes to obtain mean responses. The mean response vector for the p neural units was then appended to the previously-recorded responses and input into Adept. Adept then output an image to show on the next trial. For the predictive feature embeddings, we used q = 500 CNN neurons in the fifth layer of the GoogLeNet CNN (kernel bandwidth h = 200). In each recording session, the monkey typically performed 2,000 trials (i.e., 2,000 of the N = 10,000 natural images would be sampled). Each Adept run started with N_init = 5 randomly-chosen images.

We first recorded a session in which we used Adept during one block of trials and randomly chose images in another block of trials. To qualitatively compare Adept and randomly selecting images, we first applied PCA to the response vectors of both blocks, and plotted the top two PCs (Fig. 5A, left panel). Adept uncovers more responses that are far away from the origin (Fig. 5A, left panel, red dots farther from the black * than black dots). For visual clarity, we also computed kernel density estimates for the Adept responses (p_Adept) and the responses to randomly-chosen images (p_random), and plotted the difference p_Adept − p_random (Fig. 5A, right panel).
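The density comparison just described can be sketched with off-the-shelf kernel density estimation; the PC scores below are random stand-ins for recorded data, and the grid size is an arbitrary choice.

```python
# Sketch of the Fig. 5A density comparison: KDEs over top-2 PC scores of each
# block, evaluated on a grid and differenced. Illustrative stand-in data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
pcs_adept = rng.standard_normal((2, 500)) * 1.5   # stand-in (2 PCs x 500 trials)
pcs_rand = rng.standard_normal((2, 500))

kde_adept = gaussian_kde(pcs_adept)
kde_rand = gaussian_kde(pcs_rand)

xs = np.linspace(-4, 4, 50)
grid = np.array(np.meshgrid(xs, xs)).reshape(2, -1)   # (2, 2500) evaluation points
diff = kde_adept(grid) - kde_rand(grid)               # p_Adept - p_random on the grid
print(diff.min(), diff.max())
```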
Responses for Adept were denser than those for randomly-chosen images farther from the origin, whereas the opposite was true closer to the origin (Fig. 5A, right panel, red region farther from the origin than black region). These plots suggest that Adept uncovers large responses that are far from one another. Quantitatively, we verified that Adept chose images with larger objective values in Eqn. 1 than randomly-chosen images (Fig. 5B). This result is not trivial because it relies on the ability of the CNN to predict V4 population responses. If the CNN predicted V4 responses poorly, the objective evaluated on the V4 responses to images chosen by Adept could be lower than that evaluated on random images.

We then compared Adept and random stimulus selection across 7 recording sessions, including the above session (450 trials per block, with three sessions with the Adept block before the random-selection block, three sessions with the opposite ordering, and one session with interleaved trials). We found that the images chosen by Adept produced on average 19.5% higher mean responses than randomly-chosen images (Fig. 5C; the difference in mean responses was significantly greater than zero, p < 10⁻⁴). We also found that images chosen by Adept produced greater response scatter than randomly-chosen images, as the mean ratios of eigenvalues were greater than 1 (Fig. 5D, dimensions 1 to 5). Yet there were dimensions for which the mean ratios of eigenvalues were less than 1 (Fig. 5D, dimensions 9 and 10). These dimensions explained little overall variance (< 5% of the total response variance).

Finally, we asked to what extent the different CNN layers predict the objective of V4 responses, as in Fig. 4D. We found that, using 500 CNN neurons for each layer, all layers had some predictive ability (Fig. 5E, ρ > 0). Deeper layers (5 to 10) tended to have better prediction than superficial layers (1 to 4). To establish a noise level for the V4 responses, we also predicted the norm and average distance for one session (day 1) with the V4 responses of another session (day 2), where the same images were shown each day. In other words, we used the V4 responses of day 2 as feature embeddings to predict the V4 responses of day 1. The correlation of prediction was much higher (ρ ≈ 0.5) than that of any CNN layer (ρ < 0.25).

Figure 5: Closed-loop experiments in V4. A. Top 2 PCs of V4 responses to stimuli chosen by Adept and random selection (500 trials each). Left: scatter plot, where each dot represents the population response to one stimulus. Right: difference of kernel densities, p_Adept − p_random; black * denotes a zero response for all neural units. B. Objective function evaluated across trials (one stimulus per trial) using V4 responses; same data as in A. C. Difference in mean responses across neural units from 7 sessions. D. Ratio of eigenvalues for different PC dimensions; error bars: ± s.e.m. E. Ability of different CNN layers to predict V4 responses; for comparison, we also used V4 responses from a different day to predict the same V4 responses. Error bars: ± s.d. across 100 runs.
This discrepancy indicates that finding feature embeddings that are more predictive of V4 responses is one way to improve Adept's performance.

5.3 Testing Adept for robustness to neural noise and overfitting

A potential concern for an adaptive method is that stimulus responses are susceptible to neural noise. Specifically, spike counts are subject to Poisson-like variability, which might not be entirely averaged away based on a finite number of stimulus repeats. Moreover, adaptation to stimuli and changes in attention or motivation may cause a gain factor to scale responses dynamically across a session [9]. To examine how Adept performs in the presence of noise, we first recorded a "ground-truth", spike-sorted dataset in which 2,000 natural images were presented (100 ms flashes, 5 to 30 repeats per image, randomly presented throughout the session). We then re-ran Adept on simulated responses under three different noise models (whose parameters were fit to the ground-truth data): a Poisson model ("Poisson noise"), a model that scales each response by a gain factor that varies independently from trial to trial [28] ("trial-to-trial gain"), and the same gain model but where the gain varies smoothly across trials ("slowly-drifting gain"). Because the drift in gain was randomly generated and may not match the actual drift in the recorded dataset, we also considered responses in which the drift was estimated across the recording session and added to the mean responses as their corresponding images were chosen ("recorded drift"). For reference, we also ran Adept on responses with no noise ("no noise").

To compare performance across the different settings, we computed the mean response and variance ratios between responses based on Adept and random selection (Fig. 6A). All settings showed better performance using Adept than random selection (Fig. 6A, all points above red-dashed line), and Adept performed best with no noise (Fig. 6A, "no noise" bar at or above the others). For a fair comparison, ratios were computed with the ground-truth responses, where only the chosen images could differ across settings. These results indicate that, although Adept would benefit from removing neural noise, Adept continues to outperform random selection in the presence of noise.

Another concern for an adaptive method is overfitting. For example, when no relationship exists between the CNN feature embeddings and neural responses, Adept may overfit to a spurious stimulus-response mapping and perform worse than random selection. To address this concern, we performed two analyses using the same ground-truth dataset as in Fig. 6A.

Figure 6: A. Adept is robust to neural noise. B. Adept shows no overfitting when responses are shuffled across images. Error bars: ± s.d. across 10 runs.
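For reference, a sketch of generating the three noise models follows; the ground-truth rates and all parameter values here are illustrative stand-ins, not fitted values from the recordings.

```python
# Sketch of the three noise models in the robustness test. All parameters and
# the "ground-truth" mean rates are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(4)
mean_rates = rng.uniform(5, 40, size=(2000, 96))     # stand-in ground-truth means

# Poisson noise: counts drawn around the ground-truth mean.
poisson = rng.poisson(mean_rates)

# Trial-to-trial gain: each trial's responses scaled by an independent gain factor.
gain = np.clip(rng.normal(1.0, 0.2, size=(2000, 1)), 0.1, None)
trial_gain = rng.poisson(gain * mean_rates)

# Slowly-drifting gain: the gain varies smoothly across trials (smoothed noise).
drift = np.convolve(rng.normal(0, 0.05, 2000), np.ones(100) / 100, mode="same")
slow_gain = rng.poisson(np.clip(1.0 + drift, 0.1, None)[:, None] * mean_rates)
```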
For the first analysis, we ran Adept on the ground-truth responses (choosing 500 of the 2,000 candidate images), which yielded on average a 6% larger mean response and a 21% larger response scatter (averaged over the top 5 PCs) than random selection (Fig. 6B, unshuffled embeds.). Next, to break any stimulus-response relationship, we shuffled all of the ground-truth responses across images and re-ran Adept. Adept performed no worse than random selection (Fig. 6B, shuffled embeds., blue points on red-dashed line). For the second analysis, we asked if Adept focuses on the most predictable neurons to the detriment of other neurons. We shuffled all of the ground-truth responses across images for half of the neurons, and ran Adept on the full population. Adept performed better than random selection for the subset of neurons with unshuffled responses (Fig. 6B, unshuffled subset), but no worse than random selection for the subset with shuffled responses (Fig. 6B, shuffled subset, green points on red-dashed line). Adept showed no overfitting in either scenario, likely because Adept cannot choose exceedingly similar images (i.e., images differing by only a few pixels) from its discrete candidate pool.

6 Discussion

Here we proposed Adept, an adaptive method for selecting stimuli to optimize neural population responses. To our knowledge, this is the first adaptive method to consider a population of neurons together. We found that Adept, using a population objective, is better able to optimize population responses than using a single-neuron objective to optimize the response of each neuron in the population (Fig. 4C). While Adept can flexibly incorporate different feature embeddings, we take advantage of the recent breakthroughs in deep learning and apply them to adaptive stimulus selection. Adept does not try to predict the response of each V4 neuron, but rather uses the similarity of CNN feature embeddings across images to predict the similarity of the V4 population responses to those images.

Widely studied neural phenomena, such as changes in responses due to attention [29] and trial-to-trial variability [30, 31], likely depend on mean response levels [32]. When recording from a single neuron, one can optimize to produce large mean responses in a straightforward manner. For example, one can optimize the orientation and spatial frequency of a sinusoidal grating to maximize a neuron's firing rate [9]. However, when recording from a population of neurons, identifying stimuli that optimize the firing rate of each neuron can be infeasible due to limited recording time. Moreover, neurons far from the sensory periphery tend to be more responsive to natural stimuli [33], and the search space for natural stimuli is vast. Adept represents a principled way to efficiently search through a space of natural stimuli to optimize the responses of a population of neurons. Experimenters can run Adept for a recording session, and then present the Adept-chosen stimuli in subsequent sessions when probing neural phenomena.

A future challenge for adaptive stimulus selection is to generate natural images rather than selecting them from a pre-existing pool of candidates. For Adept, one could use a parametric model to generate natural images, such as a generative adversarial network [34], and optimize Eqn. 1 with gradient-based or Bayesian optimization.

Acknowledgments

B.R.C. was supported by a BrainHub Richard K. Mellon Fellowship. R.C.W. was supported by NIH T32 GM008208, T90 DA022762, and the Richard K. Mellon Foundation. K.A.
was supported by NSF GRFP 1747452. M.A.S. and B.M.Y. were supported by NSF-NCS BCS-1734901/1734916. M.A.S. was supported by NIH R01 EY022928 and NIH P30 EY008098. B.M.Y. was supported by NSF-NCS BCS-1533672, NIH R01 HD071686, NIH R01 NS105318, and Simons Foundation 364994.

References

[1] D. Ringach and R. Shapley, "Reverse correlation in neurophysiology," Cognitive Science, vol. 28, no. 2, pp. 147–166, 2004.
[2] N. C. Rust and J. A. Movshon, "In praise of artifice," Nature Neuroscience, vol. 8, no. 12, pp. 1647–1650, 2005.
[3] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli, "Spike-triggered neural characterization," Journal of Vision, vol. 6, no. 4, pp. 13–13, 2006.
[4] J. Benda, T. Gollisch, C. K. Machens, and A. V. Herz, "From response to stimulus: adaptive sampling in sensory physiology," Current Opinion in Neurobiology, vol. 17, no. 4, pp. 430–436, 2007.
[5] C. DiMattina and K. Zhang, "Adaptive stimulus optimization for sensory systems neuroscience," Closing the Loop Around Neural Systems, p. 258, 2014.
[6] C. K. Machens, "Adaptive sampling by information maximization," Physical Review Letters, vol. 88, no. 22, p. 228104, 2002.
[7] C. K. Machens, T. Gollisch, O. Kolesnikova, and A. V. Herz, "Testing the efficiency of sensory coding with optimal stimulus ensembles," Neuron, vol. 47, no. 3, pp. 447–456, 2005.
[8] L. Paninski, "Asymptotic theory of information-theoretic experimental design," Neural Computation, vol. 17, no. 7, pp. 1480–1507, 2005.
[9] J. Lewi, R. Butera, and L. Paninski, "Sequential optimal design of neurophysiology experiments," Neural Computation, vol. 21, no. 3, pp. 619–687, 2009.
[10] M. Park, J. P. Weller, G. D. Horwitz, and J. W. Pillow, "Bayesian active learning of neural firing rate maps with transformed Gaussian process priors," Neural Computation, vol. 26, no. 8, pp. 1519–1541, 2014.
[11] J. W. Pillow and M. Park, "Adaptive Bayesian methods for closed-loop neurophysiology," in Closed Loop Neuroscience (A. E. Hady, ed.), Elsevier, 2016.
[12] E. T. Carlson, R. J. Rasquinha, K. Zhang, and C. E. Connor, "A sparse object coding scheme in area V4," Current Biology, vol. 21, no. 4, pp. 288–293, 2011.
[13] Y. Yamane, E. T. Carlson, K. C. Bowman, Z. Wang, and C. E. Connor, "A neural code for three-dimensional object shape in macaque inferotemporal cortex," Nature Neuroscience, vol. 11, no. 11, pp. 1352–1360, 2008.
[14] C.-C. Hung, E. T. Carlson, and C. E. Connor, "Medial axis shape coding in macaque inferotemporal cortex," Neuron, vol. 74, no. 6, pp. 1099–1113, 2012.
[15] P. Földiák, "Stimulus optimisation in primary visual cortex," Neurocomputing, vol. 38, pp. 1217–1222, 2001.
[16] K. N. O'Connor, C. I. Petkov, and M. L. Sutter, "Adaptive stimulus optimization for auditory cortical neurons," Journal of Neurophysiology, vol. 94, no. 6, pp. 4051–4067, 2005.
[17] I. H. Stevenson and K. P. Kording, "How advances in neural recording affect data analysis," Nature Neuroscience, vol. 14, no. 2, pp. 139–142, 2011.
[18] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[19] G. S. Watson, "Smooth regression analysis," Sankhyā: The Indian Journal of Statistics, Series A, pp. 359–372, 1964.
[20] E. P. Simoncelli and W. T. Freeman, "The steerable pyramid: A flexible architecture for multi-scale derivative computation," in
Proceedings of the International Conference on Image Processing, vol. 3, pp. 444–447, IEEE, 1995.
[21] A. Olmos and F. A. Kingdom, "A biologically inspired algorithm for the recovery of shading and reflectance images," Perception, vol. 33, no. 12, pp. 1463–1473, 2004.
[22] "Google image search." http://images.google.com. Accessed: 2017-04-25.
[23] D. L. Yamins and J. J. DiCarlo, "Using goal-driven deep learning models to understand sensory cortex," Nature Neuroscience, vol. 19, no. 3, pp. 356–365, 2016.
[24] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[25] A. Vedaldi and K. Lenc, "MatConvNet: convolutional neural networks for MATLAB," in Proceedings of the ACM International Conference on Multimedia, 2015.
[26] J. Xiao, "Princeton vision and robotics toolkit," 2013. Available from: http://3dvision.princeton.edu/pvt/GoogLeNet/.
[27] A. W. Roe, L. Chelazzi, C. E. Connor, B. R. Conway, I. Fujita, J. L. Gallant, H. Lu, and W. Vanduffel, "Toward a unified theory of visual area V4," Neuron, vol. 74, no. 1, pp. 12–29, 2012.
[28] I.-C. Lin, M. Okun, M. Carandini, and K. D. Harris, "The nature of shared cortical variability," Neuron, vol. 87, no. 3, pp. 644–656, 2015.
[29] M. R. Cohen and J. H. Maunsell, "Attention improves performance primarily by reducing interneuronal correlations," Nature Neuroscience, vol. 12, no. 12, pp. 1594–1600, 2009.
[30] A. Kohn, R. Coen-Cagli, I. Kanitscheider, and A. Pouget, "Correlations and neuronal population information," Annual Review of Neuroscience, vol. 39, pp. 237–256, 2016.
[31] M. Okun, N. A. Steinmetz, L. Cossell, M. F. Iacaruso, H. Ko, P. Barthó, T. Moore, S. B. Hofer, T. D. Mrsic-Flogel, M. Carandini, et al., "Diverse coupling of neurons to populations in sensory cortex," Nature, vol. 521, no. 7553, pp. 511–515, 2015.
[32] M. R. Cohen and A. Kohn, "Measuring and interpreting neuronal correlations," Nature Neuroscience, vol. 14, no. 7, pp. 811–819, 2011.
[33] G. Felsen, J. Touryan, F. Han, and Y. Dan, "Cortical sensitivity to visual features in natural scenes," PLoS Biology, vol. 3, no. 10, p. e342, 2005.
[34] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
Nonbacktracking Bounds on the Influence in Independent Cascade Models

Emmanuel Abbe (1,2), Sanjeev Kulkarni (2), Eun Jee Lee (1)
(1) Program in Applied and Computational Mathematics, (2) Department of Electrical Engineering, Princeton University
{eabbe, kulkarni, ejlee}@princeton.edu

Abstract

This paper develops upper and lower bounds on the influence measure in a network, more precisely, the expected number of nodes that a seed set can influence in the independent cascade model. In particular, our bounds exploit nonbacktracking walks, Fortuin-Kasteleyn-Ginibre (FKG) type inequalities, and are computed by message passing algorithms. Nonbacktracking walks have recently allowed for headways in community detection, and this paper shows that their use can also impact the influence computation. Further, we provide parameterized versions of the bounds that control the trade-off between the efficiency and the accuracy. Finally, the tightness of the bounds is illustrated with simulations on various network models.

1 Introduction

Influence propagation is concerned with the diffusion of information from initially influenced nodes, called seeds, in a network. Understanding how information propagates in networks has become a central problem in a broad range of fields, such as viral marketing [17], sociology [8, 19, 23], communication [12], epidemiology [20], and social network analysis [24]. One of the most fundamental questions on influence propagation is to estimate the influence, i.e. the expected number of influenced nodes at the end of the propagation given a set of seeds. Estimating the influence is central to diverse research problems related to influence propagation, such as the widely known influence maximization problem: finding a set of k nodes that maximizes the influence.

Recent studies on influence propagation have proposed various algorithms [11, 18, 3, 7, 22, 21] for the influence maximization problem while using Monte Carlo (MC) simulations to approximate the influence. The submodularity argument and the probabilistic error bound on MC give a probabilistic lower bound on the influence that is obtainable by the algorithms in terms of the true maximum influence. Despite its benefits for the influence maximization problem, approximating the influence via MC simulations is far from ideal for large networks; in particular, MC may require a large amount of computation in order to stabilize the approximation.

To overcome the limitations of Monte Carlo simulations, many researchers have taken theoretical approaches to approximating the influence of given seeds in a network. Draief et al. [5] introduced an upper bound for the influence by using the spectral radius of the adjacency matrix. Tighter upper bounds were later suggested in [16], which relate the ratio of influenced nodes in a network to the spectral radius of the so-called Hazard matrix. Further, improved upper bounds which account for sensitive edges were introduced in [15].

In contrast, there has been little work on finding a tight lower bound for the influence. An exception is a work by Khim et al. [13], where the lower bound is obtained by only considering the influence through the maximal-weighted paths. In this paper, we propose both upper and lower bounds on the influence using nonbacktracking walks and Fortuin-Kasteleyn-Ginibre (FKG) type inequalities. The bounds can be efficiently obtained by a message passing implementation.
This shows that nonbacktracking walks can also impact influence propagation, making another case for the use of nonbacktracking walks in graphical model problems as in [14, 9, 2, 1], discussed later in the paper. Further, we provide a parametrized version of the bounds that can adjust the trade-off between the efficiency and the accuracy of the bounds.

2 Background

We introduce here the independent cascade model and provide background for the main results.

Definition 1 (Independent Cascade Model). Consider a directed graph $G = (V, E)$ with $|V| = n$, a transmission probability matrix $P \in [0, 1]^{n \times n}$, and a seed set $S_0 \subseteq V$. For all $u \in V$, let $N^+(u)$ be the set of out-neighbors of node $u$. The independent cascade model $IC(G, P, S_0)$ sequentially generates the influenced set $S_t \subseteq V$ for each discrete time $t \ge 1$ as follows. At time $t$, $S_t$ is initialized to be an empty set. Then, each node $u \in S_{t-1}$ attempts to influence $v \in N^+(u) \setminus \bigcup_{i=0}^{t-1} S_i$ with probability $P_{uv}$, i.e. node $u$ influences its uninfluenced out-neighbor $v$ with probability $P_{uv}$. If $v$ is influenced at time $t$, add $v$ to $S_t$. The process stops at $T$ if $S_T = \emptyset$ at the end of the step $t = T$. The set of the influenced nodes at the end of propagation is defined as $S = \bigcup_{t=0}^{T-1} S_t$.

We often refer to an edge $(u, v)$ as being open if node $u$ influences node $v$. The IC model is equivalent to the live-arc graph model, where the influence happens at once, rather than sequentially. The live-arc graph model first decides the state of every edge with a Bernoulli trial, i.e. edge $(u, v)$ is open independently with probability $P_{uv}$ and closed, otherwise. Then, the set of influenced nodes is defined as the nodes that are reachable from at least one of the seeds by the open edges.

Definition 2 (Influence). The expected number of nodes that are influenced at the end of the propagation process is called the influence (rather than the expected influence, with a slight abuse of terminology) of $IC(G, P, S_0)$, and is defined as

$$\sigma(S_0) = \sum_{v \in V} \mathbb{P}(v \text{ is influenced}). \quad (1)$$

It is shown in [4] that computing the influence $\sigma(S_0)$ in the independent cascade model $IC(G, P, S_0)$ is #P-hard, even with a single seed, i.e. $|S_0| = 1$.
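The live-arc equivalence gives an immediate, if costly, way to approximate the influence of Equation (1) by sampling. The following Python sketch is ours, not the authors': `succ` maps each node to its out-neighbors, `P` maps directed edges to transmission probabilities, and the function names are invented for illustration.

    import random

    def simulate_cascade(succ, P, seeds):
        # One draw of the live-arc model: traverse only edges that come up
        # open (probability P[(u, v)]) and return the influenced set S.
        influenced = set(seeds)
        frontier = list(seeds)
        while frontier:
            u = frontier.pop()
            for v in succ.get(u, ()):
                if v not in influenced and random.random() < P[(u, v)]:
                    influenced.add(v)
                    frontier.append(v)
        return influenced

    def estimate_influence(succ, P, seeds, n_samples=10000):
        # Monte Carlo estimate of sigma(S0) in Equation (1).
        total = sum(len(simulate_cascade(succ, P, seeds)) for _ in range(n_samples))
        return total / n_samples

    # Toy example: a directed 4-cycle with uniform transmission probability 0.3.
    succ = {0: [1], 1: [2], 2: [3], 3: [0]}
    P = {(u, v): 0.3 for u in succ for v in succ[u]}
    print(estimate_influence(succ, P, {0}))

As the complexity discussion in Section 3 makes precise, the number of samples needed for a stable estimate grows with the network size, which is what motivates the closed-form bounds developed below.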
Next, we define nonbacktracking (NB) walks on a directed graph. Nonbacktracking walks have already been used for studying the characteristics of networks. To the best of our knowledge, the use of NB walks in the context of epidemics was first introduced in the paper of Karrer et al. [10] and later applied to percolation in [9]. In particular, Karrer et al. reformulate the spread of influence as a message passing process and demonstrate how the resulting equations can be used to calculate an upper bound on the number of nodes that are susceptible at a given time. As we shall see, we take a different approach to the use of the NB walks, which focuses on the effective contribution of a node in influencing another node and accumulates such contributions to obtain upper and lower bounds. More recently, nonbacktracking walks are used for community detection [14, 2, 1].

Definition 3 (Nonbacktracking Walk). Let $G = (V, E)$ be a directed graph. A nonbacktracking walk of length $k$ is defined as $w^{(k)} = (v_0, v_1, \ldots, v_k)$, where $v_i \in V$ and $(v_{i-1}, v_i) \in E$ for all $i \in [k]$, and $v_{i-1} \neq v_{i+1}$ for $i \in [k-1]$.

We next recall a key inequality introduced by Fortuin et al. [6].

Theorem 1 (FKG Inequality). Let $(\Gamma, \preceq)$ be a distributive lattice, where $\Gamma$ is a finite partially ordered set, ordered by $\preceq$, and let $\mu$ be a positive measure on $\Gamma$ satisfying the following condition: for all $x, y \in \Gamma$,
$$\mu(x \wedge y)\,\mu(x \vee y) \ge \mu(x)\,\mu(y),$$
where $x \wedge y = \max\{z \in \Gamma : z \preceq x,\ z \preceq y\}$ and $x \vee y = \min\{z \in \Gamma : x \preceq z,\ y \preceq z\}$. Let $f$ and $g$ be both increasing (or both decreasing) functions on $\Gamma$. Then,
$$\Big(\sum_{x \in \Gamma} \mu(x)\Big)\Big(\sum_{x \in \Gamma} f(x)\, g(x)\, \mu(x)\Big) \ge \Big(\sum_{x \in \Gamma} f(x)\, \mu(x)\Big)\Big(\sum_{x \in \Gamma} g(x)\, \mu(x)\Big). \quad (2)$$

The FKG inequality is instrumental in studying influence propagation since the probability that a node is influenced is nondecreasing with respect to the partial order of random variables describing the states, open or closed, of the edges.

3 Nonbacktracking bounds on the influence

In this section, we present upper and lower bounds on the influence in the independent cascade model and explain the motivations and intuitions of the bounds. The bounds utilize nonbacktracking walks and FKG inequalities and are computed efficiently by message passing algorithms. In particular, the upper bound on a network based on a graph $G(V, E)$ runs in $O(|V|^2 + |V||E|)$ and the lower bound runs in $O(|V| + |E|)$, whereas Monte Carlo simulation would require $O(|V|^3 + |V|^2|E|)$ computations without knowing the variance of the influence, which is harder to estimate than the influence. The reason for the large computational complexity of MC is that in order to ensure that the standard error of the estimation does not grow with respect to $|V|$, MC requires $O(|V|^2)$ computations. Hence, for large networks, where MC may not be feasible, our algorithms can still provide bounds on the influence. Furthermore, from the proposed upper and lower bounds $\sigma^+$ and $\sigma^-$, we can compute an upper bound on the variance given by $(\sigma^+ - \sigma^-)^2/4$. This could be used to estimate the number of computations needed by MC. Computing the upper bound on the variance with the proposed bounds can be done in $O(|V|^2 + |V||E|)$, whereas computing the variance with MC simulation requires $O(|V|^5 + |V|^4|E|)$.

3.1 Nonbacktracking upper bounds (NB-UB)

We start by defining the following terms for the independent cascade model $IC(G, P, S_0)$, where $G = (V, E)$ and $|V| = n$.

Definition 4. For any $v \in V$, we define the set of in-neighbors $N^-(v) = \{u \in V : (u, v) \in E\}$ and the set of out-neighbors $N^+(v) = \{u \in V : (v, u) \in E\}$.

Definition 5. For any $v \in V$ and $l \in [n-1]$, the set $\mathcal{P}_l(S_0 \to v)$ is defined as the set of all paths with length $l$ from any seed $s \in S_0$ to $v$. We call a path $P$ open iff every edge in $P$ is open. For $l = 0$, we define $\mathcal{P}_0(S_0 \to v)$ as the set (of size one) of the zero-length path containing node $v$ and assume the path $P \in \mathcal{P}_0(S_0 \to v)$ is open iff $v \in S_0$.

Definition 6. For any $v \in V$ and $l \in \{0, \ldots, n-1\}$, we define
$$p(v) = \mathbb{P}(v \text{ is influenced}) \quad (3)$$
$$p_l(v) = \mathbb{P}\big(\textstyle\bigcup_{P \in \mathcal{P}_l(S_0 \to v)} \{P \text{ is open}\}\big) \quad (4)$$
$$p_l(u \to v) = \mathbb{P}\big(\textstyle\bigcup_{P \in \mathcal{P}_l(S_0 \to u),\, P \not\ni v} \{P \text{ is open and edge } (u, v) \text{ is open}\}\big) \quad (5)$$

In other words, $p_l(v)$ is the probability that node $v$ is influenced by open paths of length $l$, i.e. there exists an open path of length $l$ from a seed to $v$, and $p_l(u \to v)$ is the probability that $v$ is influenced by node $u$ with open paths of length $l+1$, i.e. there exists an open path of length $l+1$ from a seed to $v$ that ends with edge $(u, v)$.

Lemma 1. For any $v \in V$,
$$p(v) \le 1 - \prod_{l=0}^{n-1} (1 - p_l(v)). \quad (6)$$
For any $v \in V$ and $l \in [n-1]$,
$$p_l(v) \le 1 - \prod_{u \in N^-(v)} (1 - p_{l-1}(u \to v)). \quad (7)$$

Lemma 1, which can be proved by FKG inequalities, suggests that given $p_{l-1}(u \to v)$, we may compute an upper bound on the influence. Ideally, $p_{l-1}(u \to v)$ can be computed by considering all paths that end with $(u, v)$ having length $l$.
However, this results in exponential complexity $O(n^l)$, as $l$ goes up to $n-1$. Thus, we present an efficient way to compute an upper bound $UB_{l-1}(u \to v)$ on $p_{l-1}(u \to v)$, which in turn gives an upper bound $UB_l(v)$ on $p_l(v)$, with the following recursion formula.

Definition 7. For all $l \in \{0, \ldots, n-1\}$ and $u, v \in V$ such that $(u, v) \in E$, $UB_l(u) \in [0, 1]$ and $UB_l(u \to v) \in [0, 1]$ are defined recursively as follows.

Initial condition: For every $s \in S_0$, $s^+ \in N^+(s)$, $u \in V \setminus S_0$, and $v \in N^+(u)$,
$$UB_0(s) = 1, \quad UB_0(s \to s^+) = P_{ss^+} \quad (8)$$
$$UB_0(u) = 0, \quad UB_0(u \to v) = 0. \quad (9)$$

Recursion: For every $l \in [n-1]$, $s \in S_0$, $s^+ \in N^+(s)$, $s^- \in N^-(s)$, $u \in V \setminus S_0$, and $v \in N^+(u) \setminus S_0$,
$$UB_l(s) = 0, \quad UB_l(s \to s^+) = 0, \quad UB_l(s^- \to s) = 0 \quad (10)$$
$$UB_l(u) = 1 - \prod_{w \in N^-(u)} \big(1 - UB_{l-1}(w \to u)\big) \quad (11)$$
$$UB_l(u \to v) = \begin{cases} P_{uv}\Big(1 - \dfrac{1 - UB_l(u)}{1 - UB_{l-1}(v \to u)}\Big), & \text{if } v \in N^-(u) \\[4pt] P_{uv}\, UB_l(u), & \text{otherwise.} \end{cases} \quad (12)$$

Equation (10) follows from the fact that for any seed node $s \in S_0$ and for all $l > 0$, the probabilities $p_l(s) = 0$, $p_l(s \to s^+) = 0$, and $p_l(s^- \to s) = 0$. A naive way to compute $UB_l(u \to v)$ is $UB_l(u \to v) = P_{uv}\, UB_l(u)$, but this results in an extremely loose bound due to the backtracking. For a tighter bound, we use nonbacktracking in Equation (12), i.e. when computing $UB_l(u \to v)$, we ignore the contribution of $UB_{l-1}(v \to u)$.

Theorem 2. For any independent cascade model $IC(G, P, S_0)$,
$$\sigma(S_0) \le \sum_{v \in V} \Big(1 - \prod_{l=0}^{n-1} (1 - UB_l(v))\Big) =: \sigma^+(S_0), \quad (13)$$
where $UB_l(v)$ is obtained recursively as in Definition 7.

Next, we present the Nonbacktracking Upper Bound (NB-UB) algorithm, which computes $UB_l(v)$ and $UB_l(u \to v)$ by message passing. At the $l$-th iteration, the variables in NB-UB represent the following.

- $S_l$: the set of nodes that are processed at the $l$-th iteration.
- $M_{curr}(v) = \{(u, UB_{l-1}(u \to v)) : u \text{ is an in-neighbor of } v \text{ and } u \in S_{l-1}\}$: the set of pairs (previously processed in-neighbor $u$ of $v$, incoming message from $u$ to $v$).
- $M_{Src}(v) = \{u : u \text{ is an in-neighbor of } v \text{ and } u \in S_{l-1}\}$: the set of in-neighbor nodes of $v$ that were processed at the previous step.
- $M_{curr}(v)[u] = UB_{l-1}(u \to v)$: the incoming message from $u$ to $v$.
- $M_{next}(v) = \{(u, UB_l(u \to v)) : u \text{ is an in-neighbor of } v \text{ and } u \in S_l\}$: the set of pairs (currently processed in-neighbor $u$, next iteration's incoming message from $u$ to $v$).

Algorithm 1 Nonbacktracking Upper Bound (NB-UB)
  Initialize: $UB_l(v) = 0$ for all $0 \le l \le n-1$ and $v \in V$
  Initialize: Insert $(s, 1)$ to $M_{next}(s)$ for all $s \in S_0$
  for $l = 0$ to $n-1$ do
    for $u \in S_l$ do
      $M_{curr}(u) = M_{next}(u)$
      Clear $M_{next}(u)$
      $UB_l(u)$ = ProcessIncomingMsgUB($M_{curr}(u)$)
      for $v \in N^+(u) \setminus S_0$ do
        $S_{l+1}$.insert($v$)
        if $v \in M_{Src}(u)$ then
          $UB_l(u \to v)$ = GenerateOutgoingMsgUB($M_{curr}(u)[v]$, $UB_l(u)$, $P_{uv}$)
          $M_{next}(v)$.insert($(u, UB_l(u \to v))$)
        else
          $UB_l(u \to v)$ = GenerateOutgoingMsgUB($0$, $UB_l(u)$, $P_{uv}$)
          $M_{next}(v)$.insert($(u, UB_l(u \to v))$)
  Output: $UB_l(u)$ for all $l$, $u$

At the beginning, every seed node $s \in S_0$ is initialized such that $M_{curr}(s) = \{(s, 1)\}$ in order to satisfy the initial condition, $UB_0(s) = 1$. For each $l$-th iteration, every node $u$ in $S_l$ is processed as follows. First, ProcessIncomingMsgUB($M_{curr}(u)$) computes $UB_l(u)$ as in Equation (11). Second, $u$ passes a message to its neighbor $v \in N^+(u) \setminus S_0$ along the edge $(u, v)$, and $v$ stores (inserts) the message in $M_{next}(v)$ for the next iteration. The message contains 1) the source of the message, $u$, and 2) $UB_l(u \to v)$, which is computed as in Equation (12), by the function GenerateOutgoingMsgUB. Finally, the algorithm outputs $UB_l(u)$ for all $u \in V$ and $l \in \{0, \ldots, n-1\}$, and the upper bound $\sigma^+(S_0)$ is computed by Equation (13). The description of how the algorithm runs on a small network can be found in the supplementary material.
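For concreteness, here is a compact Python rendering of Definition 7 and Equation (13). It is our own sketch, not the authors' implementation: messages live in plain dictionaries rather than the Mcurr/Mnext/MSrc structures above, and the output of Equation (12) is clamped at zero as a numerical safeguard.

    def nb_ub(nodes, succ, pred, P, seeds):
        # Nonbacktracking upper bound sigma^+(S0) of Theorem 2.
        # msg[(u, v)] holds UB_{l-1}(u -> v); UB[v][l] holds UB_l(v).
        n = len(nodes)
        msg = {(s, v): P[(s, v)] for s in seeds for v in succ.get(s, ())}  # Eq. (8)
        UB = {v: [1.0 if v in seeds else 0.0] for v in nodes}              # Eqs. (8)-(9)
        for l in range(1, n):
            new_msg = {}
            for v in nodes:
                if v in seeds:
                    UB[v].append(0.0)                  # Eq. (10): seeds emit nothing for l > 0
                    continue
                prod = 1.0
                for u in pred.get(v, ()):
                    prod *= 1.0 - msg.get((u, v), 0.0)
                ub_v = 1.0 - prod                      # Eq. (11)
                UB[v].append(ub_v)
                for w in succ.get(v, ()):
                    if w in seeds:
                        continue
                    back = msg.get((w, v), 0.0)        # backtracking term to discount
                    if w in pred.get(v, ()) and back < 1.0:
                        out = P[(v, w)] * (1.0 - (1.0 - ub_v) / (1.0 - back))
                    else:
                        out = P[(v, w)] * ub_v         # Eq. (12), non-backtracking case
                    new_msg[(v, w)] = max(out, 0.0)
            msg = new_msg
        # Assemble Equation (13).
        total = 0.0
        for v in nodes:
            prod = 1.0
            for ub in UB[v]:
                prod *= 1.0 - ub
            total += 1.0 - prod
        return total

Here `succ` and `pred` map each node to its out- and in-neighbor lists, matching $N^+$ and $N^-$ of Definition 4.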
Computational complexity: Notice that for each iteration $l \in \{0, \ldots, n-1\}$, the algorithm accesses at most $n$ nodes, and for each node $v$, the functions ProcessIncomingMsgUB and GenerateOutgoingMsgUB are computed in $O(\deg(v))$ and $O(1)$, respectively. Therefore, the worst case computational complexity is $O(|V|^2 + |V||E|)$.

3.2 Nonbacktracking lower bounds (NB-LB)

A naive way to compute a lower bound on the influence in a network $IC(G, P, S_0)$ is to reduce the network to a (spanning) tree network, by removing edges. Then, since there is a unique path from a node to another, we can compute the influence of the tree network, which is a lower bound on the influence in the original network, in $O(|V|)$. We take this approach of generating a subnetwork from the original network, yet we avoid the significant gap between the bound and the influence by considering the following directed acyclic subnetwork, in which there is no backtracking walk.

Definition 8 (Min-distance Directed Acyclic Subnetwork). Consider an independent cascade model $IC(G, P, S_0)$ with $G = (V, E)$ and $|V| = n$. Let $d(S_0, v) := \min_{s \in S_0} d(s, v)$, i.e. the minimum distance from a seed in $S_0$ to $v$. A minimum-distance directed acyclic subnetwork (MDAS), $IC(G', P', S_0)$, where $G' = (V', E')$, is obtained as follows.

- $V' = \{v_1, \ldots, v_n\}$ is an ordered set of nodes such that $d(S_0, v_i) \le d(S_0, v_j)$, for every $i < j$.
- $E' = \{(v_i, v_j) \in E : i < j\}$, i.e. remove edges from $E$ whose source node comes later in the order than its destination node to obtain $E'$.
- $P'_{v_i v_j} = P_{v_i v_j}$, if $(v_i, v_j) \in E'$, and $P'_{v_i v_j} = 0$, otherwise.

If there are multiple ordered sets of vertices satisfying the condition, we may choose one arbitrarily.

For any $k \in [n]$, let $p(v_k)$ be the probability that $v_k \in V'$ is influenced in the MDAS, $IC(G', P', S_0)$. Since $p(v_k)$ is equivalent to the probability of the union of the events that an in-neighbor $u_i \in N^-(v_k)$ influences $v_k$, $p(v_k)$ can be computed by the principle of inclusion and exclusion. Thus, we may compute a lower bound on $p(v_k)$, using Bonferroni inequalities, if we know the probabilities that in-neighbors $u$ and $v$ both influence $v_k$, for every pair $u, v \in N^-(v_k)$. However, computing such probabilities can take $O(k^k)$. Hence, we present $LB(v_k)$, which efficiently computes a lower bound on $p(v_k)$ by the following recursion.

Definition 9. For all $v_k \in V'$, $LB(v_k) \in [0, 1]$ is defined by the recursion on $k$ as follows.

Initial condition: For every $v_s \in S_0$,
$$LB(v_s) = 1. \quad (14)$$

Recursion: For every $v_k \in V' \setminus S_0$,
$$LB(v_k) = \sum_{i=1}^{m^*} P'_{u_i v_k}\, LB(u_i) \Big(1 - \sum_{j=1}^{i-1} P'_{u_j v_k}\Big), \quad (15)$$
where $N^-(v_k) = \{u_1, \ldots, u_m\}$ is the ordered set of in-neighbors of $v_k$ in $IC(G', P', S_0)$ and $m^* = \max\{m' \le m : \sum_{j=1}^{m'-1} P'_{u_j v_k} \le 1\}$.

Remark. Since the $i$-th summand in Equation (15) can utilize $\sum_{j=1}^{i-2} P'_{u_j v_k}$, which is already computed in the $(i-1)$-th summand, to compute $\sum_{j=1}^{i-1} P'_{u_j v_k}$, the summation takes at most $O(\deg(v_k))$.

Theorem 3. For any independent cascade model $IC(G, P, S_0)$ and its MDAS $IC(G', P', S_0)$,
$$\sigma(S_0) \ge \sum_{v_k \in V'} LB(v_k) =: \sigma^-(S_0), \quad (16)$$
where $LB(v_k)$ is obtained recursively as in Definition 9.
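Definitions 8 and 9 combine into a single linear-time pass over the nodes. The Python sketch below is our own reading of them: a breadth-first search from the seed set fixes the node order, backward edges are dropped to form the MDAS, and Equation (15) is then evaluated once per node in order. Unreachable nodes contribute zero and are skipped.

    from collections import deque

    def nb_lb(succ, P, seeds):
        # Nonbacktracking lower bound sigma^-(S0) of Theorem 3.
        # Order nodes by minimum distance from the seed set (Definition 8).
        dist = {s: 0 for s in seeds}
        queue = deque(seeds)
        while queue:
            u = queue.popleft()
            for v in succ.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        order = sorted(dist, key=dist.get)
        rank = {v: i for i, v in enumerate(order)}
        # Keep only forward edges: this is the MDAS of Definition 8.
        pred = {v: [] for v in order}
        for u in order:
            for v in succ.get(u, ()):
                if v in rank and rank[u] < rank[v]:
                    pred[v].append(u)
        # Recursion of Equations (14)-(15).
        LB, total = {}, 0.0
        for v in order:
            if v in seeds:
                LB[v] = 1.0
            else:
                acc, mass = 0.0, 0.0
                for u in pred[v]:          # ordered in-neighbors u_1, ..., u_m
                    if mass > 1.0:         # stop at m* of Definition 9
                        break
                    acc += P[(u, v)] * LB[u] * (1.0 - mass)
                    mass += P[(u, v)]
                LB[v] = acc
            total += LB[v]
        return total

On a tree rooted at the seed, the MDAS keeps every edge and the recursion is exact, which is consistent with the experimental observation in Section 4 that the bounds recover the exact influence on tree networks.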
Next, we present the Nonbacktracking Lower Bound (NB-LB) algorithm, which efficiently computes $LB(v_k)$. At the $k$-th iteration, the variable in NB-LB represents the following.

- $M(v_k) = \{(LB(v_j), P'_{v_j v_k}) : v_j \text{ is an in-neighbor of } v_k\}$: the set of pairs (incoming message from an in-neighbor $v_j$ to $v_k$, the transmission probability of edge $(v_j, v_k)$).

Algorithm 2 Nonbacktracking Lower Bound (NB-LB)
  Input: directed acyclic network $IC(G', P', S_0)$
  Initialize: $\sigma^- = 0$
  Initialize: Insert $(1, 1)$ to $M(v_i)$ for all $v_i \in S_0$
  for $k = 1$ to $n$ do
    $LB(v_k)$ = ProcessIncomingMsgLB($M(v_k)$)
    $\sigma^-$ += $LB(v_k)$
    for $v_l \in N^+(v_k) \setminus S_0$ do
      $M(v_l)$.insert($(LB(v_k), P'_{v_k v_l})$)
  Output: $\sigma^-$

At the beginning, every seed node $s \in S_0$ is initialized such that $M(s) = \{(1, 1)\}$ in order to satisfy the initial condition, $LB(s) = 1$. For each $k$-th iteration, node $v_k$ is processed as follows. First, $LB(v_k)$ is computed as in Equation (15), by the function ProcessIncomingMsgLB, and added to $\sigma^-$. Second, $v_k$ passes the message $(LB(v_k), P'_{v_k v_l})$ to its out-neighbor $v_l \in N^+(v_k) \setminus S_0$, and $v_l$ stores (inserts) it in $M(v_l)$. Finally, the algorithm outputs $\sigma^-$, the lower bound on the influence. The description of how the algorithm runs on a small network can be found in the supplementary material.

Computational complexity: Obtaining an arbitrary directed acyclic subnetwork from the original network takes $O(|V| + |E|)$. Next, the algorithm iterates through the nodes $V' = \{v_1, \ldots, v_n\}$. For each node $v_k$, ProcessIncomingMsgLB takes $O(\deg(v_k))$ and $v_k$ sends messages to its out-neighbors in $O(\deg(v_k))$. Hence, the worst case computational complexity is $O(|V| + |E|)$.

3.3 Tunable bounds

In this section, we briefly introduce the parametrized versions of NB-UB and NB-LB, which provide control to adjust the trade-off between the efficiency and the accuracy of the bounds.

Upper bounds (tNB-UB): Given a non-negative integer $t \le n-1$, for every node $u \in V$, we compute the probability $\bar p_t(u)$ that node $u$ is influenced by open paths whose length is less than or equal to $t$, and for each $v \in N^+(u)$, we compute the probability $p_t(u \to v)$. Then, we start NB-UB from $l = t+1$ with the new initial conditions that $UB_t(u \to v) = p_t(u \to v)$ and $UB_t(u) = \bar p_t(u)$, and compute the upper bound as $\sum_{v \in V} \big(1 - \prod_{l=t}^{n-1} (1 - UB_l(v))\big)$. For higher values of $t$, the algorithm results in tighter upper bounds, while the computational complexity may increase exponentially for dense networks. Thus, this method is most applicable in sparse networks, where the degree of each node is bounded.

Lower bounds (tNB-LB): We first order the set of nodes $\{v_1, \ldots, v_n\}$ such that $d(S_0, v_i) \le d(S_0, v_j)$ for every $i < j$. Given a non-negative integer $t \le n$, we obtain a subnetwork $IC(G[V_t], P[V_t], S_0 \cap V_t)$ of size $t$, where $G[V_t]$ is the subgraph induced by the set of nodes $V_t = \{v_1, \ldots, v_t\}$, and $P[V_t]$ is the corresponding transmission probability matrix. For each $v_i \in V_t$, we compute the exact probability $p_t(v_i)$ that node $v_i$ is influenced in the subnetwork $IC(G[V_t], P[V_t], S_0 \cap V_t)$. Then, we start NB-LB from $i = t+1$ with the new initial conditions that $LB(v_k) = p_t(v_k)$, for all $k \le t$. For larger $t$, the algorithm results in tighter lower bounds. However, the computational complexity may increase exponentially with respect to $t$, the size of the subnetwork. This algorithm can adopt Monte Carlo simulations on the subnetwork to avoid the large computational complexity. However, this modification results in probabilistic lower bounds, rather than theoretically guaranteed lower bounds. Nonetheless, this can still give a significant improvement, because Monte Carlo simulations on a smaller network require less computation to stabilize the estimation.
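The exact probabilities $p_t(v_i)$ that initialize tNB-LB can be computed on the size-$t$ subnetwork by brute force, which is where the exponential dependence on $t$ comes from. A purely illustrative sketch of this step (ours, enumerating all $2^{|E_t|}$ live-arc configurations of the subnetwork):

    from itertools import product

    def exact_influence_probs(nodes, edges, P, seeds):
        # p(v) for every v in a small network, by enumerating every live-arc
        # configuration; feasible only when the edge count is small.
        prob = {v: 0.0 for v in nodes}
        for states in product((False, True), repeat=len(edges)):
            weight = 1.0
            succ_open = {v: [] for v in nodes}
            for (u, v), is_open in zip(edges, states):
                weight *= P[(u, v)] if is_open else 1.0 - P[(u, v)]
                if is_open:
                    succ_open[u].append(v)
            reached, stack = set(seeds), list(seeds)
            while stack:
                u = stack.pop()
                for v in succ_open[u]:
                    if v not in reached:
                        reached.add(v)
                        stack.append(v)
            for v in reached:
                prob[v] += weight
        return prob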
4 Experimental Results

In this section, we evaluate NB-UB and NB-LB in independent cascade models on a variety of classical synthetic networks.

Network Generation. We consider 4 classical random graph models with the parameters shown as follows: Erdos-Renyi random graphs ER(n = 1000, p = 0.003), scale-free networks SF(n = 1000, $\alpha$ = 2.5), random regular graphs Reg(n = 1000, d = 3), and random tree graphs with power-law degree distributions T(n = 1000, $\alpha$ = 3). For each graph model, we generate 100 networks $IC(G, pA, \{s\})$ as follows. The graph $G$ is the largest connected component of a graph drawn from the graph model, the seed node $s$ is a randomly selected vertex, and $A$ is the adjacency matrix of $G$. The corresponding IC model has the same transmission probability $p$ for every edge.

Evaluation of Bounds. For each network generated, we compute the following quantities for each $p \in \{0.1, 0.2, \ldots, 0.9\}$.

- $\sigma_{mc}$: the estimation of the influence with $10^6$ Monte Carlo simulations.
- $\sigma^+$: the upper bound obtained by NB-UB.
- $\sigma^+_{spec}$: the spectral upper bound by [16].
- $\sigma^-$: the lower bound obtained by NB-LB.
- $\sigma^-_{prob}$: the probabilistic lower bound obtained by 10 Monte Carlo simulations.

Figure 1: This figure compares the average relative gap of the bounds: NB-UB, the spectral upper bound in [16], NB-LB, and the probabilistic lower bound computed by MC simulations, for various types of networks. The probabilistic lower bound is chosen for the experiments since there has not been any tight lower bound. The sample size of 10 is determined to overly match the computational complexity of the NB-LB algorithm.

In Figure 1, we compare the average relative gap of the bounds for every network model and for each transmission probability, where the true value is assumed to be $\sigma_{mc}$. For example, the average relative gap of NB-UB for 100 Erdos-Renyi networks $\{N_i\}_{i=1}^{100}$ with the transmission probability $p$ is computed by $\frac{1}{100} \sum_{i \in [100]} \frac{\sigma^+[N_i] - \sigma_{mc}[N_i]}{\sigma_{mc}[N_i]}$, where $\sigma^+[N_i]$ and $\sigma_{mc}[N_i]$ denote the NB-UB and the MC estimation, respectively, for the network $N_i$.

Results. Figure 1 shows that NB-UB outperforms the upper bound in [16] for the Erdos-Renyi and random 3-regular networks, and performs comparably for the scale-free networks. Also, NB-LB gives tighter bounds than the MC bounds on the Erdos-Renyi, scale-free, and random regular networks when the transmission probability is small, $p < 0.4$. NB-UB and NB-LB compute the exact influence for the tree networks since both algorithms avoid backtracking walks. Next, we show the bounds on exemplary networks.
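To reproduce this setup, any standard graph library suffices; the paper does not specify its tooling. Below is a sketch of the network generation using networkx (our choice), shown for the Erdos-Renyi and random regular cases; the scale-free and tree models would follow the same pattern.

    import random
    import networkx as nx

    def make_ic_instance(p, n=1000, model="er"):
        # One test network in the style of Section 4: the largest connected
        # component of a random graph, one uniformly chosen seed, and a
        # uniform transmission probability p on every directed edge.
        if model == "er":
            g = nx.erdos_renyi_graph(n, 0.003)
        elif model == "reg":
            g = nx.random_regular_graph(3, n)
        else:
            raise ValueError("unsupported model: " + model)
        giant = g.subgraph(max(nx.connected_components(g), key=len)).copy()
        digraph = giant.to_directed()   # each undirected edge becomes two arcs
        P = {e: p for e in digraph.edges()}
        seed = random.choice(list(digraph.nodes()))
        return digraph, P, {seed}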
4.1 Upper Bounds

Selection of Networks. In order to illustrate a typical behavior of the bounds, we have chosen the network in Figure 2a as follows. First, we generate 100 random 3-regular graphs $G$ with 1000 nodes and assign a random seed $s$. Then, the corresponding IC model is defined as $IC(G, P = pA, S_0 = \{s\})$. For each network, we compute NB-UB and the MC estimation. Then, we compute the score for each network, where the score is defined as the sum of the squared differences between the upper bounds and MC estimations over the transmission probability $p \in \{0.1, 0.2, \ldots, 0.9\}$. Finally, a graph whose score is the median of all 100 scores is chosen for Figure 2a.

[Figure 2 appears here: two panels, "Upper bounds of the influence" and "Lower bounds of the influence", each plotting influence against transmission probability; only the caption is retained.]

Figure 2: (a) The figure compares various upper bounds on the influence in the 3-regular network in Section 4.1. The MC upper bounds are computed with various simulation sizes and shown with the data points indicated with MC(N), where N is the number of simulations. The spectral upper bound in [16] is shown as a red line, and NB-UB is shown as a green line. (b) The figure shows lower bounds on the influence of a scale-free network in Section 4.2. The probabilistic lower bounds shown with points are obtained from Monte Carlo simulations with various simulation sizes, and the data points indicated with MC(N) are obtained with N simulations. NB-LB is shown as a green line.

Results. In Figure 2a, we compare 1) the upper bounds introduced in [16] and 2) the probabilistic upper bounds obtained by Monte Carlo simulations with 99% confidence level, to NB-UB. The MC upper bounds are computed with the various sample sizes $N \in \{5, 10, 30, 300, 3000\}$. It is evident from the figure that a larger sample size provides a tighter probabilistic upper bound. NB-UB outperforms the bound by [16] and the probabilistic MC bound when the transmission probability is relatively small. Further, it shows a similar trend as the MC simulations with a large sample size.

4.2 Lower Bounds

Selection of Networks. We adopt a similar selection process as for the upper bounds, but with scale-free networks with 3000 nodes and $\alpha$ = 2.5.

Results. We compare probabilistic lower bounds obtained by MC with 99% confidence level to NB-LB. The lower bounds from Monte Carlo simulations are computed with various sample sizes $N \in \{5, 12, 30, 300, 3000\}$, which accounts for a constant, $\log(|V|)$, $0.01|V|$, $0.1|V|$, and $|V|$. NB-LB outperforms the probabilistic bounds by MC with small sample sizes. Recall that the computational complexity of the lower bound in Algorithm 2 is $O(|V| + |E|)$, which is the computational complexity of a constant number of Monte Carlo simulations. Figure 2b shows that NB-LB is tighter than the probabilistic lower bounds with the same computational complexity, and it also agrees with the behavior of the MC simulations.

5 Conclusion

In this paper, we propose both upper and lower bounds on the influence in the independent cascade models and provide algorithms to efficiently compute the bounds. We extend the results by proposing tunable bounds which can adjust the trade-off between the efficiency and the accuracy. Finally, the tightness and the performance of the bounds are shown with experimental results.
One can further improve the bounds by considering r-nonbacktracking walks, i.e. avoiding cycles of length r rather than just backtracks, and we leave this for future study.

Acknowledgement. The authors thank Colin Sandon for helpful discussions. This research was partly supported by the NSF CAREER Award CCF-1552131 and the ARO grant W911NF-16-1-0051.

References

[1] E. Abbe and C. Sandon. Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic bp, and the information-computation gap. arXiv preprint arXiv:1512.09080, 2015.
[2] C. Bordenave, M. Lelarge, and L. Massoulié. Non-backtracking spectrum of random graphs: community detection and non-regular ramanujan graphs. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 1347-1357. IEEE, 2015.
[3] W. Chen, Y. Wang, and S. Yang. Efficient influence maximization in social networks. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 199-208. ACM, 2009.
[4] W. Chen, Y. Yuan, and L. Zhang. Scalable influence maximization in social networks under the linear threshold model. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 88-97. IEEE, 2010.
[5] M. Draief, A. Ganesh, and L. Massoulié. Thresholds for virus spread on networks. In Proceedings of the 1st international conference on Performance evaluation methodologies and tools, page 51. ACM, 2006.
[6] C. M. Fortuin, P. W. Kasteleyn, and J. Ginibre. Correlation inequalities on some partially ordered sets. Communications in Mathematical Physics, 22(2):89-103, 1971.
[7] A. Goyal, W. Lu, and L. V. Lakshmanan. Celf++: optimizing the greedy algorithm for influence maximization in social networks. In Proceedings of the 20th international conference companion on World wide web, pages 47-48. ACM, 2011.
[8] M. Granovetter. Threshold models of collective behavior. American Journal of Sociology, pages 1420-1443, 1978.
[9] B. Karrer, M. Newman, and L. Zdeborová. Percolation on sparse networks. Physical Review Letters, 113(20):208702, 2014.
[10] B. Karrer and M. E. Newman. Message passing approach for general epidemic models. Physical Review E, 82(1):016101, 2010.
[11] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 137-146. ACM, 2003.
[12] A. Khelil, C. Becker, J. Tian, and K. Rothermel. An epidemic model for information diffusion in manets. In Proceedings of the 5th ACM international workshop on Modeling analysis and simulation of wireless and mobile systems, pages 54-60. ACM, 2002.
[13] J. T. Khim, V. Jog, and P.-L. Loh. Computing and maximizing influence in linear threshold and triggering models. In Advances in Neural Information Processing Systems, pages 4538-4546, 2016.
[14] F. Krzakala, C. Moore, E. Mossel, J. Neeman, A. Sly, L. Zdeborová, and P. Zhang. Spectral redemption in clustering sparse networks. Proceedings of the National Academy of Sciences, 110(52):20935-20940, 2013.
[15] E. J. Lee, S. Kamath, E. Abbe, and S. R. Kulkarni. Spectral bounds for independent cascade model with sensitive edges. In 2016 Annual Conference on Information Science and Systems (CISS), pages 649-653, March 2016.
[16] R. Lemonnier, K. Scaman, and N. Vayatis. Tight bounds for influence in diffusion networks and application to bond percolation and epidemiology. In Advances in Neural Information Processing Systems, pages 846-854, 2014.
[17] J. Leskovec, L. A. Adamic, and B. A. Huberman. The dynamics of viral marketing. ACM Transactions on the Web (TWEB), 1(1):5, 2007.
[18] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak detection in networks. In Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 420-429. ACM, 2007.
[19] D. Lopez-Pintado and D. J. Watts. Social influence, binary decisions and collective dynamics. Rationality and Society, 20(4):399-443, 2008.
[20] B. Shulgin, L. Stone, and Z. Agur. Pulse vaccination strategy in the sir epidemic model. Bulletin of Mathematical Biology, 60(6):1123-1148, 1998.
[21] Y. Tang, X. Xiao, and Y. Shi. Influence maximization: Near-optimal time complexity meets practical efficiency. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pages 75-86. ACM, 2014.
[22] C. Wang, W. Chen, and Y. Wang. Scalable influence maximization for independent cascade model in large-scale social networks. Data Mining and Knowledge Discovery, 25(3):545-576, 2012.
[23] D. J. Watts. A simple model of global cascades on random networks. Proceedings of the National Academy of Sciences, 99(9):5766-5771, 2002.
[24] J. Yang and S. Counts. Predicting the speed, scale, and range of information diffusion in twitter. 2010.
Directional-Unit Boltzmann Machines

Richard S. Zemel, Computer Science Dept., University of Toronto, Toronto, ONT M5S 1A4
Christopher K. I. Williams, Computer Science Dept., University of Toronto, Toronto, ONT M5S 1A4
Michael C. Mozer, Computer Science Dept., University of Colorado, Boulder, CO 80309-0430

Abstract

We present a general formulation for a network of stochastic directional units. This formulation is an extension of the Boltzmann machine in which the units are not binary, but take on values in a cyclic range, between 0 and $2\pi$ radians. The state of each unit in a Directional-Unit Boltzmann Machine (DUBM) is described by a complex variable, where the phase component specifies a direction; the weights are also complex variables. We associate a quadratic energy function, and corresponding probability, with each DUBM configuration. The conditional distribution of a unit's stochastic state is a circular version of the Gaussian probability distribution, known as the von Mises distribution. In a mean-field approximation to a stochastic DUBM, the phase component of a unit's state represents its mean direction, and the magnitude component specifies the degree of certainty associated with this direction. This combination of a value and a certainty provides additional representational power in a unit. We describe a learning algorithm and simulations that demonstrate a mean-field DUBM's ability to learn interesting mappings.

Many kinds of information can naturally be represented in terms of angular, or directional, variables. A circular range forms a suitable representation for explicitly directional information, such as wind direction, as well as for information where the underlying range is periodic, such as days of the week or months of the year. In computer vision, tangent fields and optic flow fields are represented as fields of oriented line segments, each of which can be described by a magnitude and direction. Directions can also be used to represent a set of symbolic labels, e.g., object label A at $0$, and object label B at $\pi/2$ radians. We discuss below some advantages of representing symbolic labels with directional units.

These and many other phenomena can be usefully encoded using a directional representation, that is, a polar coordinate representation of complex values in which the phase parameter indicates a direction between 0 and $2\pi$ radians. We have devised a general formulation of networks of stochastic directional units. This paper describes a directional-unit Boltzmann machine (DUBM), which is a novel generalization of a Boltzmann machine (Ackley, Hinton and Sejnowski, 1985) in which the units are not binary, but instead take on directional values between 0 and $2\pi$.

1 STOCHASTIC DUBM

A stochastic directional unit takes on values on the unit circle. We associate with unit $j$ a random variable $Z_j$; a particular state of $j$ is described by a complex number with magnitude one and direction, or phase, $\tau_j$: $z_j = e^{i\tau_j}$. The weights of a DUBM also take on complex values. The weight from unit $k$ to unit $j$ is $w_{jk} = b_{jk}\, e^{i\theta_{jk}}$. We constrain the weight matrix $W$ to be Hermitian: $W^T = W^*$, where the diagonal elements of the matrix are zero, and the asterisk indicates the complex conjugate operation. Note that if the components are real, then $W^T = W$, which is a real symmetric matrix. Thus, the Hermitian form is a natural generalization of weight symmetry to the complex domain.
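To make these conventions concrete, the following numpy sketch (ours, not part of the paper) builds a random Hermitian weight matrix with zero diagonal and verifies that the quadratic form it induces on unit-magnitude complex states is real-valued, the property that makes the energy introduced next well-defined.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8

    # Directional unit states: unit-magnitude complex numbers z_j = exp(i tau_j).
    tau = rng.uniform(0.0, 2.0 * np.pi, size=n)
    z = np.exp(1j * tau)

    # Hermitian weights with zero diagonal: W^T = W*.
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    W = A + A.conj().T
    np.fill_diagonal(W, 0.0)

    quad = z.conj() @ W @ z             # the form z*^T W z
    print(np.allclose(W.T, W.conj()))   # True: W is Hermitian
    print(abs(quad.imag) < 1e-10)       # True: the quadratic form is real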
This definition of $W$ leads to a Hermitian quadratic form that generalizes the real quadratic form of the Hopfield energy function:

$$E(\mathbf{z}) = -\tfrac{1}{2}\, \mathbf{z}^{*T} W \mathbf{z} = -\tfrac{1}{2} \sum_{j,k} z_j^*\, w_{jk}\, z_k \quad (1)$$

where $\mathbf{z}$ is the vector of the units' complex states in a particular global configuration. Noest (1988) independently proposed this energy function. It is similar to that used in Fradkin, Huberman, and Shenker's (1978) generalization of the XY model of statistical mechanics to allow arbitrary weight phases $\theta_{jk}$, and coupled oscillator models, e.g., Baldi and Meir (1990).

We can define a probability distribution over the possible states of a stochastic network using the Boltzmann factor. In a DUBM, we can describe the energy as a function of the state of a particular unit $j$:

$$E(Z_j = z_j) = -\tfrac{1}{2} \Big[ \sum_k z_j^* z_k w_{jk} + \sum_k z_k^* z_j w_{kj} \Big]$$

We define $x_j = \sum_k z_k w_{jk}^*$ to be the net input to unit $j$, where $a_j$ and $\alpha_j$ denote the magnitude and phase of $x_j$, respectively. Applying the Boltzmann factor, we find that the probability that unit $j$ is in a particular state is proportional to:

$$p(Z_j = z_j) \propto e^{-\beta E(Z_j = z_j)} = e^{\beta a_j \cos(\tau_j - \alpha_j)} \quad (2)$$

where $\beta$ is the reciprocal of the system temperature.

Figure 1: A circular normal density function laid over a unit circle. The dots along the circle represent samples of the circular normal random variable $Z_j$. The expected direction of $Z_j$ is $\pi/4$; $r_j$ is its resultant length.

This probability distribution for a unit's state corresponds to a distribution known as the von Mises, or circular normal, distribution (Mardia, 1972). Two parameters completely characterize this distribution: a mean direction $\bar\tau \in (0, 2\pi]$ and a concentration parameter $m > 0$ that behaves like the reciprocal of the variance of a Gaussian distribution on a linear random variable. The probability density function of a circular normal random variable $Z$ is (footnote 1):

$$p(\tau; \bar\tau, m) = \frac{1}{2\pi I_0(m)}\, e^{m \cos(\tau - \bar\tau)} \quad (3)$$

From Equations 2 and 3, we see that if a unit adopts states according to its contribution to the system energy, it will be a circular normal variable with mean direction $\alpha_j$ and concentration parameter $m_j = \beta a_j$. These parameters are directly determined by the net input to the unit. Figure 1 shows a circular normal density function for $Z_j$, the state of unit $j$. This figure also shows the expected value of its stochastic state, which we define as:

$$y_j = \langle Z_j \rangle = r_j\, e^{i\gamma_j} \quad (4)$$

where $\gamma_j$, the phase of $y_j$, is the mean direction and $r_j$, the magnitude of $y_j$, is the resultant length. For a circular normal random variable, $\gamma_j = \bar\tau_j$, and $r_j = I_1(m_j)/I_0(m_j)$ (footnote 2). When samples of $Z_j$ are concentrated on a small arc about the mean (see Figure 1), $r_j$ will approach length one. This corresponds to a large concentration parameter ($m_j = \beta a_j$). Conversely, for small $m_j$, the distribution approaches the uniform distribution on the circle, and the resultant length falls toward zero. For a uniform distribution, $r_j = 0$. Note that the concentration parameter for a unit's circular normal density function is proportional to $\beta$, the reciprocal of the system temperature. Higher temperatures will thus have the effect of making this distribution more uniform, just as they do in a binary-unit Boltzmann machine.

Footnote 1: The normalization factor $I_0(m)$ is the modified Bessel function of the first kind and order zero. An integral representation of this function is $I_0(m) = \frac{1}{2\pi} \int_0^{2\pi} e^{m \cos\theta}\, d\theta$. It can be computed by numerical routines.
Footnote 2: An integral representation of the modified Bessel function of the first kind and order $k$ is $I_k(m) = \frac{1}{2\pi} \int_0^{2\pi} e^{m \cos\theta} \cos(k\theta)\, d\theta$. Note that $I_1(m) = dI_0(m)/dm$.
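These relationships are easy to verify numerically. The sketch below (ours) computes the expected state from the net input via $y_j = (I_1(\beta a_j)/I_0(\beta a_j))\, e^{i\alpha_j}$ using scipy's Bessel functions and checks it against the empirical mean of von Mises samples.

    import numpy as np
    from scipy.special import i0, i1

    beta, a, alpha = 1.0, 2.5, np.pi / 4    # 1/temperature, |x_j|, phase of x_j
    m = beta * a                            # concentration parameter m_j

    # Expected state: mean direction alpha, resultant length I1(m)/I0(m).
    y = (i1(m) / i0(m)) * np.exp(1j * alpha)

    # Empirical check: average exp(i tau) over von Mises samples.
    samples = np.random.vonmises(mu=alpha, kappa=m, size=200000)
    y_hat = np.exp(1j * samples).mean()
    print(y, y_hat)                         # the two agree to roughly 1e-3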
Higher temperatures will thus have the effect of making this distribution more uniform, just as they do in a binary-unit Boltzmann machine. 2 EMERGENT PROPERTIES OF A DUBM A network of directional units as defined above contains two important emergent properties. The first property is that the magnitude of the net input to unit j describes the extent to which its various inputs "agree". Intuitively, one can think of each component Zk wj k of the sum that comprises x j as predicting a phase for unit j. When the phases of these components are equal, the magnitude of Xj, aj, is maximized. If these phase predictions are far apart, then they will act to cancel each other out, and produce a small aj. Given Xj, we can compute the expected value of the output of unit j. The expected direction of the unit roughly represents the weighted average of the phase predictions, while the resultant length is a monotonic function of aj and hence describes the agreement between the various predictions. The key idea here is that the resultant length directly describes the degree of certainty in the expected direction of unit j. Thus, a DUBM naturally incorporates a representation of the system's confidence in a value. This ability to combine several sources of evidence, and not only represent a value but also describe the certainty of that value is an important property that may be useful in a variety of domains. The second emergent property is that the DUBM energy is globally rotationinvariant-E is unaffected when the same rotation is applied to all units' states in the network. For each DUBM configuration, there is an equivalence class of configurations which have the same energy. In a similar way, we find that the magnitude of Xj is rotation-invariant. That is, when we translate the phases of all units but one by some phase, the magnitude of that unit is unaffected. This property underlies one of the key advantages of the representation: both the magnitude of a unit's state as well as system energy depend on the relative rather than absolute phases of the units. 3 DETERMINISTIC DUBM Just as in deterministic binary-unit Boltzmann machines (Peterson and Anderson, 1987; Hinton, 1989), we can greatly reduce the computational time required to run a large stochastic system if we invoke the mean-field approximation, which states that once the system has reached equilibrium, the stochastic variables can be approximated by their mean values. In this approximation, the variables are treated as independent, and the system probability distribution is simply the product of the probability distributions for the individual units. Gislen, Peterson, and Soderberg (1992) originally proposed a mean-field theory for networks of directional (or "rotor") units, but only considered the case of realvalued weights. They derived the mean-field consistency equations by using the saddle-point method. Our approach provides an alternative, perhaps more intuitive derivation, due to the use of the circular normal distribution. 175 176 Zemel, Williams, and Mozer We can directly describe these mean values based on the circular normal interpretation. We still denote the net input to a unit j as Xj: xj = ~ * ~ Yk W j k = iao aj e (5) 1 k Once equilibrium has been reached, the state of unit j is Zj given the mean-field approximation: Yj, the expected value of (6) In the stochastic as well as the deterministic system, units evolve to minimize the free energy, F = < E > - T H. 
The calculation of H, the entropy of the system, follows directly from the circular normal distribution and the mean-field approximation. We can derive mean-field consistency equations for x_j and y_j by minimizing the mean-field free energy, F_MF, with respect to each variable independently. The resulting equations match the mean-field equations (Equations 5 and 6) derived directly from the circular normal probability density function. They also match the special case derived by Gislen et al. for real-valued weights.

We have implemented a DUBM using the mean-field approximation. We solve for a consistent set of x and y values by performing synchronous updates of a discrete-time approximation of a set of differential equations based on the net input to each unit j. We update the x_j variables using the following differential equation:

\frac{dx_j}{dt} = -x_j + \sum_k y_k w_{jk}^*    (7)

which has Equation 5 as its steady-state solution. In the simulations, we use simulated annealing to help find good minima of F_MF. Just as for the Hopfield binary-state network, it can be shown that the free energy always decreases during the dynamical evolution described in Equation 7 (Zemel, Williams, and Mozer, 1992). The equilibrium solutions are free energy minima.

4 DUBM LEARNING

The units in a DUBM can be arranged in a variety of architectures. The appropriate method for determining weight values for the network depends on the particular class of network architecture. In an autoassociative network containing a single set of interconnected units, the weights can be set directly from the training patterns. If hidden units are required to perform a task, then an algorithm for learning the weights is required. We use an algorithm that generalizes the Boltzmann machine training algorithm (Ackley, Hinton, and Sejnowski, 1985; Peterson and Anderson, 1987) to these networks.

As in the standard Boltzmann machine learning algorithm, the partial derivative of the objective function with respect to a weight depends on the difference between the partials of two mean-field free energies: one when both input and output units are clamped, and the other when only the input units are clamped. On a given training case, for each of these stages we let the network settle to equilibrium and then calculate the following derivatives, writing each weight in polar form as w_{jk} = b_{jk} e^{i\theta_{jk}}:

\partial F_{MF} / \partial b_{jk} = -r_j r_k \cos(\gamma_j - \gamma_k + \theta_{jk})
\partial F_{MF} / \partial \theta_{jk} = r_j r_k b_{jk} \sin(\gamma_j - \gamma_k + \theta_{jk})

The learning algorithm uses these gradients to find weight values that minimize the objective over a training set.

5 EXPERIMENTAL RESULTS

We present below some illustrative examples to show that an adaptive network of directional units can be used in a range of paradigms, including associative memory, input/output mappings, and pattern completion.

5.1 SIMPLE AUTOASSOCIATIVE DUBM

The first set of experiments considers a simple autoassociative DUBM, which contains no hidden units and whose units are fully connected. As in a standard Hopfield network, the weights are set directly from the training patterns; they equal the superposition of the outer products of the patterns. We have run several experiments with simple autoassociative DUBMs. The empirical results parallel those for binary-unit autoassociative networks. We find, for example, that a network containing 30 fully interconnected units is capable of reliably settling from a corrupted version of one of 4 stored patterns to a state near the pattern. These patterns thus form stable attractors, as the network can perform pattern completion and clean-up from noisy inputs.
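A minimal sketch of the settling procedure used in such experiments is given below, assuming a toy random Hermitian weight matrix and a fixed \beta rather than the paper's annealing schedule; the function name and constants are our illustrative choices.

```python
import numpy as np
from scipy.special import i0, i1

def settle(W, beta=1.0, step=0.2, n_iters=200, seed=0):
    """Mean-field relaxation of a DUBM (Equations 5-7).

    W: complex Hermitian weight matrix with zero diagonal.
    Returns equilibrium expected states y_j = r_j * exp(i * gamma_j).
    """
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = 0.1 * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))  # random phases
    for _ in range(n_iters):
        a, alpha = np.abs(x), np.angle(x)
        y = (i1(beta * a) / i0(beta * a)) * np.exp(1j * alpha)  # Equation 6
        x += step * (-x + np.conj(W) @ y)                       # Equation 7
    return y

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
W = (A + A.conj().T) / 2.0           # toy Hermitian weights
np.fill_diagonal(W, 0.0)
y = settle(W)
print(np.abs(y), np.angle(y))        # resultant lengths and phases at equilibrium
```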
The rotation-invariance property of the energy function allows any rotated version of a training pattern to act as an attractor as well. The network's performance rapidly degrades for more than 4 orthogonal patterns: the patterns themselves no longer act as fixed points, and many random initial states end in states far from any stored pattern. In addition, more orthogonal patterns can be stored than random patterns. See Noest (1988) for an analysis of the capacity of an autoassociative DUBM with sparse and asymmetric connections.

5.2 LEARNING INPUT/OUTPUT MAPPINGS

We have also used the mean-field DUBM learning algorithm to learn the weights in networks containing hidden units. We have experimented with a task that is well suited to a directional representation. There is a single-jointed robot arm, anchored at a point, as shown in Figure 2. The input consists of two angles: the angle between the first arm segment and the positive x-axis (\lambda), and the angle between the two arm segments (\rho). The two segments each have a fixed length, A and B; these are not explicitly given to the network. The output is the angle between the line connecting the two ends of the arm and the x-axis (\mu). This target angle is related in a complex, non-linear way to the input angles; the network must learn to approximate the following trigonometric relationship:

\mu = \arctan\left( \frac{A \sin\lambda - B \sin(\lambda + \rho)}{A \cos\lambda - B \cos(\lambda + \rho)} \right)

[Figure 2: A sample training case for the robot arm problem. The arm consists of two fixed-length segments, A and B, and is anchored on the x-axis. The two angles, \lambda and \rho, are given as input for each case, and the target output is the angle \mu.]

With 500 training cases, a DUBM with 2 input units and 8 hidden units is able to learn the task so that it can accurately estimate \mu for novel patterns. The learning requires 200 iterations of a conjugate gradient training algorithm. On each of 100 testing patterns, the resultant length of the output unit exceeds .85, and the mean error on the angle is less than .05 radians. The network can also learn the task with as few as 5 hidden units, with a concomitant decrease in learning speed. The compact nature of this network shows that the directional units form a natural, efficient representation for this problem.
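For concreteness, training data for this task can be generated as follows (a hypothetical sketch: the segment lengths, sample count, and use of arctan2 are our illustrative choices, not the paper's exact setup).

```python
import numpy as np

def arm_target(lam, rho, A=1.0, B=0.6):
    """Angle mu between the line joining the arm's endpoints and the x-axis;
    arctan2 keeps the quadrant of the ratio in the formula above."""
    return np.arctan2(A * np.sin(lam) - B * np.sin(lam + rho),
                      A * np.cos(lam) - B * np.cos(lam + rho))

rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 2.0 * np.pi, 500)  # first segment vs. the x-axis
rho = rng.uniform(0.0, 2.0 * np.pi, 500)  # angle between the two segments
mu = arm_target(lam, rho)                 # directional training targets
print(lam[:2], rho[:2], mu[:2])
```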
5.3 COMPLEX PATTERN COMPLETION

Our earlier work described a large-scale DUBM that attacks a difficult problem in computer vision: image segmentation. In MAGIC (Mozer et al., 1992), directional values are used to represent alternative labels that can be assigned to image features. The goal of MAGIC is to learn to assign appropriate object labels to a set of image features (e.g., edge segments) based on a set of examples. The idea is that the features of a given object should have consistent phases, with each object taking on its own phase. The units in the network are arranged into two layers (feature and hidden), and the computation proceeds by randomly initializing the phases of the units in the feature layer and settling on a labeling through a relaxation procedure. The units in the hidden layer learn to detect spatially local configurations of the image features that are labeled in a consistent manner across the training examples. MAGIC successfully learns to segment novel scenes consisting of overlapping geometric objects.

The emergent DUBM properties described above are essential to MAGIC's ability to perform this task. The complex weights are necessary in MAGIC, as the weights encode statistical regularities in the relationships between image features, e.g., that two features typically belong to the same object (i.e., have similar phase values) or to different objects (i.e., are out of phase). The fact that a unit's resultant length reflects the certainty in a phase label allows the system to decide which phase labels to use when updating the labels of neighboring features: the initially random phases are ignored, while confident labels are propagated. Finally, the rotation-invariance property allows the system to assign labels to features in a manner consistent with the relationships described in the weights, where it is the relative rather than absolute phases of the units that are important.

6 CURRENT DIRECTIONS

We are currently extending this work in a number of directions. We are extending the definition of a DUBM to combine binary and directional units (Radford Neal, personal communication). This expanded representation may be useful in domains with directional data that is not present everywhere. For example, it can be directly applied to the object labeling problem explored in MAGIC. The binary aspect of the unit can describe whether a particular image feature is present or absent. This may enable the system to handle various complications, particularly labeling across gaps along the contour of an object. Finally, we are applying a DUBM network to the interesting and challenging problem of time-series prediction of wind directions.

Acknowledgements

The authors thank Geoffrey Hinton for his generous support and guidance. We thank Radford Neal, Peter Dayan, Conrad Galland, Sue Becker, Steve Nowlan, and other members of the Connectionist Research Group at the University of Toronto for helpful comments regarding this work. This research was supported by a grant from the Information Technology Research Centre of Ontario to Geoffrey Hinton, and NSF Presidential Young Investigator award IRI-9058450 and grant 90-21 from the James S. McDonnell Foundation to MM.

References

Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-169.

Baldi, P. and Meir, R. (1990). Computing with arrays of coupled oscillators: An application to preattentive texture discrimination. Neural Computation, 2(4):458-471.

Fradkin, E., Huberman, B. A., and Shenker, S. H. (1978). Gauge symmetries in random magnetic systems. Physical Review B, 18(9):4789-4814.

Gislen, L., Peterson, C., and Soderberg, B. (1992). Rotor neurons: Basic formalism and dynamics. Neural Computation, 4(5):737-745.

Hinton, G. E. (1989). Deterministic Boltzmann learning performs steepest descent in weight-space. Neural Computation, 1(2):143-150.

Mardia, K. V. (1972). Statistics of Directional Data. Academic Press, London.

Mozer, M. C., Zemel, R. S., Behrmann, M., and Williams, C. K. I. (1992). Learning to segment images using dynamic feature binding. Neural Computation, 4(5):650-665.

Noest, A. J. (1988). Phasor neural networks. In Neural Information Processing Systems, pages 584-591, New York. AIP.

Peterson, C. and Anderson, J. R. (1987). A mean field theory learning algorithm for neural networks. Complex Systems, 1:995-1019.

Zemel, R. S., Williams, C. K. I., and Mozer, M. C. (1992). Adaptive networks of directional units.
Technical Report CRG-TR-92-2, University of Toronto.
Learning with Feature Evolvable Streams

Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China
{houbj,zhanglj,zhouzh}@lamda.nju.edu.cn

Abstract

Learning with streaming data has attracted much attention during the past few years. Though most studies consider data streams with fixed features, in real practice the features may be evolvable. For example, features of data gathered by limited-lifespan sensors will change when these sensors are substituted by new ones. In this paper, we propose a novel learning paradigm, Feature Evolvable Streaming Learning, in which old features vanish and new features occur. Rather than relying only on the current features, we attempt to recover the vanished features and exploit them to improve performance. Specifically, we learn two models, one from the recovered features and one from the current features. To benefit from the recovered features, we develop two ensemble methods. In the first method, we combine the predictions from the two models and theoretically show that with the assistance of old features, the performance on new features can be improved. In the second approach, we dynamically select the best single prediction and establish a better performance guarantee when the best model switches. Experiments on both synthetic and real data validate the effectiveness of our proposal.

1 Introduction

In many real tasks, data are accumulated over time, and thus learning with streaming data has attracted much attention during the past few years. Many effective approaches have been developed, such as the Hoeffding tree [7], the Bayes tree [27], evolving granular neural networks (eGNN) [17], and Core Vector Machines (CVM) [29]. Though these approaches are effective for certain scenarios, they share a common assumption: the data stream comes with a fixed, stable feature space. In other words, the data samples are always described by the same set of features. Unfortunately, this assumption does not hold in many streaming tasks. For example, for ecosystem protection one can deploy many sensors in a reserve to collect data, where each sensor corresponds to an attribute/feature. Due to their limited lifespan, after some period many sensors will wear out, while new sensors are spread in their place. Thus, features corresponding to the old sensors vanish while features corresponding to the new sensors appear, and the learning algorithm needs to work well under such an evolving environment. Note that the ability to adapt to environmental change is one of the fundamental requirements for learnware [37], where an important aspect is the ability to handle evolvable features.

A straightforward approach is to rely on the new features and learn a new model to use. However, this solution suffers from some deficiencies. First, when new features just emerge, there are few data samples described by these features, and thus the training samples might be insufficient to train a strong model. Second, the old model of vanished features is ignored, which is a big waste of our data collection effort. To address these limitations, in this paper we propose a novel learning paradigm: Feature Evolvable Streaming Learning (FESL). We formulate the problem based on a key observation: in general, features do not change in an arbitrary way; instead, there are some overlapping periods in which both old and new features are available.
Back to the ecosystem protection example: since the lifespan of sensors is known to us (e.g., how long their batteries will last is prior knowledge), we usually spread a set of new sensors before the old ones wear out. Thus, the data stream arrives in the way shown in Figure 1: in period T1, the original set of features is valid; at the end of T1, period B1 appears, where the original set of features is still accessible but some new features are included; then in T2 the original features vanish and only the new features are valid; at the end of T2, period B2 appears, where newer features come. This process repeats again and again. Note that the T1 and T2 periods are usually long, whereas the B1 and B2 periods are short because, as in the ecosystem protection example, the B1 and B2 periods are just used to switch the sensors and we do not want to waste much sensor lifetime on such overlapping periods.

[Figure 1: Illustration of how the data stream comes: data arrive first with feature set S1, then with both S1 and S2 during an overlapping period, then with S2 only, then with both S2 and S3, and so on.]

In this paper, we propose to solve the FESL problem by utilizing the overlapping period to discover the relationship between the old and new features, and by exploiting the old model even when only the new features are available. Specifically, we try to learn a mapping from new features to old features through the samples in the overlapping period. In this way, we are able to reconstruct old features from new ones, and thus the old model can still be applied. To benefit from additional features, we develop two ensemble methods, one in a combination manner and the other in a dynamic selection manner. In the first method, we combine the predictions from two models and theoretically show that with the assistance of old features, the performance on new features can be improved. In the second approach, we dynamically select the best single prediction and establish a better performance guarantee when the best model switches at an arbitrary time. Experiments on synthetic and real datasets validate the effectiveness of our proposal.

The rest of this paper is organized as follows. Section 2 introduces related work. Section 3 presents the formulation of FESL. Our proposed approaches with corresponding analyses are presented in Section 4. Section 5 reports experimental results. Finally, Section 6 concludes.

2 Related Work

Data stream mining contains several tasks, including classification, clustering, frequency counting, and time series analysis. Our work is most related to the classification task, and we can also solve the regression problem. Existing techniques for data stream classification can be divided into two categories: one considers only a single classifier, the other considers ensembles of classifiers. For the former, several methods originate from approaches such as decision trees [7], Bayesian classification [27], neural networks [17], support vector machines [29], and k-nearest neighbours [1]. For the latter, various ensemble methods have been proposed, including Online Bagging & Boosting [22], Weighted Ensemble Classifiers [30, 20], Adapted One-vs-All Decision Trees (OVA) [12] and Meta-knowledge Ensemble [33]. For more details, please refer to [9, 10, 2, 6, 21].
These traditional streaming-data algorithms often assume that the data samples are described by the same set of features, while in many real streaming tasks the features often change. We want to emphasize that although concept drift occurs in streaming data, where the underlying data distribution changes over time [2, 10, 4], the number of features under concept drift never changes, which is different from our problem. Most studies related to changing features focus on feature selection and extraction [26, 35], and to the best of our knowledge, none of them consider the evolution of the feature set during the learning process.

Data stream mining is a hot research direction in the area of data mining, while online learning [38, 14] is a related topic from the area of machine learning. Online learning can also tackle the streaming data problem since it assumes that the data come in a streaming way. Online learning has been extensively studied under different settings, such as learning with experts [5] and online convex optimization [13, 28]. There are strong theoretical guarantees for online learning, and it usually uses regret or the number of mistakes to measure the performance of the learning procedure. However, most existing online learning algorithms are limited to the case where the feature set is fixed.

Other related topics involving multiple feature sets include multi-view learning [18, 19, 32], transfer learning [23, 24] and incremental attribute learning [11]. Although both our approaches and multi-view learning exploit the relation between different sets of features, there exists a fundamental difference: multi-view learning assumes that every sample is described by multiple feature sets simultaneously, whereas in FESL only a few samples in the feature-switching period have two sets of features, and no matter how many periods there are, the switching part involves only two sets of features. Transfer learning usually assumes that data are in batch mode; few transfer methods consider the streaming case where data arrive sequentially and cannot be stored completely. One exception is online transfer learning [34], in which data from both sets of features arrive sequentially. However, it assumes that all the feature spaces appear simultaneously during the whole learning process, while such an assumption does not hold in FESL. In incremental attribute learning, old sets of features do not vanish, or do not vanish entirely, while in FESL the old features vanish thoroughly when new sets of features come. The most related work is [15], which also handles evolving features in streaming data. Different from our setting, where there are overlapping periods, [15] handles situations in which there is no overlapping period but there are overlapping features. Thus, the technical challenges and solutions are different.

3 Preliminaries

We focus on both classification and regression tasks. On each round of the learning process, the algorithm observes an instance and gives its prediction. After the prediction has been made, the true label is revealed and the algorithm suffers a loss which reflects the discrepancy between the prediction and the ground truth. We define "feature space" in our paper as a set of features. That the feature space changes means that both the underlying distribution of the feature set and the number of features change.
Consider a process with three periods: in the first period, a large amount of data streams in from the old feature space; in the second period, named the overlapping period, a few data come from both the old and the new feature space; soon afterwards, in the third period, the data stream comes only from the new feature space. We call this whole process a cycle. As can be seen from Figure 1, each cycle involves merely two feature spaces. Thus, we only need to focus on one cycle, and it is easy to extend to the case with multiple cycles. Besides, we assume that the old features in one cycle vanish simultaneously, motivated by the example that in ecosystem protection all the sensors share the same expected lifespan and thus wear out at the same time. We will study the case where old features do not vanish simultaneously in future work.

[Figure 2: Specific illustration of one cycle: rounds 1, ..., T_1 - B with data from feature space S_1 only, rounds T_1 - B + 1, ..., T_1 with data from both S_1 and S_2, and rounds T_1 + 1, ..., T_1 + T_2 with data from S_2 only.]

Based on the above discussion, we only consider two feature spaces, denoted by S_1 and S_2, respectively. Suppose that in the overlapping period there are B rounds of instances from both S_1 and S_2. As can be seen from Figure 2, the process can be summarized as follows.

- For t = 1, ..., T_1 - B, in each round the learner observes a vector x_t^{S_1} \in R^{d_1} sampled from S_1, where d_1 is the number of features of S_1 and T_1 is the total number of rounds in S_1.
- For t = T_1 - B + 1, ..., T_1, in each round the learner observes two vectors x_t^{S_1} \in R^{d_1} and x_t^{S_2} \in R^{d_2} from S_1 and S_2, respectively, where d_2 is the number of features of S_2.
- For t = T_1 + 1, ..., T_1 + T_2, in each round the learner observes a vector x_t^{S_2} \in R^{d_2} sampled from S_2, where T_2 is the number of rounds in S_2.

Note that B is small, so we can omit the streaming data from S_2 on rounds T_1 - B + 1, ..., T_1, since they have a minor effect on training the model in S_2. We use \|x\| to denote the \ell_2-norm of a vector x \in R^{d_i}, i = 1, 2, and \langle \cdot, \cdot \rangle to denote the inner product. Let \Omega_1 \subseteq R^{d_1} and \Omega_2 \subseteq R^{d_2} be two sets of linear models that we are interested in. We define the projection \Pi_{\Omega_i}(b) = argmin_{a \in \Omega_i} \|a - b\|, i = 1, 2. We restrict our prediction function in the i-th feature space and t-th round to be linear, taking the form \langle w_{i,t}, x_t^{S_i} \rangle, where w_{i,t} \in R^{d_i}, i = 1, 2. The loss function \ell(w^\top x, y) is convex in its first argument.

Algorithm 1 Initialize
1: Initialize w_{1,1} \in \Omega_1 randomly, M_1 = 0, and M_2 = 0;
2: for t = 1, 2, ..., T_1 do
3:   Receive x_t^{S_1} \in R^{d_1} and predict f_t = w_{1,t}^\top x_t^{S_1} \in R; receive the target y_t \in R, and suffer loss \ell(f_t, y_t);
4:   Update w_{1,t} using (1), where \eta_t = 1/\sqrt{t};
5:   if t > T_1 - B then M_1 = M_1 + x_t^{S_2} (x_t^{S_1})^\top and M_2 = M_2 + x_t^{S_2} (x_t^{S_2})^\top;
6: M^* = M_2^{-1} M_1.
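A schematic rendering of Algorithm 1 in Python might look as follows. This is our sketch, assuming the square loss, a trivial projection \Pi_\Omega, zero initialization, and that M_2 is invertible (B at least d_2 in practice); stream_S1 is assumed to be a list of T_1 pairs (x1, y) and overlap_S2 a list of the last B instances from S_2.

```python
import numpy as np

def initialize(stream_S1, overlap_S2, d1, d2, T1, B):
    """Algorithm 1: run online gradient descent for w1 on S1 and accumulate
    the overlap statistics M1, M2 used for the mapping M* = M2^{-1} M1."""
    w1 = np.zeros(d1)                       # the paper initializes randomly
    M1 = np.zeros((d2, d1))
    M2 = np.zeros((d2, d2))
    for t, (x1, y) in enumerate(stream_S1, start=1):
        eta = 1.0 / np.sqrt(t)
        grad = 2.0 * (w1 @ x1 - y) * x1     # gradient of the square loss
        w1 = w1 - eta * grad                # Eq. (1) with a trivial projection
        if t > T1 - B:
            x2 = overlap_S2[t - (T1 - B) - 1]
            M1 += np.outer(x2, x1)          # accumulates x2 x1^T
            M2 += np.outer(x2, x2)          # accumulates x2 x2^T
    M_star = np.linalg.solve(M2, M1)        # step 6: M* = M2^{-1} M1
    return w1, M_star
```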
The major limitation of the baseline algorithm mentioned above is that the model learned on rounds 1, . . . , T1 is ignored on rounds T1 + 1, . . . , T1 + T2 . The reason is that from rounds t > T1 , we cannot observe data from feature space S1 , and thus the model w1,T1 , which operates in S1 , cannot be used directly. To address this challenge, we assume there is a certain relationship ? : Rd2 ? Rd1 between the two feature spaces, and we try to discover it in the overlapping period. There are several methods to learn a relationship between two sets of features including multivariate regression [16], streaming multi-label learning [25], etc. In our setting, since the overlapping period is very short, it is unrealistic to learn a complex relationship between the two spaces. Instead, we use a linear mapping to approximate ?. Assume the coefficient matrix of the linear mapping is M , then during rounds T1 ? B + 1, . . . , T1 , the estimation of M can be based on least squares min XT1 M ?Rd2 ?d1 t=T1 ?B+1 kxSt 1 ? M > xSt 2 k22 . The optimal solution M? to the above problem is given by M? = T1 X > xSt 2 xSt 2 !?1 T1 X > xSt 2 xSt 1 ! . t=T1 ?B+1 t=T1 ?B+1 Then if we only observe an instance xSt 2 ? Rd2 from S2 , we can recover an instance in S1 by ?(xS2 ) ? Rd1 , to which w1,T1 can be applied. Based on this idea, we will make two changes to the baseline algorithm: ? During rounds T1 ?B +1, . . . , T1 , we will learn a relationship ? from (xST11 ?B+1 , xST12 ?B+1 ), . . . , (xST11 , xST12 ). ? From rounds t > T1 , we will keep on updating w1,t using the recovered data ?(xSt 2 ) and predict the target by utilizing the predictions of w1,t and w2,t . In round t > T1 , the learner can calculate two base predictions based on models w1,t and w2,t : S2 > S2 f1,t = w> 1,t (?(xt )) and f2,t = w2,t xt . By utilizing the two base predictions in each round, we propose two methods, both of which are able to follow the better base prediction empirically and theoretically. The process to obtain the relationship mapping ? and w1,T1 during rounds 1, . . . , T1 are concluded in Algorithm 1. 4 Algorithm 2 FESL-c(ombination) 1: 2: 3: 4: 5: 6: 7: 8: 4.1 Initialize ? and w1,T1 during 1, . . . , T1 using Algorithm 1; ?1,T1 = ?2,T1 = 12 ; Initialize w2,T1 +1 randomly and w1,T1 +1 by w1,T1 ; for t = T1 + 1, T1 + 2, . . . , T1 + T2 do S2 S2 > S2 2 Receive xS and predict f1,t = w> 1,t (?(xt )) and f2,t = w2,t xt ; t ? R Predict pbt ? R using (2), then receivep the target yt ? R, and suffer loss `(b pt , yt ); Update weights using (3) where ? = 8(ln 2)/T2 ; ? Update w1,t and w2,t using (4) and (1) respectively where ?t = 1/ t ? T1 ; Weighted Combination We first propose an ensemble method by combining predictions with weights based on exponential of the cumulative loss [5]. The prediction at time t is the weighted average of all the base predictions: pbt = ?1,t f1,t + ?2,t f2,t (2) where ?i,t is the weight of the i-th base prediction. With the previous loss of each base model, we can update the weights of the two base models as follows: ?i,t e??`(fi,t ,yt ) ?i,t+1 = P2 , i = 1, 2, ??`(fj,t ,yt ) j=1 ?j,t e (3) where ? is a tuned parameter. The updating rule of the weights shows that if the loss of one of the models on previous round is large, then its weight will decrease in an exponential rate in next round, which is reasonable and can derive a good theoretical result shown in Theorem 1. Algorithm 2 summarizes our first approach for FESL named as FESL-c(ombination). 
We first learn a model w1,T1 using online gradient descent on rounds 1, . . . , T1 , during which, we also learn a relationship ? for t = T1 ? B + 1, . . . , T1 . For t = T1 + 1, . . . , T1 + T2 , we learn a model w2,t on each round and keep updating w1,t on the recovered data ?(xSt 2 ) showed in (4) where ?t is a varied step size:   S2 w1,t+1 = ??i w1,t ? ?t ?`(w> (?(x )), y ) . (4) t t 1,t Then we combine the predictions of the two models by weights calculated in (3). Analysis In this paragraph, we borrow the regret from online learning to measure the performance of FESL-c. Specifically, we give a loss bound as follows which shows that the performance will be improved with assistance of the old feature space. For the sake of soundness, we put the proof of our theorems in the supplementary file. We define that LS1 and LS2 are two cumulative losses suffered by base models on rounds T1 + 1, . . . , T1 + T2 , LS1 = TX 1 +T2 `(f1,t , yt ), LS2 = t=T1 +1 TX 1 +T2 `(f2,t , yt ), (5) t=T1 +1 and LS12 is the cumulative loss suffered by our methods: LS12 = PT1 +T2 t=T1 +1 `(b pt , yt ). Then we have: Theorem 1. Assume that the loss function ` is convex in its first argument and that it takes value S12 in [0,1]. with parameter p For all T2 > 1 and for all yt ? Y with t = T1 + 1, . . . , T1 + T2 , L ?t = 8(ln 2)/T2 satisfies p LS12 ? min(LS1 , LS2 ) + (T2 /2) ln 2 (6) This theorem implies that the cumulative loss LS12 of Algorithm 2 over rounds T1 + p1, . . . , T1 + T2 is comparable to the minimum of LS1 and LS2 . Furthermore, we define C = (T2 /2) ln 2. If LS2 ? LS1 > C, it is easy to verify that LS12 is smaller than LS2 . In summary, on rounds T1 + 1, . . . , T1 + T2 , when w1,t is better than w2,t to certain degree, the model with assistance from S1 is better than that without assistance. 5 Algorithm 3 FESL-s(election) 1: 2: 3: 4: 5: 6: 7: 8: 4.2 Initialize ? and w1,T1 during 1, . . . , T1 using Algorithm 1; ?1,T1 = ?2,T1 = 12 ; Initialize w2,T1 +1 randomly and w1,T1 +1 by w1,T1 ; for t = T1 + 1, T1 + 2, . . . , T1 + T2 do S2 S2 > S2 2 Receive xS and predict f1,t = w> 1,t (?(xt )) and f2,t = w2,t xt ; t ? R Draw a model wi,t according to the distribution (7) and predict pbt = fi,t according to the model; Receive the target yt ? R, and suffer loss `(b pt , yt ); Update the weights ? using (8); Update w1,t and w2,t using (4) and (1) respectively, where ?t = 1/ t ? T1 . Dynamic Selection The combination approach mentioned in the above subsection combines several base models to improve the overall performance. Generally, combination of several classifiers performs better than selecting only one single classifier [36]. However, it requires that the performance of base models should not be too bad, for example, in Adaboost the accuracy of the base classifiers should be no less than 0.5 [8]. Nevertheless, in our FESL problem, on rounds T1 + 1, . . . , T1 + T2 , w2,t cannot satisfy the requirement in the beginning due to insufficient training data and w1,t may become worse when more and more data come causing a cumulation of recovered error. Thus, it may not be appropriate to combine the two models all the time, whereas dynamically selecting the best single may be a better choice. Hence we propose a method based on a new strategy, i.e., dynamic selection, similar to the Dynamic Classifier Selection [36] which only uses the best single model rather than combining both of them in each round. Note that, though we only select one of the models, we retain and utilize both of them to update their weights. 
So it is still an ensemble method. The basic idea of dynamic selection is to select the model of larger weight with higher probability. Algorithm 3 summarizes our second approach for FESL named as FESL-s(election). Specifically, the steps in Algorithm 3 on rounds 1, . . . , T1 is the same as that in Algorithm 2. For t = T1 + 1, . . . , T1 + T2 , we still update weights of each model. However, when doing prediction, we do not combine all the models? prediction, we adopt the result of the ?best" model?s according to the distribution of their weights ?i,t?1 pi,t = P2 i = 1, 2. (7) j=1 ?j,t?1 To track the best model, we have a different way of updating weights which is given as follows [5]. Wt + (1 ? ?)vi,t , i = 1, 2, (8) vi,t = ?i,t?1 e??`(fi,t ,yt ) , i = 1, 2, ?i,t = ? 2 p where we define Wt = v1,t + v2,t , ? = 1/(T2 ? 1), ? = 8/T2 (2 ln 2 + (T2 ? 1)H(1/(T2 ? 1))) and H(x) = ?x ln x ? (1 ? x) ln(1 ? x) is the binary entropy function defined for x ? (0, 1). Analysis From rounds t > T1 , the first model w1,t would become worse due to the cumulative recovered error while the second model will become better by the large amount of coming data. Since w1,t is initialized by w1,T 1 which is learnt from the old feature space and w2,t is initialized randomly, it is reasonable to assume that w1,t is better than w2,t in the beginning, but inferior to w2,t after sufficient large number of rounds. Let s be the round after which w1,t is worse than w2,t . We define Ps PT2 Ls = t=T1 +1 `(f1,t , yt ) + t=s+1 `(f2,t , yt ), we can verify that min T1 +1?s?T1 +T2 Ls ? min LSi . i=1,2 (9) Then a more ambitious goal is to compare the proposed algorithm against w1,t from rounds T1 + 1 to s, and against the w2,t from rounds s to T1 + T2 , which motivates us to study the following performance measure LS12 ? Ls . Because the exact value of s is generally unknown, we need to bound the worst-case LS12 ? minT1 +1?s?T1 +T2 Ls . An upper bound of LS12 is given as follows. Theorem 2. For all T2 > 1, if the model is run with parameter ? = 1/(T2 ? 1) and ? = p 8/T2 (2 ln 2 + (T2 ? 1)H(1/T2 ? 1)), then s   T2 H(?) S12 s L ? min L + 2 ln 2 + (10) T1 +1?s?T1 +T2 2 ? where H(x) = ?x ln x ? (1 ? x) ln(1 ? x) is the binary entropy function. 6 Table 1: Detail description of datasets: let n be the number of examples, and d1 and d2 denote the dimensionality of the first and second feature space, respectively. The first 9 datasets in the left column are synthetic datasets, ?r.EN-GR" means the dataset EN-GR comes from Reuter and ?RFID" is the real dataset. Dataset Dataset Dataset n d1 d2 n d1 d2 n d1 d2 Australian 690 42 29 r.EN-FR 18,758 21,531 24,892 r.GR-IT 29,953 34,279 15,505 Credit-a 653 15 10 r.EN-GR 18,758 21,531 34,215 r.GR-SP 29,953 34,279 11,547 1,000 20 14 r.EN-IT 18,758 21,531 15,506 r.IT-EN 24,039 15,506 21,517 Credit-g Diabetes 768 8 5 r.EN-SP 18,758 21,531 11,547 r.IT-FR 24,039 15,506 24,892 DNA 940 180 125 r.FR-EN 26,648 24,893 21,531 r.IT-GR 24,039 15,506 34,278 1,000 59 41 r.FR-GR 26,648 24,893 34,287 r.IT-SP 24,039 15,506 11,547 German Kr-vs-kp 3,196 36 25 r.FR-IT 26,648 24,893 15,503 r.SP-EN 12,342 11,547 21,530 3,175 60 42 r.FR-SP 26,648 24,893 11,547 r.SP-FR 12,342 11,547 24,892 Splice Svmguide3 1,284 22 15 r.GR-EN 29,953 34,279 21,531 r.SP-GR 12,342 11,547 34,262 940 78 72 r.GR-FR 29,953 34,279 24,892 r.SP-IT 12,342 11,547 15,500 RFID According to Theorem 2 we know that LS12 is comparable to minT1 +1?s?T1 +T2 Ls . 
5 Experiments

In this section, we first introduce the datasets we use. We want to emphasize that we collected one real dataset ourselves, since our setting of feature evolution is relatively novel and the required datasets are not yet widely available. We then introduce the compared methods and settings. Finally, the experimental results are given.

5.1 Datasets

We conduct our experiments on 30 datasets consisting of 9 synthetic datasets, 20 Reuter datasets, and 1 real dataset. To generate synthetic data, we randomly choose some datasets from different domains, including economy and biology,¹ whose scales vary from 690 to 3,196. They have only one feature space at first. We artificially map the original datasets into another feature space by random Gaussian matrices, so that we have data from both feature space S_1 and feature space S_2. Since the original data are in batch mode, we manually make them arrive sequentially. In this way, the synthetic data are completely generated. We also conduct our experiments on 20 datasets from Reuter [3]. They are multi-view datasets with large scales varying from 12,342 to 29,963. Each dataset has two views, which represent two different languages, respectively. We regard the two views as the two feature spaces. They do have two feature spaces, but the original data are in batch mode, so we artificially make them arrive in a streaming way.

¹ The datasets can be found at http://archive.ics.uci.edu/ml/.

We use the RFID technique to collect the real data, which has 450 instances from S_1 and S_2, respectively. RFID is widely used for moving-goods detection [31]. In our case, we utilize the RFID technique to predict the location coordinates of goods moving with an attached RFID tag. Concretely, we arranged several RFID aerials around an indoor area. In each round, each RFID aerial received the tag signals as the goods with the tag moved, and at the same time we recorded the goods' coordinates. Before the aerials expired, we arranged new aerials beside the old ones to avoid a situation without aerials, so in this overlapping period we have data from both the old and the new feature spaces. After the old aerials expired, we continued to use the new ones to receive signals, and then we only have data from feature space S_2. Thus the RFID data fully satisfy our assumptions. The details of all the datasets are presented in Table 1.

Table 1: Detailed description of the datasets: n is the number of examples, and d1 and d2 denote the dimensionality of the first and second feature space, respectively. The first 9 datasets plus RFID in the left column are the synthetic and real datasets; "r.EN-GR" means the dataset EN-GR comes from Reuter; "RFID" is the real dataset.

Dataset      n      d1    d2    | Dataset   n       d1      d2     | Dataset   n       d1      d2
Australian   690    42    29    | r.EN-FR   18,758  21,531  24,892 | r.GR-IT   29,953  34,279  15,505
Credit-a     653    15    10    | r.EN-GR   18,758  21,531  34,215 | r.GR-SP   29,953  34,279  11,547
Credit-g     1,000  20    14    | r.EN-IT   18,758  21,531  15,506 | r.IT-EN   24,039  15,506  21,517
Diabetes     768    8     5     | r.EN-SP   18,758  21,531  11,547 | r.IT-FR   24,039  15,506  24,892
DNA          940    180   125   | r.FR-EN   26,648  24,893  21,531 | r.IT-GR   24,039  15,506  34,278
German       1,000  59    41    | r.FR-GR   26,648  24,893  34,287 | r.IT-SP   24,039  15,506  11,547
Kr-vs-kp     3,196  36    25    | r.FR-IT   26,648  24,893  15,503 | r.SP-EN   12,342  11,547  21,530
Splice       3,175  60    42    | r.FR-SP   26,648  24,893  11,547 | r.SP-FR   12,342  11,547  24,892
Svmguide3    1,284  22    15    | r.GR-EN   29,953  34,279  21,531 | r.SP-GR   12,342  11,547  34,262
RFID         940    78    72    | r.GR-FR   29,953  34,279  24,892 | r.SP-IT   12,342  11,547  15,500

5.2 Compared Approaches and Settings

We compare our FESL-c and FESL-s with three approaches. One is mentioned in Section 3: once the feature space changes, online gradient descent is invoked from scratch; we call it NOGD (Naive Online Gradient Descent). The other two approaches utilize the model learned from feature space S_1 by online gradient descent to make predictions on the recovered data.
The difference between them is that one keeps updating with the recovered data while the other does not. The one which keeps updating is called Updating Recovered Online Gradient Descent (ROGD-u), and the one which stays fixed is called Fixed Recovered Online Gradient Descent (ROGD-f). We evaluate the empirical performance of the proposed approaches on classification and regression tasks on rounds T_1 + 1, ..., T_1 + T_2. To verify that our analysis is reasonable, we present the trend of the average cumulative loss. Concretely, at each time t_0, the loss \bar{\ell}_{t_0} of every method is the average of the cumulative loss over 1, ..., t_0, namely \bar{\ell}_{t_0} = (1/t_0) \sum_{t=1}^{t_0} \ell_t. We also present the classification performance over all instances on rounds T_1 + 1, ..., T_1 + T_2 on the synthetic and Reuter data. The performance of each approach is averaged over 10 independent runs on the synthetic data. Due to the large scale of the Reuter data, we conduct only 3 independent runs on the Reuter data and report the average results.

The parameters we need to set are the number of instances in the overlapping period, B; the numbers of instances in S_1 and S_2, T_1 and T_2; and the step size, \eta_t. For all baseline methods and our methods, the parameters are the same. In our experiments, we set B to 5 or 10 for synthetic data, 50 for Reuter data, and 40 for RFID data. We set T_1 and T_2 to be approximately half of the number of instances, and \eta_t to be 1/(c\sqrt{t}), where c is searched in the range {1, 10, 50, 100, 150}. The detailed setting of c for each dataset is presented in the supplementary file.

5.3 Results

Here we present only part of the loss-trend results; the other results are presented in the supplementary file. Figure 3 gives the trend of the average cumulative loss: (a-d) are the results on synthetic data, (e-h) are the results on Reuter data, and (i) is the result on the real data. The smaller the average cumulative loss, the better.

[Figure 3: The trend of average cumulative loss for the three baseline methods and the proposed methods; panels (a) australian, (b) credit-a, (c) credit-g, (d) diabetes, (e) r.EN-SP, (f) r.FR-SP, (g) r.GR-EN, (h) r.IT-FR, (i) RFID. The smaller the cumulative loss, the better. The average cumulative loss of our methods is comparable to the best of the baseline methods at any time, and on 8 of the 9 datasets it is smaller.]

From the experimental results, we have the following observations. First, all the curves with circle marks, representing NOGD, decrease rapidly, which conforms to the fact that NOGD becomes better and better on rounds T_1 + 1, ..., T_1 + T_2 as more data come. Besides, the curves with star marks, representing ROGD-u, also decline, but not as noticeably, since ROGD-u already learned well on rounds 1, ..., T_1 and tends to have converged, so updating with more recovered data does not bring much benefit. Moreover, the curves with plus marks, representing ROGD-f, do not drop and may even rise instead, which is also reasonable: the model is fixed, so any recovery error makes it perform worse.
Lastly, our methods are based on NOGD and ROGD-u, so their average cumulative losses also decrease. As can be seen from Figure 3, the average cumulative loss of our methods is comparable to the best of the baseline methods on all datasets and is smaller on 8 of them; FESL-s exhibits slightly smaller average cumulative loss than FESL-c. You may notice that NOGD is always worse than ROGD-u on the synthetic and real data, while on the Reuter data NOGD becomes better than ROGD-u after a few rounds. This is because on the synthetic and real data we do not have enough rounds for all methods to converge, while on the Reuter data the large number of instances ensures the convergence of every method. When all methods converge, NOGD is better than the other baseline methods, since it always receives the real instances, while ROGD-u and ROGD-f receive the recovered instances, which may contain recovery error. As can be seen from (e-h), in the first few rounds our methods are comparable to ROGD-u; once NOGD becomes better than ROGD-u, our methods are comparable to NOGD, which shows that our methods are comparable to the best baseline at all times. Moreover, FESL-s performs worse than FESL-c in the beginning, but afterwards becomes slightly better than FESL-c.

Table 2 shows the accuracy results on the synthetic and Reuter datasets.

Table 2: Accuracy (mean ± variance) on synthetic and Reuter datasets. The larger the better; the best results among all methods are in bold in the original.

Dataset     NOGD        ROGD-u      ROGD-f      FESL-c      FESL-s
australian  .767±.009   .849±.009   .809±.025   .849±.009   .849±.009
credit-a    .811±.006   .826±.018   .785±.051   .827±.014   .831±.009
credit-g    .659±.010   .733±.006   .716±.011   .733±.006   .733±.006
diabetes    .650±.002   .652±.009   .651±.006   .652±.007   .652±.009
dna         .610±.013   .691±.023   .608±.064   .691±.023   .692±.021
german      .684±.006   .700±.002   .700±.002   .700±.001   .703±.004
kr-vs-kp    .612±.005   .621±.036   .538±.024   .626±.028   .630±.016
splice      .568±.005   .612±.022   .567±.057   .612±.022   .612±.022
svmguide3   .680±.010   .779±.010   .748±.012   .779±.010   .778±.010
r.EN-FR     .902±.004   .849±.003   .769±.069   .903±.003   .902±.005
r.EN-GR     .867±.005   .836±.007   .802±.036   .870±.002   .870±.003
r.EN-IT     .858±.014   .847±.014   .831±.018   .861±.010   .863±.013
r.EN-SP     .900±.002   .848±.002   .825±.001   .901±.001   .899±.002
r.FR-EN     .858±.007   .776±.009   .754±.012   .858±.007   .858±.007
r.FR-GR     .869±.004   .774±.019   .753±.021   .870±.004   .868±.003
r.FR-IT     .874±.005   .780±.022   .744±.040   .874±.005   .873±.005
r.FR-SP     .872±.001   .778±.022   .735±.013   .872±.001   .871±.002
r.GR-EN     .907±.000   .850±.007   .801±.035   .907±.001   .906±.000
r.GR-FR     .898±.001   .827±.009   .802±.023   .898±.001   .898±.000
r.GR-IT     .847±.011   .851±.017   .816±.006   .850±.018   .851±.017
r.GR-SP     .902±.001   .845±.003   .797±.012   .902±.001   .902±.001
r.IT-EN     .854±.003   .760±.006   .730±.024   .856±.002   .854±.003
r.IT-FR     .863±.002   .753±.012   .730±.020   .864±.002   .862±.003
r.IT-GR     .849±.004   .736±.022   .702±.012   .849±.004   .846±.004
r.IT-SP     .839±.006   .753±.014   .726±.005   .839±.007   .839±.006
r.SP-EN     .926±.002   .860±.005   .814±.021   .926±.002   .924±.001
r.SP-FR     .876±.005   .873±.017   .833±.042   .876±.014   .878±.012
r.SP-GR     .871±.013   .827±.025   .810±.026   .873±.013   .873±.013
r.SP-IT     .928±.002   .861±.005   .826±.005   .928±.003   .927±.002

We can see that for the synthetic datasets, FESL-s outperforms the other methods on 8 datasets, FESL-c is best on 5, and ROGD-u is also best on 5. NOGD performs worst since it starts from scratch.
ROGD-u is better than NOGD and ROGD-f because it exploits the better-trained old model from the old feature space and keeps updating with recovered instances. Our two methods are built on NOGD and ROGD-u; we can see that they follow the best baseline method or even outperform it. For the Reuter datasets, FESL-c outperforms the other methods on 17 datasets, FESL-s is best on 9, NOGD on 8, and ROGD-u on 1. In the Reuter datasets, the period in the new feature space is longer than in the synthetic datasets, so NOGD can update itself into a good model, whereas ROGD-u updates itself with recovered data, so its model becomes worse as the recovery error accumulates. ROGD-f does not update itself, and thus performs worst. Our two methods take advantage of NOGD and ROGD-u and perform better than both.

6 Conclusion

In this paper, we focus on a new setting: feature evolvable streaming learning. Our key observation is that in learning with streaming data, old features can vanish and new ones can occur. To make the problem tractable, we assume there is an overlapping period that contains samples from both feature spaces. We then learn a mapping from new features to old features, so that both the new and old models can be used for prediction. In our first approach, FESL-c, we ensemble the two predictions by learning weights adaptively. Theoretical results show that the assistance of the old feature space can improve the performance of learning with streaming data. Furthermore, we propose FESL-s to dynamically select the best model, with a better performance guarantee.

Acknowledgement

This research was supported by NSFC (61333014, 61603177), JiangsuSF (BK20160658), Huawei Fund (YBN2017030027) and the Collaborative Innovation Center of Novel Software Technology and Industrialization.

References

[1] C. C. Aggarwal, J. Han, J. Wang, and P. S. Yu. A framework for on-demand classification of evolving data streams. IEEE Transactions on Knowledge and Data Engineering, 18:577-589, 2006.
[2] C. C. Aggarwal. Data streams: An overview and scientific applications. In Scientific Data Mining and Knowledge Discovery - Principles and Foundations, pages 377-397. Springer, 2010.
[3] M.-R. Amini, N. Usunier, and C. Goutte. Learning from multiple partially observed views - an application to multilingual text categorization. In Advances in Neural Information Processing Systems 22, pages 28-36, 2009.
[4] A. Bifet, G. Holmes, R. Kirkby, and B. Pfahringer. MOA: Massive online analysis. Journal of Machine Learning Research, 11:1601-1604, 2010.
[5] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[6] J. de Andrade Silva, E. R. Faria, R. C. Barros, E. R. Hruschka, A. C. P. L. F. de Carvalho, and J. Gama. Data stream clustering: A survey. ACM Computing Surveys.
[7] P. M. Domingos and G. Hulten. Mining high-speed data streams. In Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 71-80, 2000.
[8] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119-139, 1997.
[9] M. M. Gaber, A. B. Zaslavsky, and S. Krishnaswamy. Mining data streams: A review. SIGMOD Record, 34:18-26, 2005.
[10] J. Gama and P. P. Rodrigues. An overview on mining data streams. In Foundations of Computational Intelligence, pages 29-45. Springer, 2009.
[11] S. U. Guan and S. Li.
Incremental learning with respect to new incoming input attributes. Neural Processing Letters, 14:241-260, 2001.
[12] S. Hashemi, Y. Yang, Z. Mirzamomen, and M. R. Kangavari. Adapted one-versus-all decision trees for data stream classification. IEEE Transactions on Knowledge and Data Engineering, 21:624-637, 2009.
[13] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69:169-192, 2007.
[14] S. Hoi, J. Wang, and P. Zhao. LIBOL: A library for online learning algorithms. Journal of Machine Learning Research, 15:495-499, 2014.
[15] C. Hou and Z.-H. Zhou. One-pass learning with incremental and decremental features. ArXiv e-prints, arXiv:1605.09082, 2016.
[16] B. M. Golam Kibria. Bayesian statistics and marketing. Technometrics, 49:230, 2007.
[17] D. Leite, P. Costa Jr., and F. Gomide. Evolving granular classification neural networks. In Proceedings of International Joint Conference on Neural Networks 2009, pages 1736-1743, 2009.
[18] S.-Y. Li, Y. Jiang, and Z.-H. Zhou. Partial multi-view clustering. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, pages 1968-1974, 2014.
[19] I. Muslea, S. Minton, and C. Knoblock. Active + semi-supervised learning = robust multi-view learning. In Proceedings of the 19th International Conference on Machine Learning, pages 435-442, 2002.
[20] H.-L. Nguyen, Y.-K. Woon, W. K. Ng, and L. Wan. Heterogeneous ensemble for feature drifts in data streams. In Proceedings of the 16th Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 1-12, 2012.
[21] H.-L. Nguyen, Y.-K. Woon, and W. K. Ng. A survey on data stream clustering and classification. Knowledge and Information Systems, 45:535-569, 2015.
[22] N. C. Oza. Online bagging and boosting. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics 2005, pages 2340-2345, 2005.
[23] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22:1345-1359, 2010.
[24] R. Raina, A. Battle, H. Lee, B. Packer, and A. Ng. Self-taught learning: Transfer learning from unlabeled data. In Proceedings of the 24th International Conference on Machine Learning, pages 759-766, 2007.
[25] J. Read, A. Bifet, G. Holmes, and B. Pfahringer. Streaming multi-label classification. In Proceedings of the 2nd Workshop on Applications of Pattern Analysis, pages 19-25, 2011.
[26] K. Samina, K. Tehmina, and N. Shamila. A survey of feature selection and feature extraction techniques in machine learning. In Proceedings of Science and Information Conference 2014, pages 372-378, 2014.
[27] T. Seidl, I. Assent, P. Kranen, R. Krieger, and J. Herrmann. Indexing density models for incremental learning and anytime classification on data streams. In Proceedings of the 12th International Conference on Extending Database Technology, pages 311-322, 2009.
[28] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4:107-194, 2012.
[29] I. W. Tsang, A. Kocsor, and J. T. Kwok. Simpler core vector machines with enclosing balls. In Proceedings of the 24th International Conference on Machine Learning, pages 911-918, 2007.
[30] H. Wang, W. Fan, P. S. Yu, and J. Han. Mining concept-drifting data streams using ensemble classifiers. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 226-235, 2003.
[31] C. Wang, L. Xie, W. Wang, T. Xue, and S. Lu.
Moving tag detection via physical layer analysis for large-scale RFID systems. In Proceedings of the 35th Annual IEEE International Conference on Computer Communications, pages 1?9, 2016. [32] C. Xu, D. Tao, and C. Xu. A survey on multi-view learning. ArXiv e-prints, arXiv:1304.5634, 2013. [33] P. Zhang, J. Li, P. Wang, B. J. Gao, X. Zhu, and L. Guo. Enabling fast prediction for ensemble models on data streams. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 177?185, 2011. [34] P. Zhao, S. Hoi, J. Wang, and B. Li. Online transfer learning. Artificial Intelligence, 216:76?102, 2014. [35] G. Zhou, K. Sohn, and H. Lee. Online incremental feature learning with denoising autoencoders. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, pages 1453?1461, 2012. [36] Z.-H. Zhou. Ensemble methods: Foundations and algorithms. CRC press, 2012. [37] Z.-H. Zhou. Learnware: On the future of machine learning. Frontiers of Computer Science, 10:589?590, 2016. [38] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, pages 928?936, 2003. 11
Online Convex Optimization with Stochastic Constraints

Hao Yu, Michael J. Neely, Xiaohan Wei
Department of Electrical Engineering, University of Southern California*
{yuhao,mjneely,xiaohanw}@usc.edu

Abstract

This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are i.i.d. generated at each round and are disclosed to the decision maker only after the decision is made. This formulation arises naturally when decisions are restricted by stochastic environments or deterministic environments with noisy observations. It also includes many important problems as special cases, such as OCO with long term constraints, stochastic constrained convex optimization, and deterministic constrained convex optimization. To solve this problem, this paper proposes a new algorithm that achieves O(√T) expected regret and constraint violations and O(√T log(T)) high probability regret and constraint violations. Experiments on a real-world data center scheduling problem further verify the performance of the new algorithm.

1 Introduction

Online convex optimization (OCO) is a multi-round learning process with arbitrarily-varying convex loss functions, where the decision maker has to choose a decision x(t) ∈ X before observing the corresponding loss function f^t(·). For a fixed time horizon T, define the regret of a learning algorithm with respect to the best fixed decision in hindsight (with full knowledge of all loss functions) as

    regret(T) = Σ_{t=1}^T f^t(x(t)) − min_{x ∈ X} Σ_{t=1}^T f^t(x).

The goal of OCO is to develop dynamic learning algorithms such that the regret grows sub-linearly with respect to T. The setting of OCO is introduced in a series of works [3, 14, 9, 29] and is formalized in [29]. OCO has gained a considerable amount of research interest recently, with various applications such as online regression, prediction with expert advice, online ranking, online shortest paths, and portfolio selection. See [23, 11] for more applications and background. In [29], Zinkevich shows that using an online gradient descent (OGD) update given by

    x(t+1) = P_X[x(t) − γ ∇f^t(x(t))]    (1)

where γ is a step size, ∇f^t(·) is a subgradient of f^t(·), and P_X[·] is the projection onto the set X, can achieve O(√T) regret. Hazan et al. in [12] show that better regret is possible under the assumption that each loss function is strongly convex, but O(√T) is the best possible if no additional assumption is imposed. It is obvious that Zinkevich's OGD in (1) requires full knowledge of the set X and low complexity of the projection P_X[·]. However, in practice, the constraint set X, which is often described by many functional inequality constraints, can be time varying and may not be fully disclosed to the decision maker.

* This work is supported in part by grant NSF CCF-1718477.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In [18], Mannor et al. extend OCO by considering time-varying constraint functions g^t(x) which can arbitrarily vary and are only disclosed to us after each x(t) is chosen. In this setting, Mannor et al. in [18] explore the possibility of designing learning algorithms such that the regret grows sub-linearly and lim sup_{T→∞} (1/T) Σ_{t=1}^T g^t(x(t)) ≤ 0, i.e., the (cumulative) constraint violation Σ_{t=1}^T g^t(x(t)) also grows sub-linearly. Unfortunately, Mannor et al. in [18] prove that this is impossible even when both f^t(·) and g^t(·) are simple linear functions.

Given the impossibility results shown by Mannor et al. in [18], this paper considers OCO where the constraint functions g^t(x) are not arbitrarily varying but independently and identically distributed (i.i.d.), generated from an unknown probability model (while the functions f^t(x) are still arbitrarily varying and possibly non-i.i.d.). More specifically, this paper considers online convex optimization (OCO) with the stochastic constraint set X = {x ∈ X_0 : E_ω[g_k(x; ω)] ≤ 0, k ∈ {1, 2, ..., m}}, where X_0 is a known fixed set; the expressions of the stochastic constraints E_ω[g_k(x; ω)] (involving expectations with respect to ω from an unknown distribution) are unknown; and the subscripts k ∈ {1, 2, ..., m} indicate the possibility of multiple functional constraints. In OCO with stochastic constraints, the decision maker receives the loss function f^t(x) and i.i.d. constraint function realizations g_k^t(x) = g_k(x; ω(t)) at each round t. However, the expressions of g_k^t(·) and f^t(·) are disclosed to the decision maker only after the decision x(t) ∈ X_0 is chosen. This setting arises naturally when decisions are restricted by stochastic environments or deterministic environments with noisy observations. For example, if we consider online routing (with link capacity constraints) in wireless networks [18], each link capacity is not a fixed constant (as in wireline networks) but an i.i.d. random variable, since wireless channels are stochastically time-varying by nature [25]. OCO with stochastic constraints also covers important special cases such as OCO with long term constraints [16, 5, 13], stochastic constrained convex optimization [17], and deterministic constrained convex optimization [21].

Let x* = argmin_{x ∈ X_0 : E[g_k(x;ω)] ≤ 0, ∀k ∈ {1,2,...,m}} Σ_{t=1}^T f^t(x) be the best fixed decision in hindsight (knowing all loss functions f^t(x) and the distribution of the stochastic constraint functions g_k(x; ω)). Thus, x* minimizes the T-round cumulative loss and satisfies all stochastic constraints in expectation, which also implies lim sup_{T→∞} (1/T) Σ_{t=1}^T g_k^t(x*) ≤ 0 almost surely by the strong law of large numbers. Our goal is to develop dynamic learning algorithms that guarantee that both the regret Σ_{t=1}^T f^t(x(t)) − Σ_{t=1}^T f^t(x*) and the constraint violations Σ_{t=1}^T g_k^t(x(t)) grow sub-linearly. Note that Zinkevich's algorithm in (1) is not applicable to OCO with stochastic constraints since X is unknown, and it can happen that X(t) = {x ∈ X_0 : g_k(x; ω(t)) ≤ 0, ∀k ∈ {1, 2, ..., m}} = ∅ for certain realizations ω(t), such that the projections P_X[·] or P_{X(t)}[·] required in (1) are not even well-defined.

Our Contributions: This paper solves online convex optimization with stochastic constraints. In particular, we propose a new learning algorithm that is proven to achieve O(√T) expected regret and constraint violations and O(√T log(T)) high probability regret and constraint violations. The proposed new algorithm also improves upon state-of-the-art results in the following special cases:

• OCO with long term constraints: This is a special case where each g_k^t(x) ≡ g_k(x) is known and does not depend on time. Note that X = {x ∈ X_0 : g_k(x) ≤ 0, ∀k ∈ {1, 2, ..., m}} can be complicated while X_0 might be a simple hypercube. To avoid the high complexity involved in the projection onto X as in Zinkevich's algorithm, the work in [16, 5, 13] develops low complexity algorithms that use projections onto the simpler set X_0 by allowing g_k(x(t)) > 0 in certain rounds but ensuring lim sup_{T→∞} (1/T) Σ_{t=1}^T g_k(x(t)) ≤ 0.
The best existing performance is O(T^{max{β, 1−β}}) regret and O(T^{1−β/2}) constraint violations, where β ∈ (0, 1) is an algorithm parameter [13]. This gives O(√T) regret with worse O(T^{3/4}) constraint violations, or O(√T) constraint violations with worse O(T) regret. In contrast, our algorithm, which only uses projections onto X_0 as shown in Lemma 1, can achieve O(√T) regret and O(√T) constraint violations simultaneously. Note that by adapting the methodology presented in this paper, our other work [27] developed a different algorithm that can only solve the special case problem "OCO with long term constraints" but can achieve O(√T) regret and O(1) constraint violations.

• Stochastic constrained convex optimization: This is a special case where each f^t(x) is i.i.d. generated from an unknown distribution. This problem has many applications in operations research and machine learning, such as Neyman-Pearson classification and risk-mean portfolio. The work [17] develops a (batch) offline algorithm that produces a solution with high probability performance guarantees only after sampling the problems sufficiently many times. That is, during the process of sampling, there are no performance guarantees. The work [15] proposes a stochastic approximation based (batch) offline algorithm for stochastic convex optimization with one single stochastic functional inequality constraint. In contrast, our algorithm is an online algorithm with online performance guarantees and can deal with an arbitrary number of stochastic constraints.

• Deterministic constrained convex optimization: This is a special case where each f^t(x) ≡ f(x) and g_k^t(x) ≡ g_k(x) are known and do not depend on time. In this case, the goal is to develop a fast algorithm that converges to a good solution (with a small error) within a few iterations; and our algorithm with O(√T) regret and constraint violations is equivalent to an iterative numerical algorithm with O(1/√T) convergence rate. Our algorithm is subgradient based and does not require smoothness or differentiability of the convex program. The primal-dual subgradient method considered in [19] has the same O(1/√T) convergence rate but requires an upper bound on the optimal Lagrange multipliers, which is usually unknown in practice.

2 Formulation and New Algorithm

Let X_0 be a known fixed compact convex set. Let f^t(x) be a sequence of arbitrarily-varying convex functions. Let g_k(x; ω(t)), k ∈ {1, 2, ..., m}, be sequences of functions that are i.i.d. realizations of the stochastic constraint functions ḡ_k(x) = E_ω[g_k(x; ω)], with random variable ω ∈ Ω from an unknown distribution. That is, the ω(t) are i.i.d. samples of ω. Assume that each f^t(·) is independent of all ω(τ) with τ ≥ t + 1, so that we are unable to predict future constraint functions based on knowledge of the current loss function. For each ω ∈ Ω, we assume g_k(x; ω) is convex with respect to x ∈ X_0. At the beginning of each round t, neither the loss function f^t(x) nor the constraint function realizations g_k(x; ω(t)) are known to the decision maker. However, the decision maker still needs to make a decision x(t) ∈ X_0 for round t; after that, f^t(x) and g_k(x; ω(t)) are disclosed to the decision maker at the end of round t. For convenience, we often suppress the dependence of each g_k(x; ω(t)) on ω(t) and write g_k^t(x) = g_k(x; ω(t)). Recall ḡ_k(x) = E_ω[g_k(x; ω)], where the expectation is with respect to ω. Define X = {x ∈ X_0 : ḡ_k(x) = E[g_k(x; ω)] ≤ 0, ∀k ∈ {1, 2, ..., m}}. We further define the stacked vector of the multiple functions g_1^t(x), ..., g_m^t(x) as g^t(x) = [g_1^t(x), ..., g_m^t(x)]^T, and define ḡ(x) = [E_ω[g_1(x; ω)], ..., E_ω[g_m(x; ω)]]^T. We use ‖·‖ to denote the Euclidean norm of a vector. Throughout this paper, we have the following assumptions:

Assumption 1 (Basic Assumptions).
• The loss functions f^t(x) and constraint functions g_k(x; ω) have bounded subgradients on X_0. That is, there exist D_1 > 0 and D_2 > 0 such that ‖∇f^t(x)‖ ≤ D_1 for all x ∈ X_0 and all t ∈ {0, 1, ...}, and ‖∇g_k(x; ω)‖ ≤ D_2 for all x ∈ X_0, all ω ∈ Ω and all k ∈ {1, 2, ..., m}.²
• There exists a constant G > 0 such that ‖g(x; ω)‖ ≤ G for all x ∈ X_0 and all ω ∈ Ω.
• There exists a constant R > 0 such that ‖x − y‖ ≤ R for all x, y ∈ X_0.

Assumption 2 (The Slater Condition). There exist ε > 0 and x̂ ∈ X_0 such that ḡ_k(x̂) = E_ω[g_k(x̂; ω)] ≤ −ε for all k ∈ {1, 2, ..., m}.

2.1 New Algorithm

Now consider the following algorithm, described in Algorithm 1. This algorithm chooses x(t+1) as the decision for round t+1 based on f^t(·) and g^t(·), without requiring f^{t+1}(·) or g^{t+1}(·). For each stochastic constraint function g_k(x; ω), we introduce Q_k(t) and call it a virtual queue, since its dynamic is similar to a queue dynamic.

Algorithm 1. Let V > 0 and α > 0 be constant algorithm parameters. Choose x(1) ∈ X_0 arbitrarily and let Q_k(1) = 0, ∀k ∈ {1, 2, ..., m}. At the end of each round t ∈ {1, 2, ...}, observe f^t(·) and g^t(·) and do the following:
• Choose x(t+1) that solves

    min_{x ∈ X_0}  V [∇f^t(x(t))]^T [x − x(t)] + Σ_{k=1}^m Q_k(t) [∇g_k^t(x(t))]^T [x − x(t)] + α ‖x − x(t)‖²    (2)

as the decision for the next round t+1, where ∇f^t(x(t)) is a subgradient of f^t(x) at the point x = x(t) and ∇g_k^t(x(t)) is a subgradient of g_k^t(x) at the point x = x(t).
• Update each virtual queue Q_k(t+1), ∀k ∈ {1, 2, ..., m}, via

    Q_k(t+1) = max{ Q_k(t) + g_k^t(x(t)) + [∇g_k^t(x(t))]^T [x(t+1) − x(t)], 0 },    (3)

where max{·, ·} takes the larger of the two elements.

The next lemma shows that the x(t+1) update in (2) can be implemented via a simple projection onto X_0.

Lemma 1. The x(t+1) update in (2) is given by x(t+1) = P_{X_0}[ x(t) − (1/(2α)) d(t) ], where d(t) = V ∇f^t(x(t)) + Σ_{k=1}^m Q_k(t) ∇g_k^t(x(t)) and P_{X_0}[·] is the projection onto the convex set X_0.

² The notation ∇h(x) is used to denote a subgradient of a convex function h at the point x; it is the same as the gradient whenever the gradient exists.

Proof. The projection is by definition min_{x ∈ X_0} ‖x − [x(t) − (1/(2α)) d(t)]‖², which is equivalent to (2).

2.2 Intuitions of Algorithm 1

Note that if there are no stochastic constraints g_k^t(x), i.e., X = X_0, then Algorithm 1 has Q_k(t) ≡ 0, ∀t, and becomes Zinkevich's algorithm with γ = V/(2α) in (1), since

    x(t+1) =(a) argmin_{x ∈ X_0} { V [∇f^t(x(t))]^T [x − x(t)] + α ‖x − x(t)‖² } =(b) P_{X_0}[ x(t) − (V/(2α)) ∇f^t(x(t)) ]    (4)

where (a) follows from (2) and (b) follows from Lemma 1 by noting that d(t) = V ∇f^t(x(t)). Call the minimized term in (4) the penalty. Thus, Zinkevich's algorithm minimizes the penalty term and is a special case of Algorithm 1 used to solve OCO over X_0.

Let Q(t) = [Q_1(t), ..., Q_m(t)]^T be the vector of virtual queue backlogs. Let L(t) = (1/2)‖Q(t)‖² be a Lyapunov function and define the Lyapunov drift

    Δ(t) = L(t+1) − L(t) = (1/2)[ ‖Q(t+1)‖² − ‖Q(t)‖² ].    (5)

The intuition behind Algorithm 1 is to choose x(t+1) to minimize an upper bound of the expression

    Δ(t)  [drift]  +  V [∇f^t(x(t))]^T [x − x(t)] + α ‖x − x(t)‖²  [penalty].    (6)

The intention to minimize the penalty is natural, since Zinkevich's algorithm (for OCO without stochastic constraints) minimizes the penalty, while the intention to minimize the drift is motivated by observing that g_k^t(x(t)) is accumulated into the queue Q_k(t+1) introduced in (3), so we intend to keep the queue backlogs small. The drift Δ(t) can be complicated and is in general non-convex. The next lemma (proven in Supplement 7.1) provides a simple upper bound on Δ(t) and follows directly from (3).

Lemma 2. At each round t ∈ {1, 2, ...}, Algorithm 1 guarantees

    Δ(t) ≤ Σ_{k=1}^m Q_k(t) ( g_k^t(x(t)) + [∇g_k^t(x(t))]^T [x(t+1) − x(t)] ) + (1/2)[G + √m D_2 R]²,    (7)

where m is the number of constraint functions, and D_2, G and R are defined in Assumption 1.

At the end of round t, Σ_{k=1}^m Q_k(t) g_k^t(x(t)) + (1/2)[G + √m D_2 R]² is a given constant that is not affected by the decision x(t+1). The algorithm decision in (2) is now transparent: x(t+1) is chosen to minimize the drift-plus-penalty expression (6), where Δ(t) is approximated by the bound in (7).

2.3 Preliminary Analysis and More Intuitions of Algorithm 1

The next lemma (proven in Supplement 7.2) relates the constraint violations to the virtual queue values and follows directly from (3).

Lemma 3. For any T ≥ 1, Algorithm 1 guarantees Σ_{t=1}^T g_k^t(x(t)) ≤ ‖Q(T+1)‖ + D_2 Σ_{t=1}^T ‖x(t+1) − x(t)‖, ∀k ∈ {1, 2, ..., m}, where D_2 is defined in Assumption 1.

Recall that a function h : X_0 → ℝ is said to be c-strongly convex if h(x) − (c/2)‖x‖² is convex over x ∈ X_0. It is easy to see that if q : X_0 → ℝ is a convex function, then for any constant c > 0 and any vector b, the function q(x) + (c/2)‖x − b‖² is c-strongly convex. Further, it is known that if h : X_0 → ℝ is a c-strongly convex function that is minimized at a point x_min ∈ X_0, then (see, for example, Corollary 1 in [28]):

    h(x_min) ≤ h(x) − (c/2)‖x − x_min‖²,  ∀x ∈ X_0.    (8)

Note that the expression involved in the minimization (2) in Algorithm 1 is strongly convex with modulus 2α and x(t+1) is chosen to minimize it. Thus, the next lemma follows.

Lemma 4. Let z ∈ X_0 be arbitrary. For all t ≥ 1, Algorithm 1 guarantees

    V [∇f^t(x(t))]^T [x(t+1) − x(t)] + Σ_{k=1}^m Q_k(t) [∇g_k^t(x(t))]^T [x(t+1) − x(t)] + α ‖x(t+1) − x(t)‖²
    ≤ V [∇f^t(x(t))]^T [z − x(t)] + Σ_{k=1}^m Q_k(t) [∇g_k^t(x(t))]^T [z − x(t)] + α ‖z − x(t)‖² − α ‖z − x(t+1)‖².

The next corollary follows by taking z = x(t) in Lemma 4 and is proven in Supplement 7.3.

Corollary 1. For all t ≥ 1, Algorithm 1 guarantees ‖x(t+1) − x(t)‖ ≤ V D_1/(2α) + (√m D_2/(2α)) ‖Q(t)‖.

The next corollary follows directly from Lemma 3 and Corollary 1 and shows that the constraint violations are ultimately bounded by the sequence ‖Q(t)‖, t ∈ {1, 2, ..., T+1}.

Corollary 2. For any T ≥ 1, Algorithm 1 guarantees Σ_{t=1}^T g_k^t(x(t)) ≤ ‖Q(T+1)‖ + V T D_1 D_2/(2α) + (√m D_2²/(2α)) Σ_{t=1}^T ‖Q(t)‖, ∀k ∈ {1, 2, ..., m}, where D_1 and D_2 are defined in Assumption 1.

This corollary further justifies why Algorithm 1 intends to minimize the drift Δ(t). As illustrated in the next section, controlled drift can often lead to boundedness of a stochastic process. Thus, the intuition behind minimizing the drift Δ(t) is to yield small ‖Q(t)‖ bounds.

3 Expected Performance Analysis of Algorithm 1

This section shows that if we choose V = √T and α = T in Algorithm 1, then both the expected regret and the expected constraint violations are O(√T).
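Before turning to the analysis, the per-round computation of Algorithm 1 is small enough to state in code. The sketch below is a minimal illustration, not the authors' implementation: it assumes X_0 is a box so that P_{X_0} reduces to coordinate-wise clipping, and the oracle inputs (subgradients and constraint values) are supplied by the caller.

```python
import numpy as np

def algorithm1_round(x, Q, grad_f, g_vals, grad_g, V, alpha, lo, hi):
    """One round of Algorithm 1 (drift-plus-penalty with virtual queues).

    x       -- current decision x(t), shape (d,)
    Q       -- virtual queues Q(t), shape (m,)
    grad_f  -- subgradient of f^t at x(t), shape (d,)
    g_vals  -- observed constraint values g^t(x(t)), shape (m,)
    grad_g  -- constraint subgradients at x(t), shape (m, d)
    lo, hi  -- box bounds standing in for the set X_0 (an assumption)
    """
    d_t = V * grad_f + Q @ grad_g                       # d(t) from Lemma 1
    x_next = np.clip(x - d_t / (2.0 * alpha), lo, hi)   # x(t+1) = P_{X_0}[x(t) - d(t)/(2 alpha)]
    # Virtual queue update (3); the max with 0 keeps each queue nonnegative.
    Q_next = np.maximum(Q + g_vals + grad_g @ (x_next - x), 0.0)
    return x_next, Q_next
```

With m = 0 constraints (empty Q and grad_g) the update collapses to the projected step x(t+1) = P_{X_0}[x(t) − (V/(2α)) ∇f^t(x(t))], i.e., the OGD special case (4) with step size γ = V/(2α).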
3.1 A Drift Lemma for Stochastic Processes

Let {Z(t), t ≥ 0} be a discrete time stochastic process adapted³ to a filtration {F(t), t ≥ 0}. For example, Z(t) can be a random walk, a Markov chain or a martingale. Drift analysis is the method of deducing properties of Z(t), e.g., recurrence, ergodicity, or boundedness, from its drift E[Z(t+1) − Z(t) | F(t)]. See [6, 10] for more discussions and applications of drift analysis. This paper proposes a new drift analysis lemma for stochastic processes, as follows:

Lemma 5. Let {Z(t), t ≥ 0} be a discrete time stochastic process adapted to a filtration {F(t), t ≥ 0} with Z(0) = 0 and F(0) = {∅, Ω}. Suppose there exist an integer t_0 > 0 and real constants θ > 0, δ_max > 0 and 0 < ζ ≤ δ_max such that

    |Z(t+1) − Z(t)| ≤ δ_max,    (9)
    E[Z(t+t_0) − Z(t) | F(t)] ≤ { t_0 δ_max,  if Z(t) < θ;   −t_0 ζ,  if Z(t) ≥ θ }    (10)

hold for all t ∈ {1, 2, ...}. Then the following holds:

1. E[Z(t)] ≤ θ + t_0 δ_max + t_0 (4δ_max²/ζ) log(8δ_max²/ζ²), ∀t ∈ {1, 2, ...}.

2. For any constant 0 < λ < 1, we have Pr(Z(t) ≥ z) ≤ λ, ∀t ∈ {1, 2, ...}, where z = θ + t_0 δ_max + t_0 (4δ_max²/ζ) log(8δ_max²/ζ²) + t_0 (4δ_max²/ζ) log(1/λ).

³ A random variable Y is said to be adapted to a σ-algebra F if Y is F-measurable. In this case, we often write Y ∈ F. Similarly, a random process {Z(t)} is adapted to a filtration {F(t)} if Z(t) ∈ F(t), ∀t. See e.g. [7].

The above lemma is proven in Supplement 7.4 and provides both expected and high probability bounds for stochastic processes based on a drift condition. It will be used to establish upper bounds on the virtual queues ‖Q(t)‖, which further lead to expected and high probability constraint performance bounds for our algorithm. For a given stochastic process Z(t), it is possible to show that the drift condition (10) holds for multiple t_0 with different θ and ζ. In fact, we will show in Lemma 7 that ‖Q(t)‖ yielded by Algorithm 1 satisfies (10) for any integer t_0 > 0 by selecting θ and ζ according to t_0. One-step drift conditions, corresponding to the special case t_0 = 1 of Lemma 5, have been previously considered in [10, 20]. However, Lemma 5 (with general t_0 > 0) allows us to choose the best t_0 in the performance analysis, such that sublinear regret and constraint violation bounds are possible.

3.2 Expected Constraint Violation Analysis

Define the filtration {W(t), t ≥ 0} with W(0) = {∅, Ω} and W(t) = σ(ω(1), ..., ω(t)) being the σ-algebra generated by the random samples {ω(1), ..., ω(t)} up to round t. From the update rule in Algorithm 1, we observe that x(t+1) is a deterministic function of f^t(·), g(·; ω(t)) and Q(t), where Q(t) is in turn a deterministic function of Q(t−1), g(·; ω(t−1)), x(t) and x(t−1). By induction, it is easy to show that σ(x(t)) ⊆ W(t−1) and σ(Q(t)) ⊆ W(t−1) for all t ≥ 1, where σ(Y) denotes the σ-algebra generated by the random variable Y. For fixed t ≥ 1, since Q(t) is fully determined by ω(τ), τ ∈ {1, 2, ..., t−1}, and the ω(t) are i.i.d., we know g^t(x) is independent of Q(t). This is formally summarized in the next lemma.

Lemma 6. If x* ∈ X_0 satisfies ḡ(x*) = E_ω[g(x*; ω)] ≤ 0, then Algorithm 1 guarantees:

    E[Q_k(t) g_k^t(x*)] ≤ 0,  ∀k ∈ {1, 2, ..., m}, ∀t ≥ 1.    (11)

Proof. Fix k ∈ {1, 2, ..., m} and t ≥ 1. Since g_k^t(x*) = g_k(x*; ω(t)) is independent of Q_k(t), which is determined by {ω(1), ..., ω(t−1)}, it follows that E[Q_k(t) g_k^t(x*)] =(a) E[Q_k(t)] E[g_k^t(x*)] ≤ 0, where (a) follows from the fact that E[g_k^t(x*)] ≤ 0 and Q_k(t) ≥ 0.

To establish a bound on the constraint violations, by Corollary 2 it suffices to derive upper bounds for ‖Q(t)‖. In this subsection, we derive upper bounds for ‖Q(t)‖ by applying the new drift lemma (Lemma 5) developed at the beginning of this section. The next lemma shows that the random process Z(t) = ‖Q(t)‖ satisfies the conditions of Lemma 5.

Lemma 7. Let t_0 > 0 be an arbitrary integer. At each round t ∈ {1, 2, ...} in Algorithm 1, the following holds:

    |‖Q(t+1)‖ − ‖Q(t)‖| ≤ G + √m D_2 R,  and
    E[‖Q(t+t_0)‖ − ‖Q(t)‖ | W(t−1)] ≤ { t_0 (G + √m D_2 R),  if ‖Q(t)‖ < θ;   −t_0 ε/2,  if ‖Q(t)‖ ≥ θ },

where θ = (ε/2) t_0 + (G + √m D_2 R) t_0 + (2αR²)/(t_0 ε) + (2V D_1 R + [G + √m D_2 R]²)/ε; m is the number of constraint functions; D_1, D_2, G and R are defined in Assumption 1; and ε is defined in Assumption 2. (Note that ε < G by the definition of G.)

Lemma 7 (proven in Supplement 7.5) allows us to apply Lemma 5 to the random process Z(t) = ‖Q(t)‖ and obtain E[‖Q(t)‖] = O(√T), ∀t, by taking t_0 = ⌈√T⌉, V = √T and α = T, where ⌈√T⌉ denotes the smallest integer no less than √T. By Corollary 2, this further implies the expected constraint violation bound E[Σ_{t=1}^T g_k^t(x(t))] ≤ O(√T), as summarized in the next theorem.

Theorem 1 (Expected Constraint Violation Bound). If V = √T and α = T in Algorithm 1, then for all T ≥ 1, we have

    E[ Σ_{t=1}^T g_k^t(x(t)) ] ≤ O(√T),  ∀k ∈ {1, 2, ..., m},    (12)

where the expectation is taken with respect to all ω(t).

Proof. Define the random process Z(t) with Z(0) = 0 and Z(t) = ‖Q(t)‖, t ≥ 1, and the filtration F(t) with F(0) = {∅, Ω} and F(t) = W(t−1), t ≥ 1. Note that Z(t) is adapted to F(t). By Lemma 7, Z(t) satisfies the conditions of Lemma 5 with δ_max = G + √m D_2 R, ζ = ε/2 and θ = (ε/2) t_0 + (G + √m D_2 R) t_0 + (2αR²)/(t_0 ε) + (2V D_1 R + [G + √m D_2 R]²)/ε. Thus, by part (1) of Lemma 5, for all t ∈ {1, 2, ...}, we have E[‖Q(t)‖] ≤ (ε/2) t_0 + 2(G + √m D_2 R) t_0 + (2αR²)/(t_0 ε) + (2V D_1 R + [G + √m D_2 R]²)/ε + t_0 (8[G + √m D_2 R]²/ε) log(32[G + √m D_2 R]²/ε²). Taking t_0 = ⌈√T⌉, V = √T and α = T, we have E[‖Q(t)‖] ≤ O(√T) for all t ∈ {1, 2, ...}.

Fix T ≥ 1. By Corollary 2 (with V = √T and α = T), we have Σ_{t=1}^T g_k^t(x(t)) ≤ ‖Q(T+1)‖ + √T D_1 D_2/2 + (√m D_2²/(2T)) Σ_{t=1}^T ‖Q(t)‖, ∀k ∈ {1, 2, ..., m}. Taking expectations on both sides and substituting E[‖Q(t)‖] = O(√T), ∀t, yields E[Σ_{t=1}^T g_k^t(x(t))] ≤ O(√T).

3.3 Expected Regret Analysis

The next lemma (proven in Supplement 7.6) refines Lemma 4 and is useful for analyzing the regret.

Lemma 8. Let z ∈ X_0 be arbitrary. For all T ≥ 1, Algorithm 1 guarantees

    Σ_{t=1}^T f^t(x(t)) ≤ Σ_{t=1}^T f^t(z) + [ (α/V) R² + (V D_1²/(4α)) T + (1/(2V)) [G + √m D_2 R]² T ]_(I) + [ (1/V) Σ_{t=1}^T Σ_{k=1}^m Q_k(t) g_k^t(z) ]_(II),    (13)

where m is the number of constraint functions, and D_1, D_2, G and R are defined in Assumption 1.

Note that if we take V = √T and α = T, then term (I) in (13) is O(√T). Recall that the expectation of term (II) in (13) with z = x* is non-positive by Lemma 6. The expected regret bound of Algorithm 1 follows by taking expectations on both sides of (13) and is summarized in the next theorem.

Theorem 2 (Expected Regret Bound). Let x* ∈ X_0 be any fixed solution that satisfies ḡ(x*) ≤ 0, e.g., x* = argmin_{x ∈ X} Σ_{t=1}^T f^t(x). If V = √T and α = T in Algorithm 1, then for all T ≥ 1,

    E[ Σ_{t=1}^T f^t(x(t)) ] ≤ E[ Σ_{t=1}^T f^t(x*) ] + O(√T),

where the expectation is taken with respect to all ω(t).

Proof. Fix T ≥ 1. Taking z = x* in Lemma 8 yields Σ_{t=1}^T f^t(x(t)) ≤ Σ_{t=1}^T f^t(x*) + (α/V) R² + (V D_1²/(4α)) T + (1/2)[G + √m D_2 R]² (T/V) + (1/V) Σ_{t=1}^T Σ_{k=1}^m Q_k(t) g_k^t(x*). Taking expectations on both sides and using (11) yields Σ_{t=1}^T E[f^t(x(t))] ≤ Σ_{t=1}^T E[f^t(x*)] + R² (α/V) + (1/4) D_1² (V/α) T + (1/2)[G + √m D_2 R]² (T/V). Taking V = √T and α = T yields Σ_{t=1}^T E[f^t(x(t))] ≤ Σ_{t=1}^T E[f^t(x*)] + O(√T).

3.4 Special Case Performance Guarantees

Theorems 1 and 2 provide expected performance guarantees of Algorithm 1 for OCO with stochastic constraints. The results further imply performance guarantees in the following special cases:

• OCO with long term constraints: In this case, g_k(x; ω(t)) ≡ g_k(x) and there is no randomness. Thus, the expectations in Theorems 1 and 2 disappear. For this problem, Algorithm 1 can achieve O(√T) (deterministic) regret and O(√T) (deterministic) constraint violations.

• Stochastic constrained convex optimization: Note that i.i.d. time-varying f(x; ω(t)) is a special case of the arbitrarily-varying f^t(x) considered in our OCO setting. Thus, Theorems 1 and 2 still hold when Algorithm 1 is applied to stochastic constrained convex optimization. That is, Σ_{t=1}^T E[f^t(x(t))] ≤ Σ_{t=1}^T E[f^t(x*)] + O(√T) and Σ_{t=1}^T E[g_k^t(x(t))] ≤ O(√T), ∀k ∈ {1, 2, ..., m}. This online performance guarantee also implies that Algorithm 1 can be used as a (batch) offline algorithm with O(1/√T) convergence for stochastic constrained convex optimization. That is, after running Algorithm 1 for T slots, if we use x̄(T) = (1/T) Σ_{t=1}^T x(t) as a fixed solution, then E[f(x̄(T); ω)] = E[f^t(x̄(T))] ≤ E[f^t(x*)] + O(1/√T) and E[g_k(x̄(T); ω)] = E[g_k^t(x̄(T))] ≤ O(1/√T), ∀k ∈ {1, 2, ..., m}, with t ≥ T+1, by the i.i.d. property of each f^t and g^t and Jensen's inequality. If we use Algorithm 1 as a (batch) offline algorithm, its performance ties with the algorithm developed in [15], which is by design a (batch) offline algorithm and can only solve stochastic optimization with a single constraint function.

• Deterministic constrained convex optimization: Similarly to OCO with long term constraints, the expectations in Theorems 1 and 2 disappear in this case, since f^t(x) ≡ f(x) and g_k(x; ω(t)) ≡ g_k(x). If we use x̄(T) = (1/T) Σ_{t=1}^T x(t) as the solution, then f(x̄(T)) ≤ f(x*) + O(1/√T) and g_k(x̄(T)) ≤ O(1/√T), which follows by dividing the inequalities in Theorems 1 and 2 by T on both sides and applying Jensen's inequality. Thus, Algorithm 1 solves deterministic constrained convex optimization with O(1/√T) convergence.

4 High Probability Performance Analysis

This section shows that if we choose V = √T and α = T in Algorithm 1, then for any 0 < λ < 1, with probability at least 1 − λ, the regret is O(√T log(T) log^{1.5}(1/λ)) and the constraint violations are O(√T log(T) log(1/λ)).

4.1 High Probability Constraint Violation Analysis

Similarly to the expected constraint violation analysis, we can use part (2) of the new drift lemma (Lemma 5) to obtain a high probability bound on ‖Q(t)‖, which together with Corollary 2 leads to the high probability constraint violation bound summarized in Theorem 3 (proven in Supplement 7.7).

Theorem 3 (High Probability Constraint Violation Bound). Let 0 < λ < 1 be arbitrary. If V = √T and α = T in Algorithm 1, then for all T ≥ 1 and all k ∈ {1, 2, ..., m}, we have

    Pr( Σ_{t=1}^T g_k^t(x(t)) ≤ O(√T log(T) log(1/λ)) ) ≥ 1 − λ.

4.2 High Probability Regret Analysis

To obtain a high probability regret bound from Lemma 8, it remains to derive a high probability bound on term (II) in (13) with z = x*. The main challenge is that term (II) is a supermartingale with unbounded differences (due to the possibly unbounded virtual queues Q_k(t)). Most concentration inequalities, e.g., the Hoeffding-Azuma inequality, used in the high probability performance analysis of online algorithms are restricted to martingales/supermartingales with bounded differences. See for example [4, 2, 16]. The following lemma considers supermartingales with unbounded differences. Its proof (provided in Supplement 7.8) uses the truncation method to construct an auxiliary well-behaved supermartingale. Similar proof techniques were previously used in [26, 24] to prove different concentration inequalities for supermartingales/martingales with unbounded differences.

Lemma 9. Let {Z(t), t ≥ 0} be a supermartingale adapted to a filtration {F(t), t ≥ 0} with Z(0) = 0 and F(0) = {∅, Ω}, i.e., E[Z(t+1) | F(t)] ≤ Z(t), ∀t ≥ 0. Suppose there exists a constant c > 0 such that {|Z(t+1) − Z(t)| > c} ⊆ {Y(t) > 0}, ∀t ≥ 0, where Y(t) is a process with Y(t) adapted to F(t) for all t ≥ 0. Then, for all z > 0, we have

    Pr(Z(t) ≥ z) ≤ e^{−z²/(2tc²)} + Σ_{τ=0}^{t−1} Pr(Y(τ) > 0),  ∀t ≥ 1.

Note that if Pr(Y(t) > 0) = 0, ∀t ≥ 0, then Pr(|Z(t+1) − Z(t)| > c) = 0, ∀t ≥ 0, and Z(t) is a supermartingale with differences bounded by c. In this case, Lemma 9 reduces to the conventional Hoeffding-Azuma inequality. The next theorem (proven in Supplement 7.9) summarizes the high probability regret performance of Algorithm 1 and follows from Lemmas 5-9.

Theorem 4 (High Probability Regret Bound). Let x* ∈ X_0 be any fixed solution that satisfies ḡ(x*) ≤ 0, e.g., x* = argmin_{x ∈ X} Σ_{t=1}^T f^t(x). Let 0 < λ < 1 be arbitrary. If V = √T and α = T in Algorithm 1, then for all T ≥ 1, we have

    Pr( Σ_{t=1}^T f^t(x(t)) ≤ Σ_{t=1}^T f^t(x*) + O(√T log(T) log^{1.5}(1/λ)) ) ≥ 1 − λ.

5 Experiment: Online Job Scheduling in Distributed Data Centers

Consider a geo-distributed data center infrastructure consisting of one front-end job router and 100 geographically distributed servers, located in 10 different zones to form 10 clusters (10 servers in each cluster). See Fig. 1(a) for an illustration. The front-end job router receives job tasks and schedules them to different servers to fulfill the service. To serve the assigned jobs, each server purchases power (within its capacity) from its zone market. Electricity market prices can vary significantly across time and zones. For example, see Fig. 1(b) for a 5-minute average electricity price trace (between 05/01/2017 and 05/10/2017) at the New York zone CENTRL [1]. The problem is to schedule jobs and control power levels at each server in real time such that all incoming jobs are served and the electricity cost is minimized. In our experiment, each server's power is adjusted every 5 minutes, which is called a slot. (In practice, server power cannot be adjusted too frequently due to hardware restrictions and configuration delay.) Let x(t) = [x_1(t), ..., x_100(t)] be the power vector at slot t, where each x_i(t) must be chosen from an interval [x_i^min, x_i^max] restricted by the hardware, and the service rate at each server i satisfies μ_i(t) = h_i(x_i(t)), where h_i(·) is an increasing concave function. At each slot t, the job router schedules μ_i(t) amount of jobs to server i. The electricity cost at slot t is f^t(x(t)) = Σ_{i=1}^{100} c_i(t) x_i(t), where c_i(t) is the electricity price in server i's zone. We use c_i(t) from real-world 5-minute average electricity price data at 10 different zones in New York city between 05/01/2017 and 05/10/2017, obtained from NYISO [1]. At each slot t, the incoming job volume is given by ω(t) and follows a Poisson distribution. Note that the amount of incoming jobs and the electricity prices c_i(t) are unknown to us at the beginning of each slot t but can be observed at the end of each slot. This is an example of OCO with stochastic constraints, where we aim to minimize the electricity cost subject to the constraint that incoming jobs must be served in time. In particular, at each round t, we receive the loss function f^t(x(t)) and the constraint function g^t(x(t)) = ω(t) − Σ_{i=1}^{100} h_i(x_i(t)).

We compare our proposed algorithm with 3 baselines: (1) the best fixed decision in hindsight; (2) react [8]; and (3) low-power [22]. Both "react" and "low-power" are popular power control strategies used in distributed data centers. See Supplement 7.10 for more details of these two baselines and our experiment. Fig. 1(c)(d) plot the performance of the 4 algorithms, where the running average is the time average up to the current slot. Fig. 1(c) compares electricity cost, while Fig. 1(d) compares unserved jobs. (Unserved jobs accumulate if the service rate provided by an algorithm is less than the job arrival rate, i.e., the stochastic constraint is violated.) Fig. 1(c)(d) show that our proposed algorithm performs closely to the best fixed decision in hindsight over time, both in electricity cost and in constraint violations. "React" performs well in serving job arrivals but yields a larger electricity cost, while "low-power" has a low electricity cost but fails to serve the job arrivals.

[Figure 1: (a) Geo-distributed data center infrastructure; (b) Electricity market prices at zone CENTRL, New York; (c) Running average electricity cost; (d) Running average unserved jobs.]

6 Conclusion

This paper studies OCO with stochastic constraints, where the objective function varies arbitrarily but the constraint functions are i.i.d. over time. A novel learning algorithm is developed that guarantees O(√T) expected regret and constraint violations and O(√T log(T)) high probability regret and constraint violations.

References

[1] New York ISO open access pricing data. http://www.nyiso.com/.
[2] Peter L. Bartlett, Varsha Dani, Thomas Hayes, Sham Kakade, Alexander Rakhlin, and Ambuj Tewari. High-probability regret bounds for bandit online linear optimization. In Proceedings of the Conference on Learning Theory (COLT), 2008.
[3] Nicolò Cesa-Bianchi, Philip M. Long, and Manfred K. Warmuth. Worst-case quadratic loss bounds for prediction using linear functions and gradient descent. IEEE Transactions on Neural Networks, 7(3):604–619, 1996.
[4] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[5] Andrew Cotter, Maya Gupta, and Jan Pfeifer. A light touch for heavily constrained SGD. In Proceedings of the Conference on Learning Theory (COLT), 2015.
[6] Joseph L. Doob. Stochastic Processes. Wiley, New York, 1953.
[7] Rick Durrett. Probability: Theory and Examples. Cambridge University Press, 2010.
Note that the amount of incoming jobs and electricity price ci (t) are unknown to us at the beginning of each slot t but can be observed at the end of each slot. This is an example of OCO with stochastic constraints, where we aim to minimize the electricity cost subject to the constraint that incoming jobs must be served in time. In particular, at each round t, we receive loss function f t (x(t)) and constraint function P100 g t (x(t)) = !(t) i=1 hi (xi (t)). We compare our proposed algorithm with 3 baselines: (1) best fixed decision in hindsight; (2) react [8] and (3) low-power [22]. Both ?react" and ?low-power" are popular power control strategies used in distributed data centers. See Supplement 7.10 for more details of these 2 baselines and our experiment. Fig. 1(c)(d) plot the performance of 4 algorithms, where the running average is the time average up to the current slot. Fig. 1(c) compares electricity cost while Fig. 1(d) compares unserved jobs. (Unserved jobs accumulate if the service rate provided by an algorithm is less than the job arrival rate, i.e., the stochastic constraint is violated.) Fig. 1(c)(d) show that our proposed algorithm performs closely to the best fixed decision in hindsight over time, both in electricity cost and constraint violations. ?React" performs well in serving job arrivals but yields larger electricity cost, while ?low-power" has low electricity cost but fails to serve job arrivals. Electricity market price Running average electricity cost 450 Running average unserved jobs 15000 1200 400 300 10000 Cost (dollar) Price (dollar/MWh) 350 250 200 5000 150 Our algorithm Best fixed strategy in hindsight React (Gandhi et al. 2012) Low-power (Qureshi et al. 2009) 100 50 0 0 0 500 1000 1500 2000 2500 (a) (b) 800 Our algorithm Best fixed decision in hindsight React (Gandhi et al. 2012) Low-power (Qureshi et al. 2009) 600 400 200 0 -200 0 500 1000 1500 2000 Number of slots (each 5 min) Number of slots (each 5 min) Unserved jobs (per slot) 1000 (c) 2500 0 500 1000 1500 2000 2500 Number of slots (each 5 min) (d) Figure 1: (a) Geo-distributed data center infrastructure; (b) Electricity market prices at zone CENTRAL New York; (c) Running average electricity cost; (d) Running average unserved jobs. 6 Conclusion This paper studies OCO with stochastic constraints, where the objective function varies arbitrarily but thep constraint functions are i.i.d. over time. A novel learning p algorithm is developed that guarantees O( T ) expected regret and constraint violations and O( T log(T )) high probability regret and constraint violations. 9 References [1] New York ISO open access pricing data. http://www.nyiso.com/. [2] Peter L Bartlett, Varsha Dani, Thomas Hayes, Sham Kakade, Alexander Rakhlin, and Ambuj Tewari. High-probability regret bounds for bandit online linear optimization. In Proceedings of Conference on Learning Theory (COLT), 2008. [3] Nicol? Cesa-Bianchi, Philip M Long, and Manfred K Warmuth. Worst-case quadratic loss bounds for prediction using linear functions and gradient descent. IEEE Transactions on Neural Networks, 7(3):604?619, 1996. [4] Nicol? Cesa-Bianchi and G?bor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006. [5] Andrew Cotter, Maya Gupta, and Jan Pfeifer. A light touch for heavily constrained sgd. In Proceedings of Conference on Learning Theory (COLT), 2015. [6] Joseph L Doob. Stochastic processes. Wiley New York, 1953. [7] Rick Durrett. Probability: Theory and Examples. Cambridge University Press, 2010. 
[8] Anshul Gandhi, Mor Harchol-Balter, and Michael A Kozuch. Are sleep states effective in data centers? In International Green Computing Conference (IGCC), 2012. [9] Geoffrey J Gordon. Regret bounds for prediction problems. In Proceeding of Conference on Learning Theory (COLT), 1999. [10] Bruce Hajek. Hitting-time and occupation-time bounds implied by drift analysis with applications. Advances in Applied Probability, 14(3):502?525, 1982. [11] Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3?4):157?325, 2016. [12] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69:169?192, 2007. [13] Rodolphe Jenatton, Jim Huang, and C?dric Archambeau. Adaptive algorithms for online convex optimization with long-term constraints. In Proceedings of International Conference on Machine Learning (ICML), 2016. [14] Jyrki Kivinen and Manfred K Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1?63, 1997. [15] Guanghui Lan and Zhiqiang Zhou. Algorithms for stochastic optimization with expectation constraints. arXiv:1604.03887, 2016. [16] Mehrdad Mahdavi, Rong Jin, and Tianbao Yang. Trading regret for efficiency: online convex optimization with long term constraints. Journal of Machine Learning Research, 13(1):2503? 2528, 2012. [17] Mehrdad Mahdavi, Tianbao Yang, and Rong Jin. Stochastic convex optimization with multiple objectives. In Advances in Neural Information Processing Systems (NIPS), 2013. [18] Shie Mannor, John N Tsitsiklis, and Jia Yuan Yu. Online learning with sample path constraints. Journal of Machine Learning Research, 10:569?590, March 2009. [19] Angelia Nedi?c and Asuman Ozdaglar. Subgradient methods for saddle-point problems. Journal of Optimization Theory and Applications, 142(1):205?228, 2009. [20] Michael J. Neely. Energy-aware wireless scheduling with near optimal backlog and convergence time tradeoffs. IEEE/ACM Transactions on Networking, 24(4):2223?2236, 2016. [21] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer Science & Business Media, 2004. [22] Asfandyar Qureshi, Rick Weber, Hari Balakrishnan, John Guttag, and Bruce Maggs. Cutting the electric bill for internet-scale systems. In ACM SIGCOMM, 2009. [23] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107?194, 2011. [24] Terence Tao and Van Vu. Random matrices: universality of local spectral statistics of nonhermitian matrices. The Annals of Probability, 43(2):782?874, 2015. [25] David Tse and Pramod Viswanath. Fundamentals of Wireless Communication. Cambridge University Press, 2005. [26] Van Vu. Concentration of non-lipschitz functions and applications. Random Structures & Algorithms, 20(3):262?316, 2002. 10 p [27] Hao Yu and Michael J. Neely. A low complexity algorithm with O( T ) regret and finite constraint violations for online convex optimization with long term constraints. arXiv:1604.02218, 2016. [28] Hao Yu and Michael J. Neely. A simple parallel algorithm with an O(1/t) convergence rate for general convex programs. SIAM Journal on Optimization, 27(2):759?783, 2017. [29] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of International Conference on Machine Learning (ICML), 2003. 11
Max-Margin Invariant Features from Transformed Unlabeled Data

Dipan K. Pal, Ashwin A. Kannan*, Gautam Arakalgud*, Marios Savvides
Department of Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
{dipanp,aalapakk,garakalgud,marioss}@cmu.edu

Abstract

The study of representations invariant to common transformations of the data is important to learning. Most techniques have focused on local approximate invariance implemented within expensive optimization frameworks lacking explicit theoretical guarantees. In this paper, we study kernels that are invariant to a unitary group while having theoretical guarantees for the important practical issue of the unavailability of transformed versions of labelled data. We call this the Unlabeled Transformation Problem; it is a special form of semi-supervised learning and of one-shot learning. We present a theoretically motivated alternative approach to the invariant kernel SVM, based on which we propose Max-Margin Invariant Features (MMIF) to solve this problem. As an illustration, we design a framework for face recognition and demonstrate the efficacy of our approach on a large scale semi-synthetic dataset with 153,000 images and a new challenging protocol on Labelled Faces in the Wild (LFW), while out-performing strong baselines.

1 Introduction

It is becoming increasingly important to learn well-generalizing representations that are invariant to many common nuisance transformations of the data. Indeed, being invariant to intra-class transformations while being discriminative to between-class transformations can be said to be one of the fundamental problems in pattern recognition. The nuisance transformations can give rise to many "degrees of freedom" even in a constrained task such as face recognition (e.g. pose, age-variation, illumination etc.). Explicitly factoring them out leads to improvements in recognition performance, as found in [10, 7, 6]. It has also been shown that features that are explicitly invariant to intra-class transformations allow the sample complexity of the recognition problem to be reduced [2]. To this end, the study of invariant representations and of machinery built on the concept of explicit invariance is important.

Invariance through Data Augmentation. Many approaches in the past have enforced invariance by generating transformed labelled training samples in some form, such as [13, 17, 19, 9, 15, 4]. Perhaps one of the most popular methods for incorporating invariances in SVMs is the virtual support vector (VSV) method in [18], which used sequential runs of SVMs in order to find the support vectors and augment them with transformed versions of themselves; a sketch of this idea follows below.

Indecipherable transformations in data lead to a shortage of transformed labelled samples. The above approaches, however, assume that one has explicit knowledge about the transformation. This is a strong assumption. Indeed, in most general machine learning applications, the transformation present in the data is not clear and cannot be modelled easily, e.g. transformations between different views of a general 3D object, or between different sentences articulated by the same person. Methods which generate invariance by explicitly transforming or augmenting labelled training data cannot be applied to these scenarios.

* Authors contributed equally.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
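To make the virtual support vector idea above concrete, the following is a minimal sketch, not the procedure of [18] itself: it trains a linear SVM once, augments only the support vectors with their transformed copies, and retrains. The use of scikit-learn, the single retraining pass, and the user-supplied transform functions are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVC  # linear SVM; the library choice is an assumption

def virtual_sv_train(X, y, transforms):
    """Virtual support vector (VSV) training sketch.

    X, y       -- labelled training data
    transforms -- list of functions, each mapping one sample to a
                  transformed sample (e.g. a small image shift)
    """
    svm = SVC(kernel="linear").fit(X, y)
    sv, sv_y = X[svm.support_], y[svm.support_]   # extract support vectors
    # Augment only the support vectors with their transformed versions.
    X_aug = np.vstack([sv] + [np.array([t(s) for s in sv]) for t in transforms])
    y_aug = np.concatenate([sv_y] + [sv_y] * len(transforms))
    return SVC(kernel="linear").fit(X_aug, y_aug)  # retrain on augmented set
```

Note that this requires the transform functions to be known and computable on labelled data, which is exactly the assumption the Unlabeled Transformation Problem removes.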
Further, even in cases where we do know the transformations and can actually model them, it is difficult to generate transformed versions of very large labelled datasets. Hence an important problem arises: how do we train models to be invariant to transformations in test data when we do not have access to transformed labelled training samples?

Availability of unlabeled transformed data. Although it is difficult to obtain or generate transformed labelled data (for the reasons mentioned above), unlabeled transformed data is more readily available. For instance, if different views of specific objects of interest are not available, one can simply collect views of general objects. Likewise, if different sentences spoken by a specific group of people are not available, one can simply collect sentences spoken by members of the general population. In both scenarios, no explicit knowledge or model of the transformation is needed, thereby bypassing the problem of indecipherable transformations. This situation is common in vision, where often only unlabeled transformed images are observed; so far the community has mostly addressed it through intense large-scale data collection. Note that the transformed data that is collected is not required to be labelled. We are now in a position to state the central problem that this paper addresses.

[Figure 1: schematic; one training path fits a traditional classifier on non-transformed labeled data, the other trains MMIF with an additional transformed unlabeled set, and a test image illustrates what each model is and is not invariant to.] Figure 1: Max-Margin Invariant Features (MMIF) can solve an important problem we call the Unlabeled Transformation Problem. In the figure, a traditional classifier F(x) "learns" invariance to nuisance transformations directly from the labeled dataset X. Our approach (MMIF), on the other hand, can incorporate additional invariance learned from any unlabeled data that undergoes the nuisance transformation of interest.

The Unlabeled Transformation (UT) Problem: Having access to transformed versions of unlabeled training data but not of labelled data, how do we learn a discriminative model of the labelled data while being invariant to the transformations present in the unlabeled data?

Overall approach. The approach presented in this paper (see Fig. 1) can solve this problem: it learns invariance to transformations observed only through unlabeled samples and does not need labelled training data augmentation. We explicitly and simultaneously address both the problem of generating invariance to intra-class transformations (through invariant kernels) and that of being discriminative to inter- or between-class transformations (through max-margin classifiers). Given a new test sample, the final extracted feature is invariant to the transformations observed in the unlabeled set, and thereby generalizes from just a single example. This is an instance of one-shot learning.

Prior Art: Invariant Kernels. Kernel methods in machine learning have long been studied to considerable depth. Nonetheless, invariant kernels and techniques to extract invariant features have received much less attention. An invariant kernel allows the kernel product to remain invariant under transformations of the inputs. Most attempts at incorporating invariances focused on local invariance through regularization and optimization, such as [18, 19, 3, 21]. Other techniques include jittering kernels [17, 3] and tangent-distance kernels [5], both of which sacrifice the positive semi-definite property of the kernel and are computationally expensive.
Though these methods have had some success, most of them still lack explicit theoretical guarantees of invariance. The invariant kernel SVM formulation proposed here, on the other hand, develops a valid PSD kernel that is guaranteed to be invariant. [4] used group integration to arrive at invariant kernels but did not address the Unlabeled Transformation Problem, which our proposed kernels do address; further, our kernels allow for the formulation of the invariant SVM and application to large-scale problems. Recently, [14] presented related work with invariant kernels; however, unlike our non-parametric formulation, they do not learn the group transformations from the data itself and assume known parametric transformations (i.e. they assume the transformation is computable).

Key ideas. The key ideas in this paper are twofold.

1. The first is to model transformations using unitary groups (or sub-groups), leading to unitary-group invariant kernels. Unitary transforms preserve the dot product and allow for interesting generalization properties leading to low sample complexity; they also permit learning transformation invariance from unlabeled examples (thereby solving the Unlabeled Transformation Problem). Classes of learning problems, such as vision, often involve transformations belonging to a unitary group that one would like to be invariant towards (such as translation and rotation). In practice, [8] found that invariance to much more general transformations not captured by this model can be achieved.

2. Secondly, we combine max-margin classifiers with invariant kernels, leading to non-linear max-margin unitary-group invariant classifiers. These theoretically motivated invariant non-linear SVMs form the foundation upon which Max-Margin Invariant Features (MMIF) are built. MMIF features can effectively solve the important Unlabeled Transformation Problem. To the best of our knowledge, this is the first theoretically proven formulation of this nature.

Contributions. In contrast to many previous studies on invariant kernels, we study non-linear positive semi-definite unitary-group invariant kernels with guaranteed invariance that can address the UT Problem. One of our central theoretical results applies group integration in the RKHS. It builds on the observation that, under unitary restrictions on the kernel map, group action in the input space is reciprocated in the RKHS. Using the proposed invariant kernel, we present a theoretically motivated non-linear invariant SVM that can solve the UT Problem with explicit invariance guarantees. As our main theoretical contribution, we showcase a result on the generalization of max-margin classifiers in group-invariant subspaces. We propose Max-Margin Invariant Features (MMIF) to learn highly discriminative non-linear features that also solve the UT Problem. On the practical side, we propose an approach to face recognition that combines MMIFs with a pre-trained deep feature extractor (in our case VGG-Face [12]). MMIF features can be used with deep learning whenever there is a need to focus on a particular transformation in the data (in our application, pose in face recognition) and can further improve performance.

2 Unitary-Group Invariant Kernels

Premise: Consider a dataset of normalized samples along with labels $\mathcal{X} = \{x_i\}$, $\mathcal{Y} = \{y_i\}$, $i \in \{1, \dots, N\}$, with $x \in \mathbb{R}^d$ and $y \in \{+1, -1\}$. We now introduce into the dataset a number of unitary transformations $g$ belonging to a locally compact unitary group $G$.
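As a concrete toy illustration of this premise, the sketch below builds a small finite unitary group (planar rotations, a subgroup of the orthogonal group) and the orbit of a sample under it. The group size, dimensionality, and data are hypothetical choices of ours for illustration, not ingredients from the paper.

```python
import numpy as np

def rotation_group(n_angles=8):
    """A finite subgroup of 2-D rotations; each element is a unitary (orthogonal) matrix."""
    thetas = 2 * np.pi * np.arange(n_angles) / n_angles
    return [np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]]) for t in thetas]

G = rotation_group()
x = np.array([1.0, 0.5])
orbit = np.stack([g @ x for g in G])   # the orbit {gx : g in G}

# Unitarity: each g preserves dot products, hence norms of all orbit points agree.
assert np.allclose([np.linalg.norm(gx) for gx in orbit], np.linalg.norm(x))
```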
We note again that the set of transformations under consideration need not be the entire unitary group; it could well be a subgroup. Our augmented normalized dataset becomes $\{gx_i, y_i\}$ $\forall g \in G$, $\forall i$. For clarity, we denote by $gx$ the action of a group element $g \in G$ on $x$, i.e. $gx = g(x)$. We also define the orbit of $x$ under $G$ as the set $\mathcal{X}_G = \{gx \mid g \in G\}$. Clearly, $\mathcal{X} \subset \mathcal{X}_G$. An invariant function is defined as follows.

Definition 2.1 (G-Invariant Function). For any group $G$, we define a function $f : \mathcal{X} \rightarrow \mathbb{R}^n$ to be $G$-invariant if $f(x) = f(gx)$ $\forall x \in \mathcal{X}$, $\forall g \in G$.

One method of generating an invariant to a group is group integration. Group integration stems from classical invariant theory and can be shown to be a projection onto a $G$-invariant subspace of a vector space. In such a space $x = gx$ $\forall g \in G$, and thus the representation $x$ is invariant under the transformation of any element of the group $G$. This is ideal for recognition problems where one wants to be discriminative to between-class transformations (e.g. between distinct subjects in face recognition) but invariant to within-class transformations (e.g. different images of the same subject). The set of transformations we model as $G$ are the within-class transformations that we would like to be invariant towards. An invariant to any group $G$ can be generated through the following basic (previously known) property based on group integration.

Lemma 2.1. (Invariance Property) Given a vector $\eta \in \mathbb{R}^d$ and any affine group $G$, for any fixed $g' \in G$ and a normalized Haar measure $dg$, we have $g' \int_G g\eta \, dg = \int_G g\eta \, dg$.

The Haar measure $dg$ exists for every locally compact group and is unique up to a positive multiplicative constant (hence normalized). A similar property holds for discrete groups. By Lemma 2.1, the quantity $\int_G g\eta \, dg$ enjoys global invariance (encompassing all elements) to the group $G$. This property allows one to generate a $G$-invariant subspace in the ambient space $\mathbb{R}^d$ through group integration; in practice, the integral corresponds to a summation over transformed samples. The following two lemmas (novel results, and part of our contribution) showcase elementary properties of the operator $\Lambda = \int_G g \, dg$ for a unitary group $G$.² These properties prove useful in the analysis of unitary-group invariant kernels and features.

Lemma 2.2. If $\Lambda = \int_G g \, dg$ for unitary $G$, then $\Lambda^T = \Lambda$.

Lemma 2.3. (Unitary Projection) If $\Lambda = \int_G g \, dg$ for any affine $G$, then $\Lambda\Lambda = \Lambda$, i.e. it is a projection operator. Further, if $G$ is unitary, then $\langle \eta, \Lambda\eta' \rangle = \langle \Lambda\eta, \eta' \rangle$ $\forall \eta, \eta' \in \mathbb{R}^d$.

Sample Complexity and Generalization. On applying the operator $\Lambda$ to the dataset $\mathcal{X}$, all points in the set $\{gx \mid g \in G\}$ for any $x \in \mathcal{X}$ map to the same point $\Lambda x$ in the $G$-invariant subspace, thereby reducing the number of distinct points by a factor of $|G|$ (the cardinality of $G$, if $G$ is finite). Theoretically, this drastically reduces sample complexity while preserving linear feasibility (separability). It is trivial to observe that a perfect linear separator learned on $\mathcal{X}_\Lambda = \{\Lambda x \mid x \in \mathcal{X}\}$ is also a perfect separator for $\mathcal{X}_G$, thus in theory achieving perfect generalization. Generalization here refers to the ability to classify correctly even in the presence of the set of transformations $G$. We prove a similar result for Reproducing Kernel Hilbert Spaces (RKHS) in Section 2.2. This property is theoretically powerful since the cardinality of $G$ can be large.

² All proofs are presented in the supplementary material.
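Lemmas 2.1 through 2.3 are easy to check numerically for a discrete group. The sketch below does so for an assumed toy group of cyclic coordinate permutations (our own choice; here the $G$-invariant subspace is the span of the all-ones vector), verifying that the discrete group average is a symmetric, idempotent projection that collapses a whole orbit to a single point.

```python
import numpy as np

# A discrete unitary group: cyclic coordinate shifts on R^3 (permutation matrices).
P = np.roll(np.eye(3), 1, axis=0)
G = [np.linalg.matrix_power(P, k) for k in range(3)]

Lam = np.mean(np.stack(G), axis=0)      # discrete group integration: (1/|G|) sum_g g

assert np.allclose(Lam, Lam.T)          # Lemma 2.2: symmetric
assert np.allclose(Lam @ Lam, Lam)      # Lemma 2.3: idempotent projection
for g in G:                             # Lemma 2.1: g' Lam = Lam, so Lam x is G-invariant
    assert np.allclose(g @ Lam, Lam)

x = np.array([3.0, -1.0, 2.0])
orbit = [g @ x for g in G]
# Every point of the orbit projects to the same invariant representative.
print({tuple(np.round(Lam @ p, 8)) for p in orbit})
```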
A classifier can thus avoid having to observe transformed versions $\{gx\}$ of any $x$ and yet generalize perfectly.

The case of Face Recognition. As an illustration, if the group $G$ of transformations considered is pose (it has been hypothesized that small changes in pose can be modeled as unitary [10]), then $\Lambda = \int_G g \, dg$ represents a pose-invariant subspace. In theory, all poses of a subject converge to the same point in that subspace, leading to near-perfect pose-invariant recognition. We have not yet leveraged the unitary structure of the group, which is also critical for generalization to test cases, as we will see later. We now present our central result showing that unitary kernels allow the unitary group action to reciprocate in a Reproducing Kernel Hilbert Space. This sets the foundation for our core method, Max-Margin Invariant Features.

2.1 Group Actions Reciprocate in a Reproducing Kernel Hilbert Space

Group integration provides exact invariance, as seen in the previous section. However, it requires the group structure to be preserved: if the group structure is destroyed, group integration does not provide an invariant function. In the context of kernels, it is imperative that the group relation between the samples in $\mathcal{X}_G$ be preserved in the kernel Hilbert space $\mathcal{H}$ corresponding to some kernel $k$ with mapping $\phi$. If the kernel $k$ is unitary in the following sense, then this is possible.

Definition 2.2 (Unitary Kernel). A kernel $k(x, y) = \langle \phi(x), \phi(y) \rangle$ is a unitary kernel if, for a unitary group $G$, the mapping $\phi : \mathcal{X} \rightarrow \mathcal{H}$ satisfies $\langle \phi(gx), \phi(gy) \rangle = \langle \phi(x), \phi(y) \rangle$ $\forall g \in G$, $\forall x, y \in \mathcal{X}$.

The unitary condition is fairly general; a common class of unitary kernels is the RBF kernel. We now define a transformation within the RKHS itself as $g_\mathcal{H} : \phi(x) \mapsto \phi(gx)$ $\forall \phi(x) \in \mathcal{H}$, for any $g \in G$ where $G$ is a unitary group. We then have the following result of significance.

Theorem 2.4. (Covariance in the RKHS) If $k(x, y) = \langle \phi(x), \phi(y) \rangle$ is a unitary kernel in the sense of Definition 2.2, then $g_\mathcal{H}$ is a unitary transformation, and the set $G_\mathcal{H} = \{g_\mathcal{H} \mid g_\mathcal{H} : \phi(x) \mapsto \phi(gx), \ g \in G\}$ is a unitary group in $\mathcal{H}$.

Theorem 2.4 shows that the unitary-group structure is preserved in the RKHS. This paves the way for new theoretically motivated approaches to achieving invariance to transformations in the RKHS. There have been a few studies on group-invariant kernels [4, 10]; however, [4] does not examine whether the unitary group structure is actually preserved in the RKHS, which is critical. DIKF was recently proposed as a method utilizing group structure under a unitary kernel [10]; our result is a generalization of the theorems presented there. Since the unitary group structure is preserved in the RKHS, any method involving group integration there is invariant in the original space. The preservation of the group structure allows group-invariance results to be applied directly in the RKHS. It also allows one to formulate a non-linear SVM with theoretically guaranteed invariance, leading to Max-Margin Invariant Features.
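Definition 2.2 can be sanity-checked numerically for the RBF kernel, which depends on its inputs only through $\|x - y\|$ and is therefore unchanged when the same unitary (here, rotation) element is applied to both arguments. The check below is our own illustration, not code from the paper.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian RBF kernel; a function of ||x - y|| only, hence a unitary kernel."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

theta = 0.7
g = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # one unitary (rotation) element

x, y = np.random.randn(2), np.random.randn(2)
assert np.isclose(rbf(x, y), rbf(g @ x, g @ y))   # k(gx, gy) = k(x, y)
```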
2.2 Invariant Non-linear SVM: An Alternate Approach Through Group Integration

We now apply the group integration approach to the kernel SVM. The decision function of an SVM can be written in the general form $f_\omega(x) = \omega^T\phi(x) + b$ for some bias $b \in \mathbb{R}$ (we agglomerate all parameters of $f$ in $\omega$), where $\phi$ is the kernel feature map, $\phi : \mathcal{X} \rightarrow \mathcal{H}$. Reviewing the SVM, a maximum-margin separator is found by minimizing a loss function such as the hinge loss along with a regularizer. To invoke invariance, we can utilize group integration in the kernel space $\mathcal{H}$ using Theorem 2.4. All points in the set $\{gx \in \mathcal{X}_G\}$ get mapped to $\phi(gx) = g_\mathcal{H}\phi(x)$ for a given $g \in G$ in the input space $\mathcal{X}$. Group integration then yields a $G$-invariant subspace within $\mathcal{H}$ through $\Lambda_\mathcal{H} = \int_{G_\mathcal{H}} g_\mathcal{H} \, dg_\mathcal{H}$, using Lemma 2.1. Introducing Lagrange multipliers $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_N) \in \mathbb{R}^N$, the dual formulation (utilizing Lemma 2.2 and Lemma 2.3) becomes

$\min_\alpha \; -\sum_i \alpha_i + \frac{1}{2}\sum_{i,j} y_i y_j \alpha_i \alpha_j \langle \Lambda_\mathcal{H}\phi(x_i), \Lambda_\mathcal{H}\phi(x_j) \rangle$   (1)

under the constraints $0 \le \alpha_i \le \frac{1}{N}$ $\forall i$ and $\sum_i \alpha_i y_i = 0$. The SVM separator is then given by $\omega^*_\mathcal{H} = \Lambda_\mathcal{H}\omega^* = \sum_i y_i\alpha_i \Lambda_\mathcal{H}\phi(x_i)$, thereby lying in the $G_\mathcal{H}$-invariant (equivalently, $G$-invariant) subspace of $\mathcal{H}$ (since $g \mapsto g_\mathcal{H}$ is a bijection). Effectively, the SVM observes samples from $\hat{\mathcal{X}}_{\Lambda_\mathcal{H}} = \{x \mid \phi(x) = \Lambda_\mathcal{H}\phi(u), \ u \in \mathcal{X}_G\}$, and therefore $\omega^*_\mathcal{H}$ enjoys exact global invariance to $G$. Further, $\omega^*_\mathcal{H}$ is a maximum-margin separator of $\{\phi(\mathcal{X}_G)\}$ (i.e. the set of all transformed samples), as shown by the following result.

Theorem 2.5. (Generalization) For a unitary group $G$ and unitary kernel $k(x, y) = \langle \phi(x), \phi(y) \rangle$, if $\omega^*_\mathcal{H} = \Lambda_\mathcal{H}\omega^* = \big(\int_{G_\mathcal{H}} g_\mathcal{H} \, dg_\mathcal{H}\big)\,\omega^*$ is a perfect separator for $\{\Lambda_\mathcal{H}\phi(\mathcal{X})\} = \{\Lambda_\mathcal{H}\phi(x) \mid x \in \mathcal{X}\}$, then $\omega^*_\mathcal{H}$ is also a perfect separator for $\{\phi(\mathcal{X}_G)\} = \{\phi(x) \mid x \in \mathcal{X}_G\}$ with the same margin. Further, a max-margin separator of $\{\Lambda_\mathcal{H}\phi(\mathcal{X})\}$ is also a max-margin separator of $\{\phi(\mathcal{X}_G)\}$.

The invariant non-linear SVM in objective (1) observes samples of the form $\Lambda_\mathcal{H}\phi(x)$ and obtains a max-margin separator $\omega^*_\mathcal{H}$. This combines the generalization properties of max-margin classifiers with those of group-invariant classifiers: while being invariant to nuisance transformations, max-margin classifiers lead to highly discriminative features (more robust than DIKF [10], as we find in our experiments) that are invariant to within-class transformations. Theorem 2.5 shows that the margins of $\{\phi(\mathcal{X}_G)\}$ and $\{\Lambda_\mathcal{H}\phi(\mathcal{X})\}$ are deeply related, and implies that $\omega^*_\mathcal{H}$ is a max-margin separator for both datasets. Theoretically, the invariant non-linear SVM generalizes to $\mathcal{X}_G$ while observing only $\mathcal{X}$, utilizing prior information in the form of $G$, for all unitary kernels $k$. This holds in practice for linear kernels; for non-linear kernels in practice, the invariant SVM still needs to observe and integrate over transformed training inputs.

Leveraging unitary group properties. At test time, to achieve invariance the SVM would need to observe and integrate over all possible transformations of the test sample. This is a serious computational and design bottleneck. Ideally, we would like to achieve invariance and generalize by observing just a single test sample, in effect performing one-shot learning. This is not only computationally much cheaper but also makes the classifier powerful, owing to generalization to the full transformed orbit of a test sample from that single sample. This is where the unitarity of $g$ helps, and we leverage it in the form of the following lemma.

Lemma 2.6. (Invariant Projection) If $\Lambda = \int_G g \, dg$ for any unitary group $G$, then for any fixed $g' \in G$ (including the identity element) we have $\langle \Lambda x', \Lambda\eta' \rangle = \langle g'x', \Lambda\eta' \rangle$ $\forall x', \eta' \in \mathbb{R}^d$.

Assuming $\Lambda\eta'$ is the learned SVM classifier, Lemma 2.6 shows that for any test sample $x'$, the invariant dot product $\langle \Lambda x', \Lambda\eta' \rangle$, which involves observing all transformations of $x'$, equals $\langle g'x', \Lambda\eta' \rangle$, which involves observing only one transformation of $x'$.
Hence one can model the entire orbit of $x'$ under $G$ by a single sample $g'x'$, where $g' \in G$ can be any particular transformation, including the identity. This drastically reduces sample complexity and vastly increases the generalization capability of the classifier, since only one test sample needs to be observed to achieve invariance. Lemma 2.6 also saves computation, allowing us to apply the computationally expensive $\Lambda$ (group integration) operation only once, on the classifier rather than on every test sample. Thus, the kernel in the invariant SVM formulation can be replaced by the form $\hat{k}(x, y) = \langle \phi(x), \Lambda_\mathcal{H}\phi(y) \rangle$. For kernels in general, the $G_\mathcal{H}$-invariant subspace cannot be explicitly computed, since it lies in the RKHS; it is only implicitly projected upon, through $\Lambda_\mathcal{H}\phi(x_i) = \int_G \phi(gx_i) \, dg_\mathcal{H}$.

[Figure 2: two-panel schematic; (a) integration over the group (pooling) of kernel products between a test image and transformed templates of several classes yields the invariant kernel feature; (b) binary SVMs over these features produce the MMIF vector.] Figure 2: MMIF Feature Extraction. (a) $l(x)$ denotes the invariant kernel feature of any $x$, which is invariant to the transformation group $G$. Invariance is generated by group integration (pooling). The invariant kernel feature learns invariance from the unlabeled transformed template set $\mathcal{T}_G$. The faces depicted are actual samples from the large-scale mugshot data (about 153,000 images). (b) Once the invariant features have been extracted for the labelled non-transformed dataset $\mathcal{X}$, the learned SVMs act as feature extractors. Each binary-class SVM was trained on the invariant kernel features of a random subset of $l(\mathcal{X})$ with random class assignments. The final MMIF feature for $x$ is the concatenation of all SVM inner products with $l(x)$.

It is important to note that during testing, the SVM formulation will be invariant to transformations of the test sample regardless of a linear or non-linear kernel.

Positive Semi-Definiteness. The $G$-invariant kernel map is now of the form $\hat{k}(x, y) = \langle \phi(x), \int_G \phi(gy) \, dg_\mathcal{H} \rangle$. This preserves the positive semi-definite property of the kernel $k$ while guaranteeing global invariance to unitary transformations, unlike jittering kernels [17, 3] and tangent-distance kernels [5]. If we wish to include invariance to scaling (in the sense of scaling an image), we would lose positive semi-definiteness (scaling is also not a unitary transform). Nonetheless, [20] show that conditionally positive definite kernels still exist for transformations including scaling, although we focus on unitary transformations in this paper.
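The sketch below approximates the invariant kernel $\hat{k}$ for an RBF base kernel by averaging over a finite rotation group (a discrete stand-in for the Haar integral). It checks the single-sided invariance promised by Lemma 2.6 and the positive semi-definiteness of the resulting Gram matrix. All concrete choices (kernel, group, sample sizes) are ours, for illustration only.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def rotations(n=12):
    ts = 2 * np.pi * np.arange(n) / n
    return [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]) for t in ts]

G = rotations()

def k_hat(x, y):
    """Invariant kernel <phi(x), Lambda_H phi(y)> via discrete group integration."""
    return np.mean([rbf(x, g @ y) for g in G])

x, y = np.random.randn(2), np.random.randn(2)
# Acting on either argument leaves the invariant kernel unchanged (Lemma 2.6).
print(k_hat(x, y), k_hat(G[3] @ x, y), k_hat(x, G[5] @ y))

# The Gram matrix of k_hat on a sample stays positive semi-definite.
X = np.random.randn(6, 2)
K = np.array([[k_hat(a, b) for b in X] for a in X])
print(np.linalg.eigvalsh(K).min() >= -1e-9)
```

Note that integrating over only one argument suffices because the group average is a projection: by Lemmas 2.2 and 2.3, $\langle \phi(x), \Lambda_\mathcal{H}\phi(y)\rangle = \langle \Lambda_\mathcal{H}\phi(x), \Lambda_\mathcal{H}\phi(y)\rangle$, so the one-sided form is already a Gram matrix in the invariant subspace.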
3 Max-Margin Invariant Features

The previous section used a group integration approach to arrive at a theoretically invariant non-linear SVM. It does not, however, address the Unlabeled Transformation Problem: the kernel $k^\Lambda(x, y) = \langle \Lambda_\mathcal{H}\phi(x), \Lambda_\mathcal{H}\phi(y) \rangle = \langle \int_G \phi(gx) \, dg_\mathcal{H}, \int_G \phi(gy) \, dg_\mathcal{H} \rangle$ still requires observing transformed versions of the labelled input sample, namely $\{gx \mid gx \in \mathcal{X}_G\}$ (or at least one of the labelled samples, if we utilize Lemma 2.6). We now present our core approach, Max-Margin Invariant Features (MMIF), which does not require the observation of any transformed labelled training sample whatsoever.

Assume that we have access to an unlabeled set of $M$ templates $\mathcal{T} = \{t_i\}_{i=1,\dots,M}$, and that we can observe all their transformations under a unitary group $G$, i.e. we have access to $\mathcal{T}_G = \{gt_i \mid g \in G\}_{i=1,\dots,M}$. Also assume we have access to a set $\mathcal{X} = \{x_j\}_{j=1,\dots,D}$ of labelled data with $N$ classes, which is not transformed. We can extract an $M$-dimensional invariant kernel feature for each $x_j \in \mathcal{X}$ as follows. Let the invariant kernel feature be $l(x) \in \mathbb{R}^M$, written to show the dependence on $x$ explicitly. The $i$-th dimension of $l$ for any particular $x$ is computed as

$l(x)_i = \langle \phi(x), \Lambda_\mathcal{H}\phi(t_i) \rangle = \langle \phi(x), \int_G g_\mathcal{H}\phi(t_i) \, dg_\mathcal{H} \rangle = \langle \phi(x), \int_G \phi(gt_i) \, dg_\mathcal{H} \rangle$   (2)

The first equality utilizes Lemma 2.6 and the last uses Theorem 2.4. This is equivalent to observing all transformations of $x$, since $\langle \phi(x), \Lambda_\mathcal{H}\phi(t_i) \rangle = \langle \Lambda_\mathcal{H}\phi(x), \phi(t_i) \rangle$ by Lemma 2.3. We have thereby constructed a feature $l(x)$ which is invariant to $G$ without ever needing to observe transformed versions of the labelled vector $x$. We now briefly describe the training of the MMIF feature extractor. The matching metric we use in this study is normalized cosine distance.

Training MMIF SVMs. To learn a $K$-dimensional MMIF feature (potentially independent of $N$), we learn $K$ independent binary-class linear SVMs. Each SVM trains on the labelled dataset $l(\mathcal{X}) = \{l(x_j) \mid j = 1, \dots, D\}$, with each sample labelled $+1$ for some subset of the $N$ classes (potentially just one class) and the rest labelled $-1$. This leads to a classifier of the form $\omega_k = \sum_j y_j\alpha_j\, l(x_j)$, where $y_j$ is the label of $x_j$ for the $k$-th SVM. It is important to note that the unlabeled data was used only to extract $l(x_j)$. Having multiple classes randomly labelled as positive allows the SVM to extract a feature that is common between them; this increases generalization by forcing the extracted feature to be more general (shared between multiple classes) rather than highly tuned to a single class. A $K$-dimensional MMIF feature can be trained through this technique, yielding a higher-dimensional feature vector that is useful when one has limited labelled samples and classes ($N$ is small). During feature extraction, the $K$ inner products (scores) of the test sample $x'$ with the $K$ distinct binary-class SVMs provide the $K$-dimensional MMIF feature vector. This feature vector is highly discriminative due to the max-margin nature of SVMs, while being invariant to $G$ due to the invariant kernels.

MMIF. Given $\mathcal{T}_G$ and $\mathcal{X}$, the MMIF feature is defined as $\mathrm{MMIF}(x') \in \mathbb{R}^K$ for any test $x'$, with each dimension $k$ computed as $\langle l(x'), \omega_k \rangle$ for $\omega_k = \sum_j y_j\alpha_j\, l(x_j)$, $x_j \in \mathcal{X}$. Further, $l(x') \in \mathbb{R}^M$, with each dimension $i$ being $l(x')_i = \langle \phi(x'), \Lambda_\mathcal{H}\phi(t_i) \rangle$. The process is illustrated in Fig. 2.

Inheriting transformation invariance from transformed unlabeled data: a special case of semi-supervised learning. MMIF features learn to be invariant to transformations $G$ by observing them only through $\mathcal{T}_G$. They then transfer this invariance to new unseen samples from $\mathcal{X}$, becoming invariant to $\mathcal{X}_G$ despite never having observed any samples from it. This is a special case of semi-supervised learning where we leverage the specific transformations present in the unlabeled data. It is a very useful property of MMIFs, allowing one to learn transformation invariance from one source and sample points from another, while retaining powerful discrimination and generalization properties. The property can be stated formally as the following theorem.

Theorem 3.1. (MMIF is invariant to learnt transformations) $\mathrm{MMIF}(x') = \mathrm{MMIF}(gx')$ $\forall x'$, $\forall g \in G$, where $G$ is observed only through $\mathcal{T}_G = \{gt_i \mid g \in G\}_{i=1,\dots,M}$.
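A minimal end-to-end sketch of this pipeline is given below, under assumed toy ingredients: an RBF kernel, a finite rotation group $G$ observed only through transformed unlabeled templates, and a non-transformed labelled set. A plain least-squares separator stands in for the SVM solver, so this is an illustration of the structure of Eq. (2) and Theorem 3.1, not the paper's implementation.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def rotations(n=12):
    ts = 2 * np.pi * np.arange(n) / n
    return [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]) for t in ts]

G = rotations()
T = np.random.randn(10, 2)               # M = 10 unlabeled templates (T_G = {g t})
X = np.random.randn(20, 2)               # labelled, non-transformed data
y = np.sign(np.random.randn(20))

def l(x):
    """Invariant kernel feature, Eq. (2): l(x)_i = mean over g of k(x, g t_i)."""
    return np.array([np.mean([rbf(x, g @ t) for g in G]) for t in T])

L = np.stack([l(x) for x in X])

# Each MMIF dimension scores l(x) against a linear max-margin classifier;
# least squares is used here as a simple stand-in for the SVM (K = 1).
w, *_ = np.linalg.lstsq(L, y, rcond=None)

def mmif(x):
    return l(x) @ w

x_test = np.random.randn(2)
print(mmif(x_test), mmif(G[4] @ x_test))  # equal: invariance inherited from T_G alone
```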
Thus MMIF can solve the Unlabeled Transformation Problem. MMIFs have an invariant and a discriminative component: the invariant component allows generalization to new transformations of the test sample, whereas the discriminative component enables robust classification through max-margin classifiers. These two properties make MMIFs very useful, as we find in our face recognition experiments.

Max and Mean Pooling in MMIF. Group integration in practice directly results in mean pooling. Recent work showed that group integration can be treated as a subset of I-theory, in which one measures moments (or a subset of them) of the distribution $\langle x, gt \rangle$, $g \in G$, since that distribution is itself an invariant [1]. Group integration measures the mean, or first moment, of the distribution; one can instead characterize it by the infinite moment, i.e. the max (see the sketch following this section). We find in our experiments that max pooling generally outperforms mean pooling. All results in this paper still hold under the I-theory framework.

MMIF on external feature extractors (deep networks). MMIF makes no assumptions about its input, so it can be applied to features extracted from any feature extractor. The goal of any feature extractor is (ideally) to be invariant to within-class transformations while maximizing between-class discrimination. However, most feature extractors are not trained to explicitly factor out specific transformations. If we have access to even a small dataset exhibiting the transformation of interest, we can transfer the invariance using MMIFs (e.g. it is unlikely to observe all poses of a person in a dataset, yet pose is an important nuisance transformation).

Modelling general non-unitary transformations. General non-linear transformations such as out-of-plane rotation or pose variation are challenging to model. Nonetheless, a small variation in these transformations can be approximated by some unitary $G$, assuming piecewise linearity through transformation-dependent sub-manifold unfolding [11]. Further, it has been found that in practice, integrating over general transformations produces approximate invariance [8].
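To make the pooling discussion concrete: both the mean and the max of the distribution $\langle x, gt \rangle$ over $g \in G$ are invariants of the orbit of $x$, since transforming $x$ merely permutes that set of values. The toy check below (rotation group and random template, our own choices) verifies this for both pooling functions.

```python
import numpy as np

def rotations(n=12):
    ts = 2 * np.pi * np.arange(n) / n
    return [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]) for t in ts]

G = rotations()
t = np.random.randn(2)                    # a single template
x = np.random.randn(2)

def pooled(x, pool):
    # Pool the distribution <x, g t> over the group; mean = group integration,
    # max = the infinite moment from I-theory.
    return pool([x @ (g @ t) for g in G])

for pool in (np.mean, np.max):
    v1 = pooled(x, pool)
    v2 = pooled(G[3] @ x, pool)           # transformed input, same invariant value
    print(pool.__name__, np.isclose(v1, v2))
```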
[Figure 3: two ROC panels (verification rate vs. false accept rate; the FAR axis of panel (b) is logarithmic from 1e-8 to 1). Legend of (a): ∞-DIKF (0.74), 1-DIKF (0.61), NDP-∞ (0.41), NDP-1 (0.32), MMIF (Ours) (0.78), VGG Features (0.55), MMIF-VGG (Ours) (0.61); legend of (b): MMIF-VGG (Ours) (0.71), VGG (0.56).] Figure 3: (a) Pose-invariant face recognition results on the semi-synthetic large-scale mugshot database (testing on 114,750 images). Operating on pixels: MMIF (Pixels) outperforms the invariance-based methods DIKF [10] and invariant NDP [8]. Operating on deep features: MMIF trained on VGG-Face features [12] (MMIF-VGG) produces a significant improvement in performance. The numbers in brackets represent VR at 0.1% FAR. (b) Face recognition results on LFW with raw VGG-Face features and MMIF trained on VGG-Face features. The values in brackets show VR at 0.1% FAR.

4 Experiments on Face Recognition

As an illustration, we apply MMIFs in two modalities: 1) on raw pixels, and 2) on deep features from the pre-trained VGG-Face network [12]. We provide more implementation details and a discussion of results in the supplementary material.

A. MMIF on a large-scale semi-synthetic mugshot database (raw pixels and deep features). We utilize a large-scale semi-synthetic face dataset to generate the sets $\mathcal{T}_G$ and $\mathcal{X}$ for MMIF. In this dataset only two major transformations exist: pose variation and subject variation. All other transformations, such as illumination, translation, and rotation, are strictly and synthetically controlled. This provides a very good benchmark for face recognition, where we want to be invariant to pose variation and discriminative for subject variation. The experiment follows the exact protocol and data described in [10].³ We test 750 subject identities with 153 pose-varied, real-textured gray-scale images each (a total of 114,750 images) against each other, resulting in about 13 billion pair-wise comparisons (compared to 6,000 for the standard LFW protocol). Results are reported as ROC curves along with VR at 0.1% FAR. Fig. 3(a) shows the ROC curves for this experiment. We find that MMIF outperforms all baselines, including VGG-Face features (pre-trained), DIKF, and NDP, demonstrating superior discriminability while effectively capturing pose invariance from the transformed template set $\mathcal{T}_G$. MMIF solves the Unlabeled Transformation Problem by extracting transformation information from the unlabeled $\mathcal{T}_G$.

B. MMIF on LFW (deep features): unseen subject protocol. In order to train effectively under general transformations, and to challenge our algorithms, we define a new, much harder protocol on LFW. We choose the top 500 subjects, with a total of 6,300 images, for training MMIF on VGG-Face features, and test on the remaining subjects with 7,000 images. We perform all-versus-all matching, totalling up to 49 million matches (four orders of magnitude more than the official protocol). The evaluation metric is the standard ROC curve, with verification rate reported at 0.1% false accept rate. We split the 500 subjects into two sets of 250, used as $\mathcal{T}_G$ and $\mathcal{X}$. We do not use any alignment for this experiment, and the faces were cropped according to [16]. Fig. 3(b) shows the results. MMIF on VGG features significantly outperforms raw VGG on this protocol, boosting the VR at 0.1% FAR from 0.56 to 0.71. This demonstrates that MMIF can generate invariance for highly non-linear transformations that are not well defined, rendering it useful in real-world scenarios where transformations are unknown but observable.

³ We provide more details in the supplementary material. Also note that we do not need to utilize identity information; all that is required is that a set of pose-varied images belong to the same subject. Such data can be obtained through temporal sampling.

References

[1] F. Anselmi, J. Z. Leibo, L. Rosasco, J. Mutch, A. Tacchetti, and T. Poggio. Magic materials: a theory of deep hierarchical architectures for learning sensory representations. MIT, CBCL paper, 2013.
[2] F. Anselmi, J. Z. Leibo, L. Rosasco, J. Mutch, A. Tacchetti, and T. Poggio. Unsupervised learning of invariant representations in hierarchical architectures. CoRR, abs/1311.4158, 2013.
[3] D. Decoste and B. Schölkopf. Training invariant support vector machines. Mach. Learn., 46(1-3):161-190, Mar. 2002.
[4] B. Haasdonk and H. Burkhardt. Invariant kernel functions for pattern analysis and machine learning. In Machine Learning, pages 35-61, 2007.
[5] B.
Haasdonk and D. Keysers. Tangent distance kernels for support vector machines. In Pattern Recognition, 2002. Proceedings. 16th International Conference on, volume 2, pages 864-868, 2002.
[6] G. E. Hinton. Learning translation invariant recognition in a massively parallel networks. In PARLE Parallel Architectures and Languages Europe, pages 1-13. Springer, 1987.
[7] J. Z. Leibo, Q. Liao, and T. Poggio. Subtasks of unconstrained face recognition. In International Joint Conference on Computer Vision, Imaging and Computer Graphics, VISIGRAPP, 2014.
[8] Q. Liao, J. Z. Leibo, and T. Poggio. Learning invariant representations and applications to face verification. Advances in Neural Information Processing Systems (NIPS), 2013.
[9] P. Niyogi, F. Girosi, and T. Poggio. Incorporating prior information in machine learning by creating virtual examples. In Proceedings of the IEEE, pages 2196-2209, 1998.
[10] D. K. Pal, F. Juefei-Xu, and M. Savvides. Discriminative invariant kernel features: a bells-and-whistles-free approach to unsupervised face recognition and pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5590-5599, 2016.
[11] S. W. Park and M. Savvides. An extension of multifactor analysis for face recognition based on submanifold learning. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2645-2652. IEEE, 2010.
[12] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. 2015.
[13] T. Poggio and T. Vetter. Recognition and structure from one 2d model view: Observations on prototypes, object classes and symmetries. Laboratory, Massachusetts Institute of Technology, 1992.
[14] A. Raj, A. Kumar, Y. Mroueh, T. Fletcher, and B. Schölkopf. Local group invariant representations via orbit embeddings. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS 2017), volume 54 of Proceedings of Machine Learning Research, pages 1225-1235, 2017.
[15] M. Reisert. Group integration techniques in pattern analysis: a kernel view. PhD Thesis, 2008.
[16] C. Sanderson and B. C. Lovell. Multi-region probabilistic histograms for robust and scalable identity inference. In International Conference on Biometrics, pages 199-208. Springer, 2009.
[17] B. Schölkopf and A. J. Smola. Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT Press, 2002.
[18] B. Schölkopf, C. Burges, and V. Vapnik. Incorporating invariances in support vector learning machines. pages 47-52. Springer, 1996.
[19] B. Schölkopf, P. Simard, A. Smola, and V. Vapnik. Prior knowledge in support vector kernels. Advances in Neural Information Processing Systems (NIPS), 1998.
[20] C. Walder and O. Chapelle. Learning with transformation invariant kernels. In Advances in Neural Information Processing Systems, pages 1561-1568, 2007.
[21] X. Zhang, W. S. Lee, and Y. W. Teh. Learning with invariance via linear functionals on reproducing kernel hilbert space. In Advances in Neural Information Processing Systems, pages 2031-2039, 2013.
6,349
6,743
Regularized Modal Regression with Applications in Cognitive Impairment Prediction

Xiaoqian Wang¹, Hong Chen¹, Weidong Cai², Dinggang Shen³, Heng Huang¹*
¹ Department of Electrical and Computer Engineering, University of Pittsburgh, USA
² School of Information Technologies, University of Sydney, Australia
³ Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract

Linear regression models have been successfully used for function estimation and model selection in high-dimensional data analysis. However, most existing methods are built on least squares with the mean squared error (MSE) criterion, which is sensitive to outliers and may degrade under heavy-tailed noise. In this paper, we go beyond this criterion by investigating regularized modal regression from a statistical learning viewpoint. A new regularized modal regression model is proposed for estimation and variable selection that is robust to outliers, heavy-tailed noise, and skewed noise. On the theoretical side, we establish an approximation estimate for learning the conditional mode function, a sparsity analysis for variable selection, and a robustness characterization. On the application side, we apply our model to improve cognitive impairment prediction using the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort data.

1 Introduction

Modal regression [21, 5] has gained increasing attention recently due to its effectiveness in function estimation and its robustness to outliers and heavy-tailed noise. Unlike the traditional least-squares estimator pursuing the conditional mean, modal regression aims to estimate the conditional mode of the output Y given the input X = x. It is well known that conditional modes can reveal structure in the outputs and trends in the observations that are missed by the conditional mean [29, 4]. Thus, modal regression often achieves better performance than traditional least-squares regression in practical applications. There are several studies of modal regression with (semi-)parametric or nonparametric methods, such as [29, 28, 4, 6]. For parametric approaches, a parametric form is required for the global conditional mode function; recent works in [29, 28] belong to this category, where the method in [28] is based on a linear mode-function assumption and the algorithm in [29] is associated with local polynomial regression. For nonparametric approaches, the conditional mode is usually derived by maximizing a conditional density or a joint density. Typical work for this setting is established in [4], where a local modal regression is proposed based on kernel density estimation, and theoretical analysis is provided to characterize asymptotic error bounds. Most of the above works consider asymptotic theory for conditional mode function estimation. Recently, several studies on variable selection under modal regression were also conducted in [30, 27]. These approaches addressed the problem from a statistical theory viewpoint (e.g. asymptotic normality) and were implemented by modified EM algorithms.

* X. Wang and H. Chen made equal contributions to this paper. H. Huang is the corresponding author.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Although these studies provide us good understanding for modal regression, the following problems still remain unclear in theory and applications. Can we design new modal regression following the line of structural risk minimization? Can we provide its statistical guarantees and computing algorithm for designed model? This paper focuses on answering the above questions. To illustrate the effectiveness of our model, we looked into a practical problem, i.e., cognitive impairment prediction via neuroimaging data. As the most common cause of dementia, Alzheimer?s Disease (AD) imposes extensive and complex impact on human thinking and behavior. Accurate and automatic study of the relationship between brain structural changes and cognitive impairment plays a crucial role in early diagnosis of AD. In order to increase the diagnostic capabilities, neuroimaging provides an effective approach for clinical detection and treatment response monitoring of AD [13]. Several cognitive tests were presented to assess the individual?s cognitive level, such as Mini-Mental State Examination (MMSE) [8] and Trail Making Test (TMT) [1]. With the development of these techniques, a wide range of work employed regression models to study the correlations between neuroimaging data and cognitive measures [23, 16, 26, 25, 24]. However, existing methods use mean regression models based on the least-square estimator to predict the relationship between neuroimaging features and cognitive assessment, which may fail when the noise in the data is heavy-tailed or skewed. According to the complex data collection process [13], the assumption of symmetric noise may not be guaranteed in biomedical data. Under such a circumstance, modal regression model proves to be more appropriate due to its robustness to outliers, heavy-tailed noise, and skewed noise. We applied our method to the ADNI cohort for the association study between neuroimaging features and cognitive assessment. Experimental results illustrated the effectiveness of our model. Moreover, with sparse constraints, our model found several imaging features that have been reported to be crucial to the onset and progression of AD. The replication of these results further support the validity of our model. Our main works can be summarized as below: 1) Following the Tikhonov regularization and kernel density estimation, we develop a new Regularized Modal Regression (RMR) for estimating the conditional mode function and selecting informative variables, which can be considered as a natural extension of Lasso [22] and can be implemented efficiently by half-quadratic minimization methods. 2) Learning theory analysis is established for RMR from three aspects: approximation ability, sparsity, and robustness, which provide the theoretical foundations of the proposed approach. 3) By applying our RMR model to the ADNI cohort, we reveal interesting findings in cognitive impairment prediction of Alzheimer?s disease. 2 2.1 Regularized Modal Regression Modal regression We consider learning problem with input space X ? Rp and output space Y ? R. Let pY |X=x be the conditional density of Y ? Y for given X = x ? X . In the prediction of cognitive assessment, we denote the neuroimaging data for the i-th sample as xi and the cognitive measure for the i-th sample as yi . Suppose that training samples z = {(xi , yi )}ni=1 ? X ? Y are generated independently by: Y = f ? (X) + ?, (1) where mode(?|X = x) = arg max p?|X (t|X = x) = 0 for any x ? X . Here, p?|X , as the t conditional density of ? 
conditioned on $X$, is well defined. The target function of modal regression can then be written as

$f^*(x) = \mathrm{mode}(Y|X = x) = \arg\max_t p_{Y|X}(t|X = x), \quad \forall x \in \mathcal{X}.$   (2)

To ensure $f^*$ is well defined on $\mathcal{X}$, we require the existence and uniqueness of the maximizer of $p_{Y|X}(t|X = x)$ for any given $x \in \mathcal{X}$. Relationship (2) says that $f^*$ maximizes the conditional density $p_{Y|X}$, which is equivalent to maximizing the joint density $p_{X,Y}$ [4, 29, 28]. Here we formulate modal regression following the dimension-insensitive statistical learning framework [7].

For feasibility, we denote by $\rho$ the intrinsic distribution on $\mathcal{X} \times \mathcal{Y}$ for data generated by (1), and by $\rho_X$ the corresponding marginal distribution on $\mathcal{X}$. It has been proved in Theorem 3 of [6] that $f^*$ is the maximizer of

$R(f) = \int_{\mathcal{X}} p_{Y|X}(f(x)|X = x) \, d\rho_X(x)$   (3)

over all measurable functions. Hence we can adopt $R(f)$ as the evaluation measure of a modal regression estimator $f : \mathcal{X} \rightarrow \mathbb{R}$. However, we cannot obtain the estimator by maximizing this criterion directly, since $p_{Y|X}$ and $\rho_X$ are unknown. Recently, Theorem 5.1 in [6] showed that $R(f) = p_{\epsilon_f}(0)$, where $p_{\epsilon_f}$ is the density function of the random variable $\epsilon_f = Y - f(X)$. The problem of maximizing $R(f)$ over a hypothesis space can thus be transformed into maximizing the density of $\epsilon_f$ at 0, and this density can be estimated by nonparametric kernel density estimation.

For a kernel $K_\sigma : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}_+$, we denote its representing function by $\phi\big(\frac{u - u'}{\sigma}\big) = K_\sigma(u, u')$, which usually satisfies $\phi(u) = \phi(-u)$, $\phi(u) \le \phi(0)$ for any $u \in \mathbb{R}$, and $\int_{\mathbb{R}} \phi(u)\,du = 1$. Typical examples include the Gaussian kernel, the Epanechnikov kernel, the quadratic kernel, the triweight kernel, and the sigmoid function. The empirical estimate of $R(f)$ (equivalently, of $p_{\epsilon_f}(0)$) obtained by kernel density estimation is

$R^\sigma_{\mathbf{z}}(f) = \frac{1}{n\sigma}\sum_{i=1}^n K_\sigma(y_i - f(x_i), 0) = \frac{1}{n\sigma}\sum_{i=1}^n \phi\Big(\frac{y_i - f(x_i)}{\sigma}\Big).$

Hence an approximation of $f^*$ can be found by learning algorithms associated with $R^\sigma_{\mathbf{z}}(f)$. In theory, for any $f : \mathcal{X} \rightarrow \mathbb{R}$, the expectation version of $R^\sigma_{\mathbf{z}}(f)$ is

$R^\sigma(f) = \frac{1}{\sigma}\int_{\mathcal{X} \times \mathcal{Y}} \phi\Big(\frac{y - f(x)}{\sigma}\Big) \, d\rho(x, y).$

In particular, $R(f) - R^\sigma(f) \rightarrow 0$ as $\sigma \rightarrow 0$ [6].

2.2 Modal regression with coefficient-based regularization

In this paper, we assume that $f^*(x) = \mathrm{mode}(Y|X = x) = w^{*T}x$ for some $w^* \in \mathbb{R}^p$. Following the ideas of ridge regression and the Lasso [22], we consider a robust linear estimator for learning the conditional mode function. Let $\mathcal{F}$ be the linear hypothesis space

$\mathcal{F} = \{f(x) = w^Tx : w = (w_1, \dots, w_p) \in \mathbb{R}^p, \ x \in \mathcal{X}\}.$

For given positive tuning parameters $\{\lambda_j\}_{j=1}^p$, we denote

$\Omega(f) = \inf\Big\{\sum_{j=1}^p \lambda_j |w_j|^q : f(x) = w^Tx, \ q \in [1, 2]\Big\}.$

Given the training set $\mathbf{z}$, the regularized modal regression (RMR) estimator is

$f_{\mathbf{z}} = \arg\max_{f \in \mathcal{F}} \big\{R^\sigma_{\mathbf{z}}(f) - \eta\,\Omega(f)\big\},$   (4)

where the regularization parameter $\eta > 0$ balances the modal regression measure against the hypothesis space complexity. It is easy to deduce that $f_{\mathbf{z}}(x) = w_{\mathbf{z}}^Tx$ with

$w_{\mathbf{z}} = \arg\max_{w \in \mathbb{R}^p} \Big\{\frac{1}{n\sigma}\sum_{i=1}^n \phi\Big(\frac{y_i - w^Tx_i}{\sigma}\Big) - \eta\sum_{j=1}^p \lambda_j |w_j|^q\Big\}.$   (5)

When $\lambda_j \equiv 1$ for $1 \le j \le p$ and $q = 1$, (5) can be considered a natural extension of the Lasso [22] from learning the conditional mean function to estimating the conditional mode function. When $\lambda_j \equiv 1$ and $q = 2$, (5) can likewise be regarded as the modal counterpart of ridge regression, obtained by replacing the MSE criterion with the modal regression criterion. In particular, when $K_\sigma$ is the Gaussian kernel and $\lambda_j \equiv 1$ for $1 \le j \le p$, (5) can be rewritten as

$w_{\mathbf{z}} = \arg\max_{w \in \mathbb{R}^p} \Big\{\frac{1}{n\sigma}\sum_{i=1}^n \exp\Big(-\frac{(y_i - w^Tx_i)^2}{\sigma^2}\Big) - \eta\|w\|_q^q\Big\},$

which is equivalent to correntropy regression under the maximum correntropy criterion [19, 9, 7].
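A minimal sketch of the empirical criterion $R^\sigma_{\mathbf{z}}(f) - \eta\,\Omega(f)$ for a linear hypothesis with a Gaussian representing function is given below. The data-generating choices (sample size, true coefficient, skewed two-component noise) are our own illustrative assumptions; the grid search is only meant to show how the criterion is evaluated, not to serve as the paper's optimizer.

```python
import numpy as np

def modal_objective(w, X, y, sigma=1.0, eta=0.0, lam=None, q=1):
    """R_z^sigma(f) - eta * Omega(f) for f(x) = w^T x with a Gaussian kernel."""
    r = y - X @ w                                      # residuals y_i - w^T x_i
    fit = np.mean(np.exp(-r ** 2 / sigma ** 2)) / sigma
    lam = np.ones_like(w) if lam is None else lam
    return fit - eta * np.sum(lam * np.abs(w) ** q)

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 1))
w_star = np.array([3.0])
noise = np.where(rng.random(300) < 0.5,
                 rng.normal(-2, 3, 300), rng.normal(2, 1, 300))  # skewed mixture
y = X @ w_star + noise

grid = np.linspace(-2, 8, 201)
w_mode = grid[np.argmax([modal_objective(np.array([v]), X, y, sigma=2.0, eta=1e-3)
                         for v in grid])]
w_ls = float(np.linalg.lstsq(X, y, rcond=None)[0][0])
print(w_mode, w_ls)   # under skewed noise the two criteria need not agree
```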
2.3 Optimization algorithm

We employ half-quadratic (HQ) theory [18] in the optimization. A convex problem $\min_s u(s)$ is equivalent to the half-quadratic reformulation

$\min_{s,t} \; Q(s, t) + v(t),$

where $Q(s, t)$ is quadratic in $s$ for any $t \in \mathbb{R}$ and $v : \mathbb{R} \rightarrow \mathbb{R}$ satisfies

$u(s) = \min_t Q(s, t) + v(t), \quad \forall s \in \mathbb{R}.$

Such a dual potential function $v$ can be determined via convex conjugacy, as shown below. According to convex optimization theory [20], for a closed convex function $f(a)$ there exists a convex function $g(b)$ such that

$f(a) = \max_b \big(ab - g(b)\big),$

where $g$ is the conjugate of $f$, i.e. $g = f^*$. Symmetrically, it is easy to prove $f = g^*$.

Theorem 1. For a closed convex function $f(a) = \max_b(ab - g(b))$, we have $\arg\max_b(ab - g(b)) = f'(a)$ for any $a \in \mathbb{R}$.

When $K_\sigma$ is the Gaussian kernel, the optimization steps can be found in [9]. Here we take the Epanechnikov kernel (a.k.a. the parabolic kernel) as an example to show the optimization of Problem (5) via HQ theory. The representing function induced by the Epanechnikov kernel is $\phi(e) = \frac{3}{4}(1 - e^2)\,\mathbb{1}_{[|e| \le 1]}$. Define a closed convex function $f$ as

$f(a) = \begin{cases} \frac{3}{4}(1 - a), & 0 \le a \le 1 \\ 0, & a \ge 1. \end{cases}$

There exists a convex function $g$ such that $f(a) = \max_b(ab - g(b))$ and $\phi(e) = f(e^2) = \max_b(e^2 b - g(b))$. Thus, the optimization problem (5) can be rewritten as

$\max_{w \in \mathbb{R}^p,\ b \in \mathbb{R}^n} \; \Big\{\frac{1}{n\sigma}\sum_{i=1}^n \Big(b_i\Big(\frac{y_i - w^Tx_i}{\sigma}\Big)^2 - g(b_i)\Big) - \eta\sum_{j=1}^p \lambda_j |w_j|^q\Big\}.$   (6)

Problem (6) can be optimized easily by an alternating optimization algorithm. Note that, according to Theorem 1, when $w$ is fixed, $b$ can be updated as $b_i = f'\big(\big(\frac{y_i - w^Tx_i}{\sigma}\big)^2\big) = -\frac{3}{4}\,\mathbb{1}_{[\,|\frac{y_i - w^Tx_i}{\sigma}| \le 1\,]}$ for $i = 1, 2, \dots, n$. Due to space limitations, we provide the proof of Theorem 1 and the optimization steps of RMR in the supplementary material.
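A sketch of this half-quadratic alternation is given below. With $b$ fixed, maximizing (6) in $w$ becomes a weighted least-squares problem; for a closed-form $w$-step we assume $q = 2$ and $\lambda_j \equiv 1$ here (the $q = 1$ step would instead call a weighted Lasso solver). The fixed bandwidth and iteration count are our own illustrative choices.

```python
import numpy as np

def rmr_hq(X, y, sigma=1.0, eta=1e-3, n_iter=20):
    """Half-quadratic alternation for RMR with the Epanechnikov kernel (q = 2)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        e = (y - X @ w) / sigma
        b = -0.75 * (np.abs(e) <= 1)        # b_i = f'(e_i^2) per Theorem 1;
                                            # samples with |residual| > sigma get b_i = 0
        # With b fixed, (6) is a concave quadratic in w. Writing W_i = -b_i/(n sigma^3),
        # maximizing it equals minimizing sum_i W_i (y_i - w^T x_i)^2 + eta ||w||_2^2.
        W = -b / (n * sigma ** 3)
        A = (X * W[:, None]).T @ X + eta * np.eye(p)
        w = np.linalg.solve(A, (X * W[:, None]).T @ y)
    return w
```

A call such as `w = rmr_hq(X, y, sigma=3.0)` illustrates the robustness mechanism: samples whose residuals exceed the bandwidth receive zero weight in the $w$-step, so gross outliers simply drop out of the quadratic surrogate.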
?, Here C1 , C2 is a constant independent of n, ?. Theorem 2 shows that the excess risk of R(f ? ) ? R(fz ) ? 0 with the polynomial decay and the estimation consistency is guaranteed as n ? ?. Moreover, under Assumption 3 in [6], we can derive q 1 that fz tends to f ? with approximation order O(n? 4q+3 ) for q ? (1, 2] and O( lnnp ) 7 ) for q = 1. Although approximation analysis has been provided for modal regression in [6, 28], both of them are limited to the empirical risk minimization. This is different from our result for regularized modal regression under structural risk minimization. 3.2 Sparsity analysis To characterize the variable selection ability of RMR, we first present the properties for nonzero component of wz . Theorem 3 Assume that ? is differentiable for any t ? R. For j ? {1, 2, ..., p} satisfying wzj 6= 0, there holds: n 1 X p?? |w |p?1 yi ? fz (xi ) j zj ?0 ( )xij = . 2 n? i=1 ? 2 Observe that the condition on ? holds true for Gaussian kernel, sigmoid function, and logistic function. Theorem 3 demonstrates the necessary condition for the non-zero wzj . Without loss of generality, we set S0 = {1, 2, ..., p0 } as the index set of truly informative variables and denote Sz = {j : wzj 6= 0} as the set of identified informative variables by RMR in (4). Theorem 4 Assume that kxk? ? a for any x ? X and ??j ? k?0 k? ? for any j > p0 . Then, for RMR (4) with q = 1, there holds Sz ? S0 for all z ? (X ? Y)n . Theorem 4 assures that RMR has the capacity to identify the truly informative variable in theory. Combining Theorem 4 and Theorem 2, we provide the asymptotic theory of RMR on estimation and model selection. 3.3 Robustness analysis To quantify the robustness of RMR, we calculate its finite sample breakdown point, which reflects the largest amount of contamination points that an estimator can tolerate before returning arbitrary values [11, 12]. Recently, this index has been used to investigate the robustness of modal linear regression [28] and kernel-based modal regression [6]. Recall that the derived weight wz defined in (5) is dependent on any given sampling set z = {(xi , yi )}ni=1 . By adding m arbitrary points z0 = {(xn+j , yn+j )}m j=1 ? X ? Y, we obtain the corrupted sample set z ? z0 . For given ?, ?, {?j }pj=1 , we denote wz?z0 be the maximizer of (5). Then, the finite sample breakdown point of wz is defined as: n m o (wz ) = min : sup kwz?z0 k2 = ? . 1?m?n n + m z0 5 Theorem 5 Assume that ?(u) = ?(?u) and ?(t) ? 0 as t ? ?. For given ?, ?, {?j }pj=1 , we denote: n M= 1 X y?i ? fz (xi )) ?( ) ? ??(?(0))?1 ?(fz ). ?(0) i=1 ? Then the finite sample breakdown point of wz in (5) is (wz ) = is the smallest integer not less than M . m? n+m? , where m? ? dM e and dM e From Theorem 5, we know that the finite breakdown point of RMR depends on ?, ?, and the sample configuration, which is similar with re-descending M-estimator and recent analysis for modal linear regression in [28]. As illustrated in [11, 12], the finite sample breakdown point is high when the bandwidth ? only depends on the training samples. Hence, RMR can achieve satisfactory robustness when ?, ?j are chosen properly and ? is determined by data-driven techniques. 4 Experimental Analysis In this section, we conduct experiments on both toy data, benchmark data as well as the ADNI cohort data to evaluate our RMR model. 
4 Experimental Analysis

In this section, we conduct experiments on toy data, benchmark data, and the ADNI cohort data to evaluate our RMR model.

We compare several regression methods in the experiments, including: LSR (traditional mean regression based on the least squares estimator), LSR-L2 (LSR with squared ℓ2-norm regularization, i.e., ridge regression), LSR-L1 (LSR with ℓ1-norm regularization), MedianR (median regression), HuberR (regression with the Huber loss), RMR-L2 (RMR with squared ℓ2-norm regularization), and RMR-L1 (RMR with ℓ1-norm regularization).

For evaluation, we calculate the root mean square error (RMSE) between the predicted values and the ground truth in out-of-sample prediction. The RMSE value is normalized by the Frobenius norm of the ground-truth matrix. We employ 2-fold cross validation and report the average performance of each method. For each method, we set the hyper-parameter of the regularization term in the range {10^{−4}, 10^{−3.5}, ..., 10^4}. We tune the hyper-parameters via 2-fold cross validation on the training data and report the best parameter w.r.t. the RMSE of each method. For the RMR methods, we adopt the Epanechnikov kernel and set the bandwidth as σ = max(|y − w^T x|).

4.1 Performance comparison on toy data

Following the design in [28], we generate the toy data by sampling i.i.d. from the model Y = −2 + 3X + ρ(X)ε, where X ∼ U(0, 1), ρ(X) = 1 + 2X, and ε ∼ 0.5N(−2, 3²) + 0.5N(2, 1²). We can derive that E(ε) = 0, Mode(ε) = 1.94, and Median(ε) = 1; hence the conditional mean regression function of the toy data is E(Y|X) = −2 + 3X, the conditional median function is Median(Y|X) = −1 + 5X, and the conditional mode function is Mode(Y|X) = −0.06 + 6.88X. We consider three sample sizes (100, 200, and 500) and repeat the experiments 100 times for each setting. We present the RMSE in Table 1, which shows that the RMR models attain lower RMSE values than all competing methods. This indicates that the RMR models estimate the output better when the noise in the data is skewed and relatively heavy-tailed.

Table 1: Average RMSE and standard deviation with different numbers (n) of toy samples.

| Method  | n=100         | n=200         | n=500         |
|---------|---------------|---------------|---------------|
| LSR     | 0.9687±0.0699 | 0.9477±0.0294 | 0.9495±0.0114 |
| LSR-L2  | 0.9671±0.0685 | 0.9469±0.0284 | 0.9495±0.0114 |
| LSR-L1  | 0.9672±0.0685 | 0.9473±0.0288 | 0.9495±0.0114 |
| MedianR | 0.9944±0.0806 | 0.9568±0.0350 | 0.9542±0.0120 |
| HuberR  | 0.9725±0.0681 | 0.9485±0.0296 | 0.9502±0.0116 |
| RMR-L2  | 0.9663±0.0683 | 0.9466±0.0282 | 0.9493±0.0114 |
| RMR-L1  | 0.9662±0.0679 | 0.9465±0.0281 | 0.9492±0.0114 |

Moreover, we compare the coverage probabilities of prediction intervals centered at the predicted value of each method. We set the lengths of the coverage intervals to 0.1σ_ε, 0.2σ_ε, and 0.3σ_ε, with σ_ε = 3 being the approximate standard deviation of ε. From Table 2 we find that the RMR models provide larger coverage probabilities than their counterparts.

Table 2: Average coverage probabilities and standard deviation on toy data.

| Interval | Method  | n=100         | n=200         | n=500         |
|----------|---------|---------------|---------------|---------------|
| 0.1σ_ε   | LSR     | 0.0730±0.0247 | 0.0702±0.0166 | 0.0702±0.0106 |
| 0.1σ_ε   | LSR-L2  | 0.0753±0.0247 | 0.0731±0.0155 | 0.0709±0.0108 |
| 0.1σ_ε   | LSR-L1  | 0.0747±0.0246 | 0.0719±0.0161 | 0.0706±0.0106 |
| 0.1σ_ε   | MedianR | 0.0563±0.0255 | 0.0626±0.0124 | 0.0654±0.0097 |
| 0.1σ_ε   | HuberR  | 0.0710±0.0258 | 0.0698±0.0160 | 0.0694±0.0101 |
| 0.1σ_ε   | RMR-L2  | 0.0760±0.0254 | 0.0740±0.0161 | 0.0719±0.0111 |
| 0.1σ_ε   | RMR-L1  | 0.0760±0.0255 | 0.0742±0.0156 | 0.0720±0.0111 |
| 0.2σ_ε   | LSR     | 0.1313±0.0338 | 0.1450±0.0255 | 0.1430±0.0193 |
| 0.2σ_ε   | LSR-L2  | 0.1337±0.0334 | 0.1461±0.0251 | 0.1429±0.0196 |
| 0.2σ_ε   | LSR-L1  | 0.1337±0.0337 | 0.1458±0.0258 | 0.1430±0.0193 |
| 0.2σ_ε   | MedianR | 0.1087±0.0351 | 0.1331±0.0239 | 0.1377±0.0182 |
| 0.2σ_ε   | HuberR  | 0.1237±0.0347 | 0.1442±0.0257 | 0.1421±0.0188 |
| 0.2σ_ε   | RMR-L2  | 0.1340±0.0336 | 0.1477±0.0256 | 0.1441±0.0199 |
| 0.2σ_ε   | RMR-L1  | 0.1343±0.0340 | 0.1481±0.0247 | 0.1441±0.0198 |
| 0.3σ_ε   | LSR     | 0.1923±0.0402 | 0.2142±0.0342 | 0.2150±0.0229 |
| 0.3σ_ε   | LSR-L2  | 0.1940±0.0415 | 0.2165±0.0331 | 0.2156±0.0222 |
| 0.3σ_ε   | LSR-L1  | 0.1940±0.0415 | 0.2153±0.0334 | 0.2153±0.0226 |
| 0.3σ_ε   | MedianR | 0.1750±0.0414 | 0.2031±0.0299 | 0.2095±0.0233 |
| 0.3σ_ε   | HuberR  | 0.1873±0.0389 | 0.2132±0.0333 | 0.2144±0.0224 |
| 0.3σ_ε   | RMR-L2  | 0.1943±0.0420 | 0.2179±0.0327 | 0.2168±0.0220 |
| 0.3σ_ε   | RMR-L1  | 0.1950±0.0406 | 0.2177±0.0323 | 0.2167±0.0219 |
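The toy model above is easy to reproduce; a minimal sampler (function names ours) is:

```python
import numpy as np

def sample_toy(n, seed=0):
    """Draw n i.i.d. samples from Y = -2 + 3X + (1 + 2X) * eps with
    X ~ U(0, 1) and eps ~ 0.5 N(-2, 3^2) + 0.5 N(2, 1^2)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=n)
    pick = rng.random(n) < 0.5
    eps = np.where(pick, rng.normal(-2.0, 3.0, n), rng.normal(2.0, 1.0, n))
    Y = -2.0 + 3.0 * X + (1.0 + 2.0 * X) * eps
    return X, Y

# Lines implied by the model (matching the text above):
#   E[Y|X]      = -2 + 3X        since E[eps] = 0
#   Median(Y|X) = -1 + 5X        since Median(eps) = 1
#   Mode(Y|X)   = -0.06 + 6.88X  since Mode(eps) is about 1.94
```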
4.2 Performance comparison on benchmark data

Here we present the comparison results on six benchmark datasets from the UCI repository [15] and StatLib²: slumptest, forestfire, bolts, cloud, kidney, and lupus. We summarize the results in Table 3. From the comparison, we notice that the RMR models tend to perform better on all datasets. Also, RMR-L1 obtains lower RMSE values, since the RMR-L1 model is more robust with the ℓ1-norm regularization term.

²http://lib.stat.cmu.edu/datasets/

Table 3: Average RMSE and standard deviation on benchmark data.

| Method  | slumptest     | forestfire    | bolts         | cloud         | kidney        | lupus         |
|---------|---------------|---------------|---------------|---------------|---------------|---------------|
| LSR     | 0.2689±0.0295 | 0.9986±0.0874 | 0.4865±0.0607 | 0.6178±0.0190 | 0.5077±0.0264 | 0.8646±0.3703 |
| LSR-L2  | 0.2616±0.0266 | 0.9822±0.0064 | 0.4687±0.0137 | 0.5782±0.0029 | 0.5106±0.0219 | 0.8338±0.3282 |
| LSR-L1  | 0.2571±0.0277 | 0.9822±0.0079 | 0.4713±0.0172 | 0.5802±0.0043 | 0.5196±0.0089 | 0.8408±0.3366 |
| MedianR | 0.2810±0.0024 | 0.9964±0.0050 | 0.4436±0.0232 | 0.6457±0.0301 | 0.5432±0.0160 | 1.2274±0.6979 |
| HuberR  | 0.2669±0.0268 | 0.9874±0.0299 | 0.4841±0.0661 | 0.6178±0.0190 | 0.5447±0.0270 | 0.9198±0.4226 |
| RMR-L2  | 0.2538±0.0185 | 0.9817±0.0093 | 0.4782±0.0107 | 0.5702±0.0131 | 0.4871±0.0578 | 0.8071±0.3053 |
| RMR-L1  | 0.2517±0.0240 | 0.9802±0.0198 | 0.3298±0.1313 | 0.5663±0.0305 | 0.4989±0.0398 | 0.7885±0.2910 |

4.3 Performance comparison on the ADNI cohort data

Now we look into a practical problem in Alzheimer's disease: the prediction of cognitive scores from neuroimaging features. Data used in this article were obtained from the ADNI database (adni.loni.usc.edu). We extract 93 regions of interest (ROIs) as neuroimaging features and use cognitive scores from three tests: the Fluency Test, the Alzheimer's Disease Assessment Scale (ADAS), and the Trail Making Test (TRAILS). 795 subjects were involved in our study, including 180 AD samples, 390 MCI samples, and 225 normal control (NC) samples. A detailed data description can be found in the supplementary material. Our goal is to construct an appropriate model to predict cognitive performance given neuroimaging data. Meanwhile, we expect the model to reveal the importance of different features in the prediction, which is fundamental to understanding the role of each imaging marker in the study of AD.

Table 4: Average RMSE and standard deviation on the ADNI data.

| Method  | Fluency       | ADAS          | TRAILS        |
|---------|---------------|---------------|---------------|
| LSR     | 0.3856±0.0034 | 0.4397±0.0112 | 0.6798±0.0538 |
| LSR-L2  | 0.3269±0.0069 | 0.4116±0.0208 | 0.5443±0.0127 |
| LSR-L1  | 0.3295±0.0035 | 0.4121±0.0100 | 0.5476±0.0115 |
| MedianR | 0.4164±0.0291 | 0.4700±0.0151 | 0.6702±0.1184 |
| HuberR  | 0.3856±0.0034 | 0.4383±0.0133 | 0.6621±0.0789 |
| RMR-L2  | 0.3256±0.0049 | 0.4105±0.0216 | 0.5342±0.0186 |
| RMR-L1  | 0.3269±0.0057 | 0.4029±0.0234 | 0.5423±0.0123 |

From Table 4, we find that the RMR models always perform equal to or better than the competing methods, which verifies that RMR is more appropriate for learning the association between neuroimaging markers and cognitive performance. We notice that RMR-L2 always performs better than LSR-L2, and RMR-L1 outperforms LSR-L1. This is because the symmetric noise assumption in least squares models may not hold on the ADNI cohort. Compared with HuberR, our RMR model is shown to be less sensitive to outliers. Moreover, from the comparison between MedianR and the RMR models, we can infer that the conditional mode is more suitable than the conditional median for the prediction of cognitive scores.
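For completeness, the evaluation metric used in Tables 1, 3, and 4 (RMSE normalized by the Frobenius norm of the ground truth) together with the 2-fold cross-validation protocol can be sketched as follows; `fit` and `predict` are placeholders, not part of the paper, standing in for any of the regression methods above.

```python
import numpy as np

def normalized_rmse(Y_pred, Y_true):
    """RMSE normalized by the Frobenius norm of the ground-truth matrix."""
    return np.linalg.norm(Y_pred - Y_true) / np.linalg.norm(Y_true)

def two_fold_cv(fit, predict, X, Y, seed=0):
    """2-fold cross validation: train on one half, test on the other,
    and average the two normalized RMSE scores."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    a, b = idx[: len(Y) // 2], idx[len(Y) // 2 :]
    scores = []
    for train, test in [(a, b), (b, a)]:
        model = fit(X[train], Y[train])
        scores.append(normalized_rmse(predict(model, X[test]), Y[test]))
    return float(np.mean(scores))
```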
RMR-L1 imposes sparse constraints on the learnt weight matrix, which naturally achieves the goal of feature selection in the association study. Here we take the TRAILS cognitive assessment as an example and look into the important neuroimaging features in the prediction. From the heat map in Fig. 1 and the brain maps in Fig. 2, we obtain several interesting findings. In the prediction, temporal lobe white matter has been picked out as a predominant feature. [10, 2] reported decreased fractional anisotropy (FA) and increased radial diffusivity (DR) in the white matter of the temporal lobe among AD and Mild Cognitive Impairment (MCI) subjects. [10] also revealed the correlation between temporal lobe FA and episodic memory, which may account for the influence of the temporal lobe on TMT results. Besides, there is evidence in [17] supporting the association between the left temporal lobe and the working-memory component involving letters and numbers in TMT. Moreover, the angular gyrus shows high correlation with TRAILS scores in our analysis. Previous research has revealed that the angular gyrus shares many clinical features with AD. [14] presented structural MRI findings showing more left angular gyrus atrophy in MCI converters than in non-converters, which pointed out the role of atrophy of structures like the angular gyrus in the progression of dementia. [3] showed evidence for the role of the angular gyrus in orienting spatial attention, which serves as a key factor in TMT results. The replication of these results supports the effectiveness of our model.

5 Conclusion

This paper proposes a new regularized modal regression method and establishes its theoretical foundations regarding approximation ability, sparsity, and robustness. These characterizations fill in the theoretical gaps for modal regression under Tikhonov regularization. Empirical results verify the competitive performance of the proposed approach on simulated data, benchmark data, and real biomedical data. With the sparsity property of our model, we identified several biologically meaningful neuroimaging markers, showing the potential to enhance the understanding of the onset and progression of AD.

Figure 1: Heatmap showing the weights of each neuroimaging feature in the RMR-L1 model for the prediction of TRAILS cognitive measures. We draw two matrices, where the upper figure is for the left hemisphere and the lower figure for the right hemisphere. Imaging markers (columns) with larger weights indicate higher correlation with the corresponding cognitive measure in the prediction.

Figure 2: Cortical maps of ROIs identified by the RMR-L1 model for the prediction of TRAILS cognitive measures. The brain maps show one slice of a multi-view rendering; the three maps correspond to the three different cognitive measures in the TRAILS cognitive test, respectively.

Acknowledgments

This work was partially supported by U.S. NSF-IIS 1302675, NSF-IIS 1344152, NSF-DBI 1356628, NSF-IIS 1619308, NSF-IIS 1633753, and NIH AG049371. Hong Chen was partially supported by the National Natural Science Foundation of China (NSFC) 11671161. We are grateful to the anonymous NIPS reviewers for their insightful comments.

References

[1] S. G. Armitage.
An analysis of certain psychological tests used for the evaluation of brain injury. Psychol. Monogr., 60(1):1–48, 1946.
[2] M. Bozzali, A. Falini, M. Franceschi, M. Cercignani, M. Zuffi, G. Scotti, G. Comi, and M. Filippi. White matter damage in Alzheimer's disease assessed in vivo using diffusion tensor magnetic resonance imaging. J. Neurol. Neurosurg. Psychiatry, 72(6):742–746, 2002.
[3] C. D. Chambers, J. M. Payne, M. G. Stokes, and J. B. Mattingley. Fast and slow parietal pathways mediate spatial attention. Nat. Neurosci., 7(3):217–218, 2004.
[4] Y.-C. Chen, C. R. Genovese, R. J. Tibshirani, and L. Wasserman. Nonparametric modal regression. Ann. Statist., 44(2):489–514, 2016.
[5] G. Collomb, W. Härdle, and S. Hassani. A note on prediction via estimation of the conditional mode function. J. Stat. Plan. Infer., 15:227–236, 1987.
[6] Y. Feng, J. Fan, and J. A. Suykens. A statistical learning approach to modal regression. arXiv:1702.05960, 2017.
[7] Y. Feng, X. Huang, L. Shi, Y. Yang, and J. A. Suykens. Learning with the maximum correntropy criterion induced losses for regression. J. Mach. Learn. Res., 16:993–1034, 2015.
[8] M. F. Folstein, S. E. Folstein, and P. R. McHugh. "Mini-mental state": a practical method for grading the cognitive state of patients for the clinician. J. Psychiatr. Res., 12(3):189–198, 1975.
[9] R. He, W.-S. Zheng, and B.-G. Hu. Maximum correntropy criterion for robust face recognition. IEEE Trans. Pattern Anal. Mach. Intell., 33(8):1561–1576, 2011.
[10] J. Huang, R. Friedland, and A. Auchus. Diffusion tensor imaging of normal-appearing white matter in mild cognitive impairment and early Alzheimer disease: preliminary evidence of axonal degeneration in the temporal lobe. AJNR Am. J. Neuroradiol., 28(10):1943–1948, 2007.
[11] P. J. Huber. Robust Statistics. John Wiley & Sons, 1981.
[12] P. J. Huber. Finite sample breakdown of M- and P-estimators. Ann. Statist., 12(1):119–126, 1984.
[13] C. R. Jack, M. A. Bernstein, N. C. Fox, P. Thompson, G. Alexander, D. Harvey, B. Borowski, P. J. Britson, J. L. Whitwell, C. Ward, et al. The Alzheimer's Disease Neuroimaging Initiative (ADNI): MRI methods. J. Magn. Reson. Imaging, 27(4):685–691, 2008.
[14] G. Karas, J. Sluimer, R. Goekoop, W. Van Der Flier, S. Rombouts, H. Vrenken, P. Scheltens, N. Fox, and F. Barkhof. Amnestic mild cognitive impairment: structural MR imaging findings predictive of conversion to Alzheimer disease. AJNR Am. J. Neuroradiol., 29(5):944–949, 2008.
[15] M. Lichman. UCI machine learning repository, 2013.
[16] E. Moradi, I. Hallikainen, T. Hänninen, J. Tohka, A. D. N. Initiative, et al. Rey's auditory verbal learning test scores can be predicted from whole brain MRI in Alzheimer's disease. Neuroimage Clin., 13:415–427, 2017.
[17] J. Nickel, H. Jokeit, G. Wunderlich, A. Ebner, O. W. Witte, and R. J. Seitz. Gender-specific differences of hypometabolism in mTLE: implication for cognitive impairments. Epilepsia, 44(12):1551–1561, 2003.
[18] M. Nikolova and M. K. Ng. Analysis of half-quadratic minimization methods for signal and image recovery. SIAM J. Sci. Comput., 27(3):937–966, 2005.
[19] J. C. Principe. Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives. Springer, New York, 2010.
[20] R. T. Rockafellar. Convex Analysis. Princeton, NJ, USA: Princeton Univ. Press, 1970.
[21] T. W. Sager and R. A. Thisted. Maximum likelihood estimation of isotonic modal regression. Ann. Statist., 10(3):690–707, 1982.
[22] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B, 58(1):267–288, 1996.
[23] H. Wang, F. Nie, H. Huang, S. Risacher, C. Ding, A. J. Saykin, L. Shen, et al. Sparse multi-task regression and feature selection to identify brain imaging predictors for memory performance. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 557–562. IEEE, 2011.
[24] H. Wang, F. Nie, H. Huang, S. Risacher, A. J. Saykin, and L. Shen. Joint classification and regression for identifying AD-sensitive and cognition-relevant imaging biomarkers. In The 14th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2011), pages 115–123.
[25] H. Wang, F. Nie, H. Huang, J. Yan, S. Kim, S. Risacher, A. Saykin, and L. Shen. High-order multi-task feature learning to identify longitudinal phenotypic markers for Alzheimer's disease progression prediction. In Neural Information Processing Systems Conference (NIPS 2012), pages 1286–1294.
[26] X. Wang, D. Shen, and H. Huang. Prediction of memory impairment with MRI data: a longitudinal study of Alzheimer's disease. In 19th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2016), pages 273–281.
[27] H. Yang and J. Yang. A robust and efficient estimation and variable selection method for partially linear single-index models. J. Multivariate Anal., 129:227–242, 2014.
[28] W. Yao and L. Li. A new regression model: modal linear regression. Scandinavian J. Statistics, 41(3):656–671, 2014.
[29] W. Yao, B. G. Lindsay, and R. Li. Local modal regression. J. Nonparametric Stat., 24(3):647–663, 2012.
[30] W. Zhao, R. Zhang, J. Liu, and Y. Lv. Robust and efficient variable selection for semiparametric partially linear varying coefficient models based on modal regression. Ann. I. Stat. Math., 66(1):165–191, 2014.
Translation Synchronization via Truncated Least Squares

Xiangru Huang, The University of Texas at Austin, 2317 Speedway, Austin, 78712, [email protected]
Zhenxiao Liang, Tsinghua University, Beijing, China, 100084, [email protected]
Chandrajit Bajaj, The University of Texas at Austin, 2317 Speedway, Austin, 78712, [email protected]
Qixing Huang, The University of Texas at Austin, 2317 Speedway, Austin, 78712, [email protected]

Abstract

In this paper, we introduce a robust algorithm, TranSync, for the 1D translation synchronization problem, in which the aim is to recover the global coordinates of a set of nodes from noisy measurements of relative coordinates along an observation graph. The basic idea of TranSync is to apply truncated least squares, where the solution at each step is used to gradually prune out noisy measurements. We analyze TranSync under both deterministic and randomized noise models, demonstrating its robustness and stability. Experimental results on synthetic and real datasets show that TranSync is superior to state-of-the-art convex formulations in terms of both efficiency and accuracy.

1 Introduction

In this paper, we are interested in solving the 1D translation synchronization problem, where the input is encoded as an observation graph G = (V, E) with n nodes (i.e., V = {1, ..., n}). Each node is associated with a latent coordinate x_i* ∈ R, 1 ≤ i ≤ n, and each edge (i, j) ∈ E is associated with a noisy measurement t_ij = x_i* − x_j* + N(ε_ij) of the coordinate difference x_i − x_j under some noise model N(ε_ij). The goal of translation synchronization is to recover the latent coordinates (up to a global shift) from these noisy pairwise measurements.

Translation synchronization is a fundamental problem that arises in many application domains, including joint alignment of point clouds [7] and ranking from relative comparisons [8, 16]. A standard approach to translation synchronization is to solve the following linear program:

  minimize Σ_{(i,j)∈E} |t_ij − (x_i − x_j)|,  subject to Σ_{i=1}^n x_i = 0,  (1)

where the constraint ensures that the solution is unique. The major drawback of the linear programming formulation is that it can only tolerate up to 50% of the measurements coming from biased noise models (e.g., uniform samples with non-zero mean). Moreover, it is challenging to solve (1) efficiently at scale: solving (1) with an interior point method becomes impractical for large-scale datasets, while more scalable methods such as coordinate descent usually exhibit slow convergence.
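For small instances, the baseline (1) can be written as a standard linear program by introducing one slack variable per edge. The sketch below (names ours) uses SciPy's linprog and is intended only as a reference implementation, not as a scalable solver.

```python
import numpy as np
from scipy.optimize import linprog

def translation_lp(n, edges, t):
    """Solve (1): minimize sum_(i,j) |t_ij - (x_i - x_j)|, s.t. sum_i x_i = 0,
    via slack variables s_ij >= |t_ij - (x_i - x_j)|."""
    m = len(edges)
    c = np.concatenate([np.zeros(n), np.ones(m)])   # cost only on the slacks
    A_ub = np.zeros((2 * m, n + m))
    b_ub = np.zeros(2 * m)
    for k, (i, j) in enumerate(edges):
        # t_ij - (x_i - x_j) <= s_ij
        A_ub[2 * k, [i, j, n + k]] = [-1.0, 1.0, -1.0]; b_ub[2 * k] = -t[k]
        # (x_i - x_j) - t_ij <= s_ij
        A_ub[2 * k + 1, [i, j, n + k]] = [1.0, -1.0, -1.0]; b_ub[2 * k + 1] = t[k]
    A_eq = np.concatenate([np.ones(n), np.zeros(m)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0],
                  bounds=[(None, None)] * (n + m))
    return res.x[:n]
```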
In this paper, we introduce a robust and scalable algorithm, TranSync, for translation synchronization. The algorithm is rather simple: at each iteration k, we solve a truncated least squares problem,

  {x_i^(k)} = argmin_{{x_i}} Σ_{(i,j)∈E} w_ij |t_ij − (x_i − x_j)|²,  subject to Σ_{i=1}^n √d_i x_i = 0,  d_i := Σ_{j∈N(i)} w_ij,  (2)

where the weights w_ij = Id(|t_ij − (x_i^(k−1) − x_j^(k−1))| < δ_{k−1}) are obtained from the solution at the previous iteration using a geometrically decaying truncation parameter δ_{k−1}. Although TranSync requires solving a linear system at each step, these linear systems are fairly similar to each other, meaning that the solution at the previous iteration provides an excellent warm start for solving the linear system at the current iteration. As a result, the computational efficiency of TranSync is superior to state-of-the-art methods for solving the linear programming formulation.

We analyze TranSync under both deterministic and randomized noise models, demonstrating its robustness and stability. In particular, we show that TranSync is able to handle biased noisy measurements. We have evaluated TranSync on both synthetic datasets and real datasets arising in joint alignment of point clouds and in ranking from pair-wise measurements. Experimental results show that TranSync is superior to state-of-the-art solvers for the linear programming formulation in terms of both computational efficiency and accuracy.

1.1 Related Work

Translation synchronization falls into the general problem of map synchronization, which takes maps computed between pairs of objects as input and outputs consistent maps across all the objects. Map synchronization appears as a crucial step in many scientific problems, including fusing partially overlapping range scans [15], assembling fractured surfaces [14], solving jigsaw puzzles [5, 11], multi-view structure from motion [25], data-driven shape analysis and processing [17], and structure from motion [27]. Early methods for map synchronization focused on applying greedy algorithms [14, 15, 18] or combinatorial optimization [20, 23, 27]. Although these methods exhibit certain empirical success, they lack theoretical understanding (e.g., we do not know under what conditions the underlying ground-truth maps can be exactly recovered). Recent methods for map synchronization apply modern optimization techniques such as convex and non-convex optimization. In [13], Huang and Guibas introduce a semidefinite programming formulation for permutation synchronization and its variants. Chen et al. [4] generalize the method to partial maps. In [26], Wang and Singer introduce a method for rotation synchronization. Although these methods provide tight exact recovery conditions, the computational cost of the convex optimizations obstructs applying them to large-scale datasets. In contrast to convex optimization, very recent map synchronization methods leverage non-convex approaches such as spectral techniques and gradient-based optimization. In [21, 22], Pachauri et al. study map synchronization from the perspective of spectral decomposition. Recently, Shen et al. [24] provide an analysis of spectral techniques for permutation synchronization. Beyond spectral techniques, Zhou et al. [28] apply alternating minimization to permutation synchronization. Finally, Chen and Candès [3] introduce a method for the generalized permutation synchronization problem using the projected power method. To the best of our knowledge, we are the first to develop and analyze continuous map synchronization (e.g., translations or rotations) beyond convex optimization. Our approach can be considered a special case of reweighted least squares (RLS) [9, 12], which is a powerful method for solving convex and non-convex optimizations. The general RLS framework has been applied to map synchronization (e.g., see [1, 2]). Despite the empirical success of these approaches, the theoretical understanding of RLS remains rather limited. The analysis in this paper provides a first step toward understanding RLS for map synchronization.

1.2 Notation

Before proceeding to the technical part of this paper, we introduce some notation that will be used later. The unnormalized graph Laplacian of a graph G is denoted L_G.
When it is obvious from the context, we shorten L_G to L to keep the notation uncluttered. Similarly, we use D = diag(d_1, ..., d_n) to collect the vertex degrees, and we denote the vertex adjacency and vertex-edge adjacency matrices by A and B, respectively. The pseudo-inverse of a matrix X is denoted X⁺. In addition, we always sort the eigenvalues of a symmetric matrix X ∈ R^{n×n} in increasing order (i.e., λ₁(X) ≤ λ₂(X) ≤ ... ≤ λ_n(X)). Moreover, we consider several matrix norms ‖·‖, ‖·‖_{1,∞}, and ‖·‖_F, defined as follows:

  ‖X‖ = λ_max(X),  ‖X‖_{1,∞} = max_{1≤i≤n} Σ_{j=1}^n |x_ij|,  ‖X‖_F = ( Σ_{i,j} x_ij² )^{1/2}.

Note that ‖X‖_{1,∞} is consistent with the L∞-norm of vectors.

2 Algorithm

In this section, we provide the algorithmic details of TranSync. The iterative scheme (2) requires an initial solution x^(0), an initial truncation parameter δ₀, and a stopping condition. The initial solution can be determined by solving (2) with w_ij = 1. We set the initial truncation parameter δ₀ = max_{(i,j)∈E} |t_ij − (x_i^(0) − x_j^(0))|, so that the edge with the biggest residual is removed. We stop TranSync either after the maximum number of iterations is reached or when the truncated graph becomes disconnected. Algorithm 1 provides the pseudocode of TranSync.

Algorithm 1 TranSync(c, k_max)
  1. x^(−1) ← 0, δ_{−1} ← ∞.
  for k = 0, 1, ..., k_max do
    2. Obtain the truncated graph G^(k) using x^(k−1) and δ_{k−1}.
    3. Break if G^(k) is disconnected.
    if G^(k) is non-bipartite then
      4. Solve (2) using (3) to obtain x^(k).
    else
      5. Solve (2) using conjugate gradient to obtain x^(k).
    end if
    6. δ_k = min( max_{(i,j)∈E} |t_ij − (x_i^(k) − x_j^(k))|, c·δ_{k−1} ).
  end for
  Output: x^(k).

Clearly, the performance of TranSync is driven by the efficiency of solving (2) at each iteration. TranSync takes an iterative approach, in which we utilize the warm start x^(k−1) provided by the solution obtained at the previous iteration. When the truncated graph is non-bipartite, we find that a simple weighted-average scheme delivers satisfactory computational efficiency. Specifically, it generates a series of vectors x^{k,0} = x^(k−1), x^{k,1}, ..., x^{k,n_max} via the following recursion:

  x̃_i^{k,l+1} = ( Σ_{j∈N(i)} w_ij (x_j^{k,l} + t_ij) ) / ( Σ_{j∈N(i)} w_ij ),  1 ≤ i ≤ n,
  x_i^{k,l+1} = x̃_i^{k,l+1} − √d_i ( Σ_{i′=1}^n √d_{i′} x̃_{i′}^{k,l+1} ) / ( Σ_{i′=1}^n d_{i′} ),  (3)

which may be written in the following matrix form:

  x^{k,l+1} = ( I_n − (1/n̄) D^{1/2} 1 1^T D^{1/2} ) D^{−1} ( A x^{k,l} + B t^(k) ),  where n̄ = Σ_i d_i.  (4)

Remark 2.1. The normalization constraint in (3), i.e., Σ_i √d_i x_i = 0, only changes the solution to (2) by a constant shift. We utilize this modification for the purpose of obtaining a concise convergence property of the iterative scheme detailed below.

The following proposition states that (3) admits a geometric convergence rate:

Proposition 2.1. x^{k,l} converges geometrically to the shifted solution x_shift^(k). Specifically, for all l ≥ 0,

  ‖D^{1/2}(x^{k,l} − x_shift^(k))‖ ≤ (1 − λ)^l ‖D^{1/2}(x^{k,0} − x_shift^(k))‖,  x_shift^(k) = x^(k) − ( Σ_i √d_i x_i^(k) / Σ_i d_i ) 1,

where λ < 1 is the spectral gap of the normalized graph Laplacian of the truncated graph.

Proof. The proof is straightforward from (4) and is omitted for brevity.

Since the intermediate solutions are mainly used to prune outlier observations, it is clear that O(log(n)) iterations of (4), which induce an O(1/n) error in solving (2), are sufficient. The complexity of checking whether the graph is non-bipartite is O(|E|). The total running time for solving (2) is thus O(|E| log(n)), and the total running time of TranSync is O(|E| log(n) k_max), making it scalable to large-scale datasets.
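One inner solve of TranSync, i.e., recursion (3) with the √d re-centering, can be sketched as follows. Undirected edges carry t_ij ≈ x_i − x_j, so a neighbor j contributes x_j + t_ij to node i and x_i − t_ij to node j; variable names and the data layout are ours.

```python
import numpy as np

def transync_inner(x, edges, t, w, n_steps=50):
    """Approximately solve (2) by the weighted-average recursion (3),
    followed by the projection enforcing sum_i sqrt(d_i) x_i = 0."""
    n = len(x)
    for _ in range(n_steps):
        num, den = np.zeros(n), np.zeros(n)
        for (i, j), tij, wij in zip(edges, t, w):
            num[i] += wij * (x[j] + tij); den[i] += wij
            num[j] += wij * (x[i] - tij); den[j] += wij
        x = num / np.maximum(den, 1e-12)           # local weighted averages
        sqrt_d = np.sqrt(den)                      # degrees of truncated graph
        x = x - sqrt_d * (sqrt_d @ x) / den.sum()  # re-center as in (3)
    return x
```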
3 Analysis of TranSync

In this section, we provide exact recovery conditions for TranSync. We begin by describing an exact recovery condition under a deterministic noise model in Section 3.1. We then study an exact recovery condition demonstrating that TranSync can handle biased noisy samples in Section 3.2.

3.1 Deterministic Exact Recovery Condition

We consider the following deterministic noise model. We are given the ground-truth locations x^gt. For each correct measurement t_ij, (i, j) ∈ G_good, we have |t_ij − (x_i^gt − x_j^gt)| ≤ ε for a threshold ε. In contrast, each incorrect measurement t_ij, (i, j) ∈ G \ G_good, can take any real value. The following theorem provides an exact recovery condition under this noise model.

Theorem 3.1. Let d_bad be the maximum number of incorrect measurements per node. Define

  ε̄ = max_k L⁺_{G,kk} + max_{i≠j} L⁺_{G,ij} + (n/2) max_{i,j,k pairwisely different} |L⁺_{G,ki} − L⁺_{G,kj}|,
  h = ε̄ d_bad,  p = d_bad ε̄ / (1 − 2h),  q = (n − d_bad) ε̄ / (1 − 2h).

Suppose h < 1/6 (or p < 1/4). Then, starting from any initial solution x^(0), for any large enough initial truncation threshold δ₀ ≥ 2‖x^(0)‖_∞ + ε and iterative step size c satisfying 4p < c < 1, we have

  ‖x^(k) − x^gt‖_∞ ≤ qε + 2p c^{k−1} δ₀,  where k ≤ ⌈ −log( (c − 4p)δ₀ / ((1 + 2q)ε) ) / log c ⌉ + 1.

Moreover, we eventually reach an x^(k̄) such that

  ‖x^(k̄) − x^gt‖_∞ ≤ ( (2p + cq) / (c − 4p) ) ε,

which is independent of the initial solution x^(0), the initial truncation threshold δ₀, and the values of all wrong measurements t_{G\G_good}.

Proof: See Appendix A.

Theorem 3.1 essentially says that TranSync can tolerate a constant fraction of arbitrary noise. To understand how strong this condition is, consider the case where G = K_n is a clique and the nodes are divided into two clusters of equal size, where all measurements within each cluster are correct. Among the measurements between the two clusters, half are correct and the other half are wrong; in this case, 25% of all measurements are wrong. However, we cannot recover the original x^gt: the wrong measurements can be set in a consistent manner, i.e., t_ij = x_i^gt − x_j^gt + b for a constant b ≠ 0, leading to two competing clusters (one correct and the other incorrect) of equal strength. Hence, in the worst case, any algorithm can tolerate at most 25% of the measurements being wrong.

We now use Theorem 3.1 to analyze the case where the observation graph is a clique. In this case, it is clear that ε̄ = 1/n and p = d_bad/n, i.e., the fraction of wrong measurements out of all measurements. Hence, in the clique case, we have shown that TranSync converges to a neighborhood of the ground truth from any initial solution if the fraction of wrong measurements is less than 1/6 (i.e., 2/3 of the upper bound).

3.2 Biased Random Noisy Model

We proceed to provide an exact recovery condition for TranSync under a biased random noise model. To simplify the discussion, we assume the observation graph G = K_n is a clique; our analysis framework, however, can be extended to handle arbitrary graphs. Assume ε ≪ a + b. We consider the following noise model, in which the noisy measurements are independent and follow

  t_ij = x_i^gt − x_j^gt + U[−ε, ε]  with probability p,
  t_ij = x_i^gt − x_j^gt + U[−a, b]  with probability 1 − p.  (5)

It is easy to check that the linear programming formulation is unable to recover the ground-truth solution if (b/(a+b))(1 − p) > 1/2.
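Simulating the biased model (5) is straightforward; the sketch below (names ours) draws one measurement per edge and is the noise generator one would use to reproduce the synthetic experiments of Section 4.

```python
import numpy as np

def sample_biased(x_gt, edges, p, eps, a, b, seed=0):
    """Draw t_ij from model (5): with probability p the noise is U[-eps, eps];
    otherwise it is U[-a, b], which has non-zero mean whenever a != b."""
    rng = np.random.default_rng(seed)
    diffs = np.array([x_gt[i] - x_gt[j] for i, j in edges])
    good = rng.random(len(edges)) < p
    noise = np.where(good,
                     rng.uniform(-eps, eps, len(edges)),
                     rng.uniform(-a, b, len(edges)))
    return diffs + noise
```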
The following theorem shows that TranSync instead achieves a sub-constant recovery rate.

Theorem 3.2. There exists a constant c such that if p > c/√(log(n)), then w.h.p.,

  ‖x^(k) − x^gt‖_∞ ≤ (1 − p/2)^k (a + b),  ∀ k = 0, ..., ⌊ −log( (a + b)/(2ε) ) / log(1 − p/2) ⌋.

The major difficulty in proving Theorem 3.2 is that x^(k) depends on t^(k), making it hard to control x^(k) using existing concentration bounds. We address this issue by showing that the solutions x^(k), k = 0, ..., stay close to the segment between x^gt and x^gt + (1 − p)((a + b)/2)·1. Specifically, for points on this segment, we can leverage the independence of the t_ij to derive the following concentration bound for one step of TranSync:

Lemma 3.1. Consider a fixed observation graph G. Let r = (a + b)p / ((a + b)p + 2(1 − p)ε), and let d_min be the minimum degree of G. Suppose d_min = Ω(log(n)) and p + r(1 − p) = Ω(log²(n)/d_min). Consider an initial point x^(0) (independent of the t_ij) and a truncation threshold δ such that −a + δ ≤ min_i x_i^(0) ≤ max_i x_i^(0) ≤ b − δ. Then w.h.p., one step of TranSync outputs x^(1) satisfying

  ‖x^(1) − ((1 − r) x^(0) + r x^gt)‖_∞ = O( √( log(n) / ((p + r(1 − p)) λ₂(L_G)) ) · max( ‖x^(0)‖_{d,∞}, r² ) ) + O( ε², r √( log(n) / ((p + r(1 − p)) d_min) ) ),

where ‖x^(0)‖_{d,∞} = max_{1≤i,j≤n} |x_i^(0) − x_j^(0)| and L_G is the normalized graph Laplacian of G.

Proof: See Appendix B.1.

Remark 3.1. Note that when G is a clique or a graph sampled from the standard Erdős–Rényi model G(n, q), then O( √( log(n) / ((p + r(1 − p)) λ₂(L_G)) ) ) = O( √( log(n) / ((p + r(1 − p)) n) ) ).

To prove Theorem 3.2, we show that for k̄ = O(log^{3/4}(n)), the L∞ distance between x^(k) and the line segment between x^gt and x^gt + (1 − p)((a + b)/2)·1 only grows geometrically, and this distance is of order o(p). On the other hand, (1 − p/2)^k̄ = o(p). Hence, for k ≥ k̄, that distance decays with a geometric rate smaller than c. The details are deferred to Appendix B.2.

Improving the recovery rate via sample splitting. Lemma 3.1 enables us to apply standard sampling tricks to improve the recovery rate; to simplify the discussion, we assume ε is sufficiently small. First of all, it is clear that if re-sampling is allowed at each iteration, then TranSync admits a recovery rate of O(log(n)/√d_min). When re-sampling is not allowed, we can improve the recovery rate by dividing the observations into O(√d_min/log(n)) independent sets and applying one set of observations at each iteration; in this case, the recovery rate is O(log²(n)/√n). These recovery rates suggest that the rate in Theorem 3.2 could potentially be improved. Nevertheless, Theorem 3.2 still shows that TranSync can tolerate a sub-constant recovery rate, which is superior to the linear programming formulation.

4 Experimental Results

In this section, we provide a detailed experimental evaluation of the proposed translation synchronization (TranSync) method. We describe the experimental setup in Section 4.1, and then perform evaluations on synthetic and real datasets in Sections 4.2 and 4.3, respectively.

4.1 Experimental Setup

Datasets. We employ both synthetic and real datasets for evaluation. The synthetic data is generated following the noise model described in (5); in the following, we encode this model as M(G, p, ε), where G is the observation graph, p is the fraction of correct measurements, and ε describes the interval of the correct measurements. Besides the synthetic data, we also consider two real datasets, coming from the applications of joint alignment of point clouds and global ranking from relative rankings.
Baseline comparison. We choose coordinate descent for solving (1) as the baseline algorithm. Specifically, denote the solution for x_i, 1 ≤ i ≤ n, at iteration k by x_i^(k). Then the {x_i^(k)} are given by the following recursion:

  x_i^(k) = argmin_{x_i} Σ_{j∈N(i)} |x_i − (x_j^(k−1) + t_ij)| = median_{j∈N(i)} { x_j^(k−1) + t_ij },  1 ≤ i ≤ n,  k = 1, 2, ...  (6)

We use the same initial starting point as TranSync. We also tested interior point methods, but all the datasets used in our experiments are beyond their reach.
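The median update (6) is a one-liner per coordinate; a sketch (adjacency representation ours) is:

```python
import numpy as np

def coordinate_descent_sweep(x, neighbors):
    """One sweep of recursion (6): each x_i moves to the median of the
    positions implied by its neighbors, x_j + t_ij (with t_ij ~ x_i - x_j)."""
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        # nbrs is a list of (j, t_ij) pairs incident to node i
        x_new[i] = np.median([x[j] + tij for j, tij in nbrs])
    return x_new
```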
The expected degree of each vertex is 60. ? Gsi (sparse, irregular): The fourth graph is generated in a similar fashion as the second graph, except that the number of nodes n = 20K, and the connecting probability between a i?1 , 1 ? i ? n are scalar values pair of vertices is p = 0.1si sj , where si = 0.07 + 0.21 n?1 associated the vertices. The expected degree of each vertex is about 60. For this experiment, instead of using kmax as stopping condition as in Algorithm 1, we stop when we observe ?k < ?min . Here ?min does not need to be close to ?. In fact, we choose ?min = 0.05, 0.1 for ? = 0.01, 0.04, respectively. We also claim that if a small validation set (with size significantly less than n) of correct observations is available, our performance could be further improved. As illustrated in Table 1, TranSync dominates coordinate descent in terms of both accuracy and prediction. In particular, TranSync is significantly better than coordinate descent on dense graphs in terms of accuracy. In particular, on dense but irregular graphs, coordinate descent did not converge at all when p = 0.8. The main advantage of TranSync on sparse graphs is the computational cost, although the accuracy is still considerably better than coordinate descent. 4.3 Experimental Evaluation on Real Datasets Translation synchronization for joint alignment of point clouds. In the first application, we consider the problem of joint alignment of point clouds from pair-wise alignment [10]. To this end, we utilize the Patriot Circle Lidar dataset1 . We uniformly subsampled the dataset to 6K scans. We applied Super4PCS [19] to match each scan to 300 randomly selected scans, where each match returns a pair-wise rigid transformation and a score. We then pick the top-30 matches for each scan, this results in a graph with 140K edges. To create the input data for translation synchronization, we run the state-of-the-art rotation synchronization algorithm described in [2] to estimate a global pose Ri for each scan. The pair-wise measurement tij from node i to node j is then given by RiT tlocal ij , where tlocal is the translation vector obtained in pair-wise matching. The average outlier ratio of the ij pair-wise matches per node is 35%, which is relatively high since the observation graph is fairly sparse. Since tij is a 3D vector, we run TranSync three times, one for each coordinate. As illustrated in Figure 1, TranSync is able to recover the the global shape of the underlying scanning trajectory. In contrast, coordinate descent completely fails on this dataset. 1 http://masc.cs.gmu.edu/wiki/MapGMU 7 Figure 1: The application of TranSync in joint alignment of 6K Lidar scans around a city block. (a) Snapshot of the underlying scanning trajectory. (b) Reconstruction using TranSync (c) Reconstruction using Coordinate Descent. Movie Shakespeare in Love Witness October Sky The Waterboy Interview with the Vampire Dune MRQE 1(85) 2(77) 3(76) 4(66) 5(65) 6(44) Hodge-Diff. 
Table 2: Global ranking (score) of six selected movies via different methods: MRQE, HodgeRank [16] with 1) arithmetic-mean score differences (Hodge-Diff.), 2) geometric-mean score ratios (Hodge-Ratio), and 3) binary comparisons (Hodge-Binary), and the initial and final predictions of TranSync. TranSync produces the result most consistent with MRQE.

| Movie | MRQE | Hodge-Diff. | Hodge-Ratio | Hodge-Binary | TS-Init | TS-Final |
|-------|------|-------------|-------------|--------------|---------|----------|
| Shakespeare in Love | 1(85) | 1(0.247) | 2(0.078) | 1(0.138) | 1(0.135) | 1(0.219) |
| Witness | 2(77) | 2(0.217) | 1(0.088) | 3(0.107) | 3(0.076) | 2(0.095) |
| October Sky | 3(76) | 3(0.213) | 3(0.078) | 2(0.111) | 2(0.092) | 3(0.0714) |
| The Waterboy | 4(66) | 6(-0.464) | 6(-0.162) | 6(-0.252) | 5(-0.134) | 4(-0.112) |
| Interview with the Vampire | 5(65) | 4(-0.031) | 4(-0.012) | 4(-0.120) | 4(-0.098) | 5(-0.140) |
| Dune | 6(44) | 5(-0.183) | 5(-0.069) | 5(-0.092) | 6(-0.216) | 6(-0.281) |

Ranking from relative comparisons. In the second application, we apply TranSync to predict global rankings of Netflix movies from the relative comparisons provided by users. The Netflix dataset contains 17070 movies that were rated between October 1998 and December 2005. We adapt the procedure described in [16] to generate the input data. Specifically, for each pair of movies, we average the relative ratings from the same users within the same month, and we only keep a relative measurement if we collect more than 10 such relative ratings. We then apply TranSync to predict the global rankings of all the movies. We report the initial prediction obtained by the first step of TranSync (i.e., with all relative comparisons used) and the final prediction suggested by TranSync (i.e., after removing inconsistent relative comparisons). Table 2 compares TranSync with HodgeRank [16] on six representative movies studied in [16]. The experimental results show that both predictions appear to be more consistent with MRQE² (the largest online directory of movie reviews on the internet) than HodgeRank [16] and its variants, which were applied to these six movies in isolation. Moreover, the final prediction is superior to the initial prediction. These observations indicate two key advantages of TranSync: scalability to large-scale datasets and robustness to noisy relative comparisons.

²http://www.mrqe.com

5 Conclusions and Future Work

In this paper, we have introduced an iterative algorithm for solving the translation synchronization problem, which estimates the global locations of objects from noisy measurements of relative locations. We have justified the performance of our approach both experimentally and theoretically, under both deterministic and randomized conditions. Our approach is more scalable and accurate than the standard linear programming formulation; in particular, when the pair-wise measurements are biased, our approach can still achieve a sub-constant recovery rate, while the linear programming approach can tolerate no more than 50% of the measurements being biased.

In the future, we plan to extend this iterative scheme to other synchronization problems, such as synchronizing rotations and point-based maps. Moreover, it would be interesting to study variants of the iterative scheme, such as re-weighted least squares. We would also like to close the gap between the current recovery rate and the lower bound, which exhibits a poly-log factor; this requires developing new tools for analyzing the iterative algorithm.

Acknowledgement. Qixing Huang would like to acknowledge support for this research from NSF DMS-1700234. Chandrajit Bajaj would like to acknowledge support for this research from the National Institutes of Health grants #R41 GM116300 and #R01 GM117594.

References

[1] F. Arrigoni, A. Fusiello, B. Rossi, and P. Fragneto.
Robust rotation synchronization via low-rank and sparse matrix decomposition. CoRR, abs/1505.06079, 2015.
[2] A. Chatterjee and V. M. Govindu. Efficient and robust large-scale rotation averaging. In 2013 IEEE International Conference on Computer Vision (ICCV). IEEE, 2013.
[3] Y. Chen and E. J. Candès. The projected power method: an efficient algorithm for joint alignment from pairwise differences. CoRR, abs/1609.05820, 2016.
[4] Y. Chen, L. J. Guibas, and Q. Huang. Near-optimal joint object matching via convex relaxation. 2014.
[5] T. S. Cho, S. Avidan, and W. T. Freeman. A probabilistic image jigsaw puzzle solver. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
[6] F. R. K. Chung and P. Horn. The spectral gap of a random subgraph of a graph. Internet Mathematics, 4(2):225–244, 2007.
[7] D. Crandall, A. Owens, N. Snavely, and D. Huttenlocher. Discrete-continuous optimization for large-scale structure from motion. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, CVPR '11, pages 3001–3008, 2011.
[8] R. Cruz, K. Fernandes, J. S. Cardoso, and J. F. P. da Costa. Tackling class imbalance with ranking. In 2016 International Joint Conference on Neural Networks, IJCNN 2016, Vancouver, BC, Canada, July 24-29, 2016, pages 2182–2187, 2016.
[9] I. Daubechies, R. Devore, M. Fornasier, and C. S. Güntürk. Iteratively reweighted least squares minimization for sparse recovery. Comm. Pure Appl. Math.
[10] N. Gelfand, N. J. Mitra, L. J. Guibas, and H. Pottmann. Robust global registration. In Proceedings of the Third Eurographics Symposium on Geometry Processing, SGP '05, Aire-la-Ville, Switzerland, 2005. Eurographics Association.
[11] D. Goldberg, C. Malon, and M. Bern. A global approach to automatic solution of jigsaw puzzles. In Proceedings of the Eighteenth Annual Symposium on Computational Geometry, SCG '02, pages 82–87, 2002.
[12] R. M. Heiberger and R. A. Becker. Design of an S function for robust regression using iteratively reweighted least squares. pages 112–116, 1992.
[13] Q. Huang and L. Guibas. Consistent shape maps via semidefinite programming. Computer Graphics Forum, Proc. Eurographics Symposium on Geometry Processing (SGP), 32(5):177–186, 2013.
[14] Q.-X. Huang, S. Flöry, N. Gelfand, M. Hofer, and H. Pottmann. Reassembling fractured objects by geometric matching. In ACM SIGGRAPH 2006 Papers, SIGGRAPH '06, pages 569–578, 2006.
[15] D. Huber. Automatic Three-dimensional Modeling from Reality. PhD thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, December 2002.
[16] X. Jiang, L. Lim, Y. Yao, and Y. Ye. Statistical ranking and combinatorial Hodge theory. Math. Program., 127(1):203–244, 2011.
[17] V. G. Kim, W. Li, N. Mitra, S. DiVerdi, and T. Funkhouser. Exploring collections of 3D models using fuzzy correspondences. Transactions on Graphics (Proc. of SIGGRAPH 2012), 31(4), Aug. 2012.
[18] W. Marande and G. Burger. Mitochondrial DNA as a genomic jigsaw puzzle. Science, 318:5849, July 2007.
[19] N. Mellado, D. Aiger, and N. J. Mitra. Super 4PCS: fast global pointcloud registration via smart indexing. Computer Graphics Forum, 33(5):205–215, 2014.
[20] A. Nguyen, M. Ben-Chen, K. Welnicka, Y. Ye, and L. Guibas. An optimization approach to improving collections of shape maps. In Eurographics Symposium on Geometry Processing (SGP), pages 1481–1491, 2011.
[21] D. Pachauri, R. Kondor, G. Sargur, and V. Singh.
Permutation diffusion maps (PDM) with application to the image association problem in computer vision. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 541–549. Curran Associates, Inc., 2014.
[22] D. Pachauri, R. Kondor, and V. Singh. Solving the multi-way matching problem by permutation synchronization. In C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1860–1868. Curran Associates, Inc., 2013.
[23] R. Roberts, S. N. Sinha, R. Szeliski, and D. Steedly. Structure from motion for scenes with large duplicate structures. pages 3137–3144. Computer Vision and Pattern Recognition, June 2011.
[24] Y. Shen, Q. Huang, N. Srebro, and S. Sanghavi. Normalized spectral map synchronization. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4925–4933, 2016.
[25] N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: exploring photo collections in 3D. ACM Trans. Graph., 25(3):835–846, July 2006.
[26] L. Wang and A. Singer. Exact and stable recovery of rotations for robust synchronization. CoRR, abs/1211.2441, 2012.
[27] C. Zach, M. Klopschitz, and M. Pollefeys. Disambiguating visual relations using loop constraints. In CVPR, pages 1426–1433, 2010.
[28] X. Zhou, M. Zhu, and K. Daniilidis. Multi-image matching via fast alternating minimization. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 4032–4040, 2015.
From which world is your graph?

Cheng Li (College of William & Mary), Felix M. F. Wong (Independent Researcher*), Zhenming Liu (College of William & Mary), Varun Kanade (University of Oxford)
* Currently at Google.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Discovering statistical structure from links is a fundamental problem in the analysis of social networks. Choosing a misspecified model, or equivalently, an incorrect inference algorithm will result in an invalid analysis or even falsely uncover patterns that are in fact artifacts of the model. This work focuses on unifying two of the most widely used link-formation models: the stochastic blockmodel (SBM) and the small-world (or latent space) model (SWM). Integrating techniques from kernel learning, spectral graph theory, and nonlinear dimensionality reduction, we develop the first statistically sound polynomial-time algorithm to discover latent patterns in sparse graphs for both models. When the network comes from an SBM, the algorithm outputs a block structure. When it is from an SWM, the algorithm outputs estimates of each node's latent position.

1 Introduction

Discovering statistical structures from links is a fundamental problem in the analysis of social networks. Connections between entities are typically formed based on underlying feature-based similarities; however, these features themselves are partially or entirely hidden. A question of great interest is to what extent these latent features can be inferred from the observable links in the network. This work focuses on the so-called assortative setting, the principle that similar individuals are more likely to interact with each other. Most stochastic models of social networks rely on this assumption, including the two most famous ones: the stochastic blockmodel [1] and the small-world model [2, 3], described below.

Stochastic Blockmodel (SBM). In a stochastic blockmodel [4, 5, 6, 7, 8, 9, 10, 11, 12, 13], nodes are grouped into disjoint "communities" and links are added randomly between nodes, with a higher probability if nodes are in the same community. In its simplest incarnation, an edge is added between nodes within the same community with probability p, and between nodes in different communities with probability q, for p > q. Despite arguably naïve modelling choices, such as the independence of edges, algorithms designed with the SBM work well in practice [14, 15].

Small-World Model (SWM). In a small-world model, each node is associated with a latent variable x_i, e.g., the geographic location of an individual. The probability that there is a link between two nodes is proportional to an inverse polynomial of some notion of distance, dist(x_i, x_j), between them. The presence of a small number of "long-range" connections is essential to some of the most intriguing properties of these networks, such as small diameter and fast decentralized routing algorithms [3]. In general, the latent position may reflect geographic location as well as more abstract concepts, e.g., position on a political ideology spectrum.

The Inference Problem. Without observing the latent positions, or knowing which model generates the underlying graph, the adjacency matrix of a social graph typically looks like the one shown in Fig. 5(a) (App. A.1). However, if the model generating the graph is known, it is then possible to run a suitable "clustering algorithm" [14, 16] that reveals the hidden structure.
When the vertices are ordered suitably, the SBM's adjacency matrix looks like the one shown in Fig. 5(b) (App. A.1) and that of the SWM looks like the one shown in Fig. 5(c) (App. A.1). Existing algorithms typically depend on knowing the "true" model and are tailored to graphs generated according to one of these models, e.g., [14, 16, 17, 18].

Our Contributions. We consider a latent space model that is general enough to include both these models as special cases. In our model, an edge is added between two nodes with a probability that is a decreasing function of the distance between their latent positions. This model is a fairly natural one, and it is quite likely that a variant has already been studied; however, to the best of our knowledge there is no known statistically sound and computationally efficient algorithm for latent-position inference on a model as general as the one we consider.

1. A unified model. We propose a model that is a natural generalization of both the stochastic blockmodel and the small-world model and that captures some of the key properties of real-world social networks, such as small out-degrees for ordinary users and large in-degrees for celebrities. We focus on a simplified model where we have a modest-degree graph only on "celebrities"; the full paper contains an analysis of the more realistic model using somewhat technical machinery [19].

2. A provable algorithm. We present statistically sound and polynomial-time algorithms for inferring latent positions in our model(s). Our algorithm approximately infers the latent positions of almost all "celebrities" (a (1 − o(1))-fraction), and approximately infers a constant fraction of the latent positions of ordinary users. We show that it is statistically impossible to err on at most an o(1) fraction of ordinary users by using standard lower bound arguments.

3. Proof-of-concept experiments. We report several experiments on synthetic and real-world data collected on Twitter from Oct 1 and Nov 30, 2016. Our experiments demonstrate that our model and inference algorithms perform well on real-world data and reveal interesting structures in networks.

Additional Related Work. We briefly review the relevant published literature.
1. Graphon & latent-space techniques. Studies using graphons and latent-space models have focused on the statistical properties of the estimators [20, 21, 22, 23, 24, 25, 26, 27, 28], with limited attention paid to computational efficiency. The "USVT" technique developed recently [29] estimates the kernel well when the graph is dense. Xu et al. [30] consider a polynomial-time algorithm for a sparse model similar to ours, but focus on edge classification rather than latent position estimation.
2. Correspondence analysis in political science. Estimating the ideology scores of politicians is an important research topic in political science [31, 32, 33, 34, 35, 36, 17, 18]. High-accuracy heuristics developed to analyze dense graphs include [17, 18].

Organization. Section 2 describes background, our model and results. Section 3 describes our algorithm and gives an overview of its analysis. Section 4 contains the experiments.

2 Preliminaries and Summary of Results

Basic Notation. We use c_0, c_1, etc. to denote constants, which may be different in each case. We use whp to denote "with high probability", by which we mean with probability larger than 1 − 1/n^c for any c. All notation is summarized in Appendix B for quick reference.

Stochastic Blockmodel. Let n be the number of nodes in the graph, with each node assigned a label from the set {1, ..., k} uniformly at random. An edge is added between two nodes with the same label with probability p and between nodes with different labels with probability q, with p > q (assortative case). In this work, we focus on the k = 2 case, where p, q = Ω((log n)^c / n) and the community sizes are exactly the same. (Many studies of the regimes where recovery is possible have been published [37, 9, 5, 8].) Let A be the adjacency matrix of the realized graph and let M = E[A] = [ P Q ; Q P ], where P, Q ∈ R^{(n/2)×(n/2)} with every entry equal to p and q, respectively. We next explain the inference algorithm, which uses two key observations.

1. Spectral Properties of M. M has rank 2 and the non-trivial eigenvectors are (1, ..., 1)^T and (1, ..., 1, −1, ..., −1)^T, corresponding to eigenvalues n(p + q)/2 and n(p − q)/2, respectively. If one has access to M, the hidden structure in the graph is revealed merely by reading off the second eigenvector.

2. Low Discrepancy between A and M. Provided the average degree n(p + q)/2 and the gap p − q are large enough, the spectrum and eigenspaces of the matrices A and M can be shown to be close using matrix concentration inequalities and the Davis–Kahan theorem [38, 39]. Thus, it is sufficient to look at the projection of the columns of A onto the top two eigenvectors of A to identify the hidden latent structure.
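To make the two observations concrete, here is a minimal numerical sketch (ours, not from the paper; the values of n, p, q are illustrative) that recovers the two communities from the sign pattern of the second eigenvector of A:

```python
import numpy as np

# Sketch of the SBM spectral observations: sample a 2-block SBM and read the
# communities off the eigenvector of the second-largest eigenvalue of A.
rng = np.random.default_rng(0)
n, p, q = 1000, 0.10, 0.02                 # illustrative values, p > q
labels = np.repeat([0, 1], n // 2)         # two equal-sized communities

# E[A] has the block structure [P Q; Q P]
probs = np.where(labels[:, None] == labels[None, :], p, q)
A = (rng.random((n, n)) < probs).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # symmetric, no self-loops

eigvals, eigvecs = np.linalg.eigh(A)       # eigenvalues in ascending order
v2 = eigvecs[:, -2]                        # second-largest eigenvalue's vector
guess = (v2 > 0).astype(int)
acc = max(np.mean(guess == labels), np.mean(guess != labels))
print(f"fraction of nodes recovered: {acc:.3f}")
```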
Small-World Model (SWM). In a 1-dim. SWM, each node v_i is associated with an independent latent variable x_i ∈ [0, 1] that is drawn from the uniform distribution on [0, 1]. The probability of a link between two nodes is Pr[{v_i, v_j} ∈ E] ∝ 1/(|x_i − x_j|^α + c_0), where α > 1 is a hyper-parameter. The inference algorithm for small-world models uses different ideas. Each edge in the graph is considered as either "short-range" or "long-range." Short-range edges are those between nodes that are nearby in latent space, while long-range edges have end-points that are far away in latent space. After removing the long-range edges, the shortest path distance between two nodes scales proportionally to the corresponding latent space distance (see Fig. 6 in App. A.2). After obtaining estimates for pairwise distances, standard building blocks are used to find the latent positions x_i [40]. The key observation used to remove the long-range edges is: an edge {v_i, v_j} is a short-range edge if and only if v_i and v_j share many neighbors.
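A minimal sketch of this filtering idea (ours; the common-neighbor threshold and the edge-probability normalization below are ad hoc choices for illustration, not the paper's):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Sketch of small-world inference: drop edges whose endpoints share few
# neighbors (treating them as long-range), then use shortest-path distance
# on the remaining graph as a proxy for latent distance.
rng = np.random.default_rng(1)
n, alpha = 800, 2.0
x = rng.uniform(0.0, 1.0, n)
gap = np.abs(x[:, None] - x[None, :])
P = np.minimum(1.0, 2e-4 / (gap ** alpha + 1e-4))  # ad hoc normalization
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T

common = A @ A                     # (i, j) entry: number of common neighbors
A_short = A * (common >= 3)        # ad hoc threshold for "short-range"
D = shortest_path(A_short, unweighted=True, directed=False)
# D[i, j] (where finite) scales roughly linearly with |x[i] - x[j]|
```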
A Unified Model. Both the SBM and the SWM are special cases of our unified latent space model. We begin by describing the full-fledged bipartite (heterogeneous) model, which is a better approximation of real-world networks but requires sophisticated algorithmic techniques (see [19] for a detailed analysis). Next, we present a simplified (homogeneous) model to explain the key ideas.

Bipartite Model. We use a latent-space model to characterize the stochastic interactions between users. Each individual is associated with a latent variable in [0, 1]. The bipartite graph model consists of two types of users: the left side of the graph, Y = {y_1, ..., y_m}, are the followers (ordinary users), and the right side, X = {x_1, ..., x_n}, are the influencers (celebrities). Both the y_i and the x_i are i.i.d. random variables from a distribution D. This assumption follows the convention of existing heterogeneous models [41, 42]. The probability that two individuals y_i and x_j interact is κ(y_i, x_j)/n, where κ : [0, 1] × [0, 1] → (0, 1] is a kernel function. Throughout this paper we assume that κ is a small-world kernel, i.e., κ(x, y) = c_0/(‖x − y‖^α + c_1) for some α > 1 and suitable constants c_0, c_1, and that m = Θ(n · polylog(n)). Let B ∈ R^{m×n} be a binary matrix such that B_{i,j} = 1 if and only if there is an edge between y_i and x_j. Our goal is to estimate {x_i}_{i∈[n]} based on B for suitably large n.

Simplified Model. The graph has only the node set X = {x_1, ..., x_n} of celebrity users. Each x_i is again an i.i.d. random variable from D. The probability that two users v_i and v_j interact is κ(x_i, x_j)/C(n). The denominator is a normalization term that controls the edge density of the graph. We assume C(n) = n/polylog(n), i.e., the average degree is polylog(n). Unlike the SWM, where the x_i are drawn uniformly from [0, 1], in the unified model D can be flexible. When D is the uniform distribution, the model is the standard SWM. When D has discrete support (e.g., x_i = 0 with prob. 1/2 and x_i = 1 otherwise), then the unified model reduces to the SBM. Our distribution-agnostic algorithm can automatically select the most suitable model from SBM and SWM, and infer the latent positions of (almost) all the nodes.

Bipartite vs. Simplified Model. The simplified model suffers from the following problem: if the average degree is O(1), then we err on estimating every individual's latent position with a constant probability (e.g., whp the graph is disconnected), but in practice we usually want a high prediction accuracy on the subset of nodes corresponding to high-profile users. Assuming that the average degree is ω(1) mismatches empirical social network data. Therefore, we use a bipartite model that introduces heterogeneity among nodes: by splitting the nodes into two classes, we achieve high estimation accuracy on the influencers, and the degree distribution more closely matches real-world data. For example, in most online social networks, nodes have O(1) average degree, and a small fraction of users (influencers) account for the production of almost all "trendy" content, while most users (followers) simply consume the content.

Additional Remarks on the Bipartite Model.
1. Algorithmic contribution. Our algorithm computes B^T B and then regularizes the product by shrinking the diagonal entries before carrying out spectral analysis (see the sketch below). Previous studies of the bipartite graph in similar settings [43, 44, 45] attempt to construct a regularized product using different heuristics. Our work presents the first theoretically sound regularization technique for spectral algorithms. In addition, some studies have suggested running SVD on B directly (e.g., [28]). We show that the (right) singular vectors of B do not converge to the eigenvectors of K (the matrix with entries κ(x_i, x_j)). Thus, it is necessary to take the product and use regularization.
2. Comparison to degree-corrected models (DCM). In a DCM, each node v_i is associated with a degree parameter D(v_i). Then we have Pr[{v_i, v_j} ∈ E] ∝ D(v_i) κ(x_i, x_j) D(v_j). The DCM implies that the subgraph induced by the highest-degree nodes is dense, which is inconsistent with real-world networks. There is a need for better tools to analyze the asymptotic behavior of such models and we leave this for future work (see, e.g., [41, 42]).
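A sketch of the regularized-product step from remark 1 above (the specific shrinkage here, zeroing the diagonal of B^T B, is one simple choice on our part; the paper's precise regularization is developed in [19]):

```python
import numpy as np

def regularized_product(B):
    """Form B^T B from the followers-by-celebrities biadjacency matrix B and
    shrink the diagonal before spectral analysis. The diagonal of B^T B
    counts each celebrity's followers and otherwise dominates the spectrum;
    zeroing it is one simple shrinkage (the exact choice is in [19])."""
    G = B.T @ B
    np.fill_diagonal(G, 0.0)
    return G
```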
Theoretical Results. Let F be the cdf of D. We say F and κ are well-conditioned if:
(1) F has finitely many points of discontinuity, i.e., the closure of the support of F can be expressed as the union of non-overlapping closed intervals I_1, I_2, ..., I_k for a finite number k.
(2) F is near-uniform, i.e., for any interval I that has non-empty overlap with F's support, ∫_I dF(x) ≥ c_0 |I| for some constant c_0.
(3) Decay condition: the eigenvalues of the integral operator based on κ and F decay sufficiently fast. We define the operator K by (Kf)(x) = ∫ κ(x, x′) f(x′) dF(x′) and let (λ_i)_{i≥1} denote the eigenvalues of K. Then it holds that λ_i = O(i^{−2.5}).

If we use the small-world kernel κ(x, y) = c_0/(|x − y|^α + c_1) and choose F to give rise to the SBM or the SWM, in each case the pair F and κ is well-conditioned, as described below. As the decay condition is slightly more involved, we comment upon it. The condition is a mild one. When F is uniformly distributed on [0, 1], it is equivalent to requiring κ to be twice differentiable, which is true for the small-world kernel. When F has a finite discrete support, there are only finitely many non-zero eigenvalues, i.e., this condition also holds. The decay condition holds in more general settings, e.g., when F is piecewise linear [46] (see [19]). Without the decay condition, we would require much stronger assumptions: either the graph is very dense or α ≫ 2. Neither of these assumptions is realistic, so effectively our algorithm fails to work. In practice, whether the decay condition is satisfied can be checked by making a log-log plot, and it has been observed that for several real-world networks the eigenvalues follow a power-law distribution [47].
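The log-log check just mentioned takes a few lines (a sketch of ours; matplotlib is used for the plot):

```python
import numpy as np
import matplotlib.pyplot as plt

def loglog_decay_check(A, rho, k=50):
    """Plot the top-k eigenvalue magnitudes of A / rho on log-log axes.
    An approximately straight line with slope <= -2.5 is consistent with
    the decay condition lambda_i = O(i^{-2.5})."""
    vals = np.sort(np.abs(np.linalg.eigvalsh(A / rho)))[::-1][:k]
    i = np.arange(1, len(vals) + 1)
    plt.loglog(i, vals, marker="o")
    plt.xlabel("i")
    plt.ylabel("|lambda_i|")
    plt.show()
```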
Next, we define the notion of latent position recovery for our algorithms.

Definition 2.1 ((ε₁, ε₂, ε₃)-Approximation Algorithm). Let the I_i, F, and K be defined as above, and let R_i = {x_j : x_j ∈ I_i}. An algorithm is called an (ε₁, ε₂, ε₃)-approximation algorithm if
1. It outputs a collection of disjoint points C_1, C_2, ..., C_k such that C_i ⊆ R_i, which correspond to subsets of reconstructed latent variables.
2. For each C_i, it produces a distance matrix D^{(i)}. Let G_i ⊆ C_i be such that for any i_j, i_k ∈ G_i,
   D^{(i)}_{i_j,i_k} ≤ |x_{i_j} − x_{i_k}| ≤ (1 + ε₁) D^{(i)}_{i_j,i_k} + ε₂.   (1)
3. |∪_i G_i| ≥ (1 − ε₃) n.
In bipartite graphs, Eq. (1) is required only for influencers. We do not attempt to optimize constants in this paper. We set ε₁ = o(1), ε₂ a small constant, and ε₃ = o(1).

Definition 2.1 allows two types of errors: the C_i are not required to form a partition, i.e., some nodes can be left out, and a small fraction of estimation errors is allowed in each C_i, e.g., if x_j = 0.9 but x̂_j = 0.2, then the j-th "row" in D^{(i)} is incorrect. To interpret the definition, consider the blockmodel with 2 communities. Condition 1 means that our algorithm will output two disjoint groups of points. Each group corresponds to one block. Condition 2 means that there are pairwise distance estimates within each group. Since the true distances for nodes within the same block are zero, our estimates must also be zero to satisfy Eq. (1). Condition 3 says that the proportion of misclassified nodes is ε₃ = o(1). We can also interpret the definition when we consider a small-world graph, in which case k = 1. The algorithm outputs pairwise distances for a subset C_1. We know that there is a sufficiently large G_1 ⊆ C_1 such that the pairwise distances are all correct in C_1. Our algorithm does not attempt to estimate the distance between C_i and C_j for i ≠ j. When the support contains multiple disjoint intervals, e.g., in the SBM case, it first pulls apart the nodes in different communities. Estimating the distance between intervals, given the output of our algorithm, is straightforward. Our main result is the following.

Theorem 2.2. Using the notation above, assume F and κ are well-conditioned, and C(n) and m/n are Ω(log^c n) for some suitably large c. The algorithm for the simplified model shown in Figure 1 and that for the bipartite model (which appears in [19]) give us a (1/log² n, ε, O(1/log n))-approximation algorithm w.h.p. for any constant ε. Furthermore, the distance estimates D^{(i)} for each C_i are constructed using the shortest path distance of an unweighted graph.

LATENT-INFERENCE(A)
  1  // Step 1. Estimate Φ.
  2  Φ̂ = SM-EST(A).
  3  // Step 2. Execute the isomap algorithm.
  4  D = ISOMAP-ALGO(Φ̂).
  5  // Step 3. Find latent variables.
  6  Run a line-embedding algorithm [48, 49].

ISOMAP-ALGO(Φ̂, ℓ)   (see Section 3.2)
  1  Execute S ← DENOISE(Φ̂).
  2  // S is a subset of [n].
  3  Build G = {S, E} s.t. {i, j} ∈ E iff
  4  |(Φ̂_d)_i − (Φ̂_d)_j| ≤ ℓ/log n (ℓ a constant).
  5  Compute D such that D(i, j) is the shortest
  6  path distance between i and j when i, j ∈ S.
  7  return D

SM-EST(A, t)
  1  [Ũ_A, S̃_A, Ṽ_A] = svd(A).
  2  Let λ_i be the i-th singular value of A.
  3  // Let t be a suitable parameter.
  4  d = DECIDETHRESHOLD(t, ρ(n)).
  5  S_A: diagonal matrix comprised of {λ_i}_{i≤d}.
  6  U_A, V_A: the singular vectors corresponding to S_A.
  7  Let Φ̂ = √(C(n)) U_A S_A^{1/2}.
  8  return Φ̂

DECIDETHRESHOLD(t, ρ(n))
  1  // This procedure decides d, the number of eigenvectors to keep.
  2  // t is a tunable parameter. See Proposition 3.1.
  3  d = arg max_d {λ_d(A/ρ(n)) − λ_{d+1}(A/ρ(n)) ≥ Δ},
  4  where Δ = 10 (t/ρ(n))^{24/59}.

Figure 1: Subroutines of our Latent Inference Algorithm.
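In numpy, SM-EST and DECIDETHRESHOLD translate roughly as follows (a sketch under our reconstruction of Figure 1; the constant 10 and the exponent 24/59 are taken from there):

```python
import numpy as np

def decide_threshold(sing_vals, t, rho):
    """DECIDETHRESHOLD: return the largest d whose spectral gap of A / rho
    is at least Delta = 10 * (t / rho) ** (24 / 59); default to 1."""
    delta = 10.0 * (t / rho) ** (24.0 / 59.0)
    s = np.asarray(sing_vals) / rho            # descending singular values
    idx = np.nonzero(s[:-1] - s[1:] >= delta)[0]
    return int(idx.max()) + 1 if idx.size else 1

def sm_est(A, t, C_n):
    """SM-EST: estimate the feature map as sqrt(C(n)) * U_A S_A^{1/2}."""
    U, s, _ = np.linalg.svd(A, hermitian=True)  # descending singular values
    rho = A.shape[0] / C_n                      # rho(n) = n / C(n)
    d = decide_threshold(s, t, rho)
    return np.sqrt(C_n) * U[:, :d] * np.sqrt(s[:d])
```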
Pairwise Estimation to Line-Embedding and High-Dimensional Generalization. Our algorithm builds estimates of pairwise latent distances and uses well-studied metric-embedding methods [48, 49] as black boxes to infer latent positions. Our inference algorithm can be generalized to d-dimensional space with d being a constant. But metric embedding into ℓ_p^d becomes increasingly difficult, e.g., when d = 2, the approximation ratio for embedding a graph is Θ(√n) [50].

3 Our Algorithms

As previously noted, the SBM and the SWM are special cases of our unified model and both require different algorithmic techniques. Given that, it is not surprising that our algorithm blends ingredients from both sets of techniques. Before proceeding, we review the basics of kernel learning.

Notation. Let A be the adjacency matrix of the observed graph (simplified model) and let ρ(n) ≜ n/C(n). Let K be the matrix with entries κ(x_i, x_j). Let Ũ_K S̃_K Ṽ_K^T (Ũ_A S̃_A Ṽ_A^T) be the SVD of K (A). Let d be a parameter to be chosen later. Let S_K (S_A) be a d × d diagonal matrix comprising the d largest eigenvalues of K (A), and let U_K (U_A) and V_K (V_A) be the corresponding singular vectors of K (A). Finally, let K̄ = U_K S_K V_K^T (Ā = U_A S_A V_A^T) be the low-rank approximation of K (A). Note that when a matrix is positive definite and symmetric, the SVD coincides with the eigen-decomposition; as a consequence U_K = V_K and U_A = V_A.

Kernel Learning. Define an integral operator K as (Kf)(x) = ∫ κ(x, x′) f(x′) dF(x′). Let ψ_1, ψ_2, ... be the eigenfunctions of the operator K and λ_1, λ_2, ... the corresponding eigenvalues, such that λ_1 ≥ λ_2 ≥ ··· and λ_i ≥ 0 for each i. Also let N_H be the number of eigenfunctions/eigenvalues of the operator, which is either finite or countably infinite. We recall some important properties of the operator K [51, 25]. For x ∈ [0, 1], define the feature map Φ(x) = (√λ_j ψ_j(x) : j = 1, 2, ...), so that ⟨Φ(x), Φ(x′)⟩ = κ(x, x′). We also consider a truncated feature map Φ_d(x) = (√λ_j ψ_j(x) : j = 1, 2, ..., d). Intuitively, if λ_j is sufficiently small for all large j, then the first d coordinates (i.e., Φ_d) already approximate the feature map well. Finally, let Φ_d(X) ∈ R^{n×d} be such that its (i, j)-th entry is √λ_j ψ_j(x_i). Let us further write (Φ_d(X))_{:,i} for the i-th column of Φ_d(X). Let Φ(X) = lim_{d→∞} Φ_d(X). When the context is clear, we shorten Φ_d(X) and Φ(X) to Φ_d and Φ, respectively. There are two main steps in our algorithm, which we explain in the following two subsections.

3.1 Estimation of Φ through K and A

The mapping Φ : [0, 1] → R^{N_H} is bijective, so a (reasonably) accurate estimate of Φ(x_i) can be used to recover x_i. Our main result is the design of a data-driven procedure to choose a suitable number of eigenvectors and eigenvalues of A to approximate Φ (see SM-EST(A) in Fig. 1).

Proposition 3.1. Let t be a tunable parameter such that t = o(ρ(n)) and t²/ρ(n) = ω(log n). Let d be chosen by DECIDETHRESHOLD(·). Let Φ̂ ∈ R^{n×N_H} be such that its first d coordinates are equal to √(C(n)) U_A S_A^{1/2}, and its remaining entries are 0. If ρ(n) = ω(log n) and K (F and κ) is well-conditioned, then with high probability:

  ‖Φ̂ − Φ‖_F = O(√n (t/ρ(n))^{2/29}).   (2)

Specifically, by letting t = ρ^{2/3}(n), we have ‖Φ̂ − Φ‖_F = O(√n ρ^{−2/87}(n)). We remark that our result is stronger than an analogous result for sparse graphs in [25], as our estimate is close to Φ rather than to the truncated Φ_d.

Remark on the Eigengap. In our analysis, there are three groups of eigenvalues: those of the operator K, those of the matrix K, and those of A. They are on different scales: the operator satisfies λ_i(K) ≤ 1 (resulting from the fact that κ(x, y) ≤ 1 for all x and y), and λ_i(A/ρ(n)) ≈ λ_i(K/n) ≈ λ_i of the operator if n and ρ(n) are sufficiently large. Thus, the operator's λ_d is independent of n for fixed d and should be treated as Θ(1). Also, δ_d ≜ λ_d − λ_{d+1} → 0 as d → ∞. Since the procedure of choosing d depends on C(n) (and thus also on n), δ_d depends on n and can be bounded by a function of n. This is the reason why Proposition 3.1 does not explicitly depend on the eigengap. We also note that we cannot directly find δ_d based on the input matrix A. But standard interlacing results give δ_d = Θ(λ_d(A/ρ(n)) − λ_{d+1}(A/ρ(n))) (cf. [19]).

Intuition of the algorithm. Using Mercer's theorem, we have ⟨Φ(x_i), Φ(x_j)⟩ = lim_{d→∞} ⟨Φ_d(x_i), Φ_d(x_j)⟩ = κ(x_i, x_j). Thus, lim_{d→∞} Φ_d Φ_d^T = K. On the other hand, we have (Ũ_K S̃_K^{1/2})(Ũ_K S̃_K^{1/2})^T = K. Thus, Φ_d(X) and Ũ_K S̃_K^{1/2} are approximately the same, up to a unitary transformation. We need to identify the different sources of error to understand the approximation quality.
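Before turning to the error sources, the Mercer intuition itself is easy to check numerically (a sketch of ours with illustrative constants):

```python
import numpy as np

def feature_map_from_kernel(K, d):
    """Top-d eigenpairs of the kernel matrix K give a finite-sample feature
    map Phi_d with Phi_d @ Phi_d.T converging to K as d grows (Mercer)."""
    vals, vecs = np.linalg.eigh(K)
    vals, vecs = vals[::-1], vecs[:, ::-1]       # descending order
    return vecs[:, :d] * np.sqrt(np.maximum(vals[:d], 0.0))

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 300))
K = 0.5 / (np.abs(x[:, None] - x[None, :]) ** 1.5 + 0.2)  # small-world kernel
Phi_d = feature_map_from_kernel(K, d=10)
print(np.linalg.norm(Phi_d @ Phi_d.T - K) / np.linalg.norm(K))  # small
```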
Error source 1. Finite samples to learn the kernel. We want to infer about "continuous objects" κ and D (specifically, the eigenfunctions of the operator K), but the matrix K only contains the kernel values of a finite set of pairs. From standard results in kernel PCA [52, 25], we have, with probability ≥ 1 − δ and for some unitary matrix W,

  ‖U_K S_K^{1/2} W − Φ_d(X)‖_F ≤ 2√2 √(log(1/δ)) / (λ_d(K) − λ_{d+1}(K)) = 2√2 √(log(1/δ)) / δ_d.

Error source 2. We only observe A. We observe only the realized graph A and not K, though it holds that E[A] = K/C(n). Thus, we can only use singular vectors of C(n)·A to approximate U_K S_K^{1/2}. We have ‖√(C(n)) U_A S_A^{1/2} W − U_K S_K^{1/2}‖_F = O(√(dn) ρ(n)/(δ_d t²)). When A is dense (i.e., C(n) = O(1)), the problem is analyzed in [25]. We generalize the results of [25] to the sparse graph case. See [19] for a complete analysis.

Error source 3. Truncation error. When i is large, the noise in λ_i(A)(Ũ_A)_{:,i} "outweighs" the signal. Thus, we need to choose a d such that only the first d eigenvectors/eigenvalues of A are used to approximate Φ_d. Here, we need to address the truncation error: the tail {√λ_i ψ_i(x_j)}_{i>d} is thrown away. Next we analyze the magnitude of the tail. We abuse notation so that Φ_d(x) refers both to a d-dimensional vector and to an N_H-dimensional vector in which all entries after the d-th one are 0. We have E‖Φ(x) − Φ_d(x)‖² = Σ_{i>d} E[(√λ_i ψ_i(x))²] = Σ_{i>d} λ_i ∫ |ψ_i(x)|² dF(x) = Σ_{i>d} λ_i. (A Chernoff bound is used to obtain ‖Φ − Φ_d‖_F = O(√(n Σ_{i>d} λ_i)).) Using the decay condition, we show that a d can be identified so that the tail can be bounded by a polynomial in δ_d. The details are technical and are provided in [19].

3.2 Estimating Pairwise Distances from Φ̂(x_i) through Isomap

See ISOMAP-ALGO(·) in Fig. 1 for the pseudocode. After we construct our estimate Φ̂_d, we estimate K by letting K̂ = Φ̂_d Φ̂_d^T. Recalling that K_{i,j} = c_0/(|x_i − x_j|^α + c_1), a plausible approach is to estimate |x_i − x_j| = (c_0/K̂_{i,j} − c_1)^{1/α}. However, κ(x_i, x_j) is a convex function of |x_i − x_j|: when K_{i,j} is small, a small estimation error will result in an amplified estimation error in |x_i − x_j| (see also Fig. 7 in App. A.3), but when |x_i − x_j| is small, K_{i,j} is reliable (see the "reliable" region in Fig. 7 in App. A.3). Thus, our algorithm only uses large values of K_{i,j} to construct estimates.
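A small numeric illustration of this amplification (the constants here are ours, chosen only for illustration):

```python
# Perturb the kernel value by a fixed eps and invert: the induced error in the
# recovered distance is far larger where kappa (hence K_ij) is small.
c0, c1, alpha, eps = 0.5, 0.2, 1.5, 0.02
for dist in (0.05, 0.90):                      # short-range vs long-range pair
    K_true = c0 / (dist ** alpha + c1)
    d_hat = (c0 / (K_true - eps) - c1) ** (1.0 / alpha)
    print(f"true |x_i - x_j| = {dist:.2f} -> recovered {d_hat:.3f}")
```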
?n/f (n), where f (n) = ?2/87 (n) = term in Propostion 3.1. Let f (n) be such that k? 2 b i ) ? ?(xi )k2 ? ?(log we have Pri [k?(x p p n) for sufficiently large ?(n). By Markov?s inequality, 2 b 1/ f (n)] ? 1/f (n). Intuitively, when k?(xi ) ? ?(xi )k ? 1/ f (n), i becomes a candidate that can serve to build up undesirable shortcuts. Thus, we want to eliminate these nodes. p Looking at a ball of radius O(1/ f (n)) centered at a point zbi , consider two cases. Case 1. If zbi is close to Proj(b zi ), i.e., corresponding to the blue nodes in Figure 2(c). For the purpose of exposition, let us assume z b | = O(f ?1/? (n)), then we p i = zi . Now for any point zj , if |xi ? xjp have kb zi ? zbj k = O(1/ f (n)), which means zj is in Ball(zi , O(1/ f (n))). The total number of such nodes will be in the order of ?(n/f 1/? (n)), by using the near-uniform density assumption. Case 2. If zbi is far awaypfrom any point in C, i.e., corresponding to the red ball in Figure 2(c), any points in Ball(b zi , O(1/ f (n))) will also be far from C. Then the total number of such nodes will be O(n/f (n)). As n/f 1/? (n) = ?(n/f (n)) for ? > 1, there is a phase-transition phenomenon: When zbi is far from C, then a neighborhood of zbi contains O(n/f (n)) nodes. When zbi is close to C, then a neighborhood of zbi contains ?(n/f (n)) nodes. We can leverage this intuition to design a countingbased algorithm to eliminate nodes that are far from C: D ENOISE(b zi ) : If |Ball(b zi , 3/ p f (n))| < n/f (n), remove zbi . 7 (3) Algo. Ours Mod. [55] CA [18] Maj [56] RW [54] MDS [49] ? 0.53 0.16 0.20 0.13 0.01 0.05 Slope of ? 9.54 1.14 0.11 0.09 1.92 30.91 S.E. 0.28 0.02 7e-4 0.02 0.65 120.9 p-value < 0.001 < 0.001 < 0.001 < 0.001 < 0.001 0.09 Figure 3: Latent Estimates vs. Ground-truth. (a) Inferred kernel (b) SWM (c) SBM Figure 4: Visualization of real and synthetic networks. (a) Our inferred kernel matrix, which is ?in-between? (b) the small-world model and (c) the stochastic blockmodel. Theoretical result. We classify a point i into three groups: p 1. Good: Satisfying kb zi ? Proj(b zi )k ? 1/ f (n). We p further partition the set of good points into two parts. Good-I are points such that kb zi ? zi k ? 1/ f (n), while Good-II are points that are good but not in Good-I. p 2. Bad: when kzi ? Proj(zi )k > 4/ f (n). 3. Unclear: otherwise. Lemma 3.2. (cf. [19] ) After running D ENOISE that uses the counting-based decision rule, all good points are kept, all bad points are eliminated, and all unclear points have no performance guarantee. The total number of eliminated nodes is ? n/f (n). Step 2. An isomap-based algorithm. Wlog assume there is only one closed interval for support(F ). Wepbuild a graph G on [n] so that two nodes zbi and zbj are connected if and only if kb zi ? zbj k ? `/ f (n), where ` is a sufficiently large constant (say 10). Consider the shortest path distance between arbitrary pairs of nodes i and j (that are not eliminated.) Because the corrupted nodes are removed, the whole path is around C. Also, by the uniform density assumption, walking on the shortest path in G is equivalent to walking on C with ?uniform speed?, i.e., each edge on the path will map to an approximately fixed distance on C. Thus, the shortest path distance scales with 2/? 2/?   1/? 1/? ?`?3 ?`+8 the latent distance, i.e., (d ? 1) 2c ? |xi ? xj | ? d 2c , which f (n) f (n) implies Theorem 2.2 (cf. [19] for details). Discussion: ?Gluing together? two algorithms? The unified model is much more flexible than SBM and SWM. 
We were intrigued that the generalized algorithm needs only to ?glue together? important techniques used in both models: Step 1 uses the spectral technique inspired by SBM inference methods, while Step 2 resembles techniques used in SWM: the isomap G only connects between two nodes that are close, which is akin to throwing away the long-range edges. 4 Experiments We apply our algorithm to a social interaction graph from Twitter to construct users? ideology scores. We assembled a dataset by tracking keywords related to the 2016 US presidential election for 10 million users. First, we note that as of 2016 the Twitter interaction graph behaves ?in-between? the small-world and stochastic blockmodels (see Figure 4), i.e., the latent distributions are bi-modal but not as extreme as the SBM. Ground-truth data. Ideology scores of the US Congress (estimated by third parties [57]) are usually considered as a ?ground-truth? dataset, e.g., [18]. We apply our algorithm and other baselines on Twitter data to estimate the ideology score of politicians (members of the 114th Congress), and 8 observe that our algorithm has the highest correlation with ground-truth. See Fig. 3. Beyond correlation, we also need to estimate the statistical significance of our estimates. We set up a linear model y ? ?1 x b + ?0 , in which x b?s are our estimates and y?s are ground-truth. We use bootstrapping to compute the standard error of our estimator, and use the standard error to estimate the p-value of our estimator. The details of this experiment and additional empirical evaluation are available in [19]. Acknowlegments The authors thank Amazon for partly providing AWS Cloud Credits for this research. References [1] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social networks, 5(2):109?137, 1983. [2] Duncan J Watts and Steven H Strogatz. 393(6684):440?442, 1998. Collective dynamics of small-world networks. Nature, [3] Jon Kleinberg. The small-world phenomenon: An algorithmic perspective. In Proceedings of the thirtysecond annual ACM symposium on Theory of computing, pages 163?170. ACM, 2000. [4] Se-Young Yun and Alexandre Prouti`ere. Optimal cluster recovery in the labeled stochastic block model. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 965?973, 2016. [5] Elchanan Mossel, Joe Neeman, and Allan Sly. Reconstruction and estimation in the planted partition model. Probability Theory and Related Fields, 162(3-4):431?461, 2015. [6] Emmanuel Abbe and Colin Sandon. Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic BP, and the information-computation gap. arXiv preprint arXiv:1512.09080, 2015. [7] Emmanuel Abbe and Colin Sandon. Community detection in the general stochastic block model: Fundamental limits and efficient algorithms for recovery. In Proceedings of 56th Annual IEEE Symposium on Foundations of Computer Science, Berkely, CA, USA, pages 18?20, 2015. [8] Laurent Massouli?e. Community detection thresholds and the weak Ramanujan property. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 694?703. ACM, 2014. [9] Elchanan Mossel, Joe Neeman, and Allan Sly. A proof of the block model threshold conjecture. arXiv preprint arXiv:1311.4115, 2013. [10] Peter J. Bickel and Aiyou Chen. A nonparametric view of network models and newmangirvan and other modularities. 
Proceedings of the National Academy of Sciences, 106(50):21068?21073, 2009. [11] Jure Leskovec, Kevin J Lang, Anirban Dasgupta, and Michael W Mahoney. Statistical properties of community structure in large social and information networks. In Proceedings of the 17th international conference on World Wide Web, pages 695?704. ACM, 2008. [12] Mark EJ Newman and Michelle Girvan. Finding and evaluating community structure in networks. Physical review E, 69(2):026113, 2004. [13] Mark EJ Newman, Duncan J Watts, and Steven H Strogatz. Random graph models of social networks. Proceedings of the National Academy of Sciences, 99(suppl 1):2566?2572, 2002. [14] Frank McSherry. Spectral partitioning of random graphs. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, pages 529?537. IEEE, 2001. [15] Jure Leskovec, Kevin J Lang, and Michael Mahoney. Empirical comparison of algorithms for network community detection. In Proceedings of the 19th international conference on World wide web, pages 631?640. ACM, 2010. [16] Ittai Abraham, Shiri Chechik, David Kempe, and Aleksandrs Slivkins. Low-distortion inference of latent similarities from a multiplex social network. In SODA, pages 1853?1872. SIAM, 2013. [17] Pablo Barber?a. Birds of the Same Feather Tweet Together. Bayesian Ideal Point Estimation Using Twitter Data. 2012. 9 [18] Pablo Barber?a, John T. Jost, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau. Tweeting from left to right. Psychological Science, 26(10):1531?1542, 2015. [19] Cheng Li, Felix M. F. Wong, Zhenming Liu, and Varun Kanade. From which world is your graph? Available on Arxiv, 2017. [20] Peter D. Hoff, Adrian E. Raftery, and Mark S. Handcock. Latent space approaches to social network analysis. JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 97:1090?1098, 2001. [21] Edoardo M. Airoldi, David M. Blei, Stephen E. Fienberg, and Eric P. Xing. Mixed membership stochastic blockmodels. J. Mach. Learn. Res., 9:1981?2014, 2008. [22] Karl Rohe, Sourav Chatterjee, and Bin Yu. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, 39(4):1878?1915, 2011. [23] Edo M Airoldi, Thiago B Costa, and Stanley H Chan. Stochastic blockmodel approximation of a graphon: Theory and consistent estimation. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 692?700. Curran Associates, Inc., 2013. [24] Sofia C. Olhede Patrick J. Wolfe. Nonparametric graphon estimation. 2013. [25] Minh Tang, Daniel L. Sussman, and Carey E. Priebe. Universally consistent vertex classification for latent positions graphs. Ann. Statist., 41(3):1406?1430, 06 2013. [26] Patrick J. Wolfe and David Choi. Co-clustering separately exchangeable network data. The Annals of Statistics, 42(1):29?63, 2014. [27] Varun Kanade, Elchanan Mossel, and Tselil Schramm. Global and local information in clustering labeled block models. IEEE Trans. Information Theory, 62(10):5906?5917, 2016. [28] Karl Rohe, Tai Qin, and Bin Yu. Co-clustering directed graphs to discover asymmetries and directional communities. Proceedings of the National Academy of Sciences, 113(45):12679?12684, 2016. [29] Sourav Chatterjee. Matrix estimation by universal singular value thresholding. Ann. Statist., 43(1):177? 214, 02 2015. [30] Jiaming Xu, Laurent Massouli?e, and Marc Lelarge. Edge label inference in generalized stochastic block models: from spectral theory to impossibility results. 
In Maria Florina Balcan, Vitaly Feldman, and Csaba Szepesvri, editors, Proceedings of The 27th Conference on Learning Theory, volume 35 of Proceedings of Machine Learning Research, pages 903?920, Barcelona, Spain, 13?15 Jun 2014. PMLR. [31] K. T. Poole and H. Rosenthal. A spatial model for legislative roll call analysis. American Journal of Political Science, 29(2):357?384, 1985. [32] M. Laver, K. Benoit, and J. Garry. Extracting policy positions from political texts using words as data. American Political Science Review, 97(2), 2003. [33] J. Clinton, S. Jackman, and D. Rivers. The statistical analysis of roll call data. American Political Science Review, 98(2):355?370, 2004. [34] S. Gerrish and D. Blei. How the vote: Issue-adjusted models of legislative behavior. In Proc. NIPS, 2012. [35] S. Gerrish and D. Blei. Predicting legislative roll calls from text. In Proc. ICML, 2011. [36] J. Grimmer and B. M. Stewart. Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political Analysis, 2013. [37] Emmanuel Abbe. Community detection and the stochastic block model. 2016. [38] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389?434, 2012. [39] C. Davis and W. M. Kahan. The rotation of eigenvectors by a perturbation. SIAM J. Numer. Anal., 7:1?46, 1970. [40] Piotr Indyk and Jiri Matou?sek. Low-distortion embeddings of finite metric spaces. Handbook of discrete and computational geometry, page 177, 2004. [41] Yunpeng Zhao, Elizaveta Levina, and Ji Zhu. Consistency of community detection in networks under degree-corrected stochastic block models. Ann. Statist., 40(4):2266?2292, 08 2012. 10 [42] Tai Qin and Karl Rohe. Regularized spectral clustering under the degree-corrected stochastic blockmodel. In C.j.c. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3120?3128. 2013. [43] Inderjit S. Dhillon. Co-clustering documents and words using bipartite spectral graph partitioning. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ?01, pages 269?274, New York, NY, USA, 2001. ACM. [44] T. Zhou, J. Ren, M. Medo, and Y.-C. Zhang. Bipartite network projection and personal recommendation. 76(4):046115, October 2007. [45] Felix Ming Fai Wong, Chee-Wei Tan, Soumya Sen, and Mung Chiang. Quantifying political leaning from tweets, retweets, and retweeters. IEEE Trans. Knowl. Data Eng., 28(8):2158?2172, 2016. [46] H. K?onig. Eigenvalue Distribution of Compact Operators. Operator Theory: Advances and Applications. Birkh?auser, 1986. [47] Milena Mihail and Christos Papadimitriou. On the eigenvalue power law. In International Workshop on Randomization and Approximation Techniques in Computer Science, pages 254?262. Springer, 2002. [48] Mihai Badoiu, Julia Chuzhoy, Piotr Indyk, and Anastasios Sidiropoulos. Low-distortion embeddings of general metrics into the line. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, May 22-24, 2005, pages 225?233, 2005. [49] I. Borg and P.J.F. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer, 2005. [50] Piotr Indyk and Jiri Matousek. Low-distortion embeddings of finite metric spaces. In in Handbook of Discrete and Computational Geometry, pages 177?196. CRC Press, 2004. [51] Bernhard Scholkopf and Alexander J. Smola. 
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001. [52] Lorenzo Rosasco, Mikhail Belkin, and Ernesto De Vito. On learning with integral operators. J. Mach. Learn. Res., 11:905?934, March 2010. [53] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319, 2000. [54] Vin De Silva and Joshua B. Tenenbaum. Global versus local methods in nonlinear dimensionality reduction. In Advances in Neural Information Processing Systems 15, pages 705?712. MIT Press, 2003. [55] Mark EJ Newman. Finding community structure in networks using the eigenvectors of matrices. Physical review E, 74, 2006. [56] U. N. Raghavan, R. Albert, and S. Kumara. Near linear time algorithm to detect community structures in large-scale networks. Physical Review E, 76(3), 2007. [57] Joshua Tauberer. Observing the unobservables in the us congress. Law Via the Internet, 2012. 11
6745 |@word mild:1 briefly:1 polynomial:5 stronger:2 proportion:1 nd:1 glue:1 suitably:3 c0:9 adrian:1 closure:1 eng:1 decomposition:1 paid:1 incarnation:1 reduction:3 liu:2 contains:6 score:4 daniel:1 neeman:2 ours:2 document:1 existing:2 err:2 whp:2 surprising:1 lang:2 intriguing:1 follower:2 must:1 john:2 realistic:2 partition:3 kdd:1 remove:3 designed:2 plot:1 v:2 discovering:2 olhede:1 short:4 chiang:1 blei:3 node:48 location:2 zhang:1 limd:3 c2:1 dn:1 borg:1 symposium:5 jiri:2 scholkopf:1 incorrect:2 consists:1 constructed:1 feather:1 ik:4 falsely:1 x0:8 theoretically:1 pairwise:8 allan:2 behavior:2 themselves:1 dist:1 usvt:1 inspired:1 ming:1 decreasing:1 pitfall:1 automatically:1 election:1 ua:7 becomes:2 spain:2 provided:2 notation:5 bounded:2 underlying:2 discover:2 begin:1 estimating:4 what:1 eigenvector:1 developed:2 unified:7 finding:2 transformation:1 bootstrapping:1 csaba:1 guarantee:1 every:2 multidimensional:1 friendly:1 exactly:1 k2:2 rm:1 uk:5 partitioning:2 exchangeable:1 onig:1 control:1 arguably:1 before:2 felix:3 positive:1 local:2 multiplex:1 swm:15 congress:3 consequence:1 limit:1 despite:1 mach:2 oxford:1 laurent:2 path:13 approximately:4 abuse:1 twice:1 sussman:1 resembles:1 matousek:1 studied:2 bird:1 co:3 limited:1 bi:2 range:9 statistically:4 directed:1 practice:3 assortative:2 definite:1 block:12 union:1 procedure:4 special:3 universal:1 empirical:3 projection:3 chechik:1 word:3 integrating:1 refers:2 radial:1 onto:1 close:6 undesirable:3 operator:5 cannot:1 context:1 impossible:1 wong:3 optimize:1 equivalent:2 map:3 quick:1 ramanujan:1 straightforward:1 attention:1 convex:1 focused:1 amazon:1 recovery:4 splitting:1 zbi:10 shorten:1 estimator:3 sbm:15 rule:1 pull:1 tary:1 embedding:5 handle:1 notion:2 coordinate:2 analogous:1 annals:2 tan:1 user:14 homogeneous:1 us:6 kathryn:1 curran:1 associate:1 wolfe:2 satisfying:1 walking:2 cut:1 modularities:1 labeled:3 observed:2 cloud:1 steven:2 preprint:2 capture:1 region:1 connected:2 highest:2 removed:1 yk:1 intuition:2 dynamic:1 geodesic:3 personal:1 vito:1 rnh:4 carrying:1 depend:2 algo:2 serve:1 upon:1 bipartite:11 efficiency:1 eric:1 basis:1 fast:2 describe:1 birkh:1 newman:3 formation:1 choosing:2 outside:1 neighborhood:2 hyper:1 kevin:2 quite:1 larger:1 plausible:1 heuristic:2 consume:1 distortion:4 say:3 otherwise:2 presidential:1 widely:1 statistic:2 gi:3 g1:1 kahan:2 noisy:1 indyk:3 online:1 eigenvalue:14 differentiable:1 sen:1 leinhardt:1 propose:1 reconstruction:1 product:3 interaction:3 qin:2 relevant:1 iff:1 subgraph:1 achieve:1 amplified:1 academy:3 empty:1 cluster:2 asymmetry:1 produce:1 generating:1 leave:1 object:1 help:1 polylog:3 develop:1 ij:1 keywords:1 finitely:2 eq:2 sa:6 implies:2 come:1 convention:1 radius:1 closely:1 correct:1 somap:3 stochastic:21 kb:4 centered:1 raghavan:1 routing:1 material:1 adjacency:4 bin:2 require:2 crc:1 generalization:2 preliminary:1 randomization:1 proposition:3 adjusted:1 ideology:5 graphon:3 hold:4 around:1 credit:1 considered:2 ground:5 sufficiently:6 great:1 algorithmic:4 mapping:1 bj:1 bickel:1 purpose:1 estimation:13 proc:2 label:4 currently:1 knowl:1 bridge:1 grouped:1 largest:1 ere:1 tool:1 mit:2 ck:1 rather:2 zhou:1 ej:3 aiyou:1 focus:5 vk:2 maria:1 rank:2 modelling:1 impossibility:1 political:10 blockmodel:11 sigkdd:1 baseline:1 detect:1 dim:1 inference:11 twitter:5 membership:1 szepesvri:1 eliminate:3 typically:3 hidden:4 proj:5 misclassified:1 subroutine:1 comprising:1 i1:1 arg:2 classification:2 groenen:1 issue:1 among:1 flexible:2 spatial:1 auser:1 
fairly:1 logc:1 kempe:1 hoff:1 construct:4 field:1 ernesto:1 beach:1 eliminated:3 equal:2 piotr:3 chernoff:1 look:4 icml:1 abbe:3 jon:1 yu:2 discrepancy:1 papadimitriou:1 t2:2 report:1 piecewise:1 richard:1 belkin:1 future:1 modern:1 randomly:1 soumya:1 ve:1 national:3 individual:5 phase:1 geometry:2 connects:1 william:2 recalling:1 thrown:1 detection:6 attempt:3 interest:1 organization:1 mining:1 evaluation:1 jackman:1 numer:1 mahoney:2 introduces:1 analyzed:1 joel:1 benoit:1 extreme:1 mcsherry:1 accurate:1 integral:3 edge:17 necessary:1 nference:1 eigenspaces:1 machinery:1 modest:1 elchanan:3 re:3 theoretical:2 leskovec:2 politician:2 psychological:1 classify:1 column:2 stewart:1 ordinary:4 vertex:2 subset:4 entry:7 uniform:6 comprised:1 dij:2 seventh:1 too:1 characterize:1 corrupted:7 synthetic:2 st:4 density:3 international:4 river:1 siam:2 fundamental:3 off:1 influencers:4 michael:2 ym:1 together:3 na:1 again:1 reflect:1 satisfied:1 choose:3 rosasco:1 ek:1 american:4 zhao:1 return:2 li:2 account:1 de:3 schramm:1 summarized:1 inc:1 satisfy:1 explicitly:1 depends:2 vi:9 later:1 view:1 closed:2 analyze:4 observing:2 red:2 xing:1 recover:2 vin:2 slope:1 carey:1 contribution:2 formed:2 accuracy:3 blackmond:1 roll:3 correspond:1 identify:2 directional:1 generalize:1 weak:1 bayesian:1 famous:1 ren:1 researcher:1 published:2 app:6 explain:4 suffers:1 edo:1 checked:1 definition:4 lelarge:1 pp:1 tucker:1 tweeting:1 proof:3 associated:4 costa:1 tunable:2 dataset:2 recall:2 subsection:1 knowledge:2 dimensionality:3 infers:2 stanley:1 cj:1 uncover:1 sophisticated:1 ea:1 appears:1 alexandre:1 higher:1 varun:3 follow:1 modal:1 wei:1 execute:2 though:1 furthermore:1 smola:1 sly:2 langford:1 correlation:2 hand:1 web:2 tropp:1 nonlinear:3 overlapping:1 celebrity:5 google:1 interfere:1 fai:1 quality:1 reveal:1 laskey:1 artifact:1 mary:2 usa:5 xj1:1 requiring:1 true:5 isomap:11 excessively:3 concept:2 regularization:3 assigned:1 geographic:2 symmetric:1 dhillon:1 pri:1 i2:1 davis:2 noted:1 coincides:1 samuel:1 nagler:1 generalized:3 bijective:1 yun:1 complete:1 demonstrate:1 julia:1 silva:2 balcan:1 recently:1 misspecified:1 hreshold:3 rotation:1 pseudocode:1 behaves:1 sek:1 physical:3 ji:1 overview:1 volume:1 nh:2 association:1 thiago:1 million:1 approximates:1 interpret:2 tail:4 sidiropoulos:1 mihai:1 cambridge:1 feldman:1 automatic:1 consistency:1 mathematics:1 handcock:1 access:1 similarity:2 etc:1 patrick:2 chan:1 perspective:1 apart:1 driven:1 inequality:2 binary:1 trendy:1 maxd:1 yi:4 joshua:4 additional:3 somewhat:1 converge:1 shortest:11 colin:2 signal:1 ii:3 stephen:1 full:2 sound:4 multiple:2 reduces:1 infer:3 anastasios:1 legislative:3 technical:2 badoiu:1 match:1 levina:1 interlacing:1 long:7 va:3 jost:1 prediction:1 basic:2 tselil:1 heterogeneous:2 denominator:1 metric:5 variant:1 florina:1 arxiv:5 albert:1 kernel:15 tailored:1 df:4 suppl:1 jiaming:1 normalization:1 c1:10 addition:1 want:3 separately:1 background:1 interval:5 baltimore:1 aws:1 singular:6 source:4 unlike:1 smallworld:1 milena:1 comment:1 eigenfunctions:3 induced:1 chee:1 december:1 vitaly:1 member:1 mod:1 inconsistent:1 call:3 extracting:1 near:3 presence:1 ideal:1 revealed:1 leverage:1 enough:2 embeddings:3 counting:1 xj:23 independence:1 zi:14 identified:1 idea:2 knowing:2 aproximation:1 whether:1 pca:1 lgo:3 eigengap:2 akin:1 edoardo:1 peter:2 york:1 remark:3 proportionally:1 detailed:1 eigenvectors:8 clear:1 se:1 nonparametric:2 tenenbaum:2 statist:3 simplest:1 diameter:1 rw:1 exist:1 xij:1 zj:2 estimated:2 disjoint:4 
A New Alternating Direction Method for Linear Programming

Sinong Wang, Department of ECE, The Ohio State University, wang.7691@osu.edu
Ness Shroff, Department of ECE and CSE, The Ohio State University, shroff.11@osu.edu

Abstract

It is well known that, for a linear program (LP) with constraint matrix $A \in \mathbb{R}^{m\times n}$, the Alternating Direction Method of Multipliers converges globally and linearly at a rate $O((\|A\|_F^2 + mn)\log(1/\epsilon))$. However, such a rate is related to the problem dimension, and the algorithm exhibits a slow and fluctuating "tail convergence" in practice. In this paper, we propose a new variable splitting method for LP and prove that our method has a convergence rate of $O(\|A\|^2\log(1/\epsilon))$. The proof is based on simultaneously estimating the distance from a pair of primal and dual iterates to the optimal primal and dual solution sets by certain residuals. In practice, we obtain a new first-order LP solver that can exploit both the sparsity and the specific structure of the matrix $A$, and that achieves a significant speedup for important problems such as basis pursuit, inverse covariance matrix estimation, $\ell_1$ SVM, and nonnegative matrix factorization, compared with the current fastest LP solvers.

1 Introduction

We are interested in applying the Alternating Direction Method of Multipliers (ADMM) to solve a linear program (LP) of the form

$$\min_{x\in\mathbb{R}^n} c^T x \quad \text{s.t.}\quad Ax = b,\; x_i \ge 0,\; i \in [n_b], \tag{1}$$

where $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m\times n}$ is the constraint matrix, $b \in \mathbb{R}^m$, and $[n_b] = \{1, \ldots, n_b\}$. This problem plays a major role in numerical optimization and has been used in a large variety of application areas. For example, several important machine learning problems, including nonnegative matrix factorization (NMF) [1], $\ell_1$-regularized SVM [2], sparse inverse covariance matrix estimation (SICE) [3], basis pursuit (BP) [4], and MAP inference [5], can be cast into an LP setting. The complexity of traditional LP solvers is still at least quadratic in the problem dimension, e.g., the interior point method (IPM) with a weighted path-finding strategy. However, many recent problems in machine learning involve extremely large-scale data but exhibit a sparse structure, i.e., $\mathrm{nnz}(A) \ll mn$, where $\mathrm{nnz}(A)$ is the number of non-zero elements in the constraint matrix $A$. This characteristic severely limits the ability of the IPM or simplex techniques to solve these problems. On the other hand, first-order methods have received extensive attention recently due to their ability to deal with large data sets. These methods require one matrix-vector multiplication $Ax$ in each iteration, with complexity linear in $\mathrm{nnz}(A)$. However, the key challenge in designing a first-order algorithm is that LPs are usually non-smooth and non-strongly convex optimization problems (they may not have a unique solution). Utilizing the standard primal and dual stochastic sub-gradient descent method results in an extremely slow convergence rate, i.e., $O(1/\epsilon^2)$ [6]. The ADMM was first developed in 1975 [7], and since then several LP solvers have been based on this technique. Compared with the traditional Augmented Lagrangian Method (ALM), this method splits the variables into several blocks and optimizes the augmented Lagrangian (AL) function in a Gauss–Seidel fashion, which often results in relatively easier subproblems. However, this method suffers from slow convergence when the number of blocks increases.
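To make the reduction to form (1) concrete, the following is a minimal Python sketch of the standard variable-doubling trick for casting basis pursuit, $\min \|x\|_1$ s.t. $Ax = b$, into the canonical LP; the function name and interface are illustrative assumptions, not from this paper.

```python
import numpy as np

def basis_pursuit_as_lp(A, b):
    """Cast min ||x||_1 s.t. Ax = b into canonical LP form (1).

    Split x = x_plus - x_minus with x_plus, x_minus >= 0, so that
    ||x||_1 = 1^T x_plus + 1^T x_minus at the optimum. All 2n
    variables are sign-constrained, hence n_b = 2n.
    """
    m, n = A.shape
    c = np.ones(2 * n)           # objective: sum of x_plus and x_minus
    A_lp = np.hstack([A, -A])    # A x_plus - A x_minus = b
    n_b = 2 * n                  # every variable is nonnegative
    return c, A_lp, b, n_b
```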
Moreover, a challenge in applying the ADMM to an LP is that the LP does not exhibit an explicit separable structure among the variables, which makes it difficult to split in the traditional sense. The notable work [8] first applied the ADMM to the LP by augmenting the original $n$-dimensional variables into $nm$ dimensions, so that the resulting augmented Lagrangian function is separable among $n$ blocks of variables. They proved that this method converges globally and linearly. However, the rate of this method depends on the problem dimensions $m, n$, and it converges quite slowly when $m, n$ are large. Thus, they left open the question of whether other efficient splitting methods exist that admit a convergence analysis in a space of lower dimension $m$ or $n$.

In this paper, we propose a new splitting method for LP, which splits the equality and inequality constraints into two blocks. The resulting subproblems in each iteration are a linear system with a positive definite matrix and $n$ one-dimensional truncation operations. We prove that our new method converges globally and linearly at a faster rate than the method in [8]. Specifically, the main contributions of this paper can be summarized as follows: (i) We show that the existing ADMM in [8] exhibits a slow and fluctuating "tail convergence" and provide a theoretical understanding of why this phenomenon occurs. (ii) We propose a new ADMM method for LP and provide a new analysis of its linear convergence rate, which involves only $O(m+n)$-dimensional iterates. This result answers the open question posed in [8]. (iii) We show that when the matrix $A$ possesses certain specific structure, the resulting subproblem can be solved in closed form. For a general constraint matrix $A$, we design an efficiently implemented Accelerated Coordinate Descent Method (ACDM) to solve the subproblem in $O(\log(1/\epsilon)\,\mathrm{nnz}(A))$ time. (iv) Practically, we show that our proposed algorithm significantly speeds up solving the basis pursuit, $\ell_1$-regularized SVM, sparse inverse covariance matrix estimation, and nonnegative matrix factorization problems compared with the existing splitting method [8] and the current fastest first-order LP solver [9].

2 Preliminaries

In this section, we first review several definitions that will be used in the sequel. Then we present some observations on the existing method. We also include, in the Appendix, several LP-based machine learning problems that can be cast into the LP setting.

2.1 Notation

A twice differentiable function $f: \mathbb{R}^n \to \mathbb{R}$ has strong convexity parameter $\mu$ if and only if its Hessian satisfies $\nabla^2 f(x) \succeq \mu I$ for all $x$. We use $\|\cdot\|$ to denote the standard $\ell_2$ norm for vectors or the spectral norm for matrices, $\|\cdot\|_1$ to denote the $\ell_1$ norm, and $\|\cdot\|_F$ to denote the Frobenius norm. A twice differentiable function $f: \mathbb{R}^n \to \mathbb{R}$ has a component-wise Lipschitz continuous gradient with constants $L_i$ if and only if $\|\nabla_i f(x) - \nabla_i f(y)\| \le L_i\|x - y\|$ for all $x, y$. For example, for the quadratic function $F(x) = \frac{1}{2}\|Ax - b\|^2$, the gradient is $\nabla F(x) = A^T(Ax - b)$ and the Hessian is $\nabla^2 F(x) = A^T A$. Hence the parameters $\mu$ and $L_i$ satisfy (choosing $y = x + te_i$, where $t \in \mathbb{R}$ and $e_i \in \mathbb{R}^n$ is the $i$th unit vector)

$$x^T A^T A x \ge \mu\|x\|^2 \quad\text{and}\quad t\,A_i^T A e_i \le L_i|t|, \quad \forall x, t.$$

Thus $\mu$ is the smallest eigenvalue of $A^T A$ and $L_i = \|A_i\|^2$, where $A_i$ is the $i$th column of the matrix $A$. The projection of a point $x$ onto a convex set $S$ is defined as $[x]_S = \arg\min_{u\in S}\|x - u\|$. If $S$ is the non-negative cone, let $[x]_+ \triangleq [x]_S$. Let $V_i = [0, \infty)$ for $i \in [n_b]$ and $V_i = \mathbb{R}$ for $i \in [n_f]$.
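As a concrete illustration, here is a minimal numpy sketch of the constants just defined for $F(x) = \frac{1}{2}\|Ax - b\|^2$; the helper name is our own.

```python
import numpy as np

def quadratic_constants(A):
    """Strong-convexity parameter mu and coordinate-wise Lipschitz
    constants L_i for F(x) = 0.5 * ||Ax - b||^2 (Hessian A^T A)."""
    H = A.T @ A
    mu = np.linalg.eigvalsh(H)[0]      # smallest eigenvalue of A^T A
    L = np.sum(A * A, axis=0)          # L_i = ||A_i||^2, squared column norms
    return mu, L
```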
2.2 Tail Convergence of the Existing ADMM Method

The existing ADMM in [8] solves the LP (1) by the following procedure: in each iteration $k$, go through the two steps

1. Primal update: $x_i^{k+1} = \Big[x_i^k + \frac{A_i^T(b - Ax^k)}{\|A_i\|^2} - \frac{c_i - A_i^T z^k}{q\,\|A_i\|^2}\Big]_{V_i}$, for $i = 1, \ldots, n$.
2. Dual update: $z^{k+1} = z^k - \frac{\rho}{q}(Ax^{k+1} - b)$.

We plot the solving accuracy versus the number of iterations for three kinds of problems (see Fig. 1 in the Appendix). We observe that the method converges fast in the initial phase but exhibits a slow and fluctuating convergence once the iterates approach the optimal set. This method originates from a specific splitting in the standard 2-block ADMM [10]. To provide some understanding of this phenomenon, we show that the method can actually be recovered by an inexact Uzawa method [11]. The augmented Lagrangian function of problem (1) is $L(x,z) = c^T x + \frac{\rho}{2}\|Ax - b - z/\rho\|^2$. In each iteration $k$, the inexact Uzawa method first minimizes a local second-order approximation of the quadratic term in $L(x, z^k)$ with respect to the primal variables $x$; specifically,

$$x^{k+1} = \arg\min_{x_i\in V_i}\; c^T x + \langle \rho A^T(Ax^k - b - z^k/\rho),\, x - x^k\rangle + \frac{1}{2}\|x - x^k\|_D^2, \tag{2}$$

and then updates the dual variables by $z^{k+1} = z^k - \rho(Ax^{k+1} - b)$. Setting the proximity parameter to $\rho/q$ and taking $D$ to be the diagonal matrix $\mathrm{diag}\{\ldots, q\|A_i\|^2, \ldots\}$, we can recover the above algorithm from the first-order optimality condition of (2). This equivalence lets us attribute the slow and fluctuating "tail convergence" to the inefficiency of such a local approximation of the augmented Lagrangian when the iterates approach the optimal set. One straightforward idea to resolve this issue is to minimize the augmented Lagrangian function exactly instead of its local approximation, which leads to the classic ALM. There is a line of work analyzing the convergence of the ALM applied to LP [9, 12, 13]. This method produces a sequence of constrained quadratic programs (QPs) that are difficult to solve. The work [9] proves that the proximal coordinate descent method can solve each QP at a linear rate even when the matrix $A$ is not of full column rank. However, this approach has several drawbacks: (i) the practical solving time of each subproblem is quite long when $A$ is rank-deficient; (ii) the theoretical performance and complexity of combining recent accelerated proximal techniques [14] with the ALM is unknown; (iii) it cannot exploit the specific structure of the matrix $A$ when solving each constrained QP. This motivates us to investigate a new and efficient variable splitting method for this problem.

3 New Splitting Method in ADMM

We first separate the equality and inequality constraints of the LP (1) by adding another group of variables $y \in \mathbb{R}^n$:

$$\min\; c^T x \quad \text{s.t.}\quad Ax = b,\; x = y,\; y_i \ge 0,\; i \in [n_b]. \tag{3}$$

The dual of problem (3) takes the following form:

$$\min\; b^T z_x \quad \text{s.t.}\quad -A^T z_x - z_y = c,\; z_{y,i} \le 0,\; i \in [n_b],\; z_{y,i} = 0,\; i \in [n]\setminus[n_b]. \tag{4}$$

Let $z_x, z_y$ be the Lagrange multipliers for the constraints $Ax = b$ and $x = y$, respectively. Define the indicator function $g(y)$ of the non-negative cone: $g(y) = 0$ if $y_i \ge 0$ for all $i \in [n_b]$; otherwise $g(y) = +\infty$. Then the augmented Lagrangian function of the primal problem (3) is defined as

$$L(x, y, z) = c^T x + g(y) + z^T(A_1 x + A_2 y - b) + \frac{\rho}{2}\|A_1 x + A_2 y - b\|^2, \tag{5}$$

where $z = [z_x; z_y]$. The matrices $A_1, A_2$ and the (stacked) vector $b$ are given by

$$A_1 = \begin{bmatrix} A \\ I \end{bmatrix},\quad A_2 = \begin{bmatrix} 0 \\ -I \end{bmatrix},\quad b = \begin{bmatrix} b \\ 0 \end{bmatrix}. \tag{6}$$
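A small numpy sketch of the stacked operators in (6) and the augmented Lagrangian (5) may help; keeping $A_1, A_2$ implicit avoids ever forming the $(m+n)\times n$ stacked matrix. The helper is our own and assumes a feasible $y$, i.e., $g(y) = 0$.

```python
import numpy as np

def aug_lagrangian(x, y, zx, zy, A, b, c, rho):
    """Augmented Lagrangian (5) of the split problem (3), with
    A_1 = [A; I], A_2 = [0; -I], stacked rhs [b; 0], z = [zx; zy]."""
    r1 = A @ x - b     # top block of A_1 x + A_2 y - b
    r2 = x - y         # bottom block
    return (c @ x + zx @ r1 + zy @ r2
            + 0.5 * rho * (r1 @ r1 + r2 @ r2))
```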
In each iteration $k$, the standard ADMM goes through the following three steps:

1. Primal update: $x^{k+1} = \arg\min_{x\in\mathbb{R}^n} L(x, y^k, z^k)$.
2. Primal update: $y^{k+1} = \arg\min_{y\in\mathbb{R}^n} L(x^{k+1}, y, z^k)$.
3. Dual update: $z^{k+1} = z^k + \rho(A_1 x^{k+1} + A_2 y^{k+1} - b)$.

The first step is an unconstrained quadratic program, which can be simplified as

$$x^{k+1} = \arg\min_x F_k(x) \triangleq c^T x + (z^k)^T A_1 x + \frac{\rho}{2}\|A_1 x + A_2 y^k - b\|^2. \tag{7}$$

The gradient of the function $F_k(x)$ can be expressed as

$$\nabla F_k(x) = \rho(A^T A + I)x + A_1^T[z^k + \rho(A_2 y^k - b)] + c, \tag{8}$$

and the Hessian of $F_k(x)$ is

$$\nabla^2 F_k(x) = \rho(A^T A + I). \tag{9}$$

Further, by the first-order optimality condition, the first step is equivalent to solving a linear system, which requires inverting the Hessian matrix (9). In practice, solving this exactly is too expensive unless the Hessian exhibits specific structure. Thus, we relax the first step into the inexact minimization: find $x^{k+1}$ such that

$$F_k(x^{k+1}) - \min_{x\in\mathbb{R}^n} F_k(x) \le \epsilon_k, \tag{10}$$

where $\epsilon_k$ is a given accuracy. Transforming the indicator function $g(y)$ back into constraints, the second step separates into $n$ one-dimensional optimization problems: for each $i$,

$$y_i^{k+1} = \arg\min_{y_i\in V_i}\; -z_{y,i}^k y_i + \frac{\rho}{2}(y_i - x_i^{k+1})^2 = \big[x_i^{k+1} + z_{y,i}^k/\rho\big]_{V_i}.$$

The resulting algorithm is sketched in Algorithm 1, with a code sketch given after this section.

Algorithm 1  Alternating Direction Method of Multipliers with Inexact Subproblem Solver
  Initialize $z^0 \in \mathbb{R}^{m+n}$; choose parameter $\rho > 0$.
  repeat
    1. Primal update: find $x^{k+1}$ such that $F_k(x^{k+1}) - \min_{x\in\mathbb{R}^n} F_k(x) \le \epsilon_k$.
    2. Primal update: for each $i$, let $y_i^{k+1} = [x_i^{k+1} + z_{y,i}^k/\rho]_{V_i}$.
    3. Dual update: $z_x^{k+1} = z_x^k + \rho(Ax^{k+1} - b)$, $z_y^{k+1} = z_y^k + \rho(x^{k+1} - y^{k+1})$.
  until $\|Ax^{k+1} - b\|_\infty \le \epsilon$ and $\|x^{k+1} - y^{k+1}\|_\infty \le \epsilon$

In some applications, such as $\ell_1$-regularized SVMs and the basis pursuit problem, the objective function contains the $\ell_1$ norm of the variables. Transforming to the canonical form (1) would introduce an additional $n$ variables and $2n$ constraints. One important feature of our method is that we can instead split the objective function by adding the variable $y$. The corresponding subproblems are similar to those of Algorithm 1; the only difference is that the second step becomes $n$ one-dimensional shrinkage operations. (Details are given in the Appendix.)
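The following minimal numpy sketch (ours) illustrates one outer iteration of Algorithm 1. The inner solver `solve_Fk` is a placeholder for the exact or accelerated solvers of Section 5, and the interface is an assumption of this sketch rather than the paper's code.

```python
import numpy as np

def lpadmm_step(x, y, zx, zy, A, b, c, rho, n_b, solve_Fk):
    """One outer iteration of Algorithm 1 (new splitting ADMM).

    solve_Fk(rhs) should (approximately) return the solution of the
    linear system rho*(A^T A + I) x = rhs, i.e., the minimizer of F_k.
    """
    # Step 1: inexact minimization of F_k; set gradient (8) to zero.
    d = A.T @ (zx - rho * b) + (zy - rho * y) + c   # A_1^T[z + rho(A_2 y - b)] + c
    x = solve_Fk(-d)
    # Step 2: n one-dimensional truncations.
    y = x + zy / rho
    y[:n_b] = np.maximum(y[:n_b], 0)
    # Step 3: dual updates.
    zx = zx + rho * (A @ x - b)
    zy = zy + rho * (x - y)
    return x, y, zx, zy
```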
In the sequel, we answer this question positively and provide an accurate analysis of this splitting method. The main technical development is based on a geometric argument: we first prove that the set formed by the optimal primal and dual solutions of LP (3) is a $(3n+m)$-dimensional polyhedron $S^*$; we then utilize a certain global error bound to simultaneously estimate the distance from the iterates $x^{k+1}, y^k, z^k$ to $S^*$. All detailed proofs are given in the Appendix.

Lemma 1. (Convergence of 2-block ADMM [10]) Let $p^k = z^k - \rho A_2 y^k$. We have

$$\|p^{k+1} - [p^{k+1}]_{G^*}\|^2 \le \|p^k - [p^k]_{G^*}\|^2 - \|p^{k+1} - p^k\|^2,$$

where $G^* \triangleq \{p^* \in \mathbb{R}^{m+n} \mid T(p^*) = p^*\}$ and the operator $T$ is defined in (54) in the Appendix. Moreover, if the LP (3) has a pair of optimal primal and dual solutions, the iterates $x^k, y^k$ and $z^k$ converge to an optimal solution; otherwise, at least one of the iterates is unbounded.

Lemma 1 follows from applying the classic Douglas–Rachford splitting method to the LP. It guarantees that the sequence $p^k$ produced by the ADMM converges globally under a mild assumption. However, to establish the linear convergence rate, the key lies in estimating the reverse-side inequality

$$\|p^k - [p^k]_{G^*}\| \le \kappa\,\|p^{k+1} - p^k\|, \quad \kappa > 0. \tag{12}$$

One can then combine the two results to show that the sequence $p^k$ converges globally and linearly, with

$$\|p^{k+1} - [p^{k+1}]_{G^*}\|^2 \le (1 - 1/\kappa^2)\,\|p^k - [p^k]_{G^*}\|^2,$$

which in turn implies the R-linear convergence of the iterates $x^k, y^k$ and $z^k$. To estimate the constant $\kappa$, we first describe the geometry formed by the optimal primal solutions $x^*, y^*$ and dual solutions $z^*$ of the LP (3).

Lemma 2. (Geometry of the optimal solution set of LP) The variables $(x^*, y^*)$ are optimal primal solutions and $z^*$ are optimal dual solutions of LP (3) if and only if (i) $Ax^* = b$, $x^* = y^*$; (ii) $-A^T z_x^* - z_y^* = c$; (iii) $y_i^* \ge 0$, $z_{y,i}^* \le 0$ for $i \in [n_b]$, and $z_{y,i}^* = 0$ for $i \in [n]\setminus[n_b]$; (iv) $c^T x^* + b^T z_x^* = 0$.

An interesting element of Lemma 2 is the use of the strong duality condition (iv) to eliminate the complementary slackness in the standard KKT conditions. The set of optimal primal and dual solutions is then described only by affine constraints, which further implies that the optimal solution set is an $(m+3n)$-dimensional polyhedron. We use $S^*$ to denote this polyhedron.

Lemma 3. (Hoffman bound [19, 20]) Consider a polyhedron $S = \{x \in \mathbb{R}^d \mid Ex = t,\; Cx \le d\}$. For any point $x \in \mathbb{R}^d$,

$$\|x - [x]_S\| \le \theta_S \left\|\begin{bmatrix} Ex - t \\ [Cx - d]_+ \end{bmatrix}\right\|, \tag{13}$$

where $\theta_S$ is the Hoffman constant, which depends on the structure of the polyhedron $S$.

By Lemma 2, it seems we can use the Hoffman bound to estimate the distance between the current iterates $(x^{k+1}, y^k, z^k)$ and the solution set $S^*$ via their primal and dual residuals. However, to obtain an inequality of the form (12), we need to bound these residuals in terms of $\|p^k - p^{k+1}\|$. Indeed, we have the following result.

Lemma 4. (Estimation of residuals) The sequence $(x^{k+1}, y^k, z^k)$ produced by Algorithm 1 satisfies

$$\begin{cases} A_1 x^{k+1} + A_2 y^k - b = (p^{k+1} - p^k)/\rho, \\ c + A_1^T z^k = A_1^T(p^k - p^{k+1}), \\ c^T x^{k+1} + b^T z_x^k = (A_1 x^{k+1} - z^k/\rho)^T(p^k - p^{k+1}), \\ y_i^k \ge 0,\; z_{y,i}^k \le 0,\; i \in [n_b];\quad z_{y,i}^k = 0,\; i \in [n]\setminus[n_b]. \end{cases}$$

One observation from Lemma 4 is that Algorithm 1 automatically preserves the sign constraints and the complementary slackness of both primal and dual iterates. In contrast, the previous algorithm in [8] does not preserve complementary slackness during the iterations.
Combining the results of Lemmas 2, 3 and 4, we are ready to estimate the constant $\kappa$.

Lemma 5. (Estimation of the linear rate) The sequence $p^k = z^k - \rho A_2 y^k$ produced by Algorithm 1 satisfies $\|p^k - [p^k]_{G^*}\| \le \kappa\|p^{k+1} - p^k\|$, where the rate $\kappa$ is given by

$$\kappa = (1+\rho)\left(\frac{R_z + 1}{\rho} + R_x\|A_1\| + \|A_1\|\right)\theta_{S^*}. \tag{14}$$

Here $R_x = \sup_k\|x^k\| < +\infty$ and $R_z = \sup_k\|z^k\| < +\infty$ are the maximum radii of the iterates $x^k$ and $z^k$.

We can now establish the global and linear convergence of Algorithm 1.

Theorem 1. (Linear convergence of Algorithm 1) Denote by $z^k$ the dual iterates produced by Algorithm 1. To guarantee that there exists an optimal dual solution $z^*$ such that $\|z^k - z^*\| \le \epsilon$, it suffices to run Algorithm 1 for $K = 2\kappa^2\log(2D_0/\epsilon)$ iterations with subproblem accuracies satisfying $\epsilon_k \le \epsilon^2/(8K^2)$, where $D_0 = \|p^0 - [p^0]_{G^*}\|$.

The proof of Theorem 1 consists of two steps: first, we establish the global and linear convergence rate of Algorithm 1 when $\epsilon_k = 0$ for all $k$ (exact subproblem solver); then we relax this condition and prove that when $\epsilon_k$ is below the specified threshold, the algorithm still enjoys a convergence rate of the same order. The results for the primal iterates $x^k$ and $y^k$ are similar.

5 Efficient Subproblem Solver

In this section, we show that, thanks to our specific splitting, the subproblem in line 1 of Algorithm 1 can either be solved in closed form or be solved efficiently by the Accelerated Coordinate Descent Method.

5.1 Well-structured Constraint Matrix

Setting the gradient (8) to zero, the primal iterate $x^{k+1}$ is determined exactly by

$$x^{k+1} = \rho^{-1}(I + A^T A)^{-1}d^k, \quad \text{with } d^k = -A_1^T[z^k + \rho(A_2 y^k - b)] - c, \tag{15}$$

which requires inverting an $n\times n$ positive definite matrix $I + A^T A$, or, equivalently, an $m\times m$ positive definite matrix $I + AA^T$ via the Sherman–Morrison–Woodbury identity

$$(I + A^T A)^{-1} = I - A^T(I + AA^T)^{-1}A. \tag{16}$$

A basic observation is that we only need to factorize this matrix once and can then reuse the cached factorization in all subsequent iterations. There are therefore several cases in which the above factorization can be computed efficiently: (i) The factorization has a closed-form expression. For example, in LP-based MAP inference [5], the matrix $I + A^T A$ is block diagonal, and each block has been shown to possess a closed-form factorization. Another important application is the basis pursuit problem, where encoding matrices such as DFT (discrete Fourier transform) and DWHT (discrete Walsh–Hadamard transform) matrices have orthonormal rows and satisfy $AA^T = I$. By (15) and (16), each $x^{k+1} = \rho^{-1}(I - \frac{1}{2}A^T A)d^k$ and can be computed in $O(n\log n)$ time by fast transforms. (ii) The factorization has low complexity: the dimension $m$ (or $n$) is small, e.g., $m = 10^4$. Such a factorization can be computed in $O(m^3)$ time, and the complexity of each iteration is then only $O(\mathrm{nnz}(A) + m^2)$. Detailed applications are given in the Appendix.

Remark 1. In the traditional augmented Lagrangian method, the resulting subproblem is a constrained and non-strongly convex QP (the Hessian is not invertible), which does not admit the above closed-form expression. Moreover, in ALCD [9], the coordinate descent (CD) step picks only one column in each iteration and cannot exploit the nice structure of the matrix $A$. One idea is to modify the CD step in [9] to a proximal gradient descent step; however, this greatly increases the computation time due to the large number of inner gradient descent steps.
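A minimal numpy/scipy sketch (ours, not the paper's code) of the cached-factorization solver suggested by (15)–(16), choosing the smaller of the two systems:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def make_Fk_solver(A, rho):
    """Return solve(rhs) computing (1/rho) * (I + A^T A)^{-1} rhs,
    factorizing once and reusing the Cholesky factor; the m x m
    branch uses the Woodbury identity (16)."""
    m, n = A.shape
    if m <= n:
        factor = cho_factor(np.eye(m) + A @ A.T)   # I + A A^T, done once
        def solve(rhs):
            return (rhs - A.T @ cho_solve(factor, A @ rhs)) / rho
    else:
        factor = cho_factor(np.eye(n) + A.T @ A)   # I + A^T A, done once
        def solve(rhs):
            return cho_solve(factor, rhs) / rho
    return solve
```

The returned `solve` can be plugged directly into the `solve_Fk` slot of the outer-iteration sketch given after Algorithm 1.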
5.2 General Constraint Matrix

In other applications, however, the constraint matrix $A$ exhibits only sparsity and is difficult to invert. To resolve this issue, we resort to the current fastest accelerated coordinate descent method [21], which improves the iteration complexity by up to a factor of $O(\sqrt{n})$ over previous accelerated coordinate descent methods [22]. However, a naive evaluation of a partial derivative of the function $F_k(x)$ in ACDM takes $O(\mathrm{nnz}(A))$ time, and the full-vector operation in each iteration of ACDM costs $O(n)$. We show that these difficulties can be tackled by a carefully designed implementation technique,1) and the main procedure is listed in Algorithm 2. Here the iterates $s_t$ and the matrix $M$ in Algorithm 2 are defined as

$$M = \begin{bmatrix} 1-\tau_v & \tau_v \\ \tau_u & 1-\tau_u \end{bmatrix} \ \text{with}\ \begin{bmatrix}\tau_v \\ \tau_u\end{bmatrix} = \begin{bmatrix}\dfrac{\beta\tau}{1+\beta\tau} \\[4pt] \dfrac{\tau}{1+\beta\tau}\end{bmatrix}, \qquad s_t = \begin{bmatrix}\dfrac{\beta\tau + 1 - \tau}{p_i(1+\beta\tau)L_i}\,\nabla_i F_k(u^t)\,e_i \\[4pt] \dfrac{\tau}{p_i(1+\beta\tau)}\,\nabla_i F_k(u^t)\,e_i\end{bmatrix}, \tag{17}$$

1) This technique is motivated by [22].

Algorithm 2  Efficient Subproblem Solver
  Initialize $u_0, v_0$, $\bar u_0 = Au_0$, $\bar v_0 = Av_0$; set the matrix $M$ and the parameters $\tau, \beta, S$ by (17), and the sampling distribution $p = [\ldots, \sqrt{\|A_i\|^2 + 1}/S, \ldots]$; let $d^k = A_1^T[z^k + \rho(A_2 y^k - b)] + c$.
  repeat
    $[u^t, v^t]^T = M^{t-1}[u, v]^T$ and $[\bar u^t, \bar v^t]^T = M^{t-1}[\bar u, \bar v]^T$.
    Sample $i$ from $[n]$ according to the distribution $p$.
    Compute $\nabla_i F_k(u^t) = \rho(A_i)^T\bar u^t + \rho u_i^t + d_i^k$, and calculate $s_t$ by (17).
    Set $M_t = M\cdot M_{t-1}$; update $[u, v]^T \leftarrow [u, v]^T - M_t^{-1}s_t$ and $[\bar u, \bar v]^T \leftarrow [\bar u, \bar v]^T - M_t^{-1}\tilde s_t$, where $\tilde s_t$ is $s_t$ with $e_i$ replaced by $A_i$.
  until converged
  Output: $x^{k+1} = (u_T - \tau v_T)/(1 - \tau)$

where $\tau = 1 - \frac{2}{S}$, $\beta = \frac{1+\sqrt{4S^2/\rho + 1}}{2}$, and $S = \sum_{i=1}^n \sqrt{\|A_i\|^2 + 1}$. See the Appendix for more details.

Lemma 6. (Inner complexity) In each iteration of Algorithm 2, if the currently picked coordinate is $i$, the update can be finished in $O(\mathrm{nnz}(A_i))$ time; moreover, to guarantee that $F_k(x^{k+1}) - \min_x F_k(x) \le \epsilon_k$ with probability $1-p$, it suffices to run Algorithm 2 for a number of iterations

$$T_k = O(1)\cdot \sum_{i=1}^n \|A_i\|\,\log\!\left(\frac{D_0^k}{\epsilon_k\,p}\right), \qquad D_0^k = F_k(u_0) - \min_x F_k(x). \tag{18}$$

The above iteration complexity is obtained by choosing the corresponding parameter in [21] to be 0, and by utilizing Theorem 1 in [23] to transform the convergence in expectation into a bound that holds with high probability.

Theorem 2. (Overall complexity) Denote by $z^k$ the dual iterates produced by Algorithm 1. To guarantee that there exists an optimal solution $z^*$ such that $\|z^k - z^*\| \le \epsilon$ with probability $1-p$, it suffices to run Algorithm 1 for $K = 2\kappa^2\log(2D_0/\epsilon)$ outer iterations and to solve each subproblem (7) for a number of inner iterations

$$T = O(1)\cdot \sum_{i=1}^n \|A_i\|\,\log\!\left(\frac{\kappa\,(D_0^k)^{\frac13}}{\epsilon^{\frac23}\,p^{\frac13}}\right)\log\!\left(\frac{2D_0}{\epsilon}\right). \tag{19}$$

The results for the primal iterates $x^k$ and $y^k$ are similar. In the existing ADMM [8], each primal and dual update requires only $O(\mathrm{nnz}(A))$ time. The complexity of that method is $O\big(a_m\theta^2(a_m R_x + d_m R_z)^2(\sqrt{mn} + \|A\|_F)^2\,\mathrm{nnz}(A)\log(1/\epsilon)\big)$, where $a_m = \max_i\|A_i\|$, $d_m$ is the largest number of non-zero elements in any row of $A$, and $\theta$ is the Hoffman constant of the optimal solution set of the LP. Based on Theorem 2, a worst-case complexity estimate for Algorithm 1 is $O\big(a_m\theta_{S^*}^2(R_x\|A\| + R_z)^2\,\mathrm{nnz}(A)\log^2(1/\epsilon)\big)$. Remarkably, our method has a weak dependence on the problem dimension compared with the existing ADMM. Since the spectral norm satisfies $\|A\| \le \|A\|_F$, our method is faster than the one in [8].
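As a small illustration, here is the non-uniform coordinate sampling used by Algorithm 2, with $p_i \propto \sqrt{\|A_i\|^2 + 1}$ as reconstructed above; treat the exact weights as an assumption of this sketch.

```python
import numpy as np

def make_coordinate_sampler(A, rng=np.random.default_rng(0)):
    """Sampler for i ~ p with p_i = sqrt(||A_i||^2 + 1) / S, the
    distribution Algorithm 2 uses for its coordinate picks."""
    weights = np.sqrt(np.sum(A * A, axis=0) + 1.0)
    p = weights / weights.sum()          # S = weights.sum()
    return lambda: rng.choice(A.shape[1], p=p)
```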
6 Numerical Results

In this section, we examine the performance of our algorithm and compare it with the state-of-the-art algorithms for solving LPs. The first is the existing ADMM in [8]. The second is the ALCD method in [9], which is reported to be the current fastest first-order LP solver; it has been shown to significantly speed up solving several important machine learning problems compared with simplex and IPM. We name our Algorithm 1 LPADMM. In the experiments, we require the subproblem accuracy $\epsilon_k = 10^{-3}$, and the stopping criterion is that both the primal residual $\|A_1 x^k + A_2 y^k - b\|_\infty$ and the dual residual $\|A_1^T z^k + c\|_\infty$ are less than $10^{-3}$. All LP instances are generated from the basis pursuit, $\ell_1$ SVM, SICE and NMF problems. The data sources and statistics are included in the supplementary material.

[Figure 1: The duality gap versus the number of iterations for ADMM, LPADMM, and ALCD. From left to right: the BP, NMF, $\ell_1$ SVM, and SICE problems.]

Table 1: Timing results for the BP, SICE, NMF and L1 SVM problems (time in seconds; "long" means > 60 hours)

Data      m         n         nnz(A)      LPADMM time  LPADMM iter  ALCD time  ALCD iter  ADMM time  ADMM iter
bp1       17408     16384     8421376     22           3155         864        14534      long       long
bp2       34816     32768     33619968    79           4657         2846       19036      long       long
bp3       69632     65536     134348800   217          6287         12862      24760      long       long
arcene    50095     30097     1151775     801          15198        1978       176060     21329      2035415
real-sim  176986    135072    7609186     955          4274         1906       18262      19697      249363
sonar     80912     68224     2756832     258          5446         659        13789      3828       151972
colon     217580    161040    8439626     395          216          455        1288       7423       83680
w2a       12048256  12146960  167299110   19630        2525         45388      8492       long       long
news20    2785205   2498375   53625267    7765         2205         9173       6174       long       long

We first compare the convergence rate of the different algorithms on the above problems. We use bp1 for the BP problem, the colon-cancer data set for the NMF problem, news20 for the $\ell_1$ SVM problem, and real-sim for the SICE problem. We set the proximity parameter $\rho = 1$. We adopt the relative duality gap as the comparison metric, defined as $|c^T x^k + b^T z_x^k| / |c^T x^*|$, where $x^*$ is obtained approximately by running our method with a strict stopping condition. In our simulations, one iteration represents $n$ coordinate descent steps for ALCD and LPADMM, and one dual updating step for ADMM. As can be seen in Fig. 1, our new method exhibits a global and linear convergence rate, matching our theoretical bound. Moreover, it converges faster than both the ALCD and the existing ADMM method, especially on the BP and NMF problems. A sensitivity analysis of $\rho$ is given in the Appendix.

We next examine the performance of our algorithm in terms of time efficiency (both wall-clock time and number of iterations). We adopt the dynamic step size rule for ALCD to optimize its performance. Note that, by exchanging the roles of the primal and dual problems in (3), we can obtain dual versions of both ADMM and ALCD, which can be used to tackle primal- or dual-sparse problems. We run both versions and report the minimum time. The stopping criterion requires that the primal and dual residuals and the relative duality gap all be less than $10^{-3}$. The data sets bp1, bp2, bp3 are used for the basis pursuit problem; news20 for the $\ell_1$ SVM problem; arcene and real-sim for the SICE problem; and sonar, colon and w2a for the NMF problem.
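A trivial numpy sketch (our own helper, assuming the splitting (3)) of the stopping quantities used in these experiments:

```python
import numpy as np

def stopping_metrics(x, y, zx, zy, A, b, c, x_star):
    """Primal/dual residuals and relative duality gap used as
    stopping criteria in the experiments."""
    primal_res = max(np.abs(A @ x - b).max(), np.abs(x - y).max())
    dual_res = np.abs(A.T @ zx + zy + c).max()    # A_1^T z + c, stacked
    rel_gap = abs(c @ x + b @ zx) / abs(c @ x_star)
    return primal_res, dual_res, rel_gap
```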
Across all experiments, we observe that our proposed algorithm requires approximately 10%-40% of the iterations and 10%-85% of the time of the ALCD method, and it becomes particularly advantageous for the basis pursuit problem (a 50x speedup) and for ill-posed problems such as SICE and NMF. In particular, for the basis pursuit problem, the primal iterate $x^k$ is updated by the closed-form expression (15), which can be computed in $O(n\log n)$ time by the fast Walsh–Hadamard transform.

7 Conclusions

In this paper, we proposed a new variable splitting method for solving the linear programming problem. The theoretical contribution of this work is a proof that the 2-block ADMM converges globally and linearly when applied to a linear program. The obtained convergence rate has a weak dependence on the problem dimension and improves upon the best known result. Compared with existing LP solvers, our algorithm not only offers the flexibility to exploit the specific structure of the constraint matrix $A$, but can also be naturally combined with existing acceleration techniques to significantly speed up solving large-scale machine learning problems. Future work will focus on generalizing our theoretical framework to establish a global linear convergence rate when applying the ADMM to convex quadratic programs.

Acknowledgments: This work is supported by ONR N00014-17-1-2417, N00014-15-1-2166, NSF CNS-1719371 and ARO W911NF-1-0277.

References

[1] Ben Recht, Christopher Re, Joel Tropp, and Victor Bittorf. Factoring nonnegative matrices with linear programs. In Advances in Neural Information Processing Systems, pages 1214-1222, 2012.
[2] Ji Zhu, Saharon Rosset, Trevor Hastie, and Robert Tibshirani. 1-norm support vector machines. In NIPS, volume 15, pages 49-56, 2003.
[3] Ming Yuan. High dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research, 11(Aug):2261-2286, 2010.
[4] Junfeng Yang and Yin Zhang. Alternating direction algorithms for l1-problems in compressive sensing. SIAM Journal on Scientific Computing, 33(1):250-278, 2011.
[5] Ofer Meshi and Amir Globerson. An alternating direction method for dual MAP LP relaxation. Machine Learning and Knowledge Discovery in Databases, pages 470-483, 2011.
[6] Vânia Lúcia Dos Santos Eleutério. Finding approximate solutions for large scale linear programs. PhD thesis, ETH Zurich, 2009.
[7] Roland Glowinski and A. Marroco. Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Revue française d'automatique, informatique, recherche opérationnelle. Analyse numérique, 9(2):41-76, 1975.
[8] Jonathan Eckstein, Dimitri P. Bertsekas, et al. An alternating direction method for linear programming. 1990.
[9] Ian En-Hsu Yen, Kai Zhong, Cho-Jui Hsieh, Pradeep K. Ravikumar, and Inderjit S. Dhillon. Sparse linear programming via primal and dual augmented coordinate descent. In Advances in Neural Information Processing Systems, pages 2368-2376, 2015.
[10] Jonathan Eckstein and Dimitri P. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming, 55(1):293-318, 1992.
[11] Wotao Yin. Analysis and generalizations of the linearized Bregman method. SIAM Journal on Imaging Sciences, 3(4):856-877, 2010.
[12] O. Güler. Augmented Lagrangian algorithms for linear programming. Journal of Optimization Theory and Applications, 75(3):445-470, 1992.
[13] Daniel Boley.
Local linear convergence of the alternating direction method of multipliers on quadratic or linear programs. SIAM Journal on Optimization, 23(4):2183-2207, 2013.
[14] Qihang Lin, Zhaosong Lu, and Lin Xiao. An accelerated proximal coordinate gradient method. In Advances in Neural Information Processing Systems, pages 3059-3067, 2014.
[15] Robert Nishihara, Laurent Lessard, Benjamin Recht, Andrew Packard, and Michael I. Jordan. A general analysis of the convergence of ADMM. In ICML, pages 343-352, 2015.
[16] Tianyi Lin, Shiqian Ma, and Shuzhong Zhang. On the global linear convergence of the ADMM with multiblock variables. SIAM Journal on Optimization, 25(3):1478-1497, 2015.
[17] Wei Deng and Wotao Yin. On the global and linear convergence of the generalized alternating direction method of multipliers. Journal of Scientific Computing, 66(3):889-916, 2016.
[18] Mingyi Hong and Zhi-Quan Luo. On the linear convergence of the alternating direction method of multipliers. Mathematical Programming, pages 1-35, 2012.
[19] Alan J. Hoffman. On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 49(4), 1952.
[20] Wu Li. Sharp Lipschitz constants for basic optimal solutions and basic feasible solutions of linear programs. SIAM Journal on Control and Optimization, 32(1):140-153, 1994.
[21] Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, and Yang Yuan. Even faster accelerated coordinate descent using non-uniform sampling. In Proceedings of the 33rd International Conference on Machine Learning, pages 1110-1119, 2016.
[22] Yin Tat Lee and Aaron Sidford. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on, pages 147-156. IEEE, 2013.
[23] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1-38, 2014.
Regret Analysis for Continuous Dueling Bandit

Wataru Kumagai
Center for Advanced Intelligence Project, RIKEN
1-4-1 Nihonbashi, Chuo, Tokyo 103-0027, Japan
kumagai.wataru@riken.jp

Abstract

The dueling bandit is a learning framework in which the feedback available during learning is restricted to noisy comparisons between pairs of actions. In this paper, we address a dueling bandit problem based on a cost function over a continuous space. We propose a stochastic mirror descent algorithm and show that the algorithm achieves an $O(\sqrt{T\log T})$-regret bound under strong convexity and smoothness assumptions on the cost function. Subsequently, we clarify the equivalence between regret minimization in dueling bandit and convex optimization of the cost function. Moreover, when compared against a lower bound in convex optimization, our algorithm is shown to achieve the optimal convergence rate in convex optimization and the optimal regret in dueling bandit, up to a logarithmic factor.

1 Introduction

Information systems and computer algorithms often have many parameters that must be tuned. When cost or utility is explicitly given as numerical values or concrete functions, the system parameters can be determined appropriately from those values or functions. However, in a human-computer interaction system, it is difficult or impossible for users of the system to provide their preferences as numerical values or concrete functions. The dueling bandit was introduced in Yue and Joachims (2009) to model such situations, and it enables us to tune the parameters based only on comparison results between two parameter settings provided by users. In the learning process of a dueling bandit algorithm, the algorithm chooses a pair of parameters, called actions (or arms), and receives only the corresponding comparison result. Since dueling bandit algorithms do not require an individual evaluation value for each action, they can be applied in wider areas that cannot be formulated using the conventional bandit approach. When an action cost (or user utility) implicitly exists, the comparison between two actions is modeled via a cost (or utility) function, which represents the degree of the cost (or utility), and a link function, which determines the noise in the comparison results. We refer to such a modeling method as the cost-based (or utility-based) approach and employ it in this research. Yue and Joachims (2009) first introduced the utility-based approach as a model for a dueling bandit problem. The cost-based dueling bandit relates to function optimization with noisy comparisons (Jamieson et al., 2012; Matsui et al., 2016) because in both frameworks an oracle compares two actions and the feedback from the oracle is binary. In particular, the same algorithm can be applied in both frameworks. However, since different performance measures are applied in function optimization and in dueling bandit, it has not been shown that an algorithm that works efficiently in one framework also performs well in the other. This study clarifies the relation between function optimization and dueling bandit through their regret analysis.
1.1 Problem Setup

In the learning process of the dueling bandit problem, a learner presents two points, called actions, in a space $\mathcal{A}$ to an oracle, and the oracle returns one-bit feedback to the learner based on which action wins (i.e., which action is more preferable to the oracle). We denote by $a \succ a'$ the event that $a$ wins over $a'$, and by $P(a \succ a')$ the probability that $a \succ a'$ happens. In other words, we assume that the feedback from the oracle follows the two-valued random variable

$$F(a, a') := \begin{cases} 1 & \text{w.p. } P(a \succ a') \\ 0 & \text{w.p. } 1 - P(a \succ a'), \end{cases} \tag{1}$$

where the probability $P(a \succ a')$ is determined by the oracle. We refer to this type of feedback as noisy comparison feedback. Unlike conventional bandit problems, the learner has to make decisions based only on the noisy comparison feedback and cannot access the individual values of the cost (or utility) function. We further assume that each comparison between a pair of actions is independent of the other comparisons. The learner makes a sequence of decisions based on the noisy comparisons provided by the oracle. After receiving $F(a_t, a'_t)$ at time $t$, the learner chooses the next two actions $(a_{t+1}, a'_{t+1})$. As a performance measure for an action $a$, we introduce the minimum win probability

$$P^*(a) = \inf_{a'\in\mathcal{A}} P(a \succ a').$$

We then quantify the performance of the algorithm using the expected regret:1)

$$\mathrm{Reg}_T^{DB} = \sup_{a\in\mathcal{A}} \mathbb{E}\left[\sum_{t=1}^T \big\{(P^*(a) - P^*(a_t)) + (P^*(a) - P^*(a'_t))\big\}\right]. \tag{2}$$

1) Although the regret in (2) appears superficially different from that in Yue and Joachims (2009), the two regrets can be shown to coincide under Assumptions 1-3 of Subsection 1.2.

1.2 Modeling Assumptions

In this section, we clarify some of the notation and assumptions. Let the action space $\mathcal{A} \subset \mathbb{R}^d$ be a compact convex set with non-empty interior. We denote the Euclidean norm by $\|\cdot\|$.

Assumption 1. There exist functions $f: \mathcal{A} \to \mathbb{R}$ and $\sigma: \mathbb{R} \to [0,1]$ such that the probability in the noisy comparison feedback can be represented as

$$P(a \succ a') = \sigma(f(a') - f(a)). \tag{3}$$

In the following, we call $f$ in Assumption 1 a cost function and $\sigma$ a link function. Here, the cost function and the link function are fixed across queries to the oracle. In this sense, our setting differs from online optimization, where the objective function changes.

Definition 1. (Strong Convexity) A function $f: \mathbb{R}^d \to \mathbb{R}$ is $\alpha$-strongly convex over the set $\mathcal{A} \subset \mathbb{R}^d$ if for all $x, y \in \mathcal{A}$ it holds that

$$f(y) \ge f(x) + \nabla f(x)^T(y - x) + \frac{\alpha}{2}\|y - x\|^2.$$

Definition 2. (Smoothness) A function $f: \mathbb{R}^d \to \mathbb{R}$ is $\beta$-smooth over the set $\mathcal{A} \subset \mathbb{R}^d$ if for all $x, y \in \mathcal{A}$ it holds that

$$f(y) \le f(x) + \nabla f(x)^T(y - x) + \frac{\beta}{2}\|y - x\|^2.$$

Assumption 2. The cost function $f: \mathcal{A} \to \mathbb{R}$ is twice continuously differentiable, $L$-Lipschitz, $\alpha$-strongly convex and $\beta$-smooth with respect to the Euclidean norm.

By Assumption 2, there exists a unique minimizer $a^*$ of the cost function $f$, since $f$ is strictly convex. We set $B := \sup_{a,a'\in\mathcal{A}} f(a') - f(a)$.

Assumption 3. The link function $\sigma: \mathbb{R} \to [0,1]$ is three times differentiable and rotation-symmetric (i.e., $\sigma(-x) = 1 - \sigma(x)$). Its first derivative is positive and monotonically non-increasing on $[0, B]$.
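A minimal Python sketch (ours) of the feedback model (1) under the cost-based parametrization (3), instantiated with a logistic link, which is one admissible choice of $\sigma$; the helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_link(x):
    """An admissible link function: sigma(-x) = 1 - sigma(x)."""
    return 1.0 / (1.0 + np.exp(-x))

def comparison_oracle(f, a, a_prime, link=logistic_link):
    """Noisy comparison feedback F(a, a'): returns 1 iff a wins,
    which happens with probability sigma(f(a') - f(a))."""
    return int(rng.random() < link(f(a_prime) - f(a)))
```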
For example, the standard logistic distribution function, the cumulative standard Gaussian distribution function, and the linear function $\sigma(x) = (1+x)/2$ are link functions that satisfy Assumption 3. We note that link functions often behave like cumulative probability distribution functions. This is because the sign of the difference between two noisy function values can be regarded as feedback of the form (1) satisfying Assumption 1, in which case the link function $\sigma$ coincides with the cumulative distribution function of the noise (see Section 2 of Jamieson et al. (2012) for more details). We discuss the relation of noisy comparison feedback to noisy function values in Section 5.

1.3 Related Work and Our Contributions

Dueling bandit on a continuous action space relates to various optimization methods. We summarize related studies in the following.

Dueling bandit problem: Yue and Joachims (2009) formulated information retrieval systems as a dueling bandit problem. They reduced this to a problem of optimizing an "almost"-concave function and presented a stochastic gradient ascent algorithm based on one-point bandit feedback. They showed that their algorithm achieves an $O(T^{3/4})$-regret bound under differentiability and strict concavity of the utility function. Ailon et al. (2014) presented reductions from dueling bandit to the conventional bandit under the strong restriction that the link function is linear, and showed that their algorithm achieves an $O(\sqrt{T\log^3 T})$-regret bound. We note that dueling bandit has a number of other formulations (Yue and Joachims, 2011; Yue et al., 2012; Busa-Fekete et al., 2013, 2014; Urvoy et al., 2013; Zoghi et al., 2014; Jamieson et al., 2015).

Optimization with one-point bandit feedback: In conventional bandit settings, various convex optimization methods have been studied. Flaxman et al. (2005) showed that the gradient of a smoothed version of a convex function can be estimated from one-point bandit feedback and proposed a stochastic gradient descent algorithm that achieves an $O(T^{3/4})$-regret bound under a Lipschitzness condition. Moreover, assuming strong convexity and smoothness of the convex function, Hazan and Levy (2014) proposed a stochastic mirror descent algorithm that achieves an $O(\sqrt{T\log T})$-regret bound and showed the algorithm to be near optimal, because the upper bound matches the $\Omega(\sqrt{T})$ lower bound derived by Shamir (2013) for bandit convex optimization up to a logarithmic factor.

Optimization with two-point bandit feedback: Dueling bandit algorithms require two actions at each round, in common with two-point bandit optimization. In the context of online optimization, Agarwal et al. (2010) first considered convex optimization with two-point feedback. They proposed a gradient descent-based algorithm and showed that it achieves regret bounds of $O(\sqrt{T})$ under a Lipschitzness condition and $O(\log T)$ under a strong convexity condition. In stochastic convex optimization, Duchi et al. (2015) showed that a stochastic mirror descent algorithm achieves an $O(\sqrt{T})$-regret bound under a Lipschitzness (or smoothness) condition and proved the upper bound to be optimal by deriving a matching lower bound $\Omega(\sqrt{T})$. Moreover, in both online and stochastic convex optimization, Shamir (2017) showed that a gradient descent-based algorithm achieves an $O(\sqrt{T})$-regret bound with optimal dependence on the dimension under a Lipschitzness condition. However, these two-point bandit algorithms strongly depend on the availability of the difference of function values and cannot be applied directly to dueling bandit, where the difference of function values is compressed to one bit in the noisy comparison feedback.

Optimization with noisy comparison feedback: The cost-based dueling bandit relates to function optimization with noisy comparisons (Jamieson et al., 2012; Matsui et al., 2016) because in both frameworks the feedback consists of preference information. Jamieson et al. (2012) proposed a coordinate descent algorithm and proved that its convergence rate is of optimal order.2) Matsui et al. (2016) proposed a Newton method-based algorithm and proved that its convergence rate is almost equivalent to that of Jamieson et al. (2012). They further showed that their algorithm is easily parallelized and performs better numerically than the dueling bandit algorithm of Yue and Joachims (2009). However, since they considered only the unconstrained case $\mathcal{A} = \mathbb{R}^d$, their algorithms cannot be applied in the setting considered here, in which the action space is compact.

2) The optimal order changes depending on the model parameter $\kappa \ge 1$ of the pairwise comparison oracle in Jamieson et al. (2012).
Optimization with noisy comparison feedback: The cost-based dueling bandit relates to function optimization with noisy comparisons (Jamieson et al., 2012; Matsui et al., 2016) because in both frameworks, the feedback is represented by preference information. Jamieson et al. (2012) proposed a coordinate descent algorithm and proved that the convergence rate of the algorithm achieved an optimal order.2) Matsui et al. (2016) proposed a Newton method-based algorithm and proved that its convergence rate was almost equivalent to that of Jamieson et al. (2012). They further showed that their algorithm could easily be parallelized and performed better numerically than the dueling bandit algorithm in Yue and Joachims (2009). However, since they considered only the unconstrained case in which A = Rd , it is not possible to apply their algorithm to the setting considered here, in which the action space is compact. The optimal order changes depending on the model parameter ? ? 1 of the pairwise comparison oracle in Jamieson et al. (2012). 2) 3 Optimization with one-bit feedback: The optimization method of the dueling bandit algorithm is based on one-bit feedback. In related work, Zhang et al. (2016) considered stochastic optimization under one-bit feedback. However, since their approach was restricted to the problem of linear optimization with feedback generated by the logit model, it cannot be applied to the problem addressed in the current study. Our contributions: In this paper, we consider the cost-based dueling bandit under Assumptions 1-3. While the formulation is similar to that of Yue and Joachims (2009), Assumptions 2 and 3 are stronger than those used in that study. On the other hand, we impose the weaker assumption on the link function than that of Ailon et al. (2014). Yue and Joachims (2009) showed that a stochastic gradient descent algorithm can be applied to dueling bandit. Thus, it is naturally expected that a stochastic mirror descent algorithm, which achieves the (near) optimal order in convex optimization with one/two-point bandit feedback, can be applied to dueling bandit setting and achieves good performance. Following this intuition, we propose a mirror descent-based algorithm. Our key contributions can be summarized as follows: ? We propose a stochastic mirror descent algorithm with noisy comparison feedback. ? ? We provide an O( T log T )-regret bound for our algorithm in dueling bandit. ? We clarify the relation between the cost-based dueling bandit and convex optimization in terms of their regrets and show that our algorithm can be applied to convex optimization. ? ? We show that the convergence rate of our algorithm is O( log T /T ) in convex optimization. ? We derive a lower bound in convex optimization with noisy comparison feedback and show our algorithm to be near optimal in both dueling bandit and convex optimization. 2 Algorithm and Main Result 2.1 Stochastic Mirror Descent Algorithm We first prepare the notion of a self-concordant function on which our algorithm is constructed (see e.g., Nesterov et al. (1994), Appendix F in Griva et al. (2009)). Definition 3. A function R : int(A) ? R is considered self-concordant if the following two conditions hold: 1. R is three times continuously differentiable and convex, and approaches infinity along any sequence of points approaching the boundary of int(A). 2. For every h ? Rd and x ? int(A), |?3 R(x)[h, h, h]| ? 2(h? ?2 R(x)h) 2 holds, where 3 (x + t h + t h + t h) . 
?3 R(x)[h, h, h] := ?t1??tR 1 2 3 ?t t1 =t2 =t3 =0 2 3 3 In addition to these two conditions, if R satisfies the following condition for a positive real number ?, it is called a ?-self-concordant function: 3. For every h ? Rd and x ? int(A), |?R(x)? h| ? ? 2 (h? ?2 R(x)h) 2 . 1 1 In this paper, we assume the Hessian ?2 R(a) of a ?-self-concordant function to be full-rank over A and ?R(int(A)) = Rd . Bubeck and Eldan (2014) showed that such a ?-self-concordant function satisfying ? = (1 + o(1))d will always exist as long as the dimension d is sufficiently large. We next propose Algorithm 1, which we call NC-SMD. This can be regarded as stochastic mirror descent with noisy comparison feedback. We make three remarks on Algorithm 1. First, the function Rt is self-concordant though not ?self-concordant. The second remark is as follows. Let us denote the local norms by ?a?w = ? a? ?2 R(w)a. Then, if R is a self-concordant function for A, the Dikin Ellipsoid {a? ? A| ?a? ? a?a ? 1} centered at a is entirely contained in int(A) for any a ? int(A). Thus, a?t := at + 1 ?2 Rt (at )? 2 ut in Algorithm 1 is contained in int(A) for any at ? int(A) and a unit vector ut . This shows a comparison between actions at and a?t to be feasible. Our third remark is as follows. Since the self-concordant function Rt at round t depends on the past actions {ai }ti=1 , it may be thought that those past actions are stored during the learning process. However, note that only ?Rt 4 Algorithm 1 Noisy Comparison-based Stochastic Mirror Descent (NC-SMD) Input: Learning rate ?, ?-self-concordant function R, time horizon T , tuning parameters ?, ? Initialize: a1 = argmina?A R(a). for t = 1 to T do ?t 2 2 Update Rt (a) = R(a) + ?? i=1 ?a ? ai ? + ??a? 2 Pick a unit vector ut uniformly at random 1 Compare at and a?t := at + ?2 Rt (at )? 2 ut and receive F (a?t , at ) 1 Set g?t = F (a?t , at )d?2 Rt (at ) 2 ut Set at+1 = ?R?1 gt ) t (?Rt (at ) ? ?? end for Output: aT +1 ?t and ?2 Rt are used in the algorithm; ?Rt depends only on i=1 at and ?2 Rt does not depend on the past actions. Thus, only the sum of past actions must be stored, rather than all past actions. 2.2 Main Result: Regret Bound From Assumption 2 and the compactness of A, the diameter R and B := supa,a? ?A f (a? ) ? f (a) are finite. From Assumption 3, there are exist positive constants l0 , L0 , B2 and L2 such that the first derivative ? ? of the link function is bounded as l0 ? ? ? ? L0 on [?B, B] and the second derivative ? ?? is bounded above by B2 and L2 -Lipschitz on [?B, B]. We use the constants below. The following theorem shows that with appropriate parameters, NC-SMD (Algorithms 1) achieves ? an O( T log T )-regret bound. 2 0? Theorem 4. We set C := ? + B2 L +(L+1)L . When the tuning parameters satisfy ? ? l0 ?/2, 2? ( 3 )2 ? ? L0 L2 /? and the total number T of rounds satisfies T ? C log T . Algorithm 1 with a ?? C log T 1 self-concordant function and the learning parameter ? = 2d achieves the following regret T bound under Assumptions 1-3: ? RegTDB ? 4d CT log T + 2LL0 R. (4) 3 Regret Analysis We prove Theorem 4 in this section. The proofs of lemmas in this section are provided in supplementary material. 3.1 Reduction to Locally-Convex Optimization We first reduce the dueling bandit problem to a locally-convex optimization problem. We define Pb (a) := ?(f (a) ? f (b)) for a, b ? A and Pt (a) := Pat (a). For a cost function f and a selfconcordant function R, we set a? := argmina?A f (a), a1 := argmina?A R(a) and a?T := T1 a1 + (1 ? T1 )a? . 
3 Regret Analysis

We prove Theorem 4 in this section. The proofs of the lemmas in this section are provided in the supplementary material.

3.1 Reduction to Locally-Convex Optimization

We first reduce the dueling bandit problem to a locally-convex optimization problem. We define $P_b(a) := \sigma(f(a) - f(b))$ for $a, b \in \mathcal{A}$ and $P_t(a) := P_{a_t}(a)$. For a cost function $f$ and a self-concordant function $R$, we set $a^* := \mathrm{argmin}_{a \in \mathcal{A}} f(a)$, $a_1 := \mathrm{argmin}_{a \in \mathcal{A}} R(a)$ and $a'_T := \frac{1}{T} a_1 + (1 - \frac{1}{T}) a^*$.

The regret of the dueling bandit is bounded as follows.

Lemma 5. The regret of Algorithm 1 is bounded as follows:
$$\mathrm{Reg}_T^{\mathrm{DB}} \le 2\,\mathbb{E}\!\left[\sum_{t=1}^{T} \big(P_t(a_t) - P_t(a'_T)\big)\right] + \frac{L L_0 R}{\lambda \alpha} \log T + 2 L L_0 R. \qquad (5)$$

The following lemma shows that $P_b$ inherits the smoothness of $f$ globally.

Lemma 6. The function $P_b$ is $(L_0 \beta + B_2 L^2)$-smooth for an arbitrary $b \in \mathcal{A}$.

Let $\mathbb{B}$ be the unit Euclidean ball, $B(a, \rho)$ the ball centered at $a$ with radius $\rho$, and $L(a, b)$ the line segment between $a$ and $b$. In addition, for $a, b \in \mathcal{A}$, let $\mathcal{A}_\rho(a, b) := \bigcup_{a' \in L(a,b)} B(a', \rho) \cap \mathcal{A}$. The following lemma guarantees the local strong convexity of $P_b$.

Lemma 7. The function $P_b$ is $\frac{1}{2} l_0 \alpha$-strongly convex on $\mathcal{A}_\rho(a^*, b)$ when $\rho \le \frac{l_0 \alpha}{2 L_0^3 L_2}$.

3.2 Gradient Estimation

We note that $a_t + \nabla^2 R_t(a_t)^{-1/2} x$ for $x \in \mathbb{B}$ is included in $\mathcal{A}$ due to the properties of the Dikin ellipsoid. We introduce the smoothed version of $P_t$ over $\mathrm{int}(\mathcal{A})$:
$$\tilde{P}_t(a) := \mathbb{E}_{x \sim \mathbb{B}}\!\left[P_t\big(a + \nabla^2 R_t(a_t)^{-1/2} x\big)\right] = \mathbb{E}_{x \sim \mathbb{B}}\!\left[\sigma\big(f(a + \nabla^2 R_t(a_t)^{-1/2} x) - f(a_t)\big)\right]. \qquad (6)$$
Next, we adopt the following estimator for the gradient of $\tilde{P}_t$:
$$\hat{g}_t := F\big(a_t + \nabla^2 R_t(a_t)^{-1/2} u_t,\; a_t\big)\, d\, \nabla^2 R_t(a_t)^{1/2} u_t, \qquad (7)$$
where $u_t$ is drawn uniformly from the unit sphere $\mathbb{S}$. We then derive the unbiasedness of $\hat{g}_t$ as follows.

Lemma 8. $\mathbb{E}[\hat{g}_t \mid a_t] = \nabla \tilde{P}_t(a_t)$.

3.3 Regret Bound with Bregman Divergence

From Lemma 5, the regret analysis of the dueling bandit is reduced to the minimization problem of the regret-like value of $P_t$. Since $P_t$ is globally smooth and locally strongly convex by Lemmas 6 and 7, we can employ convex-optimization methods. Moreover, since $\hat{g}_t$ is an unbiased estimator of the gradient of the smoothed version of $P_t$ by Lemma 8, stochastic mirror descent (Algorithm 1) with $\hat{g}_t$ can be expected to be effective for this minimization problem. In the following, making use of the properties of stochastic mirror descent, we bound the regret-like value of $P_t$ by a Bregman divergence, and subsequently prove Theorem 4.

Definition 9. Let $R$ be a continuously differentiable, strictly convex function on $\mathrm{int}(\mathcal{A})$. Then the Bregman divergence associated with $R$ is defined by $D_R(a, b) = R(a) - R(b) - \nabla R(b)^\top (a - b)$.

Lemma 10. When $\lambda \le l_0 \alpha / 2$ and $\mu \ge (L_0^3 L_2 / \alpha)^2$, the regret of Algorithm 1 is bounded for any $a \in \mathrm{int}(\mathcal{A})$ as follows:
$$\mathbb{E}\!\left[\sum_{t=1}^{T} \big(P_t(a_t) - P_t(a)\big)\right] \le \frac{1}{\eta}\left( R(a) - R(a_1) + \mathbb{E}\!\left[\sum_{t=1}^{T} D_{R_t^*}\big(\nabla R_t(a_t) - \eta \hat{g}_t,\, \nabla R_t(a_t)\big)\right] \right) + \frac{L_0 \beta + B_2 L^2}{\lambda \alpha} \log T, \qquad (8)$$
where $R_t^*(a) := \sup_{x \in \mathbb{R}^d} \langle x, a\rangle - R_t(x)$ is the Fenchel dual of $R_t$.

The Bregman divergence in Lemma 10 is bounded as follows.

Lemma 11. When $\eta \le \frac{1}{2d}$, the sequence $a_t$ output by Algorithm 1 satisfies
$$D_{R_t^*}\big(\nabla R_t(a_t) - \eta \hat{g}_t,\, \nabla R_t(a_t)\big) \le 4 d^2 \eta^2. \qquad (9)$$

[Proof of Theorem 4] From Lemma 4 of Hazan and Levy (2014), the $\nu$-self-concordant function $R$ satisfies
$$R(a'_T) - R(a_1) \le \nu \log \frac{1}{1 - \pi_{a_1}(a'_T)},$$
where $\pi_a(a') := \inf\{r \ge 0 \mid a + r^{-1}(a' - a) \in \mathcal{A}\}$ is the Minkowski function. Since $\pi_{a_1}(a'_T) \le 1 - T^{-1}$ by the definition of $a'_T$, we obtain $R(a'_T) - R(a_1) \le \nu \log T$. Note that the condition $\eta \le \frac{1}{2d}$ in Lemma 11 is satisfied due to $T \ge C \log T$. Combining Lemmas 5, 10 and 11, we have
$$\mathrm{Reg}_T^{\mathrm{DB}} \le \frac{2}{\eta}\big(\nu \log T + 4 d^2 \eta^2 T\big) + \frac{2(L_0 \beta + B_2 L^2) + L L_0 R}{\lambda \alpha} \log T + 2 L L_0 R \le 2\left(\nu + \frac{B_2 L^2 + (L+1)L_0}{\lambda \alpha}\right) \frac{\log T}{\eta} + 8 d^2 \eta T + 2 L L_0 R.$$
Thus, when $\eta$ is set as in Theorem 4, the regret bound (4) is obtained.
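The mechanism behind Lemma 8 and the estimator (7) is an anisotropic version of the classical sphere-sampling identity of Flaxman et al. (2005): for a smooth $\varphi$ and $\delta > 0$, $\nabla_a \mathbb{E}_{x \sim \mathbb{B}}[\varphi(a + \delta x)] = \frac{d}{\delta}\, \mathbb{E}_{u \sim \mathbb{S}}[\varphi(a + \delta u)\, u]$; in (6)-(7) the isotropic $\delta I$ is replaced by $\nabla^2 R_t(a_t)^{-1/2}$. A quick Monte Carlo check of the isotropic identity; the test function and all constants below are arbitrary choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta = 3, 0.1
a = np.array([0.2, -0.1, 0.3])
phi = lambda Z: np.sin(Z[..., 0]) + Z[..., 1] ** 2 * Z[..., 2]  # arbitrary smooth test function

def sphere(n):                      # uniform samples on the unit sphere
    u = rng.standard_normal((n, d))
    return u / np.linalg.norm(u, axis=1, keepdims=True)

def ball(n):                        # uniform samples in the unit ball
    return sphere(n) * rng.random((n, 1)) ** (1.0 / d)

n = 1_000_000
u = sphere(n)
lhs = (d / delta) * np.mean(phi(a + delta * u)[:, None] * u, axis=0)

x, eps = ball(n), 1e-4              # finite differences of the ball-smoothed function
smoothed = lambda c: np.mean(phi(c + delta * x))
rhs = np.array([(smoothed(a + eps * e) - smoothed(a - eps * e)) / (2 * eps)
                for e in np.eye(d)])
print(lhs)   # the two vectors agree up to Monte Carlo error
print(rhs)
```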
4 Convergence Rate in Convex Optimization

In the previous sections, we considered the minimization problem for the regret of the dueling bandit. In this section, as an application of this approach, we show that the averaged action of NC-SMD (Algorithm 1) minimizes the cost function $f$ in (3). To derive the convergence rate of our algorithm, we introduce the regret of function optimization and establish a connection between the regrets of the dueling bandit and of function optimization. In convex optimization with noisy comparison feedback, the learner chooses a pair $(a_t, \tilde{a}_t)$ of actions in the learning process and suffers a loss $f(a_t) + f(\tilde{a}_t)$. The regret of an algorithm in function optimization is then defined as follows:
$$\mathrm{Reg}_T^{\mathrm{FO}} := \sup_{a \in \mathcal{A}} \mathbb{E}\!\left[\sum_{t=1}^{T} \big(f(a_t) - f(a)\big) + \big(f(\tilde{a}_t) - f(a)\big)\right] = \mathbb{E}\!\left[\sum_{t=1}^{T} \big(f(a_t) - f(a^*)\big) + \big(f(\tilde{a}_t) - f(a^*)\big)\right], \qquad (10)$$
where $a^* = \mathrm{argmin}_{a \in \mathcal{A}} f$. Recalling that the positive constants $l_0$ and $L_0$ satisfy $l_0 \le \sigma' \le L_0$ on $[-B, B]$, where $B := \sup_{a, a' \in \mathcal{A}} f(a') - f(a)$, the regrets of function optimization (10) and of the dueling bandit (2) are related as follows:

Lemma 12.
$$\frac{\mathrm{Reg}_T^{\mathrm{DB}}}{L_0} \le \mathrm{Reg}_T^{\mathrm{FO}} \le \frac{\mathrm{Reg}_T^{\mathrm{DB}}}{l_0}. \qquad (11)$$

Theorem 4 and Lemma 12 give an $O(\sqrt{T \log T})$ upper bound on the regret of function optimization for Algorithm 1 under the same conditions as Theorem 4. Given the convexity of $f$, the average of the chosen actions of any dueling bandit algorithm, $\bar{a}_T := \frac{1}{2T}\sum_{t=1}^{T}(a_t + \tilde{a}_t)$, satisfies, by Jensen's inequality,
$$\mathbb{E}[f(\bar{a}_T) - f(a^*)] \le \frac{\mathrm{Reg}_T^{\mathrm{FO}}}{2T}. \qquad (12)$$
Thus, if an optimization algorithm has a sub-linear regret bound, the above online-to-batch conversion guarantees convergence to the optimal point.

Theorem 13. Under Assumptions 1-3, the averaged action $\bar{a}_T$ satisfies the following when $T \ge C \log T$:
$$\mathbb{E}[f(\bar{a}_T) - f(a^*)] \le \frac{2d}{l_0}\sqrt{\frac{C \log T}{T}} + \frac{L L_0 R}{l_0\, T},$$
where $C$ is the constant defined in Theorem 4.

Theorem 13 shows the convergence rate of NC-SMD (Algorithm 1) to be $O(d\sqrt{\log T / T})$.

5 Lower Bound

We next derive a lower bound for convex optimization with noisy comparison feedback. To do so, we employ a lower bound for convex optimization with noisy function feedback. In the setting where the function feedback is noisy, we query a point $a \in \mathcal{A}$ and obtain a noisy function value $f(a) + \xi$, where $\xi$ is a zero-mean random variable with a finite second moment, independent for each query.³

Theorem 14. Assume that the action space $\mathcal{A}$ is the $d$-dimensional unit Euclidean ball $\mathbb{B}_d$ and that the link function $\sigma_G$ is the cumulative distribution function of the zero-mean Gaussian random variable with variance 2. Let the number of rounds $T$ be fixed. Then, for any algorithm with noisy comparison feedback, there exists a function $f$ over $\mathbb{B}_d$ which is twice continuously differentiable, 0.5-strongly convex and 3.5-smooth, such that the output $a_T$ of the algorithm satisfies
$$\mathbb{E}[f(a_T) - f(a^*)] \ge 0.004 \min\left\{1, \frac{d}{\sqrt{2T}}\right\}. \qquad (13)$$

³ In general, the noise $\xi$ can depend on the action $a$. See, e.g., Shamir (2013) for more details.

[Proof] The probability distribution of the noisy comparison feedback $F(a, a')$ with the link function $\sigma_G$ can be realized by noisy function feedback with standard Gaussian noise as follows. Two noisy function values $f(a) + \xi$ and $f(a') + \xi'$ can be obtained by using the noisy function feedback twice, where $\xi$ and $\xi'$ are independent standard Gaussian random variables. Then, the probability distribution of the following random variable coincides with that of $F(a, a')$ for arbitrary $a, a' \in \mathcal{A}$:
$$\mathrm{sign}\big(f(a) + \xi - (f(a') + \xi')\big) = \mathrm{sign}\big(f(a) - f(a') + (\xi - \xi')\big). \qquad (14)$$
Here, note that $\xi - \xi'$ is a zero-mean Gaussian random variable with variance 2.
Thus, a single noisy comparison with the link function $\sigma_G$ between any pair of actions can be obtained by using noisy function feedback with standard Gaussian noise twice. This means that if no algorithm with $2T$ rounds of noisy function feedback can achieve a certain performance, then no algorithm with $T$ rounds of noisy comparison feedback can achieve that performance either. Thus, to derive Theorem 14, it is sufficient to show a lower bound on the convergence rate with noisy function feedback. The following lower bound is derived from Theorem 7 of Shamir (2013).

Theorem 15 (Shamir, 2013). Let the number of rounds $T$ be fixed. Suppose that the noise $\xi$ at each round is a standard Gaussian random variable. Then, for any algorithm with noisy function feedback, there exists a function $f$ over $\mathbb{B}_d$ which is twice continuously differentiable, 0.5-strongly convex and 3.5-smooth, such that the output $a_T$ satisfies
$$\mathbb{E}[f(a_T) - f(a^*)] \ge 0.004 \min\left\{1, \frac{d}{\sqrt{T}}\right\}.$$

By the above discussion and Theorem 15, we obtain Theorem 14.

Combining Theorem 13 and Theorem 14, the convergence rate of NC-SMD (Algorithm 1) is near optimal with respect to the number of rounds $T$. In addition, when the parameter $\nu$ of the self-concordant function is of constant order with respect to the dimension $d$ of the space $\mathcal{A}$, the convergence rate of NC-SMD is optimal with respect to $d$ as well. However, it should be noted that the parameter $\nu$ of a self-concordant function is often of the order of $\Theta(d)$ for compact convex sets, including the simplex and the hypercube.

As a consequence of Lemma 12, (12), and Theorems 4 and 14, the optimal regrets of the dueling bandit and of function optimization are of the order $\sqrt{T}$ up to logarithmic factors, and NC-SMD achieves this order. To the best of our knowledge, this is the first algorithm with the optimal order in the continuous dueling bandit setting with a non-linear link function.

Finally, we provide an interesting observation on convex optimization. When noisy function feedback is available, the optimal regret of function optimization is of the order $\Theta(\sqrt{T})$ under strong convexity and smoothness conditions (Shamir, 2013). However, even when the noisy function feedback is "compressed" into one-bit information as in (14), our results show that NC-SMD (Algorithm 1) achieves almost the same order, $O(\sqrt{T \log T})$, for the regret of function optimization, as long as the cumulative probability distribution of the noise satisfies Assumption 3.⁴
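The "compression" in (14) is easy to check numerically. The sketch below simulates one query to the comparison oracle with link $\sigma_G$ using two noisy function evaluations, and compares it with direct sampling from the oracle. The cost $f$ and the query points are hypothetical, and the convention that $\mathrm{sign} = +1$ corresponds to $F(a, a') = +1$ is an assumption:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
f = lambda a: 0.5 * (a @ a)                        # hypothetical cost
sigma_G = lambda s: norm.cdf(s, scale=np.sqrt(2))  # CDF of N(0, 2), as in Theorem 14

def via_function_feedback(a, b):
    """Two noisy evaluations with standard Gaussian noise, then Eq. (14)."""
    return np.sign((f(a) + rng.standard_normal()) - (f(b) + rng.standard_normal()))

def via_comparison_oracle(a, b):
    """Direct draw: +1 with probability sigma_G(f(a) - f(b))."""
    return 1.0 if rng.random() < sigma_G(f(a) - f(b)) else -1.0

a, b, n = np.array([0.3, 0.1]), np.array([-0.2, 0.4]), 200_000
m1 = np.mean([via_function_feedback(a, b) for _ in range(n)])
m2 = np.mean([via_comparison_oracle(a, b) for _ in range(n)])
print(m1, m2)  # both estimate 2*sigma_G(f(a) - f(b)) - 1, so the distributions coincide
```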
6 Conclusion

We considered a dueling bandit problem over a continuous action space and proposed a stochastic mirror descent algorithm. By introducing Assumptions 1-3, we proved that our algorithm achieves an $O(\sqrt{T \log T})$-regret bound. We further considered convex optimization under noisy comparison feedback and showed that the regrets of the dueling bandit and of function optimization are essentially equivalent. Using the connection between the two regrets, we showed that our algorithm achieves a convergence rate of $O(\sqrt{\log T / T})$ in the framework of function optimization with noisy comparison feedback. Moreover, we derived a lower bound on the convergence rate in convex optimization and showed that our algorithm achieves near optimal performance in both the dueling bandit and convex optimization.

Some open questions remain. While we have dealt only with bounds that hold in expectation, the derivation of a high-probability bound is still unsolved. Furthermore, while the analysis of our algorithm relies on strong convexity and smoothness, a regret bound without these conditions is also important.

⁴ Jamieson et al. (2012) provided a similar observation. However, their upper bound on the regret was derived only when the action space is the whole Euclidean space (i.e., $\mathcal{A} = \mathbb{R}^d$), and their assumption on the noisy comparison feedback differs from ours (Assumption 1).

Acknowledgment

We would like to thank Professor Takafumi Kanamori and Professor Kota Matsui for helpful comments. This work was supported by JSPS KAKENHI Grant Number 17K12653.

References

[1] A. Agarwal, O. Dekel, and L. Xiao (2010) "Optimal algorithms for online convex optimization with multi-point bandit feedback," in COLT, pp. 28-40, Citeseer.
[2] N. Ailon, T. Joachims, and Z. Karnin (2014) "Reducing dueling bandits to cardinal bandits," arXiv preprint arXiv:1405.3396.
[3] S. Bubeck and R. Eldan (2014) "The entropic barrier: a simple and optimal universal self-concordant barrier," arXiv preprint arXiv:1412.1587.
[4] R. Busa-Fekete, E. Hüllermeier, and B. Szörényi (2014) "Preference-based rank elicitation using statistical models: The case of Mallows," in Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1071-1079.
[5] R. Busa-Fekete, B. Szörényi, W. Cheng, P. Weng, and E. Hüllermeier (2013) "Top-k selection based on adaptive sampling of noisy preferences," in International Conference on Machine Learning, pp. 1094-1102.
[6] J. C. Duchi, M. I. Jordan, M. J. Wainwright, and A. Wibisono (2015) "Optimal rates for zero-order convex optimization: The power of two function evaluations," IEEE Transactions on Information Theory, Vol. 61, pp. 2788-2806.
[7] A. D. Flaxman, A. T. Kalai, and H. B. McMahan (2005) "Online convex optimization in the bandit setting: gradient descent without a gradient," in Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 385-394, Society for Industrial and Applied Mathematics.
[8] I. Griva, S. G. Nash, and A. Sofer (2009) Linear and Nonlinear Optimization: SIAM. Appendix F, which contains (F.2), is available at http://math.gmu.edu/~igriva/book/topics.html.
[9] E. Hazan and K. Levy (2014) "Bandit convex optimization: Towards tight bounds," in Advances in Neural Information Processing Systems, pp. 784-792.
[10] K. G. Jamieson, S. Katariya, A. Deshpande, and R. D. Nowak (2015) "Sparse dueling bandits," in AISTATS.
[11] K. G. Jamieson, R. Nowak, and B. Recht (2012) "Query complexity of derivative-free optimization," in Advances in Neural Information Processing Systems, pp. 2672-2680.
[12] K. Matsui, W. Kumagai, and T. Kanamori (2016) "Parallel distributed block coordinate descent methods based on pairwise comparison oracle," Journal of Global Optimization, pp. 1-21.
[13] Y. Nesterov, A. Nemirovskii, and Y. Ye (1994) Interior-Point Polynomial Algorithms in Convex Programming, Vol. 13: SIAM.
[14] O. Shamir (2013) "On the complexity of bandit and derivative-free stochastic convex optimization," in COLT, pp. 3-24.
[15] O. Shamir (2017) "An optimal algorithm for bandit and zero-order convex optimization with two-point feedback," The Journal of Machine Learning Research, Vol. 18, pp. 1-11.
[16] T. Urvoy, F. Clerot, R. Féraud, and S. Naamane (2013) "Generic exploration and K-armed voting bandits," in ICML (2), pp. 91-99.
[17] Y. Yue, J. Broder, R. Kleinberg, and T. Joachims (2012) "The k-armed dueling bandits problem," Journal of Computer and System Sciences, Vol. 78, pp. 1538-1556.
[18] Y. Yue and T. Joachims (2009) "Interactively optimizing information retrieval systems as a dueling bandits problem," in Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1201-1208, ACM.
[19] Y. Yue and T. Joachims (2011) "Beat the mean bandit," in Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 241-248.
[20] L. Zhang, T. Yang, R. Jin, Y. Xiao, and Z.-H. Zhou (2016) "Online stochastic linear optimization under one-bit feedback," in International Conference on Machine Learning, pp. 392-401.
[21] M. Zoghi, S. Whiteson, R. Munos, and M. de Rijke (2014) "Relative upper confidence bound for the k-armed dueling bandit problem," in JMLR Workshop and Conference Proceedings, No. 32, pp. 10-18, JMLR.
Best Response Regression

Omer Ben-Porat
Technion - Israel Institute of Technology
Haifa 32000, Israel
omerbp@campus.technion.ac.il

Moshe Tennenholtz
Technion - Israel Institute of Technology
Haifa 32000, Israel
moshet@ie.technion.ac.il

Abstract

In a regression task, a predictor is given a set of instances, along with a real value for each point. Subsequently, she has to identify the value of a new instance as accurately as possible. In this work, we initiate the study of strategic predictions in machine learning. We consider a regression task tackled by two players, where the payoff of each player is the proportion of the points she predicts more accurately than the other player. We first revise the probably approximately correct learning framework to deal with the case of a duel between two predictors. We then devise an algorithm which finds a linear regression predictor that is a best response to any (not necessarily linear) regression algorithm. We show that it has linearithmic sample complexity, and polynomial time complexity when the dimension of the instances domain is fixed. We also test our approach in a high-dimensional setting, and show it significantly defeats classical regression algorithms in the prediction duel. Together, our work introduces a novel machine learning task that lends itself well to current competitive online settings, provides its theoretical foundations, and illustrates its applicability.

1 Introduction

Prediction is fundamental to machine learning and statistics. In a prediction task, an algorithm is given a sequence of examples composed of labeled instances, and its goal is to learn a general rule that maps instances to labels. When the labels take continuous values, the task is typically referred to as regression. The quality of a regression algorithm is measured by its success in predicting the value of an unlabeled instance. Literature on regression is mostly concerned with minimizing the discrepancy of the prediction, i.e. the difference between the true value and the predicted one. Despite the tremendous amount of work on prediction and regression, online commerce presents new challenges. In this context, prediction is not carried out in isolation. New entrants can utilize knowledge of previous expert predictions and the corresponding true values to maximize their probability of predicting better than that expert, treated as the new entrant's opponent. This fundamental task is the main challenge we tackle in this work.

We initiate the study of strategic predictions in machine learning. We present a regression learning setting that stems from a game-theoretic point of view, where the goal of the learner is to maximize the probability of being the most accurate among a set of predictors. Note that this approach may be in conflict with the traditional prediction goal. Consider an online real estate expert, who frequently predicts the sale value of apartments. This expert, having been in the market for a while, has historical data on the values and characteristics of similar apartments. For simplicity, assume the expert uses simple linear regression to predict the value of an apartment as a function of its size. When a new apartment comes on the market, the expert uses her gathered historical data to predict the new apartment's value. When the apartment is sold, the true value (and the accuracy of the prediction) is revealed.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 plots apartment value (dollars) against size (square feet), with the expert's and the agent's regression lines.]

Figure 1: A case where minimizing the square error can be easily beaten. Each point is an instance-value pair, where the circles are historical points (i.e. their value has been revealed) and the triangles are new points, unseen by either the expert or the agent. The red (solid) line represents the linear least squares estimator, employed by the expert. After collecting a sufficient amount of historical data (circles) on apartments along with their true value and the value predicted by the expert, the agent comes up with the response represented by the green (dashed) line. For each of the unseen apartment sizes, both the expert and the agent declare their predictions of the apartment's value. Notice that the agent outperforms the expert in the majority of the historical points. In addition, the agent produces a more accurate prediction in the majority of the new (unseen) points.

At first glance this seems extremely effective; however, it is also extremely fragile. An agent who enters the real estate business may come up with a linear predictor for which the probability (over all apartments and their values) of being more accurate is high, making it the preferable predictor. Figure 1 illustrates our approach. The expert uses the linear least squares estimator (LSE) to minimize the mean square error (MSE). The agent, after having collected "enough" historical data (circles) and having observed the predictions of the expert, produces a strategy (regression line). Both the expert and the agent predict the value of new apartments coming on the market (triangles). As illustrated, the prediction of the agent is the most accurate in the majority of new instances.

One criticism of this novel approach is that while maximizing the probability of being the most accurate, the agent may produce "embarrassing" predictions for some instances. Current prediction algorithms are designed to minimize some measure of overall loss, such as the MSE. Notice that in many, and perhaps even most, practical scenarios, being a better predictor on more instances is more important than avoiding such sporadic "embarrassing predictions". In particular, our approach fits any commerce or advertising setting where the agent offers predictions to users on the value of different goods or services, aiming at maximizing the number of users that will find her predictions more accurate than the ones provided by the expert. For example, an agent serving users searching for small apartments would be happy to fail completely in predicting the value of very large apartments if this allowed predicting the value of smaller apartments better than an opponent.

Our novel perspective suggests several new fundamental problems:

1. Given a prediction algorithm ALG (e.g. LSE), what would be the best response to ALG, if we aim at maximizing the probability that the new algorithm will be more accurate than ALG?
2. In case ALG is unknown, but the agent has access to a labeled set of instances along with the prediction made by ALG for each instance, how many i.i.d. samples are needed in order to learn a best response to ALG over the whole population?
3. How poorly do classical regression algorithms perform against such a best response algorithm?

In this work, we focus on a two-player scenario and analyze the best response of the agent against an opponent. We examine the agent's perspective, and introduce a rigorous treatment of Problems 1-3 above.
We model the task of finding a best response as a supervised learning task, and show that it fits the probably approximately correct (PAC) learning framework. Specifically, we show that when the strategy space of the agent is restricted, a best response over a large enough sample set is likely to be an approximate best response over the unknown distribution. Our main result deals with an agent employing linear regression in $\mathbb{R}^n$ for any constant $n$. We present a polynomial time algorithm which computes a linear best response (i.e. from the set of all linear predictors) to any regression algorithm employed by the opponent. We also show a linearithmic bound on the number of training samples needed in order to successfully learn a best response. In addition, we show that in some cases our algorithm can be adapted to have an MSE score arbitrarily close to that of the given regression algorithm ALG. The theoretical analysis is complemented by an experimental study, which illustrates the effectiveness of our approach. In order to find a best linear response in a high-dimensional space, we provide a mixed integer linear programming (MILP) algorithm. The MILP algorithm is tested on the Boston housing dataset [5]. Indeed, we show that we can outperform classical regression algorithms in up to 70% of the points. Moreover, we outperform classical regression algorithms even in the case where they have full access to both training and test data, while we restrict our responder algorithm to the use of the training data only.

Our contribution. Our contributions are 3-fold. The main conceptual contribution of this paper is the explicit suggestion that a prediction task may have strategic aspects. We introduce the setting of best response regression, applicable to a huge variety of scenarios, and revise the PAC-learning framework to deal with such a duel framework. Then, we show an efficient algorithm for finding a best-response linear regression in $\mathbb{R}^n$ for any constant $n$, against any regression algorithm. This best response algorithm maximizes the probability of beating the latter on new instances. Finally, we present an experimental study showing the applicability of our approach. Together, this work offers a new machine learning challenge, addresses some of its theoretical properties and algorithmic challenges, and shows its applicability.

1.1 Related work

The intersection of learning theory with multi-agent systems is expanding with the rise of data science. In the field of mechanism design [8], the works [3, 7] considered prediction tasks with strategic aspects. In their model, the instances domain is to be labeled by one agent, and the dataset is constructed of points controlled by selfish users, who have their own view on how to label the instances domain. Hence, the users can misreport the points in order to sway decisions in their favor. A different line of work that is related to our model is the analysis of sample complexity in revenue-maximizing auctions. In a recent work [2], the authors reconsider an auction setting where the auctioneer can sample from the valuation functions of the bidders, thereby relaxing the ubiquitous assumption of knowing the underlying distribution over bidders' valuations. While the above papers consider mechanism design problems inspired by machine learning, our work considers a novel machine learning problem inspired by game theory.
In work on dueling algorithms [6], an optimization problem is analyzed from the perspective of competition, rather than from the point of view of a single optimizer. That work examines the dueling form of several optimization problems, e.g. the shortest path from the source vertex to the target vertex in a graph with random weights. While minimizing the expected length is a plausible solution concept for a single optimizer, this is no longer the case in the defined duel. While [6] assumes a commonly-known distribution over a finite set of instances, we have no such assumption. Instead, we consider a sample set drawn from the underlying distribution, with the aim of predicting a new instance better than the opponent.

Our formulation is also related to the Learning Using Privileged Information paradigm (see, e.g., [9, 14, 15]), in which the learner (agent) is supplied with additional information along with the label of each instance. In this paper, we assume the agent has access to predictions made by another algorithm (the opponent's), which can be treated as additional information.

2 Problem formulation

The environment is composed of instances and labels. In the motivating example given above, the instances are the characteristics of the apartments, and the labels are the values of these apartments. A set of $N$ players offer predictive services, where a strategy of a player is a labeling function. For each instance-label pair $(x, y)$, the players see $x$, and subsequently each player $i$ predicts the value of $y$. We call this label estimate $\hat{y}_i$. The player who wins a point $(x, y)$ is the one with the smallest discrepancy, i.e. $\min_i |\hat{y}_i - y|$. Under a strategy profile $(h_1, \ldots, h_N)$, where each entry is the labeling function chosen by the corresponding player, the payoff of Player $i$ is $\Pr(\{(x, y) : \text{Player } i \text{ wins } (x, y)\})$. A strategy of a player is called a best response if it maximizes the payoff of that player when the strategies of all the other players are fixed. In this work, we analyze the best response of a player, and w.l.o.g. we assume she has only one opponent. The model is as follows:

1. We assume a distribution over the examples domain, which is the cross product of the instances domain $\mathcal{X} \subseteq \mathbb{R}^n$ and the labels domain $\mathcal{Y} \subseteq \mathbb{R}$.
2. The agent and the opponent both predict the label of each instance. The opponent uses a strategy $\tilde{h}$, which is a conditional distribution over $\mathbb{R}$ given $x \in \mathcal{X}$.
3. The agent is unaware of the distribution over $\mathcal{X} \times \mathcal{Y}$ and of the strategy $\tilde{h}$ of the opponent. Hence, we explicitly address the joint distribution $\mathcal{D}$ over $\mathcal{Z} = \mathcal{X} \times \mathcal{Y} \times \mathbb{R}$, where a triplet $(x, y, p)$ represents an instance $x$, its label $y$, and the discrepancy $p$ of the opponent's predicted value, i.e. $p = |\tilde{h}(x) - y|$. We stress that $\mathcal{D}$ is unknown to the agent.
4. The payoff of the agent under a strategy $h : \mathcal{X} \to \mathcal{Y}$ is given by
$$\pi_{\mathcal{D}}(h) = \mathbb{E}_{(x, y, p) \sim \mathcal{D}}\left[\mathbb{1}_{|h(x) - y| < p}\right].$$
5. The agent has access to a sequence of examples $S$, with which she wishes to maximize her payoff.

Note that a strategy which outputs $y_i$ for every instance $x_i$ in $S$ may look promising, but will probably lead to overfitting, and to a low payoff for the agent. Since the agent wishes to generalize from $S$ to $\mathcal{D}$, restricting the strategy set to $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ seems justified. We define the goal of the agent:

6. The agent is willing to restrict herself to a strategy from $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$. Her goal: to find an algorithm which, given $\epsilon, \delta \in (0, 1)$ and a sequence of $m = m(\epsilon, \delta)$ examples $S$ sampled i.i.d. from $\mathcal{D}$, outputs a strategy $h^*$ such that with probability at least $1 - \delta$ (over the choices of $S$) it holds that
$$\pi_{\mathcal{D}}(h^*) \ge \sup_{h \in \mathcal{H}} \pi_{\mathcal{D}}(h) - \epsilon.$$
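A minimal sketch of the quantities in items 2-5 above on synthetic data (the opponent's coefficients, the noise level, and all sizes are hypothetical choices): the sample records, for each point, the opponent's discrepancy $p_i$, and the agent's payoff under a linear strategy $h$ acting on $(x, 1)$ is the fraction of points she predicts strictly better.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 500, 3                                    # sample size and instance dimension
X = rng.normal(size=(m, k))
y = X @ np.array([1.0, -2.0, 0.5]) + 3.0 + rng.normal(scale=0.5, size=m)
opponent = lambda X: X @ np.array([0.9, -1.8, 0.6]) + 2.5  # some fixed predictor h~
p = np.abs(opponent(X) - y)                      # opponent's discrepancies

def payoff(h, X, y, p):
    """Empirical analogue of pi_D(h) = E[1{|h(x) - y| < p}]."""
    X1 = np.hstack([X, np.ones((len(X), 1))])    # augment each x with a constant 1
    return np.mean(np.abs(X1 @ h - y) < p)

print(payoff(np.array([1.0, -2.0, 0.5, 3.0]), X, y, p))  # strategy close to the truth
print(payoff(np.array([0.9, -1.8, 0.6, 2.5]), X, y, p))  # copying h~ ties everywhere: 0
```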
Indeed, the access to a sequence of examples seems realistic, and the size of $S$ depends on the amount of resources at the agent's disposal. The size of $S$ also affects the selection of $\mathcal{H}$: if the agent can gather "many" examples, she can learn a "good" strategy from a more complex strategy space. We say that $h \in \mathcal{H}$ is an approximate best response with factor $\epsilon$ if for all $h' \in \mathcal{H}$ it holds that $\pi_{\mathcal{D}}(h') - \pi_{\mathcal{D}}(h) \le \epsilon$. Note that the goal of the agent can be interpreted as finding an approximate best response with high probability. The empirical payoff of the agent is defined by
$$\pi_S(h) = \frac{1}{m}\left|\{i : |h(x_i) - y_i| < p_i\}\right|,$$
and a strategy $h \in \mathrm{argmax}_{h' \in \mathcal{H}} \pi_S(h')$ is called an empirical best response (w.r.t. $S$). Next, we adopt the PAC framework [12] to determine for which strategy spaces an empirical best response is likely to be an approximate best response.

2.1 Approximate best response with PAC learnability

The field of statistical learning addresses the problem of finding a predictive function based on data. We briefly define some key concepts in learning theory that will be used later. For a more gentle introduction the reader is referred to [11]. Let $\mathcal{G}$ be a class of functions from $\mathcal{Z}$ to $\{0, 1\}$ and let $S = \{z_1, \ldots, z_m\} \subseteq \mathcal{Z}$. The restriction of $\mathcal{G}$ to $S$, denoted $\mathcal{G}(S)$, is defined by $\mathcal{G}(S) = \{(g(z_1), g(z_2), \ldots, g(z_m)) : g \in \mathcal{G}\}$. Namely, $\mathcal{G}(S)$ contains all the binary vectors induced by the functions in $\mathcal{G}$ on the items of $S$. We say that $\mathcal{G}$ shatters $S$ if $\mathcal{G}(S)$ contains all binary vectors of size $m$, i.e. $|\mathcal{G}(S)| = 2^m$.

Definition 1 (VC dimension, [13]). The VC dimension of a class $\mathcal{G}$, denoted $\mathrm{VCdim}(\mathcal{G})$, is the maximal size of a set $S \subseteq \mathcal{Z}$ that can be shattered by $\mathcal{G}$.

Definition 2 (PAC learnability, [12]). A hypothesis class $\mathcal{H}$ is PAC-learnable with respect to a domain set $\mathcal{Z}$ and a loss function $l : \mathcal{H} \times \mathcal{Z} \to \mathbb{R}_+$ if there exist a function $m_{\mathcal{H}} : (0, 1)^2 \to \mathbb{N}$ and a learning algorithm ALG such that for every $\epsilon, \delta \in (0, 1)$ and for every distribution $\mathcal{D}$ over $\mathcal{Z}$, when running ALG on $m \ge m_{\mathcal{H}}(\epsilon, \delta)$ i.i.d. examples generated by $\mathcal{D}$, it returns a hypothesis $h \in \mathcal{H}$ such that with probability of at least $1 - \delta$ it holds that
$$L_{\mathcal{D}}(h) \le \inf_{h' \in \mathcal{H}} L_{\mathcal{D}}(h') + \epsilon, \qquad (1)$$
where $L_{\mathcal{D}}(h) = \mathbb{E}_{z \sim \mathcal{D}}\, l(h, z)$.

Let $\mathcal{H}$ be a class of functions from $\mathcal{X}$ to $\mathcal{Y}$, and let $\mathcal{Z} = \mathcal{X} \times \mathcal{Y} \times \mathbb{R}$, as defined earlier in this section. Typically in a regression task, the hypothesis class is restricted in order to decrease the distance between the predicted labels and the true labels. In the aforementioned model, however, the agent may want to deliberately harm her accuracy on some subset of the instances domain. She will do this as long as it increases the number of instances having a better prediction, thereby improving her payoff. Since $h \in \mathcal{H}$ can either win a point $(x, y, p)$ or lose it, the model resembles a binary classification task, where the "label" of $(x, y, p)$ is the identity of the winner. That is, a triplet $(x, y, p)$ would be labeled 1 if the agent produced a better prediction than the opponent, and zero otherwise. However, notice that the agent's strategy is involved in the labeling. This is, of course, not the case in binary classification. Our approach is to introduce a corresponding binary classification problem and, by leveraging former results obtained on binary classification, deduce sufficient learnability conditions for our model. The complete reduction is described in detail in the appendix. Adjusting to the loss function framework, we define:
$$\forall z = (x, y, p) \in \mathcal{Z} : \quad l(h, z) = \begin{cases} 1 & |h(x) - y| \ge p \\ 0 & |h(x) - y| < p \end{cases}.$$
Observe that $l(h, z) = 0$ whenever the agent wins a point, and $l(h, z) = 1$ otherwise. If we set $L_{\mathcal{D}}(h) = \mathbb{E}_{z \sim \mathcal{D}}\, l(h, z)$, Equation (1) can be reformulated as $\pi_{\mathcal{D}}(h) \ge \sup_{h' \in \mathcal{H}} \pi_{\mathcal{D}}(h') - \epsilon$. Our goal is to find sufficient conditions for $\mathcal{H}$ to be PAC-learnable w.r.t. $\mathcal{Z}$ and $l$. Given $\mathcal{H}$, let $\mathcal{G}_{\mathcal{H}} = \{g_h : h \in \mathcal{H}\}$ such that
$$\forall h \in \mathcal{H},\, \forall z \in \mathcal{Z} : \quad g_h(z) = 1 - l(h, z) = \begin{cases} 1 & |h(x) - y| < p \\ 0 & |h(x) - y| \ge p \end{cases}.$$
Note that $\mathcal{G}_{\mathcal{H}}$ is a class of functions from $\mathcal{Z}$ to $\{0, 1\}$. Sufficient learnability conditions can now be stated.

Lemma 1. Let $\mathcal{H}$ be a class of functions from $\mathcal{X}$ to $\mathcal{Y}$ with $\mathrm{VCdim}(\mathcal{G}_{\mathcal{H}}) = d < \infty$. Then there is a constant $C$ such that for every $\epsilon, \delta \in (0, 1)$ and every distribution $\mathcal{D}$ over $\mathcal{Z} = \mathcal{X} \times \mathcal{Y} \times \mathbb{R}$, if we sample a sequence of examples $S$ of size $m \ge C \cdot \frac{d + \log \frac{1}{\delta}}{\epsilon^2}$ i.i.d. from $\mathcal{D}$ and pick an empirical best response $h \in \mathcal{H}$ w.r.t. $S$, then with probability of at least $1 - \delta$ it holds that
$$\pi_{\mathcal{D}}(h) \ge \sup_{h' \in \mathcal{H}} \pi_{\mathcal{D}}(h') - \epsilon.$$
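The reduction above identifies payoff maximization with 0-1 loss minimization through $\pi_S(h) = 1 - L_S(h)$. A short numeric check on synthetic data; all names and constants here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1000
x = rng.normal(size=m)
y = 2 * x + rng.normal(size=m)
p = np.abs(rng.normal(size=m))           # stand-in opponent discrepancies
h = lambda x: 1.9 * x + 0.05             # some linear strategy

wins = np.abs(h(x) - y) < p              # g_h(z_i): the agent wins z_i
loss = np.mean(np.abs(h(x) - y) >= p)    # L_S(h) with the 0-1 loss l(h, z)
assert np.isclose(np.mean(wins), 1 - loss)
print(np.mean(wins), 1 - loss)
```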
The Partial Vector Feasibility problem aids in recursively partitioning the hypothesis space. Note that it is solvable in time poly(n, m) using Linear Programming. Problem: PARTIAL V ECTOR F EASIBILITY (PVF) m Input: a sequence of examples S = (xi , yi , pi )m i=1 , and a vector v ? {1, 0, a, b} n Output: a point h ? R satisfying 1. If vi = 1 then |h ? (xi , 1) ? yi | < pi . 2. If vi = a then h ? (xi , 1) ? yi > pi . 3. If vi = b then h ? (xi , 1) ? yi < ?pi . if such exists, and ? otherwise. The following algorithm partitions Rn according to GH (S), where in each iteration it "discovers" one more point in the sequence S. Algorithm: E MPIRICAL PAYOFF M AXIMIZATION (EPM) Input: S = (xi , yi , pi )m i=1 Output: Empirical payoff maximizer w.r.t. S m 1 v ? {0} // v = (v1 , v2 , . . . , vm ) 2 R0 ? {v} 3 for i = 1 to m do 4 Ri ? ? 5 for v ? Ri?1 do 6 for ? ? {1, a, b} do 7 if PVF (S, (v ?i , ?)) 6= ? then 8 add (v ?i , ?) to Ri // (v ?i , ?) = (v1 , . . . vi?1 , ?, vi+1 , . . . , vm ) ? 9 return v ? arg maxv?Rm kvk1 Theorem 2. When running EPM on a sequence of examples S, it finds an empirical best response in poly(|S|) time. 6 1 R1 2 b y y=a ? ? x + ?b y = a? ? x + b? (? a, ?b) 3 R2 (a? , b? ) x R3 a Figure 2: An example of simple linear regression with linear strategies. On the left we have ? = (? a, ?b) of the opponent (the solid line) a sample sequence of size 3, along with the strategy h and a best response strategy of the agent (the dashed line). On the right the hypothesis space is presented,  where each pair (a, b) represents a possible strategy, and each bounded set Ri is defined by Ri = (a, b) ? R2 : |a ? xi + b ? yi | < pi , i.e. the set of hypotheses which give xi better prediction ? Notice that (? than h. a, ?b) relies on the boundaries of all Ri , 1 ? i ? 3. In addition, since (a? , b? ) is inside R1 ? R2 ? R3 , the strategy h? = (a? , b? ), i.e. the line y = a? ? x + b? , predicts all the points ? the agent not only better than the opponent. Observe that by taking any convex combination of h? , h, perserves her empirical payoff but also improves her MSE score. When we combine Theorem 2 with Lemmas 2 and 1, we get:  Corollary 1. Given , ? ? (0, 1), if we run EPM on m ? C2 ? max{b2n ? log(n)c, 20} + log 1? examples sampled i.i.d. from D (for a constant C), then it outputs h? such that with probability at least 1 ? ? satisfies ?D (h? ) ? sup ?D (h0 ) ? . h0 ?H A desirable achievement would be if the best response prediction algorithm would also keep the loss small in the original (e.g. MSE) measure. We now show that in some cases the agent can, by slightly modifying the output of EPM, find a strategy that is not only an approximate best response, but is also robust with respect to additive functions of discrepancies. See Figure 2 for illustration. ? and denote by h? the strategy output by Lemma 3. Assume the opponent uses a linear predictor h, ? EPM. Then, h can be efficiently modified to a strategy which is not only an empirical best response, ? w.r.t. to any additive function of the discrepancies. but also performs arbitrarily close to h Finaly, we discuss the case where the dimension of the instances domain is a part of the input. It is known that learning the best halfspace is NP-hard in binary classification (w.r.t. to a given sequence of points), when the dimension of the data is not fixed (see e.g. [1]). We show that the empirical best (linear) response problem is of the same flavor. Lemma 4. 
In case H is the set of linear functions in Rn?1 and n is not fixed, the empirical best response problem is NP-hard. 4 Experimental results We note that when n is large, the proposed method for finding an empirical best response may not be suitable. Nevertheless, if the agent is interested in finding a "good" response to her opponents, she should come up with something. With slight modifications, the linear best response problem can be formulated as a mixed integer linear program (MILP).1 Hence, the agent can exploit sophisticated solvers and use clever heuristics. Further, one implication of Lemma 1 is that the true payoffs 1 See the appendix for the mixed integer linear programming formulation. 7 Table 1: Experiments on Boston Housing dataset The opponent?s strategy Least square errors (LSE) Least absolute errors (LAE) Scenario TRAIN ALL TRAIN ALL Train payoff 0.699 0.711 0.621 0.625 Test payoff 0.641 0.645 0.570 0.528 Results obtained on the Boston Housing dataset. Each cell in the table represents the average payoff of the agent over 1000 simulations (splits into 80% train and 20% test). The "train payoff" is the proportion of points in the training set on which the agent is more accurate, and the "test payoff" payoff is the equivalent proportion with respect to the test (unseen) data. uniformly converge, and hence any empirical payoff obtained by the MILP is close to its real payoff with high probability. In this section, we show the extent to which classical linear regression algorithms can be beaten using the Boston housing dataset [5], a built-in dataset in the leading data science packages (e.g. scikit-learn in Python and MASS in R). The Boston housing dataset contains 506 instances, where each instance has 13 continuous attributes and one binary attribute. The label is the median value of owner-occupied homes, and among the attributes are the per capita crime rate, the average number of rooms per dwelling, the pupil-teacher ratio by town and more. The R-squared measure for minimizing the square error in the Boston housing dataset is 0.74, indicating that the use of linear regression is reasonable. As possible strategies of the opponent, we analyzed the linear least squares estimators (LSE) and linear least absolute estimators (LAE). The dataset was split into training (80%) and test (20%) sets, and two scenarios were considered: Scenario TRAIN - the opponent?s model is learned from the training set only. Scenario ALL - the opponent?s model is learned from both the training and the test sets. In both scenarios the agent had access to the training set only, along with the opponent?s discrepancy for each point in the training set. Obviously, achieving payoff of more than 0.5 (that is, more than 50% of the points) in the ALL scenario is a real challenge, since the opponent has seen the test set in her learning process. We ran 1000 simulations, where each simulation is a random split of the dataset. We employed the MILP formulation, and used Gurobi software [4] in order to find a response, where the running time of the solver was limited to one minute.2 Our findings are reported in Table 1. Notice that against both opponent strategies, and even in case where the opponent had seen the test set, the agent still gets more than 50% of the points. In both scenarios, LAE guarantees the opponent more than LSE. This is because absolute error is less sensitive to large deviations. 
We also noticed that when the opponent learns from the whole dataset, the empirical payoff of the agent is greater. Indeed, the latter is reasonable, as in the ALL scenario the agent's strategy fits the training set while the opponent's strategy does not.

Beyond the main analysis, we examined the success (or lack thereof) of the agent with respect to the additive loss function optimized by the opponent (corresponding to the MSE for LSE, and to the MAE (mean absolute error) for LAE), hereby referred to as the "classical loss". Recall that Lemma 3 guarantees that the agent's classical loss can be arbitrarily close to that of the opponent when she plays a best response; however, the response we consider in this section (using the MILP) does not necessarily converge to a best response. Therefore, we find it interesting to consider the classical loss as well, thereby presenting the complementary view. We report in Table 2 the average ratio between the agent's classical loss and that of the opponent under the TRAIN scenario, with respect to the training and test sets.

Table 2: Ratio of the classical loss

    The opponent's strategy    Training set    Test set
    LSE                        1.007           0.999
    LAE                        1.005           1.002

Ratio of the agent's loss to the opponent's loss, where the loss function corresponds to the original optimization function of the opponent, under scenario TRAIN. For example, the upper leftmost cell represents the agent's MSE divided by the opponent's MSE on the training set, where the opponent uses LSE. Similarly, the lower rightmost cell represents the agent's MAE (mean absolute error) divided by the opponent's MAE on the test data, when the opponent uses LAE.

Notice that the agent suffers from less than a 0.7% increase with respect to the classical loss optimized by the opponent. In particular, the MSE of the agent (when she responds to LSE) on the test set is less than that of the opponent. The same phenomenon, albeit on a smaller scale, occurs against LAE: the training set ratio is greater than the test set ratio. To conclude, the agent is not only able to obtain the majority of the points (and in some cases, up to 70%), but also to keep the classical loss optimized by her opponent within less than 0.2% of the optimum on the test set.

5 Discussion

This work introduces a game-theoretic view of a machine learning task. After finding sufficient conditions for learning to occur, we analyzed the induced learning problem when the agent is restricted to a linear response. We showed that a best response with respect to a sequence of examples can be computed in time polynomial in the number of examples, as long as the instances domain has a constant dimension. Further, we showed an algorithm that for any $\epsilon, \delta$ computes an $\epsilon$-best response with probability of at least $1 - \delta$, when it is given a sequence of $\mathrm{poly}\!\left(\frac{1}{\epsilon^2}\left(n \log n + \log \frac{1}{\delta}\right)\right)$ examples drawn i.i.d. As the reader may notice, our analysis holds as long as the hypothesis is linear in its parameters, and is therefore much more general than linear regression. Interestingly, this is a novel type of optimization problem, and so rich hypothesis classes, which are somewhat unnatural in the traditional task of regression, might be successfully employed in the proposed setting. From an empirical standpoint, the gap between the empirical payoff and the true payoff calls for applying regularization methods to the best response problem and encourages further algorithmic research.
Exploring whether or not a response in the form of hyperplanes can be effective against a more complex strategy employed by the opponent will be intriguing. For instance, showing that a deep learner is beatable in this setting would be remarkable. The main direction to follow is the analysis of the competitive environment introduced at the beginning of Section 2 as a simultaneous game: is there an equilibrium strategy? Namely, is there a linear predictor which, when used by both the agent and the opponent, is a best response to one another?

Acknowledgments

We thank Gili Baumer and Argyris Deligkas for helpful discussions, and anonymous reviewers for their useful suggestions. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 740435).

References

[1] E. Amaldi and V. Kann. The complexity and approximability of finding maximum feasible subsystems of linear relations. Theoretical Computer Science, 147(1-2):181-210, 1995.
[2] R. Cole and T. Roughgarden. The sample complexity of revenue maximization. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 243-252. ACM, 2014.
[3] O. Dekel, F. Fischer, and A. D. Procaccia. Incentive compatible regression learning. Journal of Computer and System Sciences, 76(8):759-777, 2010.
[4] I. Gurobi Optimization. Gurobi optimizer reference manual, 2016.
[5] D. Harrison and D. L. Rubinfeld. Hedonic housing prices and the demand for clean air. Journal of Environmental Economics and Management, 5(1):81-102, 1978.
[6] N. Immorlica, A. T. Kalai, B. Lucier, A. Moitra, A. Postlewaite, and M. Tennenholtz. Dueling algorithms. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, pages 215-224. ACM, 2011.
[7] R. Meir, A. D. Procaccia, and J. S. Rosenschein. Algorithms for strategyproof classification. Artificial Intelligence, 186:123-156, 2012.
[8] N. Nisan and A. Ronen. Algorithmic mechanism design. In Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing, pages 129-140. ACM, 1999.
[9] D. Pechyony and V. Vapnik. On the theory of learning with privileged information. In Advances in Neural Information Processing Systems, pages 1894-1902, 2010.
[10] N. Sauer. On the density of families of sets. Journal of Combinatorial Theory, Series A, 13(1):145-147, 1972.
[11] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[12] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
[13] V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264, 1971.
[14] V. Vapnik and A. Vashist. A new learning paradigm: Learning using privileged information. Neural Networks, 22(5):544-557, 2009.
[15] V. Vapnik, A. Vashist, and N. Pavlovitch. Learning using hidden information: Master class learning. NATO Science for Peace and Security Series, D: Information and Communication Security, 19:3-14, 2008.
TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

Wei Wen1, Cong Xu2, Feng Yan3, Chunpeng Wu1, Yandan Wang4, Yiran Chen1, Hai Li1
1 Duke University, 2 Hewlett Packard Labs, 3 University of Nevada - Reno, 4 University of Pittsburgh
1 {wei.wen, chunpeng.wu, yiran.chen, hai.li}@duke.edu, 2 [email protected], 3 [email protected], 4 [email protected]

Abstract

High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad, which uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {-1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet does not incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available (1).

1 Introduction

The remarkable advances in deep learning are driven by data explosion and the increase of model size. The training of large-scale models with huge amounts of data is often carried out on distributed systems [1][2][3][4][5][6][7][8][9], where data parallelism is adopted to exploit the compute capability of multiple workers [10]. Stochastic Gradient Descent (SGD) is usually selected as the optimization method because of its high computational efficiency. In realizing the data parallelism of SGD, model copies on computing workers are trained in parallel by applying different subsets of data. A centralized parameter server performs gradient synchronization by collecting all gradients and averaging them to update parameters. The updated parameters are sent back to the workers, that is, parameter synchronization. Increasing the number of workers helps to reduce the computation time dramatically. However, as the scale of distributed systems grows, the extensive gradient and parameter synchronizations prolong the communication time and can even outweigh the savings in computation time [4][11][12]. A common approach to overcoming such a network bottleneck is asynchronous SGD [1][4][7][12][13][14], which continues computation by using stale values without waiting for the completion of synchronization. The inconsistency of parameters across computing workers, however, can degrade training accuracy and incur occasional divergence [15][16]. Moreover, its workload dynamics make the training nondeterministic and hard to debug. From the perspective of inference acceleration, sparse and quantized Deep Neural Networks (DNNs) have been widely studied, such as [17][18][19][20][21][22]. However, these methods generally increase the training effort. Research on sparse logistic regression and Lasso optimization problems [4][12][23] took advantage of the sparsity inherent in those models and achieved remarkable speedup for distributed training.

(1) https://github.com/wenwei202/terngrad

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
A more generic and important topic is how to accelerate the distributed training of dense models by utilizing sparsity and quantization techniques. For instance, Aji and Heafield [24] proposed to heuristically sparsify dense gradients by dropping small values in order to reduce gradient communication. For the same purpose, quantizing gradients to low-precision values with smaller bit widths has also been extensively studied [22][25][26][27]. Our work belongs to the category of gradient quantization, which is an approach orthogonal to sparsity methods. We propose TernGrad, which quantizes gradients to the ternary levels {-1, 0, 1} to reduce the overhead of gradient synchronization. Furthermore, we propose scaler sharing and parameter localization, which can replace parameter synchronization with a low-precision gradient pulling. Compared with previous works, our major contributions include: (1) we use ternary values for gradients to reduce communication; (2) we mathematically prove the convergence of TernGrad in general by proposing a statistical bound on gradients; (3) we propose layer-wise ternarizing and gradient clipping to move this bound closer toward the bound of standard SGD; these simple techniques successfully improve the convergence; (4) we build a performance model to evaluate the speed of training methods with compressed gradients, like TernGrad.

2 Related work

Gradient sparsification. Aji and Heafield [24] proposed a heuristic gradient sparsification method that truncated the smallest gradients and transmitted only the remaining large ones. The method greatly reduced the gradient communication and achieved a 22% speed gain on 4 GPUs for a neural machine translation task, without impacting the translation quality. An earlier study by Garg et al. [28] adopted a similar approach, but targeted sparsity recovery instead of training acceleration. Our proposed TernGrad is orthogonal to these sparsity-based methods.

Gradient quantization. DoReFa-Net [22], derived from AlexNet, reduced the bit widths of weights, activations and gradients to 1, 2 and 6, respectively. However, DoReFa-Net showed a 9.8% accuracy loss, as it targeted acceleration on a single worker. S. Gupta et al. [27] successfully trained neural networks on the MNIST and CIFAR-10 datasets using 16-bit numerical precision for an energy-efficient hardware accelerator. Our work, instead, aims to speed up distributed training by reducing the communicated gradients to three numerical levels {-1, 0, 1}. F. Seide et al. [25] applied 1-bit SGD to accelerate distributed training and empirically verified its effectiveness in speech applications. As the gradient quantization is conducted by columns, a floating-point scaler per column is required, so it cannot yield a speed benefit on convolutional neural networks [26]. Moreover, the "cold start" of the method [25] requires floating-point gradients to converge to a good initial point before the following 1-bit SGD. More importantly, it is unknown what conditions can guarantee its convergence. Comparably, our TernGrad can start DNN training from scratch, and we prove the conditions that promise the convergence of TernGrad. Very recently, a preprint by D. Alistarh et al. [26] presented QSGD, which explores the trade-off between accuracy and gradient precision. The effectiveness of gradient quantization was justified, and the convergence of QSGD was provably guaranteed.
Compared to QSGD, which was developed simultaneously, our TernGrad shares the same concept but advances it in the following three aspects: (1) we prove the convergence from the perspective of a statistical bound on gradients; the bound also explains why multiple quantization buckets are necessary in QSGD; (2) the bound is used to guide practice and inspires the techniques of layer-wise ternarizing and gradient clipping; (3) TernGrad using only 3-level gradients achieves a 0.92% top-1 accuracy improvement for AlexNet, while a 1.73% top-1 accuracy loss is observed in QSGD with 4 levels. The accuracy loss in QSGD can be eliminated by paying the cost of increasing the precision to 4 bits (16 levels) and beyond.

3 Problem Formulation and Our Approach

3.1 Problem Formulation and TernGrad

Figure 1 formulates the distributed training problem of synchronous SGD using data parallelism. At iteration t, a mini-batch of training samples is split and fed into multiple workers (i in {1, ..., N}). Worker i computes the gradients g_t^(i) of the parameters w.r.t. its input samples z_t^(i). All gradients are first synchronized and averaged at the parameter server, and then sent back to update the workers. Note that the parameter server in most implementations [1][12] is used to preserve shared parameters, while here we utilize it in a slightly different way, maintaining shared gradients. In Figure 1, each worker keeps a copy of the parameters locally. We name this technique parameter localization. The parameter consistency among workers can be maintained by random initialization with an identical seed. Parameter localization changes the communication of parameters in floating-point form to the transfer of quantized gradients that require much lighter traffic. Note that our proposed TernGrad can be integrated with many settings like asynchronous SGD [1][4], even though the scope of this paper only covers the distributed SGD in Figure 1.

Algorithm 1 formulates the t-th iteration of the TernGrad algorithm according to Figure 1. Most steps of TernGrad remain the same as in traditional distributed training, except that gradients are quantized into ternary precision before being sent to the parameter server. More specifically, ternarize(.) aims to reduce the communication volume of gradients. It randomly quantizes the gradient g_t (the superscript of g_t is omitted here for simplicity) to a ternary vector with values in {-1, 0, +1}. Formally, with a random binary vector b_t, g_t is ternarized as

g~_t = ternarize(g_t) = s_t · sign(g_t) ∘ b_t,   (1)

where the scaler

s_t := max(abs(g_t))   (2)

can shrink ±1 to a much smaller amplitude, ∘ is the Hadamard product, and sign(.) and abs(.) respectively return the sign and absolute value of each element. Given g_t, each element of b_t independently follows the Bernoulli distribution

P(b_tk = 1 | g_t) = |g_tk| / s_t,
P(b_tk = 0 | g_t) = 1 - |g_tk| / s_t,   (3)

where b_tk and g_tk are the k-th elements of b_t and g_t, respectively. This stochastic rounding, instead of a deterministic one, is chosen both by our study and by QSGD [26], as stochastic rounding has an unbiased expectation and has been successfully studied for low-precision processing [20][27]. Theoretically, ternary gradients can reduce the worker-to-server traffic by a factor of at least 32/log2(3) = 20.18×. Even using 2 bits to encode a ternary gradient, the reduction factor is still 16×. In this work, we compare TernGrad with 32-bit gradients, considering that 32-bit is the default precision in modern deep learning frameworks. Although a lower precision (e.g. 16-bit) may be enough in some scenarios, it would not undervalue TernGrad.
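For concreteness, the stochastic ternarization of Eqs. (1)-(3) can be sketched in a few lines of NumPy. This is a minimal illustration of the math, not the authors' TensorFlow implementation from the linked repository; the function name and the rng argument are our own.

import numpy as np

def ternarize(g, rng=np.random.default_rng()):
    # Scaler s_t := max(abs(g_t)), Eq. (2).
    s = np.max(np.abs(g))
    if s == 0.0:
        return np.zeros_like(g)
    # Bernoulli mask b_t with P(b_tk = 1 | g_t) = |g_tk| / s_t, Eq. (3).
    b = (rng.random(g.shape) < np.abs(g) / s).astype(g.dtype)
    # Ternary gradient s_t * sign(g_t) * b_t, Eq. (1); values in {-s, 0, +s}.
    return s * np.sign(g) * b

Averaging over the randomness in b_t recovers g_t in expectation, which is exactly the unbiasedness used in the convergence analysis below.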
As aforementioned, parameter localization reduces server-to-worker traffic by pulling quantized gradients from servers. However, summing up the ternary values in Σ_i g~_t^(i) produces more possible levels, and thereby the final averaged gradient g_t is no longer ternary, as shown in Figure 2(d). This emerges as a critical issue when workers use different scalers s_t^(i). To minimize the number of levels, we propose a shared scaler

s_t = max({s_t^(i)} : i = 1...N)   (4)

across all the workers. We name this technique scaler sharing. The sharing process has a small overhead of transferring 2N floating-point scalars. By integrating parameter localization and scaler sharing, the maximum number of levels in g_t decreases to 2N + 1. As a result, the server-to-worker communication reduces by a factor of 32/log2(1 + 2N), unless N >= 2^30.

Figure 1: Distributed SGD with data parallelism.

Algorithm 1 TernGrad: distributed SGD training using ternary gradients.
Worker (i = 1, ..., N):
1: Input z_t^(i), a part of a mini-batch of training samples z_t
2: Compute gradients g_t^(i) under z_t^(i)
3: Ternarize gradients to g~_t^(i) = ternarize(g_t^(i))
4: Push ternary g~_t^(i) to the server
5: Pull averaged gradients g_t from the server
6: Update parameters w_{t+1} <- w_t - η · g_t
Parameter server:
7: Average ternary gradients g_t = Σ_i g~_t^(i) / N

3.2 Convergence Analysis and Gradient Bound

We analyze the convergence of TernGrad in the framework of online learning systems. An online learning system adapts its parameters w to a sequence of observations to maximize performance. Each observation z is drawn from an unknown distribution, and a loss function Q(z, w) is used to measure the performance of the current system with parameters w and input z. The minimization target then is the loss expectation

C(w) := E{Q(z, w)}.   (5)

In the General Online Gradient Algorithm (GOGA) [29], the parameters are updated at learning rate γ_t as

w_{t+1} = w_t - γ_t g_t = w_t - γ_t · ∇_w Q(z_t, w_t),   (6)

where

g := ∇_w Q(z, w)   (7)

and the subscript t denotes observation step t. In GOGA, E{g} is the gradient of the minimization target in Eq. (5). According to Eq. (1), the parameters in TernGrad are updated as

w_{t+1} = w_t - γ_t (s_t · sign(g_t) ∘ b_t),   (8)

where s_t := max(abs(g_t)) is a random variable depending on z_t and w_t. As g_t is known for given z_t and w_t, Eq. (3) is equivalent to

P(b_tk = 1 | z_t, w_t) = |g_tk| / s_t,
P(b_tk = 0 | z_t, w_t) = 1 - |g_tk| / s_t.   (9)

At any given w_t, the expectation of the ternary gradient satisfies

E{s_t · sign(g_t) ∘ b_t} = E{s_t · sign(g_t) ∘ E{b_t | z_t}} = E{g_t} = ∇_w C(w_t),   (10)

which is an unbiased gradient of the minimization target in Eq. (5). The convergence analysis of TernGrad is adapted from the convergence proof of GOGA presented in [29]. We adopt two assumptions, which were used in the analysis of the convergence of standard GOGA in [29]. Without explicit mention, vectors denote column vectors here.

Assumption 1. C(w) has a single minimum w* and the gradient -∇_w C(w) always points to w*, i.e.,

for all ε > 0:  inf_{||w - w*||² > ε} (w - w*)^T ∇_w C(w) > 0.   (11)

Convexity is a subset of Assumption 1, and we can easily find non-convex functions satisfying it.

Assumption 2. The learning rate γ_t is positive and constrained as

Σ_{t=0}^{+∞} γ_t² < +∞  and  Σ_{t=0}^{+∞} γ_t = +∞,   (12)

which ensures that γ_t decreases neither too fast nor too slow.
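Putting Algorithm 1 and the shared scaler of Eq. (4) together, one synchronous iteration can be simulated in a single process as below. This is a minimal sketch under our own conventions: grads_fn, standing in for a per-worker backward pass, is hypothetical, and a real deployment would push and pull the ternary tensors over the network instead of keeping them in a Python list.

import numpy as np

def terngrad_step(w, grads_fn, num_workers, lr, rng):
    # Each worker computes a float gradient on its shard (steps 1-2).
    grads = [grads_fn(w, i) for i in range(num_workers)]
    # Scaler sharing, Eq. (4): one global maximum across workers, so the
    # averaged gradient takes at most 2N + 1 distinct levels.
    s = max(np.max(np.abs(g)) for g in grads)
    if s == 0.0:
        return w
    # Steps 3-4: ternarize with the shared scaler; only {-1, 0, +1} plus
    # the 2N shared scalars need to cross the network.
    tern = [np.sign(g) * (rng.random(g.shape) < np.abs(g) / s) for g in grads]
    # Step 7 on the server, then steps 5-6 locally on every worker
    # (parameter localization keeps all copies consistent).
    g_avg = s * np.mean(tern, axis=0)
    return w - lr * g_avg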
We define the squared distance between the current parameters w_t and the minimum w* as

h_t := ||w_t - w*||²,   (13)

where ||·|| is the ℓ2 norm. We also define the set of all random variables before step t as

X_t := (z_{1...t-1}, b_{1...t-1}).   (14)

Under Assumption 1 and Assumption 2, using a Lyapunov process and the quasi-martingale convergence theorem, L. Bottou [29] proved

Lemma 1. If there exist A, B > 0 such that

E{h_{t+1} - (1 + γ_t² B) h_t | X_t} <= -2 γ_t (w_t - w*)^T ∇_w C(w_t) + γ_t² A,   (15)

then C(z, w) converges almost surely toward the minimum w*, i.e., P(lim_{t→+∞} w_t = w*) = 1.

We further make an assumption on the gradient:

Assumption 3 (Gradient Bound). The gradient g is bounded as

E{max(abs(g)) · ||g||_1} <= A + B ||w - w*||²,   (16)

where A, B > 0 and ||·||_1 is the ℓ1 norm.

With Assumption 3 and Lemma 1, we prove Theorem 1 (in the Supplementary Material):

Theorem 1. When online learning systems update as

w_{t+1} = w_t - γ_t (s_t · sign(g_t) ∘ b_t)   (17)

using stochastic ternary gradients, they converge almost surely toward the minimum w*, i.e., P(lim_{t→+∞} w_t = w*) = 1.

Compared with the gradient bound of standard GOGA [29],

E{||g||²} <= A + B ||w - w*||²,   (18)

the bound in Assumption 3 is stronger, because

max(abs(g)) · ||g||_1 >= ||g||².   (19)

We propose layer-wise ternarizing and gradient clipping to bring the two bounds closer, as shall be explained in Section 3.3. A side benefit of our work is that, by following a similar proof procedure, we can prove the convergence of GOGA when Gaussian noise N(0, σ²) is added to the gradients [30], under the gradient bound of

E{||g||²} <= A + B ||w - w*||² - σ².   (20)

Although this bound is also stronger, Gaussian noise encourages active exploration of the parameter space and improves accuracy, as was empirically studied in [30]. Similarly, the randomness of ternary gradients also encourages space exploration and improves accuracy for some models, as shall be presented in Section 4.

3.3 Feasibility Considerations

The gradient bound of TernGrad in Assumption 3 is stronger than the bound in standard GOGA. Pushing the two bounds closer can improve the convergence of TernGrad. In Assumption 3, max(abs(g)) is the maximum absolute value of all the gradients in the DNN. So, in a large DNN, max(abs(g)) can be much larger than most gradients, implying that the bound in TernGrad becomes much stronger. Considering this situation, we propose layer-wise ternarizing and gradient clipping to reduce max(abs(g)) and therefore shrink the gap between the two bounds.

Layer-wise ternarizing is proposed based on the observation that the range of gradients in each layer changes as gradients are back-propagated. Instead of adopting a large global maximum scaler, we independently ternarize the gradients in each layer using layer-wise scalers. More specifically, we separately ternarize the gradients of the biases and weights using Eq. (1), where g_t can be the gradients of the biases or weights of each layer. To approach the standard bound more closely, we could split gradients into more buckets and ternarize each bucket independently, as D. Alistarh et al. [26] do. However, this would introduce more floating scalers and increase communication. When the bucket size is one, it degenerates to floating gradients.

Figure 2: Histograms of (a) original floating gradients, (b) clipped gradients, (c) ternary gradients and (d) final averaged gradients. Visualization by TensorBoard. The DNN is AlexNet distributed on two workers, and the vertical axis is the training iteration. As examples, the top row visualizes the third convolutional layer and the bottom row the first fully-connected layer.
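The inequality in Eq. (19), which makes Assumption 3 stronger than the standard bound, follows from Σ_k g_k² <= max_k |g_k| · Σ_k |g_k| and can be checked numerically. A minimal sanity check of our own, for illustration:

import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    g = rng.normal(size=1000)
    lhs = np.max(np.abs(g)) * np.linalg.norm(g, 1)
    rhs = np.linalg.norm(g) ** 2
    assert lhs >= rhs  # max(abs(g)) * ||g||_1 >= ||g||^2, Eq. (19)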
Layer-wise ternarizing can shrink the bound gap resulting from the differing dynamic ranges of the gradients across layers. However, the dynamic range within a layer still remains a problem. We propose gradient clipping, which limits the magnitude of each gradient g_i in g as

f(g_i) = g_i,               if |g_i| <= c·σ,
f(g_i) = sign(g_i) · c·σ,   if |g_i| > c·σ,   (21)

where σ is the standard deviation of the gradients in g. In distributed training, gradient clipping is applied on every worker before ternarizing. c is a hyper-parameter to select, but we cross-validate it only once and use the resulting constant in all our experiments. Specifically, we used a CNN [31] trained on CIFAR-10 by momentum SGD with a staircase learning rate and obtained the optimum c = 2.5. Supposing the distribution of gradients is close to a Gaussian, as shown in Figure 2(a), very few gradients fall outside [-2.5σ, 2.5σ]. Clipping these gradients, as in Figure 2(b), can significantly reduce the scaler while only slightly changing the length and direction of the original g. Numerical analysis shows that gradient clipping with c = 2.5 changes the length of g by only 1.0%-1.5% and its direction by only 2-3 degrees. In our experiments, c = 2.5 remains valid across multiple databases (MNIST, CIFAR-10 and ImageNet), various network structures (LeNet, CifarNet, AlexNet, GoogLeNet, etc.) and training schemes (momentum, vanilla SGD, Adam, etc.).

The effectiveness of layer-wise ternarizing and gradient clipping can also be explained as follows. When the scaler s_t in Eq. (1) and Eq. (3) is very large, most gradients have a high probability of being ternarized to zero, leaving only a few gradients at large-magnitude values. This creates a severe parameter update pattern: most parameters stay unchanged while others are likely to overshoot, which introduces large training variance. Our experiments on AlexNet show that by applying both layer-wise ternarizing and gradient clipping, TernGrad can converge to the same accuracy as standard SGD. Removing either of the two techniques can result in accuracy degradation, e.g., a 3% top-1 accuracy loss without gradient clipping, as we shall show in Table 2.
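Eq. (21) is an element-wise clamp at c standard deviations. A one-line NumPy sketch of the rule as we read it, not the released code:

import numpy as np

def clip_gradients(g, c=2.5):
    # Eq. (21): clamp each component to c standard deviations of g;
    # applied on every worker before ternarizing.
    sigma = np.std(g)
    return np.clip(g, -c * sigma, c * sigma)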
4 Experiments

We first investigate the convergence of TernGrad under various training schemes on relatively small databases and show the results in Section 4.1. The scalability of TernGrad to large-scale distributed deep learning is then explored and discussed in Section 4.2. The experiments are performed in TensorFlow [2]. We maintain an exponential moving average of the parameters with an exponential decay of 0.9999 [15], and accuracy is evaluated with the final averaged parameters; this gives slightly better accuracy in our experiments. For a fair comparison, in each pair of comparative experiments using either floating or ternary gradients, all other training hyper-parameters are the same unless differences are explicitly pointed out. When SGD with momentum is adopted, a momentum value of 0.9 is used. When polynomial decay is applied to the learning rate (LR), a power of 0.5 is used to decay the LR from the base LR to zero.

4.1 Integrating with Various Training Schemes

We study the convergence of TernGrad using LeNet on MNIST and a ConvNet [31] (named CifarNet here) on CIFAR-10. LeNet is trained without data augmentation. While training CifarNet, images are randomly cropped to 24 × 24 and mirrored, and brightness and contrast are randomly adjusted. During the testing of CifarNet, only the center crop is used. Our experiments cover the scope of SGD optimizers: vanilla SGD, SGD with momentum [32] and Adam [33].

Figure 3 shows the results for LeNet. All runs use polynomial LR decay with a weight decay of 0.0005. The base learning rates of momentum SGD and vanilla SGD are 0.01 and 0.1, respectively. Given the total mini-batch size M and the worker number N, the mini-batch size per worker is M/N. Without explicit mention, mini-batch size refers to the total mini-batch size in this work. Figure 3 shows that TernGrad converges to similar accuracy within the same number of iterations, using momentum SGD or vanilla SGD. The maximum accuracy gain is 0.15% and the maximum accuracy loss is 0.22%. Very importantly, the communication time per iteration can be reduced. The figure also shows that TernGrad generalizes well to distributed training with large N. No degradation is observed even for N = 64, which corresponds to one training sample per iteration per worker.

Figure 3: Accuracy vs. worker number for baseline and TernGrad, trained with (a) momentum SGD or (b) vanilla SGD. In all experiments, the total mini-batch size is 64 and the maximum iteration count is 10K.

Table 1 summarizes the results for CifarNet, where all runs terminate after the same number of epochs. Adam is used for training. Instead of keeping the total mini-batch size unchanged, we maintain the mini-batch size per worker; the total mini-batch size therefore increases linearly with the number of workers. Though the base learning rate of 0.0002 seems small, it achieves better accuracy than larger ones like 0.001 for the baseline. In each pair of experiments, TernGrad converges to an accuracy level with less than 1% degradation. Accuracy degrades under a large mini-batch size for both the baseline and TernGrad, because parameters are updated less frequently and large-batch training tends to converge to poorer sharp minima [34]. However, the noise inherent in TernGrad can help convergence toward better flat minimizers [34], which could explain the smaller accuracy gap between the baseline and TernGrad when the mini-batch size is 2048. In our AlexNet experiments in Section 4.2, TernGrad even improves the accuracy in the large-batch scenario. This attribute is beneficial for distributed training, as a large mini-batch size is usually required.

Table 1: Results of TernGrad on CifarNet.

SGD    base LR   total mini-batch size   iterations   gradients   workers   accuracy
Adam   0.0002    128                     300K         floating    2         86.56%
                                                      TernGrad    2         85.64% (-0.92%)
Adam   0.0002    2048                    18.75K       floating    16        83.19%
                                                      TernGrad    16        82.80% (-0.39%)
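The parameter averaging used for evaluation above is a plain exponential moving average with decay 0.9999. A minimal sketch of the update as we describe it, with names of our own choosing:

def update_ema(shadow, params, decay=0.9999):
    # Exponential moving average of parameters; `shadow` holds the
    # averaged copy that is used for accuracy evaluation.
    for k in params:
        shadow[k] = decay * shadow[k] + (1.0 - decay) * params[k]
    return shadow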
4.2 Scaling to Large-scale Deep Learning

We also evaluate TernGrad with AlexNet and GoogLeNet trained on ImageNet. Applying TernGrad to large-scale DNNs is more challenging: simply replacing the floating gradients with ternary gradients while keeping the other hyper-parameters unchanged may result in some accuracy loss. However, we are able to train large-scale DNNs successfully with TernGrad after making some or all of the following changes: (1) decreasing the dropout ratio to keep more neurons; (2) using a smaller weight decay; and (3) disabling ternarizing in the last classification layer.

Dropout regularizes DNNs by adding randomness, while TernGrad also introduces randomness; dropping fewer neurons thus helps avoid over-randomization. Similarly, as the randomness of TernGrad already provides regularization, a smaller weight decay may be adopted. We suggest not applying ternarizing to the last layer, considering that the one-hot encoding of the labels generates a skewed distribution of gradients, and the symmetric ternary encoding {-1, 0, 1} is not optimal for such a skewed distribution. Though asymmetric ternary levels could be an option, we stick to floating gradients in the last layer for simplicity. The overhead of communicating these floating gradients is small, as the last layer occupies only a small percentage of the total parameters, e.g., 6.7% in AlexNet and 3.99% in ResNet-152 [35].

All DNNs are trained by momentum SGD with Batch Normalization [36] on the convolutional layers. AlexNet is trained with the hyper-parameters and data augmentation described in Caffe. GoogLeNet is trained with polynomial LR decay and the data augmentation of [37]. Our implementation of GoogLeNet does not use any auxiliary classifiers; that is, the loss from the last softmax layer is the total loss. More training hyper-parameters are reported in the corresponding tables and the published source code. Validation accuracy is evaluated using only the central crops of images.

The results for AlexNet are shown in Table 2. The mini-batch size per worker is fixed to 128. For fast development, all DNNs are trained through the same number of epochs. In this setting, when there are more workers, the number of iterations becomes smaller and parameters are updated less frequently. To overcome this problem, we increase the learning rate in the large-batch scenario [10]. Using this scheme, SGD with floating gradients successfully trains AlexNet to similar accuracy for mini-batch sizes of 256 and 512. However, when the mini-batch size is 1024, the top-1 accuracy drops by 0.71%, for the same reason pointed out in Section 4.1. TernGrad converges to comparable accuracy levels regardless of mini-batch size. Notably, it improves top-1 accuracy by 0.92% when the mini-batch size is 1024, because its inherent randomness encourages escape from poorer sharp minima [30][34]. Figure 4 plots training details vs. iteration when the mini-batch size is 512. Figure 4(a) shows that the convergence curve of TernGrad matches the baseline well, demonstrating the effectiveness of TernGrad. The training efficiency can be further improved by reducing the communication time, as discussed in Section 5.

Table 2: Accuracy comparison for AlexNet.

base LR  mini-batch size  workers  iterations  gradients          weight decay  DR*  top-1   top-5
0.01     256              2        370K        floating           0.0005        0.5  57.33%  80.56%
                                               TernGrad           0.0005        0.2  57.61%  80.47%
                                               TernGrad-noclip**  0.0005        0.2  54.63%  78.16%
0.02     512              4        185K        floating           0.0005        0.5  57.32%  80.73%
                                               TernGrad           0.0005        0.2  57.28%  80.23%
0.04     1024             8        92.5K       floating           0.0005        0.5  56.62%  80.28%
                                               TernGrad           0.0005        0.2  57.54%  80.25%
* DR: dropout ratio, the ratio of dropped neurons. ** TernGrad without gradient clipping.

Table 3: Accuracy comparison for GoogLeNet.

base LR  mini-batch size  workers  iterations  gradients  weight decay  DR    top-5
0.04     128              2        600K        floating   4e-5          0.2   88.30%
                                               TernGrad   1e-5          0.08  86.77%
0.08     256              4        300K        floating   4e-5          0.2   87.82%
                                               TernGrad   1e-5          0.08  85.96%
0.10     512              8        300K        floating   4e-5          0.2   89.00%
                                               TernGrad   2e-5          0.08  86.47%
The training data loss in Figure 4(b) shows that TernGrad converges to a slightly lower level, which further proves the capability of TernGrad to minimize the target function even with ternary gradients. The smaller dropout ratio in TernGrad can be another reason for the lower loss. Figure 4(c) illustrates that, on average, 71.32% of the gradients of a fully-connected layer (fc6) are ternarized to zero.

Figure 4: AlexNet trained on 4 workers with mini-batch size 512: (a) top-1 validation accuracy, (b) training data loss and (c) sparsity of the gradients in the first fully-connected layer (fc6) vs. iteration.

Finally, we summarize the results for GoogLeNet in Table 3. On average, the accuracy loss is less than 2%. In TernGrad, we adopted all the hyper-parameters (except the dropout ratio and weight decay) that were well tuned for the baseline [38]. Tuning these hyper-parameters specifically for TernGrad could further optimize it and obtain higher accuracy.

5 Performance Model and Discussion

Our proposed TernGrad requires only three numerical levels {-1, 0, 1}, which can aggressively reduce the communication time. Moreover, our experiments in Section 4 demonstrate that, within the same number of iterations, TernGrad can converge to approximately the same accuracy as its corresponding baseline. Consequently, a dramatic throughput improvement in distributed DNN training is expected. Due to resource and time constraints, we were unfortunately not able to perform the training of more DNN models like VggNet-A [39] or distributed training beyond 8 workers; we plan to continue these experiments in future work. We instead use a performance model to conduct a scalability analysis of DNN models utilizing up to 512 GPUs, with and without TernGrad. Three neural network models, AlexNet, GoogLeNet and VggNet-A, are investigated. In discussions of the performance model, performance refers to training speed. Here, we extend the performance model initially developed for CPU-based deep learning systems [40] to estimate the performance of distributed GPUs/machines. The key idea is to combine lightweight profiling on a single machine with analytical modeling for accurate performance estimation. In the interest of space, please refer to the Supplementary Material for details of the performance model.

Figure 5: Training throughput on two different GPU clusters: (a) a 128-node GPU cluster with 1 Gbps Ethernet, where each node has 4 NVIDIA GTX 1080 GPUs and one PCI switch; (b) a 128-node GPU cluster with 100 Gbps InfiniBand connections, where each node has 4 NVIDIA Tesla P100 GPUs connected via NVLink. The mini-batch sizes per GPU for AlexNet, GoogLeNet and VggNet-A are 128, 64 and 32, respectively.
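To make the flavor of such a model concrete, below is a toy cost decomposition of one iteration into profiled compute time plus synchronization time over a shared link. The decomposition, the constants and the 2-bit encoding assumption are ours for illustration; the actual model is in the Supplementary Material.

def iteration_time(compute_s, param_count, workers, bandwidth_Bps, grad_bits=32.0):
    # Hypothetical model: time = profiled compute time + time to push
    # gradients and pull updates through one shared link (bytes/s).
    grad_bytes = param_count * grad_bits / 8.0
    comm_s = 2.0 * workers * grad_bytes / bandwidth_Bps
    return compute_s + comm_s

# Toy example: an AlexNet-scale model (~61M parameters) on 1 Gbps Ethernet.
fp32 = iteration_time(0.1, 61e6, workers=8, bandwidth_Bps=125e6)
tern = iteration_time(0.1, 61e6, workers=8, bandwidth_Bps=125e6, grad_bits=2.0)
print(fp32 / tern)  # communication-dominated speedup estimate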
Figure 5 presents the training throughput on the two GPU clusters. Our results show that TernGrad effectively increases the training throughput for all three DNNs. The speedup depends on the communication-to-computation ratio of the DNN, the number of GPUs, and the communication bandwidth. DNNs with larger communication-to-computation ratios (e.g. AlexNet and VggNet-A) benefit more from TernGrad than those with smaller ratios (e.g. GoogLeNet). Even on a very high-end HPC system with InfiniBand and NVLink, TernGrad is still able to double the training speed of VggNet-A on 128 nodes, as shown in Figure 5(b). Moreover, TernGrad becomes more efficient as the bandwidth shrinks, such as with the 1 Gbps Ethernet and PCI switch in Figure 5(a), where TernGrad achieves a 3.04× training speedup for AlexNet on 8 GPUs.

Acknowledgments

This work was supported in part by NSF CCF-1744082 and DOE SC0017030. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF, DOE, or their contractors.

References

[1] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223-1231, 2012.
[2] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint:1603.04467, 2016.
[3] Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Andrew Ng. Deep learning with COTS HPC systems. In International Conference on Machine Learning, pages 1337-1345, 2013.
[4] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693-701, 2011.
[5] Trishul M. Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. Project Adam: Building an efficient and scalable deep learning training system. In OSDI, volume 14, pages 571-582, 2014.
[6] Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, and Yaoliang Yu. Petuum: A new platform for distributed machine learning on big data. IEEE Transactions on Big Data, 1(2):49-67, 2015.
[7] Philipp Moritz, Robert Nishihara, Ion Stoica, and Michael I. Jordan. SparkNet: Training deep networks in Spark. arXiv preprint:1511.06051, 2015.
[8] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint:1512.01274, 2015.
[9] Sixin Zhang, Anna E. Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems, pages 685-693, 2015.
[10] Mu Li. Scaling Distributed Machine Learning with System and Algorithm Co-design. PhD thesis, Carnegie Mellon University, 2017.
[11] Mu Li, David G. Andersen, Jun Woo Park, Alexander J. Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J. Shekita, and Bor-Yiing Su. Scaling distributed machine learning with the parameter server. In OSDI, volume 14, pages 583-598, 2014.
[12] Mu Li, David G. Andersen, Alexander J. Smola, and Kai Yu. Communication efficient distributed machine learning with the parameter server. In Advances in Neural Information Processing Systems, pages 19-27, 2014.
[13] Qirong Ho, James Cipar, Henggang Cui, Seunghak Lee, Jin Kyu Kim, Phillip B. Gibbons, Garth A. Gibson, Greg Ganger, and Eric P. Xing. More effective distributed ML via a stale synchronous parallel parameter server. In Advances in Neural Information Processing Systems, pages 1223-1231, 2013.
[14] Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J. Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 2595-2603, 2010.
[15] Xinghao Pan, Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed synchronous SGD. arXiv preprint:1702.05800, 2017.
[16] Wei Zhang, Suyog Gupta, Xiangru Lian, and Ji Liu. Staleness-aware async-SGD for distributed deep learning. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, pages 2350-2356. AAAI Press, 2016. ISBN 978-1-57735-770-4. URL http://dl.acm.org/citation.cfm?id=3060832.3060950.
[17] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[18] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074-2082, 2016.
[19] J. Park, S. Li, W. Wen, P. T. P. Tang, H. Li, Y. Chen, and P. Dubey. Faster CNNs with direct sparse convolutions and guided pruning. In International Conference on Learning Representations (ICLR), 2017.
[20] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In Advances in Neural Information Processing Systems, pages 4107-4115, 2016.
[21] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525-542. Springer, 2016.
[22] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
[23] Joseph K. Bradley, Aapo Kyrola, Danny Bickson, and Carlos Guestrin. Parallel coordinate descent for L1-regularized loss minimization. arXiv preprint arXiv:1105.5379, 2011.
[24] Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. arXiv preprint:1704.05021, 2017.
[25] Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Interspeech, pages 1058-1062, 2014.
[26] Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-optimal stochastic gradient descent, with applications to training neural networks. arXiv preprint:1610.02132, 2017.
[27] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In ICML, pages 1737-1746, 2015.
[28] Rahul Garg and Rohit Khandekar. Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 337-344. ACM, 2009.
[29] Léon Bottou. Online learning and stochastic approximations. On-line Learning in Neural Networks, 17(9):142, 1998.
[30] Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks. arXiv preprint:1511.06807, 2015.
[31] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[32] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145-151, 1999.
[33] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint:1412.6980, 2014.
[34] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017.
[35] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[36] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint:1502.03167, 2015.
[37] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826, 2016.
[38] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint:1409.4842, 2015.
[39] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint:1409.1556, 2014.
[40] Feng Yan, Olatunji Ruwase, Yuxiong He, and Trishul M. Chilimbi. Performance modeling and scalability optimization of distributed deep learning systems. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, August 10-13, 2015, pages 1355-1364, 2015. doi: 10.1145/2783258.2783270. URL http://doi.acm.org/10.1145/2783258.2783270.
How Oscillatory Neuronal Responses Reflect Bistability and Switching of the Hidden Assembly Dynamics

K. Pawelzik, H.-V. Bauer†, J. Deppisch, and T. Geisel
Institut für Theoretische Physik and SFB 185 Nichtlineare Dynamik, Universität Frankfurt, Robert-Mayer-Str. 8-10, D-6000 Frankfurt/M. 11, FRG
† temporary address: CNS-Program, Caltech 216-76, Pasadena
email: [email protected]

Abstract

A switching between apparently coherent (oscillatory) and stochastic episodes of activity has been observed in responses from cat and monkey visual cortex. We describe the dynamics of these phenomena in two parallel approaches, a phenomenological and a rather microscopic one. On the one hand we analyze neuronal responses in terms of a hidden state model (HSM). The parameters of this model are extracted directly from experimental spike trains. They characterize the underlying dynamics as well as the coupling of individual neurons to the network. This phenomenological model thus provides a new framework for the experimental analysis of network dynamics. The application of this method to multi unit activities from the visual cortex of the cat substantiates the existence of oscillatory and stochastic states and quantifies the switching behaviour in the assembly dynamics. On the other hand we start from the single spiking neuron and derive a master equation for the time evolution of the assembly state, which we represent by a phase density. This phase density dynamics (PDD) exhibits co-stability of two attractors, a limit cycle and a fixed point, when the synaptic interaction is nonlinear. External fluctuations can switch the bistable system from one state to the other. Finally, we show that the two approaches are mutually consistent and that both therefore explain the detailed time structure in the data.

1 INTRODUCTION

A few years ago, oscillatory and synchronous neuronal activity was discovered in cat visual cortex [1-3]. These experiments backed earlier considerations about synchrony in neuronal activity as a mechanism to bind features, e.g., of an object in a visual scene [4]. They triggered broad experimental and theoretical investigations of detailed neuronal dynamics as a means for information processing and, in particular, for feature binding. Many theoretical contributions have tried to reproduce and explain aspects of the experimentally observed phenomena [5]. Motivated by the experiments, the models were particularly designed to exhibit spatial synchronization of permanent oscillatory responses upon stimulation by a common, connected stimulus like a bar. Most models consist of elements which exhibit a limit cycle after a simple Hopf bifurcation. The experimental data, however, contain many details which the present models do not yet completely incorporate. One of these details is the coexistence of regular and irregular episodes in the data, which interchange in an apparently stochastic manner. This interchange can be observed in the signals from a single electrode [6] as well as in the time-resolved correlation of the signals from two electrodes [7]. In this contribution we show that the observed time structure reflects a switching in the dynamics of the underlying neuronal system. This will be demonstrated by two complementary approaches. On the one hand, we present a new method for a quantitative analysis of the dynamical system underlying the measured spike trains.
Our approach gives a quantitative description of the dynamical phenomena and furthermore explains the relation between the collective excitation in the network, which is not accessible experimentally (i.e. hidden), and the contributions of the single observed neurons in terms of transition probability functions. These probabilities are the parameters of our Ansatz and can be estimated directly from multi unit activities (MUA) using the Baum-Welch algorithm. Especially for the data from cat visual cortex we find that indeed there are two states dominating the dynamics of collective excitation, namely a state of repeated excitation and a state in which the observed neurons fire independently and stochastically. On the other hand, using simple statistical considerations, we derive a description for a local neuronal subpopulation which exhibits bistability. The dynamics of the subpopulation can either rest on a fixed point, corresponding to the irregular firing patterns, or follow a limit cycle, corresponding to the oscillatory firing patterns. The subpopulation can alternate between both states under the influence of noise in the external excitation. It turns out that the dynamics of this formal model reproduces the observed local cortical signals in much detail.

2 Excitability of Neurons and Neuronal Assemblies

An abstract model of a neuron under external excitation $e$ is given by its threshold dynamics. The state of the neuron is represented by its phase $\phi_s$, which is the time passed since the last action potential ($\phi_s = 0$). The threshold $\theta$ is high directly after a spike and falls off in time, and the neuron can fire again when $e$ exceeds $\theta$. In case of noise or internal stochasticity, an excitability description of the dynamics of the neuron is more adequate. It gives the probability $p_f$ to fire again in dependence of the state $\phi_s$, with $p_f(\phi_s) = \sigma(e - \theta(\phi_s))$ and $\sigma$ some sigmoid function. A monotonically falling threshold $\theta$ then corresponds to a monotonically increasing excitability $p_f$. Such a description neglects any memory in the neuron going beyond the last spike. In particular this means for an isolated neuron that $p_f$ can be easily calculated from the inter-spike interval histogram (ISIH) $p_h$ using the relation $p_h(t) = p_f(t) \cdot \left(1 - \int_0^t p_h(t')\,dt'\right)$. In that case also the autocorrelation function can be calculated from $p_h(t)$ via $C(\tau) = p_h(\tau) + \int_0^{\tau} p_h(t)\, C(\tau - t)\, dt$.
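To make these renewal relations concrete, the following sketch (a minimal NumPy example; the ISIH values are invented for illustration and are not the measured data) computes the excitability $p_f$ and the autocorrelation $C$ from a discretized inter-spike interval histogram:

```python
import numpy as np

# Hypothetical discretized inter-spike interval histogram p_h (bin width 1 ms),
# peaked near t = 19 ms as in the oscillatory data; values are illustrative only.
t = np.arange(1, 101)
p_h = np.exp(-0.5 * ((t - 19) / 4.0) ** 2)
p_h /= p_h.sum()

# Excitability from the renewal relation p_h(t) = p_f(t) * (1 - sum_{t'<t} p_h(t')).
survival = 1.0 - np.concatenate(([0.0], np.cumsum(p_h)[:-1]))
p_f = np.where(survival > 1e-12, p_h / survival, 0.0)

# Autocorrelation from the renewal recursion C(tau) = p_h(tau) + sum_t p_h(t) C(tau - t).
C = np.zeros_like(p_h)
for tau in range(len(t)):
    C[tau] = p_h[tau] + sum(p_h[k] * C[tau - k - 1] for k in range(tau))
```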
The excitability formulation sketched above is not valid for a neuron which is embedded in a neuronal assembly. However, we may use this Ansatz of a renewal process to describe the activation dynamics of the whole assembly (see Section 5). The phase $\phi_b = 0$ here corresponds to the state of synchronous activity of many neurons in the assembly, which we call a burst for convenience. Since the dynamics of the network can differ from the dynamics of the elements, we expect the function $p_f^b(\phi_b)$, which now describes the burst excitability of the whole assembly, to be different from the spike excitability $p_f(\phi_s)$ of the single neuron.

A simple example for this is a system of integrate-and-fire neurons in which oscillatory and irregular phases emerge under fixed stimulus conditions ([8, 9] and Section 5). Contrary to the excitability of the single refractory element, the burst excitability $p_f^b$ of the system has a maximum at $\phi_b = T$, which expresses the increased probability to burst again after the typical oscillation period $T$, i.e. the maximum represents a state $o$ of oscillation. The assembly, however, can miss a burst around $\phi_b = T$ with a probability $p_{o \to s}$ and switch into a second state $s$ in which the probability $p_{s \to o}$ to burst again is reduced to a constant level. The switching probabilities $p_{o \to s}$ and $p_{s \to o}$ can be easily calculated from $p_f^b$. In this way the shape of $p_f^b$ distinguishes a system with an oscillatory state from a system which is purely refractory but which nevertheless can still have strong modulations in the autocorrelogram [13].

3 Hidden states and stochastic observables

The single neuron in an assembly, however, need not be strictly coupled to the state of the assembly, i.e. a neuron may also spike for $\phi_b > 0$ and it may not take part in a burst. This stochastic coupling to an underlying process suffices to destroy the equivalence of $p_h$ and the autocorrelogram $C(\tau) = \langle s(t)\, s(t+\tau) \rangle_t$ of the spike train $s(t) \in \{0, 1\}$ (Fig. 1). We therefore include the probability $p_{obs}(\phi_b)$ to observe a spike when the assembly is in the state $\phi_b$ into our description (Fig. 2). The unlikely case where the spike represents the burst corresponds to the choice $p_{obs} = \delta_{\phi_b, 0}$.

4 Application to Experimental Data

While our approach is quite general, we here concentrate on the measurements of Gray et al. [2] in cat visual cortex. Because our hidden state model has the structure of a hidden Markov model, we can obtain all the parameters $p_{obs}(\phi)$ and $p_f^b(\phi)$ directly from the multi unit activities using the well known Baum-Welch algorithm [10]. The results can be seen in Fig. 3.

Figure 1: Correlogram of multi unit activities from cat visual cortex (line), together with the correlograms predicted from the ISIH and from the hidden state model (+); axes: C(τ) versus τ [ms].

Figure 2: The hidden state model. While $p_f^b(\phi_b)$ governs the dynamics of assembly states $\phi_b$, $p_{obs}(\phi_b)$ represents the probability to observe a spike of a single neuron.

Figure 3: Network excitability $p_f^b(\phi)$ and single neuron contribution $p_{obs}(\phi)$ estimated from experimental spike trains (A17, cat).

The excitability shows a typical peak around the main period at T = 19 ms, which indicates a state of oscillation. For larger phases we see a reduced excitability, which reveals a state of stochastic activity ($p_{obs}(\phi_b > T) > 0$). The spike observation probability $p_{obs}(\phi)$ is peaked near the burst and is about constant elsewhere. This means that we can characterize the data by a stochastic switching between two dynamical states in the underlying system. Because of the stochastic coupling of the single neuron to the assembly state, this can only hardly be observed directly. The switching probabilities between either state calculated from $p_f^b$ coincide with results from other methods [11]. From the excitability $p_f^b$ and the spike probabilities $p_{obs}$ we now obtain the autocorrelation function $C(\tau) = \int_\phi \int_{\phi'} p_{obs}(\phi')\, M^{\tau}(\phi', \phi)\, p_{obs}(\phi)\, p(\phi)\, d\phi'\, d\phi$, with $M$ being the transition matrix of the Markov model (see also below). The result is compared to the true autocorrelation $C(\tau)$ in Fig. 1. The excellent agreement confirms our simple Ansatz of a renewal process for the hidden burst dynamics of the assembly.
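Given fitted parameters, the model correlogram can be evaluated directly from the transition matrix. The following sketch assumes a discretized phase space and a reset-or-advance transition structure (as in Section 5); all parameter values here are placeholders, not the Baum-Welch estimates:

```python
import numpy as np

n_states = 40                      # discretized assembly phases phi_b
p_f_b = np.full(n_states, 0.05)    # placeholder burst excitability p_f^b(phi)
p_f_b[19] = 0.6                    # peak at the oscillation period T = 19 ms
p_obs = np.full(n_states, 0.1)     # placeholder spike observation probability
p_obs[0] = 0.8                     # spikes are most likely at the burst (phi_b = 0)

# Transition matrix of the hidden Markov chain: a burst resets to phi_b = 0,
# otherwise the phase advances one bin (the last bin persists until a burst).
M = np.zeros((n_states, n_states))
M[0, :] = p_f_b
for j in range(n_states - 1):
    M[j + 1, j] = 1.0 - p_f_b[j]
M[-1, -1] = 1.0 - p_f_b[-1]

# Stationary distribution p(phi) from the leading eigenvector of M.
w, v = np.linalg.eig(M)
p_stat = np.real(v[:, np.argmax(np.real(w))])
p_stat /= p_stat.sum()

# Model correlogram C(tau) = sum_{phi,phi'} p_obs(phi') M^tau(phi',phi) p_obs(phi) p(phi).
C = [p_obs @ np.linalg.matrix_power(M, tau) @ (p_obs * p_stat) for tau in range(1, 81)]
```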
5 Bistability and Switching in Networks of Spiking Neurons

The above results indicate that the dynamics of a cortical assembly includes bistability rather than a simple Hopf bifurcation. In order to understand how this bistability emerges in a network, we go one step back and derive a model for a neuronal subpopulation on the basis of spiking neurons. We assume again that the internal state of the neuron is given by the threshold function $\theta$ depending on the time since the last spike event, and that the excitability of the neuron can be described by a firing probability $p_f$. In a network, however, the input to the neuron has external contributions $i_{ext}$ as well as contributions from within, $i_{int}$, i.e.

$p_f(\phi, t) = \mathrm{sigm}\left(w_{ext}\, i_{ext}(t) + w_{int}\, i_{int}(t) - \theta(\phi)\right).$

For a more formal treatment of the dynamics of such a network we characterize the assembly state by a phase density $p(\phi, t)$ which gives the relative amount of neurons in the assembly which are in phase $\phi$ at time $t$ (Fig. 4).

Figure 4: Illustration of assembly state representation by a phase density.

Discretizing the internal phases $\phi$, we transform $p(\phi, t)$ to $\vec{p}(j)$, a vector whose components $i$ give the probability to be in phase $\phi_i \in [(i-1)\Delta t,\, i\Delta t]$ at time $t_j = j\Delta t$. The number $T$ of components is chosen large enough to ensure that $p_f(T, j) = p_f(T\Delta t, j\Delta t)$ does not change any more. This vector evolves in time according to

$\vec{p}(j+1) = M(j)\, \vec{p}(j) \quad (1)$

with

$M(j) = \begin{pmatrix} p_f(1,j) & p_f(2,j) & \cdots & p_f(T-1,j) & p_f(T,j) \\ 1-p_f(1,j) & 0 & \cdots & 0 & 0 \\ 0 & 1-p_f(2,j) & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1-p_f(T-1,j) & 1-p_f(T,j) \end{pmatrix}$

being a matrix that incorporates the effects of firing (reset) via the firing probability $p_f(i, j)$. It remains to define the lateral interaction in the subpopulation. Clearly only the fraction of neurons that fire can interact, therefore we have $i_{int} = g(p_0)$.
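A minimal sketch of iterating this phase density dynamics numerically (the threshold shape, weights, and the quadratic interaction $g(x) = x^2$ are illustrative assumptions, not the fitted quantities):

```python
import numpy as np

T_bins = 50                                      # number of discretized phases
theta = 4.0 * np.exp(-np.arange(T_bins) / 8.0)   # falling threshold theta(phi), illustrative
w_ext, w_int = 1.0, 6.0

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

p = np.full(T_bins, 1.0 / T_bins)                # initial phase density
rates = []
rng = np.random.default_rng(0)
for j in range(1000):
    i_ext = 0.5 + 0.3 * rng.standard_normal()    # noisy external input drives switching
    i_int = p[0] ** 2                            # nonlinear lateral interaction g(p0) = p0^2
    p_f = sigm(w_ext * i_ext + w_int * i_int - theta)
    # Transition matrix M(j): firing resets to phase 0, otherwise advance one bin.
    M = np.zeros((T_bins, T_bins))
    M[0, :] = p_f
    for k in range(T_bins - 1):
        M[k + 1, k] = 1.0 - p_f[k]
    M[-1, -1] = 1.0 - p_f[-1]
    p = M @ p
    rates.append(p[0])                           # p0(t): time-dependent firing rate
```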
Adding some noise to the external input the system can also switch between the two states(Fig. 5). In this way our simple local model can capture the switching phenomenon which is inherent in the experimental data [12]. 6 Summary and Synthesis We presented two complementary approaches to the dynamics of neuronal subpopulations, a phenomenological one which captures the time structure in measured spike trains and a neuronal one which provides a formal description of the dynamics of assemblies of spiking neurons. In the phenomenological approach we introduced the hidden state model which revealed that the system underlying the multi unit activities from the cat switches between states of oscillatory and stochastic activity. The analysis of the phase density dynamics showed, that bistability and switching emerges in networks of spiking neurons when the neuronal interaction is nonlinear. It remains to show that these approaches are also quantitatively consistent, i.e. that they are two sides of the same medal. Instead of a formal proof we only remark here that the parameters of the HSM can be extracted directly from the dynamics of the PDD under external noise. For this purpose one only needs to evaluate the interburst interval distribution from which pj can be calculated. Pobs is easily estimated as the average shape of Po(t) between successive bursts. We find, that already this procedure gives HSMs which accurately reproduce the correlation function of the 983 984 Pawelzik, Bauer, Deppisch, and Geisel full firing dynamics poet). This means that the HSM captures relevant aspects of the assembly dynamics including the relation between the network dynamics and the contributions of single neurons. References [1] Gray C.M., Singer W., Stimulus-Specific Neuronal Oscillations in Cat Visual Cortex: a Cortical Functional Unit, Soc.Neurosci.Abstr. 13.404.3 (1989). [2] Gray C.M., Konig P., Engel A.K., and Singer W., Oscillatory Responses in Cat Visual Cortex Exhibit Inter-Columnar Synchronization which Reflects Global Stimulus Properties Nature 338, pp. 334-337 (1989). [3] Eckhorn R., Bauer R., Jordan W., Brosch M., Kruse W., Munk M., and Reitboeck H.J., Coherent Oscillations: a Mechanism of Feature Linking in the Visual Cortex'l, BioI. Cyb. 60, pp121-130 (1988). [4] C. v.d.Malsburg The Correlation Theory of Brain Function Internal Report 812, Max-Planck-Institute for Biophysical Chemistry, Gottingen, F.R.G. (1981). [5] Schuster H.G. (Ed.), Nonlinear Dynamics and Neuronal Networks, VCH Weinheim, Heidelberg (1991) [6] Pawelzik K., Bauer H.-U., Geisel T. Switching between predictable and unpredictable states in data from cat visual cortex, talk at CNS San Francisco 1992, to appear in the CNS Proceedings. [7] Gray C.M., Engel A.K., Konig P., Singer W., Temporal Properties of Synchronous Oscillatory Interactions in Cat Striate Cortex, in: Nonlinear Dynamics and Neuronal Networks, Ed. H.G. Schuster, VCH Weinheim, pp. 27-55 (1991) [8] Deppisch J., Bauer H.-U., Schillen T., Konig P., Pawelzik K., Geisel T., Stochastic and Oscillatory Burst Activities, accepted for ICANN'92, Brighton, UK. (1992). [9] Deppisch J., Bauer H.-U., Schillen T., Konig P., Pawelzik K., Geisel T., Alternating Oscillatory and Stochastic States in a Network of Spiking Neurons, submitted to Biol.Cyb. (1992). [10] Rabiner, L.R., A Tutorial on Hidden-Markov Models and Selected Applications in Speech Recognition Proc. IEEE 11, 2 pp. 257-286 (1989). [11] Bauer H.TU., Deppisch J., Geisel T., Pawelzik K., in preparation. 
[12] Bauer H.-U., Pawelzik K., Alternating Oscillatory and Stochastic Dynamics in a Model for a Neuronal Assembly, Physica D, submitted.
[13] Schuster H.G., Koch C., Burst Synchronization Without Frequency-Locking in a Completely Solvable Network Model, in Moody J.E., Hanson S.J., Lippmann R.P. (Eds.), Neural Information Processing Systems 4, p. 117, Morgan Kaufmann (1992).
Learning Affinity via Spatial Propagation Networks

Sifei Liu (UC Merced, NVIDIA), Guangyu Zhong (Dalian University of Technology), Shalini De Mello (NVIDIA), Ming-Hsuan Yang (UC Merced, NVIDIA), Jinwei Gu (NVIDIA), Jan Kautz (NVIDIA)

Abstract

In this paper, we propose spatial propagation networks for learning the affinity matrix for vision tasks. We show that by constructing a row/column linear propagation model, the spatially varying transformation matrix exactly constitutes an affinity matrix that models dense, global pairwise relationships of an image. Specifically, we develop a three-way connection for the linear propagation model, which (a) formulates a sparse transformation matrix, where all elements can be outputs from a deep CNN, but (b) results in a dense affinity matrix that effectively models any task-specific pairwise similarity matrix. Instead of designing the similarity kernels according to image features of two points, we can directly output all the similarities in a purely data-driven manner. The spatial propagation network is a generic framework that can be applied to many affinity-related tasks, such as image matting, segmentation and colorization, to name a few. Essentially, the model can learn semantically-aware affinity values for high-level vision tasks due to the powerful learning capability of deep CNNs. We validate the framework on the task of refinement of image segmentation boundaries. Experiments on the HELEN face parsing and PASCAL VOC-2012 semantic segmentation tasks show that the spatial propagation network provides a general, effective and efficient solution for generating high-quality segmentation results.

1 Introduction

An affinity matrix is a generic matrix that determines how close, or similar, two points are in a space. In computer vision tasks, it is a weighted graph that regards each pixel as a node, and connects each pair of pixels by an edge [25, 16, 15, 10, 29]. The weight on that edge should reflect the pairwise similarity with respect to different tasks. For example, for low-level vision tasks such as image filtering, the affinity values should reveal the low-level coherence of color and texture [29, 28, 10, 9]; for mid- to high-level vision tasks such as image matting and segmentation [16, 22], the affinity measure should reveal the semantic-level pairwise similarities. Most techniques explicitly or implicitly assume a measurement or a similarity structure over the space of configurations. The success of such algorithms depends heavily on the assumptions made to construct these affinity matrices, which are generally not treated as part of the learning problem.

In this paper, we show that the problem of learning the affinity matrix can be equivalently expressed as learning a group of small row/column-wise, spatially varying linear transformation matrices. Since a linear transformation can be easily implemented as a differentiable module in a deep neural network, the transformation matrix can be learned in a purely data-driven manner as opposed to being constructed by hand. Specifically, we adopt an independent deep CNN with the original RGB images as inputs to output all entities of the matrix, such that the affinity is learned by a deep model conditioned on the specific inputs. We show that using a three-way connection, instead of the full connection between adjoining rows/columns, is sufficient for learning a dense affinity matrix and requires far fewer output channels of a deep CNN.
Therefore, instead of using designed features and kernel tricks, our network outputs all entities of the affinity matrix in a data-driven manner. The advantages of learning an affinity matrix in a data-driven manner are manifold. First, a hand-designed similarity matrix based on a distance metric in a certain space (e.g., RGB or Euclidean [10, 25, 5, 36, 14]) may not adequately describe the pairwise relationships in the mid-to-high-level feature spaces. To apply such designed pairwise kernels to tasks such as semantic segmentation, multiple iterations are required [14, 5, 36] for satisfactory performance. In contrast, the proposed method learns and outputs all entities of an affinity matrix under direct supervision of the ultimate objectives, where no iteration, specific design or assumption about the kernel function is needed. Second, we can learn high-level semantic affinity measures by initializing with hierarchical deep features from pre-trained VGG [26] and ResNet [11] networks, where conventional metrics and kernels may not be applied. Due to the above properties, the framework is far more efficient than the related graphical models, such as the dense CRF.

Our proposed architecture, namely the spatial propagation network (SPN), contains a deep CNN that learns the entities of the affinity matrix and a spatial linear propagation module, which propagates information in an image using the learned affinity values. Images or general 2D matrices are input into the module, and propagated under the guidance of the learned affinity values. All modules are differentiable and jointly trained using the stochastic gradient descent (SGD) method. The spatial linear propagation module is computationally efficient for inference due to the linear time complexity of its recurrent architecture.

2 Related Work

Numerous methods explicitly design affinity matrices for image filtering [29, 10], colorization [15], matting [16] and image segmentation [14], based on the characteristics of the problem. Other methods, such as total variation (TV) [23] and learning to diffuse [18], improve the modeling of pairwise relationships by utilizing different objectives, or by incorporating more priors into diffusion partial differential equations (PDEs). However, due to the lack of an effective learning strategy, it is still challenging to produce learning-based affinity for complex visual analysis problems. Recently, Maire et al. [22] trained a deep CNN to directly predict the entities of an affinity matrix, which demonstrated good performance on image segmentation. However, since the affinity is followed by a solver of spectral embedding as an independent part, it is not directly supervised for the classification/prediction task. Bertasius et al. [2] introduced a random walk network that optimizes the objectives of pixel-wise affinity for semantic segmentation. Differently, their affinity matrix is additionally supervised by ground-truth sparse pixel similarities, which limits the potential connections between pixels. On the other hand, many graphical model-based methods have successfully improved the performance of image segmentation. In the deep learning framework, conditional random fields (CRFs) with efficient mean field inference are frequently used [14, 36, 17, 5, 24, 1] to model the pairwise relations in the semantic labeling space. Some methods use the CRF as a post-processing module [5], while others integrate it as a jointly-trained part [36, 17, 24, 1].
While both methods describe densely connected pairwise relationships, dense CRFs rely on designed kernels, whereas our method directly learns all pairwise links. Since in this paper the SPN is trained as a universal segmentation refinement module, we specifically compare it with one of the methods [5] that relies on the dense CRF [14] as a post-processing strategy. Our architecture is also related to the multi-dimensional RNN or LSTM [30, 3, 8]. However, both the standard RNN and the LSTM contain multiple non-linear units and thus do not fit into our proposed affinity framework.

3 Proposed Approach

In this work, we construct a spatial propagation network that can transform a two-dimensional (2D) map (e.g., a coarse image segmentation) into a new one with desired properties (e.g., a refined segmentation). With spatially varying parameters that support the propagation process, we show theoretically in Section 3.1 that this module is equivalent to the standard anisotropic diffusion process [32, 18]. We prove that the transformation of maps is controlled by a Laplacian matrix that is constituted by the parameters of the spatial propagation module. Since the propagation module is differentiable, its parameters can be learned by any type of neural network (e.g., a typical deep CNN) that is connected to this module, through joint training. We introduce the spatial propagation network in Section 3.2, and specifically analyze the properties of different types of connections within its framework for learning the affinity matrix.

3.1 Linear Propagation as Spatial Diffusion

We apply a linear transformation by means of the spatial propagation network, where a matrix is scanned row/column-wise in four fixed directions: left-to-right, top-to-bottom, and vice versa. This strategy is widely used in [8, 30, 19, 4]. We take the left-to-right direction as an example for the following discussion. Other directions are processed independently in the same manner.

We denote X and H as two 2D maps of size $n \times n$, with exactly the same dimensions as the matrix before and after spatial propagation, where $x_t$ and $h_t$, respectively, represent their $t$th columns with $n \times 1$ elements each. We linearly propagate information from left to right between adjacent columns using an $n \times n$ linear transformation matrix $w_t$ as:

$h_t = (I - d_t)\, x_t + w_t\, h_{t-1}, \quad t \in [2, n] \quad (1)$

where $I$ is the $n \times n$ identity matrix, the initial condition is $h_1 = x_1$, and $d_t$ is a diagonal matrix whose $i$th element is the sum of all the elements of the $i$th row of $w_t$ except $w_t(i,i)$:

$d_t(i, i) = \sum_{j=1,\, j \neq i}^{n} w_t(i, j). \quad (2)$

To propagate across the entire image, the matrix H, where $\{h_t \in H,\ t \in [1, n]\}$, is updated in a column-wise manner recursively. For each column, $h_t$ is a linear, weighted combination of the previous column $h_{t-1}$ and the corresponding column $x_t$ in X. When the recursive scanning is finished, the updated 2D matrix H can be expressed with an expanded formulation of Eq. (1):

$H_v = \begin{pmatrix} I & 0 & 0 & \cdots & 0 \\ w_2 & \lambda_2 & 0 & \cdots & 0 \\ w_3 w_2 & w_3 \lambda_2 & \lambda_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \cdots & \cdots & \cdots & \cdots & \lambda_n \end{pmatrix} X_v = G X_v, \quad (3)$

where $G$ is a lower triangular, $N \times N$ ($N = n^2$) transformation matrix, which relates X and H. $H_v$ and $X_v$ are vectorized versions of H and X, respectively, with dimension $N \times 1$. Specifically, they are created by concatenating $h_t$ and $x_t$ along the same, single dimension, i.e., $H_v = [h_1^T, ..., h_n^T]^T$ and $X_v = [x_1^T, ..., x_n^T]^T$. All the parameters $\{\lambda_t, w_t, d_t, I\}$, $t \in [2, n]$, are $n \times n$ sub-matrices, where $\lambda_t = I - d_t$.

In the following section, we validate that Eq. (3) can be expressed as a spatial anisotropic diffusion process, with the corresponding propagation affinity matrix constituted by all $w_t$ for $t \in [2, n]$.

Theorem 1. The summation of the elements in each row of G equals one.

Since G contains $n \times n$ sub-matrices, each representing the transformation between the corresponding columns of H and X, we denote all the weights used to compute $h_t$ as the $t$th block-row $G_t$. On setting $\lambda_1 = I$, the $k$th constituent $n \times n$ sub-matrix of $G_t$ is:

$G_{tk} = \begin{cases} \left( \prod_{\tau = k+1}^{t} w_\tau \right) \lambda_k, & k \in [1, t-1] \\ \lambda_k, & k = t \end{cases} \quad (4)$

To prove that the summation of any row in G equals one, we instead prove that for all $t \in [1, n]$, each row of $G_t$ has a summation of one.

Proof. Denoting $E = [1, 1, ..., 1]^T$ as an $n \times 1$ vector, we need to prove that $G_t [1, ..., 1]^T_{N \times 1} = E$, or equivalently $\sum_{k=1}^{t} G_{tk} E = E$, because G is a lower triangular matrix. In the following part, we first prove by mathematical induction that when $m \in [1, t-1]$, we have $\sum_{k=1}^{m} G_{tk} E = \prod_{\tau = m+1}^{t} w_\tau E$.

Initial step. When $m = 1$, $\sum_{k=1}^{m} G_{tk} E = G_{t1} E = \prod_{\tau=2}^{t} w_\tau E$, which satisfies the assertion.
In the following section, we validate that Eq. (3) can be expressed as a spatial anisotropic diffusion process, with the corresponding propagation affinity matrix constituted by all wt for t ? [2, n]. Theorem 1. The summation of elements in each row of G equals to one. Since G contains n ? n sub-matrices, each representing the transformation between the corresponding columns of H and X, we denote all the weights used to compute ht as the tth block-row Gt . On setting ?1 = I, the k th constituent n ? n sub-matrix of Gt is: Gtk = ? t Y ? ? ? w? ?k , ? =k+1 ? ? ? ?k , k ? [1, t ? 1] (4) k=t To prove that the summation of any row in G equals to one, we instead prove that for ?t ? [1, n], each row of Gt has the summation of one. T T Proof. Denoting E = [1, 1, ..., 1] as an n ? 1 vector, we need to prove that Gt [1, ..., 1]N ?1 = E. Pt Equivalently k=1 Gtk E = E, because G is a lower triangular matrix. In the following part, we first Pm Qt prove that when m ? [1, t ? 1], we have k=1 Gtk E = ? =m+1 wt E by mathematical induction . Initial step. When m = 1, Pm k=1 Gtk E = Gt1 E = 3 Qt ? =2 w? E, which satisfies the assertion. Figure 1: Different propagation ranges for (a) one-way connections; and (b) three-way connections. Each pixel (node) receives information from a single line with one-way connection, and from a 2 dimensional plane with three-way connection. Integration of four directions w.r.t. (a) results in global, but sparsely connected pairwise relations, while (b) formulates global and densely connected pairwise relations. Inductive step. Assume there is an n ? [1, t ? 1], such that prove the formula is true for n + 1 ? [1, t ? 1]. n+1 X k=1 Gtk E = n X Gtk E + Gt(n+1) E = t Y w? E + ? =n+1 k=1 t Y ? =n+2 Pn k=1 w? = Gtk E = t Y Qt ? =n+1 wt E, we must w? [(wn+1 + I ? dn+1 ) E] . ? =n+2 (5) According to the formulation of the diagonal matrix in Eq. (2) we have Qt ? =n+2 w? E. Therefore, the assertion is satisfied. When m = t, we have: t X k=1 Gtk E = t?1 X Gtk E + Gtt E = t Y Pn+1 k=1 Gtk E = w? E + ?t E = w? E + (I ? dt ) E = E, (6) ? =t k=1 which yields the equivalence of Theorem 1. Theorem 2. We define the evolution of a 2D matrix as a time sequence {U }T , where U (T = 1) = U1 is the initial state. When the transformation between any two adjacent states follows Eq. (3), the sequence is a diffusion process expressed with a partial differential equation (PDE): ?T U = ?LU (7) where L = D ? A is the Laplacian matrix, D is the degree matrix composed of dt in Eq. (2), and A is the affinity matrix composed by the off-diagonal elements of G. Proof. We substitute the X and H as two consecutive matrices UT +1 and UT in (3). According to Theorem 1, we ensure that the sum of each row I ? G is 0 that can formulate a standard Laplacian matrix. Since G has the diagonal sub-matrix I ? dt , we can rewrite (3) as: UT +1 = (I ? D + A) UT = (I ? L) UT (8) where G = (I ? D + A), D is an N ? N diagonal matrix containing all the dt and A is the offdiagonal part of G. It then yields UT +1 ? UT = ?LUT , a discrete formulation of (7) with the time discretization interval as one. Theorem 2 shows the essential property of the row/column-wise linear propagation in Eq. (1): it is a standard diffusion process where L defines the spatial propagation and A, the affinity matrix, describes the similarities between any two points. Therefore, learning the image affinity matrix A in Eq. (8) is equivalent to learning a group of transformation matrices wt in Eq. (1). 
In the following section, we show how to build the spatial propagation (1) as a differentiable module that can be inserted into a standard feed-forward neural network, so that the affinity matrix A can be learned in a data-driven manner.

3.2 Learning Data-Driven Affinity

Since the spatial propagation in Eq. (1) is differentiable, the transformation matrix can easily be configured as a row/column-wise fully-connected layer. However, we note that since the affinity matrix indicates the pairwise similarities of a specific input, it should also be conditioned on the content of this input (i.e., different input images should have different affinity matrices). Instead of setting the $w_t$ matrices as fixed parameters of the module, we design them as the outputs of a deep CNN, which can be directly conditioned on an input image. One simple way is to set the output of the deep CNN to the same size as the input matrix. When the input has c channels (e.g., an RGB image has c = 3), the output needs $n \times c \times 4$ channels (there are n connections from the previous row/column per pixel per channel, for four different directions). Obviously, this is too many (e.g., a $128 \times 128 \times 16$ feature map needs an output of $128 \times 128 \times 8192$) to be implemented in a real-world system. Instead of using full connections between adjacent rows/columns, we show that certain local connections, corresponding to a sparse row/column-wise transform matrix, can also formulate a densely connected affinity. Specifically, we introduce (a) the one-way connection and (b) the three-way connection as two different ways to implement Eq. (1).

One-way connection. The one-way connection enables every pixel to connect to only one pixel from the previous row/column (see Figure 1(a)). It is equivalent to one-dimensional (1D) linear recurrent propagation that scans each row/column independently as a 1D sequence. Following Eq. (1), we denote $x_{k,t}$ and $h_{k,t}$ as the $k$th pixels in the $t$th column, where the left-to-right propagation for the one-way connection is:

$h_{k,t} = (1 - p_{k,t}) \cdot x_{k,t} + p_{k,t} \cdot h_{k,t-1}, \quad (9)$

where $p$ is a scalar weight indicating the propagation strength between the pixels at $\{k, t-1\}$ and $\{k, t\}$. Equivalently, $w_t$ in Eq. (1) is a diagonal matrix, with the elements constituted by $p_{k,t}$, $k \in [1, n]$. The one-way connection is a direct extension of sequential recurrent propagation [8, 31, 13]. The exact formulation of Eq. (9) has been used previously for semantic segmentation [4] and for learning low-level vision filters [19]. In [4], Chen et al. explain it as a domain transform, where for semantic segmentation, p corresponds to the object edges. Liu et al. [19] explain it by arbitrary-order recursive filters, where p corresponds to more general image properties (e.g., low-level image/color edges, missing pixels, etc.). Both of these formulations can be explained as the same linear propagation framework of Eq. (1) with one-way connections.

Three-way connection. We propose a novel three-way connection in this paper. It enables each pixel to connect to three pixels from the previous row/column, i.e., the left-top, middle and bottom pixels from the previous column for the left-to-right propagation direction (see Figure 1(b)). With the same notations, we denote N as the set of these three pixels. Then the propagation for the three-way connection is:

$h_{k,t} = \left(1 - \sum_{k' \in N} p_{k',t}\right) x_{k,t} + \sum_{k' \in N} p_{k',t}\, h_{k',t-1} \quad (10)$

Equivalently, $w_t$ forms a tridiagonal matrix, in which $p_{k',t}$, $k' \in N$, constitute the three non-zero elements of each row/column.
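A minimal NumPy sketch of one left-to-right sweep with the three-way connection of Eq. (10); the input map and the weights are placeholders, and boundary pixels simply use whichever of the three neighbors exist:

```python
import numpy as np

def three_way_sweep(x, p):
    """Left-to-right three-way propagation, Eq. (10).
    x: (n, n) input map; p: (n, n, 3) weights for the (k-1, k, k+1)
    neighbors in the previous column (placeholder values in this sketch)."""
    n = x.shape[0]
    h = x.astype(float).copy()                  # column 0: h_1 = x_1
    for t in range(1, n):
        for k in range(n):
            acc, wsum = 0.0, 0.0
            for idx, kk in enumerate((k - 1, k, k + 1)):
                if 0 <= kk < n:                 # skip neighbors outside the map
                    acc += p[k, t, idx] * h[kk, t - 1]
                    wsum += p[k, t, idx]
            h[k, t] = (1.0 - wsum) * x[k, t] + acc
    return h

rng = np.random.default_rng(0)
x = rng.random((8, 8))
p = rng.uniform(-0.4, 0.4, (8, 8, 3))           # raw weights; see Theorem 3 for stability
out = three_way_sweep(x, p)
```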
Relations to the affinity matrix. As introduced in Theorem 2, the affinity matrix A under linear propagation is composed of the off-diagonal elements of G in Eq. (3). The one-way connection formulates a sparse affinity matrix: each sub-matrix of A has nonzero elements only along its diagonal, and the multiplication of several individual diagonal matrices again results in a diagonal matrix. On the other hand, the three-way connection, also with a sparse $w_t$, can form a relatively dense A through the multiplication of several different tridiagonal matrices. This means pixels can be densely and globally associated, simply by increasing the number of connections of each pixel during spatial propagation from one to three. As shown in Figures 1(a) and 1(b), the propagation of one-way connections is restricted to a single row, while the three-way connections can expand the region to a triangular 2D plane with respect to each direction. The summation of the four directions results in dense connections of all pixels to each other (see Figure 1(b)).

Stability of linear propagation. Model stability is of critical importance for designing linear systems. In the context of spatial propagation (Eq. (1)), it refers to restricting the responses or errors that flow in the module from going to infinity, and to preventing the network from encountering vanishing gradients in the backpropagation process [37]. Specifically, the norm of the temporal Jacobian $\partial h_t / \partial h_{t-1}$ should be equal to or less than one. In our case, this is equivalent to regularizing each transformation matrix $w_t$ with its norm satisfying

$\left\| \partial h_t / \partial h_{t-1} \right\| = \| w_t \| \le \lambda_{max}, \quad (11)$

where $\lambda_{max}$ denotes the largest singular value of $w_t$. The condition $\lambda_{max} \le 1$ provides a sufficient condition for stability.

Figure 2: We illustrate the general architecture of the SPN using a three-way connection for segmentation refinement. The network, divided by the black dashed line, contains a propagation module (upper) and a guidance network (lower). The guidance network outputs all entities that constitute the four affinity matrices, where each sub-matrix $w_t$ is a tridiagonal matrix. The propagation module, guided by the affinity matrices, deforms the input mask to a desired shape. All modules are differentiable and jointly learned via SGD.

Theorem 3. Let $\{p_{k',t},\ k' \in N\}$ be the weights in $w_t$. The model can be stabilized if $\sum_{k' \in N} |p_{k',t}| \le 1$. See the supplementary material for the proof.

Theorem 3 shows that the stability of a linear propagation model can be maintained by regularizing all the weights of each pixel in the hidden layer H so that the summation of their absolute values is at most one. For the one-way connection, Chen et al. [4] limited each scalar output p to be within (0, 1). Liu et al. [19] extended the range to (-1, 1), where the negative weights showed preferable effects for learning image enhancers. This indicates that the affinity matrix is not necessarily restricted to be positive/semi-positive definite (e.g., this setting is also applied in [16]). For the three-way connection, we simply regularize the three weights (the outputs of a deep CNN) according to Theorem 3, without restricting them to be positive/semi-positive definite.

4 Implementation

We specify two separate branches: (a) a deep CNN, namely the guidance network, that outputs all entities of the transformation matrix, and (b) a linear propagation module that applies the transformation to the input map (see Figure 2).
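Putting the pieces together, the following sketch outlines the propagation branch (reusing three_way_sweep from the previous sketch; the raw guidance weights are stand-ins for the CNN outputs, and the actual system is a CUDA-parallel module rather than Python loops). The weights are rescaled so that the absolute values of the three connections sum to at most one per pixel, per Theorem 3, and the four directional sweeps are fused by node-wise max-pooling:

```python
import numpy as np

def stabilize(p_raw):
    # Theorem 3: rescale the three weights so that sum_k |p_k| <= 1 at every pixel.
    s = np.abs(p_raw).sum(axis=-1, keepdims=True)
    return p_raw / np.maximum(s, 1.0)

def spn_forward(x, p_raw):
    """x: (n, n) coarse map; p_raw: (4, n, n, 3) raw guidance-network outputs,
    one weight triple per pixel for each of the four scan directions.
    In a real system the weight maps must be oriented consistently with each scan."""
    results = []
    for d in range(4):
        p = stabilize(p_raw[d])
        # Realize the four directions by flipping/transposing the map so that
        # every sweep runs left-to-right, then undoing the transform.
        if d == 0:   xs, undo = x, lambda h: h                        # left -> right
        elif d == 1: xs, undo = x[:, ::-1], lambda h: h[:, ::-1]      # right -> left
        elif d == 2: xs, undo = x.T, lambda h: h.T                    # top -> bottom
        else:        xs, undo = x.T[:, ::-1], lambda h: h[:, ::-1].T  # bottom -> top
        results.append(undo(three_way_sweep(xs, p)))
    return np.max(np.stack(results), axis=0)  # node-wise max-pooling fusion

rng = np.random.default_rng(2)
refined = spn_forward(rng.random((8, 8)), rng.uniform(-0.4, 0.4, (4, 8, 8, 3)))
```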
The propagation module receives an input map and outputs a refined or transformed result. It also takes the weights learned by the deep CNN guidance network as a second input. The structure of the guidance network can be any regular CNN designed for the task at hand. Examples of this network are described in Section 5. It takes as input any 2D matrix that can help with learning the affinity matrix (typically an RGB image), and outputs all the weights that constitute the transformation matrices $w_t$. Suppose that a map of size $n \times n \times c$ is input into the propagation module; the guidance network then needs to output a weight map with dimensions $n \times n \times c \times (3 \times 4)$, i.e., each pixel in the input map is paired with 3 scalar weights per direction, for 4 directions in total. The propagation module contains 4 independent hidden layers for the different directions, where each layer combines the input map with its respective weight map using Eq. (10). All submodules are differentiable and jointly trained using stochastic gradient descent (SGD). We use node-wise max-pooling [19] to integrate the hidden layers and to obtain the final propagation result.

We implement the network with a modified version of CAFFE [12]. We employ a parallel version of the SPN implemented in CUDA for propagating each row/column to the next one. We use the SGD optimizer, and set the base learning rate to 0.0001. In general, we train the networks for the HELEN and VOC segmentation tasks for about 40 and 100 epochs, respectively. The inference time (we do not use cuDNN) of the SPN on HELEN and Pascal VOC is about 7 ms and 84 ms, respectively, for an image of size $512 \times 512$ pixels. In comparison, the dense CRF (CPU only) takes about 1 s [14], 3.2 s [5] and 4.4 s [36] with different publicly available implementations. We note that the majority of the time for the SPN is spent in the guidance network, which can be accelerated by utilizing various existing network compression strategies, applying smaller models, or sharing weights with the segmentation model if they are trained jointly. During inference, a single $64 \times 64 \times 32$ SPN hidden layer takes 1.3 ms with the same computational settings.

Figure 3: Results of face parsing on the HELEN dataset with detailed regions cropped from the high-resolution images (columns: original, CNN-base, CNN-Highres, one-way SPN, three-way SPN, ground truth). The images are all in high resolution and can be viewed by zooming in.

5 Experimental Results

The SPN can be trained jointly with any segmentation CNN model by being inserted on top of the last layer that outputs probability maps, or trained separately as a segmentation refinement model. In this paper we choose the second option. Given a coarse image segmentation mask as the input to the spatial propagation module, we show that the SPN can produce higher-quality masks with significantly refined details at object boundaries. Many models [21, 5] generate low-resolution segmentation masks with coarse boundary shapes to seek a balance between computational efficiency and semantic accuracy. The majority of work [21, 5, 36] chooses to first produce an output probability map at 8× smaller resolution, and then refine the result using either post-processing [5] or jointly trained modules [36]. Hence, producing high-quality segmentation results with low computational complexity is a non-trivial task.
In this work, we train only one SPN model for a specific task, and treat it as a universal refinement tool for the different publicly available CNN models for that task. We carry out the refinement of segmentation masks on two tasks: (a) generating high-resolution segmentations on the HELEN face parsing dataset [27]; and (b) refining generic object segmentation maps generated by pretrained models (e.g., VGG-based models [21, 5]). For the HELEN dataset, we directly use low-resolution RGB face images to train a baseline parser, which successfully encapsulates the global semantic information. The SPN is then trained on top of the coarse segmentations to generate high-resolution outputs. For the Pascal VOC dataset, we train the SPN on top of the coarse segmentation results generated by the FCN-8s [21], and directly generalize it to any other pretrained model.

General network settings. For both tasks, we train the SPN as a patch refinement model on top of the coarse map with basic semantic information. It is trained with smaller patches cropped from the original high-resolution images, their corresponding coarse segmentation maps produced by a baseline segmentor, and the corresponding high-resolution ground-truth segmentation masks for supervision. All coarse segmentation maps are obtained by applying a baseline (for HELEN) or pre-trained (for Pascal VOC) image segmentation CNN to their standard training splits [6, 5]. Since the baseline HELEN parser produces low-resolution segmentation results, we upsample them using a bi-linear filter to the same size as the desired higher output resolution. We fix the size of our input patches to $128 \times 128$, use the softmax loss, and use the SGD solver for all the experiments. During training, the patches are sampled from image regions that contain more than one ground-truth segmentation label (e.g., a patch with all pixels labeled as "background" will not be sampled). During testing, for the VOC dataset, we restrict the classes in the refined results to those contained in the corresponding coarse input. More specific settings are given in the supplementary material.

HELEN Dataset. The HELEN dataset provides high-resolution, photography-style face images (2330 in total) with high-quality, manually labeled facial components including eyes, eyebrows, nose, lips, and jawline, which makes high-resolution segmentation tasks applicable. All previous work utilizes low-resolution parsing output as the final result for evaluation. Although many methods [27, 33, 20] achieve preferable performance, their results cannot be directly adopted by high-quality facial image editing applications. We use the same settings as the state-of-the-art work [20]. We apply a similarity transformation, according to the results of 5-keypoint detection [35], to align all face images to the center. Keeping the original resolution, we then crop or pad them to the size of $1024 \times 1024$.
Table 1: Quantitative evaluation results on the HELEN dataset. We denote the upper and lower lips as "U-lip" and "L-lip", and the overall mouth part as "mouth", respectively. The label definitions follow [20].

Method | skin | brows | eyes | nose | mouth | U-lip | L-lip | in-mouth | overall
Liu et al. [20] | 90.87 | 69.89 | 74.74 | 90.23 | 82.07 | 59.22 | 66.30 | 81.70 | 83.68
baseline-CNN | 90.53 | 70.09 | 74.86 | 89.16 | 83.83 | 55.61 | 64.88 | 71.72 | 82.89
Highres-CNN | 91.78 | 71.84 | 74.46 | 89.42 | 81.83 | 68.15 | 72.00 | 71.95 | 83.21
SPN (one-way) | 92.26 | 75.05 | 85.44 | 91.51 | 88.13 | 77.61 | 70.81 | 79.95 | 87.09
SPN (three-way) | 93.10 | 78.53 | 87.71 | 92.62 | 91.08 | 80.17 | 71.63 | 83.13 | 89.30

We first train a baseline CNN with a symmetric U-net structure, where both the input image and the output map are 8× smaller than the original image. The detailed settings are in the supplementary material. We apply the multi-objective loss of [20] to improve the accuracy along the boundaries. We note that the symmetric structure is powerful: the results we obtain with the baseline CNN are comparable (see Table 1) to those of [20], who apply a much larger model (38 MB vs. 12 MB). We then train an SPN on top of the baseline CNN results on the training set, with patches sampled from the high-resolution input images and the coarse segmentation masks. For the guidance network, we use the same structure as that of the baseline segmentation network, except that its upsampling part ends at a resolution of $64 \times 64$, and its output layer has $32 \times 12 = 384$ channels. In addition, we train another face parsing CNN with $1024 \times 1024$ sized inputs and outputs (CNN-Highres) for better comparison. It has three more sub-modules at each end of the baseline network, all configured with 16 channels, to process higher-resolution images.

We show quantitative and qualitative results in Table 1 and Figure 3, respectively. We compare the one/three-way connection SPNs with the baseline, the CNN-Highres, and the most relevant state-of-the-art technique for face parsing [20]. Note that the results of the baseline and of [20]¹ are bi-linearly upsampled to $1024 \times 1024$ before evaluation. Overall, both SPNs outperform the other techniques by a significant margin of over 6 intersection-over-union (IoU) points, especially for the smaller facial components (e.g., eyes and lips), for which a segmentation network operating at low resolution performs poorly. We note that the one-way connection-based SPN is quite successful on relatively simple tasks such as the HELEN dataset, but fails on more complex tasks, as revealed by the results on the Pascal VOC dataset in the following section.

Pascal VOC Dataset. The PASCAL VOC 2012 segmentation benchmark [6] involves 20 foreground object classes and one background class. The original dataset contains 1464 training, 1449 validation and 1456 testing images, with pixel-level annotations. The performance is mainly measured in terms of pixel IoU averaged across the 21 classes. We train our SPNs on the train split with the coarse segmentation results produced by the FCN-8s model [21]. That model is fine-tuned from a pre-trained VGG-16 network, where different levels of features are upsampled and concatenated to obtain the final, low-resolution segmentation results (8× smaller than the original image size). The guidance network of the SPN also fine-tunes the VGG-16 structure, from the beginning up to the pool5 layer, as the downsampling part. Similar to the settings for the HELEN dataset, the upsampling part has a symmetric structure with skip links up to feature dimensions of $64 \times 64$. The spatial propagation module has the same configuration as that of the SPN we employed for the HELEN dataset. The model is applied to the coarse segmentation maps of the validation and test splits generated by any image segmentation algorithm, without fine-tuning.
We test the refinement SPN on three base models: (a) FCN-8s [21]; (b) the atrous spatial pyramid pooling (ASPP-L) network fine-tuned from VGG-16, denoted Deeplab VGG; and (c) the ASPP-L multi-scale network fine-tuned from ResNet-101 [11] (pre-trained on the COCO dataset), denoted Deeplab ResNet-101. Among them, (b) and (c) are the two basic models from [5], which are then refined with a dense CRF [14] conditioned on the original image.

Table 2: Quantitative comparison (mean IoU) with dense CRF-based refinement [5] on the Deeplab pre-trained models.

mIoU | CNN | +dense CRF | +SPN
VGG | 68.97 | 71.57 | 73.12
ResNet | 76.40 | 77.69 | 79.76

Table 3 shows that through the three-way SPN, the accuracy of segmentation is significantly improved over the coarse segmentation results for all three baseline models. The SPN has a strong capability of generalization and can successfully refine coarse maps from different pre-trained models by a large margin.

¹ The original output (also used for evaluation) has size 250 × 250.
Table 4: Quantitative evaluation results on the Pascal VOC dataset. We refine the base models proposed with dilated convolutions [34]. ?+? denotes additions on top of the ?Front end? model. Model Front end +3 way +Context +Context+3 way overall AC 93.03 93.89 93.44 94.35 mean AC 80.31 83.47 80.97 83.98 mean IoU 69.75 73.14 71.86 75.28 9 Acknowledgement. This work is supported in part by the NSF CAREER Grant #1149783, gifts from Adobe and NVIDIA. References [1] A. Arnab, S. Jayasumana, S. Zheng, and P. H. Torr. Higher order conditional random fields in deep neural networks. In ECCV. Springer, 2016. [2] G. Bertasius, L. Torresani, S. X. Yu, and J. Shi. Convolutional random walk networks for semantic image segmentation. arXiv preprint arXiv:1605.07681, 2016. [3] W. Byeon, T. M. Breuel, F. Raue, and M. Liwicki. Scene labeling with lstm recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015. [4] L. Chen, J. T. Barron, G. Papandreou, K. Murphy, and A. L. Yuille. Semantic image segmentation with task-specific edge detection using cnns and a discriminatively trained domain transform. arXiv preprint arXiv:1511.03328, 2015. [5] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. CoRR, abs/1606.00915, 2016. [6] M. Everingham, S. A. Eslami, L. V. Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98?136, 2015. [7] S. Ger?gorin. Uber die abgrenzung der eigenwerte einer matrix. Bulletin de l?Acad?mie des Sciences de l?URSS. Classe des sciences math?matiques et na, 1931. [8] A. Graves, S. Fern?ndez, and J. Schmidhuber. Multi-dimensional recurrent neural networks. In ICANN, 549?558, 2007. [9] K. He, J. Sun, and X. Tang. Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence, 33(12):2341?2353, 2011. [10] K. He, J. Sun, and X. Tang. Guided image filtering. IEEE transactions on pattern analysis and machine intelligence, 35(6):1397?1409, 2013. [11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. [12] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. [13] N. Kalchbrenner, I. Danihelka, and A. Graves. arXiv:1507.01526, 2015. Grid long short-term memory. arXiv preprint [14] P. Kr?henb?hl and V. Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in neural information processing systems, pages 109?117, 2011. [15] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. ACM Transactions on Graphics (ToG), 23(3):689?694, 2004. [16] A. Levin, D. Lischinski, and Y. Weiss. A closed-form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):228?242, 2008. [17] G. Lin, C. Shen, I. D. Reid, and A. van den Hengel. Deeply learning the messages in message passing inference. arXiv preprint arXiv:1506.02108, 2015. [18] R. Liu, G. Zhong, J. Cao, Z. Lin, S. Shan, and Z. Luo. Learning to diffuse: A new perspective to design pdes for visual analysis. IEEE transactions on pattern analysis and machine intelligence, 38(12):2457?2471, 2016. [19] S. Liu, J. Pan, and M.-H. 
Yang. Learning recursive filters for low-level vision via a hybrid neural network. In European Conference on Computer Vision, 2016. [20] S. Liu, J. Yang, C. Huang, and M.-H. Yang. Multi-objective convolutional learning for face labeling. In CVPR, 2015. 10 [21] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431?3440, 2015. [22] M. Maire, T. Narihira, and S. X. Yu. Affinity CNN: learning pixel-centric pairwise relations for figure/ground embedding. CoRR, abs/1512.02767, 2015. [23] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259?268, 1992. [24] A. G. Schwing and R. Urtasun. Fully connected deep structured networks. arXiv preprint arXiv:1503.02351, 2015. [25] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence, 22(8):888?905, 2000. [26] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. [27] B. M. Smith, L. Zhang, J. Brandt, Z. Lin, and J. Yang. Exemplar-based face parsing. In CVPR, 2013. [28] J. A. Suykens, J. D. Brabanter, L. Lukas, and J. Vandewalle. Weighted least squares support vector machines: robustness and sparse approximation. Neurocomputing, 48(1):85?105, 2002. [29] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In ICCV, 1998. [30] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016. [31] F. Visin, K. Kastner, K. Cho, M. Matteucci, A. Courville, and Y. Bengio. Renet: A recurrent neural network based alternative to convolutional networks. arXiv preprint arXiv:1505.00393, 2015. [32] J. Weickert. Anisotropic diffusion in image processing, volume 1. Teubner Stuttgart, 1998. [33] T. Yamashita, T. Nakamura, H. Fukui, Y. Yamauchi, and H. Fujiyoshi. Cost-alleviative learning for deep convolutional neural network-based facial part labeling. IPSJ Transactions on Computer Vision and Applications, 7:99?103, 2015. [34] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. arXiv:1511.07122, 2015. arXiv preprint [35] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multi-task learning. In ECCV, 2014. [36] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr. Conditional random fields as recurrent neural networks. In IEEE International Conference on Computer Vision, 2015. [37] J. G. Zilly, R. K. Srivastava, J. Koutn?k, and J. Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016. 11
Linear regression without correspondence

Daniel Hsu, Columbia University, New York, NY, [email protected]
Kevin Shi, Columbia University, New York, NY, [email protected]
Xiaorui Sun, Microsoft Research, Redmond, WA, [email protected]

Abstract

This article considers algorithmic and statistical aspects of linear regression when the correspondence between the covariates and the responses is unknown. First, a fully polynomial-time approximation scheme is given for the natural least squares optimization problem in any constant dimension. Next, in an average-case and noise-free setting where the responses exactly correspond to a linear function of i.i.d. draws from a standard multivariate normal distribution, an efficient algorithm based on lattice basis reduction is shown to exactly recover the unknown linear function in arbitrary dimension. Finally, lower bounds on the signal-to-noise ratio are established for approximate recovery of the unknown linear function by any estimator.

1 Introduction

Consider the problem of recovering an unknown vector $\bar{w} \in \mathbb{R}^d$ from noisy linear measurements when the correspondence between the measurement vectors and the measurements themselves is unknown. The measurement vectors (i.e., covariates) from $\mathbb{R}^d$ are denoted by $x_1, x_2, \ldots, x_n$; for each $i \in [n] := \{1, 2, \ldots, n\}$, the $i$-th measurement (i.e., response) $y_i$ is obtained using $x_{\bar\pi(i)}$:
$$y_i \;=\; \bar{w}^\top x_{\bar\pi(i)} + \varepsilon_i \,, \quad i \in [n] \,. \tag{1}$$
Above, $\bar\pi$ is an unknown permutation on $[n]$, and the $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$ are unknown measurement errors. This problem, which has been called unlabeled sensing [22], linear regression with an unknown permutation [18], and linear regression with shuffled labels [1], arises in many settings; see the aforementioned references for more details. In short, sensing limitations may create ambiguity in or even completely lose the ordering of measurements. The problem is also interesting because the missing correspondence makes an otherwise well-understood problem into one with very different computational and statistical properties.

Prior works. Unnikrishnan et al. [22] study conditions on the measurement vectors that permit recovery of any target vector $\bar{w}$ under noiseless measurements. They show that when the entries of the $x_i$ are drawn i.i.d. from a continuous distribution, and $n \ge 2d$, then almost surely, every vector $\bar{w} \in \mathbb{R}^d$ is uniquely determined by noiseless correspondence-free measurements as in (1). (Under noisy measurements, it is shown that $\bar{w}$ can be recovered when an appropriate signal-to-noise ratio tends to infinity.) It is also shown that $n \ge 2d$ is necessary for such a guarantee that holds for all vectors $\bar{w} \in \mathbb{R}^d$.

Pananjady et al. [18] study statistical and computational limits on recovering the unknown permutation $\bar\pi$. On the statistical front, they consider necessary and sufficient conditions on the signal-to-noise ratio $\mathrm{SNR} := \|\bar{w}\|_2^2/\sigma^2$ when the measurement errors $(\varepsilon_i)_{i=1}^n$ are i.i.d. draws from the normal distribution $\mathrm{N}(0, \sigma^2)$ and the measurement vectors $(x_i)_{i=1}^n$ are i.i.d. draws from the standard multivariate normal distribution $\mathrm{N}(0, I_d)$. Roughly speaking, exact recovery of $\bar\pi$ is possible via maximum likelihood when $\mathrm{SNR} \ge n^c$ for some absolute constant $c > 0$, and approximate recovery is impossible for any method when $\mathrm{SNR} \le n^{c'}$ for some other absolute constant $c' > 0$.
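To make the measurement model concrete, the following minimal sketch (our illustration, not part of the paper) samples data from (1) with Gaussian covariates and noise; the ground truth $\bar{w}$ and $\bar\pi$ are returned only so that a recovery procedure can be evaluated.

```python
import numpy as np

def generate_shuffled_regression(n, d, sigma, seed=None):
    """Sample (X, y) from model (1): y_i = w_bar . x_{pi_bar(i)} + eps_i."""
    rng = np.random.default_rng(seed)
    w_bar = rng.standard_normal(d)           # unknown target vector w_bar
    X = rng.standard_normal((n, d))          # covariates x_i ~ N(0, I_d)
    pi_bar = rng.permutation(n)              # unknown correspondence pi_bar
    eps = sigma * rng.standard_normal(n)     # errors eps_i ~ N(0, sigma^2)
    y = X[pi_bar] @ w_bar + eps              # row i of X[pi_bar] is x_{pi_bar(i)}
    return X, y, w_bar, pi_bar               # truth returned for evaluation only

X, y, w_bar, pi_bar = generate_shuffled_regression(n=100, d=5, sigma=0.1, seed=0)
```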
On the computational front, they show that the least squares problem (which is equivalent to the maximum likelihood problem)
$$\min_{w,\pi} \; \sum_{i=1}^n \left( w^\top x_{\pi(i)} - y_i \right)^2 \tag{2}$$
given arbitrary $x_1, x_2, \ldots, x_n \in \mathbb{R}^d$ and $y_1, y_2, \ldots, y_n \in \mathbb{R}$ is NP-hard when $d = \Omega(n)$,¹ but admits a polynomial-time algorithm (in fact, an $O(n \log n)$-time algorithm based on sorting) when $d = 1$.

Abid et al. [1] observe that the maximum likelihood estimator can be inconsistent for estimating $\bar{w}$ in certain settings (including the normal setting of Pananjady et al. [18], with SNR fixed but $n \to \infty$). One of the alternative estimators they suggest is consistent under additional assumptions in dimension $d = 1$. Elhami et al. [8] give a $O(d\,n^{d+1})$-time algorithm that, in dimension $d = 2$, is guaranteed to approximately recover $\bar{w}$ when the measurement vectors are chosen in a very particular way from the unit circle and the measurement errors are uniformly bounded.

Contributions. We make progress on both computational and statistical aspects of the problem.

1. We give an approximation algorithm for the least squares problem from (2) that, for any given $(x_i)_{i=1}^n$, $(y_i)_{i=1}^n$, and $\epsilon \in (0, 1)$, returns a solution with objective value at most $1 + \epsilon$ times that of the minimum in time $(n/\epsilon)^{O(d)}$. This is a fully polynomial-time approximation scheme for any constant dimension.

2. We give an algorithm that exactly recovers $\bar{w}$ in the measurement model from (1), under the assumption that there are no measurement errors and the covariates $(x_i)_{i=1}^n$ are i.i.d. draws from $\mathrm{N}(0, I_d)$. The algorithm, which is based on a reduction to a lattice problem and employs the lattice basis reduction algorithm of Lenstra et al. [16], runs in $\mathrm{poly}(n, d)$ time when the covariate vectors $(x_i)_{i=1}^n$ and target vector $\bar{w}$ are appropriately quantized. This result may also be regarded as a for-each-type guarantee for exactly recovering a fixed vector $\bar{w}$, which complements the for-all-type results of Unnikrishnan et al. [22] concerning the number of measurement vectors needed for recovering all possible vectors.

3. We show that in the measurement model from (1), where the measurement errors are i.i.d. draws from $\mathrm{N}(0, \sigma^2)$ and the covariate vectors are i.i.d. draws from $\mathrm{N}(0, I_d)$, no algorithm can approximately recover $\bar{w}$ unless $\mathrm{SNR} \ge C \min\{1, d/\log\log(n)\}$ for some absolute constant $C > 0$. We also show that when the covariate vectors are i.i.d. draws from the uniform distribution on $[-1/2, 1/2]^d$, then approximate recovery is impossible unless $\mathrm{SNR} \ge C'$ for some other absolute constant $C' > 0$.

Our algorithms are not meant for practical deployment, but instead are intended to shed light on the computational difficulty of the least squares problem and the average-case recovery problem. Indeed, note that a naïve brute-force search over permutations requires time $\Omega(n!) = n^{\Omega(n)}$, and the only other previous algorithms (already discussed above) were restricted to $d = 1$ [18] or only had some form of approximation guarantee when $d = 2$ [8]. We are not aware of previous algorithms for the average-case problem in general dimension $d$.²

Our lower bounds on SNR stand in contrast to what is achievable in the classical linear regression model (where the covariate/response correspondence is known): in that model, the SNR requirement for approximately recovering $\bar{w}$ scales as $d/n$, and hence the problem becomes easier with $n$. The lack of correspondence thus drastically changes the difficulty of the problem.
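For a sense of scale, the naïve $\Omega(n!)$ baseline mentioned above can be written down directly: enumerate all permutations and solve an ordinary least squares problem for each. This sketch (ours, feasible only for very small $n$) is an illustration of problem (2), not an algorithm proposed in the paper.

```python
import itertools
import numpy as np

def least_squares_brute_force(X, y):
    """Solve problem (2) exactly by trying all n! correspondences (tiny n only)."""
    n = len(y)
    best_cost, best_w, best_pi = np.inf, None, None
    for perm in itertools.permutations(range(n)):
        pi = np.array(perm)
        # Ordinary least squares once the correspondence pi is fixed.
        w, *_ = np.linalg.lstsq(X[pi], y, rcond=None)
        cost = float(np.sum((X[pi] @ w - y) ** 2))
        if cost < best_cost:
            best_cost, best_w, best_pi = cost, w, pi
    return best_cost, best_w, best_pi
```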
2 Approximation algorithm for the least squares problem

In this section, we consider the least squares problem from Equation (2). The inputs are an arbitrary matrix $X = [x_1 | x_2 | \cdots | x_n]^\top \in \mathbb{R}^{n \times d}$ and an arbitrary vector $y = (y_1, y_2, \ldots, y_n)^\top \in \mathbb{R}^n$, and the goal is to find a vector $w \in \mathbb{R}^d$ and permutation matrix $\Pi \in \mathcal{P}_n$ (where $\mathcal{P}_n$ denotes the space of $n \times n$ permutation matrices³) to minimize $\|Xw - \Pi^\top y\|_2^2$. This problem is NP-hard in the case where $d = \Omega(n)$ [18] (see also Appendix A). We give an approximation scheme that, for any $\epsilon \in (0, 1)$, returns a $(1+\epsilon)$-approximation in time $(n/\epsilon)^{O(k)} + \mathrm{poly}(n, d)$, where $k := \mathrm{rank}(X) \le \min\{n, d\}$.

¹ Pananjady et al. [18] prove that PARTITION reduces to the problem of deciding if the optimal value of (2) is zero or non-zero. Note that PARTITION is weakly, but not strongly, NP-hard: it admits a pseudo-polynomial-time algorithm [10, Section 4.2]. In Appendix A, we prove that the least squares problem is strongly NP-hard by reduction from 3-PARTITION (which is strongly NP-complete [10, Section 4.2.2]).
² A recent algorithm of Pananjady et al. [19] exploits a similar average-case setting but only for a somewhat easier variant of the problem where more information about the unknown correspondence is provided.
³ Each permutation matrix $\Pi \in \mathcal{P}_n$ corresponds to a permutation $\pi$ on $[n]$; the $(i,j)$-th entry of $\Pi$ is one if $\pi(i) = j$ and is zero otherwise.

We assume without loss of generality that $X \in \mathbb{R}^{n \times k}$ and $X^\top X = I_k$. This is because we can always replace $X$ with its matrix of left singular vectors $U \in \mathbb{R}^{n \times k}$, obtained via singular value decomposition $X = U \Sigma V^\top$, where $U^\top U = V^\top V = I_k$ and $\Sigma \succ 0$ is diagonal. A solution $(w, \Pi)$ for $(U, y)$ has the same cost as the solution $(V \Sigma^{-1} w, \Pi)$ for $(X, y)$, and a solution $(w, \Pi)$ for $(X, y)$ has the same cost as the solution $(\Sigma V^\top w, \Pi)$ for $(U, y)$.

2.1 Algorithm

Our approximation algorithm, shown as Algorithm 1, uses a careful enumeration to beat the naïve brute-force running time of $\Omega(|\mathcal{P}_n|) = \Omega(n!)$. It uses as a subroutine a "Row Sampling" algorithm of Boutsidis et al. [5] (described in Appendix B), which has the following property.

Algorithm 1: Approximation algorithm for the least squares problem.
input: Covariate matrix $X = [x_1 | x_2 | \cdots | x_n]^\top \in \mathbb{R}^{n \times k}$; response vector $y = (y_1, y_2, \ldots, y_n)^\top \in \mathbb{R}^n$; approximation parameter $\epsilon \in (0, 1)$.
assume: $X^\top X = I_k$.
output: Weight vector $\hat{w} \in \mathbb{R}^k$ and permutation matrix $\hat\Pi \in \mathcal{P}_n$.
1: Run "Row Sampling" algorithm with input matrix $X$ to obtain a matrix $S \in \mathbb{R}^{r \times n}$ with $r = 4k$.
2: Let $B$ be the set of vectors $b = (b_1, b_2, \ldots, b_n)^\top \in \mathbb{R}^n$ satisfying the following: for each $i \in [n]$, if the $i$-th column of $S$ is all zeros, then $b_i = 0$; otherwise, $b_i \in \{y_1, y_2, \ldots, y_n\}$.
3: Let $c := 1 + 4(1 + \sqrt{n/(4k)})^2$.
4: for each $b \in B$ do
5:   Compute $\hat{w}_b \in \arg\min_{w \in \mathbb{R}^k} \|S(Xw - b)\|_2^2$, and let $r_b := \min_{\Pi \in \mathcal{P}_n} \|X\hat{w}_b - \Pi^\top y\|_2^2$.
6:   Construct a $\sqrt{\epsilon r_b / c}$-net $N_b$ for the Euclidean ball of radius $\sqrt{c\, r_b}$ around $\hat{w}_b$, so that for each $v \in \mathbb{R}^k$ with $\|v - \hat{w}_b\|_2 \le \sqrt{c\, r_b}$, there exists $v' \in N_b$ such that $\|v - v'\|_2 \le \sqrt{\epsilon r_b / c}$.
7: end for
8: return $\hat{w} \in \arg\min_{w \in \bigcup_{b \in B} N_b} \min_{\Pi \in \mathcal{P}_n} \|Xw - \Pi^\top y\|_2^2$ and $\hat\Pi \in \arg\min_{\Pi \in \mathcal{P}_n} \|X\hat{w} - \Pi^\top y\|_2^2$.

Theorem 1 (Specialization of Theorem 12 in [5]). There is an algorithm ("Row Sampling") that, given any matrix $A \in \mathbb{R}^{n \times k}$ with $n \ge k$, returns in $\mathrm{poly}(n, k)$ time a matrix $S \in \mathbb{R}^{r \times n}$ with $r = 4k$ such that the following hold.
1. Every row of $S$ has at most one non-zero entry.
2. For every $b \in \mathbb{R}^n$, every $w_0 \in \arg\min_{w \in \mathbb{R}^k} \|S(Aw - b)\|_2^2$ satisfies $\|Aw_0 - b\|_2^2 \le c \cdot \min_{w \in \mathbb{R}^k} \|Aw - b\|_2^2$ for $c = 1 + 4(1 + \sqrt{n/(4k)})^2 = O(n/k)$.
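To illustrate how Theorem 1 is used, the sketch below solves the sketched problem $\min_w \|S(Aw - b)\|_2^2$ with an ordinary least squares call. The stand-in sketch constructor `toy_row_sample_sketch` is a hypothetical uniform row sample that merely has the right shape; it is not the Row Sampling algorithm of Boutsidis et al. [5] and carries no approximation guarantee.

```python
import numpy as np

def toy_row_sample_sketch(n, k, seed=None):
    """Stand-in for Row Sampling: r = 4k uniformly chosen rows, uniform weights.

    NOTE: this is NOT the algorithm of Boutsidis et al. [5]; it only mimics
    the shape of S (one non-zero entry per row) so the pipeline can run.
    Assumes n >= 4k.
    """
    rng = np.random.default_rng(seed)
    r = 4 * k
    S = np.zeros((r, n))
    S[np.arange(r), rng.choice(n, size=r, replace=False)] = np.sqrt(n / r)
    return S

def sketched_least_squares(S, A, b):
    """min_w ||S(Aw - b)||^2 is plain least squares on the sketched data;
    with the true Row Sampling matrix, Theorem 1 gives an O(n/k) factor."""
    w0, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return w0
```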
The matrix $S$ returned by Row Sampling determines a (weighted) subset of $O(k)$ rows of $A$ such that solving a (ordinary) least squares problem (with any right-hand side $b$) on this subset of rows and corresponding right-hand side entries yields a $O(n/k)$-approximation to the least squares problem over all rows and right-hand side entries. Row Sampling does not directly apply to our problem because (1) it does not minimize over permutations of the right-hand side, and (2) the approximation factor is too large. However, we are able to use it to narrow the search space in our problem.

An alternative to Row Sampling is to simply enumerate all subsets of $k$ rows of $X$. This is justified by a recent result of Dereziński and Warmuth [7], which shows that for any right-hand side $b \in \mathbb{R}^n$, using "volume sampling" [3] to choose a matrix $S \in \{0,1\}^{k \times n}$ (where each row has one non-zero entry) gives a similar guarantee as that of Row Sampling, except with the $O(n/k)$ factor replaced by $k+1$ in expectation.

2.2 Analysis

The approximation guarantee of Algorithm 1 is given in the following theorem.

Theorem 2. Algorithm 1 returns $\hat{w} \in \mathbb{R}^k$ and $\hat\Pi \in \mathcal{P}_n$ satisfying
$$\|X\hat{w} - \hat\Pi^\top y\|_2^2 \;\le\; (1+\epsilon) \min_{w \in \mathbb{R}^k,\, \Pi \in \mathcal{P}_n} \|Xw - \Pi^\top y\|_2^2 \,.$$

Proof. Let $\mathrm{opt} := \min_{w, \Pi} \|Xw - \Pi^\top y\|_2^2$ be the optimal cost, and let $(w^\star, \Pi^\star)$ denote a solution achieving this cost. The optimality implies that $w^\star$ satisfies the normal equations $X^\top X w^\star = X^\top \Pi^{\star\top} y$. Observe that there exists a vector $b^\star \in B$ satisfying $S b^\star = S \Pi^{\star\top} y$. By Theorem 1 and the normal equations, the vector $\hat{w}_{b^\star}$ and cost value $r_{b^\star}$ satisfy
$$\mathrm{opt} \;\le\; r_{b^\star} \;\le\; \|X\hat{w}_{b^\star} - \Pi^{\star\top} y\|_2^2 \;=\; \|X(\hat{w}_{b^\star} - w^\star)\|_2^2 + \mathrm{opt} \;\le\; c \cdot \mathrm{opt} \,.$$
Moreover, since $X^\top X = I_k$, we have that $\|\hat{w}_{b^\star} - w^\star\|_2 = \|X(\hat{w}_{b^\star} - w^\star)\|_2 \le \sqrt{(c-1)\,\mathrm{opt}} \le \sqrt{c\, r_{b^\star}}$. By construction of $N_{b^\star}$, there exists $w \in N_{b^\star}$ satisfying $\|w - w^\star\|_2^2 \le \epsilon r_{b^\star}/c \le \epsilon\, \mathrm{opt}$. For this $w$, the normal equations imply
$$\min_{\Pi \in \mathcal{P}_n} \|Xw - \Pi^\top y\|_2^2 \;\le\; \|Xw - \Pi^{\star\top} y\|_2^2 \;=\; \|X(w - w^\star)\|_2^2 + \mathrm{opt} \;\le\; (1+\epsilon)\,\mathrm{opt} \,.$$
Therefore, the solution returned by Algorithm 1 has cost no more than $(1+\epsilon)\,\mathrm{opt}$.

By the results of Pananjady et al. [18] for maximum likelihood estimation, our algorithm enjoys recovery guarantees for $\bar{w}$ and $\bar\pi$ when the data come from the Gaussian measurement model (1). However, the approximation guarantee also holds for worst-case inputs without generative assumptions.

Running time. We now consider the running time of Algorithm 1. There is the initial cost for singular value decomposition (as discussed at the beginning of the section), and also for "Row Sampling"; both of these take $\mathrm{poly}(n, d)$ time. For the rest of the algorithm, we need to consider the size of $B$ and the size of the net $N_b$ for each $b \in B$. First, we have $|B| \le n^r = n^{O(k)}$, since $S$ has only $4k$ rows and each row has at most a single non-zero entry. Next, for each $b \in B$, we construct the $\delta$-net $N_b$ (for $\delta := \sqrt{\epsilon r_b/c}$) by constructing a $\delta/\sqrt{k}$-net for the $\ell_\infty$-ball of radius $\sqrt{c\, r_b}$ centered at $\hat{w}_b$ (using an appropriate axis-aligned grid). This has size $|N_b| \le (4c^2 k/\epsilon)^{k/2} = (n/\epsilon)^{O(k)}$. Finally, each $\arg\min_{w \in \mathbb{R}^k}$ computation takes $O(nk^2)$ time, and each $(\arg)\min_{\Pi \in \mathcal{P}_n}$ takes $O(nk + n\log n)$ time [18] (also see Appendix B). So, the overall running time is $(n/\epsilon)^{O(k)} + \mathrm{poly}(n, d)$.
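The $(\arg)\min_{\Pi \in \mathcal{P}_n}$ step deserves a brief note. For a fixed $w$, writing $z = Xw$, one standard way to find the best permutation is to match the sorted entries of $z$ with the sorted entries of $y$, which is optimal by the rearrangement inequality and runs in $O(n \log n)$ time. A minimal sketch (ours) of this routine:

```python
import numpy as np

def min_over_permutations(z, y):
    """Compute min over permutations pi of sum_i (z_i - y_{pi(i)})^2.

    By the rearrangement inequality, the optimum pairs the k-th smallest
    entry of z with the k-th smallest entry of y, so sorting suffices.
    Returns (cost, pi) with pi[i] = index of the y-entry matched to z_i.
    """
    order_z = np.argsort(z)
    order_y = np.argsort(y)
    pi = np.empty(len(z), dtype=int)
    pi[order_z] = order_y
    cost = float(np.sum((z - y[pi]) ** 2))
    return cost, pi
```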
3 Exact recovery algorithm in noiseless Gaussian setting To counter the intractability of the least squares problem in (2) confronted in Section 2, it is natural to explore distributional assumptions that may lead to faster algorithms. In this section, we consider the noiseless measurement model where the (xi )ni=1 are i.i.d. draws from N(0, I d ) (as in [18]). We ? with high probability when n d + 1. The algorithm runs give an algorithm that exactly recovers w ? are appropriately quantized. in poly(n, d)-time when (xi )ni=1 and w It will be notationally simpler to consider n + 1 covariate vectors and responses ? > x?? (i) , yi = w i = 0, 1, . . . , n . (3) Here, (xi )ni=0 are n + 1 i.i.d. draws from N(0, I d ), the unknown permutation ? ? is over {0, 1, . . . , n}, and the requirement of at least d + 1 measurements is expressed as n d. In fact, we shall consider a variant of the problem in which we are given one of the values of the unknown permutation ? ? . Without loss of generality, assume we are given that ? ? (0) = 0. Solving this variant of the problem suffices because there are only n + 1 possible values of ? ? (0): we can try them all, incurring just a factor n + 1 in the computation time. So henceforth, we just consider ? ? as an unknown permutation on [n]. 4 Algorithm 2 Find permutation input Covariate vectors x0 , x1 , x2 , . . . , xn in Rd ; response values y0 , y1 , y2 , . . . , yn in R; confidence parameter 2 (0, 1); lattice parameter > 0. ? 2 Rd and permutation ? ? > x?? (i) for each i 2 [n], and assume there exists w ? on [n] such that yi = w > ? x0 . that y0 = w output Permutation ? ? on [n] or failure. 1: Let X = [x1 |x2 | ? ? ? |xn ]> 2 Rn?d , and its pseudoinverse be X ? = [? x1 |? x2 | ? ? ? |? xn ]. ?> 2: Create Subset Sum instance with n2 source numbers ci,j := yi x j x0 for (i, j) 2 [n] ? [n] and target sum y0 . 3: Run Algorithm 3 with Subset Sum instance and lattice parameter . 4: if Algorithm 3 returns a solution S ? [n] ? [n] then 5: return any permutation ? ? on [n] such that ? ? (i) = j implies (i, j) 2 S. 6: else 7: return failure. 8: end if Algorithm 3 Lagarias and Odlyzko [12] subset sum algorithm input Source numbers {ci }i2I ? R; target sum t 2 R; lattice parameter > 0. output Subset S? ? I or failure. 1: Construct lattice basis B 2 R(|I|+2)?(|I|+1) where " # I |I|+1 B := 2 R(|I|+2)?(|I|+1) . t ci : i 2 I 2: Run basis reduction [e.g., 16] to find non-zero lattice vector v of length at most 2|I|/2 ? 1 (B). I 3: if v = z(1, >?, 0)> , with z 2 Z and S? 2 {0, 1} is characteristic vector for some S? ? I then S ? 4: return S. 5: else 6: return failure. 7: end if 3.1 Algorithm Our algorithm, shown as Algorithm 2, is based on a reduction to the Subset Sum problem. An instance of Subset Sum is specified by an unordered collection ofPsource numbers {ci }i2I ? R, and a target sum t 2 R. The goal is to find a subset S ? I such that i2S ci = t. Although Subset Sum is NP-hard in the worst case, it is tractable for certain structured instances [12, 9]. We prove that Algorithm 2 constructs such an instance with high probability. A similar algorithm based on such a reduction was recently used by Andoni et al. [2] for a different but related problem. Algorithm 2 proceeds by (i) solving a Subset Sum instance based on the covariate vectors and response values (using Algorithm 3), and (ii) constructing a permutation ? ? on [n] based on the solution to the Subset Sum instance. With the permutation ? ? 
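For concreteness, here is a sketch (ours) of the lattice basis construction in Algorithm 3 and the check in its step 3. The sign convention on the bottom row (negating the $c_i$) is chosen so that $z(1, \sigma_S, 0)$ is a lattice vector whenever $\sum_{i \in S} c_i = t$. The basis reduction step itself is not reproduced; one would hand $B$ to an LLL implementation (for example, in fpylll or SageMath), typically after scaling the real entries to integers, which we assume away for illustration.

```python
import numpy as np

def lagarias_odlyzko_basis(c, t, beta):
    """Columns of B generate the lattice: identity block I_{|I|+1} stacked
    on the row beta * (t, -c_1, ..., -c_{|I|}).  With this convention, the
    integer coefficient vector z * (1, sigma_S) maps to the lattice point
    z * (1, sigma_S, 0) exactly when sum_{i in S} c_i = t."""
    c = np.asarray(c, dtype=float)
    top = np.eye(len(c) + 1)
    bottom = beta * np.concatenate(([t], -c))
    return np.vstack([top, bottom[None, :]])   # shape (|I|+2, |I|+1)

def check_short_vector(v, tol=1e-9):
    """Step 3 of Algorithm 3: does v = z * (1, sigma_S, 0) for a subset S?
    Returns S (as a set of 0-based indices into I) or None."""
    z = v[0]
    if abs(z) < tol or abs(v[-1]) > tol:
        return None
    sigma = np.asarray(v[1:-1]) / z
    on = np.isclose(sigma, 1.0, atol=tol)
    if np.all(on | np.isclose(sigma, 0.0, atol=tol)):
        return set(np.nonzero(on)[0])
    return None
```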
With the permutation $\hat\pi$ in hand, we (try to) find a solution $w \in \mathbb{R}^d$ to the system of linear equations $y_i = w^\top x_{\hat\pi(i)}$ for $i \in [n]$. If $\hat\pi = \bar\pi$, then there is a unique such solution almost surely.

3.2 Analysis

The following theorem is the main recovery guarantee for Algorithm 2.

Theorem 3. Pick any $\delta \in (0, 1)$. Suppose $(x_i)_{i=0}^n$ are i.i.d. draws from $\mathrm{N}(0, I_d)$, and $(y_i)_{i=0}^n$ follow the noiseless measurement model from (3) for some $\bar{w} \in \mathbb{R}^d$ and permutation $\bar\pi$ on $[n]$ (and $\bar\pi(0) = 0$), and that $n \ge d$. Furthermore, suppose Algorithm 2 is run with inputs $(x_i)_{i=0}^n$, $(y_i)_{i=0}^n$, $\delta$, and $\beta$, and also that $\beta \ge 2^{n^2/2}/\varepsilon$ where $\varepsilon$ is defined in Equation (8). With probability at least $1 - \delta$, Algorithm 2 returns $\hat\pi = \bar\pi$.

Remark 1. The value of $\varepsilon$ from Equation (8) is directly proportional to $\|\bar{w}\|_2$, and Algorithm 2 requires a lower bound on $\varepsilon$ (in the setting of the lattice parameter $\beta$). Hence, it suffices to determine a lower bound on $\|\bar{w}\|_2$. Such a bound can be obtained from the measurement values: a standard $\chi^2$ tail bound (Lemma 6 in Appendix C) shows that, with high probability, $\sqrt{\sum_{i=1}^n y_i^2/(2n)}$ is a lower bound on $\|\bar{w}\|_2$, and is within a constant factor of it as well.

Remark 2. Algorithm 2 strongly exploits the assumption of noiseless measurements, which is expected given the SNR lower bounds of Pananjady et al. [18] for recovering $\bar\pi$. The algorithm, however, is also very brittle and very likely fails in the presence of noise.

Remark 3. The recovery result does not contradict the results of Unnikrishnan et al. [22], which show that a collection of $2d$ measurement vectors is necessary for recovering all $\bar{w}$, even in the noiseless measurement model of (3). Indeed, our result shows that for a fixed $\bar{w} \in \mathbb{R}^d$, with high probability $d+1$ measurements in the model of (3) suffice to permit exact recovery of $\bar{w}$, but this same set of measurement vectors (when $d+1 < 2d$) will fail for some other $\bar{w}' \neq 0$.

The proof of Theorem 3 is based on the following theorem, essentially due to Lagarias and Odlyzko [12] and Frieze [9], concerning certain structured instances of Subset Sum that can be solved using the lattice basis reduction algorithm of Lenstra et al. [16]. Given a basis $B = [b_1 | b_2 | \cdots | b_k] \in \mathbb{R}^{m \times k}$ for a lattice
$$\mathcal{L}(B) \;:=\; \Big\{ \textstyle\sum_{i=1}^k z_i b_i \;:\; z_1, z_2, \ldots, z_k \in \mathbb{Z} \Big\} \;\subset\; \mathbb{R}^m \,,$$
this algorithm can be used to find a non-zero vector $v \in \mathcal{L}(B) \setminus \{0\}$ whose length is at most $2^{(k-1)/2}$ times that of the shortest non-zero vector in the lattice,
$$\lambda_1(B) \;:=\; \min_{v \in \mathcal{L}(B) \setminus \{0\}} \|v\|_2 \,.$$

Theorem 4 ([12, 9]). Suppose the Subset Sum instance specified by source numbers $\{c_i\}_{i \in I} \subset \mathbb{R}$ and target sum $t \in \mathbb{R}$ satisfies the following properties.
1. There is a subset $S^\star \subseteq I$ such that $\sum_{i \in S^\star} c_i = t$.
2. Define $R := 2^{|I|/2}\sqrt{|S^\star| + 1}$ and $Z_R := \{(z_0, z) \in \mathbb{Z} \times \mathbb{Z}^I : 0 < z_0^2 + \sum_{i \in I} z_i^2 \le R^2\}$. There exists $\varepsilon > 0$ such that $|z_0 \cdot t - \sum_{i \in I} z_i \cdot c_i| \ge \varepsilon$ for each $(z_0, z) \in Z_R$ that is not an integer multiple of $(1, \sigma^\star)$, where $\sigma^\star \in \{0,1\}^I$ is the characteristic vector for $S^\star$.

Let $B$ be the lattice basis constructed by Algorithm 3, and assume $\beta \ge 2^{|I|/2}/\varepsilon$. Then every non-zero vector in the lattice $\mathcal{L}(B)$ with length at most $2^{|I|/2}$ times the length of the shortest non-zero vector in $\mathcal{L}(B)$ is an integer multiple of the vector $(1, \sigma^\star, 0)$, and the basis reduction algorithm of Lenstra et al. [16] returns such a non-zero vector.

The Subset Sum instance constructed in Algorithm 2 has $n^2$ source numbers $\{c_{i,j} : (i,j) \in [n] \times [n]\}$ and target sum $y_0$. We need to show that it satisfies the two conditions of Theorem 4.
Let $S_{\bar\pi} := \{(i,j) : \bar\pi(i) = j\} \subseteq [n] \times [n]$, and let $\bar\Pi = (\bar\Pi_{i,j})_{(i,j) \in [n] \times [n]} \in \mathcal{P}_n$ be the permutation matrix with $\bar\Pi_{i,j} := \mathbb{1}\{\bar\pi(i) = j\}$ for all $(i,j) \in [n] \times [n]$. Note that $\bar\Pi$ is the "characteristic vector" for $S_{\bar\pi}$. Define $R := 2^{n^2/2}\sqrt{n+1}$ and
$$Z_R \;:=\; \Big\{ (z_0, Z) \in \mathbb{Z} \times \mathbb{Z}^{n \times n} \;:\; 0 < z_0^2 + \textstyle\sum_{1 \le i,j \le n} Z_{i,j}^2 \le R^2 \Big\} \,.$$
A crude bound shows that $|Z_R| \le 2^{O(n^4)}$. The following lemma establishes the first required property in Theorem 4.

Lemma 1. The random matrix $X$ has rank $d$ almost surely, and the subset $S_{\bar\pi}$ satisfies $y_0 = \sum_{(i,j) \in S_{\bar\pi}} c_{i,j}$.

Proof. That $X$ has rank $d$ almost surely follows from the fact that the probability density of $X$ is supported on all of $\mathbb{R}^{n \times d}$. This implies that $X^\dagger X = \sum_{j=1}^n \breve{x}_j x_j^\top = I_d$, and
$$y_0 \;=\; \sum_{j=1}^n x_0^\top \breve{x}_j \, x_j^\top \bar{w} \;=\; \sum_{1 \le i,j \le n} x_0^\top \breve{x}_j \cdot y_i \cdot \mathbb{1}\{\bar\pi(i) = j\} \;=\; \sum_{1 \le i,j \le n} c_{i,j} \cdot \mathbb{1}\{\bar\pi(i) = j\} \,.$$
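Lemma 1 is easy to sanity-check numerically. The sketch below (ours) draws data from the noiseless model (3), forms the source numbers $c_{i,j} = y_i \cdot \breve{x}_j^\top x_0$ from the pseudoinverse, and confirms that the entries selected by the true permutation sum to $y_0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 5
w_bar = rng.standard_normal(d)
xs = rng.standard_normal((n + 1, d))                     # x_0, ..., x_n
pi_bar = np.concatenate(([0], 1 + rng.permutation(n)))   # pi_bar(0) = 0
y = xs[pi_bar] @ w_bar                                   # noiseless model (3)

X = xs[1:]                                   # rows x_1, ..., x_n
X_pinv = np.linalg.pinv(X)                   # columns are x_breve_1, ..., x_breve_n
v = X_pinv.T @ xs[0]                         # v[j-1] = x_breve_j . x_0
C = np.outer(y[1:], v)                       # C[i-1, j-1] = c_{i,j}

# Lemma 1: the entries selected by the true permutation sum to y_0.
total = sum(C[i - 1, pi_bar[i] - 1] for i in range(1, n + 1))
assert np.isclose(total, y[0])
```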
poly(d, log(n), 1/ ) ?kwk setting of required in Theorem 3, this gives a poly(n, d, log(1/ )) bound on the total number of iterations as well as on the total running time. However, the basis reduction algorithm requires both arithmetic and rounding operations, which are typically only available for finite precision rational inputs. Therefore, a formal running time analysis ? to be would require the idealized real-valued covariate vectors (xi )ni=0 and unknown target vector w quantized to finite precision values. This is doable, and is similar to using a discretized Gaussian ? is a vector of finite precision distribution for the distribution of the covariate vectors (and assuming w values), but leads to a messier analysis incomparable to the setup of previous works. Nevertheless, it would be desirable to find a different algorithm that avoids lattice basis reduction that still works with just d + 1 measurements. 4 Lower bounds on signal-to-noise for approximate recovery In this section, we consider the measurement model from (1) where (xi )ni=1 are i.i.d. draws from either N(0, I d ) or the uniform distribution on [ 1/2, 1/2]d , and ("i )ni=1 are i.i.d. draws from N(0, 2 ). We establish lower bounds on the signal-to-noise ratio (SNR), 2 SNR = ? 2 kwk 2 , n n ? = w((x ? ? to approximately recover w ? in expectation. required by any estimator w i )i=1 , (yi )i=1 ) for w ? 2 and 2 . The estimators may have a priori knowledge of the values ofkwk Theorem 5. Assume ("i )ni=1 are i.i.d. draws from N(0, 2 ). 1. There is an absolute constant C > 0 such that the following holds. If n 3, d 22, (xi )ni=1 are i.i.d. draws from N(0, I d ), (yi )ni=1 follow the measurement model from (1), and ? d SNR ? C ? min ,1 , log log(n) ? there exists some w ? 2 Rd such that then for any estimator w, ? ? E kw ? 2 wk ? 1 ? 2. kwk 24 2. If (xi )ni=1 are i.i.d. draws from the uniform distribution on [ 1/2, 1/2]d , and (yi )ni=1 follow the measurement model from (1), and SNR ? 2 , ? there exists some w ? 2 Rd such that then for any estimator w, ? ? ? ? 1 1 ? wk ? 2 ? 2. E kw 1 p kwk 2 2 ? > xi + "i for i 2 [n], the maximum Note that in the classical linear regression model where p yi = w ? mle satisfies Ekw ? mle wk ? 2?C likelihood estimator w d/n, where C > 0 is an absolute constant. ? up to (say) Euclidean distancekwk ? 2 /24 Therefore, the SNR requirement to approximately recover w is SNR 242 Cd/n. Compared to this setting, Theorem 5 implies that with the measurement model of (1), the SNR requirement (as a function of n) is at substantially higher (d/ log log(n) in the normal covariate case, or a constant not even decreasing with n in the uniform covariate case). p For the normal covariate case, Pananjady et al. [18] show that if n > d, ? < n, and SNR nc? n n d +? , ? mle , ? then the maximum likelihood estimator (w ?mle ) (i.e., any minimizer of (2)) satisfies ? ?mle = ? ? with probability at least 1 c0 n 2? . (Here, c > 0 and c0 > 0 are absolute constants.) It is p ? mle wk ? 2?C straightforward to see that, on the same event, we havekw d/n for some absolute 8 constant C > 0. Therefore, the necessary and sufficient conditions on SNR for approximate recovery 00 ? lie between C 0 d/ log log(n) and nC (for absolute constants C 0 , C 00 > 0). Narrowing this of w range remains an interesting open problem. A sketch of the proof in the normal covariate case is as follows. Without p loss of generality, we restrict ? is a unit vector. We construct a 1/ 2-packing of the unit sphere in attention to the case where w ? 
will be chosen from from this set. Observe that for any distinct u, u0 2 U , each Rd ; the target w > n 0 n of (xi u)i=1 and (x> i u )i=1 is an i.i.d. sample from N(0, 1) of size n; we prove that they therefore determine empirical distributions that are close to each other in Wasserstein-2 distance with high probability. We then prove that conditional on this event, the resulting distributions of (yi )ni=1 under ? = u and x ? = u0 (for any pair u, u0 2 U ) are close in Kullback-Leibler divergence. Hence, by (a x generalization of) Fano?s inequality [see, e.g., 11], no estimator can determine the correct u 2 U with high probability. The proof for the uniform case is similar, using U = {e1 , e1 } where e1 = (1, 0, . . . , 0)> . The full proof of Theorem 5 is given in Appendix D. Acknowledgments We are grateful to Ashwin Pananjady, Micha? Derezi?nski, and Manfred Warmuth for helpful discussions. DH was supported in part by NSF awards DMR-1534910 and IIS-1563785, a Bloomberg Data Science Research Grant, and a Sloan Research Fellowship. XS was supported in part by a grant from the Simons Foundation (#320173 to Xiaorui Sun). This work was done in part while DH and KS were research visitors and XS was a research fellow at the Simons Institute for the Theory of Computing. References [1] Abubakar Abid, Ada Poon, and James Zou. Linear regression with shuffled labels. arXiv preprint arXiv:1705.01342, 2017. [2] Alexandr Andoni, Daniel Hsu, Kevin Shi, and Xiaorui Sun. Correspondence retrieval. In Conference on Learning Theory, 2017. [3] Haim Avron and Christos Boutsidis. Faster subset selection for matrices and applications. SIAM Journal on Matrix Analysis and Applications, 34(4):1464?1499, 2013. [4] Sergey Bobkov and Michel Ledoux. One-dimensional empirical measures, order statistics and Kantorovich transport distances. preprint, 2014. [5] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Near-optimal coresets for least-squares regression. IEEE Transactions on Information Theory, 59(10):6880?6892, 2013. [6] Kenneth R Davidson and Stanislaw J Szarek. Local operator theory, random matrices and banach spaces. Handbook of the geometry of Banach spaces, 1(317-366):131, 2001. [7] Micha? Derezi?nski and Manfred K Warmuth. Unbiased estimates for linear regression via volume sampling. arXiv preprint arXiv:1705.06908, 2017. [8] Golnooshsadat Elhami, Adam James Scholefield, Benjamin Bejar Haro, and Martin Vetterli. Unlabeled sensing: Reconstruction algorithm and theoretical guarantees. In Proceedings of the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing, 2017. [9] Alan M Frieze. On the lagarias-odlyzko algorithm for the subset sum problem. SIAM Journal on Computing, 15(2):536?539, 1986. [10] Michael R Garey and David S Johnson. Computers and Intractability: A Guide to the Theory of NP-completeness. WH Freeman and Company, New York, 1979. [11] Te Sun Han and Sergio Verd?. Generalizing the Fano inequality. IEEE Transactions on Information Theory, 40(4):1247?1251, 1994. [12] Jeffrey C Lagarias and Andrew M Odlyzko. Solving low-density subset sum problems. Journal of the ACM, 32(1):229?246, 1985. 9 [13] Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. Annals of Statistics, pages 1302?1338, 2000. [14] Lucien Le Cam. Convergence of estimates under dimensionality restrictions. The Annals of Statistics, pages 38?53, 1973. [15] Michel Ledoux. The Concentration of Measure Phenomenon. American Mathematical Society, 2000. 
[16] Arjen Klaas Lenstra, Hendrik Willem Lenstra, and László Lovász. Factoring polynomials with rational coefficients. Mathematische Annalen, 261(4):515–534, 1982.
[17] Pascal Massart. Concentration Inequalities and Model Selection, volume 6. Springer, 2007.
[18] Ashwin Pananjady, Martin J. Wainwright, and Thomas A. Courtade. Linear regression with an unknown permutation: Statistical and computational limits. In 54th Annual Allerton Conference on Communication, Control, and Computing, pages 417–424, 2016.
[19] Ashwin Pananjady, Martin J. Wainwright, and Thomas A. Courtade. Denoising linear models with permuted data. arXiv preprint arXiv:1704.07461, 2017.
[20] Rolf-Dieter Reiss. Approximate Distributions of Order Statistics: With Applications to Nonparametric Statistics. Springer Science & Business Media, 2012.
[21] Mark Rudelson and Roman Vershynin. Non-asymptotic theory of random matrices: extreme singular values. arXiv preprint arXiv:1003.2990, 2010.
[22] Jayakrishnan Unnikrishnan, Saeid Haghighatshoar, and Martin Vetterli. Unlabeled sensing with random linear measurements. arXiv preprint arXiv:1512.00115, 2015.
[23] David P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):1–157, 2014.
[24] Bin Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pages 423–435. Springer, 1997.
NeuralFDR: Learning Discovery Thresholds from Hypothesis Features

Fei Xia*, Martin J. Zhang*, James Zou†, David Tse†
Stanford University
{feixia,jinye,jamesz,dntse}@stanford.edu

Abstract

As datasets grow richer, an important challenge is to leverage the full features in the data to maximize the number of useful discoveries while controlling for false positives. We address this problem in the context of multiple hypotheses testing, where for each hypothesis, we observe a p-value along with a set of features specific to that hypothesis. For example, in genetic association studies, each hypothesis tests the correlation between a variant and the trait. We have a rich set of features for each variant (e.g. its location, conservation, epigenetics etc.) which could inform how likely the variant is to have a true association. However, popular testing approaches, such as Benjamini-Hochberg's procedure (BH) and independent hypothesis weighting (IHW), either ignore these features or assume that the features are categorical. We propose a new algorithm, NeuralFDR, which automatically learns a discovery threshold as a function of all the hypothesis features. We parametrize the discovery threshold as a neural network, which enables flexible handling of multi-dimensional discrete and continuous features as well as efficient end-to-end optimization. We prove that NeuralFDR has strong false discovery rate (FDR) guarantees, and show that it makes substantially more discoveries in synthetic and real datasets. Moreover, we demonstrate that the learned discovery threshold is directly interpretable.

1 Introduction

In modern data science, the analyst is often swarmed with a large number of hypotheses, e.g. is a mutation associated with a certain trait, or is this ad effective for that section of the users. Deciding which hypothesis to statistically accept or reject is a ubiquitous task. In standard multiple hypothesis testing, each hypothesis is boiled down to one number, a p-value computed against some null distribution, with a smaller value indicating less likely to be null. We have powerful procedures to systematically reject hypotheses while controlling the false discovery rate (FDR). Note that here the convention is that a "discovery" corresponds to a "rejected" null hypothesis. These FDR procedures are widely used, but they ignore additional information that is often available in modern applications. Each hypothesis, in addition to the p-value, could also contain a set of features pertinent to the objects being tested in the hypothesis. In the genetic association setting above, each hypothesis tests whether a mutation is correlated with the trait and we have a p-value for this. Moreover, we also have other features about both the mutation (e.g. its location, epigenetic status, conservation etc.) and the trait (e.g. if the trait is gene expression then we have features on the gene). Together these form a feature representation of the hypothesis. This feature vector is ignored by the standard multiple hypotheses testing procedures.

In this paper, we present a flexible method using neural networks to learn a nonlinear mapping from hypothesis features to a discovery threshold.

* These authors contributed equally to this work and are listed in alphabetical order. † These authors contributed equally.
[Figure 1: NeuralFDR: an end-to-end learning procedure. Hypotheses (p-values with covariates $X$) pass through the neural network discovery threshold $t(x; \theta)$, which is learned end-to-end; the illustrated example yields FDP = 1/3.]

Popular procedures for multiple hypotheses testing correspond to having one constant threshold for all the hypotheses (BH [2]), or a constant for each group of hypotheses (group BH [12], IHW [13]). Our algorithm takes account of all the features to automatically learn different thresholds for different hypotheses. Our deep learning architecture enables efficient optimization and gracefully handles both continuous and discrete multidimensional hypothesis features. Our theoretical analysis shows that we can control false discovery proportion (FDP) with high probability. We provide extensive simulation on synthetic and real datasets to demonstrate that our algorithm makes more discoveries while controlling FDR compared to state-of-the-art methods.

Contribution. As shown in Fig. 1, we provide NeuralFDR, a practical end-to-end algorithm for the multiple hypotheses testing problem where the hypothesis features can be continuous and multi-dimensional. In contrast, the currently widely-used algorithms either ignore the hypothesis features (BH [2], Storey's BH [17]) or are designed for simple discrete features (group BH [12], IHW [13]). Our algorithm has several innovative features. We learn a multi-layer perceptron as the discovery threshold and use a mirroring technique to robustly estimate false discoveries. We show that NeuralFDR controls false discovery with high probability for independent hypotheses and asymptotically under weak dependence [17, 12], and we demonstrate on both synthetic and real datasets that it controls FDR while making substantially more discoveries. Another advantage of our end-to-end approach is that the learned discovery thresholds are directly interpretable. We will illustrate in Sec. 4 how the threshold conveys biological insights.

Related works. Holm [11] investigated the use of p-value weights, where a larger weight suggests that the hypothesis is more likely to be an alternative. Benjamini and Hochberg [3] considered assigning different losses to different hypotheses according to their importance. Some more recent works are [9, 8, 12]. In these works, the features are assumed to have some specific forms, either prespecified weights for each hypothesis or the grouping information. The more general formulation considered in this paper was proposed quite recently [13, 15, 14]. It assumes that for each hypothesis, we observe not only a p-value $P_i$ but also a feature $X_i$ lying in some generic space $\mathcal{X}$. The feature is meant to capture some side information that might bear on the likelihood of a hypothesis to be significant, or on the power of $P_i$ under the alternative, but the nature of this relationship is not fully known ahead of time and must be learned from the data.

The recent work most relevant to ours is IHW [13]. In IHW, the data is grouped into $G$ groups based on the features and the decision threshold is a constant for each group. IHW is similar to NeuralFDR in that both methods optimize the parameters of the decision rule to increase the number of discoveries while using cross validation for asymptotic FDR control.
IHW has several limitations: first, binning the data into $G$ groups can be difficult if the feature space $\mathcal{X}$ is multi-dimensional; second, the decision rule, restricted to be a constant for each group, is artificial for continuous features; and third, the asymptotic FDR control guarantee requires the number of groups going to infinity, which can be unrealistic. In contrast, NeuralFDR uses a neural network to parametrize the decision rule, which is much more general and fits the continuous features. As demonstrated in the empirical results, it works well with multi-dimensional features. In addition to asymptotic FDR control, NeuralFDR also has a high-probability false discovery proportion control guarantee with a finite number of hypotheses.

SABHA [15] and AdaPT [14] are two recent FDR control frameworks that allow flexible methods to explore the data and compute feature-dependent decision rules. To compute the decision rule, SABHA estimates the null proportion using non-parametric methods while AdaPT estimates the joint density of the p-value and the features with spline regression. Both methods have a similar limitation to IHW in that the estimation becomes very hard for multi-dimensional features. This issue is addressed in [4], where the null proportion is modeled as a linear combination of some hand-crafted transformations of the features. NeuralFDR models this relation in a more flexible way.

2 Preliminaries

We have $n$ hypotheses and each hypothesis $i$ is characterized by a tuple $(P_i, X_i, H_i)$, where $P_i \in (0, 1)$ is the p-value, $X_i \in \mathcal{X}$ is the hypothesis feature, and $H_i \in \{0, 1\}$ indicates if this hypothesis is null ($H_i = 0$) or alternative ($H_i = 1$). The p-value $P_i$ represents the probability of observing an equally or more extreme value compared to the testing statistic when the hypothesis is null, and is calculated based on some data different from $X_i$. The alternative hypotheses ($H_i = 1$) are the true signals that we would like to discover. A smaller p-value presents stronger evidence for a hypothesis to be alternative. In practice, we observe $P_i$ and $X_i$ but do not know $H_i$. We define the null proportion $\pi_0(x)$ to be the probability that the hypothesis is null conditional on the feature $X_i = x$. The standard assumption is that under the null ($H_i = 0$), the p-value is uniformly distributed in $(0, 1)$. Under the alternative ($H_i = 1$), we denote the p-value distribution by $f_1(p|x)$. In most applications, the p-values under the alternative are systematically smaller than those under the null. A detailed discussion of the assumptions can be found in Sec. 5.

The general goal of multiple hypotheses testing is to claim a maximum number of discoveries based on the observations $\{(P_i, X_i)\}_{i=1}^n$ while controlling the false positives. The most popular quantities that conceptualize the false positives are the family-wise error rate (FWER) [7] and the false discovery rate (FDR) [2]. We specifically consider FDR in this paper. FDR is the expected proportion of false discoveries, and one closely related quantity, the false discovery proportion (FDP), is the actual proportion of false discoveries. We note that FDP is the actual realization of FDR. Formally,

Definition 1. (FDP and FDR) For any decision rule $t$, let $D(t)$ and $FD(t)$ be the number of discoveries and the number of false discoveries. The false discovery proportion $FDP(t)$ and the false discovery rate $FDR(t)$ are defined as $FDP(t) \triangleq FD(t)/D(t)$ and $FDR(t) \triangleq \mathbb{E}[FDP(t)]$.

In this paper, we aim to maximize $D(t)$ while controlling $FDP(t) \le \alpha$
with high probability. This is a stronger statement than those in the FDR control literature of controlling FDR under the level $\alpha$.

Motivating example. Consider a genetic association study where the genotype and phenotype (e.g. height) are measured in a population. Hypothesis $i$ corresponds to testing the correlation between the variant $i$ and the individual's height. The null hypothesis is that there is no correlation, and $P_i$ is the probability of observing equally or more extreme values than the empirically observed correlation conditional on the hypothesis being null ($H_i = 0$). A small $P_i$ indicates that the null is unlikely. Here $H_i = 1$ (or $0$) corresponds to the variant truly being (or not being) associated with height. The features $X_i$ could include the location, conservation, etc. of the variant. Note that $X_i$ is not used to compute $P_i$, but it could contain information about how likely the hypothesis is to be an alternative. Careful readers may notice that the distribution of $P_i$ given $X_i$ is uniform between 0 and 1 under the null and $f_1(p|x)$ under the alternative, which depends on $x$. This implies that $P_i$ and $X_i$ are independent under the null and dependent under the alternative.

To illustrate why modeling the features could improve discovery power, suppose hypothetically that all the variants truly associated with height reside on a single chromosome $j^*$ and the feature is the chromosome index of each SNP (see Fig. 2 (a)). Standard multiple testing methods ignore this feature and assign the same discovery threshold to all the chromosomes. As there are many purely noisy chromosomes, the p-value threshold must be very small in order to control FDR. In contrast, a method that learns the threshold $t(x)$ could learn to assign a higher threshold to chromosome $j^*$ and 0 to other chromosomes. As a higher threshold leads to more discoveries and vice versa, this would effectively ignore much of the noise and make more discoveries under the same FDR.

3 Algorithm Description

Since a smaller p-value presents stronger evidence against the null hypothesis, we consider the threshold decision rule without loss of generality. As the null proportion $\pi_0(x)$ and the alternative distribution $f_1(p|x)$ vary with $x$, the threshold should also depend on $x$. Therefore, we can write the rule as $t(x)$ in general, which claims hypothesis $i$ to be significant if $P_i < t(X_i)$. Let $\mathbb{I}$ be the indicator function.

Figure 2: (a) Hypothetical example where small p-values are enriched at chromosome $j^*$. (b) The mirroring estimator. (c) The training and cross validation procedure.

For $t(x)$, the number of discoveries $D(t)$ and the number of false discoveries $FD(t)$ can be expressed as $D(t) = \sum_{i=1}^{n} \mathbb{I}\{P_i < t(X_i)\}$ and $FD(t) = \sum_{i=1}^{n} \mathbb{I}\{P_i < t(X_i), H_i = 0\}$. Note that computing $FD(t)$ requires the knowledge of $H_i$, which is not available from the observations. Ideally we want to solve $t$ for the following problem:

$$\text{maximize}_t \; D(t), \quad \text{s.t. } FDP(t) \le \alpha. \quad (1)$$

Directly solving (1) is not possible. First, without a parametric representation, $t$ cannot be optimized. Second, while $D(t)$ can be calculated from the data, $FD(t)$ cannot, and it is needed for evaluating $FDP(t)$. Third, while each decision rule candidate $t_j$ controls FDP, optimizing over them may yield a rule that overfits the data and loses FDP control. We next address these three difficulties in order (a small sketch of the quantities involved in (1) is given below).
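As a concrete illustration of the quantities in problem (1), the following minimal NumPy sketch (our own illustration, not part of the paper's released code; the function names are hypothetical) computes $D(t)$ and the oracle FDP of a generic threshold rule $t(x)$; a constant threshold corresponds to the feature-blind rules discussed above.

```python
import numpy as np

def discoveries(p, x, t):
    """D(t): number of hypotheses rejected by the rule P_i < t(X_i)."""
    return int(np.sum(p < t(x)))

def oracle_fdp(p, x, h, t):
    """FDP(t) = FD(t)/D(t); needs the unobserved labels h (0 = null),
    so it is only computable in simulations with known ground truth."""
    reject = p < t(x)
    d = reject.sum()
    return float((reject & (h == 0)).sum()) / max(d, 1)

# A constant rule t(x) = 0.05 ignores features entirely; feature-aware
# methods instead let the threshold vary with x.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
x = rng.uniform(size=1000)
h = rng.binomial(1, 0.2, size=1000)
print(discoveries(p, x, lambda x: np.full_like(x, 0.05)))
```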
First, the representation of the decision rule $t(x)$ should be flexible enough to address different structures of the data. Intuitively, to have maximal discoveries, the landscape of $t(x)$ should be similar to that of the alternative proportion $\pi_1(x)$: $t(x)$ is large in places where the alternative hypotheses abound. As discussed in detail in Sec. 4, two structures of $\pi_1(x)$ are typical in practice. The first is bumps at a few locations, and the second is slopes that vary with $x$. Hence the representation should at least be able to address these two structures. In addition, the number of parameters needed for the representation should not grow exponentially with the dimensionality of $x$. Hence non-parametric models, such as spline-based methods or kernel-based methods, are infeasible. Take kernel density estimation in 5D as an example. If we let the kernel width be 0.1, each kernel contains on average 0.001% of the data. Then we need at least a million alternative hypotheses to have a reasonable estimate of the landscape of $\pi_1(x)$. In this work, we investigate the idea of modeling $t(x)$ using a multilayer perceptron (MLP), which has high expressive power and a number of parameters that does not grow exponentially with the dimensionality of the features. As demonstrated in Sec. 4, it can efficiently recover the two common structures, bumps and slopes, and yields promising results in all real data experiments.

Second, although $FD(t)$ cannot be calculated from the data, if it can be overestimated by some $\widehat{FD}(t)$, then the corresponding estimate of FDP, namely $\widehat{FDP}(t) = \widehat{FD}(t)/D(t)$, is also an overestimate. Then if $\widehat{FDP}(t) \le \alpha$, then $FDP(t) \le \alpha$, yielding the desired FDP control. Moreover, if $\widehat{FD}(t)$ is close to $FD(t)$, the FDP control is tight. Conditional on $X = x$, the rejection region of $p$, namely $(0, t(x))$, contains a mixture of nulls and alternatives. As the null distribution $\mathrm{Unif}(0, 1)$ is symmetrical w.r.t. $p = 0.5$ while the alternative distribution $f_1(p|x)$ is highly asymmetrical, the mirrored region $(1 - t(x), 1)$ will contain roughly the same number of nulls but very few alternatives. Then the number of hypotheses in $(1 - t(x), 1)$ can be a proxy for the number of nulls in $(0, t(x))$. This idea is illustrated in Fig. 2 (b) and we refer to this estimator as the mirroring estimator.

Definition 2. (The mirroring estimator) For any decision rule $t$, let $C(t) = \{(p, x) : p < t(x)\}$ be the rejection region of $t$ over $(P_i, X_i)$ and let its mirrored region be $C^M(t) = \{(p, x) : p > 1 - t(x)\}$. The mirroring estimator of $FD(t)$ is defined as $\widehat{FD}(t) = \sum_i \mathbb{I}\{(P_i, X_i) \in C^M(t)\}$.

The mirroring estimator overestimates the number of false discoveries in expectation:

Lemma 1. (Positive bias of the mirroring estimator)

$$\mathbb{E}[\widehat{FD}(t)] - \mathbb{E}[FD(t)] = \sum_{i=1}^{n} \mathbb{P}\big((P_i, X_i) \in C^M(t), H_i = 1\big) \ge 0. \quad (2)$$

Remark 1. In practice, $t(x)$ is always very small and $f_1(p|x)$ approaches 0 very fast as $p \to 1$. Then for any hypothesis with $(P_i, X_i) \in C^M(t)$, $P_i$ is very close to 1 and hence $\mathbb{P}(H_i = 1)$ is very small. In other words, the bias in (2) is much smaller than $\mathbb{E}[FD(t)]$. Thus the estimator is accurate. In addition, $\widehat{FD}(t)$ and $FD(t)$ are both sums of $n$ terms. Under mild conditions, they concentrate well around their means. Thus we should expect that $\widehat{FD}(t)$ approximates $FD(t)$ well most of the time. We make this precise in Sec. 5 in the form of the high probability FDP control statement.
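A minimal sketch of Definition 2 (again our own illustration; these helper names are not from the released code): the mirroring estimate counts hypotheses in the mirrored region and plugs that count into the FDP estimate used throughout the paper.

```python
import numpy as np

def mirror_fd(p, x, t):
    """Mirroring estimator FD_hat(t): count points in C^M(t) = {(p, x): p > 1 - t(x)}."""
    return int(np.sum(p > 1.0 - t(x)))

def fdp_hat(p, x, t):
    """Plug-in estimate FDP_hat(t) = FD_hat(t) / D(t); by Lemma 1 it is
    upward biased in expectation, so FDP_hat <= alpha is a conservative
    certificate of FDP control."""
    d = int(np.sum(p < t(x)))
    return mirror_fd(p, x, t) / max(d, 1)
```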
Third, we use cross validation to address the overfitting problem introduced by optimization. To be more specific, we divide the data into $M$ folds. For fold $j$, the decision rule $t_j(x; \theta)$, before being applied on fold $j$, is trained and cross validated on the rest of the data. The cross validation is done by rescaling the learned threshold $t_j(x)$ by a factor $\gamma$ so that the corresponding mirror estimate $\widehat{FDP}$ on the CV set is $\alpha$. This will not introduce much additional overfitting since we are only searching over a scalar $\gamma$. The discoveries in all $M$ folds are merged as the final result. We note here that distinct folds correspond to subsets of hypotheses rather than samples used to compute the corresponding p-values. This procedure is shown in Fig. 2 (c). The details of the procedure as well as the FDP control property are also presented in Sec. 5.

Algorithm 1 NeuralFDR
1: Randomly divide the data $\{(P_i, X_i)\}_{i=1}^n$ into $M$ folds.
2: for fold $j = 1, \cdots, M$ do
3:   Let the testing data be fold $j$, the CV data be fold $j' \ne j$, and the training data be the rest.
4:   Train $t_j(x; \theta)$ based on the training data by optimizing
     $$\text{maximize}_\theta \; D(t(\theta)) \quad \text{s.t. } \widehat{FDP}(t^*_j(\theta)) \le \alpha. \quad (3)$$
5:   Rescale $t^*_j(x; \theta)$ by $\gamma^*_j$ so that the estimated FDP on the CV data satisfies $\widehat{FDP}(\gamma^*_j t^*_j(\theta)) = \alpha$.
6:   Apply $\gamma^*_j t^*_j(\theta)$ on the data in fold $j$ (the testing data).
7: Report the discoveries in all $M$ folds.

The proposed method NeuralFDR is summarized as Alg. 1. There are two techniques that enabled robust training of the neural network. First, to have non-vanishing gradients, the indicator functions in (3) are substituted by sigmoid functions with the intensity parameters automatically chosen based on the dataset. Second, the training process of the neural network may be unstable if we use random initialization. Hence, we use an initialization method called the k-cluster initialization: 1) use k-means clustering to divide the data into $k$ clusters based on the features; 2) compute the optimal threshold for each cluster based on the optimal group threshold condition ((7) in Sec. 5); 3) initialize the neural network by training it to fit a smoothed version of the computed thresholds. See Supp. Sec. 2 for more implementation details.
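The following skeleton sketches Algorithm 1 together with the sigmoid relaxation of the indicators in (3). It is a structural sketch under our own simplifications: `train_rule` stands in for the MLP optimization of (3) and `calibrate_gamma` for the scalar search over $\gamma$; neither is part of the paper's code, and the sigmoid intensity $k$ is fixed here rather than chosen from the data.

```python
import numpy as np

def soft_counts(p, tx, k=200.0):
    """Differentiable surrogates for D(t) and FD_hat(t): the indicators
    I{p < t(x)} and I{p > 1 - t(x)} are replaced by sigmoids of intensity k."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-k * z))
    return sig(tx - p).sum(), sig(p - (1.0 - tx)).sum()

def neuralfdr_cv(p, x, alpha, train_rule, calibrate_gamma, M=5, seed=0):
    """Skeleton of Algorithm 1: for each fold j, train on the remaining folds,
    rescale on a CV fold so that FDP_hat == alpha, then test on fold j."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(p)), M)
    rejected = []
    for j in range(M):
        cv = folds[(j + 1) % M]
        train = np.concatenate(
            [f for i, f in enumerate(folds) if i not in (j, (j + 1) % M)])
        t_j = train_rule(p[train], x[train], alpha)        # optimize (3)
        gamma = calibrate_gamma(p[cv], x[cv], t_j, alpha)  # FDP_hat on CV = alpha
        test = folds[j]
        rejected.append(test[p[test] < gamma * t_j(x[test])])
    return np.concatenate(rejected)
```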
4 Empirical Results

We evaluate our method using both simulated data and two real-world datasets³. The implementation details are in Supp. Sec. 2. We compare NeuralFDR with three other methods: the BH procedure (BH) [2], Storey's BH procedure (SBH) with threshold $\lambda = 0.4$ [17], and Independent Hypothesis Weighting (IHW) with the number of bins and folds set as default [13]. BH and SBH are the two most popular methods that do not use the hypothesis features, and IHW is the state-of-the-art method that utilizes hypothesis features. For IHW, in the multi-dimensional feature case, k-means is used to group the hypotheses. In all experiments, $k$ is set to 20 and the group index is provided to IHW as the hypothesis feature. Other than the FDR control experiment, we set the nominal FDR level $\alpha = 0.1$.

Simulated data. We first consider DataIHW, the simulated data in the IHW paper (Supp. 7.2.2 [13]). Then, we use our own data that are generated to have two feature structures commonly seen in practice, the bumps and the slopes. For the bumps, the alternative proportion $\pi_1(x)$ is generated from a Gaussian mixture (GM) to have a few peaks with abundant alternative hypotheses. For the slopes, $\pi_1(x)$ is generated linearly dependent on the features. After generating $\pi_1(x)$, the p-values are generated following a beta mixture under the alternative and uniform $(0, 1)$ under the null.

³ We released the software at https://github.com/fxia22/NeuralFDR

Figure 3: FDP for (a) DataIHW and (b) 1DGM. Dashed line indicates 45 degrees, which is optimal.

Table 1: Simulated data: # of discoveries and gain over BH at FDR = 0.1.

            DataIHW          DataIHW(WD)      1D GM
BH          2259             6674             8266
SBH         2651 (+17.3%)    7844 (+17.5%)    9227 (+11.62%)
IHW         5074 (+124.6%)   10382 (+55.6%)   11172 (+35.2%)
NeuralFDR   6222 (+175.4%)   12153 (+82.1%)   14899 (+80.2%)

            1D slope         2D GM            2D slope         5D GM
BH          11794            9917             8473             9917
SBH         13593 (+15.3%)   11334 (+14.2%)   9539 (+12.58%)   11334 (+14.28%)
IHW         12658 (+7.3%)    12175 (+22.7%)   8758 (+3.36%)    11408 (+15.0%)
NeuralFDR   15781 (+33.8%)   18844 (+90.0%)   10318 (+21.7%)   18364 (+85.1%)

We generated the data for both 1D and 2D cases, namely 1DGM, 2DGM, 1Dslope, 2Dslope. For example, Fig. 4 (a) shows the alternative proportion of 2Dslope. In addition, for the high-dimensional feature scenario, we generated a 5D dataset, 5DGM, which contains the same alternative proportion as 2DGM with 3 additional non-informative directions.

We first examine the FDR control property using DataIHW and 1DGM. Knowing the ground truth, we plot the FDP (actual FDR) over different values of the nominal FDR $\alpha$ in Fig. 3. For perfect FDR control, the curve should lie along the 45-degree dashed line. As we can see, all the methods control FDR. NeuralFDR controls FDR accurately while IHW tends to make overly conservative decisions.

Second, we visualize the threshold learned by both NeuralFDR and IHW. As mentioned in Sec. 3, to make more discoveries, the learned threshold should roughly have the same shape as $\pi_1(x)$. The learned thresholds of NeuralFDR and IHW for 2Dslope are shown in Fig. 4 (b,c). As we can see, NeuralFDR recovers the slope structure well while IHW fails to assign the highest threshold to the bottom right block. IHW is forced to be piecewise constant while NeuralFDR can learn a smooth threshold, better recovering the structure of $\pi_1(x)$. In general, methods that partition the hypotheses into discrete groups would not scale to higher-dimensional features. In Appendix 1, we show that NeuralFDR is also able to recover the correct threshold for the Gaussian signal.

Finally, we report the total numbers of discoveries in Tab. 1. In addition, we ran an experiment with dependent p-values with the same dependency structure as Sec. 3.2 in [13]. We call this dataset DataIHW(WD). The numbers of discoveries are shown in Tab. 1. NeuralFDR has an actual FDP of 9.7% while making more discoveries than SBH and IHW. This empirically shows that NeuralFDR also works for weakly dependent data. All numbers are averaged over 10 runs of the same simulation setting. We can see that NeuralFDR outperforms IHW in all simulated datasets. Moreover, it outperforms IHW by a large margin in multi-dimensional feature settings.

Airway RNA-Seq data. The Airway data [10] is an RNA-Seq dataset that contains $n = 33469$ genes and aims to identify glucocorticoid responsive (GC) genes that modulate cytokine function in airway smooth muscle cells. The p-values are obtained by a standard two-group differential analysis using DESeq2 [16]. We consider the log count for each gene as the hypothesis feature. As shown in the first column in Tab. 2, NeuralFDR makes 800 more discoveries than IHW.
The learned threshold by NeuralFDR is shown in Fig. 4 (d). It increases monotonically with the log count, capturing the positive dependency relation. Such a learned structure is interpretable: low count genes tend to have higher variances, usually dominating the systematic difference between the two conditions; on the contrary, it is easier for high count genes to show a strong signal for differential expression [16, 13].

Figure 4: (a-c) Results for 2Dslope: (a) the alternative proportion for 2Dslope; (b) NeuralFDR's learned threshold; (c) IHW's learned threshold. (d-f) Each dot corresponds to one hypothesis. The red curves show the threshold learned by NeuralFDR: (d) for log count for the Airway data; (e) for log distance for the GTEx data; (f) for expression level for the GTEx data.

Table 2: Real data: # of discoveries at FDR = 0.1.

                  BH       SBH              IHW               NeuralFDR
Airway            4079     4038 (-1.0%)     4873 (+19.5%)     6031 (+47.9%)
GTEx-PhastCons    29348    29758 (+1.4%)    30241 (+3.0%)     30525 (+4.0%)
GTEx-dist         29348    29758 (+1.4%)    35771 (+21.9%)    36127 (+23.1%)
GTEx-2D           29348    29758 (+1.4%)    35705 (+21.7%)    37095 (+26.4%)
GTEx-exp          29348    29758 (+1.4%)    32195 (+9.7%)     32214 (+9.8%)
GTEx-3D           29348    29758 (+1.4%)    35598 (+21.3%)    37195 (+26.7%)

GTEx data. A major component of the GTEx [5] study is to quantify expression quantitative trait loci (eQTLs) in human tissues. In such an eQTL analysis, each pair of a single nucleotide polymorphism (SNP) and a nearby gene forms one hypothesis. Its p-value is computed under the null hypothesis that the SNP's genotype is not correlated with the gene expression. We obtained all the GTEx p-values from chromosome 1 in a brain tissue (interior caudate), corresponding to 10,623,893 SNP-gene combinations. In the original GTEx eQTL study, no features were considered in the FDR analysis, corresponding to running the standard BH or SBH on the p-values. However, we know that many biological features affect whether a SNP is likely to be a true eQTL; i.e. these features could vary the alternative proportion $\pi_1(x)$, and accounting for them could increase the power to discover true eQTLs while guaranteeing that the FDR remains the same. For each hypothesis, we generated three features: 1) the distance (GTEx-dist) between the SNP and the gene (measured in log base-pairs); 2) the average expression (GTEx-exp) of the gene across individuals (measured in log rpkm); 3) the evolutionary conservation measured by the standard PhastCons scores (GTEx-PhastCons).

The numbers of discoveries are shown in Tab. 2. For GTEx-2D, GTEx-dist and GTEx-exp are used. For NeuralFDR, the number of discoveries increases as we put in more and more features, indicating that it can work well with multi-dimensional features.
For IHW, however, the number of discoveries decreases as more features are incorporated. This is because when the feature dimension becomes higher, each bin in IHW covers a larger space, decreasing the resolution of the piecewise constant function and preventing it from capturing the informative part of the feature.

The learned discovery thresholds of NeuralFDR are directly interpretable and match prior biological knowledge. Fig. 4 (e) shows that the threshold is higher when the SNP is closer to the gene. This allows more discoveries to be made among nearby SNPs, which is desirable since we know that most of the eQTLs tend to be in cis (i.e. nearby) rather than in trans (far away) from the target gene [5]. Fig. 4 (f) shows that the NeuralFDR threshold for gene expression decreases as the gene expression becomes large. This also confirms known biology: the highly expressed genes tend to be housekeeping genes which are less variable across individuals and hence have fewer eQTLs [5]. Therefore it is desirable that NeuralFDR learns to place less emphasis on these genes. We also show that NeuralFDR learns to give a higher threshold to more conserved variants in Supp. Sec. 1, which also matches biology.

5 Theoretical Guarantees

We assume the tuples $\{(P_i, X_i, H_i)\}_{i=1}^n$ are i.i.d. samples from an empirical Bayes model:

$$X_i \overset{\text{i.i.d.}}{\sim} \mu(X), \quad [H_i | X_i = x] \sim \mathrm{Bern}(1 - \pi_0(x)), \quad \begin{cases} [P_i | H_i = 0, X_i = x] \sim \mathrm{Unif}(0, 1) \\ [P_i | H_i = 1, X_i = x] \sim f_1(p|x) \end{cases} \quad (4)$$

The features $X_i$ are drawn i.i.d. from some unknown distribution $\mu(x)$. Conditional on the feature $X_i = x$, hypothesis $i$ is null with probability $\pi_0(x)$ and is alternative otherwise. The conditional distributions of the p-values are $\mathrm{Unif}(0, 1)$ under the null and $f_1(p|x)$ under the alternative.

FDR control via cross validation. The cross validation procedure is described as follows. The data is divided randomly into $M$ folds of equal size $m = n/M$. For fold $j$, let the testing set $D_{te}(j)$ be itself, the cross validation set $D_{cv}(j)$ be any other fold, and the training set $D_{tr}(j)$ be the remaining data. The sizes of the three are $m$, $m$, $(M - 2)m$ respectively. For fold $j$, suppose at most $L$ decision rules are calculated based on the training set, namely $t_{j1}, \cdots, t_{jL}$. Evaluated on the cross validation set, let the $l^*$-th rule be the rule with the most discoveries among rules that satisfy 1) its mirroring estimate $\widehat{FDP}(t_{jl}) \le \alpha$; 2) $D(t_{jl})/m > c_0$, for some small constant $c_0 > 0$. Then, $t_{jl^*}$ is selected to apply on the testing set (fold $j$). Finally, discoveries from all folds are combined.

The FDP control follows a standard argument of cross validation. Intuitively, the FDP of the rules $\{t_{jl}\}_{l=1}^{L}$ is estimated based on $D_{cv}(j)$, a dataset independent of the training set. Hence there is no overfitting, and the overestimation property of the mirroring estimator, as in Lemma 1, is statistically valid, leading to a conservative decision that controls FDP. This is formally stated below.

Theorem 1. (FDP control) Let $M$ be the number of folds and let $L$ be the maximum number of decision rule candidates evaluated by the cross validation set. Then with probability at least $1 - \delta$, the overall FDP is less than $(1 + \Delta)\alpha$, where $\Delta = O\big(\sqrt{\log(ML/\delta)/(\alpha n)}\big)$.

Remark 2. There are two subtle points. First, $L$ cannot be too large. Otherwise $D_{cv}(j)$ may eventually be overfitted by being used too many times for FDP estimation. Second, the FDP estimates may be unstable if the probability of discovery $\mathbb{E}[D(t_{jl})/m]$ approaches 0.
Indeed, the mirroring method estimates FDP by $\widehat{FDP}(t_{jl}) = \widehat{FD}(t_{jl}) / D(t_{jl})$, where both $\widehat{FD}(t_{jl})$ and $D(t_{jl})$ are i.i.d. sums of $n$ Bernoulli random variables with means roughly $\alpha \, \mathbb{E}[D(t_{jl})/m]$ and $\mathbb{E}[D(t_{jl})/m]$. When their means are small, the concentration property will fail. So we need $\mathbb{E}[D(t_{jl})/m]$ to be bounded away from zero. Nevertheless, this is required in theory but may not be needed in practice.

Remark 3. (Asymptotic FDR control under weak dependence) Besides the i.i.d. case, NeuralFDR can also be extended to control FDR asymptotically under weak dependence [17, 12]. Generalizing the concept in [12] from discrete groups to continuous features $X$, the data are under weak dependence if the CDFs of $(P_i, X_i)$ for both the null and the alternative proportion converge almost surely to their true values respectively. The linkage disequilibrium (LD) in GWAS and the correlated genes in RNA-Seq can be addressed by such a dependence structure. In this case, if the learned threshold is $c$-Lipschitz continuous for some constant $c$, NeuralFDR will control FDR asymptotically. The Lipschitz continuity can be achieved, for example, by weight clipping [1], i.e. clamping the weights to a bounded set after each gradient update when training the neural network. See Supp. 3 for details.

Optimal decision rule with infinite hypotheses. When $n = \infty$, we can recover the joint density $f_{PX}(p, x)$. Based on that, the explicit form of the optimal decision rule can be obtained if we are willing to further assume that $f_1(p|x)$ is monotonically non-increasing w.r.t. $p$. This rule is used for the k-cluster initialization of NeuralFDR as mentioned in Sec. 3.

Now suppose we know $f_{PX}(p, x)$. Then $\mu(x)$ and $f_{P|X}(p|x)$ can also be determined. Furthermore, as $f_1(p|x) = \frac{1}{1 - \pi_0(x)}\big(f_{P|X}(p|x) - \pi_0(x)\big)$, once we specify $\pi_0(x)$, the entire model is specified. Let $S(f_{PX})$ be the set of null proportions $\pi_0(x)$ that produce a model consistent with $f_{PX}$. Because $f_1(p|x) \ge 0$, we have $\forall p, x$: $\pi_0(x) \le f_{P|X}(p|x)$. This can be further simplified as $\pi_0(x) \le f_{P|X}(1|x)$ by recalling that $f_{P|X}(p|x)$ is monotonically decreasing w.r.t. $p$. Then we know

$$S(f_{PX}) = \{\pi_0(x) : \forall x, \; \pi_0(x) \le f_{P|X}(1|x)\}. \quad (5)$$

Given $f_{PX}(p, x)$, the model is not fully identifiable. Hence we should look for a rule $t$ that maximizes the power while controlling FDP for all elements in $S(f_{PX})$. For $(P_1, X_1, H_1) \sim (f_{PX}, \pi_0, f_1)$ following (4), the probability of discovery and the probability of false discovery are $P_D(t, f_{PX}) = \mathbb{P}(P_1 \le t(X_1))$ and $P_{FD}(t, f_{PX}, \pi_0) = \mathbb{P}(P_1 \le t(X_1), H_1 = 0)$. Then the FDP is $FDP(t, f_{PX}, \pi_0) = P_{FD}(t, f_{PX}, \pi_0) / P_D(t, f_{PX})$. In this limiting case, all quantities are deterministic and FDP coincides with FDR. Given that the FDP is controlled, maximizing the power is equivalent to maximizing the probability of discovery. Then we have the following minimax problem:

$$\max_t \min_{\pi_0 \in S(f_{PX})} P_D(t, f_{PX}) \quad \text{s.t. } \max_{\pi_0 \in S(f_{PX})} FDP(t, f_{PX}, \pi_0) \le \alpha, \quad (6)$$

where $S(f_{PX})$ is the set of possible null proportions consistent with $f_{PX}$, as defined in (5).

Theorem 2. Fix $f_{PX}$ and let $\pi_0^*(x) = f_{P|X}(1|x)$. If $f_1(p|x)$ is monotonically non-increasing w.r.t. $p$, the solution to problem (6), $t^*(x)$, satisfies

$$1. \;\; \frac{f_{PX}(1, x)}{f_{PX}(t^*(x), x)} = \text{const, almost surely w.r.t. } \mu(x); \qquad 2. \;\; FDR(t^*, f_{PX}, \pi_0^*) = \alpha. \quad (7)$$

Remark 4. To compute the optimal rule $t^*$ from the conditions (7), consider any $t$ that satisfies (7.1). According to (7.1), once we specify the value of $t(x)$ at any location $x$, say $t(0)$, the entire function is determined.
Also, $FDP(t, f_{PX}, \pi_0^*)$ is monotonically non-decreasing w.r.t. $t(0)$. This suggests the following strategy: starting with $t(0) = 0$, keep increasing $t(0)$ until the corresponding FDP equals $\alpha$, which gives us the optimal threshold $t^*$. Similar conditions are also mentioned in [14, 13].

6 Discussion

We proposed NeuralFDR, an end-to-end algorithm to learn the discovery threshold from hypothesis features. We showed that the algorithm controls FDR and makes more discoveries on synthetic and real datasets with multi-dimensional features. While the results are promising, there are also a few challenges. First, we notice that NeuralFDR performs better when both the number of hypotheses and the alternative proportion are large. Indeed, in order to have large gradients for the optimization, we need many elements at the decision boundary $t(x)$ and the mirroring boundary $1 - t(x)$. It is important to improve the performance of NeuralFDR on small datasets with small alternative proportions. Second, we found that a 10-layer MLP performed well to model the decision threshold and that shallower networks performed more poorly. A better understanding of which network architectures optimally capture signal in the data is also an important question.

References
[1] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[2] Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), pages 289–300, 1995.
[3] Yoav Benjamini and Yosef Hochberg. Multiple hypotheses testing with weights. Scandinavian Journal of Statistics, 24(3):407–418, 1997.
[4] Simina M Boca and Jeffrey T Leek. A regression framework for the proportion of true null hypotheses. bioRxiv, page 035675, 2015.
[5] GTEx Consortium et al. The genotype-tissue expression (GTEx) pilot analysis: Multitissue gene regulation in humans. Science, 348(6235):648–660, 2015.
[6] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
[7] Olive Jean Dunn. Multiple comparisons among means. Journal of the American Statistical Association, 56(293):52–64, 1961.
[8] Bradley Efron. Simultaneous inference: When should hypothesis testing problems be combined? The Annals of Applied Statistics, pages 197–223, 2008.
[9] Christopher R Genovese, Kathryn Roeder, and Larry Wasserman. False discovery control with p-value weighting. Biometrika, pages 509–524, 2006.
[10] Blanca E Himes, Xiaofeng Jiang, Peter Wagner, Ruoxi Hu, Qiyu Wang, Barbara Klanderman, Reid M Whitaker, Qingling Duan, Jessica Lasky-Su, Christina Nikolos, et al. RNA-Seq transcriptome profiling identifies CRISPLD2 as a glucocorticoid responsive gene that modulates cytokine function in airway smooth muscle cells. PLoS ONE, 9(6):e99625, 2014.
[11] Sture Holm. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, pages 65–70, 1979.
[12] James X Hu, Hongyu Zhao, and Harrison H Zhou. False discovery rate control with groups. Journal of the American Statistical Association, 105(491):1215–1227, 2010.
[13] Nikolaos Ignatiadis, Bernd Klaus, Judith B Zaugg, and Wolfgang Huber. Data-driven hypothesis weighting increases detection power in genome-scale multiple testing. Nature Methods, 13(7):577–580, 2016.
[14] Lihua Lei and William Fithian.
AdaPT: An interactive procedure for multiple testing with side information. arXiv preprint arXiv:1609.06035, 2016.
[15] Ang Li and Rina Foygel Barber. Multiple testing with the structure adaptive Benjamini-Hochberg algorithm. arXiv preprint arXiv:1606.07926, 2016.
[16] Michael I Love, Wolfgang Huber, and Simon Anders. Moderated estimation of fold change and dispersion for RNA-Seq data with DESeq2. Genome Biology, 15(12):550, 2014.
[17] John D Storey, Jonathan E Taylor, and David Siegmund. Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66(1):187–205, 2004.
Cost efficient gradient boosting

Sven Peter
Heidelberg Collaboratory for Image Processing
Interdisciplinary Center for Scientific Computing
University of Heidelberg
69115 Heidelberg, Germany
[email protected]

Ferran Diego
Robert Bosch GmbH
Robert-Bosch-Straße 200
31139 Hildesheim, Germany
[email protected]

Fred A. Hamprecht
Heidelberg Collaboratory for Image Processing
Interdisciplinary Center for Scientific Computing
University of Heidelberg
69115 Heidelberg, Germany
[email protected]

Boaz Nadler
Department of Computer Science
Weizmann Institute of Science
Rehovot 76100, Israel
[email protected]

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Many applications require learning classifiers or regressors that are both accurate and cheap to evaluate. Prediction cost can be drastically reduced if the learned predictor is constructed such that on the majority of the inputs, it uses cheap features and fast evaluations. The main challenge is to do so with little loss in accuracy. In this work we propose a budget-aware strategy based on deep boosted regression trees. In contrast to previous approaches to learning with cost penalties, our method can grow very deep trees that on average are nonetheless cheap to compute. We evaluate our method on a number of datasets and find that it outperforms the current state of the art by a large margin. Our algorithm is easy to implement and its learning time is comparable to that of the original gradient boosting. Source code is made available at http://github.com/svenpeter42/LightGBM-CEGB.

1 Introduction

Many applications need classifiers or regressors that are not only accurate, but also cheap to evaluate [32, 29]. Prediction cost usually consists of two different components: the acquisition or computation of the features used to predict the output, and the evaluation of the predictor itself. A common approach to construct an accurate predictor with low evaluation cost is to modify the classical empirical risk minimization objective, such that it includes a prediction cost penalty, and optimize this modified functional [32, 29, 22, 23]. In this work we also follow this general approach, and develop a budget-aware strategy based on deep boosted regression trees.

Despite the recent re-emergence and popularity of neural networks, our choice of boosted regression trees is motivated by three observations:

(i) Given ample training data and computational resources, deep neural networks often give the most accurate results. However, standard feed-forward architectures route a single input component (for example, a single coefficient in the case of vectorial input) through most network units. While the computational cost can be mitigated by network compression or quantization [14], in the extreme case to binary activations only [16], the computational graph is fundamentally dense. In a standard decision tree, on the other hand, each sample is routed along a single path from the root to a leaf, thus visiting typically only a small subset of all split nodes, the "units" of a decision tree. In the extreme case of a balanced binary tree, each sample visits only $\log(N)$ out of a total of $N$ nodes.
(iii) When features and/or decisions come at a premium, it is convenient but wasteful to assume that all instances in a data set are created equal (even when assumed i.i.d.). Some instances may be easy to classify based on reading a single measurement / feature, while others may require a full battery of tests before a decision can be reached with confidence [34]. Decision trees naturally lend themselves to such a "sequential experimental design" setup: after first using cheap features to split all instances into subsets, the subsequent decisions can be based on more expensive features which are, however, only elicited if truly needed. Importantly, the set of more expensive features is requested conditionally on the values of features used earlier in the tree. In this work we address the challenge of constructing an ensemble of trees that is both accurate and yet cheap to evaluate. We first describe the problem setup in Section 2, and discuss related work in Section 3. Our key contribution appears in Section 4, where we propose an extension of gradient boosting [12] which takes prediction time penalties into account. In contrast to previous approaches to learning with cost penalties, our method can grow very deep trees that on average are nonetheless cheap to compute. Our algorithm is easy to implement and its learning time is comparable to that of the original gradient boosting. As illustrated in Section 5, on a number of datasets our method outperforms the current state of the art by a large margin. 2 Problem setup Consider a regression problem where the response Y ? R and each instance X is represented by M features, X ? RM . Let L : R ? R ? R be a loss function, and T be a set of admissible functions. In supervised learning, given a training set of N pairs (xi , yi ) sampled i.i.d. from (X, Y ), a classical approach to learn a predictor T ? T is to minimize the empirical loss L on the training set, min T ?T N X L(yi , T (xi )). (1) i=1 In this paper we restrict ourselves to the set T that consists of an ensemble of trees, namely predictors PK of the form T (x) = k=1 tk (x). Each single decision tree tk can be represented as a collection of Lk leaf nodes with corresponding responses ? k = (?k,1 , . . . , ?1,Lk ) ? RLk and a function qk : RM ? {1, . . . , Lk } that encodes the tree structure and maps an input to its corresponding terminal leaf index. The output of the tree is tk (x) = ? k,qk (x) . Learning even a single tree that exactly minimizes the functional in Eq. (1) is NP-hard under several aspects of optimality [15, 18, 24, 35]. Yet, single trees and ensemble of trees are some of the most successful predictors in machine learning and there are multiple greedy based methods to construct tree ensembles that approximately solve Eq. (1) [4, 12, 11]. In many practical applications, however, it is important that the predictor T is not only accurate but also fast to compute. Given a prediction cost function ? : T ? RM ? R+ a standard approach is to add a penalty to the empirical risk minimization above [32, 29, 34, 22, 23]: X min L(yi , T (xi )) + ??(T, xi ). (2) T ?T i The parameter ? controls the tradeoff between accuracy and prediction cost. Typically, the prediction cost function ? consists of two components. The first is the cost of acquiring or computing relevant input features. For example, think of a patient at the emergency room where taking his temperature and blood oxygen levels are cheap, but a CT-scan is expensive. 
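To make the penalized objective (2) concrete, here is a small sketch of our own (with a simplified dict-based tree representation rather than the paper's implementation) that evaluates an ensemble while tracking the prediction cost for one instance: features are charged once per instance on first use, and every split node visited adds a constant evaluation cost.

```python
def prediction_and_cost(trees, x, feature_cost, split_cost):
    """Prediction T(x) together with Psi(T, x): per-instance feature
    acquisition cost (paid once per feature) plus tree evaluation cost.
    Nodes are dicts with 'feature', 'threshold', 'left', 'right'; leaves
    hold the key 'leaf' with the response value."""
    paid = set()
    pred, cost = 0.0, 0.0
    for tree in trees:
        node = tree
        while 'leaf' not in node:
            m = node['feature']
            if m not in paid:
                cost += feature_cost[m]
                paid.add(m)
            cost += split_cost
            node = node['left'] if x[m] < node['threshold'] else node['right']
        pred += node['leaf']
    return pred, cost

def penalized_risk(trees, X, y, loss, feature_cost, split_cost, lam):
    """Penalized empirical risk in the spirit of Eq. (2)."""
    total = 0.0
    for xi, yi in zip(X, y):
        pred, cost = prediction_and_cost(trees, xi, feature_cost, split_cost)
        total += loss(yi, pred) + lam * cost
    return total

# Tiny demo: one stump using feature 0, with hypothetical costs.
stump = {'feature': 0, 'threshold': 0.5,
         'left': {'leaf': -1.0}, 'right': {'leaf': 1.0}}
print(prediction_and_cost([stump], [0.3, 0.9], feature_cost=[5.0, 1.0], split_cost=0.25))
```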
Typically, the prediction cost function $\Psi$ consists of two components. The first is the cost of acquiring or computing relevant input features. For example, think of a patient at the emergency room where taking his temperature and blood oxygen levels is cheap, but a CT-scan is expensive. The second component is the cost of evaluating the function $T$, which in our case is the sum of the cost of evaluating the $K$ individual trees $t_k$.

In more detail, the first component of feature computation cost may also depend on the specific prediction problem. In some scenarios, test instances are independent of each other and the features can be computed for each input instance on demand. But there are also others. In image processing, for example, the input is an image which consists of many pixels and the task is to predict some function at all pixels. In such cases, even though specific features can be computed for each pixel independently, it may be cheaper or more efficient to compute the same feature, such as a separable convolution filter, at all pixels at once [1, 13]. The cost function $\Psi$ may be dominated in these cases by the second component, the time it takes to evaluate the trees.

After discussing related work in Section 3, in Section 4 we present a general adaptation of gradient boosting [12] to minimize Eq. (2) that takes into account both prediction cost components.

3 Related work

The problem of learning with prediction cost penalties has been extensively studied. One particular case is that of class imbalance, where one class is extremely rare and yet it is important to accurately annotate it. For example, the famous Viola-Jones cascades [30] use cheap features to discard examples belonging to the negative class. Later stages requiring expensive features are only used for the rare suspected positive class. While such an approach is very successful, due to its early exit strategy it cannot use expensive features for different inputs [19, 29, 9].

To overcome the limitations imposed by early exit strategies, various methods [33, 34, 17, 31] proposed single tree constructions but with more complicated decisions at the individual split nodes. The tree is first learned without taking prediction cost into account, followed by an optimization step that includes this cost. Unfortunately, in practice these single-tree methods are inferior to current state-of-the-art algorithms that construct tree ensembles [22, 23].

BudgetRF [22] is based on Random Forests and modifies the impurity function that decides which split to make, to take feature costs into account. BudgetRF has several limitations: First, it assumes that tree evaluation cost is negligible compared to feature acquisition, and hence is not suitable for problems where features are cheap to compute and the prediction cost is dominated by predictor evaluation, or where both components contribute equally. Second, during its training phase, each usage of a feature incurs its acquisition cost, so repeated feature usage is not modeled, and the probability of reaching a node is not taken into account. At test time, in contrast, they do allow "free" reuse of expensive features and do compute the precise cost of reaching various tree branches. BudgetRF thus typically does not yield deep but expensive branches which are only seldom reached.

BudgetPrune [23] is a pruning scheme for ensembles of decision trees. It aims to mitigate limitations of BudgetRF by pruning expensive branches from the individual trees. An Integer Linear Program is formulated and efficiently solved to take repeated feature usage and the probabilities of reaching different branches into account.
This method results in a better tradeoff, but it still cannot create deep and expensive branches which are only seldom reached if these were not present in the original ensemble. This method is considered to be state of the art when prediction cost is dominated by the feature acquisition cost [23]. We show in Section 5 that constructing deeper trees with our method results in significantly better performance.

GreedyMiser [32], which is most similar to our work, is a stage-wise gradient-boosting type algorithm that also aims to minimize Eq. (2) using an ensemble of regression trees. When both prediction cost components are assumed equally significant, GreedyMiser is considered state of the art. Yet, GreedyMiser also has a few limitations: First, all trees are assumed to have the same prediction cost for all inputs. Second, by design it constructs shallow trees all having the same depth. We instead consider individual costs for each leaf and thus allow construction of deeper trees. Our experiments in Section 5 suggest that constructing deeper trees with our proposed method significantly outperforms GreedyMiser.

4 Gradient boosting with cost penalties

We build on the gradient boosting framework [12] and adapt it to allow optimization with cost penalties. First we briefly review the original algorithm. We then present our cost penalty in Section 4.1, the step-wise optimization in Section 4.2 and finally, in Section 4.3, our tree growing algorithm that builds trees with deep branches but low expected depth and feature cost (such a tree is shown in Figure 1b and compared to a shallow tree that is more expensive and less accurate in Figure 1a).

Figure 1: Illustration of trees generated by the different methods, (a) other methods and (b) CEGB: Split nodes are numbered in the order they have been created, leaves are represented with letters. The vertical position of nodes corresponds to the feature cost required for each sample and the edge's thickness represents the number of samples moving along this edge. A tree constructed by GreedyMiser is shown in (a): The majority of samples travel along a path requiring a very expensive feature. BudgetPrune could only prune away leaves E, F, G and H, which does not correspond to a large reduction in costs. CEGB however only uses two very cheap splits for almost all samples (leaves A and B) and builds a complex subtree for the minority that is hard to classify. The constructed tree shown in (b) is deep but nevertheless cheap to evaluate on average.

Gradient boosting tries to minimize the empirical risk of Eq. (1) by constructing a linear combination of $K$ weak predictors $t_k : \mathbb{R}^M \to \mathbb{R}$ from a set $\mathcal{F}$ of admissible functions (not necessarily decision trees). Starting with $T_0(x) = 0$, each iteration $k > 0$ constructs a new weak function $t_k$ aiming to reduce the current loss. These boosting updates can be interpreted as approximations of the gradient descent direction in function space. We follow the notation of [8], who use gradient boosting with weak predictors $t_k$ from the set of regression trees $\mathcal{T}$ to minimize the regularized empirical risk

$$\min_{t_1, \ldots, t_K \in \mathcal{T}} \sum_{i=1}^{N} L\Big(y_i, \sum_{k=1}^{K} t_k(x_i)\Big) + \sum_{k=1}^{K} \Omega(t_k). \quad (3)$$

The regularization term $\Omega(t_k)$ penalizes the complexity of the regression tree functions. They assume that $\Omega(t_k)$ only depends on the number of leaves $L_k$ and the leaf responses $w_k$, and derive a simple algorithm to directly learn these.
We instead use a more complicated prediction cost penalty $\Psi$ and use a different tree construction algorithm that allows optimization with cost penalties.

4.1 Prediction cost penalty

Recall that for each individual tree the prediction cost penalty $\Psi$ consists of two components: (i) the feature acquisition cost $\Psi_f$ and (ii) the tree evaluation cost $\Psi_{ev}$. However, this prediction cost for the $k$-th tree, which is fitted to the residual of all previous iterations, depends on the earlier trees. Specifically, for any input $x$, features used in the trees of the previous iterations do not contribute to the cost penalty again. We thus use the indicator function $C : \{0, \ldots, K\} \times \{1, \ldots, N\} \times \{1, \ldots, M\} \to \{0, 1\}$ with $C(k, i, m) = 1$ if and only if feature $m$ was used to predict $x_i$ by any tree constructed prior to and including iteration $k$. Furthermore, $\beta_m \ge 0$ is the cost for computing or acquiring feature $m$ for a single input $x$. Then the feature cost contribution $\Psi_f : \{0, \ldots, K\} \times \{1, \ldots, N\} \to \mathbb{R}_+$ of $x_i$ for the first $k$ trees is calculated as

$$\Psi_f(k, i) = \sum_{m=1}^{M} \beta_m \, C(k, i, m). \quad (4)$$

Features computed for all inputs at once (e.g. separable convolution filters) contribute to the penalty independently of the instance $x$ being evaluated. For those we use $\beta_m$ as their total computation cost and define the indicator function $D : \{0, \ldots, K\} \times \{1, \ldots, M\} \to \{0, 1\}$ with $D(k, m) = 1$ if and only if feature $m$ was used for any input $x$ in any tree constructed prior to and including iteration $k$. Then

$$\Psi_c(k) = \sum_{m=1}^{M} \beta_m \, D(k, m). \quad (5)$$

The evaluation cost $\Psi_{ev,k} : \{1, \ldots, L_k\} \to \mathbb{R}_+$ for a single input $x$ passing through a tree is the number of split nodes between the root node and the input's terminal leaf $q_k(x)$, multiplied by a suitable constant $\gamma \ge 0$ which captures the cost to evaluate a single split. The total cost $\Psi_{ev} : \{0, \ldots, K\} \times \{1, \ldots, N\} \to \mathbb{R}_+$ for the first $k$ trees is the sum of the costs of each tree:

$$\Psi_{ev}(k, i) = \sum_{\tilde{k}=1}^{k} \Psi_{ev,\tilde{k}}\big(q_{\tilde{k}}(x_i)\big). \quad (6)$$

4.2 Tree Boosting with Prediction Costs

We have now defined all components of Eq. (2). Simultaneous optimization of all trees $t_k$ is intractable. Instead, as in gradient boosting, we minimize the objective by starting with $T_0(x) = 0$ and iteratively adding a new tree at each iteration. At iteration $k$ we construct the $k$-th regression tree $t_k$ by minimizing the following objective

$$O_k = \sum_{i=1}^{N} \big[ L(y_i, T_{k-1}(x_i) + t_k(x_i)) + \lambda \Psi(k, x_i) \big] + \lambda \Psi_c(k) \quad (7)$$

with $\Psi(k, x_i) = \Psi_{ev}(k, i) + \Psi_f(k, i)$. Note that the penalty $\Psi_c(k)$ for features which are computed for all inputs at once does not depend on $x$ but only on the structure of the current and previous trees.

Directly optimizing the objective $O_k$ w.r.t. the tree $t_k$ is difficult since the argument $t_k$ appears inside the loss function. Following [8] we use a second order Taylor expansion of the loss around $T_{k-1}(x_i)$. Removing constant terms from earlier iterations, the objective function can be approximated by

$$O_k \approx \tilde{O}_k = \sum_{i=1}^{N} \Big[ g_i \, t_k(x_i) + \frac{1}{2} h_i \, t_k(x_i)^2 + \lambda \, \Delta\Psi(x_i) \Big] + \lambda \, \Delta\Psi_c \quad (8)$$

where

$$g_i = \partial_{\hat{y}_i} L(y_i, \hat{y}_i) \Big|_{\hat{y}_i = T_{k-1}(x_i)}, \quad (9a) \qquad h_i = \partial^2_{\hat{y}_i} L(y_i, \hat{y}_i) \Big|_{\hat{y}_i = T_{k-1}(x_i)}, \quad (9b)$$

$$\Delta\Psi(x_i) = \Psi(k, x_i) - \Psi(k-1, x_i), \quad (9c) \qquad \Delta\Psi_c = \Psi_c(k) - \Psi_c(k-1). \quad (9d)$$

As in [8] we rewrite Eq. (8) for a decision tree $t_k(x) = \theta_{k, q_k(x)}$ with a fixed structure $q_k$,

$$\tilde{O}_k = \sum_{l=1}^{L_k} \Bigg[ \Big( \sum_{i \in I_l} g_i \Big) \theta_{k,l} + \frac{1}{2} \Big( \sum_{i \in I_l} h_i \Big) \theta_{k,l}^2 + \lambda \sum_{i \in I_l} \Delta\Psi(x_i) \Bigg] + \lambda \, \Delta\Psi_c \quad (10)$$

with the set $I_l = \{i \,|\, q_k(x_i) = l\}$ containing the inputs in leaf $l$. For this fixed structure the optimal weights and the corresponding best objective reduction can be calculated explicitly:

$$\theta^*_{k,l} = -\frac{\sum_{i \in I_l} g_i}{\sum_{i \in I_l} h_i}, \quad (11a) \qquad \tilde{O}^*_k = \sum_{l=1}^{L_k} \Bigg[ -\frac{1}{2} \frac{\big(\sum_{i \in I_l} g_i\big)^2}{\sum_{i \in I_l} h_i} + \lambda \sum_{i \in I_l} \Delta\Psi(x_i) \Bigg] + \lambda \, \Delta\Psi_c. \quad (11b)$$
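As a sanity check of (11a)-(11b), a few lines of NumPy (our own illustration, not the paper's code): for the squared-error loss $L(y, \hat{y}) = \frac{1}{2}(y - \hat{y})^2$, the gradients are $g_i = \hat{y}_i - y_i$ and $h_i = 1$, so the optimal leaf value reduces to the mean residual.

```python
import numpy as np

def optimal_leaf_value(g, h):
    """theta*_{k,l} = -sum(g) / sum(h), Eq. (11a)."""
    return -np.sum(g) / np.sum(h)

def leaf_loss_term(g, h):
    """Loss part of a leaf's contribution to Eq. (11b): -(sum g)^2 / (2 sum h);
    the cost terms lambda * sum(Delta Psi) are added separately."""
    return -0.5 * np.sum(g) ** 2 / np.sum(h)

y = np.array([1.0, 2.0, 0.5])
pred = np.zeros_like(y)           # T_{k-1}(x_i) = 0
g, h = pred - y, np.ones_like(y)  # squared-error gradients and Hessians
print(optimal_leaf_value(g, h))   # 1.1666..., the mean residual mean(y)
```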
4.2 Tree Boosting with Prediction Costs

We have now defined all components of Eq. (2). Simultaneous optimization of all trees $t_k$ is intractable. Instead, as in gradient boosting, we minimize the objective by starting with $T_0(x) = 0$ and iteratively adding a new tree at each iteration. At iteration $k$ we construct the $k$-th regression tree $t_k$ by minimizing the following objective:
\[
O_k = \sum_{i=1}^{N} \big[ L(y_i, T_{k-1}(x_i) + t_k(x_i)) + \lambda \Psi(k, x_i) \big] + \lambda \Psi_c(k) \tag{7}
\]
with $\Psi(k, x_i) = \Psi_{ev}(k, i) + \Psi_f(k, i)$. Note that the penalty $\Psi_c(k)$ for features which are computed for all inputs at once does not depend on $x$ but only on the structure of the current and previous trees. Directly optimizing the objective $O_k$ w.r.t. the tree $t_k$ is difficult since the argument $t_k$ appears inside the loss function. Following [8] we use a second order Taylor expansion of the loss around $T_{k-1}(x_i)$. Removing constant terms from earlier iterations, the objective function can be approximated by
\[
O_k \approx \tilde{O}_k = \sum_{i=1}^{N} \Big[ g_i\, t_k(x_i) + \tfrac{1}{2}\, h_i\, t_k(x_i)^2 + \lambda\, \Delta\Psi(x_i) \Big] + \lambda\, \Delta\Psi_c \tag{8}
\]
where
\[
g_i = \partial_{\hat{y}_i} L(y_i, \hat{y}_i)\big|_{\hat{y}_i = T_{k-1}(x_i)}, \;\; (9a) \qquad h_i = \partial^2_{\hat{y}_i} L(y_i, \hat{y}_i)\big|_{\hat{y}_i = T_{k-1}(x_i)}, \;\; (9b)
\]
\[
\Delta\Psi(x_i) = \Psi(k, x_i) - \Psi(k-1, x_i), \;\; (9c) \qquad \Delta\Psi_c = \Psi_c(k) - \Psi_c(k-1). \;\; (9d)
\]
As in [8] we rewrite Eq. (8) for a decision tree $t_k(x) = \omega_{k, q_k(x)}$ with a fixed structure $q_k$,
\[
\tilde{O}_k = \sum_{l=1}^{L_k} \Bigg[ \Big(\sum_{i \in I_l} g_i\Big)\, \omega_{k,l} + \frac{1}{2} \Big(\sum_{i \in I_l} h_i\Big)\, \omega_{k,l}^2 + \lambda \sum_{i \in I_l} \Delta\Psi(x_i) \Bigg] + \lambda\, \Delta\Psi_c \tag{10}
\]
with the set $I_l = \{ i \mid q_k(x_i) = l \}$ containing the inputs in leaf $l$. For this fixed structure the optimal weights and the corresponding best objective reduction can be calculated explicitly:
\[
\omega^*_{k,l} = -\frac{\sum_{i \in I_l} g_i}{\sum_{i \in I_l} h_i}, \tag{11a}
\]
\[
\tilde{O}^*_k = \sum_{l=1}^{L_k} \Bigg[ -\frac{1}{2} \frac{\big(\sum_{i \in I_l} g_i\big)^2}{\sum_{i \in I_l} h_i} + \lambda \sum_{i \in I_l} \Delta\Psi(x_i) \Bigg] + \lambda\, \Delta\Psi_c. \tag{11b}
\]
As we shall see in the next section, our cost-aware impurity function depends on the difference of Eq. (10) which results from replacing a terminal leaf with a split node [8]. Let $p$ be any leaf of the tree that can be converted to a split node with two new children $r$ and $l$; then the difference of Eq. (10) evaluated for the original and the modified tree is
\[
\Delta O^{split}_k = \frac{1}{2} \Bigg[ \frac{\big(\sum_{i \in I_l} g_i\big)^2}{\sum_{i \in I_l} h_i} + \frac{\big(\sum_{i \in I_r} g_i\big)^2}{\sum_{i \in I_r} h_i} - \frac{\big(\sum_{i \in I_p} g_i\big)^2}{\sum_{i \in I_p} h_i} \Bigg] - \lambda\, \Delta\Psi^{split}_k. \tag{12}
\]
Let $m$ be the feature used by the node $s$ that we are considering to split. Then
\[
\Delta\Psi^{split}_k = \underbrace{|I_p|\, \gamma}_{\Delta\Psi^{split}_{ev,k}} \; + \; \underbrace{\beta_m \underbrace{(1 - D(k, m))}_{\text{is feature } m \text{ used for the first time?}} + \; \beta_m \sum_{i \in I_p} \underbrace{(1 - C(k, i, m))}_{\text{is feature } m \text{ used to split } x_i \text{ for the first time?}}}_{\Delta\Psi^{split}_{f,k}} \tag{13}
\]

4.3 Learning a weak regressor with cost penalties

With these preparations we can now construct the regression trees. As mentioned above, this is an NP-hard problem. We use a greedy algorithm to grow a tree that approximately minimizes Eq. (10). Standard algorithms that grow trees start from a single leaf containing all inputs. The tree is then iteratively expanded by replacing a single leaf with a split node and two new child leaves [4]. Typically this expansion happens in a predefined leaf order (breadth- or depth-first). Splits are only evaluated locally at a single leaf to select the best feature. The expansion is stopped once leaves are pure or once a maximum depth has been reached. Here, in contrast, we adopt the approach of [28] and grow the tree in best-first order. Splits are evaluated for all current leaves, and the one with the best objective reduction according to Eq. (12) is chosen. The tree can thus grow at any location. This allows us to compare splits across different leaves and features at the same time (Figure 1b shows an example of a best-first tree, while Figure 1a shows a tree constructed in breadth-first order). Instead of limiting the depth, we limit the number of leaves in each tree to prevent overfitting. This procedure has an important advantage when optimizing with cost penalties: Growing in a predefined order usually leads to balanced trees, where all branches are grown independent of the cost. Deep and expensive branches using only a tiny subset of inputs are not easily possible. In contrast, growing at the leaf that promises the best tradeoff as given by Eq. (12) encourages growth on branches that contain few instances or growth using cheap features. Growth on branches that contain many instances or growth that requires expensive features is penalized. This strategy results in deep trees that are nevertheless cheap to compute on average. Figure 1 compares an individual tree constructed by other methods to the deeper tree constructed by CEGB; a sketch of this best-first procedure follows below. We briefly compare our proposed strategy to GreedyMiser: When we limit Eq. (8) to first order terms only, use breadth-first instead of best-first growth, assume only features that have to be computed for all instances at once, and limit the tree depth to four, we minimize Eq. (18) from [32]. GreedyMiser can therefore be represented as a special case of our proposed algorithm.
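The sketch below illustrates the best-first growth just described, driven by the cost-aware gain of Eqs. (12)-(13). The leaf objects with `best_split_gain` and `apply_best_split` methods are hypothetical placeholders for the usual split-search machinery, not names from the paper.

```python
import heapq

def split_gain(G_l, H_l, G_r, H_r, G_p, H_p, cost_delta, lam, eps=1e-12):
    """Cost-aware gain of Eqs. (12)-(13): the usual second-order fit
    improvement minus lam times the extra prediction cost of the split."""
    fit = 0.5 * (G_l ** 2 / (H_l + eps) + G_r ** 2 / (H_r + eps)
                 - G_p ** 2 / (H_p + eps))
    return fit - lam * cost_delta

def grow_best_first(root, max_leaves, lam):
    """Best-first growth (Sec. 4.3): keep every leaf's best candidate split
    in a max-heap and always expand the most profitable leaf, wherever it
    sits, so deep branches appear only where Eq. (12) says they pay off."""
    heap, n_leaves = [], 1
    heapq.heappush(heap, (-root.best_split_gain(lam), id(root), root))
    while heap and n_leaves < max_leaves:
        neg_gain, _, leaf = heapq.heappop(heap)
        if -neg_gain <= 0.0:          # no remaining split reduces Eq. (10)
            break
        left, right = leaf.apply_best_split()
        n_leaves += 1                 # one leaf became two
        for child in (left, right):
            heapq.heappush(heap, (-child.best_split_gain(lam), id(child), child))
    return root
```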
5 Experiments

The Yahoo! Learning to Rank (Yahoo! LTR) challenge dataset [7] consists of 473134 training, 71083 validation and 165660 test document-query pairs with labels {0, 1, 2, 3, 4}, where 0 means the document is irrelevant and 4 that it is highly relevant to the query. Computation costs for the 519 features used in the dataset are provided [32] and take values in {1, 5, 10, 20, 50, 100, 150, 200}. Prediction performance is evaluated using the Average Precision@5 metric, which only considers the five most relevant documents returned for a query by the regressor [32, 22, 23]. We use the dataset provided by [7] and used in [22, 23]. We consider two different settings for our experiments: (i) feature acquisition and classifier evaluation time both contribute to prediction cost, and (ii) classifier evaluation time is negligible w.r.t. feature acquisition cost.

The first setting is used by GreedyMiser. Regression trees with depth four are constructed and assumed to approximately cost as much as features with feature cost $\beta_m = 1$. We therefore set the split cost $\gamma = 1/4$ to allow a fair comparison with our trees, which will contain deeper branches.

Figure 2: Comparison against state of the art algorithms: The Yahoo! LTR dataset has been used for (2a) and (2b) in different settings. In (2a) both tree evaluation and feature acquisition cost is considered. In (2b) only feature acquisition cost is shown. (2c) shows results on the MiniBooNE dataset with uniform feature costs. GreedyMiser and BudgetPrune results for (2b), (2c) and (2d) are from [23]. BudgetPrune did not finish training on the HEPMASS datasets due to their size and the associated CPU time and RAM requirements. CEGB is our proposed method.

We also use our algorithm to construct trees similar to GreedyMiser by limiting the trees to 16 leaves with a maximum branch depth of four. Figure 2a shows that even the shallow trees are already always strictly better than GreedyMiser. This happens because our algorithm correctly accounts for the different probabilities of reaching different leaves (see also Figure 1). When we allow deep branches the proposed method gives significantly better results than GreedyMiser and learns a predictor with better accuracy at a much lower cost.

The second setting is considered by BudgetPrune. It assumes that feature computation is much more expensive than classifier evaluation. We set $\gamma = 0$ to adapt our algorithm to this setting.

Figure 3: In (3a) we study the influence of the feature penalty on the learned classifier. (3b) shows how best-first training results in better precision given the same cost budget.

The dataset is additionally binarized by setting all targets $y > 0$ to $y = 1$. GreedyMiser has a disadvantage in this setting since it works on the assumption that the cost of each tree is independent of the input $x$. We still include it in our comparison as a baseline. Figure 3b shows that our proposed method again performs significantly better than the others. This confirms that we learn a classifier with very cheap expected prediction cost in terms of both feature acquisition and classifier evaluation time.

The MiniBooNE dataset [26, 20] consists of 45523 training, 19510 validation and 65031 test instances with labels {0, 1} and 50 features. The Forest Covertype dataset [20, 3] consists of 36603 training, 15688 validation and 58101 test instances with 54 features, restricted to two classes as done in [23]. Feature costs are not available for either dataset and are assumed to be uniform, i.e. $\beta_m = 1$. Since no relation between classifier evaluation and feature cost is known we only compute the latter to allow a fair comparison, as in [23].
Figures 2c and 2d show that our proposed method again results in a significantly better predictor than both GreedyMiser and BudgetPrune. We additionally use the HEPMASS-1000 and HEPMASS-not1000 datasets [2, 20]. Similar to MiniBooNE, no feature costs are known and we again uniformly set them to one for all features, i.e. $\beta_m = 1$. Both datasets contain over ten million instances, which we split into 3.5 million training, 1.4 million validation and 5.6 million test instances. These datasets are much larger than the others and we did not manage to successfully run BudgetPrune due to its RAM and CPU time requirements. We only report results on GreedyMiser and our algorithm in Figures 2e and 2f. CEGB again results in a classifier with a better tradeoff than GreedyMiser.

5.1 Influence of feature cost and tradeoff parameters

We use the Yahoo! LTR dataset to study the influence of the feature costs $\beta$ and the tradeoff parameter $\lambda$ on the learned regressor. Figure 3a shows that regressors learned with a large $\lambda$ reach similar accuracy as those with smaller $\lambda$ at a much cheaper cost. Only $\lambda = 0.001$ converges to a lower accuracy while the others approximately reach the same final accuracy: there the tradeoff is shifted too strongly towards using cheap features. Such a regressor is nevertheless useful when the problem requires very cheap results and the final improvement in accuracy does not matter. Next, we set all $\beta_m = 1$ during training time only and use the original cost during test time. The learned regressor behaves similarly to one learned with $\lambda = 0$. This shows that the regressors save most of the cost by limiting usage of expensive features to a small subset of inputs. Finally we compare breadth-first to best-first training in Figure 3b. We use the same number of leaves and trees and try to build a classifier that is as cheap as possible. Best-first training always reaches a higher accuracy for a given prediction cost budget. This supports our observation that deep trees which are cheap to evaluate on average are important for constructing cheap and accurate predictors.

5.2 Multi-scale classification / tree structure optimization

In image processing, classification using multiple scales has been extensively studied and used to build fast or more accurate classifiers [6, 30, 10, 25].

Figure 4: Multi-scale classification: (4a) shows a single frame from the dataset we used. (4b) shows how our proposed algorithm CEGB is able to build significantly cheaper trees than normal gradient boosting. (4c) zooms into the region showing the differences between the various patch sizes.

The basic idea of these schemes is that a large image is downsampled to increasingly coarse resolutions. A multi-scale classifier first analyzes the coarsest resolution and decides whether a pixel on the coarse level represents a block of homogeneous pixels on the original resolution, or whether analysis on a less coarse resolution is required. Efficiency results from the ability to label many pixels on the original resolution at once by labeling a single pixel on a coarser image. We use this setting as an example to show how our algorithm is also capable of optimizing problems where feature cost is negligible compared to predictor evaluation cost. Inspired by average pooling layers in neural networks [27] and image pyramids [5], we first compute the average pixel values across non-overlapping 2x2, 4x4 and 8x8 blocks of the original image.
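A minimal numpy sketch of this feature construction, assuming a single-channel image whose side lengths are divisible by the pooling factors; the function name is ours.

```python
import numpy as np

def pooled_feature_maps(image, factors=(2, 4, 8)):
    """Multi-scale features of Sec. 5.2: average the image over
    non-overlapping f-by-f blocks, then replicate each coarse value back to
    the original resolution so a single coarse response serves f*f pixels."""
    h, w = image.shape                  # assumes h, w divisible by each f
    maps = {1: image.astype(float)}
    for f in factors:
        coarse = image.reshape(h // f, f, w // f, f).mean(axis=(1, 3))
        maps[f] = np.repeat(np.repeat(coarse, f, axis=0), f, axis=1)
    return maps
```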
We compute several commonly used and very fast convolutional filters on each of these resolutions. We then replicate these feature values on the original resolution, e.g. the feature response of a single pixel on the 8x8-averaged image is used for all 64 pixels it covers. We modify Eq. (12) and set $\Delta\Psi^{split}_k = |I_p|\, \gamma_m$, where $\gamma_m$ depends on the number of pixels that share this feature value, e.g. 64 when the feature was computed on the coarse 8x8-averaged image. We use forty frames with a resolution of 1024x1024 pixels taken from a video studying fly ethology. Our goal here is to detect flies as quickly as possible, as preprocessing for subsequent tracking. A single frame is shown in Figure 4a. We use twenty of those frames for training and twenty for evaluation. Accuracy is evaluated using the SEGMeasure score as defined in [21]. Comparison is done against regular gradient boosting, obtained by setting $\lambda = 0$. Figure 4b shows that our algorithm constructs an ensemble that is able to reach similar accuracy with a significantly smaller evaluation cost. Figure 4c shows more clearly how the different available resolutions influence the learned ensemble. Coarser resolutions allow a very efficient prediction at the cost of accuracy. Overall these experiments show that our algorithm is also capable of learning predictors that are cheap while maintaining accuracy even when the evaluation cost dominates w.r.t. the feature acquisition cost.

6 Conclusion

We presented an adaptation of gradient boosting that includes prediction cost penalties, and devised fast methods to learn an ensemble of deep regression trees. A key feature of our approach is its ability to construct deep trees that are nevertheless cheap to evaluate on average. In the experimental part we demonstrated that this approach is capable of handling various different settings of prediction cost penalties consisting of feature cost and tree evaluation cost. Specifically, our method significantly outperformed the state of the art algorithms GreedyMiser and BudgetPrune when feature cost either dominates or contributes equally to the total cost. We additionally showed an example where we are able to optimize the decision structure of the trees itself when evaluation of these is the limiting factor. Our algorithm can be easily implemented using any gradient boosting library and does not slow down training significantly. For these reasons we believe it will be highly valuable for many applications. Source code is available at http://github.com/svenpeter42/LightGBM-CEGB.

References

[1] Gholamreza Amayeh, Alireza Tavakkoli, and George Bebis. Accurate and efficient computation of Gabor features in real-time applications. Advances in Visual Computing, pages 243-252, 2009.
[2] Pierre Baldi, Kyle Cranmer, Taylor Faucett, Peter Sadowski, and Daniel Whiteson. Parameterized machine learning for high-energy physics. arXiv preprint arXiv:1601.07913, 2016.
[3] Jock A. Blackard and Denis J. Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131-151, 1999.
[4] Leo Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.
[5] Peter Burt and Edward Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4):532-540, 1983.
[6] Vittorio Castelli, Chung-Sheng Li, John Turek, and Ioannis Kontoyiannis. Progressive classification in the compressed domain for large EOS satellite databases.
In Acoustics, Speech, and Signal Processing, 1996. ICASSP-96. Conference Proceedings., 1996 IEEE International Conference on, volume 4, pages 2199-2202. IEEE, 1996.
[7] Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Yahoo! Learning to Rank Challenge, pages 1-24, 2011.
[8] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 785-794, New York, NY, USA, 2016. ACM.
[9] Giulia DeSalvo, Mehryar Mohri, and Umar Syed. Learning with deep cascades. In International Conference on Algorithmic Learning Theory, pages 254-269. Springer, 2015.
[10] Piotr Dollár, Serge J. Belongie, and Pietro Perona. The fastest pedestrian detector in the west. In BMVC, volume 2, page 7. Citeseer, 2010.
[11] Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? J. Mach. Learn. Res., 15(1):3133-3181, 2014.
[12] Jerome H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pages 1189-1232, 2001.
[13] Pascal Getreuer. A survey of Gaussian convolution algorithms. Image Processing On Line, 2013:286-310, 2013.
[14] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. International Conference on Learning Representations (ICLR), 2016.
[15] Thomas Hancock, Tao Jiang, Ming Li, and John Tromp. Lower bounds on learning decision lists and trees. Information and Computation, 126(2):114-122, 1996.
[16] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4107-4115. Curran Associates, Inc., 2016.
[17] Matt J. Kusner, Wenlin Chen, Quan Zhou, Zhixiang Eddie Xu, Kilian Q. Weinberger, and Yixin Chen. Feature-cost sensitive learning with submodular trees of classifiers. In AAAI, pages 1939-1945, 2014.
[18] Laurent Hyafil and Ronald L. Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 5(1):15-17, 1976.
[19] Leonidas Lefakis and François Fleuret. Joint cascade optimization using a product of boosted classifiers. In Advances in Neural Information Processing Systems, pages 1315-1323, 2010.
[20] M. Lichman. UCI machine learning repository, 2013.
[21] Martin Maška, Vladimír Ulman, David Svoboda, Pavel Matula, Petr Matula, Cristina Ederra, Ainhoa Urbiola, Tomáš España, Subramanian Venkatesan, Deepak M. W. Balak, et al. A benchmark for comparison of cell tracking algorithms. Bioinformatics, 30(11):1609-1617, 2014.
[22] Feng Nan, Joseph Wang, and Venkatesh Saligrama. Feature-budgeted random forest. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1983-1991, Lille, France, 07-09 Jul 2015. PMLR.
[23] Feng Nan, Joseph Wang, and Venkatesh Saligrama. Pruning random forests for prediction on a budget. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2334-2342. Curran Associates, Inc., 2016.
[24] G. E. Naumov. NP-completeness of problems of construction of optimal decision trees.
In Soviet Physics Doklady, volume 36, page 270, 1991.
[25] Marco Pedersoli, Andrea Vedaldi, Jordi Gonzalez, and Xavier Roca. A coarse-to-fine approach for fast deformable object detection. Pattern Recognition, 48(5):1844-1853, 2015.
[26] Byron P. Roe, Hai-Jun Yang, Ji Zhu, Yong Liu, Ion Stancu, and Gordon McGregor. Boosted decision trees, an alternative to artificial neural networks. Nucl. Instrum. Meth., A543(2-3):577-584, 2005.
[27] Dominik Scherer, Andreas Müller, and Sven Behnke. Evaluation of pooling operations in convolutional architectures for object recognition. Artificial Neural Networks - ICANN 2010, pages 92-101, 2010.
[28] Haijian Shi. Best-first decision tree learning. PhD thesis, The University of Waikato, 2007.
[29] Kirill Trapeznikov and Venkatesh Saligrama. Supervised sequential classification under budget constraints. In AISTATS, pages 581-589, 2013.
[30] Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. In Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, 2001.
[31] Joseph Wang, Kirill Trapeznikov, and Venkatesh Saligrama. Efficient learning by directed acyclic graph for resource constrained prediction. In Advances in Neural Information Processing Systems, pages 2152-2160, 2015.
[32] Zhixiang Xu, Kilian Weinberger, and Olivier Chapelle. The greedy miser: Learning under test-time budgets. In John Langford and Joelle Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1175-1182, July 2012.
[33] Zhixiang Eddie Xu, Matt J. Kusner, Kilian Q. Weinberger, and Minmin Chen. Cost-sensitive tree of classifiers. In ICML (1), pages 133-141, 2013.
[34] Zhixiang Eddie Xu, Matt J. Kusner, Kilian Q. Weinberger, Minmin Chen, and Olivier Chapelle. Classifier cascades and trees for minimizing feature evaluation cost. Journal of Machine Learning Research, 15(1):2113-2144, 2014.
[35] Hans Zantema and Hans L. Bodlaender. Finding small equivalent decision trees is hard. International Journal of Foundations of Computer Science, 11(02):343-354, 2000.
Probabilistic Rule Realization and Selection

Haizi Yu*† (Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801; [email protected])
Tianxi Li* (Department of Statistics, University of Michigan, Ann Arbor, MI 48109; [email protected])
Lav R. Varshney† (Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801; [email protected])

* Equal contribution. † Supported in part by the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM Cognitive Horizons Network.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Abstraction and realization are bilateral processes that are key in deriving intelligence and creativity. In many domains, the two processes are approached through rules: high-level principles that reveal invariances within similar yet diverse examples. Under a probabilistic setting for discrete input spaces, we focus on the rule realization problem, which generates input sample distributions that follow the given rules. More ambitiously, we go beyond a mechanical realization that takes whatever is given, and instead ask for proactively selecting reasonable rules to realize. This goal is demanding in practice, since the initial rule set may not always be consistent and thus intelligent compromises are needed. We formulate both rule realization and selection as two strongly connected components within a single and symmetric bi-convex problem, and derive an efficient algorithm that works at large scale. Taking music compositional rules as the main example throughout the paper, we demonstrate our model's efficiency in not only music realization (composition) but also music interpretation and understanding (analysis).

1 Introduction

Abstraction is a conceptual process by which high-level principles are derived from specific examples; realization, the reverse process, applies the principles to generalize [1, 2]. The two, once combined, form the art and science in developing knowledge and intelligence [3, 4]. Neural networks have recently become popular in modeling the two processes, with the belief that the neurons, as distributed data representations, are best organized hierarchically in a layered architecture [5, 6]. Probably the most relevant such examples are auto-encoders, where the cascaded encoder and decoder respectively model abstraction and realization. From a different angle that aims for interpretability, this paper first defines a high-level data representation as a partition of the raw input space, and then formalizes abstraction and realization as bi-directional probability inferences between the raw inputs and the high-level representations. While abstraction and realization are ubiquitous among knowledge domains, this paper embodies the two as theory and composition in music, and refers to music high-level representations as compositional rules. Historically, theorists [7, 8] devised rules and guidelines to describe compositional
regularities, resulting in music theory that serves as the formal language to speak of music style and composers' decisions. Automatic music theorists [9-11] have also been recently developed to extract probabilistic rules in an interpretable way. Both human theorists and auto-theorists enable the teaching of music composition via rules such as avoiding parallel octaves and resolving tendency tones. So, writing music, to a certain extent (e.g. realizing a part-writing exercise), becomes the process of generating "legitimate" music realizations that satisfy the given rules. This paper focuses on the realization process in music, assuming rules are given by a preceding abstraction step. There are two main challenges. First, rule realization: the problem occurs when one asks for efficient and diverse music generation satisfying the given rules. Depending on the rule representation (hard or probabilistic), there are search-based systems that realize hard-coded rules to produce music pieces [12, 13], as well as statistical models that realize probabilistic rules to produce distributions of music pieces [9, 14]. Both types of realizations typically suffer from the enormity of the sample space, a curse of input dimensionality. Second, rule selection (which is subtler): not all rules are equally important, nor are they always consistent. In some cases, a perfect and all-inclusive realization is not possible, which requires relaxation/sacrifice of some rules. In other cases, composers intentionally break certain rules to establish unique styles. So the freedom and creativity in selecting the "right" rules for realization poses the challenge. The main contribution of the paper is to propose and implement a unified framework that makes reasonable rule selections and realizes them in an efficient way, tackling the two challenges in one shot. As one part of the framework, we introduce a two-step dimensionality reduction technique (a group de-overlap step followed by a screening step) to efficiently solve music rule realization. As the other part, we introduce a group-level generalization of the elastic net penalty [15] to weight the rules for a reasonable selection. The unified framework is formulated as a single bi-convex optimization problem (w.r.t. a probability variable and a weight variable) that coherently couples the two parts in a symmetric way. The symmetry is beneficial in both computation and interpretation. We run experiments on artificial rule sets to illustrate the operational characteristics of our model, and further test it on a real rule set exported from an automatic music theorist [11], demonstrating the model's selectivity in music rule realization at large scale. Although music is the main case study in the paper, we formulate the problem in generality, so the proposed framework is domain-agnostic and applicable anywhere there are rules (i.e. abstractions) to be understood. Detailed discussion at the end of the paper demonstrates that the framework applies directly to general real-world problems beyond music. In the discussion, we also emphasize how our algorithm is non-trivial, not just a simple combinatorial massaging of standard models. Therefore, the techniques introduced in this paper offer broader algorithmic takeaways and are worth further studying in the future.

2 The Formalism: Abstraction, Realization, and Rule

Abstraction and Realization. We restrict our attention to raw input spaces that are discrete and finite: $\mathcal{X} = \{x_1, \dots, x_n\}$, and assume the raw data is drawn from a probability distribution $p_{\mathcal{X}}$, where the subscript refers to the sample space (not a random variable). We denote a high-level representation space (of $\mathcal{X}$) by a partition $\mathcal{A}$ (of $\mathcal{X}$) and its probability distribution by $p_{\mathcal{A}}$. Partitioning the raw input space gives one way of abstracting low-level details by grouping raw data into clusters and ignoring within-cluster variations. Following this line of thought, we define an abstraction as the process $(\mathcal{X}, p_{\mathcal{X}}) \to (\mathcal{A}, p_{\mathcal{A}})$ for some high-level representation $\mathcal{A}$, where $p_{\mathcal{A}}$ is inferred from $p_{\mathcal{X}}$ by summing up the probability masses within each partition cluster. Conversely, we define a realization as the process $(\mathcal{A}, p_{\mathcal{A}}) \to (\mathcal{X}, p_{\mathcal{X}})$, where $p_{\mathcal{X}}$ is any probability distribution that infers $p_{\mathcal{A}}$.
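For concreteness, here is a minimal sketch of one abstraction step in Python; representing a partition by an array of cluster indices is our convention, not the paper's.

```python
import numpy as np

def abstract(p_x, cluster_of):
    """One abstraction step (X, p_X) -> (A, p_A): p_A is obtained by summing
    the probability masses of p_X inside each partition cluster.
    cluster_of[j] is the cluster index of raw input x_j."""
    n_clusters = cluster_of.max() + 1
    p_a = np.zeros(n_clusters)
    np.add.at(p_a, cluster_of, p_x)   # p_A[c] = sum of p_X over cluster c
    return p_a
```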
Probabilistic Compositional Rule. To put the formalism in the context of music, we first follow the convention [9] to approach a music piece as a sequence of sonorities (a generic term for chord) and view each moment in a composition as determining a sonority that fits the existing music context. If we let $\Omega$ be a finite collection of pitches specifying the discrete range of an instrument, e.g. the collection of the 88 keys on a piano, then a $k$-part sonority ($k$ simultaneously sounding pitches) is a point in $\Omega^k$. So $\mathcal{X} = \Omega^k$ is the raw input space containing all possible sonorities. Although discrete and finite, the raw input size is typically large, e.g. $|\mathcal{X}| = 88^4$ considering piano range and 4-part chorales. Therefore, theorists have invented various music parameters, such as quality and inversion, to abstract specific sonorities. In this paper, we inherit the approach in [11] to formalize a high-level representation of $\mathcal{X}$ by a feature-induced partition $\mathcal{A}$, and call the output of the corresponding abstraction $(\mathcal{A}, p_{\mathcal{A}})$ a probabilistic compositional rule.

Probabilistic Rule System. The interrelation between abstraction and realization $(\mathcal{X}, p_{\mathcal{X}}) \leftrightarrow (\mathcal{A}, p_{\mathcal{A}})$ can be formalized by a linear equation $Ap = b$, where $A \in \{0,1\}^{m \times n}$ represents a partition ($A_{ij} = 1$ if and only if $x_j$ is assigned to the $i$th cluster in the partition), and $p = p_{\mathcal{X}}$, $b = p_{\mathcal{A}}$ are probability distributions of the raw input space and the high-level representation space, respectively. In the sequel, we represent a rule by the pair $(A, b)$, so realizing this rule becomes solving the linear equation $Ap = b$. More interestingly, given a set of rules $(A^{(1)}, b^{(1)}), \dots, (A^{(K)}, b^{(K)})$, the realization of all of them involves finding a $p$ such that $A^{(r)} p = b^{(r)}$ for all $r = 1, \dots, K$. In this case, we form a probabilistic rule system by stacking all rules into one single linear system:
\[
A = \begin{bmatrix} A^{(1)} \\ \vdots \\ A^{(K)} \end{bmatrix} \in \{0,1\}^{m \times n}, \qquad b = \begin{bmatrix} b^{(1)} \\ \vdots \\ b^{(K)} \end{bmatrix} \in [0,1]^{m}. \tag{1}
\]
We call $A^{(r)}_{i,:}\, p = b^{(r)}_i$ a rule component, and $m_r = \dim(b^{(r)})$ the size (number of components) of a rule.
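A minimal sketch of assembling the stacked system of Eq. (1) from such cluster-index representations follows; the names are ours.

```python
import numpy as np

def stack_rules(partitions, targets):
    """Build Eq. (1): each rule r is a partition of the n raw inputs (one
    cluster index per input) plus a target distribution b^(r) over its
    clusters; stacking the indicator matrices A^(r) gives A p = b."""
    A_blocks, b_blocks = [], []
    for cluster_of, b_r in zip(partitions, targets):
        n = len(cluster_of)
        A_r = np.zeros((len(b_r), n))
        A_r[cluster_of, np.arange(n)] = 1.0  # A_ij = 1 iff x_j in cluster i
        A_blocks.append(A_r)
        b_blocks.append(b_r)
    return np.vstack(A_blocks), np.concatenate(b_blocks)
```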
3 Unified Framework for Rule Realization and Selection

In this section, we detail a unified framework for simultaneous rule realization and selection. Recall that rules themselves can be inconsistent, e.g. rules learned from different music contexts can conflict. So given an inconsistent rule system, we can only achieve $Ap \approx b$. To best realize the possibly inconsistent rule system, we solve for $p \in \Delta^n$ by minimizing the error $\|Ap - b\|_2^2 = \sum_r \|A^{(r)} p - b^{(r)}\|_2^2$, the sum of the Brier scores from every individual rule. This objective does not differentiate rules (or their components) in the rule system, which typically yields a solution that satisfies all rules approximately and achieves a small error on average. This performance, though optimal in the averaged sense, is somewhat disappointing since most often no rule is satisfied exactly (error-free). Contrarily, a human composer would typically make a clear separation: follow some rules exactly and disregard others, even at the cost of a larger realization error. The decision made on rule selection usually manifests the style of a musician and is a higher-level intelligence that we aim for. In this pursuit, we introduce a fine-grained set of weights $w \in \Delta^m$ to distinguish not only individual rules but also their components. The weights are estimates of relative importance, and are further leveraged for rule selection. This yields a weighted error, which is used herein to measure realization quality:
\[
E(p, w; A, b) = (Ap - b)^\top \mathrm{diag}(w)\, (Ap - b). \tag{2}
\]
If we revisit the two challenges mentioned in Sec. 1, we see that under the current setting, the first challenge concerns the curse of dimensionality for $p$, while the second concerns the selectivity for $w$. We introduce two penalty terms, one each for $p$ and $w$, to tackle the two challenges, and propose the following bi-convex optimization problem as the unified framework:
\[
\text{minimize} \quad E(p, w; A, b) + \lambda_p P_p(p) + \lambda_w P_w(w) \qquad \text{subject to} \quad p \in \Delta^n, \; w \in \Delta^m. \tag{3}
\]
Despite contrasting purposes, both penalty terms, $P_p(p)$ and $P_w(w)$, adopt the same high-level strategy of exploiting group structures in $p$ and $w$. Regarding the curse of dimensionality, we exploit the group structure of $p$ by grouping $p_j$ and $p_{j'}$ together if the $j$th and $j'$th columns of $A$ are identical, partitioning $p$'s coordinates into $K'$ groups $g'_1, \dots, g'_{K'}$, where $K'$ is the number of distinct columns of $A$. This grouping strategy uses the fact that in a simplex-constrained linear system, we cannot determine the individual $p_j$'s within each group but only their sum. We later show (Sec. 4.1) that the resulting group structure of $p$ is essential in dimensionality reduction (when $K' \ll n$) and has a deeper interpretation regarding abstraction levels. Regarding the rule-level selectivity, we exploit the group structure of $w$ by grouping weights together if they are associated with the same rule, partitioning $w$'s coordinates into $K$ groups $g_1, \dots, g_K$, where $K$ is the number of given rules.
Based on the group structures of $p$ and $w$, we introduce their corresponding group penalties as follows:
\[
P_p(p) = \|p_{g'_1}\|_1^2 + \cdots + \|p_{g'_{K'}}\|_1^2, \tag{4}
\]
\[
P_w^0(w) = \sqrt{m_1}\, \|w_{g_1}\|_2 + \cdots + \sqrt{m_K}\, \|w_{g_K}\|_2. \tag{5}
\]
One can see the symmetry here: group penalty (4) on $p$ is a squared, unweighted $L_{2,1}$-norm, which is designed to secure a unique solution that favors more randomness in $p$ for the sake of diversity in sonority generation [9]; group penalty (5) on $w$ is a weighted $L_{1,2}$-norm (group lasso), which enables rule selection. However, there is a pitfall of the group lasso penalty when deployed in Problem (3): the problem has multiple global optima that are indefinite about the number of rules to pick (e.g. selecting one rule and selecting ten consistent rules are both optimal). To give more control over the number of selections, we finalize the penalty on $w$ as the group elastic net, which blends a group lasso penalty and a ridge penalty:
\[
P_w(w) = \alpha P_w^0(w) + (1 - \alpha)\, \|w\|_2^2, \qquad 0 \le \alpha \le 1, \tag{6}
\]
where $\alpha$ balances the trade-off between rule elimination (fewer rules) and selection (more rules).

Model Interpretation. Problem (3) is a bi-convex problem: fixing $p$ it is convex in $w$; fixing $w$ it is convex in $p$. The symmetry between the two optimization variables further gives us the reciprocal interpretations of the rule realization and selection problem: given $p$, the music realization, we can analyze its style by computing $w$; given $w$, the music style, we can realize it by computing $p$ and further sample from it to obtain music that matches the style. The roles of the hyperparameters $\lambda_p$ and $(\lambda_w, \alpha)$ are quite different. By setting $\lambda_p$ sufficiently small, we secure a unique solution for the rule realization part. However, for the rule selection part, what is more interesting is that adjusting $\lambda_w$ and $\alpha$ allows us to guide the overall composition towards different directions, e.g. conservative (fewer, strictly obeyed rules) versus liberal (more, loosely obeyed rules).

Model Properties. We state two properties of the bi-convex problem (3) as the following theorems, whose proofs can be found in the supplementary material. Both theorems involve the notion of a group selective weight. We say $w \in \Delta^m$ is group selective if for every rule in the rule set, $w$ either drops it or selects it entirely, i.e. either $w_{g_r} = 0$ or $w_{g_r} > 0$ element-wise, for any $r = 1, \dots, K$. For a group selective $w$, we further define $\mathrm{supp}_g(w)$ to be the selected rules, i.e. $\mathrm{supp}_g(w) = \{ r \mid w_{g_r} > 0 \text{ element-wise} \} \subseteq \{1, \dots, K\}$.

Theorem 1. Fix any $\lambda_p > 0$, $\alpha \in [0, 1]$. Let $(p^*(\lambda_w), w^*(\lambda_w))$ be a solution path to problem (3). (1) $w^*(\lambda_w)$ is group selective if $\lambda_w > 1/\alpha$. (2) $\|w^*_{g_r}(\lambda_w)\|_2 \to \sqrt{m_r}/m$ as $\lambda_w \to \infty$, for $r = 1, \dots, K$.

Theorem 2. For $\lambda_p = 0$ and any $\lambda_w > 0$, $\alpha \in [0, 1]$, let $(p^*, w^*)$ be a solution to problem (3). We define $\mathcal{C} \subseteq 2^{\{1,\dots,K\}}$ such that any $C \in \mathcal{C}$ is a consistent (error-free) subset of the given rule set. If $\mathrm{supp}_g(w^*) \in \mathcal{C}$, then $\sum_{r \in \mathrm{supp}_g(w^*)} m_r = \max \big\{ \sum_{r \in C} m_r \mid C \in \mathcal{C} \big\}$.

Thm. 1 implies a useful range of the $\lambda_w$-solution path: if $\lambda_w$ is too large, $w^*$ will converge to a known value that always selects all the rules; if $\lambda_w$ is too small, $w^*$ can lose the guarantee of being group selective. This further suggests the termination criteria used later in the experiments. Thm. 2 considers rule selection in the consistent case, where the solution selects the largest number of rule components among all other consistent rule selections. Despite the condition $\lambda_p = 0$, in practice this theorem suggests one way of using the model for a small $\lambda_p$: if the primary interest is to select consistent rules, the model is guaranteed to pick as many rule components as possible (Sec. 5.1). Yet, a more interesting application is to slightly compromise consistency to achieve better selection (Sec. 5.2).
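Before turning to the solvers, here is a sketch of how the full objective of problem (3), with the penalties of Eqs. (4)-(6), can be evaluated; `p_groups` and `w_groups` are lists of coordinate-index arrays for $g'_1, \dots, g'_{K'}$ and $g_1, \dots, g_K$, and all names are ours.

```python
import numpy as np

def objective(p, w, A, b, p_groups, w_groups, lam_p, lam_w, alpha):
    """Evaluate the bi-convex objective of problem (3)."""
    r = A @ p - b
    err = r @ (w * r)                                            # Eq. (2)
    P_p = sum(np.abs(p[g]).sum() ** 2 for g in p_groups)         # Eq. (4)
    P_w0 = sum(np.sqrt(len(g)) * np.linalg.norm(w[g])
               for g in w_groups)                                # Eq. (5)
    P_w = alpha * P_w0 + (1.0 - alpha) * (w @ w)                 # Eq. (6)
    return err + lam_p * P_p + lam_w * P_w
```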
by uniformly distributing the probability mass qk within the group gk0 : p?g0 = (qk / dim(pgk0 ))1, k = 1, . . . , K 0 . k Dimensionality Reduction: Group De-Overlap Problem (7) is of dimension n, while its reduced form (8) is of dimension K 0 (? n) from which we can attain dimensionality reduction. In cases where K 0  n, we have a huge speed-up for the p-solver; in other cases, there is still no harm to always run the p-solve from the reduced problem (8). Recall that we have achieved this type of dimensionality reduction by exploiting the group structure of p purely from a computational perspective (Sec. 3). However, the resulting group structure has a deeper interpretation regarding abstraction levels, which is closely related to the concept of de-overlapping a family of groups, group de-overlap in short. (Group De-Overlap) Let G = {G1 , . . . , Gm } be a family of groups (a group is a non-empty set), and m G = ?m i=1 Gi . We introduce a group assignment function g : G 7? {0, 1} , such that for any x ? G, g(x)i = 1{x ? Gi }, and further introduce an equivalence relation ? on G: x ? x0 if g(x) = g(x0 ). We then define the de-overlap of G, another family of groups, by the quotient space DeO(G) = {G01 , . . . , G0m0 } := G/ ? . (9) The idea of group de-overlap is simple (Fig. 1), and DeO(G) indeed comprises non-overlapping groups, since it is a partition of G that equals the set of equivalence classes under ?. Now given a set of rules (A(1) , b(1) ), . . . , (A(K) , b(K) ), we denote their corresponding high-level representation spaces by A(1) , . . . , A(K) , each of which is a partition of the raw input space X (k) (Sec. 2). Let G = ?K , then DeO(G) is a new partition?hence a new high-level representation k=1 A space?of G = X , and is finest (may be tied) among all partitions A(1) , . . . , A(K) . Therefore, DeO(G), as a summary of the rule system, delimits a lower bound on the level of abstraction produced by the given set of rules/abstractions. What coincides with DeO(G), is the group structure of p (recall: pj and pj 0 are grouped together if the jth and j 0 th columns of A are identical), since for any xj ? X , the jth column of A is precisely the group assignment vector g(xj ). Therefore, the decomposed solve step from q ? to p? reflects the following realization chain: n o (A(1) , pA(1) ), . . . , (A(K) , pA(K) ) ? (DeO(G), q ? ) ? (X , pX ), (10) where the intermediate step not only computationally achieves dimensionality reduction, but also conceptually summarizes the given set of abstractions and is further realized in the raw input space. Note that the ?-algebra of the probability space associated with (8) is precisely generated by DeO(G). When rules are inserted into a rule system sequentially (e.g. the growing rule set from an automatic music theorist), the successive solve of (8) is conducted along a ?-algebra path that forms a filtration: nested ?-algebras that lead to finer and finer delineations of the raw input space. In a pedagogical setting, the filtration reflects the iterative refinements of music composition from high-level principles that are taught step by step. 5 Dimensionality Reduction: Screening We propose an additional technique for further dimensionality reduction when solving the reduced problem (8). The idea is to perform screening, which quickly identifies the zero components in q ? and removes them from the optimization problem. 
Dimensionality Reduction: Screening. We propose an additional technique for further dimensionality reduction when solving the reduced problem (8). The idea is to perform screening, which quickly identifies the zero components in $q^*$ and removes them from the optimization problem. Leveraging DPC screening for the non-negative lasso [16], we introduce a screening strategy for solving a general simplex-constrained linear least-squares problem (one can check that problem (8) is indeed of this form):
\[
\text{minimize} \quad \|X\beta - y\|_2^2 \qquad \text{subject to} \quad \beta \succeq 0, \; \|\beta\|_1 = 1. \tag{11}
\]
We start with the following non-negative lasso problem, which is closely related to problem (11):
\[
\text{minimize} \quad \phi_\lambda(\beta) := \|X\beta - y\|_2^2 + \lambda \|\beta\|_1 \qquad \text{subject to} \quad \beta \succeq 0, \tag{12}
\]
and denote its solution by $\beta^*(\lambda)$. One can show that if $\|\beta^*(\bar\lambda)\|_1 = 1$, then $\beta^*(\bar\lambda)$ is a solution to problem (11). Our screening strategy for problem (11) runs the DPC screening algorithm on the non-negative lasso problem (12), which applies a repeated screening rule (called EDPP) to solve a solution path specified by a $\lambda$-sequence $\lambda_{\max} = \lambda_0 > \lambda_1 > \cdots$. The $\ell_1$-norms along the solution path are non-decreasing: $0 = \|\beta^*(\lambda_0)\|_1 \le \|\beta^*(\lambda_1)\|_1 \le \cdots$. We terminate the solution path at $\lambda_t$ if $\|\beta^*(\lambda_t)\|_1 \ge 1$ and $\|\beta^*(\lambda_{t-1})\|_1 < 1$. Our goal is to use $\beta^*(\lambda_t)$ to predict the zero components in a solution of problem (11). More specifically, we assume that the zero components in $\beta^*(\lambda_t)$ are also zero in that solution, hence we can remove those components from $\beta$ (and the corresponding columns of $X$) in problem (11) and reduce its dimensionality. While in practice this assumption is usually true provided that we have a delicate solution path, the monotonicity of $\beta^*(\lambda)$'s support along the solution path does not hold in general [17]. Nevertheless, the assumption does hold when $\|\beta^*(\lambda_t)\|_1 = 1$, since the solution path is continuous and piecewise linear [18]. Therefore, we carefully design a solution path in the hope of a $\beta^*(\lambda_t)$ whose $\ell_1$-norm is close to 1 (e.g. let $\lambda_i = \eta \lambda_{i-1}$ with a large $\eta \in (0, 1)$, while more sophisticated designs such as a bisection search are possible). To remedy the (rare) situations where $\beta^*(\lambda_t)$ predicts some incorrect zero components, one can always leverage the KKT conditions of problem (11) as a final check to correct those mis-predicted components [19]. Finally, note that the screening strategy may fail when the $\ell_1$-norms along the solution path converge to a value less than 1. In these cases we can never find a desired $\lambda_t$ with $\|\beta^*(\lambda_t)\|_1 \ge 1$. In theory, such failure can be avoided by a modified lasso problem, which in practice does not improve efficiency much (see the supplementary material).

4.2 The w-Solver: for Rule Selection

If we fix $p$, the optimization problem (3) boils down to:
\[
\text{minimize} \quad E(p, w; A, b) + \lambda_w P_w(w) \qquad \text{subject to} \quad w \in \Delta^m. \tag{13}
\]
We solve problem (13) via ADMM [20]:
\[
w^{(k+1)} = \arg\min_{w} \; e^\top w + \lambda_w P_w(w) + \tfrac{\rho}{2} \|w - z^{(k)} + u^{(k)}\|_2^2, \tag{14}
\]
\[
z^{(k+1)} = \arg\min_{z} \; I_{\Delta^m}(z) + \tfrac{\rho}{2} \|w^{(k+1)} - z + u^{(k)}\|_2^2, \tag{15}
\]
\[
u^{(k+1)} = u^{(k)} + w^{(k+1)} - z^{(k+1)}. \tag{16}
\]
In the $w$-update (14), we introduce the error vector $e = (Ap - b)$ squared element-wise, and obtain a closed-form solution by a soft-thresholding procedure [21]: for $r = 1, \dots, K$,
\[
w^{(k+1)}_{g_r} = \left( 1 - \frac{\lambda_w\, \alpha \sqrt{m_r}}{\big(\rho + 2\lambda_w(1-\alpha)\big)\, \|\tilde{e}_{g_r}\|_2} \right)_{\!+} \tilde{e}_{g_r}, \qquad \text{where} \quad \tilde{e} = \frac{\rho\, (z^{(k)} - u^{(k)}) - e}{\rho + 2\lambda_w(1-\alpha)}. \tag{17}
\]
In the $z$-update (15), we introduce the indicator function $I_{\Delta^m}(z) = 0$ if $z \in \Delta^m$ and $\infty$ otherwise, and recognize the update as a (Euclidean) projection onto the probability simplex:
\[
z^{(k+1)} = \Pi_{\Delta^m}\big(w^{(k+1)} + u^{(k)}\big), \tag{18}
\]
which can be solved efficiently by a non-iterative method [22]. Given that ADMM enjoys a linear convergence rate in general [23] and the problem's dimension $m \ll n$, one execution of the w-solver is cheaper than that of the p-solver. Indeed, the result from the w-solver can speed up the subsequent execution of the p-solver, since we can leverage the zero components in $w^*$ to remove the corresponding rows of $A$, yielding additional savings in the group de-overlap of the p-solver.
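A sketch of one ADMM sweep of Eqs. (14)-(18) follows, with a standard Euclidean simplex projection standing in for the non-iterative method of [22]; all names are ours.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {x : x >= 0, sum(x) = 1} (cf. [22])."""
    mu = np.sort(v)[::-1]
    cssv = np.cumsum(mu) - 1.0
    rho_idx = np.nonzero(mu - cssv / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = cssv[rho_idx] / (rho_idx + 1.0)
    return np.maximum(v - theta, 0.0)

def admm_w_step(e, z, u, w_groups, lam_w, alpha, rho):
    """One sweep of Eqs. (14)-(16): the group soft-threshold of Eq. (17)
    followed by the simplex projection of Eq. (18)."""
    denom = rho + 2.0 * lam_w * (1.0 - alpha)
    e_tilde = (rho * (z - u) - e) / denom
    w = np.zeros_like(e_tilde)
    for g in w_groups:                    # Eq. (17), one group per rule
        norm = np.linalg.norm(e_tilde[g])
        shrink = 1.0 - lam_w * alpha * np.sqrt(len(g)) / (denom * norm + 1e-12)
        if shrink > 0.0:
            w[g] = shrink * e_tilde[g]
    z_new = project_simplex(w + u)        # Eq. (18)
    u_new = u + w - z_new                 # Eq. (16)
    return w, z_new, u_new
```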
Indeed, the result from the w-solver can speed up the subsequent execution of the p-solver, since we can leverage the zero components in w? to remove the corresponding rows in A, yielding additional savings in the group de-overlap of the p-solver. 6 1.2 group norm 2.0 0 wt. err. rule 1 rule 2 rule 3 rule 4 rule 5 4.0 1.0 0 ?8 ?6 ?4 ?2 0 2 log2 (?w) (a) Case A1: ? = 0.8. ? 10?1 rule 1 rule 2 rule 3 rule 4 rule 5 0.6 0 ? 10?4 wt. err. group norm ? 10?2 ? 10?4 4.0 2.0 0 ?8 ?6 ?4 ?2 0 2 log2 (?w) (b) Case A2: ? = 0.8. Figure 2: The ?w -solution paths obtained from the two artificial rule sets. Each path is depicted by the trajectories of the group norms (top) and the trajectory of the weighted errors (bottom). 5 5.1 Experiments Artificial Rule Set We generate two artificial rule sets: Case A1 and A2, both of which are derived from the same raw input space X = {x1 , . . . , xn } for n = 600, and comprise K = 5 rules. The rules in Case A1 are of size 80, 50, 60, 60, 60, respectively; the rules in Case A2 are of size 70, 50, 65, 65, 65, respectively. For both cases, rule 1&2 and rule 3&4 are the only two consistent sub rule sets of size ? 2. The main difference between the two cases is: in Case A1, rule 1&2 has a combined size of 130 which is larger than rule 3&4 and in Case A2 it is opposite. Under different settings of the hyperparameters ?w and ?, our model selects different rule combinations exhibiting unique ?personal? styles. Tuning the blending factor ? ? [0, 1] is relatively easy, since it is bounded and has a nice interpretation. Intuitively, if ? ? 0, the effect of the group lasso vanishes, yielding a solution w? that is not selective; if ? ? 1, the group elastic net penalty reduces to the group lasso, exposing the pitfall mentioned in Sec. 3. Experiments show that if we fix a small ?, the model picks either all five rules or none; if we fix a large ?, the group norms associated with each rule are highly unstable as ?w varies. Fortunately in practice, ? has a wide middle range (typically between 0.4 and 0.9), within which all corresponding ?w -solution paths look similar and perform stable rule selection. Therefore, for all experiments herein, we fix ? = 0.8 and study the behavior of the corresponding ?w -solution path. We show the ?w -solution paths in Fig. 2. Along the path, we plot the group norms (top, one curve per rule) and the weighted errors (bottom). The former, formulated as kwg?r (?w )k2 , describes the options for rule selection; the latter, formulated as E(p? (?w ), w? (?w ); A, b), describes the quality of rule realization. To produce the trajectories, we start with a moderate ?w (e.g. ?w = 1), and gradually increase and decrease its value to bi-directionally grow the curves. We terminate the descending direction when w? (?w ) is not group selective and terminate the ascending direction when the group norms converge. Both terminations are indicated by Thm. 1, and work well in practice. As ?w grows, the model transitions its compositional behavior from a conservative style (sacrifice a number of rules for accuracy) towards a more liberal one (sacrifice accuracy for more rules). If we further focus on the ?w s that give us zero weighted error, Fig. 2a reveals rule 1&2, and Fig. 2b reveals rule 3&4, i.e. the largest consistent subset of the given rule set in both cases (Thm. 2). Finally, we mention the efficiency of our algorithm. Averaged over several runs on multiple artificial rule sets of the same size, the run-time of our solver is 27.2 ? 
5.5 seconds, while that of a generic solver (CVX) is 41.4 ± 3.8 seconds. We attribute the savings to the dimensionality reduction techniques introduced in Sec. 4.1, which will be more significant at large scale.

5.2 Real Compositional Rule Set

As a real-world application, we test our unified framework on rule sets from an automatic music theorist [11]. The auto-theorist teaches people to write 4-part chorales by providing personalized rules at every stage of composition.

Table 1: Compositional rule selections

log2(λ_w)    selected rule set            # of rules   # of rule components
[-12, -6]    {10}                         1            1540
[-5, -2]     {3, 6, 10}                   3            1699
[-1, 0]      {3, 6, 9, 10}                4            2154
1            {3, 6, 8, 9, 10, 11, 13}     7            2166
2            {1, 3, 7, 9, 10, 11, 13}     7            2312
3            all                          16           2417

Figure 3: The $\lambda_w$-solution path obtained from a real compositional rule set.

In this experiment, we exported a set of 16 compositional rules which aims to guide a student in writing the next sonority that fits well with the existing music content. Each voice in a chorale is drawn from $\Omega = \{R, G1, \dots, C6\}$, which includes the rest (R) and 54 pitches (G1 to C6) from the human vocal range. The resulting raw input space $\mathcal{X} = \Omega^4$ consists of $n = 55^4 \approx 10^7$ sonorities, whose distribution lives in a very high dimensional simplex. This curse of dimensionality typically prevents most generic solvers from obtaining an acceptable solution within a reasonable amount of time. We show the $\lambda_w$-solution path associated with this rule set in Fig. 3. Again, the general trend shows the same pattern: the model turns to a more liberal style (more rules but less accurate) as $\lambda_w$ increases. Along the solution path, we also observe that the consistent range (i.e. the error-free zone) is wider than that in the artificial cases. This is intuitive, since a real rule set should be largely consistent with minor contradictions; otherwise it will confuse the student and lose its pedagogical purpose. A more interesting phenomenon occurs when the model is about to leave the error-free zone. When $\log_2(\lambda_w)$ goes from 1 to 2, the combined size of the selected rules increases from 2166 to 2312 but the realization error increases only a little. Will sacrificing this tiny error be a smarter decision to make? The difference between the selected rules at these two moments shows that rules 1 and 7 were added into the selection at $\log_2(\lambda_w) = 2$, replacing rules 6 and 8. Rule 1 is about the bass line, while rule 6 is about the tenor voice. It is known in music theory that outer voices (soprano and bass) are more characteristic and also more identifiable than inner voices (alto and tenor), which typically stay more or less stationary as background voices. So it is understandable that although larger variety in the bass increases the opportunity for inconsistency (in this case not by much), it is a more important rule to keep. Rule 7 is about the interval between soprano and tenor, while rule 8 describes a small feature between the upper two voices that does not yet have a meaning in music theory. So unlike rule 7, which brings up the important concept of voicing (i.e.
To conclude, in this particular example, we would argue that the rule selection at log2(λ_w) = 2 is the better decision, in which case the model makes a good compromise on exact consistency.

To compare a selective rule realization with its non-selective counterpart [11], we plot the errors ‖A^(r) p⋆ − b^(r)‖₂ for each rule r = 1, ..., 16 as histograms in Fig. 4. The non-selective realization takes all rules into consideration with equal importance, which turns out to be a degenerate case along our model's solution path as log2(λ_w) → ∞. This realization yields a "well-balanced" solution, but no rule is satisfied exactly. In contrast, a selective realization (e.g. at log2(λ_w) = 1) gives near-zero errors on the selected rules, producing more human-like compositional decisions.

[Figure 4: Comparison between (a) a selective rule realization (log2(λ_w) = 1) and (b) its non-selective counterpart; per-rule errors for rules 1–16. The boldfaced x-tick labels designate the indices of the selected rules.]

6 Discussion

Generality of the Framework
The formalism of abstraction and realization in Sec. 2, as well as the unified framework for simultaneous rule realization and selection in Sec. 3, is general and domain-agnostic, not specific to music. The problem formulation as a bi-convex problem (3) admits numerous real-world applications that can be cast as (quasi-)linear systems, possibly equipped with some group structure. For instance, many problems in physical science involve estimating unknowns x from their observations y via a linear (or linearized) equation y = Ax [24], where a grouping of the y_i's (say, from a single sensor or sensor type) itself summarizes x as a rule/abstraction. In general, the observations are noisy and inconsistent due to errors from the measuring devices or even the failure of a sensor. It is then necessary to assign a different reliability weight to every individual sensor reading, and to ask for a "selective" algorithm to "realize" the readings while respecting the group structure. In cases where some devices fail and give inconsistent readings, we can run the proposed algorithm to filter them out.

Linearity versus Expressiveness
The linearity with respect to p in the rule system Ap = b results directly from adopting the probability-space representation. However, this does not imply that the underlying domain (e.g. music) is as simple as linear. In fact, the abstraction process can be highly nonlinear and involve hierarchical partitioning of the input space [11]. So, instead of running the risk of losing expressiveness, the linear equation Ap = b hides the model complexity in the A matrix. On the other hand, the linearity with respect to w in the bi-convex objective (3) is a design choice. We start with a simple linear model to represent relative importance for the sake of interpretability, which may sacrifice the model's expressiveness like other classic linear models. To push the boundary of this trade-off in the future, we will pursue more expressive models without compromising (practically important) interpretability.
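Before contrasting the framework with (group) lasso, here is a minimal sketch of its alternating structure. The two subproblem solvers solve_p and solve_w are left abstract (their exact objectives are problems (7) and (13), given earlier in the paper); only the projection onto the probability simplex, following the sort-based algorithm of Wang and Carreira-Perpiñán [22], is spelled out. The function names and the stopping rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto {p : p >= 0, sum(p) = 1},
    # via the sort-based algorithm of [22].
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def alternate(solve_p, solve_w, p0, w0, max_rounds=100, tol=1e-8):
    """Alternating minimization for the bi-convex objective (3).

    solve_p(w) -> p : simplex-constrained least-squares given fixed weights w;
    solve_w(p) -> w : group-elastic-net-penalized weight update given fixed p.
    Exact minimization in each half-step cannot increase the non-negative
    objective, so the objective values converge (see "Local Convergence" below).
    """
    p, w = p0, w0
    for _ in range(max_rounds):
        p_new = solve_p(w)
        w_new = solve_w(p_new)
        if np.linalg.norm(p_new - p) + np.linalg.norm(w_new - w) < tol:
            return p_new, w_new
        p, w = p_new, w_new
    return p, w
```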
Differences from (Group) Lasso
Component-wise, both subproblems (7) and (13) of the unified framework look similar to regular feature-selection settings such as the lasso [25] and the group lasso [26]. However, not only does the strong coupling between the two subproblems exhibit new properties (Thm. 1 and 2), but the differences in formulation also present unique algorithmic challenges. First, the weighted error term (2) in the objective is in stark contrast with the regular regression formulation, where the (group) lasso is paired with least squares or a similar loss. Whereas dropping features in a regression model typically increases training loss (under-fitting), dropping rules, on the contrary, helps drive the error to zero, since a smaller rule set is more likely to achieve consensus. Hence, the tendency to drop rules in a regular (group) lasso works against the pursuit of a largest consistent rule set, as desired here. This stresses the necessity of a more carefully designed penalty like our proposed group elastic net. Second, the additional simplex constraint weakens the grouping property of the group lasso: failures in group selection (i.e. a rule that is not entirely selected) are observed for small λ_w's. The simplex constraint, effectively an ℓ1 constraint, also incurs an "ℓ1 cancellation", which nullifies a simple lasso (also an ℓ1 penalty) on a simple parameterization of the rules (one weight per rule). These differences pose new model behaviors and deserve further study.

Local Convergence
We solve the bi-convex problem (3) via alternating minimizations, in which the algorithm decreases the non-negative objective in every iteration, thus assuring convergence of the objective values. Nevertheless, neither a global optimum nor convergence of the iterates can be guaranteed. The former leaves the local convergence susceptible to different initializations, demanding further improvements through techniques such as random restarts and noisy updates. The latter leaves the possibility for the optimization variables to enter a limit cycle. However, we consider this an advantage, especially in music, where one prefers multiple realizations and interpretations that are equally optimal.

More Microscopic Views
The weighting scheme in this paper presents the rule selection problem in its most general setting, where a different weight is assigned to every rule component. Hence, we can study the relative importance not only between rules, via the group norms ‖w_{g_r}‖₂, but also within every single rule. The former compares compositional rules at a macroscopic level, e.g. restricting to a diatonic scale is more important than avoiding parallel octaves; the latter at a microscopic level, e.g. changing the probability mass within a diatonic scale creates variety in modes: think of C major versus A minor. We can further study the rule system microscopically by sharing weights of the same component across different rules, yielding an overlapping group elastic net.

References

[1] K. Lewin, Field Theory in Social Science. Harpers, 1951.
[2] J. Skorstad, D. Gentner, and D. Medin, "Abstraction processes during concept learning: A structural view," in Proc. 10th Annu. Conf. Cognitive Sci. Soc., 1988, pp. 419–425.
[3] K. Haase, "Discovery systems: From AM to CYRANO," MIT AI Lab Working Paper 293, 1987.
[4] A. M. Barry, Visual Intelligence: Perception, Image, and Manipulation in Visual Communication. SUNY Press, 1997.
[5] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: A review and new perspectives," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1798–1828, 2013.
[6] Y. Bengio, "Deep learning of representations: Looking forward," in Proc. Int. Conf. Stat. Lang. and Speech Process., 2013, pp. 1–37.
[7] J. J. Fux, Gradus ad Parnassum. Johann Peter van Ghelen, 1725.
[8] H. Schenker, Kontrapunkt. Universal-Edition A.G., 1922.
[9] H. Yu, L. R. Varshney, G. E. Garnett, and R. Kumar, "MUS-ROVER: A self-learning system for musical compositional rules," in Proc. 4th Int. Workshop Music. Metacreation (MUME 2016), 2016.
[10] ——, "Learning interpretable musical compositional rules and traces," in Proc. 2016 ICML Workshop Hum. Interpret. Mach. Learn. (WHI 2016), 2016.
[11] H. Yu and L. R. Varshney, "Towards deep interpretability (MUS-ROVER II): Learning hierarchical representations of tonal music," in Proc. 5th Int. Conf. Learn. Represent. (ICLR 2017), 2017.
[12] D. Cope, "An expert system for computer-assisted composition," Comput. Music J., vol. 11, no. 4, pp. 30–46, 1987.
[13] K. Ebcioğlu, "An expert system for harmonizing four-part chorales," Comput. Music J., vol. 12, no. 3, pp. 43–51, 1988.
[14] J. R. Pierce and M. E. Shannon, "Composing music by a stochastic process," Bell Telephone Laboratories, Technical Memorandum MM-49-150-29, Nov. 1949.
[15] H. Zou and T. Hastie, "Regularization and variable selection via the elastic net," J. R. Stat. Soc. Ser. B. Methodol., vol. 67, no. 2, pp. 301–320, 2005.
[16] J. Wang and J. Ye, "Two-layer feature reduction for sparse-group lasso via decomposition of convex sets," in Proc. 28th Annu. Conf. Neural Inf. Process. Syst. (NIPS), 2014, pp. 2132–2140.
[17] T. Hastie, J. Taylor, R. Tibshirani, and G. Walther, "Forward stagewise regression and the monotone lasso," Electron. J. Stat., vol. 1, pp. 1–29, 2007.
[18] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," Ann. Stat., vol. 32, no. 2, pp. 407–499, 2004.
[19] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani, "Strong rules for discarding predictors in lasso-type problems," J. R. Stat. Soc. Ser. B. Methodol., vol. 74, no. 2, pp. 245–266, 2012.
[20] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Found. Trends Mach. Learn., vol. 3, no. 1, pp. 1–122, 2011.
[21] M. Yuan and Y. Lin, "Model selection and estimation in regression with grouped variables," J. R. Stat. Soc. Ser. B. Methodol., vol. 68, no. 1, pp. 49–67, 2006.
[22] W. Wang and M. A. Carreira-Perpiñán, "Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application," arXiv:1309.1541 [cs.LG], 2013.
[23] M. Hong and Z.-Q. Luo, "On the linear convergence of the alternating direction method of multipliers," Math. Program., pp. 1–35, 2012.
[24] D. D. Jackson, "Interpretation of inaccurate, insufficient and inconsistent data," Geophys. J. Int., vol. 28, no. 2, pp. 97–109, 1972.
[25] R. Tibshirani, "Regression shrinkage and selection via the lasso," J. R. Stat. Soc. Ser. B. Methodol., pp. 267–288, 1996.
[26] J. Friedman, T. Hastie, and R. Tibshirani, "A note on the group lasso and a sparse group lasso," 2010.
Nearest-Neighbor Sample Compression: Efficiency, Consistency, Infinite Dimensions

Aryeh Kontorovich, Department of Computer Science, Ben-Gurion University of the Negev, [email protected]
Sivan Sabato, Department of Computer Science, Ben-Gurion University of the Negev, [email protected]
Roi Weiss, Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, [email protected]

Abstract

We examine the Bayes-consistency of a recently proposed 1-nearest-neighbor-based multiclass learning algorithm. This algorithm is derived from sample compression bounds and enjoys the statistical advantages of tight, fully empirical generalization bounds, as well as the algorithmic advantages of a faster runtime and memory savings. We prove that this algorithm is strongly Bayes-consistent in metric spaces with finite doubling dimension; this is the first consistency result for an efficient nearest-neighbor sample compression scheme. Rather surprisingly, we discover that this algorithm continues to be Bayes-consistent even in a certain infinite-dimensional setting, in which the basic measure-theoretic conditions on which classic consistency proofs hinge are violated. This is all the more surprising, since it is known that k-NN is not Bayes-consistent in this setting. We pose several challenging open problems for future research.

1 Introduction

This paper deals with nearest-neighbor (NN) learning algorithms in metric spaces. Initiated by Fix and Hodges in 1951 [16], this seemingly naive learning paradigm remains competitive against more sophisticated methods [8, 46] and, in its celebrated k-NN version, has been placed on a solid theoretical foundation [11, 44, 13, 47]. Although the classic 1-NN is well known to be inconsistent in general, in recent years a series of papers has presented variations on the theme of a regularized 1-NN classifier, as an alternative to the Bayes-consistent k-NN. Gottlieb et al. [18] showed that approximate nearest neighbor search can act as a regularizer, actually improving generalization performance rather than just injecting noise. In a follow-up work, [27] showed that applying Structural Risk Minimization to (essentially) the margin-regularized data-dependent bound in [18] yields a strongly Bayes-consistent 1-NN classifier. A further development has seen margin-based regularization analyzed through the lens of sample compression: a near-optimal nearest-neighbor condensing algorithm was presented in [20] and later extended to cover semimetric spaces [21]; an activized version also appeared [25]. As detailed in [27], margin-regularized 1-NN methods enjoy a number of statistical and computational advantages over the traditional k-NN classifier. Salient among these are explicit data-dependent generalization bounds, and considerable runtime and memory savings. Sample compression affords additional advantages, in the form of tighter generalization bounds and increased efficiency in time and space.

In this work we study the Bayes-consistency of a compression-based 1-NN multiclass learning algorithm, in both finite-dimensional and infinite-dimensional metric spaces. The algorithm is essentially the passive component of the active learner proposed by Kontorovich, Sabato, and Urner in [25], and we refer to it in the sequel as KSU; for completeness, we present it here in full (Alg. 1). We show that in finite-dimensional metric spaces, KSU is both computationally efficient and Bayes-consistent.
This is the first compression-based multiclass 1-NN algorithm proven to possess both of these properties. We further exhibit a surprising phenomenon in infinite-dimensional spaces, where we construct a distribution for which KSU is Bayes-consistent while k-NN is not.

Main results. Our main contributions consist of analyzing the performance of KSU in finite- and infinite-dimensional settings, and comparing it to the classical k-NN learner. Our key findings are summarized below.

• In Theorem 2, we show that KSU is computationally efficient and strongly Bayes-consistent in metric spaces with a finite doubling dimension. This is the first (strong or otherwise) Bayes-consistency result for an efficient sample compression scheme for a multiclass (or even binary)¹ 1-NN algorithm. This result should be contrasted with the one in [27], where margin-based regularization was employed, but not compression; the proof techniques from [27] do not carry over to the compression-based scheme. Instead, novel arguments are required, as we discuss below. The new sample compression technique provides a Bayes-consistency proof for multiple (even countably many) labels; this is contrasted with the multiclass 1-NN algorithm in [28], which is not compression-based and requires solving a minimum vertex cover problem, thereby imposing a 2-approximation factor whenever there are more than two labels.

• In Theorem 4, we make the surprising discovery that KSU continues to be Bayes-consistent in a certain infinite-dimensional setting, even though this setting violates the basic measure-theoretic conditions on which classic consistency proofs (including that of Theorem 2) hinge. This is all the more surprising, since it is known that k-NN is not Bayes-consistent for this construction [9]. We are currently unaware of any separable² metric probability space on which KSU fails to be Bayes-consistent; this is posed as an intriguing open problem.

Our results indicate that in finite dimensions, an efficient, compression-based, Bayes-consistent multiclass 1-NN algorithm exists, and hence can be offered as an alternative to k-NN, which is well known to be Bayes-consistent in finite dimensions [12, 41]. In contrast, in infinite dimensions, our results show that the condition characterizing the Bayes-consistency of k-NN does not extend to all NN algorithms. It is an open problem to characterize the necessary and sufficient conditions for the existence of a Bayes-consistent NN-based algorithm in infinite dimensions.

Related work. Following the pioneering work of [11] on nearest-neighbor classification, it was shown in [13, 47, 14] that the k-NN classifier is strongly Bayes-consistent in R^d. These results made extensive use of the Euclidean structure of R^d, but in [41] a weak Bayes-consistency result was shown for metric spaces with a bounded diameter and a bounded doubling dimension, under additional distributional smoothness assumptions. More recently, some of the classic results on k-NN risk decay rates were refined by [10] in an analysis that captures the interplay between the metric and the sampling distribution. The worst-case rates have an exponential dependence on the dimension (the so-called curse of dimensionality), and Pestov [33, 34] examines this phenomenon closely under various distributional and structural assumptions. Consistency of NN-type algorithms in more general (and in particular infinite-dimensional) metric spaces was discussed in [1, 5, 6, 9, 30].
In [1, 9], characterizations of Bayes-consistency were given in terms of Besicovitch-type conditions (see Eq. (3)). In [1], a generalized "moving window" classification rule is used and additional regularity conditions are imposed on the regression function. The filtering technique (i.e., taking the first d coordinates in some basis representation) was shown to be universally consistent in [5]. However, that algorithm suffers from the cost of cross-validating over both the dimension d and the number of neighbors k. Also, the technique is only applicable in Hilbert spaces (as opposed to more general metric spaces) and provides only asymptotic consistency, without finite-sample bounds such as those provided by KSU. The insight of [5] is extended to the more general Banach spaces in [6] under various regularity assumptions.

None of the aforementioned generalization results for NN-based techniques are in the form of fully empirical, explicitly computable sample-dependent error bounds. Rather, they are stated in terms of the unknown Bayes-optimal rate, and some involve additional parameters quantifying the well-behavedness of the unknown distribution (see [27] for a detailed discussion). As such, these guarantees do not enable a practitioner to compute a numerical generalization error estimate for a given training sample, much less allow for a data-dependent selection of k, which must be tuned via cross-validation. The asymptotic expansions in [43, 37, 23, 40] likewise do not provide a computable finite-sample bound. The quest for such bounds was a key motivation behind the series of works [18, 28, 20], of which KSU [25] is the latest development.

The work of Devroye et al. [14, Theorem 21.2] has implications for 1-NN classifiers in R^d that are defined based on data-dependent majority-vote partitions of the space. It is shown that under some conditions, a fixed mapping from each sample size to a data-dependent partition rule induces a strongly Bayes-consistent algorithm. This result requires the partition rule to have a bounded VC dimension, and since this rule must be fixed in advance, the algorithm is not fully adaptive. Theorem 19.3 ibid. proves weak consistency for an inefficient compression-based algorithm, which selects among all the possible compression sets of a certain size and maintains a certain rate of compression relative to the sample size. The generalizing power of sample compression was independently discovered by [31], and later elaborated upon by [22]. In the context of NN classification, [14] lists various condensing heuristics (which have no known performance guarantees) and leaves open the algorithmic question of how to minimize the empirical risk over all subsets of a given size. The first compression-based 1-NN algorithm with provable optimality guarantees was given in [20]; it was based on constructing γ-nets in spaces with a finite doubling dimension. The compression size of this construction was shown to be nearly unimprovable by an efficient algorithm unless P=NP. With γ-nets as its algorithmic engine, KSU inherits this near-optimality.

¹ An efficient sample compression algorithm was given in [20] for the binary case, but no Bayes-consistency guarantee is known for it.
² Cérou and Guyader [9] gave a simple example of a nonseparable metric on which all known nearest-neighbor methods, including k-NN and KSU, obviously fail.
The compression-based 1-NN paradigm was later extended to semimetrics in [21], where it was shown to survive violations of the triangle inequality, while the hierarchy-based search methods that have become standard for metric spaces (such as [4, 18] and related approaches) all break down. It was shown in [27] that a margin-regularized 1-NN learner (essentially, the one proposed in [18], which, unlike [20], did not involve sample compression) becomes strongly Bayes-consistent when the margin is chosen optimally in an explicitly prescribed sample-dependent fashion. The margin-based technique developed in [18] for the binary case was extended to multiclass in [28]. Since that algorithm relied on computing a minimum vertex cover, it was not possible to make it both computationally efficient and Bayes-consistent when the number of labels exceeds two. An additional improvement over [28] is that the generalization bounds presented there had an explicit (logarithmic) dependence on the number of labels, while our compression scheme extends seamlessly to countable label spaces.

Paper outline. After fixing the notation and setup in Sec. 2, in Sec. 3 we present KSU, the compression-based 1-NN algorithm we analyze in this work. Sec. 4 discusses our main contributions regarding KSU, together with some open problems. High-level proof sketches are given in Sec. 5 for the finite-dimensional case and in Sec. 6 for the infinite-dimensional case. Fully detailed proofs can be found in [26].

2 Setting and Notation

Our instance space is the metric space (X, ρ), where X is the instance domain and ρ is the metric. (See Appendix A in [26] for relevant background on metric measure spaces.) We consider a countable label space Y. The unknown sampling distribution is a probability measure μ̄ over X × Y, with marginal μ over X. Denote by (X, Y) ∼ μ̄ a pair drawn according to μ̄. The generalization error of a classifier f : X → Y is given by err_μ̄(f) := P_μ̄(Y ≠ f(X)), and its empirical error with respect to a labeled set S′ ⊆ X × Y is given by êrr(f, S′) := (1/|S′|) Σ_{(x,y)∈S′} 1[y ≠ f(x)]. The optimal Bayes risk of μ̄ is R*_μ̄ := inf err_μ̄(f), where the infimum is taken over all measurable classifiers f : X → Y. We say that μ̄ is realizable when R*_μ̄ = 0. We omit the overline in μ̄ in the sequel when there is no ambiguity.

For a finite labeled set S ⊆ X × Y and any x ∈ X, let Xnn(x, S) be the nearest neighbor of x with respect to S and let Ynn(x, S) be the corresponding nearest-neighbor label:

(Xnn(x, S), Ynn(x, S)) := argmin_{(x′,y′)∈S} ρ(x, x′),

where ties are broken arbitrarily. The 1-NN classifier induced by S is denoted by h_S(x) := Ynn(x, S). The set of points in S, denoted X = {X_1, ..., X_|S|} ⊆ X, induces a Voronoi partition of X, V(X) := {V_1(X), ..., V_|S|(X)}, where each Voronoi cell is V_i(X) := {x ∈ X : argmin_{j∈{1,...,|S|}} ρ(x, X_j) = i}. By definition, for all x ∈ V_i(X), h_S(x) = Y_i.

A 1-NN algorithm is a mapping from an i.i.d. labeled sample S_n ∼ μ̄ⁿ to a labeled set S′_n ⊆ X × Y, yielding the 1-NN classifier h_{S′_n}. While the classic 1-NN algorithm sets S′_n := S_n, in this work we study a compression-based algorithm which sets S′_n adaptively, as discussed further below. A 1-NN algorithm is strongly Bayes-consistent on μ̄ if err(h_{S′_n}) converges to R* almost surely, that is, P[lim_{n→∞} err(h_{S′_n}) = R*] = 1. An algorithm is weakly Bayes-consistent on μ̄ if err(h_{S′_n}) converges to R* in expectation, lim_{n→∞} E[err(h_{S′_n})] = R*. Obviously, the former implies the latter.
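For concreteness, the 1-NN rule h_S and the empirical error êrr from the definitions above can be realized in a few lines; this is a generic sketch (the metric rho is left abstract), not code from [26].

```python
import numpy as np

def nn_label(x, S, rho):
    # Ynn(x, S): the label of the nearest neighbor of x in S under the metric rho.
    # S is a list of (point, label) pairs; ties are broken by argmin's first hit.
    dists = [rho(x, xi) for xi, _ in S]
    return S[int(np.argmin(dists))][1]

def empirical_error(f, S_prime):
    # \hat{err}(f, S'): the fraction of labeled examples in S' that f misclassifies.
    return sum(f(x) != y for x, y in S_prime) / len(S_prime)
```

For instance, empirical_error(lambda x: nn_label(x, S, rho), S) returns 0 whenever the points in S are distinct, reflecting that the classic 1-NN rule interpolates its own sample.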
We say that an algorithm is Bayes-consistent on a metric space if it is Bayes-consistent on all distributions over the metric space.

A convenient property that is used when studying the Bayes-consistency of algorithms in metric spaces is the doubling dimension. Denote the open ball of radius r around x by B_r(x) := {x′ ∈ X : ρ(x, x′) < r} and let B̄_r(x) denote the corresponding closed ball. The doubling dimension of a metric space (X, ρ) is defined as follows. Let n be the smallest number such that every ball in X can be covered by n balls of half its radius, where all balls are centered at points of X. Formally,

n := min{n ∈ N : ∀x ∈ X, r > 0, ∃x_1, ..., x_n ∈ X s.t. B_r(x) ⊆ ∪_{i=1}^n B_{r/2}(x_i)}.

Then the doubling dimension of (X, ρ) is defined by ddim(X, ρ) := log₂ n.

For an integer n, let [n] := {1, ..., n}. Denote the set of all index vectors of length d by I_{n,d} := [n]^d. Given a labeled set S_n = (X_i, Y_i)_{i∈[n]} and any i = {i_1, ..., i_d} ∈ I_{n,d}, denote the subsample of S_n indexed by i by S_n(i) := {(X_{i_1}, Y_{i_1}), ..., (X_{i_d}, Y_{i_d})}. Similarly, for a vector Y′ = {Y′_1, ..., Y′_d} ∈ Y^d, denote S_n(i, Y′) := {(X_{i_1}, Y′_1), ..., (X_{i_d}, Y′_d)}, namely the sub-sample of S_n as determined by i with the labels replaced by Y′. Lastly, for i, j ∈ I_{n,d}, we denote S_n(i; j) := {(X_{i_1}, Y_{j_1}), ..., (X_{i_d}, Y_{j_d})}.

3 1-NN majority-based compression

In this work we consider the 1-NN majority-based compression algorithm proposed in [25], which we refer to as KSU. This algorithm is based on constructing γ-nets at different scales; for γ > 0 and A ⊆ X, a set X ⊆ A is said to be a γ-net of A if ∀a ∈ A, ∃x ∈ X : ρ(a, x) ≤ γ, and for all x ≠ x′ ∈ X, ρ(x, x′) > γ.³

The algorithm (see Alg. 1) operates as follows. Given an input sample S_n, whose set of points is denoted X_n = {X_1, ..., X_n}, KSU considers all possible scales γ > 0. For each such scale it constructs a γ-net of X_n. Denote this γ-net by X(γ) := {X_{i_1}, ..., X_{i_m}}, where m ≡ m(γ) denotes its size and i ≡ i(γ) := {i_1, ..., i_m} ∈ I_{n,m} denotes the indices selected from S_n for this γ-net. For every such γ-net, the algorithm attaches the labels Y′ ≡ Y′(γ) ∈ Y^m, which are the empirical majority-vote labels in the respective Voronoi cells of the partition V(X(γ)) = {V_1, ..., V_m}. Formally, for i ∈ [m],

Y′_i ∈ argmax_{y∈Y} |{j ∈ [n] | X_j ∈ V_i, Y_j = y}|,   (1)

where ties are broken arbitrarily. This procedure creates a labeled set S′_n(γ) := S_n(i(γ), Y′(γ)) for every relevant γ ∈ {ρ(X_i, X_j) | i, j ∈ [n]} \ {0}. The algorithm then selects a single γ, denoted γ* ≡ γ*_n, and outputs h_{S′_n(γ*)}. The scale γ* is selected so as to minimize a generalization error bound, which upper bounds err(h_{S′_n(γ)}) with high probability. This error bound, denoted Q in the algorithm, can be derived using a compression-based analysis, as described below.

³ For technical reasons, having to do with the construction in Sec. 6, we depart slightly from the standard definition of a γ-net X ⊆ A. The classic definition requires that (i) ∀a ∈ A, ∃x ∈ X : ρ(a, x) < γ and (ii) ∀x ≠ x′ ∈ X : ρ(x, x′) ≥ γ. In our definition, the relations < and ≥ in (i) and (ii) are replaced by ≤ and >.

Algorithm 1 KSU: 1-NN compression-based algorithm
Require: Sample S_n = (X_i, Y_i)_{i∈[n]}, confidence δ
Ensure: A 1-NN classifier
1: Let Γ := {ρ(X_i, X_j) | i, j ∈ [n]} \ {0}
2: for γ ∈ Γ do
3:   Let X(γ) be a γ-net of {X_1, ..., X_n}
4:   Let m(γ) := |X(γ)|
5:   For each i ∈ [m(γ)], let Y′_i be the majority label in V_i(X(γ)) as defined in Eq. (1)
6:   Set S′_n(γ) := (X(γ), Y′(γ))
7: end for
8: Set ε(γ) := êrr(h_{S′_n(γ)}, S_n)
9: Find γ*_n ∈ argmin_{γ∈Γ} Q(n, ε(γ), 2m(γ), δ), where Q is, e.g., as in Eq. (2)
10: Set S′_n := S′_n(γ*_n)
11: return h_{S′_n}
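Steps 2–6 of Alg. 1 are easy to prototype. The sketch below uses a greedy single-pass construction for the γ-net (any construction satisfying the packing and covering properties works; the paper points to [20, Algorithm 1] for an O(n²) construction), so take it as an illustration rather than the reference implementation.

```python
from collections import Counter

def gamma_net(points, gamma, rho):
    # Greedy gamma-net: keep a point iff it is > gamma away from all kept points.
    # The result is gamma-separated (packing) and gamma-covering by construction,
    # matching the modified net definition used in the text.
    net = []
    for i, x in enumerate(points):
        if all(rho(x, points[j]) > gamma for j in net):
            net.append(i)
    return net  # the index set i(gamma) into `points`

def majority_labels(points, labels, net, rho):
    # Eq. (1): for each Voronoi cell of the net, take the empirical majority vote.
    votes = [Counter() for _ in net]
    for x, y in zip(points, labels):
        cell = min(range(len(net)), key=lambda k: rho(x, points[net[k]]))
        votes[cell][y] += 1
    return [v.most_common(1)[0][0] for v in votes]
```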
We say that a mapping S_n ↦ S′_n is a compression scheme if there is a function C : ∪_{m=0}^∞ (X × Y)^m → 2^{X×Y}, from sub-samples to subsets of X × Y, such that for every S_n there exist an m and a sequence i ∈ I_{n,m} such that S′_n = C(S_n(i)). Given a compression scheme S_n ↦ S′_n and a matching function C, we say that a specific S′_n is an (ε, m)-compression of a given S_n if S′_n = C(S_n(i)) for some i ∈ I_{n,m} and êrr(h_{S′_n}, S_n) ≤ ε.

The generalization power of compression was recognized by [17] and [22]. Specifically, it was shown in [21, Theorem 8] that if the mapping S_n ↦ S′_n is a compression scheme, then with probability at least 1 − δ, for any S′_n which is an (ε, m)-compression of S_n ∼ μ̄ⁿ, we have (omitting the constants, explicitly provided therein, which do not affect our analysis)

err(h_{S′_n}) ≤ (n ε)/(n − m) + O( √(n ε (m log n + log(1/δ))) / (n − m) ) + O( (m log n + log(1/δ)) / (n − m) ).   (2)

Defining Q(n, ε, m, δ) as the RHS of Eq. (2) provides KSU with a compression bound. The following proposition shows that KSU is a compression scheme, which enables us to use Eq. (2) with the appropriate substitution.⁴

Proposition 1. The mapping S_n ↦ S′_n defined by Alg. 1 is a compression scheme whose output S′_n is an (êrr(h_{S′_n}), 2|S′_n|)-compression of S_n.

Proof. Define the function C by C((X̃_i, Ỹ_i)_{i∈[2m]}) = (X̃_i, Ỹ_{i+m})_{i∈[m]}, and observe that for all S_n we have S′_n = C(S_n(i(γ); j(γ))), where i(γ) is the γ-net index set as defined above, and j(γ) = {j_1, ..., j_{m(γ)}} ∈ I_{n,m(γ)} is some index vector such that Y′_i = Y_{j_i} for every i ∈ [m(γ)]. Since Y′_i is an empirical majority vote, such a j clearly exists. Under this scheme, the output S′_n of the algorithm is an (êrr(h_{S′_n}), 2|S′_n|)-compression.

KSU is efficient, for any countable Y. Indeed, Alg. 1 has a naive runtime complexity of O(n⁴), since O(n²) values of γ are considered and a γ-net is constructed for each one in time O(n²) (see [20, Algorithm 1]). Improved runtimes can be obtained, e.g., using the methods in [29, 18]. In this work we focus on the Bayes-consistency of KSU rather than optimizing its computational complexity. Our Bayes-consistency results below hold for KSU whenever the generalization bound Q(n, ε, m, δ_n) satisfies the following properties:

Property 1. For any integer n and δ ∈ (0, 1), with probability 1 − δ over the i.i.d. random sample S_n ∼ μ̄ⁿ, for all ε ∈ [0, 1] and m ∈ [n]: if S′_n is an (ε, m)-compression of S_n, then err(h_{S′_n}) ≤ Q(n, ε, m, δ).

Property 2. Q is monotonically increasing in ε and in m.

Property 3. There is a sequence {δ_n}_{n=1}^∞, δ_n ∈ (0, 1), such that Σ_{n=1}^∞ δ_n < ∞ and, for all m,

lim_{n→∞} sup_{ε∈[0,1]} (Q(n, ε, m, δ_n) − ε) = 0.

The compression bound in Eq. (2) clearly satisfies these properties. Note that Property 3 is satisfied by Eq. (2) using any convergent series Σ_{n=1}^∞ δ_n < ∞ such that δ_n = e^{−o(n)}; in particular, the decay of δ_n cannot be too rapid.

⁴ In [25] the analysis was based on compression with side information, which does not extend to infinite Y.
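As a quick sanity check of the model-selection step, the bound of Eq. (2) can be coded directly; the constants c1, c2 below stand in for the absolute constants omitted in the text, and the function names are ours, not the paper's.

```python
import math

def Q(n, eps, m, delta, c1=1.0, c2=1.0):
    # RHS of Eq. (2) for an (eps, m)-compression of an n-point sample; assumes m < n.
    t = m * math.log(n) + math.log(1.0 / delta)
    return (n * eps) / (n - m) + c1 * math.sqrt(n * eps * t) / (n - m) + c2 * t / (n - m)

def select_scale(scales, stats, n, delta):
    # Step 9 of Alg. 1: pick gamma minimizing Q(n, eps(gamma), 2 m(gamma), delta),
    # where stats[gamma] = (empirical error, net size m(gamma)).
    return min(scales, key=lambda g: Q(n, stats[g][0], 2 * stats[g][1], delta))
```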
4 Main results

In this section we describe our main results; the proofs appear in subsequent sections. First, we show that KSU is Bayes-consistent if the instance space has a finite doubling dimension. This contrasts with classical 1-NN, which is only Bayes-consistent if the distribution is realizable.

Theorem 2. Let (X, ρ) be a metric space with a finite doubling dimension. Let Q be a generalization bound that satisfies Properties 1–3, and let δ_n be as stipulated by Property 3 for Q. If the input confidence δ for input size n is set to δ_n, then the 1-NN classifier h_{S′_n(γ*_n)} computed by KSU is strongly Bayes-consistent on (X, ρ): P(lim_{n→∞} err(h_{S′_n}) = R*) = 1.

The proof, provided in Sec. 5, closely follows the line of reasoning in [27], where the strong Bayes-consistency of an adaptive margin-regularized 1-NN algorithm was proved, but with several crucial differences. In particular, the generalization bounds used by KSU are purely compression-based, as opposed to the Rademacher-based generalization bounds used in [27]. The former can be much tighter in practice and guarantee Bayes-consistency of KSU even for countably many labels. This, however, requires novel technical arguments, which are discussed in detail in Appendix B.1 in [26]. Moreover, since the compression-based bounds do not explicitly depend on ddim, they can be used even when ddim is infinite, as we do in Theorem 4. To underscore the subtle nature of Bayes-consistency, we note that the proof technique given here does not carry over to an earlier algorithm, suggested in [20, Theorem 4], which also uses γ-nets. Whether the latter is Bayes-consistent is an open question.

Next, we study the Bayes-consistency of KSU in infinite dimensions (i.e., with ddim = ∞), in particular in a setting where k-NN was shown by [9] not to be Bayes-consistent. Indeed, a straightforward application of [9, Lemma A.1] yields the following result.

Theorem 3 (Cérou and Guyader [9]). There exist an infinite-dimensional separable metric space (X, ρ) and a realizable distribution μ̄ over X × {0, 1} such that no k_n-NN learner satisfying k_n/n → 0 as n → ∞ is Bayes-consistent under μ̄. In particular, this holds for any space and realizable distribution μ̄ satisfying the following condition: the set C of points labeled 1 by μ̄ satisfies

μ(C) > 0 and ∀x ∈ C, lim_{r→0} μ(C ∩ B̄_r(x)) / μ(B̄_r(x)) = 0.   (3)

Since μ(C) > 0, Eq. (3) constitutes a violation of the Besicovitch covering property. In doubling spaces, the Besicovitch covering theorem precludes such a violation [15]. In contrast, as [35, 36] show, in infinite-dimensional spaces this violation can in fact occur. Moreover, this is not an isolated pathology, as the property is shared by Gaussian Hilbert spaces [45]. At first sight, Eq. (3) might appear to thwart any 1-NN algorithm applied to such a distribution. However, the following result shows that this is not the case: KSU is Bayes-consistent on a distribution with this property.

Theorem 4. There is a metric space equipped with a realizable distribution for which KSU is weakly Bayes-consistent, while any k-NN classifier necessarily is not.

The proof relies on a classic construction of Preiss [35] which satisfies Eq. (3). We show that the structure of the construction, combined with the packing and covering properties of γ-nets, implies that the majority-vote classifier induced by any γ-net with a sufficiently small γ approaches the Bayes error. To contrast with Theorem 4, we next show that on the same construction not all majority-vote Voronoi partitions succeed. Indeed, if the packing property of γ-nets is relaxed, partition sequences obstructing Bayes-consistency exist.
Theorem 5. For the example constructed in Theorem 4, there exists a sequence of Voronoi partitions with vanishing diameter such that the induced true majority-vote classifiers are not Bayes-consistent.

The above result also stands in contrast to [14, Theorem 21.2], showing that, unlike in finite dimensions, a vanishing diameter of the partitions is insufficient to establish consistency when ddim = ∞. We conclude the main results by posing intriguing open problems.

Open problem 1. Does there exist a metric probability space on which some k-NN algorithm is consistent while KSU is not? Does there exist any separable metric space on which KSU fails?

Open problem 2. Cérou and Guyader [9] distill a certain Besicovitch condition which is necessary and sufficient for k-NN to be Bayes-consistent in a metric space. Our Theorem 4 shows that the Besicovitch condition is not necessary for KSU to be Bayes-consistent. Is it sufficient? What is a necessary condition?

5 Bayes-consistency of KSU in finite dimensions

In this section we give a high-level proof of Theorem 2, showing that KSU is strongly Bayes-consistent in finite-dimensional metric spaces. A fully detailed proof is given in Appendix B in [26]. Recall the optimal empirical error ε*_n ≡ ε(γ*_n) and the optimal compression size m*_n ≡ m(γ*_n) as computed by KSU. As shown in Proposition 1, the sub-sample S′_n(γ*_n) is an (ε*_n, 2m*_n)-compression of S_n. Abbreviate the compression-based generalization bound used in KSU by Q_n(ε, m) := Q(n, ε, 2m, δ_n). To show Bayes-consistency, we start with a standard decomposition of the excess error over the optimal Bayes risk into two terms:

err(h_{S′_n(γ*_n)}) − R* = (err(h_{S′_n(γ*_n)}) − Q_n(ε*_n, m*_n)) + (Q_n(ε*_n, m*_n) − R*) =: T_I(n) + T_II(n),

and show that each term decays to zero with probability one. For the first term, Property 1 of Q, together with the Borel–Cantelli lemma, readily implies lim sup_{n→∞} T_I(n) ≤ 0 with probability one. The main challenge is showing that lim sup_{n→∞} T_II(n) ≤ 0 with probability one. We do so in several stages:

1. Loosely speaking, we first show (Lemma 10) that the Bayes error R* can be well approximated using 1-NN classifiers defined by the true (as opposed to empirical) majority-vote labels over fine partitions of X. In particular, this holds for any partition induced by a γ-net of X with a sufficiently small γ > 0. This approximation guarantee relies on the fact that in finite-dimensional spaces, the class of continuous functions with compact support is dense in L₁(μ) (Lemma 9).

2. Fix γ̃ > 0 sufficiently small so that any true majority-vote classifier induced by a γ̃-net has a true error close to R*, as guaranteed by stage 1. Since for bounded subsets of finite-dimensional spaces the size of any γ-net is finite, the empirical error of any majority-vote γ-net almost surely converges to its true majority-vote error as the sample size n → ∞. Let n(γ̃) be sufficiently large that Q_{n(γ̃)}(ε(γ̃), m(γ̃)), as computed by KSU for a sample of size n(γ̃), is a reliable estimate of the true error of h_{S′_{n(γ̃)}(γ̃)}.

3. Let γ̃ and n(γ̃) be as in stage 2. Given a sample of size n = n(γ̃), recall that KSU selects an optimal γ* such that Q_n(ε(γ), m(γ)) is minimized over all γ > 0. For margins γ ≪ γ̃, which are prone to over-fitting, Q_n(ε(γ), m(γ)) is not a reliable estimate for h_{S′_n(γ)}, since compression may not yet have taken place for samples of size n. Nevertheless, these margins are discarded by KSU due to the penalty term in Q. On the other hand, for γ-nets with margin γ ≫ γ̃, which are prone to under-fitting, the true error is well estimated by Q_n(ε(γ), m(γ)).
It follows that KSU selects γ*_n ≈ γ̃ and Q_n(ε*_n, m*_n) → R*, implying lim sup_{n→∞} T_II(n) ≤ 0 with probability one.

As one can see, the assumption that X is finite-dimensional plays a major role in the proof. A simple argument shows that the family of continuous functions with compact support is no longer dense in L₁ in infinite-dimensional spaces. In addition, γ-nets of bounded subsets of infinite-dimensional spaces need no longer be finite.

6 On Bayes-consistency of NN algorithms in infinite dimensions

In this section we study the Bayes-consistency properties of 1-NN algorithms on a classic infinite-dimensional construction of Preiss [35], which we describe below in detail. This construction was first introduced as a concrete example showing that in infinite-dimensional spaces the Besicovitch covering theorem [15] can be strongly violated, as manifested in Eq. (3).

[Figure 1: Preiss's construction, showing prefixes z_{1:k−2}, z_{1:k−1}, z_{1:k} and scales ε_{k−1}, ε_k. Encircled is the closed ball B̄_{ε_{k−1}}(z) for some z ∈ C = Z_∞.]

Example 1 (Preiss's construction). The construction (see Figure 1) defines an infinite-dimensional metric space (X, ρ) and a realizable measure μ̄ over X × Y with the binary label set Y = {0, 1}. It relies on two sequences: a sequence of natural numbers {N_k}_{k∈N} and a sequence of positive numbers {a_k}_{k∈N}. The two sequences should satisfy the following:

Σ_{k=1}^∞ a_k N_1 ⋯ N_k = 1;   lim_{k→∞} a_k N_1 ⋯ N_{k+1} = ∞;   and lim_{k→∞} N_k = ∞.   (4)

These properties are satisfied, for instance, by setting N_k := k! and a_k := 2^{−k} / Π_{i∈[k]} N_i.

Let Z_0 be the set of all finite sequences (z_1, ..., z_k), k ∈ N, of natural numbers such that z_i ∈ [N_i], and let Z_∞ be the set of all infinite sequences (z_1, z_2, ...) of natural numbers such that z_i ∈ [N_i]. Define the example space X := Z_0 ∪ Z_∞ and denote ε_k := 2^{−k}, where ε_∞ := 0. The metric ρ over X is defined as follows: for x, y ∈ X, denote by x ∧ y their longest common prefix. Then

ρ(x, y) = (ε_{|x∧y|} − ε_{|x|}) + (ε_{|x∧y|} − ε_{|y|}).

It can be shown (see [35]) that ρ is a metric; in fact, it embeds isometrically into the square-norm metric of a Hilbert space.

To define μ, the marginal measure over X, let μ_∞ be the uniform product distribution over Z_∞; that is, for all i ∈ N, each z_i in the sequence z = (z_1, z_2, ...) ∈ Z_∞ is independently drawn from a uniform distribution over [N_i]. Let μ_0 be an atomic measure on Z_0 such that for all z ∈ Z_0, μ_0(z) = a_{|z|}. Clearly, the first condition in Eq. (4) implies μ_0(Z_0) = 1. Define the marginal probability measure μ over X by

∀A ⊆ Z_0 ∪ Z_∞, μ(A) := λ μ_∞(A) + (1 − λ) μ_0(A).

In words, an infinite sequence is drawn with probability λ (and all such sequences are equally likely), or else a finite sequence is drawn (and all finite sequences of the same length are equally likely). Define the realizable distribution μ̄ over X × Y by setting the marginal over X to μ, and by setting the label of z ∈ Z_∞ to 1 with probability 1 and the label of z ∈ Z_0 to 0 with probability 1.

As shown in [35], this construction satisfies Eq. (3) with C = Z_∞ and μ(C) = λ > 0. It follows from Theorem 3 that no k-NN algorithm is Bayes-consistent on it.
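To build intuition for Example 1, here is a small simulation sketch of the metric ρ and the sampling of μ̄. Truncating infinite sequences at a finite depth, and the names lam (for λ) and depth, are our simulation choices, not part of the construction.

```python
import math
import random

def eps(k):
    # eps_k = 2^{-k}, with eps_inf = 0 (math.inf marks an infinite sequence).
    return 0.0 if k == math.inf else 2.0 ** (-k)

def rho(x, y, len_x=None, len_y=None):
    # rho(x, y) = (eps_{|x ^ y|} - eps_{|x|}) + (eps_{|x ^ y|} - eps_{|y|}),
    # where x ^ y is the longest common prefix; pass len_=math.inf for a
    # (truncated) infinite sequence so that eps_{|x|} = 0 exactly.
    lx = len(x) if len_x is None else len_x
    ly = len(y) if len_y is None else len_y
    k = 0
    while k < len(x) and k < len(y) and x[k] == y[k]:
        k += 1
    return (eps(k) - eps(lx)) + (eps(k) - eps(ly))

def sample_point(lam=0.5, depth=20):
    # With prob. lam: z in Z_inf (label 1), truncated at `depth` coordinates.
    # Else: z in Z_0 (label 0) of length k, where P(k) = a_k N_1...N_k = 2^{-k}
    # under the choice N_k = k!, a_k = 2^{-k} / (N_1...N_k) from Eq. (4).
    if random.random() < lam:
        z = [random.randrange(1, math.factorial(i) + 1) for i in range(1, depth + 1)]
        return z, math.inf, 1
    k = 1
    while random.random() < 0.5:  # geometric length: P(k) = 2^{-k}
        k += 1
    z = [random.randrange(1, math.factorial(i) + 1) for i in range(1, k + 1)]
    return z, k, 0
```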
In contrast, the following theorem shows that KSU is weakly Bayes-consistent on this distribution. Theorem 4 immediately follows from this result.

Theorem 6. Assume (X, ρ), Y and μ̄ as in Example 1. KSU is weakly Bayes-consistent on μ̄.

The proof, provided in Appendix C in [26], first characterizes the Voronoi cells for which the true majority vote incurs a significant error (Lemma 15). In finite-dimensional spaces, the total measure of all such "bad" cells can be made arbitrarily close to zero by taking γ sufficiently small, as shown in Lemma 10 of Theorem 2. However, it is not immediately clear whether this can be achieved for the infinite-dimensional construction above. Indeed, we expect such bad cells, due to the unintuitive property that for any x ∈ C we have μ(B̄_ε(x) ∩ C)/μ(B̄_ε(x)) → 0 as ε → 0, and yet μ(C) > 0. Thus, if, for example, a significant portion of the set C (whose label is 1) were covered by Voronoi cells of the form V = B̄_ε(x) with x ∈ C, then for all sufficiently small ε each of these cells would have a true majority vote of 0, and a significant portion of C would be misclassified. However, we show that, by the structure of the construction combined with the packing and covering properties of γ-nets, in any γ-net the total measure of all these "bad" cells goes to 0 as γ → 0, thus yielding a consistent classifier.

Lastly, the following theorem shows that on the same construction, when the Voronoi partitions are allowed to violate the packing property of γ-nets, Bayes-consistency does not necessarily hold. Theorem 5 immediately follows from this result.

Theorem 7. Assume (X, ρ), Y and μ̄ as in Example 1. There exists a sequence of Voronoi partitions (P_k)_{k∈N} of X with max_{V∈P_k} diam(V) ≤ ε_k such that the sequence of true majority-vote classifiers (h_{P_k})_{k∈N} induced by these partitions is not Bayes-consistent: lim inf_{k→∞} err(h_{P_k}) = λ > 0.

The proof, provided in Appendix D, constructs a sequence of Voronoi partitions in which each partition P_k has all of its impure Voronoi cells (those containing both 0 and 1 labels) being bad. In this case, C is incorrectly classified by h_{P_k}, yielding a significant error. Thus, in infinite-dimensional metric spaces, the shape of the Voronoi cells plays a fundamental role in the consistency of the partition.

Acknowledgments. We thank Frédéric Cérou for the numerous fruitful discussions and helpful feedback on an earlier draft. Aryeh Kontorovich was supported in part by the Israel Science Foundation (grant No. 755/15), Paypal and IBM. Sivan Sabato was supported in part by the Israel Science Foundation (grant No. 555/15).

References

[1] Christophe Abraham, Gérard Biau, and Benoît Cadre. On the kernel rule for function classification. Ann. Inst. Statist. Math., 58(3):619–633, 2006.
[2] Daniel Berend and Aryeh Kontorovich. The missing mass problem. Statistics & Probability Letters, 82(6):1102–1110, 2012.
[3] Daniel Berend and Aryeh Kontorovich. On the concentration of the missing mass. Electronic Communications in Probability, 18(3):1–7, 2013.
[4] Alina Beygelzimer, Sham Kakade, and John Langford. Cover trees for nearest neighbor. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 97–104, New York, NY, USA, 2006. ACM.
[5] Gérard Biau, Florentina Bunea, and Marten H. Wegkamp. Functional classification in Hilbert spaces. IEEE Trans. Inform. Theory, 51(6):2163–2172, 2005.
[6] Gérard Biau, Frédéric Cérou, and Arnaud Guyader. Rates of convergence of the functional k-nearest neighbor estimate. IEEE Trans. Inform. Theory, 56(4):2034–2040, 2010.
[7] V. I. Bogachev. Measure Theory. Vol. I, II. Springer-Verlag, Berlin, 2007.
[8] Oren Boiman, Eli Shechtman, and Michal Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[9] Frédéric Cérou and Arnaud Guyader. Nearest neighbor classification in infinite dimension. ESAIM: Probability and Statistics, 10:340–355, 2006.
[10] Kamalika Chaudhuri and Sanjoy Dasgupta. Rates of convergence for nearest neighbor classification. In NIPS, 2014.
[11] Thomas M. Cover and Peter E. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13:21–27, 1967.
[12] Luc Devroye. On the inequality of Cover and Hart in nearest neighbor discrimination. IEEE Trans. Pattern Anal. Mach. Intell., 3(1):75–78, 1981.
[13] Luc Devroye and László Györfi. Nonparametric Density Estimation: The L1 View. Wiley Series in Probability and Mathematical Statistics: Tracts on Probability and Statistics. John Wiley & Sons, Inc., New York, 1985.
[14] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31. Springer Science & Business Media, 2013.
[15] Herbert Federer. Geometric Measure Theory. Die Grundlehren der mathematischen Wissenschaften, Band 153. Springer-Verlag, New York, 1969.
[16] Evelyn Fix and J. L. Hodges, Jr. Discriminatory analysis. Nonparametric discrimination: Consistency properties. International Statistical Review / Revue Internationale de Statistique, 57(3):238–247, 1989.
[17] Sally Floyd and Manfred Warmuth. Sample compression, learnability, and the Vapnik-Chervonenkis dimension. Machine Learning, 21(3):269–304, 1995.
[18] Lee-Ad Gottlieb, Aryeh Kontorovich, and Robert Krauthgamer. Efficient classification for metric data (extended abstract COLT 2010). IEEE Transactions on Information Theory, 60(9):5750–5759, 2014.
[19] Lee-Ad Gottlieb, Aryeh Kontorovich, and Robert Krauthgamer. Adaptive metric dimensionality reduction. Theoretical Computer Science, 620:105–118, 2016.
[20] Lee-Ad Gottlieb, Aryeh Kontorovich, and Pinhas Nisnevitch. Near-optimal sample compression for nearest neighbors. In Neural Information Processing Systems (NIPS), 2014.
[21] Lee-Ad Gottlieb, Aryeh Kontorovich, and Pinhas Nisnevitch. Nearly optimal classification for semimetrics (extended abstract AISTATS 2016). Journal of Machine Learning Research, 2017.
[22] Thore Graepel, Ralf Herbrich, and John Shawe-Taylor. PAC-Bayesian compression bounds on the prediction error of learning algorithms for classification. Machine Learning, 59(1):55–76, 2005.
[23] Peter Hall and Kee-Hoon Kang. Bandwidth choice for nonparametric classification. Ann. Statist., 33(1):284–306, 2005.
[24] Olav Kallenberg. Foundations of Modern Probability. Second edition. Probability and its Applications. Springer-Verlag, 2002.
[25] Aryeh Kontorovich, Sivan Sabato, and Ruth Urner. Active nearest-neighbor learning in metric spaces. In Advances in Neural Information Processing Systems, pages 856–864, 2016.
[26] Aryeh Kontorovich, Sivan Sabato, and Roi Weiss. Nearest-neighbor sample compression: Efficiency, consistency, infinite dimensions. CoRR, abs/1705.08184, 2017.
[27] Aryeh Kontorovich and Roi Weiss. A Bayes consistent 1-NN classifier. In Artificial Intelligence and Statistics (AISTATS 2015), 2014.
[28] Aryeh Kontorovich and Roi Weiss. Maximum margin multiclass nearest neighbors. In International Conference on Machine Learning (ICML 2014), 2014.
[29] Robert Krauthgamer and James R. Lee. Navigating nets: Simple algorithms for proximity search. In 15th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 791–801, January 2004.
[30] Sanjeev R. Kulkarni and Steven E. Posner. Rates of convergence of nearest neighbor estimation under arbitrary sampling. IEEE Trans. Inform. Theory, 41(4):1028–1039, 1995.
Rates of convergence of nearest neighbor estimation under arbitrary sampling. IEEE Trans. Inform. Theory, 41(4):1028–1039, 1995.
[31] Nick Littlestone and Manfred K. Warmuth. Relating data compression and learnability. Unpublished, 1986.
[32] James R. Munkres. Topology: a first course. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1975.
[33] Vladimir Pestov. On the geometry of similarity search: dimensionality curse and concentration of measure. Inform. Process. Lett., 73(1-2):47–51, 2000.
[34] Vladimir Pestov. Is the k-NN classifier in high dimensions affected by the curse of dimensionality? Comput. Math. Appl., 65(10):1427–1437, 2013.
[35] David Preiss. Invalid Vitali theorems. Abstracta. 7th Winter School on Abstract Analysis, pages 58–60, 1979.
[36] David Preiss. Gaussian measures and the density theorem. Comment. Math. Univ. Carolin., 22(1):181–193, 1981.
[37] Demetri Psaltis, Robert R. Snapp, and Santosh S. Venkatesh. On the finite sample performance of the nearest neighbor classifier. IEEE Transactions on Information Theory, 40(3):820–837, 1994.
[38] Walter Rudin. Principles of mathematical analysis. McGraw-Hill Book Co., New York, third edition, 1976. International Series in Pure and Applied Mathematics.
[39] Walter Rudin. Real and Complex Analysis. McGraw-Hill, 1987.
[40] Richard J. Samworth. Optimal weighted nearest neighbour classifiers. Ann. Statist., 40(5):2733–2763, 2012.
[41] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[42] John Shawe-Taylor, Peter L. Bartlett, Robert C. Williamson, and Martin Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926–1940, 1998.
[43] Robert R. Snapp and Santosh S. Venkatesh. Asymptotic expansions of the k nearest neighbor risk. Ann. Statist., 26(3):850–878, 1998.
[44] Charles J. Stone. Consistent nonparametric regression. The Annals of Statistics, 5(4):595–620, 1977.
[45] Jaroslav Tišer. Vitali covering theorem in Hilbert space. Trans. Amer. Math. Soc., 355(8):3277–3289, 2003.
[46] Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207–244, 2009.
[47] Lin Cheng Zhao. Exponential bounds of mean error for the nearest neighbor estimates of regression functions. J. Multivariate Anal., 21(1):168–178, 1987.
A Scale Free Algorithm for Stochastic Bandits with Bounded Kurtosis

Tor Lattimore† ([email protected])
† Now at DeepMind, London.

Abstract

Existing strategies for finite-armed stochastic bandits mostly depend on a parameter of scale that must be known in advance. Sometimes this is in the form of a bound on the payoffs, or the knowledge of a variance or subgaussian parameter. The notable exceptions are the analysis of Gaussian bandits with unknown mean and variance by Cowan et al. [2015] and of uniform distributions with unknown support [Cowan and Katehakis, 2015]. The results derived in these specialised cases are generalised here to the non-parametric setup, where the learner knows only a bound on the kurtosis of the noise, which is a scale free measure of the extremity of outliers.

1 Introduction

SpaceBandits is a fictional company that specialises in optimising the power output of satellite-mounted solar panels. The data science team wants to use a bandit algorithm to adjust the knobs on a legacy satellite, but they don't remember the units of the sensors, and have limited knowledge about the noise distribution of the panel output or sensors. The SpaceBandits data science team searches the literature for an algorithm that does not depend on the scale or location of the means of the arms, and finds this simple paper, in NIPS 2017. It turns out that logarithmic regret is possible for finite-armed bandits with no assumptions on the noise of the payoffs except for a known finite bound on the kurtosis, which corresponds to knowing the likelihood/magnitude of outliers [DeCarlo, 1997]. Importantly, the kurtosis is independent of the location of the mean and scale of the central tendency (the variance). This generalises the ideas of Cowan et al. [2015] beyond the Gaussian case with unknown mean and variance to the nonparametric setting.

The setup is as follows. Let $k \ge 2$ be the number of bandits (or arms). In each round $1 \le t \le n$ the player should choose an action $A_t \in \{1, \ldots, k\}$ and subsequently receives a reward $X_t \sim \nu_{A_t}$, where $\nu_1, \ldots, \nu_k$ are a set of distributions that are not known in advance. Let $\mu_i$ be the mean payoff of the $i$th arm and $\mu^* = \max_i \mu_i$ and $\Delta_i = \mu^* - \mu_i$. The regret measures the expected deficit of the player relative to the optimal choice of distribution:

$$R_n = \mathbb{E}\left[\sum_{t=1}^n \Delta_{A_t}\right]. \qquad (1)$$

The table below summarises many of the known results on the optimal achievable asymptotic regret under different assumptions on $(\nu_i)_i$. A reference for each of the upper bounds is given in Table 1, while the lower bounds are mostly due to Lai and Robbins [1985] and Burnetas and Katehakis [1996]. An omission from the table is when the distributions are known to lie in a single-parameter exponential family (which does not fit well with the columns). Details are by Cappé et al. [2013].

| # | Assumption | Known | Unknown | $\lim_{n\to\infty} R_n/\log(n)$ | Reference |
|---|---|---|---|---|---|
| 1 | Bernoulli | $\mathrm{Supp}(\nu_i) \subseteq \{0,1\}$ | $\mu_i \in [0,1]$ | $\sum_{i:\Delta_i>0} \frac{\Delta_i}{d(\mu_i,\,\mu^*)}$ | Lai and Robbins [1985] |
| 2 | Discrete | $\mathrm{Supp}(\nu_i) \subseteq A$, $|A| < \infty$ | distribution | it's complicated | Burnetas and Katehakis [1996] |
| 3 | Bounded | $\mathrm{Supp}(\nu_i) \subseteq [0,1]$ | distribution | it's complicated | Honda and Takemura [2010] |
| 4 | Semi-bounded | $\mathrm{Supp}(\nu_i) \subseteq (-\infty,1]$ | distribution | it's complicated | Honda and Takemura [2015] |
| 5 | Gaussian (known var.) | $\nu_i = \mathcal{N}(\mu_i, \sigma_i^2)$, $\sigma_i^2$ | $\mu_i \in \mathbb{R}$ | $\sum_{i:\Delta_i>0} \frac{2\sigma_i^2}{\Delta_i}$ | Katehakis and Robbins [1995] |
| 6 | Uniform | $\nu_i = \mathcal{U}(a_i, b_i)$ | $a_i, b_i$ | $\sum_{i:\Delta_i>0} \frac{\Delta_i}{\log\left(1 + 2\Delta_i/(b_i - a_i)\right)}$ | Cowan and Katehakis [2015] |
| 7 | Subgaussian | $\log M_{\nu_i}(\lambda) \le \frac{\lambda^2\sigma^2}{2}$ | distribution | $\sum_{i:\Delta_i>0} \frac{2\sigma^2}{\Delta_i}$ | Bubeck and Cesa-Bianchi [2012] |
| 8 | Known variance | $\mathbb{V}[\nu_i] \le \sigma_i^2$ | distribution | $O\!\left(\sum_{i:\Delta_i>0} \frac{\sigma_i^2}{\Delta_i}\right)$ | Bubeck et al. [2013] |
| 9 | Gaussian | $\nu_i = \mathcal{N}(\mu_i, \sigma_i^2)$ | $\mu_i \in \mathbb{R}$, $\sigma_i^2 > 0$ | $\sum_{i:\Delta_i>0} \frac{2\Delta_i}{\log\left(1 + \Delta_i^2/\sigma_i^2\right)}$ | Cowan et al. [2015] |

Here $d(p, q) = p\log(p/q) + (1-p)\log((1-p)/(1-q))$ and $M_\nu(\lambda) = \mathbb{E}_{X\sim\nu}\exp((X - \mu)\lambda)$ with $\mu$ the mean of $\nu$ is the centered moment generating function. All asymptotic results are optimal except for the grey cells (rows 7 and 8).

Table 1: Typical distributional assumptions and asymptotic regret

With the exception of rows 6 and 9 in Table 1, all entries essentially depend on some kind of scale parameter. Missing is an entry for a non-parametric assumption that is scale free. This paper fills that gap with the following assumption and regret guarantee.

Assumption 1. There exists a known $\kappa^* \in \mathbb{R}$ such that for all $1 \le i \le k$, the kurtosis of $X \sim \nu_i$ is at most $\mathrm{Kurt}[X] = \mathbb{E}[(X - \mathbb{E}[X])^4]/\mathbb{V}[X]^2 \le \kappa^*$.

Theorem 2. If Assumption 1 holds, then the algorithm described in §2 satisfies

$$\limsup_{n\to\infty} \frac{R_n}{\log(n)} \le C\sum_{i:\Delta_i>0} \Delta_i\left(\kappa^* - 1 + \frac{\sigma_i^2}{\Delta_i^2}\right),$$

where $\sigma_i^2$ is the variance of $\nu_i$ and $C > 0$ is a universal constant.

What are the implications of this result? The first point is that the algorithm in §2 is scale and translation invariant in the sense that its behaviour does not change if the payoffs are multiplied by a positive constant or shifted. The regret also depends appropriately on the scale so that multiplying the rewards of all arms by a positive constant factor also multiplies the regret by this factor. As far as I know, this is the first scale free bandit algorithm for a non-parametric class. The assumption on the boundedness of the kurtosis is much less restrictive than assuming an exact Gaussian model (which has kurtosis 3) or uniform (kurtosis 9/5). See Table 2 for other examples.

| Distribution | Parameters | Kurtosis |
|---|---|---|
| Gaussian | $\mu \in \mathbb{R}$, $\sigma > 0$ | 3 |
| Bernoulli | $\theta \in [0,1]$ | $\frac{1 - 3\theta(1-\theta)}{\theta(1-\theta)}$ |
| Exponential | $\lambda > 0$ | 9 |
| Laplace | $\mu \in \mathbb{R}$, $b > 0$ | 6 |
| Uniform | $a < b \in \mathbb{R}$ | 9/5 |

Table 2: Kurtosis

As mentioned, the kurtosis is a measure of the likelihood/existence of outliers of a distribution, and it makes intuitive sense that a bandit strategy might depend on some kind of assumption on this quantity. How else to know whether or not to cease exploring an unpromising action? The assumption can also be justified from a mathematical perspective. If the variance of an arm is not assumed known, then calculating confidence intervals requires an estimate of the variance from the data. Let $X, X_1, X_2, \ldots, X_n$ be a sequence of i.i.d. centered random variables with variance $\sigma^2$ and kurtosis $\kappa$. A reasonable estimate of $\sigma^2$ is

$$\hat\sigma^2 = \frac{1}{n}\sum_{t=1}^n X_t^2. \qquad (2)$$

Clearly this estimator is unbiased and has variance

$$\mathbb{V}[\hat\sigma^2] = \frac{\mathbb{E}[X^4] - \mathbb{E}[X^2]^2}{n} = \frac{\sigma^4(\kappa - 1)}{n}.$$

Therefore, if we are to expect good estimation of $\sigma^2$, then the kurtosis should be finite. Note that if $\sigma^2$ is estimated by (2), then the central limit theorem combined with finite kurtosis is enough for an estimation error of $O(\sigma^2((\kappa - 1)/n)^{1/2})$ asymptotically. For bandits, however, finite-time bounds are required, which are not available using (2) without additional moment assumptions (for example, on the moment generating function). An example demonstrating the necessity of the limit in the standard central limit theorem is as follows. Suppose that $X_1, \ldots, X_n$ are Bernoulli with bias $p = 1/n$; then for large $n$ the distribution of the sum is closely approximated by a Poisson distribution with parameter 1, which is very different to a Gaussian.
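The variance formula above is easy to check numerically. The following sketch (our own illustration; sample sizes and names are arbitrary) simulates the naive estimator (2) and compares its empirical variance with $\sigma^4(\kappa-1)/n$ for two of the distributions in Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def var_of_variance_estimate(sampler, sigma2, kurtosis, n=1000, reps=5000):
    """Empirically check V[sigma_hat^2] ~= sigma^4 (kappa - 1) / n for the
    naive estimator sigma_hat^2 = (1/n) sum_t X_t^2 of centered variables."""
    X = sampler((reps, n))                  # reps independent samples of size n
    sigma2_hat = (X ** 2).mean(axis=1)      # naive variance estimates, Eq. (2)
    return sigma2_hat.var(), sigma2 ** 2 * (kurtosis - 1) / n

# Centered exponential: variance 1, kurtosis 9 (see Table 2).
emp, thy = var_of_variance_estimate(lambda s: rng.exponential(1.0, s) - 1.0, 1.0, 9.0)
print(f"exponential: empirical {emp:.2e} vs theory {thy:.2e}")

# Uniform on [-1, 1]: variance 1/3, kurtosis 9/5.
emp, thy = var_of_variance_estimate(lambda s: rng.uniform(-1.0, 1.0, s), 1 / 3, 9 / 5)
print(f"uniform:     empirical {emp:.2e} vs theory {thy:.2e}")
```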
Finite kurtosis alone is enough if the classical empirical estimator is replaced by a robust estimator such as the median-of-means estimator [Alon et al., 1996] or Catoni's estimator [Catoni, 2012]. Of course, if the kurtosis were not known, then you could try and estimate it with assumptions on the eighth moment, and so on. Is there any justification to stop here? The main reason is that this seems like a useful place to stop. Large classes of distributions have known bounds on their kurtosis (see Table 2) and the independence of scale is a satisfying property.

Contributions. The main contribution is the new assumption, algorithm, and the proof of Theorem 2 (see §2). The upper bound is also complemented by an asymptotic lower bound (§3) that applies to all strategies with sub-polynomial regret and all bandit problems with bounded kurtosis.

Additional notation. Let $T_i(t) = \sum_{s=1}^t \mathbb{1}\{A_s = i\}$ be the number of times arm $i$ has been played after round $t$. For measures $P, Q$ on the same probability space, $\mathrm{KL}(P, Q)$ is the relative entropy between $P$ and $Q$ and $\chi^2(P, Q)$ is the $\chi^2$ distance. The following lemma is well known.

Lemma 3. Let $X_1, X_2$ be independent random variables with $X_i$ having variance $\sigma_i^2$, kurtosis $\kappa_i < \infty$ and skewness $\gamma_i = \mathbb{E}[(X_i - \mathbb{E}[X_i])^3]/\sigma_i^3$. Then:

(a) $\mathrm{Kurt}[X_1 + X_2] = 3 + \dfrac{\sigma_1^4(\kappa_1 - 3) + \sigma_2^4(\kappa_2 - 3)}{(\sigma_1^2 + \sigma_2^2)^2}$;  (b) $\gamma_1^2 \le \kappa_1 - 1$.

2 Algorithm and upper bound

Like the robust upper confidence bound algorithm by Bubeck et al. [2013], the new algorithm makes use of the robust median-of-means estimator.

Median-of-means estimator. Let $Y_1, Y_2, \ldots, Y_n$ be a sequence of independent and identically distributed random variables. The median-of-means estimator first partitions the data into $m$ blocks of equal size (up to rounding errors). The empirical mean of each block is then computed and the estimate is the median of the means of each of the blocks. The number of blocks depends on the desired confidence level and should be $O(\log(1/\delta))$. The median-of-means estimator at confidence level $\delta \in (0,1)$ is denoted by $\widehat{\mathrm{MM}}_\delta((Y_t)_{t=1}^n)$.

Lemma 4 (Bubeck et al. 2013). Let $Y_1, Y_2, \ldots, Y_n$ be a sequence of independent and identically distributed random variables with mean $\mu$ and variance $\sigma^2 < \infty$. Then

$$P\left(\left|\widehat{\mathrm{MM}}_\delta((Y_t)_{t=1}^n) - \mu\right| \ge C_1\sqrt{\frac{\sigma^2}{n}\log\left(\frac{C_2}{\delta}\right)}\right) \le \delta,$$

where $C_1 = \sqrt{12 \cdot 16}$ and $C_2 = \exp(1/8)$ are universal constants.
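A minimal implementation of the median-of-means estimator follows; the block count tracks the $O(\log(1/\delta))$ prescription of Lemma 4, with the exact constant an assumption of this sketch.

```python
import numpy as np

def median_of_means(y, delta):
    """Median-of-means estimate of the mean of y at confidence level delta.
    Uses m = O(log(1/delta)) blocks; the constant 8 mirrors the block count
    in Bubeck et al. [2013] but is an implementation choice here."""
    y = np.asarray(y, dtype=float)
    m = min(len(y), max(1, int(np.ceil(8.0 * np.log(np.exp(1 / 8) / delta)))))
    blocks = np.array_split(y, m)           # m blocks of (nearly) equal size
    return float(np.median([b.mean() for b in blocks]))
```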
Upper confidence bounds. The new algorithm is a generalisation of UCB, but with optimistic estimates of the mean and variance using confidence bounds about the median-of-means estimator. Let $\delta \in (0,1)$ and $Y_1, Y_2, \ldots, Y_t$ be a sequence of independent and identically distributed random variables with mean $\mu$, variance $\sigma^2$ and kurtosis $\kappa \le \kappa^*$. Furthermore, let

$$\tilde\mu((Y_s)_{s=1}^t, \delta) = \sup\left\{\mu \in \mathbb{R} : \mu \le \widehat{\mathrm{MM}}_\delta((Y_s)_{s=1}^t) + C_1\sqrt{\frac{\tilde\sigma^2((Y_s)_{s=1}^t, \mu, \delta)}{t}\log\left(\frac{C_2}{\delta}\right)}\right\},$$

where

$$\tilde\sigma^2((Y_s)_{s=1}^t, \mu, \delta) = \frac{\widehat{\mathrm{MM}}_\delta\big(((Y_s - \mu)^2)_{s=1}^t\big)}{\max\left\{0,\; 1 - C_1\sqrt{\frac{\kappa^* - 1}{t}\log\left(\frac{C_2}{\delta}\right)}\right\}}.$$

Note that $\tilde\mu((Y_s)_{s=1}^t, \delta)$ may be (positive) infinity if $t$ is insufficiently large. The computation of $\tilde\mu(\cdot)$ seems non-trivial and is discussed in the summary at the end of the paper, where a roughly equivalent and efficiently computable alternative is given. The following two lemmas show that $\tilde\mu$ is indeed optimistic with high probability, and also that it concentrates with reasonable speed around the true mean.

Lemma 5. $P\left(\tilde\mu((Y_s)_{s=1}^t, \delta) \le \mu\right) \le 2\delta$.

Proof. By Lemma 4 and the fact that $\mathbb{V}[(Y_s - \mu)^2] = \sigma^4(\kappa - 1) \le \sigma^4(\kappa^* - 1)$, it holds with probability at least $1 - \delta$ that $\tilde\sigma^2((Y_s)_{s=1}^t, \mu, \delta) \ge \sigma^2$. Another application of Lemma 4 along with a union bound ensures that with probability at least $1 - 2\delta$,

$$\widehat{\mathrm{MM}}_\delta((Y_s)_{s=1}^t) \ge \mu - C_1\sqrt{\frac{\sigma^2}{t}\log\left(\frac{C_2}{\delta}\right)} \ge \mu - C_1\sqrt{\frac{\tilde\sigma^2((Y_s)_{s=1}^t, \mu, \delta)}{t}\log\left(\frac{C_2}{\delta}\right)}.$$

Therefore with probability at least $1 - 2\delta$ the true mean $\mu$ is in the set of which $\tilde\mu$ is the supremum, and in this case $\tilde\mu \ge \mu$ as required. □

Lemma 6. Let $\delta_t$ be monotone decreasing and $\tilde\mu_t = \tilde\mu((Y_s)_{s=1}^t, \delta_t)$. Then there exists a universal constant $C_3 > 0$ such that for any $\Delta > 0$,

$$\sum_{t=1}^n P(\tilde\mu_t \ge \mu + \Delta) \le C_3\max\left\{\kappa^* - 1,\; \frac{\sigma^2}{\Delta^2}\right\}\log\left(\frac{C_2}{\delta_n}\right) + 2\sum_{t=1}^n \delta_t.$$

Proof. First, by Lemma 4,

$$\sum_{t=1}^n P\left(\left|\widehat{\mathrm{MM}}_{\delta_t}((Y_s)_{s=1}^t) - \mu\right| \ge C_1\sqrt{\frac{\sigma^2}{t}\log\left(\frac{C_2}{\delta_t}\right)}\right) \le \sum_{t=1}^n \delta_t. \qquad (3)$$

Similarly,

$$\sum_{t=1}^n P\left(\left|\widehat{\mathrm{MM}}_{\delta_t}\big(((Y_s - \mu)^2)_{s=1}^t\big) - \sigma^2\right| \ge C_1\sigma^2\sqrt{\frac{\kappa^* - 1}{t}\log\left(\frac{C_2}{\delta_t}\right)}\right) \le \sum_{t=1}^n \delta_t. \qquad (4)$$

Suppose that $t$ is a round where all of the following hold:

(a) $\left|\widehat{\mathrm{MM}}_{\delta_t}((Y_s)_{s=1}^t) - \mu\right| < C_1\sqrt{\frac{\sigma^2}{t}\log\left(\frac{C_2}{\delta_t}\right)}$.

(b) $\left|\widehat{\mathrm{MM}}_{\delta_t}\big(((Y_s - \mu)^2)_{s=1}^t\big) - \sigma^2\right| < C_1\sigma^2\sqrt{\frac{\kappa^* - 1}{t}\log\left(\frac{C_2}{\delta_t}\right)}$.

(c) $t \ge 16C_1^2(\kappa^* - 1)\log\left(\frac{C_2}{\delta_t}\right)$.

Abbreviating $\tilde\sigma_t^2 = \tilde\sigma^2((Y_s)_{s=1}^t, \tilde\mu_t, \delta_t)$ and $\hat\mu_t = \widehat{\mathrm{MM}}_{\delta_t}((Y_s)_{s=1}^t)$,

$$\tilde\sigma_t^2 = \frac{\widehat{\mathrm{MM}}_{\delta_t}\big(((Y_s - \tilde\mu_t)^2)_{s=1}^t\big)}{1 - C_1\sqrt{\frac{\kappa^* - 1}{t}\log\frac{C_2}{\delta_t}}} \le 2\,\widehat{\mathrm{MM}}_{\delta_t}\big(((Y_s - \tilde\mu_t)^2)_{s=1}^t\big) \le 4\,\widehat{\mathrm{MM}}_{\delta_t}\big(((Y_s - \mu)^2)_{s=1}^t\big) + 4(\tilde\mu_t - \mu)^2$$
$$\le 4\,\widehat{\mathrm{MM}}_{\delta_t}\big(((Y_s - \mu)^2)_{s=1}^t\big) + 8(\tilde\mu_t - \hat\mu_t)^2 + 8(\hat\mu_t - \mu)^2 < 4\sigma^2 + 4C_1\sigma^2\sqrt{\frac{\kappa^* - 1}{t}\log\frac{C_2}{\delta_t}} + \frac{8C_1^2(\sigma^2 + \tilde\sigma_t^2)(\kappa^* - 1)}{t}\log\frac{C_2}{\delta_t} \le \frac{11}{2}\sigma^2 + \frac{\tilde\sigma_t^2}{2},$$

where the first inequality follows from (c), the second since $(x - y)^2 \le 2x^2 + 2y^2$ and the fact that $\widehat{\mathrm{MM}}_\delta((aY_s + b)_{s=1}^t) = a\,\widehat{\mathrm{MM}}_\delta((Y_s)_{s=1}^t) + b$. The third inequality again uses $(x - y)^2 \le 2x^2 + 2y^2$, while the last uses the definition of $\tilde\mu_t$ and (a,b). Therefore $\tilde\sigma_t^2 \le 11\sigma^2$, which means that if (a,b,c) hold and additionally

(d) $t \ge \dfrac{19C_1^2\sigma^2}{\Delta^2}\log\left(\dfrac{C_2}{\delta_n}\right)$,

then

$$|\tilde\mu_t - \mu| \le |\tilde\mu_t - \hat\mu_t| + |\hat\mu_t - \mu| < C_1\sqrt{\frac{\tilde\sigma_t^2}{t}\log\frac{C_2}{\delta_n}} + C_1\sqrt{\frac{\sigma^2}{t}\log\frac{C_2}{\delta_n}} \le C_1\sqrt{\frac{11\sigma^2}{t}\log\frac{C_2}{\delta_n}} + C_1\sqrt{\frac{\sigma^2}{t}\log\frac{C_2}{\delta_n}} \le \Delta.$$

Combining this with (3) and (4) and choosing $C_3 = 19C_1^2$ completes the result. □

Algorithm and Proof of Theorem 2. Let $\delta_t = 1/(t^2\log(1+t))$ and $\tilde\mu_i(t) = \tilde\mu((X_s)_{s \in [t]: A_s = i}, \delta_t)$. In each round the algorithm chooses $A_t = \arg\max_{i \in [k]} \tilde\mu_i(t-1)$, where ties are broken arbitrarily.

Proof of Theorem 2. Assume without loss of generality that $\mu_1 = \mu^*$. Then suboptimal arm $i$ is only played in round $t$ if either $\tilde\mu_1(t-1) \le \mu_1$ or $\tilde\mu_i(t-1) \ge \mu_1$. Therefore

$$\mathbb{E}[T_i(n)] \le \sum_{t=1}^n P(\tilde\mu_1(t-1) \le \mu_1) + \sum_{t=1}^n P(\tilde\mu_i(t-1) \ge \mu_1 \text{ and } A_t = i). \qquad (5)$$

The two sums are bounded using Lemmas 5 and 6 respectively:

$$\sum_{t=1}^n P(\tilde\mu_1(t-1) \le \mu_1) \le \sum_{t=1}^n\sum_{u=1}^t P(\tilde\mu_1(t-1) \le \mu_1 \text{ and } T_1(t-1) = u) \le 2\sum_{t=1}^n t\,\delta_t = o(\log(n)), \quad \text{(by Lem. 5)}$$

$$\sum_{t=1}^n P(\tilde\mu_i(t-1) \ge \mu_1 \text{ and } A_t = i) \le \sum_{t=1}^n P(\tilde\mu_i(t-1) \ge \mu_i + \Delta_i) \le C_3\max\left\{\kappa^* - 1, \frac{\sigma_i^2}{\Delta_i^2}\right\}\log\left(\frac{C_2}{\delta_n}\right) + 2\sum_{t=1}^n \delta_t = O(\log(n)). \quad \text{(by Lem. 6)}$$

And the result follows by substituting the above bounds into Eq. (5) and then into the regret decomposition $R_n = \sum_{i=1}^k \Delta_i\,\mathbb{E}[T_i(n)]$. □
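The following sketch assembles the full bandit loop. For computability it uses the simplified plug-in index suggested in the summary (§4) rather than the exact supremum defining $\tilde\mu$, so it illustrates the algorithm's structure rather than the exact object analysed in Theorem 2. It reuses `median_of_means` from the earlier sketch.

```python
import numpy as np

C1, C2 = np.sqrt(12 * 16), np.exp(1 / 8)     # constants from Lemma 4

def index(rewards, t, kappa_star):
    """Optimistic index for one arm from a plug-in mean/variance estimate.
    Illustrative simplification of mu-tilde, not the exact analysed index."""
    s = len(rewards)
    delta = 1.0 / (t ** 2 * np.log(1 + t))   # confidence level delta_t
    L = np.log(C2 / delta)
    mu_hat = median_of_means(rewards, delta)
    var_hat = median_of_means((np.asarray(rewards) - mu_hat) ** 2, delta)
    denom = 1.0 - C1 * np.sqrt((kappa_star - 1) * L / s)
    if denom <= 0:                           # too few samples: index is +inf
        return np.inf
    return mu_hat + C1 * np.sqrt((max(var_hat, 0.0) / denom) * L / s)

def scale_free_ucb(arms, n, kappa_star):
    """Play each arm (a list of reward-sampling callables) once, then pull
    the argmax of the optimistic index in every remaining round."""
    history = [[arm()] for arm in arms]
    for t in range(len(arms) + 1, n + 1):
        a = int(np.argmax([index(h, t, kappa_star) for h in history]))
        history[a].append(arms[a]())
    return history
```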
3 Lower bound

Let $\mathcal{H}_{\kappa^*} = \{\nu : \nu \text{ has kurtosis less than } \kappa^*\}$ be the class of all distributions with kurtosis bounded by $\kappa^*$. Following the nomenclature of Lai and Robbins [1985], a bandit strategy is called consistent over $\mathcal{H}$ if $R_n = o(n^p)$ for all $p \in (0,1)$ and all bandits $(\nu_i)_i$ with $\nu_i \in \mathcal{H}_{\kappa^*}$ for all $i$. The next theorem shows that the upper bound derived in the previous section is nearly tight up to constant factors. Let $\mathcal{H}$ be a family of distributions and let $(\nu_i)_i$ be a bandit with $\nu_i \in \mathcal{H}$ for all $i$. Burnetas and Katehakis [1996] showed that for any consistent strategy, for all $i \in [k]$:

$$\liminf_{n\to\infty} \frac{\mathbb{E}[T_i(n)]}{\log(n)} \ge \left(\inf\left\{\mathrm{KL}(\nu_i, \nu_i') : \nu_i' \in \mathcal{H} \text{ and } \mathbb{E}_{X\sim\nu_i'}[X] > \mu^*\right\}\right)^{-1}. \qquad (6)$$

In parameterised families of distributions, the optimisation problem can often be evaluated analytically (e.g., Bernoulli, Gaussian with known variance, Gaussian with unknown variance, Exponential). For non-parametric families the calculation is much more challenging. The following theorem takes the first steps towards understanding this problem for the class of distributions $\mathcal{H}_{\kappa^*}$ for $\kappa^* \ge 7/2$.

Theorem 7. Let $\kappa^* \ge 7/2$ and $\Delta > 0$ and $\nu \in \mathcal{H}_{\kappa^*}$ with mean $\mu$, variance $\sigma^2 > 0$ and kurtosis $\kappa$. Then for appropriately chosen universal constants $C, C' > 0$,

$$\inf\left\{\mathrm{KL}(\nu, \nu') : \nu' \in \mathcal{H}_{\kappa^*} \text{ and } \mathbb{E}_{X\sim\nu'}[X] > \mu + \Delta\right\} \le \frac{7}{5}\min\left\{\frac{\Delta}{\sigma},\ \frac{1}{\kappa^*}\right\}.$$

If additionally it holds that $\kappa + C'(\Delta/\sigma)\kappa^{1/2}(\kappa + 1) \le \kappa^*$, then

$$\inf\left\{\mathrm{KL}(\nu, \nu') : \nu' \in \mathcal{H}_{\kappa^*} \text{ and } \mathbb{E}_{X\sim\nu'}[X] > \mu + \Delta\right\} \le C\,\frac{\Delta^2}{\sigma^2}.$$

Therefore provided that $\nu \in \mathcal{H}_{\kappa^*}$ is not too close to the boundary of $\mathcal{H}_{\kappa^*}$, in the sense that its kurtosis is not too close to $\kappa^*$, the lower bound derived from Theorem 7 and Eq. (6) matches the upper bound up to constant factors. This condition is probably necessary because distributions like the Bernoulli with kurtosis close to $\kappa^*$ have barely any wiggle room to increase the mean without also increasing the kurtosis.

Proof of Theorem 7. Let $\tilde\Delta = \Delta + \epsilon$ for small $\epsilon > 0$. Assume without loss of generality that $\nu$ is centered and has variance $\sigma^2 = 1$, which can always be achieved by shifting and scaling (neither affects the kurtosis or the relative entropy). The first part of the claim is established by considering the perturbed distribution obtained by adding a Bernoulli "outlier". Let $X$ be a random variable sampled from $\nu$ and $B$ be a Bernoulli with parameter $p = \min\{\tilde\Delta, 1/\kappa^*\}$. Let $Z = X + Y$ where $Y = \tilde\Delta B/p$. Then $\mathbb{E}[Z] = \tilde\Delta > \Delta$ and

$$\mathrm{Kurt}[Z] = 3 + \frac{\kappa - 3 + \mathbb{V}[Y]^2(\mathrm{Kurt}[Y] - 3)}{(1 + \mathbb{V}[Y])^2} = 3 + \frac{\kappa - 3 + \left(\frac{(1-p)\tilde\Delta^2}{p}\right)^2\frac{1 - 6p(1-p)}{p(1-p)}}{\left(1 + \frac{(1-p)\tilde\Delta^2}{p}\right)^2} \le 3 + \frac{\kappa^* - 3 + \left(\frac{(1-p)\tilde\Delta^2}{p}\right)^2\frac{1 - 6p(1-p)}{p(1-p)}}{\left(1 + \frac{(1-p)\tilde\Delta^2}{p}\right)^2} \le \kappa^*,$$

where the first inequality used Lemma 3 and the final inequality follows from simple case-based analysis, calculus and the assumption that $\kappa^* \ge 7/2$ (see Lemma 9 in the appendix). Let $\nu' = \mathcal{L}(Z)$ be the law of $Z$. Then

$$\mathrm{KL}(\nu, \nu') = \int_{\mathbb{R}} \log\frac{d\nu}{d\nu'}\,d\nu \le \int_{\mathbb{R}} \log\frac{1}{1-p}\,d\nu = \log\frac{1}{1-p} \le \frac{7}{5}\,p = \frac{7}{5}\min\left\{\tilde\Delta,\ \frac{1}{\kappa^*}\right\}.$$

Taking the limit as $\epsilon$ tends to 0 completes the proof of the first part of the theorem.
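The Bernoulli-outlier construction in the first part of the proof can be checked numerically. The sketch below (ours; the distribution and values are illustrative) perturbs a standard Gaussian and verifies that the mean increases by roughly $\Delta$ while the kurtosis stays below $\kappa^*$.

```python
import numpy as np

rng = np.random.default_rng(1)

def kurtosis(z):
    z = z - z.mean()
    return (z ** 4).mean() / (z ** 2).mean() ** 2

# X standard normal (kappa = 3), kappa_star = 7/2, target mean shift Delta.
kappa_star, Delta = 3.5, 0.05
p = min(Delta, 1 / kappa_star)          # Bernoulli outlier probability
N = 2_000_000
X = rng.normal(0.0, 1.0, N)
B = rng.random(N) < p
Z = X + Delta * B / p                   # perturbed variable Z = X + Delta*B/p

print(f"mean shift  ~ {Z.mean():.3f} (target {Delta})")
print(f"kurtosis(Z) ~ {kurtosis(Z):.3f} (must stay <= {kappa_star})")
```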
Moving onto the second claim, and using $C$ for a universal positive constant that changes from equation to equation. Let $a > 0$ be a constant to be chosen later and $A = \{x : x^2 \le a\kappa\}$ and $\bar A = \mathbb{R} \setminus A$. Define the alternative measure $\nu'(E) = \int_E (1 + g)\,d\nu$ where $g(x) = (\alpha + \beta x)\,\mathbb{1}\{x \in A\}$ for some constants $\alpha$ and $\beta$ chosen so that

$$\int_{\mathbb{R}} g(x)\,d\nu(x) = \alpha\int_A d\nu(x) + \beta\int_A x\,d\nu(x) = 0, \qquad \int_{\mathbb{R}} g(x)x\,d\nu(x) = \alpha\int_A x\,d\nu(x) + \beta\int_A x^2\,d\nu(x) = \tilde\Delta.$$

Solving for $\alpha$ and $\beta$ shows that

$$\alpha = \frac{-\tilde\Delta\int_A x\,d\nu(x)}{\nu(A)\int_A x^2\,d\nu(x) - \left(\int_A x\,d\nu(x)\right)^2} \qquad\text{and}\qquad \beta = \frac{\tilde\Delta\,\nu(A)}{\nu(A)\int_A x^2\,d\nu(x) - \left(\int_A x\,d\nu(x)\right)^2}.$$

This implies that $\int_{\mathbb{R}} d\nu'(x) = 1$ and $\int_{\mathbb{R}} x\,d\nu'(x) = \tilde\Delta > \Delta$. It remains to show that $\nu'$ is a probability measure with kurtosis bounded by $\kappa^*$. That $\nu'$ is a probability measure will follow from the positivity of $1 + g(\cdot)$. The first step is to control each of the terms appearing in the definitions of $\alpha$ and $\beta$. By Chebyshev's inequality, $\nu(\bar A) = \nu(\{x : x^2 \ge a\kappa\}) \le 1/(\kappa a^2)$, and by the Cauchy–Schwarz inequality,

$$\int_A x^2\,d\nu(x) = 1 - \int_{\bar A} x^2\,d\nu(x) \ge 1 - \sqrt{\kappa\,\nu(\bar A)} \ge 1 - \frac{1}{a}.$$

Similarly, since $\nu$ is centered,

$$\left|\int_A x\,d\nu(x)\right| = \left|\int_{\bar A} x\,d\nu(x)\right| \le \sqrt{\nu(\bar A)} \le \frac{1}{a\sqrt{\kappa}}.$$

Therefore by choosing $a = 2$ and using the fact that the kurtosis is always larger than 1,

$$|\alpha| \le \frac{4\tilde\Delta}{\sqrt{\kappa}} \qquad\text{and}\qquad |\beta| \le 6\tilde\Delta.$$

Now $g(x)$ is a linear function supported on the compact set $A$, so

$$\max_{x\in\mathbb{R}}|g(x)| = \max\left\{|g(\sqrt{a\kappa})|,\ |g(-\sqrt{a\kappa})|\right\} \le |\alpha| + \sqrt{2\kappa}\,|\beta| \le \left(\frac{4}{\sqrt{\kappa}} + 6\sqrt{2\kappa}\right)\tilde\Delta \le \frac{1}{2},$$

where the last inequality follows by assuming that $\tilde\Delta \le \sqrt{\kappa}/(4(2 + 3\sqrt{2}\kappa)) = O(\kappa^{-1/2})$, which is reasonable without loss of generality, since if $\tilde\Delta$ is larger than this quantity, then we would prefer the bound that depends on $\kappa^*$ derived in the first part of the proof. The relative entropy between $\nu$ and $\nu'$ is bounded by

$$\mathrm{KL}(\nu, \nu') \le \chi^2(\nu, \nu') = \int_{\mathbb{R}}\left(\frac{d\nu(x)}{d\nu'(x)} - 1\right)^2 d\nu'(x) = \int_A \frac{g(x)^2}{1 + g(x)}\,d\nu(x) \le 2\int_A g(x)^2\,d\nu(x) \le 4\alpha^2 + 4\beta^2\int_A x^2\,d\nu(x) \le 4\cdot\frac{16\tilde\Delta^2}{\kappa} + 4\cdot 36\tilde\Delta^2 \le C\tilde\Delta^2.$$

In order to bound the kurtosis we need to evaluate the moments:

$$\int_{\mathbb{R}} x^2\,d\nu' = \int_{\mathbb{R}} x^2\,d\nu + \int_A g(x)x^2\,d\nu = 1 + \alpha\int_A x^2\,d\nu(x) + \beta\int_A x^3\,d\nu(x) \le 1 + C\tilde\Delta\sqrt{\kappa},$$
$$\int_{\mathbb{R}} x^2\,d\nu' \ge 1 - C\tilde\Delta\sqrt{\kappa},$$
$$\int_{\mathbb{R}} x^4\,d\nu' = \int_{\mathbb{R}} x^4\,d\nu + \int_A g(x)x^4\,d\nu = \kappa + \alpha\int_A x^4\,d\nu(x) + \beta\int_A x^5\,d\nu(x) \le \kappa\left(1 + C\tilde\Delta\sqrt{\kappa}\right),$$
$$\left|\int_{\mathbb{R}} x^3\,d\nu'(x)\right| \le \sqrt{\int_{\mathbb{R}} x^2\,d\nu'(x)\int_{\mathbb{R}} x^4\,d\nu'(x)} \le C\sqrt{\kappa}.$$

Therefore if $\kappa'$ is the kurtosis of $\nu'$, then

$$\kappa' = \frac{\int_{\mathbb{R}}(x - \tilde\Delta)^4\,d\nu'(x)}{\left(\int_{\mathbb{R}} x^2\,d\nu'(x) - \tilde\Delta^2\right)^2} = \frac{\int_{\mathbb{R}} x^4\,d\nu'(x) - 3\tilde\Delta^4 + 6\tilde\Delta^2\int_{\mathbb{R}} x^2\,d\nu'(x) - 4\tilde\Delta\int_{\mathbb{R}} x^3\,d\nu'(x)}{\left(1 - \tilde\Delta^2 + \alpha\int_A x^2\,d\nu(x) + \beta\int_A x^3\,d\nu(x)\right)^2}$$
$$\le \frac{\kappa\left(1 + C\tilde\Delta\kappa^{1/2}\right) + 6\tilde\Delta^2\left(1 + C\tilde\Delta\kappa^{1/2}\right) + C\tilde\Delta\kappa^{1/2}}{\left(1 - C\tilde\Delta\kappa^{1/2} - \tilde\Delta^2\right)^2} \le \frac{\kappa + C\tilde\Delta\kappa^{1/2}(\kappa + 1)}{1 - C\tilde\Delta\kappa^{1/2}} \le \kappa + C\tilde\Delta\kappa^{1/2}(\kappa + 1).$$

Therefore $\kappa' \le \kappa^*$ provided $\tilde\Delta$ is sufficiently small (as quantified in the theorem statement), which after taking the limit as $\epsilon \to 0$ completes the proof. □

4 Summary

The assumption of finite kurtosis generalises the parametric Gaussian assumption to a comparable non-parametric setup with a similar basic structure. Of course there are several open questions.

Optimal constants. The leading constants in the main results (Theorem 2 and Theorem 7) are certainly quite loose. Deriving the optimal form of the regret is an interesting challenge, with both lower and upper bounds appearing quite non-trivial. It may be necessary to resort to an implicit analysis showing that (6) is (or is not) achievable when $\mathcal{H}$ is the class of distributions with kurtosis bounded by some $\kappa^*$. Even then, constructing an efficient algorithm would remain a challenge. Certainly what has been presented here is quite far from optimal. At the very least the median-of-means estimator needs to be replaced, or the analysis improved. An excellent candidate is Catoni's estimator [Catoni, 2012], which is slightly more complicated than the median-of-means, but also comes with smaller constants and could be plugged into the algorithm with very little effort. An alternative approach is to use the theory of self-normalised processes [Peña et al., 2008], but even this seems to lead to suboptimal constants.
For the lower bound, there appears to be almost no work on the explicit form of the lower bounds presented by Burnetas and Katehakis [1996] in interesting nonparametric classes beyond rewards with bounded or semi-bounded support [Honda and Takemura, 2010, 2015].

Absorbing other improvements. There has recently been a range of improvements to the confidence level for the classical upper confidence bound algorithms that shave logarithmic terms from the worst-case regret or improve the lower-order terms in the finite-time bounds [Audibert and Bubeck, 2009, Lattimore, 2015]. Many of these enhancements can be incorporated into the algorithm presented here, which may lead to practical and theoretical improvements.

Computation complexity. The main challenge is the computation of the index, which as written seems challenging. The easiest solution is to change the algorithm slightly by estimating

$$\hat\mu_i(t) = \widehat{\mathrm{MM}}_{\delta_t}\big((X_s)_{s\in[t]:A_s=i}\big) \qquad\text{and}\qquad \hat\sigma_i^2(t) = \widehat{\mathrm{MM}}_{\delta_t}\big((X_s^2)_{s\in[t]:A_s=i}\big) - \hat\mu_i(t)^2.$$

Then an upper confidence bound on $\mu_i$ is easily derived from Lemma 4 and the rest of the analysis goes through in about the same way. Naively the computational complexity of the above is $\Theta(t)$ in round $t$, which would lead to a running time over $n$ rounds of $\Theta(n^2)$. Provided the number of buckets used between rounds $t$ and $t+1$ is the same, then the median-of-means estimator can be updated incrementally in $O(B_t)$ time, where $B_t$ is the number of buckets. Now $B_t = O(\log(1/\delta_t)) = O(\log(t))$, so there are at most $O(\log(n))$ changes over $n$ rounds. Therefore the total computation is $O(nk + n\log(n))$.

Comparison to Bernoulli. Table 2 shows that the kurtosis for a Bernoulli random variable with mean $\mu$ is $\kappa = O(1/(\mu(1-\mu)))$, which is obviously not bounded as $\mu$ tends towards the boundaries. The optimal asymptotic regret for the Bernoulli case is $\lim_{n\to\infty} R_n/\log(n) = \sum_{i:\Delta_i>0} \Delta_i/d(\mu_i, \mu^*)$. The interesting differences occur near the boundary of the parameter space. Suppose that $\mu_i \approx 0$ for some arm $i$ and $\mu^* > 0$ is close to zero. An easy calculation shows that $d(\mu_i, \mu^*) \approx \log(1/(1-\mu^*)) \approx \Delta_i$. Therefore

$$\liminf_{n\to\infty} \frac{\mathbb{E}[T_i(n)]}{\log(n)} \ge \frac{1}{\log(1/(1-\mu^*))} \approx \frac{1}{\Delta_i}.$$

Here we see an algorithm enjoying logarithmic regret on a class with infinite kurtosis! But this is a special case and is not possible in general. The reason is that the structure of the hypothesis class allows strategies to (essentially) estimate the kurtosis with reasonable accuracy and anticipate outliers more or less depending on the data observed so far. Another way of saying it is that when the kurtosis is actually small, the algorithms can learn this fact by examining the empirical mean.

A Technical calculations

This section completes some of the calculations required in the proof of Theorem 7.

Lemma 8. Let $\kappa^* \ge 7/2$ and $f(x) = 3 + (\kappa^* - 3 + x)/(1+x)^2$. Then $f(x) \le \kappa^*$ for all $x \ge 0$.

Proof. Clearly $f(0) = \kappa^*$ and for $\kappa^* \ge 7/2$ and $x \ge 0$,

$$f'(x) = \frac{1}{(1+x)^2}\left(1 - \frac{2(\kappa^* - 3 + x)}{1+x}\right) \le 0.$$

Therefore $f(x) = \kappa^* + \int_0^x f'(y)\,dy \le \kappa^*$. □

Lemma 9. If $\kappa^* \ge 7/2$ and $p = \min\{\tilde\Delta, 1/\kappa^*\}$, then

$$3 + \frac{\kappa^* - 3 + \left(\frac{(1-p)\tilde\Delta^2}{p}\right)^2\frac{1 - 6p(1-p)}{p(1-p)}}{\left(1 + \frac{(1-p)\tilde\Delta^2}{p}\right)^2} \le \kappa^*.$$

Proof. Suppose that $p = \tilde\Delta$. Then $p \le 1/\kappa^* \le 1$ and

$$3 + \frac{\kappa^* - 3 + \left(\frac{(1-p)\tilde\Delta^2}{p}\right)^2\frac{1 - 6p(1-p)}{p(1-p)}}{\left(1 + \frac{(1-p)\tilde\Delta^2}{p}\right)^2} = 3 + \frac{\kappa^* - 3 + \tilde\Delta(1-\tilde\Delta)\left(1 - 6\tilde\Delta(1-\tilde\Delta)\right)}{\left(1 + \tilde\Delta(1-\tilde\Delta)\right)^2} \le 3 + \frac{\kappa^* - 3 + \tilde\Delta(1-\tilde\Delta)}{\left(1 + \tilde\Delta(1-\tilde\Delta)\right)^2} \le \kappa^*,$$

where the last inequality follows from Lemma 8. Now suppose that $p = 1/\kappa^*$.
Then, writing $q = (1-p)\tilde\Delta^2/p$,

$$3 + \frac{\kappa^* - 3 + q^2\,\frac{1 - 6p(1-p)}{p(1-p)}}{(1 + q)^2} \le 3 + \frac{\kappa^* - 3 + q^2\,\frac{1 - 6p(1-p)}{p(1-p)}}{1 + q^2} \le 3 + \max\left\{\kappa^* - 3,\ \frac{1 - 6p(1-p)}{p(1-p)}\right\} \le \max\left\{\kappa^*,\ \frac{1}{p(1-p)} - 3\right\} \le \kappa^*,$$

where the first inequality follows since $(a+b)^2 \ge a^2 + b^2$ for $a, b \ge 0$, the second since the average is less than the maximum, the third since $3 + \frac{1 - 6p(1-p)}{p(1-p)} = \frac{1}{p(1-p)} - 3$, and the last since with $p = 1/\kappa^*$ we have $\frac{1}{p(1-p)} - 3 = \frac{\kappa^{*2}}{\kappa^* - 1} - 3 \le \kappa^*$ whenever $\kappa^* \ge 3/2$, and $\kappa^* \ge 7/2 > 3/2$. □

References

Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 20–29. ACM, 1996.
Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. In Proceedings of Conference on Learning Theory (COLT), pages 217–226, 2009.
Sébastien Bubeck, Nicolò Cesa-Bianchi, and Gábor Lugosi. Bandits with heavy tail. Information Theory, IEEE Transactions on, 59(11):7711–7717, 2013.
Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning. Now Publishers Incorporated, 2012. ISBN 9781601986269.
Apostolos N. Burnetas and Michael N. Katehakis. Optimal adaptive policies for sequential allocation problems. Advances in Applied Mathematics, 17(2):122–142, 1996.
Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, and Gilles Stoltz. Kullback–Leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3):1516–1541, 2013.
Olivier Catoni. Challenging the empirical mean and empirical variance: a deviation study. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, volume 48, pages 1148–1185. Institut Henri Poincaré, 2012.
Wesley Cowan and Michael N. Katehakis. An asymptotically optimal policy for uniform bandits of unknown support. arXiv preprint arXiv:1505.01918, 2015.
Wesley Cowan, Junya Honda, and Michael N. Katehakis. Normal bandits of unknown means and variances: Asymptotic optimality, finite horizon regret bounds, and a solution to an open problem. arXiv preprint arXiv:1504.05823v2, 2015.
Lawrence T. DeCarlo. On the meaning and use of kurtosis. Psychological Methods, 2(3):292, 1997.
Junya Honda and Akimichi Takemura. An asymptotically optimal bandit algorithm for bounded support models. In Proceedings of Conference on Learning Theory (COLT), pages 67–79, 2010.
Junya Honda and Akimichi Takemura. Non-asymptotic analysis of a new bandit algorithm for semi-bounded rewards. Journal of Machine Learning Research, 16:3721–3756, 2015.
Michael N. Katehakis and Herbert Robbins. Sequential choice from several populations. Proceedings of the National Academy of Sciences of the United States of America, 92(19):8584, 1995.
Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
Tor Lattimore. Optimally confident UCB: Improved regret for finite-armed bandits. arXiv preprint arXiv:1507.07880, 2015.
Victor H. Peña, Tze Leung Lai, and Qi-Man Shao. Self-normalized processes: Limit theory and Statistical Applications. Springer Science & Business Media, 2008.
Learning Multiple Tasks with Multilinear Relationship Networks

Mingsheng Long, Zhangjie Cao, Jianmin Wang, Philip S. Yu
School of Software, Tsinghua University, Beijing 100084, China
{mingsheng,jimwang}@tsinghua.edu.cn [email protected] [email protected]

Abstract

Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks. Since deep features eventually transition from general to specific along deep networks, a fundamental problem of multi-task learning is how to exploit the task relatedness underlying parameter tensors and improve feature transferability in the multiple task-specific layers. This paper presents Multilinear Relationship Networks (MRN) that discover the task relationships based on novel tensor normal priors over parameter tensors of multiple task-specific layers in deep convolutional networks. By jointly learning transferable features and multilinear relationships of tasks and features, MRN is able to alleviate the dilemma of negative-transfer in the feature layers and under-transfer in the classifier layer. Experiments show that MRN yields state-of-the-art results on three multi-task learning datasets.

1 Introduction

Supervised learning machines trained with limited labeled samples are prone to overfitting, while manual labeling of sufficient training data for new domains is often prohibitive. Thus it is imperative to design versatile algorithms for reducing the labeling consumption, typically by leveraging off-the-shelf labeled data from relevant tasks. Multi-task learning is based on the idea that the performance of one task can be improved using related tasks as inductive bias [4]. Knowing the task relationship should enable the transfer of shared knowledge from relevant tasks such that only task-specific features need to be learned. This fundamental idea of task relatedness has motivated a variety of methods, including multi-task feature learning that learns a shared feature representation [1, 2, 6, 5, 23], and multi-task relationship learning that models inherent task relationship [10, 14, 29, 31, 15, 17, 8].

Learning inherent task relatedness is a hard problem, since the training data of different tasks may be sampled from different distributions and fitted by different models. Without prior knowledge on the task relatedness, the distribution shift may pose a major difficulty in transferring knowledge across different tasks. Unfortunately, if cross-task knowledge transfer is impossible, then we will overfit each task due to the limited amount of labeled data. One way to circumvent this dilemma is to use an external data source, e.g. ImageNet, to learn transferable features through which the shift in the inductive biases can be reduced such that different tasks can be correlated more effectively. This idea has motivated some of the latest deep learning methods for learning multiple tasks [25, 22, 7, 27], which learn a shared representation in the feature layers and multiple independent classifiers in the classifier layer. However, these deep multi-task learning methods do not explicitly model the task relationships. This may result in under-transfer in the classifier layer, as knowledge cannot be transferred across different classifiers. Recent research also reveals that deep features eventually transition from general to specific along the network, and feature transferability drops significantly in higher layers with increasing task dissimilarity [28], hence the sharing of all feature layers may be vulnerable to negative-transfer.
Therefore, it remains an open problem how to exploit the task relationship across different deep networks while improving the feature transferability in the task-specific layers of the deep networks.

This paper presents the Multilinear Relationship Network (MRN) for multi-task learning, which discovers the task relationships based on multiple task-specific layers of deep convolutional neural networks. Since the parameters of deep networks are natively tensors, the tensor normal distribution [21] is explored for multi-task learning, which is imposed as the prior distribution over network parameters of all task-specific layers to learn fine-grained multilinear relationships of tasks, classes and features. By jointly learning transferable features and multilinear relationships, MRN is able to circumvent the dilemma of negative-transfer in feature layers and under-transfer in the classifier layer. Experiments show that MRN learns fine-grained relationships and yields state-of-the-art results on standard benchmarks.

2 Related Work

Multi-task learning is a learning paradigm that learns multiple tasks jointly by exploiting the shared structures to improve generalization performance [4, 19] and mitigate manual labeling consumption. There are generally two categories of approaches: (1) multi-task feature learning, which learns a shared feature representation such that the distribution shift across different tasks can be reduced [1, 2, 6, 5, 23]; (2) multi-task relationship learning, which explicitly models the task relationship in the forms of task grouping [14, 15, 17] or task covariance [10, 29, 31, 8]. While these methods have achieved improved performance, they may be restricted by their shallow learning paradigm, which cannot embody task relationships by suppressing the task-specific variations in transferable features.

Deep networks learn abstract representations that disentangle and hide explanatory factors of variation behind data [3, 16]. Deep representations manifest invariant factors underlying different populations and are transferable across similar tasks [28]. Thus deep networks have been successfully explored for domain adaptation [11, 18] and multi-task learning [25, 22, 32, 7, 20, 27], where significant performance gains have been witnessed. Most multi-task deep learning methods [22, 32, 7] learn a shared representation in the feature layers and multiple independent classifiers in the classifier layer without inferring the task relationships. However, this may result in under-transfer in the classifier layer as knowledge cannot be adaptively propagated across different classifiers, while the sharing of all feature layers may still be vulnerable to negative-transfer in the feature layers, as the higher layers of deep networks are tailored to fit task-specific structures and may not be safely transferable [28]. This paper presents a multilinear relationship network based on novel tensor normal priors to learn transferable features and task relationships that mitigate both under-transfer and negative-transfer.

Our work contrasts with prior relationship learning [29, 31] and multi-task deep learning [22, 32, 7, 27] methods in two key aspects. (1) Tensor normal prior: our work is the first to explore the tensor normal distribution as a prior of network parameters in different layers to learn multilinear task relationships in deep networks. Since the network parameters of multiple tasks natively stack into high-order tensors, the previous matrix normal distribution [13] cannot be used as a prior of network parameters to learn task relationships. (2) Deep task relationship: we define the tensor normal prior on multiple task-specific layers, while previous deep learning methods do not learn the task relationships. To our knowledge, multi-task deep learning by tensor factorization [27] is the first work that tackles multi-task deep learning by tensor factorization, which learns a shared feature subspace from multilayer parameter tensors; in contrast, our work learns multilinear task relationships from multilayer parameter tensors.

3 Tensor Normal Distribution

3.1 Probability Density Function

The tensor normal distribution is a natural extension of the multivariate normal distribution and the matrix-variate normal distribution [13] to tensor-variate distributions. The multivariate normal distribution is the order-1 tensor normal distribution, and the matrix-variate normal distribution is the order-2 tensor normal distribution. Before defining the tensor normal distribution, we first introduce the notations and operations of order-$K$ tensors. An order-$K$ tensor is an element of the tensor product of $K$ vector spaces, each of which has its own coordinate system. A vector $\mathbf{x} \in \mathbb{R}^{d_1}$ is an order-1 tensor with dimension $d_1$. A matrix $\mathbf{X} \in \mathbb{R}^{d_1 \times d_2}$ is an order-2 tensor with dimensions $(d_1, d_2)$. An order-$K$ tensor $\mathcal{X} \in \mathbb{R}^{d_1 \times \cdots \times d_K}$ with dimensions $(d_1, \ldots, d_K)$ has elements $\{x_{i_1 \ldots i_K} : i_k = 1, \ldots, d_k\}$. The vectorization of $\mathcal{X}$ is the unfolding of the tensor into a vector, denoted by $\mathrm{vec}(\mathcal{X})$. The matricization of $\mathcal{X}$ is a generalization of vectorization, reordering the elements of $\mathcal{X}$ into a matrix. In this paper, to simplify the notation and describe the tensor relationships, we use the mode-$k$ matricization and denote by $\mathbf{X}_{(k)}$ the mode-$k$ matrix of tensor $\mathcal{X}$, where row $i$ of $\mathbf{X}_{(k)}$ contains all elements of $\mathcal{X}$ having the $k$-th index equal to $i$.
Since the network parameters of multiple tasks natively stack into high-order tensors, previous matrix normal distribution [13] cannot be used as priors of network parameters to learn task relationships. (2) Deep task relationship: we define the tensor normal prior on multiple task-specific layers, while previous deep learning methods do not learn the task relationships. To our knowledge, multi-task deep learning by tensor factorization [27] is the first work that tackles multi-task deep learning by tensor factorization, which learns shared feature subspace from multilayer parameter tensors; in contrast, our work learns multilinear task relationships from multiplayer parameter tensors. 3 3.1 Tensor Normal Distribution Probability Density Function Tensor normal distribution is a natural extension of multivariate normal distribution and matrix-variate normal distribution [13] to tensor-variate distributions. The multivariate normal distribution is order-1 tensor normal distribution, and matrix-variate normal distribution is order-2 tensor normal distribution. Before defining tensor normal distribution, we first introduce the notations and operations of order-K tensor. An order-K tensor is an element of the tensor product of K vector spaces, each of which has its own coordinate system. A vector x ? Rd1 is an order-1 tensor with dimension d1 . A matrix X ? Rd1 ?d2 is an order-2 tensor with dimensions (d1 , d2 ). A order-K tensor X ? Rd1 ?...?dK with dimensions (d1 , . . . , dK ) has elements {xi1 ...iK : ik = 1, . . . , dk }. The vectorization of X is unfolding the tensor into a vector, denoted by vec(X ). The matricization of X is a generalization of vectorization, reordering the elements of X into a matrix. In this paper, to simply the notations and 2 describe the tensor relationships, we use the mode-k matricization and denote by X(k) the mode-k matrix of tensor X , where row i of X(k) contains all elements of X having the k-th index equal to i. QK Consider an order-K tensor X ? Rd1 ?...?dK . Since we can vectorize X to a ( k=1 dk ) ? 1 vector, the normal distributionQ on a tensor X can be considered as a multivariate normal distribution on vector K vec(X ) of dimension k=1 dk . However, such an ordinary multivariate normal distribution ignores the special structure of X as a d1 ? . . . ? dK tensor, and as a result, the covariance characterizing the QK QK correlations across elements of X is of size ( k=1 dk ) ? ( k=1 dk ), which is often prohibitively large for modeling and estimation. To exploit the structure of X , tensor normal distributions assume QK QK that the ( k=1 dk ) ? ( k=1 dk ) covariance matrix ?1:K can be decomposed into the Kronecker product ?1:K = ?1 ? . . . ? ?K , and elements of X (in vectorization) follow the normal distribution, vec (X ) ? N (vec (M) , ?1 ? . . . ? ?K ) , (1) dk ?dk where ? is the Kronecker product, ?k ? R is a positive definite matrix indicating the covariance Q between the dk rows of the mode-k matricization X(k) of dimension dk ? ( k0 6=k dk0 ), and M is a mean tensor containing the expectation of each element of X . Due to the decomposition of covariance as the Kronecker product, the tensor normal distribution of an order-K tensor X , parameterized by mean tensor M and covariance matrices ?1 , . . . , ?K , can define probability density function as [21] !   K Y 1 ?d/2 ?d/(2dk ) T ?1 (2) p (x) = (2?) |?k | ? exp ? (x ? ?) ?1:K (x ? ?) , 2 k=1 where |?| is the determinant of a square matrix, and x = vec (X ) , ? = vec (M) , ?1:K = ?1 ?. . .? 
3.2 Maximum Likelihood Estimation

Consider a set of $n$ samples $\{\mathcal{X}_i\}_{i=1}^n$ where each $\mathcal{X}_i$ is an order-3 tensor generated by a tensor normal distribution as in Equation (2). The maximum likelihood estimate (MLE) of the mean tensor $\mathcal{M}$ is

$$\widehat{\mathcal{M}} = \frac{1}{n}\sum_{i=1}^n \mathcal{X}_i. \qquad (4)$$

The MLEs of the covariance matrices $\widehat\Sigma_1, \ldots, \widehat\Sigma_3$ are computed by iteratively updating these equations:

$$\widehat\Sigma_1 = \frac{1}{n d_2 d_3}\sum_{i=1}^n (\mathcal{X}_i - \widehat{\mathcal{M}})_{(1)}\left(\widehat\Sigma_3 \otimes \widehat\Sigma_2\right)^{-1}(\mathcal{X}_i - \widehat{\mathcal{M}})_{(1)}^\top,$$
$$\widehat\Sigma_2 = \frac{1}{n d_1 d_3}\sum_{i=1}^n (\mathcal{X}_i - \widehat{\mathcal{M}})_{(2)}\left(\widehat\Sigma_3 \otimes \widehat\Sigma_1\right)^{-1}(\mathcal{X}_i - \widehat{\mathcal{M}})_{(2)}^\top,$$
$$\widehat\Sigma_3 = \frac{1}{n d_1 d_2}\sum_{i=1}^n (\mathcal{X}_i - \widehat{\mathcal{M}})_{(3)}\left(\widehat\Sigma_2 \otimes \widehat\Sigma_1\right)^{-1}(\mathcal{X}_i - \widehat{\mathcal{M}})_{(3)}^\top. \qquad (5)$$

This flip-flop algorithm [21] is efficient, requiring only simple matrix manipulations, and its convergence is guaranteed. The covariance matrices $\widehat\Sigma_1, \ldots, \widehat\Sigma_3$ are not identifiable and the solutions maximizing the density function (2) are not unique; only the Kronecker product $\Sigma_1 \otimes \cdots \otimes \Sigma_K$ in (1) is identifiable.
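A direct transcription of the flip-flop updates (4)-(5) for order-3 samples is sketched below (our own illustration; the unfolding assumes the earlier-indices-fastest ordering, matching the Kronecker factor order in (5)).

```python
import numpy as np

def flip_flop_mle(X, n_iter=25):
    """MLE for an order-3 tensor normal model from samples X of shape
    (n, d1, d2, d3): mean by Eq. (4), covariances by the flip-flop updates (5)."""
    n = X.shape[0]
    dims = X.shape[1:]
    M = X.mean(axis=0)                              # Eq. (4)
    R = X - M                                       # residual tensors
    S = [np.eye(d) for d in dims]

    def unfold(T, k):                               # mode-k matricization
        return np.reshape(np.moveaxis(T, k, 0), (dims[k], -1), order="F")

    for _ in range(n_iter):
        for k in range(3):
            o = [j for j in range(3) if j != k]     # the other two modes
            # inverse Kronecker factor, higher-index mode first as in (5)
            Kinv = np.kron(np.linalg.inv(S[o[1]]), np.linalg.inv(S[o[0]]))
            acc = sum(unfold(R[i], k) @ Kinv @ unfold(R[i], k).T
                      for i in range(n))
            S[k] = acc / (n * dims[o[0]] * dims[o[1]])
    return M, S
```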
The $\ell$-th fc layer learns a nonlinear mapping $\mathbf{h}_n^{t,\ell} = a^{\ell}\left(\mathbf{W}^{t,\ell} \mathbf{h}_n^{t,\ell-1} + \mathbf{b}^{t,\ell}\right)$ for task $t$, where $\mathbf{h}_n^{t,\ell}$ is the hidden representation of each point $\mathbf{x}_n^t$, $\mathbf{W}^{t,\ell}$ and $\mathbf{b}^{t,\ell}$ are the weight and bias parameters, and $a^{\ell}$ is the activation function, taken as ReLU $a^{\ell}(\mathbf{x}) = \max(\mathbf{0}, \mathbf{x})$ for hidden layers or softmax units $a^{\ell}(\mathbf{x}) = e^{\mathbf{x}} / \sum_{j=1}^{|\mathbf{x}|} e^{x_j}$ for the output layer. Denote by $y = f_t(\mathbf{x})$ the CNN classifier of the $t$-th task; the empirical error of the CNN on $\{\mathcal{X}_t, \mathcal{Y}_t\}$ is

$$\min_{f_t} \sum_{n=1}^{N_t} J\left(f_t(\mathbf{x}_n^t), y_n^t\right), \qquad (6)$$

where $J$ is the cross-entropy loss function, and $f_t(\mathbf{x}_n^t)$ is the conditional probability that the CNN assigns $\mathbf{x}_n^t$ to label $y_n^t$. We will not describe how to compute the convolutional layers, since these layers can learn transferable features in general [28, 18]; we simply share the network parameters of these layers across different tasks, without explicitly modeling the relationships of features and tasks in these layers. To benefit from pre-training and fine-tuning as most deep learning work does, we copy these layers from a model pre-trained on ImageNet 2012 [28] and fine-tune all conv1–conv5 layers.

As revealed by recent findings [28], the deep features in standard CNNs must eventually transition from general to specific along the network: feature transferability decreases while task discrepancy increases, making the features in the higher layers fc7–fc8 unsafely transferable across different tasks. In other words, the fc layers are tailored to their original task at the expense of degraded performance on the target task, which may deteriorate multi-task learning based on deep neural networks. Most previous methods generally assume that the multiple tasks can be well correlated given the shared representation learned by the feature layers conv1–fc7 of deep networks [25, 22, 32, 27]. However, this may be vulnerable if different tasks are not well correlated under deep features, which is common since higher layers are not safely transferable and tasks may be dissimilar. Moreover, existing multi-task learning methods are natively designed for binary classification tasks, a poor fit since deep networks mainly adopt multi-class softmax regression. It remains an open problem to explore the task relationships of multi-class classification for multi-task learning.
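For illustration, a minimal sketch of the per-task empirical error in Equation (6); the helper names are hypothetical, and a single linear head stands in for the task-specific fc layers:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerically stabilized
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def multi_task_empirical_error(features, labels, heads):
    """Sum over tasks of the cross-entropy loss J in Eq. (6).

    features: list of T arrays; features[t] of shape (N_t, D) stands in for
              the shared-layer representation of task t.
    labels:   list of T integer arrays with values in {0, ..., C-1}.
    heads:    list of T (W, b) pairs playing the role of task-specific fc8.
    """
    total = 0.0
    for x, y, (W, b) in zip(features, labels, heads):
        p = softmax(x @ W + b)             # f_t(x): conditional probabilities
        total += -np.log(p[np.arange(len(y)), y] + 1e-12).sum()
    return total
```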
In this work, we jointly learn transferable features and multilinear relationships of features and tasks for multiple task-specific layers $\mathcal{L}$ in a Bayesian framework. Based on the transferability of deep networks discussed above, the task-specific layers $\mathcal{L}$ are set to $\{fc7, fc8\}$. Denote by $\mathcal{X} = \{\mathcal{X}_t\}_{t=1}^{T}$ and $\mathcal{Y} = \{\mathcal{Y}_t\}_{t=1}^{T}$ the complete training data of the $T$ tasks, and by $\mathbf{W}^{t,\ell} \in \mathbb{R}^{D_1^{\ell} \times D_2^{\ell}}$ the network parameters of the $t$-th task in the $\ell$-th layer, where $D_1^{\ell}$ and $D_2^{\ell}$ are the numbers of rows and columns of matrix $\mathbf{W}^{t,\ell}$. In order to capture the task relationships in the network parameters of all $T$ tasks, we construct the $\ell$-th layer parameter tensor as $\mathcal{W}^{\ell} = \left[\mathbf{W}^{1,\ell}; \dots; \mathbf{W}^{T,\ell}\right] \in \mathbb{R}^{D_1^{\ell} \times D_2^{\ell} \times T}$. Denote by $\mathcal{W} = \{\mathcal{W}^{\ell} : \ell \in \mathcal{L}\}$ the set of parameter tensors of all the task-specific layers $\mathcal{L} = \{fc7, fc8\}$. The Maximum a Posteriori (MAP) estimation of network parameters $\mathcal{W}$ given training data $\{\mathcal{X}, \mathcal{Y}\}$ for learning multiple tasks is

$$p\left(\mathcal{W} \mid \mathcal{X}, \mathcal{Y}\right) \propto p(\mathcal{W}) \cdot p\left(\mathcal{Y} \mid \mathcal{X}, \mathcal{W}\right) = \prod_{\ell \in \mathcal{L}} p\left(\mathcal{W}^{\ell}\right) \cdot \prod_{t=1}^{T} \prod_{n=1}^{N_t} p\left(y_n^t \mid \mathbf{x}_n^t, \mathcal{W}^{\ell}\right), \qquad (7)$$

where we assume that, under the prior $p(\mathcal{W})$, the parameter tensor of each layer $\mathcal{W}^{\ell}$ is independent of the parameter tensors of the other layers $\mathcal{W}^{\ell' \neq \ell}$, a common assumption made by most feedforward neural network methods [3]. Finally, we assume that when the network parameters are sampled from the prior, all tasks are independent. These independence assumptions lead to the factorization of the posterior in Equation (7), which makes the final MAP estimation in deep networks easy to solve.

The maximum likelihood estimation (MLE) part $p(\mathcal{Y} \mid \mathcal{X}, \mathcal{W})$ in Equation (7) is modeled by the deep CNN in Equation (6), which can learn transferable features in the lower layers for multi-task learning. We opt to share the network parameters of all these layers (conv1–fc6). This parameter-sharing strategy is a relaxation of existing deep multi-task learning methods [22, 32, 7], which share all the feature layers except for the classifier layer. We do not share the task-specific layers (the last feature layer fc7 and classifier layer fc8), with the expectation of potentially mitigating negative transfer [28].

The prior part $p(\mathcal{W})$ in Equation (7) is the key to enabling multi-task deep learning, since this prior should be able to model the multilinear relationships across parameter tensors. This paper, for the first time, defines the prior for the $\ell$-th layer parameter tensor by the tensor normal distribution [21] as

$$p\left(\mathcal{W}^{\ell}\right) = \mathcal{TN}_{D_1^{\ell} \times D_2^{\ell} \times T}\left(\mathbf{O}, \mathbf{\Sigma}_1^{\ell}, \mathbf{\Sigma}_2^{\ell}, \mathbf{\Sigma}_3^{\ell}\right), \qquad (8)$$

where $\mathbf{\Sigma}_1^{\ell} \in \mathbb{R}^{D_1^{\ell} \times D_1^{\ell}}$, $\mathbf{\Sigma}_2^{\ell} \in \mathbb{R}^{D_2^{\ell} \times D_2^{\ell}}$, and $\mathbf{\Sigma}_3^{\ell} \in \mathbb{R}^{T \times T}$ are the mode-1, mode-2, and mode-3 covariance matrices, respectively. Specifically, in the tensor normal prior, the row covariance matrix $\mathbf{\Sigma}_1^{\ell}$ models the relationships between features (feature covariance), the column covariance matrix $\mathbf{\Sigma}_2^{\ell}$ models the relationships between classes (class covariance), and the mode-3 covariance matrix $\mathbf{\Sigma}_3^{\ell}$ models the relationships between tasks in the $\ell$-th layer network parameters $\{\mathbf{W}^{1,\ell}, \dots, \mathbf{W}^{T,\ell}\}$. A common strategy in previous methods is to use an identity covariance for the feature covariance [31, 8] and the class covariance [2], which implicitly assumes independent features and classes and cannot capture the dependencies between them. This work learns the feature covariance, class covariance, task covariance and all network parameters from data to build robust multilinear task relationships.

We integrate the CNN error functional (6) and the tensor normal prior (8) into the MAP estimation (7) and take the negative logarithm, which leads to the MAP estimation of the network parameters $\mathcal{W}$ as a regularized optimization problem for the Multilinear Relationship Network (MRN), formally written as

$$\min_{f_t|_{t=1}^{T},\ \mathbf{\Sigma}_k^{\ell}|_{k=1}^{K}} \sum_{t=1}^{T} \sum_{n=1}^{N_t} J\left(f_t(\mathbf{x}_n^t), y_n^t\right) + \frac{1}{2} \sum_{\ell \in \mathcal{L}} \left( \mathrm{vec}(\mathcal{W}^{\ell})^{\top} \left(\mathbf{\Sigma}_{1:K}^{\ell}\right)^{-1} \mathrm{vec}(\mathcal{W}^{\ell}) - \sum_{k=1}^{K} \frac{D^{\ell}}{D_k^{\ell}} \ln |\mathbf{\Sigma}_k^{\ell}| \right), \qquad (9)$$

where $D^{\ell} = \prod_{k=1}^{K} D_k^{\ell}$ and $K = 3$ is the number of modes in the parameter tensor $\mathcal{W}$ (it could be $K = 4$ for convolutional layers: width, height, number of feature maps, and number of tasks); $\mathbf{\Sigma}_{1:3}^{\ell} = \mathbf{\Sigma}_1^{\ell} \otimes \mathbf{\Sigma}_2^{\ell} \otimes \mathbf{\Sigma}_3^{\ell}$ is the Kronecker product of the feature covariance $\mathbf{\Sigma}_1^{\ell}$, class covariance $\mathbf{\Sigma}_2^{\ell}$, and task covariance $\mathbf{\Sigma}_3^{\ell}$. Moreover, we can assume a shared task relationship across different layers, $\mathbf{\Sigma}_3^{\ell} = \mathbf{\Sigma}_3$, which enhances the connection between task relationships on features fc7 and classifiers fc8.
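The following sketch shows, for toy dimensions, how the layer parameter tensor is stacked from the per-task matrices and how the data-dependent part of the tensor normal regularizer in Equation (9) would be evaluated naively; the explicit Kronecker product built here is exactly the bottleneck that Section 4.2 removes, and the factor order assumes a C-order (row-major) vec():

```python
import numpy as np

def tensor_normal_penalty(W_list, S1, S2, S3, eps=1e-8):
    """vec(W)^T (Sigma_1 (x) Sigma_2 (x) Sigma_3)^{-1} vec(W) from Eq. (9).

    W_list: T task matrices of shape (D1, D2), stacked into the layer
            parameter tensor of shape (D1, D2, T).
    S1, S2, S3: feature, class, and task covariance matrices.
    Feasible only when D1 * D2 * T is small.
    """
    W = np.stack(W_list, axis=-1)              # (D1, D2, T) parameter tensor
    w = W.reshape(-1)                          # vec(W), C-order
    Sig = np.kron(S1, np.kron(S2, S3))         # explicit Sigma_{1:3}
    Sig += eps * np.eye(Sig.shape[0])          # numerical stability
    return 0.5 * w @ np.linalg.solve(Sig, w)
```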
4.2 Algorithm

The optimization problem (9) is jointly non-convex with respect to the parameter tensors $\mathcal{W}$ as well as the feature covariance $\mathbf{\Sigma}_1^{\ell}$, class covariance $\mathbf{\Sigma}_2^{\ell}$, and task covariance $\mathbf{\Sigma}_3^{\ell}$. Thus, we alternately optimize one set of variables with the others fixed. We first update $\mathbf{W}^{t,\ell}$, the parameters of task $t$ in layer $\ell$. When training the deep CNN by back-propagation, we only require the gradient of the objective function (denoted by $O$) in Equation (9) w.r.t. $\mathbf{W}^{t,\ell}$ on each data point $(\mathbf{x}_n^t, y_n^t)$, which can be computed as

$$\frac{\partial O(\mathbf{x}_n^t, y_n^t)}{\partial \mathbf{W}^{t,\ell}} = \frac{\partial J\left(f_t(\mathbf{x}_n^t), y_n^t\right)}{\partial \mathbf{W}^{t,\ell}} + \left[ \left(\mathbf{\Sigma}_{1:3}^{\ell}\right)^{-1} \mathrm{vec}\left(\mathcal{W}^{\ell}\right) \right]_{\cdot \cdot t}, \qquad (10)$$

where $[(\mathbf{\Sigma}_{1:3}^{\ell})^{-1} \mathrm{vec}(\mathcal{W}^{\ell})]_{\cdot \cdot t}$ is the $(:, :, t)$ slice of the tensor folded from the elements of $(\mathbf{\Sigma}_{1:3}^{\ell})^{-1} \mathrm{vec}(\mathcal{W}^{\ell})$ that correspond to the parameter matrix $\mathbf{W}^{t,\ell}$. Since training a deep CNN requires a large amount of labeled data, which is prohibitive for many multi-task learning problems, we fine-tune from an AlexNet model pre-trained on ImageNet as in [28]. In each epoch, after updating $\mathcal{W}$, we update the feature covariance $\mathbf{\Sigma}_1^{\ell}$, class covariance $\mathbf{\Sigma}_2^{\ell}$, and task covariance $\mathbf{\Sigma}_3^{\ell}$ by the flip-flop algorithm:

$$\mathbf{\Sigma}_1^{\ell} = \frac{1}{D_2^{\ell} T} \left(\mathcal{W}^{\ell}\right)_{(1)} \left( \mathbf{\Sigma}_3^{\ell} \otimes \mathbf{\Sigma}_2^{\ell} \right)^{-1} \left(\mathcal{W}^{\ell}\right)_{(1)}^{\top} + \epsilon \mathbf{I}_{D_1^{\ell}},$$
$$\mathbf{\Sigma}_2^{\ell} = \frac{1}{D_1^{\ell} T} \left(\mathcal{W}^{\ell}\right)_{(2)} \left( \mathbf{\Sigma}_3^{\ell} \otimes \mathbf{\Sigma}_1^{\ell} \right)^{-1} \left(\mathcal{W}^{\ell}\right)_{(2)}^{\top} + \epsilon \mathbf{I}_{D_2^{\ell}},$$
$$\mathbf{\Sigma}_3^{\ell} = \frac{1}{D_1^{\ell} D_2^{\ell}} \left(\mathcal{W}^{\ell}\right)_{(3)} \left( \mathbf{\Sigma}_2^{\ell} \otimes \mathbf{\Sigma}_1^{\ell} \right)^{-1} \left(\mathcal{W}^{\ell}\right)_{(3)}^{\top} + \epsilon \mathbf{I}_{T}, \qquad (11)$$

where the last term of each update equation is a small penalty, weighted by $\epsilon$, for numerical stability. However, the above updating equations (11) are computationally prohibitive due to the dimension explosion of the Kronecker product; e.g. $\mathbf{\Sigma}_2^{\ell} \otimes \mathbf{\Sigma}_1^{\ell}$ is of dimension $D_1^{\ell} D_2^{\ell} \times D_1^{\ell} D_2^{\ell}$. To speed up computation, we use the following rules of the Kronecker product: $(\mathbf{A} \otimes \mathbf{B})^{-1} = \mathbf{A}^{-1} \otimes \mathbf{B}^{-1}$ and $(\mathbf{B}^{\top} \otimes \mathbf{A})\, \mathrm{vec}(\mathbf{X}) = \mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B})$. Taking the computation of $\mathbf{\Sigma}_3^{\ell} \in \mathbb{R}^{T \times T}$ as an example, we have

$$\left(\mathbf{\Sigma}_3^{\ell}\right)_{ij} = \frac{1}{D_1^{\ell} D_2^{\ell}} \left(\mathcal{W}^{\ell}\right)_{(3),i\cdot} \left( \mathbf{\Sigma}_2^{\ell} \otimes \mathbf{\Sigma}_1^{\ell} \right)^{-1} \left(\mathcal{W}^{\ell}\right)_{(3),j\cdot}^{\top} + \epsilon \mathbf{I}_{ij} = \frac{1}{D_1^{\ell} D_2^{\ell}} \left(\mathcal{W}^{\ell}\right)_{(3),i\cdot}\, \mathrm{vec}\left( \left(\mathbf{\Sigma}_1^{\ell}\right)^{-1} \mathcal{W}_{\cdot \cdot j}^{\ell} \left(\mathbf{\Sigma}_2^{\ell}\right)^{-1} \right) + \epsilon \mathbf{I}_{ij}, \qquad (12)$$

where $(\mathcal{W}^{\ell})_{(3),i\cdot}$ denotes the $i$-th row of the mode-3 matricization of tensor $\mathcal{W}^{\ell}$, and $\mathcal{W}_{\cdot \cdot j}^{\ell}$ denotes the $(:, :, j)$ slice of tensor $\mathcal{W}^{\ell}$. We can derive that updating $\mathbf{\Sigma}_3^{\ell}$ has a computational complexity of $O\left(T^2 D_1^{\ell} D_2^{\ell} (D_1^{\ell} + D_2^{\ell})\right)$, and similarly for $\mathbf{\Sigma}_1^{\ell}$ and $\mathbf{\Sigma}_2^{\ell}$. The total computational complexity of updating the covariance matrices $\mathbf{\Sigma}_k^{\ell}|_{k=1}^{3}$ is then $O\left(D_1^{\ell} D_2^{\ell} T (D_1^{\ell} D_2^{\ell} + D_1^{\ell} T + D_2^{\ell} T)\right)$, which is still expensive.

A key to further speedup is that the covariance matrices $\mathbf{\Sigma}_k^{\ell}|_{k=1}^{3}$ should be low-rank, since the features and tasks are enforced to be correlated for multi-task learning. Thus, the inverses of $\mathbf{\Sigma}_k^{\ell}|_{k=1}^{3}$ do not exist in general and we have to compute generalized inverses using eigendecomposition. We perform an eigendecomposition of each $\mathbf{\Sigma}_k^{\ell}$ and keep all eigenvectors with eigenvalues greater than zero. The rank $r$ of the eigen-reconstructed covariance matrices satisfies $r \leq \min(D_1^{\ell}, D_2^{\ell}, T)$. Thus, the total computational complexity for $\mathbf{\Sigma}_k^{\ell}|_{k=1}^{3}$ is reduced to $O\left(r D_1^{\ell} D_2^{\ell} T (D_1^{\ell} + D_2^{\ell} + T)\right)$. It is straightforward to see that the computational complexity of updating the parameter tensor $\mathcal{W}$ is the cost of back-propagation in standard CNNs plus the cost of computing the gradient of the regularization term via Equation (10), which is $O\left(r D_1^{\ell} D_2^{\ell} T (D_1^{\ell} + D_2^{\ell} + T)\right)$ given the generalized inverses $(\mathbf{\Sigma}_k^{\ell})^{-1}|_{k=1}^{3}$.
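The Kronecker identities behind Equation (12) are easy to verify numerically. The sketch below first checks $(\mathbf{B}^{\top} \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X}) = \mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B})$ with column-major vec, then computes one entry of the task covariance without ever forming the $D_1^{\ell} D_2^{\ell} \times D_1^{\ell} D_2^{\ell}$ Kronecker product; the helper name is ours, and precomputed (generalized) inverses are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
D1, D2 = 4, 3
A = rng.standard_normal((D1, D1))
B = rng.standard_normal((D2, D2))
X = rng.standard_normal((D1, D2))

# (B^T kron A) vec(X) = vec(A X B), with vec() taken column-major.
lhs = np.kron(B.T, A) @ X.reshape(-1, order="F")
rhs = (A @ X @ B).reshape(-1, order="F")
assert np.allclose(lhs, rhs)

def task_cov_entry(W, i, j, S1inv, S2inv, eps=1e-8):
    """Eq. (12)-style entry (i, j) of the task covariance.

    W: layer parameter tensor of shape (D1, D2, T).
    S1inv, S2inv: precomputed (generalized) inverses of the feature and
    class covariances; applying them slice-wise realizes
    (Sigma_2 kron Sigma_1)^{-1} without building it explicitly.
    """
    d1, d2, _ = W.shape
    wi = W[:, :, i].reshape(-1, order="F")      # vec of the i-th task slice
    wj = (S1inv @ W[:, :, j] @ S2inv.T).reshape(-1, order="F")
    return (wi @ wj + eps * (i == j)) / (d1 * d2)
```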
4.3 Discussion

The proposed Multilinear Relationship Network (MRN) is very flexible and can be easily configured for different network architectures and multi-task learning scenarios. For example, replacing the network backbone from AlexNet with VGGnet [24] boils down to configuring the task-specific layers $\mathcal{L} = \{fc7, fc8\}$, where fc7 is the last feature layer and fc8 is the classifier layer in the VGGnet. The architecture of MRN in Figure 1 can readily cope with homogeneous multi-task learning, where all tasks share the same output space. It can also cope with heterogeneous multi-task learning, where different tasks have different output spaces, by setting $\mathcal{L} = \{fc7\}$, i.e. by only considering feature layers.

The multilinear relationship learning in Equation (9) is a general framework that readily subsumes many classical multi-task learning methods as special cases. Many regularized multi-task algorithms can be classified into two main categories: learning with feature covariances [1, 2, 6, 5] and learning with task relations [10, 14, 29, 31, 15, 17, 8]. Learning with feature covariances can be viewed as a representative formulation of feature-based methods, while learning with task relations represents parameter-based methods [30]. More specifically, previous multi-task feature learning methods [1, 2] can be viewed as a special case of Equation (9) obtained by setting all covariance matrices but the feature covariance to the identity matrix, i.e. $\mathbf{\Sigma}_k = \mathbf{I}$ for $k = 2, \dots, K$; and previous multi-task relationship learning methods [31, 8] can be viewed as a special case of Equation (9) obtained by setting all covariance matrices but the task covariance to the identity matrix, i.e. $\mathbf{\Sigma}_k = \mathbf{I}$ for $k = 1, \dots, K{-}1$. The proposed MRN is more general from the architectural perspective in dealing with parameter tensors in multiple layers of deep neural networks. It is noteworthy to highlight a concurrent work on multi-task deep learning using tensor decomposition [27], a feature-based method that explicitly learns the low-rank shared parameter subspace. The proposed multilinear relationship across parameter tensors can be viewed as a strong alternative to tensor decomposition, with the advantage of explicitly modeling the positive and negative relations across features and tasks. In defense of [27], tensor decomposition can extract finer-grained feature relations (what to share and how much to share) than the proposed multilinear relationships.

5 Experiments

We compare MRN with state-of-the-art multi-task and deep learning methods to verify the efficacy of learning transferable features and multilinear task relationships. Codes and datasets will be released.

5.1 Setup

Office-Caltech [12] This dataset is the standard benchmark for multi-task learning and transfer learning. The Office part consists of 4,652 images in 31 categories collected from three distinct domains (tasks): Amazon (A), which contains images downloaded from amazon.com, and Webcam (W) and DSLR (D), which are images taken by a Web camera and a digital SLR camera under different environmental variations. This dataset is organized by selecting the 10 common categories shared by the Office dataset and the Caltech-256 (C) dataset [12]; hence it yields four multi-class learning tasks.

Office-Home [26] (http://hemanthdv.org/OfficeHome-Dataset) This dataset is used to evaluate transfer learning algorithms with deep learning. It consists of images from 4 different domains: Artistic images (A), Clip Art (C), Product images (P) and Real-World images (R). For each domain, the dataset contains images of 65 object categories collected in office and home settings.

[Figure 2: Examples of the Office-Home dataset, showing classes such as Spoon, Sink, Mug, Pen, Knife, Bed, Bike, Kettle, TV, Keyboard, Alarm-Clock, Desk-Lamp, Hammer, Chair, and Fan across the Art, Clipart, Product, and Real-World domains.]
ImageCLEF-DA (http://imageclef.org/2014/adaptation) This dataset is the benchmark for the ImageCLEF domain adaptation challenge, organized by selecting the 12 common categories shared by the following four public datasets (tasks): Caltech-256 (C), ImageNet ILSVRC 2012 (I), Pascal VOC 2012 (P), and Bing (B). All three datasets are evaluated using DeCAF7 [9] features for shallow methods and original images for deep methods.

We compare MRN with standard and state-of-the-art methods: Single-Task Learning (STL), Multi-Task Feature Learning (MTFL) [2], Multi-Task Relationship Learning (MTRL) [31], Robust Multi-Task Learning (RMTL) [5], and Deep Multi-Task Learning with Tensor Factorization (DMTL-TF) [27]. STL performs per-task classification in separate deep networks without knowledge transfer. MTFL extracts low-rank shared feature representations by learning the feature covariance. RMTL extends MTFL to further capture the task relationships using a low-rank structure and to identify outlier tasks using a group-sparse structure. MTRL captures the task relationships using the task covariance of a matrix normal distribution. DMTL-TF tackles multi-task deep learning by tensor factorization, which learns a shared feature subspace instead of multilinear task relationships in multilayer parameter tensors. To probe the efficacy of jointly learning transferable features and multilinear task relationships, we evaluate two MRN variants: (1) MRN8, MRN using only one network layer fc8 for multilinear relationship learning; (2) MRNt, MRN using only the task covariance $\mathbf{\Sigma}_3$ for single-relationship learning.

The proposed MRN model can natively deal with multi-class problems using the parameter tensors. However, most shallow multi-task learning methods such as MTFL, RMTL and MTRL are formulated only for binary-class problems, due to the difficulty of dealing with order-3 parameter tensors for multi-class problems. We adopt a one-vs-rest strategy to enable them to work on multi-class datasets.

Table 1: Classification accuracy on Office-Caltech with standard evaluation protocol (AlexNet).

                      5%                              10%                             20%
Method          A     W     D     C     Avg     A     W     D     C     Avg     A     W     D     C     Avg
STL (AlexNet)  88.9  73.0  80.4  88.7  82.8    92.2  80.9  88.2  88.9  87.6    91.3  83.3  93.7  94.9  90.8
MTFL [2]       90.0  78.9  90.2  86.9  86.5    92.4  85.3  89.5  89.2  89.1    93.5  89.0  95.2  92.6  92.6
RMTL [6]       91.3  82.3  88.8  89.1  87.9    92.6  85.2  93.3  87.2  89.6    94.3  87.0  96.7  93.4  92.4
MTRL [31]      86.4  83.0  95.1  89.1  88.4    91.1  87.1  97.0  87.6  90.7    90.0  88.8  99.2  94.3  93.1
DMTL-TF [27]   91.2  88.3  92.5  85.6  89.4    92.2  91.9  97.4  86.8  92.0    92.6  97.6  94.5  88.4  93.3
MRN8           91.7  96.4  96.9  86.5  92.9    92.7  97.1  97.3  86.6  93.4    93.2  96.9  99.4  82.8  94.4
MRNt           91.1  96.3  97.4  86.1  92.7    92.5  97.7  96.6  86.7  93.4    91.9  96.6  95.9  90.0  93.6
MRN (full)     92.5  97.5  97.9  87.5  93.8    93.6  98.6  98.6  87.3  94.5    94.4  98.3  99.9  89.1  95.5

Table 2: Classification accuracy on Office-Home with standard evaluation protocol (VGGnet).

                      5%                              10%                             20%
Method          A     C     P     R     Avg     A     C     P     R     Avg     A     C     P     R     Avg
STL (VGGnet)   35.8  31.2  67.8  62.5  49.3    51.0  40.7  75.0  68.8  58.9    56.1  54.6  80.4  71.8  65.7
MTFL [2]       40.1  30.4  61.5  59.5  47.9    50.3  35.0  66.3  65.0  54.2    55.2  38.8  69.1  70.0  58.3
RMTL [6]       42.3  32.8  62.3  60.6  49.5    49.7  34.6  65.9  64.6  53.7    55.2  39.2  69.6  70.5  58.6
MTRL [31]      42.7  33.3  62.9  61.3  50.1    51.6  36.3  67.7  66.3  55.5    55.8  39.9  70.2  71.2  59.3
DMTL-TF [27]   49.2  34.5  67.1  62.9  53.4    57.2  42.3  73.6  69.9  60.8    58.3  56.1  79.3  72.1  66.5
MRN8           52.7  34.7  70.1  67.6  56.3    59.1  42.7  75.1  72.8  62.4    58.4  55.6  80.4  72.4  66.7
MRNt           52.0  34.0  69.9  66.8  55.7    58.6  42.6  74.9  72.4  62.1    57.7  54.8  80.2  71.6  66.1
MRN (full)     53.3  36.4  70.5  67.7  57.0    59.9  42.7  76.3  73.0  63.0    58.5  55.6  80.7  72.8  66.9

We follow the standard evaluation protocol [31, 5] for multi-task learning and randomly select 5%, 10%, and 20% of the samples from each task as the training set, using the rest of the samples as the test set. We compare the average classification accuracy across all tasks based on five random experiments, where the standard errors are generally less than ±0.5%; being insignificant, they are not reported for space limitations. We conduct model selection for all methods using five-fold cross-validation on the training set. For deep learning methods, we adopt AlexNet [16] and VGGnet [24], fix the convolutional layers conv1–conv5, fine-tune the fully-connected layers fc6–fc7, and train the classifier layer fc8 via back-propagation. As the classifier layer is trained from scratch, we set its learning rate to 10 times that of the other layers. We use mini-batch stochastic gradient descent (SGD) with 0.9 momentum and a learning-rate decay strategy, and select the learning rate between $10^{-5}$ and $10^{-2}$ with a multiplicative step of $10^{1/2}$.

5.2 Results

The multi-task classification results on the Office-Caltech, Office-Home and ImageCLEF-DA datasets based on 5%, 10%, and 20% sampled training data are shown in Tables 1, 2 and 3, respectively. The proposed MRN model significantly outperforms the comparison methods on most multi-task problems. The substantial accuracy improvement validates that our multilinear relationship network, through multilayer and multilinear relationship learning, is able to learn both transferable features and adaptive task relationships, which enables effective and robust multi-task deep learning.

We can make the following observations from the results. (1) Shallow multi-task learning methods MTFL, RMTL, and MTRL outperform the single-task deep learning method STL in most cases, which confirms the efficacy of learning multiple tasks by exploiting shared structures. Among the shallow multi-task methods, MTRL gives the best accuracies, showing that exploiting task relationships may be more effective than extracting a shared feature subspace for multi-task learning. It is worth noting that, although STL cannot learn from knowledge transfer, it can be fine-tuned on each task to improve performance; thus, when the number of training samples is large enough and different tasks are dissimilar enough (e.g. the Office-Home dataset), STL may outperform shallow multi-task learning methods, as evidenced by the results in Table 2.
(2) The deep multi-task learning method DMTL-TF outperforms shallow multi-task learning methods given deep features as input, which confirms the importance of learning deep transferable features to enable knowledge transfer across tasks. However, DMTL-TF only learns the shared feature subspace based on tensor factorization of the network parameters, while the task relationships in multiple network layers are not captured. This may result in negative transfer in the feature layers [28] and under-transfer in the classifier layers. Negative transfer can be witnessed by comparing multi-task methods with single-task methods: if a multi-task learning method yields lower accuracy on some of the tasks, then negative transfer arises.

Table 3: Classification accuracy on ImageCLEF-DA with standard evaluation protocol (AlexNet).

                      5%                              10%                             20%
Method          C     I     P     B     Avg     C     I     P     B     Avg     C     I     P     B     Avg
STL (AlexNet)  77.4  60.3  48.0  45.0  57.7    78.9  70.5  48.1  41.8  59.8    83.3  74.9  49.2  47.1  63.6
MTFL [2]       79.9  68.6  43.4  41.5  58.3    82.9  71.4  56.7  41.7  63.2    83.1  72.2  54.5  52.5  65.6
RMTL [6]       81.1  71.3  52.4  40.9  61.4    81.5  71.7  55.6  45.3  63.5    83.3  73.3  53.7  49.2  64.9
MTRL [31]      80.8  68.4  51.9  42.9  61.0    83.1  72.7  54.5  45.5  63.9    83.7  75.5  57.5  49.4  66.5
DMTL-TF [27]   87.9  70.0  58.1  34.1  62.5    89.1  82.1  58.7  48.0  69.5    91.7  80.0  63.2  54.1  72.2
MRN8           87.0  74.4  61.8  47.6  67.7    89.1  82.2  64.4  49.3  71.2    91.1  84.1  65.7  54.1  73.7
MRNt           88.5  73.5  63.3  51.1  69.1    88.0  83.1  67.4  54.8  73.3    91.1  83.5  65.7  55.7  74.0
MRN (full)     89.6  76.9  65.4  49.4  70.3    88.1  84.6  68.7  55.6  74.3    92.8  83.3  67.4  57.8  75.3

We go deeper into MRN by reporting the results of the two MRN variants, MRN8 and MRNt: both significantly outperform the comparison methods but generally underperform MRN (full), which verifies our motivation that jointly learning transferable features and multilinear task relationships can bridge multiple tasks more effectively. (1) The disadvantage of MRN8 is that it does not learn the task relationships in the lower layer fc7, which is not safely transferable and may result in negative transfer [28]. (2) The shortcoming of MRNt is that it does not learn the multilinear relationships of features, classes and tasks; hence the learned relationships may only capture the task covariance without capturing the feature covariance and class covariance, which may lose some intrinsic relations.

[Figure 3: Hinton diagrams of task relationships learned by (a) MTRL and (b) MRN, and t-SNE embeddings of deep features from (c) DMTL-TF and (d) MRN, over tasks A, W, D, and C.]

5.3 Visualization Analysis

We show that MRN can learn more reasonable task relationships with deep features than MTRL with shallow features by visualizing the Hinton diagrams of the task covariances learned by MTRL and MRN ($\mathbf{\Sigma}_3^{fc8}$) in Figures 3(a) and 3(b), respectively. Prior knowledge on task similarity in the Office-Caltech dataset [12] indicates that tasks A, W and D are more similar to each other, while they are relatively dissimilar to task C. MRN successfully captures this prior task relationship and enhances the task correlation across dissimilar tasks, which enables stronger transferability for multi-task learning. Furthermore, all tasks are positively correlated (green color) in MRN, implying that all tasks can better reinforce each other. However, some of the tasks (D and C) are still negatively correlated (red color) in MTRL, implying these tasks would be drawn far apart and cannot improve with each other.
We illustrate the feature transferability by visualizing in Figures 3(c) and 3(d) the t-SNE embeddings [18] of the images in the Office-Caltech dataset with DMTL-TF features and MRN features, respectively. Compared with DMTL-TF features, the data points with MRN features are discriminated better across different categories, i.e. each category has small intra-class variance and large inter-class margin; the data points are also aligned better across different tasks, i.e. the embeddings of different tasks overlap well, implying that different tasks reinforce each other effectively. This verifies that with multilinear relationship learning, MRN can learn more transferable features for multi-task learning.

6 Conclusion

This paper presented multilinear relationship networks (MRN), which integrate deep neural networks with tensor normal priors over the network parameters of all task-specific layers; these priors model task relatedness through covariance structures over tasks, classes and features to enable transfer across related tasks. An effective learning algorithm was devised to jointly learn transferable features and multilinear relationships. Experiments testify that MRN yields superior results on standard datasets.

Acknowledgments

This work was supported by the National Key R&D Program of China (2016YFB1000701), the National Natural Science Foundation of China (61772299, 61325008, 61502265, 61672313) and the TNList Fund.

References

[1] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853, 2005.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[3] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
[4] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[5] J. Chen, L. Tang, J. Liu, and J. Ye. A convex formulation for learning a shared predictive structure from multiple tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(5):1025–1038, 2013.
[6] J. Chen, J. Zhou, and J. Ye. Integrating low-rank and group-sparse structures for robust multi-task learning. In KDD, 2011.
[7] X. Chu, W. Ouyang, W. Yang, and X. Wang. Multi-task recurrent neural network for immediacy prediction. In ICCV, 2015.
[8] C. Ciliberto, Y. Mroueh, T. Poggio, and L. Rosasco. Convex learning of multiple tasks and their structure. In ICML, 2015.
[9] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[10] T. Evgeniou and M. Pontil. Regularized multi-task learning. In KDD, 2004.
[11] X. Glorot, A. Bordes, and Y. Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, 2011.
[12] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.
[13] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall, 2000.
[14] L. Jacob, J.-P. Vert, and F. R. Bach. Clustered multi-task learning: A convex formulation. In NIPS, 2009.
[15] Z. Kang, K. Grauman, and F. Sha. Learning with whom to share in multi-task feature learning. In ICML, 2011.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[17] A. Kumar and H. Daume III. Learning task grouping and overlap in multi-task learning. In ICML, 2012.
[18] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015.
[19] A. Maurer, M. Pontil, and B. Romera-Paredes. The benefit of multitask representation learning. Journal of Machine Learning Research, 17(1):2853–2884, 2016.
[20] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert. Cross-stitch networks for multi-task learning. In CVPR, 2016.
[21] M. Ohlson, M. R. Ahmad, and D. Von Rosen. The multilinear normal distribution: Introduction and some basic properties. Journal of Multivariate Analysis, 113:37–47, 2013.
[22] W. Ouyang, X. Chu, and X. Wang. Multisource deep learning for human pose estimation. In CVPR, 2014.
[23] B. Romera-Paredes, H. Aung, N. Bianchi-Berthouze, and M. Pontil. Multilinear multitask learning. In ICML, 2013.
[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[25] N. Srivastava and R. Salakhutdinov. Discriminative transfer learning with tree-based priors. In NIPS, 2013.
[26] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan. Deep hashing network for unsupervised domain adaptation. In CVPR, 2017.
[27] Y. Yang and T. Hospedales. Deep multi-task representation learning: A tensor factorisation approach. In ICLR, 2017.
[28] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
[29] Y. Zhang and J. Schneider. Learning multiple tasks with a sparse matrix-normal penalty. In NIPS, 2010.
[30] Y. Zhang and Q. Yang. A survey on multi-task learning. arXiv preprint arXiv:1707.08114, 2017.
[31] Y. Zhang and D.-Y. Yeung. A convex formulation for learning task relationships in multi-task learning. In UAI, 2010.
[32] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multi-task learning. In ECCV, 2014.
Deep Hyperalignment

Muhammad Yousefnezhad, Daoqiang Zhang
College of Computer Science and Technology
Nanjing University of Aeronautics and Astronautics
{myousefnezhad,dqzhang}@nuaa.edu.cn

Abstract

This paper proposes Deep Hyperalignment (DHA) as a regularized, deep-extended, scalable Hyperalignment (HA) method, which is well-suited for applying functional alignment to fMRI datasets with nonlinearity, high dimensionality (broad ROI), and a large number of subjects. Unlike previous methods, DHA is not limited by a restricted fixed kernel function. Further, it uses a parametric approach, rank-m Singular Value Decomposition (SVD), and stochastic gradient descent for optimization. Therefore, DHA has a suitable time complexity for large datasets, and DHA does not require the training data when it computes the functional alignment for a new subject. Experimental studies on multi-subject fMRI analysis confirm that the DHA method achieves superior performance to other state-of-the-art HA algorithms.

1 Introduction

Multi-subject fMRI analysis is a challenging problem in human brain decoding [1–7]. On the one hand, multi-subject analysis can verify the developed models across subjects. On the other hand, this analysis requires accurate functional and anatomical alignments among the neural activities of different subjects, and these alignments can significantly improve the performance of the developed models [1, 4]. In fact, multi-subject fMRI images must be aligned across subjects in order to take between-subject variability into account. Technically, there are two main alignment methods, anatomical alignment and functional alignment, which can work in unison. Anatomical alignment is used in the majority of fMRI studies only as a preprocessing step. It is applied by aligning fMRI images based on anatomical features of standard structural MRI images, e.g. Talairach [2, 7]. However, anatomical alignment can improve accuracy only to a limited extent, because the size, shape and anatomical location of functional loci differ across subjects [1, 2, 7]. By contrast, functional alignment seeks to precisely align the fMRI images across subjects. It has a broad range of applications in neuroscience, such as localization of brain tumors [8].

As the most widely used functional alignment method [1–7], Hyperalignment (HA) [1] is an 'anatomy free' approach, which can be mathematically formulated as a multiple-set Canonical Correlation Analysis (CCA) problem [2, 3, 5]. Original HA does not work in a very high-dimensional space. In order to extend HA to real-world problems, Xu et al. developed Regularized Hyperalignment (RHA) by utilizing an EM algorithm to iteratively seek the regularized optimum parameters [2]. Further, Chen et al. developed Singular Value Decomposition Hyperalignment (SVDHA), which first provides dimensionality reduction by SVD, after which HA aligns the functional responses in the reduced space [4]. In another study, Chen et al. introduced the Shared Response Model (SRM), which is technically equivalent to Probabilistic CCA [5]. In addition, Guntupalli et al. developed the SearchLight (SL) model, which is an ensemble of quasi-CCA models fitted on patches of the brain images [9]. Lorbert et al. illustrated the limitation of HA methods arising from the linear representation of fMRI responses; they also proposed Kernel Hyperalignment (KHA) as a nonlinear alternative in an embedding space for solving the HA limitation [3].
Although KHA can solve the nonlinearity and high-dimensionality problems, its performance is limited by the fixed kernel function it employs. As another nonlinear HA method, Chen et al. recently developed a Convolutional Autoencoder (CAE) for whole-brain functional alignment. This method reformulates SRM as a multi-view autoencoder [5] and then uses the standard SL analysis [9] in order to improve the stability and robustness of the generated classification (cognitive) model [6]. Since CAE simultaneously employs SRM and SL, its time complexity is high. In a nutshell, there are three main challenges for previous HA methods in calculating accurate functional alignments: nonlinearity [3, 6], high dimensionality [2, 4, 5], and handling a large number of subjects [6].

As the main contribution of this paper, we propose a novel kernel approach, called Deep Hyperalignment (DHA), in order to solve the mentioned challenges in HA problems. DHA employs a deep network, i.e. multiple stacked layers of nonlinear transformations, as the kernel function; it is parametric and uses rank-m SVD [10] and Stochastic Gradient Descent (SGD) [13] for optimization. Consequently, DHA achieves low runtime on large datasets, and the training data is not referenced when DHA computes the functional alignment for a new subject. Further, DHA is not limited to a restricted fixed representational space, because the kernel in DHA is a multi-layer neural network, which can separately implement any nonlinear function [11–13] for each subject to transfer the brain activities to a common space. The proposed method is related to RHA [2] and MVLSA [10]; the main difference between DHA and these methods lies in the deep kernel function. Further, KHA [3] is equivalent to DHA when the proposed deep network is employed as the kernel function. In addition, DHA can be viewed as a multi-set regularized DCCA [11] with stochastic optimization [13]. Finally, DHA is related to DGCCA [12] when DGCCA is reformulated for functional alignment by using regularization and rank-m SVD [10].

The rest of this paper is organized as follows: Section 2 briefly introduces the HA method; DHA is then proposed in Section 3; experimental results are reported in Section 4; and finally, the paper presents conclusions and points out some future work in Section 5.

2 Hyperalignment

As a training set, preprocessed fMRI time series for $S$ subjects can be denoted by $\mathbf{X}^{(\ell)} = \{x_{mn}^{(\ell)}\} \in \mathbb{R}^{T \times V}$, $\ell = 1{:}S$, $m = 1{:}T$, $n = 1{:}V$, where $V$ denotes the number of voxels, $T$ is the number of time points in units of TRs (Time of Repetition), and $x_{mn}^{(\ell)} \in \mathbb{R}$ denotes the functional activity of the $\ell$-th subject at the $m$-th time point in the $n$-th voxel. To ensure temporal alignment, the stimuli in the training set are considered time-synchronized, i.e. the $m$-th time point for all subjects corresponds to the same stimulus [2, 3]. Original HA can be defined based on Inter-Subject Correlation (ISC), which is a classical metric for functional alignment [1–4, 7]:

$$\max_{\mathbf{R}^{(i)}, \mathbf{R}^{(j)}} \sum_{i=1}^{S} \sum_{j=i+1}^{S} \mathrm{ISC}\left(\mathbf{X}^{(i)} \mathbf{R}^{(i)}, \mathbf{X}^{(j)} \mathbf{R}^{(j)}\right) \equiv \max_{\mathbf{R}^{(i)}, \mathbf{R}^{(j)}} \sum_{i=1}^{S} \sum_{j=i+1}^{S} \mathrm{tr}\left( \left(\mathbf{X}^{(i)} \mathbf{R}^{(i)}\right)^{\top} \mathbf{X}^{(j)} \mathbf{R}^{(j)} \right)$$
$$\text{s.t.}\ \left( \mathbf{X}^{(\ell)} \mathbf{R}^{(\ell)} \right)^{\top} \mathbf{X}^{(\ell)} \mathbf{R}^{(\ell)} = \mathbf{I}, \quad \ell = 1{:}S, \qquad (1)$$

where $\mathrm{tr}()$ denotes the trace function, $\mathbf{I}$ is the identity matrix, and $\mathbf{R}^{(\ell)} \in \mathbb{R}^{V \times V}$ denotes the solution for the $\ell$-th subject. To avoid overfitting, constraints must be imposed on $\mathbf{R}^{(\ell)}$ [2, 7]. If the $\mathbf{X}^{(\ell)} \sim \mathcal{N}(0, 1)$, $\ell = 1{:}S$, are column-wise standardized, the ISC lies in $[-1, +1]$, where large values indicate better alignment [2, 3].
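As a small illustration of the ISC objective in Equation (1), the following sketch computes the summed pairwise ISC of a set of aligned responses (the function name is ours; NumPy):

```python
import numpy as np

def total_isc(X_list, R_list):
    """Summed pairwise ISC of Eq. (1).

    X_list: per-subject (T, V) response matrices, column-standardized.
    R_list: per-subject (V, V) alignment mappings.
    Larger values indicate better alignment.
    """
    aligned = [X @ R for X, R in zip(X_list, R_list)]
    S = len(aligned)
    return sum(np.trace(aligned[i].T @ aligned[j])
               for i in range(S) for j in range(i + 1, S))
```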
To seek an optimum solution, directly solving (1) may not be the best approach, because there is no scale on which to evaluate the distance between the current result and the optimum (fully maximized) solution [2, 4, 7]. Instead, we can reformulate (1) as a minimization problem by using a multiple-set CCA [1–4]:

$$\min_{\mathbf{R}^{(i)}, \mathbf{R}^{(j)}} \sum_{i=1}^{S} \sum_{j=i+1}^{S} \left\| \mathbf{X}^{(i)} \mathbf{R}^{(i)} - \mathbf{X}^{(j)} \mathbf{R}^{(j)} \right\|_F^2, \quad \text{s.t.}\ \left( \mathbf{X}^{(\ell)} \mathbf{R}^{(\ell)} \right)^{\top} \mathbf{X}^{(\ell)} \mathbf{R}^{(\ell)} = \mathbf{I}, \ \ell = 1{:}S, \qquad (2)$$

where (2) approaches zero for an optimum result. Indeed, the main assumption in original HA is that the $\mathbf{R}^{(\ell)}$, $\ell = 1{:}S$, are noisy 'rotations' of a common template [1, 9]. This paper provides a detailed description of HA methods in the supplementary materials (https://sourceforge.net/projects/myousefnezhad/files/DHA/).

3 Deep Hyperalignment

The objective function of DHA is defined as follows:

$$\min_{\theta^{(i)}, \mathbf{R}^{(i)},\ \theta^{(j)}, \mathbf{R}^{(j)}} \sum_{i=1}^{S} \sum_{j=i+1}^{S} \left\| f_i\left(\mathbf{X}^{(i)}; \theta^{(i)}\right) \mathbf{R}^{(i)} - f_j\left(\mathbf{X}^{(j)}; \theta^{(j)}\right) \mathbf{R}^{(j)} \right\|_F^2$$
$$\text{s.t.}\ \left(\mathbf{R}^{(\ell)}\right)^{\top} \left( f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right)^{\top} f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right) + \epsilon \mathbf{I} \right) \mathbf{R}^{(\ell)} = \mathbf{I}, \quad \ell = 1{:}S, \qquad (3)$$

where $\theta^{(\ell)} = \{\mathbf{W}_m^{(\ell)}, \mathbf{b}_m^{(\ell)}\}$, $m = 2{:}C$, denotes all parameters of the $\ell$-th deep network belonging to the $\ell$-th subject, $\mathbf{R}^{(\ell)} \in \mathbb{R}^{V_{new} \times V_{new}}$ is the DHA solution for the $\ell$-th subject, $V_{new} \leq V$ denotes the number of features after transformation, the regularization parameter $\epsilon$ is a small constant, e.g. $10^{-8}$, and the deep multi-layer kernel function $f_\ell(\mathbf{X}^{(\ell)}; \theta^{(\ell)}) \in \mathbb{R}^{T \times V_{new}}$ is defined as

$$f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right) = \mathrm{mat}\left( \mathbf{h}_C^{(\ell)}, T, V_{new} \right), \qquad (4)$$

where $T$ denotes the number of time points, $C \geq 3$ is the number of deep network layers, $\mathrm{mat}(\mathbf{x}, m, n): \mathbb{R}^{mn} \to \mathbb{R}^{m \times n}$ denotes the reshape (matricization) function, and $\mathbf{h}_C^{(\ell)} \in \mathbb{R}^{T V_{new}}$ is the output layer of the following multi-layer deep network:

$$\mathbf{h}_m^{(\ell)} = g\left( \mathbf{W}_m^{(\ell)} \mathbf{h}_{m-1}^{(\ell)} + \mathbf{b}_m^{(\ell)} \right), \quad \text{where } \mathbf{h}_1^{(\ell)} = \mathrm{vec}\left(\mathbf{X}^{(\ell)}\right) \text{ and } m = 2{:}C. \qquad (5)$$

Here, $g: \mathbb{R} \to \mathbb{R}$ is a nonlinear function applied componentwise, and $\mathrm{vec}: \mathbb{R}^{m \times n} \to \mathbb{R}^{mn}$ denotes the vectorization function; consequently $\mathbf{h}_1^{(\ell)} = \mathrm{vec}(\mathbf{X}^{(\ell)}) \in \mathbb{R}^{TV}$. Notably, this paper considers both $\mathrm{vec}()$ and $\mathrm{mat}()$ to be linear transformations, where $\mathbf{X} \in \mathbb{R}^{m \times n} = \mathrm{mat}(\mathrm{vec}(\mathbf{X}), m, n)$ for any matrix $\mathbf{X}$. By considering $U^{(m)}$ units in the $m$-th intermediate layer, the parameters of the distinct layers of $f_\ell(\mathbf{X}^{(\ell)}; \theta^{(\ell)})$ have the following shapes: $\mathbf{W}_C^{(\ell)} \in \mathbb{R}^{T V_{new} \times U^{(C-1)}}$ and $\mathbf{b}_C^{(\ell)} \in \mathbb{R}^{T V_{new}}$ for the output layer; $\mathbf{W}_2^{(\ell)} \in \mathbb{R}^{U^{(2)} \times TV}$ and $\mathbf{b}_2^{(\ell)} \in \mathbb{R}^{U^{(2)}}$ for the first intermediate layer; and $\mathbf{W}_m^{(\ell)} \in \mathbb{R}^{U^{(m)} \times U^{(m-1)}}$, $\mathbf{b}_m^{(\ell)} \in \mathbb{R}^{U^{(m)}}$ and $\mathbf{h}_m^{(\ell)} \in \mathbb{R}^{U^{(m)}}$ for the $m$-th intermediate layer ($3 \leq m \leq C - 1$).

Since (3) must be evaluated for any new subject in the testing phase, it is not computationally efficient: the transformed training data would have to be referenced by the objective function for each new subject in the testing phase.

Lemma 1. Equation (3) can be reformulated as follows, where $\mathbf{G} \in \mathbb{R}^{T \times V_{new}}$ is the HA template:

$$\min_{\mathbf{G}, \mathbf{R}^{(i)}, \theta^{(i)}} \sum_{i=1}^{S} \left\| \mathbf{G} - f_i\left(\mathbf{X}^{(i)}; \theta^{(i)}\right) \mathbf{R}^{(i)} \right\|_F^2 \quad \text{s.t.}\ \mathbf{G}^{\top}\mathbf{G} = \mathbf{I}, \quad \text{where } \mathbf{G} = \frac{1}{S} \sum_{j=1}^{S} f_j\left(\mathbf{X}^{(j)}; \theta^{(j)}\right) \mathbf{R}^{(j)}. \qquad (6)$$

Proof. In a nutshell, both (3) and (6) can be rewritten as $-S^2 \mathrm{tr}\left(\mathbf{G}^{\top}\mathbf{G}\right) + S \sum_{\ell=1}^{S} \mathrm{tr}\left( \left( f_\ell(\mathbf{X}^{(\ell)}; \theta^{(\ell)}) \mathbf{R}^{(\ell)} \right)^{\top} f_\ell(\mathbf{X}^{(\ell)}; \theta^{(\ell)}) \mathbf{R}^{(\ell)} \right)$. Please see the supplementary materials for the detailed proof.

Remark 1. $\mathbf{G}$ is called the DHA template, which can be used for functional alignment in the testing phase.

Remark 2. As with previous approaches to HA problems [1–7], a DHA solution is not unique. If a DHA template $\mathbf{G}$ is calculated for a specific HA problem, then $\mathbf{Q}\mathbf{G}$ is another solution for that problem, where $\mathbf{Q} \in \mathbb{R}^{V_{new} \times V_{new}}$ can be any orthogonal matrix. Consequently, if two independent templates $\mathbf{G}_1, \mathbf{G}_2$ are trained for a specific dataset, the solutions can be mapped to each other by calculating $\mathbf{G}_2 = \mathbf{Q}\mathbf{G}_1$, where $\mathbf{Q}$ can be used as a coefficient for the functional alignment of the first solution in order to compare its results with the second one. Indeed, $\mathbf{G}_1$ and $\mathbf{G}_2$ are located at different positions on the same contour line [5, 7].
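The deep kernel of Equations (4)–(5) and the template of Lemma 1 can be sketched in a few lines (NumPy; the function names are ours, the reshape realizes vec()/mat() in row-major order as one consistent choice of linear map, and the QR step is merely one cheap way to restore the orthogonality constraint on G):

```python
import numpy as np

def deep_map(X, weights, biases, V_new, g=np.tanh):
    """Deep multi-layer kernel f_l(X; theta) of Eqs. (4)-(5)."""
    T = X.shape[0]
    h = X.reshape(-1)                 # h_1 = vec(X), length T*V
    for W, b in zip(weights, biases): # (W_m, b_m) for m = 2..C
        h = g(W @ h + b)              # h_m = g(W_m h_{m-1} + b_m)
    return h.reshape(T, V_new)        # mat(h_C, T, V_new)

def dha_template(F_list, R_list):
    """Template G of Eq. (6): mean of aligned mapped activities."""
    G = sum(F @ R for F, R in zip(F_list, R_list)) / len(F_list)
    Q, _ = np.linalg.qr(G)            # re-impose G^T G = I (one option)
    return Q
```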
3.1 Optimization

This section proposes an effective approach for optimizing the DHA objective function by using rank-m SVD [10] and SGD [13]. This method seeks an optimum solution for the DHA objective function (6) via two different steps, which iteratively work in unison. Considering the network parameters ($\theta^{(\ell)}$) fixed, a mini-batch of neural activities is first aligned through the deep network; then, the back-propagation algorithm [14] is used to update the network parameters. The main challenge in solving the DHA objective function is that there is no natural extension of the correlation objective to more than two random variables. Consequently, the functional alignments are stacked into an $S \times S$ matrix and a certain matrix norm of that matrix is maximized [10, 12]. As the first step, we consider the network parameters to be in an optimum state; the mappings ($\mathbf{R}^{(\ell)}$, $\ell = 1{:}S$) and the template ($\mathbf{G}$) must then be calculated to solve the DHA problem. To scale the DHA approach, this paper employs the rank-m SVD [10] of the mapped neural activities:

$$f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right) \stackrel{SVD}{=} \mathbf{\Phi}^{(\ell)} \mathbf{\Lambda}^{(\ell)} \left(\mathbf{\Psi}^{(\ell)}\right)^{\top}, \quad \ell = 1{:}S, \qquad (7)$$

where $\mathbf{\Lambda}^{(\ell)} \in \mathbb{R}^{m \times m}$ denotes the diagonal matrix with the $m$ largest singular values of the mapped features $f_\ell(\mathbf{X}^{(\ell)}; \theta^{(\ell)})$, and $\mathbf{\Phi}^{(\ell)} \in \mathbb{R}^{T \times m}$ and $\mathbf{\Psi}^{(\ell)} \in \mathbb{R}^{m \times V_{new}}$ are respectively the corresponding left and right singular vectors. Based on (7), the projection matrix for the $\ell$-th subject can be generated as [10]

$$\mathbf{P}^{(\ell)} = f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right) \left( f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right)^{\top} f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right) + \epsilon \mathbf{I} \right)^{-1} f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right)^{\top} = \mathbf{\Phi}^{(\ell)} \mathbf{\Lambda}^{(\ell)} \left( \left(\mathbf{\Lambda}^{(\ell)}\right)^{\top} \mathbf{\Lambda}^{(\ell)} + \epsilon \mathbf{I} \right)^{-1} \left(\mathbf{\Lambda}^{(\ell)}\right)^{\top} \left(\mathbf{\Phi}^{(\ell)}\right)^{\top} = \mathbf{\Phi}^{(\ell)} \mathbf{D}^{(\ell)} \left( \mathbf{\Phi}^{(\ell)} \mathbf{D}^{(\ell)} \right)^{\top}, \qquad (8)$$

where $\mathbf{P}^{(\ell)} \in \mathbb{R}^{T \times T}$ is symmetric and idempotent [10, 12], and the diagonal matrix $\mathbf{D}^{(\ell)} \in \mathbb{R}^{m \times m}$ satisfies

$$\mathbf{D}^{(\ell)} \left(\mathbf{D}^{(\ell)}\right)^{\top} = \mathbf{\Lambda}^{(\ell)} \left( \left(\mathbf{\Lambda}^{(\ell)}\right)^{\top} \mathbf{\Lambda}^{(\ell)} + \epsilon \mathbf{I} \right)^{-1} \left(\mathbf{\Lambda}^{(\ell)}\right)^{\top}. \qquad (9)$$

Further, the sum of the projection matrices can be written as follows, where $\widetilde{\mathbf{A}} \widetilde{\mathbf{A}}^{\top}$ is the Cholesky decomposition [10] of $\mathbf{A}$:

$$\mathbf{A} = \sum_{i=1}^{S} \mathbf{P}^{(i)} = \widetilde{\mathbf{A}} \widetilde{\mathbf{A}}^{\top}, \quad \text{where } \widetilde{\mathbf{A}} \in \mathbb{R}^{T \times mS} = \left[ \mathbf{\Phi}^{(1)} \mathbf{D}^{(1)} \ \dots \ \mathbf{\Phi}^{(S)} \mathbf{D}^{(S)} \right]. \qquad (10)$$

Lemma 2. Based on (10), the objective function of DHA (6) can be rewritten as

$$\min_{\mathbf{G}, \mathbf{R}^{(i)}, \theta^{(i)}} \sum_{i=1}^{S} \left\| \mathbf{G} - f_i\left(\mathbf{X}^{(i)}; \theta^{(i)}\right) \mathbf{R}^{(i)} \right\| \equiv \max_{\mathbf{G}} \mathrm{tr}\left( \mathbf{G}^{\top} \mathbf{A} \mathbf{G} \right). \qquad (11)$$

Proof. Since $\mathbf{P}^{(\ell)}$ is idempotent, the trace form of (6) can be reformulated as maximizing the sum of projections. Please see the supplementary materials for the detailed proof.

Based on Lemma 2, the first optimization step of the DHA problem can be expressed as the eigendecomposition $\mathbf{A}\mathbf{G} = \mathbf{G}\mathbf{\Lambda}$, where $\mathbf{\Lambda} = \left[\lambda_1 \dots \lambda_T\right]$ and $\mathbf{G}$ respectively denote the eigenvalues and eigenvectors of $\mathbf{A}$. Further, the matrix $\mathbf{G}$ that we are interested in finding can be calculated from the left singular vectors of $\widetilde{\mathbf{A}} = \mathbf{G} \widetilde{\mathbf{\Lambda}} \widetilde{\mathbf{\Psi}}^{\top}$, where $\mathbf{G}^{\top}\mathbf{G} = \mathbf{I}$ [10]. This paper utilizes Incremental SVD [15] to calculate these left singular vectors. Further, the DHA mapping for the $\ell$-th subject is

$$\mathbf{R}^{(\ell)} = \left( f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right)^{\top} f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right) + \epsilon \mathbf{I} \right)^{-1} f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right)^{\top} \mathbf{G}. \qquad (12)$$

Lemma 3. For updating the network parameters in the second step, the derivative of $Z = \sum_{\ell=1}^{T} \lambda_\ell$, the sum of the eigenvalues of $\mathbf{A}$, with respect to the mapped neural activities of the $\ell$-th subject is

$$\frac{\partial Z}{\partial f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right)} = 2 \mathbf{R}^{(\ell)} \mathbf{G}^{\top} - 2 \mathbf{R}^{(\ell)} \left(\mathbf{R}^{(\ell)}\right)^{\top} f_\ell\left(\mathbf{X}^{(\ell)}; \theta^{(\ell)}\right)^{\top}. \qquad (13)$$

Proof. This derivative can be solved by using the chain and product rules of matrix derivatives, together with $\partial Z / \partial \mathbf{A} = \mathbf{G}\mathbf{G}^{\top}$ [12]. Please see the supplementary materials for the detailed proof.
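A sketch of the first (fixed-$\theta$) optimization step, following Equations (7)–(12); it uses a plain batch SVD where the paper uses Incremental SVD [15], the function name is ours, and it assumes $mS \geq V_{new}$ so that enough left singular vectors exist:

```python
import numpy as np

def dha_first_step(F_list, m, eps=1e-8):
    """Template G and mappings R^(l) for fixed network parameters.

    F_list: mapped activities f_l(X^(l); theta^(l)), each of shape (T, V_new).
    m: rank kept in the SVD of each subject's mapped activities (Eq. (7)).
    """
    blocks = []
    for F in F_list:
        U, s, _ = np.linalg.svd(F, full_matrices=False)
        U, s = U[:, :m], s[:m]                      # rank-m factors
        D = s / np.sqrt(s**2 + eps)                 # diagonal of D, Eq. (9)
        blocks.append(U * D)                        # Phi^(l) D^(l)
    A_tilde = np.hstack(blocks)                     # Eq. (10), shape (T, m*S)
    G_full, _, _ = np.linalg.svd(A_tilde, full_matrices=False)
    V_new = F_list[0].shape[1]
    G = G_full[:, :V_new]                           # template, G^T G = I
    R_list = [np.linalg.solve(F.T @ F + eps * np.eye(V_new), F.T @ G)
              for F in F_list]                      # Eq. (12)
    return G, R_list
```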
Algorithm 1 Deep Hyperalignment (DHA)

Input: Data $\mathbf{X}^{(i)}$, $i = 1{:}S$; regularization parameter $\epsilon$; number of layers $C$; number of units $U^{(m)}$ for $m = 2{:}C$; HA template $\widehat{\mathbf{G}}$ for the testing phase (default $\varnothing$); learning rate $\eta$ (default $10^{-4}$ [13]).
Output: DHA mappings $\mathbf{R}^{(\ell)}$ and parameters $\theta^{(\ell)}$; HA template $\mathbf{G}$ (training phase only).
Method:
01. Initialize the iteration counter $m \leftarrow 1$ and $\theta^{(\ell)} \sim \mathcal{N}(0, 1)$ for $\ell = 1{:}S$.
02. Construct $f_\ell(\mathbf{X}^{(\ell)}; \theta^{(\ell)})$ based on (4) and (5) by using $\theta^{(\ell)}$, $C$, $U^{(m)}$ for $\ell = 1{:}S$.
03. IF ($\widehat{\mathbf{G}} = \varnothing$) THEN % first step of DHA: fixed $\theta^{(\ell)}$; calculate $\mathbf{G}$ and $\mathbf{R}^{(\ell)}$
04.   Generate $\widetilde{\mathbf{A}}$ by using (8) and (10).
05.   Calculate $\mathbf{G}$ by applying Incremental SVD [15] to $\widetilde{\mathbf{A}} = \mathbf{G}\widetilde{\mathbf{\Lambda}}\widetilde{\mathbf{\Psi}}^{\top}$.
06. ELSE
07.   $\mathbf{G} \leftarrow \widehat{\mathbf{G}}$.
08. END IF
09. Calculate the mappings $\mathbf{R}^{(\ell)}$, $\ell = 1{:}S$, by using (12).
10. Estimate the iteration error $\gamma_m = \sum_{i=1}^{S}\sum_{j=i+1}^{S} \| f_i(\mathbf{X}^{(i)}; \theta^{(i)})\mathbf{R}^{(i)} - f_j(\mathbf{X}^{(j)}; \theta^{(j)})\mathbf{R}^{(j)} \|_F^2$.
11. IF ($m > 3$) and ($\gamma_m \geq \gamma_{m-1} \geq \gamma_{m-2}$) THEN % finishing condition
12.   Return the calculated $\mathbf{G}$, $\mathbf{R}^{(\ell)}$, $\theta^{(\ell)}$ ($\ell = 1{:}S$) of the $(m{-}2)$-th iteration.
13. END IF % second step of DHA: fixed $\mathbf{G}$ and $\mathbf{R}^{(\ell)}$; update $\theta^{(\ell)}$
14. $\Delta\theta^{(\ell)} \leftarrow \mathrm{backprop}\left( \partial Z / \partial f_\ell(\mathbf{X}^{(\ell)}; \theta^{(\ell)}),\ \theta^{(\ell)} \right)$ by using (13) for $\ell = 1{:}S$.
15. Update $\theta^{(\ell)} \leftarrow \theta^{(\ell)} - \eta \Delta\theta^{(\ell)}$ for $\ell = 1{:}S$, then $m \leftarrow m + 1$.
16. SAVE all DHA parameters of this iteration and GO TO Line 02.

Algorithm 1 illustrates the DHA method for both the training and testing phases. As depicted in this algorithm, only (12) is needed as the first step in the testing phase, because the DHA template $\mathbf{G}$ is already calculated from the training samples (see Lemma 1). As the second step in the DHA method, the networks' parameters ($\theta^{(\ell)}$) must be updated. This paper employs the back-propagation algorithm (the backprop() function) [14] together with Lemma 3 for this step. The finishing condition is defined by tracking the errors of the last three iterations, i.e. the average difference between each pair of correlations of aligned functional activities across subjects ($\gamma_m$ for the last three iterations); in other words, DHA finishes if the error rates of the last three iterations keep worsening. Further, a structure (the componentwise nonlinear function, and the numbers of layers and units) for the deep network can be selected based on the optimum-state error ($\gamma_{opt}$) generated by training samples across different structures (see Experiment Schemes in the supplementary materials).

In summary, this paper proposes DHA as a flexible deep kernel approach for improving the performance of functional alignment in fMRI analysis. To seek an efficient functional alignment, DHA uses a deep network (multiple stacked layers of nonlinear transformations) for mapping the fMRI responses of each subject to an embedded space ($f_\ell: \mathbb{R}^{T \times V} \to \mathbb{R}^{T \times V_{new}}$, $\ell = 1{:}S$). Unlike previous methods that use a restricted fixed kernel function, the mapping functions in DHA are flexible across subjects because they employ multi-layer neural networks, which can implement any nonlinear function [12]; therefore, DHA does not suffer from the disadvantages of the previous kernel approach. To deal with high dimensionality (broad ROI), DHA can also apply an optional feature selection by setting $V_{new} < V$ when constructing the deep networks. The performance of this optional feature selection will be analyzed in Section 4. Finally, DHA can be scaled to a large number of subjects by using the proposed optimization algorithm, i.e. rank-m SVD, regularization, and mini-batch SGD.
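Putting the two steps together, a skeleton of Algorithm 1 might look as follows (reusing dha_first_step from the previous sketch; forward and backward are placeholders for a concrete deep-network implementation and its back-propagation, and the error bookkeeping mirrors lines 10–13):

```python
import numpy as np

def dha_train(X_list, thetas, forward, backward, m=50,
              lr=1e-4, max_iter=100, eps=1e-8):
    """Alternating optimization skeleton following Algorithm 1.

    forward(X, theta)         -> f_l(X; theta), shape (T, V_new)
    backward(X, theta, dZ_dF) -> per-parameter gradients via backprop
    """
    thetas, errors = list(thetas), []
    for it in range(max_iter):
        F_list = [forward(X, th) for X, th in zip(X_list, thetas)]
        G, R_list = dha_first_step(F_list, m, eps)   # step 1: solve G, R
        aligned = [F @ R for F, R in zip(F_list, R_list)]
        errors.append(sum(np.linalg.norm(aligned[i] - aligned[j])**2
                          for i in range(len(aligned))
                          for j in range(i + 1, len(aligned))))
        if it > 2 and errors[-1] >= errors[-2] >= errors[-3]:
            break                                    # finishing condition
        # step 2: update theta through Eq. (13), with G and R fixed
        for s in range(len(thetas)):
            F, R = F_list[s], R_list[s]
            dZ_dF = (2 * R @ G.T - 2 * R @ R.T @ F.T).T
            grads = backward(X_list[s], thetas[s], dZ_dF)
            thetas[s] = [p - lr * g for p, g in zip(thetas[s], grads)]
    return G, R_list, thetas
```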
4 Experiments

The empirical studies are reported in this section. Like previous studies [1–7, 9], this paper employs the ν-SVM algorithm [16] for generating the classification models: the binary ν-SVM for datasets with just two categories of stimuli, and the multi-label ν-SVM [3, 16] as the multi-class approach. All datasets are separately preprocessed by FSL 5.0.9 (https://fsl.fmrib.ox.ac.uk), i.e. slice timing, anatomical alignment, normalization, and smoothing. Regions of Interest (ROI) are defined by following the main reference of each dataset. In addition, leave-one-subject-out cross-validation is utilized to partition each dataset into training and testing sets. Different HA methods are employed for functional alignment, and then the mapped neural activities are used to generate the classification model. The performance of the proposed method is compared with the ν-SVM algorithm as the baseline, where the features are used after anatomical alignment without applying any hyperalignment mapping. Further, the performances of standard HA [1], RHA [2], KHA [3], SVDHA [4], SRM [5], and SL [9] are reported as state-of-the-art HA methods. In this paper, the results of the HA algorithm are generated by employing the Generalized CCA proposed in [10]. In addition, the regularization parameters in RHA are optimally assigned based on [2]. Further, the KHA algorithm is used with the Gaussian kernel, which was evaluated as the best kernel in the original paper [3]. As another deep-learning-based alternative for functional alignment, the performance of CAE [6] is also compared with the proposed method. Like the original paper [6], this paper employs k1 = k3 selected from {5, 10, 15, 20, 25}, with CAE's two regularization parameters selected from {0.1, 0.25, 0.5, 0.75, 0.9} and {0.1, 1, 5, 10}, respectively.

Table 1: Accuracy of HA methods in post-alignment classification by using simple task datasets.

Algorithms    DS005        DS105        DS107        DS116        DS117
ν-SVM [17]   71.65±0.97   22.89±1.02   38.84±0.82   67.26±1.99   73.32±1.67
HA [1]       81.27±0.59   30.03±0.87   43.01±0.56   74.23±1.40   77.93±0.29
RHA [2]      83.06±0.36   32.62±0.52   46.82±0.37   78.71±0.76   84.22±0.44
KHA [3]      85.29±0.49   37.14±0.91   52.69±0.69   78.03±0.89   83.32±0.41
SVD-HA [4]   90.82±1.23   40.21±0.83   59.54±0.99   81.56±0.54   95.62±0.83
SRM [5]      91.26±0.34   48.77±0.94   64.11±0.37   83.31±0.73   95.01±0.64
SL [9]       90.21±0.61   49.86±0.4    64.07±0.98   82.32±0.28   94.96±0.24
CAE [6]      94.25±0.76   54.52±0.80   72.16±0.43   91.49±0.67   95.92±0.67
DHA          97.92±0.82   60.39±0.68   73.05±0.63   90.28±0.71   97.99±0.94

Table 2: Area under the ROC curve (AUC) of different HA methods in post-alignment classification by using simple task datasets.

Algorithms    DS005        DS105        DS107        DS116        DS117
ν-SVM [17]   68.37±1.01   21.76±0.91   36.84±1.45   62.49±1.34   70.17±0.59
HA [1]       70.32±0.92   28.91±1.03   40.21±0.33   70.67±0.97   76.14±0.49
RHA [2]      82.22±0.42   30.35±0.39   43.63±0.61   76.34±0.45   81.54±0.92
KHA [3]      80.91±0.21   36.23±0.57   50.41±0.92   75.28±0.94   80.92±0.28
SVD-HA [4]   88.54±0.71   37.61±0.62   57.54±0.31   78.66±0.82   92.14±0.42
SRM [5]      90.23±0.74   44.48±0.75   62.41±0.72   79.20±0.98   93.65±0.93
SL [9]       89.79±0.25   47.32±0.92   61.84±0.32   80.63±0.81   93.26±0.72
CAE [6]      91.24±0.61   52.16±0.63   72.33±0.79   87.53±0.72   91.49±0.33
DHA          96.91±0.82   59.57±0.32   70.23±0.92   89.93±0.24   96.13±0.32
As another deep-learning-based alternative for functional alignment, the performance of CAE [6] is also compared with the proposed method. Like the original paper [6], this paper sweeps k1 = k3 ∈ {5, 10, 15, 20, 25}, ρ ∈ {0.1, 0.25, 0.5, 0.75, 0.9}, and λ ∈ {0.1, 1, 5, 10}. The neural activities aligned by CAE are then fed to the same classification algorithm as the other HA techniques. This paper follows the CAE setup to fix the corresponding settings in the proposed method: three hidden layers (C = 5) and regularization parameters ε ∈ {10⁻⁴, 10⁻⁶, 10⁻⁸} are employed in the DHA method. In addition, the number of units in the intermediate layers is set to U^(m) = KV for m = 2:C−1, where C is the number of layers, V denotes the number of voxels, and K is the number of stimulus categories in each dataset. (Although any settings can be used for DHA, we empirically found this choice acceptable for seeking an optimum solution: we followed the CAE network structure but used the number of categories K rather than a series of extra parameters. In its current form, DHA only requires the regularization constant and the nonlinear activation function, whereas a wide range of parameters must be set in CAE.) Further, three distinctive activation functions are employed, i.e. Sigmoid (g(x) = 1/(1 + exp(−x))), Hyperbolic tangent (g(x) = tanh(x)), and Rectified Linear Unit (ReLU, g(x) = ln(1 + exp(x))). In this paper, the optimum parameters for the DHA and CAE methods are reported for each dataset. Moreover, all algorithms were implemented by the authors in Python 3 on a PC with the following specification: Dell, Intel Xeon E5-2630 v3 CPU (8×2.4 GHz), 64 GB RAM, GeForce GTX TITAN X GPU (12 GB memory), Ubuntu 16.04.3 LTS, Python 3.6.2, Pip 9.0.1, NumPy 1.13.1, SciPy 0.19.1, Scikit-Learn 0.18.2, Theano 0.9.0. The experiment schemes are described in the supplementary materials.
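The per-subject mapping described above is a plain multi-layer network; the following is a minimal sketch, assuming dense layers with the stated layer widths, of how f_ℓ could be built. The function and parameter names are illustrative, not taken from the paper's code, and the default tanh activation is just one of the three choices listed above.

```python
import numpy as np

def build_subject_net(V, K, C, activation=np.tanh, v_new=None, seed=0):
    """Minimal sketch of the per-subject DHA mapping f_l: each of the
    intermediate layers (m = 2:C-1) has U = K*V units, and the last layer
    maps to v_new (feature selection whenever v_new < V)."""
    rng = np.random.default_rng(seed)
    v_new = v_new if v_new is not None else V
    sizes = [V] + [K * V] * (C - 2) + [v_new]
    # theta^(l): one weight matrix per layer, N(0, 1) init as in Alg. 1
    theta = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

    def f(X):  # X: T x V matrix of fMRI responses
        H = X
        for W in theta[:-1]:
            H = activation(H @ W)
        return H @ theta[-1]           # T x v_new embedded responses

    return f, theta
```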
4.1 Simple Tasks Analysis

This paper utilizes 5 datasets, shared by Open fMRI (https://openfmri.org), for the empirical studies of this section. The numbers of original and aligned features are considered equal (V = V_new) for all HA methods.

[Figure 1: Comparison of different HA algorithms on the complex-task datasets using ranked voxels. Panels: (a)-(d) Forrest Gump with TRs = 100, 400, 800, 2000; (e)-(h) Raiders with TRs = 100, 400, 800, 2000. Each panel plots classification accuracy (%) against the number of voxels per hemisphere for vSVM, HA, KHA, RHA, SL, SVDHA, SRM, CAE, and DHA.]

As the first dataset, the "Mixed-gambles task" (DS005) includes S = 48 subjects. It contains K = 2 categories of risk tasks in the human brain, where the chance of selection is 50/50. In this dataset, the best results for CAE are generated with k1 = k3 = 20, ρ = 0.75, λ = 1, and for DHA with ε = 10⁻⁸ and the Hyperbolic activation. The ROI is defined based on the original paper [17]. As the second dataset, "Visual Object Recognition" (DS105) includes S = 71 subjects. It contains K = 8 categories of visual stimuli, i.e. gray-scale images of faces, houses, cats, bottles, scissors, shoes, chairs, and scrambles (nonsense patterns). In this dataset, the best results for CAE are generated with k1 = k3 = 25, ρ = 0.9, λ = 5, and for DHA with ε = 10⁻⁶ and the Sigmoid activation. Please see [1, 7] for more information. As the third dataset, "Word and Object Processing" (DS107) includes S = 98 subjects. It contains K = 4 categories of visual stimuli, i.e. words, objects, scrambles, and consonants. In this dataset, the best results for CAE are generated with k1 = k3 = 10, ρ = 0.5, λ = 10, and for DHA with ε = 10⁻⁶ and the ReLU activation. Please see [18] for more information. As the fourth dataset, the "Multi-subject, multi-modal human neuroimaging dataset" (DS117) includes MEG and fMRI images for S = 171 subjects. This paper uses only the fMRI images of this dataset. It contains K = 2 categories of visual stimuli, i.e. human faces and scrambles. In this dataset, the best results for CAE are generated with k1 = k3 = 20, ρ = 0.9, λ = 5, and for DHA with ε = 10⁻⁸ and the Sigmoid activation. Please see [19] for more information. The responses of voxels in the ventral cortex are analyzed for these three datasets (DS105, DS107, DS117). As the last dataset, "Auditory and Visual Oddball EEG-fMRI" (DS116) includes EEG signals and fMRI images for S = 102 subjects. This paper employs only the fMRI images of this dataset. It contains K = 2 categories of audio and visual stimuli, including oddball tasks. In this dataset, the best results for CAE are generated with k1 = k3 = 10, ρ = 0.75, λ = 1, and for DHA with ε = 10⁻⁴ and the ReLU activation. The ROI is defined based on the original paper [20]. The technical details of the employed datasets are also provided in the supplementary materials.

Tables 1 and 2 respectively report the classification accuracy and the area under the ROC curve (AUC), in percent (%), for the predictors. As these tables demonstrate, the performance of classification analysis without any HA method is significantly lower. Further, the proposed algorithm outperforms the other methods because it provides a better embedded space in which to align the neural activities.

4.2 Complex Tasks Analysis

This section uses two fMRI datasets related to movie watching. The numbers of original and aligned features are again considered equal (V = V_new) for all HA methods. As the first dataset, "A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie" (DS113) includes the fMRI data of S = 18 subjects, who watched the movie "Forrest Gump (1994)" during the experiment. This dataset is provided by Open fMRI. Here, the best results for CAE are generated with k1 = k3 = 25, ρ = 0.9, λ = 10, and for DHA with ε = 10⁻⁸ and the Sigmoid activation. Please see [7] for more information.
As the second dataset, S = 10 subjects watched "Raiders of the Lost Ark (1981)", where the number of whole-brain volumes is 48. In this dataset, the best results for CAE are generated with k1 = k3 = 15, ρ = 0.75, λ = 1, and for DHA with ε = 10⁻⁴ and the Sigmoid activation. Please see [3-5] for more information. In these two datasets, the ROI is defined in the ventral temporal cortex (VT). Figure 1 depicts the generated results, where the voxels in the ROI are ranked by the method proposed in [1] according to their neurological priorities, as in previous studies [1, 4, 7, 9]. The experiments are then repeated for different numbers of ranked voxels per hemisphere, i.e. [100, 200, 400, 600, 800, 1000, 1200] in Forrest Gump and [70, 140, 210, 280, 350, 420, 490] in Raiders. In addition, the empirical studies are reported using the first TRs = [100, 400, 800, 2000] time points in both datasets. Figure 1 shows that DHA achieves superior performance to the other HA algorithms.

4.3 Classification analysis by using feature selection

In this section, the effect of feature selection (V_new < V) on the performance of the classification methods is discussed using the DS105 and DS107 datasets. Here, the performance of the proposed method is compared with SVD-HA [4], SRM [5], and CAE [6] as the state-of-the-art HA techniques that can apply feature selection before generating a classification model. The multi-label ν-SVM [16] is used for generating the classification models after each of the mentioned methods is applied to the preprocessed fMRI images for functional alignment. The setup of this experiment is otherwise the same as in the previous sections (cross-validation, best parameters, etc.).

[Figure 2: Classification by using feature selection. Classification accuracy against the percentage of selected features (100% down to 60%) for SVD-HA, SRM, CAE, and DHA. (A) DS105; (B) DS107.]

Figure 2 illustrates the performance of the different methods when employing 100% to 60% of the features. As depicted in this figure, the proposed method generates better performance than the other methods because it provides a better feature representation.

4.4 Runtime Analysis

In this section, the runtime of the proposed method is compared with the previous HA methods on the DS105 and DS107 datasets. As mentioned before, all results in this experiment are generated on a PC with the specification given above.

[Figure 3: Runtime analysis. Relative runtime (%) of ν-SVM, HA, KHA, RHA, SL, SVD-HA, SRM, CAE, and DHA. (A) DS105; (B) DS107.]

Figure 3 illustrates the runtimes of the mentioned methods, where the runtimes of the other methods are scaled relative to DHA (the runtime of the proposed method is taken as the unit). As depicted in this figure, CAE exhibits the worst runtime because it concurrently employs modified versions of SRM and SL for functional alignment. Further, SL also incurs a high time complexity because of its ensemble approach. Considering the performance of the proposed method in the previous sections, its runtime is acceptable. As mentioned before, the proposed method employs rank-m SVD [10] as well as Incremental SVD [15], which can significantly reduce the time complexity of the optimization procedure [10, 12].
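As a rough illustration of why the incremental update is cheap, here is a minimal NumPy sketch of the column-update step in the spirit of Brand's incremental SVD [15]; it tracks only the left subspace and singular values, which is what the template computation needs, and all names are illustrative rather than from the paper's code.

```python
import numpy as np

def incremental_svd(U, s, new_cols, rank):
    """Update the rank-`rank` left singular subspace (U, s) of a matrix
    after appending the columns `new_cols`, without re-decomposing it."""
    proj = U.T @ new_cols                  # coordinates inside current subspace
    resid = new_cols - U @ proj            # component orthogonal to it
    Q, Rr = np.linalg.qr(resid)            # orthonormal basis of the residual
    k, c = len(s), new_cols.shape[1]
    K = np.zeros((k + c, k + c))           # small core matrix
    K[:k, :k] = np.diag(s)
    K[:k, k:] = proj
    K[k:, k:] = Rr
    Up, sp, _ = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Up         # rotate the enlarged basis
    return U_new[:, :rank], sp[:rank]
```

Each update costs an SVD of a small (k + c) × (k + c) core matrix instead of a full decomposition of the growing data matrix, which is why the approach scales with the number of subjects.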
5 Conclusion

This paper introduced a deep extension of hyperalignment methods in order to provide accurate functional alignment in multi-subject fMRI analysis. Deep Hyperalignment (DHA) can handle fMRI datasets with nonlinearity, high dimensionality (broad ROIs), and a large number of subjects. We have also illustrated how DHA can be used for post-alignment classification. DHA is parametric and uses rank-m SVD and stochastic gradient descent for optimization. Therefore, DHA has a low runtime on large datasets, and it does not require the training data when the functional alignment is computed for a new subject. Further, DHA is not limited to a restricted fixed representational space, because the kernel in DHA is a multi-layer neural network that can separately implement any nonlinear function for each subject to transfer the brain activities to a common space. Experimental studies on multi-subject fMRI analysis confirm that the DHA method achieves superior performance to other state-of-the-art HA algorithms. In the future, we plan to employ DHA for improving the performance of other techniques in fMRI analysis, e.g. Representational Similarity Analysis (RSA).

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (61422204, 61473149, and 61732006) and NUAA Fundamental Research Funds (NE2013105).

References

[1] Haxby, J.V., Connolly, A.C., & Guntupalli, J.S. (2014) Decoding neural representational spaces using multivariate pattern analysis. Annual Review of Neuroscience. 37:435-456.
[2] Xu, H., Lorbert, A., Ramadge, P.J., Guntupalli, J.S., & Haxby, J.V. (2012) Regularized hyperalignment of multi-set fMRI data. IEEE Statistical Signal Processing Workshop (SSP). pp. 229-232, Aug/5-8, USA.
[3] Lorbert, A. & Ramadge, P.J. (2012) Kernel hyperalignment. 25th Advances in Neural Information Processing Systems (NIPS). pp. 1790-1798, Dec/3-8, Harveys.
[4] Chen, P.H., Guntupalli, J.S., Haxby, J.V., & Ramadge, P.J. (2014) Joint SVD-Hyperalignment for multi-subject fMRI data alignment. 24th IEEE International Workshop on Machine Learning for Signal Processing (MLSP). pp. 1-6, Sep/21-24, France.
[5] Chen, P.H., Chen, J., Yeshurun, Y., Hasson, U., Haxby, J.V., & Ramadge, P.J. (2015) A reduced-dimension fMRI shared response model. 28th Advances in Neural Information Processing Systems (NIPS). pp. 460-468, Dec/7-12, Canada.
[6] Chen, P.H., Zhu, X., Zhang, H., Turek, J.S., Chen, J., Willke, T.L., Hasson, U., & Ramadge, P.J. (2016) A convolutional autoencoder for multi-subject fMRI data aggregation. 29th Workshop of Representation Learning in Artificial and Biological Neural Networks, NIPS. Dec/5-10, Barcelona.
[7] Yousefnezhad, M. & Zhang, D. (2017) Local Discriminant Hyperalignment for multi-subject fMRI data alignment. 31st AAAI Conference on Artificial Intelligence. pp. 59-61, Feb/4-9, San Francisco, USA.
[8] Langs, G., Tie, Y., Rigolo, L., Golby, A., & Golland, P. (2010) Functional geometry alignment and localization of brain areas. 23rd Advances in Neural Information Processing Systems (NIPS). Dec/6-11, Canada.
[9] Guntupalli, J.S., Hanke, M., Halchenko, Y.O., Connolly, A.C., Ramadge, P.J., & Haxby, J.V. (2016) A model of representational spaces in human cortex. Cerebral Cortex. Oxford University Press.
[10] Rastogi, P., Van Durme, B., & Arora, R. (2015) Multiview LSA: Representation Learning via Generalized CCA.
14th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL). pp. 556-566, May/31-Jun/5, Denver, USA.
[11] Andrew, G., Arora, R., Bilmes, J., & Livescu, K. (2013) Deep Canonical Correlation Analysis. 30th International Conference on Machine Learning (ICML). pp. 1247-1255, Jun/16-21, Atlanta, USA.
[12] Benton, A., Khayrallah, H., Gujral, B., Reisinger, D., Zhang, S., & Arora, R. (2017) Deep Generalized Canonical Correlation Analysis. 5th International Conference on Learning Representations (ICLR).
[13] Wang, W., Arora, R., Livescu, K., & Srebro, N. (2015) Stochastic optimization for deep CCA via nonlinear orthogonal iterations. 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton). pp. 688-695, Oct/3-6, Urbana-Champaign, USA.
[14] Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986) Learning representations by back-propagating errors. Nature. 323(6088):533-538.
[15] Brand, M. (2002) Incremental Singular Value Decomposition of uncertain data with missing values. 7th European Conference on Computer Vision (ECCV). pp. 707-720, May/28-31, Copenhagen, Denmark.
[16] Smola, A.J. & Schölkopf, B. (2004) A tutorial on support vector regression. Statistics and Computing. 14(3):199-222.
[17] Tom, S.M., Fox, C.R., Trepel, C., & Poldrack, R.A. (2007) The neural basis of loss aversion in decision-making under risk. Science. 315(5811):515-518.
[18] Duncan, K.J., Pattamadilok, C., Knierim, I., & Devlin, J.T. (2009) Consistency and variability in functional localisers. NeuroImage. 46(4):1018-1026.
[19] Wakeman, D.G. & Henson, R.N. (2015) A multi-subject, multi-modal human neuroimaging dataset. Scientific Data. vol. 2.
[20] Walz, J.M., Goldman, R.I., Carapezza, M., Muraskin, J., Brown, T.R., & Sajda, P. (2013) Simultaneous EEG-fMRI reveals temporal evolution of coupling between supramodal cortical attention networks and the brainstem. Journal of Neuroscience. 33(49):19212-19222.
Online to Offline Conversions, Universality and Adaptive Minibatch Sizes

Kfir Y. Levy
Department of Computer Science, ETH Zürich.
[email protected]

Abstract

We present an approach towards convex optimization that relies on a novel scheme which converts adaptive online algorithms into offline methods. In the offline optimization setting, our derived methods are shown to obtain favourable adaptive guarantees which depend on the harmonic sum of the queried gradients. We further show that our methods implicitly adapt to the objective's structure: in the smooth case fast convergence rates are ensured without any prior knowledge of the smoothness parameter, while still maintaining guarantees in the non-smooth setting. Our approach has a natural extension to the stochastic setting, resulting in a lazy version of SGD (stochastic GD), where minibatches are chosen adaptively depending on the magnitude of the gradients, thus providing a principled approach towards choosing minibatch sizes.

1 Introduction

Over the past years data adaptiveness has proven to be crucial to the success of learning algorithms. The objective function underlying "big data" applications often demonstrates intricate structure: the scale and smoothness are often unknown and may change substantially between different regions/directions, [1]. Learning methods that acclimatize to these changes may exhibit superior performance compared to non-adaptive procedures. State-of-the-art first order methods like AdaGrad, [1], and Adam, [2], adapt the learning rate on the fly according to the feedback (i.e. gradients) received during the optimization process. AdaGrad and Adam are guaranteed to work well in the online convex optimization setting, where loss functions may be chosen adversarially and change between rounds. Nevertheless, this setting is harder than the stochastic/offline settings, which may better depict practical applications. Interestingly, even in the offline convex optimization setting it can be shown that in several scenarios very simple schemes substantially outperform the output of AdaGrad/Adam. An example of such a simple scheme is choosing the point with the smallest gradient norm among all rounds. In the first part of this work we address this issue and design adaptive methods for the offline convex optimization setting. At the heart of our derivations is a novel scheme which converts adaptive online algorithms into offline methods with favourable guarantees¹. Our scheme is inspired by standard online-to-batch conversions, [3].

A seemingly different issue is choosing the minibatch size, b, in the stochastic setting. Stochastic optimization algorithms that can access a noisy gradient oracle may choose to invoke the oracle b times at every query point, subsequently employing an averaged gradient estimate. Theory for stochastic convex optimization suggests using a minibatch of b = 1, and predicts a degradation by a factor of √b upon using larger minibatch sizes². Nevertheless, in practice larger minibatch sizes are usually found to be effective. In the second part of this work we design stochastic optimization methods in

¹ For concreteness we concentrate in this work on converting AdaGrad, [1]. Note that our conversion scheme applies more widely to other adaptive online methods.
² A degradation by a √b factor in the general case and by a b factor in the strongly-convex case.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
which minibatch sizes are chosen adaptively without any theoretical degradation. These are natural extensions of the offline methods presented in the first part.

Our contributions:

Offline setting: We present two (families of) algorithms, AdaNGD (Alg. 2) and SC-AdaNGD (Alg. 3), for the convex/strongly-convex settings which achieve favourable adaptive guarantees (Thms. 2.1, 2.2, 3.1, 3.2). The latter theorems also establish their universality, i.e., their ability to implicitly take advantage of the objective's smoothness and attain rates as fast as GD would have achieved had the smoothness parameter been known. In contrast to other universal approaches such as line-search GD, [4], and universal gradient methods, [5], we do so without any line search procedure. Concretely, without knowledge of the smoothness parameter, our algorithm ensures an O(1/√T) rate in the general convex case and an O(1/T) rate if the objective is also smooth (Thms. 2.1, 2.2). In the strongly-convex case our algorithm ensures an O(1/T) rate in general and an O(exp(−T/κ)) rate if the objective is also smooth (Thm. 3.2), where κ is the condition number.

Stochastic setting: We present Lazy-SGD (Algorithm 4), which is an extension of our offline algorithms. Lazy-SGD employs larger minibatch sizes at points with smaller gradients, which selectively reduces the variance at the "more important" query points. Lazy-SGD's guarantees are comparable with SGD in the convex/strongly-convex settings (Thms. 4.2, 4.3).

On the technical side, our online-to-offline conversion schemes employ three simultaneous mechanisms: an adaptive online algorithm used in conjunction with gradient normalization and with a respective importance weighting. To the best of our knowledge the combination of the above techniques is novel, and we believe it might also find use in other scenarios. This paper is organized as follows. In Sections 2 and 3 we present our methods for the offline convex/strongly-convex settings. Section 4 describes our methods for the stochastic setting, and Section 5 concludes. Extensions and a preliminary experimental study appear in the Appendix.

1.1 Related Work

The authors of [1], simultaneously to [6], were the first to suggest AdaGrad, an adaptive gradient-based method, and to prove its efficiency in tackling online convex problems. AdaGrad was subsequently adjusted to the deep-learning setting to yield the RMSprop, [7], and Adadelta, [8], heuristics. Adam, [2], is a popular adaptive algorithm which is often the method of choice in deep-learning applications. It combines ideas from AdaGrad together with momentum machinery, [9].

An optimization procedure is called universal if it implicitly adapts to the objective's smoothness. In [5], universal gradient methods are devised for the general convex setting. Concretely, without knowledge of the smoothness parameter, these methods attain the standard O(1/T) and accelerated O(1/T²) rates for smooth objectives, and an O(1/√T) rate in the non-smooth case. The core technique in that work is a line search procedure which estimates the smoothness parameter in every iteration. For strongly-convex and smooth objectives, line search techniques, [4], ensure a linear convergence rate without knowledge of the smoothness parameter. However, line search is not "fully universal", in the sense that it holds no guarantees in the non-smooth case. For the latter setting we present a method which is "fully universal" (Thm. 3.2); nevertheless, it requires the strong-convexity parameter.
The usefulness of employing normalized gradients was demonstrated in several non-convex scenarios. In the context of quasi-convex optimization, [10] and [11] established convergence guarantees for the offline/stochastic settings. More recently, it was shown in [12] that normalized gradient descent is more appropriate than GD for saddle-evasion scenarios.

In the context of stochastic optimization, the effect of the minibatch size has been extensively investigated throughout the past years, [13, 14, 15, 16, 17, 18]. Yet, all of these studies (i) assume a smooth expected loss, and (ii) discuss fixed minibatch sizes. Conversely, our work discusses adaptive minibatch sizes, and applies to both smooth and non-smooth expected losses.

1.2 Preliminaries

Notation: ‖·‖ denotes the ℓ₂ norm, G denotes a bound on the norm of the objective's gradients, and [T] := {1, . . . , T}. For a set K ⊆ ℝ^d its diameter is defined as D = sup_{x,y∈K} ‖x − y‖. Next we define H-strongly-convex and β-smooth functions:

f(y) ≥ f(x) + ∇f(x)^⊤(y − x) + (H/2)‖x − y‖², ∀x, y ∈ K   (H-strong-convexity)
f(y) ≤ f(x) + ∇f(x)^⊤(y − x) + (β/2)‖x − y‖², ∀x, y ∈ K   (β-smoothness)

1.2.1 AdaGrad

The methods presented in this paper lean on AdaGrad (Alg. 1), an online optimization method which employs an adaptive learning rate.

Algorithm 1 Adaptive Gradient Descent (AdaGrad)
Input: #Iterations T, x_1 ∈ ℝ^d, set K
Set: Q_0 = 0
for t = 1 . . . T do
  Calculate: g_t = ∇f_t(x_t)
  Update: Q_t = Q_{t−1} + ‖g_t‖²
  Set: η_t = D/√(2Q_t)
  Update: x_{t+1} = Π_K(x_t − η_t g_t)
end for

The following theorem states AdaGrad's guarantees, [1]:

Theorem 1.1. Let K be a convex set with diameter D. Let {f_t}_{t=1}^T be an arbitrary sequence of convex loss functions. Then Algorithm 1 guarantees the following regret:

∑_{t=1}^T f_t(x_t) − min_{x∈K} ∑_{t=1}^T f_t(x) ≤ √(2D² ∑_{t=1}^T ‖g_t‖²).
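The following is a minimal NumPy sketch of this (scalar-stepsize) AdaGrad variant; `project` is a placeholder for the Euclidean projection Π_K onto the constraint set, and all names are illustrative.

```python
import numpy as np

def adagrad(grad, project, x1, D, T):
    """Scalar AdaGrad (Alg. 1): eta_t = D / sqrt(2 * running sum of
    squared gradient norms), followed by projection onto K."""
    x, Q = x1.astype(float), 0.0
    iterates = [x]
    for t in range(T):
        g = grad(x)                      # g_t = grad f_t(x_t)
        Q += np.linalg.norm(g) ** 2      # Q_t = Q_{t-1} + ||g_t||^2
        eta = D / np.sqrt(2.0 * Q) if Q > 0 else 0.0
        x = project(x - eta * g)         # x_{t+1} = Pi_K(x_t - eta_t g_t)
        iterates.append(x)
    return iterates
```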
2 Adaptive Normalized Gradient Descent (AdaNGD)

In this section we discuss the convex optimization setting and introduce our AdaNGD_k algorithm, which depends on a parameter k ∈ ℝ. We first derive a general convergence rate which holds for general k. Subsequently, we elaborate on the cases k = 1, 2, which exhibit universality as well as adaptive guarantees that may be substantially better than those of standard methods.

Our method AdaNGD_k is depicted in Alg. 2. This algorithm can be thought of as an online-to-offline conversion scheme which utilizes AdaGrad (Alg. 1) as a black box and eventually outputs a weighted sum of the online queries. Indeed, for a fixed k ∈ ℝ, it is not hard to see that AdaNGD_k is equivalent to invoking AdaGrad with the loss sequence {f̃_t(x) := g_t^⊤x/‖g_t‖^k}_{t=1}^T, and eventually weighting each query point inversely proportionally to the k-th power of the norm of its gradient. The reasoning behind this scheme is that in offline optimization it makes sense to dramatically reduce the learning rate upon encountering a point with a very small gradient. For k ≥ 1, this is achieved by invoking AdaGrad with gradients normalized by their k-th power norm. Since we discuss constrained optimization, we use the projection operator defined as Π_K(y) := argmin_{x∈K} ‖x − y‖.

Algorithm 2 Adaptive Normalized Gradient Descent (AdaNGD_k)
Input: #Iterations T, x_1 ∈ ℝ^d, set K, parameter k
Set: Q_0 = 0
for t = 1 . . . T − 1 do
  Calculate: g_t = ∇f(x_t), ĝ_t = g_t/‖g_t‖^k
  Update: Q_t = Q_{t−1} + 1/‖g_t‖^{2(k−1)}
  Set: η_t = D/√(2Q_t)
  Update: x_{t+1} = Π_K(x_t − η_t ĝ_t)
end for
Return: x̄_T = ∑_{t=1}^T (‖g_t‖^{−k} / ∑_{τ=1}^T ‖g_τ‖^{−k}) x_t

The next lemma states the guarantee of AdaNGD_k for general k:

Lemma 2.1. Let k ∈ ℝ, K be a convex set with diameter D, and f be a convex function; also let x̄_T be the output of AdaNGD_k (Algorithm 2). Then the following holds:

f(x̄_T) − min_{x∈K} f(x) ≤ √(2D² ∑_{t=1}^T 1/‖g_t‖^{2(k−1)}) / ∑_{t=1}^T 1/‖g_t‖^k.

Proof sketch. Notice that the AdaNGD_k algorithm is equivalent to applying AdaGrad to the loss sequence {f̃_t(x) := g_t^⊤x/‖g_t‖^k}_{t=1}^T. Thus, applying Theorem 1.1 and using the definition of x̄_T together with Jensen's inequality, the lemma follows.

For k = 0, Algorithm 2 becomes AdaGrad (Alg. 1). Next we focus on the cases k = 1, 2, showing improved adaptive rates and universality compared to GD/AdaGrad. These improved rates are attained thanks to the adaptivity of the learning rate: when query points with small gradients are encountered, AdaNGD_k (with k ≥ 1) reduces the learning rate, thus focusing on the region around these points. The hindsight weighting further emphasizes points with smaller gradients.

2.1 AdaNGD1

Here we show that AdaNGD1 enjoys a rate of O(1/√T) in the non-smooth convex setting, and a fast rate of O(1/T) in the smooth setting. We emphasize that the same algorithm enjoys these rates simultaneously, without any prior knowledge of the smoothness or of the gradient norms. From Algorithm 2 it can be noted that for k = 1 the learning rate becomes independent of the gradients, i.e. η_t = D/√(2t); the update is made according to the direction of the gradients; and the weighting is inversely proportional to the norm of the gradients. The following theorem establishes the guarantees of AdaNGD1.

Theorem 2.1. Let k = 1, K be a convex set with diameter D, and f be a convex function; also let x̄_T be the output of AdaNGD1 (Alg. 2). Then the following holds:

f(x̄_T) − min_{x∈K} f(x) ≤ √(2D²T) / ∑_{t=1}^T 1/‖g_t‖ ≤ √2 GD/√T.

Moreover, if f is also β-smooth and the global minimum x* = argmin_{x∈ℝⁿ} f(x) belongs to K, then:

f(x̄_T) − min_{x∈K} f(x) ≤ √(2D²T) / ∑_{t=1}^T 1/‖g_t‖ ≤ 4βD²/T.

Proof sketch. The data-dependent bound is a direct corollary of Lemma 2.1. The general-case bound follows by using ‖g_t‖ ≤ G. The bound for the smooth case is proven by showing ∑_{t=1}^T ‖g_t‖ ≤ O(√T), which translates into a lower bound ∑_{t=1}^T 1/‖g_t‖ ≥ Ω(T^{3/2}); this concludes the proof.

The data-dependent bound in Theorem 2.1 may be substantially better than the bound of GD/AdaGrad. As an example, assume that half of the gradients encountered during the run of the algorithm have O(1) norms, and the other gradient norms decay proportionally to O(1/t). In this case the guarantee of GD/AdaGrad is O(1/√T), whereas AdaNGD1 guarantees a bound that behaves like O(1/T^{3/2}). Note that the above example presumes that all algorithms encounter the same gradient magnitudes, which might be untrue. Nevertheless, in the smooth case AdaNGD1 provably benefits from its adaptivity.
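For intuition, here is a minimal NumPy sketch of AdaNGD_k under the same placeholder conventions as the AdaGrad sketch above; it shows both ingredients at once, the normalized update and the hindsight weighting of the iterates.

```python
import numpy as np

def adangd(grad, project, x1, D, T, k=1):
    """Sketch of AdaNGD_k (Alg. 2): step along g_t / ||g_t||^k and
    return the hindsight-weighted average of the iterates."""
    x, Q = x1.astype(float), 0.0
    xs, ws = [], []
    for t in range(T):
        g = grad(x)
        n = np.linalg.norm(g)
        if n == 0:                         # already at a minimizer
            return x
        xs.append(x.copy())
        ws.append(1.0 / n ** k)            # hindsight weight 1/||g_t||^k
        Q += 1.0 / n ** (2 * (k - 1))
        eta = D / np.sqrt(2.0 * Q)
        x = project(x - eta * g / n ** k)  # normalized gradient step
    return np.average(np.array(xs), axis=0, weights=np.array(ws))
```

With k = 0 this reduces to AdaGrad, and with k = 1 the step size collapses to D/√(2t), matching the discussion above.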
2.2 AdaNGD2

Here we show that AdaNGD2 enjoys guarantees comparable to AdaNGD1 in the general/smooth cases. As with AdaNGD1, the same algorithm enjoys these rates simultaneously, without any prior knowledge of the smoothness or of the gradient norms. The following theorem establishes the guarantees of AdaNGD2.

Theorem 2.2. Let k = 2, K be a convex set with diameter D, and f be a convex function; also let x̄_T be the output of AdaNGD2 (Alg. 2). Then the following holds:

f(x̄_T) − min_{x∈K} f(x) ≤ √(2D²) / √(∑_{t=1}^T 1/‖g_t‖²) ≤ √2 GD/√T.

Moreover, if f is also β-smooth and the global minimum x* = argmin_{x∈ℝⁿ} f(x) belongs to K, then:

f(x̄_T) − min_{x∈K} f(x) ≤ √(2D²) / √(∑_{t=1}^T 1/‖g_t‖²) ≤ 4βD²/T.

It is interesting to note that AdaNGD2 would always have performed better than AdaGrad, had both algorithms encountered the same gradient norms. This is due to the well-known inequality between arithmetic and harmonic means, [19], (1/T)∑_{t=1}^T a_t ≥ T/∑_{t=1}^T (1/a_t) for all {a_t}_{t=1}^T ⊆ ℝ₊, which directly implies 1/√(∑_{t=1}^T 1/‖g_t‖²) ≤ (1/T)√(∑_{t=1}^T ‖g_t‖²).

3 Adaptive NGD for Strongly Convex Functions

Here we discuss the offline optimization setting for strongly convex objectives. We introduce our SC-AdaNGD_k algorithm and present convergence rates for general k ∈ ℝ. Subsequently, we elaborate on the cases k = 1, 2, which exhibit universality as well as adaptive guarantees that may be substantially better than those of standard methods. Our SC-AdaNGD_k algorithm is depicted in Algorithm 3. Similarly to its non-strongly-convex counterpart, SC-AdaNGD_k can be thought of as an online-to-offline conversion scheme which utilizes an online algorithm that we denote SC-AdaGrad (we elaborate on the latter in the appendix).

Algorithm 3 Strongly-Convex AdaNGD (SC-AdaNGD_k)
Input: #Iterations T, x_1 ∈ ℝ^d, set K, strong-convexity H, parameter k
Set: Q_0 = 0
for t = 1 . . . T − 1 do
  Calculate: g_t = ∇f(x_t), ĝ_t = g_t/‖g_t‖^k
  Update: Q_t = Q_{t−1} + 1/‖g_t‖^k
  Set: η_t = 1/(HQ_t)
  Update: x_{t+1} = Π_K(x_t − η_t ĝ_t)
end for
Return: x̄_T = ∑_{t=1}^T (‖g_t‖^{−k} / ∑_{τ=1}^T ‖g_τ‖^{−k}) x_t

The next lemma states its guarantees.

Lemma 3.1. Let k ∈ ℝ, and let K be a convex set. Let f be an H-strongly-convex function; also let x̄_T be the output of SC-AdaNGD_k (Alg. 3). Then the following holds:

f(x̄_T) − min_{x∈K} f(x) ≤ (1 / (2H ∑_{t=1}^T ‖g_t‖^{−k})) ∑_{t=1}^T [ ‖g_t‖^{−2(k−1)} / ∑_{τ=1}^t ‖g_τ‖^{−k} ].

Proof sketch. In the appendix we present and analyze SC-AdaGrad. This is an online first-order algorithm for strongly-convex functions in which the learning rate decays according to η_t = 1/∑_{τ=1}^t H_τ, where H_τ is the strong-convexity parameter of the loss function at time τ. We then show that SC-AdaNGD_k is equivalent to applying SC-AdaGrad to the loss sequence

f̃_t(x) = g_t^⊤x/‖g_t‖^k + (H/(2‖g_t‖^k))‖x − x_t‖²,   t = 1, . . . , T.

The lemma follows by combining the regret bound of SC-AdaGrad with the definition of x̄_T and with Jensen's inequality.

For k = 0, SC-AdaNGD becomes the standard GD algorithm which uses the learning rate η_t = 1/(Ht). Next we focus on the cases k = 1, 2.

3.1 SC-AdaNGD1

Here we show that SC-AdaNGD1 enjoys a rate of Õ(1/T) for strongly-convex objectives, and a faster rate of Õ(1/T²) assuming the objective is also smooth. We emphasize that the same algorithm enjoys these rates simultaneously, without any prior knowledge of the smoothness or of the gradient norms. The following theorem establishes the guarantees of SC-AdaNGD1.

Theorem 3.1. Let k = 1, and let K be a convex set. Let f be a G-Lipschitz and H-strongly-convex function; also let x̄_T be the output of SC-AdaNGD1 (Alg. 3). Then the following holds:

f(x̄_T) − min_{x∈K} f(x) ≤ G(1 + log(∑_{t=1}^T ‖g_t‖/G)) / (2H ∑_{t=1}^T 1/‖g_t‖) ≤ G²(1 + log T)/(2HT).

Moreover, if f is also β-smooth and the global minimum x* = argmin_{x∈ℝⁿ} f(x) belongs to K, then:

f(x̄_T) − min_{x∈K} f(x) ≤ (β/H)·G²(1 + log T)²/(HT²).

3.2 SC-AdaNGD2

Here we show that SC-AdaNGD2 enjoys the standard Õ(1/T) rate for strongly-convex objectives, and a linear rate assuming the objective is also smooth. We emphasize that the same algorithm enjoys these rates simultaneously, without any prior knowledge of the smoothness or of the gradient norms.
For the case k = 2, the guarantee of SC-AdaNGD is as follows.

Theorem 3.2. Let k = 2, K be a convex set, and f be a G-Lipschitz and H-strongly-convex function; also let x̄_T be the output of SC-AdaNGD2 (Alg. 3). Then the following holds:

f(x̄_T) − min_{x∈K} f(x) ≤ (1 + log(G² ∑_{t=1}^T ‖g_t‖^{−2})) / (2H ∑_{t=1}^T ‖g_t‖^{−2}) ≤ G²(1 + log T)/(2HT).

Moreover, if f is also β-smooth and the global minimum x* = argmin_{x∈ℝⁿ} f(x) belongs to K, then:

f(x̄_T) − min_{x∈K} f(x) ≤ (3G²/(2H)) (1 + (H/β)T) e^{−(H/β)T}.

Intuition: For strongly-convex objectives, the appropriate GD algorithm utilizes two very different learning rates, η_t ∝ 1/t vs. η_t = 1/β, for the general and smooth settings respectively. A possible explanation for the universality of SC-AdaNGD2 is that it implicitly interpolates between these rates. Indeed, the update rule of our algorithm can be written as x_{t+1} = x_t − (1/H) · g_t / (‖g_t‖² ∑_{τ=1}^t ‖g_τ‖^{−2}). Thus, ignoring the hindsight weighting, SC-AdaNGD2 is equivalent to GD with the adaptive learning rate η̃_t := ‖g_t‖^{−2} / (H ∑_{τ=1}^t ‖g_τ‖^{−2}). Now, when all gradient norms are of the same magnitude, η̃_t ∝ 1/t, which boils down to the standard GD rate for strongly-convex objectives. Conversely, assume that the gradients decay exponentially, i.e., ‖g_t‖ ∝ q^t for some q < 1. In this case η̃_t is approximately constant. We believe that the latter regime applies in the strongly-convex and smooth case.

4 Adaptive NGD for Stochastic Optimization

Here we show that, using data-dependent minibatch sizes, we can adapt our (SC-)AdaNGD2 algorithms (Algs. 2, 3 with k = 2) to the stochastic setting and achieve the well-known convergence rates for the convex/strongly-convex settings. We first introduce the stochastic optimization setting, and then present and discuss our Lazy SGD algorithm.

Setup: We consider the problem of minimizing a convex/strongly-convex function f : K → ℝ, where K ⊆ ℝ^d is a convex set. We assume that optimization lasts for T rounds; on each round t = 1, . . . , T, we may query a point x_t ∈ K and receive feedback. After the last round, we choose x̄_T ∈ K, and our performance measure is the expected excess loss, defined as

E[f(x̄_T)] − min_{x∈K} f(x).
Notation: In this section we make a clear distinction between the number of queries to the gradient oracle, denoted henceforth by T ; and between the number of iterations in the algorithm, denoted henceforth by S. We care about the dependence of the excess loss in T . 4.1 Lazy Stochastic Gradient Descent Data Dependent Minibatch sizes: The Lazy SGD (Alg. 4) algorithm that we present in this section, uses a minibatch size that changes in between query points. Given a query point xs , Lazy SGD 2 3 ? invokes the noisy gradient oracle O(1/kg s k ) times, where gs := rf (xs ) . Thus, in contrast to SGD which utilizes a fixed number of oracle calls per query point, our algorithm tends to stall in points with smaller gradients, hence the name Lazy SGD. Here we give some intuition regarding our adaptive minibatch size rule: Consider the stochastic optimization setting. However, imagine that instead of the noisy gradient oracle G, we may access an improved (imaginary) oracle which provides us with unbiased estimates, g?(x), that are accurate up to some multiplicative factor, e.g., E[? g (x)|x] = rf (x), and 12 krf (x)k ? k? g (x)k ? 2krf (x)k . Then intuitively we could have used these estimates instead of the exact normalized gradients inside our (SC-)AdaNGD2 algorithms (Algs. 2, 3 with k = 2), and still get similar (in expectation) data 3 Note that the gradient norm, kgs k, is unknown to the algorithm. Nevertheless it is estimated on the fly. 7 dependent bounds. Quite nicely, we may use our original noisy oracle G to generate estimates 2 ? from this imaginary oracle. This can be done by invoking G for O(1/kg s k ) times at each query point. Using this minibatch rule, the total number of calls to G (along all iterations) is equal to PS T = s=1 1/kgs k2 . Plugging this pinto the data dependent bounds of (SC-)AdaNGD2 (Thms. 2.2, ? ? 3.2), we get the well known O(1/ T )/O(1/T ) rates for the stochastic convex settings. The imaginary oracle: The construction of the imaginary oracle from the original oracle appears in Algorithm 5 (AE procedure) . It receives as an input, G, a generator of independent random vectors with an (unknown) expected value g 2 Rd . The algorithm outputs two variables: N which is an estimate of 1/kgk2 , and g?N an average of N random vectors from G. Thus, it is natural to think of N g?N as an estimate for g/kgk2 . Moreover, it can be shown that E[N (? gN g)] = 0. Thus in a sense we receive an unbiased estimate. The guarantees of Algorithm 5 appear below, Lemma 4.1 (Informal). Let Tmax 1, 2 (0, 1). Suppose an oracle G : K 7! Rd that generates G-bounded i.i.d. random vectors with an (unknown) expected value g 2 Rd . Then w.p. 1 , invoking AE (Algorithm 5), with m0 = ?(G log(1/ )), it is ensured that: N = ?(min{m0 /kgk2 , Tmax }), and E[N (? gN g)] = 0 . Lazy SGD: Now, plugging the output of the AE algorithm into our offline algorithms (SC-)AdaNGD2 , we get their stochastic variants which appears in Algorithm 4 (Lazy SGD). This algorithm is equivalent to the offline version of (SC-)AdaNGD2 , with the difference that we use ns instead of 1/krf (xs )k2 and ns g?s instead of rf (xs )/krf (xs )k2 . Let T be a bound on the total number of queries to the the first order oracle G, and be the confidence parameter used to set m0 in the AE procedure. Next we present the guarantees of LazySGD, Lemma 4.2. Let = O(T 3/2 ); let K be a convex set with diameterp D, and f be a convex function; and assume kG(x)k ? G w.p.1. Then using LazySGD with ?0 = D/ 2G, p = 1/2, ensures: ? ? GD log(T ) p E[f (? 
Lazy SGD: Now, plugging the output of the AE algorithm into our offline algorithms (SC-)AdaNGD2, we get their stochastic variants, which appear in Algorithm 4 (Lazy SGD). This algorithm is equivalent to the offline version of (SC-)AdaNGD2, with the difference that we use n_s instead of 1/‖∇f(x_s)‖² and n_s ĝ_s instead of ∇f(x_s)/‖∇f(x_s)‖². Let T be a bound on the total number of queries to the first-order oracle G, and let δ be the confidence parameter used to set m_0 in the AE procedure. Next we present the guarantees of LazySGD.

Lemma 4.2. Let δ = O(T^{−3/2}); let K be a convex set with diameter D, and f be a convex function; and assume ‖G(x)‖ ≤ G w.p. 1. Then using LazySGD with η_0 = D/√(2G), p = 1/2 ensures:

E[f(x̄_T)] − min_{x∈K} f(x) ≤ O(GD log(T)/√T).

Lemma 4.3. Let δ = O(T^{−2}); let K be a convex set, and f be an H-strongly-convex function; and assume ‖G(x)‖ ≤ G w.p. 1. Then using LazySGD with η_0 = 1/H, p = 1 ensures:

E[f(x̄_T)] − min_{x∈K} f(x) ≤ O(G² log²(T)/(HT)).

Note that LazySGD uses minibatch sizes that are adapted to the magnitude of the gradients, and still maintains the optimal O(1/√T)/O(1/T) rates. In contrast, using a fixed minibatch size b for SGD might degrade the convergence rates, yielding O(√b/√T)/O(b/T) guarantees. This property of LazySGD may be beneficial when considering distributed computations (see [13]).

5 Discussion

We have presented a new approach based on a conversion scheme, which exhibits universality and new adaptive bounds in the offline convex optimization setting, and provides a principled approach towards minibatch-size selection in the stochastic setting. Among the many questions that remain open is whether we can devise "accelerated" universal methods. Furthermore, our universality results only apply when the global minimum lies inside the constraints; thus, it is natural to seek methods that ensure universality when this assumption is violated. Moreover, our algorithms depend on a parameter k ∈ ℝ, but only the cases k ∈ {0, 1, 2} are well understood. Investigating a wider spectrum of k values is intriguing. Lastly, it would be interesting to modify and test our methods in non-convex scenarios, especially in the context of deep-learning applications.

Acknowledgments

I would like to thank Elad Hazan and Shai Shalev-Shwartz for fruitful discussions during the early stages of this work. This work was supported by the ETH Zürich Postdoctoral Fellowship and Marie Curie Actions for People COFUND program.

References

[1] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.
[2] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[3] Nicolo Cesa-Bianchi, Alex Conconi, and Claudio Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050-2057, 2004.
[4] Stephen Wright and Jorge Nocedal. Numerical optimization. Springer Science, 35:67-68, 1999.
[5] Yu Nesterov. Universal gradient methods for convex optimization problems. Mathematical Programming, 152(1-2):381-404, 2015.
[6] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. COLT 2010, page 244, 2010.
[7] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
[8] Matthew D. Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
[9] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). In Doklady AN SSSR, volume 269, pages 543-547, 1983.
[10] Yu. E. Nesterov. Minimization methods for nonsmooth convex and quasiconvex functions. Matekon, 29:519-531, 1984.
[11] Elad Hazan, Kfir Levy, and Shai Shalev-Shwartz. Beyond convexity: Stochastic quasi-convex optimization. In Advances in Neural Information Processing Systems, pages 1594-1602, 2015.
[12] Kfir Y. Levy. The power of normalization: Faster evasion of saddle points. arXiv preprint arXiv:1611.04831, 2016.
[13] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13(Jan):165-202, 2012.
[14] Andrew Cotter, Ohad Shamir, Nati Srebro, and Karthik Sridharan. Better mini-batch algorithms via accelerated gradient methods. In Advances in Neural Information Processing Systems, pages 1647-1655, 2011.
[15] Shai Shalev-Shwartz and Tong Zhang. Accelerated mini-batch stochastic dual coordinate ascent. In Advances in Neural Information Processing Systems, pages 378-385, 2013.
[16] Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J. Smola. Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 661-670. ACM, 2014.
[17] Martin Takáč, Peter Richtárik, and Nathan Srebro. Distributed mini-batch SDCA. arXiv preprint arXiv:1507.08322, 2015.
[18] Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, and Aaron Sidford. Parallelizing stochastic approximation through mini-batching and tail-averaging. arXiv preprint arXiv:1610.03774, 2016.
[19] Peter S. Bullen, Dragoslav S. Mitrinovic, and M. Vasic. Means and their Inequalities, volume 31. Springer Science & Business Media, 2013.
[20] Arkadii Nemirovskii, David Borisovich Yudin, and E.R. Dawson. Problem complexity and method efficiency in optimization. 1983.
[21] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169-192, 2007.
[22] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, pages 3384-3392, 2015.
[23] Elad Hazan and Tomer Koren. Linear regression with limited observation. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 807-814, 2012.
[24] Kenneth L. Clarkson, Elad Hazan, and David P. Woodruff. Sublinear optimization for machine learning. Journal of the ACM (JACM), 59(5):23, 2012.
[25] Sham Kakade. Lecture notes in multivariate analysis, dimensionality reduction, and spectral methods. http://stat.wharton.upenn.edu/~skakade/courses/stat991_mult/lectures/MatrixConcen.pdf, April 2010.
[26] Anatoli B. Juditsky and Arkadi S. Nemirovski. Large deviations of vector-valued martingales in 2-smooth normed spaces. arXiv preprint arXiv:0809.0813, 2008.
[27] David Asher Levin, Yuval Peres, and Elizabeth Lee Wilmer. Markov chains and mixing times. American Mathematical Soc., 2009.
6,367
676
Destabilization and Route to Chaos in Neural Networks with Random Connectivity Bernard Doyon Unite INSERM 230 Service de Neurologie CHUPurpan F-31059 Toulouse Cedex, France Bruno Cessac Centre d'Etudes et de Recherches de Toulouse 2, avenue Edouard Belin, BP 4025 F-31055 Toulouse Cedex, France Mathias Quoy Centre d'Etudes et de Recherches de Toulouse 2, avenue Edouard Belin, BP 4025 F-31055 Toulouse Cedex, France Manuel Samuelides Ecole Nationale Superieure de I'Aeronautique et de l'Espace 10, avenue Edouard Belin, BP 4032 F-31055 Toulouse Cedex, France Abstract The occurence of chaos in recurrent neural networks is supposed to depend on the architecture and on the synaptic coupling strength. It is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent on this variance and on the slope of the transfer function but independent of the connectivity, that allows a sustained activity and the occurence of chaos when reaching a critical value. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected infinite sized networks. Moreover the route towards chaos is numerically checked to be a quasi-periodic one, whatever the type of the first bifurcation is (Hopf bifurcation, pitchfork or flip). 549 550 Doyon, Cessac, Quoy, and Samuelides 1 INTRODUCTION Most part of studies on recurrent neural networks assume sufficient conditions of convergence. Models with symmetric synaptic connections have dynamical properties strongly connected with those of spin-glasses. In particular, they have relaxationnal dynamics caracterised by the decreasing of a function which is analogous to the energy in spin-glasses (or free energy for models submitted to thermal noise). Networks with asymmetric synaptic connections lose this convergence property and can have more complex dynamics. but searchers try to obtain such a convergence because the relaxation to a stable network state is simply interpreted as a stored pattern. However, as pointed out by Hirsch (1989), it might be very interesting, from an engineering point of view. to investigate non convergent networks because their dynamical possibilities are much richer for a given number of units. Moreover, the real brain is a highly dynamic system. Recent neurophysiological findings have focused attention on the rich temporal structures (oscillations) of neuronal processes (Gray et al., 1989), which might play an important role in information processing. Chaotic behavior has been found out in the nervous system (Gallez & Babloyantz, 1991) and might be implicated in cognitive processes (Skarda & Freeman. 1987). We have studied the emergent dynamics of a general class of non convergent networks. Some results are already available in this field. Sompolinsky et al. (1988) established strong theoretical results concerning the occurrence of chaos forfully connected networks in the thermodynamic limit (N - 00) by using the Dynamic Mean Field Theory. Their model is a continuous time. continuous state dynamical system with N fully connected neurons. Each connection Jij is a gaussian random variable with zero mean and a normalized variance fllN. As the Jij'S are independent. the constant term fl can be seen as the variance of the sum of the weights connected to a given unit. Thus. the global strength of coupling remains constant for each neuron as N increases. 
The output function of each neuron is sigmoidal with a slope g. Sompolinsky et al. established that, in the limit N - 00, there is a sharp transition from a stationary state to a chaotic flow. The onset of chaos is given by the critical value gJ=l. For gJ<1 the system admits the only fixed point zero, while for gJ >1 it is chaotic. The same authors performed simulations on finite and large values of N and showed the existence of an intermediate regime (nonzero stationary states or limit cycles) separating the stationary and the chaotic phase. but the routes to chaos were not systematically explored. The range of gJ where this intermediate behavior is observed shrinks as N increases. 2 THE MODEL The hypothesis of a fully connected network being not biologically plausible. it could be interesting to inspect how far these results could be extended as the dilution increases for a general class of networks. The model we study is defined as follows: the number of units is N. and K is the fixed number of connections received by one unit (K>I). There is no connection from one unit to itself. The K connections are randomly selected (with an uniform law) among the N-l' s. The state of each neuron i at time t is characterized by its Destabilization and Route to Chaos in Neural Networks with Random Connectivity output xi (t) which is a real variable varying between -1 and 1. The discrete and parallel dynamics is given by: Jij is the synaptic weight which couples the output of unit j to the input of unit i. These weights are random independent variables chosen with a uniform law, with zero menn and a normalized variance J 2 1K. Notice that, with such a normalization, the standard deviation of the sum of the weights afferent to a given neuron is the constant J. One has to distinguish two effects of coupling on the behavior of such a class of models. The first effect is due to the strength of coupling, independent of the number of connections. The second one is due to the architecture of coupling, which can be studied by keeping constant the global synaptic effect of coupling. The genericity of our model cancels the peculiar dynamic features which may occur due to geometrical effects. Moreover it allows to study a model at different scales of dilution. 3 FIRST BIFURCATION For such a system, zero is always a fixed point and for low bifurcation parameter value it is the only fixed point and it is stable. Let us call Amax the eigenvalue of the matrix of synaptic weights with the greatest modulus and p =IAmaxl the spectral radius of this matrix. The loss of stability arises when the product gp is larger than 1. Our numerical simulations allow us to state that p is approximately equal to J for sufficiently largesized networks. This statement can be derived rigorously for an approximate regularized model in the thermodynamic limit (Doyon et aI., 1993). Table 1: Mean Value of the Bifurcation Parameter gJ over 30 Networks. Destabilization of the zero fixed point 1Onset of Chaos Connectivity K 4 8 16 32 128 .954 / 1.337 .950 / 1.449 .951/1.434 .961 / 1.360 Number of neurons 256 .9651 1.298 .966 1 1.301 .9651 1.315 .958 1 1.333 512 .9701 1.258 .9781 1.233 .969/ 1.239 .972 I 1.246 We have studied by intensive simulations on a Cray I-XMP computer the statistical spectral distribution for N ranging from 4 to 512 and for K ranging from 2 to 32. Figure 1 shows two examples of spectra (for convenience, J is set to 1). 
The apparent drawing of a real axis is due to the real eigenvalue density but the distribution converges to a uniform one over the J radius disk, as N increases. A similar result has been theoretically 551 552 Doyon, Cessac, Quay, and Samuelides achieved for full gaussian matrices (Girko. 1985; Sommers et al.. 1988). Thus pquicldy decreases to J, so the loss of stability arises for a mean gJ value that increases to 1 for increasing size (Tab. 1). For a given N value, p is nearly independent of K . .....:. .. . . '.-'. ~ ",- Figure 1: Plot of the Unit Disk and of the Eigenvalues in the Complex Plane. Left: 100 Spectra for N=64. K=4. Right: 10 Spectra for N=512. K=4. Three types of first bifurcation can occur, depending on the eigenvalue Ama.'t : a) Hopf Bifurcation: this corresponds to the appearance of oscillations. There are two complex conjugate eigenvalues with maximal modulus p. b) Pitchfork bifurcation: if Amax is real positive, the bifurcation arises when gAmax = 1. Zero loses its stability and two branches of stable equilibria emerge. c) Flip Bifurcation: for Amaxreal and negative a flip bifurcation occurs when g Amax = - 1. This corresponds to the appearance of a period two oscillation. As the network size increases, the proportion of Hopf bifurcations increases because the proportion of real Amax decreases, nearly independent of K . 4 ROUTE TO CHAOS To study the following bifurcations, we chose the global observable: The value m(t) correctly characterizes all types of first bifurcation that can occur. Indeed the route to chaos is qualitatively well described by this observable, as we checked it by Destabilization and Route to Chaos in Neural Networks with Random Connectivity studying simultaneously xi (I). The onset- of chaos was computed by testing the sensitivity on initial conditions for m(1) . We observed the onset of chaos occurs for quite low parameter values. The transient zone from fixed point to chaos shrinks slowly to zero as the network size increases (fab. 1). The qualitative study of the routes to chaos was made on a span of networks with various connectivity and quite important size. The route towards chaos that was observed was a quasi-periodic one in all cases with some variations due to the particular symmetry x- x. '~"he following figures are obtained by plotting m(l+l) versus m(1) after discarding the transient (Fig. 2). They are not qualitatively different with a reconstruction in a higher dimensional space. The dominant features are the following ones. ! "'+J~ 0.1- a) 0.0 b) 0.0 -4.1, ~-4~.I----------------~o:o----------------~o.i----~-')- ~. II ~,--------~--------~--0.1 inti) -4.1 0.0 r ",: ~_ .. d) 0.0 "r''<~ -4.1 . -.~~,;b :t" 0.0 0.1 "'(II Figure 2: Example of route to chaos when the fust bifurcation is a Hopf one. (N=128, K=16) . a) After the first bifurcation, the zero fixed point has lost its stability. The series of points (m(I), m(t+l) densely covers a cycle (gJ=l.O). b) After the second Hopf bifurcation: projection of a T2 torus (gJ=l.23). c) Frequency locking on the T2 torus (gJ=1.247). d) Chaos (gJ=1.26). 553 554 Doyon, Cessac, Quoy, and Samuelides When the first bifurcation is a Hopf one (Fig. 2a), it is followed by a second Hopf bifurcation (Fig. 2b). Then there is a frequency locking occuring on the T2 torus born from the second Hopf bifurcation (Fig. 2c), followed by chaos (Fig. 2d). This route is then a quasi-periodic one (Ruelle & Takens, 1971 ; Newhouse et al., 1978). 
A slightly different feature emerges when the first bifurcation is followed by a stable resonance due to discrete time occuring before the second Hopf bifurcation. Then the limit cycle reduces to periodic points. When the second bifurcation occurs, the resonance persists until chaos is reached. When the first bifurcation is a pitchfork, it is followed by a Hopf bifurcation for each stable point of the pitchfork (due to the symmetry x - -x). Then a second Hopf bifurcation occurs followed, via a frequency locking, by chaos. It follows then, despite the pitchfork bifurcation, a quasi-periodicity route. Notice that in this case, we get two symmetric strange attractors. When gJ increases, the two attractors fuse. For a first bifurcation of flip type, the route followed is like the one described by Bauer & Martienssen (1989). The flip bifurcation leads to an oscillatory system with two states. A first Hopf bifurcation arises followed by a second one leading to a quasi-periodic state, followed by a frequency locking preceeding chaos. 5 CONCLUSION We have presented a type of neural network exhibiting a chaotic behavior when increasing a bifurcation parameter. As in Sompolinsky's model, gJ is the control parameter of the network dynamics. The variance of the synaptic weights being normalized, the bifurcation values are nearly independent of the connectivity K. The magnitude of dilution is not important for the behavior. The route to chaos by quasiperiodicity seems to be generic. It suggests that such high-dimensional networks behave like low-dimensional dynamical systems. It could be much simpler to control such networks than a priori expected. From a biological point of view, we built our model to provide a tool that could be used to investigate the influence of chaotic dynamics in the cognitive processes in the brain. We clearly chose to simplify the biological complexity in order to understand a complex dynamic. We think that, if chaos plays a role in cognitive processes, it does neither depend on a specific architecture, nor on the exact internal modelling of the biological neuron. However, it could be interesting to introduce some biological caracteristics in the model. The next step will be to study the influence of non-zero entries on the behavior of the system, leading to the modelling of learning in a chaotic network. Acknowledgements This research has been partly supported by the COGNISCIENCE research program of the C.N.R.S. through PRESCOT, the Toulouse network of searchers in Cognitive Sciences. Destabilization and Route to Chaos in Neural Networks with Random Connectivity References M. Bauer & W. Martienssen. (1989) Quasi-Periodicity Route to Chaos in Neural Networks. Europhys. Lett. 10: 427-431. B. Doyon, B. Cessac, M. Quoy & M. Samuelides. (1993) Control of the Transition to Chaos in Neural Networks with Random Connectivity. Int. 1. Bifurcation and Chaos (in press). D. Gallez & A. Babloyantz. (1991) Predictability of human EEG: a dynamical approach. BiGI. Cybern. 64: 381-392. V.l.. Girko. (1985) Circular Law. Theory Prob. Its Appl. (USSR) 29: 694-706. C.M. Gray, P. Koenig, A.K. Engel & W. Singer. (1989) Oscillatory responses in cat visual cortex exhibit intercolumnar synchronisation which reflects global stimulus properties. Nature 338: 334-337. M. W. Hirsch. (1989) Convergent Activation Dynamics in Continuous Time Networks. Neural Networks 2: 331-349. S. Newhouse, D. Ruelle & F. Takens. (1978) Occurrence of Strange Axiom A Attractors Near Quasi Periodic Flows on rm, m ~ 3. Commun. 
math. Phys. 64: 35-40. D. Ruelle & F. Takens. (1971) On the nature of turbulence. Comm. math. Phys. 20: 167-192. C.A. Skarda & W.J. Freeman. (1987) How brains makes chaos in order to make sense of the world. Behav. Brain Sci. 10: 161-195. H.J. Sommers, A. Crisanti, H. Sompolinsky & Y. Stein. (1988) Spectrum of large random asymmetric matrices. Phys. Rev. Lett. 60: 1895-1898. H. Sompolinsky, A. Crisanti & H.J. Sommers. (1988) Chaos in random neural networks. Phys. Rev. Lett. 61: 259-262. 555
676 |@word proportion:2 seems:1 disk:2 simulation:3 initial:1 born:1 series:1 ecole:1 manuel:1 activation:1 numerical:2 plot:1 stationary:3 selected:1 nervous:1 plane:1 math:2 sigmoidal:1 simpler:1 hopf:12 qualitative:1 cray:1 sustained:1 introduce:1 theoretically:1 expected:1 indeed:1 behavior:6 nor:1 brain:4 freeman:2 decreasing:1 increasing:2 moreover:3 interpreted:1 finding:1 temporal:1 synchronisation:1 rm:1 whatever:1 unit:8 control:3 positive:1 service:1 engineering:1 accordance:1 before:1 persists:1 limit:5 despite:1 approximately:1 might:3 chose:2 studied:4 edouard:3 suggests:1 appl:1 range:1 testing:1 lost:1 chaotic:7 axiom:1 projection:1 get:1 convenience:1 turbulence:1 influence:2 cybern:1 attention:1 focused:1 preceeding:1 amax:4 stability:4 variation:1 analogous:1 play:2 exact:1 hypothesis:1 asymmetric:2 observed:3 role:2 cycle:3 connected:6 sompolinsky:5 decrease:2 comm:1 locking:4 complexity:1 rigorously:1 dynamic:11 depend:2 emergent:1 various:1 cat:1 europhys:1 apparent:1 richer:1 larger:1 plausible:1 quite:2 drawing:1 toulouse:7 skarda:2 gp:1 itself:1 think:1 eigenvalue:5 reconstruction:1 jij:3 doyon:6 product:1 maximal:1 ama:1 supposed:1 convergence:3 produce:1 converges:1 diluted:1 coupling:6 recurrent:2 depending:1 received:1 strong:1 exhibiting:1 radius:2 human:1 transient:2 biological:4 sufficiently:1 equilibrium:1 lose:1 engel:1 tool:1 reflects:1 clearly:1 gaussian:2 always:1 reaching:1 varying:1 derived:1 modelling:2 sense:1 glass:2 dependent:1 fust:1 quasi:7 france:4 quoy:4 among:1 priori:1 ussr:1 takens:3 resonance:2 bifurcation:34 field:2 equal:1 cancel:1 nearly:3 espace:1 t2:3 stimulus:1 simplify:1 dilution:3 randomly:2 simultaneously:1 densely:1 phase:1 attractor:3 investigate:2 possibility:1 highly:1 circular:1 peculiar:1 unite:1 theoretical:2 cover:1 deviation:1 entry:1 uniform:3 crisanti:2 stored:1 periodic:6 density:1 sensitivity:1 pitchfork:5 xmp:1 connectivity:10 slowly:1 cognitive:4 leading:2 de:7 int:1 samuelides:5 afferent:1 onset:4 performed:1 try:1 view:2 tab:1 characterizes:1 reached:1 parallel:1 slope:2 spin:2 variance:6 weak:1 cessac:5 submitted:1 oscillatory:2 phys:4 synaptic:8 checked:2 energy:2 frequency:4 couple:1 emerges:1 higher:1 response:1 shrink:2 strongly:1 until:1 koenig:1 gray:2 modulus:2 effect:4 normalized:3 symmetric:2 recherches:2 nonzero:1 intercolumnar:1 occuring:2 geometrical:1 ranging:2 chaos:31 he:1 numerically:1 destabilization:5 ai:1 pointed:1 centre:2 bruno:1 sommers:3 stable:5 cortex:1 gj:12 dominant:1 recent:1 showed:1 commun:1 route:16 seen:1 period:1 ii:2 branch:1 thermodynamic:2 full:1 reduces:1 characterized:1 concerning:1 searcher:2 normalization:1 achieved:1 cedex:4 flow:2 call:1 near:1 intermediate:2 architecture:4 avenue:3 intensive:1 behav:1 stein:1 notice:2 correctly:1 discrete:2 neither:1 fuse:1 relaxation:1 sum:2 prob:1 strange:2 ruelle:3 oscillation:3 fl:1 followed:8 distinguish:1 convergent:3 activity:1 strength:3 occur:3 bp:3 span:1 conjugate:1 slightly:1 rev:2 biologically:1 inti:1 previously:1 remains:1 singer:1 flip:5 studying:1 available:1 quay:1 spectral:2 generic:1 occurrence:2 existence:1 already:1 occurs:4 exhibit:1 separating:1 sci:1 statement:1 negative:1 inspect:1 etude:2 neuron:7 finite:1 behave:1 thermal:1 extended:1 sharp:1 connection:7 fab:1 established:3 dynamical:5 pattern:1 regime:1 program:1 built:1 greatest:1 critical:2 regularized:1 axis:1 occurence:2 acknowledgement:1 law:3 fully:3 loss:2 interesting:3 versus:1 sufficient:1 plotting:1 systematically:1 periodicity:2 supported:1 free:1 
keeping:1 implicated:1 allow:1 understand:1 emerge:1 bauer:2 lett:3 transition:2 world:1 rich:1 author:1 qualitatively:2 made:1 far:1 approximate:1 observable:2 inserm:1 hirsch:2 global:4 xi:2 spectrum:4 continuous:3 table:1 nature:2 transfer:1 symmetry:2 eeg:1 complex:4 noise:1 neuronal:1 fig:5 predictability:1 torus:3 discarding:1 specific:1 explored:1 admits:1 normalizing:1 magnitude:1 genericity:1 nationale:1 simply:1 appearance:2 neurophysiological:1 visual:1 corresponds:2 loses:1 sized:1 towards:2 infinite:1 bernard:1 mathias:1 partly:1 zone:1 internal:1 arises:4 superieure:1
6,368
6,760
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure Alberto Bietti Inria? [email protected] Julien Mairal Inria? [email protected] Abstract Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. Unfortunately, these techniques are unable to deal with stochastic perturbations of input data, induced for example by data augmentation. In such cases, the objective is no longer a finite sum, and the main candidate for optimization is the stochastic gradient descent method (SGD). In this paper, we introduce a variance reduction approach for these settings when the objective is composite and strongly convex. The convergence rate outperforms SGD with a typically much smaller constant factor, which depends on the variance of gradient estimates only due to perturbations on a single example. 1 Introduction Many supervised machine learning problems can be cast as the minimization of an expected loss over a data distribution with respect to a vector x in Rp of model parameters. When an infinite amount of data is available, stochastic optimization methods such as SGD or stochastic mirror descent algorithms, or their variants, are typically used (see [5, 11, 24, 34]). Nevertheless, when the dataset is finite, incremental methods based on variance reduction techniques (e.g., [2, 8, 15, 17, 18, 27, 29]) have proven to be significantly faster at solving the finite-sum problem n n o 1X minp F (x) := f (x) + h(x) = fi (x) + h(x) , x?R n i=1 (1) where the functions fi are smooth and convex, and h is a simple convex penalty that need not be differentiable such as the ?1 norm. A classical setting is fi (x) = ?(yi , x? ?i ) + (?/2)kxk2 , where (?i , yi ) is an example-label pair, ? is a convex loss function, and ? is a regularization parameter. In this paper, we are interested in a variant of (1) where random perturbations of data are introduced, which is a common scenario in machine learning. Then, the functions fi involve an expectation over a random perturbation ?, leading to the problem n n o 1X minp F (x) := fi (x) + h(x) . x?R n i=1 with fi (x) = E? [f?i (x, ?)]. (2) Unfortunately, variance reduction methods are not compatible with the setting (2), since evaluating a single gradient ?fi (x) requires computing a full expectation. Yet, dealing with random perturbations is of utmost interest; for instance, this is a key to achieve stable feature selection [23], improving the generalization error both in theory [33] and in practice [19, 32], obtaining stable and robust predictors [36], or using complex a priori knowledge about data to generate virtually ? Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Table 1: Iteration complexity of different methods for solving the objective (2) in terms of number of iterations required to find x such that E[f (x) ? f (x? )] ? ?. The complexity of N-SAGA [14] matches the first term of S-MISO but is asymptotically biased. Note that we always have the perturbation 2 noise variance ?p2 smaller than the total variance ?tot and thus S-MISO improves on SGD both in the first term (linear convergence to a smaller ??) and in the second (smaller constant in the asymptotic 2 rate). In many application cases, we also have ?p2 ? ?tot (see main text and Table 2). Method SGD N-SAGA [14] S-MISO Asymptotic error 0 ?0 = O 0 ?p2 ? ! 
Iteration complexity   2  2 ?tot L 1 ?tot with ?? = O O log + ? ?? ?? ?    1 L O n+ log with ? > ?0 ? ? ! !   ?p2 ?p2 L 1 n+ O log + with ?? = O ? ?? ?? ?  larger datasets [19, 26, 30]. Injecting noise in data is also useful to hide gradient information for privacy-aware learning [10]. Despite its importance, the optimization problem (2) has been littled studied and to the best of our knowledge, no dedicated optimization method that is able to exploit the problem structure has been developed so far. A natural way to optimize this objective when h = 0 is indeed SGD, but ignoring the finite-sum structure leads to gradient estimates with high variance and slow convergence. The goal of this paper is to introduce an algorithm for strongly convex objectives, called stochastic MISO, which exploits the underlying finite sum using variance reduction. Our method achieves a faster convergence rate than SGD, by removing the dependence on the gradient variance due to sampling the data points i in {1, . . . , n}; the dependence remains only for the variance due to random perturbations ?. To the best of our knowledge, our method is the first algorithm that interpolates naturally between incremental methods for finite sums (when there are no perturbations) and the stochastic approximation setting (when n = 1), while being able to efficiently tackle the hybrid case. Related work. Many optimization methods dedicated to the finite-sum problem (e.g., [15, 29]) have been motivated by the fact that their updates can be interpreted as SGD steps with unbiased estimates of the full gradient, but with a variance that decreases as the algorithm approaches the optimum [15]; on the other hand, vanilla SGD requires decreasing step-sizes to achieve this reduction of variance, thereby slowing down convergence. Our work aims at extending these techniques to the case where each function in the finite sum can only be accessed via a first-order stochastic oracle. Most related to our work, recent methods that use data clustering to accelerate variance reduction techniques [3, 14] can be seen as tackling a special case of (2), where the expectations in fi are replaced by empirical averages over points in a cluster. While N-SAGA [14] was originally not designed for the stochastic context we consider, we remark that their method can be applied to (2). Their algorithm is however asymptotically biased and does not converge to the optimum. On the other hand, ClusterSVRG [3] is not biased, but does not support infinite datasets. The method proposed in [1] uses variance reduction in a setting where gradients are computed approximately, but the algorithm computes a full gradient at every pass, which is not available in our stochastic setting. Paper organization. In Section 2, we present our algorithm for smooth objectives, and we analyze its convergence in Section 3. For space limitation reasons, we present an extension to composite objectives and non-uniform sampling in Appendix A. Section 4 is devoted to empirical results. 2 The Stochastic MISO Algorithm for Smooth Objectives In this section, we introduce the stochastic MISO approach for smooth objectives (h = 0), which relies on the following assumptions: ? (A1) global strong convexity: f is ?-strongly convex; ? (A2) smoothness: f?i (?, ?) is L-smooth for all i and ? (i.e., with L-Lipschitz gradients). 2 2 Table 2: Estimated ratio ?tot /?p2 , which corresponds to the expected acceleration of S-MISO over SGD. 
These numbers are based on feature vectors variance, which is closely related to the gradient variance when learning a linear model. ResNet-50 denotes a 50 layer network [12] pre-trained on the ImageNet dataset. For image transformations, the numbers are empirically evaluated from 100 2 2 different images, with 100 random perturbations for each image. Rtot (respectively, Rcluster ) denotes the average squared distance between pairs of points in the dataset (respectively, in a given cluster), following [14]. The settings for unsupervised CKN and Scattering are described in Section 4. More details are given in the main text. Type of perturbation Direct perturbation of linear model features Random image transformations Application case Data clustering as in [3, 14] Additive Gaussian noise N (0, ?2 I) Dropout with probability ? Feature rescaling by s in U (1 ? w, 1 + w) ResNet-50 [12], color perturbation ResNet-50 [12], rescaling + crop Unsupervised CKN [22], rescaling + crop Scattering [6], gamma correction 2 Estimated ratio ?tot /?p2 2 2 ? Rtot /Rcluster ? 1 + 1/?2 ? 1 + 1/? ? 1 + 3/w2 21.9 13.6 9.6 9.8 Note that these assumptions are relaxed in Appendix A by supporting composite objectives and by exploiting different smoothness parameters Li on each example, a setting where non-uniform sampling of the training points is typically helpful to accelerate convergence (e.g., [35]). Complexity results. We now introduce the following quantity, which is essential in our analysis: n i h 1X 2 ?p2 := ?i , with ?i2 := E? k?f?i (x? , ?) ? ?fi (x? )k2 , n i=1 where x? is the (unique) minimizer of f . The quantity ?p2 represents the part of the variance of the gradients at the optimum that is due to the perturbations ?. In contrast, another quantity of interest is 2 the total variance ?tot , which also includes the randomness in the choice of the index i, defined as 2 ?tot = Ei,? [k?f?i (x? , ?)k2 ] = ?p2 + Ei [k?fi (x? )k2 ] (note that ?f (x? ) = 0). 2 The relation between ?tot and ?p2 is obtained by simple algebraic manipulations. 2 The goal of our paper is to exploit the potential imbalance ?p2 ? ?tot , occurring when perturbations on input data are small compared to the sampling noise. The assumption is reasonable: given a data point, selecting a different one should lead to larger variation than a simple perturbation. From a theoretical point of view, the approach we propose achieves the iteration complexity presented in Table 1, see also Appendix D and [4, 5, 24] for the complexity analysis of SGD. The gain over SGD 2 is of order ?tot /?p2 , which is also observed in our experiments in Section 4. We also compare against the method N-SAGA; its convergence rate is similar to ours but suffers from a non-zero asymptotic error. Motivation from application cases. One clear framework of application is the data clustering scenario already investigated in [3, 14]. Nevertheless, we will focus on less-studied data augmentation settings that lead instead to true stochastic formulations such as (2). First, we consider learning a linear model when adding simple direct manipulations of feature vectors, via rescaling (multiplying each entry vector by a random scalar), Dropout, or additive Gaussian noise, in order to improve the generalization error [33] or to get more stable estimators [23]. In Table 2, we present the potential gain over SGD in these scenarios. To do that, we study the variance of perturbations applied to a feature vector ?. 
Indeed, the gradient of the loss is proportional to ?, which allows us to obtain 2 good estimates of the ratio ?tot /?p2 , as we observed in our empirical study of Dropout presented in Section 4. Whereas some perturbations are friendly for our method such as feature rescaling (a rescaling window of [0.9, 1.1] yields for instance a huge gain factor of 300), a large Dropout rate would lead to less impressive acceleration (e.g., a Dropout with ? = 0.5 simply yields a factor 2). Second, we also consider more interesting domain-driven data perturbations such as classical image transformations considered in computer vision [26, 36] including image cropping, rescaling, brightness, contrast, hue, and saturation changes. These transformations may be used to train a linear 3 Algorithm 1 S-MISO for smooth objectives Input: step-size sequence (?t )t?1 ; P initialize x0 = n1 i zi0 for some (zi0 )i=1,...,n ; for t = 1, . . . do Sample an index it uniformly at random, a perturbation ?t , and update ( (1 ? ?t )zit?1 + ?t (xt?1 ? ?1 ?f?it (xt?1 , ?t )), if i = it t zi = zit?1 , otherwise. (3) n xt = 1X t 1 ). zi = xt?1 + (zitt ? zit?1 t n i=1 n (4) end for classifier on top of an unsupervised multilayer image model such as unsupervised CKNs [22] or the scattering transform [6]. It may also be used for retraining the last layer of a pre-trained deep neural network: given a new task unseen during the full network training and given limited amount of training data, data augmentation may be indeed crucial to obtain good prediction and S-MISO can help accelerate learning in this setting. These scenarios are also studied in Table 2, where the experiment with ResNet-50 involving random cropping and rescaling produces 224 ? 224 images from 256 ? 256 ones. For these scenarios with realistic perturbations, the potential gain varies from 10 to 20. Description of stochastic MISO. We are now in shape to present our method, described in Algorithm 1. Without perturbations and with a constant step-size, the algorithm resembles the MISO/Finito algorithms [9, 18, 21], which may be seen as primal variants of SDCA [28, 29]. Specifically, MISO is not able to deal with our stochastic objective (2), but it may address the deterministic finite-sum problem (1). It is part of a larger body of optimization methods that iteratively build a model of the objective function, typically a lower or upper bound on the objective that is easier to optimize; for instance, this strategy is commonly adopted in bundle methods [13, 25]. More precisely, PnMISO assumes that each fi is strongly convex and builds a model using lower bounds Dt (x) = n1 i=1 dti (x), where each dti is a quadratic lower bound on fi of the form ? ? dti (x) = cti,1 + kx ? zit k2 = cti,2 ? ?hx, zit i + kxk2 . (5) 2 2 These lower bounds are updated during the algorithm using strong convexity lower bounds at xt?1 of the form lit (x) = fi (xt?1 ) + h?fi (xt?1 ), x ? xt?1 i + ?2 kx ? xt?1 k2 ? fi (x):  (1 ? ?t )dt?1 (x) + ?t lit (x), if i = it t i di (x) = (6) t?1 di (x), otherwise, which corresponds to an update of the quantity zit : ( (1 ? ?t )zit?1 + ?t (xt?1 ? ?1 ?fit (xt?1 )), t zi = zit?1 , if i = it otherwise. The next iterate is then computed as xt = arg minx Dt (x), which is equivalent to (4). The original MISO/Finito algorithms use ?t = 1 under a ?big data? 
condition on the sample size n [9, 21], while the theory was later extended in [18] to relax this condition by supporting smaller constant steps ?t = ?, leading to an algorithm that may be interpreted as a primal variant of SDCA (see [28]). Note that when fi is an expectation, it is hard to obtain such lower bounds since the gradient ?fi (xt?1 ) is not available in general. For this reason, we have introduced S-MISO, which can exploit approximate lower bounds to each fi using gradient estimates, by letting the step-sizes ?t decrease appropriately as commonly done in stochastic approximation. This leads to update (3). Separately, SDCA [29] considers the Fenchel conjugates of fi , defined by fi? (y) = supx x? y ?fi (x). When fi is an expectation, fi? is not available in closed form in general, nor are its gradients, and in fact exploiting stochastic gradient estimates is difficult in the duality framework. In contrast, [28] gives an analysis of SDCA in the primal, aka. ?without duality?, for smooth finite sums, and our work extends this line of reasoning to the stochastic approximation and composite settings. 4 Relationship with SGD in the smooth case. The link between S-MISO in the non-composite setting and SGD can be seen by rewriting the update (4) as 1 ?t xt = xt?1 + (zitt ? zit?1 ) = xt?1 + vt , t n n where 1 ? vt := xt?1 ? ?fit (xt?1 , ?t ) ? zit?1 . (7) t ? Note that E[vt |Ft?1 ] = ? ?1 ?f (xt?1 ), where Ft?1 contains all information up to iteration t; hence, the algorithm can be seen as an instance of the stochastic gradient method with unbiased gradients, which was a key motivation in SVRG [15] and later in other variance reduction algorithms [8, 28]. It = xt?1 ; hence is also worth noting that in the absence of a finite-sum structure (n = 1), we have zit?1 t our method becomes identical to SGD, up to a redefinition of step-sizes. In the composite case (see Appendix A), our approach yields a new algorithm that resembles regularized dual averaging [34]. Memory requirements and handling of sparse datasets. The algorithm requires storing the vectors (zit )i=1,...,n , which takes the same amount of memory as the original dataset and which is therefore a reasonable requirement in many practical cases. In the case of sparse datasets, it is fair to assume that random perturbations applied to input data preserve the sparsity patterns of the original vectors, as is the case, e.g., when applying Dropout to text documents described with bag-ofwords representations [33]. If we further assume the typical setting where the ?-strong convexity comes from an ?2 regularizer: f?i (x, ?) = ?i (x? ?i? ) + (?/2)kxk2 , where ?i? is the (sparse) perturbed example and ?i encodes the loss, then the update (3) can be written as ( ?t ?t (1 ? ?t )zit?1 ? ??t ??i (x? t?1 ?i )?i , if i = it t zi = zit?1 , otherwise, which shows that for every index i, the vector zit preserves the same sparsity pattern as the examples ?i? throughout the algorithm (assuming the initialization zi0 = 0), making the update (3) efficient. The update (4) has the same cost since vt = zitt ? zit?1 is also sparse. t Limitations and alternative approaches. Since our algorithm is uniformly better than SGD in terms of iteration complexity, its main limitation is in terms of memory storage when the dataset cannot fit into memory (remember that the memory cost of S-MISO is the same as the input dataset). 
In these huge-scale settings, SGD should be preferred; this holds true in fact for all incremental methods when one cannot afford to perform more than one (or very few) passes over the data. Our paper focuses instead on non-huge datasets, which are those benefiting most from data augmentation. We note that a different approach to variance reduction like SVRG [15] is able to trade off storage requirements for additional full gradient computations, which would be desirable in some situations. However, we were not able to obtain any decreasing step-size strategy that works for these methods, both in theory and practice, leaving us with constant step-size approaches as in [1, 14] that either maintain a non-zero asymptotic error, or require dynamically reducing the variance of gradient estimates. One possible way to explain this difficulty is that SVRG and SAGA [8] ?forget? past gradients for a given example i, while S-MISO averages them in (3), which seems to be a technical key to make it suitable to stochastic approximation. Nevertheless, the question of whether it is possible to trade-off storage with computation in a setting like ours is open and of utmost interest. 3 Convergence Analysis of S-MISO We now study the convergence properties of the S-MISO algorithm. For space limitation reasons, all proofs are provided in Appendix B. We start by defining the problem-dependent quantities zi? := x? ? ?1 ?fi (x? ), and then introduce the Lyapunov function Ct = n 1 ?t X t kxt ? x? k2 + 2 kz ? zi? k2 . 2 n i=1 i (8) Proposition 1 gives a recursion on Ct , obtained by upper-bounding separately its two terms, and finding coefficients to cancel out other appearing quantities when relating Ct to Ct?1 . To this end, we borrow elements of the convergence proof of SDCA without duality [28]; our technical contribution is to extend their result to the stochastic approximation and composite (see Appendix A) cases. 5 Proposition 1 (Recursion on Ct ). If (?t )t?1 is a positive and non-increasing sequence satisfying   n 1 , (9) , ?1 ? min 2 2(2? ? 1) with ? = L/?, then Ct obeys the recursion   ? 2 ? 2 ?t  t p E[Ct ] ? 1 ? E[Ct?1 ] + 2 . n n ?2 (10) We now state the main convergence result, which provides the expected rate O(1/t) on Ct based on decreasing step-sizes, similar to [5] for SGD. Note that convergence of objective function values is directly related to that of the Lyapunov function Ct via smoothness:  L  (11) E[f (xt ) ? f (x? )] ? E kxt ? x? k2 ? L E[Ct ]. 2 Theorem 2 (Convergence of Lyapunov function). Let the sequence of step-sizes (?t )t?1 be defined 2n by ?t = ?+t with ? ? 0 such that ?1 satisfies (9). For all t ? 0, it holds that ) ( 8?p2 ? (12) where ? := max , (? + 1)C0 . E[Ct ] ? ?+t+1 ?2 Choice of step-sizes in practice. Naturally, we would like ? to be small, in particular independent of the initial condition C0 and equal to the first term in the definition (12). We would like the dependence on C0 to vanish at a faster rate than O(1/t), as it is the case in variance reduction algorithms on finite sums. As advised in [5] in the context of SGD, we can initially run the algorithm with a constant step-size ? ? and exploit this linear convergence regime until we reach the level of noise given by ?p , and then start decaying the step-size. It is easy to see that by using a constant step-size ? ? , Ct converges near a value C? := 2? ??p2 /n?2 . Indeed, Eq. (10) with ?t = ? ? yields   ? ? ? 1? ? ? E[Ct ? C] E[Ct?1 ? C]. n n Thus, we can reach a precision C0? with E[C0? ] ? ?? := 2C? in O( ? ?) 
iterations. Then, if we ? log C0 /? start decaying step-sizes as in Theorem 2 with ? large enough so that ?1 = ? ? , we have (? + 1) E[C0? ] ? (? + 1)? ? = 8?p2 /?2 , making both terms in (12) smaller than or equal to ? = 8?p2 /?2 . Considering these two phases, with an initial step-size ? ? given by (9), the final work complexity for reaching E[kxt ? x? k2 ] ? ? is !    ?p2 C0 L log +O . (13) O n+ ? ?? ?2 ? We can then use (11) in order to obtain the complexity for reaching E[f (xt ) ? f (x? )] ? ?. Note that following this step-size strategy was found to be very effective in practice (see Section 4). Acceleration by iterate averaging. When one is interested in the convergence in function values, the complexity (13) combined with (11) yields O(L?p2 /?2 ?), which can be problematic for illconditioned problems (large condition number L/?). The following theorem presents an iterate averaging scheme which brings the complexity term down to O(?p2 /??), which appeared in Table 1. Theorem 3 (Convergence under iterate averaging). Let the step-size sequence (?t )t?1 be defined by   1 n 2n for ? ? 1 s.t. ?1 ? min , . ?t = ?+t 2 4(2? ? 1) We have E[f (? xT ) ? f (x? )] ? where 16?p2 2??(? ? 1)C0 + , T (2? + T ? 1) ?(2? + T ? 1) T ?1 X 2 (? + t)xt . x ?T := T (2? + T ? 1) t=0 6 STL-10 ckn, ? = 10 ?3 10 -2 10 -3 10 10 10 10 10 10 10 10 10 0 50 100 150 200 250 300 350 400 450 epochs ?3 STL-10 scattering, ? = 10 -2 10 10 10 -4 10 -5 0 50 10 -3 -4 0 10 -3 STL-10 ckn, 0 10 100 150 200 250 300 350 400 ? = 10 ?5 -1 10 -1 -2 10 -1 10 50 100 150 200 250 300 350 400 450 -2 -3 0 50 epochs F - F* F - F* 10 0 ? = 10 ?4 10 10 -4 10 -5 STL-10 ckn, 0 f - f* f - f* 10 -1 10 1 100 150 200 250 300 350 400 epochs ?4 STL-10 scattering, ? = 10 10 0 10 STL-10 scattering, 1 ? = 10 ?5 0 -1 F - F* S-MISO ? = 0. 1 S-MISO ? = 1. 0 N-SAGA ? = 0. 1 SGD ? = 0. 1 SGD ? = 1. 0 f - f* 10 0 -2 -3 10 10 10 -4 -5 0 50 epochs 10 100 150 200 250 300 350 400 -1 -2 -3 -4 0 50 epochs 100 150 200 250 300 350 400 epochs Figure 1: Impact of conditioning for data augmentation on STL-10 (controlled by ?, where ? = 10?4 gives the best accuracy). Values of the loss are shown on a logarithmic scale (1 unit = factor 10). ? = 0.1 satisfies the theory for all methods, and we include curves for larger step-sizes ? = 1. We omit N-SAGA for ? = 1 because it remains far from the optimum. For the scattering representation, the problem we study is ?1 -regularized, and we use the composite algorithm of Appendix A. f - f* 10 -3 10 -4 10 -5 10 -6 10 10 10 f - f* S-MISO ? = 0. 1 S-MISO ? = 1. 0 N-SAGA ? = 0. 1 SGD ? = 0. 1 SGD ? = 1. 0 10 10 10 10 0 50 100 150 200 250 300 350 400 epochs 10 ResNet50, 0 ? = 10 ?3 10 ResNet50, 0 ? = 10 ?4 -1 10 -2 -3 f - f* ResNet50, ? = 10 ?2 10 -2 -4 10 10 -1 -2 -3 -5 10 -6 -7 0 50 100 150 200 250 300 350 400 epochs 10 -4 -5 0 50 100 150 200 250 300 350 400 epochs Figure 2: Re-training of the last layer of a pre-trained ResNet 50 model, on a small dataset with random color perturbations (for different values of ?). The proof uses a similar telescoping sum technique to [16]. Note that if T ? ?, the first term, which depends on the initial condition C0 , decays as 1/T 2 and is thus dominated by the second term. Moreover, if we start averaging after an initial phase with constant step-size ? ? , we can consider C0 ? 4? ??p2 /n?2 . In the ill-conditioned regime, taking ? ? = ?1 = 2n/(? + 1) as large as allowed by (9), we have ? of the order of ? = L/? ? 1. The full convergence rate then becomes ! 2   ? ? p E[f (? xT ) ? f (x? )] ? O . 
1+ ?(? + T ) T When T is large enough compared to ?, this becomes O(?p2 /?T ), leading to a complexity O(?p2 /??). 4 Experiments We present experiments comparing S-MISO with SGD and N-SAGA [14] on four different scenarios, in order to demonstrate the wide applicability of our method: we consider an image classification dataset with two different image representations and random transformations, and two classification tasks with Dropout regularization, one on genetic data, and one on (sparse) text data. Figures 1 and 3 show the curves for an estimate of the training objective using 5 sampled perturbations per example. The plots are shown on a logarithmic scale, and the values are compared to the best value obtained among the different methods in 500 epochs. The strong convexity constant ? is the regularization parameter. For all methods, we consider step-sizes supported by the theory as well as larger step-sizes that may work better in practice. Our C++/Cython implementation of all methods considered in this section is available at https://github.com/albietz/stochs. Choices of step-sizes. For both S-MISO and SGD, we use the step-size strategy mentioned in Section 3 and advised by [5], which we have found to be most effective among many heuristics 7 10 -3 10 10 10 -4 10 0 10 -1 10 0 50 100 150 200 250 300 350 400 epochs imdb dropout, = 0.30 ? S-MISO-NU = 1 0 S-MISO = 10 0 SGD-NU = 1 0 SGD = 10 0 N-SAGA = 10 0 ? ? f - f* 10 -2 . . ? ? . . ? . f - f* 10 -5 10 10 -3 10 -4 0 50 100 150 200 250 300 350 400 epochs 10 gene dropout, 0 ? = 0.10 10 -1 10 -2 10 f - f* 10 -2 10 f - f* f - f* 10 S-MISO ? = 0. 1 S-MISO ? = 1. 0 SGD ? = 0. 1 SGD ? = 1. 0 N-SAGA ? = 0. 1 N-SAGA ? = 1. 0 10 -1 -3 -4 10 10 10 -5 10 -6 0 50 100 150 200 250 300 350 400 epochs imdb dropout, ? = 0.10 10 0 -1 10 10 -2 10 -3 10 -4 10 -5 0 50 100 150 200 250 300 350 400 epochs f - f* gene dropout, ? = 0.30 10 0 10 gene dropout, 0 ? = 0.01 -1 -2 -3 -4 -5 -6 -7 0 50 100 150 200 250 300 350 400 epochs imdb dropout, ? = 0.01 10 0 10 -1 10 -2 10 -3 10 -4 10 -5 10 -6 10 -7 0 50 100 150 200 250 300 350 400 epochs Figure 3: Impact of perturbations controlled by the Dropout rate ?. The gene data is ?2 -normalized; hence, we consider similar step-sizes as Figure 1. The IMDB dataset is highly heterogeneous; thus, we also include non-uniform (NU) sampling variants of Appendix A. For uniform sampling, theoretical step-sizes perform poorly for all methods; thus, we show a larger tuned step-size ? = 10. we have tried: we initially keep the step-size constant (controlled by a factor ? ? 1 in the figures) for 2 epochs, and then start decaying as ?t = C/(? + t), where C = 2n for S-MISO, C = 2/? for SGD, and ? is chosen large enough to match the previous constant step-size. For N-SAGA, we maintain a constant step-size throughout the optimization, as suggested in the original paper [14]. The factor ? shown in the figures is such that ? = 1 corresponds to an initial step-size n?/(L ? ?) ? instead of L in for S-MISO (from (19) in the uniform case) and 1/L for SGD and N-SAGA (with L the non-uniform case when using the variant of Appendix A). Image classification with ?data augmentation?. The success of deep neural networks is often limited by the availability of large amounts of labeled images. When there are many unlabeled images but few labeled ones, a common approach is to train a linear classifier on top of a deep network learned in an unsupervised manner, or pre-trained on a different task (e.g., on the ImageNet dataset). 
We follow this approach on the STL-10 dataset [7], which contains 5K training images from 10 classes and 100K unlabeled images, using a 2-layer unsupervised convolutional kernel network [22], giving representations of dimension 9 216. The perturbation consists of randomly cropping and scaling the input images. We use the squared hinge loss in a one-versus-all setting. The vector representations are ?2 -normalized such that we may use the upper bound L = 1 + ? for the smoothness constant. We also present results on the same dataset using a scattering representation [6] of dimension 21 696, with random gamma corrections (raising all pixels to the power ?, where ? is chosen randomly around 1). For this representation, we add an ?1 regularization term and use the composite variant of S-MISO presented in Appendix A. Figure 1 shows convergence results on one training fold (500 images), for different values of ?, allowing us to study the behavior of the algorithms for different condition numbers. The low variance induced by data transformations allows S-MISO to reach suboptimality that is orders of magnitude smaller than SGD after the same number of epochs. Note that one unit on these plots corresponds to one order of magnitude in the logarithmic scale. N-SAGA initially reaches a smaller suboptimality than SGD, but quickly gets stuck due to the bias in the algorithm, as predicted by the theory [14], while S-MISO and SGD continue to converge to the optimum thanks to the decreasing step-sizes. The best validation accuracy for both representations is obtained for ? ? 10?4 (middle column), and we observed relative gains of up to 1% from using data augmentation. We computed empirical variances of the image representations for these two strategies, which are closely related to the variance in gradient estimates, and observed these transformations to account for about 10% of the total variance. Figure 2 shows convergence results when training the last layer of a 50-layer Residual network [12] that has been pre-trained on ImageNet. Here, we consider the common scenario of leveraging a deep model trained on a large dataset as a feature extractor in order to learn a new classifier on a different small dataset, where it would be difficult to train such a model from scratch. To simulate this setting, we consider a binary classification task on a small dataset of 100 images of size 256x256 taken from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012, which we crop to 8 224x224 before performing random adjustments to brightness, saturation, hue and contrast. As in the STL-10 experiments, the gains of S-MISO over other methods are of about one order of magnitude in suboptimality, as predicted by Table 2. Dropout on gene expression data. We trained a binary logistic regression model on the breast cancer dataset of [31], with different Dropout rates ?, i.e., where at every iteration, each coordinate ?j of a feature vector ? is set to zero independently with probability ? and to ?j /(1 ? ?) otherwise. The dataset consists of 295 vectors of dimension 8 141 of gene expression data, which we normalize in ?2 norm. Figure 3 (top) compares S-MISO with SGD and N-SAGA for three values of ?, as a way to control the variance of the perturbations. We include a Dropout rate of 0.01 to illustrate the impact of ? on the algorithms and study the influence of the perturbation variance ?p2 , even though this value of ? is less relevant for the task. 
The plots show very clearly how the variance induced by the perturbations affects the convergence of S-MISO, giving suboptimality values that may be orders of magnitude smaller than SGD. This behavior is consistent with the theoretical convergence rate established in Section 3 and shows that the practice matches the theory. Dropout on movie review sentiment analysis data. We trained a binary classifier with a squared hinge loss on the IMDB dataset [20] with different Dropout rates ?. We use the labeled part of the IMDB dataset, which consists of 25K training and 250K testing movie reviews, represented as 89 527-dimensional sparse bag-of-words vectors. In contrast to the previous experiments, we do not normalize the representations, which have great variability in their norms, in particular, the maximum Lipschitz constant across training points is roughly 100 times larger than the average one. Figure 3 (bottom) compares non-uniform sampling versions of S-MISO (see Appendix A) and SGD (see Appendix D) with their uniform sampling counterparts as well as N-SAGA. Note that we use a large step-size ? = 10 for the uniform sampling algorithms, since ? = 1 was significantly slower for all methods, likely due to outliers in the dataset. In contrast, the non-uniform sampling algorithms required no tuning and just use ? = 1. The curves clearly show that S-MISO-NU has a much faster convergence in the initial phase, thanks to the larger step-size allowed by non-uniform sampling, and later converges similarly to S-MISO, i.e., at a much faster rate than SGD when the perturbations are small. The value of ? used in the experiments was chosen by cross-validation, and the use of Dropout gave improvements in test accuracy from 88.51% with no dropout to 88.68 ? 0.03% with ? = 0.1 and 88.86 ? 0.11% with ? = 0.3 (based on 10 different runs of S-MISO-NU after 400 epochs). Finally, we also study the effect of the iterate averaging scheme of Theorem 3 in Appendix E. Acknowledgements This work was supported by a grant from ANR (MACARON project under grant number ANR14-CE23-0003-01), by the ERC grant number 714381 (SOLARIS project), and by the MSR-Inria joint center. References [1] M. Achab, A. Guilloux, S. Ga?ffas, and E. Bacry. SGD with Variance Reduction beyond Empirical Risk Minimization. arXiv:1510.04822, 2015. [2] Z. Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. In Symposium on the Theory of Computing (STOC), 2017. [3] Z. Allen-Zhu, Y. Yuan, and K. Sridharan. Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters. In Advances in Neural Information Processing Systems (NIPS), 2016. [4] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Advances in Neural Information Processing Systems (NIPS), 2011. [5] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization Methods for Large-Scale Machine Learning. arXiv:1606.04838, 2016. [6] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE transactions on pattern analysis and machine intelligence (PAMI), 35(8):1872?1886, 2013. [7] A. Coates, H. Lee, and A. Y. Ng. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2011. 9 [8] A. Defazio, F. Bach, and S. Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems (NIPS), 2014. [9] A. 
Deep Learning with Topological Signatures

Christoph Hofer, Department of Computer Science, University of Salzburg, Austria ([email protected])
Roland Kwitt, Department of Computer Science, University of Salzburg, Austria ([email protected])
Marc Niethammer, UNC Chapel Hill, NC, USA ([email protected])
Andreas Uhl, Department of Computer Science, University of Salzburg, Austria ([email protected])

Abstract

Inferring topological and geometrical information from data can offer an alternative perspective on machine learning problems. Methods from topological data analysis, e.g., persistent homology, enable us to obtain such information, typically in the form of summary representations of topological features. However, such topological signatures often come with an unusual structure (e.g., multisets of intervals) that is highly impractical for most machine learning techniques. While many strategies have been proposed to map these topological signatures into machine learning compatible representations, they suffer from being agnostic to the target learning task. In contrast, we propose a technique that enables us to input topological signatures to deep neural networks and learn a task-optimal representation during training. Our approach is realized as a novel input layer with favorable theoretical properties. Classification experiments on 2D object shapes and social network graphs demonstrate the versatility of the approach and, in case of the latter, we even outperform the state-of-the-art by a large margin.

1 Introduction

Methods from algebraic topology have only recently emerged in the machine learning community, most prominently under the term topological data analysis (TDA) [7]. Since TDA enables us to infer relevant topological and geometrical information from data, it can offer a novel and potentially beneficial perspective on various machine learning problems. Two compelling benefits of TDA are (1) its versatility, i.e., we are not restricted to any particular kind of data (such as images, sensor measurements, time-series, graphs, etc.) and (2) its robustness to noise. Several works have demonstrated that TDA can be beneficial in a diverse set of problems, such as studying the manifold of natural image patches [8], analyzing activity patterns of the visual cortex [28], classification of 3D surface meshes [27, 22], clustering [11], or recognition of 2D object shapes [29].

Currently, the most widely-used tool from TDA is persistent homology [15, 14]. Essentially¹, persistent homology allows us to track topological changes as we analyze data at multiple "scales". As the scale changes, topological features (such as connected components, holes, etc.) appear and disappear. Persistent homology associates a lifespan to these features in the form of a birth and a death time. The collection of (birth, death) tuples forms a multiset that can be visualized as a persistence diagram or a barcode, also referred to as a topological signature of the data. However, leveraging these signatures for learning purposes poses considerable challenges, mostly due to their unusual structure as a multiset. While there exist suitable metrics to compare signatures (e.g., the Wasserstein metric), they are highly impractical for learning, as they require solving optimal matching problems.

¹ We will make these concepts more concrete in Sec. 2.

[Figure 1: Illustration of the proposed network input layer for topological signatures. Each signature, in the form of a persistence diagram D ∈ 𝒟 (left), is projected w.r.t. a collection of structure elements. The layer's learnable parameters θ are the locations μ_i and the scales σ_i of these elements; ν ∈ ℝ⁺ is set a-priori and meant to discount the impact of points with low persistence (and, in many cases, of low discriminative power). The layer output y is a concatenation of the projections. In this illustration, N = 2 and hence y = (y_1, y_2)ᵀ.]

Related work. In order to deal with these issues, several strategies have been proposed. In [2] for instance, Adcock et al. use invariant theory to "coordinatize" the space of barcodes. This allows to map barcodes to vectors of fixed size which can then be fed to standard machine learning techniques, such as support vector machines (SVMs). Alternatively, Adams et al. [1] map barcodes to so-called persistence images which, upon discretization, can also be interpreted as vectors and used with standard learning techniques. Along another line of research, Bubenik [6] proposes a mapping of barcodes into a Banach space. This has been shown to be particularly viable in a statistical context (see, e.g., [10]). The mapping outputs a representation referred to as a persistence landscape. Interestingly, under a specific choice of parameters, barcodes are mapped into $L^2(\mathbb{R}^2)$ and the inner-product in that space can be used to construct a valid kernel function. Similar kernel-based techniques have also recently been studied by Reininghaus et al. [27], Kwitt et al. [20] and Kusano et al. [19].

While all previously mentioned approaches retain certain stability properties of the original representation with respect to common metrics in TDA (such as the Wasserstein or Bottleneck distances), they also share one common drawback: the mapping of topological signatures to a representation that is compatible with existing learning techniques is pre-defined. Consequently, it is fixed and therefore agnostic to any specific learning task. This is clearly suboptimal, as the eminent success of deep neural networks (e.g., [18, 17]) has shown that learning representations is a preferable approach. Furthermore, techniques based on kernels [27, 20, 19], for instance, additionally suffer scalability issues, as training typically scales poorly with the number of samples (e.g., roughly cubic in case of kernel-SVMs). In the spirit of end-to-end training, we therefore aim for an approach that allows us to learn a task-optimal representation of topological signatures. We additionally remark that, e.g., Qi et al. [25] or Ravanbakhsh et al. [26] have proposed architectures that can handle sets, but only with fixed size. In our context, this is impractical as the capability of handling sets with varying cardinality is a requirement to handle persistent homology in a machine learning setting.

Contribution. To realize this idea, we advocate a novel input layer for deep neural networks that takes a topological signature (in our case, a persistence diagram) and computes a parametrized projection that can be learned during network training. Specifically, this layer is designed such that its output is stable with respect to the 1-Wasserstein distance (similar to [27] or [1]).
To demonstrate the versatility of this approach, we present experiments on 2D object shape classification and the classification of social network graphs. On the latter, we improve the state-of-the-art by a large margin, clearly demonstrating the power of combining TDA with deep learning in this context.

2 Background

For space reasons, we only provide a brief overview of the concepts that are relevant to this work and refer the reader to [16] or [14] for further details.

Homology. The key concept of homology theory is to study the properties of some object X by means of (commutative) algebra. In particular, we assign to X a sequence of modules C_0, C_1, ... which are connected by homomorphisms ∂_n : C_n → C_{n−1} such that im ∂_{n+1} ⊆ ker ∂_n. A structure of this form is called a chain complex, and by studying its homology groups H_n = ker ∂_n / im ∂_{n+1} we can derive properties of X.

A prominent example of a homology theory is simplicial homology. Throughout this work, it is the homology theory used, and hence we will now concretize the already presented ideas. Let K be a simplicial complex and K_n its n-skeleton. Then we set C_n(K) as the vector space generated (freely) by K_n over Z/2Z². The connecting homomorphisms ∂_n : C_n(K) → C_{n−1}(K) are called boundary operators. For a simplex σ = [x_0, ..., x_n] ∈ K_n, we define them as $\partial_n(\sigma) = \sum_{i=0}^{n} [x_0, \dots, x_{i-1}, x_{i+1}, \dots, x_n]$ and linearly extend this to C_n(K), i.e., $\partial_n(\sum \sigma_i) = \sum \partial_n(\sigma_i)$.

Persistent homology. Let K be a simplicial complex and (K^i)_{i=0}^m a sequence of simplicial complexes such that ∅ = K⁰ ⊆ K¹ ⊆ ... ⊆ K^m = K. Then, (K^i)_{i=0}^m is called a filtration of K. If we use the extra information provided by the filtration of K, we obtain a sequence of chain complexes

$\cdots \xrightarrow{\partial_3} C_2^i \xrightarrow{\partial_2} C_1^i \xrightarrow{\partial_1} C_0^i \xrightarrow{\partial_0} 0, \qquad i = 1, \dots, m,$

connected column-wise by inclusions $C_n^i \hookrightarrow C_n^{i+1}$, where $C_n^i = C_n(K^i)$ and $\hookrightarrow$ denotes the inclusion. As an example, for a filtration K¹ ⊆ K² ⊆ K³ on vertices v_1, ..., v_4 one has, e.g., $C_0^1 = [[v_1],[v_2]]_{\mathbb{Z}_2}$, $C_1^1 = 0$; $C_0^2 = [[v_1],[v_2],[v_3]]_{\mathbb{Z}_2}$, $C_1^2 = [[v_1,v_3],[v_2,v_3]]_{\mathbb{Z}_2}$; $C_0^3 = [[v_1],[v_2],[v_3],[v_4]]_{\mathbb{Z}_2}$, $C_1^3 = [[v_1,v_3],[v_2,v_3],[v_3,v_4]]_{\mathbb{Z}_2}$; and $C_2^1 = C_2^2 = C_2^3 = 0$.

This then leads to the concept of persistent homology groups, defined by

$H_n^{i,j} = \ker \partial_n^i / (\mathrm{im}\, \partial_{n+1}^j \cap \ker \partial_n^i) \quad \text{for} \quad i \leq j.$

The ranks, $\beta_n^{i,j} = \mathrm{rank}\, H_n^{i,j}$, of these homology groups (i.e., the n-th persistent Betti numbers) capture the number of homological features of dimensionality n (e.g., connected components for n = 0, holes for n = 1, etc.) that persist from i to (at least) j. In fact, according to [14, Fundamental Lemma of Persistent Homology], the quantities

$\mu_n^{i,j} = (\beta_n^{i,j-1} - \beta_n^{i,j}) - (\beta_n^{i-1,j-1} - \beta_n^{i-1,j}) \quad \text{for} \quad i < j \qquad (1)$

encode all the information about the persistent Betti numbers of dimension n.

Topological signatures. A typical way to obtain a filtration of K is to consider sublevel sets of a function f : C_0(K) → ℝ. This function can be easily lifted to higher-dimensional chain groups of K by

$f([v_0, \dots, v_n]) = \max \{ f([v_i]) : 0 \leq i \leq n \}.$

Given $m = |f(C_0(K))|$, we obtain $(K_i)_{i=0}^m$ by setting $K_0 = \emptyset$ and $K_i = f^{-1}((-\infty, a_i])$ for $1 \leq i \leq m$, where $a_1 < \cdots < a_m$ is the sorted sequence of values of $f(C_0(K))$. If we construct a multiset such that, for i < j, the point $(a_i, a_j)$ is inserted with multiplicity $\mu_n^{i,j}$, we effectively encode the persistent homology of dimension n w.r.t. the sublevel set filtration induced by f.
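The 0-dimensional part of this construction is simple enough to sketch in code. The following Python snippet — a minimal sketch under our own naming, not the DIPHA/Perseus implementation used in Sec. 5 — computes the 0-dimensional persistence pairs of a sublevel-set filtration on a graph (vertices carry the values f([v]); an edge enters at the maximum of its endpoint values) via union-find and the elder rule:

```python
def persistence_0dim(vertex_vals, edges):
    """0-dim persistence pairs of a sublevel-set filtration on a graph.

    vertex_vals: filtration value f([v]) per vertex (a vertex is born there).
    edges: (u, v) index pairs; an edge enters at max(f(u), f(v)).
    Returns (birth, death) pairs; components that never die are essential
    and get death = float('inf').
    """
    parent = list(range(len(vertex_vals)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    pairs = []
    for u, v in sorted(edges, key=lambda e: max(vertex_vals[e[0]], vertex_vals[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # the edge closes a 1-cycle; irrelevant in dimension 0
        if vertex_vals[ru] > vertex_vals[rv]:
            ru, rv = rv, ru  # elder rule: the younger component dies
        pairs.append((vertex_vals[rv], max(vertex_vals[u], vertex_vals[v])))
        parent[rv] = ru
    for x in range(len(vertex_vals)):
        if find(x) == x:  # surviving components are essential features
            pairs.append((vertex_vals[x], float('inf')))
    return pairs
```

Collecting these pairs as a multiset yields exactly the dimension-0 diagram described above; the points with infinite death correspond to the essential features discussed in the remark further below.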
Upon adding diagonal points with infinite multiplicity, we obtain the following structure:

Definition 1 (Persistence diagram). Let $\Delta = \{x \in \mathbb{R}^2_\Delta : \mathrm{mult}(x) = \infty\}$ be the multiset of the diagonal $\mathbb{R}^2_\Delta = \{(x_0, x_1) \in \mathbb{R}^2 : x_0 = x_1\}$, where mult denotes the multiplicity function, and let $\mathbb{R}^2_\star = \{(x_0, x_1) \in \mathbb{R}^2 : x_1 > x_0\}$. A persistence diagram, D, is a multiset of the form $D = \{x : x \in \mathbb{R}^2_\star\} \cup \Delta$. We denote by $\mathcal{D}$ the set of all persistence diagrams D such that $|D \setminus \Delta| < \infty$.

For a given complex K of dimension n_max and a function f (of the discussed form), we can interpret persistent homology as a mapping $(K, f) \mapsto (D_0, \dots, D_{n_{\max}-1})$, where $D_i$ is the diagram of dimension i and n_max the dimension of K. We can additionally add a metric structure to the space of persistence diagrams by introducing the notion of distances.

² Simplicial homology is not specific to Z/2Z, but it's a typical choice, since it allows us to interpret n-chains as sets of n-simplices.

Definition 2 (Bottleneck, Wasserstein distance). For two persistence diagrams D and E, we define their Bottleneck ($w_\infty$) and Wasserstein ($w_q^p$) distances by

$w_\infty(D, E) = \inf_\eta \sup_{x \in D} \|x - \eta(x)\|_\infty \quad \text{and} \quad w_q^p(D, E) = \inf_\eta \Big( \sum_{x \in D} \|x - \eta(x)\|_q^p \Big)^{\frac{1}{p}}, \qquad (2)$

where $p, q \in \mathbb{N}$ and the infimum is taken over all bijections $\eta : D \to E$. Essentially, this facilitates studying stability/continuity properties of topological signatures w.r.t. metrics in the filtration or complex space; we refer the reader to [12], [13], [9] for a selection of important stability results.

Remark. By setting $\beta_n^{i,\infty} = \beta_n^{i,m} - \beta_n^{i-1,m}$, we extend Eq. (1) to features which never disappear, also referred to as essential. This change can be lifted to $\mathcal{D}$ by setting $\mathbb{R}^2_\star = \{(x_0, x_1) \in \mathbb{R} \times (\mathbb{R} \cup \{\infty\}) : x_1 > x_0\}$. In Sec. 5, we will see that essential features can offer discriminative information.

3 A network layer for topological signatures

In this section, we introduce the proposed (parametrized) network layer for topological signatures (in the form of persistence diagrams). The key idea is to take any D and define a projection w.r.t. a collection (of fixed size N) of structure elements. In the following, we set $\mathbb{R}^+ = \{x \in \mathbb{R} : x > 0\}$ and $\mathbb{R}^+_0 = \{x \in \mathbb{R} : x \geq 0\}$, resp., and start by rotating points of D such that points on $\mathbb{R}^2_\Delta$ lie on the x-axis, see Fig. 1. The y-axis can then be interpreted as the persistence of features. Formally, we let $b_0$ and $b_1$ be the unit vectors in directions $(1, 1)^\top$ and $(-1, 1)^\top$ and define a mapping $\rho : \mathbb{R}^2_\star \cup \mathbb{R}^2_\Delta \to \mathbb{R} \times \mathbb{R}^+_0$ such that $x \mapsto (\langle x, b_0 \rangle, \langle x, b_1 \rangle)$. This rotates points in $\mathbb{R}^2_\star \cup \mathbb{R}^2_\Delta$ clockwise by $\pi/4$. We will later see that this construction is beneficial for a closer analysis of the layer's properties. Similar to [27, 19], we choose exponential functions as structure elements, but other choices are possible (see Lemma 1). Differently to [27, 19], however, our structure elements are not at fixed locations (i.e., one element per point in D), but their locations and scales are learned during training.

Definition 3. Let $\mu = (\mu_0, \mu_1)^\top \in \mathbb{R} \times \mathbb{R}^+$, $\sigma = (\sigma_0, \sigma_1) \in \mathbb{R}^+ \times \mathbb{R}^+$ and $\nu \in \mathbb{R}^+$. We define $s_{\mu,\sigma,\nu} : \mathbb{R} \times \mathbb{R}^+_0 \to \mathbb{R}$ as follows:

$s_{\mu,\sigma,\nu}(x_0, x_1) = \begin{cases} e^{-\sigma_0^2 (x_0 - \mu_0)^2 - \sigma_1^2 (x_1 - \mu_1)^2}, & x_1 \in [\nu, \infty) \\ e^{-\sigma_0^2 (x_0 - \mu_0)^2 - \sigma_1^2 (\ln(\frac{x_1}{\nu})\nu + \nu - \mu_1)^2}, & x_1 \in (0, \nu) \\ 0, & x_1 = 0 \end{cases} \qquad (3)$

A persistence diagram D is then projected w.r.t. $s_{\mu,\sigma,\nu}$ via

$S_{\mu,\sigma,\nu} : \mathcal{D} \to \mathbb{R}, \quad D \mapsto \sum_{x \in D} s_{\mu,\sigma,\nu}(\rho(x)). \qquad (4)$

Remark. Note that $s_{\mu,\sigma,\nu}$ is continuous in $x_1$ as

$\lim_{x \to \nu} x = \nu = \lim_{x \to \nu} \big( \ln(\tfrac{x}{\nu})\nu + \nu \big) \quad \text{and} \quad \lim_{x_1 \to 0} s_{\mu,\sigma,\nu}(x_0, x_1) = 0 = s_{\mu,\sigma,\nu}(x_0, 0),$

and $e^{(\cdot)}$ is continuous.
Further, $s_{\mu,\sigma,\nu}$ is differentiable on $\mathbb{R} \times \mathbb{R}^+$, since

$\lim_{x_1 \to \nu^+} \frac{\partial x_1}{\partial x_1} = 1 \quad \text{and} \quad \lim_{x_1 \to \nu^-} \frac{\partial}{\partial x_1}\big( \ln(\tfrac{x_1}{\nu})\nu + \nu \big) = \lim_{x_1 \to \nu^-} \frac{\nu}{x_1} = 1.$

Also note that we use the log-transform in Eq. (4) to guarantee that $s_{\mu,\sigma,\nu}$ satisfies the conditions of Lemma 1; this is, however, only one possible choice.

Remark. The intuition behind ν is the following. It is the threshold at which the log-transform starts to operate. The log-transform stretches the space between the x-axis and the horizontal line at height ν to infinite length. As a consequence, $s_{\mu,\sigma,\nu} = 0$ for $x \in \mathbb{R}^2_\Delta$. This is necessary since otherwise $S_{\mu,\sigma,\nu}(D) = \infty$ for $D \in \mathcal{D}$ (as each persistence diagram contains points at the diagonal with infinite multiplicity).

Finally, given a collection of $S_{\mu_i,\sigma_i,\nu}$, we combine them to form the output of the network layer.

Definition 4. Let $N \in \mathbb{N}$, $\theta = (\mu_i, \sigma_i)_{i=0}^{N-1} \in \big( (\mathbb{R} \times \mathbb{R}^+) \times (\mathbb{R}^+ \times \mathbb{R}^+) \big)^N$ and $\nu \in \mathbb{R}^+$. We define

$S_{\theta,\nu} : \mathcal{D} \to (\mathbb{R}^+_0)^N, \quad D \mapsto \big( S_{\mu_i,\sigma_i,\nu}(D) \big)_{i=0}^{N-1}$

as the concatenation of all N mappings defined in Eq. (4).

Importantly, a network layer implementing Def. 4 is trainable via backpropagation, as (1) $s_{\mu_i,\sigma_i,\nu}$ is differentiable in $\mu_i, \sigma_i$, (2) $S_{\mu_i,\sigma_i,\nu}(D)$ is a finite sum of $s_{\mu_i,\sigma_i,\nu}$ and (3) $S_{\theta,\nu}$ is just a concatenation.

4 Theoretical properties

In this section, we demonstrate that the proposed layer is stable w.r.t. the 1-Wasserstein distance $w_q^1$, see Eq. (2). In fact, this claim will follow from a more general result, stating sufficient conditions on functions $s : \mathbb{R}^2_\star \cup \mathbb{R}^2_\Delta \to \mathbb{R}^+_0$ such that a construction in the form of Eq. (3) is stable w.r.t. $w_q^1$.

Lemma 1. Let $s : \mathbb{R}^2_\star \cup \mathbb{R}^2_\Delta \to \mathbb{R}^+_0$ have the following properties: (i) s is Lipschitz continuous w.r.t. $\|\cdot\|_q$ with constant $K_s$; (ii) $s(x) = 0$, for $x \in \mathbb{R}^2_\Delta$. Then, for two persistence diagrams $D, E \in \mathcal{D}$, it holds that

$\Big| \sum_{x \in D} s(x) - \sum_{y \in E} s(y) \Big| \leq K_s \cdot w_q^1(D, E). \qquad (5)$

Proof. See Appendix B.

Remark. At this point, we want to clarify that Lemma 1 is not specific to $s_{\mu,\sigma,\nu}$ (e.g., as in Def. 3). Rather, Lemma 1 yields sufficient conditions to construct a $w_1$-stable input layer. Our choice of $s_{\mu,\sigma,\nu}$ is just a natural example that fulfils those requirements and, hence, $S_{\theta,\nu}$ is just one possible representative of a whole family of input layers.

With the result of Lemma 1 in mind, we turn to the specific case of $S_{\theta,\nu}$ and analyze its stability properties w.r.t. $w_q^1$. The following lemma is important in this context.

Lemma 2. $s_{\mu,\sigma,\nu}$ has absolutely bounded first-order partial derivatives w.r.t. $x_0$ and $x_1$ on $\mathbb{R} \times \mathbb{R}^+$.

Proof. See Appendix B.

Theorem 1. $S_{\theta,\nu}$ is Lipschitz continuous with respect to $w_q^1$ on $\mathcal{D}$.

Proof. Lemma 2 immediately implies that $s_{\mu,\sigma,\nu}$ from Eq. (3) is Lipschitz continuous w.r.t. $\|\cdot\|_q$. Consequently, $s = s_{\mu,\sigma,\nu} \circ \rho$ satisfies property (i) from Lemma 1; property (ii) from Lemma 1 is satisfied by construction. Hence, $S_{\mu,\sigma,\nu}$ is Lipschitz continuous w.r.t. $w_q^1$. Consequently, $S_{\theta,\nu}$ is Lipschitz in each coordinate and therefore Lipschitz continuous.

Interestingly, the stability result of Theorem 1 is comparable to the stability results in [1] or [27] (which are also w.r.t. $w_q^1$ and in the setting of diagrams with finitely-many points). However, contrary to previous works, if we would chop off the input layer after network training, we would then have a mapping $S_{\theta,\nu}$ of persistence diagrams that is specifically tailored to the learning task on which the network was trained.
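To make Defs. 3 and 4 concrete, the following PyTorch sketch implements the rotation ρ and the layer output $S_{\theta,\nu}(D)$. It is a minimal illustration under our own naming and padding conventions — diagrams are padded to a fixed length with points at the diagonal, i.e., $x_1 = 0$, which contribute exactly 0 by Eq. (3) — and not the authors' released code (available at https://github.com/c-hofer/nips2017):

```python
import torch
import torch.nn as nn

def rho(x):
    """Rotation of Sec. 3: x -> (<x, b0>, <x, b1>) with unit vectors
    b0 = (1, 1)/sqrt(2) and b1 = (-1, 1)/sqrt(2)."""
    s = 2 ** -0.5
    return torch.stack([s * (x[..., 0] + x[..., 1]),
                        s * (x[..., 1] - x[..., 0])], dim=-1)

class TopoSigLayer(nn.Module):
    """Sketch of the input layer of Def. 4 (class/parameter names are ours).

    Input : rotated diagram points rho(x), shape (batch, n_points, 2),
            padded with (0, 0) so that padding contributes nothing.
    Output: the N projections S_{mu_i, sigma_i, nu}(D) of Eq. (4)."""

    def __init__(self, n_elements, nu=0.1):
        super().__init__()
        self.nu = nu
        self.mu = nn.Parameter(torch.rand(n_elements, 2))     # locations mu_i
        self.sigma = nn.Parameter(torch.ones(n_elements, 2))  # scales sigma_i

    def forward(self, x):                        # x: (batch, n_points, 2)
        x0 = x[..., 0].unsqueeze(1)              # (batch, 1, n_points)
        x1 = x[..., 1].unsqueeze(1)
        mu0, mu1 = (self.mu[:, i].view(1, -1, 1) for i in (0, 1))
        s0, s1 = (self.sigma[:, i].view(1, -1, 1) for i in (0, 1))
        # Log-transformed persistence coordinate for x1 in (0, nu), Eq. (3).
        safe = torch.clamp(x1, min=1e-12)        # avoid log(0); masked below
        low = self.nu * torch.log(safe / self.nu) + self.nu
        t = torch.where(x1 >= self.nu, x1, low)
        val = torch.exp(-s0 ** 2 * (x0 - mu0) ** 2 - s1 ** 2 * (t - mu1) ** 2)
        val = torch.where(x1 > 0, val, torch.zeros_like(val))  # s(., 0) = 0
        return val.sum(dim=2)                    # (batch, n_elements)
```

Stacking a standard classifier on top of such a layer (or of several, one per input branch) and training with cross-entropy then realizes the end-to-end setup of Sec. 5.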
[Figure 2: Height function filtration of a "clean" (left, green points) and a "noisy" (right, blue points) shape along direction d = (0, −1)ᵀ (panels: shape with artificially added noise, filtration directions, persistence diagram of 0-dim. features). This example demonstrates the insensitivity of homology towards noise, as the added noise only (1) slightly shifts the dominant points (upper left corner) and (2) produces additional points close to the diagonal, which have little impact on the Wasserstein distance and the output of our layer.]

5 Experiments

To demonstrate the versatility of the proposed approach, we present experiments with two totally different types of data: (1) 2D shapes of objects, represented as binary images, and (2) social network graphs, given by their adjacency matrix. In both cases, the learning task is classification. In each experiment we ensured a balanced group size (per label) and used a 90/10 random training/test split; all reported results are averaged over five runs with fixed ν = 0.1. In practice, points in input diagrams were thresholded at 0.01 for computational reasons. Additionally, we conducted a reference experiment on all datasets using simple vectorization (see Sec. 5.3) of the persistence diagrams in combination with a linear SVM.

Implementation. All experiments were implemented in PyTorch (https://github.com/pytorch/pytorch), using DIPHA (https://bitbucket.org/dipha/dipha) and Perseus [23]. Source code is publicly available at https://github.com/c-hofer/nips2017.

5.1 Classification of 2D object shapes

We apply persistent homology combined with our proposed input layer to two different datasets of binary 2D object shapes: (1) the Animal dataset, introduced in [3], which consists of 20 different animal classes, 100 samples each; (2) the MPEG-7 dataset, which consists of 70 classes of different object/animal contours, 20 samples each (see [21] for more details).

Filtration. The requirements to use persistent homology on 2D shapes are twofold: First, we need to assign a simplicial complex to each shape; second, we need to appropriately filtrate the complex. While, in principle, we could analyze contour features, such as curvature, and choose a sublevel set filtration based on that, such a strategy requires substantial preprocessing of the discrete data (e.g., smoothing). Instead, we choose to work with the raw pixel data and leverage the persistent homology transform, introduced by Turner et al. [29]. The filtration in that case is based on sublevel sets of the height function, computed from multiple directions (see Fig. 2). Practically, this means that we directly construct a simplicial complex from the binary image. We set $K_0$ as the set of all pixels which are contained in the object. Then, a 1-simplex $[p_0, p_1]$ is in the 1-skeleton $K_1$ iff $p_0$ and $p_1$ are 4-neighbors on the pixel grid. To filtrate the constructed complex, we denote by b the barycenter of the object and with r the radius of its bounding circle around b. Finally, we define, for $[p] \in K_0$ and $d \in \mathbb{S}^1$, the filtration function by $f([p]) = \frac{1}{r} \langle p - b, d \rangle$. Function values are lifted to $K_1$ by taking the maximum, cf. Sec. 2. Finally, let $d_i$ be the 32 equidistantly distributed directions in $\mathbb{S}^1$, starting from $(1, 0)^\top$. For each shape, we get a vector of persistence diagrams $(D_i)_{i=1}^{32}$, where $D_i$ is the 0-th diagram obtained by filtration along $d_i$. As most objects do not differ in homology groups of higher dimensions (> 0), we did not use the corresponding persistence diagrams. A minimal sketch of this filtration is given below.
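The sketch referenced above (NumPy, our own helper names and pixel-coordinate conventions; the paper's actual pipeline uses DIPHA/Perseus) computes the vertex values f([p]) = ⟨p − b, d⟩/r and the 4-neighbor edges with lifted values; feeding its output to a 0-dimensional persistence routine, such as the union-find sketch in Sec. 2, produces the diagrams $D_i$:

```python
import numpy as np

def height_filtration(binary_img, direction):
    """Filtration of Sec. 5.1: f([p]) = <p - b, d> / r for object pixels,
    lifted to 4-neighbor edges by the max of the endpoint values."""
    ys, xs = np.nonzero(binary_img)
    pix = list(zip(xs.tolist(), ys.tolist()))    # object pixels = K_0
    pts = np.array(pix, dtype=float)
    b = pts.mean(axis=0)                         # barycenter
    r = np.linalg.norm(pts - b, axis=1).max()    # bounding-circle radius
    f_vertex = (pts - b) @ np.asarray(direction, dtype=float) / r

    index = {p: i for i, p in enumerate(pix)}
    edges, f_edge = [], []
    for (x, y), i in index.items():
        for q in ((x + 1, y), (x, y + 1)):       # 4-neighbor 1-simplices = K_1
            j = index.get(q)
            if j is not None:
                edges.append((i, j))
                f_edge.append(max(f_vertex[i], f_vertex[j]))
    return f_vertex, edges, f_edge
```

Repeating this for the 32 directions $d_i$ yields the 32 diagrams per shape that enter the network described next.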
Network architecture. While the full network is listed in the supplementary material, the key architectural choices are: 32 independent input branches, i.e., one for each filtration direction. Further, the i-th branch gets, as input, the vector of persistence diagrams from directions $d_{i-1}$, $d_i$ and $d_{i+1}$. This is a straightforward approach to capture dependencies among the filtration directions. We use cross-entropy loss to train the network for 400 epochs, using stochastic gradient descent (SGD) with mini-batches of size 128 and an initial learning rate of 0.1 (halved every 25-th epoch).

[Figure 3: Left: some examples from the MPEG-7 (bottom) and Animal (top) datasets. Right: Classification results, compared to the two best (+) and two worst (−) results reported in [30]:

Method                  MPEG-7  Animal
Skeleton paths (−)       86.7    67.9
Class segment sets (−)   90.9    69.7
ICS (+)                  96.6    78.4
BCF (+)                  97.2    83.4
Ours                     91.8    69.5 ]

Results. Fig. 3 shows a selection of 2D object shapes from both datasets, together with the obtained classification results. We list the two best (+) and two worst (−) results as reported in [30]. While, on the one hand, using topological signatures is below the state-of-the-art, the proposed architecture is still better than other approaches that are specifically tailored to the problem. Most notably, our approach does not require any specific data preprocessing, whereas all other competitors listed in Fig. 3 require, e.g., some sort of contour extraction. Furthermore, the proposed architecture readily generalizes to 3D, with the only difference that in this case $d_i \in \mathbb{S}^2$. Fig. 4 (right) shows an exemplary visualization of the position of the learned structure elements for the Animal dataset.

5.2 Classification of social network graphs

In this experiment, we consider the problem of graph classification, where vertices are unlabeled and edges are undirected. That is, a graph G is given by G = (V, E), where V denotes the set of vertices and E denotes the set of edges. We evaluate our approach on the challenging problem of social network classification, using the two largest benchmark datasets from [31], i.e., reddit-5k (5 classes, 5k graphs) and reddit-12k (11 classes, ≈12k graphs). Each sample in these datasets represents a discussion graph and the classes indicate subreddits (e.g., worldnews, video, etc.).

Filtration. The construction of a simplicial complex from G = (V, E) is straightforward: we set $K_0 = \{[v] : v \in V\}$ and $K_1 = \{[v_0, v_1] : \{v_0, v_1\} \in E\}$. We choose a very simple filtration based on the vertex degree, i.e., the number of incident edges to a vertex $v \in V$. Hence, for $[v_0] \in K_0$ we get $f([v_0]) = \deg(v_0) / \max_{v \in V} \deg(v)$ and again lift f to $K_1$ by taking the maximum. Note that chain groups are trivial for dimension > 1; hence, all features in dimension 1 are essential. A sketch of this filtration follows below.
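A sketch of this degree filtration (plain Python; the helper name is ours, and vertices are assumed to be indexed 0, ..., n−1) reads:

```python
def degree_filtration(n_vertices, edges):
    """Vertex-degree filtration of Sec. 5.2: f([v]) = deg(v) / max_u deg(u),
    lifted to edges [v0, v1] by taking the maximum."""
    deg = [0] * n_vertices
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    top = max(deg) if deg and max(deg) > 0 else 1  # guard empty/edgeless graphs
    f_vertex = [d / top for d in deg]
    f_edge = [max(f_vertex[u], f_vertex[v]) for u, v in edges]
    return f_vertex, f_edge
```

Running 0-dimensional persistence on these values (e.g., with the union-find sketch in Sec. 2) and splitting off the points with infinite death yields the dimension-0 essential/non-essential inputs; the dimension-1 (essential) features amount to counting independent cycles and are not covered by this sketch.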
Network architecture. Our network has four input branches: two for each dimension (0 and 1) of the homological features, split into essential and non-essential ones, see Sec. 2. We train the network for 500 epochs using SGD and cross-entropy loss with an initial learning rate of 0.1 (reddit_5k) or 0.4 (reddit_12k). The full network architecture is listed in the supplementary material.

Results. Fig. 5 (right) compares our proposed strategy to state-of-the-art approaches from the literature. In particular, we compare against (1) the graphlet kernel (GK) and deep graphlet kernel (DGK) results from [31], (2) the Patchy-SAN (PSCN) results from [24] and (3) a recently reported graph-feature + random forest approach (RF) from [4]. As we can see, using topological signatures in our proposed setting considerably outperforms the current state-of-the-art on both datasets. This is an interesting observation, as PSCN [24], for instance, also relies on node degrees and an extension of the convolution operation to graphs. Further, the results reveal that including essential features is key to these improvements.

5.3 Vectorization of persistence diagrams

Here, we briefly present a reference experiment we conducted following Bendich et al. [5]. The idea is to directly use the persistence diagrams as features via vectorization (a minimal sketch is given at the end of this section). For each point (b, d) in a persistence diagram D we calculate its persistence, i.e., d − b. We then sort the calculated persistences by magnitude from high to low and take the first N values. Hence, we get, for each persistence diagram, a vector of dimension N (if |D \ Δ| < N, we pad with zero). We used this technique on all four datasets. As can be seen from the results in Table 4 (averaged over 10 cross-validation runs), vectorization performs poorly on MPEG-7 and Animal but can lead to competitive rates on reddit-5k and reddit-12k. Nevertheless, the obtained performance is considerably inferior to our proposed approach.

[Figure 4: Left: Classification accuracies for a linear SVM trained on vectorized (in ℝ^N) persistence diagrams (see Sec. 5.3). Right: Exemplary visualization of the learned structure elements (in 0-th dimension) for the Animal dataset and filtration direction d = (−1, 0)ᵀ; centers of the learned elements are marked in blue.

N            5     10    20    40    80    160  | Ours
MPEG-7      81.8  82.3  79.7  74.5  68.2  64.4 | 91.8
Animal      48.8  50.0  46.2  42.4  39.3  36.0 | 69.5
reddit-5k   37.1  38.2  39.7  42.1  43.8  45.2 | 54.5
reddit-12k  24.2  24.6  27.9  29.8  31.5  31.6 | 44.5 ]

[Figure 5: Left: Illustration of graph filtration by vertex degree, i.e., f ≡ deg, showing sublevel sets f⁻¹((−∞, a_i]) for different choices of a_i, see Sec. 2. Right: Classification results as reported in [31] for GK and DGK, as reported in [24] for Patchy-SAN (PSCN), and for feature-based random-forest (RF) classification from [4]:

                      reddit-5k  reddit-12k
GK [31]                 41.0       31.8
DGK [31]                41.3       32.2
PSCN [24]               49.1       41.3
RF [4]                  50.9       42.7
Ours (w/o essential)    49.1       38.5
Ours (w/ essential)     54.5       44.5 ]

Finally, we remark that in both experiments, tests with the kernel of [27] turned out to be computationally impractical: (1) on shape data, due to the need to evaluate the kernel for all filtration directions, and (2) on graphs, due to the large number of samples and the number of points in each diagram.
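The vectorization baseline of Sec. 5.3 referenced above reduces to a few lines of NumPy; this is our sketch of the described procedure, assuming essential points (infinite death) have been handled beforehand:

```python
import numpy as np

def vectorize_diagram(diagram, N):
    """Sec. 5.3 baseline: persistences d - b, sorted from high to low,
    truncated to the first N values and zero-padded if needed."""
    pers = sorted((d - b for (b, d) in diagram), reverse=True)[:N]
    return np.pad(np.array(pers, dtype=float), (0, N - len(pers)))
```

The resulting vectors in ℝ^N are then fed to a linear SVM, which produces the accuracies of Fig. 4 (left).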
6 Discussion

We have presented, to the best of our knowledge, the first approach towards learning task-optimal stable representations of topological signatures, in our case persistence diagrams. Our particular realization of this idea, i.e., as an input layer to deep neural networks, not only enables us to learn with topological signatures, but also to use them as additional (and potentially complementary) inputs to existing deep architectures. From a theoretical point of view, we remark that the presented structure elements are not restricted to exponential functions, so long as the conditions of Lemma 1 are met. One drawback of the proposed approach, however, is the artificial bending of the persistence axis (see Fig. 1) by a logarithmic transformation; in fact, other strategies might be possible and better suited in certain situations. A detailed investigation of this issue is left for future work. From a practical perspective, it is also worth pointing out that, in principle, the proposed layer could be used to handle any kind of input that comes in the form of multisets (of ℝⁿ), whereas previous works only allow to handle sets of fixed size (see Sec. 1). In summary, we argue that our experiments show strong evidence that topological features of data can be beneficial in many learning tasks, not necessarily to replace existing inputs, but rather as a complementary source of discriminative information.

Acknowledgements. This work was partially funded by the Austrian Science Fund FWF (KLI project 00012) and the Spinal Cord Injury and Tissue Regeneration Center Salzburg (SCI-TReCS), Paracelsus Medical University, Salzburg.

A Technical results

Lemma 3. Let $\sigma \in \mathbb{R}^+$ and $\nu \in \mathbb{R}$. We have

i) $\lim_{x \to 0} \frac{\ln(x)}{x} \cdot e^{-\sigma(\ln(x)+\nu)^2} = 0$  and  ii) $\lim_{x \to 0} \frac{1}{x} \cdot e^{-\sigma(\ln(x)+\nu)^2} = 0.$

Proof. We omit the proof for brevity (see supplementary material for details), but remark that only i) needs to be shown as ii) follows immediately.

B Proofs

Proof of Lemma 1. Let $\eta$ be a bijection between D and E which realizes $w_q^1(D, E)$ and let $D_0 = D \setminus \Delta$, $E_0 = E \setminus \Delta$. To show the result of Eq. (5), we consider the following decomposition:

$D = \eta^{-1}(E_0) \cup \eta^{-1}(\Delta) = \underbrace{(\eta^{-1}(E_0) \setminus \Delta)}_{A} \cup \underbrace{(\eta^{-1}(E_0) \cap \Delta)}_{B} \cup \underbrace{(\eta^{-1}(\Delta) \setminus \Delta)}_{C} \cup \underbrace{(\eta^{-1}(\Delta) \cap \Delta)}_{\mathsf{D}} \qquad (6)$

Except for the term $\mathsf{D}$, all sets are finite. In fact, $\eta$ realizes the Wasserstein distance $w_q^1$, which implies $\eta|_{\mathsf{D}} = \mathrm{id}$. Therefore, $s(x) = s(\eta(x)) = 0$ for $x \in \mathsf{D}$ since $\mathsf{D} \subseteq \Delta$. Consequently, we can ignore $\mathsf{D}$ in the summation and it suffices to consider $\mathsf{E} = A \cup B \cup C$. It follows that

$\Big| \sum_{x \in D} s(x) - \sum_{y \in E} s(y) \Big| = \Big| \sum_{x \in \mathsf{E}} s(x) - \sum_{x \in \mathsf{E}} s(\eta(x)) \Big| = \Big| \sum_{x \in \mathsf{E}} \big( s(x) - s(\eta(x)) \big) \Big| \leq \sum_{x \in \mathsf{E}} |s(x) - s(\eta(x))| \leq K_s \sum_{x \in \mathsf{E}} \|x - \eta(x)\|_q = K_s \sum_{x \in D} \|x - \eta(x)\|_q = K_s \cdot w_q^1(D, E).$

Proof of Lemma 2. Since $s_{\mu,\sigma,\nu}$ is defined differently for $x_1 \in [\nu, \infty)$ and $x_1 \in (0, \nu)$, we need to distinguish these two cases. In the following, $x_0 \in \mathbb{R}$.

(1) $x_1 \in [\nu, \infty)$: The partial derivative w.r.t. $x_i$ is given as

$\frac{\partial}{\partial x_i} s_{\mu,\sigma,\nu}(x_0, x_1) = C \cdot \frac{\partial}{\partial x_i} \big( e^{-\sigma_i^2 (x_i - \mu_i)^2} \big) = C \cdot e^{-\sigma_i^2 (x_i - \mu_i)^2} \cdot (-2\sigma_i^2)(x_i - \mu_i), \qquad (7)$

where C is just the part of $\exp(\cdot)$ which is not dependent on $x_i$. For all cases, i.e., $x_0 \to \infty$, $x_0 \to -\infty$ and $x_1 \to \infty$, it holds that Eq. (7) $\to 0$.

(2) $x_1 \in (0, \nu)$: The partial derivative w.r.t. $x_0$ is similar to Eq. (7) with the same asymptotic behaviour for $x_0 \to \infty$ and $x_0 \to -\infty$. However, for the partial derivative w.r.t. $x_1$ we get

$\frac{\partial}{\partial x_1} s_{\mu,\sigma,\nu}(x_0, x_1) = C \cdot \frac{\partial}{\partial x_1} \Big( e^{-\sigma_1^2 (\ln(\frac{x_1}{\nu})\nu + \nu - \mu_1)^2} \Big) = C \cdot e^{(\dots)} \cdot (-2\sigma_1^2) \cdot \Big( \ln\big(\tfrac{x_1}{\nu}\big)\nu + \nu - \mu_1 \Big) \cdot \frac{\nu}{x_1} = C' \cdot \underbrace{e^{(\dots)} \cdot \ln\big(\tfrac{x_1}{\nu}\big) \cdot \frac{1}{x_1}}_{(a)} + (\nu - \mu_1) \cdot C' \cdot \underbrace{e^{(\dots)} \cdot \frac{1}{x_1}}_{(b)}. \qquad (8)$

As $x_1 \to 0$, we can invoke Lemma 3 i) to handle (a) and Lemma 3 ii) to handle (b); conclusively, Eq. (8) $\to 0$. As the partial derivatives w.r.t. $x_i$ are continuous and their limits are 0 on $\mathbb{R}$, $\mathbb{R}^+$, resp., we conclude that they are absolutely bounded.

References
[1] H. Adams, T. Emerson, M. Kirby, R. Neville, C. Peterson, P. Shipman, S. Chepushtanova, E. Hanson, F. Motta, and L. Ziegelmeier. Persistence images: A stable vector representation of persistent homology. JMLR, 18(8):1–35, 2017. 2, 5
[2] A. Adcock, E. Carlsson, and G.
Carlsson. The ring of algebraic functions on persistence bar codes. CoRR, 2013. https://arxiv.org/abs/1304.0530. 2
[3] X. Bai, W. Liu, and Z. Tu. Integrating contour and skeleton for shape classification. In ICCV Workshops, 2009. 6
[4] I. Barnett, N. Malik, M.L. Kuijjer, P.J. Mucha, and J.-P. Onnela. Feature-based classification of networks. CoRR, 2016. https://arxiv.org/abs/1610.05868. 7, 8
[5] P. Bendich, J.S. Marron, E. Miller, A. Pieloch, and S. Skwerer. Persistent homology analysis of brain artery trees. Ann. Appl. Stat., 10(2), 2016. 7
[6] P. Bubenik. Statistical topological data analysis using persistence landscapes. JMLR, 16(1):77–102, 2015. 2
[7] G. Carlsson. Topology and data. Bull. Amer. Math. Soc., 46:255–308, 2009. 1
[8] G. Carlsson, T. Ishkhanov, V. de Silva, and A. Zomorodian. On the local behavior of spaces of natural images. IJCV, 76:1–12, 2008. 1
[9] F. Chazal, D. Cohen-Steiner, L.J. Guibas, F. Mémoli, and S.Y. Oudot. Gromov-Hausdorff stable signatures for shapes using persistence. Comput. Graph. Forum, 28(5):1393–1403, 2009. 4
[10] F. Chazal, B.T. Fasy, F. Lecci, A. Rinaldo, and L. Wassermann. Stochastic convergence of persistence landscapes and silhouettes. JoCG, 6(2):140–161, 2014. 2
[11] F. Chazal, L.J. Guibas, S.Y. Oudot, and P. Skraba. Persistence-based clustering in Riemannian manifolds. J. ACM, 60(6):41–79, 2013. 1
[12] D. Cohen-Steiner, H. Edelsbrunner, and J. Harer. Stability of persistence diagrams. Discrete Comput. Geom., 37(1):103–120, 2007. 4
[13] D. Cohen-Steiner, H. Edelsbrunner, J. Harer, and Y. Mileyko. Lipschitz functions have L^p-stable persistence. Found. Comput. Math., 10(2):127–139, 2010. 4
[14] H. Edelsbrunner and J.L. Harer. Computational Topology: An Introduction. American Mathematical Society, 2010. 1, 2, 3
[15] H. Edelsbrunner, D. Letcher, and A. Zomorodian. Topological persistence and simplification. Discrete Comput. Geom., 28(4):511–533, 2002. 1
[16] A. Hatcher. Algebraic Topology. Cambridge University Press, Cambridge, 2002. 2
[17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. 2
[18] A. Krizhevsky, I. Sutskever, and G.E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. 2
[19] G. Kusano, K. Fukumizu, and Y. Hiraoka. Persistence weighted Gaussian kernel for topological data analysis. In ICML, 2016. 2, 4
[20] R. Kwitt, S. Huber, M. Niethammer, W. Lin, and U. Bauer. Statistical topological data analysis - a kernel perspective. In NIPS, 2015. 2
[21] L. Latecki, R. Lakamper, and T. Eckhardt. Shape descriptors for non-rigid shapes with a single closed contour. In CVPR, 2000. 6
[22] C. Li, M. Ovsjanikov, and F. Chazal. Persistence-based structural recognition. In CVPR, 2014. 1
[23] K. Mischaikow and V. Nanda. Morse theory for filtrations and efficient computation of persistent homology. Discrete Comput. Geom., 50(2):330–353, 2013. 6
[24] M. Niepert, M. Ahmed, and K. Kutzkov. Learning convolutional neural networks for graphs. In ICML, 2016. 7, 8
[25] C.R. Qi, H. Su, K. Mo, and L.J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In CVPR, 2017. 2
[26] S. Ravanbakhsh, S. Schneider, and B. Póczos. Deep learning with sets and point clouds. In ICLR, 2017. 2
[27] R. Reininghaus, U. Bauer, S. Huber, and R. Kwitt. A stable multi-scale kernel for topological machine learning. In CVPR, 2015. 1, 2, 4, 5, 8
[28] G. Singh, F. Memoli, T. Ishkhanov, G. Sapiro, G. Carlsson, and D.L. Ringach.
Topological analysis of population activity in visual cortex. J. Vis., 8(8), 2008. 1
[29] K. Turner, S. Mukherjee, and D.M. Boyer. Persistent homology transform for modeling shapes and surfaces. Inf. Inference, 3(4):310–344, 2014. 1, 6
[30] X. Wang, B. Feng, X. Bai, W. Liu, and L.J. Latecki. Bag of contour fragments for robust shape classification. Pattern Recognit., 47(6):2116–2125, 2014. 7
[31] P. Yanardag and S.V.N. Vishwanathan. Deep graph kernels. In KDD, 2015. 7, 8
Predicting User Activity Level In Point Processes With Mass Transport Equation

Yichen Wang†, Xiaojing Ye‡, Hongyuan Zha†, Le Song†
† College of Computing, Georgia Institute of Technology
‡ School of Mathematics, Georgia State University
{yichen.wang}@gatech.edu, [email protected], {zha,lsong}@cc.gatech.edu

Abstract

Point processes are powerful tools to model user activities and have a plethora of applications in social sciences. Predicting user activities based on point processes is a central problem. However, existing works are mostly problem specific, use heuristics, or simplify the stochastic nature of point processes. In this paper, we propose a framework that provides an efficient estimator of the probability mass function of point processes. In particular, we design a key reformulation of the prediction problem, and further derive a differential-difference equation to compute a conditional probability mass function. Our framework is applicable to general point processes and prediction tasks, and achieves superb predictive and efficiency performance in diverse real-world applications compared to the state of the art.

1 Introduction

Online social platforms, such as Facebook and Twitter, enable users to post opinions, share information, and influence peers. Recently, user-generated event data archived in fine-grained temporal resolutions are becoming increasingly available, which calls for expressive models and algorithms to understand, predict and distill knowledge from complex dynamics of these data. Particularly, temporal point processes are well-suited to model the event pattern of user behaviors and have been successfully applied in modeling event sequence data [6, 10, 12, 21, 23, 24, 25, 26, 27, 28, 33].

A fundamental task in social networks is to predict user activity levels based on learned point process models. Mathematically, the goal is to compute E[f(N(t))], where N(·) is a given point process that is learned from user behaviors, t is a fixed future time, and f is an application-dependent function. A framework for doing this is critically important. For example, for social networking services, an accurate inference of the number of reshares of a post enables the network moderator to detect trending posts and improve its content delivery networks [13, 32]; an accurate estimate of the change of network topology (the number of new followers of a user) facilitates the moderator to identify influential users and suppress the spread of terrorist propaganda and cyber-attacks [12]; an accurate inference of the activity level (number of posts in the network) allows us to gain fundamental insight into the predictability of collective behaviors [22]. Moreover, for online merchants such as Amazon, an accurate estimate of the number of future purchases of a product helps optimize future advertisement placements [10, 25].

Despite the prevalence of prediction problems, an accurate prediction is very challenging for two reasons. First, the function f is arbitrary. For instance, to evaluate the homogeneity of user activities, we set f(x) = x log(x) to compute the Shannon entropy; to measure the distance between a predicted activity level and a target x⋆, we set f(x) = (x − x⋆)². However, most works [8, 9, 13, 30, 31, 32] are problem specific and only designed for the simple task with f(x) = x; hence these works are not generalizable.
Second, point process models typically have intertwined stochasticity and can co-evolve over time [12, 25]. For example, in the influence propagation problem, the information diffusion over networks can change the structure of the networks, which in turn influences the diffusion process [12]. However, previous works often ignore parts of the stochasticity in the intensity function [29] or make heuristic approximations [13, 32]. Hence, there is an urgent need for a method that is applicable to an arbitrary function f and keeps all the stochasticity in the process, which is largely nonexistent to date.

[Figure 1: An illustration of HYBRID using a Hawkes process (Eq. 1). Panels: (a) samples of the Hawkes process; (b) intensity functions; (c) mass transport from 0 to t; (d) unbiased estimator. Our method first generates two samples {H_t^i} of events; then it constructs the intensity functions; with these inputs, it computes conditional probability mass functions φ̃_i(x, s) := P[N(s) = x | H_s^i] using a mass transport equation. Panel (c) shows the transport of the conditional mass at four different times (the initial probability mass φ̃(x, 0) is an indicator function I[x = 0], as there is no event at time 0 with probability one). Finally, the average of the conditional mass functions yields our estimator of the probability mass.]

We propose HYBRID, a generic framework that provides an efficient estimator of the probability mass of point processes. Figure 1 illustrates our framework. We also make the following contributions:
• Unifying framework. Our framework is applicable to general point processes and does not depend on a specific parameterization of the intensity function. It incorporates all stochasticity in point processes and is applicable to prediction tasks with an arbitrary function f.
• Technical challenges. We reformulate the prediction problem and design a random variable with reduced variance. To derive an analytical form of this random variable, we also propose a mass transport equation to compute the conditional probability mass of point processes. We further transform this equation into an Ordinary Differential Equation and provide a scalable algorithm.
• Superior performance. Our framework significantly reduces the sample size needed to estimate the probability mass function of point processes in real-world applications. For example, to infer the number of tweeting and retweeting events of users in the co-evolution model of information diffusion and social link creation [12], our method needs 10³ samples and 14.4 minutes, while Monte Carlo needs 10⁶ samples and 27.8 hours to achieve the same relative error of 0.1.

2 Background and preliminaries

Point processes. A temporal point process [1] is a random process whose realization consists of a set of discrete events {tk}, localized in time. It has been successfully applied to model user behaviors in social networks [16, 17, 19, 23, 24, 25, 28, 30]. It can be equivalently represented as a counting process N(t), which records the number of events on [0, t]. The counting process is a right-continuous step function, i.e., if an event happens at t, then N(t) − N(t⁻) = 1. Let Ht = {tk | tk < t} be the history of events that happened up to time t.
An important way to characterize point processes is via the conditional intensity function λ(t) := λ(t|Ht), a stochastic model for the time of the next event given the history. Formally, λ(t) is the conditional probability of observing an event in [t, t + dt) given the events on [0, t), i.e.,

    P{event in [t, t + dt) | Ht} = E[dN(t) | Ht] := λ(t) dt,

where dN(t) ∈ {0, 1}. The intensity function is designed to capture the phenomena of interest. Some useful forms include (i) the Poisson process, whose intensity is a deterministic function, and (ii) the Hawkes process [15], which captures the mutual excitation between events and whose intensity is parameterized as

    λ(t) = η + α Σ_{tk ∈ Ht} κ(t − tk),    (1)

where η > 0 is the baseline intensity; the triggering kernel κ(t) = exp(−ωt) models the decay of past events' influence over time; and α > 0 quantifies the strength of influence from each past event. Here, the occurrence of each historical event increases the intensity by a certain amount determined by κ(t) and α, making λ(t) history-dependent and a stochastic process by itself.

Monte Carlo (MC). To compute the probability mass of a point process, MC simulates n realizations of the history {H_t^i} using the thinning algorithm [20]. The number of events in sample i is N^i(t) = |H_t^i|. Let φ(x, t) := P[N(t) = x], where x ∈ ℕ, be the probability mass. Then its MC estimator φ̂_n^MC(x, t) and the MC estimator μ̂_n^MC(t) for μ(t) := E[f(N(t))] are defined as φ̂_n^MC(x, t) = (1/n) Σ_i I[N^i(t) = x] and μ̂_n^MC(t) = (1/n) Σ_i f(N^i(t)). The root mean square error (RMSE) is defined as

    ε(μ̂_n^MC(t)) = sqrt( E[μ̂_n^MC(t) − μ(t)]² ) = sqrt( VAR[f(N(t))] / n ).    (2)
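Since the MC baseline recurs throughout the paper, a minimal Python sketch may help fix ideas. It is our own illustration, not the authors' code: Ogata's thinning algorithm [20] for the exponential-kernel Hawkes intensity of Eq. (1), plus the plain MC estimator of Eq. (2). The defaults η = 1.2 and α = 0.5 follow the synthetic experiment in Section 6.2; ω = 1 and all function names are assumptions.

    import numpy as np

    def sample_hawkes_thinning(t, rng, eta=1.2, alpha=0.5, omega=1.0):
        """Ogata's thinning algorithm [20] for the Hawkes intensity of Eq. (1)."""
        events, s = [], 0.0
        while s < t:
            # the current intensity upper-bounds lambda until the next event,
            # because the exponential kernel only decays between events
            lam_bar = eta + alpha * np.sum(np.exp(-omega * (s - np.asarray(events))))
            s += rng.exponential(1.0 / lam_bar)              # candidate point
            if s >= t:
                break
            lam_s = eta + alpha * np.sum(np.exp(-omega * (s - np.asarray(events))))
            if rng.uniform() <= lam_s / lam_bar:             # accept w.p. lambda(s)/lam_bar
                events.append(s)
        return np.asarray(events)

    def mc_estimator(f, t, n_samples=10000, seed=0):
        """Plain MC estimator of E[f(N(t))] as in Eq. (2): average f over event counts."""
        rng = np.random.default_rng(seed)
        counts = np.array([len(sample_hawkes_thinning(t, rng)) for _ in range(n_samples)])
        return f(counts).mean()

Section 3 explains why the variance of f(N(t)) makes this baseline sample-hungry, and how HYBRID reduces it.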
3 Solution overview

Given an arbitrary point process N(t) that is learned from data, existing prediction methods for computing E[f(N(t))] have three major limitations:
• Generalizability. Most methods [8, 9, 13, 30, 31, 32] only predict E[N(t)] and are not generalizable to an arbitrary function f. Moreover, they typically rely on specific parameterizations of the intensity function, such as the reinforced Poisson process [13] or the Hawkes process [5, 32]; hence they are not applicable to general point processes.
• Approximations and heuristics. These works also ignore parts of the stochasticity in the intensity function [29] or make heuristic approximations to the point process [13, 32]. Hence the accuracy is limited by the approximations and heuristic corrections.
• Large sample size. The MC method overcomes the above limitations since it has an unbiased estimator of the probability mass. However, the high stochasticity in point processes leads to a large value of VAR[f(N(t))], which requires a large number of samples to achieve a small error.

To address these challenges, we propose a generic framework with a novel estimator of the probability mass, which needs a smaller sample size than MC. Our framework has the following key steps.

I. New random variable. We design a random variable g(Ht), a conditional expectation given the history. Its variance is guaranteed to be smaller than that of f(N(t)). For a fixed number of samples, the error of MC is determined by the variance of the random variable of interest, as shown in (2). Hence, to achieve the same error, applying MC to estimate the new objective E_{Ht}[g(Ht)] requires a smaller number of samples than directly estimating E[f(N(t))].

II. Mass transport equation. To compute g(Ht), we derive a differential-difference equation that describes the evolutionary dynamics of the conditional probability mass P[N(t) = x | Ht]. We further formulate this equation as an Ordinary Differential Equation and provide a scalable algorithm.

4 Hybrid inference machine with probability mass transport

In this section, we present the technical details of our framework. We first design a new random variable for prediction; then we propose a mass transport equation to compute this random variable analytically. Finally, we combine the mass transport equation with the sampling scheme to compute the probability mass function of general point processes and solve prediction tasks with an arbitrary function f.

4.1 New random variable with reduced variance

We reformulate the problem and design a new random variable g(Ht), which has a smaller variance than f(N(t)) and the same expectation. To do this, we express E[f(N(t))] as an iterated expectation

    E[f(N(t))] = E_{Ht}[ E_{N(t)|Ht}[f(N(t)) | Ht] ] = E_{Ht}[g(Ht)],    (3)

where E_{Ht} is taken w.r.t. the randomness of the history and E_{N(t)|Ht} w.r.t. the randomness of the point process given the history. We design the random variable as the conditional expectation given the history: g(Ht) = E_{N(t)|Ht}[f(N(t)) | Ht]. Theorem 1 shows that it has a smaller variance.

Theorem 1. For time t > 0 and an arbitrary function f, we have VAR[g(Ht)] < VAR[f(N(t))].

Theorem 1 extends the Rao-Blackwell (RB) theorem [3] to point processes. RB says that if θ̂ is an estimator of a parameter θ and T is a sufficient statistic for θ, then VAR[E[θ̂ | T]] ≤ VAR[θ̂], i.e., the sufficient statistic reduces the uncertainty of θ̂. However, RB is not applicable to point processes, since it studies a different problem (improving the estimator of a distribution's parameter), while we focus on the prediction problem for general point processes, which introduces two new technical challenges:

(i) Is there a notion in point processes whose role is similar to the sufficient statistic in RB? Our first contribution shows that the history Ht contains all the necessary information in a point process and reduces the uncertainty of N(t). Hence, g(Ht) is an improved variable for prediction. Moreover, in contrast to the RB theorem, the inequality in Theorem 1 is strict, because the counting process N(t) is right-continuous in time t and not predictable [4] (a predictable process is measurable w.r.t. Ht, such as a process that is left-continuous). Appendix C contains details of the proof.

(ii) Is g(Ht) computable for general point processes and an arbitrary function f? An efficient computation will enable us to estimate E_{Ht}[g(Ht)] using the sampling method. Specifically, let μ̂_n(t) = (1/n) Σ_i g(H_t^i) be the estimator computed from n samples; then, from the definition of RMSE in (2), this estimator has a smaller error than MC: ε(μ̂_n(t)) < ε(μ̂_n^MC(t)). However, the challenge in our new formulation is that it seems very hard to compute this conditional expectation, as one typically needs another round of sampling, which is undesirable since it would increase the variance of the estimator. To address this challenge, we next propose a mass transport equation.

4.2 Transport equation for the conditional probability mass function

We present a novel mass transport equation that computes the conditional probability mass φ̃(x, t) := P[N(t) = x | Ht] of general point processes.
With this definition, we can derive an analytical expression for the conditional expectation: g(Ht) = Σ_x f(x) φ̃(x, t). The transport equation is as follows.

Theorem 2 (Mass Transport Equation for Point Processes). Let λ(t) := λ(t|Ht) be the conditional intensity function of the point process N(t) and φ̃(x, t) := P[N(t) = x | Ht] be its conditional probability mass function; then φ̃(x, t) satisfies the following differential-difference equation:

    φ̃_t(x, t) := ∂φ̃(x, t)/∂t = −λ(t) φ̃(x, t),                        if x = 0,
    φ̃_t(x, t) := ∂φ̃(x, t)/∂t = −λ(t) φ̃(x, t) + λ(t) φ̃(x − 1, t),    if x = 1, 2, 3, ...    (4)

That is, the rate of change in the conditional mass is a loss of mass at rate λ(t) plus, for x ≥ 1, a gain of mass from state x − 1 at rate λ(t).

Proof sketch. For simplicity of notation, we denote the right-hand side of (4) by F[φ̃], where F is a functional operator on φ̃. We also define the inner product between functions u : ℕ → ℝ and v : ℕ → ℝ as (u, v) := Σ_x u(x)v(x). The main idea in our proof is to show that the equality (v, φ̃_t) = (v, F[φ̃]) holds for any test function v; then φ̃_t = F[φ̃] follows from the fundamental lemma of the calculus of variations [14]. Specifically, the proof contains two parts. First, we prove (v, φ̃_t) = (B[v], φ̃), where B[v] is the functional operator defined as B[v] = (v(x + 1) − v(x)) λ(t). This equality can be proved from the properties of point processes and the definition of the conditional mass. Second, we show (B[v], φ̃) = (v, F[φ̃]) using a variable substitution technique. Mathematically, this equality means that B and F are adjoint operators on the function space. Combining these two equalities yields the mass transport equation. Appendix A contains details of the proof.

Mass transport dynamics. This differential-difference equation describes the time evolution of the conditional mass. Specifically, the differential term φ̃_t, i.e., the instantaneous rate of change in the probability mass, equals a first-order difference on the right-hand side. This difference is the sum of two terms: (i) the negative loss of its own probability mass φ̃(x, t) at rate λ(t), and (ii) the positive gain of probability mass φ̃(x − 1, t) from the previous state x − 1 at rate λ(t). Moreover, since initially no event has happened with probability one, we have φ̃(x, 0) = I[x = 0]. Solving this transport equation on [0, t] essentially transports the initial mass to the mass at time t.
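As a quick sanity check (ours, not part of the paper's derivation), summing (4) over all states shows that the gain and loss terms cancel, so the equation conserves total probability mass, exactly as a "transport" of mass should:

    % Sanity check: the transport equation (4) preserves total mass.
    \frac{d}{dt}\sum_{x\ge 0}\tilde{\varphi}(x,t)
      = -\lambda(t)\sum_{x\ge 0}\tilde{\varphi}(x,t)
        + \lambda(t)\sum_{x\ge 1}\tilde{\varphi}(x-1,t)
      = -\lambda(t)\cdot 1 + \lambda(t)\cdot 1 = 0.

Hence Σ_x φ̃(x, t) = 1 for all t under the initial condition φ̃(x, 0) = I[x = 0]; the finite truncation at M introduced in Section 4.3 below loses only the mass that escapes past state M.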
Algorithm 1: CONDITIONAL-MASS-FUNCTION
  Input: history Ht = {tk}_{k=1}^K, ODE step size τ; set tK+1 = t
  Output: conditional probability mass function φ̃(t)
  for k = 0, ..., K do
      construct λ(s) and Q(s) on [tk, tk+1]
      φ̃(tk+1) = ODE45[φ̃(tk), Q(s), τ]   (RK algorithm)
  end
  set φ̃(t) = φ̃(tK+1)

Algorithm 2: HYBRID-MASS-TRANSPORT
  Input: sample size n, time t, ODE step size τ
  Output: μ̂_n(t), φ̂_n(x, t)
  generate n samples {H_t^i}_{i=1}^n of the point process
  for i = 1, ..., n do
      φ̃_i(x, t) = CONDITIONAL-MASS-FUNCTION(H_t^i, τ)
  end
  φ̂_n(x, t) = (1/n) Σ_i φ̃_i(x, t);  μ̂_n(t) = Σ_x f(x) φ̂_n(x, t)

4.3 Mass transport as a banded linear Ordinary Differential Equation (ODE)

To solve the mass transport equation efficiently, we reformulate it as a banded linear ODE. Specifically, we set an upper bound M for x and let φ̃(t) be the vector that collects the value of φ̃(x, t) for each integer x: φ̃(t) = (φ̃(0, t), φ̃(1, t), ..., φ̃(M, t))ᵀ. With this representation of the conditional mass, the mass transport equation in (4) can be expressed as a simple banded linear ODE:

    φ̃(t)′ = Q(t) φ̃(t),    (5)

where φ̃(t)′ = (φ̃_t(0, t), ..., φ̃_t(M, t))ᵀ, and Q(t) is a sparse bi-diagonal matrix with Q_{i,i} = −λ(t) on the diagonal and Q_{i+1,i} = λ(t) on the subdiagonal. The following equation visualizes the ODE in (5) when M = 2:

    ( φ̃_t(0, t) )   ( −λ(t)    0       0    ) ( φ̃(0, t) )
    ( φ̃_t(1, t) ) = (  λ(t)  −λ(t)     0    ) ( φ̃(1, t) )    (6)
    ( φ̃_t(2, t) )   (   0      λ(t)  −λ(t)  ) ( φ̃(2, t) )

This dynamic ODE is a compact representation of the transport equation in (4), and M decides the dimension of the ODE in (5). In theory, M can be unbounded. However, the conditional probability mass tends to zero as M becomes large. Hence, in practice we choose a finite support {0, 1, ..., M} for the conditional probability mass function. To choose a proper M, we generate samples from the point process; if the largest number of events across the samples is L, we set M = 2L so that M is reasonably large. Next, with the initial probability mass φ̃(t0) = (1, 0, ..., 0)ᵀ, we present an efficient algorithm that solves the ODE.

4.4 Scalable algorithm for solving the ODE

We present the algorithm that transports the initial mass φ̃(t0) to φ̃(t) by solving the ODE. Since the intensity function is history-dependent and has a discrete jump when an event happens at time tk, the matrix Q(t) in the ODE is discontinuous at each tk. Hence we split [0, t] into intervals [tk, tk+1]. On each interval, the intensity is continuous and we can use the classic numerical Runge-Kutta (RK) method [7] to solve the ODE. Figure 2 illustrates the overall algorithm.

[Figure 2: Illustration of Algorithm 1 using a Hawkes process. The intensity is updated after each event tk. Within [tk, tk+1], we use φ̃(tk) and the intensity λ(s) to solve the ODE and obtain φ̃(tk+1).]

Our algorithm works as follows. First, with the initial intensity on [0, t1] and φ̃(t0) as input, the RK method solves the ODE on [0, t1] and outputs φ̃(t1). Since an event happens at t1, the intensity is updated on [t1, t2]. Next, with the updated intensity and φ̃(t1) as the initial value, the RK method solves the ODE on [t1, t2] and outputs φ̃(t2). This procedure repeats on each [tk, tk+1] until time t.

We now describe the RK method on an interval [tk, tk+1]. RK divides the interval into equally spaced subintervals [τi, τi+1], for i = 0, ..., I, with step size τ = τi+1 − τi. It then conducts linear extrapolation on each subinterval: it starts from τ0 = tk and uses φ̃(τ0) and an approximation of the gradient φ̃(τ0)′ to compute φ̃(τ1); next, φ̃(τ1) is taken as the initial value, and the process is repeated until τI = tk+1. Appendix D contains details of this method. The RK method approximates the gradient φ̃(t)′ with different levels of accuracy, called stages s. When s = 1, it is the Euler method, which uses the first-order approximation (φ̃(τi+1) − φ̃(τi))/τ. We use the ODE45 solver in MATLAB and choose stage s = 4 for RK. Moreover, the main computation in the RK method comes from the matrix-vector product. Since the matrix Q(t) is sparse and bi-diagonal with O(M) non-zero elements, the cost of this operation is only O(M).
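To make the piecewise ODE solve concrete, here is a minimal Python sketch of Algorithm 1 (our own illustration, not the authors' code). It evaluates the bi-diagonal action of Q(s) for the Hawkes intensity of Eq. (1) and integrates (5) between consecutive events with a fixed-step classic RK4; the names hawkes_intensity, transport_rhs, cond_mass_function, and dtau are our own, and an adaptive solver such as MATLAB's ODE45 or scipy's solve_ivp could replace the hand-rolled step.

    import numpy as np

    def hawkes_intensity(s, events, eta=1.2, alpha=0.5, omega=1.0):
        """Conditional intensity of Eq. (1): eta + alpha * sum_k exp(-omega (s - t_k))."""
        past = events[events < s]
        return eta + alpha * np.sum(np.exp(-omega * (s - past)))

    def transport_rhs(s, phi, events):
        """Right-hand side of the banded ODE (5): phi' = Q(s) phi, Q bi-diagonal."""
        lam = hawkes_intensity(s, events)
        dphi = -lam * phi                    # loss of mass at rate lambda(s)
        dphi[1:] += lam * phi[:-1]           # gain from state x-1 at rate lambda(s)
        return dphi

    def cond_mass_function(events, t, M, dtau=0.01):
        """Algorithm 1: transport phi(., 0) = I[x = 0] to phi(., t), RK4 between events."""
        phi = np.zeros(M + 1)
        phi[0] = 1.0                         # no event at time 0 with probability one
        knots = np.concatenate(([0.0], np.sort(events[events < t]), [t]))
        for a, b in zip(knots[:-1], knots[1:]):
            n_steps = max(1, int(np.ceil((b - a) / dtau)))
            h = (b - a) / n_steps
            s = a
            for _ in range(n_steps):         # classic 4-stage Runge-Kutta step
                k1 = transport_rhs(s, phi, events)
                k2 = transport_rhs(s + h / 2, phi + h / 2 * k1, events)
                k3 = transport_rhs(s + h / 2, phi + h / 2 * k2, events)
                k4 = transport_rhs(s + h, phi + h * k3, events)
                phi += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
                s += h
        return phi                           # phi[x] approximates P[N(t) = x | H_t]

With the truncation level M = 2L suggested above, each RK step costs O(M), matching the complexity claim.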
4.5 Hybrid inference machine with the mass transport equation

With the conditional probability mass, we are now ready to express g(Ht) in closed form and estimate E_{Ht}[g(Ht)] using the MC sampling method. We present our framework HYBRID:

(i) Generate n samples {H_t^i} from the point process N(t) with stochastic intensity λ(t).
(ii) For each sample H_t^i, compute the value of the intensity function λ(s|H_s^i) for each s ∈ [0, t]; then solve (5) to compute the conditional probability mass φ̃_i(x, t).
(iii) Obtain the estimators of the probability mass function φ(x, t) and of μ(t) by taking the average: φ̂_n(x, t) = (1/n) Σ_{i=1}^n φ̃_i(x, t) and μ̂_n(t) = Σ_x f(x) φ̂_n(x, t).

Algorithm 2 summarizes the above procedure. Next, we discuss two properties of HYBRID. First, our framework efficiently uses all the event information in each sample. In fact, each event tk influences the transport rate of the conditional probability mass (Figure 2). This feature is in sharp contrast to MC, which only uses the total number of events and neglects the differences in event times. For instance, the two samples in Figure 1(a) both have three events and MC treats them equally; hence its estimator is an indicator function φ̂_n^MC(x, t) = I[x = 3]. For HYBRID, however, these samples carry different event information and different conditional probability mass functions, and our estimator in Figure 1(d) is much more informative than an indicator function.

Moreover, our estimator of the probability mass is unbiased if we can solve the mass transport equation in (4) exactly. To prove this property, we show that the following equality holds for an arbitrary function f: (f, φ) = E[f(N(t))] = E_{Ht}[g(Ht)] = (f, E_{Ht}[φ̃]). Then E_{Ht}[φ̂_n] = φ follows from the fundamental lemma of the calculus of variations [14]. Appendix B contains the detailed derivations. In practice, we choose a reasonably large finite support for the conditional probability mass in order to solve the mass transport ODE in (5). Hence our estimator is nearly unbiased.
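Putting the pieces together, a minimal sketch of Algorithm 2 might look as follows (again our own illustration). It assumes the sample_hawkes_thinning helper sketched after Section 2 and the cond_mass_function helper sketched after Section 4.4; hybrid_estimator is a hypothetical name.

    import numpy as np

    def hybrid_estimator(f, t, n_samples=1000, seed=0):
        """Algorithm 2 (HYBRID): average conditional mass functions over sampled histories.

        Relies on sample_hawkes_thinning and cond_mass_function from the earlier
        sketches; both are illustrative reconstructions, not the authors' code."""
        rng = np.random.default_rng(seed)
        histories = [sample_hawkes_thinning(t, rng) for _ in range(n_samples)]
        L = max(len(h) for h in histories)
        M = 2 * L                                  # finite support, as in Section 4.3
        phi_hat = np.zeros(M + 1)
        for events in histories:
            phi_hat += cond_mass_function(events, t, M)
        phi_hat /= n_samples                       # estimator of P[N(t) = x]
        xs = np.arange(M + 1)
        mu_hat = np.sum(f(xs) * phi_hat)           # estimator of E[f(N(t))]
        return mu_hat, phi_hat

    # Example usage: estimate E[N(t)] at t = 30 with a vectorized f.
    # mu, phi = hybrid_estimator(lambda x: x, t=30.0)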
5 Applications and extensions to multi-dimensional point processes

In this section, we present two real-world applications in which the point process models have intertwined stochasticity and co-evolving intensity functions.

Predicting the activeness and popularity of users in social networks. The co-evolution model [12] uses a Hawkes process N_us(t) to model information diffusion (tweets/retweets), and a survival process A_us(t) to model the dynamics of the network topology (the link creation process). The intensity of N_us(t) depends on the network topology A_us(t), and the intensity of A_us(t) also depends on N_us(t); hence these processes co-evolve over time. We focus on two tasks in this model: (i) inferring the activeness of a user via E[Σ_u N_us(t)], the number of tweets and retweets from user s; and (ii) inferring the popularity of a user via E[Σ_u A_us(t)], the number of new links created to the user.

Predicting the popularity of items in recommender systems. Recent works on recommender systems [10, 25] use a point process N_ui(t) to model user u's sequential interactions with item i. The intensity function λ_ui(t) denotes the user's interest in the item. As users interact with items over time, the user latent feature u_u(t) and the item latent feature i_i(t) co-evolve and are mutually dependent [25]. The intensity is parameterized as λ_ui(t) = η_ui + u_u(t)ᵀ i_i(t), where η_ui is a baseline term representing the long-term preference, and the tendency for u to interact with i depends on the compatibility of their instantaneous latent features u_u(t)ᵀ i_i(t). With this model, we can infer an item's popularity by evaluating E[Σ_u N_ui(t)], the number of events happening to item i.

To solve these prediction tasks, we extend the transport equation to the multivariate case. Specifically, we create a new stochastic process x(t) = Σ_u N_us(t) and compute its conditional mass function.

Theorem 3 (Mass Transport for Multidimensional Point Processes). Let N_us(t) be point processes with intensities λ_us(t) for u = 1, ..., U, let x(t) = Σ_{u=1}^U N_us(t), and let φ̃(x, t) = P[x(t) = x | Ht] be the conditional probability mass of x(t); then φ̃ satisfies

    φ̃_t = −(Σ_u λ_us(t)) φ̃(x, t) + (Σ_u λ_us(t)) φ̃(x − 1, t).

To compute the conditional probability mass, we again solve the ODE in (5), where the diagonal and off-diagonal of Q(t) now contain the negative and positive sums of the intensities over all dimensions.

[Figure 3: Prediction results for user activeness and user popularity, comparing HYBRID, MC-1e6, MC-1e3, SEISMIC, RPP, and FPE. (a, b) user activeness: predicting the number of posts per user, MAPE vs. test time and vs. train size; (c, d) user popularity: predicting the number of new links per user, MAPE vs. test time and vs. train size. Test times are the relative times after the end of the train time. The train data is fixed at 70% of the total data.]

[Figure 4: Prediction results for item popularity. (a, b) predicting the number of watching events per program on IPTV, MAPE vs. test time and vs. train size; (c, d) predicting the number of discussions per group on Reddit, MAPE vs. test time and vs. train size.]

6 Experiments

In this section, we evaluate the predictive performance of HYBRID in the two real-world applications of Section 5 and on a synthetic dataset. We use the following metrics: (i) Mean Average Percentage Error (MAPE). Given a prediction time t, we compute the MAPE |μ̂_n(t) − μ(t)| / μ(t) between the estimated value and the ground truth. (ii) Rank correlation. For all users/items, we obtain two lists of ranks according to the true and estimated values of user activeness/user popularity/item popularity. The accuracy is evaluated by the Kendall-τ rank correlation [18] between the two lists.

6.1 Experiments on real-world data

We show that HYBRID improves both accuracy and efficiency in predicting the activeness and popularity of users in social networks and the popularity of items in recommender systems.

Competitors. We use 10³ samples for HYBRID and compare it with the following state of the art:
• SEISMIC [32]. It defines a self-exciting process with a post infectiousness factor. It uses the branching property of the Hawkes process and heuristic corrections for prediction.
• RPP [13]. It adds a reinforcement coefficient to the Poisson process to capture the self-excitation phenomenon. It sets dN(t) = λ(t)dt and solves a deterministic equation for prediction.
• FPE [29]. It uses a deterministic function to approximate the stochastic intensity function.
• MC-1e3. The MC sampling method with 10³ samples (the same as for HYBRID); MC-1e6 uses 10⁶ samples.

6.1.1 Predicting the activeness and popularity of users in social networks

We use a Twitter dataset [2] that contains 280,000 users with 550,000 tweet, retweet, and link creation events during Sep. 21-30, 2012.
This dataset was previously used to validate the network co-evolution model [12]. The parameters of the tweeting/retweeting processes and the link creation process are learned by maximum likelihood estimation [12]. SEISMIC and RPP are not designed for the popularity prediction task, since they do not consider the evolution of the network topology. We use a proportion p of the total data as training data to learn the parameters of all methods, and the rest as test data. We make predictions for each user and report the averaged results.

[Figure 5: Scalability analysis: computation time as a function of error. (a, b) comparison between HYBRID and MC in different problems (user activeness; item popularity, IPTV); (c, d) scalability plots for HYBRID on the same two problems.]

[Figure 6: Rank correlation results in different problems: (a) user activeness, (b) user popularity, (c) item popularity on IPTV, (d) item popularity on Reddit, comparing HYBRID, MC-1e6, FPE, SEISMIC, RPP, and MC-1e3. We vary the proportion p of training data from 0.6 to 0.8, and the error bars represent the variance over different training sets.]

Predictive performance. Figure 3(a) shows that MAPE increases with the test time, since the model's stochasticity accumulates. HYBRID has the smallest error. Figure 3(b) shows that MAPE decreases as the training data grows, since the model parameters become more accurate. Moreover, HYBRID is more accurate than SEISMIC and FPE with only 60% of the training data, while these works need 80%. Thus, we can make accurate predictions by observing users at an early stage. This feature is important for network moderators who need to identify malicious users and suppress the propagation of undesired content. The consistent performance improvement carries two messages: (i) considering all the randomness is important. HYBRID is 2× more accurate than SEISMIC and FPE because it naturally accounts for all the stochasticity, whereas SEISMIC, FPE, and RPP rely on heuristics or approximations that discard parts of it; (ii) sampling efficiently is important. Accounting for all the stochasticity requires a sampling scheme, and HYBRID needs a much smaller sample size. Specifically, with the same 10³ samples, HYBRID achieves a 4× error reduction compared with MC-1e3. MC-1e6 has a predictive performance similar to HYBRID, but needs 10³× more samples.

Scalability. How does the reduction in sample size improve the speed? Figure 5(a) shows that as the error decreases from 0.5 to 0.1, MC incurs a much higher computational cost, since it needs many more samples than HYBRID to achieve the same error. The plots for HYBRID alone appear in Figure 5(c). In particular, to achieve an error of 0.1, MC needs 10⁶ samples and 27.8 hours, while HYBRID only needs 10³ samples and 14.4 minutes. We use a machine with 16 cores, a 2.4 GHz Intel Core i5 CPU, and 64 GB of memory.

Rank correlation.
We rank all users according to the predicted levels of activeness and popularity separately. Figures 6(a, b) show that HYBRID performs best, with an accuracy of around 80%, and it consistently ranks around 30% more items correctly than FPE on both tasks.

6.1.2 Predicting the popularity of items in recommender systems

In the recommender system setting, we use two datasets from [25]. The IPTV dataset contains the watching history of 7,100 users over 436 TV programs for 11 months, with around 2M events. The Reddit dataset contains online discussions of 1,000 users in 1,403 groups, with 10,000 discussion events. The predictive and scalability performance are consistent with the social network application. Figure 4 shows that HYBRID is 15% more accurate than FPE and 20% more accurate than SEISMIC. Figure 5 also shows that HYBRID needs a much smaller amount of computation time than MC-1e6: to achieve an error of 0.1, HYBRID takes 9.8 minutes while MC-1e6 takes 7.5 hours. Figures 6(c, d) show that HYBRID achieves a rank correlation accuracy of 77%, a 20% improvement over FPE.

[Figure 7: Error of the estimate of E[f(N(t))] as a function of sample size (log-log scale), for HYBRID and MC with different choices of f: (a) f(x) = x, (b) f(x) = x log(x), (c) f(x) = x², (d) f(x) = exp(x).]

[Figure 8: Comparison of the estimators of the probability mass function in HYBRID and MC. (a) HYBRID's estimator φ̂_n(x, t) and (b) MC's estimator φ̂_n^MC(x, t), both with the same 1000 samples; (c, d) HYBRID's estimator from a single sample.]

6.2 Experiments on synthetic data

We compare HYBRID with MC in two respects: (i) the significance of the reduction in error and sample size, and (ii) the estimators of the probability mass function. We study a Hawkes process and set the parameters of its intensity function to η = 1.2 and α = 0.5. We fix the prediction time to t = 30. The ground truth is computed from 10⁸ MC simulation samples.

Error vs. number of samples. For four tasks with different f, Figure 7 shows that, given the same number of samples, HYBRID has a smaller error. Moreover, to achieve the same error, HYBRID needs around 100× fewer samples than MC. In particular, to achieve an error of 0.01, panel (a) shows that HYBRID needs 10³ samples while MC needs 10⁵; panel (b) shows that HYBRID needs 10⁴ samples while MC needs 10⁶.

Probability mass functions. We compare our estimator of the probability mass with MC's. Figures 8(a, b) show that our estimator is much smoother than MC's, because it is the average of conditional probability mass functions, each computed by solving the mass transport equation. Moreover, our estimator centers around 85, which is the ground truth of E[N(t)], while that of MC centers around 80; hence HYBRID is more accurate. We also plot two conditional mass functions in panels (c, d); averaging 1000 such conditional mass functions yields panel (a). Thus, the averaging procedure in HYBRID adjusts the shape of the estimated probability mass.
By contrast, given one sample, the estimator in MC is just an indicator function and cannot capture the shape of the probability mass.

7 Conclusions

We have proposed HYBRID, a generic framework with a new formulation of the prediction problem in point processes and a novel mass transport equation. The equation efficiently uses the event information to update the transport rate and compute the conditional mass function. Moreover, HYBRID is applicable to general point processes and to prediction tasks with an arbitrary function f. Hence it can take any point process model as input, and the predictive performance of our framework can be further improved with the advancement of point process models. Experiments on real-world and synthetic data demonstrate that HYBRID outperforms the state of the art in terms of both accuracy and efficiency. There are many interesting directions for future research. For example, HYBRID can be generalized to marked point processes [4], where a mark is observed along with the timing of each event.

Acknowledgements. This project was supported in part by NSF IIS-1218749, NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, NSF IIS-1639792 EAGER, NSF CNS-1704701, ONR N00014-15-1-2340, DMS-1620342, CMMI-1745382, IIS-1639792, IIS-1717916, NVIDIA, Intel ISTC, and Amazon AWS.

References
[1] O. Aalen, O. Borgan, and H. Gjessing. Survival and event history analysis: a process point of view. Springer, 2008.
[2] D. Antoniades and C. Dovrolis. Co-evolutionary dynamics in social networks: A case study of Twitter. arXiv preprint arXiv:1309.6001, 2013.
[3] D. Blackwell. Conditional expectation and unbiased sequential estimation. The Annals of Mathematical Statistics, pages 105-110, 1947.
[4] P. Brémaud. Point processes and queues. 1981.
[5] J. Da Fonseca and R. Zaatour. Hawkes process: Fast calibration, application to trade clustering, and diffusive limit. Journal of Futures Markets, 34(6):548-579, 2014.
[6] H. Dai, Y. Wang, R. Trivedi, and L. Song. Deep coevolutionary network: Embedding user and item features for recommendation. arXiv preprint arXiv:1609.03675, 2016.
[7] J. R. Dormand and P. J. Prince. A family of embedded Runge-Kutta formulae. Journal of Computational and Applied Mathematics, 6(1):19-26, 1980.
[8] N. Du, L. Song, M. Gomez-Rodriguez, and H. Zha. Scalable influence estimation in continuous-time diffusion networks. In NIPS, 2013.
[9] N. Du, L. Song, A. J. Smola, and M. Yuan. Learning networks of heterogeneous influence. In NIPS, 2012.
[10] N. Du, Y. Wang, N. He, and L. Song. Time-sensitive recommendation from recurrent user activities. In NIPS, pages 3492-3500, 2015.
[11] R. M. Dudley. Real analysis and probability. Cambridge University Press, Cambridge, UK, 2002.
[12] M. Farajtabar, Y. Wang, M. Gomez-Rodriguez, S. Li, H. Zha, and L. Song. Coevolve: A joint point process model for information diffusion and network co-evolution. In NIPS, pages 1954-1962, 2015.
[13] S. Gao, J. Ma, and Z. Chen. Modeling and predicting retweeting dynamics on microblogging platforms. In WSDM, 2015.
[14] I. M. Gelfand, R. A. Silverman, et al. Calculus of variations. Courier Corporation, 2000.
[15] A. G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83-90, 1971.
[16] N. He, Z. Harchaoui, Y. Wang, and L. Song. Fast and simple optimization for Poisson likelihood models. arXiv preprint arXiv:1608.01264, 2016.
[17] X. He, T. Rekatsinas, J. Foulds, L. Getoor, and Y. Liu.
Hawkestopic: A joint model for network inference and topic modeling from text-based cascades. In ICML, pages 871-880, 2015.
[18] M. G. Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81-93, 1938.
[19] W. Lian, R. Henao, V. Rao, J. E. Lucas, and L. Carin. A multitask point process predictive model. In ICML, pages 2030-2038, 2015.
[20] Y. Ogata. On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1):23-31, 1981.
[21] J. Pan, V. Rao, P. Agarwal, and A. Gelfand. Markov-modulated marked Poisson processes for check-in data. In ICML, pages 2244-2253, 2016.
[22] R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani. Epidemic processes in complex networks. Reviews of Modern Physics, 87(3):925, 2015.
[23] X. Tan, S. A. Naqvi, A. Y. Qi, K. A. Heller, and V. Rao. Content-based modeling of reciprocal relationships using Hawkes and Gaussian processes. In UAI, pages 726-734, 2016.
[24] R. Trivedi, H. Dai, Y. Wang, and L. Song. Know-Evolve: Deep temporal reasoning for dynamic knowledge graphs. In ICML, 2017.
[25] Y. Wang, N. Du, R. Trivedi, and L. Song. Coevolutionary latent feature processes for continuous-time user-item interactions. In NIPS, pages 4547-4555, 2016.
[26] Y. Wang, E. Theodorou, A. Verma, and L. Song. A stochastic differential equation framework for guiding online user activities in closed loop. arXiv preprint arXiv:1603.09021, 2016.
[27] Y. Wang, G. Williams, E. Theodorou, and L. Song. Variational policy for guiding point processes. In ICML, 2017.
[28] Y. Wang, B. Xie, N. Du, and L. Song. Isotonic Hawkes processes. In ICML, pages 2226-2234, 2016.
[29] Y. Wang, X. Ye, H. Zha, and L. Song. Predicting user activity level in point processes with mass transport equation. In NIPS, 2017.
[30] S.-H. Yang and H. Zha. Mixture of mutually exciting processes for viral diffusion. In ICML, pages 1-9, 2013.
[31] L. Yu, P. Cui, F. Wang, C. Song, and S. Yang. From micro to macro: Uncovering and predicting information cascading process with behavioral dynamics. In ICDM, 2015.
[32] Q. Zhao, M. A. Erdogdu, H. Y. He, A. Rajaraman, and J. Leskovec. SEISMIC: A self-exciting point process model for predicting tweet popularity. In KDD, 2015.
[33] K. Zhou, H. Zha, and L. Song. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In AISTATS, volume 31, pages 641-649, 2013.
Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues

Noga Alon (Tel Aviv University, Israel, and Microsoft Research), [email protected]
Moshe Babaioff (Microsoft Research), [email protected]
Yannai A. Gonczarowski (The Hebrew University of Jerusalem, Israel, and Microsoft Research), [email protected]
Shay Moran (Institute for Advanced Study, Princeton), [email protected]
Yishay Mansour (Tel Aviv University, Israel, and Google Research, Israel), [email protected]
Amir Yehudayoff (Technion - IIT, Israel), [email protected]

Abstract

In this work we derive a variant of the classic Glivenko-Cantelli Theorem, which asserts uniform convergence of the empirical Cumulative Distribution Function (CDF) to the CDF of the underlying distribution. Our variant allows for tighter convergence bounds for extreme values of the CDF. We apply our bound in the context of revenue learning, which is a well-studied problem in economics and algorithmic game theory. We derive sample-complexity bounds on the uniform convergence rate of the empirical revenues to the true revenues, assuming a bound on the kth moment of the valuations, for any (possibly fractional) k > 1. For uniform convergence in the limit, we give a complete characterization and a zero-one law: if the first moment of the valuations is finite, then uniform convergence almost surely occurs; conversely, if the first moment is infinite, then uniform convergence almost never occurs.

1 Introduction

A basic task in machine learning is to learn an unknown distribution μ, given access to samples from it. A natural and widely studied criterion for learning a distribution is approximating its Cumulative Distribution Function (CDF). The seminal Glivenko-Cantelli Theorem [13, 6] addresses this question when the distribution μ is over the real numbers. It determines the behavior of the empirical distribution function as the number of samples grows: let X1, X2, ... be a sequence of i.i.d. random variables drawn from a distribution μ on ℝ with Cumulative Distribution Function (CDF) F, and let x1, x2, ... be their realizations. The empirical distribution μn is

    μn := (1/n) Σ_{i=1}^n δ_{x_i},
Since the empirical measure ?n has finite support, there is t with Fn (t) = 0; for such a value of t, such a multiplicative approximation fails to hold. So, the above multiplicative requirement is too strong to hold in general. A natural compromise is to consider a submultiplicative bound: ?t : F (t) ? Fn (t) ?  ? F (t)? , where 0 ? ? < 1. When ? = 0, this is the additive bound studied in the context of the GlivenkoCantelli Theorem. When ? = 1, this is the unattainable multiplicative bound. Our first main result shows that the case of ? < 1 is attainable: Theorem 1.3 (Submultiplicative Glivenko-Cantelli Theorem). Let  > 0, ? > 0 and 0 ? ? < 1. There exists n0 (, ?, ?) such that for all n > n0 , with probability 1 ? ?: ?t : F (t) ? Fn (t) ?  ? F (t)? . It is worth pointing out a central difference between Theorem 1.3 and other generalizations of the Glivenko-Cantelli Theorem: for example, the seminal work of Vapnik and Chervonenkis [24] shows that for every class of events F of VC dimension d, there is n0 = n0 (, ?, d) such that for every n ? n0 , with probability 1 ?  ? it holds that ?A ? F : p(A) ? pn (A) ? . This yields Glivenko-Cantelli by plugging F = (??, t] : t ? R , which has VC dimension 1. In contrast, the submultiplicative  bound from Theorem 1.3 does not even extend to the VC dimension 1 class F = {t} : t ? R . Indeed, pick any distribution p over R such  that p {t} = 0 for every  t, and 1/n, however p {x } = 0, and observe that for every sample x1 , . . . , xn , it holds that p {x } ? n i i   ? therefore, as long as ? > 0, it is never the case that p {xi } ? pn {xi } ? p {xi } . Our second main result gives an explicit upper bound on n0 (, ?, ?): Theorem 1.4 (Submultiplicative Glivenko-Cantelli Bound). Let , ? ? 1/4, and ? < 1. Then ? ? 4?  ! 1?? 4? ? ln 6/?  ? ? 1?? ? D+4 n0 (, ?, ?) ? max , (D + 1) 10 ? ln 12 ? , ? 22 ? 3 ?(1 ? ?) where D = ln(6/?) 22  ? 6 ? ln 1+? 2? 4? ? 1?? .  1  Note that for fixed , ?, when ? ? 0 the above bound approaches the familiar O ln(2/?) bound by DKW [12] and Massart [17] for ? = 0. On the other hand, when ? ? 1 the above bound tends 1 The inequality due to [12] has a larger constant C in front of the exponent on the right hand side. 2 to ?, reflecting the fact that the multiplicative variant of Glivenko-Cantelli (? = 1) does not hold. Theorems 1.3 and 1.4 are proven in the supplementary material. Note that the dependency of the above bound on the confidence parameter ? is polynomial. This contrasts with standard uniform convergence rates, which, due to applications of concentration bounds such as Chernoff/Hoeffding, achieve logarithmic dependencies on ?. These concentration bounds are not applicable in our setting when the CDF values are very small, and we use Markov?s inequality instead. The following example shows that a polynomial dependency on ? is indeed necessary and is not due to a limitation of our proof. Example 1.5. For large n, consider n independent samples x1 , . . . , xn from the uniform distribution over [0, 1], and set ? = 1/2 and  = 1. The probability of the event ?i : xi ? 1/n3 is roughly 1/n2 : indeed, the complementary event has probability (1?1/n3 )n ? exp(?1/n2 ) ? 1?  1/2 1/n2 . When this happens, we have: Fn (1/n3 ) ? 1/n >> 1/n3 +1/n3/2 = F (1/n3 )+ F (1/n3 ) . Note that this happens with probability inverse polynomial in n (roughly 1/n2 ) and not inverse exponential. An application to revenue learning. 
We demonstrate an application of our Submultiplicative Glivenko-Cantelli Theorem in the context of a widely studied problem in economics and algorithmic game theory: the problem of revenue learning. In the setting of this problem, a seller has to decide which price to post for a good she wishes to sell. Assume that each consumer draws her private valuation for the good from an unknown distribution ?. We envision that a consumer with valuation v will buy the good at any price p ? v, but not at any higher price. This implies that the expected revenue at price p is simply r(p) , p ? q(p), where q(p) , PrV ?? [V ? p]. In the language of machine learning, this problem can be phrased as follows: the examples domain Z , R+ is the set of all valuations v. The hypothesis space H , R+ is the set of all prices p. The revenue (which is a gain, rather than loss) of a price p on a valuation v is the function p ? 1{p?v} . The well-known revenue maximization problem is to find a price p? that maximizes the expected revenue, given a sample of valuations drawn i.i.d. from ?. In this paper, we consider the more demanding revenue estimation problem: the problem of well-approximating r(p), simultaneously for all prices p, from a given sample of valuations. (This clearly also implies a good estimation of the maximum revenue and of a price that yields it.) More specifically, we address the following question: when do the empirical revenues, rn (p) , p ? qn (p), where qn (p) , PrV ??n [V ? p] = n1 ? {1 ? i ? n : xi ? t} , uniformly converge to the true revenues r(p)? More specifically, we would like to show that for some n0 , for n ? n0 we have with probability 1 ? ? that r(p) ? rn (p) ? . The revenue estimation problem is a basic instance of the more general problem of uniform convergence of empirical estimates. The main challenge in this instance is that the prices are unbounded (and so are the private valuations that are drawn from the distribution ?). Unfortunately, there is no (upper) bound on n0 that is only a function of  and ?. Moreover, even if we add the expectation of valuations, i.e., E[V ] where V is distributed according to ?, still there is no bound on n0 that is a function of only those three parameters (see Section 2.3 for an example). In contrast, when we consider higher moments of the distribution ?, we are able to derive bounds on the value of n0 . These bounds are based on our Submultiplicative Glivenko-Cantelli Bound. Specifically, assume that EV ?? [V 1+? ] ? C for some ? > 0 and C ? 1. Then, we show that for any , ? ? (0, 1), we have   h i 1  1+? Pr ?v : r(v) ? rn (v) >  ? Pr ?v : q(v) ? qn (v) > . 1 q(v) C 1+? This essentially reduces uniform convergence bounds to our Submultiplicative Glivenko-Cantelli variant. It then follows that there exists n0 (C, ?, , ?) such that for any n ? n0 , with probability at least 1 ? ?, ?v : rn (v) ? r(v) ? . 3  1  We remark that when ? is large, our bound yields n0 ? O ln(2/?) , which recovers the standard sample complexity bounds obtainable via DKW [12] and Massart [17]. When ? ? 0, our bound diverges to infinity, reflecting the fact (discussed above) that there is no bound on n0 that depends only on , ?, and E[V ]. Nevertheless, we find that E[V ] qualitatively determines whether uniform convergence occurs in the limit. Namely, we show that ? If E? [V ] < ?, then almost surely limn?? supv r(v) ? rn (v) = 0, ? Conversely, if E? [V ] = ?, then almost never limn?? supv r(v) ? rn (v) = 0. 1.1 Related work Generalizations of Glivenko-Cantelli. 
Various generalizations of the Glivenko-Cantelli Theorem were established. These include uniform convergence bounds for more general classes of functions as well as more general loss functions (for example, [24, 23, 16, 2]). The results that concern unbounded loss functions are most relevant to this work (for example, [9, 8, 23]). We next briefly discuss the relevant results from Cortes et al. [8] in the context of this paper; more specifically, in the context of Theorem 1.3. To ease presentation, set ? in this theorem to be 1/2. Theorem 1.3 analyzes the event where the empirical quantile is bounded by2 p qn (p) ? q(p) +  q(p), p qn (p) ? q(p) ?  q(p). whereas, [8] analyzes the event where it is bounded it by: p  ? q(p) + q(p)/n + 1/n , qn (p) ? O p  ? q(p) ? qn (p)/n ? 1/n qn (p) ? ? Thus, the main difference is the additive 1/n term in the bound from [8]. In the context of uniform convergence of revenues, it is crucial to use the upper bound on the empirical quantile as we do, as it guarantees that large prices will not overfit, which is the main challenge in proving uniform convergence in this context. In particular, the upper bound from [8] does not provide any guarantee on the revenues of prices p >> n, as for such prices p ? 1/n >> 1. It is also worth pointing out that our lower bound on the empirical quantile implies that with high probability the quantile of the maximum sampled point is at least 1/n2 (or more generally, at least 1/n1/? when ? 6= 1/2), while the bound from [8] does not imply any non-trivial lower bound. Another, more qualitative difference is that unlike the bounds in [8] that apply for general VC classes, our bound is tailored for the class of thresholds (corresponding to CDF/quantiles), and does not extend even to other classes of VC dimension 1 (see the discussion after Theorem 1.3). Uniform convergence of revenues. The problem of revenue maximization is a central problem in economics and Algorithmic Game Theory (AGT). The seminal work of Myerson [20] shows that given a valuation distribution for a single good, the revenue-maximizing selling mechanism for this good is a posted-price mechanism. In the recent years, there has been a growing interest in the case where the valuation distribution is unknown, but the seller observes samples drawn from it. Most papers in this direction assume that the distribution meets some tail condition that is considered ?natural? within the algorithmic game theory community, such as boundedness [18, 21, 19, 1, 14, 10]3 , such as a condition known as Myerson-regularity [11, 15, 7, 10], or such as a condition known as monotone hazard rate [15].4 These papers then go on to derive computation- or sample-complexity 2 For consistency with the canonical statement of the Glivenko-Cantelli theorem, we stated our submultiplicative variants of this theorem with regard to the CDFs Fn and F . However, these results also hold when replacing these CDFs with the respective quantiles (tail CDFs) qn and q. See Section 2.2 for details. 3 The analysis of [1] assumes a bound on the realized revenue (from any possible valuation profile) of any mechanism/auction in the class that they consider. For the class of posted-price mechanisms, this is equivalent to assuming a bound on the support of the valuation distribution. Indeed, for any valuation v, pricing at v gives realized revenue v (from the valuation v), and so unbounded valuations (together with the ability to post unbounded prices) imply unbounded realized revenues. 
⁴ Both Myerson-regularity and monotone hazard rate are conditions on the second derivative of the revenue as a function of the quantile of the underlying distribution. In particular, they impose restrictions on the tail of the distribution.

A recurring theme in statistical learning theory is that learnability guarantees are derived via a, sometimes implicit, uniform convergence bound. However, this has not been the case in the context of revenue learning. Indeed, while some papers that studied bounded distributions [18, 21, 19, 1] did use uniform convergence bounds as part of their analysis, other papers, in particular those that considered unbounded distributions, had to bypass the use of uniform convergence with more specialized arguments. This is due to the fact that many unbounded distributions do not satisfy any uniform convergence bound. As a concrete example, the (unbounded, Myerson-regular) equal revenue distribution⁵ has an infinite expectation and therefore, by our Theorem 2.3, satisfies no uniform convergence, even in the limit. Thus, it turns out that the works that studied the popular class of Myerson-regular distributions [11, 15, 7, 10] indeed could not have hoped to establish learnability via a uniform convergence argument. For instance, the way [11, 7] establish learnability for Myerson-regular distributions is by considering the guarded ERM algorithm (an algorithm that chooses an empirical-revenue-maximizing price that is smaller than, say, the √n-th largest sampled price), proving a uniform convergence bound, not for all prices, but only for prices that are, say, smaller than the √n-th largest sampled price, and then arguing that larger prices are likely to have a small empirical revenue compared to the guarded empirical revenue maximizer. This means that the guarded ERM will output a good price, but it does not (and cannot) imply uniform convergence for all prices.

⁵ This is a distribution that satisfies the special property that all prices have the same expected revenue.

We complement the extensive literature surveyed above in a few ways. The first is generalizing the revenue maximization problem to a revenue estimation problem, where the goal is to uniformly estimate the revenue of all possible prices, when no bound on the possible valuations is given (or even exists). The problem of revenue estimation arises naturally when the seller has additional considerations when pricing her good, such as regulations that limit the price choice, bad publicity if the price is too high (or, conversely, damage to prestige if the price is too low), or willingness to suffer some revenue loss for better market penetration (which may translate into more revenue in the future). In such cases, the seller may wish to estimate the revenue loss due to posting a discounted (or inflated) price. The second, and most important, contribution to the above literature is that we consider arbitrary distributions, rather than very specific and limited classes of distributions (e.g., bounded, Myerson-regular, monotone hazard rate, etc.). Third, we derive finite sample bounds in the case that the expected valuation is bounded for some moment larger than 1. We further derive a zero-one law for uniform convergence in the limit that depends on the finiteness of the first moment. Technically, our bounds are based on an additive error rather than the multiplicative errors that are popular in the AGT community.
1.2 Paper organization

The rest of the paper is organized as follows. Section 2 contains the application of our Submultiplicative Glivenko–Cantelli bound to revenue estimation, and Section 3 contains a discussion and possible directions for future work. The proof of the Submultiplicative Glivenko–Cantelli variant, and some extensions of it, appear in the supplementary material.

2 Uniform Convergence of Empirical Revenues

In this section we demonstrate an application of our Submultiplicative Glivenko–Cantelli variant by establishing uniform convergence bounds for a family of unbounded random variables in the context of revenue estimation.

2.1 Model

Consider a good g that we wish to post a price for. Let V be a random variable that models the valuation of a random consumer for g. Technically, it is assumed that V is a nonnegative random variable, and we denote by μ its induced distribution over ℝ₊. A consumer who values g at a valuation v is willing to buy the good at any price p ≤ v, but not at any higher price. This implies that the realized revenue to the seller from a (posted) price p is the random variable p · 1{p ≤ V}. The quantile of a value v ∈ ℝ₊ is

  q(v) = q(v; μ) := μ({x : x ≥ v}).

This models the fraction of the consumers in the population who are willing to purchase the good if it is priced at v. The expected revenue from a (posted) price p ∈ ℝ₊ is

  r(p) = r(p; μ) := E_μ[p · 1{p ≤ V}] = p · q(p).

Let V₁, V₂, ... be a sequence of i.i.d. valuations drawn from μ, and let v₁, v₂, ... be their realizations. The empirical quantile of a value v ∈ ℝ₊ is

  q_n(v) = q(v; μ_n) := (1/n) · |{1 ≤ i ≤ n : v_i ≥ v}|.

The empirical revenue from a price p ∈ ℝ₊ is

  r_n(p) = r(p; μ_n) := E_{μ_n}[p · 1{p ≤ V}] = p · q_n(p).

The revenue estimation error for a given sample of size n is

  ε_n := sup_p |r_n(p) − r(p)|.

It is worth highlighting the difference between revenue estimation and revenue maximization. Let p* be a price that maximizes the revenue, i.e., p* ∈ arg sup_p r(p). The maximum revenue is r* = r(p*). The goal in many works on revenue maximization is to find a price p̂ such that r* − r(p̂) ≤ ε, or, alternatively, to bound the ratio r*/r(p̂). Given a revenue-estimation error ε_n, one can clearly maximize the revenue within an additive error of 2ε_n by simply posting a price p̂_n ∈ arg max_p r_n(p), thereby attaining revenue r̂_n = r(p̂_n). This follows since

  r̂_n = r(p̂_n) ≥ r_n(p̂_n) − ε_n ≥ r_n(p*) − ε_n ≥ r(p*) − 2ε_n = r* − 2ε_n.

Therefore, good revenue estimation implies good revenue maximization. We note that the converse does not hold. Namely, there are distributions for which revenue maximization is trivial but revenue estimation is impossible. One such case is the equal revenue distribution, where all values in the support of μ have the same expected revenue. For such distributions, the problem of revenue maximization becomes trivial, since any posted price is optimal. However, since the expected valuation under such distributions is infinite, it follows from Theorem 2.3 that almost never do the empirical revenues uniformly converge to the true revenues.

2.2 Quantitative bounds on the uniform convergence rate

Recall that we are interested in deriving sample bounds that would guarantee uniform convergence for the revenue estimation problem. We will show that given an upper bound on the kth moment of V for some k > 1, we can derive a finite sample bound.
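As a concrete illustration of the quantity ε_n (a hedged simulation sketch of ours; the exponential distribution and the evaluation grid are arbitrary choices), the following code estimates sup_p |r_n(p) − r(p)| for V ∼ Exp(1), where q(v) = e^{−v} and r(v) = v·e^{−v} are known in closed form:

```python
import numpy as np

# Estimating eps_n = sup_p |r_n(p) - r(p)| for V ~ Exp(1), where the true
# quantile q(v) = exp(-v) is known (illustrative sketch, not the paper's code).
rng = np.random.default_rng(1)

def estimation_error(n):
    sample = np.sort(rng.exponential(size=n))
    # Evaluate on a grid that includes the sample points, where r_n jumps.
    grid = np.union1d(sample, np.linspace(0.0, sample[-1] + 1.0, 2048))
    q_n = (n - np.searchsorted(sample, grid, side="left")) / n  # P_n[V >= p]
    return np.max(np.abs(grid * q_n - grid * np.exp(-grid)))

for n in [100, 1_000, 10_000]:
    errs = [estimation_error(n) for _ in range(20)]
    print(f"n={n:>6}: mean sup_p |r_n - r| ~ {np.mean(errs):.4f}")
```

Since Exp(1) has all moments finite, the observed error shrinks as n grows, in line with the bounds derived next.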
To derive such a bound, we utilize our Submultiplicative Glivenko–Cantelli Bound (Theorem 1.4). We also consider the case of k = 1, namely that E[V] is bounded, and show that in this case there is still uniform convergence in the limit, but that there cannot be any guarantee on the convergence rate. Interestingly, it turns out that E[V] < ∞ is not only sufficient but also necessary for the empirical revenues to uniformly converge to the true revenues in the limit (see Section 2.3).

We begin by showing that bounds on the kth moment for k > 1 yield explicit bounds on the convergence rate. It is convenient to parametrize by setting k = 1 + α, where α > 0.

Theorem 2.1. Let E_{V∼μ}[V^{1+α}] ≤ C for some α > 0 and C ≥ 1, and let ε, δ ∈ (0, 1). Set⁶

  n₀ = Õ( (C^{2/(1+α)} / ε²) · ( (6/ε) · C^{1/(1+α)} · ln(2/δ) / ln(1 + α/2) )^{4/α} ).   (1)

For any n ≥ n₀, with probability at least 1 − δ, ∀v : |r_n(v) − r(v)| ≤ ε.

⁶ The Õ conceals lower-order terms.

Note that when α is large, this bound approaches the standard O((1/ε²) · ln(2/δ)) sample complexity bound of the additive Glivenko–Cantelli. For example, if all moments are uniformly bounded, then the convergence is roughly as fast as in standard uniform convergence settings (e.g., VC-dimension-based bounds).

The proof of Theorem 2.1 follows from Theorem 1.4 and the next proposition, which reduces bounds on the uniform convergence rate of the empirical revenues to our Submultiplicative Glivenko–Cantelli.

Proposition 2.2. Let E_{V∼μ}[V^{1+α}] ≤ C for some α > 0 and C ≥ 1, and let ε, δ ∈ (0, 1). Then

  Pr[∃v : |r(v) − r_n(v)| > ε] ≤ Pr[∃v : |q(v) − q_n(v)| > (ε / C^{1/(1+α)}) · q(v)^{1/(1+α)}].

Thus, to prove Theorem 2.1, we first note that Theorem 1.4 (as well as Theorem 1.3) also holds when F_n and F are respectively replaced in the definition of n₀ with q_n and q (indeed, applying Theorem 1.4 to the measure μ′ defined by μ′(A) := μ({−a : a ∈ A}) yields the required result with regard to the measure μ). We then plug ε ← ε / C^{1/(1+α)} and θ ← 1/(1+α) into this variant of Theorem 1.4 to yield a bound on the right-hand side of the inequality in Proposition 2.2, whose application concludes the proof.

Proof of Proposition 2.2. By Markov's inequality:

  q(v) = Pr[V ≥ v] = Pr[V^{1+α} ≥ v^{1+α}] ≤ C / v^{1+α}.   (2)

Now,

  Pr[∃v : |r(v) − r_n(v)| > ε]
   = Pr[∃v : |v·q(v) − v·q_n(v)| > ε]
   = Pr[∃v : |v·q(v) − v·q_n(v)| > (ε / (v^{1+α} q(v))^{1/(1+α)}) · (v^{1+α} q(v))^{1/(1+α)}]
   ≤ Pr[∃v : |v·q(v) − v·q_n(v)| > (ε / C^{1/(1+α)}) · (v^{1+α} q(v))^{1/(1+α)}]
   = Pr[∃v : |q(v) − q_n(v)| > (ε / C^{1/(1+α)}) · q(v)^{1/(1+α)}],

where the inequality follows from Equation (2). ∎

2.3 A qualitative characterization of uniform convergence

The sample complexity bounds in Theorem 2.1 are meaningful as long as α > 0, but deteriorate drastically as α → 0. Indeed, as the following example shows, there is no bound on the uniform convergence sample complexity that depends only on the first moment of V, i.e., its expectation. Consider a distribution μ_p such that with probability p we have V = 1/p, and otherwise V = 0. Clearly, E[V] = 1. However, we need to sample m_p = Θ(1/p) valuations to see even a single nonzero value. Therefore, there is no bound on the sample size m_p as a function of the expectation, which is simply 1. We can now consider the higher moments of μ_p. Consider the kth moment for k = 1 + α with α > 0, so k > 1. For this moment, we have A_{p,α} := (E[V^{1+α}])^{1/(1+α)} = p^{−α/(1+α)}, which implies that m_p = Θ(A_{p,α}^{(1+α)/α}). This does allow us to bound m_p as a function of α and E[V^{1+α}], but for small α we have a huge exponent of approximately 1/α.
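The following sketch (ours; purely illustrative) makes the μ_p example tangible: with a sample budget chosen independently of p, the error of the empirical revenue at the price 1/p remains large once p is small, even though E[V] = 1 throughout.

```python
import numpy as np

# The mu_p example: V = 1/p with probability p, else 0, so E[V] = 1 and
# r(1/p) = (1/p) * p = 1. Until a nonzero value is sampled, r_n(1/p) = 0.
rng = np.random.default_rng(2)

def error_at_atom(p, n):
    sample = np.where(rng.random(n) < p, 1.0 / p, 0.0)
    q_n = np.mean(sample >= 1.0 / p)        # empirical quantile at the atom
    return abs((1.0 / p) * q_n - 1.0)       # |r_n(1/p) - r(1/p)|

n = 100                                     # fixed budget, independent of p
for p in [0.1, 0.01, 0.001]:
    mean_err = np.mean([error_at_atom(p, n) for _ in range(50)])
    print(f"p={p}: mean error at price 1/p ~ {mean_err:.3f}")
```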
While the above examples show that there cannot be a bound on the sample size as a function of the expectation of the value, it turns out that there is a very tight connection between the first moment and uniform convergence:

Theorem 2.3. The following dichotomy holds for a distribution μ on ℝ₊:

1. If E_μ[V] < ∞, then almost surely lim_{n→∞} sup_v |r(v) − r_n(v)| = 0.
2. If E_μ[V] = ∞, then almost never lim_{n→∞} sup_v |r(v) − r_n(v)| = 0.

That is, the empirical revenues uniformly converge to the true revenues if and only if E_μ[V] < ∞.

We use the following basic fact in the proof of Theorem 2.3:

Lemma 2.4. Let X be a nonnegative random variable. Then

  Σ_{n=1}^∞ Pr[X ≥ n] ≤ E[X] ≤ Σ_{n=0}^∞ Pr[X ≥ n].

Proof. Note that

  Σ_{n=1}^∞ 1{X ≥ n} = ⌊X⌋ ≤ X ≤ ⌊X⌋ + 1 = Σ_{n=0}^∞ 1{X ≥ n}.

The lemma follows by taking expectations. ∎

Proof of Theorem 2.3. We start by proving item 2. Let μ be a distribution such that E_μ[V] = ∞. If sup_v v·q(v) = ∞, then for every realization v₁, ..., v_n there is some v ≥ max{v₁, ..., v_n} such that v·q(v) ≥ 1 but v·q_n(v) = 0. So, we may assume sup_v v·q(v) < ∞. Without loss of generality we may assume that sup_v v·q(v) = 1/2, by rescaling the distribution if needed. Consider the sequence of events E₁, E₂, ..., where E_n denotes the event that V_n ≥ n. Since E_μ[V] = ∞, Lemma 2.4 implies that Σ_{n=1}^∞ Pr[E_n] = ∞. Thus, since these events are independent, the second Borel–Cantelli Lemma [4, 5] implies that almost surely infinitely many of them occur, and so infinitely often

  V_n · q_n(V_n) ≥ 1 ≥ V_n · q(V_n) + 1/2.

Therefore, the probability that v·q_n(v) uniformly converges to v·q(v) is 0.

Item 1 follows from the following monotone domination theorem:

Theorem 2.5. Let F be a family of nonnegative monotone functions, and let F̄ be an upper envelope⁷ for F. If E_μ[F̄] < ∞, then almost surely

  lim_{n→∞} sup_{f∈F} |E_μ[f] − E_{μ_n}[f]| = 0.

⁷ F̄ is an upper envelope for F if F̄(v) ≥ f(v) for every v and every f ∈ F.

Indeed, item 1 follows by plugging in F = {v · 1{x ≥ v} : v ∈ ℝ₊}, which is uniformly bounded by the identity function F̄(x) = x. Now, by assumption E_μ[F̄] < ∞, and therefore, almost surely

  lim_{n→∞} sup_{v∈ℝ₊} |r(v) − r_n(v)| = lim_{n→∞} sup_{f∈F} |E_μ[f] − E_{μ_n}[f]| = 0.

Theorem 2.5 follows from known results in the theory of empirical processes (for example, with some work it can be proved using Theorem 2.4.3 of van der Vaart and Wellner [22]). For completeness, we give a short and basic proof in the supplementary material. ∎

3 Discussion

Our main result is a submultiplicative variant of the Glivenko–Cantelli Theorem, which allows for tighter convergence bounds for extreme values of the CDF. We show that, for the revenue learning setting, our submultiplicative bound can be used to derive uniform convergence sample complexity bounds, assuming a finite bound on the kth moment of the valuations, for any (possibly fractional) k > 1. For uniform convergence in the limit, we give a complete characterization: uniform convergence almost surely occurs if and only if the first moment is finite.

It would be interesting to find other applications of our submultiplicative bound in other settings. A potentially interesting direction is to consider unbounded loss functions (e.g., the squared loss or the log loss). Many works circumvent the unboundedness in such cases by ensuring (implicitly) that the losses are bounded, e.g., through restricting the inputs and the hypotheses. Our bound offers a different perspective on addressing this issue. In this paper we consider revenue learning, and replace the boundedness assumption by assuming bounds on higher moments.
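To illustrate item 2 of Theorem 2.3, the sketch below (ours, illustrative) uses the equal revenue distribution: V = 1/U for U ∼ Uniform(0, 1), so q(v) = 1/v and r(v) = 1 for every v ≥ 1, while E[V] = ∞. Any price above the largest sampled valuation has empirical revenue 0 but true revenue 1, so the estimation error never falls below 1, no matter how large n is.

```python
import numpy as np

# Equal revenue distribution (illustrative sketch): V = 1/U with U ~ U(0,1),
# so q(v) = 1/v and r(v) = v * q(v) = 1 for all v >= 1, while E[V] = inf.
rng = np.random.default_rng(3)
for n in [100, 10_000, 1_000_000]:
    sample = 1.0 / rng.random(n)
    v = sample.max() + 1.0        # a price above every sampled valuation
    r_n, r = v * 0.0, 1.0         # q_n(v) = 0 there, while q(v) = 1/v
    print(f"n={n:>9}: sup_p |r_n(p) - r(p)| >= {abs(r_n - r):.1f}")
```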
An interesting challenge is to prove uniform convergence bounds for other practically interesting settings. One such setting might be estimating the effect of outliers (which correspond to the extreme values of the loss). In the context of revenue estimation, this work considers only the most naïve estimator, namely estimating the revenues by the empirical revenues. One can envision other estimators, for example ones that regularize the extreme tail of the sample. Such estimators may have the potential for better guarantees or better convergence bounds. In the context of uniform convergence of selling mechanism revenues, this work considers only the basic class of posted-price mechanisms. While for one good and one valuation distribution it is always possible to maximize revenue via a selling mechanism of this class, this is not the case in more complex auction environments. While in many more-complex environments the revenue-maximizing mechanism/auction is still not understood well enough, for environments where it is understood [7, 10, 14] (as well as for simple auction classes that do not necessarily contain a revenue-maximizing auction [19, 1]), it would also be interesting to study relaxations of the restrictive tail or boundedness assumptions currently common in the literature.

Acknowledgments

The research of Noga Alon is supported in part by an ISF grant and by a GIF grant. Yannai Gonczarowski is supported by the Adams Fellowship Program of the Israel Academy of Sciences and Humanities; his work is supported by ISF grant 1435/14 administered by the Israeli Academy of Sciences and by Israel–USA Bi-national Science Foundation (BSF) grant number 2014389; this project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 740282). The research of Yishay Mansour was supported in part by The Israeli Centers of Research Excellence (I-CORE) program (Center No. 4/11), by a grant from the Israel Science Foundation, and by a grant from the United States–Israel Binational Science Foundation (BSF); the research was done while the author was co-affiliated with Microsoft Research. The research of Shay Moran is supported by the National Science Foundation and the Simons Foundation; part of the research was done while the author was co-affiliated with Microsoft Research. The research of Amir Yehudayoff is supported by ISF grant 1162/15.

References

[1] Maria-Florina Balcan, Tuomas Sandholm, and Ellen Vitercik. Sample complexity of automated mechanism design. In Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS), pages 2083–2091, 2016.
[2] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[3] Z. W. Birnbaum and R. C. McCarty. A distribution-free upper confidence bound for Pr{Y < X}, based on independent samples of X and Y. The Annals of Mathematical Statistics, 29(2):558–562, 1958.
[4] Émile Borel. Les probabilités dénombrables et leurs applications arithmétiques. Rendiconti del Circolo Matematico di Palermo (1884–1940), 27(1):247–271, 1909.
[5] Francesco Paolo Cantelli. Sulla probabilità come limite della frequenza. Atti Accad. Naz. Lincei, 26(1):39–45, 1917.
[6] Francesco Paolo Cantelli. Sulla determinazione empirica delle leggi di probabilità.
Giornale dell'Istituto Italiano degli Attuari, 4:421–424, 1933.
[7] Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC), pages 243–252, 2014.
[8] Corinna Cortes, Spencer Greenberg, and Mehryar Mohri. Relative deviation learning bounds and generalization with unbounded loss functions. CoRR, abs/1310.5796, 2013.
[9] Corinna Cortes, Yishay Mansour, and Mehryar Mohri. Learning bounds for importance weighting. In Proceedings of the 24th Conference on Neural Information Processing Systems (NIPS), pages 442–450, 2010.
[10] Nikhil R. Devanur, Zhiyi Huang, and Christos-Alexandros Psomas. The sample complexity of auctions with side information. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC), pages 426–439, 2016.
[11] Peerapong Dhangwatnotai, Tim Roughgarden, and Qiqi Yan. Revenue maximization with a single sample. Games and Economic Behavior, 91:318–333, 2015.
[12] Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. The Annals of Mathematical Statistics, 27(3):642–669, 1956.
[13] V. L. Glivenko. Sulla determinazione empirica delle leggi di probabilità. Giornale dell'Istituto Italiano degli Attuari, 4:92–99, 1933.
[14] Yannai A. Gonczarowski and Noam Nisan. Efficient empirical revenue maximization in single-parameter auction environments. In Proceedings of the 49th Annual ACM Symposium on Theory of Computing (STOC), pages 856–868, 2017.
[15] Zhiyi Huang, Yishay Mansour, and Tim Roughgarden. Making the most of your samples. In Proceedings of the 16th ACM Conference on Economics and Computation (EC), pages 45–60, 2015.
[16] Vladimir Koltchinskii and Dmitriy Panchenko. Rademacher Processes and Bounding the Risk of Function Learning, pages 443–457. Birkhäuser Boston, Boston, MA, 2000.
[17] Pascal Massart. The tight constant in the Dvoretzky–Kiefer–Wolfowitz inequality. The Annals of Probability, 18(3):1269–1283, 1990.
[18] Jamie Morgenstern and Tim Roughgarden. On the pseudo-dimension of nearly optimal auctions. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), pages 136–144, 2015.
[19] Jamie Morgenstern and Tim Roughgarden. Learning simple auctions. In Proceedings of the 29th Annual Conference on Learning Theory (COLT), pages 1298–1318, 2016.
[20] Roger Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58–73, 1981.
[21] Tim Roughgarden and Okke Schrijvers. Ironing in the dark. In Proceedings of the 17th ACM Conference on Economics and Computation (EC), pages 1–18, 2016.
[22] A. W. van der Vaart and Jon August Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer, New York, 1996. Réimpr. avec corrections, 2000.
[23] Vladimir Vapnik. Statistical Learning Theory. Wiley, 1998.
[24] V. N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl., 16:264–280, 1971.
Deep Dynamic Poisson Factorization Model

Chengyue Gong
Department of Information Management, Peking University
[email protected]

Win-bin Huang
Department of Information Management, Peking University
[email protected]

Abstract

A new model, named the deep dynamic Poisson factorization model, is proposed in this paper for analyzing sequential count vectors. The model, based on the Poisson Factor Analysis method, captures dependence among time steps through neural networks that represent implicit distributions. Local complicated relationships are obtained from the local implicit distribution, and a deep latent structure is exploited to capture the long-time dependence. Variational inference on the latent variables and gradient descent based on loss functions derived from the variational distribution are performed in our inference. Synthetic datasets and real-world datasets are applied to the proposed model, and our results show good predicting and fitting performance with an interpretable latent structure.

1 Introduction

There has been growing interest in analyzing sequentially observed count vectors x₁, x₂, ..., x_T. Such data appear in many real-world applications, such as recommender systems, text analysis, network analysis, and time series analysis. Analyzing such data must overcome computational and statistical challenges, since the data are often high-dimensional and sparse, with complex dependence across time steps. For example, when analyzing the dynamic word-count matrix of research papers, the number of words used is large and many words appear only a few times. Although we know the trend that one topic may encourage researchers to write papers about related topics in the following year, the relationships among time steps and topics are still hard to analyze completely.

Bayesian factor analysis models have recently achieved success in modeling sequentially observed count matrices. They assume the data are Poisson distributed, and model the data under Poisson Factor Analysis (PFA). PFA factorizes a count matrix, where Φ ∈ ℝ₊^{V×K} is the loading matrix and Θ ∈ ℝ₊^{T×K} is the factor score matrix. The assumption that θ_t ∼ Gamma(θ_{t−1}, β_t) is then included [1, 2] to smooth the transition through time. With the properties of the Gamma–Poisson distribution and the Gamma–NB process, inference via MCMC is used in these models. Considering the lack of ability to capture relationships between factors, a transition matrix is included in the Poisson–Gamma Dynamical System (PGDS) [2]. However, these models may still fall short in exploring the long-time dependence among time steps, as an independence assumption is made on θ_{t−1} and θ_{t+1} given θ_t. In text analysis, the temporal Dirichlet process [3] is used to capture the time dependence of each topic using a given decay rate. This method may be weak in analyzing other data with different patterns of long-time dependence, such as financial data and disaster data [3].

Deep models, also called hierarchical models in the Bayesian learning field, are widely used in Bayesian models to fit deep relationships between latent variables. Examples include the nested Chinese Restaurant Process [4], the nested hierarchical Dirichlet process [5], deep Gaussian processes [6, 7], and so on. Some models based on neural-network or recurrent structures are also used, such as Deep Exponential Families [8], Deep Poisson Factor Analysis based on the RBM or the SBN [9, 10], the Neural Autoregressive Density Estimator based on neural networks [11], Deep Poisson Factor Modeling with a recurrent structure based on PFA using a Bernoulli–Poisson link [12], and Deep Latent Dirichlet Allocation, which uses stochastic gradient MCMC [23]. These models capture deep relationships that shallow models miss, and often outperform shallow models.
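As a point of reference for the constructions below, here is a minimal generative sketch (our illustration; the shapes, rates, and sizes are arbitrary choices, not the authors' settings) of the smoothed Gamma–Poisson factorization θ_t ∼ Gamma(θ_{t−1}, β), X ∼ Poisson(ΘΦᵀ) described above:

```python
import numpy as np

# Generative sketch of dynamic Poisson factorization (illustrative only):
# theta_t ~ Gamma(theta_{t-1}, beta) smooths factor scores over time,
# and the counts follow X ~ Poisson(Theta Phi^T).
rng = np.random.default_rng(0)
V, T, K = 20, 50, 3          # features, time steps, factors
beta = 1.0                   # Gamma rate of the temporal transition

Phi = rng.gamma(shape=1.0, scale=0.5, size=(V, K))   # loading matrix, V x K
Theta = np.empty((T, K))                             # factor scores, T x K
Theta[0] = rng.gamma(shape=1.0, scale=1.0, size=K)
for t in range(1, T):
    # shape = theta_{t-1}, rate = beta (numpy parametrizes scale = 1/rate)
    Theta[t] = rng.gamma(shape=Theta[t - 1], scale=1.0 / beta)

X = rng.poisson(Theta @ Phi.T)                       # T x V count matrix
print(X.shape, X.sum())
```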
Some models based on neural network structure or recurrent structure is also used, such as the Deep Exponential Families [8], the Deep Poisson Factor Analysis based on RBM or SBN [9, 10], the Neural Autoregressive Density Estimator based on neural networks [11], Deep Poisson 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. ? 0 ? 1 ? 3 ? 2 ? 4 ? 5 ? 0 ? 1 ? 2 (a) ? 3 ? 4 ? 5 (b) h (0) 0 h (0) 1 h (0) 2 h (0) 3 h (0) 4 h (0) 5 h (0) 0 h (0) 1 h (0) 2 h (0) 3 h (0) 4 h (0) 5 ? 0 ? 1 ? 2 ? 3 ? 4 ? 5 ? 0 ? 1 ? 2 ? 3 ? 4 ? 5 (d) (c) Figure 1: The visual representation of our model. In (a), the structure of one-layer model is shown. (b) shows the transmission of the posterior information. The prior and posterior distributions between interfacing layers are shown in (c) and (d). Factor Modeling with a recurrent structure based on PFA using a Bernoulli-Poisson link [12], Deep Latent Dirichlet Allocation uses stochastic gradient MCMC [23]. These models capture the deep relationship among the shallow models, and often outperform shallow models. In this paper, we present the Deep Dynamic Poisson Factor Analysis (DDPFA) model. Based on PFA, our model includes recurrent neural networks to represent implicit distributions, in order to learn complicated relationship between different factors among short time. Deep structure is included in order to capture the long-time dependence. An inference algorithm based on variational inference is used for inferring the latent variables. Parameters in the neural networks are learnt according to a loss function based on the variational distributions. Finally, the DDPFA model is used on several synthetic and real-world datasets, and excellent results are obtained in prediction and fitting tasks. 2 Deep Dynamic Poisson Factorization Model Assume that V -dimensional sequentially observed count data x1 , x2 ,. . . , xT are represented as a V ? T count matrix X, a count data xvt ? {0, 1, . . . } is generated by the proposed DDPFA model as follows: PK (1) xvt ? P oisson( k=1 ?tk ?vk ?k ?v ) where the latent variables ?tk , ?vk , ?k and ?v are all positive variables. ?k represents the strength of the k th component and is treated as factor. ?tk represents the strength of the k th component at the tth time step. Feature-wise variable ?v captures the sparsity of the v th feature and ?k recognizes the importance of the k th component. According to the regular setting in [2, 13-16], the factorization is regarded as X ? P oisson(??T ). ? and ? can be absorbed into ?. In this paper, in order to extract the sparsity of v th feature or k th component and impose a feature-wise or temporal smoothness constraint, ? and ? are included in our model. The additive property of the Poisson distribution is used to decompose the observed count of xvt as K latent counts xvtk , k ? {0, . . . , K}. In this way, the model is rewritten as: PK (2) xvt = k=1 xvtk and xvtk ? P oisson(?tk ?vk ?k ?v ) Capturing the complicated temporal dependence of ? is the major purpose in this paper. In the previous work, transition via Gamma-Gamma-Poisson distribution structure is used, where ?t ? Gamma(?t?1 , ?t ) [1]. Non-homogeneous Poisson process over time to model the stochastic transition over different features is exploited in Poisson process models [17-19]. These models are then trained via MCMC or variational inference. 
However, it is rough for these models to catch complicated time dependence because of the weak points in their shallow structure in time dimension. In order to capture the complex time dependence over ?, a deep and long-time dependence model with a dynamic structure over time steps is proposed. The first layer over ? is as follows: (0) (0) ?t ? p(?t |ht?c , ..., ht ) 2 (3) where c is the size of a window for analysis, and the latent variables in the nth layer, n ? N , are indicated as follows: (n) (n) (n+1) (n+1) (N ) (N ) (N ) (N ) (4) ht ? p(ht |ht?c , ..., ht ) and ht ? p(ht |ht?c?1 , ..., ht?1 ) (n) where the implicit probability distribution p(ht |?) is modeled as a recurrent neural network. Proba(n) (n?1) (n?1) bility AutoEncoder with an auxiliary posterior distribution p(ht |ht , . . . , ht+c ), also modeled (n) as a neural network, is exploited in our training phase. ht is a K-dimensional latent variable in (N ) the nth layer at the tth time step. Specially, in the nth layer, ht is generated from a Gamma (N ) distribution with ht?c?1:t?1 as the prior information. This structure is illustrated in Figure 1. Finally, prior parameters are placed over other latent variables for Bayesian inference. These variables are generated as: ?vk ? Gamma(?? , ?? ) and?k ? Gamma(?? , ?? ) and ?v ? Gamma(?? , ?? ). Although Dirichlet distribution is often used as prior distribution [13, 14, 15] over ?vk in previous works, a Gamma distribution is exploited in our model due to the including of feature-wise parameter ?v and the purpose for obtaining feasible factor strength of ?k . In real world applications, like recommendation systems, the observed binary count data can be formulated by the proposed DDPFA model with a Bernoulli-Poisson link [1]. The distribution of b given ? is called Bernoulli-Poisson distribution as: b = 1(x > 1), x ? P oisson(?) and the linking b distribution is rewritten as: f (b|x, ?) = e??(1?b) (1 ? e?? ) . The conditional posterior distribution is then (x|b, ?) ? b ? P oisson+ (?), where P oisson+ (?) is a truncated Poisson distribution, so the MCMC or VI methods can be used to do inference. Non-count real-valued matrix can also be linked to a latent count matrix via a compound Poisson distribution or a Gamma belief network [20]. 3 Inference There are many classical inference approaches for Bayesian probabilistic model, such as Monte Carlo methods and variational inference. In the proposed method, variational inference is exploited because the implicit distribution is regarded as prior distribution over ?. Two stages of inference in our model are adopted: the first stage updates latent variables by the coordinate-ascent method with a fixed implicit distribution, and the parameters in neural networks are learned in the second one. Mean-field Approximation: In order to obtain mean-field VI, all variables are independent and governed by its own variational distribution. The joint distribution of the variational distribution is written as: Q (n) (n)? ? (5) q(?, ?, ?, ?, H) = v,t,n,k q(?vk |??vk )q(?k |?k? )q(?tk |?tk )q(?k |??k )q(htk |htk ) where y ? represents the prior variational parameter of the variable y. The variational parameters ? are fitted to minimize the KL divergence: ? = argmin? ? KL(p(?, ?, ?, ?, H|X)||q(?, ?, ?, ?, H|?)) (6) The variational distribution q(?|? ? ) is then used as a proxy for the posterior. The objective actually is equal to maximize the evidence low bound (ELBO) [19]. 
The optimization can be performed by a coordinate-ascent method or a variational-EM method. As a result, each variational parameter can be optimized iteratively while the remaining parameters of the model are set to fixed value. Due to Eq. 2, the conditional distribution of (xvt1 , . . . , xvtk ) is a multinomial while its parameter is normalized set of rates [19] and formulated as: P (xvt1 , . . . , xvtk )|?t , ?v , ?, ?v ? M ult(xvt? ; ?t ?v ??v / k ?tk ?vk ?k ?v ) (7) Given the auxiliary variables xvtk , the Poisson factorization model is a conditional conjugate model. The complete conditional of the latent variables is Gamma distribution and shown as: ?vk |?, ?, ?, ?, ?, X ? Gamma(?? + xv?k , ?? + ?k ?v ??k ) P ?k |?, ?, ?, ?, ?, X ? Gamma(?? + x??k , ?? + ??k v ?v ?vk ) P ?v |?, ?, ?, ?, ?, X ? Gamma(?? + xv?? , ?? + k ?k ?vk ??k ) 3 (8) Generally, these distributions are derived from conjugate properties between Poisson and Gamma distribution. The posterior distribution of ?tk described in Eq. 3 can be a Gamma distribution while (0) the prior ht?c:t is given as: P (9) ?tk |?, ?, ?, h(0) , ?, X ? Gamma(??tk + xv?k , ?? + ?k v ?v ?vk ) (0) (0) where ??tk is calculated through a recurrent neural network with (ht?c , ..., ht ) as its inputs. Then (0) the posterior distribution of htk described in Eq. 4 is given as: (0) htk |?, h(1) , ?, X ? Gamma(?h(0) + ?h(0) , ?h ) tk (10) tk where ?h(n) is the prior information given by the (n + 1)th layer, ?h(n) is the posterior information tk tk (?1) given by the (n ? 1)th layer. Here, the notation htk tk (n+1) (n+1) (n?1) (n?1) a recurrent neural network using (ht?c , ..., ht is equal to ?tk . ?h(n) is calculated through ) as its inputs. ?h(n) is calculated through tk a recurrent neural network using (ht+c , ..., ht ) as its inputs. Therefore, the distribution (0) (0) mentioned in Eq. 9 can be regarded as an implicit conditional distribution of ?tk given (ht?c , ..., ht ). (n+1) (n+1) And the distribution in Eq. 10 is an implicit distribution of ?h(n) given (ht?c , ..., ht ) and tk (n?1) (n?1) (ht+c , ..., ht ). Variational Inference: Mean field variational inference can approximate the latent variables while all parameters of a neural network are given. If the observed data satisfies xvt > 0, the auxiliary variables xvtk can be updated by: shp rte rte xvtk ? exp{? (?tk ) ? log?tk + ? (?shp k ) ? log?k (11) rte shp rte + ? (?shp vk ) ? log?vk + ? (?v ) ? log?v } where ? (?) is the digamma function. Variables with the superscript ?shp? indicate the shape parameter of Gamma distribution, and those with the superscript ?rte? are the rate parameter of it. This update comes from the expectation of the logarithm of a Gamma variable as hlog?i = ? (?shp ) ? log(?rte ). Here, ? is generated from a Gamma distribution and h?i represents the expectation of the variable. Calculation of the expectation of the variable, obeyed Gamma distribution, is noted as h?i = ?shp /?rte . Variables can be updated by mean-field method as: ?vk ? Gamma(?? + hxv?k i, ?? + h?k ih?v ih??k i) P ?k ? Gamma(?? + hx??k i, ?? + h??k i v h?v ih?vk i) P ?v ? Gamma(?? + hxv?? i, ?? + k h?k ih?vk ih??k i) (12) The latent variables in the deep structure can also be updated by mean-field method: P ?tk ? Gamma(??tk + hxv?k i, ?? + h?k i v h?v ih?vk i) (13) (n) htk ? Gamma(?h(n) + ?h(n) , ?h ) tk (14) tk n?1 where ?h(n) = ff eed (hhn+1 i), ?h(N ) = ff eed (hhN i), ?h(N ) = t?c?1:t?1 i) and ?h(n) = fback (hh t t t t fback (hhN t+c+1:t+1 i). ff eed (?) 
Probability AutoEncoder: This stage of the inference updates the parameters of the neural networks. We use the bottom layer as an example. Given all latent variables, the parameters can be fitted through p(θ_t | h^{(0)}_{t−c}, ..., h^{(0)}_t) and p(h^{(0)}_t | θ_{t+c}, ..., θ_t). Here p(θ_t | h^{(0)}_{t−c}, ..., h^{(0)}_t) = Gamma(α_{θ_t}, β_θ) is modeled by an RNN with inputs (h^{(0)}_{t−c}, ..., h^{(0)}_t) and output α_{θ_t}, and p(h^{(0)}_t | θ_{t+c}, ..., θ_t) is also modeled as an RNN, with inputs (θ_{t+c}, ..., θ_t) and output b_{h^{(0)}_t}. With the posterior distribution from Θ to H^{(0)} and the prior distribution from H^{(0)} to Θ, the probability of Θ should be maximized. The loss function of these two neural networks is:

  max_W { ∫ p(Θ | H^{(0)}) p(H^{(0)} | Θ) dH^{(0)} }   (15)

where W denotes the parameters in the neural networks. Because the integral in Eq. 15 is intractable, a new loss function introduces auxiliary variational variables H^{(0)′}. Assuming that H^{(0)′} is generated by Θ, the optimization can be regarded as maximizing the probability of Θ while minimizing the difference between H^{(0)′} and H^{(0)}:

  max_W { p(Θ | H^{(0)}) }  and  min_W { KL( p(H^{(0)′} | Θ) ‖ p(H^{(0)} | H^{(1)}) ) }

Then, approximating the variables generated from a distribution by their expectations, the loss function, similar to that of the variational AutoEncoder [21], can be simplified to:

  min_W { ‖ ⟨p(H^{(0)′} | Θ)⟩ − ⟨p(H^{(0)} | H^{(1)})⟩ ‖² + ‖ Θ − ⟨p(Θ | H^{(0)})⟩ ‖² }   (16)

Since only a few samples are drawn from any given distribution, which means sampling all latent variables would be costly and of little use, differentiable variational Bayes is not suitable here. As a result, we focus more on fitting the data than on generating data. In our objective, the first term, a regularization, encourages the data to be reconstructed from the latent variables, and the second term encourages the decoder to fit the data. The parameters in the networks for the nth and (n+1)th layers are trained with the loss function:

  min_W { ‖ ⟨p(H^{(n+1)′} | H^{(n)})⟩ − ⟨p(H^{(n)} | H^{(n+1)})⟩ ‖² + ‖ H^{(n)} − ⟨p(H^{(n)} | H^{(n+1)})⟩ ‖² }   (17)

In order to make the convergence more stable, the Θ term in the first layer is collapsed into X by using the fixed latent variables approximated by mean-field VI, and the loss function becomes:

  min_W { ‖ ⟨p(H^{(0)′} | Θ)⟩ − ⟨p(H^{(0)} | H^{(1)})⟩ ‖² + ‖ X − ⟨ζ⟩⟨ρ⟩⟨Φ⟩⟨p(Θ | H^{(0)})⟩ ‖² }   (18)

After the layer-wise training, all parameters in the neural networks are jointly trained using the fine-tuning trick of the stacked AutoEncoder [22].

4 Experiments

In this section, four multi-dimensional synthetic datasets and five real-world datasets are used to examine the performance of the proposed model. The results of three existing methods (PGDS, LSTM, and PFA) are compared with those of our model. PGDS is the dynamic Poisson–Gamma system mentioned in Section 1, and LSTM is a classical time sequence model. In order to verify that the deep relationships learned by the deep structure improve performance, a simple PFA model is also included as a baseline. All hyperparameters of PGDS are set as in [2]. We run 1000 Gibbs sampling iterations for PGDS, 100 mean-field VI iterations for PFA, and 400 epochs for LSTM. The parameters in the proposed DDPFA model are set as follows: α_{(φ,ρ,ζ)} = 1, β_{(φ,ρ,ζ)} = 2, α_{(θ,h)} = 1, β_{(θ,h)} = 1. The number of iterations is set to 100.
Stochastic gradient descent for the neural networks is run for 10 epochs in each iteration. The size of the window is 4. The hyperparameters of PFA are set the same as in our model. Data in the last time step are used as the target in the prediction task. The mean squared error (MSE) between the ground truth and the estimated value, and the predicted mean squared error (PMSE) between the ground truth and the predicted value at the next time step, are used to evaluate the performance of each model.

4.1 Synthetic Datasets

The multi-dimensional synthetic datasets are obtained from the following functions, where the subscript stands for the index of the dimension:

SDS1: f₁(t) = f₂(t) = t, f₃(t) = f₄(t) = t + 1 on the interval t = [1 : 1 : 6].

SDS2: f₁(t) = t (mod 2), f₂(t) = 2t (mod 2) + 2, f₃(t) = t on the interval t = [1 : 1 : 20].

SDS3: f₁(t) = f₂(t) = t, f₃(t) = f₄(t) = t + 1, f₅(t) = I(4 | t) on the interval t = [1 : 1 : 20], where I is an indicator function.

SDS4: f₁(t) = t (mod 2), f₂(t) = 2t (mod 2) + 2, f₃(t) = t (mod 10) on the interval t = [1 : 1 : 100].

The number of factors is set to K = 3, and the number of layers is 2. Both fitting and prediction tasks are performed with each model. The LSTM has 4 hidden layers, each of size 20.

Table 1: The results on the synthetic data

Data  Measure  DDPFA        PGDS         LSTM         PFA
SDS1  MSE      0.15 ± 0.01  1.48 ± 0.00  2.02 ± 0.23  1.61 ± 0.00
      PMSE     2.07 ± 0.02  5.96 ± 0.00  2.94 ± 0.31  –
SDS2  MSE      0.06 ± 0.01  3.38 ± 0.00  1.83 ± 0.04  4.42 ± 0.00
      PMSE     2.01 ± 0.02  3.50 ± 0.01  2.41 ± 0.06  –
SDS3  MSE      0.10 ± 0.02  1.62 ± 0.00  1.13 ± 0.06  1.34 ± 0.00
      PMSE     2.14 ± 0.04  4.33 ± 0.01  3.03 ± 0.05  –
SDS4  MSE      0.15 ± 0.03  2.92 ± 0.00  4.30 ± 0.26  0.25 ± 0.00
      PMSE     1.48 ± 0.04  6.41 ± 0.01  4.67 ± 0.24  –

Table 1 shows that DDPFA clearly has the best performance in the fitting and prediction tasks on all datasets. Note that the complex relationships learned across time steps help the model catch more temporal patterns, as the comparison of DDPFA, PGDS, and PFA indicates. LSTM performs worse on SDS4 because the noise in the synthetic data and the long time range make it difficult for the network to memorize enough information.
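For reproducibility, here is a short sketch generating two of the synthetic sequences above (our literal reading of the printed definitions, taken as-is; SDS2 and SDS3 are built analogously):

```python
import numpy as np

# Synthetic count sequences, read literally from the definitions above
# (illustrative; not the authors' data-generation code).
def make_sds1():
    t = np.arange(1, 7)
    return np.stack([t, t, t + 1, t + 1], axis=1)              # T=6, V=4

def make_sds4():
    t = np.arange(1, 101)
    return np.stack([t % 2, (2 * t) % 2 + 2, t % 10], axis=1)  # T=100, V=3

print(make_sds1().shape, make_sds4().shape)   # (6, 4) (100, 3)
```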
4.2 Real-world Datasets

Five real-world datasets are used, as follows:

Integrated Crisis Early Warning System (ICEWS): ICEWS is an international-relations event dataset extracted from news corpora, used in [2]. We treated undirected pairs of countries i–j as features and created a count matrix for the year 2003. The number of events for each pair during each day-long time step is counted, and all pairs with fewer than twenty-five total events are discarded, leaving T = 365, V = 6197, and 475646 events in the matrix.

NIPS corpus (NIPS): The NIPS corpus contains the text of every NIPS conference paper from 1987 to 2003. We created a single count matrix with one column per year. The dataset is downloaded from Gal's page¹, with T = 17, V = 14036, and 3280697 events in the matrix.

Ebola corpus (EBOLA)²: The EBOLA corpus contains daily data for the 2014 Ebola outbreak in West Africa, from Mar 22, 2014 to Jan 5, 2015; each column represents the cases or deaths in a West African country. After data cleaning, the dataset has T = 122, V = 16.

International Disaster (ID)³: The International Disaster dataset contains essential core data on the occurrence and effects of over 22,000 mass disasters in the world from 1900 to the present day. A count matrix with T = 115 and V = 12 is built from the disasters that occurred in Europe from 1902 to 2016, classified according to their disaster types.

Annual Sheep Population (ASP)⁴: The Annual Sheep Population dataset contains the yearly sheep population in England & Wales from 1867 to 1939. The data matrix has T = 73, V = 1.

¹ http://ai.stanford.edu/gal/data.html
² https://github.com/cmrivers/ebola/blob/master/country_timeseries.csv
³ http://www.emdat.be/
⁴ https://datamarket.com/data/list/?q=provider:tsdl

Table 2: The results on the real-world data

Data   Measure  DDPFA           PGDS            LSTM                PFA
ICEWS  MSE      3.05 ± 0.02     3.21 ± 0.01     4.53 ± 0.04         3.70 ± 0.01
       PMSE     0.96 ± 0.03     0.97 ± 0.02     6.30 ± 0.03         –
NIPS   MSE      51.14 ± 0.03    54.71 ± 0.08    1053.12 ± 39.01     69.05 ± 0.43
       PMSE     289.21 ± 0.02   337.60 ± 0.10   1728.04 ± 38.42     –
EBOLA  MSE      381.82 ± 0.13   516.57 ± 0.01   4892.34 ± 10.21     1493.32 ± 0.21
       PMSE     490.32 ± 0.12   1071.01 ± 0.01  5839.26 ± 11.92     –
ID     MSE      1.59 ± 0.01     3.45 ± 0.00     11.19 ± 1.32        4.41 ± 0.01
       PMSE     5.18 ± 0.01     10.44 ± 0.00    10.37 ± 1.54        –
ASP    MSE      14.17 ± 0.02    2128.47 ± 0.02  17962.47 ± 14.12    388.02 ± 0.01
       PMSE     21.23 ± 0.04    760.42 ± 0.02   21324.72 ± 17.48    –

Figure 2: The factor strengths at each time step for the ICEWS data ((a) PGDS, (b) DDPFA); the data are normalized at each time step. In (a), the result of PGDS shows that the factors are shrunk to a few local time steps. In (b), the result of DDPFA shows that the factors are not active only locally.

We set K = 3 for the ID and ASP datasets, and K = 10 for the others. The size of the hidden layers of the LSTM is 40. The remaining parameter settings are the same as in the experiment above. The results are shown in Table 2.

Table 2 shows the results of the four models on the five datasets. The proposed DDPFA model performs satisfyingly in most experiments, although its result in the ICEWS prediction task is not good enough; there, the smoothed data obtained from the transition matrix in PGDS perform well. However, on the EBOLA and ASP datasets, PGDS fails to catch the complicated time dependence. And it is a tough challenge for an LSTM network to memorize enough useful patterns when its input includes long-time patterns or the dimensionality of the data is particularly high.

According to Figure 2, the factors learned by our model are not activated only locally, in contrast to PGDS. Naturally, in real-world data it is implausible that only one factor is active at a given time step. For example, in the ICEWS dataset, the connection between Israel and the Occupied Palestinian Territory remained strong even during the Iraq War and other events. Figure 2(a) reveals that several factors at certain time steps are not captured by PGDS. Figure 3 shows the evolution of two meaningful factors in ICEWS; these two factors indicate, respectively, the Israeli–Palestinian conflict and the six-party talks. The long-time activation of factors is visible in that figure, since the DDPFA model can capture weak strength over time. In Table 3, we show the performance of our model with different sizes. From the table, performance cannot be improved markedly by adding more layers or by adding more variables in the upper layers. It is also noticeable that expanding the dimension of the bottom layer is more useful than expanding the upper layers. The results reveal two problems of the proposed DDPFA: "pruning" and the limited usefulness of adding network layers.
Figure 3: The top two factors of the ICEWS data produced by DDPFA. In (a), "Japan–Russian Federation", "North Korea–United States", "Russian Federation–United States", "South Korea–United States", and "China–Russian Federation" are the largest features by loading weight; this factor stands for the six-party talks and related events. In (b), "Israel–Occupied Palestinian Territory", "Israel–United States", and "Occupied Palestinian Territory–United States" are the largest features; this factor stands for the Israeli–Palestinian conflict.

Table 3: MSE on the real datasets with different sizes

Size                          ICEWS  NIPS   EBOLA
10-10-10                      2.94   51.24  382.17
10-10-10 (ladder structure)   2.88   49.81  379.08
10-10                         3.05   51.14  381.82
32-32-32                      2.95   50.12  379.64
32-32-32 (ladder structure)   2.86   49.26  377.81
32-64-64                      2.93   50.18  380.01
64-32-32                      2.90   50.04  378.87

[25] observes that hierarchical latent variable models do not take advantage of their structure, and concludes that using only the bottom latent layer of a hierarchical variational autoencoder should be enough. In order to address this problem, a ladder-like architecture, in which each layer combines independent variables with latent variables depending on the upper layers, is used in our model. As Table 3 shows, the ladder architecture achieves noticeably better results. The other problem, "pruning", is a phenomenon where the optimizer severs connections between most of the latent variables and the data [24]. In our experiments, we noticed that some dimensions in the latent layers contain only noise. This problem is also found in differentiable variational Bayes, where it has been addressed with an auxiliary MCMC structure [24]. We therefore believe this problem is caused by the mean-field variational inference used in our model, and we hope it can be solved with other inference methods.

5 Summary

A new model, called DDPFA, is proposed to capture long-time and complicated dependence in time-series count data. Inference in DDPFA is based on a variational method for estimating the latent variables and on learning the parameters of the neural networks. To evaluate the proposed model, four multi-dimensional synthetic datasets and five real-world datasets (ICEWS, the NIPS corpus, EBOLA, International Disaster, and Annual Sheep Population) are used, and the performance of three existing methods (PGDS, LSTM, and PFA) is compared. According to our experimental results, DDPFA has better effectiveness and interpretability in sequential count analysis.

References

[1] A. Acharya, J. Ghosh, & M. Zhou. Nonparametric Bayesian factor analysis for dynamic count matrices. AISTATS, 2015.
[2] A. Schein, M. Zhou, & H. Wallach. Poisson–Gamma dynamical systems. NIPS, 2016.
[3] A. Ahmed & E. Xing. Dynamic non-parametric mixture models and the recurrent Chinese restaurant process. SDM, 2008.
[4] D. M. Blei, T. L. Griffiths, M. I. Jordan, & J. B. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. NIPS, 2004.
[5] J. Paisley, C. Wang, D. M. Blei, & M. I. Jordan. Nested hierarchical Dirichlet processes. PAMI, 2015.
[6] T. D. Bui, D. Hernández-Lobato, Y. Li, et al. Deep Gaussian processes for regression using approximate expectation propagation. ICML, 2016.
[7] T. D. Bui, J. Yan, & R. E. Turner. A unifying framework for sparse Gaussian process approximation using power expectation propagation. arXiv:1605.07066.
[8] R. Ranganath, L. Tang, L. Charlin, & D. M. Blei. Deep exponential families. AISTATS, 2014.
[9] Z.
Gan, C. Chen, R. Henao, D. Carlson, & L. Carin. Scalable deep Poisson factor analysis for topic modeling. ICML, 2015.
[10] Z. Gan, R. Henao, D. Carlson, & L. Carin. Learning deep sigmoid belief networks with data augmentation. AISTATS, 2015.
[11] H. Larochelle & S. Lauly. A neural autoregressive topic model. NIPS, 2012.
[12] R. Henao, Z. Gan, J. Lu, & L. Carin. Deep Poisson factor modeling. NIPS, 2015.
[13] M. Zhou & L. Carin. Augment-and-conquer negative binomial processes. NIPS, pages 2546–2554, 2012.
[14] M. Zhou, L. Hannah, D. Dunson, & L. Carin. Beta-negative binomial process and Poisson factor analysis. AISTATS, 2012.
[15] M. Zhou & L. Carin. Negative binomial process count and mixture modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):307–320, 2015.
[16] M. Zhou. Nonparametric Bayesian negative binomial factor analysis. arXiv:1604.07464.
[17] S. A. Hosseini, K. Alizadeh, A. Khodadadi, et al. Recurrent Poisson factorization for temporal recommendation. KDD, 2017.
[18] P. Gopalan, J. M. Hofman, & D. M. Blei. Scalable recommendation with hierarchical Poisson factorization. UAI, 2015.
[19] P. Gopalan, J. M. Hofman, & D. M. Blei. Scalable recommendation with Poisson factorization. arXiv:1311.1704.
[20] M. Zhou, Y. Cong, & B. Chen. Augmentable gamma belief networks. Journal of Machine Learning Research, 17(163):1–44, 2016.
[21] D. P. Kingma & M. Welling. Auto-encoding variational Bayes. ICLR, 2014.
[22] Y. Bengio, P. Lamblin, D. Popovici, & H. Larochelle. Greedy layer-wise training of deep networks. NIPS, 2006.
[23] Y. Cong, B. Chen, H. Liu, & M. Zhou. Deep latent Dirichlet allocation with topic-layer-adaptive stochastic gradient Riemannian MCMC. ICML, 2017.
[24] S. Zhao, J. Song, & S. Ermon. Learning hierarchical features from generative models. ICML, 2017.
[25] M. Hoffman. Learning deep latent Gaussian models with Markov chain Monte Carlo. ICML, 2017.
6,373
6,765
Positive-Unlabeled Learning with Non-Negative Risk Estimator

Ryuichi Kiryo^{1,2}, Gang Niu^{1,2}, Marthinus C. du Plessis, Masashi Sugiyama^{2,1}
^1 The University of Tokyo, 7-3-1 Hongo, Tokyo 113-0033, Japan
^2 RIKEN, 1-4-1 Nihonbashi, Tokyo 103-0027, Japan
{ kiryo@ms., gang@ms., sugi@ }k.u-tokyo.ac.jp

Abstract

From only positive (P) and unlabeled (U) data, a binary classifier could be trained with PU learning, in which the state of the art is unbiased PU learning. However, if its model is very flexible, empirical risks on training data will go negative, and we will suffer from serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning: when getting minimized, it is more robust against overfitting, and thus we are able to use very flexible models (such as deep neural networks) given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, and bound the estimation error of the resulting empirical risk minimizer. Experiments demonstrate that our risk estimator fixes the overfitting problem of its unbiased counterparts.

1 Introduction

Positive-unlabeled (PU) learning can be dated back to [1, 2, 3] and has been well studied since then. It mainly focuses on binary classification applied to retrieval and novelty or outlier detection tasks [4, 5, 6, 7], while it also has applications in matrix completion [8] and sequential data [9, 10]. Existing PU methods can be divided into two categories based on how U data is handled. The first category (e.g., [11, 12]) identifies possible negative (N) data in U data, and then performs ordinary supervised (PN) learning; the second (e.g., [13, 14]) regards U data as N data with smaller weights. The former heavily relies on the heuristics for identifying N data; the latter heavily relies on good choices of the weights of U data, which is computationally expensive to tune.

In order to avoid tuning the weights, unbiased PU learning comes into play as a subcategory of the second category. The milestone is [4], which regards each U data point as weighted P and N data simultaneously. It might lead to unbiased risk estimators, if we unrealistically assume that the class-posterior probability is one for all P data (see Footnote 1). A breakthrough in this direction is [15], which proposed the first unbiased risk estimator, and a more general estimator was suggested in [16] as a common foundation. The former is unbiased but non-convex for loss functions satisfying some symmetric condition; the latter is always unbiased, and it is further convex for loss functions meeting some linear-odd condition [17, 18]. PU learning based on these unbiased risk estimators is the current state of the art.

However, the unbiased risk estimators will give negative empirical risks, if the model being trained is very flexible. For the general estimator in [16], there exist three partial risks in the total risk (see Eq. (2) defined later); in particular, it has a negative risk regarding P data as N data to cancel the bias caused by regarding U data as N data. The worst case is that the model can realize any measurable function and the loss function is not upper bounded, so that the empirical risk is not lower bounded. This needs to be fixed since the original risk, which is the target to be estimated, is non-negative.

Footnote 1: It implies the P and N class-conditional densities have disjoint support sets, and then any P and N data (as the test data) can be perfectly separated by a fixed classifier that is sufficiently flexible.
To this end, we propose a novel non-negative risk estimator that follows and improves on the state-of-the-art unbiased risk estimators mentioned above. This estimator can be used for two purposes. First, given some validation data (which are also PU data), we can use our estimator to evaluate the risk; for this case it is biased yet optimal, and for some symmetric losses, the mean-squared-error reduction is guaranteed. Second, given some training data, we can use our estimator to train binary classifiers; for this case its risk minimizer possesses an estimation error bound of the same order as the risk minimizers corresponding to its unbiased counterparts [15, 16, 19].

In addition, we propose a large-scale PU learning algorithm for minimizing the unbiased and non-negative risk estimators. This algorithm accepts any surrogate loss and is based on stochastic optimization, e.g., [20]. Note that [21] is the only existing large-scale PU algorithm, but it only accepts a single surrogate loss from [16] and is based on sequential minimal optimization [22].

The rest of this paper is organized as follows. In Section 2 we review unbiased PU learning, and in Section 3 we propose non-negative PU learning. Theoretical analyses are carried out in Section 4, and experimental results are discussed in Section 5. Conclusions are given in Section 6.

2 Unbiased PU learning

In this section, we review unbiased PU learning [15, 16].

Problem settings. Let $X \in \mathbb{R}^d$ and $Y \in \{\pm 1\}$ ($d \in \mathbb{N}$) be the input and output random variables. Let $p(x, y)$ be the underlying joint density of $(X, Y)$, $p_p(x) = p(x \mid Y = +1)$ and $p_n(x) = p(x \mid Y = -1)$ be the P and N marginals (a.k.a. the P and N class-conditional densities), $p(x)$ be the U marginal, $\pi_p = p(Y = +1)$ be the class-prior probability, and $\pi_n = p(Y = -1) = 1 - \pi_p$. $\pi_p$ is assumed known throughout the paper; it can be estimated from P and U data [23, 24, 25, 26]. Consider the two-sample problem setting of PU learning [5]: two sets of data are sampled independently from $p_p(x)$ and $p(x)$ as $\mathcal{X}_p = \{x^p_i\}_{i=1}^{n_p} \sim p_p(x)$ and $\mathcal{X}_u = \{x^u_i\}_{i=1}^{n_u} \sim p(x)$, and a classifier needs to be trained from $\mathcal{X}_p$ and $\mathcal{X}_u$ (see Footnote 2). If it is PN learning as usual, $\mathcal{X}_n = \{x^n_i\}_{i=1}^{n_n} \sim p_n(x)$ rather than $\mathcal{X}_u$ would be available and a classifier could be trained from $\mathcal{X}_p$ and $\mathcal{X}_n$.

Footnote 2: $\mathcal{X}_p$ is a set of independent data and so is $\mathcal{X}_u$, but $\mathcal{X}_p \cup \mathcal{X}_u$ does not need to be such a set.

Risk estimators. Unbiased PU learning relies on unbiased risk estimators. Let $g : \mathbb{R}^d \to \mathbb{R}$ be an arbitrary decision function, and $\ell : \mathbb{R} \times \{\pm 1\} \to \mathbb{R}$ be the loss function, such that the value $\ell(t, y)$ means the loss incurred by predicting an output $t$ when the ground truth is $y$. Denote by $R_p^+(g) = \mathbb{E}_p[\ell(g(X), +1)]$ and $R_n^-(g) = \mathbb{E}_n[\ell(g(X), -1)]$, where $\mathbb{E}_p[\cdot] = \mathbb{E}_{X \sim p_p}[\cdot]$ and $\mathbb{E}_n[\cdot] = \mathbb{E}_{X \sim p_n}[\cdot]$. Then, the risk of $g$ is $R(g) = \mathbb{E}_{(X,Y) \sim p(x,y)}[\ell(g(X), Y)] = \pi_p R_p^+(g) + \pi_n R_n^-(g)$. In PN learning, thanks to the availability of $\mathcal{X}_p$ and $\mathcal{X}_n$, $R(g)$ can be approximated directly by

$\hat{R}_{pn}(g) = \pi_p \hat{R}_p^+(g) + \pi_n \hat{R}_n^-(g)$,  (1)

where $\hat{R}_p^+(g) = (1/n_p) \sum_{i=1}^{n_p} \ell(g(x^p_i), +1)$ and $\hat{R}_n^-(g) = (1/n_n) \sum_{i=1}^{n_n} \ell(g(x^n_i), -1)$. In PU learning, $\mathcal{X}_n$ is unavailable, but $R_n^-(g)$ can be approximated indirectly, as shown in [15, 16]. Denote by $R_p^-(g) = \mathbb{E}_p[\ell(g(X), -1)]$ and $R_u^-(g) = \mathbb{E}_{X \sim p(x)}[\ell(g(X), -1)]$. As $\pi_n p_n(x) = p(x) - \pi_p p_p(x)$, we can obtain that $\pi_n R_n^-(g) = R_u^-(g) - \pi_p R_p^-(g)$, and $R(g)$ can be approximated indirectly by

$\hat{R}_{pu}(g) = \pi_p \hat{R}_p^+(g) - \pi_p \hat{R}_p^-(g) + \hat{R}_u^-(g)$,  (2)

where $\hat{R}_p^-(g) = (1/n_p) \sum_{i=1}^{n_p} \ell(g(x^p_i), -1)$ and $\hat{R}_u^-(g) = (1/n_u) \sum_{i=1}^{n_u} \ell(g(x^u_i), -1)$.
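To make the two estimators concrete, here is a minimal NumPy sketch of Eqs. (1) and (2); the function names and the vectorized loss(t, y) interface are our own illustration, not the authors' code.

```python
import numpy as np

def risk_pn_hat(g, Xp, Xn, pi_p, loss):
    # Eq. (1): supervised PN estimate from P and N samples
    return pi_p * loss(g(Xp), +1).mean() + (1.0 - pi_p) * loss(g(Xn), -1).mean()

def risk_pu_hat(g, Xp, Xu, pi_p, loss):
    # Eq. (2): unbiased PU estimate; the N-data term is replaced using
    # pi_n * R_n^-(g) = R_u^-(g) - pi_p * R_p^-(g)
    Rp_plus  = loss(g(Xp), +1).mean()   # \hat{R}_p^+(g)
    Rp_minus = loss(g(Xp), -1).mean()   # \hat{R}_p^-(g)
    Ru_minus = loss(g(Xu), -1).mean()   # \hat{R}_u^-(g)
    return pi_p * Rp_plus - pi_p * Rp_minus + Ru_minus

# example surrogate: the sigmoid loss adopted later in Section 3.3
sigmoid_loss = lambda t, y: 1.0 / (1.0 + np.exp(t * y))
```

Note that risk_pu_hat can go negative exactly when its last two terms underestimate $\pi_n R_n^-(g)$; this is the failure mode the paper addresses below.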
The empirical risk estimators in Eqs. (1) and (2) are unbiased and consistent w.r.t. all popular loss functions (see Footnote 3). When they are used for evaluating the risk (e.g., in cross-validation), $\ell$ is by default the zero-one loss, namely $\ell_{01}(t, y) = (1 - \mathrm{sign}(ty))/2$; when used for training, $\ell_{01}$ is replaced with a surrogate loss [27]. In particular, [15] showed that if $\ell$ satisfies a symmetric condition:

$\ell(t, +1) + \ell(t, -1) = 1$,  (3)

we will have

$\hat{R}_{pu}(g) = 2\pi_p \hat{R}_p^+(g) + \hat{R}_u^-(g) - \pi_p$,  (4)

which can be minimized by separating $\mathcal{X}_p$ and $\mathcal{X}_u$ with ordinary cost-sensitive learning. An issue is that $\hat{R}_{pu}(g)$ in (4) must be non-convex in $g$, since no $\ell(t, y)$ in (3) can be convex in $t$. [16] showed that $\hat{R}_{pu}(g)$ in (2) is convex in $g$, if $\ell(t, y)$ is convex in $t$ and meets a linear-odd condition [17, 18]:

$\ell(t, +1) - \ell(t, -1) = -t$.  (5)

Let $g$ be parameterized by $\theta$; then (5) leads to a convex optimization problem so long as $g$ is linear in $\theta$, for which the globally optimal solution can be obtained. Eq. (5) is not only sufficient but also necessary for the convexity, if $\ell$ is unary, i.e., $\ell(t, -1) = \ell(-t, +1)$.

Footnote 3: The consistency here means that for fixed $g$, $\hat{R}_{pn}(g) \to R(g)$ and $\hat{R}_{pu}(g) \to R(g)$ as $n_p, n_n, n_u \to \infty$.

Justification. Thanks to the unbiasedness, we can study estimation error bounds (EEB). Let $\mathcal{G}$ be the function class, and $\hat{g}_{pn}$ and $\hat{g}_{pu}$ be the empirical risk minimizers of $\hat{R}_{pn}(g)$ and $\hat{R}_{pu}(g)$. [19] proved that the EEB of $\hat{g}_{pu}$ is tighter than the EEB of $\hat{g}_{pn}$ when $\pi_p/\sqrt{n_p} + 1/\sqrt{n_u} < \pi_n/\sqrt{n_n}$, if (a) $\ell$ satisfies (3) and is Lipschitz continuous; (b) the Rademacher complexity of $\mathcal{G}$ decays in $O(1/\sqrt{n})$ for data of size $n$ drawn from $p(x)$, $p_p(x)$ or $p_n(x)$ (see Footnote 4). In other words, under mild conditions, PU learning is likely to outperform PN learning when $\pi_p/\sqrt{n_p} + 1/\sqrt{n_u} < \pi_n/\sqrt{n_n}$. This phenomenon has been observed in experiments [19] and is illustrated in Figure 1(a).

Footnote 4: Let $\sigma_1, \ldots, \sigma_n$ be $n$ Rademacher variables; the Rademacher complexity of $\mathcal{G}$ for $\mathcal{X}$ of size $n$ drawn from $q(x)$ is defined by $\mathfrak{R}_{n,q}(\mathcal{G}) = \mathbb{E}_{\mathcal{X}} \mathbb{E}_{\sigma_1,\ldots,\sigma_n} [\sup_{g \in \mathcal{G}} \frac{1}{n} \sum_{x_i \in \mathcal{X}} \sigma_i g(x_i)]$ [28]. For any fixed $\mathcal{G}$ and $q$, $\mathfrak{R}_{n,q}(\mathcal{G})$ still depends on $n$ and should decrease with $n$.

3 Non-negative PU learning

In this section, we propose the non-negative risk estimator and the large-scale PU algorithm.

3.1 Motivation

Let us look inside the aforementioned justification of unbiased PU (uPU) learning. Intuitively, the advantage comes from the transformation $\pi_n R_n^-(g) = R_u^-(g) - \pi_p R_p^-(g)$. When we approximate $\pi_n R_n^-(g)$ from N data $\{x^n_i\}_{i=1}^{n_n}$, the convergence rate is $O_p(\pi_n/\sqrt{n_n})$, where $O_p$ denotes the order in probability; when we approximate $R_u^-(g) - \pi_p R_p^-(g)$ from P data $\{x^p_i\}_{i=1}^{n_p}$ and U data $\{x^u_i\}_{i=1}^{n_u}$, the convergence rate becomes $O_p(\pi_p/\sqrt{n_p} + 1/\sqrt{n_u})$. As a result, we might benefit from a tighter uniform deviation bound when $\pi_p/\sqrt{n_p} + 1/\sqrt{n_u} < \pi_n/\sqrt{n_n}$.

However, the critical assumption on the Rademacher complexity is indispensable; otherwise it will be difficult for the EEB of $\hat{g}_{pu}$ to be tighter than the EEB of $\hat{g}_{pn}$. If $\mathcal{G} = \{g \mid \|g\|_\infty \leq C_g\}$ where $C_g > 0$ is a constant, i.e., it has all measurable functions with some bounded norm, then $\mathfrak{R}_{n,q}(\mathcal{G}) = O(1)$ for any $n$ and $q(x)$ and all bounds become trivial; moreover, if $\ell$ is not bounded from above, $\hat{R}_{pu}(g)$ becomes not bounded from below, i.e., it may diverge to $-\infty$. Thus, in order to obtain a high-quality $\hat{g}_{pu}$, $\mathcal{G}$ cannot be too complex, or equivalently the model of $g$ cannot be too flexible. This argument is supported by an experiment as illustrated in Figure 1(b). A multilayer perceptron was trained for separating the even and odd digits of MNIST hand-written digits [29].
This model is so flexible that the number of parameters is 500 times more than the total number of P and N data. From Figure 1(b) we can see: (A) on training data, the risks of uPU and PN both decrease, and uPU is faster than PN; (B) on test data, the risk of PN decreases, whereas the risk of uPU does not; the risk of uPU is lower at the beginning but higher at the end than that of PN. To sum up, the overfitting problem of uPU is serious, which evidences that in order to obtain a high-quality $\hat{g}_{pu}$, the model of $g$ cannot be too flexible.

3.2 Non-negative risk estimator

Nevertheless, we sometimes have no choice: we are interested in using flexible models, while labeling more data is out of our control. Can we alleviate the overfitting problem with neither changing the model nor labeling more data?

Figure 1: Illustrative experimental results. (a) Plain linear model; (b) multilayer perceptron (MLP). Both panels plot the risk w.r.t. the surrogate loss against the training epoch (0 to 500) for PN, uPU, and nnPU on training and test data. The dataset is MNIST; even/odd digits are regarded as the P/N class, and $\pi_p \approx 1/2$; $n_p = 100$ and $n_n = 50$ for PN learning; $n_p = 100$ and $n_u = 59{,}900$ for unbiased PU (uPU) and non-negative PU (nnPU) learning. The model is a plain linear model (784-1) in (a) and an MLP (784-100-1) with ReLU in (b); it was trained by Algorithm 1, where the loss $\ell$ is $\ell_{\mathrm{sig}}$, the optimization algorithm $\mathcal{A}$ is [20], with $\beta = 1/2$ for uPU, and $\beta = 0$ and $\gamma = 1$ for nnPU. Solid curves are $\hat{R}_{pn}(g)$ on test data where $g \in \{\hat{g}_{pn}, \hat{g}_{pu}, \tilde{g}_{pu}\}$, and dashed curves are $\hat{R}_{pn}(\hat{g}_{pn})$, $\hat{R}_{pu}(\hat{g}_{pu})$ and $\tilde{R}_{pu}(\tilde{g}_{pu})$ on training data. Note that nnPU is identical to uPU in (a).

The answer is affirmative. Note that $\hat{R}_{pu}(\hat{g}_{pu})$ keeps decreasing and goes negative. This should be fixed since $R(g) \geq 0$ for any $g$. Specifically, it holds that $R_u^-(g) - \pi_p R_p^-(g) = \pi_n R_n^-(g) \geq 0$, but $\hat{R}_u^-(g) - \pi_p \hat{R}_p^-(g) \geq 0$ is not always true, which is a potential reason for uPU to overfit. Based on this key observation, we propose a non-negative risk estimator for PU learning:

$\tilde{R}_{pu}(g) = \pi_p \hat{R}_p^+(g) + \max\{0, \hat{R}_u^-(g) - \pi_p \hat{R}_p^-(g)\}$.  (6)

Let $\tilde{g}_{pu} = \arg\min_{g \in \mathcal{G}} \tilde{R}_{pu}(g)$ be the empirical risk minimizer of $\tilde{R}_{pu}(g)$. We refer to the process of obtaining $\tilde{g}_{pu}$ as non-negative PU (nnPU) learning. The implementation of nnPU will be given in Section 3.3, and theoretical analyses of $\tilde{R}_{pu}(g)$ and $\tilde{g}_{pu}$ will be given in Section 4.

Again, from Figure 1(b) we can see: (A) on training data, the risk of nnPU first decreases and then becomes more and more flat, so that the risk of nnPU is closer to the risk of PN and farther from that of uPU; in short, the risk of nnPU does not go down with uPU after a certain epoch; (B) on test data, the tendency is similar, but the risk of nnPU does not go up with uPU; (C) at the end, nnPU achieves the lowest risk on test data. In summary, nnPU works by explicitly constraining the training risk of uPU to be non-negative.

3.3 Implementation

A list of popular loss functions and their properties is shown in Table 1. Let $g$ be parameterized by $\theta$.
If $g$ is linear in $\theta$, the losses satisfying (5) result in convex optimizations. However, if $g$ needs to be flexible, it will be highly nonlinear in $\theta$; then the losses satisfying (5) are not advantageous over others, since the optimizations are anyway non-convex. In [15], the ramp loss was used and $\hat{R}_{pu}(g)$ was minimized by the concave-convex procedure [30]. This solver is fairly sophisticated, and if we replace $\hat{R}_{pu}(g)$ with $\tilde{R}_{pu}(g)$, it will be more difficult to implement. To this end, we propose to use the sigmoid loss $\ell_{\mathrm{sig}}(t, y) = 1/(1 + \exp(ty))$: its gradient is everywhere non-zero and $\tilde{R}_{pu}(g)$ can be minimized by off-the-shelf gradient methods.

In front of big data, we should scale PU learning up by stochastic optimization. Minimizing $\hat{R}_{pu}(g)$ is embarrassingly parallel while minimizing $\tilde{R}_{pu}(g)$ is not, since $\hat{R}_{pu}(g)$ is point-wise but $\tilde{R}_{pu}(g)$ is not due to the max operator. That being said, $\max\{0, \hat{R}_u^-(g; \mathcal{X}_u) - \pi_p \hat{R}_p^-(g; \mathcal{X}_p)\}$ is no greater than $(1/N) \sum_{i=1}^N \max\{0, \hat{R}_u^-(g; \mathcal{X}_u^i) - \pi_p \hat{R}_p^-(g; \mathcal{X}_p^i)\}$, where $(\mathcal{X}_p^i, \mathcal{X}_u^i)$ is the $i$-th mini-batch, and hence the corresponding upper bound of $\tilde{R}_{pu}(g)$ can easily be minimized in parallel.

Table 1: Loss functions for PU learning and their properties.

Name               Definition                        (3)  (5)  Bounded  Lipschitz  $\ell'(z) \neq 0$
Zero-one loss      $(1 - \mathrm{sign}(z))/2$        yes  no   yes      no         $z = 0$
Ramp loss          $\max\{0, \min\{1, (1-z)/2\}\}$   yes  no   yes      yes        $z \in [-1, +1]$
Squared loss       $(z - 1)^2/4$                     no   yes  no       no         $z \in \mathbb{R}$
Logistic loss      $\ln(1 + \exp(-z))$               no   yes  no       yes        $z \in \mathbb{R}$
Hinge loss         $\max\{0, 1 - z\}$                no   no   no       yes        $z \in (-\infty, +1]$
Double hinge loss  $\max\{0, (1-z)/2, -z\}$          no   yes  no       yes        $z \in (-\infty, +1]$
Sigmoid loss       $1/(1 + \exp(z))$                 yes  no   yes      yes        $z \in \mathbb{R}$

All loss functions are unary, such that $\ell(t, y) = \ell(z)$ with $z = ty$. The ramp loss comes from [15]; the double hinge loss is from [16], in which the squared, logistic and hinge losses were discussed as well. The ramp and squared losses are scaled to satisfy (3) or (5). The sigmoid loss is a horizontally mirrored logistic function; the logistic loss is the negative logarithm of the logistic function.

Algorithm 1: Large-scale PU learning based on stochastic optimization
Input: training data $(\mathcal{X}_p, \mathcal{X}_u)$; hyperparameters $0 \leq \beta \leq \pi_p \sup_t \max_y \ell(t, y)$ and $0 \leq \gamma \leq 1$
Output: model parameter $\theta$ for $\hat{g}_{pu}(x; \theta)$ or $\tilde{g}_{pu}(x; \theta)$
1: Let $\mathcal{A}$ be an external SGD-like stochastic optimization algorithm such as [20] or [31]
2: while no stopping criterion has been met:
3:   Shuffle $(\mathcal{X}_p, \mathcal{X}_u)$ into $N$ mini-batches, and denote by $(\mathcal{X}_p^i, \mathcal{X}_u^i)$ the $i$-th mini-batch
4:   for $i = 1$ to $N$:
5:     if $\hat{R}_u^-(g; \mathcal{X}_u^i) - \pi_p \hat{R}_p^-(g; \mathcal{X}_p^i) \geq -\beta$:
6:       Set gradient $\nabla_\theta \hat{R}_{pu}(g; \mathcal{X}_p^i, \mathcal{X}_u^i)$
7:       Update $\theta$ by $\mathcal{A}$ with its current step size $\eta$
8:     else:
9:       Set gradient $\nabla_\theta (\pi_p \hat{R}_p^-(g; \mathcal{X}_p^i) - \hat{R}_u^-(g; \mathcal{X}_u^i))$
10:      Update $\theta$ by $\mathcal{A}$ with a discounted step size $\gamma\eta$

The large-scale PU algorithm is described in Algorithm 1. Let $r_i = \hat{R}_u^-(g; \mathcal{X}_u^i) - \pi_p \hat{R}_p^-(g; \mathcal{X}_p^i)$. In practice, we may tolerate $r_i \geq -\beta$ where $0 \leq \beta \leq \pi_p \sup_t \max_y \ell(t, y)$, as $r_i$ comes from a single mini-batch. The degree of tolerance is controlled by $\beta$: there is zero tolerance if $\beta = 0$, and we are minimizing $\hat{R}_{pu}(g)$ if $\beta = \pi_p \sup_t \max_y \ell(t, y)$. Otherwise, if $r_i < -\beta$, we go along $-\nabla_\theta r_i$ with a step size discounted by $\gamma$ where $0 \leq \gamma \leq 1$, to make this mini-batch less overfitted. Algorithm 1 is insensitive to the choice of $\gamma$, if the optimization algorithm $\mathcal{A}$ is adaptive such as [20] or [31].
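The following is a minimal NumPy sketch of one mini-batch update of Algorithm 1 for a linear model $g(x) = \theta^\top x$ with the sigmoid loss; plain SGD stands in for the adaptive optimizers [20, 31], and all function and variable names are ours, not from the authors' released code.

```python
import numpy as np

def sigmoid_loss(z):
    # ell_sig(t, y) = 1/(1 + exp(t*y)), written in terms of the margin z = t*y
    return 1.0 / (1.0 + np.exp(z))

def dsigmoid_loss(z):
    # derivative of sigmoid_loss w.r.t. z
    ez = np.exp(z)
    return -ez / (1.0 + ez) ** 2

def nnpu_minibatch_step(theta, Xp_i, Xu_i, pi_p, lr, beta=0.0, gamma=1.0):
    """One pass of lines 5-10 of Algorithm 1 with g(x) = <theta, x>."""
    gp, gu = Xp_i @ theta, Xu_i @ theta
    # partial risks on this mini-batch
    Rp_minus = sigmoid_loss(-gp).mean()                  # \hat{R}_p^-(g; Xp_i)
    Ru_minus = sigmoid_loss(-gu).mean()                  # \hat{R}_u^-(g; Xu_i)
    # gradients of the partial risks w.r.t. theta (chain rule through g)
    grad_Rp_plus  = (dsigmoid_loss(gp)[:, None] * Xp_i).mean(axis=0)
    grad_Rp_minus = (-dsigmoid_loss(-gp)[:, None] * Xp_i).mean(axis=0)
    grad_Ru_minus = (-dsigmoid_loss(-gu)[:, None] * Xu_i).mean(axis=0)
    r_i = Ru_minus - pi_p * Rp_minus
    if r_i >= -beta:
        # line 6: gradient of the unbiased risk on this batch
        grad = pi_p * grad_Rp_plus - pi_p * grad_Rp_minus + grad_Ru_minus
        return theta - lr * grad
    # lines 9-10: go along -grad(r_i) with a discounted step size
    grad = pi_p * grad_Rp_minus - grad_Ru_minus
    return theta - gamma * lr * grad
```

With $\beta = \pi_p \sup_t \max_y \ell(t, y)$ (equal to $\pi_p$ for the sigmoid loss) the else-branch is never taken and the step reduces to plain uPU training, matching the tolerance discussion above.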
4 Theoretical analyses

In this section, we analyze the risk estimator (6) and its minimizer (all proofs are in Appendix B).

4.1 Bias and consistency

Fix $g$; then $\tilde{R}_{pu}(g) \geq \hat{R}_{pu}(g)$ for any $(\mathcal{X}_p, \mathcal{X}_u)$, but $\hat{R}_{pu}(g)$ is unbiased, which implies $\tilde{R}_{pu}(g)$ is biased in general. A fundamental question is then whether $\tilde{R}_{pu}(g)$ is consistent. From now on, we prove this consistency. To begin with, partition all possible $(\mathcal{X}_p, \mathcal{X}_u)$ into $\mathcal{D}^+(g) = \{(\mathcal{X}_p, \mathcal{X}_u) \mid \hat{R}_u^-(g) - \pi_p \hat{R}_p^-(g) \geq 0\}$ and $\mathcal{D}^-(g) = \{(\mathcal{X}_p, \mathcal{X}_u) \mid \hat{R}_u^-(g) - \pi_p \hat{R}_p^-(g) < 0\}$. Assume there are $C_g > 0$ and $C_\ell > 0$ such that $\sup_{g \in \mathcal{G}} \|g\|_\infty \leq C_g$ and $\sup_{|t| \leq C_g} \max_y \ell(t, y) \leq C_\ell$.

Lemma 1. The following three conditions are equivalent: (A) the probability measure of $\mathcal{D}^-(g)$ is non-zero; (B) $\tilde{R}_{pu}(g)$ differs from $\hat{R}_{pu}(g)$ with a non-zero probability over repeated sampling of $(\mathcal{X}_p, \mathcal{X}_u)$; (C) the bias of $\tilde{R}_{pu}(g)$ is positive. In addition, by assuming that there is $\alpha > 0$ such that $R_n^-(g) \geq \alpha$, the probability measure of $\mathcal{D}^-(g)$ can be bounded by

$\Pr(\mathcal{D}^-(g)) \leq \exp(-2(\alpha/C_\ell)^2 / (\pi_p^2/n_p + 1/n_u))$.  (7)

Based on Lemma 1, we can show the exponential decay of the bias and also the consistency. For convenience, denote by $\chi_{n_p,n_u} = 2\pi_p/\sqrt{n_p} + 1/\sqrt{n_u}$.

Theorem 2 (Bias and consistency). Assume that $R_n^-(g) \geq \alpha > 0$ and denote by $\Delta_g$ the right-hand side of Eq. (7). As $n_p, n_u \to \infty$, the bias of $\tilde{R}_{pu}(g)$ decays exponentially:

$0 \leq \mathbb{E}_{\mathcal{X}_p,\mathcal{X}_u}[\tilde{R}_{pu}(g)] - R(g) \leq C_\ell \pi_p \Delta_g$.  (8)

Moreover, for any $\delta > 0$, let $C_\delta = C_\ell \sqrt{\ln(2/\delta)/2}$; then we have with probability at least $1 - \delta$,

$|\tilde{R}_{pu}(g) - R(g)| \leq C_\delta \cdot \chi_{n_p,n_u} + C_\ell \pi_p \Delta_g$,  (9)

and with probability at least $1 - \delta - \Delta_g$,

$|\tilde{R}_{pu}(g) - R(g)| \leq C_\delta \cdot \chi_{n_p,n_u}$.  (10)

Either (9) or (10) in Theorem 2 indicates that for fixed $g$, $\tilde{R}_{pu}(g) \to R(g)$ in $O_p(\pi_p/\sqrt{n_p} + 1/\sqrt{n_u})$. This convergence rate is optimal according to the central limit theorem [32], which means the proposed estimator is a biased yet optimal estimator of the risk.

4.2 Mean squared error

After introducing the bias, $\tilde{R}_{pu}(g)$ tends to overestimate $R(g)$. It is not a shrinkage estimator [33, 34], so its mean squared error (MSE) is not necessarily smaller than that of $\hat{R}_{pu}(g)$. However, we can still characterize this reduction in MSE (Footnote 5: here, MSE(·) is over repeated sampling of $(\mathcal{X}_p, \mathcal{X}_u)$).

Theorem 3 (MSE reduction). It holds that $\mathrm{MSE}(\tilde{R}_{pu}(g)) < \mathrm{MSE}(\hat{R}_{pu}(g))$ if and only if

$\int_{(\mathcal{X}_p,\mathcal{X}_u) \in \mathcal{D}^-(g)} (\hat{R}_{pu}(g) + \tilde{R}_{pu}(g) - 2R(g))(\hat{R}_u^-(g) - \pi_p \hat{R}_p^-(g)) \, \mathrm{d}F(\mathcal{X}_p, \mathcal{X}_u) > 0$,  (11)

where $\mathrm{d}F(\mathcal{X}_p, \mathcal{X}_u) = \prod_{i=1}^{n_p} p_p(x^p_i)\mathrm{d}x^p_i \cdot \prod_{i=1}^{n_u} p(x^u_i)\mathrm{d}x^u_i$. Eq. (11) is valid if the following conditions are met: (a) $\Pr(\mathcal{D}^-(g)) > 0$; (b) $\ell$ satisfies Eq. (3); (c) $R_n^-(g) \geq \alpha > 0$; (d) $n_u \gg n_p$, such that we have $R_u^-(g) - \hat{R}_u^-(g) \leq 2\alpha$ almost surely on $\mathcal{D}^-(g)$. In fact, given these four conditions, we have for any $0 \leq \epsilon \leq C_\ell \pi_p$,

$\mathrm{MSE}(\hat{R}_{pu}(g)) - \mathrm{MSE}(\tilde{R}_{pu}(g)) \geq 3\epsilon^2 \Pr\{\tilde{R}_{pu}(g) - \hat{R}_{pu}(g) > \epsilon\}$.  (12)

The assumption (d) in Theorem 3 is explained as follows. Since U data can be much cheaper than P data in practice, it would be natural to assume that $n_u$ is much larger and grows much faster than $n_p$; hence $\Pr\{R_u^-(g) - \hat{R}_u^-(g) \geq \alpha\} / \Pr\{\hat{R}_p^-(g) - R_p^-(g) \geq \alpha/\pi_p\} \approx \exp(n_p - n_u)$ asymptotically (Footnote 6: this can be derived as $n_p, n_u \to \infty$ by applying the central limit theorem to the two differences and then L'Hôpital's rule to the ratio of complementary error functions [32]). This means the contribution of $\mathcal{X}_u$ is negligible for making $(\mathcal{X}_p, \mathcal{X}_u) \in \mathcal{D}^-(g)$, so that $\Pr(\mathcal{D}^-(g))$ exhibits exponential decay mainly in $n_p$. As $\Pr\{R_u^-(g) - \hat{R}_u^-(g) \geq 2\alpha\}$ has stronger exponential decay in $n_u$ than $\Pr\{R_u^-(g) - \hat{R}_u^-(g) \geq \alpha\}$, as well as $n_u \gg n_p$, we made the assumption (d).

4.3 Estimation error

While Theorems 2 and 3 addressed the use of (6) for evaluating the risk, we are likewise interested in its use for training classifiers. In what follows, we analyze the estimation error $R(\tilde{g}_{pu}) - R(g^*)$, where $g^*$ is the true risk minimizer in $\mathcal{G}$, i.e., $g^* = \arg\min_{g \in \mathcal{G}} R(g)$.
As a common practice [28], assume that $\ell(t, y)$ is Lipschitz continuous in $t$ for all $|t| \leq C_g$ with a Lipschitz constant $L_\ell$.

Theorem 4 (Estimation error bound). Assume that (a) $\inf_{g \in \mathcal{G}} R_n^-(g) \geq \alpha > 0$ and denote by $\Delta$ the right-hand side of Eq. (7); (b) $\mathcal{G}$ is closed under negation, i.e., $g \in \mathcal{G}$ if and only if $-g \in \mathcal{G}$. Then, for any $\delta > 0$, with probability at least $1 - \delta$,

$R(\tilde{g}_{pu}) - R(g^*) \leq 16 L_\ell \pi_p \mathfrak{R}_{n_p,p_p}(\mathcal{G}) + 8 L_\ell \mathfrak{R}_{n_u,p}(\mathcal{G}) + 2 C'_\delta \cdot \chi_{n_p,n_u} + 2 C_\ell \pi_p \Delta$,  (13)

where $C'_\delta = C_\ell \sqrt{\ln(1/\delta)/2}$, and $\mathfrak{R}_{n_p,p_p}(\mathcal{G})$ and $\mathfrak{R}_{n_u,p}(\mathcal{G})$ are the Rademacher complexities of $\mathcal{G}$ for the sampling of size $n_p$ from $p_p(x)$ and of size $n_u$ from $p(x)$, respectively.

Theorem 4 ensures that learning with (6) is also consistent: as $n_p, n_u \to \infty$, $R(\tilde{g}_{pu}) \to R(g^*)$, and if $\ell$ satisfies (5), all optimizations are convex and $\tilde{g}_{pu} \to g^*$. For linear-in-parameter models with a bounded norm, $\mathfrak{R}_{n_p,p_p}(\mathcal{G}) = O(1/\sqrt{n_p})$ and $\mathfrak{R}_{n_u,p}(\mathcal{G}) = O(1/\sqrt{n_u})$, and thus $R(\tilde{g}_{pu}) \to R(g^*)$ in $O_p(\pi_p/\sqrt{n_p} + 1/\sqrt{n_u})$. For comparison, $R(\hat{g}_{pu}) - R(g^*)$ can be bounded using a different proof technique [19]:

$R(\hat{g}_{pu}) - R(g^*) \leq 8 L_\ell \pi_p \mathfrak{R}_{n_p,p_p}(\mathcal{G}) + 4 L_\ell \mathfrak{R}_{n_u,p}(\mathcal{G}) + 2 C_\delta \cdot \chi_{n_p,n_u}$,  (14)

where $C_\delta = C_\ell \sqrt{\ln(2/\delta)/2}$. The differences of (13) and (14) are completely from the differences of the corresponding uniform deviation bounds, i.e., the following lemma and Lemma 8 of [19].

Lemma 5. Under the assumptions of Theorem 4, for any $\delta > 0$, with probability at least $1 - \delta$,

$\sup_{g \in \mathcal{G}} |\tilde{R}_{pu}(g) - R(g)| \leq 8 L_\ell \pi_p \mathfrak{R}_{n_p,p_p}(\mathcal{G}) + 4 L_\ell \mathfrak{R}_{n_u,p}(\mathcal{G}) + C'_\delta \cdot \chi_{n_p,n_u} + C_\ell \pi_p \Delta$.  (15)

Notice that $\hat{R}_{pu}(g)$ is point-wise while $\tilde{R}_{pu}(g)$ is not due to the maximum, which makes Lemma 5 much more difficult to prove than Lemma 8 of [19]. The key trick is that after symmetrization, we employ $|\max\{0, z\} - \max\{0, z'\}| \leq |z - z'|$, making three differences of partial risks point-wise (see (18) in the proof). As a consequence, we have to use a different Rademacher complexity with the absolute value inside the supremum [35, 36], whose contraction makes the coefficients of (15) doubled compared with Lemma 8 of [19]; moreover, we have to assume $\mathcal{G}$ is closed under negation to change back to the standard Rademacher complexity without the absolute value [28]. Therefore, the differences of (13) and (14) are mainly due to different proof techniques and cannot reflect the intrinsic differences of empirical risk minimizers.

5 Experiments

In this section, we compare PN, unbiased PU (uPU) and non-negative PU (nnPU) learning experimentally. We focus on training deep neural networks, as uPU learning usually does not overfit if a linear-in-parameter model is used [19] and nothing needs to be fixed. Table 2 describes the specification of the benchmark datasets. MNIST, 20News and CIFAR-10 have 10, 7 and 10 classes originally, and we constructed the P and N classes from them as follows: MNIST was preprocessed in such a way that 0, 2, 4, 6, 8 constitute the P class, while 1, 3, 5, 7, 9 constitute the N class; for 20News, 'alt.', 'comp.', 'misc.' and 'rec.' make up the P class, and 'sci.', 'soc.' and 'talk.' make up the N class; for CIFAR-10, the P class is formed by 'airplane', 'automobile', 'ship' and 'truck', and the N class is formed by 'bird', 'cat', 'deer', 'dog', 'frog' and 'horse'. The dataset epsilon has 2 classes and such a construction is unnecessary.
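A minimal sketch of this P/U construction for MNIST might look as follows; the helper name and defaults are ours, and using the whole training set as U data with $n_p = 1{,}000$ matches the setup described next.

```python
import numpy as np

def make_pu_mnist(X, y, n_p=1000, seed=0):
    """Even digits (0,2,4,6,8) form the P class, odd digits the N class.
    U data are the entire training set, so P and U overlap; the estimators
    in Eqs. (2) and (6) do not require P and U to be independent."""
    rng = np.random.default_rng(seed)
    pos_idx = np.flatnonzero(y % 2 == 0)
    Xp = X[rng.choice(pos_idx, size=n_p, replace=False)]
    Xu = X                                   # n_u = full training set
    pi_p = pos_idx.size / len(y)             # ~0.49; treated as known
    return Xp, Xu, pi_p
```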
Three learning methods were set up as follows: (A) for PN, $n_p = 1{,}000$ and $n_n = (\pi_n/2\pi_p)^2 n_p$; (B) for uPU, $n_p = 1{,}000$ and $n_u$ is the total number of training data; (C) for nnPU, $n_p$ and $n_u$ are exactly the same as uPU. For uPU and nnPU, P and U data were dependent, because neither $\hat{R}_{pu}(g)$ in Eq. (2) nor $\tilde{R}_{pu}(g)$ in Eq. (6) requires them to be independent. The choice of $n_n$ was motivated by [19] and may make nnPU potentially better than PN as $n_u \to \infty$ (whether $n_p < \infty$ or $n_p \to \infty$).

The model for MNIST was a 6-layer multilayer perceptron (MLP) with ReLU [40] (more specifically, d-300-300-300-300-1). For epsilon, the model was similar while the activation was replaced with Softsign [41] for better performance. For 20News, we borrowed the pre-trained word embeddings from GloVe [42], and the model can be written as d-avg_pool(word_emb(d,300))-300-300-1, where word_emb(d,300) retrieves 300-dimensional word embeddings for all words in a document, avg_pool executes average pooling, and the resulting vector is fed to a 4-layer MLP with Softsign. The model for CIFAR-10 was an all convolutional net [43]: (32*32*3)-[C(3*3,96)]*2-C(3*3,96,2)-[C(3*3,192)]*2-C(3*3,192,2)-C(3*3,192)-C(1*1,192)-C(1*1,10)-1000-1000-1, where the input is a 32*32 RGB image, C(3*3,96) means 96 channels of 3*3 convolutions followed by ReLU, [ · ]*2 means there are two such layers, C(3*3,96,2) means a similar layer but with stride 2, etc.; it is one of the best architectures for CIFAR-10. Batch normalization [44] was applied before hidden layers. Furthermore, the sigmoid loss $\ell_{\mathrm{sig}}$ was used as the surrogate loss and an $\ell_2$-regularization was also added. The resulting objectives were minimized by Adam [20] on MNIST, epsilon and CIFAR-10, and by AdaGrad [31] on 20News; we fixed $\beta = 0$ and $\gamma = 1$ for simplicity.

Table 2: Specification of benchmark datasets, models, and optimization algorithms.

Name           # Train   # Test   # Feature  $\pi_p$  Model $g(x; \theta)$       Opt. alg. $\mathcal{A}$
MNIST [29]     60,000    10,000   784        0.49     6-layer MLP with ReLU      Adam [20]
epsilon [37]   400,000   100,000  2,000      0.50     6-layer MLP with Softsign  Adam [20]
20News [38]    11,314    7,532    61,188     0.44     5-layer MLP with Softsign  AdaGrad [31]
CIFAR-10 [39]  50,000    10,000   3,072      0.40     13-layer CNN with ReLU     Adam [20]

See http://yann.lecun.com/exdb/mnist/ for MNIST, https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html for epsilon, http://qwone.com/~jason/20Newsgroups/ for 20Newsgroups, and https://www.cs.toronto.edu/~kriz/cifar.html for CIFAR-10.

Figure 2: Experimental results of training deep neural networks: risk w.r.t. the zero-one loss against the training epoch (0 to 200) on (a) MNIST, (b) epsilon, (c) 20News, and (d) CIFAR-10, for PN, uPU, and nnPU on training and test data.

The experimental results are reported in Figure 2, where means and standard deviations of training and test risks based on the same 10 random samplings are shown. We can see that uPU overfitted the training data and nnPU fixed this problem. Additionally, given limited N data, nnPU outperformed PN on MNIST, epsilon and CIFAR-10 and was comparable to it on 20News. In summary, with the proposed non-negative risk estimator, we are able to use very flexible models given limited P data.
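For concreteness, a Chainer sketch of the MNIST architecture (d-300-300-300-300-1 with ReLU) could look like the following; this is our reconstruction from the architecture string, not the authors' released code (linked at the end of this section), and it assumes Chainer's v2-style init_scope API.

```python
import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    """d-300-300-300-300-1 multilayer perceptron with ReLU activations."""
    def __init__(self, dim=784, hidden=300):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(dim, hidden)
            self.l2 = L.Linear(hidden, hidden)
            self.l3 = L.Linear(hidden, hidden)
            self.l4 = L.Linear(hidden, hidden)
            self.l5 = L.Linear(hidden, 1)

    def __call__(self, x):
        h = F.relu(self.l1(x))
        h = F.relu(self.l2(h))
        h = F.relu(self.l3(h))
        h = F.relu(self.l4(h))
        return self.l5(h)       # unnormalized score g(x; theta)
```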
We further tried some cases where $\pi_p$ is misspecified, in order to simulate PU learning in the wild, where we must suffer from errors in estimating $\pi_p$. More specifically, we tested nnPU learning by replacing $\pi_p$ with $\pi'_p \in \{0.8\pi_p, 0.9\pi_p, \ldots, 1.2\pi_p\}$ and giving $\pi'_p$ to the learning method, so that it would regard $\pi'_p$ as $\pi_p$ during the entire training process. The experimental setup was exactly the same as before except for the replacement of $\pi_p$.

Figure 3: Experimental results given $\pi'_p \in \{0.8\pi_p, 0.9\pi_p, \ldots, 1.2\pi_p\}$: mean test risk w.r.t. the zero-one loss against the training epoch on (a) MNIST, (b) epsilon, (c) 20News, and (d) CIFAR-10, with one curve per value of $\pi'_p/\pi_p \in \{0.8, 0.9, 1.0, 1.1, 1.2\}$.

The experimental results are reported in Figure 3, where means of test risks of nnPU based on the same 10 random samplings are shown, and the best test risks are identified (horizontal lines are the best mean test risks and vertical lines are the epochs when they were achieved). We can see that on MNIST, the more misspecification there was, the worse nnPU performed, while under-misspecification hurt more than over-misspecification; on epsilon, the cases where $\pi'_p$ equals $\pi_p$, $1.1\pi_p$ and $1.2\pi_p$ were comparable, but the best was $\pi'_p = 1.1\pi_p$ rather than $\pi'_p = \pi_p$; on 20News, these three cases became different, such that $\pi'_p = \pi_p$ was superior to $\pi'_p = 1.2\pi_p$ but inferior to $\pi'_p = 1.1\pi_p$; at last, on CIFAR-10, $\pi'_p = \pi_p$ and $\pi'_p = 1.1\pi_p$ were comparable again, and $\pi'_p = 1.2\pi_p$ was the winner. In all the experiments, we have fixed $\beta = 0$, which may explain this phenomenon. Recall that uPU overfitted seriously on all the benchmark datasets, and note that the larger $\pi'_p$ is, the more different nnPU is from uPU. Therefore, the replacement of $\pi_p$ with some $\pi'_p > \pi_p$ introduces additional bias of $\tilde{R}_{pu}(g)$ in estimating $R(g)$, but it also pushes $\tilde{R}_{pu}(g)$ away from $\hat{R}_{pu}(g)$ and then pushes nnPU away from uPU. This may result in lower test risks given some $\pi'_p$ slightly larger than $\pi_p$, as shown in Figure 3. This is also why under-misspecified $\pi_p$ hurt more than over-misspecified $\pi_p$. All the experiments were done with Chainer [45], and our implementation based on it is available at https://github.com/kiryor/nnPUlearning.

6 Conclusions

We proposed a non-negative risk estimator for PU learning that follows and improves on the state-of-the-art unbiased risk estimators. No matter how flexible the model is, it will not go negative, unlike its unbiased counterparts. It is more robust against overfitting when being minimized, and training very flexible models such as deep neural networks given limited P data becomes possible. We also developed a large-scale PU learning algorithm. Extensive theoretical analyses were presented, and the usefulness of our non-negative PU learning was verified by intensive experiments. A promising future direction is extending the current work to semi-supervised learning along [46].

Acknowledgments

GN and MS were supported by JST CREST JPMJCR1403, and GN was also partially supported by Microsoft Research Asia.

References
[1] F. Denis. PAC learning from positive statistical queries. In ALT, 1998.
[2] F. De Comité, F. Denis, R. Gilleron, and F. Letouzey. Positive and unlabeled examples help learning. In ALT, 1999.
[3] F. Letouzey, F. Denis, and R. Gilleron. Learning from positive and unlabeled examples. In ALT, 2000.
[4] C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. In KDD, 2008.
[5] G. Ward, T. Hastie, S. Barry, J. Elith, and J. Leathwick. Presence-only data and the EM algorithm. Biometrics, 65(2):554–563, 2009.
[6] C. Scott and G. Blanchard. Novelty detection: Unlabeled data definitely help. In AISTATS, 2009.
[7] G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection. Journal of Machine Learning Research, 11:2973–3009, 2010.
[8] C.-J. Hsieh, N. Natarajan, and I. S. Dhillon. PU learning for matrix completion. In ICML, 2015.
[9] X. Li, P. S. Yu, B. Liu, and S.-K. Ng. Positive unlabeled learning for data stream classification. In SDM, 2009.
[10] M. N. Nguyen, X. Li, and S.-K. Ng. Positive unlabeled learning for time series classification. In IJCAI, 2011.
[11] B. Liu, W. S. Lee, P. S. Yu, and X. Li. Partially supervised classification of text documents. In ICML, 2002.
[12] X. Li and B. Liu. Learning to classify texts using positive and unlabeled data. In IJCAI, 2003.
[13] W. S. Lee and B. Liu. Learning with positive and unlabeled examples using weighted logistic regression. In ICML, 2003.
[14] B. Liu, Y. Dai, X. Li, W. S. Lee, and P. S. Yu. Building text classifiers using positive and unlabeled examples. In ICDM, 2003.
[15] M. C. du Plessis, G. Niu, and M. Sugiyama. Analysis of learning from positive and unlabeled data. In NIPS, 2014.
[16] M. C. du Plessis, G. Niu, and M. Sugiyama. Convex formulation for learning from positive and unlabeled data. In ICML, 2015.
[17] N. Natarajan, I. S. Dhillon, P. Ravikumar, and A. Tewari. Learning with noisy labels. In NIPS, 2013.
[18] G. Patrini, F. Nielsen, R. Nock, and M. Carioni. Loss factorization, weakly supervised learning and label noise robustness. In ICML, 2016.
[19] G. Niu, M. C. du Plessis, T. Sakai, Y. Ma, and M. Sugiyama. Theoretical comparisons of positive-unlabeled learning against positive-negative learning. In NIPS, 2016.
[20] D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[21] E. Sansone, F. G. B. De Natale, and Z.-H. Zhou. Efficient training for positive unlabeled learning. arXiv preprint arXiv:1608.06807, 2016.
[22] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods, pages 185–208. MIT Press, 1999.
[23] A. Menon, B. Van Rooyen, C. S. Ong, and B. Williamson. Learning from corrupted binary labels via class-probability estimation. In ICML, 2015.
[24] H. G. Ramaswamy, C. Scott, and A. Tewari. Mixture proportion estimation via kernel embedding of distributions. In ICML, 2016.
[25] S. Jain, M. White, and P. Radivojac. Estimating the class prior and posterior from noisy positives and unlabeled data. In NIPS, 2016.
[26] M. C. du Plessis, G. Niu, and M. Sugiyama. Class-prior estimation for learning from positive and unlabeled data. Machine Learning, 106(4):463–492, 2017.
[27] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[28] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[29] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[30] A. L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). In NIPS, 2001.
[31] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[32] K.-L. Chung. A Course in Probability Theory. Academic Press, 1968.
[33] C. Stein. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proc. 3rd Berkeley Symposium on Mathematical Statistics and Probability, 1956.
[34] W. James and C. Stein. Estimation with quadratic loss. In Proc. 4th Berkeley Symposium on Mathematical Statistics and Probability, 1961.
[35] V. Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902–1914, 2001.
[36] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[37] G.-X. Yuan, C.-H. Ho, and C.-J. Lin. An improved GLMNET for l1-regularized logistic regression. Journal of Machine Learning Research, 13:1999–2030, 2012.
[38] K. Lang. Newsweeder: Learning to filter netnews. In ICML, 1995.
[39] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[40] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
[41] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
[42] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
[43] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR, 2015.
[44] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[45] S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a next-generation open source framework for deep learning. In Machine Learning Systems Workshop at NIPS, 2015.
[46] T. Sakai, M. C. du Plessis, G. Niu, and M. Sugiyama. Semi-supervised classification based on classification from positive and unlabeled data. In ICML, 2017.
[47] C. McDiarmid. On the method of bounded differences. In J. Siemons, editor, Surveys in Combinatorics, pages 148–188. Cambridge University Press, 1989.
[48] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 1991.
[49] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[50] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
6,374
6,766
Optimal Sample Complexity of M-wise Data for Top-K Ranking

Minje Jang* (School of Electrical Engineering, KAIST, jmj427@kaist.ac.kr), Sunghyun Kim* (Electronics and Telecommunications Research Institute, Daejeon, Korea, shkim@etri.re.kr), Changho Suh (School of Electrical Engineering, KAIST, chsuh@kaist.ac.kr), Sewoong Oh (Industrial and Enterprise Systems Engineering Department, UIUC, swoh@illinois.edu). *Equal contribution.

Abstract

We explore the top-K rank aggregation problem, in which one aims to recover a consistent ordering that focuses on the top-K ranked items, based on partially revealed preference information. We examine an M-wise comparison model that builds on the Plackett-Luce (PL) model, where for each sample, M items are ranked according to their perceived utilities, modeled as noisy observations of their underlying true utilities. As our result, we characterize the minimax optimality on the sample size for top-K ranking. The optimal sample size turns out to be inversely proportional to M. We devise an algorithm that effectively converts M-wise samples into pairwise ones and employs a spectral method using the refined data. In demonstrating its optimality, we develop a novel technique for deriving tight $\ell_\infty$ estimation error bounds, which is key to accurately analyzing the performance of top-K ranking algorithms, but has been challenging. Recent work relied on an additional maximum-likelihood estimation (MLE) stage merged with a spectral method to attain good estimates in $\ell_\infty$ error and achieve the limit for the pairwise model. In contrast, although it is valid in slightly restricted regimes, our result demonstrates a spectral method alone to be sufficient for the general M-wise model. We run numerical experiments using synthetic data and confirm that the optimal sample size decreases at the rate of 1/M. Moreover, running our algorithm on real-world data, we find that its applicability extends to settings that may not fit the PL model.

1 Introduction

Rank aggregation has been explored in a variety of contexts such as social choice [15, 6], web search and information retrieval [20], recommendation systems [7], and crowd sourcing [16], to name a few. It aims to bring a consistent ordering to a collection of items, given partial preference information.

Preference information can take various forms depending on the context. One such form, which we examine in this paper, is ordinal: preferences for alternatives are represented as an ordering. Consider crowd-sourced data collected by annotators asked to rank a few given alternatives based on their preference. The aggregated data can be used to identify the most preferred. One example can be a review process for conference papers (e.g., NIPS), where reviewers are asked to not only review papers, but also order them based on how much they enjoy them. The collected data could be used to highlight papers that may interest a large audience. Alternatively, consider sports (races and the like) and online games where a number of players compete. One may wish to rank them according to skill.

The broad range of applications has led to a large volume of work. Of the numerous schemes developed, arguably the most dominant paradigms are spectral algorithms [14, 20, 37, 41, 47, 45] and maximum likelihood estimation (MLE) [22, 28].
Postulating the existence of underlying real-valued preferences of items, they aim to produce preference estimates that are consistent in a global sense, e.g., measured by low squared loss. But such estimates do not necessarily guarantee optimal ranking accuracy. Accurate ranking has more to do with how well the ordering of the estimates matches that of the true preferences, and less to do with how close the estimates are to the true preferences in terms of minimizing overall error. Moreover, in practice, what we expect from accurate ranking is an ordering that precisely separates only a few items ranked highest from the rest, not one that respects the entire set of items.

Main contributions. In light of this, we explore top-K ranking, which aims to recover the correct set of top-ranked items only. We examine the Plackett-Luce (PL) model, which has been extensively explored [24, 18, 5, 25, 38, 43, 33, 4]. It is a special case of random utility models [46] where true utilities of items are presumed and a user's revealed preference is a partial ordering according to noisy manifestations of the utilities. It satisfies the "independence of irrelevant alternatives" property in social choice theory [34, 35] and is the most popular model for studying human choice behavior given multiple alternatives (see Section 2). It is well known that it subsumes as a special case the Bradley-Terry-Luce (BTL) model [12, 32], which concerns two items.

We consider an M-wise comparison model where comparisons are given as a preference ordering of M items. In this setting, we characterize the minimax limit on the sample size (i.e., sample complexity) needed to reliably identify the set of top-K ranked items, which turns out to be inversely proportional to M. To the best of our knowledge, this is the first result that characterizes the limit under an M-wise comparison model.

In achieving the limit, we propose an algorithm that consists of sample breaking and Rank Centrality [37], one spectral method chosen among several variants [10, 9, 37, 33]. First, it converts M-wise samples into many more pairwise ones, and in doing so, it carefully chooses only M out of all $\binom{M}{2}$ pairwise samples obtainable from each M-wise sample. This sample breaking (see Section 3.1) extracts from the given M-wise data only the essential information needed to achieve the limit. Next, using the refined pairwise data, the algorithm runs a spectral method to identify the top-ranked items (a rough sketch of this spectral step follows below).

A novel technique we develop to attain tight $\ell_\infty$ estimation error bounds has been instrumental to our progress. Analyzing $\ell_\infty$ error bounds is a critical step in characterizing the minimax sample complexity for top-K ranking, as presented in [17], but has been technically challenging. Even after decades of research since the introduction of spectral methods and MLE, the two dominant approaches in the field, we lack notable results for tight $\ell_\infty$ error bounds. This is largely because techniques proven useful for obtaining good $\ell_2$ error bounds do not translate into attaining good $\ell_\infty$ error bounds. In this regard, our result contributes to progress on $\ell_\infty$ error analysis (see Section 3.2 and the supplementary material).

We can compare our result to that of [17] by considering M = 2. Although the two optimal sample complexities match, the conditions under which they hold differ; our result turns out to be valid under a slightly restricted condition (see Section 3.3). In terms of achievability, the algorithm in [17] merges an additional MLE stage with a spectral method, whereas we employ only a spectral method.
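The spectral step can be sketched as follows: a simplified NumPy rendering of Rank Centrality [37] applied to generic pairwise counts. The paper's specific rule for keeping only M of the $\binom{M}{2}$ pairs per sample is given in its Section 3.1 and is not reproduced here, and all names below are ours.

```python
import numpy as np

def rank_centrality(wins, d_max, n_iter=1000):
    """Rank Centrality [37] sketch: wins[i, j] counts how often item j was
    preferred over item i. Builds a Markov chain whose stationary
    distribution estimates the preference scores; its largest entries give
    the top-K estimate."""
    n = wins.shape[0]
    totals = wins + wins.T                              # comparisons per pair
    p_hat = np.divide(wins, totals,
                      out=np.zeros_like(wins, dtype=float),
                      where=totals > 0)                 # empirical P(j beats i)
    P = p_hat / d_max                                   # off-diagonal transitions
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))            # lazy self-loops
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):                             # power iteration
        pi = pi @ P
    return pi                                           # score estimates

# top-K estimate: indices of the K largest stationary probabilities, e.g.
# top_k = np.argsort(-rank_centrality(wins, d_max))[:K]
```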
From numerical experiments, we speculate that the condition under which the result of [17] holds may not be sufficient for spectral methods alone to achieve optimality (see Section 4.1).

We conduct numerical experiments to support our result. Using synthetic data, we show that the minimax optimal sample size indeed decreases at the rate of 1/M. We run our algorithm on real-world data collected from a popular online game (League of Legends) and find that its applicability extends to settings that may not necessarily match the PL model. From the collected data, we extract M-wise comparisons and rank top users in terms of skill. We examine its robustness against partial data and also evaluate its ranking result with respect to the official rank League of Legends provides. In both cases, we compare it with a counting-based algorithm [42, 11] and demonstrate its advantages.

Related work. To the best of our knowledge, [17] investigated top-K identification under the random comparison model of interest for the first time. A key distinction here is that we examine the random listwise comparison model based on the PL model. Rank Centrality was developed in [37], based on which we devise our ranking scheme tailored for listwise comparison data. In the PL model, some viewed ranking as parameter estimation. Maystre and Grossglauser [33] developed an algorithm that shares the spirit of spectral ranking and showed that its performance is the same as MLE for estimating underlying preference scores. Hajek et al. [25] derived minimax lower bounds on parameter estimation error, and examined gaps with upper bounds for MLE as well as MLE with a rank-breaking scheme that decomposes partial rankings into pairwise comparisons.

Some works examined several sample breaking methods that convert listwise data into pairwise data in the PL model. Azari Soufiani et al. [5] considered various methods to see whether they preserve certain statistical properties in parameter estimation. It examined full breaking, which converts an M-wise sample into $\binom{M}{2}$ pairwise ones, and adjacent breaking, which converts an ordinal M-wise sample into $M - 1$ pairwise ones whose associated items are adjacent in the sample. Ashish and Oh [4] considered a method that converts an M-wise sample into multiple pairwise ones and assigns different importance weights to each, and examined the method on several types of comparison graphs.

There are a number of works that explored ranking problems in different models and with different interests. Some works [43, 2] have adopted PAC (probably approximately correct) [44] or regret [21, 8, 23] as their metric to allow some margin of error, in contrast to our work, where 0/1 loss (the most stringent criterion) is considered to investigate the worst-case scenario (see Section 2). Rajkumar and Agarwal [40] put forth statistical assumptions that ensure the convergence of rank aggregation methods, including Rank Centrality and MLE, to an optimal ranking. Active ranking, where samples are obtained adaptively, has received attention as well. Jamieson and Nowak [29] considered perfect total ranking and characterized the query complexity gain of adaptive sampling in the noise-free case, and the works of [29, 1] explored the query complexity in the presence of noise, aiming at approximate total rankings. Recently, Braverman et al. [13] considered three noisy models, examining whether their algorithm can achieve reliable top-K ranking. Heckel et al.
[27] considered a model where noisy pairwise observations are given, with the goal of partitioning the items into sets of pre-specified sizes based on their scores, which includes top-K ranking as a special case. Mohajer et al. [36] considered a fairly general noisy model which subsumes various models as special cases. They derived upper bounds on the sample size required for reliable top-K sorting as well as top-K partitioning, and showed that active ranking can provide significant gains over passive ranking.

2 Problem Formulation

Notation. We denote by $[n]$ the set $\{1, 2, \dots, n\}$; by $\mathcal{G} = ([n], \mathcal{E}^{(M)})$ an M-wise comparison graph, in which $n$ vertices reside and a hyper-edge connects each set of M vertices among which there is a comparison; and by $d_i$ the out-degree of vertex $i$.

Comparison model and assumptions. Suppose we perform a few evaluations on n items. We assume the comparison outcomes are generated based on the PL model [39]. We consider M-wise models where the comparison outcomes are obtained in the form of a preference ordering of M items.

Preference scores. The PL model assumes the existence of underlying preferences $w := \{w_1, w_2, \dots, w_n\}$, where $w_i$ represents the preference score of item $i$. The outcome of each comparison depends solely on the latent scores of the items being compared. Without loss of generality, we assume that $w_1 \ge w_2 \ge \cdots \ge w_n > 0$. We assume the range of scores to be fixed irrespective of $n$: for some positive constants $w_{\min}$ and $w_{\max}$, $w_i \in [w_{\min}, w_{\max}]$ for all $1 \le i \le n$. We note that the case where the range $w_{\max}/w_{\min}$ grows with $n$ can be translated into the above fixed-range regime by separating out those items with vanishing scores (e.g., via a voting method like Borda count [11, 3]).

Comparison model. We denote by $\mathcal{G} = ([n], \mathcal{E}^{(M)})$ a comparison graph where a set of M items $\mathcal{I} = \{i_1, i_2, \dots, i_M\}$ are compared if and only if $\mathcal{I}$ belongs to the hyper-edge set $\mathcal{E}^{(M)}$. We examine random graphs, constructed in a manner analogous to the Erdős-Rényi random graph model: each set of M vertices is connected by a hyper-edge independently with probability p. Notice that when $M = 2$, the random graphs we consider follow precisely the Erdős-Rényi random model.

M-wise comparisons. We observe L samples for each $\mathcal{I} = \{i_1, i_2, \dots, i_M\} \in \mathcal{E}^{(M)}$. Each sample is an ordering of the M items in order of preference. The outcome of the $\ell$-th sample, denoted by $s_{\mathcal{I}}^{(\ell)}$, is generated according to the PL model: $s_{\mathcal{I}}^{(\ell)} = (i_1, i_2, \dots, i_M)$ with probability $\prod_{m=1}^{M} \big( w_{i_m} / \sum_{r=m}^{M} w_{i_r} \big)$, where item $i_a$ is preferred over item $i_b$ in $\mathcal{I}$ if $i_a$ appears to the left of $i_b$, which we also denote by $i_a \succ i_b$. We assume that, conditional on $\mathcal{G}$, the $s_{\mathcal{I}}^{(\ell)}$'s are jointly independent over $\mathcal{I}$ and $\ell$. We denote the collection of all samples by $s := \{s_{\mathcal{I}} : \mathcal{I} \in \mathcal{E}^{(M)}\}$, where $s_{\mathcal{I}} = \{s_{\mathcal{I}}^{(1)}, s_{\mathcal{I}}^{(2)}, \dots, s_{\mathcal{I}}^{(L)}\}$.

Performance metric and goal. Given comparison data, one wishes to know whether or not the top-K ranked items are identifiable. We consider the probability of error $P_e$ in identifying the correct set of the top-K ranked items: $P_e(\psi) := \mathbb{P}\{\psi(s) \neq [K]\}$, where $\psi$ is any ranking scheme that returns a set of K indices and $[K]$ is the set of the first K indices. Our goal in this work is to characterize the admissible region $\mathcal{R}_w$ of $(p, L)$ in which top-K ranking is feasible for a given PL parameter $w$; in other words, in which $P_e$ can be made vanishingly small as $n$ grows. The admissible region is defined as $\mathcal{R}_w := \{(p, L) : \lim_{n \to \infty} P_e(\psi(s)) = 0\}$.
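To make the sampling model concrete, the following minimal sketch (Python/NumPy; the function name is ours, not the paper's) draws one M-wise ordering from the PL model. It uses the Gumbel-max trick, which is distributionally equivalent to the sequential choice probability displayed above.

```python
import numpy as np

def sample_pl_ordering(w, rng):
    """Draw one M-wise sample from the Plackett-Luce model with scores w.

    Sorting log-scores perturbed by i.i.d. Gumbel(0, 1) noise in descending
    order is equivalent to sequentially picking item i_m with probability
    w_{i_m} / sum_{r >= m} w_{i_r}, the PL choice rule.
    """
    g = rng.gumbel(size=len(w))
    return np.argsort(-(np.log(w) + g))  # item indices, most to least preferred

rng = np.random.default_rng(0)
print(sample_pl_ordering(np.array([3.0, 1.0, 2.0]), rng))  # e.g. [0 2 1]
```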
In particular, we are interested in the minimax sample complexity of an estimator, defined as
$$S^*_{\Delta} := \inf_{p \in [0,1],\, L \in \mathbb{Z}^+} \sup_{v \in \Theta_\Delta} \Big\{ \tbinom{n}{M} pL : (p, L) \in \mathcal{R}_v \Big\},$$
where $\Theta_\Delta = \{v \in \mathbb{R}^n : (v_K - v_{K+1})/v_{\max} \ge \Delta\}$. Note that this definition shows that we conservatively examine minimax scenarios where nature behaves adversarially with the worst-case $w$.

3 Main Results

Separating the two items near the decision boundary (i.e., the $K$-th and $(K+1)$-th ranked items) is key in top-K ranking. Unless the gap is large enough, noise in the observations leads to erroneous estimates which no ranking scheme can overcome. We pinpoint a separation measure, $\Delta_K := (w_K - w_{K+1})/w_{\max}$, which turns out to be crucial in establishing the fundamental limit. As noted in [22], if a comparison graph $\mathcal{G}$ is not connected, it is impossible to determine the relative preferences between two disconnected entities. Thus, we assume all comparison graphs to be connected. To guarantee this, for a random hyper-graph with edge size $M$, we assume $p > \log n / \binom{n-1}{M-1}$.²

Now, let us formally state our main results. First, for comparison graphs under M-wise observations, we establish a necessary condition for top-K ranking.

Theorem 1. Fix $\epsilon \in (0, \frac{1}{2})$. Given an M-wise comparison graph $\mathcal{G} = ([n], \mathcal{E}^{(M)})$, if
$$\tbinom{n}{M} pL \le c_0 (1 - \epsilon) \frac{n \log n}{M \Delta_K^2} \tag{1}$$
for some numerical constant $c_0$, then for any ranking scheme $\psi$, there exists a preference score vector $w$ with separation measure $\Delta_K$ such that $P_e(\psi) \ge \epsilon$.

The proof is a generalization of Theorem 2 in [17], and we provide it in the supplementary. Next, for comparison graphs under M-wise observations, we establish a sufficient condition for top-K ranking.

Theorem 2. Given an M-wise comparison graph $\mathcal{G} = ([n], \mathcal{E}^{(M)})$ and $p \ge c_1 (M-1) \sqrt{\log n / \binom{n-1}{M-1}}$, if
$$\tbinom{n}{M} pL \ge c_2 \frac{n \log n}{M \Delta_K^2} \tag{2}$$
for some numerical constants $c_1$ and $c_2$, then Rank Centrality correctly identifies the top-K ranked items with probability at least $1 - 2n^{-\frac{1}{15}}$.

We provide the proof of Theorem 2 in the supplementary. Below, we describe the algorithm we use, sample breaking and Rank Centrality [37], and then give an outline of the proof. Note that Theorem 1 gives a necessary condition on the sample complexity, $S^*_{\Delta_K} \gtrsim n \log n / (M \Delta_K^2)$, and Theorem 2 gives a sufficient condition, $S^*_{\Delta_K} \lesssim n \log n / (M \Delta_K^2)$, and they match. That is, we establish the minimax optimality of Rank Centrality: $S^*_{\Delta_K} \asymp n \log n / (M \Delta_K^2)$.

² The condition $p > \log n / \binom{n-1}{M-1}$ is derived in [19] as a sharp threshold for connectivity of hyper-graphs. We assume a slightly stricter condition for ease of analysis. This does not make a big difference in our result, as the two conditions are almost identical order-wise given $M < n/2$, a reasonable condition for regimes where $n$ is large.

3.1 Algorithm description

Algorithm 1: Rank Centrality [37]
Input: the collection of statistics $s = \{s_{\mathcal{I}} : \mathcal{I} \in \mathcal{E}^{(M)}\}$.
Convert the M-wise sample for each hyper-edge $\mathcal{I}$ into M pairwise samples:
1. Choose a circular permutation of the items in $\mathcal{I}$ uniformly at random;
2. Break it into the M pairs of adjacent items, and denote the set of pairs by $\sigma(\mathcal{I})$;
3. Use the (pairwise) data of the pairs in $\sigma(\mathcal{I})$.
Compute the transition matrix $\hat{P} = [\hat{P}_{ij}]_{1 \le i,j \le n}$:
$$\hat{P}_{ij} = \begin{cases} \frac{1}{2 d_{\max}}\, y_{ij} & \text{if } i \ne j; \\ 1 - \sum_{k: k \ne j} \hat{P}_{kj} & \text{if } i = j; \\ 0 & \text{otherwise}, \end{cases}$$
where $d_{\max}$ is the maximum out-degree of vertices in $\mathcal{E}^{(M)}$.
Output: the stationary distribution of the matrix $\hat{P}$.

Rank Centrality aims to estimate rankings from pairwise comparison data.
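A minimal sketch of the conversion step in Algorithm 1 (Python; the function name is ours): given one M-wise ordering, draw a circular permutation of its items uniformly at random and emit the M adjacent pairs, each labeled by which item ranks higher in the sample.

```python
import numpy as np

def break_sample(ordering, rng):
    """Convert one M-wise ordering into M pairwise outcomes (Algorithm 1).

    `ordering` lists item ids from most to least preferred. A circular
    permutation of the M items is drawn uniformly at random, and each
    adjacent pair on the circle yields one pairwise sample, labeled by
    which of the two items ranks higher in the M-wise ordering.
    """
    items = np.array(ordering)
    rng.shuffle(items)                                  # random circular order
    rank = {int(v): r for r, v in enumerate(ordering)}  # position in the sample
    pairs, m = [], len(items)
    for k in range(m):
        i, j = int(items[k]), int(items[(k + 1) % m])   # adjacent on the circle
        pairs.append((i, j) if rank[i] < rank[j] else (j, i))  # (winner, loser)
    return pairs
```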
Thus, to make use of M-wise comparison data for Rank Centrality, we apply a sample breaking method that converts M-wise data into pairwise data. To be more specific, if there is a hyper-edge $\mathcal{I} = \{1, 2, \dots, M\}$, we choose a circular permutation of the items in $\mathcal{I}$ uniformly at random. Suppose we pick the circular permutation $(1, 2, \dots, M-1, M, 1)$. Then, we break it into M pairs of items in the order specified by the permutation: $\{1, 2\}, \{2, 3\}, \dots, \{M-1, M\}, \{M, 1\}$ (see Section 3.3 for a remark on why we do not lose optimality with our sample breaking method). Let us denote by $\sigma(\mathcal{I})$ this set of pairs. We use the converted pairwise comparison data associated with the pairs in $\sigma(\mathcal{I})$:³
$$y_{ij,\mathcal{I}}^{(\ell)} = \begin{cases} 1 & \text{if } \{i, j\} \in \sigma(\mathcal{I}) \text{ and } i \succ j; \\ 0 & \text{otherwise}, \end{cases} \qquad y_{ij} := \frac{1}{L} \sum_{\mathcal{I}: \{i,j\} \in \sigma(\mathcal{I})} \sum_{\ell=1}^{L} y_{ij,\mathcal{I}}^{(\ell)}. \tag{3}$$

In an ideal scenario where we obtain an infinite number of samples per M-wise comparison, i.e., $L \to \infty$, the sufficient statistics $\frac{1}{L} \sum_{\ell=1}^{L} y_{ij,\mathcal{I}}^{(\ell)}$ converge to $w_i / (w_i + w_j)$. Then, the constructed matrix $\hat{P}$ defined in Algorithm 1 becomes a matrix $P$ whose entries $[P_{ij}]_{1 \le i,j \le n}$ are defined as
$$P_{ij} = \begin{cases} \frac{1}{2 d_{\max}} \sum_{\mathcal{I}: \{i,j\} \in \sigma(\mathcal{I})} \frac{w_i}{w_i + w_j} & \text{for } \mathcal{I} \in \mathcal{E}^{(M)}; \\ 1 - \sum_{k: k \ne j} P_{kj} & \text{if } i = j; \\ 0 & \text{otherwise}. \end{cases} \tag{4}$$

The entries for observed item pairs represent the relative likelihood of item i being preferred over item j. Intuitively, random walks of P visit some states more often in the long run if those states have been preferred over other frequently visited states and/or over many other states. The random walks are reversible, as $w_i P_{ji} = w_j P_{ij}$ holds, and irreducible under the connectivity assumption. Once we obtain the unique stationary distribution, it is equal to $w = \{w_1, \dots, w_n\}$ up to some constant scaling. It is clear that random walks of $\hat{P}$, a noisy version of P, will give us an approximation of $w$.

³ In comparison, the adjacent breaking method [5] directly follows the ordering evaluated in each sample; if it is $1 \succ 2 \succ \cdots \succ M-1 \succ M$, it is broken into pairs of adjacent items: $1 \succ 2$ up to $M-1 \succ M$. Our method turns out to be consistent, i.e., $\Pr[y_{ij} = 1] / \Pr[y_{ij} = 0] = w_i / w_j$ (see (4)), whereas the adjacent breaking method is not [5].

3.2 Proof outline

We outline the proof of Theorem 2 by introducing Theorem 3, which we show leads to Theorem 2.

Theorem 3. When Rank Centrality is employed, with high probability, the $\ell_\infty$ norm estimation error is upper-bounded by
$$\frac{\|\hat{w} - w\|_\infty}{\|w\|_\infty} \lesssim \sqrt{\frac{n \log n}{\binom{n}{M} pL} \cdot \frac{1}{M}}, \tag{5}$$
where $p \ge c_1 (M-1) \sqrt{\log n / \binom{n-1}{M-1}}$ and $c_1$ is some numerical constant.

Let $\|w\|_\infty = w_{\max} = 1$ for ease of presentation. Suppose $\Delta_K = w_K - w_{K+1} \gtrsim \sqrt{\frac{n \log n}{\binom{n}{M} pL} \cdot \frac{1}{M}}$. Then,
$$\hat{w}_i - \hat{w}_j \ge w_i - w_j - |\hat{w}_i - w_i| - |\hat{w}_j - w_j| \ge w_K - w_{K+1} - 2\|\hat{w} - w\|_\infty > 0,$$
for all $1 \le i \le K$ and $j \ge K+1$. That is, the top-K items are identified as desired. Hence, as long as $\Delta_K \gtrsim \sqrt{\frac{n \log n}{\binom{n}{M} pL} \cdot \frac{1}{M}}$, i.e., $\binom{n}{M} pL \gtrsim n \log n / (M \Delta_K^2)$, reliable top-K ranking is achieved with a sample size of $n \log n / (M \Delta_K^2)$.

Now, let us prove Theorem 3. To find an $\ell_\infty$ error bound, we first derive an upper bound on the point-wise error between the score estimate of item i and its true score, which consists of three terms:
$$|\hat{w}_i - w_i| \le |\hat{w}_i - w_i|\, \hat{P}_{ii} + \sum_{j: j \ne i} |\hat{w}_j - w_j|\, \hat{P}_{ij} + \sum_{j: j \ne i} (w_i + w_j) \big| \hat{P}_{ji} - P_{ji} \big|. \tag{6}$$
We can obtain (6) from $\hat{w} = \hat{P} \hat{w}$ and $w = P w$. We then obtain upper bounds on the three terms:
$$\hat{P}_{ii} < 1, \qquad \sum_{j: j \ne i} (w_i + w_j) \big| \hat{P}_{ji} - P_{ji} \big| \lesssim \sqrt{\frac{n \log n}{\binom{n}{M} pL} \cdot \frac{1}{M}}, \qquad \sum_{j: j \ne i} |\hat{w}_j - w_j|\, \hat{P}_{ij} \lesssim \sqrt{\frac{n \log n}{\binom{n}{M} pL} \cdot \frac{1}{M}}, \tag{7}$$
with high probability (Lemmas 1, 2 and 3 in the supplementary).
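The spectral step can then be sketched as follows (Python; a simplified rendering of (3) and (4), with the power iteration and the direction of the walk as our own choices, not the paper's code):

```python
import numpy as np

def rank_centrality(n, pairs, iters=500):
    """Estimate scores from (winner, loser) pairs via Rank Centrality.

    wins[i, j] counts how often i beat j. Off-diagonal transition
    probabilities move mass toward items that win more often, and the
    stationary distribution recovers w up to scaling (detailed balance:
    w_i * P[i, j] = w_j * P[j, i] in the noiseless limit).
    """
    wins = np.zeros((n, n))
    for i, j in pairs:                     # i beat j
        wins[i, j] += 1.0
    tot = wins + wins.T
    with np.errstate(invalid="ignore", divide="ignore"):
        frac = np.where(tot > 0, wins / tot, 0.0)   # empirical y_ij
    d_max = max(1, int((tot > 0).sum(axis=1).max()))
    P = frac.T / (2.0 * d_max)             # step i -> j with prob ~ y_ji
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))        # rows sum to one
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):                 # power iteration: pi = pi P
        pi = pi @ P
    return pi / pi.sum()
```

The stationary distribution is unique only when the converted pairwise comparison graph is connected, matching the connectivity assumption of Section 3.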
Combining (6) and (7) ends the proof. We obtain the first two bounds in (7) from Hoeffding's inequality. The last is key; this is where we sharply link an $\ell_2$ error bound of $\sqrt{\frac{n \log n}{\binom{n}{M} pL} \cdot \frac{1}{M}}$ (Theorem 4 in the supplementary) to the desired $\ell_\infty$ error bound (5). On the left hand side of the third inequality, the point-wise error of item j, which affects that of item i as expressed in (6), may not be captured for some j, since there may be no hyper-edge that includes both items i and j. This makes it hard to draw a link from the obtained $\ell_2$ error bound to the inequality, since $\ell_2$ errors can be seen as the sum of all point-wise errors. To include them all, we recursively apply (6) to $|\hat{w}_j - w_j|$ in the third inequality and then apply the other two bounds properly (for the detailed derivation, see the beginning of the proof of Lemma 3 in the supplementary). Then, we get
$$\sum_{j: j \ne i} |\hat{w}_j - w_j|\, \hat{P}_{ij} \lesssim \sum_{j: j \ne i} \sum_{k: k \ne j} |\hat{w}_k - w_k|\, \hat{P}_{jk} \hat{P}_{ij} + \sqrt{\frac{n \log n}{\binom{n}{M} pL} \cdot \frac{1}{M}}. \tag{8}$$
Manipulating the first term on the right hand side (for the derivation, see the proof of Lemma 3), we get
$$\sum_{k=1}^{n} \sum_{j \notin \{i,k\}} |\hat{w}_k - w_k|\, \hat{P}_{jk} \hat{P}_{ij} \le \|\hat{w} - w\|_2\, \sqrt{\sum_{k=1}^{n} \Big( \sum_{j \notin \{i,k\}} \hat{P}_{jk} \hat{P}_{ij} \Big)^2}. \tag{9}$$
We show in the proof of Lemma 3 that $\sum_{j \notin \{i,k\}} \hat{P}_{jk} \hat{P}_{ij}$ concentrates on the order of $1/n$ for all $k$. Since $\|w\|_2 \le \sqrt{n}\, \|w\|_\infty = \sqrt{n}$, we get $\|\hat{w} - w\|_2 / \sqrt{n} \le \|\hat{w} - w\|_2 / \|w\|_2$. We derive this $\ell_2$ error bound to be $\sqrt{\frac{n \log n}{\binom{n}{M} pL} \cdot \frac{1}{M}}$ (Theorem 4 in the supplementary), matching (5).

To describe the concentration of $\sum_{j \notin \{i,k\}} \hat{P}_{jk} \hat{P}_{ij}$, we need to consider the dependencies in it. To see them, we upper-bound it as follows (for details, see the proof of Lemma 3 in the supplementary):
$$\sum_{j \notin \{i,k\}} \hat{P}_{ij} \hat{P}_{jk} \le \frac{1}{4 d_{\max}^2} \sum_{j \notin \{i,k\}} \sum_{\mathcal{I}_1: i,j \in \mathcal{I}_1} \sum_{\mathcal{I}_2: j,k \in \mathcal{I}_2} X_{\mathcal{I}_1 \mathcal{I}_2}, \tag{10}$$
where $X_{\mathcal{I}_1 \mathcal{I}_2} := \mathbb{I}[\{i,j\} \in \sigma(\mathcal{I}_1)]\, \mathbb{I}[\{j,k\} \in \sigma(\mathcal{I}_2)]$. For $M > 2$, there can exist $j_a$ and $j_b$ such that $\{i, j_a, j_b\} \subseteq \mathcal{I}_1$, $j_a \in \mathcal{I}_2$ and $j_b \notin \mathcal{I}_2$. Then, summing over $j$, the terms $X_{\mathcal{I}_1 \mathcal{I}_2}$ and $X_{\mathcal{I}_1 \mathcal{I}_3}$, where $\mathcal{I}_3$ is another hyper-edge that includes $j_b$ and $k$, are dependent, as they concern the same hyper-edge $\mathcal{I}_1$. To handle this, we use Janson's inequality [30], one of the concentration inequalities that accommodate dependencies.

To derive a necessary condition matching our sufficient condition, we use a generalized version of Fano's inequality [26], as in the proof of Theorem 3 in [17], and complete the combinatorial calculations.

3.3 Discussion

Optimality versus M (intuition behind our sample breaking method): For each M-wise sample, we form a circular permutation uniformly at random, and extract M pairwise samples, each of which concerns two adjacent items on the circle. Suppose we have an M-wise sample $1 \succ 2 \succ \cdots \succ M$, and for simplicity we happen to form the circular permutation $(1, 2, \dots, M-1, M, 1)$; we extract M pairwise samples as $1 \succ 2,\ 2 \succ 3,\ \dots,\ (M-1) \succ M,\ 1 \succ M$. Let us provide the intuition behind why this leads us to the optimal sample complexity. For the case of $M = 2$, Rank Centrality achieves the optimal order-wise sample complexity of $n \log n / \Delta_K^2$, as characterized in [17]. In addition, one M-wise sample in the PL model can be broken into $M - 1$ independent pairwise ones, since pairwise data of two arbitrary items among the M items depend on the true scores of those two items only. In our example, one can convert the M-wise sample into $M - 1$ independent pairwise ones as $1 \succ 2,\ 2 \succ 3,\ \dots,\ (M-1) \succ M$. From these, it is intuitive to see that we can achieve reliable top-K ranking with an order-wise sample complexity of $n \log n / ((M-1) \Delta_K^2)$ by converting each M-wise sample into $M - 1$ independent pairwise ones.
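The first step of the proof outline, namely that an $\ell_\infty$ error below $\Delta_K/2$ forces exact top-K recovery, is easy to check numerically. The following toy verification of the displayed chain of inequalities is ours, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 50, 5
w = np.sort(rng.uniform(0.5, 1.0, n))[::-1]     # scores sorted descending
delta_K = w[K - 1] - w[K]                       # separation at the boundary
eps = 0.49 * delta_K                            # any l-infinity error < delta_K / 2
for _ in range(1000):
    w_hat = w + rng.uniform(-eps, eps, n)       # arbitrary bounded perturbation
    top = set(np.argsort(-w_hat)[:K])
    assert top == set(range(K))                 # top-K set is always preserved
```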
Notice that this is within a constant factor of $M/(M-1)$ of the optimal sample complexity established in Section 3.

Tight $\ell_\infty$ error bounds: As shown in Section 3.2, deriving a tight $\ell_\infty$ error bound is critical to analyzing the performance of an algorithm for top-K rank aggregation. Recent work [17] has relied on combining an additional stage of local refinement in series with Rank Centrality to derive it, and characterized the optimal sample complexity for the pairwise model. In contrast, although our result is valid in a slightly restricted regime (see the remark on dense regimes below), we employ only Rank Centrality and still succeed in achieving optimality for the M-wise model, which includes the pairwise model. Since deriving tight $\ell_\infty$ error bounds is crucial, it is hard to attain this result without a fine analytical technique; developing one is our main theoretical contribution. For details, see the proof of Lemma 3 in the supplementary, which sharply links an $\ell_\infty$ error bound (Theorem 3 therein) to an $\ell_2$ error bound (Theorem 4 therein). Rank Centrality has been shown to achieve performance nearly as good as MLE in terms of $\ell_2$ error, but until now little has been known in terms of $\ell_\infty$ error. Our result makes clear progress on this front.

Analytical technique: Our analysis is not limited to Rank Centrality. Whenever one wishes to bound the difference between the leading eigenvector of a matrix and that of its noisy version, one can obtain (6), (8) and (9). Thus, the technique can be adopted to link $\ell_2$ and $\ell_\infty$ error bounds for any spectral method.

Dense regimes: Our main result concerns a slightly denser regime, indicated by the condition $p \gtrsim (M-1)\sqrt{\log n / \binom{n-1}{M-1}}$, where many distinct item pairs are likely to be compared. One can see that this dense-regime condition is not necessary for top-K ranking; for the pairwise case $M = 2$, the condition $p \gtrsim \log n / n$ suffices, as shown in [17]. However, it is not yet clear whether the dense-regime condition is required under our approach, which employs only a spectral method. Our speculation from numerical experiments is that the sparse-regime condition, $\log n / \binom{n-1}{M-1} \lesssim p \lesssim (M-1)\sqrt{\log n / \binom{n-1}{M-1}}$, may not be sufficient for spectral methods to achieve reliable top-K ranking (see Section 4).

4 Experimental Results

4.1 Synthetic data simulation

[Figure 1 plots omitted: four panels of $\ell_\infty$ estimation error and empirical success rate versus L, for Rank Centrality, Spectral MLE, and Borda Count at p = 0.25 and p = 0.025.]
Figure 1: Dense regime ($p_{\text{dense}} = 0.25$, first two figures): empirical $\ell_\infty$ estimation error vs. L (left); empirical success rate vs. L (right). Sparse regime ($p_{\text{sparse}} = 0.025$, last two figures): empirical $\ell_\infty$ estimation error vs. L (left); empirical success rate vs. L (right).

First, we conduct a synthetic data experiment for $M = 2$, the pairwise comparison model, to compare our result in Theorem 2 to that in recent work [17]. We consider both dense ($p \gtrsim \sqrt{\log n / n}$) and sparse ($\log n / n \lesssim p \lesssim \sqrt{\log n / n}$) regimes.
We set the constant $c_1 = 2$, and set $p_{\text{dense}} = 0.25$ and $p_{\text{sparse}} = 0.025$, so that each falls in its proper range. We use $n = 500$, $K = 10$, and $\Delta_K = 0.1$. Each result in all numerical simulations is obtained by averaging over 10000 Monte Carlo trials.

In Figure 1, the first two figures show the experiments in the dense regime. We see that as L increases, meaning as we obtain pairwise samples beyond the minimal sample complexity, (1) the $\ell_\infty$ error of Rank Centrality decreases and meets that of Spectral MLE (left); and (2) the success rate of Rank Centrality increases and soon hits 100% along with Spectral MLE (right). The curves support our result: in the dense regime $p \gtrsim \sqrt{\log n / n}$, Rank Centrality alone can achieve reliable top-K ranking. The last two figures show the experiments in the sparse regime. We see that as L increases, (1) the $\ell_\infty$ error of Rank Centrality decreases but does not meet that of Spectral MLE (left); and (2) the success rate of Rank Centrality increases but does not reach that of Spectral MLE, which hits nearly 100% (right). The curves lead us to speculate that the sparse regime condition $\log n / n \lesssim p \lesssim \sqrt{\log n / n}$ may not be sufficient for spectral methods to achieve reliable top-K ranking.

[Figure 2 plots omitted: three panels of empirical minimal sample complexity (blue solid) with fitted curves (red dashed) proportional to $1/M$, $1/\Delta_K^2$, and $n \log n$, respectively.]
Figure 2: Empirical minimal sample complexity vs. M (first), $\Delta_K$ (second), and $n \log n$ (third).

Next, we corroborate our optimal sample complexity result in Theorem 2. We examine whether the empirical minimal sample complexity decreases at the rates of $1/M$ and $1/\Delta_K^2$, and increases at the rate of $n \log n$. To verify the reduction at the rate of $1/M$, we run experiments for M ranging from 3 to 15. We increase the number of samples by increasing p until the success rate reaches 95% for each M. The number of samples needed to achieve this is taken as the empirical minimal sample complexity for each M. We set the other parameters as $n = 100$, $L = 20$, $K = 5$ and $\Delta_K = 0.3$. The result for each M in all simulations is obtained by averaging over 1000 Monte Carlo trials. To verify the other two relations, we follow similar procedures. As for $1/\Delta_K^2$, we set $n = 200$, $M = 2$, $L = 20$ and $K = 5$. As for $n \log n$, we set $M = 2$, $L = 4$, $K = 5$ and $\Delta_K = 0.4$. The first figure in Figure 2 shows the reduction in empirical minimal sample complexity with a blue solid curve; the red dashed curve is obtained by curve fitting. We can see that the empirical minimal sample complexity drops inversely proportionally to M. From the second and third figures, we can see that in terms of $\Delta_K$ and $n \log n$ it also behaves as our result in Theorem 2 predicts.

[Figure 3 plots omitted: success rate vs. L; normalized overlap (K = 5) vs. f, the fraction of samples used; and percentile of the top-5 users by average league points per match.]
Figure 3: (First) Empirical success rates of four algorithms: our algorithm (blue circle), heuristic Spectral MLE (red cross), least square (green plus), and counting (purple triangle); (Second) Top-5 ranked users: normalized overlap vs. fraction of samples used; (Third) Top-5 users' (sorted by average League of Legends points earned per match) percentile in the ranks by our algorithm, heuristic Spectral MLE, least square, and counting. For instance, the user who earns the largest points per match (first entry) is at around the 80th percentile according to our algorithm and heuristic Spectral MLE, the 60th percentile according to least square, and the 10th percentile according to counting.
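For reference, one Monte Carlo trial of the dense-regime setup described above can be sketched as follows (Python; we shrink n so it runs quickly, reuse the Rank Centrality construction of Section 3.1, and all names are ours):

```python
import numpy as np

def trial(n=100, K=10, delta=0.1, p=0.25, L=20, seed=0):
    """One synthetic trial for M = 2: Erdos-Renyi comparison graph,
    L BTL comparisons per edge, Rank Centrality, top-K success check."""
    rng = np.random.default_rng(seed)
    w = np.full(n, 1.0 - delta); w[:K] = 1.0        # Delta_K gap at rank K
    frac = np.zeros((n, n))                         # empirical win fractions
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:                    # edge appears w.p. p
                y = rng.binomial(L, w[i] / (w[i] + w[j])) / L
                frac[i, j], frac[j, i] = y, 1.0 - y
    d_max = max(1, int(((frac + frac.T) > 0).sum(axis=1).max()))
    P = frac.T / (2.0 * d_max)
    np.fill_diagonal(P, 0.0); np.fill_diagonal(P, 1.0 - P.sum(axis=1))
    pi = np.full(n, 1.0 / n)
    for _ in range(500):                            # stationary distribution
        pi = pi @ P
    return set(np.argsort(-pi)[:K]) == set(range(K))

print(np.mean([trial(seed=s) for s in range(20)]))  # empirical success rate
```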
Last, we evaluate the success rates of various algorithms on M-wise comparison data. We consider our proposed algorithm, Spectral MLE, least square (HodgeRank [31]), and counting. Since Spectral MLE has been developed for pairwise data, we heuristically extend it: we apply our sample breaking method to obtain the pairwise data it needs, and for any parameters required to run Spectral MLE, we heuristically find the best ones, which give rise to the highest success rate. In the other two algorithms, we first apply our sample breaking method as well. Then, for least square, we find a score vector $\hat{w}$ that minimizes the squared error $\sum_{(i,j) \in \mathcal{E}} \big( \log(\hat{w}_i / \hat{w}_j) - \log(y_{ij} / y_{ji}) \big)^2$, where $\mathcal{E}$ is the edge set for the converted pairwise data. For counting, we count each item's number of wins over all involved pairwise data. We use $n = 100$, $M = 4$, $p = 0.0025 \ge (M-1)\sqrt{\log n / \binom{n-1}{M-1}}$, $K = 5$ and $\Delta_K = 0.3$. Each result in all simulations is obtained by averaging over 5000 Monte Carlo trials.

The first figure in Figure 3 shows that our algorithm and heuristic Spectral MLE perform best (the latter being marginally better), achieving near-100% success rates for large L. It also shows that they outperform the other two algorithms, which do not achieve near-100% success rates even for large L.

4.2 Real-world data simulation

One natural setting where we can obtain M-wise comparison data is an online game. Users randomly get together and play, and the results depend on their skills. We find League of Legends to be a proper fit.⁴ In extracting M-wise data, we adopt a measure widely accepted in the user community as a factor that rates users' skill.⁵ We incorporate this measure into our model as follows. For each match (an M-wise sample), we have 10 users, each associated with its measure. In breaking M-wise samples, for each user pair (i, j), we compare their measures and declare that user i wins if its measure is larger than user j's; this corresponds to $y_{ij}^{(\ell)}$ in our model. We assign 1 if user i wins and 0 otherwise. Two users may play together in multiple, say $L_{ij}$, matches, and we can compute $y_{ij} := \big( \sum_{\ell=1}^{L_{ij}} y_{ij}^{(\ell)} \big) / L_{ij}$ to use for Rank Centrality. As the M-wise data is extracted from team competitions, League of Legends does not perfectly fit our model. Yet one main reason to run this experiment is to see whether our algorithm works well in other settings that do not necessarily fit the PL model, that is, whether it is broadly applicable. A minimal sketch of this per-match extraction step appears below.
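The sketch (Python; field names are ours, and the measure follows footnote 5 below) glosses over whether all pairs or only the circularly broken pairs are kept per match, and simply emits all pairs with distinct measures:

```python
from itertools import combinations

def kda_measure(kills, assists, deaths, won):
    """Footnote-5 measure: (kills + assists) / (1 + deaths), weighted by
    1.1 for users on the winning team and 1.0 for the losing team."""
    return (kills + assists) / (1.0 + deaths) * (1.1 if won else 1.0)

def break_match(players):
    """players: dict mapping user id -> (kills, assists, deaths, won) for
    one 10-player match. Emits (winner, loser) pairs by comparing measures;
    ties produce no sample. Aggregating these pairs over matches gives the
    empirical y_ij fed to Rank Centrality."""
    m = {u: kda_measure(*stats) for u, stats in players.items()}
    pairs = []
    for i, j in combinations(players, 2):
        if m[i] != m[j]:
            pairs.append((i, j) if m[i] > m[j] else (j, i))
    return pairs
```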
We first investigate the robustness aspect by evaluating performance against partial information. To this end, we use all collected data and obtain a ranking result for each algorithm, which we consider its baseline. Then, for each algorithm, we reduce the sample size by discarding some of the data, and compare the results to the baseline to see how robust each algorithm is against partial information. We conduct this experiment for four algorithms: our proposed algorithm, the heuristic extension of Spectral MLE, least square, and counting. We choose as our metric the normalized overlap $|S_{\text{comp}} \cap S_{\text{part}}| / K$, where $K = 5$, $S_{\text{comp}}$ is the set of top-K users identified using the complete dataset, and $S_{\text{part}}$ is that identified using partial datasets. In choosing partial data, we set $f \in (0.5, 1)$ and retain each match result with probability f independently, so that f is the fraction of samples used. We compute the metric for each f by averaging over 1000 Monte Carlo trials. The second figure of Figure 3 shows that over the range of f where overlaps above 60% are retained, our algorithm, along with some others, demonstrates good robustness against partial information.

In addition, we compare the ranks estimated by the four algorithms to the rank provided by League of Legends. By computing the average points earned per match for each user, we infer the rank of the users determined by official standards. In the third figure of Figure 3, the x-axis indicates the top-5 users identified by computing average League of Legends points earned per match and sorting them in descending order. The y-axis indicates the percentile of these top-5 users according to the ranks by the algorithms of interest. Notice that the top-5 ranked users by League of Legends standards are also placed at high ranks when ranked by our algorithm and heuristic Spectral MLE; they are all placed at the 80th percentile or above. On the other hand, most of them (4 out of the top-5 users) are placed at noticeably lower ranks when ranked by least square and counting.

5 Conclusion

We characterized the minimax (order-wise) optimal sample complexity for top-K rank aggregation in the M-wise comparison model that builds on the PL model. We corroborated our result using synthetic data experiments and verified the applicability of our algorithm on real-world data.

⁴ Two teams of 5 users compete. Each user kills an opponent, assists a mate to kill one, and dies from an attack. At the end, one team wins, and different points are given to the users. We use users' kill/assist/death data (non-negative integers), which can be considered as noisy measurements of their skill, and rank them by skill.
⁵ We define a measure as {(# of kills + # of assists)/(1 + # of deaths)} × weight. We adopt this measure since it is similar to the one officially provided (called KDA statistics). We assign winning users a weight of 1.1 and losing users a weight of 1.0, to give extra credit (10%) to users who lead their team's winning.

Acknowledgments

This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (2017-0-00694, Coding for High-Speed Distributed Networks).

References

[1] Ailon, N. (2012). Active learning ranking from pairwise preferences with almost optimal query complexity. Journal of Machine Learning Research, 13, 137-164.
[2] Ailon, N. and Mohri, M. (2007). An efficient reduction of ranking to classification. arXiv preprint arXiv:0710.2889.
[3] Ammar, A. and Shah, D. (2011). Ranking: Compare, don't score. In Allerton Conference, pages 776-783. IEEE.
[4] Ashish, K. and Oh, S. (2016). Data-driven rank breaking for efficient rank aggregation. Journal of Machine Learning Research, 17, 1-54.
[5] Azari Soufiani, H., Chen, W., Parkes, D. C., and Xia, L. (2013). Generalized method-of-moments for rank aggregation. In Neural Information Processing Systems, pages 2706-2714.
[6] Azari Soufiani, H., Parkes, D. C., and Xia, L. (2014). A statistical decision-theoretic framework for social choice. In Neural Information Processing Systems, pages 3185-3193.
[7] Baltrunas, L., Makcinskas, T., and Ricci, F. (2010).
Group recommendations with rank aggregation and collaborative filtering. In ACM Conference on Recommender Systems, pages 119-126. ACM.
[8] Bell, D. (1982). Econometric models for probabilistic choice among products. Operations Research, 30(5), 961-981.
[9] Bergstrom, C. T., West, J. D., and Wiseman, M. A. (2008). The Eigenfactor metrics. Journal of Neuroscience, 28(45), 11433-11434.
[10] Bonacich, P. and Lloyd, P. (2001). Eigenvector-like measures of centrality for asymmetric relations. Social Networks, 23(3), 191-201.
[11] Borda, J. C. (1781). Mémoire sur les élections au scrutin.
[12] Bradley, R. A. and Terry, M. E. (1952). Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3-4), 324-345.
[13] Braverman, M., Mao, J., and Weinberg, S. M. (2016). Parallel algorithms for select and partition with noisy comparisons. In ACM Symposium on Theory of Computing, pages 851-862.
[14] Brin, S. and Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1), 107-117.
[15] Caplin, A. and Nalebuff, B. (1991). Aggregation and social choice: a mean voter theorem. Econometrica, pages 1-23.
[16] Chen, X., Bennett, P. N., Collins-Thompson, K., and Horvitz, E. (2013). Pairwise ranking aggregation in a crowdsourced setting. In ACM Conference on Web Search and Data Mining, pages 193-202. ACM.
[17] Chen, Y. and Suh, C. (2015). Spectral MLE: Top-K rank aggregation from pairwise comparisons. In International Conference on Machine Learning, pages 371-380.
[18] Cheng, W., Hüllermeier, E., and Dembczynski, K. J. (2010). Label ranking methods based on the Plackett-Luce model. In International Conference on Machine Learning, pages 215-222.
[19] Cooley, O., Kang, M., and Koch, C. (2016). Threshold and hitting time for high-order connectedness in random hypergraphs. The Electronic Journal of Combinatorics, pages 2-48.
[20] Dwork, C., Kumar, R., Naor, M., and Sivakumar, D. (2001). Rank aggregation methods for the web. In International Conference on World Wide Web, pages 613-622. ACM.
[21] Fishburn, P. (1982). Nontransitive measurable utility. Journal of Mathematical Psychology, 26(1), 31-67.
[22] Ford, L. R. (1957). Solution of a ranking problem from binary comparisons. American Mathematical Monthly, pages 28-33.
[23] Graham, L. and Sugden, R. (1982). Econometric models for probabilistic choice among products. Economic Journal, 92(368), 805-824.
[24] Guiver, J. and Snelson, E. (2009). Bayesian inference for Plackett-Luce ranking models. In ACM International Conference on Machine Learning, pages 377-384.
[25] Hajek, B., Oh, S., and Xu, J. (2014). Minimax-optimal inference from partial rankings. In Neural Information Processing Systems, pages 1475-1483.
[26] Han, T. and Verdú, S. (1994). Generalizing the Fano inequality. IEEE Transactions on Information Theory, 40, 1247-1251.
[27] Heckel, R., Shah, N., Ramchandran, K., and Wainwright, M. (2016). Active ranking from pairwise comparisons and when parametric assumptions don't help. arXiv preprint arXiv:1606.08842.
[28] Hunter, D. R. (2004). MM algorithms for generalized Bradley-Terry models. Annals of Statistics, pages 384-406.
[29] Jamieson, K. G. and Nowak, R. (2011). Active ranking using pairwise comparisons. In Neural Information Processing Systems, pages 2240-2248.
[30] Janson, S. (2004). Large deviations for sums of partly dependent random variables. In Random Structures & Algorithms, pages 234-248.
[31] Jiang, X., Lim, L. H., Yao, Y., and Ye, Y. (2011).
Statistical ranking and combinatorial Hodge theory. Mathematical Programming, 127, 203-244.
[32] Luce, R. D. (1959). Individual choice behavior: A theoretical analysis. Wiley.
[33] Maystre, L. and Grossglauser, M. (2015). Fast and accurate inference of Plackett-Luce models. In Neural Information Processing Systems, pages 172-180.
[34] McFadden, D. (1973). Conditional logit analysis of qualitative choice behavior. Frontiers in Econometrics, pages 105-142.
[35] McFadden, D. (1980). Econometric models for probabilistic choice among products. Journal of Business, 53(3), S13-S29.
[36] Mohajer, S., Suh, C., and Elmahdy, A. (2017). Active learning for top-K rank aggregation from noisy comparisons. In International Conference on Machine Learning, pages 2488-2497.
[37] Negahban, S., Oh, S., and Shah, D. (2016). Rank Centrality: Ranking from pair-wise comparisons. Operations Research, 65, 266-287.
[38] Oh, S., Thekumparampil, K. K., and Xu, J. (2015). Collaboratively learning preferences from ordinal data. In Neural Information Processing Systems, pages 1909-1917.
[39] Plackett, R. L. and Luce, R. D. (1975). The analysis of permutations. Applied Statistics, pages 193-202.
[40] Rajkumar, A. and Agarwal, S. (2014). A statistical convergence perspective of algorithms for rank aggregation from pairwise data. In International Conference on Machine Learning, pages 118-126.
[41] Seeley, J. R. (1949). The net of reciprocal influence. Canadian Journal of Psychology, 3(4), 234-240.
[42] Shah, N. B. and Wainwright, M. J. (2015). Simple, robust and optimal ranking from pairwise comparisons. arXiv preprint arXiv:1512.08949.
[43] Szörényi, B., Busa-Fekete, R., Paul, A., and Hüllermeier, E. (2015). Online rank elicitation for Plackett-Luce: A dueling bandits approach. In Neural Information Processing Systems, pages 604-612.
[44] Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27(11), 1134-1142.
[45] Vigna, S. (2016). Spectral ranking. Network Science, 4(4), 433-445.
[46] Walker, J. and Ben-Akiva, M. (2002). Generalized random utility model. Mathematical Social Sciences, 43(3), 303-343.
[47] Wei, T. H. (1952). The algebraic foundations of ranking theory. Ph.D. thesis, University of Cambridge.
Reliable Decision Support using Counterfactual Models

Suchi Saria (Department of Computer Science, Johns Hopkins University, Baltimore, MD 21211) [email protected]
Peter Schulam (Department of Computer Science, Johns Hopkins University, Baltimore, MD 21211) [email protected]

Abstract

Making a good decision involves considering the likely outcomes under each possible action. For example, would drug A or drug B lead to a better outcome for this patient? Ideally, we answer these questions using an experiment, but this is not always possible (e.g., it may be unethical). As an alternative, we can use non-experimental data to learn models that make counterfactual predictions of what we would observe had we run an experiment. To learn such models for decision-making problems, we propose the use of counterfactual objectives in lieu of classical supervised learning objectives. We implement this idea in a challenging and frequently occurring context, and propose the counterfactual GP (CGP), a counterfactual model of continuous-time trajectories (time series) under sequences of actions taken in continuous time. We develop our model within the potential outcomes framework of Neyman [1923] and Rubin [1978]. The counterfactual GP is trained using a joint maximum likelihood objective that adjusts for dependencies between observed actions and outcomes in the training data. We report two sets of experimental results. First, we show that the CGP's predictions are reliable; they are stable to changes in certain characteristics of the training data that are not relevant to the decision-making problem. Predictive models trained using classical supervised learning objectives, however, are not stable to such perturbations. In the second experiment, we use data from a real intensive care unit (ICU) and qualitatively demonstrate how the CGP's ability to answer "What if?" questions offers medical decision-makers a powerful new tool for planning treatment.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

Making a good decision involves considering the likely outcomes under each possible action. Would changing the color or text of an ad be more effective for increasing a firm's revenue? Would drug A or drug B lead to a better outcome for this patient? Ideally, we would run an experiment: clone the patient and try each action on a different clone, compare the outcomes across scenarios, and choose the one with the best result. Experiments, however, are not always feasible. An alternative is to learn models from non-experimental data (i.e. data where we do not control actions) that can make counterfactual predictions of the outcomes we would have observed had we run an experiment (see e.g., Pearl 2009).

The key challenge when learning counterfactual models from non-experimental data is that it is difficult to distinguish between statistical dependence and causal relationships. For instance, consider a drug that is often given to sicker patients who are also more likely to die. Without accounting for this bias in the treatment policy, a statistical model would predict that the drug kills patients even if it is actually beneficial. This challenge is commonly addressed using the potential outcomes framework [Neyman, 1923, 1990, Rubin, 1978], which introduces a collection of counterfactual random variables $\{Y[a] : a \in \mathcal{C}\}$ for an outcome Y and each action a from a set of choices $\mathcal{C}$. The counterfactuals may be interpreted
as probabilistic models of outcomes obtained by running experiments. Using the potential outcomes framework, we can clearly state assumptions under which the distribution of the counterfactuals can be learned from non-experimental data. When this is possible, learned counterfactual models can make "What if?" predictions to guide decisions.

[Figure 1 plots omitted: three panels showing lung capacity (PFVC) versus years since first symptom, with observed measurements, treatments with Drug A and Drug B, and counterfactual predictions $E[Y[a] \mid \mathcal{H}]$.]
Figure 1: Best viewed in color. An illustration of the counterfactual GP applied to health care. The red box in (a) shows previous lung capacity measurements (black dots) and treatments (the history). Panels (b) and (c) show the type of predictions we would like to make. We use Y[a] to represent the potential outcome under action a.

In this paper, we use the potential outcomes framework to develop the counterfactual Gaussian process (CGP), an approach to modeling the effects of sequences of actions on continuous-time trajectories (i.e. time series). Figure 1 illustrates this idea. We show an individual with a lung disease, and would like to predict how her lung capacity (y-axis) will progress in response to different treatment plans. Panel (a) shows the history in the red box, which contains previous lung capacity measurements (black dots) and previous treatments (green and blue bars). Panels (b) and (c) illustrate the type of predictions we would like to make: given the history, what is the likely future trajectory of lung capacity if we prescribe Drug B (green)? What if we prescribe two doses of Drug A (blue)? A physician might use these "What if?" queries to decide that two doses of A is best.

Many authors have studied counterfactual models for discrete time series data. For instance, in health care and epidemiology, Robins [1997] develops counterfactual models of a single outcome measured after a sequence of actions in discrete time. Brodersen et al. [2015] build counterfactual models to estimate the impact that a single, discrete event has on a discrete time series of interest (e.g. daily sales after a product launch). Others have modeled the effect of actions taken in continuous time on a single outcome (e.g. Arjas and Parner 2004, Lok 2008). The CGP is unique in that it allows us to predict how future trajectories in continuous time will change in response to sequences of interventions.

Contributions. For problems in which learned predictive models are used to guide decision-makers in choosing actions, this paper proposes the use of counterfactual objectives in lieu of classical supervised learning objectives. We implement this idea in a challenging and frequently occurring context: problems where outcomes are measured and actions are taken at discrete points in continuous time, and may be freely interwoven. Our key methodological contribution is an adjusted maximum likelihood objective for Gaussian processes that allows us to learn counterfactual models of continuous-time trajectories from observational traces: irregularly sampled sequences of actions and outcomes denoted using $\mathcal{D} = \{\{(y_{ij}, a_{ij}, t_{ij})\}_{j=1}^{n_i}\}_{i=1}^{m}$, where $y_{ij} \in \mathbb{R} \cup \{\varnothing\}$, $a_{ij} \in \mathcal{C} \cup \{\varnothing\}$,¹ and $t_{ij} \in [0, \tau]$.
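As a concrete picture of these traces, here is a minimal sketch of the data structure $\mathcal{D}$ (Python; the class and field names are ours, not the paper's):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Event:
    """One entry (y_ij, a_ij, t_ij) of an observational trace.

    Either the outcome or the action may be missing; None plays the role
    of the null mark described in the text (footnote 1)."""
    t: float                    # event time in [0, tau]
    y: Optional[float] = None   # outcome measurement, if observed
    a: Optional[str] = None     # action taken, if any

# D is a list of traces, one per individual; within a trace, actions and
# outcomes interleave freely at irregular times.
trace = [Event(t=0.5, y=71.2), Event(t=1.0, a="drug_A"), Event(t=2.3, y=64.8)]
D: List[List[Event]] = [trace]
```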
We derive the technique by jointly modeling actions and outcomes using a marked point process (MPP; see e.g., Daley and Vere-Jones 2007), where the GP models the conditional distribution of the marks. When using potential outcomes, several assumptions are typically required to show that the learned statistical model estimates the target counterfactuals. We describe one set of assumptions sufficient for recovering the counterfactual GP from non-experimental data.

We report two sets of experimental results. First, we show that the CGP's predictions are reliable; they are stable to changes in certain characteristics of the training data that are not relevant to the decision-making problem. Predictive models trained using classical supervised learning objectives, however, are not stable to such perturbations. In this experiment, we use simulated data so that (1) we can control characteristics of the training data, and (2) we have access to the ground truth consequences of our actions on test data for evaluation. In our second experiment, we use data from a real intensive care unit (ICU) to learn the CGP, and qualitatively demonstrate how its ability to answer "What if?" questions offers medical decision-makers a powerful new tool for planning treatment.

¹ $y_{ij}$ and $a_{ij}$ may be the null variable $\varnothing$ to allow for the possibility that an action is taken but no outcome is observed, and vice versa. $[0, \tau]$ denotes a fixed period of time over which the trajectories are observed.

1.1 Related Work

The difference between counterfactual predictions of an outcome if an action had been taken and if it had not been taken is defined as the causal effect of the action in the causal inference community (see e.g., Pearl 2009 or Morgan and Winship 2014). Potential outcomes are commonly used to formalize counterfactual predictions and obtain causal effect estimates [Neyman, 1923, 1990, Rubin, 1978]; we will review them shortly. Potential outcomes are often applied to cross-sectional data (see examples in Morgan and Winship 2014; recent examples from machine learning are Bottou et al. 2013, Johansson et al. 2016), but have also been used to estimate the causal effect of a sequence of actions in discrete time on a final outcome (e.g. Robins 1997, Taubman et al. 2009). Conversely, Brodersen et al. [2015] estimate the effect that a single discrete intervention has on a discrete time series.

Recent work on optimal dynamic treatment regimes uses the sequential potential outcomes framework proposed by Robins [1997] to learn lists of discrete-time treatment rules that optimize a scalar outcome. Algorithms for learning these rules often use value-function approximations (Q-learning; e.g., Nahum-Shani et al. 2012). Alternatively, A-learning directly learns the relative differences between competing policies [Murphy, 2003]. Others have extended the potential outcomes framework in Robins [1997] to learn causal effects of actions taken in continuous time on a single final outcome using observational data. Lok [2008] proposes an estimator based on structural nested models [Robins, 1992] that learns the instantaneous effect of administering a single type of treatment. Arjas and Parner [2004] develop an alternative framework for causal inference using Bayesian posterior predictive distributions to estimate the effects of actions in continuous time on a final outcome. Xu et al.
[2016] also learn effects of actions in continuous time on outcomes measured in continuous time, but make one-step-ahead scalar predictions instead of trajectory-valued predictions. Both Lok [2008] and Arjas and Parner [2004] use marked point processes to formalize assumptions that make it possible to learn causal effects from continuous-time observational data. We build on these ideas to learn causal effects of actions on continuous-time trajectories instead of a single outcome. Cunningham et al. [2012] introduce the Causal Gaussian Process, but their use of the term "causal" is different from ours, and refers to a constraint that holds for all samples drawn from the GP. Causal effects in continuous time have also been studied using differential equations. Mooij et al. [2013] formalize an analog of Pearl's "do" operation for deterministic ordinary differential equations. Sokol and Hansen [2014] make similar contributions for stochastic differential equations by studying limits of discrete-time non-parametric structural equation models [Pearl, 2009].

Reinforcement learning (RL) algorithms learn from data where actions and observations are interleaved in discrete time (see e.g., Sutton and Barto 1998). In RL, however, the focus is on learning a policy (a map from states to actions) that optimizes the expected reward, rather than a model that predicts the effects of the agent's actions on future observations. In model-based RL, a model of an action's effect on the subsequent state is produced as a by-product, either offline before optimizing the policy (e.g., Ng et al. 2006) or incrementally as the agent interacts with its environment. In most RL problems, however, learning algorithms rely on active experimentation to collect samples. This is not always possible; for example, in health care we cannot actively experiment on patients, and so we must rely on retrospective observational data. In RL, a related problem known as off-policy evaluation also uses retrospective observational data (see e.g., Dudík et al. 2011, Jiang and Li 2016, Păduraru et al. 2012). The goal is to use state-action-reward sequences generated by an agent operating under an unknown policy to estimate the expected reward of a target policy. Off-policy algorithms typically use value-function approximations, importance reweighting, or doubly robust combinations of the two to estimate the expected reward.

2 Counterfactual Models from Observational Traces

Counterfactual GPs build on ideas from potential outcomes [Neyman, 1923, 1990, Rubin, 1978], Gaussian processes [Rasmussen and Williams, 2006], and marked point processes [Daley and Vere-Jones, 2007]. In the interest of space, we review potential outcomes and marked point processes, but refer the interested reader to Rasmussen and Williams [2006] for background on GPs.

Background: Potential Outcomes. To formalize counterfactuals, we adopt the potential outcomes framework [Neyman, 1923, 1990, Rubin, 1978], which uses a collection of random variables $\{Y[a] : a \in \mathcal{C}\}$ to model the distribution over outcomes under each action $a$ from a set of choices $\mathcal{C}$. To make counterfactual predictions, we must learn the distribution $P(Y[a])$ for each action $a \in \mathcal{C}$. If we can freely experiment by repeatedly taking actions and recording the effects, then it is straightforward to learn a probabilistic model for each potential outcome $Y[a]$. Conducting experiments, however, may not be possible.
Alternatively, we can use observational data, where we have example actions $A$ and outcomes $Y$, but do not know how actions were chosen. Note the difference between the action $a$ and the random variable $A$ that models the observed actions in our data. The notation $Y[a]$ serves to distinguish between the observed distribution $P(Y \mid A = a)$ and the target distribution $P(Y[a])$. In general, we can use observational data to estimate $P(Y \mid A = a)$. Under two assumptions, however, we can show that this conditional distribution is equivalent to the counterfactual model $P(Y[a])$. The first is known as the Consistency Assumption.

Assumption 1 (Consistency). Let $Y$ be the observed outcome, $A \in \mathcal{C}$ be the observed action, and $Y[a]$ be the potential outcome for action $a \in \mathcal{C}$; then: $(\, Y = Y[a] \,) \mid A = a$.

Under consistency, we have that $P(Y \mid A = a) = P(Y[a] \mid A = a)$. Now, the potential outcome $Y[a]$ may depend on the action $A$, so in general $P(Y[a] \mid A = a) \neq P(Y[a])$. The next assumption posits that we have additional observed variables $X$ known as confounders [Morgan and Winship, 2014] that are sufficient to d-separate $Y[a]$ and $A$.

Assumption 2 (No Unmeasured Confounders (NUC)). Let $Y$ be the observed outcome, $A \in \mathcal{C}$ be the observed action, $X = x$ be a vector containing potential confounders, and $Y[a]$ be the potential outcome under action $a \in \mathcal{C}$; then: $(\, Y[a] \perp A \,) \mid X = x$.

Under Assumptions 1 and 2, $P(Y \mid A = a, X = x) = P(Y[a] \mid X = x)$. By marginalizing with respect to $P(X)$ we can estimate $P(Y[a])$. An extension of Assumption 2 introduced by Robins [1997], known as sequential NUC, allows us to estimate the effect of a sequence of actions in discrete time on a single outcome. In continuous-time settings, where both the type and timing of actions may be statistically dependent on the potential outcomes, Assumption 2 (and sequential NUC) cannot be applied as-is. We will describe an alternative that serves a similar role for CGPs.

Background: Marked Point Processes. Point processes are distributions over sequences of timestamps $\{T_i\}_{i=1}^{N}$, which we call points, and a marked point process (MPP) is a point process where each point is annotated with an additional random variable $X_i$, called its mark. For example, a point $T$ might represent the arrival time of a customer, and $X$ the amount that she spent at the store. We emphasize that both the annotated points $(T_i, X_i)$ and the number of points $N$ are random variables. A point process can be characterized as a counting process $\{N_t : t \geq 0\}$ that counts the number of points that occurred up to and including time $t$: $N_t = \sum_{i=1}^{N} \mathbb{I}(T_i \leq t)$. By definition, this process can only take integer values, and $N_t \geq N_s$ if $t \geq s$. In addition, it is commonly assumed that $N_0 = 0$ and that $\Delta N_t = \lim_{\delta \to 0^+} N_t - N_{t-\delta} \in \{0, 1\}$. We can parameterize a point process using a probabilistic model of $\Delta N_t$ given the history of the process $\mathcal{H}_{t^-}$ up to but not including time $t$ (we use $t^-$ to denote the left limit of $t$). Using the Doob-Meyer decomposition [Daley and Vere-Jones, 2007], we can write $\Delta N_t = \Delta M_t + \Delta \Lambda_t$, where $M_t$ is a martingale and $\Lambda_t$ is a cumulative intensity function, and

$P(\Delta N_t = 1 \mid \mathcal{H}_{t^-}) = E[\Delta N_t \mid \mathcal{H}_{t^-}] = E[\Delta M_t \mid \mathcal{H}_{t^-}] + \Delta \Lambda_t(\mathcal{H}_{t^-}) = 0 + \Delta \Lambda_t(\mathcal{H}_{t^-}),$

which shows that we can parameterize the point process using the conditional intensity function $\lambda^*(t)\, dt \triangleq \Delta \Lambda_t(\mathcal{H}_{t^-})$. The star superscript on the intensity function serves as a reminder that it depends on the history $\mathcal{H}_{t^-}$.
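To make the conditional intensity concrete, the sketch below (Python with numpy; all rate values are placeholders of our own choosing) simulates a point process on $[0, \tau]$ by thinning: a homogeneous Poisson intensity ignores the history, while a Hawkes-style intensity depends on $\mathcal{H}_{t^-}$.

```python
import numpy as np

def simulate_point_process(intensity, lam_max, tau, rng):
    """Simulate points on [0, tau] by thinning (Lewis/Ogata).

    intensity(t, history) is the conditional intensity lambda*(t); it is
    assumed to be bounded above by lam_max. Returns accepted point times.
    """
    t, history = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from a rate-lam_max process
        if t > tau:
            return np.array(history)
        if rng.uniform() < intensity(t, history) / lam_max:  # accept w.p. lambda*/lam_max
            history.append(t)

rng = np.random.default_rng(0)
# Homogeneous Poisson: the intensity ignores the history.
poisson = simulate_point_process(lambda t, h: 2.0, lam_max=2.0, tau=10.0, rng=rng)
# Hawkes-style: each past point adds a decaying bump, so lambda* depends on H_{t-}.
hawkes = simulate_point_process(
    lambda t, h: 0.5 + sum(0.8 * np.exp(-(t - s)) for s in h),
    lam_max=20.0, tau=10.0, rng=rng)
```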
For example, in non-homogeneous Poisson processes, $\lambda^*(t)$ is a function of time that does not depend on the history. On the other hand, a Hawkes process is an example of a point process where $\lambda^*(t)$ does depend on the history [Hawkes, 1971]. MPPs are defined by an intensity that is a function of both the time $t$ and the mark $x$: $\lambda^*(t, x) = \lambda^*(t)\, p^*(x \mid t)$. We have written the joint intensity in a factored form, where $\lambda^*(t)$ is the intensity of any point occurring (that is, the mark is unspecified), and $p^*(x \mid t)$ is the pdf of the observed mark given the point's time. For an MPP, the history $\mathcal{H}_t$ contains each prior point's time and mark.

2.1 Counterfactual Gaussian Processes

Let $\{Y_t : t \in [0, \tau]\}$ denote a continuous-time stochastic process, where $Y_t \in \mathbb{R}$ and $[0, \tau]$ defines the interval over which the process is defined. We will assume that the process is observed at a discrete set of irregular and random times $\{(y_j, t_j)\}_{j=1}^{n}$. We use $\mathcal{C}$ to denote the set of possible action types, $a \in \mathcal{C}$ to denote the elements of the set, and define an action to be a 2-tuple $(a, t)$ specifying an action type $a \in \mathcal{C}$ and a time $t \in [0, \tau]$ at which it is taken. To refer to multiple actions, we use $\mathbf{a} = [(a_1, t_1), \ldots, (a_n, t_n)]$. Finally, we define the history $\mathcal{H}_t$ at a time $t \in [0, \tau]$ to be a list of all previous observations of the process and all previous actions. Our goal is to model the counterfactual:

$P(\{Y_s[\mathbf{a}] : s > t\} \mid \mathcal{H}_t)$, where $\mathbf{a} = \{(a_j, t_j) : t_j > t\}_{j=1}^{m}$.   (1)

To learn the counterfactual model, we will use traces $\mathcal{D} \triangleq \{h_i = \{(t_{ij}, y_{ij}, a_{ij})\}_{j=1}^{n_i}\}_{i=1}^{m}$, where $y_{ij} \in \mathbb{R} \cup \{\varnothing\}$, $a_{ij} \in \mathcal{C} \cup \{\varnothing\}$, and $t_{ij} \in [0, \tau]$. Our approach is to model $\mathcal{D}$ using a marked point process (MPP), which we learn using the traces. Using Assumption 1 and two additional assumptions defined below, the estimated MPP recovers the counterfactual model in Equation 1.

We define the MPP mark space as the Cartesian product of the outcome space $\mathbb{R}$ and the set of action types $\mathcal{C}$. To allow either the outcome or the action (but not both) to be the null variable $\varnothing$, we introduce binary random variables $z_y \in \{0, 1\}$ and $z_a \in \{0, 1\}$ to indicate when the outcome $y$ and action $a$ are not $\varnothing$. Formally, the mark space is $\mathcal{X} = (\mathbb{R} \cup \{\varnothing\}) \times (\mathcal{C} \cup \{\varnothing\}) \times \{0, 1\} \times \{0, 1\}$. We can then write the MPP intensity as

$\lambda^*(t, y, a, z_y, z_a) = \underbrace{\lambda^*(t)\, p^*(z_y, z_a \mid t)}_{\text{[A] Event model}} \; \underbrace{p^*(y \mid t, z_y)}_{\text{[B] Outcome model (GP)}} \; \underbrace{p^*(a \mid y, t, z_a)}_{\text{[C] Action model}},$   (2)

where we have again used the $*$ superscript as a reminder that the hazard function and densities above are implicitly conditioned on the history $\mathcal{H}_{t^-}$. The parameterization of the event and action models can be chosen to reflect domain knowledge about how the timing of events and choice of action depend on the history. The outcome model is parameterized using a GP (or any elaboration such as a hierarchical GP, or mixture of GPs), and can be simply designed as a regression model that predicts how the future trajectory will progress given the previous actions and outcome observations.

Learning. To learn the CGP, we maximize the likelihood of observational traces over a fixed interval $[0, \tau]$. Let $\theta$ denote the model parameters; then the likelihood for a single trace is

$\ell(\theta) = \sum_{j=1}^{n} \log p_\theta^*(y_j \mid t_j, z_{y_j}) + \sum_{j=1}^{n} \log \lambda_\theta^*(t_j)\, p_\theta^*(a_j, z_{y_j}, z_{a_j} \mid t_j, y_j) - \int_0^\tau \lambda_\theta^*(s)\, ds.$   (3)

We assume that traces are independent, and so can learn from multiple traces by maximizing the sum of the individual-trace log likelihoods with respect to $\theta$. We refer to Equation 3 as the adjusted maximum likelihood objective.
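As a minimal sketch of how Equation 3 decomposes for one trace, the Python function below sums the outcome terms, the event/action adjustment terms, and subtracts the point-process normalizer. The components `log_py`, `log_event_action`, and `Lambda` are hypothetical stand-ins for the three model factors and the integrated intensity; they are not the paper's implementation.

```python
import numpy as np

def trace_log_likelihood(events, log_py, log_event_action, Lambda, tau):
    """Adjusted maximum likelihood objective (Equation 3) for one trace.

    events:           list of (t_j, y_j, a_j, zy_j, za_j) tuples.
    log_py:           log p*(y | t, zy=1), the GP outcome model [B].
    log_event_action: log lambda*(t) + log p*(a, zy, za | t, y), terms [A] and [C].
    Lambda:           integrated intensity, int_0^tau lambda*(s) ds.
    """
    ll = 0.0
    for (t, y, a, zy, za) in events:
        if zy == 1:                      # an outcome was observed at this point
            ll += log_py(y, t)           # first sum: fit the GP to outcomes
        ll += log_event_action(t, y, a, zy, za)  # second sum: adjustment terms
    return ll - Lambda(tau)              # minus the integral of lambda* over [0, tau]

# Dummy usage with a constant-intensity event model (all values illustrative):
ll = trace_log_likelihood(
    [(1.5, 92.3, None, 1, 0), (2.0, None, "drug_A", 0, 1)],
    log_py=lambda y, t: -0.5 * (y - 90.0) ** 2 / 25.0,
    log_event_action=lambda t, y, a, zy, za: np.log(0.2),
    Lambda=lambda tau: 0.2 * tau, tau=24.0)
```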
We see that the first term fits the GP to the outcome data, and the second term acts as an adjustment to account for dependencies between future outcomes and the timing and types of actions that were observed in the training data.

Connection to target counterfactual. By maximizing Equation 3, we obtain a statistical model of the observational traces $\mathcal{D}$. In general, the statistical model may not recover the target counterfactual model (Equation 1). To connect the CGP to Equation 1, we describe two additional assumptions. The first assumption is an alternative to Assumption 2.

Assumption 3 (Continuous-Time NUC). For all times $t$ and all histories $\mathcal{H}_{t^-}$, the densities $\lambda^*(t)$, $p^*(z_y, z_a \mid t)$, and $p^*(a \mid y, t, z_a)$ do not depend on $Y_s[a]$ for all times $s > t$ and all actions $a$.

The key implication of this assumption is that the policy used to choose actions in the observational data did not depend on any unobserved information that is predictive of the future potential outcomes.

Assumption 4 (Non-Informative Measurement Times). For all times $t$ and any history $\mathcal{H}_{t^-}$, the following holds: $p^*(y \mid t, z_y = 1)\, dy = P(Y_t \in dy \mid \mathcal{H}_{t^-})$.

Under Assumptions 1, 3, and 4, we can show that Equation 1 is equivalent to the GP used to model $p^*(y \mid t, z_y = 1)$. In the interest of space, the argument for this equivalence is in Section A of the supplement. Note that these assumptions are not statistically testable (see e.g., Pearl 2009).

3 Experiments

Counterfactual models that answer "What if?" questions are more reliable decision support tools than predictive models trained using classical supervised learning objectives because counterfactual models are, by construction, invariant to the policy used to choose actions in the training data. In our first experiments, we use simulated data so that (1) we can control characteristics of the training data, and (2) we have access to the ground truth consequences of our actions on test data for evaluation. In our second experiment, we use data from a real ICU to learn the CGP, and qualitatively demonstrate how its ability to answer "What if?" questions has the potential to give medical decision-makers a powerful new tool for planning treatment.

                       Regime A             Regime B             Regime C
                    Baseline GP   CGP    Baseline GP   CGP    Baseline GP   CGP
Risk score Δ from A       0.000  0.000         0.083  0.001         0.162  0.128
Kendall's τ from A        1.000  1.000         0.857  0.998         0.640  0.562
AUC                       0.853  0.872         0.832  0.872         0.806  0.829

Table 1: Results measuring reliability for simulated data experiments. See Section 3.1 for details.

3.1 Reliable Decision-making with CGPs

We focus on a decision-making problem where the goal is to decide whether or not to treat a patient. The decision-maker (a clinician) should treat a patient if the value of a severity marker is likely to fall below a threshold in the future. A common approach to solving this problem is to build a predictive model of the future outcome, and use the prediction to make the decision (i.e. if the predicted value is below the threshold, then treat). We will use this approach as our baseline. If we could look into the future and see what the marker value would be under different actions, then the best decision would be to treat if the severity marker will fall below the threshold without treatment. It is precisely this type of reasoning that we can perform with the CGP. We simulate the value of a severity marker recorded over a period of 24 hours in the hospital; high values indicate that the patient is healthy.
For the baseline approach, we learn a GP that predicts the future trajectory given the clinical history up until time $t$; i.e. $P(\{Y_s : s > t\} \mid \mathcal{H}_t)$. Using the CGP, we model the counterfactual "What if we do not treat this patient?"; i.e. $P(\{Y_s[\varnothing] : s > t\} \mid \mathcal{H}_t)$. For all experiments, we consider a single decision time $t = 12$ hrs. We create a risk score using the negative of each model's predicted value at the end of 24 hours, normalized to lie in $[0, 1]$.

Data. We simulate data from three regimes. In regimes A and B, we simulate severity marker trajectories that are treated by policies $\pi_A$ and $\pi_B$ respectively, both of which are unknown to the baseline model and the CGP at train time. Both $\pi_A$ and $\pi_B$ are designed to satisfy Assumptions 1, 3, and 4. In regime C, we use a policy that does not satisfy these assumptions. This regime will demonstrate the importance of verifying whether the assumptions hold when applying the CGP. We train both the baseline model and the CGP on data simulated from all three regimes. In all regimes, we test decisions on a common set of trajectories treated up until $t = 12$ hrs with policy $\pi_A$.

Simulator. For each patient, we randomly sample outcome measurement times from a homogeneous Poisson process with constant intensity $\lambda$ over the 24-hour period. Given the measurement times, outcomes are sampled from a mixture of three GPs. The covariance function is shared between all classes, and is defined using a Matérn 3/2 kernel (variance $0.2^2$, lengthscale 8.0) and independent Gaussian noise (scale 0.1) added to each observation. Each class has a distinct mean function parameterized using a 5-dimensional, order-3 B-spline.² The first class has a declining mean trajectory, the second has a trajectory that declines then stabilizes, and the third has a stable trajectory. All classes are equally likely a priori. At each measurement time, the treatment policy $\pi$ determines a probability $p$ of treatment administration (we use only a single treatment type). The treatments increase the severity marker by a constant amount for 2 hours. If two or more actions occur within 2 hours of one another, the effects do not add up (i.e. it is as though only one treatment is active). Additional details about the simulator and policies can be found in the supplement.

Model. For both the baseline predictive model and the CGP outcome model, we use a mixture of three GPs (as was used to simulate the data). We assume that the mean function coefficients, the covariance parameters, and the treatment effect size are unknown and must be learned. We emphasize that both the CGP and the RGP (the baseline model) have identical forms, but are trained using different objectives; the RGP marginalizes over future actions, inducing an implicit dependence on the treatment policy in the training data, while the CGP explicitly controls for them while learning. For both the baseline model and the CGP, we analytically sum over the mixture component likelihoods to obtain a closed form expression for the likelihood, which we optimize using BFGS [Nocedal and Wright, 2006].³ Predictions for both models are made using the posterior predictive mean given data and interventions up until 12 hours.

² The exact B-spline coefficients can be found in the simulation code included in the supplement.
³ Additional details can be found in the supplement.
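The following is a minimal numpy sketch of this simulator. The B-spline class means and the treatment policies are replaced by simple placeholders (a linearly declining mean and a coin-flip policy); those placeholders are our assumptions, not the paper's exact choices.

```python
import numpy as np

def matern32(t1, t2, var=0.2**2, length=8.0):
    """Matern 3/2 kernel used for the severity-marker GP."""
    d = np.abs(t1[:, None] - t2[None, :])
    s = np.sqrt(3.0) * d / length
    return var * (1.0 + s) * np.exp(-s)

def simulate_trace(rng, tau=24.0, rate=1.0, p_treat=0.3, effect=0.5):
    # 1. Measurement times from a homogeneous Poisson process with rate `rate`
    #    (forced to at least one measurement for simplicity).
    n = max(1, rng.poisson(rate * tau))
    times = np.sort(rng.uniform(0.0, tau, size=n))
    # 2. Latent severity from a GP; a declining placeholder mean stands in
    #    for the paper's order-3 B-spline class means (assumption).
    mean = -0.05 * times
    K = matern32(times, times) + 0.1**2 * np.eye(n)   # plus observation noise
    y = rng.multivariate_normal(mean, K)
    # 3. At each measurement, treat with probability p_treat (placeholder
    #    policy); a treatment raises the marker by `effect` for 2 hours, and
    #    overlapping treatments do not stack (boolean OR of active windows).
    treated = rng.uniform(size=n) < p_treat
    active = np.zeros(n, dtype=bool)
    for i in np.flatnonzero(treated):
        active |= (times >= times[i]) & (times < times[i] + 2.0)
    return times, y + effect * active, treated

rng = np.random.default_rng(0)
times, marker, treated = simulate_trace(rng)
```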
[Figure 2 shows four panels plotting creatinine (y-axis) against hours since ICU admission (x-axis).]

Figure 2: Example factual (grey) and counterfactual (blue) predictions on real ICU data using the CGP.

Results. When trained on data where actions were taken according to different policies, the baseline model produces different risk scores for the same patient. In Table 1, the first row shows the average difference in risk scores (calibrated to lie in $[0, 1]$) produced by the models trained in each regime and produced by the models trained in regime A. In row 1, column B we see that the baseline GP's risk scores differ for the same person on average by around eight points ($\Delta = 0.083$). From the perspective of a decision-maker, this behavior could make the system appear less reliable. Intuitively, the risk for a given patient should not depend on the policy used to determine treatments in retrospective data. On the other hand, the CGP's scores change very little when trained on different regimes ($\Delta = 0.001$), as long as Assumptions 1, 3, and 4 are satisfied. Note, however, that the scores do change for the CGP in row 1, column C, where the policy $\pi_C$ does not satisfy these assumptions ($\Delta = 0.128$). Although we illustrate stability of the CGP compared to the baseline GP using two regimes, this property is not specific to the choice of policies used in regimes A and B. Rather, the issue persists as we generate different training data by varying the distribution over the action choices.

A cynical reader might ask: even if the risk scores are unstable, perhaps it has no consequences on the downstream decision-making task? In the second row of Table 1, we report Kendall's $\tau$ computed between each regime and regime A using the risk scores to rank the patients in the test data according to severity (i.e. scores closer to 1 are more severe). In the third row, we report the AUC for both models trained in each regime on the common test set. We label a patient as "at risk" if the last marker value in the untreated trajectory is below zero, and "not at risk" otherwise. In row 2, column B we see that the CGP has a high rank correlation ($\tau = 0.998$) between the two regimes where the policies satisfy our key assumptions. The baseline GP model trained on regime B, however, has a lower rank correlation of $\tau = 0.857$ with the risk scores produced by the same model trained on regime A. Similarly, in row three, columns A and B, we see that the CGP's AUC is unchanged (AUC = 0.872). The baseline GP, however, is unstable and creates a risk score with poorer discrimination in regime B (AUC = 0.832) than in regime A (AUC = 0.853). Finally, we see that in column C, where the policy $\pi_C$ does not satisfy our key assumptions, the CGP's rank correlation degrades ($\tau = 0.562$), and the AUC degrades to 0.829 (note that the baseline GP's rank correlation and AUC also degrade). This further emphasizes the importance of verifying Assumptions 1, 3, and 4 when using the CGP.

These results have broad consequences for the practice of building predictive models from observational data for decision support. Classical supervised learning is commonly used to build predictive models for assessing risk from non-experimental data. Our experiments show that the predictions these models make, and the decisions that they may lead to, are highly dependent on action policies in the training data. Intuitively, this is troubling because we do not expect that past action policies should have any impact on the assessment of risk. As predictive models are becoming more widely used in domains like health care (e.g., Li-wei et al. 2015, Schulam and Saria 2015, Alaa et al. 2016, Wiens et al. 2016, Cheng et al. 2017) where safety is critical, the framework proposed here is increasingly pertinent.
Others have noted this issue and studied the impact that action policies in the training data have on models fit using supervised learning objectives (e.g., Dyagilev and Saria 2016). Counterfactual GPs (and counterfactual models more broadly) make predictions that are independent of policies in the training data, and offer a new, more reliable way to train predictive models for decision support.

3.2 CGPs for Medical Decision Support

CGPs offer a powerful new tool for decision-makers to evaluate different actions using data-driven models. In health care, for instance, we can move closer towards the vision of evidence-based medicine by allowing clinicians to answer "What if?" questions at the point of care with predictions tailored to the patient's clinical history. When treatments are expensive or have potent side effects, the CGP can estimate the effects, which can be weighed against the cost of administration.

To demonstrate how the CGP can be used as a decision support tool, we extract observational creatinine traces from the publicly available MIMIC-II database [Saeed et al., 2011]. Creatinine is a compound produced as a by-product of the chemical reaction in the body that breaks down creatine to fuel muscles. Healthy kidneys normally filter creatinine out of the body, which can otherwise be toxic in large concentrations. During kidney failure, however, creatinine levels rise and the compound must be extracted using a medical procedure called dialysis. We extract patients in the database who tested positive for abnormal creatinine levels, which is a sign of kidney failure. We also extract the times at which three different types of dialysis were given to each individual: intermittent hemodialysis (IHD), continuous veno-venous hemofiltration (CVVH), and continuous veno-venous hemodialysis (CVVHD). The data set includes a total of 428 individuals, with an average of 34 (±12) creatinine observations each. We shuffle the data and use 300 traces for training, 50 for validation and model selection, and 78 for testing.

Model. We parameterize the outcome model of the CGP using a mixture of GPs. We always condition on the initial creatinine measurement and model the deviation from that initial value. The mean for each class is zero (i.e. there is no deviation from the initial value on average). We parameterize the covariance function using the sum of two non-stationary kernel functions. Let $\phi : t \mapsto [1, t, t^2]^\top \in \mathbb{R}^3$ denote the quadratic polynomial basis; then the first kernel is $k_1(t_1, t_2) = \phi^\top(t_1)\, \Sigma\, \phi(t_2)$, where $\Sigma \in \mathbb{R}^{3 \times 3}$ is a positive-definite symmetric matrix parameterizing the kernel. The second kernel is the covariance function of the integrated Ornstein-Uhlenbeck (IOU) process (see e.g., Taylor et al. 1994), which is parameterized by two scalars $\alpha$ and $\sigma$ and defined as

$k_{\mathrm{IOU}}(t_1, t_2) = \frac{\sigma^2}{2\alpha^3}\left( 2\alpha \min(t_1, t_2) + e^{-\alpha t_1} + e^{-\alpha t_2} - 1 - e^{-\alpha |t_1 - t_2|} \right).$

The IOU covariance corresponds to the random trajectory of a particle whose velocity drifts according to an OU process. We assume that each creatinine measurement is observed with independent Gaussian noise with scale $\sigma_n$. Each class in the mixture has a unique set of covariance parameters.

To model the treatment effects in the outcome model, we define a short-term response function and a long-term response function. If an action is taken at time $t_0$, the outcome $\delta = t - t_0$ hours later will be additively affected by the response function $g(\delta; h_1, a, b, h_2, r) = g_s(\delta; h_1, a, b) + g_\ell(\delta; h_2, r)$, where $h_1, h_2 \in \mathbb{R}$ and $a, b, r \in \mathbb{R}^+$. The short-term and long-term response functions are defined as $g_s(\delta; h_1, a, b) = \frac{h_1 a}{a - b}\left( e^{-b\delta} - e^{-a\delta} \right)$ and $g_\ell(\delta; h_2, r) = h_2 \cdot (1 - e^{-r\delta})$. The two response functions are included in the mean function of the GP, and each class in the mixture has a unique set of response function parameters.
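A minimal numpy sketch of this covariance and response-function parameterization follows; the gating to $\delta \geq 0$ (responses apply only after the action) and any parameter values are our assumptions for illustration.

```python
import numpy as np

def k_quadratic(t1, t2, Sigma):
    """k1: quadratic-polynomial kernel, phi(t1)^T Sigma phi(t2)."""
    phi = lambda t: np.stack([np.ones_like(t), t, t**2], axis=-1)
    return phi(t1) @ Sigma @ phi(t2).T

def k_iou(t1, t2, alpha, sigma):
    """Integrated Ornstein-Uhlenbeck covariance (Taylor et al., 1994)."""
    t1, t2 = np.asarray(t1)[:, None], np.asarray(t2)[None, :]
    return sigma**2 / (2 * alpha**3) * (
        2 * alpha * np.minimum(t1, t2)
        + np.exp(-alpha * t1) + np.exp(-alpha * t2)
        - 1 - np.exp(-alpha * np.abs(t1 - t2)))

def g_short(delta, h1, a, b):
    """Short-term response g_s; zero before the action (delta < 0), a != b."""
    return (delta >= 0) * (h1 * a / (a - b)) * (np.exp(-b * delta) - np.exp(-a * delta))

def g_long(delta, h2, r):
    """Long-term response g_l: saturating drift toward the offset h2."""
    return (delta >= 0) * h2 * (1.0 - np.exp(-r * delta))
```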
We assume that Assumptions 1, 3, and 4 hold, and that the event and action models have separate parameters, so they can remain unspecified when estimating the outcome model. We fit the CGP outcome model using Equation 3, and select the number of classes in the mixture using fit on the validation data (we choose three components).

Results. Figure 2 demonstrates how the CGP can be used for medical decision support. Each panel in the figure shows data for an individual drawn from the test set. The green points show measurements on which we condition to obtain a posterior distribution over mixture class membership and the individual's latent trajectory under each class. The red points are unobserved, future measurements. In grey, we show predictions under the factual sequence of actions extracted from the MIMIC-II database. Treatment times are shown using vertical bars marked with an "x" (color indicates which type of treatment was given). In blue, we show the CGP's counterfactual predictions under an alternative sequence of actions. The posterior predictive trajectory is shown for the MAP mixture class (the mean is shown by a solid grey/blue line; 95% credible intervals are shaded).

We qualitatively discuss the CGP's counterfactual predictions, but cannot quantitatively evaluate them without prospective experimental data from the ICU. We can, however, measure fit on the factual data and compare to baselines to evaluate our modeling decisions. Our CGP's outcome model allows for heterogeneity in the covariance parameters and the response functions. We compare this choice to two alternatives. The first is a mixture of three GPs that does not model treatment effects. The second is a single GP that does model treatment effects. Over a 24-hour horizon, the CGP's mean absolute error (MAE) is 0.39 (95% CI: 0.38-0.40),⁴ and for predictions between 24 and 48 hours in the future the MAE is 0.62 (95% CI: 0.60-0.64). The pairwise mean difference between the first baseline's absolute errors and the CGP's is 0.07 (0.06, 0.08) for 24 hours, and 0.09 (0.08, 0.10) for 24-48 hours. The mean difference between the second baseline's absolute errors and the CGP's is 0.04 (0.04, 0.05) for 24 hours and 0.03 (0.02, 0.04) for 24-48 hours. The improvements over the baselines suggest that modeling treatments and heterogeneity with a mixture of GPs for the outcome model are useful for this problem.

⁴ 95% confidence intervals computed using the pivotal bootstrap are shown in parentheses.

Figure 2 shows factual and counterfactual predictions made by the CGP. In the first (left-most) panel, the patient is factually administered IHD about once a day, and is responsive to the treatment (creatinine steadily improves). We query the CGP to estimate how the individual would have responded had the IHD treatment been stopped early. The model reasonably predicts that we would have seen no further improvement in creatinine. In the third panel, an individual with erratic creatinine levels receives CVVHD for the last 100 hours and is responsive to the treatment. As before, the CGP counterfactually predicts that she would not have improved had CVVHD not been given.
Interestingly, panel four shows the opposite situation: the individual did not receive treatment and did not improve for the last 100 hours, but the CGP counterfactually predicts an improvement in creatinine similar to that in panel 3 if daily CVVHD had been administered.

4 Discussion

Our key message is that predictive models used in decision-making problems should be trained using counterfactual objectives (like the one shown in Equation 3). One reason this approach should be preferred is that the models produced are stable to information in the training data that is irrelevant to the downstream decision-making task (the action policies in the training data). We studied this idea in the context of problems where outcomes are measured and actions are taken at discrete points in continuous time, and proposed the counterfactual Gaussian process (CGP) to model the effects of sequences of actions taken on continuous-time trajectories (time series). The CGP builds on previous ideas in continuous-time causal inference (e.g., Robins 1997, Arjas and Parner 2004, Lok 2008), but is unique in that it can predict the full counterfactual trajectory; we combined marked point processes (MPPs) with GPs to model observational traces, and described three assumptions that are sufficient to connect the statistical model to the target counterfactuals.

We presented two sets of experimental results. In the first, we used simulated data to show that risk score models fit using classical supervised learning objectives are sensitive to the action policies used in the training data; information that is irrelevant to the downstream decision problem. We showed that this sensitivity can alter risk assessments for the same individual, change relative risk assessments across individuals, and can cause differences in AUC. On the other hand, the CGP is not sensitive to the action policies in the training data, as long as they satisfy Assumptions 1, 3, and 4. In the second set of experiments, we demonstrated how the CGP offers a powerful new tool for medical decision support by learning the effects of dialysis on creatinine trajectories from real ICU data and demonstrating the types of "What if?" questions that it can be used to answer about patient prognosis under various treatment plans.

These results suggest a number of new questions and directions for future work. First, the validity of the CGP is conditioned upon a set of assumptions (this is true for all counterfactual models). In general, these assumptions are not testable. The reliability of approaches using counterfactual models therefore critically depends on the plausibility of those assumptions in light of domain knowledge. Formal procedures, such as sensitivity analyses (e.g., Robins et al. 2000, Scharfstein et al. 2014), that can identify when causal assumptions conflict with a data set will help to make these methods more easily applied in practice. In addition, there may be other sets of structural assumptions beyond those presented that allow us to learn counterfactual GPs from non-experimental data. For instance, the back door and front door criteria are two separate sets of structural assumptions discussed by Pearl [2009] in the context of estimating parameters of causal Bayesian networks from observational data. More broadly, there are implications for recent pushes to introduce transparency, interpretability, and accountability into machine learning systems embedded in decision-making processes.
We have characterized a notion of model stability relative to information in the training data that is not relevant to its downstream application, and showed that models fit using counterfactual objectives achieve stability. The framework can be further extended to incorporate recent ideas on model interpretability and accountability in the context of supervised learning objectives (e.g., Caruana et al. 2015, Ribeiro et al. 2016).

Acknowledgements

We thank the anonymous reviewers for their insightful feedback. This work was supported by generous funding from DARPA YFA #D17AP00014 and NSF SCH #1418590. PS was also supported by an NSF Graduate Research Fellowship. We thank Katie Henry and Andong Zhan for help with the ICU data set.

References

A.M. Alaa, J. Yoon, S. Hu, and M. van der Schaar. Personalized risk scoring for critical care patients using mixtures of Gaussian process experts. In ICML Workshop on Computational Frameworks for Personalization, 2016.

E. Arjas and J. Parner. Causal reasoning from longitudinal data. Scandinavian Journal of Statistics, 31(2):171–187, 2004.

L. Bottou, J. Peters, J.Q. Candela, D.X. Charles, M. Chickering, E. Portugaly, D. Ray, P.Y. Simard, and E. Snelson. Counterfactual reasoning and learning systems: the example of computational advertising. Journal of Machine Learning Research (JMLR), 14(1):3207–3260, 2013.

K.H. Brodersen, F. Gallusser, J. Koehler, N. Remy, and S.L. Scott. Inferring causal impact using Bayesian structural time-series models. The Annals of Applied Statistics, 9(1):247–274, 2015.

R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. Elhadad. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 1721–1730. ACM, 2015.

L.F. Cheng, G. Darnell, C. Chivers, M.E. Draugelis, K. Li, and B.E. Engelhardt. Sparse multi-output Gaussian processes for medical time series prediction. arXiv preprint arXiv:1703.09112, 2017.

J. Cunningham, Z. Ghahramani, and C.E. Rasmussen. Gaussian processes for time-marked time-series data. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 255–263, 2012.

D.J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer Science & Business Media, 2007.

M. Dudík, J. Langford, and L. Li. Doubly robust policy evaluation and learning. In International Conference on Machine Learning (ICML), 2011.

K. Dyagilev and S. Saria. Learning (predictive) risk scores in the presence of censoring due to interventions. Machine Learning, 102(3):323–348, 2016.

A.G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, pages 83–90, 1971.

N. Jiang and L. Li. Doubly robust off-policy value evaluation for reinforcement learning. In International Conference on Machine Learning (ICML), pages 652–661, 2016.

F.D. Johansson, U. Shalit, and D. Sontag. Learning representations for counterfactual inference. In International Conference on Machine Learning (ICML), 2016.

H.L. Li-wei, R.P. Adams, L. Mayaud, G.B. Moody, A. Malhotra, R.G. Mark, and S. Nemati. A physiological time series dynamics-based approach to patient monitoring and outcome prediction. IEEE Journal of Biomedical and Health Informatics, 19(3):1068–1076, 2015.

J.J. Lok. Statistical modeling of causal effects in continuous time. The Annals of Statistics, pages 1464–1507, 2008.

J.M. Mooij, D. Janzing, and B. Schölkopf.
From ordinary differential equations to structural causal models: the deterministic case. 2013.

S.L. Morgan and C. Winship. Counterfactuals and causal inference. Cambridge University Press, 2014.

S.A. Murphy. Optimal dynamic treatment regimes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(2):331–355, 2003.

I. Nahum-Shani, M. Qian, D. Almirall, W.E. Pelham, B. Gnagy, G.A. Fabiano, J.G. Waxmonsky, J. Yu, and S.A. Murphy. Q-learning: A data analysis method for constructing adaptive interventions. Psychological Methods, 17(4):478, 2012.

J. Neyman. Sur les applications de la théorie des probabilités aux expériences agricoles: Essai des principes. Roczniki Nauk Rolniczych, 10:1–51, 1923.

J. Neyman. On the application of probability theory to agricultural experiments. Statistical Science, 5(4):465–472, 1990.

A.Y. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang. Autonomous inverted helicopter flight via reinforcement learning. In Experimental Robotics IX, pages 363–372. Springer, 2006.

J. Nocedal and S.J. Wright. Numerical Optimization, 2nd edition, 2006.

C. Păduraru, D. Precup, J. Pineau, and G. Comănici. An empirical analysis of off-policy learning in discrete MDPs. In Workshop on Reinforcement Learning, page 89, 2012.

J. Pearl. Causality: models, reasoning and inference. Cambridge University Press, 2009.

C.E. Rasmussen and C.K.I. Williams. Gaussian processes for machine learning. The MIT Press, 2006.

M.T. Ribeiro, S. Singh, and C. Guestrin. Why should I trust you?: Explaining the predictions of any classifier. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 1135–1144. ACM, 2016.

J.M. Robins. Estimation of the time-dependent accelerated failure time model in the presence of confounding factors. Biometrika, 79(2):321–334, 1992.

J.M. Robins. Causal inference from complex longitudinal data. In Latent variable modeling and applications to causality, pages 69–117. Springer, 1997.

J.M. Robins, A. Rotnitzky, and D.O. Scharfstein. Sensitivity analysis for selection bias and unmeasured confounding in missing data and causal inference models. In Statistical models in epidemiology, the environment, and clinical trials, pages 1–94. Springer, 2000.

D.B. Rubin. Bayesian inference for causal effects: The role of randomization. The Annals of Statistics, pages 34–58, 1978.

M. Saeed, M. Villarroel, A.T. Reisner, G. Clifford, L.W. Lehman, G. Moody, T. Heldt, T.H. Kyaw, B. Moody, and R.G. Mark. Multiparameter intelligent monitoring in intensive care II (MIMIC-II): a public-access intensive care unit database. Critical Care Medicine, 39(5):952, 2011.

D. Scharfstein, A. McDermott, W. Olson, and F. Wiegand. Global sensitivity analysis for repeated measures studies with informative dropout: A fully parametric approach. Statistics in Biopharmaceutical Research, 6(4):338–348, 2014.

P. Schulam and S. Saria. A framework for individualizing predictions of disease trajectories by exploiting multi-resolution structure. In Advances in Neural Information Processing Systems (NIPS), pages 748–756, 2015.

A. Sokol and N.R. Hansen. Causal interpretation of stochastic differential equations. Electronic Journal of Probability, 19(100):1–24, 2014.

R.S. Sutton and A.G. Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.

S.L. Taubman, J.M. Robins, M.A. Mittleman, and M.A. Hernán. Intervening on risk factors for coronary heart disease: an application of the parametric g-formula.
International Journal of Epidemiology, 38(6):1599–1611, 2009.

J. Taylor, W. Cumberland, and J. Sy. A stochastic model for analysis of longitudinal AIDS data. Journal of the American Statistical Association, 89(427):727–736, 1994.

J. Wiens, J. Guttag, and E. Horvitz. Patient risk stratification with time-varying parameters: a multitask learning approach. Journal of Machine Learning Research (JMLR), 17(209):1–23, 2016.

Y. Xu, Y. Xu, and S. Saria. A Bayesian nonparametric approach for estimating individualized treatment-response curves. In Machine Learning for Healthcare Conference (MLHC), pages 282–300, 2016.
QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding

Dan Alistarh, IST Austria & ETH Zurich, dan.alistarh@ist.ac.at
Demjan Grubic, ETH Zurich & Google, demjangrubic@gmail.com
Ryota Tomioka, Microsoft Research, ryoto@microsoft.com
Jerry Z. Li, MIT, jerryzli@mit.edu
Milan Vojnovic, London School of Economics, m.vojnovic@lse.ac.uk

Abstract

Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to its excellent scalability properties. A fundamental barrier when parallelizing SGD is the high bandwidth cost of communicating gradient updates between nodes; consequently, several lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always converge. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes with convergence guarantees and good practical performance. QSGD allows the user to smoothly trade off communication bandwidth and convergence time: nodes can adjust the number of bits sent per iteration, at the cost of possibly higher variance. We show that this trade-off is inherent, in the sense that improving it past some threshold would violate information-theoretic lower bounds. QSGD guarantees convergence for convex and non-convex objectives, under asynchrony, and can be extended to stochastic variance-reduced techniques. When applied to training deep neural networks for image classification and automated speech recognition, QSGD leads to significant reductions in end-to-end training time. For instance, on 16 GPUs, we can train the ResNet-152 network to full accuracy on ImageNet 1.8× faster than the full-precision variant.

1 Introduction

The surge of massive data has led to significant interest in distributed algorithms for scaling computations in the context of machine learning and optimization. In this context, much attention has been devoted to scaling large-scale stochastic gradient descent (SGD) algorithms [33], which can be briefly defined as follows. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a function which we want to minimize. We have access to stochastic gradients $\tilde g$ such that $\mathbb{E}[\tilde g(x)] = \nabla f(x)$. A standard instance of SGD will converge towards the minimum by iterating the procedure

$x_{t+1} = x_t - \eta_t \tilde g(x_t),$   (1)

where $x_t$ is the current candidate and $\eta_t$ is a variable step-size parameter. Notably, this arises if we are given i.i.d. data points $X_1, \ldots, X_m$ generated from an unknown distribution $D$, and a loss function $\ell(X, \theta)$, which measures the loss of the model $\theta$ at data point $X$. We wish to find a model $\theta^*$ which minimizes $f(\theta) = \mathbb{E}_{X \sim D}[\ell(X, \theta)]$, the expected loss on the data. This framework captures many fundamental tasks, such as neural network training.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this paper, we focus on parallel SGD methods, which have received considerable attention recently due to their high scalability [6, 8, 32, 13]. Specifically, we consider a setting where a large dataset is partitioned among K processors, which collectively minimize a function f. Each processor maintains a local copy of the parameter vector $x_t$; in each iteration, it obtains a new stochastic gradient update (corresponding to its local data). Processors then broadcast their gradient updates to their peers, and aggregate the gradients to compute the new iterate $x_{t+1}$.
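To fix ideas, here is a toy numpy sketch of the aggregated update $x_{t+1} = x_t - (\eta_t / K) \sum_{\ell} \tilde g^{\ell}(x_t)$ on a simple least-squares objective; it illustrates the update rule only, and is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, eta, T = 10, 4, 0.1, 500
x_star = rng.normal(size=n)          # minimizer of f(x) = 0.5 * ||x - x*||^2

def stochastic_grad(x):
    """Unbiased stochastic gradient: the true gradient plus zero-mean noise."""
    return (x - x_star) + rng.normal(scale=0.5, size=n)

x = np.zeros(n)
for t in range(T):
    grads = [stochastic_grad(x) for _ in range(K)]  # one gradient per processor
    x = x - (eta / K) * np.sum(grads, axis=0)       # aggregated update
print(np.linalg.norm(x - x_star))                   # should be small
```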
In most current implementations of parallel SGD, in each iteration, each processor must communicate its entire gradient update to all other processors. If the gradient vector is dense, each processor will need to send and receive n floating-point numbers per iteration to/from each peer to communicate the gradients and maintain the parameter vector x. In practical applications, communicating the gradients in each iteration has been observed to be a significant performance bottleneck [35, 37, 8]. One popular way to reduce this cost has been to perform lossy compression of the gradients [11, 1, 3, 10, 41]. A simple implementation is to reduce the precision of the representation, which has been shown to converge under convexity and sparsity assumptions [10]. A more drastic quantization technique is 1BitSGD [35, 37], which reduces each component of the gradient to just its sign (one bit), scaled by the average over the coordinates of $\tilde g$, accumulating errors locally. 1BitSGD was experimentally observed to preserve convergence [35], under certain conditions; thanks to the reduction in communication, it enabled state-of-the-art scaling of deep neural networks (DNNs) for acoustic modelling [37]. However, it is currently not known if 1BitSGD provides any guarantees, even under strong assumptions, and it is not clear if higher compression is achievable.

Contributions. Our focus is understanding the trade-offs between the communication cost of data-parallel SGD and its convergence guarantees. We propose a family of algorithms allowing for lossy compression of gradients called Quantized SGD (QSGD), by which processors can trade off the number of bits communicated per iteration against the variance added to the process.

QSGD is built on two algorithmic ideas. The first is an intuitive stochastic quantization scheme: given the gradient vector at a processor, we quantize each component by randomized rounding to a discrete set of values, in a principled way which preserves the statistical properties of the original. The second step is an efficient lossless code for quantized gradients, which exploits their statistical properties to generate efficient encodings. Our analysis gives tight bounds on the precision-variance trade-off induced by QSGD. At one extreme of this trade-off, we can guarantee that each processor transmits at most $\sqrt{n}(\log n + O(1))$ expected bits per iteration, while increasing variance by at most a $\sqrt{n}$ multiplicative factor. At the other extreme, we show that each processor can transmit $\leq 2.8n + 32$ bits per iteration in expectation, while increasing variance by only a factor of 2. In particular, in the latter regime, compared to full-precision SGD, we use $\approx 2.8n$ bits of communication per iteration as opposed to $32n$ bits, and guarantee at most 2× more iterations, leading to bandwidth savings of $\approx 5.7\times$.

QSGD is fairly general: it can also be shown to converge, under assumptions, to local minima for non-convex objectives, as well as under asynchronous iterations. One non-trivial extension we develop is a stochastic variance-reduced [23] variant of QSGD, called QSVRG, which has an exponential convergence rate.

One key question is whether QSGD's compression-variance trade-off is inherent: for instance, does any algorithm guaranteeing at most constant variance blowup need to transmit $\Omega(n)$ bits per iteration? The answer is positive: improving asymptotically upon this trade-off would break the communication complexity lower bound of distributed mean estimation (see [44, Proposition 2] and [38]).
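The precise quantization function is defined later in the paper; as a preview, the following numpy sketch implements the natural instantiation of the scheme just described, randomly rounding each normalized coordinate $|v_i| / \|v\|_2$ to one of $s + 1$ levels so that the quantized vector is unbiased.

```python
import numpy as np

def qsgd_quantize(v, s, rng):
    """Stochastic quantization Q_s(v): unbiased randomized rounding of each
    normalized coordinate |v_i|/||v||_2 to the grid {0, 1/s, ..., 1}."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    scaled = s * np.abs(v) / norm            # each entry lies in [0, s]
    lower = np.floor(scaled)                 # lower grid level l
    p_up = scaled - lower                    # round up w.p. s|v_i|/||v|| - l
    levels = lower + (rng.uniform(size=v.shape) < p_up)
    return norm * np.sign(v) * levels / s    # so that E[Q_s(v)] = v

rng = np.random.default_rng(0)
v = rng.normal(size=5)
# Averaging many quantized copies recovers v, confirming unbiasedness:
print(np.mean([qsgd_quantize(v, s=4, rng=rng) for _ in range(20000)], axis=0))
```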
Experiments. The crucial question is whether, in practice, QSGD can reduce communication cost by enough to offset the overhead of any additional iterations to convergence. The answer is yes. We explore the practicality of QSGD on a variety of state-of-the-art datasets and machine learning models: we examine its performance in training networks for image classification tasks (AlexNet, Inception, ResNet, and VGG) on the ImageNet [12] and CIFAR-10 [25] datasets, as well as on LSTMs [19] for speech recognition. We implement QSGD in Microsoft CNTK [3].

Experiments show that all these models can significantly benefit from reduced communication when doing multi-GPU training, with virtually no accuracy loss, and under standard parameters. For example, when training AlexNet on 16 GPUs with standard parameters, the reduction in communication time is 4×, and the reduction in training time to the network's top accuracy is 2.5×. When training an LSTM on two GPUs, the reduction in communication time is 6.8×, while the reduction in training time to the same target accuracy is 2.7×. Further, even computationally-heavy architectures such as Inception and ResNet can benefit from the reduction in communication: on 16 GPUs, QSGD reduces the end-to-end convergence time of ResNet-152 by approximately 2×. Networks trained with QSGD can converge to virtually the same accuracy as full-precision variants, and gradient quantization may even slightly improve accuracy in some settings.

Related Work. One line of related research studies the communication complexity of convex optimization. In particular, [40] studied two-processor convex minimization in the same model, provided a lower bound of $\Omega(n(\log n + \log(1/\epsilon)))$ bits on the communication cost of n-dimensional convex problems, and proposed a non-stochastic algorithm for strongly convex problems, whose communication cost is within a log factor of the lower bound. By contrast, our focus is on stochastic gradient methods. Recent work [5] focused on lower bounds on the number of communication rounds necessary for convex learning.

Buckwild! [10] was the first to consider the convergence guarantees of low-precision SGD. It gave upper bounds on the error probability of SGD, assuming unbiased stochastic quantization, convexity, and gradient sparsity, and showed significant speedup when solving convex problems on CPUs. QSGD refines these results by focusing on the trade-off between communication and convergence. We view quantization as an independent source of variance for SGD, which allows us to employ standard convergence results [7]. The main differences from Buckwild! are that 1) we focus on the variance-precision trade-off; 2) our results apply to the quantized non-convex case; 3) we validate the practicality of our scheme on neural network training on GPUs.

Concurrent work proposes TernGrad [41], which starts from a similar stochastic quantization, but focuses on the case where individual gradient components can have only three possible values. They show that significant speedups can be achieved on TensorFlow [1], while maintaining accuracy within a few percentage points relative to full precision. The main differences to our work are: 1) our implementation guarantees convergence under standard assumptions; 2) we strive to provide a black-box compression technique, with no additional hyperparameters to tune; 3) experimentally, QSGD maintains the same accuracy within the same target number of epochs; for this, we allow gradients to have larger bit width; 4) our experiments focus on the single-machine multi-GPU case.

We note that QSGD can be applied to solve the distributed mean estimation problem [38, 24] with an optimal error-communication trade-off in some regimes. In contrast to the elegant random rotation solution presented in [38], QSGD employs quantization and Elias coding. Our use case is different from the federated learning application of [38, 24], and has the advantage of being more efficient to compute on a GPU.

There is an extremely rich area studying algorithms and systems for efficient distributed large-scale learning, e.g. [6, 11, 1, 3, 39, 32, 10, 21, 43]. Significant interest has recently been dedicated to quantized frameworks, both for inference, e.g., [1, 17], and training [45, 35, 20, 37, 16, 10, 42]. In this context, [35] proposed 1BitSGD, a heuristic for compressing gradients in SGD, inspired by delta-sigma modulation [34]. It is implemented in Microsoft CNTK, and has a cost of n bits and two floats per iteration. Variants of it were shown to perform well on large-scale Amazon datasets by [37]. Compared to 1BitSGD, QSGD can achieve asymptotically higher compression, provably converges under standard assumptions, and shows superior practical performance in some cases.

2 Preliminaries

SGD has many variants, with different preconditions and guarantees. Our techniques are rather portable, and can usually be applied in a black-box fashion on top of SGD. For conciseness, we will focus on a basic SGD setup. The following assumptions are standard; see e.g. [7].

Let $\mathcal{X} \subseteq \mathbb{R}^n$ be a known convex set, and let $f : \mathcal{X} \to \mathbb{R}$ be differentiable, convex, smooth, and unknown. We assume repeated access to stochastic gradients of f, which on (possibly random) input x output a direction which is in expectation the correct direction to move in. Formally:

Definition 2.1. Fix $f : \mathcal{X} \to \mathbb{R}$. A stochastic gradient for f is a random function $\tilde g(x)$ such that $\mathbb{E}[\tilde g(x)] = \nabla f(x)$. We say the stochastic gradient has second moment at most B if $\mathbb{E}[\|\tilde g(x)\|_2^2] \leq B$ for all $x \in \mathcal{X}$. We say it has variance at most $\sigma^2$ if $\mathbb{E}[\|\tilde g(x) - \nabla f(x)\|_2^2] \leq \sigma^2$ for all $x \in \mathcal{X}$.

Observe that any stochastic gradient with second moment bound B is automatically also a stochastic gradient with variance bound $\sigma^2 = B$, since $\mathbb{E}[\|\tilde g(x) - \nabla f(x)\|_2^2] \leq \mathbb{E}[\|\tilde g(x)\|_2^2]$ as long as $\mathbb{E}[\tilde g(x)] = \nabla f(x)$. Second, in convex optimization, one often assumes a second moment bound
The main differences from our work are: 1) our implementation guarantees convergence under standard assumptions; 2) we strive to provide a black-box compression technique, with no additional hyperparameters to tune; 3) experimentally, QSGD maintains the same accuracy within the same target number of epochs; for this, we allow gradients to have larger bit width; 4) our experiments focus on the single-machine multi-GPU case. We note that QSGD can be applied to solve the distributed mean estimation problem [38, 24] with an optimal error-communication trade-off in some regimes. In contrast to the elegant random rotation solution presented in [38], QSGD employs quantization and Elias coding. Our use case is different from the federated learning application of [38, 24], and has the advantage of being more efficient to compute on a GPU. There is an extremely rich area studying algorithms and systems for efficient distributed large-scale learning, e.g. [6, 11, 1, 3, 39, 32, 10, 21, 43]. Significant interest has recently been dedicated to quantized frameworks, both for inference, e.g. [1, 17], and training [45, 35, 20, 37, 16, 10, 42]. In this context, [35] proposed 1BitSGD, a heuristic for compressing gradients in SGD, inspired by delta-sigma modulation [34]. It is implemented in Microsoft CNTK, and has a cost of $n$ bits and two floats per iteration. Variants of it were shown to perform well on large-scale Amazon datasets by [37]. Compared to 1BitSGD, QSGD can achieve asymptotically higher compression, provably converges under standard assumptions, and shows superior practical performance in some cases.

2 Preliminaries

SGD has many variants, with different preconditions and guarantees. Our techniques are rather portable, and can usually be applied in a black-box fashion on top of SGD. For conciseness, we will focus on a basic SGD setup. The following assumptions are standard; see e.g. [7]. Let $X \subseteq \mathbb{R}^n$ be a known convex set, and let $f : X \to \mathbb{R}$ be differentiable, convex, smooth, and unknown. We assume repeated access to stochastic gradients of $f$, which on (possibly random) input $x$ output a direction which is in expectation the correct direction to move in. Formally:

Definition 2.1. Fix $f : X \to \mathbb{R}$. A stochastic gradient for $f$ is a random function $\tilde g(x)$ so that $E[\tilde g(x)] = \nabla f(x)$. We say the stochastic gradient has second moment at most $B$ if $E[\|\tilde g(x)\|_2^2] \le B$ for all $x \in X$. We say it has variance at most $\sigma^2$ if $E[\|\tilde g(x) - \nabla f(x)\|_2^2] \le \sigma^2$ for all $x \in X$.

Observe that any stochastic gradient with second moment bound $B$ is automatically also a stochastic gradient with variance bound $\sigma^2 = B$, since $E[\|\tilde g(x) - \nabla f(x)\|_2^2] \le E[\|\tilde g(x)\|_2^2]$ as long as $E[\tilde g(x)] = \nabla f(x)$. Second, in convex optimization, one often assumes a second moment bound when dealing with non-smooth convex optimization, and a variance bound when dealing with smooth convex optimization. However, for us it will be convenient to consistently assume a second moment bound. This does not seem to be a major distinction in theory or in practice [7].

Algorithm 1: Parallel SGD Algorithm.
Data: local copy of the parameter vector $x$
1  for each iteration $t$ do
2    let $\tilde g_t^i$ be an independent stochastic gradient;
3    $M^i \leftarrow \mathrm{Encode}(\tilde g^i(x))$  // encode gradients
4    broadcast $M^i$ to all peers;
5    for each peer $\ell$ do
6      receive $M^\ell$ from peer $\ell$;
7      $\hat g^\ell \leftarrow \mathrm{Decode}(M^\ell)$  // decode gradients
8    end
9    $x_{t+1} \leftarrow x_t - (\eta_t/K) \sum_{\ell=1}^{K} \hat g^\ell$
10 end

[Figure 1: An illustration of generalized stochastic quantization with 5 levels.]
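To fix ideas, the following is a minimal single-process simulation of Algorithm 1 with pluggable encode/decode hooks. Identity hooks recover plain parallel SGD, and swapping in a lossy quantizer yields the QSGD pattern; all names are ours, and a real implementation would broadcast over MPI rather than iterate over a Python list.

```python
import numpy as np

def parallel_sgd(x0, stoch_grad, encode, decode, lr, iters, K):
    """Simulate K synchronous workers. `stoch_grad(x, k)` returns worker k's
    stochastic gradient at x. Each gradient is encoded (line 3), 'broadcast',
    decoded by the peers (line 7), then averaged into the update (line 9)."""
    x = x0.copy()
    for t in range(iters):
        msgs = [encode(stoch_grad(x, k)) for k in range(K)]
        decoded = [decode(m) for m in msgs]
        x -= (lr / K) * np.sum(decoded, axis=0)
    return x

# identity hooks give vanilla parallel SGD on a toy quadratic f(x) = ||x||^2
rng = np.random.default_rng(0)
grad = lambda x, k: 2 * x + 0.1 * rng.normal(size=x.shape)
x = parallel_sgd(np.ones(4), grad, lambda g: g, lambda m: m,
                 lr=0.1, iters=50, K=4)
```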
Given access to stochastic gradients and a starting point $x_0$, SGD builds iterates $x_t$ given by Equation (1), projected onto $X$, where $(\eta_t)_{t \ge 0}$ is a sequence of step sizes. In this setting, one can show:

Theorem 2.1 ([7], Theorem 6.3). Let $X \subseteq \mathbb{R}^n$ be convex, and let $f : X \to \mathbb{R}$ be unknown, convex, and $L$-smooth. Let $x_0 \in X$ be given, and let $R^2 = \sup_{x \in X} \|x - x_0\|^2$. Let $T > 0$ be fixed. Given repeated, independent access to stochastic gradients with variance bound $\sigma^2$ for $f$, SGD with initial point $x_0$ and constant step sizes $\eta_t = \frac{1}{L + 1/\gamma}$, where $\gamma = \frac{R}{\sigma}\sqrt{\frac{2}{T}}$, achieves
$$E\left[f\left(\frac{1}{T}\sum_{t=0}^{T} x_t\right)\right] - \min_{x \in X} f(x) \le R\sqrt{\frac{2\sigma^2}{T}} + \frac{LR^2}{T}. \qquad (2)$$

Minibatched SGD. A modification to the SGD scheme presented above, often observed in practice, is a technique known as minibatching. In minibatched SGD, updates are of the form $x_{t+1} = \Pi_X\!\left(x_t - \eta_t \tilde G_t(x_t)\right)$, where $\tilde G_t(x_t) = \frac{1}{m}\sum_{i=1}^{m} \tilde g_{t,i}$ and each $\tilde g_{t,i}$ is an independent stochastic gradient for $f$ at $x_t$. It is not hard to see that if the $\tilde g_{t,i}$ are stochastic gradients with variance bound $\sigma^2$, then $\tilde G_t$ is a stochastic gradient with variance bound $\sigma^2/m$. By inspection of Theorem 2.1, as long as the first term in (2) dominates, minibatched SGD requires $1/m$ fewer iterations to converge.

Data-Parallel SGD. We consider synchronous data-parallel SGD, modelling real-world multi-GPU systems, and focus on the communication cost of SGD in this setting. We have a set of $K$ processors $p_1, p_2, \ldots, p_K$ who proceed in synchronous steps, and communicate using point-to-point messages. Each processor maintains a local copy of a vector $x$ of dimension $n$, representing the current estimate of the minimizer, and has access to private, independent stochastic gradients for $f$. In each synchronous iteration, described in Algorithm 1, each processor aggregates the value of $x$, then obtains random gradient updates for each component of $x$, then communicates these updates to all peers, and finally aggregates the received updates and applies them locally. Importantly, we add encoding and decoding steps for the gradients before and after send/receive in lines 3 and 7, respectively. In the following, whenever describing a variant of SGD, we assume the above general pattern, and only specify the encode/decode functions. Notice that the decoding step does not necessarily recover the original gradient $\tilde g^\ell$; instead, we usually apply an approximate version. When the encoding and decoding steps are the identity (i.e., no encoding/decoding), we shall refer to this algorithm as parallel SGD. In this case, it is a simple calculation to see that at each processor, if $x_t$ was the value of $x$ that the processors held before iteration $t$, then the updated value of $x$ by the end of this iteration is $x_{t+1} = x_t - (\eta_t/K)\sum_{\ell=1}^{K} \tilde g^\ell(x_t)$, where each $\tilde g^\ell$ is a stochastic gradient. In particular, this update is merely a minibatched update of size $K$. Thus, by the discussion above, and by rephrasing Theorem 2.1, we have the following corollary:

Corollary 2.2. Let $X, f, L, x_0$, and $R$ be as in Theorem 2.1. Fix $\epsilon > 0$. Suppose we run parallel SGD on $K$ processors, each with access to independent stochastic gradients with second moment bound $B$, with step size $\eta_t = 1/(L + \sqrt{K}/\gamma)$, where $\gamma$ is as in Theorem 2.1. Then if
$$T = O\left(R^2 \cdot \max\left(\frac{2B}{K\epsilon^2}, \frac{L}{\epsilon}\right)\right), \quad \text{then} \quad E\left[f\left(\frac{1}{T}\sum_{t=0}^{T} x_t\right)\right] - \min_{x \in X} f(x) \le \epsilon. \qquad (3)$$

In most reasonable regimes, the first term of the max in (3) will dominate the number of iterations necessary. Specifically, the number of iterations will depend linearly on the second moment bound $B$.
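The $\sigma^2/m$ minibatch variance claim above is easy to verify numerically; a quick illustrative check (entirely our own toy setup):

```python
import numpy as np

rng = np.random.default_rng(1)
true_grad = np.array([1.0, -2.0, 0.5])
sigma = 2.0

def noisy_grad():
    # unbiased stochastic gradient with per-coordinate noise of variance sigma^2
    return true_grad + sigma * rng.normal(size=3)

for m in (1, 10, 100):
    batches = [np.mean([noisy_grad() for _ in range(m)], axis=0)
               for _ in range(2000)]
    var = np.mean([np.sum((b - true_grad) ** 2) for b in batches])
    # expected E||G - grad||^2 = 3 * sigma^2 / m: roughly 12, 1.2, 0.12
    print(m, var)
```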
3 Quantized Stochastic Gradient Descent (QSGD)

In this section, we present our main results on stochastically quantized SGD. Throughout, log denotes the base-2 logarithm, and the number of bits to represent a float is 32. For any vector $v \in \mathbb{R}^n$, we let $\|v\|_0$ denote the number of nonzeros of $v$. For any string $\omega \in \{0,1\}^*$, we let $|\omega|$ denote its length. For any scalar $x \in \mathbb{R}$, we let $\mathrm{sgn}(x) \in \{-1, +1\}$ denote its sign, with $\mathrm{sgn}(0) = 1$.

3.1 Generalized Stochastic Quantization and Coding

Stochastic Quantization. We now consider a general, parametrizable lossy-compression scheme for stochastic gradient vectors. The quantization function is denoted by $Q_s(v)$, where $s \ge 1$ is a tuning parameter, corresponding to the number of quantization levels we implement. Intuitively, we define $s$ uniformly distributed levels between 0 and 1, to which each value is quantized in a way which preserves the value in expectation and introduces minimal variance. Please see Figure 1. For any $v \in \mathbb{R}^n$ with $v \ne 0$, $Q_s(v)$ is defined componentwise as
$$Q_s(v_i) = \|v\|_2 \cdot \mathrm{sgn}(v_i) \cdot \xi_i(v, s), \qquad (4)$$
where the $\xi_i(v, s)$ are independent random variables defined as follows. Let $0 \le \ell < s$ be an integer such that $|v_i|/\|v\|_2 \in [\ell/s, (\ell+1)/s]$; that is, $[\ell/s, (\ell+1)/s]$ is the quantization interval corresponding to $|v_i|/\|v\|_2$. Then
$$\xi_i(v, s) = \begin{cases} \ell/s & \text{with probability } 1 - p\!\left(\frac{|v_i|}{\|v\|_2}, s\right); \\ (\ell+1)/s & \text{otherwise.} \end{cases}$$
Here, $p(a, s) = as - \ell$ for any $a \in [0, 1]$. If $v = 0$, then we define $Q(v, s) = 0$. The distribution of $\xi_i(v, s)$ has minimal variance over distributions with support $\{0, 1/s, \ldots, 1\}$, and its expectation satisfies $E[\xi_i(v, s)] = |v_i|/\|v\|_2$. Formally, we can show:

Lemma 3.1. For any vector $v \in \mathbb{R}^n$, we have that (i) $E[Q_s(v)] = v$ (unbiasedness), (ii) $E[\|Q_s(v) - v\|_2^2] \le \min\left(\frac{n}{s^2}, \frac{\sqrt{n}}{s}\right)\|v\|_2^2$ (variance bound), and (iii) $E[\|Q_s(v)\|_0] \le s(s + \sqrt{n})$ (sparsity).

Efficient Coding of Gradients. Observe that for any vector $v$, the output of $Q_s(v)$ is naturally expressible by a tuple $(\|v\|_2, \sigma, \zeta)$, where $\sigma$ is the vector of signs of the $v_i$ and $\zeta$ is the vector of integer values $s \cdot \xi_i(v, s)$. The key idea behind the coding scheme is that not all integer values $s \cdot \xi_i(v, s)$ can be equally likely: in particular, larger integers are less frequent. We will exploit this via a specialized Elias integer encoding [14], presented in full in the full version of our paper [4]. Intuitively, for any positive integer $k$, its code, denoted $\mathrm{Elias}(k)$, starts from the binary representation of $k$, to which it prepends the length of this representation. It then recursively encodes this prefix. We show that for any positive integer $k$, the length of the resulting code is $|\mathrm{Elias}(k)| = \log k + \log\log k + \cdots + 1 \le (1 + o(1))\log k + 1$, and that encoding and decoding can be done efficiently.

Given a gradient vector represented as the triple $(\|v\|_2, \sigma, \zeta)$, with $s$ quantization levels, our coding outputs a string $S$ defined as follows. First, it uses 32 bits to encode $\|v\|_2$. It proceeds to encode, using Elias recursive coding, the position of the first nonzero entry of $\zeta$. It then appends a bit denoting $\sigma_i$ and follows that with $\mathrm{Elias}(s \cdot \xi_i(v, s))$. Iteratively, it proceeds to encode the distance from the current coordinate of $\zeta$ to the next nonzero, and encodes the $\sigma_i$ and $\zeta_i$ for that coordinate in the same way. The decoding scheme is straightforward: we first read off 32 bits to construct $\|v\|_2$, then iteratively use the decoding scheme for Elias recursive coding to read off the positions and values of the nonzeros of $\zeta$ and $\sigma$.
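A minimal numpy sketch of the stochastic quantization $Q_s$ of Eq. (4), together with the recursive Elias ("omega") integer code matching the description above, may be helpful. This is our own illustrative reading of the scheme, not the released implementation [31]:

```python
import numpy as np

def quantize(v, s, rng=np.random.default_rng()):
    """Stochastic quantization Q_s(v) = ||v||_2 * sgn(v_i) * xi_i(v, s):
    |v_i|/||v||_2 is rounded randomly to a multiple of 1/s so that
    E[Q_s(v)] = v (unbiasedness, Lemma 3.1(i))."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    a = np.abs(v) / norm                 # normalized magnitudes in [0, 1]
    lower = np.floor(a * s)              # the integer level l
    p = a * s - lower                    # P[round up] = a*s - l
    xi = (lower + (rng.random(v.shape) < p)) / s
    return norm * np.sign(v) * xi

def elias_omega(k):
    """Recursive Elias code for a positive integer: prepend the binary form,
    then recursively encode its length minus one; terminate with '0'."""
    code = "0"
    while k > 1:
        b = bin(k)[2:]
        code = b + code
        k = len(b) - 1
    return code

# unbiasedness check: the average of many quantizations approaches v
v = np.array([0.3, -1.2, 0.05, 2.0])
est = np.mean([quantize(v, s=4) for _ in range(20000)], axis=0)
```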
The properties of the quantization and of the encoding imply the following.

Theorem 3.2. Let $f : \mathbb{R}^n \to \mathbb{R}$ be fixed, and let $x \in \mathbb{R}^n$ be arbitrary. Fix $s \ge 2$ quantization levels. If $\tilde g(x)$ is a stochastic gradient for $f$ at $x$ with second moment bound $B$, then $Q_s(\tilde g(x))$ is a stochastic gradient for $f$ at $x$ with variance bound $\min\left(\frac{n}{s^2}, \frac{\sqrt{n}}{s}\right) B$. Moreover, there is an encoding scheme so that, in expectation, the number of bits to communicate $Q_s(\tilde g(x))$ is upper bounded by
$$\left(3 + \left(\frac{3}{2} + o(1)\right)\log\left(\frac{2(s^2 + n)}{s(s + \sqrt{n})}\right)\right) s(s + \sqrt{n}) + 32.$$

Sparse Regime. For the case $s = 1$, i.e., quantization levels $0$, $1$, and $-1$, the gradient density is $O(\sqrt{n})$, while the second-moment blowup is at most $\sqrt{n}$. Intuitively, this means that we will employ $O(\sqrt{n}\log n)$ bits per iteration, while the convergence time is increased by $O(\sqrt{n})$.

Dense Regime. The variance blowup is minimized to at most 2 for $s = \sqrt{n}$ quantization levels; in this case, we devise a more efficient encoding which yields an order of magnitude shorter codes compared to the full-precision variant. The proof of this statement is not entirely obvious, as it exploits both the statistical properties of the quantization and the guarantees of the Elias coding.

Corollary 3.3. Let $f$, $x$, and $\tilde g(x)$ be as in Theorem 3.2. There is an encoding scheme for $Q_{\sqrt{n}}(\tilde g(x))$ which in expectation has length at most $2.8n + 32$.

3.2 QSGD Guarantees

Putting the bounds on the communication and variance given above together with the guarantees for SGD algorithms on smooth, convex functions yields the following results:

Theorem 3.4 (Smooth Convex QSGD). Let $X, f, L, x_0$, and $R$ be as in Theorem 2.1. Fix $\epsilon > 0$. Suppose we run parallel QSGD with $s$ quantization levels on $K$ processors accessing independent stochastic gradients with second moment bound $B$, with step size $\eta_t = 1/(L + \sqrt{K}/\gamma)$, where $\gamma$ is as in Theorem 2.1 with $\sigma^2 = B'$, where $B' = \min\left(\frac{n}{s^2}, \frac{\sqrt{n}}{s}\right) B$. Then if $T = O\left(R^2 \cdot \max\left(\frac{2B'}{K\epsilon^2}, \frac{L}{\epsilon}\right)\right)$, then $E\left[f\left(\frac{1}{T}\sum_{t=0}^{T} x_t\right)\right] - \min_{x \in X} f(x) \le \epsilon$. Moreover, QSGD requires $\left(3 + \left(\frac{3}{2} + o(1)\right)\log\left(\frac{2(s^2 + n)}{s^2 + \sqrt{n}}\right)\right)(s^2 + \sqrt{n}) + 32$ bits of communication per round. In the special case when $s = \sqrt{n}$, this can be reduced to $2.8n + 32$.

QSGD is quite portable, and can be applied to almost any stochastic gradient method. For illustration, we can use quantization along with [15] to get communication-efficient non-convex SGD.

Theorem 3.5 (QSGD for smooth non-convex optimization). Let $f : \mathbb{R}^n \to \mathbb{R}$ be an $L$-smooth (possibly nonconvex) function, and let $x_1$ be an arbitrary initial point. Let $T > 0$ be fixed, and $s > 0$. Then there is a random stopping time $R$ supported on $\{1, \ldots, N\}$ so that QSGD with quantization level $s$, constant step sizes $\eta = O(1/L)$, and access to stochastic gradients of $f$ with second moment bound $B$ satisfies $\frac{1}{L} E\left[\|\nabla f(x)\|_2^2\right] \le O\left(\frac{L(f(x_1) - f^*)}{N} + \frac{\min(n/s^2, \sqrt{n}/s)\, B}{L}\right)$. Moreover, the communication cost is the same as in Theorem 3.4.

3.3 Quantized Variance-Reduced SGD

Assume we are given $K$ processors and a parameter $m > 0$, where each processor $i$ has access to the functions $\{f_{im/K}, \ldots, f_{(i+1)m/K - 1}\}$. The goal is to approximately minimize $f = \frac{1}{m}\sum_{i=1}^{m} f_i$. For processor $i$, let $h_i = \frac{1}{m}\sum_{j=im/K}^{(i+1)m/K - 1} f_j$ be the portion of $f$ that it knows, so that $f = \sum_{i=1}^{K} h_i$. A natural question is whether we can apply stochastic quantization to reduce communication for parallel SVRG. Upon inspection, we notice that the resulting update will break standard SVRG. We resolve this technical issue, proving one can quantize SVRG updates using our techniques and still obtain the same convergence bounds.
Algorithm Description. Let $\tilde Q(v) = Q(v, \sqrt{n})$, where $Q(v, s)$ is defined as in Section 3.1. Given an arbitrary starting point $x_0$, we let $y^{(1)} = x_0$. At the beginning of epoch $p$, each processor broadcasts $\nabla h_i(y^{(p)})$, that is, its unquantized full gradient, from which the processors each aggregate $\nabla f(y^{(p)}) = \sum_{i=1}^{K} \nabla h_i(y^{(p)})$. Within each epoch, for each iteration $t = 1, \ldots, T$, and for each processor $i = 1, \ldots, K$, we let $j_{i,t}^{(p)}$ be a uniformly random integer from $[m]$, completely independent of everything else. Then, in iteration $t$ of epoch $p$, processor $i$ broadcasts the update vector
$$u_{t,i}^{(p)} = \tilde Q\left(\nabla f_{j_{i,t}^{(p)}}\left(x_t^{(p)}\right) - \nabla f_{j_{i,t}^{(p)}}\left(y^{(p)}\right) + \nabla f\left(y^{(p)}\right)\right).$$
Each processor then computes the total update $u_t^{(p)} = \frac{1}{K}\sum_{i=1}^{K} u_{t,i}^{(p)}$ and sets $x_{t+1}^{(p)} = x_t^{(p)} - \eta u_t^{(p)}$. At the end of epoch $p$, each processor sets $y^{(p+1)} = \frac{1}{T}\sum_{t=1}^{T} x_t^{(p)}$. We can prove the following.

Theorem 3.6. Let $f(x) = \frac{1}{m}\sum_{i=1}^{m} f_i(x)$, where $f$ is $\ell$-strongly convex, and the $f_i$ are convex and $L$-smooth, for all $i$. Let $x^*$ be the unique minimizer of $f$ over $\mathbb{R}^n$. Then, if $\eta = O(1/L)$ and $T = O(L/\ell)$, then QSVRG with initial point $y^{(1)}$ ensures $E\left[f(y^{(p+1)})\right] - f(x^*) \le 0.9^p \left(f(y^{(1)}) - f(x^*)\right)$, for any epoch $p \ge 1$. Moreover, QSVRG with $T$ iterations per epoch requires $\le (F + 2.8n)(T + 1) + Fn$ bits of communication per epoch.

Discussion. In particular, this allows us to largely decouple the dependence between $F$ and the condition number of $f$ in the communication. Let $\kappa = L/\ell$ denote the condition number of $f$. Observe that whenever $F \ll \kappa$, the second term is subsumed by the first and the per-epoch communication is dominated by $(F + 2.8n)(T + 1)$. Specifically, for any fixed $\epsilon$, to attain accuracy $\epsilon$ we must take $F = O(\log 1/\epsilon)$. As long as $\log 1/\epsilon = O(\kappa)$, which is true for instance in the case when $\kappa \ge \mathrm{poly}\log(n)$ and $\epsilon \ge \mathrm{poly}(1/n)$, the communication per epoch is $O(\kappa(\log 1/\epsilon + n))$.

Gradient Descent. The full version of the paper [4] contains an application of QSGD to gradient descent. Roughly, in this case, QSGD can simply truncate the gradient to its top components, sorted by magnitude.

Table 1: Description of networks, final top-1 accuracy, as well as end-to-end training speedup on 8 GPUs.

Network      | Dataset  | Params. | Init. Rate | Top-1 (32bit) | Top-1 (QSGD)  | Speedup (8 GPUs)
AlexNet      | ImageNet | 62M     | 0.07       | 59.50%        | 60.05% (4bit) | 2.05x
ResNet152    | ImageNet | 60M     | 1          | 77.0%         | 76.74% (8bit) | 1.56x
ResNet50     | ImageNet | 25M     | 1          | 74.68%        | 74.76% (4bit) | 1.26x
ResNet110    | CIFAR-10 | 1M      | 0.1        | 93.86%        | 94.19% (4bit) | 1.10x
BN-Inception | ImageNet | 11M     | 3.6        | 81.13%        | 81.15% (4bit) | 1.16x (projected)
VGG19        | ImageNet | 143M    | 0.1        | -             | -             | 2.25x (projected)
LSTM         | AN4      | 13M     | 0.5        | -             | -             | 2x (2 GPUs)

4 QSGD Variants

Our experiments will stretch the theory, as we use deep networks with non-convex objectives. (We have also tested QSGD for convex objectives. Results closely follow the theory, and are therefore omitted.) Our implementations depart from the previous algorithm description as follows. First, we notice that we can control the variance of the quantization by quantizing into buckets of a fixed size $d$. If we view each gradient as a one-dimensional vector $v$, reshaping tensors if necessary, a bucket is defined as a set of $d$ consecutive vector values. (E.g. the $i$-th bucket is the sub-vector $v[(i-1)d + 1 : i \cdot d]$.) We quantize each bucket independently, using QSGD. Setting $d = 1$ corresponds to no quantization (vanilla SGD), and $d = n$ corresponds to full quantization, as described in the previous section.
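The bucketing variant is a thin wrapper around the quantizer: split $v$ into chunks of size $d$ and quantize each chunk independently, at the price of one stored scale per bucket. A self-contained sketch under the same assumptions as before:

```python
import numpy as np

def _quantize(v, s, rng):
    # stochastic quantization Q_s from Section 3.1 (unbiased randomized rounding)
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    a = np.abs(v) / norm
    lower = np.floor(a * s)
    xi = (lower + (rng.random(v.shape) < a * s - lower)) / s
    return norm * np.sign(v) * xi

def quantize_bucketed(v, s, d, seed=0):
    """Quantize v in independent buckets of size d: the Lemma 3.1 variance
    bound then holds with d in place of the full dimension n, at the cost of
    one extra scale (||bucket||_2) stored per d values."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(v)
    for start in range(0, len(v), d):
        out[start:start + d] = _quantize(v[start:start + d], s, rng)
    return out

# e.g. bucket size 512 with 4 bits (s = 2^4) bounds the variance blowup by
# sqrt(512)/2^4, roughly 1.41, as computed in the text that follows
```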
It is easy to see that, using bucketing, the guarantees from Lemma 3.1 will be expressed in terms of $d$, as opposed to the full dimension $n$. This provides a knob by which we can control variance, at the cost of storing an extra scaling factor for every $d$ bucket values. As an example, if we use a bucket size of 512 and 4 bits, the variance increase due to quantization will be upper bounded by only $\sqrt{512}/2^4 \simeq 1.41$. This provides a theoretical justification for the similar convergence rates we observe in practice. The second difference from the theory is that we will scale by the maximum value of the vector (as opposed to the 2-norm). Intuitively, normalizing by the max preserves more values, and has slightly higher accuracy for the same number of iterations. Both methods have the same baseline bandwidth reduction because of lower bit width (e.g. 32 bits to 2 bits per dimension), but normalizing by the max no longer provides any sparsity guarantees. We note that this does not affect our bounds in the regime where we use $\Theta(\sqrt{n})$ quantization levels per component, as we employ no sparsity in that case. (However, we note that in practice max normalization also generates non-trivial sparsity.)

5 Experiments

Setup. We performed experiments on Amazon EC2 p2.16xlarge instances, with 16 NVIDIA K80 GPUs. Instances have GPUDirect peer-to-peer communication, but do not currently support NVIDIA NCCL extensions. We have implemented QSGD on GPUs using the Microsoft Cognitive Toolkit (CNTK) [3]. This package provides efficient (MPI-based) GPU-to-GPU communication, and implements an optimized version of 1bit-SGD [35]. Our code is released as open source [31]. We execute two types of tasks: image classification on ILSVRC 2015 (ImageNet) [12], CIFAR-10 [25], and MNIST [27], and speech recognition on the CMU AN4 dataset [2]. For vision, we experimented with the AlexNet [26], VGG [36], ResNet [18], and Inception with Batch Normalization [22] deep networks. For speech, we trained an LSTM network [19]. See Table 1 for details.

[Figure 2: Breakdown of communication versus computation for various neural networks, on 2, 4, 8, and 16 GPUs, for full 32-bit precision versus QSGD 4-bit. Each bar represents the total time for an epoch under standard parameters. Epoch time is broken down into communication (bottom, solid) and computation (top, transparent). Although epoch time diminishes as we parallelize, the proportion of communication increases.]

[Figure 3: Accuracy numbers for different networks: (a) AlexNet accuracy versus time, (b) LSTM training loss versus time, (c) ResNet50 accuracy. Light blue lines represent 32-bit accuracy.]

Protocol. Our methodology emphasizes zero error tolerance, in the sense that we always aim to preserve the accuracy of the networks trained. We used standard sizes for the networks, with hyperparameters optimized for the 32-bit precision variant. (Unless otherwise stated, we use the default networks and hyper-parameters optimized for full-precision CNTK 2.0.) We increased batch size when necessary to balance communication and computation for larger GPU counts, but never past the point where we lose accuracy. We employed double buffering [35] to perform communication and quantization concurrently with the computation.
Quantization usually benefits from lowered learning rates; yet we always run with the 32-bit learning rate, and instead decrease the bucket size to reduce variance. We do not quantize small gradient matrices (< 10K elements), since the computational cost of quantizing them significantly exceeds the reduction in communication. However, in all experiments, more than 99% of all parameters are transmitted in quantized form. We reshape matrices to fit bucket sizes, so that no receptive field is split across two buckets.

Communication vs. Computation. In the first set of experiments, we examine the ratio between computation and communication costs during training, for increased parallelism. The image classification networks are trained on ImageNet, while the LSTM is trained on AN4. We examine the cost breakdown for these networks over a pass over the dataset (epoch). Figure 2 gives the results for various networks for image classification. The variance of epoch times is practically negligible (< 1%), hence we omit confidence intervals. Figure 2 leads to some interesting observations. First, based on the ratio of communication to computation, we can roughly split networks into communication-intensive (AlexNet, VGG, LSTM) and computation-intensive (Inception, ResNet). For both network types, the relative impact of communication increases significantly as we increase the number of GPUs. Examining the breakdown for the 32-bit version, all networks could significantly benefit from reduced communication. For example, for AlexNet on 16 GPUs with batch size 1024, more than 80% of training time is spent on communication, whereas for the LSTM on 2 GPUs with batch size 256, the ratio is 71%. (These ratios can be changed slightly by increasing the batch size, but this can decrease accuracy, see e.g. [21].)

Next, we examine the impact of QSGD on communication and overall training time. (Communication time includes time spent compressing and uncompressing gradients.) We measured QSGD with 2-bit quantization and 128 bucket size, and 4-bit and 8-bit quantization with 512 bucket size. The results for these two variants are similar, since the different bucket sizes mean that the 4-bit version only sends 77% more data than the 2-bit version (but roughly 8x less than 32-bit). These bucket sizes are chosen to ensure good convergence, but are not carefully tuned. On 16-GPU AlexNet with batch size 1024, 4-bit QSGD reduces communication time by 4x, and overall epoch time by 2.5x. On the LSTM, it reduces communication time by 6.8x, and overall epoch time by 2.7x. Runtime improvements are non-trivial for all architectures we considered.

Accuracy. We now examine how QSGD influences accuracy and convergence rate. We ran AlexNet and ResNet to full convergence on ImageNet, the LSTM on AN4, ResNet110 on CIFAR-10, as well as a two-layer perceptron on MNIST. Results are given in Figure 3, and exact numbers are given in Table 1. QSGD tests are performed on an 8-GPU setup, and are compared against the best known full-precision accuracy of the networks. In general, we notice that 4-bit or 8-bit gradient quantization is sufficient to recover or even slightly improve full accuracy, while ensuring non-trivial speedup. Across all our experiments, 8-bit gradients with 512 bucket size have been sufficient to recover or improve upon the full-precision accuracy. Our results are consistent with recent work [30] noting benefits of adding noise to gradients when training deep networks.
Thus, quantization can be seen as a source of zero-mean noise which happens to render communication more efficient. At the same time, we note that more aggressive quantization can hurt accuracy. In particular, 4-bit QSGD with 8192 bucket size (not shown) loses 0.57% for top-5 accuracy and 0.68% for top-1, versus full precision on AlexNet, when trained for the same number of epochs. Also, QSGD with 2-bit quantization and 64 bucket size has a gap of 1.73% for top-1 and 1.18% for top-5. One issue we examined in more detail is which layers are more sensitive to quantization. It appears that quantizing convolutional layers too aggressively (e.g., at 2-bit precision) can lead to accuracy loss if trained for the same period of time as the full-precision variant. However, increasing precision to 4-bit or 8-bit recovers accuracy. This finding suggests that modern architectures for vision tasks, such as ResNet or Inception, which are almost entirely convolutional, may benefit less from quantization than recurrent deep networks such as LSTMs.

Additional Experiments. The full version of the paper contains additional experiments, including a full comparison with 1BitSGD. In brief, QSGD outperforms or matches the performance and final accuracy of 1BitSGD for the networks and parameter values we consider.

6 Conclusions and Future Work

We have presented QSGD, a family of SGD algorithms which allow a smooth trade-off between the amount of communication per iteration and the running time. Experiments suggest that QSGD is highly competitive with the full-precision variant on a variety of tasks. There are a number of optimizations we did not explore. The most significant is leveraging the sparsity created by QSGD. Current implementations of MPI do not provide support for sparse types, but we plan to explore such support in future work. Further, we plan to examine the potential of QSGD in larger-scale applications, such as super-computing. On the theoretical side, it is interesting to consider applications of quantization beyond SGD. The full version of this paper [4] contains complete proofs, as well as additional applications.

7 Acknowledgments

The authors would like to thank Martin Jaggi, Ce Zhang, Frank Seide and the CNTK team for their support during the development of this project, as well as the anonymous NIPS reviewers for their careful consideration and excellent suggestions. Dan Alistarh was supported by a Swiss National Fund Ambizione Fellowship. Jerry Li was supported by the NSF CAREER Award CCF-1453261, CCF-1565235, a Google Faculty Research Award, and an NSF Graduate Research Fellowship. This work was developed in part while Dan Alistarh, Jerry Li and Milan Vojnovic were with Microsoft Research Cambridge, UK.

References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Alex Acero. Acoustical and environmental robustness in automatic speech recognition, volume 201. Springer Science & Business Media, 2012.
[3] Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, et al. An introduction to computational networks and the computational network toolkit. Technical Report MSR-TR-2014-112, August 2014.
[4] Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic.
QSGD: Communication-efficient SGD via gradient quantization and encoding. arXiv preprint arXiv:1610.02132, 2016.
[5] Yossi Arjevani and Ohad Shamir. Communication complexity of distributed convex learning and optimization. In NIPS, 2015.
[6] Ron Bekkerman, Mikhail Bilenko, and John Langford. Scaling up machine learning: Parallel and distributed approaches. Cambridge University Press, 2011.
[7] Sébastien Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3-4):231-357, 2015.
[8] Trishul Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. Project Adam: Building an efficient and scalable deep learning training system. In OSDI, October 2014.
[9] CNTK BrainScript file for AlexNet. https://github.com/Microsoft/CNTK/tree/master/Examples/Image/Classification/AlexNet/BrainScript. Accessed: 2017-02-24.
[10] Christopher M De Sa, Ce Zhang, Kunle Olukotun, and Christopher Ré. Taming the wild: A unified analysis of Hogwild-style algorithms. In NIPS, 2015.
[11] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In NIPS, 2012.
[12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248-255. IEEE, 2009.
[13] John C Duchi, Sorathan Chaturapruek, and Christopher Ré. Asynchronous stochastic convex optimization. NIPS, 2015.
[14] Peter Elias. Universal codeword sets and representations of the integers. IEEE Transactions on Information Theory, 21(2):194-203, 1975.
[15] Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013.
[16] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In ICML, pages 1737-1746, 2015.
[17] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[19] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[20] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In Advances in Neural Information Processing Systems, pages 4107-4115, 2016.
[21] Forrest N Iandola, Matthew W Moskewicz, Khalid Ashraf, and Kurt Keutzer. FireCaffe: near-linear acceleration of deep neural network training on compute clusters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2592-2600, 2016.
[22] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[23] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.
[24] Jakub Konečný. Stochastic, distributed and federated optimization for machine learning. arXiv preprint arXiv:1707.01155, 2017.
[25] Alex Krizhevsky and Geoffrey Hinton.
Learning multiple layers of features from tiny images, 2009.
[26] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[27] Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits, 1998.
[28] Mu Li, David G Andersen, Jun Woo Park, Alexander J Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J Shekita, and Bor-Yiing Su. Scaling distributed machine learning with the parameter server. In OSDI, 2014.
[29] Xiangru Lian, Yijun Huang, Yuncheng Li, and Ji Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. In NIPS, 2015.
[30] Arvind Neelakantan, Luke Vilnis, Quoc V Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.
[31] CNTK implementation of QSGD. https://gitlab.com/demjangrubic/QSGD. Accessed: 2017-11-4.
[32] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[33] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.
[34] Richard Schreier and Gabor C Temes. Understanding delta-sigma data converters, volume 74. IEEE Press, Piscataway, NJ, 2005.
[35] Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In INTERSPEECH, 2014.
[36] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[37] Nikko Strom. Scalable distributed DNN training using commodity GPU cloud computing. In INTERSPEECH, 2015.
[38] Ananda Theertha Suresh, Felix X Yu, H Brendan McMahan, and Sanjiv Kumar. Distributed mean estimation with limited communication. arXiv preprint arXiv:1611.00429, 2016.
[39] Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. Chainer: a next-generation open source framework for deep learning.
[40] John N Tsitsiklis and Zhi-Quan Luo. Communication complexity of convex optimization. Journal of Complexity, 3(3), 1987.
[41] Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. TernGrad: Ternary gradients to reduce communication in distributed deep learning. arXiv preprint arXiv:1705.07878, 2017.
[42] Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, and Ce Zhang. ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning. In International Conference on Machine Learning, pages 4035-4043, 2017.
[43] Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems, pages 685-693, 2015.
[44] Yuchen Zhang, John Duchi, Michael I Jordan, and Martin J Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In NIPS, 2013.
[45] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
Convergent Block Coordinate Descent for Training Tikhonov Regularized Deep Neural Networks

Ziming Zhang and Matthew Brand
Mitsubishi Electric Research Laboratories (MERL)
Cambridge, MA 02139-1955
{zzhang, brand}@merl.com

Abstract

By lifting the ReLU function into a higher dimensional space, we develop a smooth multi-convex formulation for training feed-forward deep neural networks (DNNs). This allows us to develop a block coordinate descent (BCD) training algorithm consisting of a sequence of numerically well-behaved convex optimizations. Using ideas from proximal point methods in convex analysis, we prove that this BCD algorithm will converge globally to a stationary point with R-linear convergence rate of order one. In experiments with the MNIST database, DNNs trained with this BCD algorithm consistently yielded better test-set error rates than identical DNN architectures trained via all the stochastic gradient descent (SGD) variants in the Caffe toolbox.

1 Introduction

Feed-forward deep neural networks (DNNs) are function approximators wherein weighted combinations of inputs are filtered through nonlinear activation functions that are organized into a cascade of fully connected (FC) hidden layers. In recent years DNNs have become the tool of choice for many research areas such as machine translation and computer vision. The objective function for training a DNN is highly non-convex, leading to numerous obstacles to global optimization [10], notably the proliferation of saddle points [11] and the prevalence of local extrema that offer poor generalization off the training sample [8]. These observations have motivated regularization schemes to smooth or simplify the energy surface, either explicitly such as weight decay [23] or implicitly such as dropout [32] and batch normalization [19], so that the solutions are more robust, i.e. better generalized to test data.

Training algorithms face many numerical difficulties that can make it hard to even find a local optimum. One of the well-known issues is the so-called vanishing gradient in back propagation (chain-rule differentiation) [18]: the long dependency chains between hidden layers (and corresponding variables) tend to drive gradients to zero far from the optimum. This issue leads to very slow improvements of the model parameters, and it becomes more and more serious in deeper networks [16]. The vanishing gradient problem can be partially ameliorated by using non-saturating activation functions such as the rectified linear unit (ReLU) [25], and network architectures that have shorter input-to-output paths such as ResNet [17]. The saddle-point problem has been addressed by switching from deterministic gradient descent to stochastic gradient descent (SGD), which can achieve weak convergence in probability [6]. Classic proximal-point optimization methods such as the alternating direction method of multipliers (ADMM) have also shown promise for DNN training [34; 41], but in the DNN setting their convergence properties remain unknown.

Contributions: In this paper,
1. We propose a novel Tikhonov regularized multi-convex formulation for deep learning, which can be used to learn both dense and sparse DNNs;
2. We propose a novel block coordinate descent (BCD) based learning algorithm accordingly, which is guaranteed to converge globally to stationary points with R-linear convergence rate of order one;
3.
We demonstrate empirically that DNNs estimated with BCD can produce better representations than DNNs estimated with SGD, in the sense of yielding better test-set classification rates.

Our Tikhonov regularization is motivated by the fact that the ReLU activation function is equivalent to solving a smoothly penalized projection problem in a higher-dimensional Euclidean space. We use this to build a Tikhonov regularization matrix which encodes all the information of the network, i.e. the architecture as well as its associated weights. In this way our training objective can be divided into three sub-problems, namely, (1) a Tikhonov regularized inverse problem [37], (2) least-squares regression, and (3) learning classifiers. Since each sub-problem is convex and coupled with the other two, our overall objective is multi-convex.

Block coordinate descent (BCD) is often used for problems where finding an exact solution of a sub-problem with respect to a subset (block) of variables is much simpler than finding the solution for all variables simultaneously [27]. In our case, each sub-problem isolates a block of variables which can be solved easily (e.g. closed-form solutions exist). One of the advantages of our decomposition into sub-problems is that the long-range dependency between hidden layers is captured within a sub-problem whose solution helps to propagate information between inputs and outputs and so stabilize the networks (i.e. convergence). Therefore, it does not suffer from vanishing gradients at all. In our experiments, we demonstrate the effectiveness and efficiency of our algorithm by comparing with SGD based solvers.

1.1 Related Work

(1) Stochastic Regularization (SR) vs. Local Regularization vs. Tikhonov Regularization: SR is a widely-used technique in deep learning to prevent the training from overfitting. The basic idea in SR is to multiply the network weights by some random variables so that the learned network is more robust and generalizes better to test data. Dropout [32] and its variants such as [22] are classic examples of SR. Gal & Ghahramani [14] showed that SR in deep learning can be considered as approximate variational inference in Bayesian neural networks. Recently Baldassi et al. [2] proposed smoothing non-convex functions with local entropy, and later Chaudhari et al. [8] proposed Entropy-SGD for training DNNs. The idea behind such methods is to locate solutions locally within large flat regions of the energy landscape that favor good generalization. In [9] Chaudhari et al. provided the mathematical justification for these methods from the perspective of partial differential equations (PDEs).

In contrast, our Tikhonov regularization tends to smooth the non-convex loss explicitly, globally, and data-dependently. We deterministically learn the Tikhonov matrix as well as the auxiliary variables in the ill-posed inverse problems. The Tikhonov matrix encodes all the information in the network, and the auxiliary variables represent the ideal outputs of the data from each hidden layer that minimize our objective. Conceptually these variables work similarly to target propagation [4].

(2) SGD vs. BCD: In [6] Bottou et al. proved weak convergence of SGD for non-convex optimization. Ghadimi & Lan [15] showed that SGD can achieve convergence rates that scale as $O(t^{-1/2})$ for non-convex loss functions if the stochastic gradient is unbiased with bounded variance, where $t$ denotes the number of iterations. For non-convex optimization, the BCD based algorithm in [39] was proven to converge globally to stationary points. For parallel computing another BCD based algorithm, namely Parallel Successive Convex Approximation (PSCA), was proposed in [31] and proven to be convergent.

(3) ADMM vs. BCD: The alternating direction method of multipliers (ADMM) is a proximal-point optimization framework from the 1970s, recently championed by Boyd [7]. It breaks a nearly-separable problem into loosely-coupled smaller problems, some of which can be solved independently and thus in parallel. ADMM offers linear convergence for strictly convex problems, and for certain special non-convex optimization problems, ADMM can also converge [29; 36].
For non-convex optimization, the BCD based algorithm in [39] was proven to converge globally to stationary points. For parallel computing another BCD based algorithm, namely Parallel Successive Convex Approximation (PSCA), was proposed in [31] and proven to be convergent. (3) ADMM vs. BCD: Alternating direction method of multipliers (ADMM) is a proximal-point optimization framework from the 1970s and recently championed by Boyd [7]. It breaks a nearlyseparable problem into loosely-coupled smaller problems, some of which can be solved independently and thus in parallel. ADMM offers linear convergence for strictly convex problems, and for certain special non-convex optimization problems, ADMM can also converge [29; 36]. Unfortunately, thus 2 far there is no evidence or mathematical argument that DNN training is one of these special cases. Therefore, even though empirically it has been successfully applied to DNN training [34; 41], it still lacks of convergence guarantee. Our BCD-based DNN training algorithm is also amenable to ADMM-like parallelization. More importantly, as we prove in Sec. 4, it will converge globally to stationary points with R-linear convergence. 2 Tikhonov Regularization for Deep Learning 2.1 Problem Setup Key Notations: We denote xi ? Rd0 as the i-th training data, yi ? Y as its corresponding class label from label set Y, ui,n ? Rdn as the output feature for xi from the n-th (1 ? n ? N ) hidden layer in our network, Wn,m ? Rdn ?dm as the weight matrix between the n-th and m-th hidden layers, Mn as the input layer index set for the n-th hidden layer, V ? RdN +1 ?dN as the weight matrix between the last hidden layer and the output layer, U, V, W as nonempty closed convex sets, and `(?, ?) as a convex loss function. Network Architectures: In our networks we only consider ReLU as the activation functions. To provide short paths through the DNN, we allow multi-input ReLU units which can take the outputs from multiple previous layers as its inputs. Fig. 1 illustrates a network architecture that we consider, where the third hidden layers (with ReLU activations), for instance, takes the input data and the outputs from the first and second hidden layers as its inputs. Mathematically, we define our multi-input ReLU function at layer n for data xi as: input hidden layers output  xi ,  P if n = 0 ui,n = (1) Figure 1: Illustration of DNN architecmax 0, m?Mn Wn,m ui,m , otherwise tures that we consider in the paper. where max denotes the entry-wise max operator and 0 denotes a dn -dim zero vector. Note that multi-input ReLUs can be thought of as conventional ReLU with skip layers [17] where W?s are set to identity matrices accordingly. Conventional Objective for Training DNNs with ReLU: We write down the general objective1 in a recursive way as used in [41] as follows for clarity: ( ) X X min `(yi , Vui,N ), s.t. ui,n = max 0, Wn,m ui,m , ui,0 = xi , ?i, ?n, (2) ? V?V,W?W i m?Mn ? = {Wn,m }. Note that we separate the last FC layer (with weight matrix V) from the where W ? intentionally, because V is for learning classifiers rest hidden layers (with weight matrices in W) ? is for learning useful features. The network architectures we use in this paper are mainly for while W extracting features, on top of which any arbitrary classifier can be learned further. Our goal is to optimize Eq. 2. To that end, we propose a novel BCD based algorithm which can solve the relaxation of Eq. 2 using Tikhonov regularization with convergence guarantee. 
2.2 Reinterpretation of ReLU

The ReLU, ordinarily defined as $u = \max\{0, x\}$ for $x \in \mathbb{R}^d$, can be viewed as a projection onto a convex set (POCS) [3], and thus rewritten as a simple smooth convex optimization problem,
$$\max\{0, x\} \equiv \arg\min_{u \in \mathcal{U}} \|u - x\|_2^2, \qquad (3)$$
where $\|\cdot\|_2$ denotes the $\ell_2$ norm of a vector and $\mathcal{U}$ here is the nonnegative closed half-space. This non-negative least squares problem becomes the basis of our lifted objective.

¹ For simplicity, in this paper we always presume that the domain of each variable contains the regularization, e.g. an $\ell_2$-norm, without showing it in the objective explicitly.

2.3 Our Tikhonov Regularized Objective

We use Eq. 3 to lift and unroll the general training objective in Eq. 2, obtaining the relaxation:
$$\min_{\tilde{\mathcal{U}} \subseteq \mathcal{U}, V \in \mathcal{V}, \tilde{\mathcal{W}} \subseteq \mathcal{W}} f(\tilde{\mathcal{U}}, V, \tilde{\mathcal{W}}) = \sum_i \ell(y_i, V u_{i,N}) + \sum_{i,n} \frac{\gamma_n}{2} \left\| u_{i,n} - \sum_{m \in \mathcal{M}_n} W_{n,m} u_{i,m} \right\|_2^2, \quad \text{s.t.} \; u_{i,n} \ge 0, \; u_{i,0} = x_i, \; \forall i, \forall n \ge 1, \qquad (4)$$
where $\tilde{\mathcal{U}} = \{u_{i,n}\}$ and the $\gamma_n \ge 0, \forall n$ denote predefined regularization constants. Larger $\gamma_n$ values force $u_{i,n}, \forall i$ to more closely approximate the output of the ReLU at the $n$-th hidden layer. Arranging the $u$ and $\gamma$ terms into a matrix $Q$, we rewrite Eq. 4 in familiar form as a Tikhonov regularized objective:
$$\min_{\tilde{\mathcal{U}} \subseteq \mathcal{U}, V \in \mathcal{V}, \tilde{\mathcal{W}} \subseteq \mathcal{W}} f(\tilde{\mathcal{U}}, V, \tilde{\mathcal{W}}) \equiv \sum_i \left\{ \ell(y_i, V P u_i) + \frac{1}{2} u_i^T Q(\tilde{\mathcal{W}}) u_i \right\}. \qquad (5)$$
Here $u_i, \forall i$ denotes the concatenation of all hidden outputs as well as the input data, i.e. $u_i = [u_{i,n}]_{n=0}^{N}, \forall i$; $P$ is a predefined constant matrix so that $P u_i = u_{i,N}, \forall i$; and $Q(\tilde{\mathcal{W}})$ denotes another matrix constructed from the weight matrix set $\tilde{\mathcal{W}}$.

Proposition 1. $Q(\tilde{\mathcal{W}})$ is positive semidefinite, leading to the following Tikhonov regularization: $u_i^T Q(\tilde{\mathcal{W}}) u_i \equiv (\Gamma u_i)^T (\Gamma u_i) = \|\Gamma u_i\|_2^2, \; \exists \Gamma, \forall i$, where $\Gamma$ is the Tikhonov matrix.

Definition 1 (Block Multi-Convexity [38]). A function $f$ is block multi-convex if for each block variable $x_i, \forall i$, $f$ is a convex function of $x_i$ while all the other blocks are fixed.

Proposition 2. $f(\tilde{\mathcal{U}}, V, \tilde{\mathcal{W}})$ is block multi-convex.

3 Block Coordinate Descent Algorithm

3.1 Training

Eq. 4 can be minimized using alternating optimization, which decomposes the problem into the following three convex sub-problems based on Proposition 2:

- Tikhonov regularized inverse problem: $\min_{u_i \in \mathcal{U}} \ell(y_i, V P u_i) + \frac{1}{2} u_i^T Q(\tilde{\mathcal{W}}) u_i, \; \forall i$;
- Least-squares regression: $\min_{W_{n,m} \in \mathcal{W}} \sum_{i,n} \frac{\gamma_n}{2} \left\| u_{i,n} - \sum_{m \in \mathcal{M}_n} W_{n,m} u_{i,m} \right\|_2^2$;
- Classification using learned features: $\min_{V \in \mathcal{V}} \sum_i \ell(y_i, V P u_i)$.

All three sub-problems can be solved efficiently due to their convexity. In fact the inverse sub-problem alleviates the vanishing gradient issue in traditional deep learning, because it tries to obtain the estimated solution for the output feature of each hidden layer, and these features are dependent on each other through the Tikhonov matrix. Such functionality is similar to that of target (i.e. estimated outputs of each layer) propagation [4], namely, propagating information between input data and output labels. Unfortunately, a simple alternating optimization scheme cannot guarantee convergence to stationary points for solving Eq. 4. Therefore we propose a novel BCD based algorithm for training DNNs based on Eq. 4, as listed in Alg. 1. Basically we sequentially solve each sub-problem with an extra quadratic term. These extra terms, as well as the convex combination rule, guarantee the global convergence of the algorithm (see Sec. 4 for more details). Our algorithm involves solving a sequence of quadratic programs (QP), whose computational complexity is cubic, in general, in the input dimension [28].
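To make the three-block alternation concrete, here is a simplified single-hidden-layer sketch with an MSE loss: the u-step is a nonnegative least-squares problem (solved here with a few projected gradient steps, mirroring the projection view of Eq. 3), while the W- and V-steps are plain least squares. This is our own illustration of the decomposition only; it omits the extra proximal terms and convex-combination rule of Alg. 1 on which the convergence proof relies, and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))     # inputs: d0 x num_samples
Y = rng.normal(size=(5, 50))      # targets
W = rng.normal(size=(30, 20))     # hidden-layer weights
V = rng.normal(size=(5, 30))      # classifier weights
U = np.maximum(0, W @ X)          # lifted hidden activations, U >= 0
gamma = 0.1

for it in range(50):
    # u-step: min_{U>=0} ||Y - V U||^2 + gamma ||U - W X||^2 (projected grad)
    for _ in range(10):
        G = 2 * V.T @ (V @ U - Y) + 2 * gamma * (U - W @ X)
        U = np.maximum(0.0, U - 1e-3 * G)
    # W-step: least squares  min_W ||U - W X||^2
    W = np.linalg.lstsq(X.T, U.T, rcond=None)[0].T
    # V-step: least squares  min_V ||Y - V U||^2
    V = np.linalg.lstsq(U.T, Y.T, rcond=None)[0].T
```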
In this paper we focus on the theoretical development of the algorithm, and leave fast implementations to future work.

Algorithm 1: Block Coordinate Descent (BCD) Algorithm for Training DNNs

  Input: training data {(x_i, y_i)} and regularization parameters {γ_n}
  Output: network weights W̃
  Randomly initialize Ũ^(0) ∈ U, V^(0) ∈ V, W̃^(0) ∈ W;
  Set a sequence {θ_t}_{t=1}^∞ with 0 ≤ θ_t ≤ 1, ∀t, such that {Σ_{k=t}^∞ θ_k/(1−θ_k)}_{t=1}^∞ converges to zero, e.g. θ_t = 1/t²;
  for t = 1, 2, ... do
    u_i* ← argmin_{u_i∈U} ℓ(y_i, V^(t−1) P u_i) + (1/2) u_iᵀ Q(W̃^(t−1)) u_i + (1/2)(1−θ_t)² ‖u_i − u_i^(t−1)‖₂², ∀i;
    u_i^(t) ← u_i^(t−1) + θ_t (u_i* − u_i^(t−1)), ∀i;
    V* ← argmin_{V∈V} Σ_i ℓ(y_i, V P u_i^(t)) + (1/2)(1−θ_t)² ‖V − V^(t−1)‖_F²;
    V^(t) ← V^(t−1) + θ_t (V* − V^(t−1));
    W̃* ← argmin_{W̃∈W} (1/2) Σ_i [u_i^(t)]ᵀ Q(W̃) u_i^(t) + (1/2)(1−θ_t)² Σ_n Σ_{m∈M_n} ‖W_{n,m} − W_{n,m}^(t−1)‖_F²;
    W_{n,m}^(t) ← W_{n,m}^(t−1) + θ_t (W_{n,m}* − W_{n,m}^(t−1)), ∀n, ∀m ∈ M_n, W̃^(t) ∈ W;
  end
  return W̃;

3.2 Testing

Given a test sample x and the learned network weights W̃*, V*, based on Eq. 4 the ideal decision function for classification would be y* = argmin_{y∈Y} { min_u f(u, y; V*, W̃*) }. This indicates that for each pair of test data and potential label we would have to solve an optimization problem, leading to an unaffordably high computational complexity that prevents us from using it.

Recall that our goal is to train feed-forward DNNs using the BCD algorithm in Alg. 1. Considering this, we utilize the network weights W̃* to construct the network for extracting deep features. Since these features are the approximation of Ũ* in Eq. 4 (in fact a feasible solution of the extreme case where γ_n = +∞, ∀n), the learned classifier V* can never be reused at test time. Therefore, we retain the architecture and weights of the trained network and replace the classification layer (i.e. the last layer with weights V) with a linear support vector machine (SVM).

3.3 Experiments
3.3.1 MNIST Demonstration

To demonstrate the effectiveness and efficiency of our BCD based algorithm in Alg. 1, we conduct comprehensive experiments on the MNIST [26] dataset, using its 28 × 28 = 784 raw pixels as input features. We refer to our algorithm for learning dense networks as "BCD" and to that for learning sparse networks as "BCD-S", respectively. For sparse learning, we define the convex set W = {W | ‖W_k‖₁ ≤ 1, ∀k}, where W_k denotes the k-th row of matrix W and ‖·‖₁ denotes the ℓ₁ norm of a vector. All comparisons are performed on the same PC. We implement our algorithms using a MATLAB GPU implementation, without optimizing the code. We compare our algorithms with the six SGD based solvers in Caffe [20], i.e. SGD [5], AdaDelta [40], AdaGrad [12], Adam [21], Nesterov [33], RMSProp [35], which are coded in Python.

[Figure 2: The network architecture for the algorithm/solver comparison: input x_i, hidden outputs u_{i,1}, u_{i,2}, u_{i,3} connected by weight matrices W_{1,0}, W_{2,1}, W_{3,2}, classifier V, and output y_i.]

The network architecture that we implemented is illustrated in Fig. 2. This network has three hidden layers (with ReLU) with 784 nodes per layer, four FC layers, and three skip layers inside. Therefore, the mapping function from input x_i to output y_i defined by the network is: f(x_i) = V u_{i,3}, with u_{i,3} = max{0, x_i + u_{i,1} + W_{3,2} u_{i,2}}, u_{i,2} = max{0, x_i + W_{2,1} u_{i,1}}, u_{i,1} = max{0, W_{1,0} x_i}. For simplicity and without loss of generality, we use MSE as the loss function, and we learn the network parameters using the different solvers with the same inputs and the same random initial weights for each FC layer.
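The outer loop of Alg. 1 reduces to three proximal convex solves followed by convex-combination updates. A minimal sketch of one iteration, with the three sub-problem solvers abstracted as hypothetical black boxes (`solve_u`, `solve_V`, `solve_W` are placeholders, not names from the paper):

```python
def bcd_step(t, state, solve_u, solve_V, solve_W):
    """One outer iteration of Alg. 1 (sketch, NumPy arrays assumed).

    state = (u, V, W) holds the previous iterates; each solve_* is a
    black-box convex solver for its sub-problem, already including the
    proximal term 0.5 * (1 - theta)**2 * ||. - previous||^2.
    """
    u_prev, V_prev, W_prev = state
    theta = 1.0 / t**2          # step sequence from Alg. 1, theta_t = 1/t^2

    u_star = solve_u(u_prev, V_prev, W_prev, theta)
    u_new = u_prev + theta * (u_star - u_prev)      # convex combination

    V_star = solve_V(u_new, V_prev, theta)
    V_new = V_prev + theta * (V_star - V_prev)

    W_star = solve_W(u_new, W_prev, theta)
    W_new = W_prev + theta * (W_star - W_prev)
    return (u_new, V_new, W_new)
```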
Without fine-tuning the regularization parameters, we simply set γ_n = 0.1, ∀n in Eq. 4 for both the BCD and BCD-S algorithms. For the Caffe solvers, we modify the MNIST demo code in Caffe and run the comparison, carefully tuning the parameters to achieve the best performance that we can. We report the results within 100 epochs, averaged over three trials, because at this point the training of all the methods appears to have converged. For all competing algorithms, in each epoch the entire training data is passed through once to update the parameters. Therefore, for our algorithms each epoch is equivalent to one iteration, and there are 100 iterations in total.

[Figure 3: (a) Illustration of convergence for BCD and BCD-S (training objective vs. epochs). (b) Test error comparison across AdaDelta, AdaGrad, Adam, Nesterov, RMSProp, SGD, BCD, and BCD-S. (c) Relative running time comparison. (d) Sparseness comparison (percentage of nonzero weights per FC layer) for BCD and BCD-S.]

Convergence: Fig. 3(a) shows how the training objective changes with the number of epochs for BCD and BCD-S, respectively. Both curves decrease monotonically and eventually flatten out, indicating that both algorithms converge. BCD-S converges much faster than BCD, but its objective is higher than BCD's. This is because BCD-S learns sparse models that may not fit the data as well as the dense models learned by BCD.

Testing Error: As mentioned in Sec. 3.2, here we use linear SVMs on the last-layer hidden features extracted from the training data to retrain the classifier. Based on the network in Fig. 2, the feature extraction function is u_{i,3} = max{0, x_i + max{0, W_{1,0} x_i} + W_{3,2} max{0, x_i + W_{2,1} max{0, W_{1,0} x_i}}}. For a fair comparison, we retrain the classifiers for all the algorithms, and summarize the test-time results after 100 epochs in Fig. 3(b). Our BCD algorithm, which learns dense architectures like the SGD based solvers, performs best, while our BCD-S algorithm still works better than the SGD competitors, although it learns much sparser networks. These results are consistent with the training objectives in Fig. 3(a).

Computational Time: We compare the training time in Fig. 3(c). Our BCD implementation is significantly faster than the Caffe solvers; for instance, BCD achieves about a 2.5x speed-up over the competitors, while achieving the best classification performance at test time.

Sparseness: To compare the weights of the dense and sparse networks learned by BCD and BCD-S, respectively, we compare the percentage of nonzero weights in each FC layer, shown in Fig. 3(d). Except for the last FC layer (corresponding to the parameter V, i.e. the classifier), BCD-S has the ability to learn much sparser networks for deep feature extraction. In our case BCD-S learns a network with 2.42% nonzero weights² on average, with classification accuracy 1.34% lower than that of BCD, which learns a network with 97.15% nonzero weights. Potentially this ability could be very useful in scenarios such as embedded systems, where sparse networks are desired.
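The paper does not spell out how the row-wise ℓ₁ constraint W = {W | ‖W_k‖₁ ≤ 1, ∀k} is enforced inside the W-sub-problem; one standard option, sketched below purely for illustration, is a Euclidean projection onto the ℓ₁ ball using the sorting-based method of Duchi et al. (2008).

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {w : ||w||_1 <= radius}
    (sorting-based method of Duchi et al., 2008)."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                   # magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - radius) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def project_rows(W, radius=1.0):
    """Project each row of W onto the l1 ball, so that ||W_k||_1 <= radius."""
    return np.vstack([project_l1_ball(row, radius) for row in W])
```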
3.3.2 Supervised Hashing

To further demonstrate the usage of our approach, we compare with [41]³, the state of the art in the literature, on the application of supervised hashing. [41] proposed an ADMM based optimization algorithm to train DNNs with a relaxed objective that is closely related to ours. We train the same DNN on MNIST as used in [41], i.e. with 48 hidden layers and 256 nodes per layer that are sequentially and fully connected (see [41] for more details on the network). Using the same image features, we consistently observe marginal improvement over the results (i.e. precision, recall, mAP) reported in [41]. However, on the same PC we can finish training within 1 hour with our implementation, while the MATLAB code for [41] needs about 9 hours of training. Similar observations can be made on CIFAR-10 as used in [41], with a network with 16 hidden layers and 1024 nodes per layer.

² Since we retrain the classifiers after all, here we do not take the nonzeros in the last FC layer into account.
³ MATLAB code is available at https://zimingzhang.wordpress.com/publications/.

4 Convergence Analysis
4.1 Preliminaries

Definition 2 (Lipschitz Continuity [13]). We say that a function f is Lipschitz continuous with Lipschitz constant L_f on X if there is a (necessarily nonnegative) constant L_f such that |f(x₁) − f(x₂)| ≤ L_f |x₁ − x₂|, ∀x₁, x₂ ∈ X.

Definition 3 (Global Convergence [24]). Let X be a set and x₀ ∈ X a given point. Then an algorithm A with initial point x₀ is a point-to-set map A : X → P(X) which generates a sequence {x_k}_{k=1}^∞ via the rule x_{k+1} ∈ A(x_k), k = 0, 1, .... A is said to be globally convergent if, for any chosen initial point x₀, the sequence {x_k}_{k=0}^∞ generated by x_{k+1} ∈ A(x_k) (or a subsequence thereof) converges to a point for which a necessary condition of optimality holds.

Definition 4 (R-linear Convergence Rate [30]). Let {x_k} be a sequence in R^n that converges to x*. We say that the convergence is R-linear if there is a sequence of nonnegative scalars {v_k} such that ‖x_k − x*‖ ≤ v_k, ∀k, and {v_k} converges Q-linearly to zero.

Lemma 1 (3-Point Property [1]). If the function φ(w) is convex and ŵ = argmin_{w∈R^d} φ(w) + (1/2)‖w − w₀‖₂², then for any w ∈ R^d,

φ(ŵ) + (1/2)‖ŵ − w₀‖₂² ≤ φ(w) + (1/2)‖w − w₀‖₂² − (1/2)‖w − ŵ‖₂².

4.2 Theoretical Results

Definition 5 (Assumptions on f in Eq. 4). Let f₁(Ũ) = f(Ũ, ·, ·), f₂(V) = f(·, V, ·), f₃(W̃) = f(·, ·, W̃) be the objectives of the three sub-problems, respectively. Then we assume that f is lower-bounded and that f₁, f₂, f₃ are Lipschitz continuous with constants L_{f₁}, L_{f₂}, L_{f₃}, respectively.

Proposition 3. Let x, x̄ ∈ X and y = (1 − θ)x + θx̄. Then (1/2)‖x̄ − y‖₂² = (1/2)(1 − θ)²‖x̄ − x‖₂².

Lemma 2. Let X be a nonempty closed convex set, let φ : X → R be convex and Lipschitz continuous with constant L, and let the scalar θ satisfy 0 ≤ θ ≤ 1. Suppose that ∀x ∈ X, x̄ = argmin_{z∈X} φ(z) + (1/2)‖z − z₀‖₂² with z₀ = y = (1 − θ)x + θx̄. Then we have

((1−θ)/θ) ‖y − x‖₂² ≤ φ(x) − φ(y) ≤ L‖y − x‖₂  ⟹  ‖y − x‖₂ ≤ Lθ/(1−θ).

Proof. Based on the convexity of φ, Prop. 3, and Lemma 1, we have

φ(x) − φ(y) ≥ φ(x) − [(1−θ)φ(x) + θφ(x̄)] = θ[φ(x) − φ(x̄)]
≥ θ[(1/2)‖x − x̄‖₂² + (1/2)‖x̄ − z₀‖₂² − (1/2)‖x − z₀‖₂²] = θ(1−θ)‖x − x̄‖₂² = ((1−θ)/θ)‖y − x‖₂²,

where ‖y − x‖₂² = 0 if and only if x̄ = x (equivalently φ(x) = φ(y)); otherwise ‖y − x‖₂² is bounded away from 0 provided that θ ≠ 1. Based on Def. 2, we have φ(x) − φ(y) ≤ L‖y − x‖₂. ∎
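Lemma 1 is easy to verify numerically for a convex quadratic, where the proximal minimizer has a closed form. A minimal check, assuming an arbitrary random positive-definite quadratic for φ:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d)); A = A @ A.T + np.eye(d)   # PD -> convex phi
b = rng.standard_normal(d)
phi = lambda w: 0.5 * w @ A @ w + b @ w                     # convex quadratic

w0 = rng.standard_normal(d)
# argmin_w phi(w) + 0.5||w - w0||^2 solves (A + I) w = w0 - b
w_hat = np.linalg.solve(A + np.eye(d), w0 - b)

for _ in range(1000):  # check the 3-point inequality at random points w
    w = rng.standard_normal(d)
    lhs = phi(w_hat) + 0.5 * np.sum((w_hat - w0)**2) + 0.5 * np.sum((w - w_hat)**2)
    rhs = phi(w) + 0.5 * np.sum((w - w0)**2)
    assert lhs <= rhs + 1e-9
```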
Theorem 1. Let {(Ũ^(t), V^(t), W̃^(t))}_{t=1}^∞ ⊆ U × V × W be an arbitrary sequence from a closed convex set that is generated by Alg. 1. Suppose that 0 ≤ θ_t ≤ 1, ∀t, and that the sequence {Σ_{k=t}^∞ θ_k/(1−θ_k)}_{t=1}^∞ converges to zero. Then:

1. (Ũ^(∞), V^(∞), W̃^(∞)) is a stationary point;
2. {(Ũ^(t), V^(t), W̃^(t))}_{t=1}^∞ converges to (Ũ^(∞), V^(∞), W̃^(∞)) globally with an R-linear convergence rate.

Proof. 1. Suppose that for Ũ^(∞) there exists a ΔŨ ≠ 0 such that f₁(Ũ^(∞) + ΔŨ) = f₁(Ũ^(∞)) (otherwise this conflicts with Ũ^(∞) being the limit point). From Lemma 2, f₁(Ũ^(∞) + ΔŨ) = f₁(Ũ^(∞)) is equivalent to Ũ^(∞) + ΔŨ = Ũ^(∞), and thus ΔŨ = 0, which conflicts with the assumption ΔŨ ≠ 0. Therefore there is no direction that can decrease f₁(Ũ^(∞)), i.e. ∇f₁(Ũ^(∞)) = 0. Similarly we have ∇f₂(V^(∞)) = 0 and ∇f₃(W̃^(∞)) = 0. Therefore (Ũ^(∞), V^(∞), W̃^(∞)) is a stationary point.

2. Based on Def. 5 and Lemma 2, we have

sqrt( Σ_{u_{i,n}∈Ũ} ‖u_{i,n}^(t) − u_{i,n}^(∞)‖₂² + ‖V^(t) − V^(∞)‖_F² + Σ_{W_{n,m}∈W̃} ‖W_{n,m}^(t) − W_{n,m}^(∞)‖_F² )
≤ Σ_{u_{i,n}∈Ũ} ‖u_{i,n}^(t) − u_{i,n}^(∞)‖₂ + ‖V^(t) − V^(∞)‖_F + Σ_{W_{n,m}∈W̃} ‖W_{n,m}^(t) − W_{n,m}^(∞)‖_F
≤ Σ_{k=t}^∞ ( Σ_{u_{i,n}∈Ũ} ‖u_{i,n}^(k) − u_{i,n}^(k+1)‖₂ + ‖V^(k) − V^(k+1)‖_F + Σ_{W_{n,m}∈W̃} ‖W_{n,m}^(k) − W_{n,m}^(k+1)‖_F )
≤ Σ_{k=t}^∞ ( L_{f₁} θ_k/(1−θ_k) + L_{f₂} θ_k/(1−θ_k) + L_{f₃} θ_k/(1−θ_k) ) = O( Σ_{k=t}^∞ θ_k/(1−θ_k) ).

By combining this with Def. 3 and Def. 4 we can complete the proof. ∎

Corollary 1. Let θ_t = 1/t^p, ∀t. Then when p > 1, Alg. 1 will converge globally with order one.

Proof.

Σ_{k=t}^∞ θ_k/(1−θ_k) = Σ_{k=t}^∞ 1/(k^p − 1) ≤ ∫_{t^p−1}^∞ (1/x) d((x+1)^{1/p}) = (1/p) ∫_{t^p−1}^∞ (1/x)(x+1)^{1/p−1} dx
≤_{p>1} (1/p) ∫_{t^p−1}^∞ x^{1/p−2} dx = (p−1)^{−1} (t^p − 1)^{1/p−1}.   (6)

Since the sequence {(t^p − 1)^{1/p−1}}_{t=1}^∞, for p > 1, converges to zero sublinearly with order one, combining this with Def. 4 and Thm. 1 completes the proof. ∎

5 Conclusion

In this paper we first propose a novel Tikhonov regularization for training DNNs with ReLU as the activation function. The Tikhonov matrix encodes both the network architecture and its parameterization. With its help we reformulate network training as a block multi-convex minimization problem. Accordingly, we further propose a novel block coordinate descent (BCD) based algorithm, which is proven to converge globally to stationary points with an R-linear convergence rate of order one. Our empirical results suggest that our algorithm does converge, is suitable for learning both dense and sparse networks, and may work better than traditional SGD based deep learning solvers.
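The step-size condition of Thm. 1 and the rate in Eq. 6 can be sanity-checked numerically: with θ_t = 1/t^p and p = 2, the tail sums Σ_{k≥t} θ_k/(1−θ_k) vanish at the same order O(t^{1−p}) as the bound. A small sketch:

```python
import numpy as np

p = 2.0
k = np.arange(2, 10**6, dtype=np.float64)   # start at 2 so theta_k < 1
theta = 1.0 / k**p
terms = theta / (1.0 - theta)               # equals 1 / (k^p - 1)

for t in (10, 100, 1000):
    tail = terms[k >= t].sum()
    bound = (t**p - 1.0) ** (1.0 / p - 1.0) / (p - 1.0)  # Eq. (6) rate
    # both quantities decay like t^(1-p); print to compare the orders
    print(f"t={t:>5}: tail sum = {tail:.3e}, Eq.(6) rate = {bound:.3e}")
```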
References

[1] L. Baldassarre and M. Pontil. Advanced topics in machine learning, part II.5: Proximal methods. University lecture, http://www0.cs.ucl.ac.uk/staff/l.baldassarre/lectures/baldassarre_proximal_methods.pdf.
[2] C. Baldassi, A. Ingrosso, C. Lucibello, L. Saglietti, and R. Zecchina. Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses. Physical Review Letters, 115(12):128101, 2015.
[3] H. H. Bauschke and J. M. Borwein. On projection algorithms for solving convex feasibility problems. SIAM Review, 38(3):367-426, 1996.
[4] Y. Bengio. How auto-encoders could provide credit assignment in deep networks via target propagation. arXiv preprint arXiv:1407.7906, 2014.
[5] L. Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, pages 421-436. Springer, 2012.
[6] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
[7] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[8] P. Chaudhari, A. Choromanska, S. Soatto, and Y. LeCun. Entropy-SGD: Biasing gradient descent into wide valleys. arXiv preprint arXiv:1611.01838, 2016.
[9] P. Chaudhari, A. Oberman, S. Osher, S. Soatto, and G. Carlier. Deep relaxation: partial differential equations for optimizing deep neural networks. arXiv preprint arXiv:1704.04932, 2017.
[10] A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015.
[11] Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, pages 2933-2941, 2014.
[12] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12(Jul):2121-2159, 2011.
[13] K. Eriksson, D. Estep, and C. Johnson. Applied Mathematics: Body and Soul, Vol. I-III. Springer-Verlag, 2003.
[14] Y. Gal and Z. Ghahramani. On modern deep learning and variational inference. In Advances in Approximate Bayesian Inference Workshop, NIPS, 2015.
[15] S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013.
[16] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, pages 249-256, 2010.
[17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.
[18] S. Hochreiter, Y. Bengio, and P. Frasconi. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In J. Kolen and S. Kremer, editors, A Field Guide to Dynamical Recurrent Networks. IEEE Press, 2001.
[19] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[20] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM Multimedia, pages 675-678. ACM, 2014.
[21] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[22] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In NIPS, pages 2575-2583, 2015.
[23] A. Krogh and J. A. Hertz. A simple weight decay can improve generalization. In NIPS, pages 950-957, 1991.
[24] G. R. Lanckriet and B. K. Sriperumbudur. On the convergence of the concave-convex procedure. In NIPS, pages 1759-1767, 2009.
[25] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
[26] Y. LeCun, C. Cortes, and C. J. Burges. The MNIST database of handwritten digits, 1998.
[27] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
[28] Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM, 1994.
[29] R. Nishihara, L. Lessard, B. Recht, A. Packard, and M. I. Jordan. A general analysis of the convergence of ADMM. In ICML, pages 343-352, 2015.
[30] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1st ed. 1999, corr. 2nd printing, Aug. 1999.
[31] M. Razaviyayn, M. Hong, Z.-Q. Luo, and J.-S. Pang. Parallel successive convex approximation for nonsmooth nonconvex optimization. In NIPS, pages 1440-1448, 2014.
[32] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958, 2014.
[33] I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton. On the importance of initialization and momentum in deep learning. In ICML, pages 1139-1147, 2013.
[34] G. Taylor, R. Burmeister, Z. Xu, B. Singh, A. Patel, and T. Goldstein. Training neural networks without gradients: A scalable ADMM approach. In ICML, 2016.
[35] T. Tieleman and G. Hinton. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
[36] Y. Wang, W. Yin, and J. Zeng. Global convergence of ADMM in nonconvex nonsmooth optimization. arXiv preprint arXiv:1511.06324, 2015.
[37] R. A. Willoughby. Solutions of Ill-Posed Problems (A. N. Tikhonov and V. Y. Arsenin). SIAM Review, 21(2):266, 1979.
[38] Y. Xu and W. Yin. A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM Journal on Imaging Sciences, 6(3):1758-1789, 2013.
[39] Y. Xu and W. Yin. A globally convergent algorithm for nonconvex optimization based on block coordinate update. arXiv preprint arXiv:1410.1386, 2014.
[40] M. D. Zeiler. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
[41] Z. Zhang, Y. Chen, and V. Saligrama. Efficient training of very deep neural networks for supervised hashing. In CVPR, June 2016.
A Formal Model of the Insect Olfactory Macroglomerulus: Simulations and Analytical Results

Christiane Linster, David Marsan
ESPCI, Laboratoire d'Electronique, 10, Rue Vauquelin, 75005 Paris, France

Claudine Masson
Laboratoire de Neurobiologie Comparée des Invertébrés, INRA/CNRS (URA 1190), 91140 Bures sur Yvette, France

Michel Kerszberg
Institut Pasteur, CNRS (URA 1284), Neurobiologie Moléculaire, 25, Rue du Dr. Roux, 75015 Paris, France

Gerard Dreyfus, Leon Personnaz
ESPCI, Laboratoire d'Electronique, 10, Rue Vauquelin, 75005 Paris, France

Abstract

It is known from biological data that the response patterns of interneurons in the olfactory macroglomerulus (MGC) of insects are of central importance for the coding of the olfactory signal. We propose an analytically tractable model of the MGC which allows us to relate the distribution of response patterns to the architecture of the network.

1. Introduction

The processing of pheromone odors in the antennal lobe of several insect species relies on a number of response patterns of the antennal lobe neurons in reaction to stimulation with pheromone components and blends. Antennal lobe interneurons receive input from different receptor types, and relay this input to antennal lobe projection neurons via excitatory as well as inhibitory synapses. The diversity of the responses of the interneurons and projection neurons, as well as the long response latencies of these neurons to pheromone stimulation or to electrical stimulation of the antenna, suggest a polysynaptic pathway between the receptor neurons and these projection neurons (for a review see (Kaissling, 1990; Masson and Mustaparta, 1990)).

[Figure 1: Response types of antennal lobe neurons to single-odor stimuli (BAL, C15) and the blend. I. Pheromone generalists: (A) cannot discriminate single odors and cannot code temporal changes (1. excited type, 2. inhibited type, each with sustained responses). II. Pheromone specialists: (A) can discriminate single odors but cannot code temporal changes; (B) can discriminate single odors and can code temporal changes. With courtesy of John Hildebrand, by permission from Oxford University Press, from: Christensen, Mustaparta and Hildebrand: Discrimination of sex pheromone blends in the olfactory system of the moth, Chemical Senses, Vol 14, no 3, pp 463-477, 1989.]
In the MGC of Manduca sexta, antennal lobe interneurons respond in various ways to antennal stimulation with single pheromone components or the blend. Pheromone generalists respond by either excitation or inhibition to both components and the blend: they cannot discriminate the components. Pheromone specialists respond (i) to one component but not to the other, by either excitation or inhibition, or (ii) with different response patterns to the presence of the single components or the blend, namely with excitation to one component, with inhibition to the other component, and with a mixed response to the blend. These neurons can also follow pulsed stimulation up to a cut-off frequency (Figure 1).

A model of the MGC (Linster et al., 1993), based on biological data (anatomical and physiological), has demonstrated that the full diversity of response patterns can be reproduced with a random architecture using very simple ingredients such as spiking neurons governed by a first order differential equation, and synapses modeled as simple delay lines. In a model with uniform distributions of afferent, inhibitory and excitatory synapses, the distribution of the response patterns depends on the following network parameters: the percentage of afferent, inhibitory and excitatory synapses; the ratio of the average excitation of any interneuron to its spiking threshold; and the amount of feedback in the network. In the present paper, we show that the behavior of such a model can be described by a statistical approach, allowing us to search through parameter space and to make predictions about the biological system without exhaustive simulations. We compare the results obtained by simulation of the network model to the results obtained analytically by the statistical approach, and we show that the approximations made for the statistical description are valid.

2. Simulations and comparison to biological data

In (Linster et al., 1993), we used a simple neuron model: all neurons are spiking neurons, governed by a first order differential equation, with a membrane time constant and a probabilistic threshold θ. The time constant represents the decay time of the membrane potential of the neuron. The output of each neuron consists of an all-or-none action potential with unit amplitude that is generated when the membrane potential of the cell crosses a threshold, whose cumulative distribution function is a continuous and bounded probabilistic function of the membrane potential. All sources of delay and signal transformation from the presynaptic neuron to its postsynaptic site are modeled by a synaptic time delay. These delays are chosen from a random (Gaussian) distribution, with a longer mean value for inhibitory synapses than for excitatory synapses. We model two main populations of olfactory neurons: receptor neurons, which are sensitive to the main pheromone component (called A) or to the minor pheromone component (called B), project uniformly onto the network of interneurons; two types of interneurons (excitatory and inhibitory) exist, and each interneuron is allowed to make one synapse with any other interneuron. The model exhibits several behaviors that agree with biological data, and it allows us to state several predictive hypotheses about the processing of the pheromone blend. We observe two broad classes of interneurons: selective (to one odor component) and non-selective neurons (compare Figure 1). Selective neurons and non-selective neurons exhibit a variety of response patterns, which fall into three classes: inhibitory, excitatory and mixed (Figure 2).
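For illustration, a discrete-time sketch of this neuron model (leaky integration, probabilistic threshold, delay-line synapses) might look as follows; the parameter names and the logistic form chosen for the threshold's cumulative distribution are our own assumptions, made to match the verbal description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(W, delays, inputs, tau=25.0, theta=4.0, dt=1.0, beta=4.0):
    """Discrete-time sketch of the MGC interneuron model.

    W[i, j]      : synaptic weight from neuron j to neuron i (+1, -1 or 0)
    delays[i, j] : synaptic delay in time steps
    inputs[t, i] : afferent (receptor) drive to neuron i at step t
    """
    T, N = inputs.shape
    v = np.zeros(N)                       # membrane potentials
    spikes = np.zeros((T, N), dtype=bool)
    for t in range(T):
        syn = np.zeros(N)
        for i in range(N):                # collect delayed presynaptic spikes
            for j in range(N):
                td = t - int(delays[i, j])
                if W[i, j] != 0 and td >= 0 and spikes[td, j]:
                    syn[i] += W[i, j]
        v += dt / tau * (-v) + syn + inputs[t]   # leaky integration
        p = 1.0 / (1.0 + np.exp(-beta * (v - theta)))  # probabilistic threshold
        spikes[t] = rng.random(N) < p
        v[spikes[t]] = 0.0                # reset after a spike
    return spikes
```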
?',IIIIIIII'''QII,'', "III, I Membrane potential Stimulus A ,r-------., ,,....-----"""'\ ,,------"""', Stimulus B '~----"""'\ Mixed responses ....-- ......... Ib ll ??? , ,111M " " IU! ,'d, ", ", "b I, ,." -------.~ " .. , 11" 111111 , ,""I j 111," 1".' I.h IgUIUIIi. I dil'"'' I I rI ........ 500 ms - ,,---------.., ,,---------.., ,,-----------, Oscillatory responses 4 ~"I I I . " I . ! ,., I. II." II II" , ?? I.h ? ? V\fVrfl.]"'" ~ I.! I.,!" '" ,I,U'!, !., ,. ! . " . , " ,,',,! II" II ",! ''tMN\ ", , -\JWtJV'' ' "'J"", I ,,-------.., ,,-------, ,,-----"""', Figure 2: Response patterns of interneurons in the model presented, in response to stimulation with single components A and B, and with a blend with equal component concentrations. Receptor neurons fIre at maximum frequency during the stimulations. The interneuron in the upper row is inhibited by stimulus A, excited by stimulus B, and has a mixed response (excitation followed by inhibition) to the blend: in reference to Figure 1, this is a pheromone specialist receiving mixed input from both types of receptor neurons. These types of simple and mixed responses can be observed in the model at low connectivity, where the average excitation received by an interneuron is low compared to its spiking threshold. The neuron in the middle row responds with similar mixed responses to stimuli A, Band A+B. The neuron in the lower row responds to all stimuli with the same oscillatory response, here the average excitation received by an interneuron approaches or exceeds the spiking threshold of the neurons. Network parameters: 15 receptor neurons; 35 interneurons; 40% excitatory interneurons; 60% inhibitory interneurons; afferent connectivity 10%; membrane time constant 25 ms; mean inhibitory synaptic delays 100 ms; mean excitatory synaptic delays 25 ms, spiking threshold 4.0, synaptic weights +1 and -1. 1025 1026 Linster, Marsan, Masson, Kerszberg, Dreyfus, and Personnaz In our model, as well as in biological systems (Christensen and Hildebrand 1988, Christensen et ai., 1989) we observe a number of local interneurons that cannot follow pulsed stimulation beyond a neuron-specific cut-off frequency. This frequency depends on the neuron response pattern and on the duration of the interstimulus interval. Therefore, the type of response pattern is of central importance for the coding of the olfactory signal. Thus, in order to be able to relate the coding capabilities of a (model or biological) network to its architecture, we have investigated the distribution of response patterns both analytically and by simulations. 3. Analytical approach In order to investigate these questions in a more rigorous way, some of us (C.L., D.M., G.D., L.P.) have designed a simplified, analytically tractable model. We define two layers of interneurons: those which receive direct afferent input from the receptor neurons (layer 1), and those which receive only input from other interneurons (layer 2). In order to predict the response pattern of any interneuron as a function of the network parameters, we make the following assumptions: (i) statistically, all interneurons within a given layer receive the same synaptic input, (ii) the effect of feedback loops from layer 2 can be neglected, (iii) the response patterns have the same distribution for stimulations either by the blend or by pure components. Assumption (i) is correct because of the uniform distribution of synapses in the network of interneurons. 
Assumption (ii) is valid at low connectivity: if the average amount of excitation received by an interneuron is low as compared to its spiking threshold, its firing probability is low; therefore, the effect of the excitation from the receptors is vanishingly small beyond two interneurons: we thus neglect the effect of signals sent from layer 2. Thus, feedback is present within layer 1, and layer 2 receives only feed forward connections. Assumption (iii) is plausible if we suppose that the natural pheromone blend is more relevant for the system than the single components of the blend. We further assume in the analytical approach (as in the simulations) that the synaptic delays are longer on the average for inhibitory synapses than for excitatory synapses . An interneuron can thus respond with four types of patterns: non-response, which means that it does not have a presynaptic neuron (this response pattern can only occur in layer 2, at low connectivity); excitation, meaning that an interneuron receives only afferent input from receptor neurons or from excitatory interneurons; inhibition, meaning that an interneuron receives only input from inhibitory interneurons (this can occur in layer 2 only); and mixed responses, covering all other combinations of presynaptic input. We consider a network of N + N r neurons, N (number of interneurons) and N r (number of receptor neurons) being random variables, N + Nr being fixed. We define the probability ni that a neuron is an inhibitory interneuron, and the probability ne that it is an excitatory interneuron. Any interneuron has a probability c to make one synapse (with synaptic weight +1 or -1) with any other interneuron and a probability (1 - c) not to make a synapse with this interneuron; Cr is the afferent connectivity: any receptor neuron has a probability Cr to connect once to any interneuron, and a probability (1 - cr) not to connect to this interneuron. Then na = 1 - (1 - cr)Nr is the probability that an interneuron belongs to layer 1, and the number of interneurons in layer I obeys a binomial distribution with expectation value N nQ and variance N na (1 - na). In the following, the fixed number of interneurons in layer 1 will be taken equal to its expectation value. Similarly, the number of interneurons in layer 2 is taken to be N (1 - na). A Formal Model of the Insect Olfactory Macroglomerulus: Simulations and Analytical Results Because of the assumptions made above, in both layers, we take into account for each interneuron the N na c synapses from presynaptic neurons of layerl. In layer 1, these neurons respond with excitatory or mixed responses. P1= nena N C is the probability that an interneuron in layer 1 responds with an excitation, and p~= 1 - neflaN c is the probability that an interneuron in layer 1 receives mixed synaptic input. In layer 2, we have to consider two cases: (i) at low connectivity, if N c na < 1, P6 1 - N c na is the probability that an interneuron of layer 2 does not receive a synapse, thus does not respond to stimulation, P; N c nane is the probability that a neuron in layer 2 responds with excitation, p? =N c nam is the probability that an interneuron responds with inhibition; (ii) at higher connectivity, N c na > 1, P6 =0, P; =ne naN c and pl =m naN c. In both cases (i) and (ii), the probability that an interneuron in layer 2 has a mixed response pattern is P; = 1 - P6 -Pe - Pl. 
Thus, an interneuron in the model responds with excitation with probability p_e = n_a p_e^1 + (1 − n_a) p_e^2, with inhibition with probability p_i = n_a p_i^1 + (1 − n_a) p_i^2 (where p_i^1 = 0, since layer 1 receives afferent excitation), and has a mixed response with probability p_m = n_a p_m^1 + (1 − n_a) p_m^2.

[Figure 4: Analytically derived distribution of the response patterns, plotted separately for layer 1, layer 2, and layers 1 & 2 combined, in a typical network (35 interneurons, 15 receptor neurons, 40% excitation, 60% inhibition, spiking threshold 4.0). The curves show the percentage of interneurons in the model that respond with a given pattern, as a function of the connectivity c. In this case, the average excitation an interneuron receives from other interneurons is 3.15 at c = 0.3.]

Figure 4 shows the distribution of the response patterns computed analytically for a typical set of parameters. In order to compare the computed pattern distributions with the pattern distributions obtained from simulations of the model, we designed an automatic classifier for the response patterns, based on the perceptron learning rule and the pocket algorithm (Gallant, 1986). The classifier is trained to classify the responses of
In the present paper, we show that, under some constraints, an analytical model can predict the existence and the distribution of these response patterns. We further show that the transition between non-oscillatory and oscillatory regimes is governed by a single parameter (ne c / E?. It is thus possible, to explore the parameter space without exhaustive simulations, and to relate the coding capabilities of a model or biological network to its architecture. Acknowledgements This work was supported in part by a grant from Ministere de la Recherche et de la Technologie (Sciences de la Cognition). C. Linster has been supported by a research grant (BFR91/051) from the Ministere des Affaires Culturelles, Grand-Duche de Luxembourg. References Boeckh, J. and Ernst, K.D. (1987). Contribution of single unit analysis in insects to an understanding of olfactory function. 1. Compo Physiolo. AI61:549-565. Burrows, M., Boeckh, J., Esslen, J. (1982). Physiological and Morphological Properties of Interneurons in the Deutocerebrum of Male Cockroaches which respond to Female Pheromone. 1. Compo Physiolo. 145:447-457. Christensen, T.A., Hildebrand, J.G. (1987). Functions, Organization, and Physiology of the Olfactory Pathways in the Lepidoteran Brain. In Arthropod Brain: its Evolution, Development, Structure and Functions, A.P. GuPta, (ed), John Wiley & Sons. Christensen, T.A., Hildebrand, J.G. (1988). Frequency coding by central olfactory neurons in the spinx moth Manduca sexta. Chemical Senses 13 (1): 123-130. Christensen, T.A., Mustaparta, H., Hildebrand, J.G. (1989). Discrimination of sex pheromone blends in the olfactory system of the moth. Chemical Senses 14 (3):463-477. Kaissling, K-E., Kramer, E. (1990). Sensory basis of pheromone-mediated orientation in moths. Verh. Dtsch. Zoolo. Ges. 83:109-131. Linster, C., Masson, C., Kerszberg, M., Personnaz, L., Dreyfus, G. (1993). Computational Diversity in a Formal Model of the Insect Olfactory Macroglomerulus. Neural Computation 5:239-252. Masson, C., Mustaparta, H. (1990). Chemical Information Processing in the Olfactory System of Insects. Physiol. Reviews 70 (1): 199-245. 1029
Train longer, generalize better: closing the generalization gap in large batch training of neural networks

Elad Hoffer¹*, Itay Hubara¹*, Daniel Soudry²
¹ Technion - Israel Institute of Technology, Haifa, Israel
² Columbia University, New York, New York, USA
{elad.hoffer, itayhubara, daniel.soudry}@gmail.com
* Equal contribution

Abstract

Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. It has been observed that when using large batch sizes there is a persistent degradation in generalization performance, known as the "generalization gap" phenomenon. Identifying the origin of this gap and closing it had remained an open problem.

Contributions: We examine the initial high learning rate training phase. We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a "random walk on a random landscape" statistical model, which is known to exhibit similar "ultra-slow" diffusion behavior. Following this hypothesis, we conducted experiments showing empirically that the "generalization gap" stems from the relatively small number of updates rather than from the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques for training models in the large-batch regime and present a novel algorithm named "Ghost Batch Normalization", which enables a significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning the training of deep models and suggest that they may not be optimal for achieving good generalization.

1 Introduction

For quite a few years, deep neural networks (DNNs) have persistently enabled significant improvements in many application domains, such as object recognition from images (He et al., 2016), speech recognition (Amodei et al., 2015), natural language processing (Luong et al., 2015) and computer game control using reinforcement learning (Silver et al., 2016; Mnih et al., 2015). The optimization method of choice for training highly complex and non-convex DNNs is typically stochastic gradient descent (SGD) or some variant of it. Since SGD, at best, finds a local minimum of the non-convex objective function, substantial research efforts are invested in explaining DNNs' ground breaking results. It has been argued that saddle points can be avoided (Ge et al., 2015) and that "bad" local minima in the training error vanish exponentially (Dauphin et al., 2014; Choromanska et al., 2015; Soudry & Hoffer, 2017). However, it is still unclear why these complex models tend to generalize well to unseen data despite being heavily over-parameterized (Zhang et al., 2017).

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

A specific aspect of generalization has recently attracted much interest. Keskar et al. (2017) focused on a long observed phenomenon (LeCun et al., 1998a): when a large batch size is used while training DNNs, the trained models appear to generalize less well. This remained true even when the models were trained "without any budget or limits, until the loss function ceased to improve" (Keskar et al., 2017).
This decrease in performance has been named the "generalization gap". Understanding the origin of the generalization gap, and moreover, finding ways to decrease it, may have significant practical importance: training with a large batch size immediately increases parallelization, and thus has the potential to decrease learning time. Many efforts have been made to parallelize SGD for deep learning (Dean et al., 2012; Das et al., 2016; Zhang et al., 2015), yet the speed-ups and scale-out are still limited by the batch size. In this study we make a first attempt to tackle this issue. First,

- We propose that the initial learning phase can be described using a high-dimensional "random walk on a random potential" process, with an "ultra-slow" logarithmic increase in the distance of the weights from their initialization, as we observe empirically.

Inspired by this hypothesis, we find that

- By simply adjusting the learning rate and batch normalization, the generalization gap can be significantly decreased (for example, from 5% to 1%-2%).
- In contrast to common practices (Montavon et al., 2012) and theoretical recommendations (Hardt et al., 2016), generalization keeps improving for a long time at the initial high learning rate, even without any observable changes in the training or validation errors. However, this improvement seems to be related to the distance of the weights from their initialization.
- There is no inherent "generalization gap": large-batch training can generalize as well as small-batch training by adapting the number of iterations.

2 Training with a large batch

Training method. A common practice in training deep neural networks is to follow an optimization "regime" in which the objective is minimized using gradient steps with a fixed learning rate and a momentum term (Sutskever et al., 2013). The learning rate is annealed over time, usually with an exponential decrease every few epochs of training data. An alternative to this regime is to use an adaptive per-parameter learning method such as Adam (Kingma & Ba, 2014), RMSProp (Dauphin et al.) or AdaGrad (Duchi et al., 2011). These methods are known to benefit the convergence rate of SGD based optimization. Yet many current studies still use simple variants of SGD (Ruder, 2016) for all or part of the optimization process (Wu et al., 2016), due to the tendency of these methods to converge to a lower test error and better generalization. Thus, we focused on momentum SGD with a fixed learning rate that decreases exponentially every few epochs, similarly to the regime employed by He et al. (2016). The convergence of SGD is also known to be affected by the batch size (Li et al., 2014), but in this work we focus on generalization. Most of our results were obtained with the Resnet44 topology introduced by He et al. (2016); we strengthen our findings with additional empirical results in Section 6.

Empirical observations of previous work. Previous work by Keskar et al. (2017) studied the performance and properties of models trained with relatively large batches, and reported the following observations:

- Training models with a large batch size increases the generalization error (see Figure 1).
- This "generalization gap" seemed to remain even when the models were trained without limits, until the loss function ceased to improve.

[Figure 1: Impact of batch size on classification error: (a) training error, (b) validation error.]
- Low generalization was correlated with "sharp" minima [2] (strong positive curvature), while good generalization was correlated with "flat" minima (weak positive curvature).
- Small-batch regimes were briefly noted to produce weights that are farther away from the initial point, in comparison with the weights produced in a large-batch regime.

Figure 1: Impact of batch size on classification error. (a) Training error. (b) Validation error.

Their hypothesis was that a large estimation noise (originated by the use of a mini-batch rather than the full batch) in small mini-batches encourages the weights to exit the basins of attraction of sharp minima, and move towards flatter minima which have better generalization. In the next section we provide an analysis that suggests a somewhat different explanation.

3 Theoretical analysis

Notation. In this paper we examine Stochastic Gradient Descent (SGD) based training of a Deep Neural Network (DNN). The DNN is trained on a finite training set of N samples. We define w as the vector of the neural network parameters, and L_n(w) as the loss function on sample n. We find w by minimizing the training loss

    L(w) := (1/N) Σ_{n=1}^N L_n(w)

using SGD. Minimizing L(w) requires an estimate of the gradient of the negative loss

    g := (1/N) Σ_{n=1}^N g_n := -(1/N) Σ_{n=1}^N ∇L_n(w) ,

where g is the true gradient, and g_n is the per-sample gradient. During training we increment the parameter vector w using only the mean gradient ĝ computed on some mini-batch B, a set of M randomly selected sample indices:

    ĝ := (1/M) Σ_{n∈B} g_n .

In order to gain a better insight into the optimization process and the empirical results, we first examine simple SGD training, in which the weights at update step t are incremented according to the mini-batch gradient, Δw_t = η ĝ_t. With respect to the randomness of SGD, E ĝ_t = g = -∇L(w_t), and the increments are uncorrelated between different mini-batches [3]. For physical intuition, one can think of the weight vector w_t as a particle performing a random walk on the loss ("potential") landscape L(w_t). Thus, for example, adding a momentum term to the increment is similar to adding inertia to the particle.

[2] It was later pointed out (Dinh et al., 2017) that certain "degenerate" directions, in which the parameters can be changed without affecting the loss, must be excluded from this explanation. For example, for any c > 0 and any neuron, we can multiply all input weights by c and divide the output weights by c: this does not affect the loss, but can generate arbitrarily strong positive curvature.
[3] Either exactly (with replacement) or approximately (without replacement): see appendix section A.

Motivation. In complex systems (such as DNNs) where we do not know the exact shape of the loss, statistical physics models commonly assume a simpler description of the potential as a random process. For example, Dauphin et al. (2014) explained the observation that local minima tend to have low error using an analogy between L(w), the DNN loss surface, and the high-dimensional Gaussian random field analyzed in Bray & Dean (2007), which has zero mean and auto-covariance

    E[L(w_1) L(w_2)] = f(||w_1 - w_2||^2)    (1)

for some function f, where the expectation now is over the randomness of the loss. This analogy resulted in the hypothesis that in DNNs, local minima with high loss are indeed exponentially vanishing, as in Bray & Dean (2007). Only recently, similar results are starting to be proved for realistic neural network models (Soudry & Hoffer, 2017). Thus, a similar statistical model of the loss might also give useful insights for our empirical observations.
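To make the notation concrete, the following is a minimal NumPy sketch of one SGD step under this setup. The per-sample gradients, the batch size M = 128, and the learning rate here are illustrative assumptions, not values from the experiments:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-sample gradients g_n of the *negative* loss for
    # N = 1000 samples of a 10-parameter model (from backprop in practice).
    g_n = rng.normal(size=(1000, 10))

    def minibatch_gradient(per_sample_grads, batch_idx):
        # g_hat = (1/M) * sum_{n in B} g_n: the unbiased estimate of g.
        return per_sample_grads[batch_idx].mean(axis=0)

    B = rng.choice(1000, size=128, replace=False)   # mini-batch B, M = 128
    w = np.zeros(10)
    eta = 0.1                                       # learning rate (illustrative)
    # delta_w_t = eta * g_hat_t; since g_n is minus the loss gradient,
    # adding the increment is a descent step on L(w).
    w = w + eta * minibatch_gradient(g_n, B)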
Model: Random walk on a random potential. Fortunately, the high dimensional case of a particle doing a "random walk on a random potential" was extensively investigated already decades ago (Bouchaud & Georges, 1990). The main result of that investigation was that the asymptotic behavior of the auto-covariance of a random potential [4],

    E[L(w_1) L(w_2)] ~ ||w_1 - w_2||^α ,  α > 0    (2)

in a certain range, determines the asymptotic behavior of the random walker in that range:

    E ||w_t - w_0||^2 ~ (log t)^{4/α} .    (3)

This is called "ultra-slow diffusion", in which, typically, ||w_t - w_0|| ~ (log t)^{2/α}, in contrast to standard diffusion (on a flat potential), in which we have ||w_t - w_0|| ~ sqrt(t). The informal reason for this behavior (for any α > 0) is that for a particle to move a distance d, it has to pass potential barriers of height ~ d^{α/2}, from eq. (2). Then, to climb (or go around) each barrier takes a time exponentially long in the height of the barrier: t ~ exp(d^{α/2}). Inverting this relation, we get d ~ (log t)^{2/α}. In the high-dimensional case, this type of behavior was first shown numerically and explained heuristically by Marinari et al. (1983), then rigorously proven for the case of a discrete lattice by Durrett (1986), and explained in the continuous case by Bouchaud & Comtet (1987).

3.1 Comparison with empirical results and implications

To examine this prediction of ultra-slow diffusion and find the value of α, in Figure 2a we examine ||w_t - w_0|| during the initial training phase of the experiment shown in Figure 1. We found that the weight distance from the initialization point increases logarithmically with the number of training iterations (weight updates), which matches our model with α = 2:

    ||w_t - w_0|| ~ log t .    (4)

Interestingly, the value α = 2 matches the statistics of the loss estimated in appendix section B. Moreover, in Figure 2a, we find that a very similar logarithmic graph is observed for all batch sizes. Yet, there are two main differences. First, each graph seems to have a somewhat different slope (i.e., it is multiplied by a different positive constant), which peaks at M = 128 and then decreases with the mini-batch size. This indicates a somewhat different diffusion rate for different batch sizes. Second, since we trained all models for a constant number of epochs, smaller batch sizes entail more training iterations in total. Thus, there is a significant difference in the number of iterations and the corresponding weight distance reached at the end of the initial learning phase.

This leads to the following informal argument (which assumes flat minima are indeed important for generalization). During the initial training phase, to reach a minimum of "width" d the weight vector w_t has to travel at least a distance d, and this takes a long time, about exp(d) iterations. Thus, to reach wide ("flat") minima we need to have the highest possible diffusion rates (which do not result in numerical instability) and a large number of training iterations. In the next sections we will implement these conclusions in practice.

[4] Note that this form is consistent with eq. (1), if f(x) = x^{α/2}.

Figure 2: Euclidean distance of weight vector from initialization. (a) Before learning rate adjustment and GBN. (b) After learning rate adjustment and GBN.

4 Matching weight increment statistics for different mini-batch sizes

First, to correct the different diffusion rates observed for different batch sizes, we aim to match the statistics of the weight increments to those of a small batch size.
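The diffusion rates above were diagnosed by tracking ||w_t - w_0|| over training. A minimal sketch of that measurement and of checking the log-linear relation of eq. (4); the fitting helper and variable names are ours, not from the paper's code:

    import numpy as np

    def distance_from_init(w_t, w_0):
        # ||w_t - w_0||: the quantity of eq. (4), plotted in Figure 2a.
        return np.linalg.norm(np.ravel(w_t) - np.ravel(w_0))

    def log_slope(iterations, distances):
        # Fit distance ~ a*log(t) + b; under the model with alpha = 2
        # the fit should be close to linear in log(t).
        a, b = np.polyfit(np.log(iterations), distances, deg=1)
        return a, b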
Learning rate. Recall that in this paper we investigate SGD, possibly with momentum, where the weight updates are proportional to the estimated gradient,

    Δw ∝ η ĝ ,    (5)

where η is the learning rate, and we ignore for now the effect of batch normalization. In appendix section A, we show that the covariance matrix of the parameter update step Δw is

    cov(Δw, Δw) ≈ (η^2 / M) · ( (1/N) Σ_{n=1}^N g_n g_n^T )    (6)

in the case of uniform sampling of the mini-batch indices (with or without replacement), when M << N. Therefore, a simple way to make sure that the covariance matrix stays the same for all mini-batch sizes is to choose

    η ∝ sqrt(M) ,    (7)

i.e., we should increase the learning rate by the square root of the mini-batch size. We note that Krizhevsky (2014) suggested a similar learning rate scaling in order to keep the variance of the gradient expectation constant, but chose to use a linear scaling heuristic as it reached a better empirical result in his setting. Later on, Li (2017) suggested the same. Naturally, such an increase in the learning rate also increases the mean steps E[Δw]. However, we found that this effect is negligible since E[Δw] is typically orders of magnitude lower than the standard deviation. Furthermore, we can match both the first and second order statistics by adding multiplicative noise to the gradient estimate as follows:

    ĝ = (1/M) Σ_{n∈B} g_n z_n ,

where z_n ~ N(1, σ^2) are independent random Gaussian variables for which σ^2 ∝ M. This can be verified by a calculation similar to the one in appendix section A. This method keeps the covariance constant when we change the batch size, yet does not change the mean steps E[Δw]. In both cases, for the first few iterations, we had to clip or normalize the gradients to prevent divergence. Since both methods yielded similar performance [5] (due to the negligible effect of the first order statistics), we preferred to use the simpler learning rate method. It is important to note that other types of noise (e.g., dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013), label noise (Szegedy et al., 2016)) change the structure of the covariance matrix and not just its scale; thus the second order statistics of the small batch increment cannot be accurately matched. Accordingly, we did not find that these types of noise helped to reduce the generalization gap for large batch sizes. Lastly, note that in our discussion above (and the derivations provided in appendix section A) we assumed each per-sample gradient g_n does not depend on the selected mini-batch. However, this ignores the influence of batch normalization. We take this into consideration in the next subsection.

[5] A simple comparison can be seen in appendix (figure 3).

Ghost Batch Normalization. Batch Normalization (BN) (Ioffe & Szegedy, 2015) is known to accelerate training, increase the robustness of neural networks to different initialization schemes, and improve generalization. Nonetheless, since BN uses the batch statistics, it is bound to depend on the chosen batch size. We study this dependency and observe that by acquiring the statistics on small virtual ("ghost") batches instead of the real large batch we can reduce the generalization error. In our experiments we found that it is important to use the full-batch statistics, as suggested by Ioffe & Szegedy (2015), for the inference phase. Full details are given in Algorithm 1. This modification by itself reduces the generalization error substantially.
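Before spelling out the batch-norm adjustment in Algorithm 1 below, the learning-rate rule of eq. (7) is simple enough to state in code. A minimal sketch, with illustrative values (the multiplicative-noise variant above would be applied analogously, per-sample):

    import numpy as np

    def sqrt_scaled_lr(eta_small, batch_small, batch_large):
        # eq. (7): eta proportional to sqrt(M), so that the update
        # covariance of eq. (6) is unchanged when the batch grows.
        return eta_small * np.sqrt(batch_large / batch_small)

    # e.g. moving from batch 128 at eta = 0.1 to batch 4096:
    eta_large = sqrt_scaled_lr(0.1, 128, 4096)  # 0.1 * sqrt(32) ~= 0.566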
Algorithm 1: Ghost Batch Normalization (GBN), applied to activation x over a large batch B_L with virtual mini-batches B_S, where |B_S| < |B_L|.

Require: values of x over a large batch: B_L = {x_1...m}; size of virtual batch |B_S|; parameters to be learned: γ, β; momentum η.

Training phase:
  Scatter B_L to {X^1, X^2, ..., X^{|B_L|/|B_S|}} = {x_{1...|B_S|}, x_{|B_S|+1...2|B_S|}, ..., x_{|B_L|-|B_S|...m}}
  μ_B^l ← (1/|B_S|) Σ_{i=1}^{|B_S|} X_i^l for l = 1, 2, 3, ...                      (calculate ghost mini-batch means)
  σ_B^l ← sqrt( (1/|B_S|) Σ_{i=1}^{|B_S|} (X_i^l - μ_B^l)^2 + ε ) for l = 1, 2, 3, ...   (calculate ghost mini-batch stds)
  μ_run ← (1 - η)^{|B_S|} μ_run + Σ_{l=1}^{|B_L|/|B_S|} (1 - η)^l · η · μ_B^l
  σ_run ← (1 - η)^{|B_S|} σ_run + Σ_{l=1}^{|B_L|/|B_S|} (1 - η)^l · η · σ_B^l
  return γ (X^l - μ_B^l) / σ_B^l + β

Test phase:
  return γ (X - μ_run) / σ_run + β    (scale and shift)

We note that in a multi-device distributed setting, some of the benefits of "Ghost BN" may already occur, since batch normalization is often performed on each device separately to avoid additional communication cost. Thus, each device computes the batch norm statistics using only its samples (i.e., part of the whole mini-batch). It is a known fact, yet unpublished to the best of the authors' knowledge, that this form of batch norm update helps generalization and yields better results than computing the batch-norm statistics over the entire batch. Note that GBN enables flexibility in the small (virtual) batch size which is not provided by the commercial frameworks (e.g., TensorFlow, PyTorch), in which the batch statistics are calculated on the entire, per-device, batch. Moreover, in those commercial frameworks, the running statistics are usually computed differently from "Ghost BN", by weighting each update part equally. In our experiments we found this to worsen the generalization performance.

Implementing both the learning rate and GBN adjustments seems to improve generalization performance, as we shall see in section 6. Additionally, as can be seen in Figure 6, the slopes of the logarithmic weight distance graphs seem to be better matched, indicating similar diffusion rates. We also observe some constant shift, which we believe is related to the gradient clipping. Since this shift only increased the weight distances, we assume it does not harm the performance.

5 Adapting number of weight updates eliminates generalization gap

According to our conclusions in section 3, the initial high-learning-rate training phase enables the model to reach farther locations in the parameter space, which may be necessary to find wider local minima and better generalization. Examining Figure 2b, the next obvious step to match the graphs for different batch sizes is to increase the number of training iterations in the initial high learning rate regime. And indeed we noticed that the distance between the current weights and the initialization point can be a good measure for deciding when to decrease the learning rate.

Figure 3: Comparing generalization of large-batch regimes, adapted to match performance of small-batch training. (a) Validation error. (b) Validation error, zoomed.

Note that this is different from common practices. Usually, practitioners decrease the learning rate after the validation error appears to reach a plateau. This practice is due to the long-held belief that the optimization process should not be allowed to decrease the training error when the validation error "flatlines", for fear of overfitting (Girosi et al., 1995).
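Before continuing, a minimal NumPy sketch of the training-phase computation in Algorithm 1 above. It assumes the large batch divides evenly into ghost batches and omits the running-statistics bookkeeping used at test time:

    import numpy as np

    def ghost_batch_norm_train(x, ghost_size, gamma, beta, eps=1e-5):
        # x: (large_batch, features). Split the large batch into ghost
        # batches and normalize each one with its own mean and std.
        n_ghost = x.shape[0] // ghost_size            # assumes even division
        chunks = x.reshape(n_ghost, ghost_size, -1)
        mu = chunks.mean(axis=1, keepdims=True)       # per-ghost means
        sd = np.sqrt(chunks.var(axis=1, keepdims=True) + eps)  # per-ghost stds
        normalized = ((chunks - mu) / sd).reshape(x.shape)
        return gamma * normalized + beta              # scale and shift

    # e.g. a large batch of 4096 activations, 64 features, ghost size 128:
    x = np.random.default_rng(1).normal(size=(4096, 64))
    y = ghost_batch_norm_train(x, 128, gamma=np.ones(64), beta=np.zeros(64))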
However, we observed that a substantial improvement to the final accuracy can be obtained by continuing the optimization at the same learning rate even if the training error decreases while the validation error plateaus. Subsequent learning rate drops resulted in a sharp validation error decrease, and better generalization for the final model. These observations led us to believe that the "generalization gap" phenomenon stems from the relatively small number of updates rather than from the batch size.

Specifically, using the insights from Figure 2 and our model, we adapted the training regime to better suit the usage of large mini-batches. We "stretched" the time-frame of the optimization process, so that each time period of e epochs in the original regime is transformed to (|B_L|/|B_S|) e epochs, according to the mini-batch size used. This modification ensures that the number of optimization steps taken is identical to those performed in the small batch regime. As can be seen in Figure 3, combining this modification with the learning rate adjustment completely eliminates the generalization gap observed earlier [6].

6 Experiments

Experimental setting. We experimented with a set of popular image classification tasks:

- MNIST (LeCun et al., 1998b) - Consists of a training set of 60K and a test set of 10K 28 × 28 gray-scale images representing digits ranging from 0 to 9.
- CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) - Each consists of a training set of size 50K and a test set of size 10K. Instances are 32 × 32 color images representing 10 or 100 classes.
- ImageNet classification task (Deng et al., 2009) - Consists of a training set of 1.2M samples and a test set of size 50K. Each instance is labeled with one of 1000 categories.

To validate our findings, we used a representative choice of neural network models. We used the fully-connected model F1, as well as the shallow convolutional models C1 and C3 suggested by Keskar et al. (2017). As a demonstration of more current architectures, we used the models VGG (Simonyan, 2014) and Resnet44 (He et al., 2016) for the CIFAR10 dataset, Wide-Resnet16-4 (Zagoruyko, 2016) for the CIFAR100 dataset and Alexnet (Krizhevsky, 2014) for the ImageNet dataset. In each of the experiments, we used the training regime suggested by the original work, together with a momentum SGD optimizer. We use a batch of 4096 samples as the "large batch" (LB) and a small batch (SB) of either 128 (F1, C1, VGG, Resnet44, C3, Alexnet) or 256 (WResnet). We compare the original training baseline for small and large batch, as well as the following methods [7]:

- Learning rate tuning (LB+LR): Using a large batch, while adapting the learning rate to be larger, so that η_L = sqrt(|B_L|/|B_S|) · η_S, where η_S is the original learning rate used for the small batch, η_L is the adapted learning rate, and |B_L|, |B_S| are the large and small batch sizes, respectively.
- Ghost batch norm (LB+LR+GBN): Additionally using the "Ghost batch normalization" method in our training procedure. The "ghost batch size" used is 128.
- Regime adaptation: Using the tuned learning rate as well as ghost batch-norm, but with an adapted training regime. The training regime is modified to have the same number of iterations for each batch size used - effectively multiplying the number of epochs by the relative size of the large batch.

[6] Additional graphs, including a comparison to the non-adapted regime, are available in appendix (figure 2).
[7] Code is available at https://github.com/eladhoffer/bigBatch.

Results.
Following our experiments, we can establish an empirical basis for our claims. Observing the final validation accuracy displayed in Table 1, we can see that, in accordance with previous works, the move from a small batch (SB) to a large batch (LB) indeed incurs a substantial generalization gap. However, modifying the learning rate used for the large batch (+LR) causes much of this gap to diminish, followed by an additional improvement from using the Ghost-BN method (+GBN). Finally, we can see that the generalization gap completely disappears when the training regime is adapted (+RA), yielding validation accuracy that is as good as or better than that obtained using a small batch. We additionally display results obtained on the more challenging ImageNet dataset in Table 2, which shows a similar impact of our methods.

Table 1: Validation accuracy results; SB/LB represent small and large batch, respectively. GBN stands for Ghost-BN, and RA stands for regime adaptation.

Network                        | Dataset  | SB     | LB     | +LR    | +GBN   | +RA
F1 (Keskar et al., 2017)       | MNIST    | 98.27% | 97.05% | 97.55% | 97.60% | 98.53%
C1 (Keskar et al., 2017)       | Cifar10  | 87.80% | 83.95% | 86.15% | 86.4%  | 88.20%
Resnet44 (He et al., 2016)     | Cifar10  | 92.83% | 86.10% | 89.30% | 90.50% | 93.07%
VGG (Simonyan, 2014)           | Cifar10  | 92.30% | 84.1%  | 88.6%  | 91.50% | 93.03%
C3 (Keskar et al., 2017)       | Cifar100 | 61.25% | 51.50% | 57.38% | 57.5%  | 63.20%
WResnet16-4 (Zagoruyko, 2016)  | Cifar100 | 73.70% | 68.15% | 69.05% | 71.20% | 73.57%

Table 2: ImageNet top-1 results using the Alexnet topology (Krizhevsky, 2014); notation as in Table 1.

Network | LB size | Dataset  | SB     | LB [8] | +LR [8] | +GBN   | +RA
Alexnet | 4096    | ImageNet | 57.10% | 41.23% | 53.25%  | 54.92% | 59.5%
Alexnet | 8192    | ImageNet | 57.10% | 41.23% | 53.25%  | 53.93% | 59.5%

[8] Due to memory limitations, these experiments were conducted with a batch of 2048.

7 Discussion

There are two important issues regarding the use of large batch sizes. First, why do we get worse generalization with a larger batch, and how do we avoid this behaviour? Second, can we decrease the training wall clock time by using a larger batch (exploiting parallelization), while retaining the same generalization performance as with a small batch?

This work tackles the first issue by investigating the random walk behaviour of SGD and the relationship of its diffusion rate to the size of a batch. Based on this and empirical observations, we propose a simple set of remedies to close the generalization gap between the small and large batch training strategies: (1) Use SGD with momentum, gradient clipping, and a decreasing learning rate schedule; (2) adapt the learning rate with batch size (we used a square root scaling); (3) compute batch-norm statistics over several partitions ("ghost batch-norm"); and (4) use a sufficient number of high learning rate training iterations.

Thus, the main point arising from our results is that, in contrast to previous conception, there is no inherent generalization problem with training using large mini-batches. That is, model training using large mini-batches can generalize as well as models trained using small mini-batches. Though this answers the first issue, the second issue remained open: can we speed up training by using large batch sizes? Not long after our paper first appeared, this issue was also answered. Using a Resnet model on ImageNet, Goyal et al. (2017) showed that, indeed, significant speedups in training could be achieved using a large batch size. This further highlights the ideas brought forward in this work and their importance to future scale-up, especially since Goyal et al.
(2017) used similar training practices to those we described above. The main difference between our works is their use of a linear scaling of the learning rate [9], similarly to Krizhevsky (2014), and as suggested by Bottou (2010). However, we found that linear scaling works less well on CIFAR10, and later work found that linear scaling rules work less well for other architectures on ImageNet (You et al., 2017).

We also note that current "rules of thumb" regarding the optimization regime and, explicitly, the learning rate annealing schedule may be misguided. We showed that good generalization can result from an extensive amount of gradient updates in which there is no apparent validation error change and the training error continues to drop, in contrast to common practice. After our work appeared, Soudry et al. (2017) suggested an explanation for this, and for the logarithmic increase in the weight distance observed in Figure 2. We show this behavior happens even in simple logistic regression problems with separable data. In this case, we exactly solve the asymptotic dynamics and prove that w(t) = log(t) ŵ + O(1), where ŵ is the L2 maximum margin separator. Therefore, the margin (affecting generalization) improves only slowly (as O(1/log(t))), even while the training error is very low. Future work, based on this, may be focused on finding when and how the learning rate should be decreased while training.

Conclusion. In this work we make a first attempt to tackle the "generalization gap" phenomenon. We argue that the initial learning phase can be described using a high-dimensional "random walk on a random potential" process, with an "ultra-slow" logarithmic increase in the distance of the weights from their initialization, as we observe empirically. Following this observation we suggest several techniques which enable training with a large batch without suffering from performance degradation, implying that the problem is not related to the batch size but rather to the number of updates. Moreover, we introduce a simple yet efficient algorithm, "Ghost-BN", which improves the generalization performance significantly while keeping the training time intact.

[9] E.g., Goyal et al. (2017) also used an initial warm-up phase for the learning rate; however, this has a similar effect to the gradient clipping we used, since this clipping was mostly active during the initial steps of training.

Acknowledgments

We wish to thank Nir Ailon, Dar Gilboa, Kfir Levy and Igor Berman for their feedback on the initial manuscript. The research was partially supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

References

Amodei, D., Anubhai, R., Battenberg, E., et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv preprint arXiv:1512.02595, 2015.

Bottou, L. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pp. 177-186. Springer, 2010.

Bouchaud, J. P. and Georges, A. Anomalous diffusion in disordered media: statistical mechanisms, models and physical applications. Physics Reports, 195:127-293, 1990.

Bouchaud, J. P. and Comtet, A. Anomalous diffusion in random media of any dimensionality. J. Physique, 48:1445-1450, 1987.

Bray, A. J. and Dean, D. S.
Statistics of critical points of Gaussian fields on large-dimensional spaces. Physical Review Letters, 98(15):1-5, 2007.

Choromanska, A., Henaff, M., Mathieu, M., Arous, G. B., and LeCun, Y. The loss surfaces of multilayer networks. In AISTATS, 38, 2015.

Das, D., Avancha, S., Mudigere, D., et al. Distributed deep learning using synchronous stochastic gradient descent. arXiv preprint arXiv:1602.06709, 2016.

Dauphin, Y., de Vries, H., Chung, J., and Bengio, Y. Rmsprop and equilibrated adaptive learning rates for non-convex optimization. CoRR, abs/1502.04390, 2015.

Dauphin, Y., Pascanu, R., and Gulcehre, C. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, pp. 1-9, 2014.

Dean, J., Corrado, G., Monga, R., et al. Large scale distributed deep networks. In NIPS, pp. 1223-1231, 2012.

Deng, J., Dong, W., Socher, R., et al. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

Dinh, L., Pascanu, R., Bengio, S., and Bengio, Y. Sharp minima can generalize for deep nets. arXiv preprint arXiv:1703.04933, 2017.

Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Durrett, R. Multidimensional random walks in random environments with subclassical limiting behavior. Communications in Mathematical Physics, 104(1):87-102, 1986.

Ge, R., Huang, F., Jin, C., and Yuan, Y. Escaping from saddle points - online stochastic gradient for tensor decomposition. In COLT, pp. 797-842, 2015.

Girosi, F., Jones, M., and Poggio, T. Regularization theory and neural networks architectures. Neural Computation, 7(2):219-269, 1995.

Goyal, P., Dollár, P., Girshick, R., et al. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.

Hardt, M., Recht, B., and Singer, Y. Train faster, generalize better: Stability of stochastic gradient descent. In ICML, pp. 1-24, 2016.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.

Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR, 2017.

Kingma, D. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Krizhevsky, A. Learning multiple layers of features from tiny images. 2009.

Krizhevsky, A. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.

LeCun, Y., Bottou, L., and Orr, G. Efficient backprop. In Neural Networks: Tricks of the Trade (Orr, G. and Müller, K., eds.), Lecture Notes in Computer Science, 1524, 1998a.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998b.

Li, M. Scaling Distributed Machine Learning with System and Algorithm Co-design. PhD thesis, Intel, 2017.

Li, M., Zhang, T., Chen, Y., and Smola, A. J.
Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 661-670. ACM, 2014.

Luong, M.-T., Pham, H., and Manning, C. D. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.

Marinari, E., Parisi, G., Ruelle, D., and Windey, P. Random walk in a random environment and 1/f noise. Physical Review Letters, 50(1):1223-1225, 1983.

Mnih, V., Kavukcuoglu, K., Silver, D., et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Montavon, G., Orr, G., and Müller, K.-R. Neural Networks: Tricks of the Trade. 2nd edition, 2012. ISBN 978-3-642-35288-1.

Ruder, S. An overview of gradient descent optimization algorithms. CoRR, abs/1609.04747, 2016.

Silver, D., Huang, A., Maddison, C. J., et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Simonyan, K., et al. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Soudry, D., Hoffer, E., and Srebro, N. The implicit bias of gradient descent on separable data. arXiv e-prints, October 2017.

Soudry, D. and Hoffer, E. Exponentially vanishing sub-optimal local minima in multilayer neural networks. arXiv preprint arXiv:1702.05777, 2017.

Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Sutskever, I., Martens, J., Dahl, G., and Hinton, G. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, pp. 1139-1147, 2013.

Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, 2016.

Wan, L., Zeiler, M., Zhang, S., LeCun, Y., and Fergus, R. Regularization of neural networks using dropconnect. In ICML'13, pp. III-1058-III-1066. JMLR.org, 2013.

Wu, Y., Schuster, M., Chen, Z., et al. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016.

You, Y., Gitman, I., and Ginsburg, B. Scaling SGD batch size to 32K for ImageNet training. arXiv preprint arXiv:1708.03888, 2017.

Zagoruyko, S. and Komodakis, N. Wide residual networks. In BMVC, 2016.

Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking generalization. In ICLR, 2017.

Zhang, S., Choromanska, A. E., and LeCun, Y. Deep learning with elastic averaging SGD. In NIPS, pp. 685-693, 2015.
Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks

Urs Köster*, Tristan Webb*, Xin Wang*, Marcel Nassar*, Arjun Bansal, William Constable, Oguz Elibol, Stewart Hall, Luke Hornof, Amir Khosrowshahi, Carey Kloss, Ruby Pai, Naveen Rao
Artificial Intelligence Products Group, Intel Corporation
*Equal Contribution

Abstract

Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited precision inference in recent years, training of neural networks in low bit-width remains a challenging problem. Here we present the Flexpoint data format, aiming at a complete replacement of 32-bit floating point format training and inference, designed to support modern deep network topologies without modifications. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize the available dynamic range. We validate Flexpoint by training AlexNet [1], a deep residual network [2, 3] and a generative adversarial network [4], using a simulator implemented with the neon deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning of model hyperparameters. Our results suggest Flexpoint as a promising numerical format for future hardware for training and inference.

1 Introduction

Deep learning is a rapidly growing field that achieves state-of-the-art performance in solving many key data-driven problems in a wide range of industries. With major chip makers' quest for novel hardware architectures for deep learning, the next few years will see the advent of new computing devices optimized for training and inference of deep neural networks with increasing performance at decreasing cost.

Typically deep learning research is done on CPU and/or GPU architectures that offer native 64-bit, 32-bit or 16-bit floating point data formats and operations. Substantial improvements in hardware footprint, power consumption, speed, and memory requirements could be obtained with more efficient data formats. This calls for innovations in numerical representations and operations specifically tailored for deep learning needs. Recently, inference with low bit-width fixed point data formats has made significant advances, whereas low bit-width training remains an open challenge [5, 6, 7]. Because training in low precision reduces memory footprint and increases the computational density of the deployed hardware infrastructure, it is crucial to efficient and scalable deep learning applications.

In this paper, we present Flexpoint, a flexible low bit-width numerical format, which faithfully maintains algorithmic parity with full-precision floating point training and supports a wide range of deep network topologies, while at the same time substantially reducing consumption of computational resources, making it amenable for specialized training hardware optimized for field deployment of already existing deep learning models.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

The remainder of this paper is structured as follows. In Section 2, we review relevant work in the literature.
In Section 3, we present the Flexpoint numerical format along with an exponent management algorithm that tracks the statistics of tensor extrema and adjusts tensor scales on a per-minibatch basis. In Section 4, we show results from training several deep neural networks in Flexpoint, showing close parity to floating point performance: AlexNet and a deep residual network (ResNet) for image classification, and the recently published Wasserstein GAN. In Section 5, we discuss specific advantages and limitations of Flexpoint, and compare its merits to those of competing low-precision training schemes.

2 Related Work

In 2011, Vanhoucke et al. first showed that inference and training of deep neural networks is feasible with values of certain tensors quantized to a low-precision fixed point format [8]. More recently, an increasing number of studies demonstrated low-precision inference with substantially reduced computation. These studies involve, usually in a model-dependent manner, quantization of specific tensors into low-precision fixed point formats. These include quantization of weights and/or activations to 8-bit [8, 9, 10, 11], down to 4-bit, 2-bit [12, 13] or ternary [10], and ultimately all binary [7, 14, 5, 6]. Weights trained at full precision are commonly converted from floating point values, and bit-widths of component tensors are either pre-determined based on the characteristics of the model, or optimized per layer [11]. Low-precision inference has already made its way into production hardware such as Google's tensor processing unit (TPU) [15].

On the other hand, reasonable successes in low-precision training have been obtained with binarized [13, 16, 17, 5] or ternarized weights [18], or binarized gradients in the case of stochastic gradient descent [19], while accumulation of activations and gradients is usually done at higher precision. Motivated by the non-uniform distribution of weights and activations, Miyashita et al. [20] used a logarithmic quantizer to quantize the parameters and gradients to 6 bits without significant loss in performance. XNOR-nets focused on speeding up neural network computations by parametrizing the activations and weights as rank-1 products of binary tensors and higher precision scalar values [7]. This enables the use of kernels composed of XNOR and bit-count operations to perform highly efficient convolutions. However, additional high-precision multipliers are still needed to perform the scaling after each convolution, which limits its performance. Quantized Neural Networks (QNNs), and their binary version (Binarized Nets), successfully perform low-precision inference (down to 1-bit) by keeping real-valued weights and quantizing them only to compute the gradients and perform forward inference [17, 5]. Hubara et al. found that low precision networks coupled with efficient bit shift-based operations resulted in computational speed-ups, in experiments performed using specialized GPU kernels. DoReFa-Nets utilize similar ideas as QNNs and quantize the gradients to 6 bits to achieve similar performance [6]. The authors also trained in limited precision the deepest ResNet (18 layers) so far.

The closest work related to this manuscript is by Courbariaux et al. [21], who used a dynamical fixed point (DFXP) format in training a number of benchmark models.
In their study, tensors are polled periodically for the fraction of overflowed entries in a given tensor: if that number exceeds a certain threshold the exponent is incremented to extend the dynamic range, and vice versa. The main drawback is that this update mechanism only passively reacts to overflows rather than anticipating and preemptively avoiding them; this turns out to be catastrophic for maintaining convergence of the training.

3 Flexpoint

3.1 The Flexpoint Data Format

Flexpoint is a data format that combines the advantages of fixed point and floating point arithmetic. By using a common exponent for integer values in a tensor, Flexpoint reduces computational and memory requirements while automatically managing the exponent of each tensor in a user-transparent manner.

Figure 1: Diagrams of bit representations of different tensorial numerical formats. Red, green and blue shading each signify mantissa, exponent, and sign bits respectively. In both (a) IEEE 754 32-bit floating point and (b) IEEE 754 16-bit floating point a portion of the bit string is allocated to specify exponents. (c) illustrates a Flexpoint tensor with 16-bit mantissa and 5-bit shared exponent.

Flexpoint is based on tensors with an N-bit mantissa storing an integer value in two's complement form, and an M-bit exponent e, shared across all elements of a tensor. This format is denoted flexN+M. Fig. 1 shows an illustration of a Flexpoint tensor with a 16-bit mantissa and 5-bit exponent, i.e. flex16+5, compared to 32-bit and 16-bit floating point tensors. In contrast to floating point, the exponent is shared across tensor elements, and in contrast to fixed point, the exponent is updated automatically every time a tensor is written.

Compared to 32-bit floating point, Flexpoint reduces both memory and bandwidth requirements in hardware, as storage and communication of the exponent can be amortized over the entire tensor. Power and area requirements are also reduced due to simpler multipliers compared to floating point. Specifically, multiplication of entries of two separate tensors can be computed as a fixed point operation since the common exponent is identical across all the output elements. For the same reason, addition across elements of the same tensor can also be implemented as a fixed point operation. This essentially turns the majority of computations of deep neural networks into fixed point operations.

3.2 Exponent Management

These remarkable advantages come at the cost of added complexity of exponent management and dynamic range limitations imposed by sharing a single exponent. Other authors have reported on the range of values contained within tensors during neural network training: "the activations, gradients and parameters have very different ranges" and "gradients ranges slowly diminish during the training" [21]. These observations are promising indicators of the viability of numerical formats based around tensor-shared exponents. Fig. 2 shows histograms of values from different types of tensors taken from a 110-layer ResNet trained on CIFAR-10 using 32-bit floating point.

In order to preserve a faithful representation of floating point, tensors with a shared exponent must have a sufficiently narrow dynamic range such that the mantissa bits alone can encode variability. As suggested by Fig. 2, 16 bits of mantissa is sufficient to cover the majority of values of a single tensor. For performing operations such as adding gradient updates to weights, there must be sufficient mantissa overlap between tensors, putting additional requirements on the number of bits needed to represent values in training, as compared to inference. Establishing that deep learning tensors conform to these requirements during training is a key finding of our present results. An alternative solution to this problem is stochastic rounding [22].
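As a rough illustration of the flexN+M idea, here is a minimal NumPy sketch of quantizing a tensor to a 16-bit mantissa with a single shared exponent. The exponent choice and the clipping are our simplification for exposition, not the hardware behavior:

    import numpy as np

    def to_flex(x, n_bits=16):
        # Quantize a float tensor into an N-bit two's-complement mantissa
        # plus one shared exponent e, so that x ~= mantissa * 2**e.
        max_abs = float(np.max(np.abs(x)))
        # Choose e so the largest magnitude uses close to N-1 mantissa bits.
        e = int(np.floor(np.log2(max_abs))) + 2 - n_bits if max_abs > 0 else 0
        mantissa = np.round(x / 2.0 ** e)
        # Guard the two's-complement range; real hardware would flag overflow.
        mantissa = np.clip(mantissa, -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
        return mantissa.astype(np.int32), e

    def from_flex(mantissa, e):
        # Recover floating point values from the shared-exponent pair.
        return mantissa.astype(np.float64) * 2.0 ** e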
For performing operations such as adding gradient updates to weights, there must be sufficient mantissa overlap between tensors, putting additional requirements on number of bits needed to represent values in training, as compared to inference. Establishing that deep learning tensors conform to these requirements during training is a key finding in our present results. An alternative solution to addressing this problem is stochastic rounding [22]. 3 Finally, to implement Flexpoint efficiently in hardware, the output exponent has to be determined before the operation is actually performed. Otherwise the intermediate result needs to be stored in high precision, before reading the new exponent and quantizing the result, which would negate much of the potential savings in hardware. Therefore, intelligent management of the exponents is required. 3.3 Exponent Management Algorithm We propose an exponent management algorithm called Autoflex, designed for iterative optimizations, such as stochastic gradient descent, where tensor operations, e.g. matrix multiplication, are performed repeatedly and outputs are stored in hardware buffers. Autoflex predicts an optimal exponent for the output of each tensor operation based on tensor-wide statistics gathered from values computed in previous iterations. The success of training in deep neural networks in Flexpoint hinges on the assumption that ranges of values in the network change sufficiently slowly, such that exponents can be predicted with high accuracy based on historical trends. If the input data is independently and identically distributed, tensors in the network, such as weights, activations and deltas, will have slowly changing exponents. Fig. 3 shows an example of training a deep neural network model. The Autoflex algorithm tracks the maximum absolute value ?, of the mantissa of every tensor, by using a dequeue to store a bounded history of these values. Intuitively, it is then possible to estimate a trend in the stored values based on a statistical model, use it to anticipate an overflow, and increase the exponent preemptively to prevent overflow. Similarly, if the trend of ? values decreases, the exponent can be decreased to better utilize the available range. We formalize our terminology as follows. After each kernel call, statistics are stored in the floating point representation ? of the maximum absolute values of a tensor, obtained as ? = ??, by multiplying the maximum absolute mantissa value ? with scale factor ?. This scale factor is related to the exponent e by the relation ? = 2?e . If the same tensor is reused for different computations in the network, we track the exponent e and the statistics of ? separately for each use. This allows the underlying memory for the mantissa to be shared across different uses, without disrupting the exponent management. 3.4 Autoflex Initialization At the beginning of training, the statistics queue is empty, so we use a simple trial-and-error scheme described in Algorithm 1 to initialize the exponents. We perform each operation in a loop, inspecting the output value of ? for overflows or underutilization, and repeat until the target exponent is found. Figure 2: Distributions of values for (a) weights, (b) activations and (c) weight updates, all during the first epoch (blue) and last epoch (purple) of training a ResNet trained on CIFAR-10 for 165 epochs. 
The horizontal axis covers the entire range of values that can be represented in 16-bit Flexpoint, with the horizontal bars indicating the dynamic range covered by the 16-bit mantissa. All tensors have a narrow peak close to the right edge of the horizontal bar, where values have close to the same precision as if the elements had individual exponents.

Algorithm 1: Autoflex initialization algorithm. Scales are initialized by repeatedly performing the operation and adjusting the exponent up in case of overflows, or down if not all bits are utilized.

 1: initialized ← False
 2: s ← 1
 3: procedure InitializeScale
 4:   while not initialized do
 5:     Γ ← returned by kernel call
 6:     if Γ ≥ 2^{N-1} - 1 then                  (overflow: increase scale s)
 7:       s ← s · 2^{⌊(N-1)/2⌋}
 8:     else if Γ < 2^{N-2} then                 (underflow: decrease scale s)
 9:       s ← s · 2^{⌈log2 max(Γ,1)⌉-(N-2)}      (jump directly to target exponent)
10:       if Γ > 2^{⌊(N-1)/2⌋-2} then            (ensure enough bits for reliable jump)
11:         initialized ← True
12:     else                                     (scale s is correct)
13:       initialized ← True

3.5 Autoflex Exponent Prediction

After the network has been initialized by running the initialization procedure for each computation in the network, we train the network in conjunction with a scale update, Algorithm 2, executed twice per minibatch, once after forward activation and once after backpropagation, for each tensor / computation in the network. We maintain a fixed length dequeue f of the maximum floating point values encountered in the previous l iterations, and predict the expected maximum value for the next iteration based on the maximum and standard deviation of the values stored in the dequeue. If an overflow is encountered, the history of statistics is reset and the exponent is increased by one additional bit.

Algorithm 2: Autoflex scaling algorithm. Hyperparameters are the multiplicative headroom factor α = 2, the number of standard deviations β = 3, and the additive constant γ = 100. Statistics are computed over a moving window of length l = 16. Returns the expected maximum χ for the next iteration.

 1: f ← stats dequeue of length l
 2: Γ ← maximum absolute value of mantissa, returned by kernel call
 3: s ← previous scale value
 4: procedure AdjustScale
 5:   if Γ ≥ 2^{N-1} - 1 then          (overflow: add one bit and clear stats)
 6:     clear f
 7:     s ← 2s
 8:   f ← [f, sΓ]                       (extend dequeue)
 9:   χ ← α [max(f) + β std(f) + γ s]   (predicted maximum value for next iteration)
10:   s ← 2^{⌈log2 χ⌉-N+1}              (nearest power of two)

3.6 Autoflex Example

We illustrate the algorithm by training a small 2-layer perceptron for 400 iterations on the CIFAR-10 dataset. During training, Γ and Φ values are stored at each iteration, as shown in Fig. 3, for instance for a linear layer's weight, activation, and update tensors. Fig. 3(a) shows the weight tensor, which is highly stable as it is only updated with small gradient steps. Γ slowly approaches its maximum value of 2^14, at which point the s value is updated, and Γ drops by one bit. Shown below is the corresponding floating point representation Φ of the statistics computed from Γ, which is used to perform the exponent prediction. Using a sliding window of 16 values, the predicted maximum is computed and used to set the exponent for the next iteration. In Fig. 3(a), the prediction crosses the exponent boundary of 2^{-3} about 20 iterations before the value itself does, safely preventing an overflow. Tensors with more variation across epochs are shown in Fig. 3(b) (activations) and Fig. 3(c) (updates). The standard deviation across iterations is higher, therefore the algorithm leaves about half a bit and one bit respectively of headroom. Even as the tensor fluctuates in magnitude by more than a factor of two, the maximum absolute value of the mantissa Γ is safely prevented from overflowing.
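A minimal NumPy sketch of the scale update of Algorithm 2, with float arithmetic standing in for the hardware bookkeeping; the function and variable names are ours, not from the paper's code:

    import numpy as np

    def autoflex_adjust_scale(f, gamma_max, s, n_bits=16,
                              alpha=2.0, beta=3.0, gamma=100.0):
        # f: list of recent floating point maxima (the stats dequeue);
        # gamma_max: max abs mantissa from the last kernel call;
        # s: previous scale. Returns the updated dequeue and scale.
        if gamma_max >= 2 ** (n_bits - 1) - 1:  # overflow: add a bit, clear stats
            f = []
            s = 2.0 * s
        f = f + [s * gamma_max]                 # extend dequeue with Phi = s*Gamma
        fa = np.asarray(f)
        chi = alpha * (fa.max() + beta * fa.std() + gamma * s)  # predicted max
        s = 2.0 ** (int(np.ceil(np.log2(chi))) - n_bits + 1)    # nearest power of 2
        return f[-16:], s                       # keep moving window of length 16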
Figure 3: Evolution of different tensors during training with corresponding mantissa and exponent values. The second row shows the scale α, adjusted to keep the maximum absolute mantissa values (Γ, first row) at the top of the dynamic range without overflowing. As the product of the two (φ, third row) is anticipated to cross a power of two boundary, the scale is changed so as to keep the mantissa in the correct range. (a) shows this process for a weight tensor, which is very stable and slowly changing. The black arrow indicates how scale changes are synchronized with crossings of the exponent boundary. (b) shows an activation tensor with a noisier sequence of values. (c) shows a tensor of updates, which typically displays the most frequent exponent changes. In each case the Autoflex estimate (green line) crosses the exponent boundary (gray horizontal line) before the actual data (red) does, which means that exponent changes are predicted before an overflow occurs. The cost of this approach is that in the last example Γ reaches 3 bits below the cutoff, leaving the top bits zero and using only 13 of the 16 bits for representing data.

3.7 Simulation on GPU

The experiments described below were performed on Nvidia GPUs using the neon deep learning framework (available at https://github.com/NervanaSystems/neon). In order to simulate the flex16+5 data format we stored tensors using an int16 type. Computations such as convolution and matrix multiplication were performed with a set of GPU kernels which convert the underlying int16 data format to float32 by multiplying with α, perform operations in floating point, and convert back to int16 before returning the result as well as Γ. The kernels can also compute only Γ without writing any outputs, to prevent writing invalid data during exponent initialization. The computational performance of the GPU kernels is comparable to pure floating point kernels, so training models in this Flexpoint simulator adds little overhead.

4 Experimental Results

4.1 Convolutional Networks

We trained two convolutional networks in flex16+5, using float32 as a benchmark: AlexNet [1], and a ResNet [2, 3]. The ResNet architecture is composed of modules with shortcuts in the dataflow graph, a key feature that makes effective end-to-end training of extremely deep networks possible. These multiple divergent and convergent flows of tensor values at potentially disparate scales might pose unique challenges for training in a fixed point numerical format. We built a ResNet following the design described in [3]. The network has 12 blocks of residual modules consisting of convolutional stacks, making a deep network of 110 layers in total. We trained this model on the CIFAR-10 dataset [1] with float32 and flex16+5 data formats for 165 epochs. Fig. 4 shows misclassification error on the validation set plotted over the course of training. Learning curves match closely between float32 and flex16+5 for both networks.
In contrast, models trained in float16 without any changes in hyperparameter values substantially underperformed those trained in float32 and flex16+5.

4.2 Generative Adversarial Networks

Next, we validate training a generative adversarial network (GAN) in flex16+5. By virtue of an adversarial (two-player game) training process, GAN models provide a principled way of unsupervised learning using deep neural networks. The unique characteristics of GAN training, namely separate data flows through the two components (generator and discriminator) of the network, in addition to feeds of alternating batches of real and generated data of drastically different statistics to the discriminator at early stages of the training, pose significant challenges to fixed point numerical representations. We built a Wasserstein GAN (WGAN) model [4], which has the advantage of a metric, namely the Wasserstein-1 distance, that is indicative of generator performance and can be estimated from discriminator output during training. We trained a WGAN model on the LSUN [23] bedroom dataset in float32, flex16+5 and float16 formats with exactly the same hyperparameter settings. As shown in Fig. 5(a), estimates of the Wasserstein distance in flex16+5 training and in float32 training closely tracked each other. In float16 training the distance deviated significantly from the float32 baseline, starting with an initially undertrained discriminator. Further, we found no differences in the quality of generated images between float32 and flex16+5 at specific stages of the training (Fig. 5(b)), as quantified by the Fréchet Inception Distance (FID) [24]. Generated images from float16 training had lower quality (significantly higher FIDs, Fig. 5(b)), with noticeably more saturated patches; examples are illustrated in Fig. 5(c), 5(d) and 5(e).

5 Discussion

In the present work, we show that a Flexpoint data format, flex16+5, can adequately support training of modern deep learning models without any modifications of model topology or hyperparameters, achieving numerical performance on par with float32, the conventional data format widely used in deep learning research and development. Our discovery suggests a potential gain in efficiency and performance of future hardware architectures specialized for deep neural network training. Alternatives, i.e. schemes that more aggressively quantize tensor values to lower bit precisions, have also made significant progress recently. Here we list the major advantages and limitations of Flexpoint, and make a detailed comparison with competing methods in the following sections.

Figure 4: Convolutional networks trained in flex16+5 and float32 numerical formats. (a) AlexNet trained on ImageNet1k, graph showing top-5 misclassification on the validation set. (b) ResNet of 110 layers trained on CIFAR-10, graph showing top-1 misclassification on the validation set.

Distinct from very low precision (below 8-bit) fixed point quantization schemes, which significantly alter the quantitative behavior of the original model and thus require completely different training algorithms, Flexpoint's philosophy is to maintain numerical parity with the original network's training behavior in high-precision floating point. This brings about a number of advantages.
First, all prior knowledge of network design and hyperparameter tuning for efficient training can still be fully leveraged. Second, networks trained in high-precision floating point formats can be readily deployed in Flexpoint hardware for inference, or as a component of a bigger network for training. Third, no re-tuning of hyperparameters is necessary for training in Flexpoint: what works with floating point simply works in Flexpoint. Fourth, the training procedure remains exactly the same, eliminating the need for intermediate high-precision representations, with the only exception of the intermediate higher precision accumulation commonly needed for multipliers and adders. Fifth, all Flexpoint tensors are managed in exactly the same way by the Autoflex algorithm, which is designed to be hidden from the user, eliminating the need to remain cognizant of different types of tensors being quantized into different bit-widths. And finally, the Autoflex algorithm is robust enough to accommodate diverse deep network topologies without model-specific tuning of its hyperparameters.

Despite these advantages, the same design philosophy of Flexpoint likely prescribes a potential limitation in performance and efficiency, especially when compared to more aggressive quantization schemes, e.g. Binarized Networks, Quantized Networks and the DoReFa-Net. However, we believe Flexpoint strikes a desirable balance between aggressive extraction of performance and support for a wide collection of existing models.

Figure 5: Training performance of WGAN in flex16+5, float32 and float16 data formats. (a) Learning curves, i.e. estimated Wasserstein distance given by median filtered and down-sampled values of the negative discriminator cost function (median filter kernel length 100 [4], down-sampling by plotting every 100th value). (b) Quality of generated images, quantified by the Fréchet Inception Distance (FID) estimated from 5,000 samples of the generator, as in [24]. (c), (d), (e) Examples of images generated by the WGAN trained for 16 epochs in float32, flex16+5 and float16, respectively.

Furthermore, the potential and implications for hardware architecture of other data formats in the Flexpoint family, namely flexN+M for certain (N, M), are yet to be explored in future investigations.

Low-precision data formats: TensorFlow provides tools to quantize networks into 8-bit for inference [9]. TensorFlow's numerical format shares some common features with Flexpoint: each tensor has two variables that encode the range of the tensor's values, which is similar to the Autoflex scale α (although fewer bits are used to encode the exponent). An integer value is then used to represent the dynamic range with a dynamic precision. The dynamic fixed point (DFXP) numerical format, proposed in [25], has a representation similar to Flexpoint: a tensor consists of mantissa bits, and values share a common exponent. This format was used by [21] to train various neural nets in low precision with limited success (with difficulty matching CIFAR-10 maxout nets in float32). DFXP diverges significantly from Flexpoint in automatic exponent management: DFXP only updates the shared exponent at intervals specified by the user (e.g. every 100 minibatches) and solely based on the number of overflows occurring.
Flexpoint is more suitable for training modern networks, where the dynamics of the tensors might change rapidly.

Low-precision networks: While allowing for very efficient forward inference, the low-precision networks discussed in Section 2 share the following shortcomings when it comes to neural network training. These methods utilize an intermediate floating point weight representation that is also updated in floating point. This requires special hardware to perform these operations, in addition to increasing the memory footprint of the models. In addition, these low-precision quantizations render the models so different from the exact same networks trained in high-precision floating point formats that there is often no parity at the algorithmic level, which requires completely distinct training algorithms to be developed and optimized for these low-precision training schemes.

6 Conclusion

To further scale up deep learning, the future will require custom hardware that offers greater compute capability, supports ever-growing workloads, and minimizes memory and power consumption. Flexpoint is a numerical format designed to complement such specialized hardware. We have demonstrated that Flexpoint with a 16-bit mantissa and a 5-bit shared exponent achieved numerical parity with 32-bit floating point in training several deep learning models, without modifying the models or their hyperparameters, outperforming 16-bit floating point under the same conditions. Thus, specifically designed formats like Flexpoint, along with supporting algorithms such as Autoflex, go beyond current standards and present a promising ground for future research.

References

[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. URL http://arxiv.org/abs/1512.03385.
[3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016. URL http://arxiv.org/abs/1603.05027.
[4] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017. URL http://arxiv.org/abs/1701.07875.
[5] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016. URL http://arxiv.org/abs/1609.07061.
[6] Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016. URL http://arxiv.org/abs/1606.06160.
[7] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016.
[8] Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on CPUs. In Advances in Neural Information Processing Systems (workshop on deep learning), 2011. URL http://research.google.com/pubs/archive/37631.pdf.
[9] Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems, 2016. URL http://arxiv.org/abs/1603.04467.
[10] Naveen Mellempudi, Abhisek Kundu, Dipankar Das, Dheevatsa Mudigere, and Bharat Kaul. Mixed low-precision deep learning inference using dynamic fixed point. arXiv preprint arXiv:1701.08978, 2017. URL http://arxiv.org/abs/1701.08978.
[11] Darryl D. Lin, Sachin S. Talathi, and V. Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. In International Conference on Machine Learning, pages 2849–2858, 2016.
[12] Ganesh Venkatesh, Eriko Nurvitadhi, and Debbie Marr. Accelerating deep convolutional networks using low-precision and sparsity. arXiv preprint arXiv:1610.00324, 2016. URL http://arxiv.org/abs/1610.00324.
[13] Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015. URL http://arxiv.org/abs/1510.03009.
[14] Minje Kim and Paris Smaragdis. Bitwise neural networks. arXiv preprint arXiv:1601.06071, 2016. URL http://arxiv.org/abs/1601.06071.
[15] Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. arXiv preprint arXiv:1704.04760, 2017. URL http://arxiv.org/abs/1704.04760.
[16] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pages 3123–3131, 2015. URL http://arxiv.org/abs/1511.00363.
[17] Matthieu Courbariaux and Yoshua Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016. URL http://arxiv.org/abs/1602.02830.
[18] Kyuyeon Hwang and Wonyong Sung. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pages 1–6. IEEE, 2014.
[19] Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Interspeech, pages 1058–1062, 2014.
[20] Daisuke Miyashita, Edward H. Lee, and Boris Murmann. Convolutional neural networks using logarithmic data representation. arXiv preprint arXiv:1603.01025, 2016. URL http://arxiv.org/abs/1603.01025.
[21] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Training deep neural networks with low precision multiplications. In International Conference on Learning Representations (workshop contribution), 2014. URL http://arxiv.org/abs/1412.7024.
[22] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. arXiv preprint arXiv:1502.02551, 2015. URL http://arxiv.org/abs/1502.02551.
[23] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
[24] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a Nash equilibrium. CoRR, abs/1706.08500, 2017. URL http://arxiv.org/abs/1706.08500.
[25] Darrell Williamson. Dynamically scaled fixed point arithmetic.
In Communications, Computers and Signal Processing, 1991 IEEE Pacific Rim Conference on, pages 315–318. IEEE, 1991.
Model Evidence from Nonequilibrium Simulations

Michael Habeck
Statistical Inverse Problems in Biophysics, Max Planck Institute for Biophysical Chemistry & Institute for Mathematical Stochastics, University of Göttingen, 37077 Göttingen, Germany

Abstract

The marginal likelihood, or model evidence, is a key quantity in Bayesian parameter estimation and model comparison. For many probabilistic models, computation of the marginal likelihood is challenging, because it involves a sum or integral over an enormous parameter space. Markov chain Monte Carlo (MCMC) is a powerful approach to compute marginal likelihoods. Various MCMC algorithms and evidence estimators have been proposed in the literature. Here we discuss the use of nonequilibrium techniques for estimating the marginal likelihood. Nonequilibrium estimators build on recent developments in statistical physics and are known as annealed importance sampling (AIS) and reverse AIS in probabilistic machine learning. We introduce estimators for the model evidence that combine forward and backward simulations and show for various challenging models that the evidence estimators outperform forward and reverse AIS.

1 Introduction

The marginal likelihood or model evidence is a central quantity in Bayesian inference [1, 2], but notoriously difficult to compute. If the likelihood L(x) ≡ p(y|x, M) models data y and the prior π(x) ≡ p(x|M) expresses our knowledge about the parameters x of the model M, the posterior p(x|y, M) and the model evidence Z are given by

p(x|y, M) = p(y|x, M) p(x|M) / p(y|M) = L(x) π(x) / Z ,    Z ≡ p(y|M) = ∫ L(x) π(x) dx .    (1)

Parameter estimation proceeds by drawing samples from p(x|y, M), and different ways to model the data are ranked by their evidence. For example, two models M1 and M2 can be compared via their Bayes factor, which is proportional to the ratio of their marginal likelihoods p(y|M1)/p(y|M2) [3]. Often the posterior (and perhaps also the prior) is intractable in the sense that it is not possible to compute the normalizing constant, and therefore also the evidence, analytically. This makes it difficult to compare different models via their posterior probability and model evidence. Markov chain Monte Carlo (MCMC) algorithms [4] only require unnormalized probability distributions and are among the most powerful and accurate methods to estimate the marginal likelihood, but they are computationally expensive. Therefore, it is important to develop efficient MCMC algorithms that can sample from the posterior and allow us to compute the marginal likelihood. There is a close analogy between the marginal likelihood and the log-partition function or free energy from statistical physics [5]. Therefore, many concepts and algorithms originating in statistical physics have been applied to problems arising in probabilistic inference. Here we show that nonequilibrium fluctuation theorems (FTs) [6, 7, 8] can be used to estimate the marginal likelihood from forward and reverse simulations.

2 Marginal likelihood estimation by annealed importance sampling

A common MCMC strategy to sample from the posterior and estimate the evidence is to simulate a sequence of distributions p_k that bridge between the prior and the posterior [9]. Samples can either be generated in sequential order, as in annealed importance sampling (AIS) [10], or in parallel, as in replica-exchange Monte Carlo or parallel tempering [11, 12].
Several methods have been proposed to estimate the marginal likelihood from MCMC samples, including thermodynamic integration (TI) [9], annealed importance sampling (AIS) [10], nested sampling (NS) [13] and the density of states (DOS) [14]. Most of these approaches (TI, DOS and NS) assume that we can draw exact samples from the intermediate distributions p_k, typically after a sufficiently large number of equilibration steps has been simulated. AIS, on the other hand, does not assume that the samples are equilibrated after each annealing step, which makes AIS very attractive for analyzing complex models for which equilibration is hard to achieve.

AIS employs a sequence of K + 1 probability distributions p_k and Markov transition operators T_k whose stationary distributions are p_k, i.e. ∫ T_k(x|x′) p_k(x′) dx′ = p_k(x). In a Bayesian setting, p_0 is the prior and p_K the posterior. Typically, p_k is intractable, meaning that we only know an unnormalized version f_k, but not the normalizer Z_k, i.e. p_k(x) = f_k(x)/Z_k where Z_k = ∫ f_k(x) dx, and the evidence is Z = Z_K/Z_0. Often it is convenient to write f_k as an energy based model f_k(x) = exp{−E_k(x)}. In Bayesian inference, a popular choice is f_k(x) = [L(x)]^{β_k} π(x) with prior π(x) and likelihood L(x); β_k is an inverse temperature schedule that starts at β_0 = 0 (prior) and ends at β_K = 1 (posterior).

AIS samples paths x = [x_0, x_1, ..., x_{K−1}] according to the probability

P_f[x] = T_{K−1}(x_{K−1}|x_{K−2}) ··· T_1(x_1|x_0) p_0(x_0)    (2)

where, following Crooks [8], calligraphic symbols and square brackets denote quantities that depend on the entire path. The subscript indicates that the path is generated by a forward simulation, which starts from p_0 and propagates the initial state through a sequence of new states by the successive action of the Markov kernels T_1, T_2, ..., T_{K−1}. The importance weight of a path is

w[x] = ∏_{k=0}^{K−1} f_{k+1}(x_k)/f_k(x_k) = exp{ −∑_{k=0}^{K−1} [ E_{k+1}(x_k) − E_k(x_k) ] } .    (3)

The average weight over many paths is a consistent and unbiased estimator of the model evidence Z = Z_K/Z_0, which follows from [15, 10] (see supplementary material for details):

⟨w⟩_f = ∫ w[x] P_f[x] D[x] = Z    (4)

where the average ⟨·⟩_f is an integral over all possible paths generated by the forward sequence of transition kernels (D[x] = dx_0 ··· dx_{K−1}). The average weight of M forward paths x^(i) is an estimate of the model evidence: Z ≈ (1/M) ∑_i w[x^(i)]. This estimator is at the core of AIS and its variants [10, 16]. To avoid overflow problems, it is numerically more stable to compute log weights. Logarithmic weights also arise naturally from a physical perspective, where −log w[x] is identified as the work required to generate the path x.
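As a sketch of how Eqs. (2)–(4) translate into code, the following Python fragment accumulates log weights along forward paths for the common case f_k(x) = [L(x)]^{β_k} π(x), in which the weight increment reduces to −(β_{k+1} − β_k) E(x_k) with E = −log L. The function signatures are our own assumptions, not part of the paper.

import numpy as np

def ais_log_weights(x0, betas, energy, transition, rng):
    """Accumulate AIS log weights (Eq. 3) for M forward paths in parallel.

    x0         -- array holding M samples drawn from the prior p_0
    betas      -- inverse temperature schedule, beta_0 = 0, ..., beta_K = 1
    energy     -- E(x) = -log L(x), vectorized over the M samples
    transition -- transition(x, beta, rng): MCMC sweep with target f_beta
    """
    x, log_w = x0.copy(), np.zeros(len(x0))
    for k in range(len(betas) - 1):
        log_w -= (betas[k + 1] - betas[k]) * energy(x)   # weight update
        if k < len(betas) - 2:
            x = transition(x, betas[k + 1], rng)         # propagate the path
    return log_w

def log_evidence_ais(log_w):
    """log Z estimated as the log of the average weight, stably in log space."""
    return np.logaddexp.reduce(log_w) - np.log(len(log_w))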
3 Nonequilibrium fluctuation theorems

Nonequilibrium fluctuation theorems (FTs) quantify the degree of irreversibility of a stochastic process by relating the probability of generating a path in a forward simulation to the probability of generating the exact same path in a time-reversed simulation. If the Markov kernels T_k satisfy detailed balance, time reversal is achieved by applying the same sequence of kernels in reverse order. For Markov kernels not satisfying detailed balance, the definition is slightly more general [7, 10]. Here we assume that all kernels T_k satisfy detailed balance, which is valid for Markov chains based on the Metropolis algorithm and its variants [4].

Under these assumptions, the probability of generating the path x in a reverse simulation starting in x_{K−1} is

P_r[x] = T_1(x_0|x_1) ··· T_{K−1}(x_{K−2}|x_{K−1}) p_K(x_{K−1}) .    (5)

Averages over the reverse paths are indicated by ⟨·⟩_r. The detailed fluctuation theorem [6, 8] relates the probabilities of generating x in a forward and reverse simulation (see supplementary material):

P_f[x]/P_r[x] = (Z_K/Z_0) ∏_{k=0}^{K−1} f_k(x_k)/f_{k+1}(x_k) = Z/w[x] = exp{ W[x] − ΔF }    (6)

where the physical analogs of the path weight and the marginal likelihood were introduced, namely the work W[x] = −log w[x] = ∑_k [ E_{k+1}(x_k) − E_k(x_k) ] and the free energy difference ΔF = −log Z = −log(Z_K/Z_0). Various demonstrations of relation (6) exist in the physics and machine learning literature [6, 7, 8, 10, 17]. Lower and upper bounds sandwiching the log evidence [17, 18, 16] follow directly from equation (6) and the non-negativity of the relative entropy:

D_KL(P_f ‖ P_r) = ∫ P_f[x] log(P_f[x]/P_r[x]) D[x] = ⟨W⟩_f − ΔF ≥ 0 .

From D_KL(P_r ‖ P_f) ≥ 0 we obtain an upper bound on log Z, such that overall we have:

⟨log w⟩_f = −⟨W⟩_f ≤ log Z ≤ −⟨W⟩_r = ⟨log w⟩_r .    (7)

Grosse et al. use these bounds to assess the convergence of bidirectional Monte Carlo [18]. Thanks to the detailed fluctuation theorem (Eq. 6), we can also relate the marginal distributions of the work resulting from many forward and reverse simulations:

p_f(W) = ∫ δ(W − W[x]) P_f[x] D[x] = p_r(W) e^{W − ΔF}    (8)

which is Crooks' fluctuation theorem (CFT) [7]. CFT tells us that the work distributions p_f and p_r cross exactly at W = ΔF. Therefore, by plotting histograms of the work obtained in forward and backward simulations we can read off an estimate for the negative log evidence. The Jarzynski equality (JE) [15] follows directly from CFT due to the normalization of p_r:

∫ p_f(W) e^{−W} dW = ⟨e^{−W}⟩_f = e^{−ΔF} .    (9)

JE restates the AIS estimator ⟨w⟩_f = Z (Eq. 4) in terms of the physical quantities. Remarkably, JE holds for any stochastic dynamics bridging between the initial and target distribution. This suggests using fast annealing protocols to drag samples from the prior into the posterior. However, the JE involves an exponential average in which paths requiring the least work contribute most strongly. These paths correspond to work realizations that reside in the left tail of p_f. With faster annealing, the chance of generating a minimal-work path decreases exponentially and becomes a rare event.

A key feature of CFT and JE is that they do not require exact samples from the stationary distributions p_k, which is needed in applications of TI or DOS. For complex probabilistic models, the states generated by the kernels T_k will typically "lag behind" due to slow mixing, especially near phase transitions. The k-th state of the forward path will follow the intermediate distribution

q_k(x_k) = ∫ ∏_{l=1}^{k} T_l(x_l|x_{l−1}) p_0(x_0) dx_0 ··· dx_{k−1} ,    q_0(x) = p_0(x) .    (10)

Unless the transition kernels T_k mix very rapidly, q_k ≠ p_k for k > 0. Consider the common case in Bayesian inference where E_k(x) = β_k E(x) and E(x) = −log L(x). Then, according to inequalities (7), we have the following lower bound on the marginal likelihood

⟨log w⟩_f = −⟨W⟩_f = −∑_{k=0}^{K−1} (β_{k+1} − β_k) ⟨E⟩_{q_k}    (11)

and an analogous expression for the upper bound / reverse direction, in which the average energies along the forward path ⟨E⟩_{q_k} need to be replaced by the corresponding average energies along the backward path.

Figure 1: Nonequilibrium analysis of a Gaussian toy model. (A) Work distributions p_f and p_r shown in blue and green. The correct free energy difference (minus the log evidence) is indicated by a dashed line. (B) Comparison of the stationary distributions p_k and the marginal distributions q_k generated by the transition kernels. Shown is a 1σ band about the mean positions for p_k (blue) and q_k (green). (C) Lower and upper bounds of the log evidence (Eq. 7) and the logarithm of the exponential average over the forward work distribution for increasingly slow annealing schedules.
The difference between the forward and reverse averages is called "hysteresis" in physics. The larger the hysteresis, the more strongly the marginal likelihood bounds will disagree, and the more uncertain our guess of the model evidence will be. The opposite limiting case is slow annealing and full equilibration, where the bound (Eq. 11) approaches thermodynamic integration (see supplementary material). So we expect a tradeoff between switching fast in order to save computation time and the desire to control the amount of hysteresis, which otherwise makes it difficult to extract accurate evidence estimates from the simulations.

4 Illustration for a tractable model

Let us illustrate the nonequilibrium results for a tractable model where the initial, the target and all intermediate distributions are Gaussians p_k(x) = N(x; μ_k, σ_k²) with means μ_k and standard deviations σ_k > 0. The transition kernels are also Gaussian: T_k(x|x′) = N(x; (1 − λ_k)μ_k + λ_k x′, (1 − λ_k²)σ_k²) with λ_k ∈ [0, 1] controlling the speed of convergence: for λ_k = 0 convergence is immediate, whereas for λ_k → 1 the chain generated by T_k converges infinitely slowly. Note that the kernels T_k satisfy detailed balance, therefore backward simulations apply the same kernels in reverse order. The energies and exact log partition functions are E_k(x) = (x − μ_k)²/(2σ_k²) and log Z_k = log(2πσ_k²)/2.

We bridge between an initial distribution with mean μ_0 = 20 and standard deviation σ_0 = 10 and a target with μ_K = 0, σ_K = 1 using K = 10 intermediate distributions, and compute the work distributions resulting from forward/backward simulations. Both distributions indeed cross at W = −log Z = log(σ_0/σ_K) = log 10, as predicted by CFT (see Fig. 1A). Figure 1B illustrates the difference between the marginal distribution of the samples after k annealing steps, q_k (Eq. 10), and the stationary distribution p_k. The marginal distributions q_k are also Gaussian, but their means and variances diverge from the means and variances of the stationary distributions. This divergence results in hysteresis if the annealing process is forced to progress very rapidly without equilibrating the samples (quenching). Figure 1C confirms the validity of the JE (Eq. 9) and of the lower and upper bounds (Eq. 7). For short annealing protocols, the bounds are very conservative, whereas the Jarzynski equality gives the correct evidence even for fast protocols (small K). In realistic applications, however, we cannot compute the work distribution p_f over the entire range of work values. In fast annealing simulations, it becomes increasingly difficult to explore the left tail of the work distribution, such that in practice the accuracy of the JE estimator deteriorates for too small K.
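The following self-contained Python sketch reproduces the toy experiment under the stated parameters (μ_0 = 20, σ_0 = 10, μ_K = 0, σ_K = 1, K = 10); the linear interpolation of the means, the geometric interpolation of the widths, and the common value λ_k = 0.5 are illustrative assumptions not fixed by the text.

import numpy as np

rng = np.random.default_rng(0)
K, M, lam = 10, 100000, 0.5
mu = np.linspace(20.0, 0.0, K + 1)       # assumed mean schedule
sigma = np.geomspace(10.0, 1.0, K + 1)   # assumed width schedule

def energy(x, k):
    return 0.5 * ((x - mu[k]) / sigma[k]) ** 2

def kernel(x, k):
    """Gaussian transition T_k with stationary distribution p_k."""
    m = (1.0 - lam) * mu[k] + lam * x
    return rng.normal(m, np.sqrt(1.0 - lam ** 2) * sigma[k])

def forward_work():
    x = rng.normal(mu[0], sigma[0], M)   # x_0 ~ p_0
    W = np.zeros(M)
    for k in range(K):
        W += energy(x, k + 1) - energy(x, k)
        if k < K - 1:
            x = kernel(x, k + 1)         # x_{k+1} = T_{k+1}(x_k)
    return W

def reverse_work():
    x = rng.normal(mu[K], sigma[K], M)   # x_{K-1} ~ p_K, cf. Eq. (5)
    W = np.zeros(M)
    for k in range(K - 1, -1, -1):
        W += energy(x, k + 1) - energy(x, k)
        if k > 0:
            x = kernel(x, k)             # x_{k-1} = T_k(x_k)
    return W

Wf, Wr = forward_work(), reverse_work()
# Jarzynski estimate of log Z versus the exact value -log 10
print(np.logaddexp.reduce(-Wf) - np.log(M), -np.log(10.0))

Histogramming Wf and Wr from this sketch should show the two work distributions crossing near log 10, as in Fig. 1A.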
Algorithm 1 Bennett's acceptance ratio (BAR)

Require: work values W_f^(i), W_r^(i) from M forward and reverse nonequilibrium simulations, tolerance ε (e.g. ε = 10⁻⁴)
1: Z ← (1/M) ∑_i exp{−W_f^(i)}    ▷ Jarzynski estimator
2: repeat
3:   LHS ← ∑_i 1 / (1 + Z exp{W_f^(i)})
4:   RHS ← ∑_i 1 / (1 + Z⁻¹ exp{−W_r^(i)})
5:   Z ← Z · LHS/RHS
6: until |log(LHS/RHS)| < ε
7: return Z

5 Using the fluctuation theorem to estimate the evidence

To use the fluctuation theorem for evidence estimation, we run two sets of simulations. As in AIS, forward simulations start from a prior sample, which is successively propagated by the transition kernels T_k. For each forward path x^(i) the total work W_f^(i) is recorded. We also run reverse simulations starting from a posterior sample. For complex inference problems it is generally impossible to generate an exact sample from the posterior. However, in some cases the mode of the posterior is known, or powerful methods for locating the posterior maximum exist. We can then generate a posterior sample by applying the transition operator T_K many times, starting from the posterior mode. The reverse simulations could also be started from the final states generated by the forward simulations, drawn according to their importance weights w_f^(i) ∝ exp{−W_f^(i)}. Another possibility to generate a posterior sample is to start from the data, if we want to evaluate an intractable generative model such as a restricted Boltzmann machine. The posterior sample is then propagated by the reverse chain of transition operators. Again, we accumulate the total work generated by the reverse simulation, W_r^(i). The reverse simulation corresponds to reverse AIS, proposed by Burda et al. [16].

5.1 Jarzynski and cumulant estimators

There are various options to estimate the evidence from forward and backward simulations. We can apply the Jarzynski equality to W_f^(i) and W_r^(i), which corresponds to the estimators used in AIS [10] and reverse AIS [16]. According to Eq. (7) we can also compute an interval that likely contains the log evidence, but typically this interval will be quite large. Hummer [19] has developed estimators based on the cumulant expansion of p_f and p_r:

log Z ≈ −⟨W⟩_f + var_f(W)/2 ,    log Z ≈ −⟨W⟩_r − var_r(W)/2    (12)

where var_f(W) and var_r(W) denote the sample variances of the work generated during the forward and reverse simulations. The cumulant estimators increase/decrease the lower/upper bound of the log evidence (Eq. 7) by the sample variance of the work. The forward and reverse cumulant estimators can also be combined into a single estimate [19]:

log Z ≈ −[ (⟨W⟩_f + ⟨W⟩_r)/2 + (var_f(W) − var_r(W))/12 ] .    (13)

5.2 Bennett's acceptance ratio

Another powerful method is Bennett's acceptance ratio (BAR) [20, 21], which is based on the observation that, according to CFT (Eq. 8),

∫ h(W; ΔF) p_f(W) e^{−W} dW = ∫ h(W; ΔF) p_r(W) e^{−ΔF} dW

for any function h. Therefore, any choice of h gives an implicit estimator for ΔF. Bennett showed [20, 9] that the minimum mean squared error is achieved for h ∝ (p_f + p_r)⁻¹, leading to the implicit equation

∑_i 1 / (1 + Z exp{W_f^(i)}) = ∑_i 1 / (1 + Z⁻¹ exp{−W_r^(i)}) .    (14)

By numerically solving equation (14) for Z, we obtain an estimator of the evidence based on Bennett's acceptance ratio. A straightforward way to solve the BAR equation is to iterate the multiplicative update Z^(t+1) ← Z^(t) LHS(Z^(t))/RHS(Z^(t)), where LHS and RHS are the left and right hand sides of equation (14) and the superscript (t) indicates an iteration index. Algorithm 1 provides pseudocode to compute the BAR estimator (further details are given in the supplementary material).

Figure 2: Performance of evidence estimators on the Gaussian toy model. M = 100 forward and reverse simulations were run for schedules of increasing length K. This experiment was repeated 1000 times to probe the stability of the estimators. Shown is the difference between the log evidence estimate and its true value −log 10. The average over all repetitions is shown as a red line; the light band indicates one standard deviation over all repetitions. (A) Cumulant estimator (Eq. 12) based on the forward simulation. (B) The combined cumulant estimator (Eq. 13). (C) Forward AIS estimator. (D) Reverse AIS. (E) BAR. (F) Histogram estimator.
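A compact Python version of Algorithm 1, working in log space to avoid overflow for large work values, could look as follows; the function name and the iteration cap are our own choices.

import numpy as np

def bar_log_evidence(Wf, Wr, tol=1e-4, max_iter=1000):
    """Solve the BAR equation (Eq. 14) for log Z by multiplicative updates."""
    log_z = np.logaddexp.reduce(-Wf) - np.log(len(Wf))  # Jarzynski initializer
    for _ in range(max_iter):
        # exp overflow saturates a term to zero, which is the correct limit
        lhs = np.sum(1.0 / (1.0 + np.exp(log_z + Wf)))   # sum_i [1 + Z e^{W_f}]^-1
        rhs = np.sum(1.0 / (1.0 + np.exp(-log_z - Wr)))  # sum_i [1 + e^{-W_r}/Z]^-1
        log_z += np.log(lhs) - np.log(rhs)               # Z <- Z * LHS/RHS
        if abs(np.log(lhs) - np.log(rhs)) < tol:
            break
    return log_z

On work samples such as Wf and Wr from the toy model sketch above, bar_log_evidence(Wf, Wr) should come out close to −log 10.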
5.3 Histogram estimator

Here we introduce yet another way of combining forward/backward simulations and estimating the model evidence. According to CFT, we have:

W_f^(i) ∼ p_f(W) ,    W_r^(i) ∼ p_r(W) = p_f(W) e^{−W} / Z .

The idea is to combine all samples W_f^(i) and W_r^(i) to estimate p_f, from which we can then obtain the evidence by using the JE (Eq. 9). Thanks to the CFT, the samples from the reverse simulation contribute most strongly to the integral in the JE. Therefore, if we can use the reverse paths to estimate the forward work distribution, p_f will be quite accurate in the region that is most relevant for evaluating the JE.

Estimating p_f from W_f^(i) and W_r^(i) is mathematically equivalent to estimating the density of states (DOS) (i.e. the marginal distribution of the log likelihood) from equilibrium simulations run at the two inverse temperatures β = 0 and β = 1. We can therefore directly apply histogram techniques [14, 22], used to analyze equilibrium simulations, to estimate p_f from nonequilibrium simulations (details are given in the supplementary material). Histogram techniques result in a non-parametric estimate of the work distribution:

p_f(W) ≈ ∑_j p_j δ(W − W_j)    (15)

where all sampled work values W_f^(i) and W_r^(i) were combined into a single set W_j, and p_j are normalized weights associated with each W_j. Using the JE, we obtain

Z ≈ ∑_j p_j e^{−W_j}    (16)

which is best evaluated in log space. The histogram iterations [14] used to determine p_j and Z are very similar to the multiplicative updates that solve the BAR equation (Eq. 14). After running the histogram iterations, we obtain a non-parametric maximum likelihood estimate of p_f (Eq. 15).
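Since Eq. (16) mixes weights spanning many orders of magnitude, evaluating it in log space amounts to a logsumexp; a minimal sketch, with log_p denoting the log of the normalized weights p_j:

import numpy as np

def log_evidence_histogram(log_p, W):
    """Evaluate log Z = log sum_j p_j exp(-W_j) (Eq. 16) in log space."""
    return np.logaddexp.reduce(log_p - W)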
It is also possible to carry out a Bayesian analysis and derive a Gibbs sampler for p_f, which does not only provide a point estimate for log Z but also quantifies its uncertainty (see supplementary material for details).

We studied the performance of the evidence estimators on forward/backward simulations of the Gaussian toy model. The cumulant estimators (Figs. 2A, B) are systematically biased in case of rapid annealing (small K). The combined cumulant estimator (Fig. 2B) is a significant improvement over the forward estimator, which does not take the reverse simulation data into account. The forward and reverse AIS estimators are shown in Figs. 2C and 2D. For this system, the evidence estimates derived from the reverse simulation are systematically more accurate than the AIS estimate based on the forward simulation, which is clear given that the work distribution from reverse simulations p_r is much more concentrated than the forward work distribution p_f (see Fig. 1A). The most accurate, least biased and most stable estimators are BAR (Fig. 2E) and the histogram estimator (Fig. 2F), which both combine forward and backward simulations into a single evidence estimate.

6 Experiments

We studied the performance of the nonequilibrium marginal likelihood estimators on various challenging probabilistic models, including Markov random fields and Gaussian mixture models. A Python package implementing the work simulations and evidence estimators can be downloaded from https://github.com/michaelhabeck/paths.

6.1 Ising model

Our first test system is a 32 × 32 Ising model for which the log evidence can be computed exactly: log Z = 1339.27 [23]. A single configuration consists of 1024 spins x_i = ±1. The energies of the intermediate distributions are E_k(x) = −β_k ∑_{i∼j} x_i x_j, where i ∼ j indicates nearest neighbors on a 2D square lattice. We generate M = 1000 forward and reverse paths using a linear inverse temperature schedule that interpolates between β_0 = 0 and β_K = 1 with K = 1000. Forward simulations start from random spin configurations. For the reverse simulations, we start in one of the two ground states with all spins either −1 or +1. The T_k are Metropolis kernels based on p_k: a new spin configuration is proposed by flipping a randomly selected spin and accepted or rejected according to Metropolis' rule. The single spin-flip transitions are repeated N times at constant β_k, i.e. N is the number of equilibration steps after a perturbation was induced by lowering the temperature. The larger N, the more time we allow the simulation to equilibrate, and the closer q_k will be to p_k.
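A minimal single-spin-flip sketch of the forward protocol just described is given below; periodic boundary conditions and the helper names are our own assumptions.

import numpy as np

def ising_forward_work(L=32, K=1000, N=10, seed=0):
    """Work of one forward Ising path, W = sum_k (beta_{k+1} - beta_k) E(x_k)."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=(L, L))
    betas = np.linspace(0.0, 1.0, K + 1)

    def spin_energy(s):
        # E(x) = -sum_{i~j} x_i x_j with periodic boundaries (each bond once)
        return -np.sum(s * np.roll(s, 1, axis=0)) - np.sum(s * np.roll(s, 1, axis=1))

    W = 0.0
    for k in range(K):
        W += (betas[k + 1] - betas[k]) * spin_energy(x)  # temperature perturbation
        for _ in range(N):                               # N Metropolis steps at beta_{k+1}
            i, j = rng.integers(L, size=2)
            nb = (x[(i - 1) % L, j] + x[(i + 1) % L, j]
                  + x[i, (j - 1) % L] + x[i, (j + 1) % L])
            dE = 2 * x[i, j] * nb                        # energy change of a flip
            if dE <= 0 or rng.random() < np.exp(-betas[k + 1] * dE):
                x[i, j] *= -1
    return W

Collecting M such work values (and their reverse counterparts started from a ground state) yields the inputs for the BAR and histogram estimators above.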
Figure 3A shows the work distributions obtained with N = 1000 equilibration steps per temperature perturbation. Even though the forward and reverse work distributions overlap only weakly, the evidence estimates obtained with BAR and the histogram estimator are quite accurate, with 1338.05 (BAR) and 1338.28 (histogram estimator), which differs only by approx. 1 nat from the true evidence and corresponds to a relative error of ≈ 9 × 10⁻⁴ (BAR) and 7 × 10⁻⁴ (histogram estimator). Forward and reverse AIS provide less accurate estimates of the log evidence: 1333.66 (AIS) and 1342.05 (RAISE). The lower and upper bounds are very broad, ⟨log w⟩_f = 1290.5 and ⟨log w⟩_r = 1352.0, which results from hysteresis effects. Figure 3B zooms into the average energies obtained during the forward and reverse simulations and compares them with the average energy of a fully equilibrated simulation. The average energies differ most strongly at inverse temperatures close to the critical value β_crit ≈ 0.44, at which the Ising model undergoes a second-order phase transition. We also tested the performance of the estimators as a function of the number of equilibration steps N. As already observed for the Gaussian toy model, BAR and the histogram estimator outperform the Jarzynski estimators (AIS and RAISE) also in case of the Ising model (see Fig. 3C).

Figure 3: Evidence estimation for the 32 × 32 Ising model. (A) Work distributions obtained for K = 1000 annealing and N = 1000 equilibration steps. (B) Average energies ⟨E⟩_f and ⟨E⟩_r at different annealing steps k in comparison to the average energy of the stationary distribution ⟨E⟩_β. Shown is a zoom into the inverse temperature range from 0.4 to 0.7; the average energies agree quite well outside this interval. (C) Evidence estimates for an increasing number of equilibration steps N. Light/dark blue: lower/upper bounds ⟨log w⟩_f / ⟨log w⟩_r; light/dark green: forward/reverse AIS estimators log⟨w⟩_f / log⟨w⟩_r; light red: BAR; dark red: histogram estimator. For N > 1000, BAR and the histogram estimator produce virtually identical evidence estimates.

6.2 Ten-state Potts model

Next, we performed simulations of the ten-state Potts model defined over a 32 × 32 lattice, where the spins of the Ising model are replaced by integer colors x_i ∈ {1, ..., 10} and the interaction x_i x_j is replaced by 2δ(x_i, x_j). This model is significantly more challenging than the Ising model, because it undergoes a first-order phase transition and has an astronomically larger number of states (10^1024 colorings rather than 2^1024 spin configurations). We performed forward/backward simulations using a linear inverse temperature schedule with β_0 = 0, β_K = 1 and a fixed computational budget K · N = 10⁹. Figure 4A shows that there seems to be no advantage in increasing the number of intermediate distributions at the cost of reducing the number of equilibration steps. Again, BAR and the histogram estimator perform very similarly. The Gibbs sampling version of the histogram estimator also provides the posterior of log Z (see Fig. 4B). For too few equilibration steps N, this distribution is rather broad or even slightly biased, but for large N the log Z posterior concentrates around the correct log evidence.

Figure 4: Evidence estimation for the Potts model and RBM. (A) Estimated log evidence of the Potts model for a fixed computational budget K · N = 10⁹, where M = 100 and ten repetitions were computed. The reference value log Z = 1742 (obtained with parallel tempering) is shown as a dashed black line. (B) log Z distributions obtained with the Gibbs sampling version of the histogram estimator for K = 1000 and a varying number of equilibration steps. (C) Work distributions obtained for a marginal and full RBM (light/dark blue: forward/reverse simulation of the marginal model; light/dark green: forward/reverse simulation of the full model).

6.3 Restricted Boltzmann machine

The restricted Boltzmann machine (RBM) is a common building block of deep learning hierarchies. The RBM is an intractable MRF with bipartite interactions: E(v, h) = −(aᵀv + bᵀh + vᵀWh), where a, b are the visible and hidden biases and W are the couplings between the visible and hidden units v_i and h_j. Here we compare annealing of the full model, E_k(v, h) = β_k E(v, h), against annealing of the marginal model, E_k(h) = −β_k log ∑_v exp{−E(v, h)}. The full model can be simulated using a Gibbs sampler, which is straightforward since the conditional distributions are Bernoulli. To sample from the marginal model, we use a Metropolis kernel similar to the one used for the Ising model. To start the reverse simulations, we randomly pick an image from the training set and generate an initial hidden state by sampling from the conditional distribution p(h|v). We then run 100 steps of Gibbs sampling with T_K to obtain a posterior sample.
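Because the visible units are conditionally independent, the sum over v in the marginal energy has a closed form for binary visible units: E_k(h) = −β_k [bᵀh + ∑_i log(1 + exp(a_i + (Wh)_i))]. A sketch, under the assumption of {0, 1}-valued visible units:

import numpy as np

def marginal_energy(h, a, b, W, beta):
    """E_k(h) = -beta * log sum_v exp{-E(v,h)} for binary visible units.

    The sum over v factorizes into a product of per-unit terms, giving a
    softplus over the visible pre-activations (computed stably below).
    """
    pre = a + W @ h                    # visible pre-activations, shape (n_v,)
    softplus = np.logaddexp(0.0, pre)  # log(1 + exp(pre)) without overflow
    return -beta * (b @ h + softplus.sum())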
We ran tests on an RBM with 784 visible and 500 hidden units trained on the MNIST handwritten digits dataset [24] with contrastive divergence using 25 steps [25]. Since the true log evidence is not known, we use a reference value obtained with parallel tempering (PT): log Z ≈ 451.42. Figure 4C compares evidence estimates based on annealing simulations of the full model against the marginal model. Both annealing approaches provide very similar evidence estimates, 451.43 (full model) and 451.48 (marginal model), that are close to the PT result. However, simulation of the marginal model is three times faster than that of the full model. Therefore, it seems beneficial to evaluate and train RBMs based on sampling and annealing of the marginal model p(h) rather than the full model p(v, h).

6.4 Gaussian mixture model

Finally, we consider a sort of "data annealing" strategy in which independent data points are added one by one, as in sequential Monte Carlo [10]: E_k(x) = −∑_{l<k} log p(y_l|x, M). We applied thermal and data annealing to a three-component Gaussian mixture model with means −5, 0, 5, standard deviations 1, 3, 0.5 and equal weights. We generated K = 100 data points and applied both types of annealing to estimate the mixture parameters and the marginal likelihood. Parallel tempering produced a reference log evidence of −259.49. A Gibbs sampler utilizing cluster responsibilities as auxiliary variables served as transition kernel. Forward simulations started from prior samples, where conjugate priors were used for the component means, widths and weights. The reverse simulations started from a posterior sample obtained by running K-means followed by 100 Gibbs sampling iterations. Thermal annealing with as many temperatures as data points and 10 Gibbs sampling steps per temperature estimated a log evidence of −259.72 ± 0.60 (M = 100, 10 repetitions). For 100 Gibbs steps, we obtain −259.47 ± 0.36. Data annealing with 10 Gibbs steps per addition of a data point yields −257.52 ± 0.97, which seems to be slightly biased. Increasing the number of Gibbs steps to 100 improves the accuracy of the log evidence estimate: −258.32 ± 1.21. This shows that there might be some potential in a data annealing strategy, especially for larger datasets.

7 Summary

This paper applies nonequilibrium techniques to estimate the marginal likelihood of an intractable probabilistic model. We outline the most important results from nonequilibrium statistical physics that are relevant to marginal likelihood estimation and relate them to machine learning algorithms such as AIS [10], RAISE [16] and bidirectional Monte Carlo [17, 18]. We introduce two estimators, BAR and the histogram estimator, that are currently not used in the context of probabilistic inference. We study the performance of the estimators on a toy system and various challenging probabilistic models, including Markov random fields and Gaussian mixture models. The two evidence estimators perform very similarly and are superior to forward/reverse AIS and the cumulant estimators. Compared to BAR, the histogram estimator has the additional advantage that it also quantifies the uncertainty of the evidence estimate.

Acknowledgments

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) SFB 860, subproject B09.

References

[1] E. T. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, Cambridge UK, 2003.
[2] D. J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, Cambridge UK, 2003.
7 Summary

This paper applies nonequilibrium techniques to estimate the marginal likelihood of an intractable probabilistic model. We outline the most important results from nonequilibrium statistical physics that are relevant to marginal likelihood estimation and relate them to machine learning algorithms such as AIS [10], RAISE [16] and bidirectional Monte Carlo [17, 18]. We introduce two estimators, BAR and the histogram estimator, that are currently not used in the context of probabilistic inference. We study the performance of the estimators on a toy system and on various challenging probabilistic models, including Markov random fields and Gaussian mixture models. The two evidence estimators perform very similarly and are superior to forward/reverse AIS and the cumulant estimators. Compared to BAR, the histogram estimator has the additional advantage that it also quantifies the uncertainty of the evidence estimate.

Acknowledgments

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) SFB 860, subproject B09.

References

[1] E. T. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, Cambridge UK, 2003.
[2] D. J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, Cambridge UK, 2003.
[3] R. Kass and A. Raftery. Bayes factors. Journal of the American Statistical Association, 90:773-775, 1995.
[4] J. S. Liu. Monte Carlo strategies in scientific computing. Springer, 2001.
[5] K. H. Knuth, M. Habeck, N. K. Malakar, A. M. Mubeen, and B. Placek. Bayesian evidence and model selection. Digit. Signal Process., 47(C):50-67, 2015.
[6] G. E. Crooks. Nonequilibrium measurements of free energy differences for microscopically reversible Markovian systems. Journal of Statistical Physics, 90(5-6):1481-1487, 1998.
[7] G. E. Crooks. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys Rev E, 60:2721-2726, 1999.
[8] G. E. Crooks. Excursions in statistical dynamics. PhD thesis, University of California at Berkeley, 1999.
[9] A. Gelman and X. Meng. Simulating normalizing constants: From importance sampling to bridge sampling to path sampling. Statistical Science, 13:163-185, 1998.
[10] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11:125-139, 2001.
[11] R. H. Swendsen and J.-S. Wang. Replica Monte Carlo simulation of spin glasses. Phys Rev Lett, 57:2607-2609, 1986.
[12] C. J. Geyer. Markov chain Monte Carlo maximum likelihood. In Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface, pages 156-163, 1991.
[13] J. Skilling. Nested sampling for general Bayesian computation. Bayesian Analysis, 1:833-860, 2006.
[14] M. Habeck. Evaluation of marginal likelihoods using the density of states. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 22, pages 486-494. JMLR: W&CP 22, 2012.
[15] C. Jarzynski. Nonequilibrium equality for free energy differences. Phys Rev Lett, 78:2690-2693, 1997.
[16] Y. Burda, R. Grosse, and R. Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In Artificial Intelligence and Statistics, pages 102-110, 2015.
[17] R. B. Grosse, Z. Ghahramani, and R. P. Adams. Sandwiching the marginal likelihood using bidirectional Monte Carlo. arXiv preprint arXiv:1511.02543, 2015.
[18] R. B. Grosse, S. Ancha, and D. M. Roy. Measuring the reliability of MCMC inference with bidirectional Monte Carlo. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2451-2459. Curran Associates, Inc., 2016.
[19] G. Hummer. Fast-growth thermodynamic integration: Error and efficiency analysis. The Journal of Chemical Physics, 114(17):7330-7337, 2001.
[20] C. H. Bennett. Efficient estimation of free energy differences from Monte Carlo data. J. Comput. Phys., 22:245, 1976.
[21] M. R. Shirts, E. Bair, G. Hooker, and V. S. Pande. Equilibrium free energies from nonequilibrium measurements using maximum-likelihood methods. Phys Rev Lett, 91(14):140601, 2003.
[22] M. Habeck. Bayesian estimation of free energies from equilibrium simulations. Phys Rev Lett, 109(10):100601, 2012.
[23] P. D. Beale. Exact Distribution of Energies in the Two-Dimensional Ising Model. Phys Rev Lett, 76:78-81, 1996.
[24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[25] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Comput., 14(8):1771-1800, 2002.
Minimal Exploration in Structured Stochastic Bandits

Richard Combes, Centrale-Supelec / L2S, [email protected]
Stefan Magureanu, KTH, EE School / ACL, [email protected]
Alexandre Proutiere, KTH, EE School / ACL, [email protected]

Abstract

This paper introduces and addresses a wide class of stochastic bandit problems where the function mapping the arm to the corresponding reward exhibits some known structural properties. Most existing structures (e.g. linear, Lipschitz, unimodal, combinatorial, dueling, ...) are covered by our framework. We derive an asymptotic instance-specific regret lower bound for these problems, and develop OSSB, an algorithm whose regret matches this fundamental limit. OSSB is not based on the classical principle of "optimism in the face of uncertainty" or on Thompson sampling, and rather aims at matching the minimal exploration rates of sub-optimal arms as characterized in the derivation of the regret lower bound. We illustrate the efficiency of OSSB using numerical experiments in the case of the linear bandit problem and show that OSSB outperforms existing algorithms, including Thompson sampling.

1 Introduction

Numerous extensions of the classical stochastic MAB problem [30] have been recently investigated. These extensions are motivated by applications arising in various fields including e.g. on-line services (search engines, display ads, recommendation systems, ...), and most often concern structural properties of the mapping of arms to their average rewards. This mapping can for instance be linear [14], convex [2], unimodal [36], Lipschitz [3], or may exhibit some combinatorial structure [10, 29, 35].

In their seminal paper, Lai and Robbins [30] develop a comprehensive theory for MAB problems with unrelated arms, i.e., without structure. They derive asymptotic (as the time horizon grows large) instance-specific regret lower bounds and propose algorithms achieving this minimal regret. These algorithms have since been considerably simplified, so that today we have a few elementary index-based^1 and yet asymptotically optimal algorithms [18, 26]. Developing a similar comprehensive theory for MAB problems with structure is considerably more challenging. Due to the structure, the rewards observed for a given arm actually provide side-information about the average rewards of other arms^2. This side-information should be exploited so as to accelerate as much as possible the process of learning the average rewards. Very recently, instance-specific regret lower bounds and asymptotically optimal algorithms could be derived only for a few MAB problems with finite sets of arms and specific structures, namely linear [31], Lipschitz [32] and unimodal [12].

In this paper, we investigate a large class of structured MAB problems. This class extends the classical stochastic MAB problem [30] in two directions: (i) it allows for any arbitrary structure; (ii) it allows different kinds of feedback. More precisely, our generic MAB problem is as follows.

^1 An algorithm is index-based if the arm selection in each round is solely made by comparing the indexes of the arms, where the index of an arm only depends on the rewards observed for this arm.
^2 Index-based algorithms cannot be optimal in MAB problems with structure.

In each round, the decision maker selects an arm from a finite set $\mathcal{X}$. Each arm $x \in \mathcal{X}$ has an unknown parameter
$\theta(x) \in \mathbb{R}$, and when this arm is chosen in round $t$, the decision maker observes a real-valued random variable $Y(x,t)$ with expectation $\theta(x)$ and distribution $\nu(\theta(x))$. The observations $(Y(x,t))_{x \in \mathcal{X}, t \geq 1}$ are independent across arms and rounds. If $x$ is chosen, she also receives an unobserved and deterministic^3 reward $\mu(x, \theta)$, where $\theta = (\theta(x))_{x \in \mathcal{X}}$. The parameter $\theta$ lies in a compact set $\Theta$ that encodes the structural properties of the problem. The set $\Theta$, the class of distributions $\nu$, and the mapping $(x, \theta) \mapsto \mu(x, \theta)$ encode the structure of the problem and are known to the decision maker, whereas $\theta$ is initially unknown. We denote by $x^\pi(t)$ the arm selected in round $t$ under algorithm $\pi$; this selection is based on previously selected arms and the corresponding observations. Hence the set $\Pi$ of all possible arm selection rules consists of algorithms $\pi$ such that for any $t \geq 1$, $x^\pi(t)$ is $\mathcal{F}^\pi_t$-measurable, where $\mathcal{F}^\pi_t$ is the $\sigma$-algebra generated by $(x^\pi(1), Y(x^\pi(1),1), \ldots, x^\pi(t-1), Y(x^\pi(t-1), t-1))$. The performance of an algorithm $\pi \in \Pi$ is defined through its regret up to round $T$:

$$R^\pi(T, \theta) = T \max_{x \in \mathcal{X}} \mu(x, \theta) - \sum_{t=1}^{T} \mathbb{E}[\mu(x(t), \theta)].$$

The above MAB problem is very generic, as any kind of structure can be considered. In particular, our problem includes classical, linear, unimodal, dueling, and Lipschitz bandit problems as particular examples; see Section 3 for details. Our contributions in this paper are as follows:

- We derive a tight instance-specific regret lower bound satisfied by any algorithm for our generic structured MAB problem.
- We develop OSSB (Optimal Sampling for Structured Bandits), a simple and yet asymptotically optimal algorithm, i.e., its regret matches our lower bound. OSSB optimally exploits the structure of the problem so as to minimize regret.
- We briefly exemplify the numerical performance of OSSB in the case of linear bandits. OSSB outperforms existing algorithms (including Thompson Sampling [2], GLM-UCB [16], and a recently proposed asymptotically optimal algorithm [31]).

As noticed in [31], for structured bandits (even for linear bandits), no algorithm based on the principle of optimism (a la UCB) or on that of Thompson sampling can achieve an asymptotically minimal regret. The design of OSSB does not follow these principles, and is rather inspired by the derivation of the regret lower bound. To obtain this bound, we characterize the minimal rates at which sub-optimal arms have to be explored. OSSB aims at sampling sub-optimal arms so as to match these rates. The latter depend on the unknown parameter $\theta$, and so OSSB needs to accurately estimate $\theta$. OSSB hence alternates between three phases: exploitation (playing arms with high empirical rewards), exploration (playing sub-optimal arms at well-chosen rates), and estimation (getting to know $\theta$ to tune these exploration rates). The main technical contribution of this paper is a finite-time regret analysis of OSSB for any generic structure. In spite of the simplicity of the algorithm, its analysis is involved. Not surprisingly, it uses concentration-of-measure arguments, but it also requires establishing that the minimal exploration rates (derived in the regret lower bound) are essentially smooth with respect to the parameter $\theta$. This complication arises due to the (additional) estimation phase of OSSB: the minimal exploration rates should converge as our estimate of $\theta$ gets more and more accurate.

The remainder of the paper is organized as follows. In the next section, we survey recent results on structured stochastic bandits.
In Section 3, we illustrate the versatility of our MAB problem by casting most existing structured bandit problems into our framework. Section 4 is devoted to the derivation of the regret lower bound. In Sections 5 and 6, we present OSSB and provide an upper bound on its regret. Finally, Section 7 explores the numerical performance of OSSB in the case of linear structures.

^3 Usually in MAB problems, the reward is a random variable given as feedback to the decision maker. In our model, the reward is deterministic (as if it was averaged), but not observed, as the only observation is $Y(x,t)$ if $x$ is chosen in round $t$. We will illustrate in Section 3 why usual MAB formulations are specific instances of our model.

2 Related work

Structured bandits have generated many recent contributions since they find natural applications in the design of computer systems, for instance: recommender systems and information retrieval [28, 11], routing in networks and network optimization [22, 5, 17], and influence maximization in social networks [8]. A large number of existing structures have been investigated, including: linear [14, 34, 1, 31, 27] (linear bandits are treated here as a partial monitoring game), combinatorial [9, 10, 29, 35, 13], Lipschitz [32], and unimodal [36, 12]. The results in this paper cover all models considered in the above body of work and are the first that can be applied to problems with any structure in the set of allowed parameters. Here, we focus on generic stochastic bandits with a finite but potentially large number of arms. Both continuous as well as adversarial versions of the problem have been investigated; see the survey [6]. The performance of Thompson sampling for generic bandit problems has appeared in the literature [15, 20]; however, the recent results in [31] prove that Thompson sampling is not optimal for all structured bandits. Generic structured bandits were treated in [7, 21]. The authors show that the regret of any algorithm must scale as $C(\theta)\ln T$ when $T \to \infty$, where $C(\theta)$ is the optimal value of a semi-infinite linear program, and propose asymptotically optimal algorithms. However, the proposed algorithms are involved and have poor numerical performance; furthermore, their performance guarantees are asymptotic, and no finite-time analysis is available. To our knowledge, our algorithm is the first which covers completely generic MAB problems, is asymptotically optimal, and is amenable to a finite-time regret analysis. Our algorithm is in the same spirit as the DMED algorithm, presented in [24], as well as the algorithm in [31], but is generic enough to be optimal in any structured bandit setting. Similar to DMED, our algorithm relies on repeatedly solving an optimization problem and then exploring according to its solution, thus moving away from the UCB family of algorithms.

3 Examples

The class of MAB problems described in the introduction covers most known bandit problems, as illustrated in the six following examples.

Classical Bandits. The classical MAB problem [33] with Bernoulli rewards is obtained by making the following choices: $\theta(x) \in [0,1]$; $\Theta = [0,1]^{|\mathcal{X}|}$; for any $a \in [0,1]$, $\nu(a)$ is the Bernoulli distribution with mean $a$; for all $x \in \mathcal{X}$, $\mu(x, \theta) = \theta(x)$.

Linear Bandits. To get finite linear bandit problems [14],[31], in our framework we choose $\mathcal{X}$ as a finite subset of $\mathbb{R}^d$; we pick an unknown vector $\phi \in \mathbb{R}^d$ and define $\theta(x) = \langle \phi, x \rangle$ for all $x \in \mathcal{X}$; the set of possible parameters is $\Theta = \{\theta = (\langle \phi, x \rangle)_{x \in \mathcal{X}} : \phi \in \mathbb{R}^d\}$;
for any $a \in \mathbb{R}$, $\nu(a)$ is a Gaussian distribution with unit variance centered at $a$; for all $x \in \mathcal{X}$, $\mu(x, \theta) = \theta(x)$. Observe that our framework also includes generalized linear bandit problems as those considered in [16]: we just need to define $\mu(x, \theta) = g(\theta(x))$ for some function $g$.

Dueling Bandits. To model dueling bandits [27] using our framework, the set of arms is $\mathcal{X} = \{(i,j) \in \{1, \ldots, d\}^2\}$; for any $x = (i,j) \in \mathcal{X}$, $\theta(x) \in [0,1]$ denotes the probability that $i$ is better than $j$, with the conventions that $\theta(i,j) = 1 - \theta(j,i)$ and $\theta(i,i) = 1/2$; $\Theta = \{\theta : \exists i^\star : \theta(i^\star, j) > 1/2, \forall j \neq i^\star\}$ is the set of parameters such that there exists a Condorcet winner; for any $a \in [0,1]$, $\nu(a)$ is the Bernoulli distribution with mean $a$; finally, we define the rewards as $\mu((i,j), \theta) = \frac{1}{2}(\theta(i^\star, i) + \theta(i^\star, j) - 1)$. Note that the best arm is $(i^\star, i^\star)$ and has zero reward.

Lipschitz Bandits. For finite Lipschitz bandits [32], the set of arms $\mathcal{X}$ is a finite subset of a metric space endowed with a distance $\ell$. For any $x \in \mathcal{X}$, $\theta(x)$ is a scalar, the mapping $x \mapsto \theta(x)$ is Lipschitz continuous with respect to $\ell$, and the set of parameters is $\Theta = \{\theta : |\theta(x) - \theta(y)| \leq \ell(x,y)\ \forall x, y \in \mathcal{X}\}$. As in classical bandits, $\mu(x, \theta(x)) = \theta(x)$. The structure is encoded by the distance $\ell$, and is an example of a local structure, so that arms close to each other have similar rewards.

Unimodal Bandits. Unimodal bandits [23],[12] are obtained as follows: $\mathcal{X} = \{1, \ldots, |\mathcal{X}|\}$, $\theta(x)$ is a scalar, and $\mu(x, \theta(x)) = \theta(x)$. The added assumption is that $x \mapsto \theta(x)$ is unimodal. Namely, there exists $x^\star \in \mathcal{X}$ such that this mapping is strictly increasing on $\{1, \ldots, x^\star\}$ and strictly decreasing on $\{x^\star, \ldots, |\mathcal{X}|\}$.

Combinatorial bandits. The combinatorial bandit problems with bandit feedback (see [9]) are just particular instances of linear bandits where the set of arms $\mathcal{X}$ is a subset of $\{0,1\}^d$. Now, to model combinatorial problems with semi-bandit feedback, we need a slight extension of the framework described in the introduction. More precisely, the set of arms is still a subset of $\{0,1\}^d$. The observation $Y(x,t)$ is a $d$-dimensional random variable with independent components, with mean $\theta(x)$ and distribution $\nu(\theta(x))$ (a product distribution). There is an unknown vector $\phi \in \mathbb{R}^d$ such that $\theta(x) = (\phi(1)x(1), \ldots, \phi(d)x(d))$, and $\mu(x, \theta) = \sum_{i=1}^{d} \phi(i)x(i)$ (linear reward). With semi-bandit feedback, the decision maker gets detailed information about the various components of the selected arm.

4 Regret Lower Bound

To derive regret lower bounds, a strategy consists in restricting attention to so-called uniformly good algorithms [30]: $\pi \in \Pi$ is uniformly good if $R^\pi(T, \theta) = o(T^a)$ when $T \to \infty$ for all $a > 0$ and all $\theta \in \Theta$. A simple change-of-measure argument is then enough to prove that for MAB problems without structure, under any uniformly good algorithm, the number of times that a sub-optimal arm $x$ should be played is greater than $\ln T / d(\theta(x), \theta(x^\star))$ as the time horizon $T$ grows large, where $x^\star$ denotes the optimal arm and $d(\theta(x), \theta(x^\star))$ is the Kullback-Leibler divergence between the distributions $\nu(\theta(x))$ and $\nu(\theta(x^\star))$. Refer to [25] for a direct and elegant proof. For our structured MAB problems, we follow the same strategy and derive constraints on the number of times a sub-optimal arm $x$ is played under any uniformly good algorithm. We show that this number is greater than $c(x, \theta)\ln T$ asymptotically, where the $c(x, \theta)$'s are the solutions of a semi-infinite linear program [19] whose constraints directly depend on the structure of the problem.
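For the unstructured case, the minimal exploration rates can be computed directly from the formula above. The following is a small sketch for Bernoulli rewards; the clipping constant used to avoid division by zero is an implementation detail assumed here.

```python
import numpy as np

def kl_bernoulli(p, q):
    """KL divergence d(p, q) between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def classical_rates(theta):
    """Minimal exploration rates c(x, theta) = 1 / d(theta(x), theta(x*))
    for the unstructured Bernoulli bandit: any uniformly good algorithm
    must play a sub-optimal arm x about c(x, theta) * ln(T) times."""
    theta = np.asarray(theta, dtype=float)
    best = theta.max()
    c = np.zeros_like(theta)
    sub = theta < best
    c[sub] = 1.0 / kl_bernoulli(theta[sub], best)
    return c

print(classical_rates([0.3, 0.5, 0.7]))  # ~[2.95, 11.47, 0]; the optimal arm gets rate 0
```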
Before stating our lower bound, we introduce the following notations. For $\theta \in \Theta$, let $x^\star(\theta)$ be the optimal arm (we assume that it is unique), and define $\mu^\star(\theta) = \mu(x^\star(\theta), \theta)$. For any $x \in \mathcal{X}$, we denote by $D(\theta, \lambda, x)$ the Kullback-Leibler divergence between distributions $\nu(\theta(x))$ and $\nu(\lambda(x))$.

Assumption 1 The optimal arm $x^\star(\theta)$ is unique.

Theorem 1 Let $\pi \in \Pi$ be a uniformly good algorithm. For any $\theta \in \Theta$, we have:
$$\liminf_{T \to \infty} \frac{R^\pi(T, \theta)}{\ln T} \geq C(\theta), \qquad (1)$$
where $C(\theta)$ is the value of the optimization problem:
$$\underset{\eta(x) \geq 0,\, x \in \mathcal{X}}{\text{minimize}} \quad \sum_{x \in \mathcal{X}} \eta(x)\big(\mu^\star(\theta) - \mu(x, \theta)\big) \qquad (2)$$
$$\text{subject to} \quad \sum_{x \in \mathcal{X}} \eta(x) D(\theta, \lambda, x) \geq 1, \quad \forall \lambda \in \Lambda(\theta), \qquad (3)$$
where
$$\Lambda(\theta) = \{\lambda \in \Theta : D(\theta, \lambda, x^\star(\theta)) = 0,\ x^\star(\lambda) \neq x^\star(\theta)\}. \qquad (4)$$

Let $(c(x, \theta))_{x \in \mathcal{X}}$ denote the solutions of the semi-infinite linear program (2)-(3). In this program, $\eta(x)\ln T$ indicates the number of times arm $x$ is played. The regret lower bound may be understood as follows. The set $\Lambda(\theta)$ is the set of "confusing" parameters: if $\lambda \in \Lambda(\theta)$ then $D(\theta, \lambda, x^\star(\theta)) = 0$, so $\theta$ and $\lambda$ cannot be differentiated by only sampling the optimal arm $x^\star(\theta)$. Hence distinguishing $\theta$ from $\lambda$ requires sampling suboptimal arms $x \neq x^\star(\theta)$. Further, since any uniformly good algorithm must identify the best arm with high probability to ensure low regret, and $x^\star(\lambda) \neq x^\star(\theta)$, any algorithm must distinguish these two parameters. The constraint (3) states that for any $\lambda$, a uniformly good algorithm should perform a hypothesis test between $\theta$ and $\lambda$, and $\sum_{x \in \mathcal{X}} \eta(x) D(\theta, \lambda, x) \geq 1$ is required to ensure there is enough statistical information to perform this test. In summary, for a sub-optimal arm $x$, $c(x, \theta)\ln T$ represents the asymptotically minimal number of times $x$ should be sampled. It is noted that this lower bound is instance-specific (it depends on $\theta$), and is attainable, as we propose an algorithm which attains it. The proof of Theorem 1 is presented in the appendix and leverages techniques used in the context of controlled Markov chains [21].

Next, we show that with usual structures as those considered in Section 3, the semi-infinite linear program (2)-(3) reduces to simpler optimization problems (e.g. an LP) and can sometimes even be solved explicitly. Simplifying (2)-(3) is important for us, since our proposed asymptotically optimal algorithm requires solving this program. In the following examples, please refer to Section 3 for the definitions and notations. As mentioned already, the solution of (2)-(3) for classical MAB is $c(x, \theta) = 1/d(\theta(x), \theta(x^\star))$.

Linear bandits. For this class of problems, [31] recently proved that (2)-(3) is equivalent to the following optimization problem:
$$\underset{\eta(x) \geq 0,\, x \in \mathcal{X}}{\text{minimize}} \quad \sum_{x \in \mathcal{X}} \eta(x)\big(\theta(x^\star) - \theta(x)\big)$$
$$\text{subject to} \quad x^\top \Big(\sum_{z \in \mathcal{X}} \eta(z)\, z z^\top\Big)^{-1} x \leq \frac{(\theta(x^\star) - \theta(x))^2}{2}, \quad \forall x \neq x^\star.$$
Refer to [31] for the proof of this result and for insightful discussions.

Lipschitz bandits. It can be shown that for Bernoulli rewards (the reward of arm $x$ is $\theta(x)$), (2)-(3) reduces to the following LP [32]:
$$\underset{\eta(x) \geq 0,\, x \in \mathcal{X}}{\text{minimize}} \quad \sum_{x \in \mathcal{X}} \eta(x)\big(\theta(x^\star) - \theta(x)\big)$$
$$\text{subject to} \quad \sum_{z \in \mathcal{X}} \eta(z)\, d\big(\theta(z), \max\{\theta(z), \theta(x^\star) - \ell(x, z)\}\big) \geq 1, \quad \forall x \neq x^\star.$$
While the solution is not explicit, the problem reduces to an LP with $|\mathcal{X}|$ variables and $2|\mathcal{X}|$ constraints.

Dueling bandits. The solution of (2)-(3) is as follows [27]. Assume, to simplify, that for any $i \neq i^\star$, there exists a unique $j$ minimizing $\frac{\mu((i,j),\theta)}{d(\theta(i,j),\, 1/2)}$ and such that $\theta(i,j) < 1/2$. Let $j(i)$ denote this index. Then for any $x = (i,j)$, we have
$$c(x, \theta) = \frac{1\{j = j(i)\}}{d(\theta(i,j),\, 1/2)}.$$

Unimodal bandits.
For such problems, it is shown in [12] that the solution of (2)-(3) is given by: for all $x \in \mathcal{X}$,
$$c(x, \theta) = \frac{1\{|x - x^\star| = 1\}}{d(\theta(x), \theta(x^\star))}.$$
Hence, in unimodal bandits, under an asymptotically optimal algorithm, the sub-optimal arms contributing to the regret (i.e., those that need to be sampled $\Omega(\ln T)$ times) are neighbours of the optimal arm.

5 The OSSB Algorithm

In this section we propose OSSB (Optimal Sampling for Structured Bandits), an algorithm that is asymptotically optimal, i.e., its regret matches the lower bound of Theorem 1. OSSB pseudo-code is presented in Algorithm 1 and takes as input two parameters $\varepsilon, \gamma > 0$ that control the amount of exploration performed by the algorithm. The design of OSSB is guided by the necessity to explore sub-optimal arms as much as prescribed by the solution of the optimization problem (2)-(3), i.e., the sub-optimal arm $x$ should be explored $c(x, \theta)\ln T$ times. If $\theta$ were known, then sampling arm $x$ exactly $c(x, \theta)\ln T$ times for all $x$, and then selecting the arm with the largest empirical reward, should yield minimal regret. Since $\theta$ is unknown, we have to estimate it. Define the empirical averages
$$m(x, t) = \frac{\sum_{s=1}^{t} Y(x, s)\, 1\{x(s) = x\}}{\max(1, N(x, t))},$$
where $x(s)$ is the arm selected in round $s$, and $N(x, t) = \sum_{s=1}^{t} 1\{x(s) = x\}$ is the number of times $x$ has been selected up to round $t$.

Algorithm 1 OSSB($\varepsilon$, $\gamma$)
  $s(0) \leftarrow 0$; $N(x,1), m(x,1) \leftarrow 0$, $\forall x \in \mathcal{X}$   {Initialization}
  for $t = 1, \ldots, T$ do
    Compute the solution $(c(x, m(t)))_{x \in \mathcal{X}}$ of the optimization problem (2)-(3), where $m(t) = (m(x,t))_{x \in \mathcal{X}}$
    if $N(x,t) \geq c(x, m(t))(1+\gamma)\ln t$ for all $x$ then
      $s(t) \leftarrow s(t-1)$
      $x(t) \leftarrow x^\star(m(t))$   {Exploitation}
    else
      $s(t) \leftarrow s(t-1) + 1$
      $X(t) \leftarrow \arg\min_{x \in \mathcal{X}} N(x,t)/c(x, m(t))$
      $\bar{X}(t) \leftarrow \arg\min_{x \in \mathcal{X}} N(x,t)$
      if $N(\bar{X}(t), t) \leq \varepsilon\, s(t)$ then
        $x(t) \leftarrow \bar{X}(t)$   {Estimation}
      else
        $x(t) \leftarrow X(t)$   {Exploration}
      end if
    end if
    Select arm $x(t)$ and observe $Y(x(t), t)$   {Update statistics}
    $m(x, t+1) \leftarrow m(x, t)$ and $N(x, t+1) \leftarrow N(x, t)$ for all $x \neq x(t)$
    $m(x(t), t+1) \leftarrow \big(Y(x(t), t) + m(x(t), t)\, N(x(t), t)\big) / \big(N(x(t), t) + 1\big)$
    $N(x(t), t+1) \leftarrow N(x(t), t) + 1$
  end for

The key idea of OSSB is to use $m(t) = (m(x,t))_{x \in \mathcal{X}}$ as an estimator for $\theta$, and to explore arms so as to match the estimated solution of the optimization problem (2)-(3), so that $N(x,t) \approx c(x, m(t))\ln t$ for all $x$. This should work if we can ensure certainty equivalence, i.e., $m(t) \to \theta$ when $t \to \infty$ at a sufficiently fast rate. The OSSB algorithm has three components. More precisely, under OSSB, we alternate between three phases: exploitation, estimation and exploration. In round $t$, one first attempts to identify the optimal arm. We calculate $x^\star(m(t))$, the arm with the largest empirical reward. If $N(x,t) \geq c(x, m(t))(1+\gamma)\ln t$ for all $x$, we enter the exploitation phase: we have enough information to infer that $x^\star(m(t)) = x^\star(\theta)$ w.h.p., and we select $x(t) = x^\star(m(t))$. Otherwise, we need to gather more information to identify the optimal arm. We have two goals: (i) make sure that all components of $\theta$ are accurately estimated, and (ii) make sure that $N(x,t) \geq c(x, m(t))\ln t$ for all $x$. We maintain a counter $s(t)$ of the number of times we have not entered the exploitation phase. We choose between two possible arms, namely the least played arm $\bar{X}(t)$ and the arm $X(t)$ which is farthest from satisfying $N(x,t) \geq c(x, m(t))\ln t$. We then consider the number of times $\bar{X}(t)$ has been selected.
If $N(\bar{X}(t), t)$ is much smaller than $s(t)$, there is a possibility that $\bar{X}(t)$ has not been selected enough times, so that $\theta(\bar{X}(t))$ is not accurately estimated; we then enter the estimation phase, where we select $\bar{X}(t)$ to ensure that certainty equivalence holds. Otherwise, we enter the exploration phase, where we select $X(t)$ to explore as dictated by the solution of (2)-(3), since $c(x, m(t))$ should be close to $c(x, \theta)$.
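A compact way to read Algorithm 1 is as a single per-round selection rule. The following Python sketch mirrors its three phases, with the solver of (2)-(3) abstracted as a callable `c` (an assumption of this sketch; in practice solving the program is structure-dependent, as discussed above). The `log(t + 1)` term and the clipping constant are implementation details assumed here.

```python
import numpy as np

def ossb_select(t, N, m, s, c, eps=0.01, gamma=0.0):
    """One round of OSSB's three-phase selection (a sketch of Algorithm 1).

    t : current round (>= 1);  N[x] : play counts;  m[x] : empirical means;
    s : counter of rounds in which exploitation was not entered;
    c : callable mapping an empirical-mean vector to the estimated
        exploration rates (c(x, m(t)))_x, i.e. the solution of (2)-(3).
    Returns (selected arm, updated counter s)."""
    rates = c(m)
    # Exploitation: every arm already satisfies its estimated rate.
    if np.all(N >= (1.0 + gamma) * rates * np.log(t + 1)):
        return int(np.argmax(m)), s
    s += 1
    under_explored = int(np.argmin(N / np.maximum(rates, 1e-12)))  # X(t)
    least_played = int(np.argmin(N))                               # X-bar(t)
    if N[least_played] <= eps * s:
        return least_played, s      # estimation phase
    return under_explored, s        # exploration phase
```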
Lemma 1 is in fact interesting in its own right, since optimization problems such as (2)-(3) occur in all bandit problems. (iii) The third step is to upper bound the number of times the solution to (2)-(3) is not well estimated, so that C(m(t)) ? (1 + ?)C(?) for some ? > 0. From the previous step this implies that ||m(t) ? ?||? ? ?(?) for some well-chosen ?(?) > 0. Using a deviation result (Lemma 2 in Appendix), we show that the expected regret caused by such events is finite and | upper bounded by ??2|X 2 (?) . (vi) Finally a counting argument ensures that the regret incurred when C(?) ? C(m(t)) ? (1 + ?)C(?) i.e. the solution (2)-(3)Pis well estimated is upper bounded by (C(?)(1 + ?) + 2??(?))ln T , where ?(?) = |X |||c(?)||? x?X (?? (?) ? ?(x, ?)). Putting everything together we obtain the finite-time regret upper bound:   2|X | ? ? R (T ) ? ? (?) G(?, |X |) + 2 + (C(?)(1 + ?) + 2??(?))(1 + ?)ln T. ?? (?) This implies that: R? (T ) ? (C(?)(1 + ?) + 2??(?))(1 + ?). T ?? ln T lim sup The above holds for all ? > 0, which yields the result. 7 7 Numerical Experiments To assess the efficiency of OSSB, we compare its performance for reasonable time horizons to the state of the art algorithms for linear bandit problems. We considered a linear bandit with Gaussian rewards of unit variance, 81 arms of unit length, d = 3 and 10 parameters ? in [0.2, 0.4]3 , generated uniformly at random. In our implementation of OSSB, we use ? = ? = 0 since ? is typically chosen 0 in the literature (see [18]) and the performance of the algorithm does not appear sensitive to the choice ofp?. As baselines we select the extension of Thompson Sampling presented p in [4](using vt = R 0.5dln(t/?), we chose ? = 0.1, R = 1), GLM-UCB (using ?(t) = 0.5ln(t)), an extension of UCB [16] and the algorithm presented in [31]. Thompson Sampling (Agrawal et al.) GLM?UCB (Filippi et al.) OSSB Lattimore et al. 0 Average Regret 1000 2000 3000 4000 5000 Figure 1 presents the regret of the various algorithms averaged over the 10 parameters. OSSB clearly exhibits the best performance in terms of average regret. 0e+00 2e+04 4e+04 6e+04 8e+04 1e+05 Time Figure 1: Regret of various algorithms in the linear bandit setting with 81 arms and d = 3. Regret is averaged over 10 randomly generated parameters and 100 trials. Colored regions represent the 95% confidence intervals. 8 Conclusion In this paper, we develop a unified solution to a wide class of stochastic structured bandit problems. For the first time, we derive, for these problems, an asymptotic regret lower bound and devise OSSB, a simple and yet asymptotically optimal algorithm. The implementation of OSSB requires that we solve the optimization problem defining the minimal exploration rates of the sub-optimal arms. In the most general case, this problem is a semi-infinite linear program, which can be hard to solve in reasonable time. Studying the complexity of this semi-infinite LP depending on the structural properties of the reward function is an interesting research direction. Indeed any asymptotically optimal algorithm needs to learn the minimal exploration rates of sub-optimal arms, and hence needs to solve this semi-infinite LP. Characterizing the complexity of the latter would thus yield important insights into the trade-off between the complexity of the sequential arm selection algorithms and their regret. Acknowledgments A. Proutiere?s research is supported by the ERC FSA (308267) grant. 
This work is supported by the French Agence Nationale de la Recherche (ANR) under grant ANR-16-CE40-0002 (project BADASS).

References

[1] Y. Abbasi-Yadkori, D. Pal, and C. Szepesvari. Improved algorithms for linear stochastic bandits. In NIPS, 2011.
[2] A. Agarwal, D. P. Foster, D. J. Hsu, S. M. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. In NIPS, pages 1035-1043, 2011.
[3] R. Agrawal. The continuum-armed bandit problem. SIAM J. Control Optim., 33(6):1926-1951, 1995.
[4] S. Agrawal and N. Goyal. Thompson sampling for contextual bandits with linear payoffs. In ICML, 2013.
[5] B. Awerbuch and R. Kleinberg. Online linear optimization and adaptive routing. J. Comput. Syst. Sci., 74(1):97-114, 2008.
[6] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122, 2012.
[7] A. Burnetas and M. Katehakis. Optimal adaptive policies for sequential allocation problems. Advances in Applied Mathematics, 17(2):122-142, 1996.
[8] A. Carpentier and M. Valko. Revealing graph bandits for maximizing local influence. In AISTATS, 2016.
[9] N. Cesa-Bianchi and G. Lugosi. Combinatorial bandits. J. Comput. Syst. Sci., 78(5):1404-1422, 2012.
[10] W. Chen, Y. Wang, and Y. Yuan. Combinatorial multi-armed bandit: General framework and applications. In ICML, 2013.
[11] R. Combes, S. Magureanu, A. Proutiere, and C. Laroche. Learning to rank: Regret lower bound and efficient algorithms. In SIGMETRICS, 2015.
[12] R. Combes and A. Proutiere. Unimodal bandits: Regret lower bounds and optimal algorithms. In ICML, 2014.
[13] R. Combes, S. Talebi, A. Proutiere, and M. Lelarge. Combinatorial bandits revisited. In NIPS, 2015.
[14] V. Dani, T. Hayes, and S. Kakade. Stochastic linear optimization under bandit feedback. In COLT, 2008.
[15] A. Durand and C. Gagné. Thompson sampling for combinatorial bandits and its application to online feature selection. In Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
[16] S. Filippi, O. Cappé, A. Garivier, and C. Szepesvári. Parametric bandits: The generalized linear case. In NIPS, pages 586-594, 2010.
[17] Y. Gai, B. Krishnamachari, and R. Jain. Combinatorial network optimization with unknown variables: Multi-armed bandits with linear rewards and individual observations. IEEE/ACM Trans. on Networking, 20(5):1466-1478, 2012.
[18] A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In COLT, 2011.
[19] K. Glashoff and S.-A. Gustafson. Linear Optimization and Approximation. Springer Verlag, Berlin, 1983.
[20] A. Gopalan, S. Mannor, and Y. Mansour. Thompson sampling for complex online problems. In ICML, 2014.
[21] T. L. Graves and T. L. Lai. Asymptotically efficient adaptive choice of control laws in controlled Markov chains. SIAM J. Control and Optimization, 35(3):715-743, 1997.
[22] A. György, T. Linder, G. Lugosi, and G. Ottucsák. The on-line shortest path problem under partial monitoring. Journal of Machine Learning Research, 8(10), 2007.
[23] U. Herkenrath. The n-armed bandit with unimodal structure. Metrika, 30(1):195-210, 1983.
[24] J. Honda and A. Takemura. An asymptotically optimal bandit algorithm for bounded support models. In COLT, 2010.
[25] E. Kaufmann, O. Cappé, and A. Garivier. On the complexity of best-arm identification in multi-armed bandit models. Journal of Machine Learning Research, 17(1):1-42, 2016.
[26] E. Kaufmann, N. Korda, and R. Munos.
Thompson sampling: An asymptotically optimal finite-time analysis. In ALT, 2012.
[27] J. Komiyama, J. Honda, H. Kashima, and H. Nakagawa. Regret lower bound and optimal algorithm in dueling bandit problem. In COLT, 2015.
[28] B. Kveton, Z. Wen, A. Ashkan, and C. Szepesvari. Cascading bandits: Learning to rank in the cascade model. In NIPS, 2015.
[29] B. Kveton, Z. Wen, A. Ashkan, and C. Szepesvari. Tight regret bounds for stochastic combinatorial semi-bandits. In AISTATS, 2015.
[30] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[31] T. Lattimore and C. Szepesvari. The end of optimism? An asymptotic analysis of finite-armed linear bandits. AISTATS, 2016.
[32] S. Magureanu, R. Combes, and A. Proutiere. Lipschitz bandits: Regret lower bounds and optimal algorithms. COLT, 2014.
[33] H. Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected Papers, pages 169-177. Springer, 1985.
[34] P. Rusmevichientong and J. Tsitsiklis. Linearly parameterized bandits. Math. Oper. Res., 35(2), 2010.
[35] Z. Wen, A. Ashkan, H. Eydgahi, and B. Kveton. Efficient learning in large-scale combinatorial semi-bandits. In ICML, 2015.
[36] J. Yu and S. Mannor. Unimodal bandits. In ICML, 2011.
Learned D-AMP: Principled Neural Network based Compressive Image Recovery

Christopher A. Metzler, Rice University, [email protected]
Ali Mousavi, Rice University, [email protected]
Richard G. Baraniuk, Rice University, [email protected]

Abstract

Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be "unrolled" to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50x faster than BM3D-AMP and hundreds of times faster than NLR-CS.

1 Introduction

Over the last few decades, computational imaging systems have proliferated in a host of different imaging domains, from synthetic aperture radar to functional MRI and CT scanners. The majority of these systems capture linear measurements $y \in \mathbb{R}^m$ of the signal of interest $x \in \mathbb{R}^n$ via $y = Ax + \epsilon$, where $A \in \mathbb{R}^{m \times n}$ is a measurement matrix and $\epsilon \in \mathbb{R}^m$ is noise. Given the measurements $y$ and the measurement matrix $A$, a computational imaging system seeks to recover $x$. When $m < n$ this problem is underdetermined, and prior knowledge about $x$ must be used to recover the signal. This problem is broadly referred to as compressive sampling (CS) [1; 2]. There are myriad ways to use priors to recover an image $x$ from compressive measurements. In the following, we briefly describe some of these methods. Note that the ways in which these algorithms use priors span a spectrum, from simple hand-designed models to completely data-driven methods (see Figure 1).

[Figure 1: The spectrum of compressive signal recovery algorithms.]

1.1 Hand-designed recovery methods

The vast majority of CS recovery algorithms can be considered "hand-designed" in the sense that they use some sort of expert knowledge, i.e., a prior, about the structure of $x$. The most common signal prior is that $x$ is sparse in some basis. Algorithms using sparsity priors include CoSaMP [3], ISTA [4], approximate message passing (AMP) [5], and VAMP [6], among many others. Researchers have also developed priors and algorithms that more accurately describe the structure of natural images, such as minimal total variation, e.g., TVAL3 [7], Markov-tree models on the wavelet coefficients, e.g., ModelCoSaMP [8], and nonlocal self-similarity, e.g., NLR-CS [9]. Off-the-shelf denoising and compression algorithms have also been used to impose priors on the reconstruction, e.g., Denoising-based AMP (D-AMP) [10], D-VAMP [11], and C-GD [12].
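As a concrete instance of the measurement model $y = Ax + \epsilon$ introduced above, the following sketch generates toy compressive measurements. The i.i.d. Gaussian measurement matrix and its column scaling are one common choice assumed here for illustration; many practical systems instead use structured operators such as subsampled Fourier transforms.

```python
import numpy as np

def compressive_measurements(x, m, noise_std=0.0, rng=None):
    """Toy compressive sampling: y = A x + eps with m < n measurements.
    Assumes an i.i.d. Gaussian A with roughly unit-norm columns."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
    y = A @ x.ravel() + noise_std * rng.standard_normal(m)
    return y, A

# Example: a length-1024 signal sampled with a 4x undersampling ratio.
x = np.random.default_rng(0).standard_normal(1024)
y, A = compressive_measurements(x, m=256, noise_std=0.01)
```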
When applied to natural images, algorithms using advanced priors outperform simple priors, like wavelet sparsity, by a large margin [10]. The appeal of hand-designed methods is that they are based on interpretable priors and often have well-understood behavior. Moreover, when they are set up as convex optimization problems, they often have theoretical convergence guarantees. Unfortunately, among the algorithms that use accurate priors on the signal, even the fastest is too slow for many real-time applications [10]. More importantly, these algorithms do not take advantage of potentially available training data. As we will see, this leaves much room for improvement.

1.2 Data-driven recovery methods

At the other end of the spectrum are data-driven (often deep learning-based) methods that use no hand-designed models whatsoever. Instead, researchers provide neural networks (NNs) vast amounts of training data, and the networks learn how to best use the structure within the data [13-16]. The first paper to apply this approach was [13], where the authors used stacked denoising autoencoders (SDA) [17] to recover signals from their undersampled measurements. Other papers in this line of work have used either pure convolutional layers (DeepInverse [15]) or a combination of convolutional and fully connected layers (DR2-Net [16] and ReconNet [14]) to build deep learning frameworks capable of solving the CS recovery problem. As demonstrated in [13], these methods can compete with state-of-the-art methods in terms of accuracy while running thousands of times faster. Unfortunately, these methods are held back by the fact that there exists almost no theory governing their performance and that, so far, they must be trained for specific measurement matrices and noise levels.

1.3 Mixing hand-designed and data-driven methods for recovery

The third class of recovery algorithms blends data-driven models with hand-designed algorithms. These methods first use expert knowledge to set up a recovery algorithm and then use training data to learn priors within this algorithm. Such methods benefit from the ability to learn more realistic signal priors from the training data, while still maintaining the interpretability and guarantees that made hand-designed methods so appealing. Algorithms of this class can be divided into two subcategories. The first subcategory uses a black-box neural network that performs some function within the algorithm, such as the proximal mapping. The second subcategory explicitly unrolls an iterative algorithm and turns it into a deep NN. Following this unrolling, the network can be tuned with training data. Our LDAMP algorithm uses ideas from both these camps.

Black box neural nets. The simplest way to use a NN in a principled way to solve the CS problem is to treat it as a black box that performs some function, such as computing a posterior probability.

[Figure 2, panels (a) D-IT Iterations and (b) D-AMP Iterations: Reconstruction behavior of D-IT (left) and D-AMP (right) with an idealized denoiser. Because D-IT allows bias to build up over iterations of the algorithm, its denoiser becomes ineffective at projecting onto the set C of all natural images. The Onsager correction term enables D-AMP to avoid this issue. Figure adapted from [10].]

Examples of this approach include RBM-AMP and its generalizations [18-20], which use Restricted Boltzmann Machines to learn non-i.i.d.
priors; RIDE-CS [21], which uses the RIDE [22] generative model to compute the probability of a given estimate of the image; and OneNet [23], which uses a NN as a proximal mapping/denoiser. Unrolled algorithms. The second way to use a NN in a principled way to solve the CS problem is to simply take a well-understood iterative recovery algorithm and unroll/unfold it. This method is best illustrated by the the LISTA [24; 25] and LAMP [26] NNs. In these works, the authors simply unroll the iterative ISTA [4] and AMP [5] algorithms, respectively, and then treat parameters of the algorithm as weights to be learned. Following the unrolling, training data can be fed through the network, and stochastic gradient descent can be used to update and optimize its parameters. Unrolling was recently applied to the ADMM algorithm to solve the CS-MRI problem [27]. The resulting network, ADMM-Net, uses training data to learn filters, penalties, simple nonlinearities, and multipliers. Moving beyond CS, the unrolling principle has been applied successfully in speech enhancement [28], non-negative matrix factorization applied to music transcription [29], and beyond. In these applications, unrolling and training significantly improve both the quality and speed of signal reconstruction. 2 Learned D-AMP 2.1 D-IT and D-AMP Learned D-AMP (LDAMP), is a mixed hand-designed/data-driven compressive signal recovery framework that is builds on the D-AMP algorithm [10]. We describe D-AMP now, as well as the simpler denoising-based iterative thresholding (D-IT) algorithm. For concreteness, but without loss of generality, we focus on image recovery. A compressive image recovery algorithm solves the ill-posed inverse problem of finding the image x given the low-dimensional measurements y = Ax by exploiting prior information on x, such as fact that x ? C, where C is the set of all natural images. A natural optimization formulation reads argminx ky ? Axk22 subject to x ? C. (1) When no measurement noise  is present, a compressive image recovery algorithm should return the (hopefully unique) image xo at the intersection of the set C and the affine subspace {x|y = Ax} (see Figure 2). The premise of D-IT and D-AMP is that high-performance image denoisers D? , such as BM3D [30], are high-quality approximate projections onto the set C of natural images.1,2 That is, suppose 1 The notation D? indicates that the denoiser can be parameterized by the standard deviation of the noise ?. Denoisers can also be thought of as a proximal mapping with respect to the negative log likelihood of natural images [31] or as taking a gradient step with respect to the data generating function of natural images [32; 33]. 2 3 xo + ?z is a noisy observation of a natural image, with xo ? C and z ? N (0, I). An ideal denoiser D? would simply find the point in the set C that is closest to the observation xo + ?z D? (x) = argminx kxo + ?z ? xk22 subject to x ? C. (2) Combining (1) and (2) leads naturally to the D-IT algorithm, presented in (3) and illustrated in Figure 2(a). Starting from x0 = 0, D-IT takes a gradient step towards the {x|y = Ax} affine subspace and then applies the denoiser D? to move to x1 in the set C of natural images . Gradient stepping and denoising is repeated for t = 1, 2, . . . until convergence. zt D-IT Algorithm x t+1 = y ? Axt , = D?? t (xt + AH z t ). (3) Let ? t = xt + AH z t ? xo denote the difference between xt + AH z t and the true signal xo at each iteration. ? t is known as the effective noise. 
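Before turning to the statistics of the effective noise, the D-IT loop in (3) is simple enough to sketch directly. The snippet below is a schematic rather than the authors' implementation: it assumes a generic denoiser callable and uses ||z||_2 / sqrt(m) as the per-iteration noise-level estimate (the experiments in Section 5 scale this estimate by 2 for D-IT).

```python
import numpy as np

def d_it(y, A, denoise, n_iters=30):
    """Denoising-based iterative thresholding, Eqn. (3).

    denoise(v, sigma) should return an estimate of the clean signal from the
    noisy observation v, assuming noise standard deviation sigma.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iters):
        z = y - A @ x                                # residual in measurement space
        sigma_hat = np.linalg.norm(z) / np.sqrt(m)   # effective-noise estimate
        x = denoise(x + A.conj().T @ z, sigma_hat)   # denoise back toward the set C
    return x
```

D-AMP, described next, differs only in adding the Onsager correction term b^t to the residual z^t.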
At each iteration, D-IT denoises xt + AH z t = xo + ? t , i.e., the true signal plus the effective noise. Most denoisers are designed to work with ? t as additive white Gaussian noise (AWGN). Unfortunately, as D-IT iterates, the denoiser biases the intermediate solutions, and ? t soon deviates from AWGN. Consequently, the denoising iterations become less effective [5; 10; 26], and convergence slows. D-AMP differs from D-IT in that it corrects for the bias in the effective noise at each iteration t = 0, 1, . . . using an Onsager correction term bt . D-AMP Algorithm bt zt ? ?t xt+1 z t?1 divD?? t?1 (xt?1 + AH z t?1 ) , m t t = y ? Ax + b , kz t k2 = ? , m = = D?? t (xt + AH z t ). (4) The Onsager correction term removes the bias from the intermediate solutions so that the effective noise ? t follows the AWGN model expected by typical image denoisers. For more information on the Onsager correction, its origins, and its connection to the Thouless-Anderson-Palmer equations [34], t k2 see [5] and [35]. Note that kz?m serves as a useful and accurate estimate of the standard deviation of t ? [36]. Typically, D-AMP algorithms use a Monte-Carlo approximation for the divergence divD(?), which was first introduced in [37; 10]. 2.2 Denoising convolutional neural network NNs have a long history in signal denoising; see, for instance [38]. However, only recently have they begun to significantly outperform established methods like BM3D [30]. In this section we review the recently developed Denoising Convolutional Neural Network (DnCNN) image denoiser [39], which is both more accurate and far faster than competing techniques. The DnCNN neural network consists of 16 to 20 convolutional layers, organized as follows. The first convolutional layer uses 64 different 3 ? 3 ? c filters (where c denotes the number of color channels) and is followed by a rectified linear unit (ReLU) [40]. The next 14 to 18 convolutional layers each use 64 different 3 ? 3 ? 64 filters which are each followed by batch-normalization [41] and a ReLU. The final convolutional layer uses c separate 3 ? 3 ? 64 filters to reconstruct the signal. The parameters are learned via residual learning [42]. 2.3 Unrolling D-IT and D-AMP into networks The central contribution of this work is to apply the unrolling ideas described in Section 1.3 to D-IT and D-AMP to form the LDIT and LDAMP neural networks. The LDAMP network, presented in (5) and illustrated in Figure 3, consists of 10 AMP layers where each AMP layer contains two denoisers 4 Figure 3: Two layers of the LDAMP neural network. When used with the DnCNN denoiser, each denoiser block is a 16 to 20 convolutional-layer neural network. The forward and backward operators are represented as the matrices A and AH ; however function handles work as well. with tied weights. One denoiser is used to update xl , and the other is used to estimate the divergence using the Monte-Carlo approximation from [37; 10]. The LDIT network is nearly identical but does not compute an Onsager correction term and hence, only applies one denoiser per layer. One of the few challenges to unrolling D-IT and D-AMP is that, to enable training, we must use a denoiser that easily propagates gradients; a black box denoiser like BM3D will not work. This restricts us to denoisers such as DnCNN, which, fortunately, offers improved performance. LDAMP Neural Network l l?1 z l?1 divDw + AH z l?1 ) l?1 (? ? l?1 ) (x bl = zl = y ? Axl + bl , kz l k2 = ? , m ? ?l xl+1 m l l H l = Dw l (? ? l ) (x + A z ). 
, (5) l Within (5), we use the slightly cumbersome notation Dw l (? ? l ) to indicate that layer l of the network l uses denoiser D , that this denoiser depends on its weights/biases wl , and that these weights may be a function of the estimated standard deviation of the noise ? ? l . During training, the only free parameters 1 L we learn are the denoiser weights w , ...w . This is distinct from the LISTA and LAMP networks, where the authors decouple and learn the A and AH matrices used in the network [24; 26]. 3 Training the LDIT and LDAMP networks We experimented with three different methods to train the LDIT and LDAMP networks. Here we describe and compare these training methods at a high level; the details are described in Section 5. ? End-to-end training: We train all the weights of the network simultaneously. This is the standard method of training a neural network. ? Layer-by-layer training: We train a 1 AMP layer network (which itself contains a 16-20 layer denoiser) to recover the signal, fix these weights, add an AMP layer, train the second layer of the resulting 2 layer network to recover the signal, fix these weights, and repeat until we have trained a 10 layer network. ? Denoiser-by-denoiser training: We decouple the denoisers from the rest of the network and train each on AWGN denoising problems at different noise levels. During inference, the network uses its estimate of the standard deviation of the noise to select which set of denoiser weights to use. Note that, in selecting which denoiser weights to use, we must discretize the expected range of noise levels; e.g., if ? ? = 35, then we use the denoiser for noise standard deviations between 20 and 40. 5 End-to-end Layer-by-layer Denoiser-by-denoiser LDIT 32.1 26.1 28.0 LDAMP 33.1 33.1 31.6 End-to-end Layer-by-layer Denoiser-by-denoiser (a) LDIT 8.0 -2.6 22.1 LDAMP 18.7 18.7 25.9 (b) Figure 4: Average PSNRs4 of 100 40 ? 40 image reconstructions with i.i.d. Gaussian measurements m m trained at a sampling rate of m n = 0.20 and tested at sampling rates of n = 0.20 (a) and n = 0.05 (b). Comparing Training Methods. Stochastic gradient descent theory suggests that layer-by-layer and denoiser-by-denoiser training should sacrifice performance as compared to end-to-end training [43]. In Section 4.2 we will prove that this is not the case for LDAMP. For LDAMP, layer-by-layer and denoiser-by-denoiser training are minimum-mean-squared-error (MMSE) optimal. These theoretical results are born out experimentally in Tables 4(a) and 4(b). Each of the networks tested in this section consists of 10 unrolled DAMP/DIT layers that each contain a 16 layer DnCNN denoiser. Table 4(a) demonstrates that, as suggested by theory, layer-by-layer training of LDAMP is optimal; additional end-to-end training does not improve the performance of the network. In contrast, the table demonstrates that layer-by-layer training of LDIT, which represents the behavior of a typical neural network, is suboptimal; additional end-to-end training dramatically improves its performance. Despite the theoretical result the denoiser-by-denoiser training is optimal, Table 4(a) shows that LDAMP trained denoiser-by-denoiser performs slightly worse than the end-to-end and layer-by-layer trained networks. This gap in performance is likely due to the discretization of the noise levels, which is not modeled in our theory. 
This gap can be reduced by using a finer discretization of the noise levels or by using deeper denoiser networks that can better handle a range of noise levels [39]. In Table 4(b) we report on the performance of the two networks when trained at a one sampling rate and tested at another. LDIT and LDAMP networks trained end-to-end and layer-by-layer at a m sampling rate of m n = 0.2 perform poorly when tested at a sampling rate of n = 0.05. In contrast, the denoiser-by-denoiser trained networks, which were not trained at a specific sampling rate, generalize well to different sampling rates. 4 Theoretical analysis of LDAMP This section makes two theoretical contributions. First, we show that the state-evolution (S.E.), a framework that predicts the performance of AMP/D-AMP, holds for LDAMP as well.5 Second, we use the S.E. to prove that layer-by-layer and denoiser-by-denoiser training of LDAMP are MMSE optimal. 4.1 State-evolution In the context of LAMP and LDAMP, the S.E. equations predict the intermediate mean squared error kx k2 (MSE) of the network over each of its layers [26]. Starting from ?0 = no 2 the S.E. generates a sequence of numbers through the following iterations: 1 l l 2 ?l+1 (xo , ?, ?2 ) = E kDw (6) l (?) (xo + ? ) ? xo k2 , n where (? l )2 = 1? ?l (xo , ?, ?2 ) + ?2 , the scalar ? is the standard deviation of the measurement noise , and the expectation is with respect to  ? N (0, I). Note that the notation ?l+1 (xo , ?, ?2 ) is used to emphasize that ?l may depend on the signal xo , the under-determinacy ?, and the measurement noise. Let xl denote the estimate at layer l of LDAMP. Our empirical findings, illustrated in Figure 5, show that the MSE of LDAMP is predicted accurately by the S.E. We formally state our finding. 4 2 255 PSNR = 10 log10 ( mean((? ) when the pixel range is 0 to 255. x?xo )2 ) 5 For D-AMP and LDAMP, the S.E. is entirely observational; no rigorous theory exists. For AMP, the S.E. has been proven asymptotically accurate for i.i.d. Gaussian measurements [44]. 6 Figure 5: The MSE of intermediate reconstructions of the Boat test image across different layers for the DnCNN variants of LDAMP and LDIT alongside their predicted S.E. The image was sampled with Gaussian measurements at a rate of m n = 0.1. Note that LDAMP is well predicted by the S.E., whereas LDIT is not. Finding 1. If the LDAMP network starts from x0 = 0, then for large values of m and n, the 2 S.E. predicts the mean square error of LDAMP at each layer, i.e., ?l (xo , ?, ?2 ) ? n1 xl ? xo 2 , if the following conditions hold: (i) The elements of the matrix A are i.i.d. Gaussian (or subgaussian) with mean zero and standard deviation 1/m. (ii) The noise w is also i.i.d. Gaussian. (iii) The denoisers Dl at each layer are Lipschitz continuous.6 4.2 Layer-by-layer and denoiser-by-denoiser training is optimal The S.E. framework enables us to prove the following results: Layer-by-layer and denoiser-bydenoiser training of LDAMP are MMSE optimal. Both these results rely upon the following lemma. Lemma 1. Suppose that D1 , D2 , ...DL are monotone denoisers in the sense that for l = 1, 2, ...L l 2 1 1 inf wl EkDw l (?) (xo + ?) ? xo k2 is a non-decreasing function of ?. If the weights w of D are set to minimize Ex0 [?1 ] and fixed; and then the weights w2 of D2 are set to minimize Ex0 [?2 ] and fixed, . . . and then the weights wL of DL are set to minimize Ex0 [?L ], then together they minimize Ex0 [?L ]. 
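Returning to Section 4.1, the state-evolution recursion (6) is a scalar iteration that can be simulated with a Monte-Carlo average. The sketch below is schematic, assuming a list of per-layer denoiser callables and a representative clean image x_o; all names are illustrative and not taken from the released D-AMP toolbox.

```python
import numpy as np

def state_evolution(x_o, denoisers, delta, sigma=0.0, n_samples=10, seed=0):
    """Predicts the per-layer MSE of LDAMP via the S.E. recursion, Eqn. (6).

    delta = m/n is the sampling rate; denoisers[l](v, sigma) is layer l's
    denoiser; the expectation over epsilon ~ N(0, I) is Monte-Carlo averaged.
    """
    rng = np.random.default_rng(seed)
    n = x_o.size
    theta = np.linalg.norm(x_o) ** 2 / n          # theta^0 = ||x_o||_2^2 / n
    trajectory = [theta]
    for D in denoisers:
        tau = np.sqrt(theta / delta + sigma ** 2)  # effective noise std at this layer
        mses = []
        for _ in range(n_samples):
            eps = rng.normal(size=n)
            mses.append(np.linalg.norm(D(x_o + tau * eps, tau) - x_o) ** 2 / n)
        theta = float(np.mean(mses))               # next theta^{l+1}
        trajectory.append(theta)
    return trajectory
```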
Lemma 1 can be derived using the proof technique for Lemma 3 of [10], but with ?l replaced by Ex0 [?l ] throughout. It leads to the following two results. Corollary 1. Under the conditions in Lemma 1, layer-by-layer training of LDAMP is MMSE optimal. This result follows from Lemma 1 and the equivalence between Ex0 [?l ] and Ex0 [ n1 kxl ? xo k22 ]. Corollary 2. Under the conditions in Lemma 1, denoiser-by-denoiser training of LDAMP is MMSE optimal. l This result follows from Lemma 1 and the equivalence between Ex0 [?l ] and Ex0 [ n1 E kDw l (?) (xo + l 2 ? ) ? xo k2 ]. 5 Experiments Datasets Training images were pulled from Berkeley?s BSD-500 dataset [46]. From this dataset, we used 400 images for training, 50 for validation, and 50 for testing. For the results presented in Section 3, the training images were cropped, rescaled, flipped, and rotated to form a set of 204,800 overlapping 40 ? 40 patches. The validation images were cropped to form 1,000 non-overlapping 40 ? 40 patches. We used 256 non-overlapping 40 ? 40 patches for test. For the results presented in this section, we used 382,464 50 ? 50 patches for training, 6,528 50 ? 50 patches for validation, and seven standard test images, illustrated in Figure 6 and rescaled to various resolutions, for test. Implementation. We implemented LDAMP and LDIT, using the DnCNN denoiser [39], in both TensorFlow and MatConvnet [47], which is a toolbox for Matlab. Public implementations of both versions of the algorithm are available at https://github.com/ricedsp/D-AMP_Toolbox. 6 A denoiser is said to be L-Lipschitz continuous if for every x1 , x2 ? C we have kD(x1 ) ? D(x2 )k22 ? Lkx1 ? x2 k22 . While we did not find it necessary in practice, weight clipping and gradient norm penalization can be used to ensure Lipschitz continuity of the convolutional denoiser [45]. 7 (a) Barbara (b) Boat (c) Couple (d) Peppers (e) Fingerprint (f) Mandrill (g) Bridge Figure 6: The seven test images. Training parameters. We trained all the networks using the Adam optimizer [48] with a training rate of 0.001, which we dropped to 0.0001 and then 0.00001 when the validation error stopped improving. We used mini-batches of 32 to 256 patches, depending on network size and memory usage. For layer-by-layer and denoiser-by-denoiser training, we used a different randomly generated measurement matrix for each mini-batch. Training generally took between 3 and 5 hours per denoiser on an Nvidia Pascal Titan X. Results in this section are for denoiser-by-denoiser trained networks which consists of 10 unrolled DAMP/DIT layers that each contain a 20 layer DnCNN denoiser. Competition. We compared the performance of LDAMP to three state-of-the-art image recovery algorithms; TVAL3 [7], NLR-CS [9], and BM3D-AMP [10]. We also include a comparison with LDIT to demonstrate the benefits of the Onsager correction term. Our results do not include comparisons with any other NN-based techniques. While many NN-based methods are very specialized and only work for fixed matrices [13?16; 27], the recently proposed OneNet [23] and RIDE-CS [21] methods can be applied more generally. Unfortunately, we were unable to train and test the OneNet code in time for this submission. While RIDE-CS code was available, the implementation requires the measurement matrices to have orthonormalized rows. When tested on matrices without orthonormal rows, RIDE-CS performed significantly worse than the other methods. Algorithm parameters. All algorithms used their default parameters. 
However, NLR-CS was initialized using 8 iterations of BM3D-AMP, as described in [10]. BM3D-AMP was run for 10 iterations. LDIT and LDAMP ? used 10 layers. LDIT had its per layer noise standard deviation estimate ? ? parameter set to 2kz l k2 / m, as was done with D-IT in [10]. Testing setup. We tested the algorithms with i.i.d. Gaussian measurements and with measurements from a randomly sampled coded diffraction pattern [49]. The coded diffraction pattern forward operator was formed as a composition of three steps; randomly (uniformly) change the phase, take a 2D FFT, and then randomly (uniformly) subsample. Except for the results in Figure 7, we tested the algorithms with 128 ? 128 images (n = 1282 ). We report recovery accuracy in terms of PSNR. We report run times in seconds. Results broken down by image are provided in the supplement. Gaussian measurements. With noise-free Gaussian measurements, the LDAMP network produces the best reconstructions at every sampling rate on every image except Fingerprints, which looks very unlike the natural images the network was trained on. With noise-free Gaussian measurements, LDIT and LDAMP produce reconstructions significantly faster than the competing methods. Note that, despite having to perform twice as many denoising operations, at a sampling rate of m n = 0.25 the LDAMP network is only about 25% slower than LDIT. This indicates that matrix multiplies, not denoising operations, are the dominant source of computation. Average recovery PSNRs and run times are reported in Table 1. With noisy Gaussian measurements, LDAMP uniformly outperformed the other methods; these results can be found in the supplement. Coded diffraction measurements. With noise-free coded diffraction measurements, the LDAMP network again produces the best reconstructions on every image except Fingerprints. With coded diffraction measurements, LDIT and LDAMP produce reconstructions significantly faster than competing methods. Note that because the coded diffraction measurement forward and backward operator can be applied in O(n log n) operations, denoising becomes the dominant source of computations: LDAMP, which has twice as many denoising operations as LDIT, takes roughly 2? longer to complete. Average recovery PSNRs and run times are reported in Table 2. We end this section with a visual comparison of 512 ? 512 reconstructions from TVAL3, BM3D-AMP, and LDAMP, presented 8 Table 1: PSNRs and run times (sec) of 128 ? 128 reconstructions with i.i.d. Gaussian measurements and no measurement noise at various sampling rates. Method TVAL3 BM3D-AMP LDIT LDAMP NLR-CS m n = 0.10 m n = 0.15 m n = 0.20 m n = 0.25 PSNR Time PSNR Time PSNR Time PSNR Time 21.5 23.1 20.1 23.7 23.2 2.2 4.8 0.3 0.4 85.9 22.8 25.1 20.7 25.7 25.2 2.9 4.4 0.4 0.5 104.0 24.0 26.6 21.1 27.2 26.8 3.6 4.2 0.4 0.5 124.4 25.0 27.9 21.7 28.5 28.2 4.3 4.1 0.5 0.6 146.3 Table 2: PSNRs and run times (sec) of 128?128 reconstructions with coded diffraction measurements and no measurement noise at various sampling rates. Method TVAL3 BM3D-AMP LDIT LDAMP NLR-CS m n = 0.10 m n = 0.15 m n = 0.20 m n = 0.25 PSNR Time PSNR Time PSNR Time PSNR Time 24.0 23.8 22.9 25.3 21.6 0.52 4.55 0.14 0.26 87.82 26.0 25.7 25.6 27.4 22.8 0.46 4.29 0.14 0.26 87.43 27.9 27.5 27.4 28.9 25.1 0.43 3.67 0.14 0.27 87.18 29.7 29.1 28.9 30.5 26.4 0.41 3.40 0.14 0.26 86.87 in Figure 7. At high resolutions, the LDAMP reconstructions are incrementally better than those of BM3D-AMP yet computed over 60? faster. 
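For reference, the PSNRs in Tables 1 and 2 follow the footnote definition PSNR = 10 log10(255^2 / mean((x_hat - x_o)^2)) for images with a 0-255 pixel range; a minimal helper:

```python
import numpy as np

def psnr(x_hat, x_o):
    # Peak signal-to-noise ratio in dB for images with pixel range 0-255.
    return 10 * np.log10(255.0 ** 2 / np.mean((x_hat - x_o) ** 2))
```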
(a) Original Image (b) TVAL3 (26.4 dB, 6.85 (c) BM3D-AMP (27.2 dB, (d) LDAMP sec) 75.04 sec) 1.22 sec) (28.1 dB, Figure 7: Reconstructions of 512 ? 512 Boat test image sampled at a rate of m n = 0.05 using coded diffraction pattern measurements and no measurement noise. LDAMP?s reconstructions are noticeably cleaner and far faster than the competing methods. 6 Conclusions In this paper, we have developed, analyzed, and validated a novel neural network architecture that mimics the behavior of the powerful D-AMP signal recovery algorithm. The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a stateevolution heuristic that accurately predicts its performance. Most importantly, LDAMP outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. LDAMP represents the latest example in a trend towards using training data (and lots of offline computations) to improve the performance of iterative algorithms. The key idea behind this paper is that, rather than training a fairly arbitrary black box to learn to recover signals, we can unroll a conventional iterative algorithm and treat the result as a NN, which produces a network with well-understood behavior, performance guarantees, and predictable shortcomings. It is our hope this paper highlights the benefits of this approach and encourages future research in this direction. 9 References [1] E. J. Candes, J. Romberg, and T. Tao, ?Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,? IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489?509, Feb. 2006. [2] R. G. Baraniuk, ?Compressive sensing [lecture notes],? IEEE Signal Processing Mag., vol. 24, no. 4, pp. 118?121, 2007. [3] D. Needell and J. A. Tropp, ?CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,? Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301?321, 2009. [4] I. Daubechies, M. Defrise, and C. D. Mol, ?An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,? Comm. on Pure and Applied Math., vol. 75, pp. 1412?1457, 2004. [5] D. L. Donoho, A. Maleki, and A. Montanari, ?Message passing algorithms for compressed sensing,? Proc. Natl. Acad. Sci., vol. 106, no. 45, pp. 18 914?18 919, 2009. [6] S. Rangan, P. Schniter, and A. Fletcher, ?Vector approximate message passing,? arXiv preprint arXiv:1610.03082, 2016. [7] C. Li, W. Yin, and Y. Zhang, ?User?s guide for TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms,? Rice CAAM Department report, vol. 20, pp. 46?47, 2009. [8] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, ?Model-based compressive sensing,? IEEE Trans. Inform. Theory, vol. 56, no. 4, pp. 1982 ?2001, Apr. 2010. [9] W. Dong, G. Shi, X. Li, Y. Ma, and F. Huang, ?Compressive sensing via nonlocal low-rank regularization,? IEEE Trans. Image Processing, vol. 23, no. 8, pp. 3618?3632, 2014. [10] C. A. Metzler, A. Maleki, and R. G. Baraniuk, ?From denoising to compressed sensing,? IEEE Trans. Inform. Theory, vol. 62, no. 9, pp. 5117?5144, 2016. [11] P. Schniter, S. Rangan, and A. Fletcher, ?Denoising based vector approximate message passing,? arXiv preprint arXiv:1611.01376, 2016. [12] S. Beygi, S. Jalali, A. Maleki, and U. Mitra, ?An efficient algorithm for compression-based compressed sensing,? arXiv preprint arXiv:1704.01992, 2017. [13] A. Mousavi, A. B. Patel, and R. G. Baraniuk, ?A deep learning approach to structured signal recovery,? 
Proc. Allerton Conf. Communication, Control, and Computing, pp. 1336?1343, 2015. [14] K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, ?Reconnet: Non-iterative reconstruction of images from compressively sensed measurements,? Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 449?458, 2016. [15] A. Mousavi and R. G. Baraniuk, ?Learning to invert: Signal recovery via deep convolutional networks,? Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP), pp. 2272?2276, 2017. [16] H. Yao, F. Dai, D. Zhang, Y. Ma, S. Zhang, and Y. Zhang, ?DR2 -net: Deep residual reconstruction network for image compressive sensing,? arXiv preprint arXiv:1702.05743, 2017. [17] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, ?Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,? J. Machine Learning Research, vol. 11, pp. 3371?3408, 2010. [18] E. W. Tramel, A. Dr?meau, and F. Krzakala, ?Approximate message passing with restricted Boltzmann machine priors,? Journal of Statistical Mechanics: Theory and Experiment, vol. 2016, no. 7, p. 073401, 2016. [19] E. W. Tramel, A. Manoel, F. Caltagirone, M. Gabri?, and F. Krzakala, ?Inferring sparsity: Compressed sensing using generalized restricted Boltzmann machines,? Proc. IEEE Information Theory Workshop (ITW), pp. 265?269, 2016. [20] E. W. Tramel, M. Gabri?, A. Manoel, F. Caltagirone, and F. Krzakala, ?A deterministic and generalized framework for unsupervised learning with restricted Boltzmann machines,? arXiv preprint arXiv:1702.03260, 2017. [21] A. Dave, A. K. Vadathya, and K. Mitra, ?Compressive image recovery using recurrent generative model,? arXiv preprint arXiv:1612.04229, 2016. 10 [22] L. Theis and M. Bethge, ?Generative image modeling using spatial LSTMs,? Proc. Adv. in Neural Processing Systems (NIPS), pp. 1927?1935, 2015. [23] J. Rick Chang, C.-L. Li, B. Poczos, B. Vijaya Kumar, and A. C. Sankaranarayanan, ?One network to solve them all?Solving linear inverse problems using deep projection models,? Proc. IEEE Int. Conf. Comp. Vision, and Pattern Recognition, pp. 5888?5897, 2017. [24] K. Gregor and Y. LeCun, ?Learning fast approximations of sparse coding,? Proc. Int. Conf. Machine Learning, pp. 399?406, 2010. [25] U. S. Kamilov and H. Mansour, ?Learning optimal nonlinearities for iterative thresholding algorithms,? IEEE Signal Process. Lett., vol. 23, no. 5, pp. 747?751, 2016. [26] M. Borgerding and P. Schniter, ?Onsager-corrected deep networks for sparse linear inverse problems,? arXiv preprint arXiv:1612.01183, 2016. [27] Y. Yang, J. Sun, H. Li, and Z. Xu, ?Deep ADMM-net for compressive sensing MRI,? Proc. Adv. in Neural Processing Systems (NIPS), vol. 29, pp. 10?18, 2016. [28] J. R. Hershey, J. L. Roux, and F. Weninger, ?Deep unfolding: Model-based inspiration of novel deep architectures,? arXiv preprint arXiv:1409.2574, 2014. [29] T. B. Yakar, P. Sprechmann, R. Litman, A. M. Bronstein, and G. Sapiro, ?Bilevel sparse models for polyphonic music transcription.? ISMIR, pp. 65?70, 2013. [30] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, ?Image denoising by sparse 3-d transform-domain collaborative filtering,? IEEE Trans. Image Processing, vol. 16, no. 8, pp. 2080?2095, Aug. 2007. [31] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, ?Plug-and-play priors for model based reconstruction,? Global Conference on Signal and Information Processing (GlobalSIP), pp. 945?948, 2013. [32] G. Alain and Y. 
Bengio, ?What regularized auto-encoders learn from the data-generating distribution,? J. Machine Learning Research, vol. 15, no. 1, pp. 3563?3593, 2014. [33] C. K. S?nderby, J. Caballero, L. Theis, W. Shi, and F. Husz?r, ?Amortised map inference for image super-resolution,? Proc. Int. Conf. on Learning Representations (ICLR), 2017. [34] D. J. Thouless, P. W. Anderson, and R. G. Palmer, ?Solution of ?Solvable model of a spin glass?,? Philosophical Mag., vol. 35, no. 3, pp. 593?601, 1977. [35] M. M?zard and A. Montanari, Information, Physics, Computation: Probabilistic Approaches. Cambridge University Press, 2008. [36] A. Maleki, ?Approximate message passing algorithm for compressed sensing,? Stanford University PhD Thesis, Nov. 2010. [37] S. Ramani, T. Blu, and M. Unser, ?Monte-Carlo sure: A black-box optimization of regularization parameters for general denoising algorithms,? IEEE Trans. Image Processing, pp. 1540?1554, 2008. [38] H. C. Burger, C. J. Schuler, and S. Harmeling, ?Image denoising: Can plain neural networks compete with BM3D?? Proc. IEEE Int. Conf. Comp. Vision, and Pattern Recognition, pp. 2392?2399, 2012. [39] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, ?Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,? IEEE Trans. Image Processing, 2017. [40] A. Krizhevsky, I. Sutskever, and G. E. Hinton, ?Imagenet classification with deep convolutional neural networks,? Proc. Adv. in Neural Processing Systems (NIPS), pp. 1097?1105, 2012. [41] S. Ioffe and C. Szegedy, ?Batch normalization: Accelerating deep network training by reducing internal covariate shift,? arXiv preprint arXiv:1502.03167, 2015. [42] K. He, X. Zhang, S. Ren, and J. Sun, ?Deep residual learning for image recognition,? Proc. IEEE Int. Conf. Comp. Vision, and Pattern Recognition, pp. 770?778, 2016. ? [43] F. J. Smieja, ?Neural network constructive algorithms: Trading generalization for learning efficiency?? Circuits, Systems, and Signal Processing, vol. 12, no. 2, pp. 331?374, 1993. [44] M. Bayati and A. Montanari, ?The dynamics of message passing on dense graphs, with applications to compressed sensing,? IEEE Trans. Inform. Theory, vol. 57, no. 2, pp. 764?785, 2011. 11 [45] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, ?Improved training of Wasserstein GANs,? arXiv preprint arXiv:1704.00028, 2017. [46] D. Martin, C. Fowlkes, D. Tal, and J. Malik, ?A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,? Proc. Int. Conf. Computer Vision, vol. 2, pp. 416?423, July 2001. [47] A. Vedaldi and K. Lenc, ?Matconvnet ? Convolutional neural networks for MATLAB,? Proc. ACM Int. Conf. on Multimedia, 2015. [48] D. Kingma and J. Ba, ?Adam: A method for stochastic optimization,? arXiv preprint arXiv:1412.6980, 2014. [49] E. J. Candes, X. Li, and M. Soltanolkotabi, ?Phase retrieval from coded diffraction patterns,? Appl. Comput. Harmon. Anal., vol. 39, no. 2, pp. 277?299, 2015. 12
Deliberation Networks: Sequence Generation Beyond One-Pass Decoding∗

Yingce Xia¹, Fei Tian², Lijun Wu³, Jianxin Lin¹, Tao Qin², Nenghai Yu¹, Tie-Yan Liu²
¹University of Science and Technology of China, Hefei, China
²Microsoft Research, Beijing, China
³Sun Yat-sen University, Guangzhou, China
¹[email protected], [email protected], [email protected]
²{fetia,taoqin,tie-yan.liu}@microsoft.com, ³[email protected]

Abstract

The encoder-decoder framework has achieved promising progress for many sequence generation tasks, including machine translation, text summarization, dialog systems, image captioning, etc. Such a framework adopts a one-pass forward process while decoding and generating a sequence, but lacks the deliberation process: a generated sequence is directly used as the final output without further polishing. However, deliberation is a common behavior in humans' daily life, like reading news and writing papers/articles/books. In this work, we introduce the deliberation process into the encoder-decoder framework and propose deliberation networks for sequence generation. A deliberation network has two levels of decoders, where the first-pass decoder generates a raw sequence and the second-pass decoder polishes and refines the raw sentence with deliberation. Since the second-pass deliberation decoder has global information about what the sequence to be generated might be, it has the potential to generate a better sequence by looking into future words in the raw sentence. Experiments on neural machine translation and text summarization demonstrate the effectiveness of the proposed deliberation networks. On the WMT 2014 English-to-French translation task, our model establishes a new state-of-the-art BLEU score of 41.5.

∗This work was done when Yingce Xia, Lijun Wu and Jianxin Lin were interns at Microsoft Research.
²Throughout this work, a word refers to the basic unit in a sequence.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

The neural network based encoder-decoder framework has been widely adopted for sequence generation tasks, including neural machine translation [1], text summarization [19], image captioning [27], etc. In such a framework, the encoder encodes the source input x with length m into a sequence of vectors {h_1, h_2, ..., h_m}. The decoder, which is typically an RNN, generates an output sequence word by word² based on the source-side vector representations and previously generated words. The attention mechanism [1, 35], which dynamically attends to different parts of x while generating each target-side word, is integrated into the encoder-decoder framework to improve the quality of generating long sequences [1].

Although the framework has achieved great success, one concern is that while generating one word, one can only leverage the already generated words but not the future words still un-generated. That is, when the decoder generates the t-th word y_t, only y_{<t} can be used, while the possible words y_{>t} are not explicitly considered. In contrast, in real-world human cognitive processes, global information, including both the past and the future parts, is leveraged in an iterative polishing process. Here are two examples: (1) Consider the situation that we are reading a sentence and meet an unknown word
Then we go back to the unknown word and try to understand it using its context, including the words both preceding and after it. (2) To write a good document (or paragraph, article), we usually first create a complete draft and then polish it based on global understanding of the whole draft. When polishing a specific part, we take the whole picture of the draft into consideration to evaluate how well the local element fits into the global environment rather than only looking back to the preceding parts. We call such a polishing process as deliberation. Motivated by such human cognitive behaviors, we propose the deliberation networks, which leverage the global information with both looking back and forward in sequence decoding through a deliberation process. Concretely speaking, to integrate such a process into the sequence generation framework, we carefully design our architecture, which consists of two decoders, a first-pass decoder D1 and a second-pass/deliberation decoder D2 , as well as an encoder E. Given a source input x, the E and D1 jointly works like the standard encoderdecoder model to generate a coarse sequence y? as a draft and the corresponding representations s? = {? s1 , s?2 , ? ? ? , s?Ty? } used to generate y?, where Ty? is the length of y?. Afterwards, the deliberation decoder D2 takes x, y? and s? as inputs and outputs the refined sequence y. When D2 generates the t-th word yt , an additional P attention model is used3 to assign an adaptive weight ?j to each y?j and s?j yj ; s?j ] is fed into D2 . In this way, the global information of the target for any j ? [Ty?], and ?j [? sequence can be utilized to refine the generation process. We propose a Monte Carlo based algorithm to overcome the difficulty brought by the discrete property of y? in optimizing the deliberation network. To verify the effectiveness of our model, we work on two representative sequence generation tasks. (1) Neural machine translation refers to using neural networks to translate sentences from a source language to a target language [1, 33, 32, 34]. A standard NMT model consists of an encoder (used to encode source sentences) and a decoder (used to generate target sentences), and thus can be improved by our proposed deliberation network. Experimental results show that based on a widely used single-layer GRU model [1], on the WMT?14 [29] English?French dataset, we can improve the BLEU score [17], by 1.7 points compared to the model without deliberation. We also apply our model on Chinese?English translations and improve BLEU by an averaged 1.26 points on four different test sets. Furthermore, on the WMT?14 English?French translation task, by applying deliberation to a deep LSTM model, we achieve a BLEU score 41.50, setting a new record for this task. (2) Text summarization is a task that summarizes a long article into a short abstract. The encoderdecoder framework can also be used for such a task and thus could be refined by deliberation networks. Experimental results on Gigaword dataset [6] show that deliberation network can improve ROUGE-1, ROUGE-2, and ROUGE-L by 3.45, 1.70 and 3.02 points. 1.1 Related Work Although there exist many works to improve the attention based encoder-decoder framework for sequence generation, such as changing the training loss [28, 18, 22] or the decoding objective [14, 7], not much attention has been paid to the structure of the encoder-decoder framework. Our work changes the structure of the framework by introducing the second-pass decoder into it. 
The idea of deliberation/refinement is not well explored for sequence generation tasks, especially for the encoder-decoder based approaches [3, 23, 1] in neural machine translation. One related work is post-editing [16, 2]: a source sentence e is first translated to f 0 , and then f 0 is refined by another model. Different from our deliberation network, the two processes (i.e., generating and refining) in post-editing are separated. As a comparison, what we build is a consistent model in which all the components are coupled together and jointly optimized in an end-to-end way. As a result, deliberation networks lead to better accuracies. Another related work is the review network [36]. The idea is to review all the information encoded by the encoder to obtain thought vectors that are more compact and abstractive. The thought vectors are then used in decoding. Different from our work, the review steps are added on the encoder side, while the decoder side is unchanged and still adopts one-pass decoding. 3 In this work, let [v1 ; v2 ; ? ? ? ; vn ] denote the long vector concatenated by the input vectors v1 , ? ? ? , vn . With a little bit confusion, [m] with a single integer input m denotes the set {1, 2, ? ? ? , m}. 2 The rest of our paper is organized as follows. Our proposed deliberation network is introduced in Section 2, including the model structure and the optimization process. Applications to neural machine translation and text summarization are introduced in Section 3 and Section 4 respectively. Section 5 concludes the paper and discusses possible future directions. 2 The Framework In this section, we first introduce the overall architecture of deliberation networks, then the details of individual components, and finally propose an end-to-end Monte Carlo based algorithm to train the deliberation networks. 2.1 Structure of Deliberation Networks As shown in Figure 1, a deliberation network consists of an encoder E, a first-pass decoder D1 and a second-pass decoder D2 . Deliberation happens at the second-pass decoder, which is also called deliberation decoder alternatively. Briefly speaking, E is used to encode the source sequence into a sequence of vector representations. D1 reads the encoder representations and generates a first-pass target sequence as a draft, which is further provided as input to the deliberation decoder D2 for the second-pass decoding. In the rest of this section, for simplicity of description, we use RNN as the basic building block for both the encoder and decoders4 . All the W ?s and v?s in this section with different superscripts or subscripts are the parameters to be learned. Besides, all the bias terms are omitted to increase readability. Figure 1: Framework of deliberation networks: Blue, yellow and green parts indicate encoder E, first-pass decoder D1 and the second-pass decoder D2 respectively. The E-to-D1 attention model is omitted for readability. 2.2 Encoder and First-pass Decoder When an input sequence x is fed into the encoder E, it is encoded into Tx hidden states H = {h1 , h2 , ? ? ? , hTx } where Tx is the length of x. Specifically, hi = RNN(xi , hi?1 ), where xi acts as the representation (e.g., word embedding vector) for the i-th word in x and h0 is a zero vector. The first-pass decoder D1 will generate a series of hidden states s?j ?j ? [Ty?], and a first-pass sequence y?j ?j ? [Ty?], where Ty? is the length of the generated sequence. Next we show how they are generated in detail. 
⁴The proposed deliberation networks are independent of the specific implementation of the recurrent units and can be applied to a simple RNN or its variants such as LSTM [11] or GRU [3].

Similar to the conventional encoder-decoder model, an attention model is included in D1. At step j, the attention model in D1 first generates a context ctx_e defined as follows:

ctx_e = \sum_{i=1}^{T_x} \alpha_i h_i; \quad \alpha_i \propto \exp\big(v_c^\top \tanh(W^c_{att,h} h_i + W^c_{att,\hat{s}} \hat{s}_{j-1})\big) \;\forall i \in [T_x]; \quad \sum_{i=1}^{T_x} \alpha_i = 1.  (1)

Based on ctx_e, \hat{s}_j is calculated as \hat{s}_j = RNN([\hat{y}_{j-1}; ctx_e], \hat{s}_{j-1}). After obtaining \hat{s}_j, another affine transformation is applied to the concatenated vector [\hat{s}_j; ctx_e; \hat{y}_{j-1}]. Finally, the result of the transformation is fed into a softmax layer, and \hat{y}_j is sampled from the obtained multinomial distribution.

2.3 Second-Pass Decoder

Once the first-pass target sequence \hat{y} is generated by the first-pass decoder D1, it is fed into the second-pass decoder D2 for further refinement. Based on the sequence \hat{y} and the hidden states \hat{s}_j \forall j \in [T_{\hat{y}}] provided by D1, D2 eventually outputs the second-pass sequence y via the deliberation process. Specifically, at step t, D2 takes as inputs the previous hidden state s_{t-1} generated by itself, the previously decoded word y_{t-1}, the source contextual information ctx'_e and the first-pass contextual information ctx_c. Two detailed points are: (1) The computation of ctx'_e is similar to that of ctx_e shown in Eqn. (1) with two differences: first, \hat{s}_{j-1} is replaced by s_{t-1}; second, the model parameters are different. (2) To obtain ctx_c, D2 has an attention model (i.e., A_c in Figure 1) that maps the words \hat{y}_j and the hidden states \hat{s}_j into a context vector. Mathematically speaking, in the refinement process at the t-th time step, the first-pass contextual information vector ctx_c is computed as:

ctx_c = \sum_{j=1}^{T_{\hat{y}}} \alpha_j [\hat{s}_j; \hat{y}_j]; \quad \alpha_j \propto \exp\big(v_d^\top \tanh(W^d_{att,\hat{s}\hat{y}} [\hat{s}_j; \hat{y}_j] + W^d_{att,s} s_{t-1})\big) \;\forall j \in [T_{\hat{y}}]; \quad \sum_{j=1}^{T_{\hat{y}}} \alpha_j = 1.

As can be seen from the above computation, the deliberation process at time step t in the second-pass decoding uses the whole sequence generated by the first-pass decoder, including the words both preceding and after the t-th step in the first-pass sequence. That is, the first-pass contextual vector ctx_c aggregates the global information extracted from the first-pass sequence \hat{y}. After receiving ctx_c, we calculate s_t as s_t = RNN([y_{t-1}; ctx'_e; ctx_c], s_{t-1}). Similar to sampling \hat{y}_t in D1, [s_t; ctx'_e; ctx_c; y_{t-1}] is further transformed to generate y_t.

2.4 Algorithm

Let D_XY = \{(x^{(i)}, y^{(i)})\}_{i=1}^n denote the training corpus with n paired sequences⁵. Denote the parameters of E, D1 and D2 as \theta_e, \theta_1 and \theta_2 respectively. The training of sequence-to-sequence learning is usually to maximize the data log likelihood (1/n) \sum_{i=1}^n \log P(y^{(i)}|x^{(i)}). Under our setting, this rule can be specialized to maximizing (1/n) \sum_{(x,y) \in D_XY} J(x, y; \theta_e, \theta_1, \theta_2), where

J(x, y; \theta_e, \theta_1, \theta_2) = \log \sum_{y' \in Y} P(y | y', E(x; \theta_e); \theta_2) \, P(y' | E(x; \theta_e); \theta_1).  (2)

In Eqn. (2), Y is the collection of all possible target sequences and E(x; \theta_e) denotes the function that maps x to its corresponding hidden states given by the encoder. One can verify that the first-order derivative of J(x, y; \theta_e, \theta_1, \theta_2) w.r.t. \theta_1 is

\nabla_{\theta_1} J(x, y; \theta_e, \theta_1, \theta_2) = \frac{\sum_{y' \in Y} P(y | y', E(x; \theta_e); \theta_2) \, \nabla_{\theta_1} P(y' | E(x; \theta_e); \theta_1)}{\sum_{y' \in Y} P(y | y', E(x; \theta_e); \theta_2) \, P(y' | E(x; \theta_e); \theta_1)},

which is extremely hard to compute due to the large space of Y. Similarly, the gradients w.r.t. \theta_e and \theta_2 are also computationally intractable. To overcome such difficulties, we propose a Monte Carlo based method to optimize a lower bound of J(x, y; \theta_e, \theta_1, \theta_2). By Jensen's inequality (i.e., the concavity of \log(\cdot)), one can verify that J(x, y; \theta_e, \theta_1, \theta_2) \ge \hat{J}(x, y; \theta_e, \theta_1, \theta_2), with the right-hand side acting as a lower bound and defined as

\hat{J}(x, y; \theta_e, \theta_1, \theta_2) = \sum_{y' \in Y} P(y' | E(x; \theta_e); \theta_1) \log P(y | y', E(x; \theta_e); \theta_2).  (3)

⁵Let x^{(i)} and y^{(i)} denote the i-th source input and target output in the training data. Let x_i and y_i denote the i-th word in x and y.

Denote \hat{J}(x, y; \theta_e, \theta_1, \theta_2) as \hat{J}. The gradients of \hat{J} w.r.t. its parameters are:

\nabla_{\theta_1} \hat{J} = \sum_{y' \in Y} P(y' | E(x; \theta_e); \theta_1) \underbrace{\log P(y | y', E(x; \theta_e); \theta_2) \, \nabla_{\theta_1} \log P(y' | E(x; \theta_e); \theta_1)}_{G_1};
\nabla_{\theta_2} \hat{J} = \sum_{y' \in Y} P(y' | E(x; \theta_e); \theta_1) \underbrace{\nabla_{\theta_2} \log P(y | y', E(x; \theta_e); \theta_2)}_{G_2};  (4)
\nabla_{\theta_e} \hat{J} = \sum_{y' \in Y} P(y' | E(x; \theta_e); \theta_1) \, G_e(x, y, y'; \theta_e, \theta_1, \theta_2), where G_e is defined as follows:
G_e = \nabla_{\theta_e} \log P(y | y', E(x; \theta_e); \theta_2) + \log P(y | y', E(x; \theta_e); \theta_2) \, \nabla_{\theta_e} \log P(y' | E(x; \theta_e); \theta_1).

Let \theta = [\theta_1; \theta_2; \theta_e] and G(x, y, y'; \theta) = [G_1; G_2; G_e], where G_1, G_2 and G_e are defined in Eqn. (4). (For ease of reference, we assume that all the gradients and G's are flattened.) Obviously, if y' is sampled from the distribution P(y' | E(x; \theta_e); \theta_1), then G(x, y, y'; \theta) is an unbiased estimator of the gradient of \hat{J} w.r.t. all model parameters \theta. Based on that we propose our algorithm in Algorithm 1.

Algorithm 1: Algorithm to train the deliberation network
  Input: Training data corpus D_XY; minibatch size m; optimizer Opt(...) with gradients as input;
  while models not converged do
    Randomly sample a mini-batch of m sequence pairs \{x^{(i)}, y^{(i)}\}, i \in [m], from D_XY;
    For any x^{(i)} where i \in [m], sample y'^{(i)} according to the distribution P(\cdot | E(x^{(i)}; \theta_e); \theta_1);
    Perform the parameter update: \theta \leftarrow \theta + Opt\big((1/m) \sum_{i=1}^m G(x^{(i)}, y^{(i)}, y'^{(i)}; \theta)\big).

Discussions (1) The choice of Opt(...) is quite flexible. One can choose different optimizers such as Adadelta [37], Adam [13], or SGD for different tasks, depending on common practice in the specific task. (2) The space Y is usually extremely large in sequence generation tasks. To obtain a better sampled y', we can use beam search instead of random sampling. (A toy numerical instantiation of the estimator G is sketched below, after the model settings.)

3 Application to Neural Machine Translation

We evaluate the deliberation networks with two different network structures: (1) the shallow model, which is based on a widely-used single-layer GRU model named RNNSearch [1, 12]; (2) the deep model, which is based on a deep LSTM model similar to GNMT [31]. Both kinds of models are implemented in Theano [24].

3.1 Shallow Models

3.1.1 Settings

Datasets We work on two translation tasks, English-to-French translation (denoted as En→Fr) and Chinese-to-English translation (denoted as Zh→En). For En→Fr, we employ the standard filtered WMT'14 dataset⁶, which is widely used in the NMT literature [1, 12]. There are 12M bilingual sentence pairs in the dataset. We concatenate newstest2012 and newstest2013 together as the validation set and use newstest2014 as the test set. For Zh→En, we choose 1.25M bilingual sentence pairs from the LDC dataset as the training corpus, use NIST2003 as the validation set, and NIST2004, NIST2005, NIST2006 and NIST2008 as the test sets. Following common practice [1, 12], we remove the sentences with more than 50 words for both translation tasks. Furthermore, we limit both the source and target vocabularies to the 30k most frequent words. The out-of-vocabulary words are replaced by a special token "UNK".

Model We choose the most widely adopted NMT model RNNSearch [1, 12, 25] as the basic structure to construct the deliberation network. To be specific, all of E, D1 and D2 are GRU networks [1] with one hidden layer of 1000 neurons. The word embedding dimension is set as 620. For Zh→En, we apply a 0.5 dropout rate to the layer before the softmax; no dropout is used in En→Fr translation.

⁶http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/data/bitexts.tgz
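As noted in the discussion of Algorithm 1, a toy instantiation of the estimator G helps make it concrete. The sketch below collapses "sequences" to single categorical draws so that it is self-contained: theta1 parameterizes P(y'|x) and theta2 parameterizes P(y|y', x); the encoder term G_e is omitted since the toy model has no encoder, and the learning rate plays the role of a plain-SGD Opt.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V = 5, 8                         # number of draft "sequences" and output "words"
theta1 = rng.normal(size=K)         # first-pass logits:  P(y'|x) = softmax(theta1)
theta2 = rng.normal(size=(K, V))    # second-pass logits: P(y|y') = softmax(theta2[y'])
y = 3                               # observed target

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

for step in range(2000):
    p1 = softmax(theta1)
    y_prime = rng.choice(K, p=p1)            # sample a first-pass draft y' ~ P(.|x)
    p2 = softmax(theta2[y_prime])
    log_p2 = np.log(p2[y])                   # log P(y | y')
    # Score-function estimate of Eqn. (4); for softmax logits,
    # the gradient of log P(k) is onehot(k) minus the probability vector.
    g1 = log_p2 * ((np.arange(K) == y_prime) - p1)   # G1 term
    g2 = np.zeros_like(theta2)
    g2[y_prime] = (np.arange(V) == y) - p2           # G2 term
    theta1 += 0.1 * g1                               # ascent on the lower bound J-hat
    theta2 += 0.1 * g2
```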
The out-of-vocabulary words are replaced by a special token ?UNK?. Model We choose the most widely adopted NMT model RNNSearch [1, 12, 25] as the basic structure to construct the deliberation network. To be specific, all of E, D1 and D2 are GRU networks [1] with one hidden layer of 1000 neurons. The word embedding dimension is set as 620. For Zh?En, we apply 0.5 dropout rate to the layer before softmax and no dropout is used in En?Fr translation. 6 http://www-lium.univ-lemans.fr/?schwenk/cslm_joint_paper/data/bitexts.tgz 5 Optimization All the models are trained on a single NVIDIA K40 GPU. We first pre-train two standard encoder-decoder based NMT models (i.e., RNNSearch) until convergence, which take about two weeks for En?Fr and one week for Zh?En using Adadelta [37]. For any deliberation network, (1) the encoder is initialized by the encoder of the pre-trained RNNSearch model; (2) both the first-pass and second-pass decoders are initialized by the decoder of the pre-trained RNNSearch model; (3) the attention model used to compute the first-pass context vector is randomly initialized from a uniform distribution on [?0.1, 0.1]. Then we train the deliberation networks by Algorithm 1 until convergence, which takes roughly 5 days for both tasks. The minibatch size is fixed as 80 throughout the optimization. Plain SGD is used as the optimizer in this process, with initial learning rate 0.2 and halving according to validation accuracy. To sample the intermediate translation output by the first decoder, we use beam search with beam size 2, considering the tradeoff between accuracy and efficiency. Evaluation We use BLEU [17] as the evaluation metric for translation qualities. BLEU is the geometric mean of n-gram precisions where n ? {1, 2, 3, 4}, weighted by sentence lengths. Following the common practice in NMT, we use multi-bleu.pl7 to calculate case-sensitive BLEU scores for En?Fr, while evaluating the translation qualities of Zh?En by case-insensitive BLEU scores. The larger the BLEU score is, the better the translation quality is. For the baselines and deliberation networks, we use beam search with beam size 12 to generate sentences. Baselines We compare our proposed algorithms with the following baselines: (i) The standard NMT algorithm RNNSearch [1, 12], denoted as Mbase ; (ii) The standard NMT model with two stacked decoding layers, denoted as Mdec?2 ; (3) The review network proposed in [36]. We try both 4 and 8 reviewers and find the 4-reviewer model is slightly better. The review network in our experiment is therefore denoted as Mreviewer?4 . We refer to our proposed algorithm as Mdelib . 3.1.2 Results We compare our proposed algorithms with the following baselines: (i) The standard NMT algorithm, denoted as Mbase ; (ii) The standard NMT model with two stacked decoding layers, denoted as Mdec?2 ; (3) The review network proposed in [36]. We try both 4 and 8 reviewers and find the 4-reviewer model is slightly better. The review network in our experiment is therefore denoted as Mreviewer?4 . We refer to our proposed algorithm as Mdelib . Table 1 shows the results of En?Fr translation. We have several observations: (1) Our proposed algorithm performs the best among all candidates, which validates the effectiveness of the deliberation process. (2) Our method Mdelib outperforms the baseline algorithm Mbase . This shows that further polishing the raw output indeed leads to better sequences. 
(3) Applying an additional decoding layer, i.e., Mdec?2 , increases the translation quality, but it is still far behind that of Mdelib . Clearly, the second decoder layer of Mdec?2 can still only leverage the previously generated words but not unseen and un-generated future words, while the second-pass decoder of Mdelib can leverage the richer information contained in all the words from the first-pass decoder. Such a refinement process from the global view significantly improves the translation results. (4) Mdelib outperforms Mreviewer?4 by 0.91 point, which shows that reviewing the possible future contextual information from the source side is not enough. The ?future? information from the decoder side is also very important, since it is directly related with the final output. Table 1: BLEU scores of En?Fr translation Algorithm BLEU Mbase 29.97 Mdec?2 30.40 Mreviewer?4 30.76 Mdelib 31.67 The translation results of Zh?En are summarized in Table 2. We have similar observations as those for En?Fr translations: Mdelib outperforms all the baseline methods, particularly with an average gain of 1.26 points over Mbase . Apart from the quantitative analysis, we list two examples in Table 3 to better understand how a deliberation network works. Each example contains five sentences, which are the source sentence in Chinese, the reference sentence in English as ground truth translation, the translation generated 7 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl 6 Table 2: BLEU scores of Zh?En translation Algorithm Mbase Mdelib NIST04 34.96 36.90 NIST05 34.57 35.57 NIST06 32.74 33.90 NIST08 26.21 27.13 by Mbase and the output translation by both the first-pass decoder and second-pass decoder (i.e., the final translation by deliberation network Mdelib ). Table 3: Case studies of Zh?En translations. Note the ?......? in the second example represents a common sentence ?the two sides will discuss how to improve the implementation of the cease-fire agreement?. [Source] Aiji shuo, zhongdong heping xieyi yuqi jiang you yige xinde jiagou . [Reference] Egypt says a new framework is expected to come into being for the Middle East peace agreement . [Base] egypt ?s middle east peace agreement is expected to have a new framework , he said . [First-pass] egypt ?s middle east peace agreement is expected to have a new framework , egypt said . [Second-pass] egypt says the middle east peace agreement is expected to have a new framework . [Source] Nuowei dashiguan zhichu, "shuangfang jiang taolun ruhe gaijin luoshi tinghuo xieyi, zhe yeshi san nian lai shuangfang shouci zai ruci gao de cengji shang jinxing mianduimian tanpan" [Reference] The Norwegian embassy pointed out that , " Both sides will discuss how to improve the implementation of the cease-fire agreement , which is the first time for both sides to have face-to-face negotiations at such a high level . " [Base] " ...... , which is the first time for the two countries to conduct face-to-face talks on the basis of a high level of three years , " it said . [First-pass] " ...... , which is the first time for the two countries to conduct face-to-face talks on the basis of a high level of three years , " the norwegian embassy said in a statement . [Second-pass] " ...... , which is the first time in three years for the two countries to conduct face-to-face talks at such high level , " the norwegian embassy said . 
In the first example, the translation from both base model and first-pass decoder contains the phrase egypt?s middle east peace agreement, which is odd and inaccurate, given that an agreement cannot belong to a single country as Egypt. As a comparison, the second-pass decoder refines such phrase into a more natural and accurate one. i.e., egypt says the middle east peace agreement, by looking forward to the future translation phrase ?egypt said? output by the first-pass decoder. On the other hand, the second-pass decoder outputs a sentence with correct tense, i.e., egypt says ... is .... However, the two sentences output by Mbase and the first-pass decoder are inconsistent in tense, whose structures are ?... is ..., egypt said ?. This problem is well addressed by the deliberation network, since the second-pass decoder can access the global information contained in the draft sequence generated by the first-pass decoder, and therefore output a more consistent sentence. In the second example, as shown in bold fonts, the phrase ?conduct face-to-face talks on the basis of a high level of three years? from both base model and first-pass decoder carries all necessary information of its corresponding source segments, but apparently it is out-of-order and seems to be a simple combination of words. The second-pass decoder refines such translation into a correct, and more fluent one, by forwarding the sub phrase in three years to the position right after the first time. At last we compare the decoding time of deliberation network with that of the RNNSearch. Based on the Theano implementation, to translate 3003 English sentences to French, RNNSearch takes 964 seconds while the deliberation network takes 1924 seconds. Indeed, the deliberation network takes roughly 2 times decoding time of RNNSearch, but can bring 1.7 points improvements in BLEU. 3.2 Deep Models We work on a deep LSTM model to further evaluate deliberation networks through the WMT?14 En?Fr translation task. Compared to the shallow model, there are several different aspects: (1) We use 34M sentence pairs from WMT?14 as training data, apply the BPE [21] techniques to split the training sentences into sub-word units and restrict the source and target sentence lengths within 64 subwords. The encoder and decoder share a common vocabulary containing 36k subwords. (2) All of E, D1 and D2 are 4-layer LSTMs with residual connections [9, 10]. The word embedding dimension 7 Table 4: Comparison between deliberation network and different deep NMT systems (En?Fr). System Configurations BLEU GNMT [31] Stacked LSTM (8-layer encoder + 8 layer decoder) + RL finetune 39.92 FairSeq [4] Convolution (15-layer) encoder and (15-layer) decoder 40.51 Transformer [26] Self-Attention + 6-layer encoder + 6-layer decoder 41.0 Stack LSTM (4-layer encoder and 4-layer decoder) 39.51 this work Stack 4-layer NMT + Dual Learning 40.53 Stack 4-layer NMT + Dual Learning + Deliberation Network 41.50 and hidden node dimension are 512 and 1024 respectively. The dropout rate is set as 0.1. (3) We train the standard encoder-decoder based deep model for about 25 days until convergence. Furthermore, we leverage our recently proposed dual learning techniques [8, 33] to improve the model, which takes another 7 days. We initialize the deliberation network in the same way in Section 3.1.1. Then, we train the deliberation network by Algorithm 1 for 10 days. When generating translations, we use beam search with beam size 8. 
The experimental results of applying deliberation network to the deep LSTM model are shown in Table 4. On En?Fr translation task, the baseline of our implemented NMT system is 39.51. With dual learning, we achieve a 40.53 BLEU score. After applying deliberation techniques, the BLEU score can be further improved to 41.50, which as far as we know, is a new single-model state-of-the-art result for this task. This not only illustrates the effectiveness of deliberation network again, but also shows that even if a model is good enough, it can still benefit from the deliberation process. 4 Application to Text Summarization We further verify the effectiveness of deliberation networks on text summarization, which is another real-world application that encoder-decoder framework succeeds to help [19]. 4.1 Settings Text summarization refers to using a short and abstractive sentence to summarize the major points of a sentence or paragraph, which is typically much longer. The training, validation and test sets for the task are extracted from Gigaword Corpus [6]: For each selected article, the first sentence is used as source-side input and the title used as target-side output. We process the data in the same way as that proposed in [20, 30], and obtain training/validation/test sets with roughly 189k/18k/10k sentence pairs respectively. There are roughly 42k unique words in the source input and 19k unique words in the target output and we remain all of them as the vocabulary in the encoder-decoder models. The model structure is the same as that used in Section 3.1 except that both word embedding dimension and hidden node size are reduced to 128. We use Adadelta algorithm with gradient clip value 5.0 to optimize deliberation network. The mini-batch size is fixed as 32. The evaluation measures are chosen as ROUGE-1, ROUGE-2 and ROUGE-L, which are all widely adopted evaluation metric for text summarization [15]. ROUGE-N (N = 1, 2 in our setting) is an N-gram recall between a candidate summary and a set of reference summaries. ROUGE-L is a similar statistic like ROUGE-N but based on longest common subsequences. When generating the titles, we use beam search with beam size 10. For the thoroughness of comparison, similar to NMT, we add another two baselines apart from the basic encoder-decoder model: the stacked-decoder model with 2 layers (Mdec?2 ), as well as the review net with 4 reviewers (Mreviewer?4 ). 4.2 Results The experimental results of text summarization are listed in Table 5. Again, the deliberation network achieves clear improvements over all the baselines. For example, in terms of ROUGE-2, it is 1.12 and 0.96 points better compared with stacked decoder model and review net respectively. Furthermore, one may note that a significant difference between NMT and text summarization is that: In NMT, the lengths of input and output sequence are very close; but in text summarization, the input is extremely 8 long while the output is very short. The better results brought by deliberation networks shows that even if the output sentence is short, it is helpful to include the deliberation process which refines the low-level draft in the first-pass decoder. 
Table 5: ROUGE-{1, 2, L} scores of text summarization Algorithm Mbase Mdec?2 Mreviewer?4 Mdelib 5 ROUGE-1 27.45 27.93 28.26 30.90 ROUGE-2 10.51 11.09 11.25 12.21 ROUGE-L 26.07 26.50 27.28 29.09 Conclusions and Future Work In this work, we have proposed deliberation networks for sequence generation tasks, in which the first-pass decoder is used for generating a raw sequence, and the second-pass decoder is used to polish the raw sequence. Experiments show that our method achieves much better results than several baseline methods in both machine translation and text summarization, and achieves a new single model state-of-the-art result on WMT?14 English to French translation. There are multiple promising directions to explore in the future. First, we will study how to apply the idea of deliberation to tasks beyond sequence generation, such as improving the image qualities generated by GAN [5]. Second, we will study how to refine/polish different levels of a neural network, like the hidden states in an RNN, or feature maps in a CNN. Third, we are curious about whether better sequences can be generated with more passes of decoders, i.e., refining a generated sequence again and again. Fourth, we will study how to speed up the inference of deliberation networks and reduce their inference time. Acknowledgments The authors would like to thank Yang Fan and Kaitao Song for implementing the deep neural machine translation basic model. This work is partially supported by the National Natural Science Foundation of China (Grant No. 61371192). References [1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, 2015. [2] R. Chatterjee, J. G. de Souza, M. Negri, and M. Turchi. The fbk participation in the wmt 2016 automatic post-editing shared task. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, 2016. [3] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder?decoder for statistical machine translation. In EMNLP, pages 1724?1734, 2014. [4] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional sequence to sequence learning. ICML, 2017. [5] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672?2680. 2014. [6] D. Graff and C. Cieri. English gigaword. linguistic data consortium, 2003. [7] D. He, H. Lu, Y. Xia, T. Qin, L. Wang, and T. Liu. Decoding with value networks for neural machine translation. In 31st Annual Conference on Neural Information Processing Systems (NIPS), 2017. [8] D. He, Y. Xia, T. Qin, L. Wang, N. Yu, T. Liu, and W.-Y. Ma. Dual learning for machine translation. In Advances In Neural Information Processing Systems, pages 820?828, 2016. 9 [9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. [10] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630?645. Springer, 2016. [11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735?1780, Nov. 1997. [12] S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On using very large target vocabulary for neural machine translation. In the annual meeting of the Association for Computational Linguistics, 2015. [13] D. Kingma and J. Ba. 
Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [14] J. Li, W. Monroe, and D. Jurafsky. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562, 2016. [15] C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8, 2004. [16] J. Niehues, E. Cho, T.-L. Ha, and A. Waibel. Pre-translation for neural machine translation. In COLING, 2016. [17] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In the annual meeting of the Association for Computational Linguistics, pages 311?318, 2002. [18] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015. [19] A. M. Rush, S. Chopra, and J. Weston. A neural attention model for abstractive sentence summarization. In EMNLP, pages 379?389, 2015. [20] A. M. Rush, S. Chopra, and J. Weston. A neural attention model for abstractive sentence summarization. ACL, 2015. [21] R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. the annual meeting of the Association for Computational Linguistics, 2016. [22] S. Shen, Y. Cheng, Z. He, W. He, H. Wu, M. Sun, and Y. Liu. Minimum risk training for neural machine translation. the annual meeting of the Association for Computational Linguistics, 2016. [23] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104?3112, 2014. [24] T. D. Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016. [25] Z. Tu, Y. Liu, L. Shang, X. Liu, and H. Li. Neural machine translation with reconstruction. In AAAI, 2017. [26] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In NIPS, 2017. [27] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3156?3164, 2015. [28] S. Wiseman and A. M. Rush. Sequence-to-sequence learning as beam-search optimization. In ACL, 2016. [29] WMT?14. http://www.statmt.org/wmt14/translation-task.html. 10 [30] L. Wu, L. Zhao, T. Qin, J. Lai, and T. Liu. Sequence prediction with unlabeled data by reward function learning. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), pages 3098?3104, 2017. [31] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. Google?s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. [32] Y. Xia, J. Bian, T. Qin, N. Yu, and L. Tie-Yan. Dual inference for machine learning. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), pages 3112?3118, 2017. [33] Y. Xia, T. Qin, W. Chen, J. Bian, N. Yu, and T. Liu. Dual supervised learning. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 3789?3798, 2017. [34] Y. Xia, F. Tian, T. Qin, N. Yu, and T. Liu. Sequence generation with target attention. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECMLPKDD), 2017. [35] K. Xu, J. 
Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015. [36] Z. Yang, Y. Yuan, Y. Wu, W. W. Cohen, and R. R. Salakhutdinov. Review networks for caption generation. In Advances in Neural Information Processing Systems, pages 2361?2369, 2016. [37] M. D. Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012. 11
6775 |@word cnn:1 briefly:1 middle:7 seems:1 d2:14 paid:1 sgd:2 carry:1 initial:1 liu:10 series:1 score:11 contains:2 configuration:1 document:1 subword:1 past:1 outperforms:3 com:3 contextual:5 gmail:1 parmar:1 gpu:1 refines:4 concatenate:1 nian:1 remove:1 update:1 generative:1 selected:1 intelligence:2 krikun:1 short:5 record:1 filtered:1 draft:7 coarse:1 node:2 readability:2 org:1 zhang:2 five:1 mathematical:1 htx:1 yuan:1 consists:3 paragraph:2 introduce:2 expected:4 indeed:2 roughly:4 behavior:2 dialog:1 kiros:1 multi:2 salakhutdinov:2 little:1 considering:1 provided:2 what:2 kind:1 transformation:2 quantitative:1 act:1 tie:3 zaremba:1 unit:4 grant:1 before:1 attend:1 local:1 limit:1 rouge:15 jiang:2 meet:1 subscript:1 might:1 acl:3 china:5 dynamically:1 forwarding:1 jurafsky:1 ease:1 vaswani:1 tian:2 averaged:1 unique:2 acknowledgment:1 yj:2 practice:4 block:1 optimizers:1 rnn:8 yan:3 thought:2 significantly:1 word:36 pre:4 refers:3 consortium:1 cannot:1 close:1 unlabeled:1 context:4 applying:4 writing:1 transformer:1 lijun:2 optimize:2 conventional:1 map:3 www:2 yt:6 reviewer:5 go:1 attention:15 fluent:1 shen:1 simplicity:1 pouget:1 rule:1 estimator:1 embedding:4 target:14 pty:2 caption:3 us:1 goodfellow:1 agreement:9 element:1 adadelta:4 recognition:2 particularly:1 utilized:1 database:1 preprint:6 wang:2 calculate:2 news:1 sun:4 k40:1 ranzato:1 environment:1 reward:1 warde:1 trained:3 reviewing:1 segment:1 efficiency:1 basis:3 translated:1 joint:2 schwenk:2 tx:3 talk:4 gnmt:2 train:6 separated:1 univ:1 stacked:5 fast:2 monte:3 artificial:2 zemel:1 tell:2 aggregate:1 refined:3 h0:1 quite:1 encoded:2 widely:6 larger:1 cvpr:1 richer:1 s:1 say:4 whose:1 encoder:33 polishing:5 statistic:1 ward:1 g1:3 unseen:1 jointly:3 itself:1 validates:1 final:3 deliberation:65 superscript:1 obviously:1 sequence:56 blob:1 net:3 sen:1 propose:6 reconstruction:1 fr:12 qin:7 frequent:1 tu:1 cao:1 translate:3 zai:1 achieve:2 papineni:1 description:1 sutskever:1 convergence:3 ijcai:2 captioning:2 generating:8 adam:2 help:1 attends:1 recurrent:2 ac:1 depending:1 odd:1 progress:1 implemented:2 indicate:1 come:1 direction:2 correct:2 used3:1 stochastic:1 human:4 implementing:1 assign:1 opt:3 merrienboer:1 mathematically:1 considered:1 ground:1 exp:2 great:1 mapping:1 week:2 major:1 optimizer:2 achieves:3 omitted:2 tanh:2 title:2 sensitive:1 create:1 establishes:1 weighted:1 brought:2 clearly:1 yarats:1 rather:1 newstest2013:1 pn:1 guangzhou:1 linguistic:1 encode:2 refining:2 improvement:2 longest:1 likelihood:1 indicates:1 polish:4 contrast:1 adversarial:1 baseline:10 helpful:1 inference:3 polosukhin:1 inaccurate:1 typically:2 integrated:1 hidden:10 transformed:1 tao:1 statmt:1 overall:1 among:1 flexible:1 unk:1 denoted:8 rnnsearch:9 negotiation:1 dual:7 dauphin:1 art:3 softmax:2 special:1 initialize:1 once:1 construct:1 beach:1 sampling:2 represents:1 yu:5 icml:3 jones:1 future:10 word2:1 mirza:1 employ:1 randomly:3 national:1 individual:1 lium:1 replaced:2 fire:2 microsoft:3 abstractive:4 evaluation:6 farley:1 behind:1 accurate:1 daily:1 necessary:1 conduct:4 initialized:3 rush:3 wiseman:1 phrase:6 introducing:1 rare:1 uniform:1 author:1 cho:5 st:9 lstm:7 international:4 memisevic:1 receiving:1 decoding:15 together:2 again:4 aaai:1 containing:1 leveraged:1 choose:3 emnlp:2 cognitive:2 book:1 derivative:1 zhao:1 li:2 potential:1 de:2 summarized:1 bold:1 explicitly:1 script:1 h1:2 try:3 view:1 apparently:1 jianxin:2 ni:1 accuracy:3 convolutional:1 sy:1 yellow:1 html:1 raw:6 norouzi:1 lu:1 carlo:3 ren:2 
converged:1 sennrich:1 taoqin:1 sixth:2 ty:8 ruhe:1 stop:1 sampled:3 dataset:4 gain:1 nenghai:1 birch:1 recall:1 knowledge:1 improves:1 organized:1 carefully:1 back:3 finetune:1 day:4 supervised:1 bian:2 improved:2 editing:3 done:1 furthermore:4 until:4 hand:2 eqn:3 lstms:1 lack:1 google:1 minibatch:2 french:6 yat:1 quality:6 newstest2014:1 usa:1 building:1 verify:4 unbiased:1 tense:2 read:1 self:1 complete:1 demonstrate:1 confusion:1 performs:1 egypt:11 bring:1 image:6 consideration:1 recently:1 common:7 specialized:1 multinomial:1 rl:1 cohen:1 insensitive:1 volume:2 belong:1 he:7 association:4 uszkoreit:1 bougares:1 refer:2 significant:1 automatic:3 pm:1 similarly:1 pointed:1 grangier:1 language:2 wmt:9 access:1 longer:1 haddow:1 etc:2 base:4 add:1 align:1 optimizing:1 apart:2 schmidhuber:1 nvidia:1 success:1 life:1 meeting:4 yi:2 seen:1 minimum:1 additional:2 preceding:3 maximize:2 ii:2 branch:1 afterwards:1 multiple:1 long:6 lin:3 lai:2 post:3 jean:1 paired:1 peace:6 halving:1 variant:1 basic:5 prediction:1 vision:2 metric:2 arxiv:12 achieved:2 hochreiter:1 beam:10 addressed:1 source:18 country:4 rest:2 pass:1 nmt:16 smt:1 bahdanau:2 inconsistent:1 effectiveness:5 encoderdecoder:2 call:1 integer:1 curious:1 chopra:3 leverage:5 yang:2 intermediate:1 split:1 enough:2 bengio:6 fit:1 architecture:2 restrict:1 reduce:1 idea:3 cn:3 tradeoff:1 tgz:1 motivated:1 whether:1 expression:1 bridging:1 song:1 speaking:3 deep:11 detailed:1 listed:1 clear:1 clip:1 reduced:1 generate:7 http:3 exist:1 risk:1 moses:1 blue:1 diverse:1 gigaword:3 write:1 discrete:1 four:1 changing:1 v1:2 ptx:2 year:5 beijing:1 package:1 master:1 you:2 fourth:1 named:1 throughout:2 wu:6 vn:2 summarizes:1 bit:1 dropout:3 layer:21 hi:4 bound:2 gomez:1 courville:2 cheng:1 fan:1 refine:2 annual:5 fei:1 encodes:1 generates:6 aspect:1 speed:1 toshev:1 extremely:3 gehring:1 according:2 turchi:1 waibel:1 watt:4 combination:1 remain:1 slightly:2 shallow:3 s1:1 happens:1 theano:3 computationally:1 previously:3 discus:3 eventually:1 mechanism:1 know:1 bpe:1 ge:5 fed:4 end:5 adopted:3 gulcehre:1 apply:4 v2:1 generic:1 batch:2 denotes:1 ecmlpkdd:1 include:1 linguistics:4 gan:1 zeiler:1 concatenated:2 chinese:3 especially:1 build:1 unchanged:1 move:1 objective:1 added:1 font:1 kaiser:1 said:7 gradient:5 thank:1 decoder:69 mail:1 bleu:19 ozair:1 length:7 besides:1 mini:2 newstest2012:1 statement:1 ba:2 design:1 implementation:4 summarization:17 unknown:2 perform:1 twenty:2 neuron:1 observation:2 datasets:1 convolution:1 situation:1 looking:4 norwegian:3 team:1 auli:2 shazeer:1 stack:3 souza:1 yingce:3 introduced:2 pair:5 gru:4 sentence:32 optimized:1 connection:1 learned:1 kingma:1 nip:4 beyond:2 usually:3 pattern:1 reading:2 summarize:1 perl:1 including:6 green:1 memory:1 ldc:1 difficulty:2 natural:2 participation:1 residual:3 zhu:1 improve:8 github:1 technology:1 picture:1 concludes:1 hm:1 coupled:1 text:15 review:10 understanding:1 literature:1 geometric:1 zh:8 python:1 discovery:1 macherey:2 loss:1 generation:16 generator:1 validation:5 h2:2 integrate:1 foundation:1 affine:1 consistent:2 article:4 principle:1 share:1 roukos:1 translation:55 summary:3 token:1 supported:1 last:1 english:10 side:12 bias:1 understand:2 face:10 sysu:1 benefit:1 van:1 xia:8 overcome:2 calculated:1 evaluating:1 world:1 vocabulary:4 dimension:4 plain:1 concavity:1 gram:2 adopts:2 forward:4 concretely:1 adaptive:2 refinement:4 collection:1 san:1 far:3 erhan:1 sj:3 nov:1 compact:1 global:9 corpus:4 xi:4 alternatively:1 zhe:1 subsequence:1 un:2 iterative:1 
search:6 mosesdecoder:1 table:10 promising:2 ca:1 obtaining:1 improving:1 european:2 shuo:1 whole:3 bilingual:2 xu:2 representative:1 en:19 precision:1 sub:2 position:1 decoded:1 comput:1 candidate:2 third:1 coling:1 specific:4 explored:1 list:1 cease:2 abadie:1 concern:1 intractable:1 workshop:1 flattened:1 illustrates:1 chatterjee:1 chen:2 monroe:1 gap:1 explore:1 intern:1 gao:2 visual:1 vinyals:2 contained:2 g2:3 partially:1 springer:1 truth:1 extracted:2 ma:1 weston:2 identity:1 shared:2 change:1 hard:1 included:1 specifically:2 except:1 graff:1 acting:1 shang:2 called:1 pas:54 experimental:4 succeeds:1 east:6 ustc:2 dxy:4 evaluate:3 d1:15 schuster:1
6,385
6,776
Adaptive Clustering through Semidefinite Programming Martin Royer Laboratoire de Math?matiques d?Orsay, Univ. Paris-Sud, CNRS, Universit? Paris-Saclay, 91405 Orsay, France [email protected] Abstract We analyze the clustering problem through a flexible probabilistic model that aims to identify an optimal partition on the sample X1 , ..., Xn . We perform exact clustering with high probability using a convex semidefinite estimator that interprets as a corrected, relaxed version of K-means. The estimator is analyzed through a non-asymptotic framework and showed to be optimal or near-optimal in recovering the partition. Furthermore, its performances are shown to be adaptive to the problem?s effective dimension, as well as to K the unknown number of groups in this partition. We illustrate the method?s performances in comparison to other classical clustering algorithms with numerical experiments on simulated high-dimensional data. 1 Introduction Clustering, a form of unsupervised learning, is the classical problem of assembling n observations X1 , ..., Xn from a p-dimensional space into K groups. Applied fields are craving for robust clustering techniques, such as computational biology with genome classification, data mining or image segmentation from computer vision. But the clustering problem has proven notoriously hard when the embedding dimension is large compared to the number of observations (see for instance the recent discussions from [2, 21]). A famous early approach to clustering is to solve for the geometrical estimator K-means [19, 13, 14]. The intuition behind its objective is that groups are to be determined in a way to minimize the total intra-group variance. It can be interpreted as an attempt to "best" represent the observations by K points, a form of vector quantization. Although the method shows great performances when observations are homoscedastic, K-means is a NP-hard, ad-hoc method. Clustering with probabilistic frameworks are usually based on maximum likelihood approaches paired with a variant of the EM algorithm for model estimation, see for instance the works of Fraley & Raftery [11] and Dasgupta & Schulman [9]. These methods are widespread and popular, but they tend to be very sensitive to initialization and model misspecifications. Several recent developments establish a link between clustering and semidefinite programming. Peng & Wei [17] show that the K-means objective can be relaxed into a convex, semidefinite program, leading Mixon et al. [16] to use this relaxation under a subgaussian mixture model to estimate the cluster centers. Yan and Sarkar [24] use a similar semidefinite program in the context of covariate clustering, when the network has nodes and covariates. Chr?tien et al. [8] use a slightly different form of a semidefinite program to recover the adjacency matrix of the cluster graph with high probability. Lastly in the different context of variable clustering, Bunea et al. [6] present a semidefinite program with a correction step to produce non-asymptotic exact recovery results. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. In this work, we build upon the work and context of [6], and transpose and adapt their ideas for point clustering: we introduce a semidefinite estimator for point clustering inspired by the findings of [17] with a correction component originally presented in [6]. 
We show that it produces a very strong contender for clustering recovery in terms of speed, adaptivity and robustness to model perturbations. In order to do so we produce a flexible probabilistic model inducing an optimal partition of the data that we aim to recover. Using the same structure of proof in a different context, we establish elements of stochastic control (see for instance Lemma A.1 on the concentration of random subgaussian Gram matrices in the supplementary material) to derive conditions of exact clustering recovery with high probability and show optimal performances ? including in high dimensions, improving on [16], as well as adaptivity to the effective dimension of the problem. We also show that our results continue to hold without knowledge of the number of structures given one single positive tuning parameter. Lastly we provide evidence of our method?s efficiency and further insight from simulated data. Notation. Throughout this work we use the convention 0/0 := 0 and [n] = {1, ..., n}. We take an . bn to mean that an is smaller than bn up to an absolute constant factor. Let Sd?1 denote the 0 unit sphere in Rd . For q ? N? ? {+?}, ? ? Rd , |?|q is the lq -norm and for M ? Rd?d , |M |q , |M |F and |M |op are respectively the entry-wise lq -norm, the Frobenius norm associated with scalar product h., .i and the operator norm. |D|V is the variation semi-norm for a diagonal matrix D, the difference between its maximum and minimum element. Let A < B mean that A ? B is symmetric, positive semidefinite. 2 Probabilistic modeling of point clustering Consider X1 , ..., Xn and let ?a = E [Xa ]. The variable Xa can be decomposed into Xa = ?a + Ea , a = 1, ..., n, (1) p with Ea stochastic centered variables in R . Definition 1. For K > 1, ? = (?1 , ..., ?K ) ? (Rp )K , ? > 0 and G = {G1 , ..., GK } a partition of [n], we say X1 , ..., Xn are (G, ?, ?)-clustered if ?k ? [K], ?a ? Gk , |?a ? ?k |2 6 ?. We then call ?(?) := min |?k ? ?l |2 k<l (2) the separation between the cluster means, and ?(G, ?, ?) := ?(?)/? (3) the discriminating capacity of (G, ?, ?). In this work we assume that X1 , ..., Xn are (G, ?, ?)-clustered. Notice that this definition does not impose any constraint on the data: for any given G, there exists a choice of ?, means and radius ? important enough so that X1 , ..., Xn are (G, ?, ?)-clustered. But we are interested in partitions with greater discriminating capacity, i.e. that make more sense in terms of group separation. Indeed remark that if ?(G, ?, ?) < 2, the population clusters {?a }a?G1 , ..., {?a }a?GK are not linearly separable, but a high ?(G, ?, ?) implies that they are well-separated from each other. Furthermore, we have the following result. ? Proposition 1. Let (GK , ?? , ? ? ) ? arg max ?(G, ?, ?) for (G, ?, ?) such that X1 , ..., Xn are ? ? (G, ?, ?)-clustered, and |G| = K. If ?(GK , ?? , ? ? ) > 4 then GK is the unique maximizer of ?(G, ?, ?). ? So GK is the partition maximizing the discriminating capacity over partitions of size K. Therefore in this work, we will assume that there is a K > 1 such that X1 , ..., Xn is (G, ?, ?)-clustered with |G| = K and ?(G, ?, ?) > 4. By Proposition 1, G is then identifiable. It is the partition we aim to recover. We also assume that X1 , ..., Xn are independent observations with subgaussian behavior. Instead of the classical isotropic definition of a subgaussian random vector (see for example [20]), we use a more flexible definition that can account for anisotropy. Definition 2. 
Let Y be a random vector in Rd , Y has a subgaussian distribution if there exist ? ? Rd?d such that ?x ? Rd , h T i T E ex (Y ?E Y ) 6 ex ?x/2 . (4) 2 We then call ? a variance-bounding matrix of random vector Y , and write shorthand Y ? subg(?). Note that Y ? subg(?) implies Cov(Y ) 4 ? in the semidefinite sense of the inequality. To sum-up our modeling assumptions in this work: Hypothesis 1. Let X1 , ..., Xn be independent, subgaussian, (G, ?, ?)-clustered with ?(G, ?, ?) > 4. Remark that the modelization of Hypothesis 1 can be connected to another popular probabilistic model: if we further ask that X1 , ..., Xn are identically-distributed within a group (and hence ? = 0), the model becomes a realization of a mixture model. 3 Exact partition recovery with high probability Let G = {G1 , ..., GK } and m := mink?[K] |Gk | denote the minimum cluster size. G can be represented by its caracteristic matrix B ? ? Rn?n defined as ?k, l ? [K]2 , ?(a, b) ? Gk ? Gl ,  1/|Gk | if k = l ? Bab := 0 otherwise. In what follows, we will demonstrate the recovery of G through recovering its caracteristic matrix B ? . We introduce the sets of square matrices {0,1} CK := {B ? Rn?n : B T = B, tr(B) = K, B1n = 1n , B 2 = B} + Rn?n + CK := {B ? [ C := CK . T : B = B, tr(B) = K, B1n = 1n , B < 0} (5) (6) (7) K?N {0,1} {0,1} We have: CK ? CK ? C and CK is convex. Notice that B ? ? CK ? can be expressed as (2007) [17] shows that the K-means estimator B . A result by Peng, Wei ? = arg maxh?, b Bi B (8) {0,1} B?CK b := (hXa , Xb i)(a,b)?[n]2 ? Rn?n , the observed Gram matrix. Therefore a natural relaxation is for ? to consider the following estimator: b := arg maxh?, b Bi. B (9) B?CK b = ? + ? for ? := (h?a , ?b i)(a,b)?[n]2 ? Rn?n , and ? := E [hEa , Eb i] Notice that E ? (a,b)?[n]2 = diag (tr(Var(Ea )))16a6n ? Rn?n . The following two results demonstrate that ? is the signal structure that lead the optimizations of (8) and (9) to recover B ? , whereas ? is a bias term that can hurt the process of recovery. ? Proposition 2. There exist c0 > 1 absolute constant such that if ?2 (G, ?, ?) > c0 (6 + n/m) and m?2 (?) > 8|?|V , then we have arg maxh? + ?, Bi = B ? = arg maxh? + ?, Bi. (10) B?CK {0,1} B?CK b estimator, as well as the K-means estimator, would recover partition This proposition shows that the B G on the population Gram matrix if the variation semi-norm of ? were sufficiently small compared to the cluster separation. Notice that to recover the? partition on the population version, we require the discriminating capacity to grow as fast as 1 + ( n/m)1/2 instead of simply 1 from Hypothesis 1. The following proposition demonstrates that if the condition on the variation semi-norm of ? is not met, G may not even be recovered on the population version. Proposition 3. There exist G, ?, ? and ? such that ?2 (G, ?, ?) = +? but we have m?2 (?) < 2|?|V and B? ? / arg maxh? + ?, Bi and B? ? / arg maxh? + ?, Bi. B?CK {0,1} B?CK 3 (11) So Proposition 3 shows that even if the population clusters are perfectly discriminated, there is a configuration for the variances of the noise that makes it impossible to recover the right clustering by K-means. This shows that K-means may fail when the random variable homoscedasticity assumption is violated, and that it is important to correct for ? = diag(tr(Var(Ea )))16a6n . b corr . Then substracting ? b corr from ? b can be interpreted as a Suppose we produce such an estimator ? b as an estimator of ?. Hence the previous results demonstrate correcting term, i.e. a way to de-bias ? 
the interest of studying the following semi-definite estimator of the projection matrix B ? , let b corr := arg maxh? b ?? b corr , Bi. B (12) B?CK In order to demonstrate the recovery of B ? by this estimator, we introduce different quantitative measures of the "spread" of our stochastic variables, that affect the quality of the recovery. By Hypothesis 1 there exist ?1 , ..., ?n such that ?a ? [n], Xa ? subg(?a ). Let ? 2 := max |?a |op , a?[n] V 2 := max |?a |F , a?[n] ? 2 := max tr(?a ) a?[n] (13) b corr . Since there is no relation between the variances of the points in our model, We now produce ? there is very little hope of estimating Var(Ea ). As for our quantity of interest tr(Var(Ea )), a form of volume, a rough estimation is challenging but possible. The estimator from [6] can be ?Xd adapted to our context. For (a, b) ? [n]2 let V (a, b) := max(c,d)?([n]\{a,b})2 hXa ? Xb , |XXcc?X i, d |2 bb1 := arg min V (a, b) and bb2 := arg min b V (a, b). Then for a ? [n], let b?[n]\{a} b?[n]\{a,b1 }   b corr := diag hXa ? Xb , Xa ? Xb ia?[n] . ? b1 b2 (14) Proposition 4. Assume that m > 2. For c6 , c7 > 0 absolute constants, with probability larger than 1 ? c6 /n we have   p b corr ? ?|? 6 c7 ? 2 log n + (? + ? log n)? + ? 2 . |? (15) So apart from the radius ? terms, that come from generous model assumptions, a proxy for ? is produced at a ? 2 log n rate that we could not expect to improve on. Nevertheless, this control on ? is key to attain the optimal rates below. It is general and completely independent of the structure of G, as there is no relation between G and ?. We are now ready to introduce this paper?s main result: a condition on the separation between the cluster means sufficient for ensuring recovery of B ? with high probability. Theorem 1. Assume that m > 2. For c1 , c2 > 0 absolute constants, if p p  ? m?2 (?) > c2 ? 2 (n + m log n) + V 2 ( n + m log n) + ?(? log n + ?) + ? 2 ( n + m) , (16) b corr = B ? , and therefore Gbcorr = G. then with probability larger than 1 ? c1 /n we have B We call the right hand side of (16) the separating rate. Notice that we can read two kinds of requirements coming from the separating rate: requirements on the radius ?, and?requirements on ? 2 , V 2 , ? dependent on the distributions of observations. It appears as if ? + ? log n?can be interpreted as a geometrical width of our problem. If we ask that ? is of the same order as ? log n, a maximum gaussian deviation for n variables, then all conditions on ? from?(16) can be removed. Thus for convenience of the following discussion we will now assume ? . ? log n. How optimal is the result from Theorem 1? Notice that our result is adapted to anisotropy in the noise, ? but to discuss optimality it is easier to look at the isotropic scenario: V 2 = p? 2 and ? 2 = p? 2 . Therefore ?2 (?)/? 2 represents a signal-to-noise ratio. For simplicity let us also assume that all groups have equal size, that is |G1 | = ... = |GK | = m so that n = mK and the sufficient condition (16) becomes r  ?2 (?) pK & K + log n + (K + log n) . (17) ?2 n 4 Optimality. To discuss optimality, we distinguish between low and high dimensional setups. In the low-dimensional setup n ? m log n & p, we obtain the following condition:  ?2 (?) & K + log n . 2 ? (18) Discriminating with high probability between n observations from two gaussians in dimension 1 would require a separating rate of at least ? 2 log n. This implies that when K . log n, our result is minimax. 
Otherwise, to our knowledge the best clustering result on approximating mixture center is from [16], and on the condition that ?2 (?)/? 2 & K 2 . Furthermore, the K & log n regime is known in the stochastic-block-model community as a hard regime where a gap is surmised to exist between the minimal information-theoretic rate and the minimal achievable computational rate (see for example [7]). In the high-dimensional setup n ? m log n . p, condition (17) becomes: r pK ?2 (?) & (K + log n) . ?2 n (19) There are few information-theoretic bounds for high-dimension clustering. Recently, Banks, Moore, Vershynin, Verzelen and Xu (2017) [3] proved a lower p bound for Gaussian mixture clustering detection, namely they require a separation of order K(log K)p/n. When K . log n, our condition is only different in that it replaces log(K) by log(n), a price to pay for going from detecting the clusters to exactly recovering the clusters. Otherwise when K grows faster than log n there might exist a gap between the minimal possible rate and the achievable, as discussed previously. Adaptation to effective dimension. We can analyse further the condition (16) by introducing an effective dimension r? , measuring the largest volume repartition for our variance-bounding matrices ?1 , ..., ?n . We will show that our estimator adapts to this effective dimension. Let r? := maxa?[n] tr(?a ) ?2 = , 2 ? maxa?[n] |?a |op (20) r? can also be interpreted as a form of global effective rank of matrices ?a . Indeed, define Re(?) := tr(?)/|?|op , then we have r? 6 maxa?[n] Re(?a ) 6 maxa?[n] rank(?a ) 6 p. ? ? Now using V 2 6 r? ? 2 and ? = r? ?, condition (16) can be written as r  ?2 (?) r? K & K + log n + (K + log n) . (21) ?2 n By comparing this equation to (17), notice that r? is in place of p, indeed playing the role of an effective dimension for the problem. This shows that our estimator adapts to this effective dimension, without the use of any dimension reduction step. In consequence, equation (21) distinguishes between an actual high-dimensional setup: n ? m log n . r? and a "low" dimensional setup r? . n ? m log n under which, regardless of the actual value of p, our estimators recovers under the near-minimax condition of (18). b corr in the theorem above when n + m log n . r? . This informs on the effect of correcting term ? The un-corrected version of the semi-definite program (9) has a leading separating rate of ? 2 /m = b corr correction on the other hand, (21) has leading separating factor smaller ? 2 r? /m, pbut with the ? ? ? 2 than ? (K + log n)r? /m = ? 2 n + m log n ? r? /m. This proves p that in a high-dimensional setup, our correction enhances the separating rate of at least a factor (n + m log n)/r? . 4 Adaptation to the unknown number of group K It is rarely the case that K is known, but we can proceed without it. We produce an estimator adaptive to the number of groups K: let ? b ? R+ , we now study the following adaptive estimator: e corr := arg maxh? b ?? b corr , Bi ? ? B b tr(B). B?C 5 (22) Theorem 2. Suppose that m > 2 and (16) is satisfied. For c3 , c4 , c5 > 0 absolute constants suppose that the following condition on ? b is satisfied  ? p ?  2 2 b < m?2 (?), (23) c4 V n + ? n + ?(? log n + ?) + ? 2 n < c5 ? e corr = B ? with probability larger than 1 ? c3 /n then we have B Notice that condition (23) essentially requires ? b to be seated between m?2 (?) and some components of the right-hand side of (16). 
So under (23), the results from the previous section apply to the e corr as well and this shows that it is not necessary to know K in order to adaptive estimator B perform well for recovering G. Finding an optimized, data-driven parameter ? b using some form of cross-validation is outside of the scope of this paper. 5 Numerical experiments We illustrate our method on simulated Gaussian data in two challenging, high-dimensional setup experiments for comparing clustering estimators. Our sample of n = 100 points are drawn from K = 5 identically-sized, perfectly discriminated non-isovolumic clusters of Gaussians - that is we have ?k ? [K], ?a ? Gk , Ea ? N (0, ?k ) such that |G1 | = ... = |GK | = 20. The distributions are chosen to be isotropic, and the ratio between the lowest and the highest standard deviation is of 1 to 10. We draw points of a Rp space in two different scenarii. In (S1 ), for a given dimension space p = 500 and a fixed isotropic noise level, we report the algorithm?s performances as the signal-to-noise ratio ?2 (?)/? 2 is increased from 1 to 15. In (S2 ) we impose a fixed signal to noise ratio and observe the algorithm?s decay in performance as the space dimension p is increased from 102 to 105 (logarithmic scale). All reported points of the simulated space represent a hundred simulations, and indicate a median value with asymmetric standard deviations in the form of errorbars. b corr is a hard problem as n grows. For this task we implemented an ADMM Solving for estimator B solver from the work of Boyd et al. [4] with multiple stopping criterions including a fixed number of iterations of T = 1000. The complexity of the optimization is then roughly O(T n3 ). For reference, we compare the recovering capacities of Gbcorr , labeled ?pecok? in Figure 1 with other classical clustering algorithm. We chose three different but standard clustering procedures: Lloyd?s K-means algorithm [13] with a thousand K-means++ initialization of [1] (although in scenario (S2 ), the algorithm is too slow to converge as p grows so we do not report it), Ward?s method for Hierarchical Clustering [23] and the low-rank clustering algorithm applied to the Gram matrix, a spectral method appearing in McSherry [15]. Lastly we include the CORD algorithm from Bunea et al. [5]. We measure the performances of estimators by computing the adjusted mutual information (see for instance [22]) between the truth and its estimate. In the two experiments, the results of Gbcorr are markedly better than that of other methods. Scenario (S1 ) shows it can achieve exact recovery with a lesser signal to noise ratio than its competitors, whereas scenario (S2 ) shows its performances start to decay much later than the other methods as the space dimension is increased exponentially. Table 1 summarizes the simulations in a different light: for different parameter value on each line, we count the number of experiments (out of a hundred) that had an adjusted mutual information score equal to 0.9 or higher. This accounts for exact recoveries, or approximate recoveries that reasonably reflected the underlying truth. In this table it is also evident that Gbcorr performs uniformly better, be it for exact or approximate recovery: it manages to recover the underlying truth much sooner in terms of signal-to-noise ratio, and for a given signal-to-noise ratio it will represent the truth better as the embedding dimension increases. 
Lastly Table 1 provides the median computing time in seconds for each method over the entire b corr is very costly to compute. experiment. Gbcorr comes with important computation times because ? Our method is computationally intensive but it is of polynomial order. The solving of a semidefinite program is a vastly developing field of Operational Research and even though we used the classical ADMM method of [4] that proved effective, this instance of the program could certainly have seen a more profitable implementation in the hands of a domain expert. All of the compared methods have a very hard time reaching high sample sizes n in the high dimensional context. The P YTHON 3 implementation of the method used is found in open access here: martinroyer/pecok [18] 6
6776 |@word version:4 polynomial:1 achievable:2 norm:7 c0:2 open:1 simulation:2 bn:2 tr:9 reduction:1 configuration:1 score:1 mixon:1 recovered:1 comparing:2 written:1 numerical:2 partition:12 isotropic:4 detecting:1 math:2 node:1 provides:1 c6:2 c2:2 shorthand:1 introduce:4 peng:2 homoscedasticity:1 indeed:3 roughly:1 behavior:1 sud:1 inspired:1 decomposed:1 anisotropy:2 little:1 actual:2 solver:1 becomes:3 estimating:1 notation:1 underlying:2 lowest:1 what:1 kind:1 interpreted:4 maxa:4 finding:2 quantitative:1 xd:1 exactly:1 universit:1 demonstrates:1 control:2 unit:1 positive:2 sd:1 consequence:1 might:1 chose:1 initialization:2 eb:1 challenging:2 bi:8 unique:1 block:1 definite:2 procedure:1 yan:1 attain:1 boyd:1 projection:1 convenience:1 operator:1 context:6 impossible:1 center:2 maximizing:1 regardless:1 convex:3 simplicity:1 recovery:13 correcting:2 estimator:22 insight:1 embedding:2 population:5 variation:3 hurt:1 profitable:1 suppose:3 exact:7 programming:2 hypothesis:4 element:2 asymmetric:1 labeled:1 observed:1 role:1 thousand:1 cord:1 connected:1 removed:1 highest:1 intuition:1 complexity:1 covariates:1 solving:2 upon:1 efficiency:1 completely:1 represented:1 univ:1 separated:1 fast:1 effective:9 outside:1 supplementary:1 solve:1 larger:3 say:1 otherwise:3 cov:1 ward:1 g1:5 analyse:1 hoc:1 product:1 coming:1 fr:1 adaptation:2 realization:1 achieve:1 adapts:2 inducing:1 frobenius:1 cluster:11 requirement:3 produce:6 illustrate:2 derive:1 informs:1 op:4 strong:1 recovering:5 implemented:1 implies:3 come:2 convention:1 met:1 indicate:1 radius:3 correct:1 stochastic:4 centered:1 material:1 adjacency:1 require:3 clustered:6 proposition:8 adjusted:2 correction:4 hold:1 sufficiently:1 great:1 scope:1 early:1 generous:1 homoscedastic:1 estimation:2 sensitive:1 largest:1 bunea:2 hope:1 rough:1 gaussian:3 aim:3 ck:14 reaching:1 rank:3 likelihood:1 sense:2 dependent:1 stopping:1 cnrs:1 entire:1 relation:2 going:1 france:1 interested:1 arg:11 classification:1 flexible:3 development:1 mutual:2 field:2 equal:2 beach:1 bab:1 biology:1 represents:1 look:1 unsupervised:1 np:1 report:2 few:1 distinguishes:1 psud:1 attempt:1 detection:1 interest:2 mining:1 intra:1 certainly:1 analyzed:1 mixture:4 semidefinite:11 light:1 behind:1 mcsherry:1 xb:4 bb1:1 necessary:1 sooner:1 re:2 minimal:3 mk:1 instance:5 increased:3 modeling:2 measuring:1 introducing:1 deviation:3 entry:1 hundred:2 too:1 reported:1 contender:1 vershynin:1 st:1 discriminating:5 probabilistic:5 vastly:1 satisfied:2 expert:1 leading:3 account:2 de:2 b2:1 lloyd:1 ad:1 later:1 analyze:1 start:1 recover:8 minimize:1 square:1 variance:5 identify:1 famous:1 produced:1 manages:1 notoriously:1 definition:5 competitor:1 c7:2 proof:1 associated:1 recovers:1 proved:2 popular:2 ask:2 knowledge:2 segmentation:1 ea:7 appears:1 originally:1 higher:1 reflected:1 wei:2 though:1 furthermore:3 xa:5 lastly:4 hand:4 maximizer:1 widespread:1 quality:1 grows:3 usa:1 effect:1 hence:2 read:1 symmetric:1 moore:1 width:1 criterion:1 evident:1 theoretic:2 demonstrate:4 performs:1 geometrical:2 image:1 wise:1 matiques:1 recently:1 discriminated:2 exponentially:1 modelization:1 volume:2 discussed:1 assembling:1 tuning:1 rd:6 had:1 access:1 maxh:8 showed:1 recent:2 subg:3 apart:1 scenario:4 driven:1 inequality:1 continue:1 tien:1 seen:1 minimum:2 greater:1 relaxed:2 impose:2 converge:1 signal:7 semi:5 multiple:1 faster:1 adapt:1 cross:1 long:1 sphere:1 paired:1 ensuring:1 variant:1 vision:1 essentially:1 iteration:1 represent:3 c1:2 whereas:2 laboratoire:1 grow:1 
median:2 markedly:1 tend:1 call:3 orsay:2 near:2 subgaussian:6 enough:1 identically:2 affect:1 perfectly:2 interprets:1 idea:1 lesser:1 intensive:1 fraley:1 proceed:1 remark:2 exist:6 notice:8 write:1 dasgupta:1 group:9 key:1 nevertheless:1 drawn:1 graph:1 relaxation:2 sum:1 place:1 throughout:1 verzelen:1 separation:5 draw:1 summarizes:1 bound:2 pay:1 distinguish:1 replaces:1 identifiable:1 adapted:2 constraint:1 n3:1 speed:1 min:3 optimality:3 separable:1 martin:2 developing:1 smaller:2 slightly:1 em:1 b1n:2 s1:2 computationally:1 equation:2 previously:1 discus:2 count:1 fail:1 know:1 studying:1 gaussians:2 apply:1 observe:1 hierarchical:1 bb2:1 spectral:1 appearing:1 robustness:1 rp:2 clustering:26 include:1 build:1 establish:2 approximating:1 classical:5 prof:1 objective:2 quantity:1 concentration:1 costly:1 diagonal:1 enhances:1 link:1 simulated:4 capacity:5 separating:6 hea:1 ratio:7 setup:7 gk:14 mink:1 implementation:2 unknown:2 perform:2 observation:7 rn:6 perturbation:1 misspecifications:1 community:1 sarkar:1 namely:1 paris:2 c3:2 optimized:1 c4:2 errorbars:1 nip:1 usually:1 below:1 regime:2 saclay:1 program:7 including:2 max:5 ia:1 natural:1 minimax:2 improve:1 raftery:1 ready:1 schulman:1 asymptotic:2 expect:1 adaptivity:2 proven:1 var:4 substracting:1 validation:1 sufficient:2 proxy:1 bank:1 playing:1 seated:1 gl:1 transpose:1 bias:2 side:2 absolute:5 distributed:1 dimension:16 xn:11 gram:4 genome:1 c5:2 adaptive:5 approximate:2 global:1 b1:2 un:1 table:3 reasonably:1 robust:1 ca:1 operational:1 improving:1 domain:1 diag:3 pk:2 spread:1 main:1 linearly:1 bounding:2 noise:9 s2:3 x1:11 xu:1 slow:1 lq:2 theorem:4 covariate:1 decay:2 evidence:1 exists:1 quantization:1 corr:16 gap:2 easier:1 logarithmic:1 simply:1 expressed:1 scalar:1 truth:4 sized:1 price:1 admm:2 hard:5 determined:1 corrected:2 craving:1 uniformly:1 lemma:1 total:1 rarely:1 chr:1 violated:1 repartition:1 ex:2
6,386
6,777
Log-normality and Skewness of Estimated State/Action Values in Reinforcement Learning Liangpeng Zhang1,2 , Ke Tang3,1 , and Xin Yao3,2 1 School of Computer Science and Technology, University of Science and Technology of China 2 University of Birmingham, U.K. 3 Shenzhen Key Lab of Computational Intelligence, Department of Computer Science and Engineering, Southern University of Science and Technology, China [email protected], [email protected], [email protected] Abstract Under/overestimation of state/action values are harmful for reinforcement learning agents. In this paper, we show that a state/action value estimated using the Bellman equation can be decomposed to a weighted sum of path-wise values that follow log-normal distributions. Since log-normal distributions are skewed, the distribution of estimated state/action values can also be skewed, leading to an imbalanced likelihood of under/overestimation. The degree of such imbalance can vary greatly among actions and policies within a single problem instance, making the agent prone to select actions/policies that have inferior expected return and higher likelihood of overestimation. We present a comprehensive analysis to such skewness, examine its factors and impacts through both theoretical and empirical results, and discuss the possible ways to reduce its undesirable effects. 1 Introduction In reinforcement learning (RL) [1, 2], actions executed by the agent are decided by comparing relevant state values V or action values Q. In most cases, the ground truth V and Q are not available to the ? instead. Therefore, whether or not an agent, and the agent has to rely on estimated values V? and Q ? ? RL algorithm yields sufficiently accurate V and Q is a key factor to its performance. Many researches have proved that, for many popular RL algorithms such as Q-learning [3] and value iteration [4], estimated values are guaranteed to converge in the limit to their ground truth values [5, 6, 7, 8]. Still, under/overestimation of state/action values occur frequently in practice. Such phenomena are often considered as the result of insufficient sample size or the utilisation of function approximation [9]. However, recent researches have pointed out that the basic estimators of V and Q derived from the Bellman equation, which were considered unbiased and have been widely applied in RL algorithms, are actually biased [10] and inconsistent [11]. For example, van Hasselt [10] showed that the max operator in the Bellman equation and its transforms introduces bias to the estimated action values, resulting in overestimation. New operators and algorithms have been proposed to correct such biases [12, 13, 14], inconsistency [11] and other issues of value-based RL [15, 16, 17, 18]. This paper shows that, despite having great improvements in recent years, the value estimator of RL can still suffer from under/overestimation. Specifically, we show that the distributions of estimated state/action values are very likely to be skewed, resulting in imbalanced likelihood of under/overestimation. Such skewness and likelihood can vary dramatically among actions/policies within a single problem instance. As a result, the agent may frequently select undesirable actions/policies, regardless of its value estimator being unbiased. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
Figure 1: Illustration of positive skewness (red distribution) and negative skewness (blue distribution). Thick and thin vertical lines represent the corresponding expected values and medians, respectively.

Such a phenomenon is illustrated in Figure 1. An estimated state/action value following the red distribution has mean $0.21$ and median $-0.61$, and thus tends to be underestimated. Another following the blue distribution, on the other hand, has mean $-0.92$ and median $0.61$, and is thus likely to be overestimated. Although the red expected return is noticeably greater than the blue one, the probability of an unbiased agent arriving at the opposite conclusion (blue is better), and thus selecting the inferior action/policy, is around $0.59$, which is even worse than random guessing.

This paper also indicates that such skewness comes from the Bellman equation passing the dispersion of the transition dynamics to the state/action values. Therefore, as long as a value is estimated by applying the Bellman equation to observations of transitions, it can suffer from the skewness problem, regardless of the algorithm being used.

Instead of proposing new algorithms, this paper suggests two general ways to reduce the skewness. The first is to balance the impacts of positive and negative immediate rewards on the estimated values. We show that positive rewards lead to positive skewness and vice versa, and thus a balance between the two may help neutralise the harmful effect of skewness. The second way is simply to collect more observations of transitions. However, our results indicate that the effectiveness of this approach diminishes quickly as the sample size grows, so it is recommended only when observations are cheap to obtain.

In the rest of this paper, we elaborate our analysis of the distributions of state/action values estimated by the Bellman equation. Specifically, we show that an estimated value in a general MDP can be decomposed into path-wise values in normalised single-reward Markov chains. The path-wise values are shown to obey log-normal distributions, and thus the distribution of an estimated value is the convolution of such log-normal distributions. To understand which factors have the most impact on the skewness, we derive expressions for the parameters of these log-normal distributions. We then discuss whether the skewness of estimated values can be reduced in order to improve learning performance. Finally, we provide empirical results that complement the theoretical ones, illustrating how substantial the undesirable effect of skewness can be, as well as to what degree such effect can be reduced by obtaining more observations.

2 Preliminaries

The standard RL setup of [1] is followed in this paper. An environment is formulated as a finite discounted Markov Decision Process (MDP) $M = (S, A, P, R, \gamma)$, where $S$ and $A$ are finite sets of states and actions, $P(s'|s,a)$ is a transition probability function, $R(s,a,s')$ is an immediate reward function, and $\gamma \in (0,1)$ is a discount factor. A trajectory $(s_1,a_1,s_2,r_1), (s_2,a_2,s_3,r_2), \dots, (s_t,a_t,s_{t+1},r_t)$ represents the interaction history between the agent and the MDP. The numbers of occurrences of a state-action pair $(s,a)$ and of a transition $(s,a,s')$ in such a trajectory are denoted $N_{s,a}$ and $N_{s,a,s'}$, respectively. A policy is denoted $\pi$, and $V^\pi(s)$ is the state value of $\pi$ starting from $s$.
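To make the estimation procedure concrete, the sketch below builds the counts $N_{s,a}$ and $N_{s,a,s'}$ from a trajectory and forms the empirical model $\hat P(s'|s,a) = N_{s,a,s'}/N_{s,a}$, $\hat R(s,a,s') = r_t$ used in the next paragraphs. This is our own illustrative sketch, not code from the paper; the tuple layout and the handling of unvisited pairs (left as NaN) are assumptions made for the example.

```python
import numpy as np

def empirical_model(trajectory, n_states, n_actions):
    """Counts N_{s,a}, N_{s,a,s'} and the empirical model
    P_hat(s'|s,a) = N_{s,a,s'} / N_{s,a}, R_hat(s,a,s') = r_t,
    from a trajectory given as (s_t, a_t, s_{t+1}, r_t) tuples."""
    N_sa = np.zeros((n_states, n_actions))
    N_sas = np.zeros((n_states, n_actions, n_states))
    R_hat = np.zeros((n_states, n_actions, n_states))
    for s, a, s_next, r in trajectory:
        N_sa[s, a] += 1
        N_sas[s, a, s_next] += 1
        R_hat[s, a, s_next] = r  # rewards assumed deterministic here
    with np.errstate(invalid="ignore"):  # unvisited (s,a) pairs become NaN
        P_hat = N_sas / N_sa[:, :, None]
    return P_hat, R_hat, N_sa

# Tiny example trajectory in a 2-state, 2-action MDP (illustrative only).
traj = [(0, 1, 1, 0.0), (1, 0, 1, 1.0), (1, 0, 0, 1.0), (0, 1, 1, 0.0)]
P_hat, R_hat, N_sa = empirical_model(traj, n_states=2, n_actions=2)
assert P_hat[1, 0, 1] == 0.5  # N_{1,0,1} / N_{1,0} = 1/2
```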
An action value $Q^\pi(s,a)$ is essentially a state value under a non-stationary policy that selects $a$ at the first step but follows $\pi$ thereafter. It can be analysed in the same way as $V^\pi$, so it suffices to focus on $V^\pi$ in the following sections. For convenience, the superscript $\pi$ in $V^\pi$ will be dropped when it is clear from the context.

For any $s \in S$ and policy $\pi$, it holds that $V^\pi(s) = \sum_{s' \in S} P(s'|s,\pi(s))\,\big(R(s,\pi(s),s') + \gamma V^\pi(s')\big)$, which is called the Bellman equation. Most model-based and model-free RL algorithms utilise this equation, its equivalents, or its transforms to estimate state values. Since $P$ and $R$ are unknown to the agent, estimated values $\hat V(s)$ are computed from estimated transitions $\hat P$ and rewards $\hat R$ instead, where $\hat P(s'|s,a) = N_{s,a,s'}/N_{s,a}$ and $\hat R(s,a,s') = r_t$ with $(s_t,a_t,s_{t+1}) = (s,a,s')$. This is done explicitly in model-based learning, and implicitly through the frequencies of updates in model-free learning. We will show later that the skewness of estimated values is decided by the dynamic effects of the environment rather than by the learning algorithm being used; it therefore suffices to focus on the model-based case in order to evaluate such skewness.

The skewness in this paper refers to the Pearson 2 coefficient $\big(\mathbb{E}[X] - \mathrm{median}[X]\big)/\sqrt{\mathrm{Var}[X]}$ [19, 20]. Following this definition, a distribution has a positive skewness if and only if its mean is greater than its median, and vice versa. Assuming that the bias of $\hat V$ is corrected or absent, we have $\mathbb{E}[\hat V] = V$. Thus, a positive skewness of $\hat V$ means $\Pr(\hat V < V) > 0.5$, indicating a higher likelihood of underestimation, while a negative skewness indicates a higher likelihood of overestimation. An informative indicator of skewness is $\mathrm{CDF}_{\hat V}(V) - 0.5$, where $\mathrm{CDF}_{\hat V}$ is the cumulative distribution function of $\hat V$. The sign of this indicator is consistent with the Pearson 2 coefficient, while its absolute value gives the extra probability of under/overestimation of $\hat V$ compared to a zero-skew distribution.

A log-normal distribution with location parameter $\mu$ and scale parameter $\sigma$ is denoted $\mathrm{lnN}(\mu,\sigma^2)$. A random variable $X$ follows $\mathrm{lnN}(\mu,\sigma^2)$ if and only if $\ln(X)$ follows the normal distribution $N(\mu,\sigma^2)$. The parameters $\mu$ and $\sigma$ of a log-normal distribution can be calculated from its mean and variance by
$$\mu = \ln\!\left(\frac{\mathbb{E}[X]^2}{\sqrt{\mathbb{E}[X]^2 + \mathrm{Var}[X]}}\right), \qquad \sigma^2 = \ln\!\left(1 + \frac{\mathrm{Var}[X]}{\mathbb{E}[X]^2}\right),$$
where $\mathbb{E}[X]$ and $\mathrm{Var}[X]$ are the mean and variance of $X \sim \mathrm{lnN}(\mu,\sigma^2)$, respectively.
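A minimal sketch of the mean/variance-to-parameter conversion just given; the moments used in the round-trip check are arbitrary and only for illustration.

```python
import math

def lognormal_params(mean, var):
    """(mu, sigma) of lnN(mu, sigma^2) matching a given mean E[X] > 0
    and variance Var[X], per the formulas above."""
    mu = math.log(mean ** 2 / math.sqrt(mean ** 2 + var))
    sigma = math.sqrt(math.log(1.0 + var / mean ** 2))
    return mu, sigma

# Round trip: for X ~ lnN(mu, sigma^2), E[X] = exp(mu + sigma^2 / 2).
mu, sigma = lognormal_params(2.0, 1.5)  # arbitrary illustrative moments
assert abs(math.exp(mu + sigma ** 2 / 2) - 2.0) < 1e-12
```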
3 Log-normality of Estimated State Values

In this section, we elaborate our analysis of the distributions of estimated values $\hat V$. The analysis proceeds in three steps. First, state values in general MDPs are decomposed into state values in relevant normalised single-reward Markov chains. Second, these are further decomposed into path-wise state values. Third, the path-wise state values are shown to obey log-normal distributions.

3.1 Decomposing into Normalised Single-reward Markov chains

Given an MDP $M$ and a policy $\pi$, the interaction between $\pi$ and $M$ forms a Markov chain $M^\pi$, with transition probability $p_{i,j} = P(s_j|s_i,\pi(s_i))$ and reward $r_{i,j} = R(s_i,\pi(s_i),s_j)$ from an arbitrary state $s_i$ to a state $s_j$. Let $P^\pi$ be the transition matrix of $M^\pi$, $V^\pi$ the (column) vector of state values, $R^\pi$ the reward matrix, and $J$ a vector of ones of the same size as $V^\pi$. The Bellman equation is then equivalent to $V^\pi = (P^\pi \circ R^\pi)J + \gamma P^\pi V^\pi = (I - \gamma P^\pi)^{-1}(P^\pi \circ R^\pi)J$, where $I$ is the identity matrix and $\circ$ is the Hadamard product.

This equation indicates that a state value is a weighted sum of dynamic effects, with rewards serving as the weights of the summation. Precisely, let $B = (I - \gamma P^\pi)^{-1}$; then the equation above becomes $V^\pi = B(P^\pi \circ R^\pi)J$, or $V^\pi(s_i) = \sum_{j,k} r_{j,k}\,(b_{i,j}\,p_{j,k})$. Here, the term $(b_{i,j}\,p_{j,k})$ describes the joint dynamic effect of starting from $s_i$ and ending with the transition $s_j \to s_k$, which will be elaborated in Section 3.2.

Let $M^\pi_{j,k}$ denote a normalised single-reward Markov chain (NSR-MC) of $M^\pi$, which has exactly the same $S$, $A$, $\gamma$ and $P^\pi$ as $M^\pi$, but whose rewards are all trivially 0 except $r_{j,k} = 1$. For an NSR-MC $M^\pi_{j,k}$, the equation above becomes $V_{M^\pi_{j,k}}(s_i) = b_{i,j}\,p_{j,k}$. Thus, a state value $V$ of a general MDP $M$ can be rewritten as the weighted sum of the state values of all $|S|^2$ NSR-MCs $\{M^\pi_{j,k}\}$ of $M$, i.e.
$$V_M(s_i) = \sum_{j,k} r_{j,k}\, V_{M^\pi_{j,k}}(s_i). \qquad (1)$$
Therefore, the next step of the analysis is to examine the state values in NSR-MCs.

3.2 Decomposing into Path-wise State Values

Seeing the Markov chain $M^\pi$ as a directed graph, a walk $w$ of length $|w|$ in this graph is a sequence of $|w|$ successive transitions through states $s^1, s^2, s^3, \dots, s^{|w|+1}$.¹ A path is a walk without repeated states, with the exception of the last state $s^{|w|+1}$, which can be either a visited or an unvisited one.

¹ Superscripts here refer to the timestamps on $w$ rather than the indices of specific states in $S$.

Figure 2: Illustration of walks and a representative path. "Forward" and "backward" transitions are drawn in thick and thin arrows, respectively, and $p_{i,j}$ denotes the transition probability from $s_i$ to $s_j$.

In an NSR-MC with unique non-zero reward $r_{j,k} = 1$, a state value $V^\pi(s_i) = b_{i,j}\,p_{j,k}$ can be expanded as a sum of the discounted occurrence probabilities of walks that start from $s_i$ and end with the transition $(s_j, \pi(s_j), s_k)$. Let $W_{i,j,k}$ denote the set of all possible walks $w$ satisfying $s^1 = s_i$, $s^{|w|} = s_j$ and $s^{|w|+1} = s_k$. Then we have
$$V(s_i) = \sum_{w \in W_{i,j,k}} \Big(\gamma^{|w|-1} \prod_{(s^t,\,s^{t+1}) \text{ on } w} p_{s^t,s^{t+1}}\Big).$$

Since $W_{i,j,k}$ is infinite, the walks in $W_{i,j,k}$ need to be put into finitely many groups for further analysis. Concretely, a step in a walk is considered "forward" if it arrives at a previously unvisited state, and "backward" if its destination has already been visited before that step. The latter also includes the cases where $s^{t+1} = s^t$, that is, the agent stays at the same state after the transition. The only exception to this classification is the last transition of a walk, which is always considered "forward", regardless of whether its destination has been visited or not. The start state $s^1$ and all such "forward" transitions of a walk $w$ form a representative path of $w$, denoted $\bar w$. This is illustrated in Figure 2. In this example, all walks from $s_1$ passing $s_2$ and ending with $s_3 \to s_4$, such as $(s_1 s_1 s_2 s_3 s_3 s_4)$, $(s_1 s_2 s_3 s_1 s_2 s_3 s_4)$ and $(s_1 s_2 s_3 s_2 s_3 s_2 s_3 s_4)$, are grouped with the representative path $(s_1 s_2 s_3 s_4)$. Note that the transition $s_1 \to s_3$ will not occur within this group; rather, it belongs to the groups that have $s_1 \to s_3$ in their representative paths.

As can be seen from Figure 2, all possible walks sharing one representative path $\bar w$ compose a chain which has the same transition probability values as the original Markov chain $M^\pi$, but with only two types of transitions: (forward) $s^i$ to $s^{i+1}$ ($i \le |\bar w|$); (backward) $s^i$ to $s^j$ ($j \le i \le |\bar w|$). We call this chain the derived chain of $\bar w$, denoted $M^\pi(\bar w)$, or simply $M(\bar w)$. The infinite sum then becomes
$$V(s) = \sum_{\bar w \in \bar W} V_{M(\bar w)}(s), \qquad (2)$$
where $\bar W$ is the set of all representative paths that start from $s$ and end with the unique 1-reward transition of the relevant NSR-MC.
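The decomposition of Section 3.1 can be checked numerically. The sketch below (our own illustration; the random chain is an arbitrary example) solves $V^\pi = (I-\gamma P^\pi)^{-1}(P^\pi \circ R^\pi)J$ and verifies Equation (1), i.e. that the value equals the reward-weighted sum of the NSR-MC values $b_{i,j}\,p_{j,k}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 4, 0.9
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)  # chain M^pi
R = rng.normal(size=(n, n))                                # rewards r_{j,k}

# V = (I - gamma P)^{-1} (P o R) J  (Section 3.1).
B = np.linalg.inv(np.eye(n) - gamma * P)
V = B @ (P * R).sum(axis=1)

# Equation (1): V(s_i) = sum_{j,k} r_{j,k} * (b_{i,j} p_{j,k}),
# where b_{i,j} p_{j,k} is the NSR-MC value V_{M_{j,k}}(s_i).
V_nsr = np.einsum("jk,ij,jk->i", R, B, P)
assert np.allclose(V, V_nsr)
```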
Such $V_{M(\bar w)}(s)$ are called path-wise state values of $M$. Since the main concern of this paper is the skewness of $\hat V$, we do not provide a constructive method for obtaining all $M^\pi(\bar w)$. Rather, we point out that the size of $\bar W$ is at most $|S|!$, and thus an estimated value $\hat V$ in an NSR-MC can be decomposed into finitely many estimated path-wise state values.

3.3 Log-normality of Estimated Path-wise State Values

Strictly speaking, the derived chain $M(\bar w)$ of a representative path $\bar w$ is not necessarily a Markov chain, because only part of the transitions of the original Markov chain $M^\pi$ is included, allowing the possibility that $\sum_{j=1}^{i+1} p_{s^i,s^j} < 1$. However, this does not make the path-wise state values violate the Bellman equation, and thus they can be treated as regular state values.

Since a representative path $\bar w$ has no repeated states (except for $s^{|\bar w|+1}$, which can either be a new state or the same as some $s^k$), the superscripts can be treated as the indices of states for convenience. Therefore, the path-wise state value $V_{M(\bar w)}(s^i)$ is denoted $V_i$, and $p_{i,j}$ refers to $p_{s^i,s^j}$ in this section. Given $\bar w$, the most important path-wise value is $V_1$, which belongs to the start point of $\bar w$.

Definition 3.1. Given a derived chain $M(\bar w)$ and discount factor $\gamma$, let $p_{i,j}$ be the transition probability from $s^i$ to $s^j$ on $M(\bar w)$. The joint dynamic effect of $M(\bar w)$ for $i \le |\bar w|$ is recursively defined as
$$D_i = \frac{\gamma\, p_{i,i+1}}{1 - \gamma\Big(p_{i,i} + \sum_{j=1}^{i-1} p_{i,j} \prod_{k=j}^{i-1} D_k\Big)}.$$

Lemma 3.2. For all $i < |\bar w|$, the path-wise state values satisfy $V_i = D_i V_{i+1}$.

Proof. By the Bellman equation, it holds that $V_i = \sum_{j=1}^{|\bar w|+1} p_{i,j}(r_{i,j} + \gamma V_j)$. By the definition of $M(\bar w)$ we have $p_{i,j} = 0$ for $j > i+1$ and $r_{i,j} = 0$ for $(i,j) \ne (|\bar w|, |\bar w|+1)$. Thus $V_i = \gamma \sum_{j=1}^{i+1} p_{i,j} V_j$ for $i < |\bar w|$. When $i = 1$, this becomes $V_1 = \gamma(p_{1,1}V_1 + p_{1,2}V_2) = \frac{\gamma p_{1,2}}{1-\gamma p_{1,1}}\, V_2 = D_1 V_2$. Suppose $V_i = D_i V_{i+1}$ holds for all $i \le k < |\bar w|-1$. Then $V_i = \big(\prod_{j=i}^{k} D_j\big) V_{k+1}$ for $i \le k$, and therefore
$$V_{k+1} = \gamma \sum_{j=1}^{k+2} p_{k+1,j} V_j = \gamma\Big[\sum_{j=1}^{k+1} p_{k+1,j}\Big(\prod_{l=j}^{k} D_l\Big)V_{k+1} + p_{k+1,k+2}V_{k+2}\Big] = \frac{\gamma\, p_{k+1,k+2}}{1-\gamma\Big(p_{k+1,k+1} + \sum_{j=1}^{k} p_{k+1,j}\prod_{l=j}^{k} D_l\Big)}\,V_{k+2} = D_{k+1}V_{k+2}.$$
Thus, by the principle of induction, $V_i = D_i V_{i+1}$ holds for all $i < |\bar w|$.

Lemma 3.3. For all $i \le |\bar w|$, $V_i = \frac{1}{\gamma}\prod_{j=i}^{|\bar w|} D_j$. In particular, $V_1 = \frac{1}{\gamma}\prod_{j=1}^{|\bar w|} D_j$.

Proof. By the definition of $\bar w$, there are two possible cases for the last step from $s^{|\bar w|}$ to $s^{|\bar w|+1}$: (I) $s^{|\bar w|+1} \notin \{s^1,\dots,s^{|\bar w|}\}$; (II) there exists $k \le |\bar w|$ such that $s^{|\bar w|+1} = s^k$.

(Case I) There is no transition starting from $s^{|\bar w|+1}$ in this case, thus $V_{|\bar w|+1} = 0$. Therefore
$$V_{|\bar w|} = p_{|\bar w|,|\bar w|+1}\big(r_{|\bar w|,|\bar w|+1} + \gamma V_{|\bar w|+1}\big) + \gamma \sum_{j=1}^{|\bar w|} p_{|\bar w|,j} V_j = p_{|\bar w|,|\bar w|+1} + \gamma \sum_{j=1}^{|\bar w|} p_{|\bar w|,j} V_j,$$
and solving for $V_{|\bar w|}$ gives
$$V_{|\bar w|} = \frac{p_{|\bar w|,|\bar w|+1}}{1-\gamma\Big(p_{|\bar w|,|\bar w|} + \sum_{j=1}^{|\bar w|-1} p_{|\bar w|,j}\prod_{k=j}^{|\bar w|-1} D_k\Big)} = \frac{1}{\gamma}\, D_{|\bar w|}.$$
Thus $V_i = \big(\prod_{j=i}^{|\bar w|-1} D_j\big) V_{|\bar w|} = \frac{1}{\gamma}\prod_{j=i}^{|\bar w|} D_j$.

(Case II, with $s^{|\bar w|+1} = s^k$) In this case $V_{|\bar w|+1} = V_k$ and $p_{|\bar w|,|\bar w|+1} = p_{|\bar w|,k}$, thus
$$V_{|\bar w|} = p_{|\bar w|,|\bar w|+1}\big(r_{|\bar w|,|\bar w|+1} + \gamma V_k\big) + \gamma\sum_{j=1,\,j\ne k}^{|\bar w|} p_{|\bar w|,j} V_j = p_{|\bar w|,|\bar w|+1} + \gamma\sum_{j=1}^{|\bar w|} p_{|\bar w|,j} V_j,$$
which is the same expression as in the first case, and therefore $V_i = \frac{1}{\gamma}\prod_{j=i}^{|\bar w|} D_j$ also holds for this case.

In both of the two cases above, $V_1$ is the product of $D_1, D_2, \dots, D_{|\bar w|}$ given by Definition 3.1 and an additional factor $\frac{1}{\gamma}$. Thus we have $\ln(V_1) = -\ln(\gamma) + \sum_{j=1}^{|\bar w|}\ln(D_j)$. By replacing all $p_{i,j}$ in Definition 3.1 with the estimated transitions $\hat p_{i,j}$, we obtain the "estimated" joint dynamic effects $\hat D$.² The equation above then becomes $\ln(\hat V_1) = -\ln(\gamma) + \sum_{j=1}^{|\bar w|}\ln(\hat D_j)$.

² Such "estimation" is not done explicitly in actual algorithms, but implicitly when the Bellman equation is used.
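Definition 3.1 and Lemma 3.3 translate directly into code. The following sketch is our own illustration (the example chain is an arbitrary $M_0$-style derived chain, although the loop also handles backward transitions): it computes the joint dynamic effects $D_i$ and the path-wise value $V_1 = \frac{1}{\gamma}\prod_j D_j$, and checks the result against a direct linear solve of the Bellman equation.

```python
import numpy as np

def joint_dynamic_effects(p, gamma):
    """D_i of Definition 3.1 for a derived chain; p[i, j] is the
    transition probability from s^{i+1} to s^{j+1} (0-indexed)."""
    L = p.shape[0] - 1                      # |w_bar| forward steps
    D = np.zeros(L)
    for i in range(L):
        back = p[i, i] + sum(p[i, j] * np.prod(D[j:i]) for j in range(i))
        D[i] = gamma * p[i, i + 1] / (1.0 - gamma * back)
    return D

def path_wise_value(p, gamma):
    """V_1 = (1/gamma) * prod_j D_j (Lemma 3.3)."""
    return np.prod(joint_dynamic_effects(p, gamma)) / gamma

# Check against a direct Bellman solve on a derived chain whose single
# reward sits on the last forward transition.
gamma = 0.9
p = np.array([[0.3, 0.7, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.2, 0.8],
              [0.0, 0.0, 0.0, 0.0]])       # rows need not sum to 1
R = np.zeros((4, 4)); R[2, 3] = 1.0        # unique 1-reward transition
V_direct = np.linalg.solve(np.eye(4) - gamma * p, (p * R).sum(axis=1))
assert np.isclose(path_wise_value(p, gamma), V_direct[0])
```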
Assuming the $\hat D_i$'s to be independent random variables, it can be shown by the central limit theorem that, as $|\bar w|$ grows, $\ln(\hat V_1)$ tends to a normal distribution, and therefore $\hat V_1$ approximates a log-normal distribution.

The "estimated" joint dynamic effects $\hat D$ are actually mutually dependent in most cases, so a rigorous analysis of the log-normality is more complicated. The main idea is to first prove that all $\hat D_i \le \gamma$, and then show that the summation involving the terms $\hat p_{i,j}\prod_{k=j}^{i-1}\hat D_k$ in Definition 3.1 diminishes quickly with the size of $\bar w$, which indicates that $\hat D_i$ is mostly decided by $\hat p_{i,i}$ and $\hat p_{i,i+1}$ and thus that the dependency between any two $\hat D$'s is relatively weak. As the focus here is the skewness of $\hat V_1$, such analysis is skipped, and we proceed to study the parameters of the log-normal distribution of $\hat V_1$.

Since $\hat p_{i,i}$ and $\hat p_{i,i+1}$ are the main factors that decide $\hat D_i$, we provide the result for the most representative case, where $p_{i,i} + p_{i,i+1} = 1$ and all other $p_{i,j}$ are 0 for $i < |\bar w|$. Such an $M(\bar w)$ is denoted $M_0(\bar w)$ in the following text. It is easy to see that all $\hat D_i$ are mutually independent in such chains. The delta method [21, 22] below is used to obtain the expressions of the parameters.

Lemma 3.4 (Delta method [21, 22]). Suppose $X$ is a random variable with finite moments, $\mathbb{E}[X]$ being its mean and $\mathrm{Var}[X]$ its variance. Suppose $f$ is a sufficiently differentiable function. Then it holds that $\mathbb{E}[f(X)] \approx f(\mathbb{E}[X])$ and $\mathrm{Var}[f(X)] \approx f'(\mathbb{E}[X])^2\,\mathrm{Var}[X]$.

Lemma 3.5. Let $\hat D_j$ be $D_j$ with all $p$ replaced by $\hat p$, and let $N_i$ denote the number of visits to the chain state $s^i$ in a learning trajectory. In $M_0(\bar w)$ derived chains it holds that
$$\mathbb{E}[\hat D_j] \approx \frac{\gamma\,p_{j,j+1}}{1-\gamma\,p_{j,j}}, \qquad \mathrm{Var}[\hat D_j] \approx \frac{\gamma^2(1-\gamma)^2}{(1-\gamma\,p_{j,j})^4}\cdot\frac{p_{j,j}\,p_{j,j+1}}{N_j}.$$

Proof. It holds that $\mathrm{Var}[\hat p_{j,j+1}] = \big(\tfrac{1}{N_j}\big)^2 \cdot N_j\,p_{j,j}\,p_{j,j+1} = \frac{p_{j,j}\,p_{j,j+1}}{N_j}$; the result then follows by applying Lemma 3.4 to Definition 3.1.

Lemma 3.6. In $M_0(\bar w)$ derived chains it holds that
$$\mathbb{E}[\hat V_1] = \frac{1}{\gamma}\prod_{j=1}^{|\bar w|} \mathbb{E}[\hat D_j], \qquad \mathrm{Var}[\hat V_1] \approx \frac{1}{\gamma^2}\Bigg(\prod_{j=1}^{|\bar w|}\big(\mathrm{Var}[\hat D_j] + \mathbb{E}[\hat D_j]^2\big) - \prod_{j=1}^{|\bar w|}\mathbb{E}[\hat D_j]^2\Bigg).$$

Proof. For independent $X_1, X_2, \dots, X_n$ it holds that $\mathrm{Var}[X_1\cdots X_n] = \prod_{j=1}^n\big(\mathrm{Var}[X_j]+\mathbb{E}[X_j]^2\big) - \prod_{j=1}^n \mathbb{E}[X_j]^2$. Since all $\hat D$ are independent in $M_0(\bar w)$, the above results follow by applying this identity and Lemma 3.4 to Lemma 3.3.

Theorem 3.7. In $M_0(\bar w)$ with sufficiently large $|\bar w|$, $\hat V_1$ approximately follows $\mathrm{lnN}(\mu,\sigma^2)$ with
$$\mu = \ln\!\left(\frac{\mathbb{E}[\hat V_1]^2}{\sqrt{\mathbb{E}[\hat V_1]^2 + \mathrm{Var}[\hat V_1]}}\right), \qquad \sigma^2 = \ln\!\left(1 + \frac{\mathrm{Var}[\hat V_1]}{\mathbb{E}[\hat V_1]^2}\right),$$
where $\mathbb{E}[\hat V_1]$ and $\mathrm{Var}[\hat V_1]$ are given by Lemma 3.6.

Proof. Apply the equations for the parameters of a log-normal distribution (see Section 2) to $\hat V_1$.
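Lemmas 3.5 and 3.6 and Theorem 3.7 combine into a short pipeline from the chain's forward probabilities and visit counts to the log-normal parameters of $\hat V_1$. The sketch below is our own illustration of that pipeline under the $M_0(\bar w)$ assumptions; the example inputs are arbitrary.

```python
import numpy as np

def v1_lognormal_params(p_forward, N, gamma):
    """(mu, sigma) of the approximate log-normal of V1_hat in an
    M_0(w_bar) derived chain; p_forward[j] = p_{j,j+1}, N[j] = N_j."""
    p_stay = 1.0 - p_forward
    ED = gamma * p_forward / (1.0 - gamma * p_stay)              # Lemma 3.5
    VarD = (gamma ** 2 * (1.0 - gamma) ** 2 / (1.0 - gamma * p_stay) ** 4
            * p_stay * p_forward / N)
    EV = np.prod(ED) / gamma                                     # Lemma 3.6
    VarV = (np.prod(VarD + ED ** 2) - np.prod(ED ** 2)) / gamma ** 2
    mu = np.log(EV ** 2 / np.sqrt(EV ** 2 + VarV))               # Theorem 3.7
    sigma = np.sqrt(np.log(1.0 + VarV / EV ** 2))
    return mu, sigma

# E.g. a 20-state chain, forward probability 0.1, 200 visits per state.
mu, sigma = v1_lognormal_params(np.full(19, 0.1), np.full(19, 200), 0.9)
print(f"mu = {mu:.3f}, sigma = {sigma:.3f}")
```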
4 Skewness of Estimated State Values, and Countermeasures

This section interprets the results presented in Section 3 in terms of skewness, and discusses how to reduce the undesirable effects of skewness. The skewness is mainly decided by two factors: (a) the parameter $\sigma$ of the log-normal distributions; (b) non-zero immediate rewards.

4.1 Impact of Parameter $\sigma$ of Log-normal Distributions

A regular log-normal distribution $\mathrm{lnN}(\mu,\sigma^2)$ has a positive skewness, which means a value sampled from such a distribution has probability greater than 0.5 of being less than its expected value, resulting in a higher likelihood of underestimation. Precisely, if $X \sim \mathrm{lnN}(\mu,\sigma^2)$, then $\mathbb{E}[X] = \exp(\mu + \sigma^2/2)$ and $\mathrm{median}[X] = \exp(\mu)$, thus the Pearson 2 coefficient of $X$ is greater than 0. Additionally, since $\mathrm{lnN}(\mu,\sigma^2)$ has $\mathrm{CDF}(x) = 0.5\big(1 + \mathrm{erf}\big(\frac{\ln(x)-\mu}{\sqrt{2}\,\sigma}\big)\big)$, where $\mathrm{erf}(x)$ is the Gauss error function, our indicator $\mathrm{CDF}(\mathbb{E}[X]) - 0.5$ equals $0.5\,\mathrm{erf}(\sigma/\sqrt{8})$. This indicates that $\sigma$ has a stronger impact than $\mu$ on the scale of the skewness in log-normal distributions.

Combining Lemma 3.6 and Theorem 3.7 shows that $\sigma$ is decided by a complicated interaction between all observed dynamic effects $\hat D_j$. By Lemma 3.5, the transition probabilities $p_{j,\cdot}$ completely decide $\mathbb{E}[\hat D_j]$ and have a substantial impact on $\mathrm{Var}[\hat D_j]$.

This indicates that the main cause of the skewness is the transition dynamics of MDPs rather than the learning algorithms. As an extreme case, if the forward transition of a state-action pair is deterministic (i.e. $p_{j,j+1} = 1$), then its $\mathrm{Var}[\hat D_j] = 0$, contributing nothing to the skewness. If an estimated value consists of a large portion of such transitions, then the likelihoods of overestimation and underestimation are both very low. On the other hand, if a backward transition probability $p_{j,j}$ (or any $p_{j,k}$ with $k \le j$) is close to 1, then $\mathrm{Var}[\hat D_j]$ increases dramatically, resulting in a noticeable skewness. Real-world problems can be a mix of these two extremes, which leads to a great variety of skewness among different actions/policies, making learning significantly more difficult.

By Lemma 3.5, $\sigma$ also depends on the number of observations $N_j$. As $N_j$ grows to infinity, $\mathrm{Var}[\hat D_j]$ slowly decreases to 0, which reduces $\mathrm{Var}[\hat V_1]$ in Lemma 3.6 and eventually drives $\sigma$ to 0. This indicates that running algorithms for more steps does help reduce the skewness of estimated values and improve the overall performance. However, the expression of $\mathrm{Var}[\hat D_j]$ in Lemma 3.5 also indicates that the degree of improvement diminishes quickly as $N_j$ grows. Therefore, collecting more observations is not always an efficient way to reduce the skewness.

Figure 3: (a) Log-normals weighted by positive reward (red) and negative reward (blue). Thick/thin vertical lines are means & medians. (b, c) Convolution of two log-normals, given by the purple curve.

Figure 4: A chain MDP with $n$ states, forward probability $p$, goal reward $r_G$ and distraction reward $r_D$. Transitions under action $a^+$ are drawn in solid arrows, and under $a^-$ in dotted arrows.
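The indicator formula of Section 4.1 is easy to evaluate. A small sketch (our own illustration; the $\sigma$ values are arbitrary) computing $\mathrm{CDF}(\mathbb{E}[X]) = 0.5 + 0.5\,\mathrm{erf}(\sigma/\sqrt{8})$, i.e. the probability that a sampled estimate falls below its own expectation:

```python
import math

def underestimation_probability(sigma):
    """CDF(E[X]) = 0.5 + 0.5 * erf(sigma / sqrt(8)) for X ~ lnN(mu, sigma^2)."""
    return 0.5 + 0.5 * math.erf(sigma / math.sqrt(8.0))

for s in (0.1, 0.5, 1.0, 2.0):  # arbitrary illustrative scales
    print(f"sigma = {s:3.1f}:  Pr(underestimate) = "
          f"{underestimation_probability(s):.3f}")
```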
4.2 Impact of Non-zero Immediate Rewards

Non-zero immediate rewards decide not only the scale of the skewness, but also its direction. By Equations 1 and 2 in Sections 3.1 and 3.2, path-wise values are weighted by their corresponding immediate rewards before being summed into state values. If a path-wise state value is weighted by a positive reward, then the resulting distribution is still a regular log-normal, which has a positive skewness and thus a higher likelihood of underestimation. However, if it is weighted by a negative reward, then the result is a flipped log-normal, which has a negative skewness and thus a higher likelihood of overestimation. This is illustrated in Figure 3 (a), where the red and blue distributions correspond to estimated path-wise values weighted by a positive and a negative reward, respectively.

In general, the sum of positively skewed random variables is not necessarily a positively skewed random variable. However, the sum of regular log-normal random variables can be approximated by another log-normal [23], and is thus still positively skewed. Since path-wise state values are approximately log-normal, it is clear that if an MDP has only positive immediate rewards, then all estimated values are likely to be positively skewed and thus more likely to be underestimated. On the other hand, if an estimated value is composed of both positive and negative rewards, then the skewness of the regular and flipped log-normal distributions may be partly neutralised in their convolution. The purple distribution in Figure 3 (b) shows the convolution of two skewed distributions that lie symmetrically about $x = 0$. The skewness is perfectly neutralised in this case, resulting in a symmetric distribution with a balanced likelihood of under/overestimation. In the case of Figure 3 (c), the convolution is still skewed, but to a lesser degree than the original distributions.

To make learning easier, one might hope to design the reward function such that the more desirable actions/policies have both higher expected returns and higher likelihood of overestimation than the less desirable ones. However, the former requires more positive rewards, while the latter calls for more negative rewards, creating an unsolvable dilemma. Therefore, it is more realistic simply to balance the likelihood of under/overestimation, so that all actions/policies can compete fairly with each other. Reward shaping [24, 25] can be a promising choice for achieving this goal, as it preserves the optimality of policies. Since a better balance of positive and negative rewards directly reduces the impact of the skewness of all relevant log-normal distributions, this approach might be more effective than simply collecting more observations.
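The neutralisation effect of Figure 3 (b) can be reproduced with a quick Monte-Carlo sketch (our own illustration; the parameters are arbitrary): summing a path-wise value weighted by a positive reward with one weighted by a matching negative reward drives the Pearson 2 coefficient towards zero.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 0.0, 0.8, 200_000   # arbitrary illustrative parameters

pos = rng.lognormal(mu, sigma, n)    # path-wise value, positive weight
neg = -rng.lognormal(mu, sigma, n)   # flipped log-normal, negative weight

for name, x in [("positive", pos), ("negative", neg), ("sum", pos + neg)]:
    pearson2 = (x.mean() - np.median(x)) / x.std()
    print(f"{name:8s} Pearson 2 coefficient: {pearson2:+.3f}")
```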
5 Experiments

In this section, we present our empirical results on the skewness of estimated values. These experiments have two purposes: (a) to demonstrate how substantial the harm of the skewness can be; (b) to measure the improvement provided by collecting more observations, as mentioned in Section 4.1.

We conducted experiments in the chain MDPs shown in Figure 4. There are $n > 0$ states $s_1, s_2, \dots, s_n$ in a chain MDP. At each state, the agent has two possible actions, $a^+$ and $a^-$. By taking $a^+$ at $s_i$ with $i < n$, the agent has probability $p > 0$ of being sent to $s_{i+1}$, and $1-p$ of remaining at $s_i$. Taking $a^+$ at $s_n$ yields a goal reward $r_G > 0$, and the agent remains at $s_n$. Taking $a^-$, on the other hand, sends the agent from $s_i$ to $s_{i-1}$ ($i > 1$) or $s_1$ ($i = 1$) with probability 1, and if $a^-$ is taken at $s_1$, the agent receives a distraction reward $r_D > 0$. The objective of the learning agent is to discover a policy that leads it to the goal $s_n$ and collects $r_G$ as often as possible, rather than being distracted by $r_D$. There are two policies of interest: $\pi^+$, which always takes $a^+$, and $\pi^-$, which always takes $a^-$. All other policies can be proved to be always worse than $\pi^+$ and $\pi^-$ in terms of $V^\pi(s_1)$, regardless of $r_G$, $r_D$, $p$, and the discount factor $\gamma$.

Since using the max operator may introduce bias [10], we modified the default value iteration algorithm [4] so that it outputs unbiased estimated state values by following predetermined policies rather than using the max operator. In each run of the experiment, $m$ observations were collected for each state-action pair, resulting in a data set of size $2mn$. The observations were then passed to the modified value iteration algorithm to estimate the state values of $\pi^+$ and $\pi^-$ under discount factor $\gamma = 0.9$.

The Markov chains $M^{\pi^+}$ and $M^{\pi^-}$ here are both single-path ones, and thus the corresponding theoretical distributions of $\hat V$ can be computed directly by applying Theorem 3.7. Further, since the transition probabilities in $M^{\pi^-}$ are all 1, we have $\mathrm{Var}[\hat V^{\pi^-}] = 0$, and thus its estimated values always trivially equal the ground truth (i.e. they are never under/overestimated).

Figure 5: (a) Distribution of $\hat V^{\pi^+}(s_1)$ at $m = 200$. (b) Underestimation probability curve.

The empirical and theoretical distributions of the estimated state value $\hat V^{\pi^+}(s_1)$ with $m = 200$, $n = 20$, $p = 0.1$, $r_G = 10^6$ over 1000 runs are shown in Figure 5 (a). A one-sample Kolmogorov-Smirnov test was conducted against the null hypothesis that the empirical data came from the theoretical log-normal distribution. The resulting p-value was 0.1190, which fails to reject the null hypothesis at the 5% significance level, indicating no significant difference between the theoretical and sample distributions. More importantly, Figure 5 (a) shows a clear positive skewness, indicating a higher likelihood of underestimation. The empirical value of the indicator $\mathrm{CDF}(\mathbb{E}[\hat V]) - 0.5$ was $+0.103$, meaning that in 60.3% of the runs the state value was underestimated. This further indicates that, if the distraction reward $r_D$ is set to a value such that $V^{\pi^-}(s_1)$ is slightly less than $V^{\pi^+}(s_1)$, then the agent will wrongly select $\pi^-$ with probability close to 0.603, which is worse than a random guess.

To see whether collecting more observations helps reduce the skewness, the same experiments were conducted with the number of observations per state-action $m$ ranging from 20 to 400. Figure 5 (b) shows the theoretical and empirical probability of underestimation $\Pr\big(\hat V^{\pi^+}(s_1) < \mathbb{E}[\hat V^{\pi^+}(s_1)]\big)$. At $m = 20$, 200 and 400, the empirical underestimation probability was 0.741, 0.603 and 0.563, respectively. While from $m = 20$ to 200 there was a significant improvement of 0.138, an 18.6% relative improvement, from 200 to 400 it was only 0.040, or 6.6% relative. This result supports the analysis in Section 4.1, demonstrating that the merit of collecting more observations is most noticeable when the sample size is small, and diminishes quickly as the sample size grows.
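The chain-MDP experiment is straightforward to reproduce in outline. The sketch below is our own simplified re-implementation, not the authors' code: it evaluates $\pi^+$ by plugging empirical forward probabilities into the Bellman recursion (no max operator, matching the modified value iteration described above), treating the $m$ observations per state as binomial draws, and reports the empirical underestimation frequency $\Pr\big(\hat V^{\pi^+}(s_1) < \mathbb{E}[\hat V^{\pi^+}(s_1)]\big)$.

```python
import numpy as np

def eval_pi_plus(p_forward, rG, gamma):
    """V^{pi+}(s_1) in the Figure 4 chain, from exact or estimated
    per-state forward probabilities (Bellman recursion, no max)."""
    V = rG / (1.0 - gamma)          # s_n: reward rG, remain at s_n
    for q in p_forward[::-1]:       # V_i = gamma q V_{i+1} / (1 - gamma(1-q))
        V = gamma * q * V / (1.0 - gamma * (1.0 - q))
    return V

n, p, rG, gamma, m, runs = 20, 0.1, 1e6, 0.9, 200, 1000
rng = np.random.default_rng(2)
V_hats = np.array([eval_pi_plus(rng.binomial(m, p, n - 1) / m, rG, gamma)
                   for _ in range(runs)])
print("Pr(V_hat < mean(V_hat)) =", np.mean(V_hats < V_hats.mean()))
```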
We also conducted experiments in the complex maze domain [26] in the same manner as above. In this domain, the task of the agent is to find a policy that collects all flags and brings them to the goal as often as possible, without falling into any traps. The maze used is given in Figure 6 (a). A state in this domain is represented by the current position of the agent and the status of the three flags. The agent starts at the start point, indicated by S, with no flag.

Figure 6: (a) A complex maze. S, G, numbers, and circles stand for start, goal, flags, and traps, respectively. (b) Distribution of $\hat V^{\pi^\star}(s_{\mathrm{start}})$ at $m = 10$. (c) Underestimation probability curve.

At each time step, the agent can select one of four directions to move in. The agent is then sent to the adjacent grid in the chosen direction with probability 0.7, and to each of the other three directions with probability 0.1, unless the destination is blocked, in which case the agent remains at the current grid. Additionally, at the flag grids (numbers in Figure 6 (a)), taking an action also provides the corresponding flag to the agent if that flag has not yet been obtained. At the goal point (G), taking any action yields an immediate reward equal to 1, 100, $100^2$ or $100^3$ if the agent holds 0, 1, 2 or 3 flags, respectively. The agent is then sent back to the start point, and all three flags are reset to their initial positions. Finally, at any trap grid (circles), taking an action sends the agent to S and resets all flags without yielding a goal reward.

The complex maze in Figure 6 (a) has 440 states, 4 actions, 32 non-zero immediate rewards, and complicated transition patterns, and is thus difficult to analyse manually. However, it is noticeable that all non-zero immediate rewards are positive; thus, according to Section 4.2, estimated state values are likely to have positive skew, resulting in a greater likelihood of underestimation.

Figure 6 (b) shows the empirical distribution of the estimated value $\hat V^{\pi^\star}(s_{\mathrm{start,\,no\,flag}})$ under $\gamma = 0.9$ and $m = 10$ over 1000 runs. Although it is not a path-wise state value, the distribution is approximately log-normal with parameters $\mu \approx 8.21$, $\sigma \approx 0.480$. In 67.6% of these 1000 runs, the optimal state value at the start state was underestimated. The effect of collecting a larger sample is shown in Figure 6 (c). The probability of underestimation decreased from 0.676 at $m = 10$ to 0.597 at $m = 50$, 0.563 at $m = 100$, and 0.556 at $m = 200$. The data points approximate an exponential function $y = 0.1725\,\exp(-0.04015x) + 0.5546$, which suggests that it can be very difficult to push the underestimation probability below 0.55 by collecting more data in this domain.

6 Conclusion and Future Work

This paper has shown that estimated state values computed using the Bellman equation can be decomposed into the relevant path-wise state values, and that the latter obey log-normal distributions. Since log-normal distributions are skewed, the estimated state values also have skewed distributions, resulting in an imbalanced likelihood of under/overestimation, which can be harmful for learning. We have also pointed out that the direction of such imbalance is decided by the immediate reward associated with the log-normal distributions, and thus, by carefully balancing the impact of positive and negative rewards when designing MDPs, such undesirable imbalance can possibly be neutralised. Collecting more observations, on the other hand, helps reduce the skewness to a degree, but this effect becomes less significant once the sample size is already large.
It would be interesting to see how the skewness studied in this paper interacts with function approximation (e.g. neural networks [27, 28]), policy gradient [29, 30], or Monte-Carlo tree search [31, 32]. A reasonable guess is that these techniques introduce skewness of their own, and that the two kinds of skewness amplify each other, making learning even more difficult. On the other hand, reducing the skewness discussed in this paper may improve learning performance even when such techniques are used. Therefore, developing a concrete method of balancing positive and negative rewards (as discussed in Section 4.2) could be very helpful, and will be investigated in the future.

Acknowledgements

This paper was supported by the Ministry of Science and Technology of China (Grant No. 2017YFB1003102), the National Natural Science Foundation of China (Grant Nos. 61672478 and 61329302), the Science and Technology Innovation Committee Foundation of Shenzhen (Grant No. ZDSYS201703031748284), EPSRC (Grant No. J017515/1), and in part by the Royal Society Newton Advanced Fellowship (Reference No. NA150123).

References

[1] Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998.
[2] Csaba Szepesvári. Algorithms for reinforcement learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 4(1):1-103, 2010.
[3] Christopher Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279-292, 1992.
[4] Martin Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience, 1994.
[5] Peter Dayan. The convergence of TD(λ) for general λ. Machine Learning, 8(3-4):341-362, 1992.
[6] John N. Tsitsiklis. Asynchronous stochastic approximation and Q-learning. Machine Learning, 16(3):185-202, 1994.
[7] Michael L. Littman, Thomas L. Dean, and Leslie P. Kaelbling. On the complexity of solving Markov decision problems. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 394-402. Morgan Kaufmann Publishers Inc., 1995.
[8] Csaba Szepesvári. The asymptotic convergence-rate of Q-learning. In Proceedings of the 10th International Conference on Neural Information Processing Systems, pages 1064-1070. MIT Press, 1997.
[9] Sebastian Thrun and Anton Schwartz. Issues in using function approximation for reinforcement learning. In Proceedings of the 1993 Connectionist Models Summer School, Hillsdale, NJ. Lawrence Erlbaum, 1993.
[10] Hado van Hasselt. Double Q-learning. In Advances in Neural Information Processing Systems, pages 2613-2621, 2010.
[11] Marc G. Bellemare, Georg Ostrovski, Arthur Guez, Philip S. Thomas, and Rémi Munos. Increasing the action gap: New operators for reinforcement learning. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, pages 1476-1483, 2016.
[12] Donghun Lee, Boris Defourny, and Warren B. Powell. Bias-corrected Q-learning to control max-operator bias in Q-learning. In Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), 2013 IEEE Symposium on, pages 93-99. IEEE, 2013.
[13] Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, pages 2094-2100, 2016.
[14] Carlo D'Eramo, Alessandro Nuara, Matteo Pirotta, and Marcello Restelli. Estimating the maximum expected value in continuous reinforcement learning problems. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 1840-1846, 2017.
[15] Dimitri P. Bertsekas and Huizhen Yu.
Q-learning and enhanced policy iteration in discounted dynamic programming. Mathematics of Operations Research, 37(1):66-94, 2012.
[16] Paul Wagner. Policy oscillation is overshooting. Neural Networks, 52:43-61, 2014.
[17] Nan Jiang, Alex Kulesza, Satinder Singh, and Richard Lewis. The dependence of effective planning horizon on model accuracy. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages 1181-1189. International Foundation for Autonomous Agents and Multiagent Systems, 2015.
[18] Harm van Seijen, A. Rupam Mahmood, Patrick M. Pilarski, Marlos C. Machado, and Richard S. Sutton. True online temporal-difference learning. Journal of Machine Learning Research, 17(145):1-40, 2016.
[19] David P. Doane and Lori E. Seward. Measuring skewness: a forgotten statistic. Journal of Statistics Education, 19(2):1-18, 2011.
[20] Harold Hotelling and Leonard M. Solomons. The limits of a measure of skewness. The Annals of Mathematical Statistics, 3(2):141-142, 1932.
[21] Gary W. Oehlert. A note on the delta method. The American Statistician, 46(1):27-29, 1992.
[22] George Casella and Roger L. Berger. Statistical Inference. 2nd edition, 2002.
[23] Norman C. Beaulieu and Qiong Xie. An optimal lognormal approximation to lognormal sum distributions. IEEE Transactions on Vehicular Technology, 53(2):479-489, 2004.
[24] Andrew Y. Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, volume 99, pages 278-287, 1999.
[25] John Asmuth, Michael L. Littman, and Robert Zinkov. Potential-based shaping in model-based reinforcement learning. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence, pages 604-609, 2008.
[26] Liangpeng Zhang, Ke Tang, and Xin Yao. Increasingly cautious optimism for practical PAC-MDP exploration. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages 4033-4040, 2015.
[27] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[28] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, pages 1928-1937, 2016.
[29] Sham Kakade. A natural policy gradient. Advances in Neural Information Processing Systems, 2:1531-1538, 2002.
[30] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning, pages 1889-1897, 2015.
[31] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In European Conference on Machine Learning, 2006.
[32] Cameron B. Browne, Edward Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1-43, 2012.
Repeated Inverse Reinforcement Learning*

Kareem Amin†
Google Research
New York, NY 10011
[email protected]

Nan Jiang†, Satinder Singh
Computer Science & Engineering, University of Michigan, Ann Arbor, MI 48104
{nanjiang,baveja}@umich.edu

Abstract

We introduce a novel repeated inverse reinforcement learning problem: the agent has to act on behalf of a human in a sequence of tasks and wishes to minimize the number of tasks in which it surprises the human by acting suboptimally with respect to how the human would have acted. Each time the human is surprised, the agent is provided a demonstration of the desired behavior by the human. We formalize this problem, including how the sequence of tasks is chosen, in a few different ways and provide some foundational results.

1 Introduction

One challenge in building AI agents that learn from experience is how to set their goals or rewards. In the reinforcement learning (RL) setting, one interesting answer to this question is inverse RL (IRL), in which the agent infers the rewards of a human by observing the human's policy in a task [2]. Unfortunately, the IRL problem is ill-posed, for there are typically many reward functions for which the observed behavior is optimal in a single task [3]. While the use of heuristics to select from among the set of feasible reward functions has led to successful applications of IRL to the problem of learning from demonstration [e.g., 4], failing to identify the reward function poses fundamental challenges to the question of how well, and how safely, the agent will perform when using the learned reward function in other tasks.

We formalize multiple variations of a new repeated IRL problem in which the agent and (the same) human face multiple tasks over time. We separate the reward function into two components: one that is invariant across tasks and can be viewed as intrinsic to the human, and a second that is task specific. As a motivating example, consider a human doing tasks throughout a work day, e.g., getting coffee, driving to work, interacting with co-workers, and so on. Each of these tasks has a task-specific goal, but the human brings to each task intrinsic goals that correspond to maintaining health, financial well-being, not violating moral and legal principles, etc. In our repeated IRL setting, the agent presents a policy for each new task that it thinks the human would follow. If the agent's policy "surprises" the human by being sub-optimal, the human presents the agent with the optimal policy. The objective of the agent is to minimize the number of surprises to the human, i.e., to generalize the human's behavior to new tasks.

In addition to addressing generalization across tasks, the repeated IRL problem we introduce, and our results, are of interest in resolving the unidentifiability of rewards from observations in standard IRL. Our results are also of interest to a particular aspect of the concern about how to make sure that the AI systems we build are safe, or AI safety. Specifically, the issue of reward misspecification is often mentioned in AI safety articles [e.g., 5, 6, 7]. These articles mostly discuss broad ethical concerns and possible research directions, while our paper develops mathematical formulations and algorithmic solutions to a specific way of addressing reward misspecification.

* This paper extends an unpublished arXiv paper by the authors [1].
† Equal contribution.
In summary form, our contributions include: (1) an efficient reward-identification algorithm when the agent can choose the tasks in which it observes human behavior; (2) an upper bound on the total number of surprises when no assumptions are made on the tasks, along with a corresponding lower bound; (3) an extension to the setting where the human provides sample trajectories instead of complete behavior; and (4) identification guarantees when the agent can only choose the task rewards but is given a fixed task environment.

2 Markov Decision Processes (MDPs)

An MDP is specified by its state space $S$, action space $A$, initial state distribution $\mu \in \Delta(S)$, transition function (or dynamics) $P: S \times A \to \Delta(S)$, reward function $Y: S \to \mathbb{R}$, and discount factor $\gamma \in [0,1)$. We assume finite $S$ and $A$, and $\Delta(S)$ is the space of all distributions over $S$. A policy $\pi: S \to A$ describes an agent's behavior by specifying the action to take in each state. The (normalized) value function or long-term utility of $\pi$ is defined as $V^\pi(s) = (1-\gamma)\,\mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1} Y(s_t) \,\big|\, s_0 = s;\pi\big]$.² Similarly, the Q-value function is $Q^\pi(s,a) = (1-\gamma)\,\mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1} Y(s_t) \,\big|\, s_0 = s, a_0 = a;\pi\big]$. Where necessary we will use the notation $V^\pi_{P,Y}$ to avoid ambiguity about the dynamics and the reward function. Let $\pi^\star: S \to A$ be an optimal policy, which maximizes $V^\pi$ and $Q^\pi$ in all states (and actions) simultaneously. Given an initial distribution over states, $\mu$, a scalar value that measures the goodness of $\pi$ is defined as $\mathbb{E}_{s\sim\mu}[V^\pi(s)]$.

² Here we differ (w.l.o.g.) from the common IRL literature in assuming that reward occurs after transition.

We introduce some further notation to express $\mathbb{E}_{s\sim\mu}[V^\pi(s)]$ in vector-matrix form. Let $\eta^\pi_{\mu,P} \in \mathbb{R}^{|S|}$ be the normalized state occupancy under initial distribution $\mu$, dynamics $P$, and policy $\pi$, whose $s$-th entry is $(1-\gamma)\,\mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1}\,\mathbb{I}(s_t = s) \,\big|\, s_0 \sim \mu;\pi\big]$ ($\mathbb{I}(\cdot)$ is the indicator function). This vector can be computed in closed form as $\eta^{\pi\,\top}_{\mu,P} = (1-\gamma)\,\mu^\top P^\pi \big(I_{|S|} - \gamma P^\pi\big)^{-1}$, where $P^\pi$ is the $|S| \times |S|$ matrix whose $(s,s')$-th element is $P(s'|s,\pi(s))$, and $I_{|S|}$ is the $|S| \times |S|$ identity matrix. For convenience we will also treat the reward function $Y$ as a vector in $\mathbb{R}^{|S|}$, and we have
$$\mathbb{E}_{s\sim\mu}[V^\pi(s)] = Y^\top \eta^\pi_{\mu,P}. \qquad (1)$$
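Equation (1) and the closed form for $\eta^\pi_{\mu,P}$ can be verified numerically. The sketch below is our own illustration under the normalized, reward-after-transition convention above; the random chain is an arbitrary example.

```python
import numpy as np

def occupancy(mu, P_pi, gamma):
    """eta^pi_{mu,P}, via eta^T = (1-gamma) mu^T P_pi (I - gamma P_pi)^{-1}."""
    n = len(mu)
    return (1.0 - gamma) * np.linalg.solve(
        (np.eye(n) - gamma * P_pi).T, P_pi.T @ mu)

rng = np.random.default_rng(3)
n, gamma = 5, 0.9
P_pi = rng.random((n, n)); P_pi /= P_pi.sum(axis=1, keepdims=True)
mu = np.full(n, 1.0 / n)
Y = rng.normal(size=n)

eta = occupancy(mu, P_pi, gamma)
assert np.isclose(eta.sum(), 1.0)   # normalized occupancy sums to 1
# V^pi = (1-gamma) P_pi (I - gamma P_pi)^{-1} Y (reward after transition),
# and Equation (1): E_{s~mu}[V^pi(s)] = Y^T eta.
V = (1.0 - gamma) * P_pi @ np.linalg.solve(np.eye(n) - gamma * P_pi, Y)
assert np.isclose(Y @ eta, mu @ V)
```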
3 Problem setup

Here we define the repeated IRL problem. The human's reward function $\theta^\star$ captures his/her safety concerns and intrinsic/general preferences. This $\theta^\star$ is unknown to the agent and is the object of interest herein; if $\theta^\star$ were known to the agent, the concerns addressed in this paper would be solved. We assume that the human cannot directly communicate $\theta^\star$ to the agent but can evaluate the agent's behavior in a task as well as demonstrate optimal behavior. Each task comes with an external reward function $R$, and the goal is to maximize the reward with respect to $Y := \theta^\star + R$ in each task.

As a concrete example, consider an agent for an autonomous vehicle. In this case, $\theta^\star$ represents the cross-task principles that define good driving (e.g., courtesy towards pedestrians and other vehicles), which are often difficult to describe explicitly. In contrast, $R$, the task-specific reward, could reward the agent for successfully completing parallel parking. While $R$ is easier to construct, it may not completely capture what a human deems good driving. (For example, an agent might successfully parallel park while still boxing in neighboring vehicles.)

More formally, a task is defined by a pair $(E, R)$, where $E = (S, A, \mu, P, \gamma)$ is the task environment (i.e., a controlled Markov process) and $R$ is the task-specific reward function (task reward). We assume that all tasks share the same $S$, $A$, $\gamma$, with $|A| \ge 2$, but may differ in the initial distribution $\mu$, dynamics $P$, and task reward $R$; all of the task-specifying quantities are known to the agent. In any task, the human's optimal behavior is always with respect to the reward function $Y = \theta^\star + R$. We emphasize again that $\theta^\star$ is intrinsic to the human and remains the same across all tasks. Our use of task-specific reward functions $R$ allows for greater generality than the usual IRL setting, and most of our results apply equally to the case where $R \equiv 0$. While $\theta^\star$ is private to the human, the agent has some prior knowledge of $\theta^\star$, represented as a set of possible parameters $\Theta_0 \subseteq \mathbb{R}^{|S|}$ that contains $\theta^\star$. Throughout, we assume that the human's reward has bounded and normalized magnitude, that is, $\|\theta^\star\|_\infty \le 1$.

A demonstration in $(E, R)$ reveals $\pi^\star$, optimal for $Y = \theta^\star + R$ under environment $E$, to the agent. A common assumption in the IRL literature is that the full mapping is revealed, which can be unrealistic if some states are unreachable from the initial distribution. We address this issue by requiring only the state occupancy vector $\eta^{\pi^\star}_{\mu,P}$. In Section 7 we show that this also allows an easy extension to the setting where the human only demonstrates trajectories instead of providing a policy.

Under the above framework for repeated IRL, we consider two settings that differ in how the sequence of tasks is chosen. In both settings, we want to minimize the number of demonstrations needed.

1. (Section 5) The agent chooses the tasks, observes the human's behavior in each of them, and infers the reward function. In this setting, where the agent is powerful enough to choose tasks arbitrarily, we show that the agent can identify the human's reward function, which of course implies the ability to generalize to new tasks.
2. (Section 6) Nature chooses the tasks, and the agent proposes a policy in each task. The human demonstrates a policy only if the agent's policy is significantly suboptimal (i.e., a mistake). In this setting we derive upper and lower bounds on the number of mistakes our agent will make.

4 The challenge of identifying rewards

Note that it is impossible to identify $\theta^\star$ from watching human behavior in a single task. This is because any $\theta^\star$ is fundamentally indistinguishable from an infinite set of reward functions that yield exactly the policy observed in the task. We introduce the idea of behavioral equivalence below to tease apart two separate issues wrapped up in the challenge of identifying rewards.

Definition 1. Two reward functions $\theta, \theta' \in \mathbb{R}^{|S|}$ are behaviorally equivalent in all MDP tasks if, for any $(E, R)$, the sets of optimal policies for $(R + \theta)$ and $(R + \theta')$ are the same.

We argue that the task of identifying the reward function should amount only to identifying the (behavioral) equivalence class to which $\theta^\star$ belongs. In particular, identifying the equivalence class is sufficient to get perfect generalization to new tasks. Any remaining unidentifiability is merely representational and of no real consequence. Next we present a constraint that captures the reward functions belonging to the same equivalence class.

Proposition 1. Two reward functions $\theta$ and $\theta'$ are behaviorally equivalent in all MDP tasks if and only if $\theta - \theta' = c \cdot \mathbf{1}_{|S|}$ for some $c \in \mathbb{R}$, where $\mathbf{1}_{|S|}$ is the all-ones vector of length $|S|$.

The proof is elementary and deferred to Appendix A.
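Proposition 1 can be sanity-checked numerically: shifting a reward vector by $c\cdot\mathbf{1}_{|S|}$ shifts every value by exactly $c$ and therefore leaves greedy action choices unchanged. The sketch below is our own check, not the paper's proof, using a plain value-iteration routine under the normalized convention of Section 2.

```python
import numpy as np

def greedy_policy(P, Y, gamma, iters=500):
    """Optimal policy by value iteration under the normalized convention
    Q(s,a) = sum_{s'} P(s'|s,a) * ((1-gamma) Y(s') + gamma V(s'))."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        Q = P @ ((1.0 - gamma) * Y) + gamma * (P @ V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

rng = np.random.default_rng(4)
nS, nA, gamma = 6, 3, 0.9
P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)
rho = rng.uniform(-1.0, 1.0, nS)

# theta' = theta + c * 1 is behaviorally equivalent for any constant c.
base = greedy_policy(P, rho, gamma)
for c in (-0.7, 2.5):
    assert (greedy_policy(P, rho + c, gamma) == base).all()
```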
For any class of θ's that are equivalent to each other, we can choose a canonical element to represent the class. For example, we can fix an arbitrary reference state s_ref ∈ S and fix the reward of this state to 0 for θ* and all candidate θ's. In the rest of the paper, we will always assume such canonicalization in the MDP setting, hence θ* ∈ Θ₀ ⊆ {θ ∈ [−1, 1]^{|S|} : θ(s_ref) = 0}.

5 Agent chooses the tasks

In this section, the protocol is that the agent chooses a sequence of tasks {(E_t, R_t)}. For each task (E_t, R_t), the human reveals π_t*, which is optimal for environment E_t and reward function θ* + R_t. Our goal is to design an algorithm which chooses {(E_t, R_t)} and identifies θ* to a desired accuracy, ε, using as few tasks as possible. Theorem 1 shows that a simple algorithm can identify θ* after only O(log(1/ε)) tasks, if any tasks may be chosen. Roughly speaking, the algorithm amounts to a binary search on each component of θ* by manipulating the task reward R_t.³ See the proof for the algorithm specification, and the sketch after the proof for an illustration. As noted before, once the agent has identified θ* within an appropriate tolerance, it can compute a sufficiently-near-optimal policy for all tasks, thus completing the generalization objective through the far stronger identification objective in this setting.

Theorem 1. If θ* ∈ Θ₀ ⊆ {θ ∈ [−1, 1]^{|S|} : θ(s_ref) = 0}, there exists an algorithm that outputs θ ∈ ℝ^{|S|} satisfying ‖θ − θ*‖_∞ ≤ ε after O(log(1/ε)) demonstrations.

Proof. The algorithm chooses the following fixed environment in all tasks: for each s ∈ S \ {s_ref}, let one action be a self-loop, and let the other action transition to s_ref. In s_ref, all actions cause self-loops. The initial distribution over states is uniformly at random over S \ {s_ref}. Each task differs only in the task reward R_t (where R_t(s_ref) ≡ 0 always). After observing the state occupancy of the optimal policy, for each s we check whether the occupancy is equal to 0. If so, it means that the demonstrated optimal policy chooses to go to s_ref from s in the first time step, and θ*(s) + R_t(s) ≤ θ*(s_ref) + R_t(s_ref) = 0; if not, we have θ*(s) + R_t(s) ≥ 0. Consequently, after each task we learn the relationship between θ*(s) and −R_t(s) for each s ∈ S \ {s_ref}, so conducting a binary search by manipulating R_t(s) will identify θ* to ε-accuracy after O(log(1/ε)) tasks.

³ While we present a proof that manipulates R_t, an only slightly more complex proof applies to the setting where all the R_t are exactly zero and the manipulation is limited to the environment [1].
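A sketch of the binary search in this proof follows. The `sign_oracle` below is a hypothetical stand-in for one task plus the human's demonstration in the environment constructed above: given a task reward R_t, it returns for every non-reference state whether the demonstrated occupancy of that state is nonzero, i.e., whether θ*(s) + R_t(s) ≥ 0. All components are searched in parallel, one task per halving.

```python
import numpy as np

def identify_reward(sign_oracle, n_states, s_ref, eps):
    """Binary search of Theorem 1 (sketch). sign_oracle(R) plays one task with
    task reward R and returns a vector of +1/-1: +1 for state s iff the
    demonstrated occupancy of s is nonzero, i.e. theta*(s) + R(s) >= 0."""
    lo = -np.ones(n_states)              # running lower bounds on theta*(s)
    hi = np.ones(n_states)               # running upper bounds
    lo[s_ref] = hi[s_ref] = 0.0          # canonicalization: theta*(s_ref) = 0
    n_tasks = int(np.ceil(np.log2(2.0 / eps)))  # O(log(1/eps)) demonstrations
    for _ in range(n_tasks):
        mid = (lo + hi) / 2
        signs = sign_oracle(-mid)        # R_t = -mid tests theta*(s) vs mid(s)
        lo = np.where(signs > 0, mid, lo)
        hi = np.where(signs > 0, hi, mid)
        lo[s_ref] = hi[s_ref] = 0.0
    return (lo + hi) / 2                 # within eps of theta* in l_inf norm
```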
6 Nature chooses the tasks

While Theorem 1 yields a strong identification guarantee, it also relies on a strong assumption, namely that {(E_t, R_t)} may be chosen by the agent in an arbitrary manner. In this section, we let nature, which is allowed to be adversarial for the purpose of the analysis, choose {(E_t, R_t)}. Generally speaking, we cannot obtain identification guarantees in such an adversarial setup. As an example, if R_t ≡ 0 and E_t remains the same over time, we are essentially back to the classical IRL setting and suffer from the degeneracy issue. However, generalization to future tasks, which is our ultimate goal, is easy in this special case: after the initial demonstration, the agent can mimic it to behave optimally in all subsequent tasks without requiring further demonstrations.

More generally, if nature repeats similar tasks, then the agent obtains little new information, but presumably it knows how to behave in most cases; if nature chooses a task unfamiliar to the agent, then the agent is likely to err, but it may learn about θ* from the mistake. To formalize this intuition, we consider the following protocol: nature chooses a sequence of tasks {(E_t, R_t)} in an arbitrary manner. For every task (E_t, R_t), the agent proposes a policy π_t. The human examines the policy's value, and if the loss

l_t = E_{s∼μ}[V^{π_t*}_{E_t, θ*+R_t}(s)] − E_{s∼μ}[V^{π_t}_{E_t, θ*+R_t}(s)]  (2)

is less than some ε, then the human is satisfied and no demonstration is needed; otherwise a mistake is counted and η^{π_t*}_{μ_t,P_t} is revealed to the agent (note that η^{π_t*}_{μ_t,P_t} can be computed by the agent if needed from π_t* and its knowledge of the task). The main goal of this section is to design an algorithm that has a provable guarantee on the total number of mistakes.

On human supervision. Here we require the human to evaluate the agent's policies in addition to providing demonstrations. We argue that this is a reasonable assumption because (1) only a binary signal I(l_t > ε) is needed as opposed to the precise value of l_t, and (2) if a policy is suboptimal but the human fails to realize it, arguably it should not be treated as a mistake. Meanwhile, we will also provide identification guarantees in Section 6.4, as the human will be relieved from the supervision duty once θ* is identified.

Before describing and analyzing our algorithm, we first notice that Equation 2 can be rewritten as

l_t = (θ* + R)ᵀ (η^{π_t*}_{μ_t,P_t} − η^{π_t}_{μ_t,P_t}),  (3)

using Equation 1. So effectively, the given environment E_t in each round induces a set of state occupancy vectors {η^π_{μ_t,P_t} : π ∈ (S → A)}, and we want the agent to choose the vector that has the largest dot product with θ* + R. The exponential size of this set will not be a concern, because our main result (Theorem 2) has no dependence on the number of vectors and only depends on the dimension of those vectors. The result is enabled by studying the linear bandit version of the problem, which subsumes the MDP setting for our purpose and is also a model of independent interest.

6.1 The linear bandit setting

In the linear bandit setting, D is a finite action space with size |D| = K. Each task is denoted as a pair (X, R), where R is the task-specific reward function as before. X = [x⁽¹⁾ ⋯ x⁽ᴷ⁾] is a d × K feature matrix, where x⁽ⁱ⁾ is the feature vector for the i-th action, and ‖x⁽ⁱ⁾‖₁ ≤ 1. When we reduce MDPs to linear bandits, each element of D corresponds to an MDP policy, and the feature vector is the state occupancy of that policy. As before, R, θ* ∈ ℝᵈ are the task reward and the human's unknown reward, respectively. The initial uncertainty set for θ* is Θ₀ ⊆ [−1, 1]ᵈ. The value of the i-th action is calculated as (θ* + R)ᵀ x⁽ⁱ⁾, and a* is the action that maximizes this value. Every round the agent proposes an action a ∈ D, whose loss is defined as l_t = (θ* + R)ᵀ (x^{a*} − x^a).

Algorithm 1 Ellipsoid Algorithm for Repeated Inverse Reinforcement Learning
1: Input: Θ₀.
2: Θ₁ ← MVEE(Θ₀).
3: for t = 1, 2, . . . do
4:   Nature reveals (X_t, R_t).
5:   Learner plays a_t = argmax_{a∈D} c_tᵀ x_t^a, where c_t is the center of Θ_t. Θ_{t+1} ← Θ_t.
6:   if l_t > ε then
7:     Human reveals a_t*. Θ_{t+1} ← MVEE({θ ∈ Θ_t : (θ − c_t)ᵀ (x_t^{a_t*} − x_t^{a_t}) ≥ 0}).
8:   end if
9: end for

We now show how to embed the previous MDP setting in linear bandits.
Example 1. Given an MDP problem with variables S, A, γ, θ*, s_ref, Θ₀, {(E_t, R_t)}, we can convert it into a linear bandit problem as follows (all variables with a prime belong to the linear bandit problem, and we use v^{\i} to denote the vector v with the i-th coordinate removed):

• D = {π : S → A}, d = |S| − 1, θ*′ = θ*^{\s_ref}, Θ₀′ = {θ^{\s_ref} : θ ∈ Θ₀}.
• x_t^π = (η^π_{μ_t,P_t})^{\s_ref}, R_t′ = R_t^{\s_ref} − R_t(s_ref) · 1_d.

Note that there is a more straightforward conversion by letting d = |S|, θ*′ = θ*, Θ₀′ = Θ₀, x_t^π = η^π_{μ_t,P_t}, R_t′ = R_t, which also preserves losses. We perform a more succinct conversion in Example 1 by canonicalizing both θ* (already assumed) and R_t (explicitly done here) and dropping the coordinate for s_ref in all relevant vectors.

MDPs with linear rewards. In the IRL literature, a generalization of the MDP setting is often considered, in which reward is linear in state features φ(s) ∈ ℝᵈ [2, 3]. In this new setting, θ* and R are reward parameters, and the actual reward is (θ* + R)ᵀ φ(s). This new setting can also be reduced to linear bandits similarly to Example 1, except that the state occupancy is replaced by the discounted sum of expected feature values. Our main result, Theorem 2, will still apply automatically, but now the guarantee will only depend on the dimension of the feature space and has no dependence on |S|. We include the conversion below but do not further discuss this setting in the rest of the paper.

Example 2. Consider an MDP problem with state features, defined by S, A, γ, d ∈ ℤ₊, θ* ∈ ℝᵈ, Θ₀ ⊆ [−1, 1]ᵈ, {(E_t, φ_t : S → ℝᵈ, R_t ∈ ℝᵈ)}, where the background reward and task reward in state s are θ*ᵀ φ_t(s) and R_tᵀ φ_t(s) respectively, and θ* ∈ Θ₀. Suppose ‖φ_t(s)‖_∞ ≤ 1 always holds; then we can convert it into a linear bandit problem as follows: D = {π : S → A}; d, θ*, and R_t remain the same; x_t^π = (1−γ) Σ_{h=1}^∞ γ^{h−1} E[φ_t(s_h) | μ_t, P_t, π] / d. Note that the division by d in x_t^π is for the purpose of normalization, so that ‖x_t^π‖₁ ≤ ‖φ_t‖₁ / d ≤ ‖φ_t‖_∞ ≤ 1.

6.2 Ellipsoid Algorithm for Repeated Inverse Reinforcement Learning

We propose Algorithm 1, and provide the mistake bound in the following theorem.

Theorem 2. For Θ₀ = [−1, 1]ᵈ, the number of mistakes made by Algorithm 1 is guaranteed to be O(d² log(d/ε)).

To prove Theorem 2, we quote a result from the linear programming literature in Lemma 1, which can be found in standard lecture notes (e.g., [8], Theorem 8.8; see also [9], Lemma 3.1.34).

Lemma 1 (Volume reduction in ellipsoid algorithm). Given any non-degenerate ellipsoid B in ℝᵈ centered at c ∈ ℝᵈ, and any non-zero vector v ∈ ℝᵈ, let B⁺ be the minimum-volume enclosing ellipsoid (MVEE) of {u ∈ B : (u − c)ᵀ v ≥ 0}. We have vol(B⁺)/vol(B) ≤ e^{−1/(2(d+1))}.
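The MVEE of a half-ellipsoid under a central cut has a standard closed form; the following numpy sketch (textbook ellipsoid-method formulas, our own illustration rather than code from the paper) implements the update used on Line 7 of Algorithm 1, where the ellipsoid is represented as {x : (x − c)ᵀ B⁻¹ (x − c) ≤ 1}.

```python
import numpy as np

def central_cut(c, B, a):
    """MVEE of the ellipsoid {x : (x-c)^T B^{-1} (x-c) <= 1} intersected with
    the half-space {x : a^T (x - c) >= 0} (assumes d >= 2). Per Lemma 1 the
    volume shrinks by at least a factor exp(-1/(2(d+1)))."""
    d = len(c)
    g = -a / np.sqrt(a @ B @ a)                       # normalized cut direction
    c_new = c - (1.0 / (d + 1)) * (B @ g)             # shift center into kept half
    B_new = (d**2 / (d**2 - 1.0)) * (B - (2.0 / (d + 1)) * np.outer(B @ g, B @ g))
    return c_new, B_new
```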
Proof of Theorem 2. Whenever a mistake is made, we can induce the constraint (R_t + θ*)ᵀ (x_t^{a_t*} − x_t^{a_t}) > ε. Meanwhile, since a_t is greedy w.r.t. c_t, we have (R_t + c_t)ᵀ (x_t^{a_t*} − x_t^{a_t}) ≤ 0, where c_t is the center of Θ_t as in Line 5. Taking the difference of the two inequalities, we obtain

(θ* − c_t)ᵀ (x_t^{a_t*} − x_t^{a_t}) > ε.  (4)

Therefore, the update rule on Line 7 of Algorithm 1 preserves θ* in Θ_{t+1}. Since the update makes a central cut through the ellipsoid, Lemma 1 applies and the volume shrinks every time a mistake is made. To prove the theorem, it remains to upper bound the initial volume and lower bound the terminal volume of Θ_t. We first show that an update never eliminates B_∞(θ*, ε/2), the ℓ_∞ ball centered at θ* with radius ε/2. This is because any eliminated θ satisfies (θ − c_t)ᵀ (x_t^{a_t*} − x_t^{a_t}) < 0. Combining this with Equation 4, we have

ε < (θ* − θ)ᵀ (x_t^{a_t*} − x_t^{a_t}) ≤ ‖θ* − θ‖_∞ ‖x_t^{a_t*} − x_t^{a_t}‖₁ ≤ 2‖θ* − θ‖_∞.

The last step follows from ‖x‖₁ ≤ 1. We conclude that any eliminated θ must be ε/2 far away from θ* in ℓ_∞ distance. Hence, we can lower bound the volume of Θ_t for any t by that of Θ₀ ∩ B_∞(θ*, ε/2), which contains an ℓ_∞ ball with radius ε/4 at its smallest (when θ* is one of Θ₀'s vertices). To simplify calculation, we relax this lower bound (the volume of the ℓ_∞ ball) to the volume of the inscribed ℓ₂ ball. Finally we put everything together: let M_T be the number of mistakes made from round 1 to T, let C_d be the volume of the unit hypersphere in ℝᵈ (i.e., the ℓ₂ ball with radius 1), and let vol(·) denote the volume of an ellipsoid. We have

M_T / (2(d+1)) ≤ log(vol(Θ₁)) − log(vol(Θ_{T+1})) ≤ log(C_d (√d)ᵈ) − log(C_d (ε/4)ᵈ) = d log(4√d / ε).

So M_T ≤ 2d(d+1) log(4√d / ε) = O(d² log(d/ε)).

6.3 Lower bound

In Section 5 we obtained an O(log(1/ε)) upper bound on the number of demonstrations, which has no dependence on |S| (which corresponds to d + 1 in linear bandits). Comparing Theorem 2 to Theorem 1, one may wonder whether the polynomial dependence on d is an artifact of the inefficiency of Algorithm 1. We clarify this issue by proving a lower bound, showing that Ω(d log(1/ε)) mistakes are inevitable in the worst case when nature chooses the tasks. We provide a proof sketch below; the complete proof is deferred to Appendix E.

Theorem 3. For any randomized algorithm⁴ in the linear bandit setting, there always exist θ* ∈ [−1, 1]ᵈ and an adversarial sequence of {(X_t, R_t)}, potentially adapting to the algorithm's previous decisions, such that the expected number of mistakes made by the algorithm is Ω(d log(1/ε)).

⁴ While our Algorithm 1 is deterministic, randomization is often crucial for online learning in general [10].

Proof Sketch. We randomize θ* by sampling each element i.i.d. from Unif([−1, 1]). We will prove that there exists a strategy of choosing (X_t, R_t) such that any algorithm's expected number of mistakes is Ω(d log(1/ε)), which proves the theorem as the maximum is no less than the average. In our construction, X_t = [0_d, e_{j_t}], where j_t is some index to be specified. Hence, every round the agent is essentially asked to decide whether θ(j_t) ≥ −R_t(j_t). The adversary's strategy goes in phases, and R_t remains the same during each phase. Every phase has d rounds where j_t is enumerated over {1, . . . , d}. The adversary will use R_t to shift the posterior on θ(j_t) + R_t(j_t) so that it is centered around the origin; in this way, the agent has about 1/2 probability of making an error (regardless of the algorithm), and the posterior interval will be halved. Overall, the agent makes d/2 mistakes in each phase, and there will be about log(1/ε) phases in total, which gives the lower bound.

Applying the lower bound to MDPs. The above lower bound is stated for linear bandits. In principle, we need to prove a lower bound for MDPs separately, because linear bandits are more general than MDPs for our purpose, and the hard instances in linear bandits may not have corresponding MDP instances. In Lemma 2 below, we show that a certain type of linear bandit instance can always be emulated by MDPs with the same number of actions, and the hard instances constructed in the proof of Theorem 3 indeed satisfy the conditions for such a type; in particular, we require the feature vectors to be non-negative and have ℓ₁ norm bounded by 1.
As a corollary, an Ω(|S| log(1/ε)) lower bound for the MDP setting (even with a small action space, |A| = 2) follows directly from Theorem 3. The proof of Lemma 2 is deferred to Appendix B.

Lemma 2 (Linear bandit to MDP conversion). Let (X, R) be a linear bandit task, and let K be the number of actions. If every x^a is non-negative and ‖x^a‖₁ ≤ 1, then there exists an MDP task (E, R′) with d + 1 states and K actions, such that under some choice of s_ref, converting (E, R′) as in Example 1 recovers the original problem.

6.4 On identification when nature chooses tasks

While Theorem 2 successfully controls the number of total mistakes, it completely avoids the identification problem and does not guarantee to recover θ*. In this section we explore further conditions under which we can obtain identification guarantees when nature chooses the tasks. The first condition, stated in Proposition 2, implies that if we have made all the possible mistakes, then we have indeed identified θ*, where the identification accuracy is determined by the tolerance parameter ε that defines what is counted as a mistake. Due to space limits, the proof is deferred to Appendix C.

Proposition 2. Consider the linear bandit setting. If there exists T₀ such that for any round t ≥ T₀, no more mistakes can ever be made by the algorithm for any choice of (E_t, R_t) and any tie-breaking mechanism, then we have θ* ∈ B_∞(c_{T₀}, ε).

While the above proposition shows that identification is guaranteed if the agent exhausts the mistakes, the agent has no ability to actively fulfill this condition when nature chooses tasks. For a stronger identification guarantee, we may need to grant the agent some freedom in choosing the tasks.

Identification with fixed environment. Here we consider a setting that sits in between Section 5 (completely active) and Section 6.1 (completely passive), where the environment E (hence the induced feature vectors {x⁽¹⁾, x⁽²⁾, . . . , x⁽ᴷ⁾}) is given and fixed, and the agent can arbitrarily choose the task reward R_t. The goal is to obtain an identification guarantee in this intermediate setting. Unfortunately, a degenerate case can easily be constructed that prevents the revelation of any information about θ*. In particular, if x⁽¹⁾ = x⁽²⁾ = . . . = x⁽ᴷ⁾, i.e., the environment is completely uncontrolled, then all actions are equally optimal and nothing can be learned. More generally, if for some v ≠ 0 we have vᵀx⁽¹⁾ = vᵀx⁽²⁾ = . . . = vᵀx⁽ᴷ⁾, then we may never recover θ* along the direction of v. In fact, Proposition 1 can be viewed as an instance of this result where v = 1_{|S|} (recall that 1_{|S|}ᵀ η^π_{μ,P} ≡ 1), and that is why we have to remove such redundancy in Example 1 in order to discuss identification in MDPs. Therefore, to guarantee identification in a fixed environment, the feature vectors must have significant variation in all directions. We capture this intuition by defining a diversity score spread(X) (Definition 2) and showing that the identification accuracy depends inversely on the score (Theorem 4).

Definition 2. Given the feature matrix X = [x⁽¹⁾ x⁽²⁾ ⋯ x⁽ᴷ⁾] whose size is d × K, define spread(X) as the d-th largest singular value of X̃ := X(I_K − (1/K) 1_K 1_Kᵀ).

Theorem 4. For a fixed feature matrix X, if spread(X) > 0, then there exist a sequence R₁, R₂, . . . , R_T with T = O(d² log(d/ε)) and a sequence of tie-break choices of the algorithm, such that after round T we have ‖c_T − θ*‖_∞ ≤ ε √((K−1)/2) / spread(X). The proof is deferred to Appendix D.
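The diversity score of Definition 2 is a single SVD away; a minimal numpy sketch (our own illustration, with an arbitrary example matrix):

```python
import numpy as np

def spread(X):
    """Definition 2: the d-th largest singular value of
    X_tilde = X (I_K - (1/K) 1 1^T), i.e. of X with its column mean removed."""
    d, K = X.shape
    X_tilde = X - X.mean(axis=1, keepdims=True)  # right-multiply by the centering matrix
    return np.linalg.svd(X_tilde, compute_uv=False)[d - 1]

X = np.array([[0.2, 0.8, 0.5],
              [0.7, 0.2, 0.1]])
print(spread(X))  # > 0 here, so Theorem 4 yields an identification guarantee
```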
The √K dependence in Theorem 4 may be of concern, as K can be exponentially large. However, Theorem 4 also holds if we replace X by any matrix that consists of a subset of X's columns, so we may choose a small yet maximally diverse set of columns to optimize the bound.

7 Working with trajectories

In previous sections, we have assumed that the human evaluates the agent's performance based on the state occupancy of the agent's policy, and demonstrates the optimal policy in terms of state occupancy as well. In practice, we would like to instead assume that for each task, the agent rolls out a trajectory, and the human shows an optimal trajectory if he/she finds the agent's trajectory unsatisfying. We are still concerned with upper bounding the number of total mistakes, and aim to provide a parallel version of Theorem 2.

Algorithm 2 Trajectory version of Algorithm 1 for MDPs
1: Input: Θ₀, H, n.
2: Θ₁ ← MVEE(Θ₀), i ← 0, Z̄ ← 0, Z̄* ← 0.
3: for t = 1, 2, . . . do
4:   Nature reveals (E_t, R_t). Agent rolls out a trajectory using π_t, greedy w.r.t. c_t + R_t.
5:   Θ_{t+1} ← Θ_t.
6:   if agent takes a in s with Q*(s, a) < V*(s) − ε then
7:     Human produces an H-step trajectory from s. Let the empirical state occupancy be ẑ_i^{*,H}.
8:     i ← i + 1, Z̄* ← Z̄* + ẑ_i^{*,H}.
9:     Let z_i be the state occupancy of π_t from initial state s, and Z̄ ← Z̄ + z_i.
10:    if i = n then
11:      Θ_{t+1} ← MVEE({θ ∈ Θ_t : (θ − c_t)ᵀ (Z̄* − Z̄) ≥ 0}). i ← 0, Z̄ ← 0, Z̄* ← 0.
12:    end if
13:  end if
14: end for

Unlike in traditional IRL, in our setting the agent is also acting, which gives rise to many subtleties. First, the total reward on the agent's single trajectory is a random variable and may deviate from the expected value of its policy. Therefore, it is generally impossible to decide whether the agent's policy is near-optimal, and instead we assume that the human can check whether each action that the agent takes in the trajectory is near-optimal: when the agent takes a at state s, an error is counted if and only if Q*(s, a) < V*(s) − ε. This criterion can be viewed as a noisy version of the one used in previous sections, as taking the expectation of V*(s) − Q*(s, π(s)) over the occupancy induced by π recovers Equation 2.

While this resolves the issue on the agent's side, how should the human provide his/her optimal trajectory? The most straightforward protocol is that the human rolls out a trajectory from the initial distribution of the task, μ_t. We argue that this is not a reasonable protocol, for two reasons: (1) in expectation, the reward collected by the human may be less than that collected by the agent, because conditioning on the event that an error is spotted may introduce a selection bias; (2) the human may not encounter the problematic state in his/her own trajectory, hence the information provided in the trajectory may be irrelevant. To resolve this issue, we consider a different protocol where the human rolls out a trajectory using an optimal policy from the very state where the agent errs.

Now we discuss how we can prove a parallel of Theorem 2 under this new protocol. First, let us assume that the demonstration were still given in the form of a state occupancy vector starting at the problematic state. In this case, we can reduce to the setting of Section 6 by changing μ_t to a point mass on the problematic state.⁵ To apply the algorithm and the analysis in Section 6, it remains to show that the notion of error in this section (a suboptimal action) implies the notion of error in Section 6 (a suboptimal policy): letting s be the problematic state and π be the agent's policy, we have V^π(s) = Q^π(s, π(s)) ≤ Q*(s, π(s)) < V*(s) − ε. So whenever a suboptimal action is spotted in state s, it indeed implies that the agent's policy is suboptimal for s as the initial state. Hence, we can run Algorithm 1 as-is and Theorem 2 immediately applies. To tackle the remaining issue that the demonstration is in terms of a single trajectory, we will not update Θ_t after each mistake as in Algorithm 1, but only make an update after every mini-batch of mistakes, and aggregate them to form accurate update rules. See Algorithm 2. The formal guarantee of the algorithm is stated in Theorem 5, whose proof is deferred to Appendix G.

⁵ At first glance this might seem suspicious: the problematic state is random and depends on the learner's current policy, whereas in RL the initial distribution is usually fixed and the learner has no control over it. This concern is removed thanks to our adversarial setup on (E_t, R_t) (of which μ_t is a component).
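For concreteness, the empirical occupancy ẑ aggregated on Lines 7-9 of Algorithm 2 can be computed from a single H-step trajectory as follows. This is a minimal sketch under the paper's reward-after-transition convention; the trajectory encoding (a list of visited states s_1, ..., s_H) is our own choice.

```python
import numpy as np

def empirical_occupancy(states, gamma, n_states):
    """Truncated empirical state occupancy of one H-step trajectory:
    z = (1 - gamma) * sum_{h=1}^H gamma^{h-1} e_{s_h},
    the quantity accumulated into Z-bar* on Line 8 of Algorithm 2."""
    z = np.zeros(n_states)
    for h, s in enumerate(states):      # states = (s_1, ..., s_H), h is 0-based
        z[s] += (1 - gamma) * gamma**h
    return z

print(empirical_occupancy([0, 1, 1, 2], gamma=0.9, n_states=3))
```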
Theorem 5. For all δ ∈ (0, 1), with probability at least 1 − δ, the number of mistakes made by Algorithm 2 with parameters Θ₀ = [−1, 1]ᵈ, H = ⌈log(12/ε) / (1−γ)⌉, and n = ⌈log(4d(d+1) log(6√d/ε) / δ) / (32ε²)⌉, where d = |S|,⁶ is at most Õ((d²/ε²) log(d/ε)).⁷

⁶ Here we use the simpler conversion explained right after Example 1. We could certainly improve the dimension to d = |S| − 1 by dropping the s_ref coordinate in all relevant vectors, but that complicates the presentation.
⁷ A log log(1/ε) term is suppressed in Õ(·).

8 Related work & Conclusions

Most existing work in IRL focuses on inferring the reward function⁸ using data acquired from a fixed environment [2, 3, 16, 17, 18, 19, 20]. There is prior work on using data collected from multiple (but exogenously fixed) environments to predict agent behavior [21]. There are also applications where methods for single-environment MDPs have been adapted to multiple environments [17]. Nevertheless, all these works consider the objective of mimicking an optimal behavior in the presented environment(s), and do not aim at the generalization to new tasks that is the main contribution of this paper. Recently, Hadfield-Menell et al. [22] proposed cooperative inverse reinforcement learning, where the human and the agent act in the same environment, allowing the human to actively resolve the agent's uncertainty about the reward function. However, they only consider a single environment (or task), and the unidentifiability issue of IRL still exists. Combining their interesting framework with our resolution to unidentifiability (by multiple tasks) is an interesting future direction.

⁸ While we do not discuss it here, in the economics literature the problem of inferring an agent's utility from behavior queries has long been studied under the heading of utility or preference elicitation [11, 12, 13, 14, 15]. While our result in Section 5 uses similar techniques to elicit the reward function, we do so purely by observing the human's behavior, without an external source of information (e.g., query responses).

Acknowledgement

This work was supported in part by NSF grant IIS 1319365 (Singh & Jiang) and in part by a Rackham Predoctoral Fellowship from the University of Michigan (Jiang). Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the views of the sponsors.

References

[1] Kareem Amin and Satinder Singh. Towards resolving unidentifiability in inverse reinforcement learning. arXiv preprint arXiv:1601.06569, 2016.
[2] Andrew Y Ng and Stuart J Russell. Algorithms for inverse reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 663-670, 2000.
[3] Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the 21st International Conference on Machine Learning, page 1. ACM, 2004.
[4] Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y Ng. An application of reinforcement learning to aerobatic helicopter flight. Advances in Neural Information Processing Systems, 19:1, 2007.
[5] Nick Bostrom. Ethical issues in advanced artificial intelligence. Science Fiction and Philosophy: From Time Travel to Superintelligence, pages 277-284, 2003.
[6] Stuart Russell, Daniel Dewey, and Max Tegmark. Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4):105-114, 2015.
[7] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
[8] Ryan O'Donnell. 15-859(E) - Linear and semidefinite programming: lecture notes. Carnegie Mellon University, 2011. https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f11/www/notes/lecture08.pdf.
[9] Martin Grötschel, László Lovász, and Alexander Schrijver. Geometric Algorithms and Combinatorial Optimization, volume 2. Springer Science & Business Media, 2012.
[10] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2011.
[11] Urszula Chajewska, Daphne Koller, and Ronald Parr. Making rational decisions using adaptive utility elicitation. In AAAI/IAAI, pages 363-369, 2000.
[12] John Von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior (60th Anniversary Commemorative Edition). Princeton University Press, 2007.
[13] Kevin Regan and Craig Boutilier. Regret-based reward elicitation for Markov decision processes. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 444-451. AUAI Press, 2009.
[14] Kevin Regan and Craig Boutilier. Eliciting additive reward functions for Markov decision processes. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, volume 22, page 2159, 2011.
[15] Constantin A Rothkopf and Christos Dimitrakakis. Preference elicitation and inverse reinforcement learning. In Machine Learning and Knowledge Discovery in Databases, pages 34-48. Springer, 2011.
[16] Adam Coates, Pieter Abbeel, and Andrew Y Ng. Learning for control from multiple demonstrations. In Proceedings of the 25th International Conference on Machine Learning, pages 144-151. ACM, 2008.
[17] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433-1438, 2008.
[18] Deepak Ramachandran and Eyal Amir. Bayesian inverse reinforcement learning. Urbana, 51:61801, 2007.
[19] Umar Syed and Robert E Schapire. A game-theoretic approach to apprenticeship learning. In Advances in Neural Information Processing Systems, pages 1449-1456, 2007.
[20] Kevin Regan and Craig Boutilier. Robust policy computation in reward-uncertain MDPs using nondominated policies. In AAAI, 2010.
[21] Nathan D Ratliff, J Andrew Bagnell, and Martin A Zinkevich. Maximum margin planning. In Proceedings of the 23rd International Conference on Machine Learning, pages 729-736. ACM, 2006.
[22] Dylan Hadfield-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. Cooperative inverse reinforcement learning. In Advances in Neural Information Processing Systems, pages 3909-3917, 2016.
The Numerics of GANs

Lars Mescheder, Autonomous Vision Group, MPI Tübingen, lars.mescheder@tue.mpg.de
Sebastian Nowozin, Machine Intelligence and Perception Group, Microsoft Research, sebastian.nowozin@microsoft.com
Andreas Geiger, Autonomous Vision Group, MPI Tübingen, andreas.geiger@tue.mpg.de

Abstract

In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games, we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) the presence of eigenvalues of the Jacobian of the gradient vector field with zero real part, and ii) eigenvalues with big imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train.

1 Introduction

Generative Adversarial Networks (GANs) [10] have been very successful in learning probability distributions. Since their first appearance, GANs have been successfully applied to a variety of tasks, including image-to-image translation [12], image super-resolution [13], image in-painting [27], domain adaptation [26], probabilistic inference [14, 9, 8] and many more. While very powerful, GANs are known to be notoriously hard to train. The standard strategy for stabilizing training is to carefully design the model, either by adapting the architecture [21] or by selecting an easy-to-optimize objective function [23, 4, 11].

In this work, we examine the general problem of finding local Nash-equilibria of smooth games. We revisit the de-facto standard algorithm for finding such equilibrium points, simultaneous gradient ascent. We theoretically show that the main factors preventing the algorithm from converging are the presence of eigenvalues of the Jacobian of the associated gradient vector field with zero real part and eigenvalues with a large imaginary part. The presence of the latter is also one of the reasons that make saddle-point problems more difficult than local optimization problems. Utilizing these insights, we design a new algorithm that overcomes some of these problems. Experimentally, we show that our algorithm leads to stable training on many GAN architectures, including some that are known to be hard to train. Our technique is orthogonal to strategies that try to make the GAN game well-defined, e.g. by adding instance noise [24] or by using the Wasserstein divergence [4, 11]: while these strategies try to ensure the existence of Nash-equilibria, our paper deals with their computation and the numerical difficulties that can arise in practice.

In summary, our contributions are as follows:

• We identify the main reasons why simultaneous gradient ascent often fails to find local Nash-equilibria.
• By utilizing these insights, we design a new, more robust algorithm for finding Nash-equilibria of smooth two-player games.
• We empirically demonstrate that our method enables stable training of GANs on a variety of architectures and divergence measures.
The proofs for the theorems in this paper can be found in the supplementary material.¹

2 Background

In this section we first revisit the concept of Generative Adversarial Networks (GANs) from a divergence minimization point of view. We then introduce the concept of a smooth (non-convex) two-player game and define the terminology used in the rest of the paper. Finally, we describe simultaneous gradient ascent, the de-facto standard algorithm for finding Nash-equilibria of such games, and derive some of its properties.

2.1 Divergence Measures and GANs

Generative Adversarial Networks are best understood in the context of divergence minimization: assume we are given a divergence function D, i.e. a function that takes a pair of probability distributions as input, outputs an element from [0, ∞] and satisfies D(p, p) = 0 for all probability distributions p. Moreover, assume we are given some target distribution p₀ from which we can draw i.i.d. samples and a parametric family of distributions q_θ that also allows us to draw i.i.d. samples. In practice q_θ is usually implemented as a neural network that acts on a hidden code z sampled from some known distribution and outputs an element from the target space. Our goal is to find θ* that minimizes the divergence D(p₀, q_θ), i.e. we want to solve the optimization problem

min_θ D(p₀, q_θ).  (1)

Most divergences that are used in practice can be represented in the following form [10, 16, 4]:

D(p, q) = max_{f∈F} E_{x∼q}[g₁(f(x))] − E_{x∼p}[g₂(f(x))]  (2)

for some class F of functions f : X → ℝ and convex functions g₁, g₂ : ℝ → ℝ. Together with (1), this leads to mini-max problems of the form

min_θ max_{f∈F} E_{x∼q_θ}[g₁(f(x))] − E_{x∼p₀}[g₂(f(x))].  (3)

These divergences include the Jensen-Shannon divergence [10], all f-divergences [16], the Wasserstein divergence [4] and even the indicator divergence, which is 0 if p = q and ∞ otherwise. In practice, the function class F in (3) is approximated with a parametric family of functions, e.g. parameterized by a neural network. Of course, when minimizing the divergence w.r.t. this approximated family, we no longer minimize the correct divergence. However, it can be verified that taking any class of functions in (3) leads to a divergence function for appropriate choices of g₁ and g₂. Therefore, some authors call these divergence functions neural network divergences [5].

2.2 Smooth Two-Player Games

A differentiable two-player game is defined by two utility functions f(φ, θ) and g(φ, θ) defined over a common space (φ, θ) ∈ Ω₁ × Ω₂. Ω₁ corresponds to the possible actions of player 1, Ω₂ corresponds to the possible actions of player 2. The goal of player 1 is to maximize f, whereas player 2 tries to maximize g. In the context of GANs, Ω₁ is the set of possible parameter values for the generator, whereas Ω₂ is the set of possible parameter values for the discriminator. We call a game a zero-sum game if f = −g. Note that the derivation of the GAN game in Section 2.1 leads to a zero-sum game, whereas in practice people usually employ a variant of this formulation that is not a zero-sum game for better convergence [10].

¹ The code for all experiments in this paper is available under https://github.com/LMescheder/TheNumericsOfGANs.

Algorithm 1 Simultaneous Gradient Ascent (SimGA)
1: while not converged do
2:   v_φ ← ∇_φ f(φ, θ)
3:   v_θ ← ∇_θ g(φ, θ)
4:   φ ← φ + h v_φ
5:   θ ← θ + h v_θ
6: end while
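To see Algorithm 1 in action, consider the textbook bilinear zero-sum game f(φ, θ) = φ·θ, g = −f (our own illustrative example, not from the paper); the following numpy sketch shows SimGA spiraling away from the unique Nash-equilibrium at the origin.

```python
import numpy as np

# SimGA on f(phi, theta) = phi * theta, g = -f. The unique Nash-equilibrium
# is (0, 0), but the iterates spiral outward for every step size h > 0.
phi, theta, h = 1.0, 1.0, 0.1
for _ in range(100):
    v_phi = theta     # d/dphi f(phi, theta)
    v_theta = -phi    # d/dtheta g(phi, theta) = -d/dtheta f(phi, theta)
    phi, theta = phi + h * v_phi, theta + h * v_theta
print(phi, theta)     # far from (0, 0): the norm grows like (1 + h^2)^(k/2)
```

The update matrix here is [[1, h], [−h, 1]], whose eigenvalues 1 ± ih have modulus √(1 + h²) > 1; this is exactly the failure mode analyzed in Section 3 below.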
Our goal is to find a Nash-equilibrium of the game, i.e. a point x̄ = (φ̄, θ̄) given by the two conditions

φ̄ ∈ argmax_φ f(φ, θ̄)  and  θ̄ ∈ argmax_θ g(φ̄, θ).  (4)

We call a point (φ̄, θ̄) a local Nash-equilibrium if (4) holds in a local neighborhood of (φ̄, θ̄).

Every differentiable two-player game defines a vector field

v(φ, θ) = ( ∇_φ f(φ, θ), ∇_θ g(φ, θ) ).  (5)

We call v the associated gradient vector field of the game defined by f and g. For the special case of zero-sum two-player games, we have g = −f and thus

v′(φ, θ) = [ ∇²_φ f(φ, θ)    ∇_{φ,θ} f(φ, θ) ;  −∇_{φ,θ} f(φ, θ)ᵀ    −∇²_θ f(φ, θ) ].  (6)

As a direct consequence, we have the following:

Lemma 1. For zero-sum games, v′(x) is negative (semi-)definite if and only if ∇²_φ f(φ, θ) is negative (semi-)definite and ∇²_θ f(φ, θ) is positive (semi-)definite.

Corollary 2. For zero-sum games, v′(x̄) is negative semi-definite for any local Nash-equilibrium x̄. Conversely, if x̄ is a stationary point of v(x) and v′(x̄) is negative definite, then x̄ is a local Nash-equilibrium.

Note that Corollary 2 is not true for general two-player games.

2.3 Simultaneous Gradient Ascent

The de-facto standard algorithm for finding Nash-equilibria of general smooth two-player games is Simultaneous Gradient Ascent (SimGA), which was described in several works, for example in [22] and, more recently also in the context of GANs, in [16]. The idea is simple and is illustrated in Algorithm 1. We iteratively update the parameters of the two players by simultaneously applying gradient ascent to the utility functions of the two players. This can also be understood as applying the Euler method to the ordinary differential equation

dx(t)/dt = v(x(t)),  (7)

where v(x) is the associated gradient vector field of the two-player game. It can be shown that simultaneous gradient ascent converges locally to a Nash-equilibrium for a zero-sum game if the Hessian of both players is negative definite [16, 22] and the learning rate is small enough. Unfortunately, in the context of GANs the former condition is rarely met. We revisit the properties of simultaneous gradient ascent in Section 3 and also show a more subtle property, namely that even if the conditions for the convergence of simultaneous gradient ascent are met, it might require extremely small step sizes for convergence if the Jacobian of the associated gradient vector field has eigenvalues with large imaginary part.

Figure 1: Illustration of how the eigenvalues of A are projected into the unit circle and what causes problems (axes: Re(z) and Im(z)). (a) How the eigenvalues are projected into the unit ball. (b) An example where h has to be chosen extremely small. (c) How our method alleviates the problem. When discretizing the gradient flow with step size h, the eigenvalues of the Jacobian at a fixed point are projected into the unit ball along rays from 1. However, this is only possible if the eigenvalues lie in the left half plane, and it requires extremely small step sizes h if the eigenvalues are close to the imaginary axis. The proposed method moves the eigenvalues to the left in order to make the problem better posed, thus allowing the algorithm to converge for reasonable step sizes.

3 Convergence Theory

In this section, we analyze the convergence properties of the most common method for training GANs, simultaneous gradient ascent.² We show that two major failure causes for this algorithm are eigenvalues of the Jacobian of the associated gradient vector field with zero real part as well as eigenvalues with large imaginary part.
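For the bilinear example above, both failure causes can be read off directly from the Jacobian of the gradient vector field; a minimal numpy check (our own illustration):

```python
import numpy as np

# For f(phi, theta) = phi * theta we have v(phi, theta) = (theta, -phi),
# so the Jacobian v'(x) from Equation 6 is the constant matrix below.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.linalg.eigvals(J))                              # [0.+1.j 0.-1.j]: zero real part
print(np.abs(np.linalg.eigvals(np.eye(2) + 0.1 * J)))    # both > 1; true for every h > 0
```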
For our theoretical analysis, we start with the following classical theorem about the convergence of fixed-point iterations:

Proposition 3. Let F : Ω → Ω be a continuously differentiable function on an open subset Ω of ℝⁿ and let x̄ ∈ Ω be so that

1. F(x̄) = x̄, and
2. the absolute values of the eigenvalues of the Jacobian F′(x̄) are all smaller than 1.

Then there is an open neighborhood U of x̄ so that for all x₀ ∈ U, the iterates F^(k)(x₀) converge to x̄. The rate of convergence is at least linear. More precisely, the error ‖F^(k)(x₀) − x̄‖ is in O(|λ_max|ᵏ) for k → ∞, where λ_max is the eigenvalue of F′(x̄) with the largest absolute value.

Proof. See [6], Proposition 4.4.1.

In numerics, we often consider functions of the form

F(x) = x + h G(x)  (8)

for some h > 0. Finding fixed points of F is then equivalent to finding solutions to the nonlinear equation G(x) = 0 for x. For F as in (8), the Jacobian is given by

F′(x) = I + h G′(x).  (9)

Note that in general neither F′(x) nor G′(x) is symmetric, and they can therefore have complex eigenvalues. The following Lemma gives an easy condition for when a fixed point of F as in (8) satisfies the conditions of Proposition 3.

² A similar analysis of alternating gradient ascent, a popular alternative to simultaneous gradient ascent, can be found in the supplementary material.

Lemma 4. Assume that A ∈ ℝ^{n×n} only has eigenvalues with negative real part and let h > 0. Then the eigenvalues of the matrix I + hA lie in the unit ball if and only if

h < (1 / |Re(λ)|) · 2 / (1 + (Im(λ)/Re(λ))²)  (10)

for all eigenvalues λ of A.

Corollary 5. If v′(x̄) only has eigenvalues with negative real part at a stationary point x̄, then Algorithm 1 is locally convergent to x̄ for h > 0 small enough.

Equation 10 shows that there are two major factors that determine the maximum possible step size h: (i) the maximum value of Re(λ) and (ii) the maximum value q of |Im(λ)/Re(λ)|. Note that as q goes to infinity, we have to choose h according to O(q⁻²), which can quickly become extremely small. This is visualized in Figure 1: if G′(x̄) has an eigenvalue with small absolute real part but big imaginary part, h needs to be chosen extremely small to still achieve convergence. Moreover, even if we make h small enough, most eigenvalues of F′(x̄) will be very close to 1, which by Proposition 3 leads to very slow convergence of the algorithm. This is in particular a problem of simultaneous gradient ascent for two-player games (in contrast to gradient ascent for local optimization), where the Jacobian G′(x̄) is not symmetric and can therefore have non-real eigenvalues.

4 Consensus Optimization

In this section, we derive the proposed method and analyze its convergence properties.

4.1 Derivation

Finding stationary points of the vector field v(x) is equivalent to solving the equation v(x) = 0. In the context of two-player games this means solving the two equations

∇_φ f(φ, θ) = 0  and  ∇_θ g(φ, θ) = 0.  (11)

A simple strategy for finding such stationary points is to minimize L(x) = ½‖v(x)‖² for x. Unfortunately, this can result in unstable stationary points of v or other local minima of ½‖v(x)‖², and in practice we found it did not work well. We therefore consider a modified vector field w(x) that is as close as possible to the original vector field v(x), but at the same time still minimizes L(x) (at least locally). A sensible candidate for such a vector field is

w(x) = v(x) − γ∇L(x)  (12)

for some γ > 0.
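Using ∇L(x) = v′(x)ᵀv(x), which is derived immediately below, the modified field w can be simulated on the bilinear example from above; unlike plain SimGA, the iterates now contract toward the equilibrium. This is our own sketch, with γ and h chosen arbitrarily.

```python
import numpy as np

# Consensus optimization (Equation 12) on f(phi, theta) = phi * theta:
# v(x) = (theta, -phi), v'(x) = [[0, 1], [-1, 0]], grad L(x) = v'(x)^T v(x).
gamma, h = 0.5, 0.1
x = np.array([1.0, 1.0])
Jv = np.array([[0.0, 1.0], [-1.0, 0.0]])   # constant Jacobian for this game
for _ in range(200):
    v = np.array([x[1], -x[0]])
    grad_L = Jv.T @ v                       # here equal to x itself
    x = x + h * (v - gamma * grad_L)
print(x)  # close to the Nash-equilibrium (0, 0)
```

For this game w(x) = (J − γI)x, so the regularizer shifts the purely imaginary eigenvalues ±i to −γ ± i, and the discrete update contracts for reasonable step sizes, exactly as Figure 1c suggests.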
A simple calculation shows that the gradient ∇L(x) is given by

∇L(x) = v′(x)ᵀ v(x).  (13)

This vector field is the gradient vector field associated to the modified two-player game given by the two modified utility functions

f̃(φ, θ) = f(φ, θ) − γL(φ, θ)  and  g̃(φ, θ) = g(φ, θ) − γL(φ, θ).  (14)

The regularizer L(φ, θ) encourages agreement between the two players. Therefore we call the resulting algorithm Consensus Optimization (Algorithm 2).³ ⁴

³ This algorithm requires backpropagation through the squared norm of the gradient with respect to the weights of the network. This is sometimes called double backpropagation and is for example supported by the deep learning frameworks Tensorflow [1] and PyTorch [19].
⁴ As was pointed out by Ferenc Huszár in one of his blog posts on www.inference.vc, naively implementing this algorithm in a mini-batch setting leads to biased estimates of L(x). However, the bias goes down linearly with the batch size, which justifies the usage of consensus optimization in a mini-batch setting. Alternatively, it is possible to debias the estimate by subtracting a multiple of the sample variance of the gradients; see the supplementary material for details.

Algorithm 2 Consensus optimization
1: while not converged do
2:   v_φ ← ∇_φ (f(φ, θ) − γL(φ, θ))
3:   v_θ ← ∇_θ (g(φ, θ) − γL(φ, θ))
4:   φ ← φ + h v_φ
5:   θ ← θ + h v_θ
6: end while

4.2 Convergence

For analyzing convergence, we consider a more general algorithm than in Section 4.1, given by iteratively applying a function F of the form

F(x) = x + h A(x) v(x)  (15)

for some step size h > 0 and an invertible matrix A(x) to x. Consensus optimization is a special case of this algorithm for A(x) = I − γ v′(x)ᵀ. We assume that γ⁻¹ is not an eigenvalue of v′(x)ᵀ for any x, so that A(x) is indeed invertible.

Lemma 6. Assume h > 0 and A(x) invertible for all x. Then x̄ is a fixed point of (15) if and only if it is a stationary point of v. Moreover, if x̄ is a stationary point of v, we have

F′(x̄) = I + h A(x̄) v′(x̄).  (16)

Lemma 7. Let A(x) = I − γv′(x)ᵀ and assume that v′(x̄) is negative semi-definite and invertible.⁵ Then A(x̄)v′(x̄) is negative definite.

As a consequence of Lemma 6 and Lemma 7, we can show local convergence of our algorithm to a local Nash-equilibrium:

Corollary 8. Let v(x) be the associated gradient vector field of a two-player zero-sum game and A(x) = I − γv′(x)ᵀ. If x̄ is a local Nash-equilibrium, then there is an open neighborhood U of x̄ so that for all x₀ ∈ U, the iterates F^(k)(x₀) converge to x̄ for h > 0 small enough.

Our method solves the problem of eigenvalues of the Jacobian with (approximately) zero real part. As the next Lemma shows, it also alleviates the problem of eigenvalues with a big imaginary-to-real-part quotient:

Lemma 9. Assume that A ∈ ℝ^{n×n} is negative semi-definite. Let q(γ) be the maximum of |Im(λ)| / |Re(λ)| (possibly infinite) with respect to λ, where λ denotes the eigenvalues of A − γAᵀA and Re(λ) and Im(λ) denote their real and imaginary parts respectively. Moreover, assume that A is invertible with |Av| ≥ ρ|v| for ρ > 0, and let

c = min_{v∈S(ℂⁿ)} |v̄ᵀ(A + Aᵀ)v| / |v̄ᵀ(A − Aᵀ)v|,  (17)

where S(ℂⁿ) denotes the unit sphere in ℂⁿ. Then

q(γ) ≤ 1 / (c + 2ρ²γ).  (18)

Lemma 9 shows that the imaginary-to-real-part quotient can be made arbitrarily small for an appropriate choice of γ. According to Proposition 3, this leads to better convergence properties near a local Nash-equilibrium.

⁵ Note that v′(x̄) is usually not symmetric and therefore it is possible that v′(x̄) is negative semi-definite and invertible but not negative definite.

5 Experiments

Mixture of Gaussians. In our first experiment we evaluate our method on a simple 2D example where our goal is to learn a mixture of 8 Gaussians with standard deviations equal to 10⁻² and modes uniformly distributed around the unit circle.
5 Experiments Mixture of Gaussians In our first experiment we evaluate our method on a simple 2D-example where our goal is to learn a mixture of 8 Gaussians with standard deviations equal to 10?2 and modes Note that v 0 (? x) is usually not symmetric and therefore it is possible that v 0 (? x) is negative semi-definite and invertible but not negative-definite. 5 6 (a) Simultaneous Gradient Ascent (b) Consensus optimization Figure 2: Comparison of Simultaneous Gradient Ascent and Consensus optimization on a circular mixture of Gaussians. The images depict from left to right the resulting densities of the algorithm after 0, 5000, 10000 and 20000 iterations as well as the target density (in red). v 0 (x) w0 (x) Before training After training Figure 3: Empirical distribution of eigenvalues before and after training using consensus optimization. The first column shows the distribution of the eigenvalues of the Jacobian v 0 (x) of the unmodified vector field v(x). The second column shows the eigenvalues of the Jacobian w0 (x) of the regularized vector field w(x) = v(x) ? ??L(x) used in consensus optimization. We see that v 0 (x) has eigenvalues close to the imaginary axis near the Nash-equilibrium. As predicted theoretically, this is not the case for the regularized vector field w(x). For visualization purposes, the real part of the spectrum of w0 (x) before training was clipped. uniformly distributed around the unit circle. While simplistic, algorithms training GANs often fail to converge even on such simple examples without extensive fine-tuning of the architecture and hyper parameters [15]. For both the generator and critic we use fully connected neural networks with 4 hidden layers and 16 hidden units in each layer. For all layers, we use RELU-nonlinearities. We use a 16-dimensional Gaussian prior for the latent code z and set up the game between the generator and critic using the utility functions as in [10]. To test our method, we run both SimGA and our method with RMSProp and a learning rate of 10?4 for 20000 steps. For our method, we use a regularization parameter of ? = 10. The results produced by SimGA and our method for 0, 5000, 10000 and 20000 iterations are depicted in Figure 2. We see that while SimGA jumps around the modes of the distribution and fails to converge , our method converges smoothly to the target distribution (shown in red). Figure 3 shows the empirical distribution of the eigenvalues of the Jacobian of v(x) and the regularized vector field w(x). It can be seen that near the Nash-equilibrium most eigenvalues are indeed very close to the 7 (b) celebA (a) cifar-10 Figure 4: Samples generated from a model where both the generator and discriminator are given as in [21], but without batch-normalization. For celebA, we also use a constant number of filters in each layer and add additional RESNET-layers. (a) Discriminator loss (b) Generator loss (c) Inception score Figure 5: (a) and (b): Comparison of the generator and discriminator loss on a DC-GAN architecture with 3 convolutional layers trained on cifar-10 for consensus optimization (without batchnormalization) and alternating gradient ascent (with batch-normalization). We observe that while alternating gradient ascent leads to highly fluctuating losses, consensus optimization successfully stabilizes the training and makes the losses almost constant during training. (c): Comparison of the inception score over time which was computed using 6400 samples. 
CIFAR-10 and celebA  In our second experiment, we apply our method to the cifar-10 and celebA datasets, using a DC-GAN-like architecture [21] without batch normalization in the generator or the discriminator. For celebA, we additionally use a constant number of filters in each layer and add additional RESNET layers. These architectures are known to be hard to optimize using simultaneous (or alternating) gradient ascent [21, 4]. Figures 4a and 4b depict samples from the model trained with our method. We see that our method successfully trains the models, and we also observe that, unlike when using alternating gradient ascent, the generator and discriminator losses remain almost constant during training. This is illustrated in Figure 5. For a quantitative evaluation, we also measured the inception score [23] over time (Figure 5c), showing that our method compares favorably to a DC-GAN trained with alternating gradient ascent. The improvement of consensus optimization over alternating gradient ascent is even more significant if we use 4 instead of 3 convolutional layers; see Figure 11 in the supplementary material for details. Additional experimental results can be found in the supplementary material.

6 Discussion

While we could prove local convergence of our method in Section 4, we believe that even more insights can be gained by examining global convergence properties. In particular, our analysis from Section 4 cannot explain why the generator and discriminator losses remain almost constant during training. Our theoretical results assume the existence of a Nash equilibrium. When we are trying to minimize an f-divergence and the dimensionality of the generator distribution is misspecified, this might not be the case [3]. Nonetheless, we found that our method works well in practice and we leave a closer theoretical investigation of this fact to future research. In practice, our method can potentially make formerly unstable stationary points of the gradient vector field stable if the regularization parameter is chosen to be high. This may lead to poor solutions. We also found that our method becomes less stable for deeper architectures, which we attribute to the fact that the gradients can have very different scales in such architectures, so that the simple L2 penalty from Section 4 needs to be rescaled accordingly. Our method can be regarded as an approximation to the implicit Euler method for integrating the gradient vector field. It can be shown that the implicit Euler method has appealing stability properties [7] that can be translated into convergence theorems for local Nash equilibria. However, the implicit Euler method requires the solution of a nonlinear equation in each iteration. Nonetheless, we believe that further progress can be made by finding better approximations to the implicit Euler method. An alternative interpretation is to view our method as a second-order method. We hence believe that further progress can be made by revisiting second-order optimization methods [2, 18] in the context of saddle point problems.

7 Related Work

Saddle point problems do not only arise in the context of training GANs.
For example, the popular actor-critic models [20] in reinforcement learning are also special cases of saddle-point problems. Finding a stable algorithm for training GANs is a long-standing problem and multiple solutions have been proposed. Unrolled GANs [15] unroll the optimization with respect to the critic, thereby giving the generator more informative gradients. Though unrolling the optimization was shown to stabilize training, it can be cumbersome to implement and in addition it also results in a big model. As was recently shown, the stability of GAN training can be improved by using objectives derived from the Wasserstein-1 distance (induced by the Kantorovich-Rubinstein norm) instead of f-divergences [4, 11]. While Wasserstein-GANs often provide a good solution for the stable training of GANs, they require keeping the critic optimal, which can be time-consuming and can in practice only be achieved approximately, thus violating the conditions for theoretical guarantees. Moreover, some methods like Adversarial Variational Bayes [14] explicitly prescribe the divergence measure to be used, thus making it impossible to apply Wasserstein-GANs. Other approaches that try to stabilize training try to design an easy-to-optimize architecture [23, 21] or make use of additional labels [23, 17]. In contrast to all the approaches described above, our work focuses on stabilizing training over a wide range of architectures and divergence functions.

8 Conclusion

In this work, starting from GAN objective functions we analyzed the general difficulties of finding local Nash equilibria in smooth two-player games. We pinpointed the major numerical difficulties that arise in the current state-of-the-art algorithms and, using our insights, we presented a new algorithm for training generative adversarial networks. Our novel algorithm has favorable properties in theory and practice: from the theoretical viewpoint, we showed that it is locally convergent to a Nash equilibrium even if the eigenvalues of the Jacobian are problematic. This is particularly interesting for games that arise in the context of GANs, where such problems are common. From the practical viewpoint, our algorithm can be used in combination with any GAN architecture whose objective can be formulated as a two-player game to stabilize the training. We demonstrated experimentally that our algorithm stabilizes the training and successfully combats training issues like mode collapse. We believe our work is a first step towards an understanding of the numerics of GAN training and more general deep learning objective functions.

Acknowledgements

This work was supported by Microsoft Research through its PhD Scholarship Programme.

References

[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016.
[2] Shun-ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
[3] Martín Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. CoRR, abs/1701.04862, 2017.
[4] Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017.
[5] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs).
In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 224-232, 2017.
[6] Dimitri P. Bertsekas. Constrained optimization and Lagrange multiplier methods. Academic Press, 2014.
[7] John Charles Butcher. Numerical methods for ordinary differential equations. John Wiley & Sons, 2016.
[8] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. CoRR, abs/1605.09782, 2016.
[9] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martín Arjovsky, Olivier Mastropietro, and Aaron C. Courville. Adversarially learned inference. CoRR, abs/1606.00704, 2016.
[10] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672-2680, 2014.
[11] Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. CoRR, abs/1704.00028, 2017.
[12] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. CoRR, abs/1611.07004, 2016.
[13] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. CoRR, abs/1609.04802, 2016.
[14] Lars M. Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2391-2400, 2017.
[15] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. CoRR, abs/1611.02163, 2016.
[16] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 271-279, 2016.
[17] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2642-2651, 2017.
[18] Razvan Pascanu and Yoshua Bengio. Natural gradient revisited. CoRR, abs/1301.3584, 2013.
[19] Adam Paszke and Soumith Chintala. PyTorch, 2017.
[20] David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. CoRR, abs/1610.01945, 2016.
[21] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
[22] Lillian J. Ratliff, Samuel Burden, and S. Shankar Sastry. Characterization and computation of local Nash equilibria in continuous games. In 51st Annual Allerton Conference on Communication, Control, and Computing, Allerton 2013, Allerton Park & Retreat Center, Monticello, IL, USA, October 2-4, 2013, pages 917-924, 2013.
[23] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2226-2234, 2016.
[24] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. CoRR, abs/1610.04490, 2016.
[25] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude, 2012.
[26] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. CoRR, abs/1702.05464, 2017.
[27] Raymond Yeh, Chen Chen, Teck-Yian Lim, Mark Hasegawa-Johnson, and Minh N. Do. Semantic image inpainting with perceptual and contextual losses. CoRR, abs/1607.07539, 2016.
Physiologically Based Speech Synthesis

Makoto Hirayama†
†ATR Human Information Processing Research Laboratories
2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02 Japan

Eric Vatikiotis-Bateson‡  Kiyoshi Honda‡  Yasuharu Koike‡
‡ATR Auditory and Visual Perception Research Laboratories

Mitsuo Kawato†*
*Also, Laboratory of Parallel Distributed Processing, Research Institute for Electronic Science, Hokkaido University, Sapporo, Hokkaido 060, Japan

Abstract

This study demonstrates a paradigm for modeling speech production based on neural networks. Using physiological data from speech utterances, a neural network learns the forward dynamics relating motor commands to muscles and the ensuing articulator behavior that allows articulator trajectories to be generated from motor commands constrained by phoneme input strings and global performance parameters. From these movement trajectories, a second neural network generates PARCOR parameters that are then used to synthesize the speech acoustics.

1 INTRODUCTION

Our group has attempted to model speech production computationally as a process in which linguistic intentions are realized as speech through a causal succession of patterned behavior. Our aim is to gain insight into the cognitive and neurophysiological mechanisms governing this complex skilled behavior, as well as to provide plausible models of speech synthesis and possibly recognition based on the physiology of speech production. It is the use of physiological data (EMG) representing motor commands to muscles that distinguishes our modeling effort from those of others who use neural networks for articulation-based synthesis and/or inference of the dynamical constraints on speech motor control (Jordan, 1986; Jordan, 1990; Bailly, Laboissiere, and Schwarz, 1992; Saltzman, 1986; Bengio, Houde, and Jordan, 1992).

This paper reports two areas in which implementation of the speech production scheme shown in Figure 1 has progressed. Initially, we concentrated on modeling the dynamics underlying articulation so that phoneme strings can specify motor commands to muscles, which then specify phoneme-specific articulator behavior (Hirayama, Vatikiotis-Bateson, Kawato, and Jordan, 1992). A neural network learned the forward dynamics relating motor commands to muscles and the ensuing articulator behavior associated with prosodically intact, but phonemically simplified, reiterant speech utterances. Then, a cascade neural network (Kawato, Maeda, Uno, and Suzuki, 1990) containing the forward dynamics model along with a suitable smoothness criterion (Uno, Kawato, and Suzuki, 1989) was used to produce continuous motor commands from a sequence of discrete articulatory targets corresponding to the phoneme input string. From this sequence of motor commands, appropriate articulator trajectories were then generated.

[Figure 1: Conceptual scheme of speech production. The intention to speak leads, via the intended phoneme sequence and global performance parameters, to articulator movement.]

Although the results of this early work were encouraging, there were two technical limitations obstructing our effort to model real speech. First, using optoelectronic transduction techniques, only simple speech samples whose primary articulators were the lips and jaw could be recorded, hence the use of reiterant ba. Without dynamic tongue data, real speech could not be modeled. Also, the reiterant paradigm introduced a degree of rhythmical movement behavior not observed in real speech.
The second limitation was that activity of only four muscles, and generally only one dimension of articulator motion, could be recorded simultaneously. Thus, agonist-antagonist muscle activity was not represented even for this limited set of articulators. Technical improvements in data acquisition and their consequences for the subsequent dynamical modeling of real speech are presented in the next two sections. The second area of progress has been to implement the transform from model-generated articulator trajectories to acoustic output. A neural network is used to acquire the mapping between articulation and acoustics in terms of PARCOR parameters (Itakura and Saito, 1969), which are correlated with vocal tract area functions. Speech signals are then generated using a PARCOR synthesizer from articulator input and appropriate glottal sources (currently, the residual of the PARCOR analysis). The results of this modeling for real and reiterant speech are reported in the final section of the paper.

2 EMPIRICAL DEVELOPMENTS

In order to acquire data more suitable for real speech modeling, two additional experiments were run in which articulator position, EMG, and acoustic data were recorded while the same subject produced real and reiterant speech utterances 5-8 seconds long at different speaking rates and styles (e.g., casual vs. precise). In the first of these, a sophisticated optoelectronic device, OPTOTRAK (Northern Digital, Inc.), was used because it permitted simultaneous recording of numerous 3D articulator positions for the lips, jaw and head, ten EMG channels, the speech acoustics, and even dynamic tongue-palate contact patterns. These data were used for modeling of the forward dynamics (see Figure 2) and the forward acoustics. Real speech utterances collected with this system were heavily loaded with labial stops, /p,b,m/, and labiodental fricatives, /f,v/, as well as many low vowels /a, ae/. Since surface EMG was used, it was difficult to obtain reliable recordings of jaw opening (anterior belly of the digastric) and closing (medial pterygoid) muscles. More recently, an electromagnetic position tracking system, EMMA (Perkell, Cohen, Svirsky, Matthies, Garabieta, and Jackson, 1992), was used to transduce midsagittal motions of the tongue tip and tongue blade as well as the lips, jaw, and head. Data were collected for the same speech utterances used in the OPTOTRAK and original experiments, as well as more natural utterances. Reiterant speech was also recorded for ta. For this experiment, surface and hooked-wire EMG techniques were combined, which enabled nine orofacial and extrinsic tongue muscles to be recorded for jaw opening and closing, lip opening and closing, and tongue raising and lowering.

The most important aspects of the signal processing for modeling the forward dynamics concern the numerical differentiation of articulator position to obtain velocity and acceleration, and the severe low-pass filtering (including rectification and integration) of the EMG from 2000 Hz to 20-40 Hz. Both of these introduce spatiotemporal distortions, whose effects on the forward dynamics model are currently being examined.
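As a rough illustration of the EMG smoothing described above, the sketch below rectifies a raw EMG channel and low-pass filters it to an envelope. The Butterworth filter, its order, and the 30 Hz cutoff are our assumptions, consistent with the 2000 Hz recordings and the 20-40 Hz target band, not the authors' exact pipeline.

import numpy as np
from scipy.signal import butter, filtfilt

def smooth_emg(emg, fs=2000.0, cutoff=30.0, order=4):
    # Full-wave rectification followed by zero-phase low-pass filtering.
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, np.abs(emg))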
3 MODELING THE FORWARD DYNAMICS

The forward dynamics model was obtained using a 3-layer perceptron with back propagation (Rumelhart, Hinton, and Williams, 1986). Inputs to the network were instantaneous position and velocity for each dimension of articulator motion, and the EMG signals of 9-10 related muscles, which serve as the record of motor commands to muscles; outputs were accelerations for each dimension of motion. Figure 2 shows an example of predicting lip and jaw accelerations from 10 orofacial muscles for the 'natural' test utterance, "Pam put the bobbin in the frying pan and added more puppy parts to the boiling potato soup." As shown by the generalization results in Figure 2, the acquired model produced appropriate acceleration trajectories for real speech utterances, suggesting that utterance complexity is not a limiting factor in this approach.

[Figure 2: Estimated acceleration over time (5 ms samples) for vertical motion of the three articulators (upper lip, lower lip, jaw; network output vs. experimental data) is compared to that of the test sentence: "Pam put the bobbin in the frying pan and added more puppy parts to the boiling potato soup".]

[Figure 3: The musculo-skeletal forward dynamics model for producing articulator movement trajectories is implemented as a recurrent network. Continuous motor command (EMG) input drives the network, which uses estimated acceleration at time t_n to predict new velocity (integration) and position (double integration) values at the next time step t_{n+1}. D is a one-sample delay unit. The network is initialized with position and velocity values taken from the test utterance at t_0.]

Network training resulted in a one-step look-ahead predictor of the articulator dynamics, which was connected recurrently as shown in Figure 3. Using only initial values of articulator position and velocity for the first sample and continuous EMG input, estimated acceleration is looped back and summed with the velocities and positions of the input layer to predict their values for each time step. This is perhaps an overly stringent test of the acquired model because errors are cumulative over the entire 5-8 second utterance. Yet the network outputs appropriate articulator trajectories for the entire utterance. Figure 4 shows the generated trajectory for vertical motion of the jaw during reiterant production of ba (recorded with the electromagnetometer). While the trajectory generated by the network tends to underestimate movement amplitude and introduce a small DC offset, it preserves the temporal properties of the test utterance very well everywhere except before a phrasal pause.

[Figure 4: Jaw trajectories (vertical motion, over 8 s) generated by the forward dynamics network are compared with experimental data.]

Although good results have been obtained for the analysis of real speech using the larger sets of articulator and muscle inputs, network complexity has greatly increased. Performance of the full network has been poorer than before in modeling simple reiterant speech, which suggests some form of modularity should be introduced. Also, the addition of tongue data has increased the number of apparent many-to-one mappings between muscle activity and articulator motion.
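The recurrent integration scheme of Figure 3 can be written as a short rollout loop. In the sketch below, predict_acceleration stands in for the trained one-step predictor, and explicit Euler integration at 5 ms steps is our assumption about how the looped-back acceleration is summed into velocity and position.

import numpy as np

def rollout(predict_acceleration, x0, v0, emg, dt=0.005):
    # x0, v0: initial articulator position and velocity (arrays);
    # emg: one smoothed EMG vector per 5 ms time step.
    x, v = x0.copy(), v0.copy()
    trajectory = [x.copy()]
    for e in emg:
        a = predict_acceleration(x, v, e)  # one-step look-ahead predictor
        v = v + dt * a                     # integrate acceleration
        x = x + dt * v                     # integrate velocity
        trajectory.append(x.copy())
    return np.array(trajectory)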
We are now incorporating as a boundary constraint the midsagittal profile of the hard palate and alveolar ridge, against which tongue-tip articulations are made.

4 MODELING THE FORWARD ACOUSTICS

[Figure 5: Forward acoustics network. Articulator positions are mapped by the network to PARCOR parameters, which, together with a glottal source, drive a synthesizer to produce the acoustic wave.]

The final stage of our speech production model entails using a neural network to acquire a model of the relation between articulator motion and the ensuing acoustics. As shown in Figure 5, a 3-layer perceptron, using articulator position as input, was used to learn PARCOR analysis and generate appropriate 16th-order PARCOR parameters for subsequent speech synthesis (Itakura and Saito, 1969). We chose PARCOR parameters, rather than the more commonly used formant values, because the parameters have some relation to specific cross-sections of the vocal tract; e.g., the first PARCOR coefficient corresponds to the cross-sectional area closest to the lips (Wakita, 1973). Also, PARCOR estimation errors do not have the radical consequences that formant estimation errors show. Finally, there is a unique mapping from PARCOR to formant values, but not the reverse (Itakura and Saito, 1969).

[Figure 6: PARCOR parameter values (16th-order, 30 ms Hanning window at 200 Hz) for reiterant ba are predicted by the network (network output vs. experimental data). Only the first six PARCOR parameters (k1-k6) are shown. The range of each parameter is -1 to 1 (a small tick beside each wave label indicates 0). The value of k1 is about 1 during vowels, and the network output generally matches the desired wave almost perfectly.]

Figure 6 shows the performance of the PARCOR estimation network for the first 6 of the 16 parameters. Using the learned PARCOR coefficients and a sound source, acoustic signals can be synthesized. Currently, we are investigating various models for controlling the sound source as well as prosodic characteristics. However, for this preliminary test of the network's ability to learn PARCOR parameters, the residual signal of the PARCOR analysis served as the source waveform. Figure 7 shows an example of the network-learned PARCOR synthesis for reiterant ba. In this case, the training result is good, as can be seen in the waveform (and frequency spectrum) or by listening to the synthesized sound. However, the results have not been as good, so far, for real speech utterances containing many abrupt changes and much variability in vocal tract shape. One reason for this may be that learning has not yet converged, because the number of articulator input channels is still too limited. So far, we have only two markers on the tongue, which is not enough to recover the full vocal tract shape. This situation, hopefully, will improve as data for more tongue positions, or perhaps more functionally motivated placements, are collected. Another reason may be the inherent weakness of PARCOR analysis for modeling dynamic changes in vocal tract shape.

[Figure 7: Speech acoustics are synthesized by driving the network-learned PARCOR parameters with a glottal source (the residual of the PARCOR analysis). The test utterance is reiterant speech using ba; experimental, source (residual), and synthesized waveforms are shown, with top and bottom panels differing only in time scale.]
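As background on how PARCOR coefficients drive synthesis, the sketch below implements a generic all-pole lattice synthesis filter excited by a source signal (such as the analysis residual). This is the standard textbook structure, not the authors' synthesizer; stability requires |k[i]| < 1 for all coefficients.

import numpy as np

def parcor_synthesize(k, excitation):
    # k: reflection (PARCOR) coefficients k_1..k_p, 0-indexed as k[0..p-1];
    # excitation: source signal, e.g. the residual of the PARCOR analysis.
    p = len(k)
    b = np.zeros(p + 1)                 # backward errors b_0..b_p (previous sample)
    out = np.empty(len(excitation))
    for n, e in enumerate(excitation):
        f = e                           # forward error at order p
        for i in range(p - 1, -1, -1):
            f = f - k[i] * b[i]         # f_i[n] = f_{i+1}[n] - k_{i+1} b_i[n-1]
            b[i + 1] = b[i] + k[i] * f  # b_{i+1}[n] from b_i[n-1] and f_i[n]
        b[0] = f                        # b_0[n] = f_0[n]
        out[n] = f                      # output sample
    return out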
5 SUMMARY

This paper outlines two areas of progress in our effort to develop a computational model of speech production. First, we extended our data acquisition to include more muscles and dimensions of motion for more articulators, especially the tongue, so that we could begin modeling the articulatory dynamics of real speech. As hoped, increasing the scope of the data demonstrated the applicability of our network approach to real speech. However, this also increases the size of the network, which has introduced some interesting problems for modeling simple speech samples. We are now considering modifications to the network architecture that will enable adaptive modeling of speech samples, whose complexity (e.g., number of physiological/articulatory components) may vary. Second, we have employed a simple neural network for modeling the articulatory-to-acoustic transform based on PARCOR analysis, whose parameters are correlated with vocal tract shape. Although PARCOR can be used to synthesize speech, its main use for us is as a tool for assessing empirical issues associated with the articulatory-acoustic interface.

Acknowledgments

We thank Haskins Laboratories for use of their facilities (NIH grant DC-00121), Vincent Gracco and Kiyoshi Ohsima for muscle insertions, M. I. Jordan for insightful discussion, and Yoh'ichi Toh'kura for continuous encouragement. Further support was provided by HFSP grants to M. Kawato.

References

[1] Bailly, G., Laboissiere, R., and Schwarz, J. L. (1992) Formant trajectories as audible gestures: an alternative for speech synthesis. Journal of Phonetics, 19, 9-23.
[2] Bengio, Y., Houde, J., and Jordan, M. I. (1992) Representations based on articulatory dynamics for speech recognition. Presented at Neural Networks for Computing, Snowbird, Utah.
[3] Hirayama, M., Vatikiotis-Bateson, E., Kawato, M., and Jordan, M. I. (1992) Forward dynamics modeling of speech motor control using physiological data. In Moody, J. E., Hanson, S. J., and Lippmann, R. P. (eds.) Advances in Neural Information Processing Systems 4. San Mateo, CA: Morgan Kaufmann Publishers.
[4] Itakura, F., and Saito, S. (1969) Speech analysis and synthesis by partial correlation parameters. Proceeding of Japan Acoust. Soc., 2-2-6.
[5] Jordan, M. I. (1986) Serial order: a parallel distributed processing approach. ICS Report, 8604.
[6] Jordan, M. I. (1990) Motor learning and the degrees of freedom problem. In M. Jeannerod (ed.) Attention and Performance XIII, 796-836. Hillsdale, NJ: Erlbaum.
[7] Kawato, M., Maeda, M., Uno, Y., and Suzuki, R. (1990) Trajectory formation of arm movement by cascade neural-network model based on minimum torque-change criterion. Biol. Cybern., 62, 275-288.
[8] Perkell, J., Cohen, M., Svirsky, M., Matthies, M., Garabieta, I., and Jackson, M. (1992) Electromagnetic midsagittal articulometer systems for transducing speech articulatory movements. J. Acoust. Soc. Am., 92, 3078-3096.
[9] Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986) Learning representations by back-propagating errors. Nature, 323, 533-536.
[10] Saltzman, E. L. (1986) Task dynamic coordination of the speech articulators: A preliminary model. In H. Heuer and C. Fromm (eds.) Generation and Modulation of Action Patterns. Berlin: Springer-Verlag.
[11] Uno, Y., Kawato, M., and Suzuki, R.
(1989) Formation and control of optimal trajectory in human multijoint arm movement - minimum torque-change model. Biol. Cybern., 61, 89-101.
[12] Wakita, H. (1973) Direct estimation of the vocal tract shape by inverse filtering of acoustic speech waveforms. IEEE Trans. Audio Electroacoust., AU-21, 417-427.
Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search

Luigi Acerbi*
Center for Neural Science, New York University
[email protected]

Wei Ji Ma
Center for Neural Science & Dept. of Psychology, New York University
[email protected]

Abstract

Computational models in fields such as computational neuroscience are often evaluated via stochastic simulation or numerical approximation. Fitting these models implies a difficult optimization problem over complex, possibly noisy parameter landscapes. Bayesian optimization (BO) has been successfully applied to solving expensive black-box problems in engineering and machine learning. Here we explore whether BO can be applied as a general tool for model fitting. First, we present a novel hybrid BO algorithm, Bayesian adaptive direct search (BADS), that achieves competitive performance with an affordable computational overhead for the running time of typical models. We then perform an extensive benchmark of BADS vs. many common and state-of-the-art nonconvex, derivative-free optimizers, on a set of model-fitting problems with real data and models from six studies in behavioral, cognitive, and computational neuroscience. With default settings, BADS consistently finds comparable or better solutions than other methods, including "vanilla" BO, showing great promise for advanced BO techniques, and BADS in particular, as a general model-fitting tool.

1 Introduction

Many complex, nonlinear computational models in fields such as behavioral, cognitive, and computational neuroscience cannot be evaluated analytically, but require moderately expensive numerical approximations or simulations. In these cases, finding the maximum-likelihood (ML) solution, for parameter estimation or model selection, requires the costly exploration of a rough or noisy nonconvex landscape, in which gradients are often unavailable to guide the search. Here we consider the problem of finding the (global) optimum x* = argmin_{x∈X} E[f(x)] of a possibly noisy objective f over a (bounded) domain X ⊆ ℝ^D, where the function f can be intended as the (negative) log likelihood of a parameter vector x for a given dataset and model, but is generally a black box. With many derivative-free optimization algorithms available to the researcher [1], it is unclear which one should be chosen. Crucially, an inadequate optimizer can hinder progress, limit the complexity of the models that can be fit, and even cast doubt on the reliability of one's findings.

Bayesian optimization (BO) is a state-of-the-art machine learning framework for optimizing expensive and possibly noisy black-box functions [2, 3, 4]. This makes it an ideal candidate for solving difficult model-fitting problems. Yet there are several obstacles to a widespread usage of BO as a general tool for model fitting. First, traditional BO methods target very costly problems, such as hyperparameter tuning [5], whereas evaluating a typical behavioral model might only have a moderate computational cost (e.g., 0.1-10 s per evaluation). This implies major differences in what is considered an acceptable algorithmic overhead, and in the maximum number of allowed function evaluations (e.g., hundreds vs. thousands).

*Current address: Département des neurosciences fondamentales, Université de Genève, CMU, 1 rue Michel-Servet, 1206 Genève, Switzerland. E-mail: [email protected].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Second, it is unclear how BO methods would fare in this regime against commonly used and state-of-the-art, non-Bayesian optimizers. Finally, BO might be perceived by non-practitioners as an advanced tool that requires specific technical knowledge to be implemented or tuned. We address these issues by developing a novel hybrid BO algorithm, Bayesian Adaptive Direct Search (BADS), that achieves competitive performance at a small computational cost. We tested BADS, together with a wide array of commonly used optimizers, on a novel benchmark set of model-fitting problems with real data and models drawn from studies in cognitive, behaviorial and computational neuroscience. Finally, we make BADS available as a free MATLAB package with the same user interface as existing optimizers and that can be used out-of-the-box with no tuning.1 BADS is a hybrid BO method in that it combines the mesh adaptive direct search (MADS) framework [6] (Section 2.1) with a BO search performed via a local Gaussian process (GP) surrogate (Section 2.2), implemented via a number of heuristics for efficiency (Section 3). BADS proves to be highly competitive on both artificial functions and real-world model-fitting problems (Section 4), showing promise as a general tool for model fitting in computational neuroscience and related fields. Related work There is a large literature about (Bayesian) optimization of expensive, possibly stochastic, computer simulations, mostly used in machine learning [3, 4, 5] or engineering (known as kriging-based optimization) [7, 8, 9]. Recent work has combined MADS with treed GP models for constrained optimization (TGP-MADS [9]). Crucially, these methods have large overheads and may require problem-specific tuning, making them impractical as a generic tool for model fitting. Cheaper but less precise surrogate models than GPs have been proposed, such as random forests [10], Parzen estimators [11], and dynamic trees [12]. In this paper, we focus on BO based on traditional GP surrogates, leaving the analysis of alternative models for future work (see Conclusions). 2 Optimization frameworks 2.1 Mesh adaptive direct search (MADS) The MADS algorithm is a directional direct search framework for nonlinear optimization [6, 13]. Briefly, MADS seeks to improve the current solution by testing points in the neighborhood of the current point (the incumbent), by moving one step in each direction on an iteration-dependent mesh. In addition, the MADS framework can incorporate in the optimization any arbitrary search strategy which proposes additional test points that lie on the mesh.  S MADS defines the current mesh at the k-th iteration as Mk = x?Sk x + ?mesh Dz : z ? ND , k where Sk ? Rn is the set of all points evaluated since the start of the iteration, ?mesh ? R+ is the k mesh size, and D is a fixed matrix in RD?nD whose nD columns represent viable search directions. We choose D = [ID , ?ID ], where ID is the identity matrix in dimension D. Each iteration of MADS comprises of two stages, a SEARCH stage and an optional POLL stage. The SEARCH stage evaluates a finite number of points proposed by a provided search strategy, with the only restriction that the tested points lie on the current mesh. The search strategy is intended to inject problem-specific information in the optimization. In BADS, we exploit the freedom of SEARCH to perform Bayesian optimization in the neighborhood of the incumbent (see Section 2.2 and 3.3). 
The POLL stage is performed if the SEARCH fails in finding a point with an improved objective value. POLL constructs a poll set of candidate points, P_k, defined as P_k = {x_k + Δ^mesh_k v : v ∈ D_k}, where x_k is the incumbent and D_k is the set of polling directions constructed by taking discrete linear combinations of the set of directions D. The poll size parameter Δ^poll_k ≥ Δ^mesh_k defines the maximum length of poll displacement vectors Δ^mesh_k v, for v ∈ D_k (typically, Δ^poll_k ≈ Δ^mesh_k ||v||). Points in the poll set can be evaluated in any order, and the POLL is opportunistic in that it can be stopped as soon as a better solution is found. The POLL stage ensures theoretical convergence to a local stationary point according to Clarke calculus for nonsmooth functions [6, 14].

If either SEARCH or POLL is a success, finding a mesh point with an improved objective value, the incumbent is updated and the mesh size remains the same or is multiplied by a factor τ > 1. If neither SEARCH nor POLL is successful, the incumbent does not move and the mesh size is divided by τ. The algorithm proceeds until a stopping criterion is met (e.g., maximum budget of function evaluations).

¹Code available at https://github.com/lacerbi/bads.

2.2 Bayesian optimization

The typical form of Bayesian optimization (BO) [2] builds a Gaussian process (GP) approximation of the objective f, which is used as a relatively inexpensive surrogate to guide the search towards regions that are promising (low GP mean) and/or unknown (high GP uncertainty), according to a rule, the acquisition function, that formalizes the exploitation-exploration trade-off.

Gaussian processes  GPs are a flexible class of models for specifying prior distributions over unknown functions f: X ⊆ ℝ^D → ℝ [15]. GPs are specified by a mean function m: X → ℝ and a positive definite covariance, or kernel, function k: X × X → ℝ. Given any finite collection of n points X = {x^(i) ∈ X}_{i=1}^n, the value of f at these points is assumed to be jointly Gaussian with mean (m(x^(1)), ..., m(x^(n)))ᵀ and covariance matrix K, where K_ij = k(x^(i), x^(j)) for 1 ≤ i, j ≤ n. We assume i.i.d. Gaussian observation noise such that f evaluated at x^(i) returns y^(i) ∼ N(f(x^(i)), σ²), and y = (y^(1), ..., y^(n))ᵀ is the vector of observed values. For a deterministic f, we still assume a small σ > 0 to improve numerical stability of the GP [16]. Conveniently, observation of such (noisy) function values will produce a GP posterior whose latent marginal conditional mean μ(x; {X, y}, θ) and variance s²(x; {X, y}, θ) at a given point are available in closed form (see Supplementary Material), where θ is a hyperparameter vector for the mean, covariance, and likelihood. In the following, we omit the dependency of μ and s² on the data and GP parameters to reduce clutter.

Covariance functions  Our main choice of stationary (translationally invariant) covariance function is the automatic relevance determination (ARD) rational quadratic (RQ) kernel,

k_RQ(x, x') = σ_f² [1 + r²(x, x')/(2α)]^(−α), with r²(x, x') = Σ_{d=1}^D (x_d − x'_d)²/ℓ_d², (1)

where σ_f² is the signal variance, ℓ₁, ..., ℓ_D are the kernel length scales along each coordinate direction, and α > 0 is the shape parameter. More common choices for Bayesian optimization include the squared exponential (SE) kernel [9] or the twice-differentiable ARD Matérn 5/2 (M5/2) kernel [5], but we found the RQ kernel to work best in combination with our method (see Section 4.2).
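The ARD RQ kernel of Eq. 1 is straightforward to vectorize; below is a NumPy sketch with our own function signature.

import numpy as np

def kernel_rq(X1, X2, lengths, sigma_f2, alpha):
    # k(x, x') = sigma_f2 * (1 + r^2 / (2 alpha))^(-alpha), with
    # r^2 = sum_d (x_d - x'_d)^2 / lengths[d]^2 (ARD length scales).
    diff = X1[:, None, :] - X2[None, :, :]      # shape (n1, n2, D)
    r2 = np.sum((diff / lengths) ** 2, axis=-1)
    return sigma_f2 * (1.0 + r2 / (2.0 * alpha)) ** (-alpha)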
We also consider composite periodic kernels for circular or periodic variables (see Supplementary Material).

Acquisition function  For a given GP approximation of f, the acquisition function, a: X → ℝ, determines which point in X should be evaluated next via a proxy optimization x_next = argmin_x a(x). We consider here the GP lower confidence bound (LCB) metric [17],

a_LCB(x; {X, y}, θ) = μ(x) − √(ν β_t) s(x), β_t = 2 ln(D t² π²/(6δ)), (2)

where ν > 0 is a tunable parameter, t is the number of function evaluations so far, δ > 0 is a probabilistic tolerance, and β_t is a learning rate chosen to minimize cumulative regret under certain assumptions. For BADS we use the recommended values ν = 0.2 and δ = 0.1 [17]. Another popular choice is the (negative) expected improvement (EI) over the current best function value [18], and a historical, less used metric is the (negative) probability of improvement (PI) [19].

3 Bayesian adaptive direct search (BADS)

We describe here the main steps of BADS (Algorithm 1). Briefly, BADS alternates between a series of fast, local BO steps (the SEARCH stage of MADS) and a systematic, slower exploration of the mesh grid (POLL stage). The two stages complement each other, in that the SEARCH can explore the space very effectively, provided an adequate surrogate model. When the SEARCH repeatedly fails, meaning that the GP model is not helping the optimization (e.g., due to a misspecified model, or excess uncertainty), BADS switches to POLL. The POLL stage performs a fail-safe, model-free optimization, during which BADS gathers information about the local shape of the objective function, so as to build a better surrogate for the next SEARCH. This alternation makes BADS able to deal effectively and robustly with a variety of problems. See Supplementary Material for a full description.

Algorithm 1 Bayesian Adaptive Direct Search
Input: objective function f, starting point x₀, hard bounds LB, UB, (optional: plausible bounds PLB, PUB, barrier function c, additional options)
1: Initialization: Δ^mesh_0 ← 2⁻¹⁰, Δ^poll_0 ← 1, k ← 0, evaluate f on initial design   ▷ Section 3.1
2: repeat
3:   (update GP approximation at any step; refit hyperparameters if necessary)   ▷ Section 3.2
4:   for 1 ... n_search do   ▷ SEARCH stage, Section 3.3
5:     x_search ← SEARCHORACLE   ▷ local Bayesian optimization step
6:     evaluate f on x_search, if improvement is sufficient then break
7:   if SEARCH is NOT successful then   ▷ optional POLL stage, Section 3.3
8:     compute poll set P_k
9:     evaluate opportunistically f on P_k sorted by acquisition function
10:  if iteration k is successful then
11:    update incumbent x_{k+1}
12:    if POLL was successful then Δ^mesh_k ← 2Δ^mesh_k, Δ^poll_k ← 2Δ^poll_k
13:  else
14:    Δ^mesh_k ← ½Δ^mesh_k, Δ^poll_k ← ½Δ^poll_k
15:  k ← k + 1
16: until fevals > MaxFunEvals or Δ^poll_k < 10⁻⁶ or stalling   ▷ stopping criteria
17: return x_end = argmin_k f(x_k) (or x_end = argmin_k q_β(x_k) for noisy objectives, Section 3.4)

3.1 Initial setup

Problem specification  The algorithm is initialized by providing a starting point x₀, vectors of hard lower/upper bounds LB, UB, and optional vectors of plausible lower/upper bounds PLB, PUB, with the requirement that for each dimension 1 ≤ d ≤ D, LB_d ≤ PLB_d < PUB_d ≤ UB_d.² Plausible bounds identify a region in parameter space where most solutions are expected to lie. Hard upper/lower bounds can be infinite, but plausible bounds need to be finite. Problem variables whose hard bounds are strictly positive, with UB_d ≥ 10·LB_d, are automatically converted to log space. All variables are then linearly rescaled to the standardized box [−1, 1]^D such that the box bounds correspond to [PLB, PUB] in the original space. BADS supports bound or no constraints, and optionally other constraints via a provided barrier function c (see Supplementary Material). The user can also specify circular or periodic dimensions (such as angles), and whether the objective f is deterministic or noisy (stochastic), in the latter case providing a coarse estimate of the noise (see Section 3.4).

²A variable d can be fixed by setting (x₀)_d = LB_d = UB_d = PLB_d = PUB_d. Fixed variables become constants, and BADS runs on an optimization problem with reduced dimensionality.

Initial design  The initial design consists of the provided starting point x₀ and n_init = D additional points chosen via a space-filling quasi-random Sobol sequence [20] in the standardized box, and forced to lie on the mesh grid. If the user does not specify whether f is deterministic or stochastic, the algorithm assesses it by performing two consecutive evaluations at x₀.

3.2 GP model in BADS

The default GP model is specified by a constant mean function m ∈ ℝ, a smooth ARD RQ kernel (Eq. 1), and we use a_LCB (Eq. 2) as the default acquisition function.

Hyperparameters  The default GP has hyperparameters θ = (ℓ₁, ..., ℓ_D, σ_f², α, σ², m). We impose an empirical Bayes prior on the GP hyperparameters based on the current training set (see Supplementary Material), and select θ via maximum a posteriori (MAP) estimation. We fit θ via a gradient-based nonlinear optimizer, starting from either the previous value of θ or a weighted draw from the prior, as a means to escape local optima. We refit the hyperparameters every 2D to 5D function evaluations; more often earlier in the optimization, and whenever the current GP is particularly inaccurate at predicting new points, according to a normality test on the residuals, z^(i) = (y^(i) − μ(x^(i)))/√(s²(x^(i)) + σ²) (assumed independent, in first approximation).

Training set  The GP training set X consists of a subset of the points evaluated so far (the cache), selected to build a local approximation of the objective in the neighborhood of the incumbent x_k, constructed as follows. Each time X is rebuilt, points in the cache are sorted by their ℓ-scaled distance r² (Eq. 1) from x_k. First, the closest n_min = 50 points are automatically added to X. Second, up to 10D additional points with r ≤ 3ρ(α) are included in the set, where ρ(α) ≳ 1 is a radius function that depends on the decay of the kernel.
3.3 Implementation of the MADS framework

We initialize Δ_0^poll = 1 and Δ_0^mesh = 2^−10 (in standardized space), such that the initial poll steps can span the plausible region, whereas the mesh grid is relatively fine. We use τ = 2, and increase the mesh size only after a successful POLL. We skip the POLL after a successful SEARCH.

Search stage We apply an aggressive, repeated SEARCH strategy that consists of up to n_search = max{D, ⌊3 + D/2⌋} unsuccessful SEARCH steps. In each step, we use a search oracle, based on a local BO with the current GP, to produce a search point x_search (see below). We evaluate f(x_search) and add it to the training set. If the improvement in objective value is none or insufficient, that is, less than (Δ_k^poll)^{3/2}, we continue searching, or switch to POLL after n_search steps. Otherwise, we call it a success and start a new SEARCH from scratch, centered on the updated incumbent.

Search oracle We choose x_search via a fast, approximate optimization inspired by CMA-ES [21]. We sample batches of points in the neighborhood of the incumbent x_k, drawn ∼ N(x_s, λ² (Δ_k^poll)² Σ), where x_s is the current search focus, Σ a search covariance matrix, and λ > 0 a scaling factor, and we pick the point that optimizes the acquisition function (see Supplementary Material). We remove from the SEARCH set candidate points that violate non-bound constraints (c(x) > 0), and we project candidate points that fall outside hard bounds to the closest mesh point inside the bounds. Across SEARCH steps, we use both a diagonal matrix Σ_ℓ with diagonal (ℓ_1²/|ℓ|², . . . , ℓ_D²/|ℓ|²), and a matrix Σ_WCM proportional to the weighted covariance matrix of points in X (each point weighted according to a function of its ranking in terms of objective values y_i). We choose between Σ_ℓ and Σ_WCM probabilistically via a hedge strategy, based on their track record of cumulative improvement [22].

Poll stage We incorporate the GP approximation in the POLL in two ways: when constructing the set of polling directions D_k, and when choosing the polling order. We generate D_k according to the random LTMADS algorithm [6], but then rescale each vector coordinate 1 ≤ d ≤ D proportionally to the GP length scale ℓ_d (see Supplementary Material). We discard poll vectors that do not satisfy the given bound or non-bound constraints. Second, since the POLL is opportunistic, we evaluate points in the poll set according to the ranking given by the acquisition function [9].

Stopping criteria We stop the optimization when the poll size Δ_k^poll goes below a threshold (default 10^−6); when reaching a maximum number of objective evaluations (default 500D); or if there is no significant improvement of the objective for more than 4 + ⌊D/2⌋ iterations. The algorithm returns the optimum x_end (transformed back to original coordinates) with the lowest objective value y_end.

3.4 Noisy objective

In case of a noisy objective, we assume for the noise a hyperprior ln σ ∼ N(ln σ_est, 1), with σ_est a base noise magnitude (default σ_est = 1, but the user can provide an estimate). To account for additional uncertainty, we also make the following changes: double the minimum number of points added to the training set, n_min = 100, and increase the maximum number to 200; increase the initial design to n_init = 20; and double the number of allowed stalled iterations before stopping.

Uncertainty handling Due to noise, we cannot simply use the output values y_i as ground truth in the SEARCH and POLL stages. Instead, we replace y_i with the GP latent quantile function [23] (a minimal sketch follows below),

    q_β(x; {X, y}, θ) ≡ q_β(x) = μ(x) + Φ^{−1}(β) s(x),    β ∈ [0.5, 1),        (3)

where Φ^{−1}(·) is the quantile function of the standard normal (plugin approach [24]). Moreover, we modify the MADS procedure by keeping an incumbent set {x_i}_{i=1}^k, where x_i is the incumbent at the end of the i-th iteration. At the end of each POLL we re-evaluate q_β for all elements of the incumbent set, in light of the new points added to the cache. We select as current (active) incumbent the point with lowest q_β(x_i). During optimization we set β = 0.5 (mean prediction only), which promotes exploration.
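A minimal sketch (ours) of the plugin quantile of Eq. 3 and of the incumbent re-ranking it enables; `mu_fn` and `s_fn` stand for hypothetical handles to the GP posterior mean and standard deviation.

```python
import numpy as np
from scipy.stats import norm

def gp_quantile(mu, s, beta=0.5):
    # Plugin GP quantile of Eq. 3; beta = 0.5 recovers the posterior mean,
    # while beta close to 1 (e.g. beta_end = 0.999) is conservative.
    assert 0.5 <= beta < 1.0
    return mu + norm.ppf(beta) * s

def active_incumbent(incumbents, mu_fn, s_fn, beta=0.5):
    # Re-rank the incumbent set {x_i} by q_beta and return the lowest (Sec. 3.4).
    q = [gp_quantile(mu_fn(x), s_fn(x), beta) for x in incumbents]
    return incumbents[int(np.argmin(q))]

# toy usage with constant stand-ins for the GP posterior
xs = [0.0, 1.0, 2.0]
print(active_incumbent(xs, mu_fn=lambda x: (x - 1.2) ** 2, s_fn=lambda x: 0.1))
```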
We use a conservative β_end = 0.999 for the last iteration, to select the optimum x_end returned by the algorithm in a robust manner. Instead of y_end, we return either μ(x_end) or an unbiased estimate of E[f(x_end)] obtained by averaging multiple evaluations (see Supplementary Material).

4 Experiments

We tested BADS and many optimizers with implementation available in MATLAB (R2015b, R2017a) on a large set of artificial and real optimization problems (see Supplementary Material for details).

4.1 Design of the benchmark

Algorithms Besides BADS, we tested 16 optimization algorithms, including popular choices such as Nelder-Mead (fminsearch [25]), several constrained nonlinear optimizers in the fmincon function (default interior-point [26], sequential quadratic programming sqp [27], and active-set actset [28]), genetic algorithms (ga [29]), random search (randsearch) as a baseline [30]; and also less-known state-of-the-art methods for nonconvex derivative-free optimization [1], such as Multilevel Coordinate Search (MCS [31]) and CMA-ES [21, 32] (cmaes, in different flavors). For noisy objectives, we included algorithms that explicitly handle uncertainty, such as snobfit [33] and noisy CMA-ES [34]. Finally, to verify the advantage of BADS' hybrid approach to BO, we also tested a standard, "vanilla" version of BO [5] (bayesopt, R2017a) on the set of real model-fitting problems (see below). For all algorithms, including BADS, we used default settings (no fine-tuning).

Problem sets First, we considered a standard benchmark set of artificial, noiseless functions (BBOB09 [35], 24 functions) in dimensions D ∈ {3, 6, 10, 15}, for a total of 96 test functions. We also created "noisy" versions of the same set. Second, we collected model-fitting problems from six published or ongoing studies in cognitive and computational neuroscience (CCN17). The objectives of the CCN17 set are negative log likelihood functions of an input parameter vector, for specified datasets and models, and can be deterministic or stochastic. For each study in the CCN17 set we asked its authors for six different real datasets (i.e., subjects or neurons), divided between one or two main models of interest; collecting a total of 36 test functions with D ∈ {6, 9, 10, 12, 13}.

Procedure We ran 50 independent runs of each algorithm on each test function, with randomized starting points and a budget of 500 × D function evaluations (200 × D for noisy problems). If an algorithm terminated before depleting the budget, it was restarted from a new random point. We consider a run successful if the current best (or returned, for noisy problems) function value is within a given error tolerance ε > 0 from the true optimum f_min (or our best estimate thereof).³ For noiseless problems, we compute the fraction of successful runs as a function of number of objective evaluations, averaged over datasets/functions and over ε ∈ [0.01, 10] (log spaced); a sketch of this metric is given below. This is a realistic range for ε, as differences in log likelihood below 0.01 are irrelevant for model selection; an acceptable tolerance is ε ≈ 0.5 (a difference in deviance, the metric used for AIC or BIC, less than 1); larger ε associate with coarser solutions, but errors larger than 10 would induce excessive biases in model selection. For noisy problems, what matters most is the solution x_end that the algorithm actually returns, which, depending on the algorithm, may not necessarily be the point with the lowest observed function value. Since, unlike the noiseless case, we generally do not know the solutions that would be returned by any algorithm at every time step, but only at the last step, we plot instead the fraction of successful runs at 200 × D function evaluations as a function of ε, for ε ∈ [0.1, 10] (noise makes higher precisions moot), and averaged over datasets/functions. In all plots we omit error bars for clarity (standard errors would be about the size of the line markers or less).

³ Note that the error tolerance ε is not a fractional error, as sometimes reported in optimization, because for model comparison we typically care about (absolute) differences in log likelihoods.
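The success metric just described is straightforward to state in code; below is our own small sketch (array and function names are ours), covering both the noiseless curve (fraction solved vs. evaluations, averaged over log-spaced ε) and the noisy variant (fraction of returned solutions within tolerance, as a function of ε).

```python
import numpy as np

def fraction_solved_vs_evals(best_so_far, f_min, eps_lo=0.01, eps_hi=10.0, n_eps=50):
    # best_so_far: (runs, evals) array of the best objective value found so far.
    # Success at tolerance eps means best - f_min <= eps; we average the success
    # indicator over runs and over log-spaced eps, as in the noiseless benchmark.
    eps = np.logspace(np.log10(eps_lo), np.log10(eps_hi), n_eps)
    err = best_so_far - f_min
    solved = err[None, :, :] <= eps[:, None, None]   # (eps, runs, evals)
    return solved.mean(axis=(0, 1))                  # curve over evaluations

def fraction_solved_vs_eps(returned_values, f_min, eps):
    # Noisy variant: success of the *returned* solutions at the final budget,
    # reported as a function of the tolerance eps.
    err = returned_values - f_min
    return (err[None, :] <= eps[:, None]).mean(axis=1)

# toy usage with synthetic best-so-far traces
rng = np.random.default_rng(1)
best = np.minimum.accumulate(rng.exponential(5.0, size=(50, 200)), axis=1)
print(fraction_solved_vs_evals(best, f_min=0.0)[-1])
```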
4.2 Results on artificial functions (BBOB09)

The BBOB09 noiseless set [35] comprises 24 functions divided in 5 groups with different properties: separable; low or moderate conditioning; unimodal with high conditioning; multi-modal with adequate / with weak global structure. First, we use this benchmark to show the performance of different configurations for BADS. Note that we selected the default configuration (RQ kernel, a_LCB) and other algorithmic details by testing on a different benchmark set (see Supplementary Material). Fig 1 (left) shows aggregate results across all noiseless functions with D ∈ {3, 6, 10, 15}, for alternative choices of kernels and acquisition functions (only a subset is shown, such as the popular M_{5/2}, EI combination), or by altering other features (such as setting n_search = 1, or fixing the search covariance matrix to Σ_ℓ or Σ_WCM). Almost all changes from the default configuration worsen performance.

Figure 1: Artificial test functions (BBOB09). Left & middle: Noiseless functions. Fraction of successful runs (ε ∈ [0.01, 10]) vs. # function evaluations per # dimensions, for D ∈ {3, 6, 10, 15} (96 test functions); for different BADS configurations (left) and all algorithms (middle). Right: Heteroskedastic noise. Fraction of successful runs at 200 × D objective evaluations vs. tolerance ε. (Panel titles: "BBOB09 noiseless (BADS variants)", "BBOB09 noiseless", "BBOB09 with heteroskedastic noise"; the left panel compares the default rq,lcb configuration against search-wcm, m5/2,ei, search-ℓ, se,pi, and n_search = 1 variants; the other panels compare bads against fmincon variants, cmaes variants, mcs, fminsearch, global, patternsearch, simulannealbnd, particleswarm, ga, snobfit, and randsearch.)

Noiseless functions We then compared BADS to other algorithms (Fig 1 middle). Depending on the number of function evaluations, the best optimizers are BADS, methods of the fmincon family, and, for large budgets of function evaluations, CMA-ES with active update of the covariance matrix.

Noisy functions We produce noisy versions of the BBOB09 set by adding i.i.d. Gaussian observation noise at each function evaluation, y^(i) = f(x^(i)) + σ(x^(i)) η^(i), with η^(i) ∼ N(0, 1). We consider a variant with moderate homoskedastic (constant) noise (σ = 1), and a variant with heteroskedastic noise with σ(x) = 1 + 0.1 × (f(x) − f_min), which follows the observation that variability generally increases for solutions away from the optimum.
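The two noisy variants are generated exactly as stated; here is a short sketch (ours) of the observation model used to corrupt a benchmark function:

```python
import numpy as np

def noisy_eval(f, x, f_min, rng, kind="hetero"):
    # i.i.d. Gaussian observation noise: homoskedastic sigma = 1, or
    # heteroskedastic sigma(x) = 1 + 0.1 * (f(x) - f_min).
    fx = f(x)
    sigma = 1.0 if kind == "homo" else 1.0 + 0.1 * (fx - f_min)
    return fx + sigma * rng.standard_normal()

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(np.square(x)))
print(noisy_eval(sphere, np.ones(6), f_min=0.0, rng=rng))
```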
For many functions in the BBOB09 set, this heteroskedastic noise can become substantial (σ ≫ 10) away from the optimum. Fig 1 (right) shows aggregate results for the heteroskedastic set (homoskedastic results are similar). BADS outperforms all other optimizers, with CMA-ES (active, with or without the noisy option) coming second. Notably, BADS performs well even on problems with non-stationary (location-dependent) features, such as heteroskedastic noise, thanks to its local GP approximation.

4.3 Results on real model-fitting problems (CCN17)

The objectives of the CCN17 set are deterministic (e.g., computed via numerical approximation) for three studies (Fig 2), and noisy (e.g., evaluated via simulation) for the other three (Fig 3). The algorithmic cost of BADS is ~0.03 s to 0.15 s per function evaluation, depending on D, mostly due to the refitting of the GP hyperparameters. This produces a non-negligible overhead, defined as 100% × (total optimization time / total function time − 1). For a fair comparison with other methods with little or no overhead, for deterministic problems we also plot the effective performance of BADS by accounting for the extra cost per function evaluation. In practice, this correction shifts rightward the performance curve of BADS in log-iteration space, since each function evaluation with BADS has an increased fractional time cost. For stochastic problems, we cannot compute effective performance as easily, but there we found small overheads (< 5%), due to more costly evaluations (more than 1 s).

For a direct comparison with standard BO, we also tested on the CCN17 set a "vanilla" BO algorithm, as implemented in MATLAB R2017a (bayesopt). This implementation closely follows [5], with optimization instead of marginalization over GP hyperparameters. Due to the fast-growing cost of BO as a function of training set size, we allowed up to 300 training points for the GP, restarting the BO algorithm from scratch with a different initial design every 300 BO iterations (until the total budget of function evaluations was exhausted). The choice of 300 iterations already produced a large average algorithmic overhead of ~8 s per function evaluation. In showing the results of bayesopt, we display raw performance without penalizing for the overhead.

Causal inference in visuo-vestibular perception Causal inference (CI) in perception is the process whereby the brain decides whether to integrate or segregate multisensory cues that could arise from the same or from different sources [39]. This study investigates CI in visuo-vestibular heading perception across tasks and under different levels of visual reliability, via a factorial model comparison [36].

Figure 2: Real model-fitting problems (CCN17, deterministic). Fraction of successful runs (ε ∈ [0.01, 10]) vs. # function evaluations per # dimensions. Left: Causal inference in visuo-vestibular perception [36] (6 subjects, D = 10). Middle: Bayesian confidence in perceptual categorization [37] (6 subjects, D = 13). Right: Neural model of orientation selectivity [38] (6 neurons, D = 12). (Each panel compares bads, including an overhead-corrected bads curve with corrections between 14% and 68%, against fmincon variants, cmaes variants, mcs, fminsearch, patternsearch, simulannealbnd, ga, global, particleswarm, randsearch, and bayesopt.)
For our benchmark we fit three subjects with a Bayesian CI model (D = 10), and another three with a fixed-criterion CI model (D = 10) that disregards visual reliability. Both models include heading-dependent likelihoods and marginalization of the decision variable over the latent space of noisy sensory measurements (x_vis, x_vest), solved via nested numerical integration in 1-D and 2-D.

Bayesian confidence in perceptual categorization This study investigates the Bayesian confidence hypothesis that subjective judgments of confidence are directly related to the posterior probability the observer assigns to a learnt perceptual category [37] (e.g., whether the orientation of a drifting Gabor patch belongs to a "narrow" or to a "wide" category). For our benchmark we fit six subjects to the "Ultrastrong" Bayesian confidence model (D = 13), which uses the same mapping between posterior probability and confidence across two tasks with different distributions of stimuli. This model includes a latent noisy decision variable, marginalized over via 1-D numerical integration.

Neural model of orientation selectivity The authors of this study explore the origins of diversity of neuronal orientation selectivity in visual cortex via novel stimuli (orientation mixtures) and modeling [38]. We fit the responses of five V1 and one V2 cells with the authors' neuronal model (D = 12) that combines effects of filtering, suppression, and response nonlinearity [38]. The model has one circular parameter, the preferred direction of motion of the neuron. The model is analytical but still computationally expensive due to large datasets and a cascade of several nonlinear operations.

Word recognition memory This study models a word recognition task in which subjects rated their confidence that a presented word was in a previously studied list [40] (data from [41]). We consider six subjects divided between two normative models, the "Retrieving Effectively from Memory" model [42] (D = 9) and a similar, novel model⁴ (D = 6). Both models use Monte Carlo methods to draw random samples from a large space of latent noisy memories, yielding a stochastic log likelihood.

Target detection and localization This study looks at differences in observers' decision-making strategies in target detection ("was the target present?") and localization ("which one was the target?") with displays of 2, 3, 4, or 6 oriented Gabor patches.⁵ Here we fit six subjects with a previously derived ideal observer model [43, 44] (D = 6) with variable-precision noise [45], assuming shared parameters between detection and localization. The log likelihood is evaluated via simulation due to marginalization over latent noisy measurements of stimuli orientations with variable precision.

Combinatorial board game playing This study analyzes people's strategies in a four-in-a-row game played on a 4-by-9 board against human opponents ([46], Experiment 1). We fit the data of six players with the main model (D = 10), which is based on a Best-First exploration of a decision tree guided by a feature-based value heuristic.
The model also includes feature dropping, value noise, and lapses, to better capture human variability. Model evaluation is computationally expensive due to the construction and evaluation of trees of future board states, and achieved via inverse binomial sampling, an unbiased stochastic estimator of the log likelihood [46]. Due to prohibitive computational costs, here we only test major algorithms (MCS is the method used in the paper [46]); see Fig 3 right.

⁴ Unpublished; upcoming work from Aspen H. Yoo and Wei Ji Ma.
⁵ Unpublished; upcoming work from Andra Mihali and Wei Ji Ma.

Figure 3: Real model-fitting problems (CCN17, noisy). Fraction of successful runs at 200 × D objective evaluations vs. tolerance ε. Left: Confidence in word recognition memory [40] (6 subjects, D = 6, 9). Middle: Target detection and localization [44] (6 subjects, D = 6). Right: Combinatorial board game playing [46] (6 subjects, D = 10). (Panels compare bads against cmaes variants, snobfit, bayesopt, particleswarm, mcs, patternsearch, fminsearch, simulannealbnd, ga, global, fmincon variants, and randsearch.)

In all problems, BADS consistently performs on par with or outperforms all other tested optimizers, even when accounting for its extra algorithmic cost. The second best algorithm is either some flavor of CMA-ES or, for some deterministic problems, a member of the fmincon family. Crucially, their ranking across problems is inconsistent, with both CMA-ES and fmincon performing occasionally quite poorly (e.g., fmincon does poorly in the causal inference set because of small fluctuations in the log likelihood landscape caused by coarse numerical integration). Interestingly, vanilla BO (bayesopt) performs poorly on all problems, often at the level of random search, and always substantially worse than BADS, even without accounting for the much larger overhead of bayesopt. The solutions found by bayesopt are often hundreds (even thousands) of points of log likelihood from the optimum. This failure is possibly due to the difficulty of building a global GP surrogate for BO, coupled with strong non-stationarity of the log likelihood functions; and might be ameliorated by more complex forms of BO (e.g., input warping to produce nonstationary kernels [47], hyperparameter marginalization [5]). However, these advanced approaches would substantially increase the already large overhead. Importantly, we expect this poor performance to extend to any package which implements vanilla BO (such as BayesOpt [48]), regardless of the efficiency of implementation.

5 Conclusions

We have developed a novel BO method and an associated toolbox, BADS, with the goal of fitting moderately expensive computational models out-of-the-box. We have shown on real model-fitting problems that BADS outperforms widely used and state-of-the-art methods for nonconvex, derivative-free optimization, including "vanilla" BO.
We attribute the robust performance of BADS to the alternation between the aggressive SEARCH strategy, based on local BO, and the fail-safe POLL stage, which protects against failures of the GP surrogate, whereas vanilla BO does not have such fail-safe mechanisms, and can be strongly affected by model misspecification. Our results demonstrate that a hybrid Bayesian approach to optimization can be beneficial beyond the domain of very costly black-box functions, in line with recent advancements in probabilistic numerics [49]. Like other surrogate-based methods, the performance of BADS is linked to its ability to obtain a fast approximation of the objective, which generally deteriorates in high dimensions, or for functions with pathological structure (often improvable via reparameterization). From our tests, we recommend BADS, paired with some multi-start optimization strategy, for models with up to ~15 variables, a noisy or jagged log likelihood landscape, and when algorithmic overhead is ≲ 75% (e.g., model evaluation ≳ 0.1 s). Future work with BADS will focus on testing alternative statistical surrogates instead of GPs [12]; combining it with a smart multi-start method for global optimization; providing support for tunable precision of noisy observations [23]; improving the numerical implementation; and recasting some of its heuristics in terms of approximate inference.

Acknowledgments

We thank Will Adler, Robbe Goris, Andra Mihali, Bas van Opheusden, and Aspen Yoo for sharing data and model evaluation code that we used in the CCN17 benchmark set; Maija Honig, Andra Mihali, Bas van Opheusden, and Aspen Yoo for providing user feedback on earlier versions of the bads package for MATLAB; Will Adler, Andra Mihali, Bas van Opheusden, and Aspen Yoo for helpful feedback on a previous version of this manuscript; John Wixted and colleagues for allowing us to reuse their data for the CCN17 "word recognition memory" problem set; and the three anonymous reviewers for useful feedback. This work has utilized the NYU IT High Performance Computing resources and services.

References

[1] Rios, L. M. & Sahinidis, N. V. (2013) Derivative-free optimization: A review of algorithms and comparison of software implementations. Journal of Global Optimization 56, 1247–1293.
[2] Jones, D. R., Schonlau, M., & Welch, W. J. (1998) Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13, 455–492.
[3] Brochu, E., Cora, V. M., & De Freitas, N. (2010) A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599.
[4] Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., & de Freitas, N. (2016) Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE 104, 148–175.
[5] Snoek, J., Larochelle, H., & Adams, R. P. (2012) Practical Bayesian optimization of machine learning algorithms. Advances in Neural Information Processing Systems 24, 2951–2959.
[6] Audet, C. & Dennis Jr, J. E. (2006) Mesh adaptive direct search algorithms for constrained optimization. SIAM Journal on Optimization 17, 188–217.
[7] Taddy, M. A., Lee, H. K., Gray, G. A., & Griffin, J. D. (2009) Bayesian guided pattern search for robust local optimization. Technometrics 51, 389–401.
[8] Picheny, V. & Ginsbourger, D. (2014) Noisy kriging-based optimization methods: A unified implementation within the DiceOptim package. Computational Statistics & Data Analysis 71, 1035–1053.
[9] Gramacy, R. B. & Le Digabel, S. (2015) The mesh adaptive direct search algorithm with treed Gaussian process surrogates. Pacific Journal of Optimization 11, 419–447.
[10] Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2011) Sequential model-based optimization for general algorithm configuration. LION 5, 507–523.
[11] Bergstra, J. S., Bardenet, R., Bengio, Y., & Kégl, B. (2011) Algorithms for hyper-parameter optimization. pp. 2546–2554.
[12] Talgorn, B., Le Digabel, S., & Kokkolaras, M. (2015) Statistical surrogate formulations for simulation-based design optimization. Journal of Mechanical Design 137, 021405-1–021405-18.
[13] Audet, C., Custódio, A., & Dennis Jr, J. E. (2008) Erratum: Mesh adaptive direct search algorithms for constrained optimization. SIAM Journal on Optimization 18, 1501–1503.
[14] Clarke, F. H. (1983) Optimization and Nonsmooth Analysis. (John Wiley & Sons, New York).
[15] Rasmussen, C. & Williams, C. K. I. (2006) Gaussian Processes for Machine Learning. (MIT Press).
[16] Gramacy, R. B. & Lee, H. K. (2012) Cases for the nugget in modeling computer experiments. Statistics and Computing 22, 713–722.
[17] Srinivas, N., Krause, A., Seeger, M., & Kakade, S. M. (2010) Gaussian process optimization in the bandit setting: No regret and experimental design. ICML-10, pp. 1015–1022.
[18] Mockus, J., Tiesis, V., & Zilinskas, A. (1978) in Towards Global Optimisation. (North-Holland, Amsterdam), pp. 117–129.
[19] Kushner, H. J. (1964) A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. Journal of Basic Engineering 86, 97–106.
[20] Bratley, P. & Fox, B. L. (1988) Algorithm 659: Implementing Sobol's quasirandom sequence generator. ACM Transactions on Mathematical Software (TOMS) 14, 88–100.
[21] Hansen, N., Müller, S. D., & Koumoutsakos, P. (2003) Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation 11, 1–18.
[22] Hoffman, M. D., Brochu, E., & de Freitas, N. (2011) Portfolio allocation for Bayesian optimization. Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, pp. 327–336.
[23] Picheny, V., Ginsbourger, D., Richet, Y., & Caplin, G. (2013) Quantile-based optimization of noisy computer experiments with tunable precision. Technometrics 55, 2–13.
[24] Picheny, V., Wagner, T., & Ginsbourger, D. (2013) A benchmark of kriging-based infill criteria for noisy optimization. Structural and Multidisciplinary Optimization 48, 607–626.
[25] Lagarias, J. C., Reeds, J. A., Wright, M. H., & Wright, P. E. (1998) Convergence properties of the Nelder-Mead simplex method in low dimensions. SIAM Journal on Optimization 9, 112–147.
[26] Waltz, R. A., Morales, J. L., Nocedal, J., & Orban, D. (2006) An interior algorithm for nonlinear optimization that combines line search and trust region steps. Mathematical Programming 107, 391–408.
[27] Nocedal, J. & Wright, S. (2006) Numerical Optimization, Springer Series in Operations Research. (Springer Verlag), 2nd edition.
[28] Gill, P. E., Murray, W., & Wright, M. H. (1981) Practical Optimization. (Academic Press).
[29] Goldberg, D. E. (1989) Genetic Algorithms in Search, Optimization & Machine Learning. (Addison-Wesley).
[30] Bergstra, J. & Bengio, Y. (2012) Random search for hyper-parameter optimization. Journal of Machine Learning Research 13, 281–305.
[31] Huyer, W. & Neumaier, A. (1999) Global optimization by multilevel coordinate search. Journal of Global Optimization 14, 331–355.
[32] Jastrebski, G. A. & Arnold, D. V. (2006) Improving evolution strategies through active covariance matrix adaptation. IEEE Congress on Evolutionary Computation (CEC 2006), pp. 2814–2821.
[33] Csendes, T., Pál, L., Sendín, J. O. H., & Banga, J. R. (2008) The GLOBAL optimization method revisited. Optimization Letters 2, 445–454.
[34] Hansen, N., Niederberger, A. S., Guzzella, L., & Koumoutsakos, P. (2009) A method for handling uncertainty in evolutionary optimization with an application to feedback control of combustion. IEEE Transactions on Evolutionary Computation 13, 180–197.
[35] Hansen, N., Finck, S., Ros, R., & Auger, A. (2009) Real-parameter black-box optimization benchmarking 2009: Noiseless functions definitions.
[36] Acerbi, L., Dokka, K., Angelaki, D. E., & Ma, W. J. (2017) Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. bioRxiv preprint bioRxiv:150052.
[37] Adler, W. T. & Ma, W. J. (2017) Human confidence reports account for sensory uncertainty but in a non-Bayesian way. bioRxiv preprint bioRxiv:093203.
[38] Goris, R. L., Simoncelli, E. P., & Movshon, J. A. (2015) Origin and function of tuning diversity in macaque visual cortex. Neuron 88, 819–831.
[39] Körding, K. P., Beierholm, U., Ma, W. J., Quartz, S., Tenenbaum, J. B., & Shams, L. (2007) Causal inference in multisensory perception. PLoS One 2, e943.
[40] van den Berg, R., Yoo, A. H., & Ma, W. J. (2017) Fechner's law in metacognition: A quantitative model of visual working memory confidence. Psychological Review 124, 197–214.
[41] Mickes, L., Wixted, J. T., & Wais, P. E. (2007) A direct test of the unequal-variance signal detection model of recognition memory. Psychonomic Bulletin & Review 14, 858–865.
[42] Shiffrin, R. M. & Steyvers, M. (1997) A model for recognition memory: REM, retrieving effectively from memory. Psychonomic Bulletin & Review 4, 145–166.
[43] Ma, W. J., Navalpakkam, V., Beck, J. M., van den Berg, R., & Pouget, A. (2011) Behavior and neural basis of near-optimal visual search. Nature Neuroscience 14, 783–790.
[44] Mazyar, H., van den Berg, R., & Ma, W. J. (2012) Does precision decrease with set size? J Vis 12, 1–10.
[45] van den Berg, R., Shin, H., Chou, W.-C., George, R., & Ma, W. J. (2012) Variability in encoding precision accounts for visual short-term memory limitations. Proc Natl Acad Sci U S A 109, 8780–8785.
[46] van Opheusden, B., Bnaya, Z., Galbiati, G., & Ma, W. J. (2016) Do people think like computers? International Conference on Computers and Games, pp. 212–224.
[47] Snoek, J., Swersky, K., Zemel, R., & Adams, R. (2014) Input warping for Bayesian optimization of non-stationary functions. pp. 1674–1682.
[48] Martinez-Cantin, R. (2014) BayesOpt: A Bayesian optimization library for nonlinear optimization, experimental design and bandits. Journal of Machine Learning Research 15, 3735–3739.
[49] Hennig, P., Osborne, M. A., & Girolami, M. (2015) Probabilistic numerics and uncertainty in computations. Proceedings of the Royal Society A 471, 20150142.
Learning Chordal Markov Networks via Branch and Bound

Kari Rantanen, HIIT, Dept. Comp. Sci., University of Helsinki
Antti Hyttinen, HIIT, Dept. Comp. Sci., University of Helsinki
Matti Järvisalo, HIIT, Dept. Comp. Sci., University of Helsinki

Abstract

We present a new algorithmic approach for the task of finding a chordal Markov network structure that maximizes a given scoring function. The algorithm is based on branch and bound and integrates dynamic programming for both domain pruning and for obtaining strong bounds for search-space pruning. Empirically, we show that the approach dominates in terms of running times a recent integer programming approach (and thereby also a recent constraint optimization approach) for the problem. Furthermore, our algorithm scales at times further with respect to the number of variables than a state-of-the-art dynamic programming algorithm for the problem, with the potential of reaching 20 variables and at the same time circumventing the tight exponential lower bounds on memory consumption of the pure dynamic programming approach.

1 Introduction

Graphical models offer a versatile and theoretically solid framework for various data analysis tasks [1, 30, 17]. In this paper we focus on the structure learning task for chordal Markov networks (or chordal/triangulated Markov random fields or decomposable graphs), a central class of undirected graphical models [7, 31, 18, 17]. This problem, chordal Markov network structure learning (CMSL), is computationally notoriously challenging; e.g., finding a maximum likelihood chordal Markov network with bounded structure complexity (clique size) is known to be NP-hard [23]. Several Markov chain Monte Carlo (MCMC) approaches have been proposed for this task in the literature [19, 27, 10, 11]. Here we take on the challenge of developing a new exact algorithmic approach for finding an optimal chordal Markov network structure in the score-based setting. Underlining the difficulty of this challenge, the first exact algorithms for CMSL have only recently been proposed [6, 12, 13, 14], and generally do not scale up to 20 variables. Specifically, the constraint optimization approach introduced in [6] does not scale up to 10 variables within hours. A similar approach was also taken in [16] in the form of a direct integer programming encoding for CMSL, but was not empirically evaluated in an exact setting. Comparably better performance, scaling up to 10 (at most 15) variables, is exhibited by the integer programming approach implemented in the GOBNILP system [2], extending the core approach of GOBNILP to CMSL by enforcing additional constraints. The true state-of-the-art exact algorithm for CMSL, especially when the clique size of the networks to be learned is not restricted, is Junctor, implementing a dynamic programming approach [13]. The method is based on a recursive characterization of clique trees and storing in memory the scores of already-solved subproblems. Due to its nature, the algorithm has to iterate through every single solution candidate, although its effective memoization technique helps to avoid revisiting solution candidates [13]. As typical for dynamic programming algorithms, the worst-case and best-case performance coincide: Junctor is guaranteed to use Θ(4^n) time and Θ(3^n) space. In this work, we develop an alternative exact algorithm for CMSL.
While a number of branch-and-bound algorithms have been proposed in the past for Bayesian network structure learning (BNSL) [25, 28, 20, 29, 26], to the best of our knowledge our approach constitutes the first non-trivial branch-and-bound approach for CMSL. Our core search routine takes advantage of similar ideas as a recently proposed approach for optimally solving BNSL [29], and, on the other hand, like GOBNILP, uses the tight connection between BNSL and CMSL by searching over the space of chordal Markov network structures via considering decomposable directed acyclic graphs. Central to the efficiency of our approach is the integration of dynamic programming over Bayesian network structures for obtaining strong bounds for effectively pruning the search space during search, as well as problem-specific dynamic programming for efficiently implementing domain filtering during search. Furthermore, we establish a condition which enables symmetry breaking for noticeably pruning the search space over which we perform branch and bound. In comparison with Junctor, a key benefit of our approach is the potential of avoiding worst-case behavior, especially in terms of memory usage, based on using strong bounds to rule out provably non-optimal solutions from consideration during search. Empirically, we show the approach dominates the integer programming approach of GOBNILP [2], and thereby also the constraint optimization approach [6, 12]. Furthermore, our algorithm scales at times further in terms of the number of variables than the DP-based approach implemented in Junctor [13], with the potential of reaching 20 variables within hours and at the same time circumventing the tight exponential lower bounds on memory consumption of the pure dynamic programming approach, which is witnessed also in practice by noticeably lower memory consumption.¹

¹ Extended discussion and empirical results are available in [21].

2 Chordal Markov Network Structure Learning

A Markov network structure is represented by an undirected graph G^u = (V, E^u), where V = {v_1, . . . , v_n} is the set of vertices and E^u the set of undirected edges. This structure represents independencies v_i ⊥⊥ v_j | S according to the undirected separation property: v_i and v_j are separated given set S if and only if all paths between them go through a vertex in set S. The undirected graph is chordal iff every (undirected) cycle of length greater than three contains a chord, i.e., an edge between two non-consecutive vertices in the cycle (a standard test for chordality is sketched below). Figure 1 a) shows an example. Here we focus on the task of finding a chordal graph U that maximizes the posterior probability P(G^u | D) = P(D | G^u) P(G^u) / P(D), where D denotes the i.i.d. data set. As we assume a uniform prior over chordal graphs, this boils down to maximizing the marginal likelihood P(D | G^u). Dawid et al. have shown that the marginal likelihood P(D | G^u) for chordal Markov networks can be calculated using a clique tree representation [7, 9]. A clique C is a fully connected subset of vertices. A clique tree for an undirected graph G^u is an undirected tree such that

I. ∪_i C_i = V,
II. if {v_ℓ, v_k} ∈ E^u, then either {v_ℓ, v_k} ⊆ C_k or {v_ℓ, v_k} ⊆ C_ℓ, and
III. the running intersection property holds: whenever v_k ∈ C_i and v_k ∈ C_j, then v_k is also in every clique on the unique path between C_i and C_j.

The separators are the intersections of adjacent cliques in a clique tree. Figure 1 b) shows an example.
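As a concrete reference for the chordality property used throughout, the snippet below gives a textbook chordality test via maximum cardinality search followed by the zero fill-in check of Tarjan and Yannakakis (1984). It is our own illustrative Python sketch, not code from the paper; the example graph is the one of Figure 1 a).

```python
def is_chordal(adj):
    # adj: dict mapping vertex -> set of neighbours (undirected graph).
    # 1) Maximum cardinality search: repeatedly pick the unnumbered vertex with
    #    the most already-numbered neighbours.
    order, pos, weight = [], {}, {v: 0 for v in adj}
    while weight:
        v = max(weight, key=weight.get)
        pos[v] = len(order); order.append(v)
        del weight[v]
        for u in adj[v]:
            if u in weight:
                weight[u] += 1
    # 2) Zero fill-in check: for each vertex, its earlier MCS neighbours minus
    #    the latest one must all be adjacent to that latest neighbour.
    for v in order:
        earlier = [u for u in adj[v] if pos[u] < pos[v]]
        if earlier:
            m = max(earlier, key=pos.get)
            if any(u != m and u not in adj[m] for u in earlier):
                return False
    return True

# the chordal graph of Figure 1 a)
adj = {1: {2, 3, 5, 6}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}, 5: {1}, 6: {1}}
assert is_chordal(adj)
# a chordless 4-cycle is not chordal
assert not is_chordal({1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}})
```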
The marginal likelihood factorizes according to the clique tree: P(D | U) = ∏_i P(C_i) / ∏_j P(S_j) (assuming positivity and that the prior factorizes) [6]. The marginal likelihood P(S) for a set S of random variables can be calculated with suitable priors; in this paper we consider discrete data using a Dirichlet prior. If we denote s(S) = log P(S), CMSL can be cast as maximizing Σ_{C_i} s(C_i) − Σ_{S_j} s(S_j). For example, the marginal log-likelihood of the graph in Figure 1 a) can be calculated using the clique tree presentation in Figure 1 b) as s({v_1, v_6}) + s({v_1, v_5}) + s({v_1, v_2, v_3}) + s({v_2, v_3, v_4}) − s({v_1}) − s({v_1}) − s({v_2, v_3}).

In this paper, we view the chordal Markov network structure learning problem from the viewpoint of directed graphs, making use of the fact that for each chordal Markov network structure there are equivalent directed graph structures [15, 7], which we call here decomposable DAGs. A decomposable DAG is a DAG G = (V, E) such that the set of directed edges E ⊆ V × V does not include any immoralities, i.e., structures of the form v_i → v_k ← v_j with no edges between v_i and v_j. Due to the lack of immoralities, the d-separation property on a decomposable DAG corresponds exactly to the separation property on the chordal undirected graph (the skeleton of the decomposable DAG). Thus, decomposable graphs represent distributions that are representable by Markov and by Bayesian networks.

Figure 1: Three views on chordal Markov network structures: a) chordal undirected graph, b) clique tree, c) decomposable DAG. (Panel a) is the undirected graph over v_1, . . . , v_6 described above; panel b) is the clique tree with cliques {v_1, v_6}, {v_1, v_5}, {v_1, v_2, v_3}, {v_2, v_3, v_4} and separators {v_1}, {v_1}, {v_2, v_3}; panel c) is a decomposable DAG with the same skeleton.)

Figure 1 c) shows a corresponding decomposable DAG for the chordal undirected graph in a). Note that the decomposable DAG may not be unique; for example, v_2 → v_3 can be directed also in the opposite direction. The score of the decomposable DAG can be calculated as s(v_1, ∅) + s(v_5, {v_1}) + s(v_6, {v_1}) + s(v_2, {v_1}) + s(v_3, {v_1, v_2}) + s(v_4, {v_2, v_3}), where s(v_i, S) are the local scores for BNSL using e.g. a Dirichlet prior. Because these local scores s(·, ·) correspond to s(·) through s(v_i, S) = s({v_i} ∪ S) − s(S) (and s(∅) = 0), we find that this BNSL scoring gives the same result as the clique tree based scoring rule (a small numerical check of this identity is sketched below). Thus CMSL can also be cast as the optimization problem of finding a graph in

    argmax_{G ∈ G} Σ_{v_i ∈ V} s(v_i, pa_G(v_i)),

where G denotes the class of decomposable DAGs. (This formulation is used also in the GOBNILP system [2].) The optimal chordal Markov network structure is the skeleton of the optimal G. This problem is notoriously computationally difficult in practice, emphasized by the fact that standard score-pruning [3, 8] used for BNSL is not generally applicable in the context of CMSL as it will often prevent finding the true optimum: pruning parent sets for vertices often circumvents other vertices achieving high-scoring parent sets (as immoralities would be induced).
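The equivalence between the clique-tree score and the sum of BNSL local scores is easy to check numerically. The sketch below (our own; the stand-in score function is arbitrary) verifies it on the Figure 1 example: because the DAG follows a perfect elimination ordering, the local scores telescope into the clique and separator terms, so the identity holds for any set function s with s(∅) = 0.

```python
def dag_score(parents, s):
    # Sum of local scores s(v, pa(v)), with s(v, P) = s({v} union P) - s(P).
    return sum(s({v} | set(P)) - s(set(P)) for v, P in parents.items())

def clique_tree_score(cliques, separators, s):
    return sum(s(set(C)) for C in cliques) - sum(s(set(S)) for S in separators)

# Figure 1 example; s is an arbitrary stand-in set score with s(emptyset) = 0.
s = lambda S: -float(len(S)) ** 2
cliques = [{1, 6}, {1, 5}, {1, 2, 3}, {2, 3, 4}]
separators = [{1}, {1}, {2, 3}]
parents = {1: set(), 5: {1}, 6: {1}, 2: {1}, 3: {1, 2}, 4: {2, 3}}
assert dag_score(parents, s) == clique_tree_score(cliques, separators, s)  # both -20.0
```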
3 Hybrid Branch and Bound for CMSL

In this section we present details on our branch-and-bound approach to CMSL. We start with an overview of the search algorithm, and then detail how we apply symmetry breaking and make use of dynamic programming to dynamically update variable domains, i.e., for computing parent set choices during search, and to obtain tight bounds for pruning the search tree.

3.1 Branch and Bound over Ordered Decomposable DAGs

The search is performed over the space of ordered decomposable DAGs. While in general the order of the vertices of a DAG can be ambiguous, this notion allows for differentiating the exact order of the vertices, and allows for pruning the search space by identifying symmetries (see Section 3.2).

Definition 1. G = (V, E, σ) is an ordered decomposable DAG if and only if (V, E) is a decomposable DAG and σ : {1 . . . n} → {1 . . . n} a total order over V such that (v_i, v_j) ∈ E only if σ^{−1}(i) < σ^{−1}(j) for all v_i, v_j ∈ V.

Partial solutions during search are hence ordered decomposable DAGs, which are extended by adding a parent set choice (v, P), i.e., adding the new vertex v and edges from each of its parents in P to v.

Definition 2. Let G = (V, E, σ) be an ordered decomposable DAG. Given v_k ∉ V and P ⊆ V, we say that the ordered decomposable DAG G' = (V', E', σ') is G with the parent set choice (v_k, P) if the following conditions hold.

1. V' = V ∪ {v_k}.
2. E' = E ∪ ∪_{v' ∈ P} {(v', v_k)}.
3. We have σ'(i) = σ(i) for all i = 1 . . . |V|, and σ'(|V| + 1) = k.

Algorithm 1 represents the core functionality of the branch and bound (a runnable toy rendering of its control flow is sketched below). The recursive function takes two arguments: the remaining vertices of the problem instance, U, and the current partial solution G = (V, E, σ). In addition we keep stored a best lower bound solution G*, which is the highest-scoring solution that has been found so far. Thus, at the end of the search, G* is an optimal solution. During the search we use G* for bounding as further detailed in Section 3.3. In the loop on line 4 we branch with all the parent set choices that we have deemed necessary to try during the search. The method ParentSetChoices(U, G) and the related symmetry breaking are explained in Section 3.2. We sort the parent set choices into decreasing order based on their score, so that (v, P) is tried before (v', P') if s(v, P) > s(v', P'), where v, v' ∈ U and P, P' ⊆ V. This is done to focus the search first on the most promising branches for finding an optimal solution. When U = ∅, we have ParentSetChoices(U, G) = ∅, and so the current branch gets terminated.

Algorithm 1 The core branch-and-bound search.
1: function BranchAndBound(U, G = (V, E, σ))
2:   if U = ∅ and s(G*) < s(G) then G* ← G   ▷ Update LB if improved.
3:   if this branch cannot improve LB then return   ▷ Backtrack
4:   for (v_i, P) ∈ ParentSetChoices(U, G) do   ▷ Iterate the current parent set choices.
5:     Let G' = (V', E', σ') be G with the parent set choice (v_i, P).
6:     BranchAndBound(U \ {v_i}, G')   ▷ Continue the search.
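To make the control flow of Algorithm 1 concrete, here is a self-contained toy rendering in Python. It deliberately replaces the CMSL-specific machinery (validity of parent set choices, symmetry breaking, and the DP bounds of Section 3.3) with trivial stand-ins, so it illustrates only the recursion, the incumbent update, and the score-ordered branching; all names and the tiny instance are ours.

```python
def branch_and_bound(U, placed, parents, g_score, state, scores):
    # Schematic Algorithm 1: `state` stores the incumbent G* and its score.
    # Validity checks, symmetry breaking and DP bounds are trivialized here.
    if not U and g_score > state[1]:
        state[0], state[1] = dict(parents), g_score        # update lower bound
    ub = sum(max(scores[v].values()) for v in U)           # naive upper bound
    if g_score + ub <= state[1]:
        return                                             # backtrack (prune)
    choices = [(v, P) for v in U for P in scores[v] if set(P) <= placed]
    choices.sort(key=lambda c: scores[c[0]][c[1]], reverse=True)
    for v, P in choices:                                   # best-scoring first
        parents[v] = P
        branch_and_bound(U - {v}, placed | {v}, parents,
                         g_score + scores[v][P], state, scores)
        del parents[v]

# toy instance: local scores s(v, P) over candidate parent sets (tuples)
scores = {
    "a": {(): 0.0},
    "b": {(): -2.0, ("a",): -1.0},
    "c": {(): -3.0, ("a",): -2.5, ("a", "b"): -1.5},
}
state = [None, float("-inf")]
branch_and_bound(set(scores), set(), {}, 0.0, state, scores)
print(state)   # best parent assignment and its score
```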
Later van Beek and Hoffmann [29] implemented covered edge based symmetry breaking in their BNSL approach. Here we introduce the concept of preferred vertex orders, which generalizes covered edges for CMSL based on the decomposability of the solution graphs. Definition 3. Let G = (V, E, ?) be an ordered decomposable DAG. A pair vi , vj ? V violates the preferred vertex order in G if the following conditions hold. 1. i > j. 2. paG (vi ) ? paG (vj ). 3. There is a path from vi to vj in G. Theorem 1 states that for any (partial) solution (i.e., an ordered decomposable DAG), there always exists an equivalent solution that does not contain any violations of the preferred vertex order. Mapping to practice, this theorem allows for very effectively pruning out all symmetric solutions but the one not violating the preferred vertex order within our branch-and-bound approach. A detailed proof is provided in Appendix A. Theorem 1. Let G = (V, E, ?) be an ordered decomposable DAG. There exists an ordered decomposable DAG G0 = (V, E 0 , ? 0 ) that is equivalent to G, but where for all vi , vj ? V the pair (vi , vj ) does not violate the preferred vertex order in G0 . It follows from Theorem 1 that for each solution (ordered decomposable DAG) there exists an equivalent solution where the lexicographically smallest vertex is a source. Thus we can fix it as the first vertex in the order at the beginning of the search. Similarly as in [29] for BNSL, we define the depths of vertices as follows. Definition 4. Let G = (V, E, ?) be an ordered decomposable DAG. The depth of v ? V in G is ( 0 if paG (v) = ?, 0 d(G, v) = max d(G, v ) + 1 otherwise. 0 v ?paG (v) 4 The depths of G are ordered if for all vi , vj ? V , where ? ?1 (i) < ? ?1 (j), the following hold. 1. d(G, vi ) ? d(G, vj ), and 2. If d(G, vi ) = d(G, vj ), then i < j. Note that "violating the preferred vertex order" concerns the order in which the vertices are in the underlying DAG, whereas "depths are ordered" concerns the order by which a solution was constructed. We use the former to prune whole solution candidates from the search space, and the latter to ensure that no solution candidate is seen twice during search. We also propose a dynamic programming approach to branch selection and parent set pruning during search, based on the following definition of valid parent sets. Definition 5. Let G = (V, E, ?) be an ordered decomposable DAG. Given vk ? / V and P ? V , let G0 = (V 0 , E 0 , ? 0 ) be G with the parent set choice (vk , P ). The parent set choice (vk , P ) is valid for G if the following hold. 1. For all vi , vj ? P we have either (vi , vj ) ? E or (vj , vi ) ? E. 2. For all vi ? V , the pair (vi , vk ) does not violate the preferred vertex order in G0 . 3. The depths of G0 are ordered. Given a partial solution G = (V, E, ?), a vertex v ? / V , and a subset P ? V , the function G ET S U PERSETS in Algorithm 2 represents a dynamic programming method for determining valid parent set choices (v, P 0 ) for G where P 0 ? P . An advantage of this formulation is that invalidating conditions for a parent set, such as immoralities or violations of the preferred vertex order, automatically hold for all the supersets of the parent set; this is applied on line 6 to avoid unnecessary branching. On line 8 we require that a parent set P is added to the list only if none of its valid supersets P 0 ? C have a higher score. 
This pruning technique is based on the observation that P′ provides all the same moralizing edges as P, and therefore it is sufficient to only consider the parent set choice (v, P′) in the search when s(v, P) ≤ s(v, P′).

Given the set of remaining vertices U, the function ParentSetChoices in Algorithm 2 constructs all the available parent set choices for the current partial solution G = (V, E, σ). The collection M(G, v_i) contains the subset-minimal parent sets for vertex v_i ∈ U that satisfy the third condition of Definition 5. If V = ∅, then M(G, v_i) = {∅}. Otherwise, let k be the maximum depth of the vertices in G. Now M(G, v_i) contains the subset-minimal parent sets that would insert v_i on depth k + 1. In addition, if i > j for all v_j ∈ V where d(G, v_j) = k, then M(G, v_i) also contains the subset-minimal parent sets that would insert v_i on depth k. Note that the cardinality of any parent set in M(G, v_i) is at most one.

Algorithm 2 Constructing parent set choices via dynamic programming.
1: function ParentSetChoices(U, G = (V, E, σ))
2:   return ⋃_{v∈U} ⋃_{M∈M(G,v)} GetSupersets(v, G, M)
3: function GetSupersets(v, G = (V, E, σ), P)
4:   Let C = ∅
5:   for v′ ∈ V \ P \ {v} do
6:     if (v, P′) is a valid parent set choice for G for some P′ ⊇ P ∪ {v′} then
7:       C ← C ∪ GetSupersets(v, G, P ∪ {v′})
8:   if (v, P) is a valid parent set choice for G and s(v, P) > s(v, P′) for all (v, P′) ∈ C then
9:     C ← C ∪ {(v, P)}
10:  return C

3.3 Computing Tight Bounds by Harnessing Dynamic Programming for BNSL

To obtain tight bounds during search, we make use of the fact that the score of the optimal BN structures for the BNSL instance with the same scores as in the CMSL instance at hand is guaranteed to give an upper bound on the optimal solutions to the CMSL instance. To compute an optimal BN structure, we use a variant of a standard dynamic programming algorithm by Silander and Myllymäki [22]. While there are far more efficient algorithms for BNSL [2, 32, 29], we use BNSL DP for obtaining an upper bound during the branch-and-bound search under the current partial CMSL solution (i.e., under the current branch). Specifically, before the actual branch and bound, we precompute a DP table which stores, for each subset of vertices V′ ⊆ V of the problem instance, the score of the so-called BN extensions of V′, i.e., the optimal BN structures over U = V \ V′ where we additionally allow the vertices in U to also take parents from V′. This guarantees that the BN extensions are compatible with the vertex order in the current branch of the branch-and-bound search tree, and thereby the sum of the score of the current partial CMSL solution over V′ and the score of the optimal BN extensions of V′ is a valid upper bound. By spending O(n · 2ⁿ) time in the beginning of the branch and bound for computing the scores of optimal BN extensions of every V′ ⊆ V, we can then look up these scores during branch and bound in O(1) time.

With the DP table, it takes only low polynomial time to construct the optimal BN structure over the set of all vertices [22], i.e., a BN extension of ∅. Thus, we can obtain an initial lower bound solution G* for the branch and bound as follows.
1. Construct the optimal BN structure for the vertices of the problem instance.
2. Try to make the BN decomposable by heuristically adding or removing edges.
3. Let G* be the highest-scoring decomposable DAG from step 2.
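The subset DP for BN extensions can be sketched as follows. This is a simplified, assumption-laden rendition: best parent sets are found by a naive scan over the score table (the actual implementation uses precomputed structures for O(1) lookups), and every vertex is assumed to have at least the empty parent set scored.

from functools import lru_cache

def bn_extension_table(vertices, scores):
    """ext(placed) = score of optimal BN extensions of `placed`: an optimal
    BN over the remaining vertices, which may also take parents from
    `placed`. scores: dict {(v, frozenset(P)): float}, with (v, frozenset())
    present for every v."""
    vertices = frozenset(vertices)

    @lru_cache(maxsize=None)
    def best_local(v, allowed):
        # naive scan; the paper stores these so lookups are O(1)
        return max(sc for (u, P), sc in scores.items()
                   if u == v and P <= allowed)

    @lru_cache(maxsize=None)
    def ext(placed):
        rest = vertices - placed
        if not rest:
            return 0.0
        return max(best_local(v, placed) + ext(placed | {v}) for v in rest)

    return ext

# ext(frozenset()) is the optimal BN score over all vertices, i.e., the
# score of a BN extension of the empty set, used for the initial bound.
ext = bn_extension_table({"a", "b"}, {("a", frozenset()): 0.0,
                                      ("b", frozenset()): -1.0,
                                      ("b", frozenset({"a"})): 0.7})
print(ext(frozenset()))   # -> 0.7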
However, the upper bounds obtained via BNSL can at times be quite weak when the network structures contain many immoralities. For this reason, in Algorithm 3, we introduce an additional method for computing the upper bounds, taking immoralities "relaxedly" into consideration. The algorithm takes four inputs: a fixed partial solution G = (V, E, σ), a list of vertices A that we have assigned during the upper bound computation, a list of remaining vertices U, and an integer d ≥ 0 which dictates the maximum recursion depth. As a fallback option, on line 3 we return the optimal BN score for the remaining vertices if the maximum recursion depth is reached. On line 4 we construct the collection of sets 𝒫 that are the maximal sets that any vertex can take as a parent set during the upper bound computation. The sets in 𝒫 take immoralities relaxedly into consideration: for any v_i, v_j ∈ V, we have {v_i, v_j} ⊆ P for some P ∈ 𝒫 if and only if (v_i, v_j) ∈ E or (v_j, v_i) ∈ E. That is, when choosing parent sets during the upper bound computation, we allow immoralities to appear, as long as they are not between vertices of the fixed partial solution. In the loop on line 6, we iterate through each vertex v ∈ U that is still remaining, and find its highest-scoring relaxedly-moral parent set according to 𝒫. Note that given any P′ ∈ 𝒫, we can find the highest-scoring parent set P ⊆ P′ in O(1) time when the scores are stored in a segment tree. For information about constructing such a data structure, see [22]. Thus line 7 takes O(|V|) time to execute. Finally, on line 8 of the loop, we split the problem into subproblems to see which parent set choice (v, P) provides the highest local upper bound u to be returned.

Algorithm 3 requires O((n − m) · m · 2^(n−m)) time, where m = |V| is the number of vertices in the partial solution and n the number of vertices in the problem instance, assuming that the BN extensions and the segment trees have been precomputed. (In the empirical evaluation, the total runtimes of our branch-and-bound approach include these computations.) The collections 𝒫 can exist implicitly.

We use the upper bounds within branch and bound as follows. Let G = (V, E, σ) be the current partial solution, let U be the set of remaining vertices, and let b be the score of optimal BN extensions of V. We can close the current branch if s(G*) ≥ s(G) + b. Otherwise, we can close the branch if s(G*) ≥ s(G) + UpperBound(G, ∅, U, d) for some d > 0. Our implementation uses d = 10.

Algorithm 3 Computing upper bounds for a partial solution via dynamic programming.
1: function UpperBound(G = (V, E, σ), A, U, d)
2:   if U = ∅ then return 0
3:   if d = 0 then return the score of optimal BN extensions of V ∪ A
4:   Let 𝒫 = ⋃_{v∈V} {{v} ∪ pa_G(v) ∪ A}
5:   Let u ← −∞
6:   for v ∈ U do
7:     Let P = argmax_{P ⊆ P′, P′ ∈ 𝒫} s(v, P)
8:     u ← max(u, s(v, P) + UpperBound(G, A ∪ {v}, U \ {v}, d − 1))
9:   return u
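A direct transcription of the recursion into Python might look as follows. This is a sketch under assumptions: pa gives parents within the fixed partial solution, bn_ext stands for the precomputed BN-extension lookup, best_in(v, Pmax) stands for the segment-tree query returning the best score over parent sets contained in Pmax, and V is assumed non-empty.

def upper_bound(V, pa, A, U, d, bn_ext, best_in):
    """Sketch of Algorithm 3 (relaxed upper bounds). A and U are frozensets."""
    if not U:
        return 0.0
    if d == 0:
        return bn_ext(frozenset(V) | A)     # fall back to the BNSL bound
    # maximal "relaxedly moral" parent sets: each {v} plus pa(v), extended by A
    P_coll = [frozenset({v}) | frozenset(pa(v)) | A for v in V]
    u = float("-inf")
    for v in U:
        sc = max(best_in(v, Pmax) for Pmax in P_coll)   # best relaxed choice
        u = max(u, sc + upper_bound(V, pa, A | {v}, U - {v}, d - 1,
                                    bn_ext, best_in))
    return u

In a real implementation the collections P_coll would never be materialized; here they are built explicitly only to keep the sketch short.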
4 Empirical Evaluation

We implemented the branch-and-bound algorithm in C++, and refer to this prototype as BBMarkov. We compare the performance of BBMarkov to that of GOBNILP (the newest development version [24] at the time of publication, using IBM CPLEX version 12.7.1 as the internal IP solver) as a state-of-the-art BNSL system implementing an integer programming branch-and-cut approach to CMSL by ruling out non-chordal graphs, and Junctor, implementing a state-of-the-art DP approach to CMSL. We used a total of 54 real-world datasets used as standard benchmarks for exact approaches [32, 29]. For investigating scalability of the algorithms in terms of the number of variables n, we obtained from each dataset several benchmark instances by restricting to the first n variables for increasing values of n. We did not impose a bound on the treewidth of the chordal graphs of interest, i.e., the size of candidate parent sets was not limited. We used the BDeu score with equivalent sample size 1. As is standard practice in benchmarking exact structure learning algorithms, we focus on comparing the running times of the considered approaches on precomputed input CMSL instances. The experiments were run under Debian GNU/Linux on 2.83-GHz Intel Xeon E5440 nodes with 32-GB RAM.

Figure 2 compares BBMarkov to GOBNILP and Junctor under a 1-h per-instance time limit, with different numbers n of variables distinguished using different point styles. BBMarkov clearly dominates GOBNILP in runtime performance (Fig. 2, left); instances for n > 15 are not shown as GOBNILP was unable to solve them. Compared to Junctor (Fig. 2, middle, and Table 1), BBMarkov exhibits complementary performance. Junctor is noticeably strong on several datasets and lower values of n, and exhibits fewer timeouts. For a fixed n, Junctor's runtimes have a very low variance independent of the dataset, which is due to its Θ(4ⁿ) (both worst-case and best-case) runtime guarantee. However, BBMarkov shows potential for scaling up to larger n than Junctor: at n = 17, Junctor's runtimes are very close to 1 h on all instances, while BBMarkov's bounds at times rule out non-optimal solutions very effectively, resulting in noticeably lower runtimes on specific datasets with increasing n. This is showcased in Table 1 (right), highlighting some of the best-case performance of BBMarkov using a per-instance time limit of 24 h for both BBMarkov and Junctor.

Figure 2: Per-instance runtime comparisons. Left: BBMarkov vs GOBNILP. Middle: BBMarkov vs Junctor. Right: BBMarkov time to finding vs BBMarkov time to proving an optimal solution.

Another benefit of BBMarkov compared to Junctor is the observed lower memory consumption (Figure 3): Junctor's Θ(3ⁿ) memory usage consistently results in running out of memory for n ≥ 20.

Figure 3: Memory usage.

In terms of how the various search techniques implemented in BBMarkov contribute to the running times of BBMarkov, we observed that the running times for obtaining BNSL-based bounds (via the use of exact BN dynamic programming and segment trees) tend to be only a small fraction of the overall running times. For example, at n = 20, these computations take less than a minute in total. Most of the time in the search is typically used in the optimization loop and in computing the tighter upper bounds that take immoralities "relaxedly" into consideration. While computing the tighter bounds is more expensive than computing the exact BNs at the beginning of search, the tighter bounds often pay off in terms of overall running times as branches can be closed earlier during search.

Table 1: BBMarkov vs. Junctor. Left: smaller datasets, and different sample sizes on the Water dataset. Right: examples of best-case performance of BBMarkov. to: timeout, mo: memout.
Running times (s):

Dataset      n   BBMarkov        Junctor
Wine         13  <1 (<1)         6
Adult        14  58 (35)         29
Letter       16  >3600 (>3600)   592
Voting       17  281 (207)       3050
Zoo          17  >3600 (>3600)   2690
Water100     17  100 (49)        2580
Water1000    17  2731 (279)      2592
Water10000   17  >3600 (>3600)   2928
Tumor        18  610 (268)       12019

Dataset        n   BBMarkov         Junctor
Alarm          17  268 (62)         2724
Alarm          18  1462 (315)       12477
Alarm          19  10274 (2028)     52130
Alarm          20  49610 (50)       mo
Heart          17  41 (22)          3007
Heart          18  162 (85)         11179
Heart          19  1186 (698)       50296
Heart          20  15501 (13845)    mo
Hailfinder500  17  225 (108)        2588
Hailfinder500  18  2543 (1348)      12422
Hailfinder500  19  13749 (6418)     53108
Hailfinder500  20  33503 (25393)    mo
Water100       18  590 (244)        12244
Water100       19  6581 (6187)      52575
Water100       20  61152 (54806)    mo

At n = 19, BBMarkov uses on average approx. 1 GB of memory, while Junctor uses close to 30 GB. A further benefit of BBMarkov is its ability to provide "anytime" solutions during search. In fact, the bounds obtained during search at times result in finding optimal solutions relatively fast: Figure 2 (right) shows the ratio of the time needed to find an optimal solution (x-axis) to the time needed to terminate search, i.e., to find a solution and prove its optimality (y-axis); in Table 1, the time needed to find an optimal solution is given in parentheses.

5 Conclusions

We introduced a new branch-and-bound approach to learning optimal chordal Markov network structures, i.e., decomposable graphs. In addition to core branch-and-bound search, the approach integrates dynamic programming for obtaining tight bounds and effective variable domain pruning during search. In terms of practical performance, the approach has the potential of reaching 20 variables within hours of runtime, at which point the competing native dynamic programming approach Junctor runs out of memory on standard modern computers. When approaching 20 variables, our approach is approximately 30 times as memory-efficient as Junctor. Furthermore, in contrast to Junctor, the approach is "anytime", as solutions can be obtained already before finishing search. Efficient parallelization of the approach is a promising direction for future work.

Acknowledgments

The authors gratefully acknowledge financial support from the Academy of Finland under grants 251170 COIN Centre of Excellence in Computational Inference Research, 276412, 284591, 295673, and 312662; and the Research Funds of the University of Helsinki.

A Proofs

We give a proof for Theorem 1, central in enabling effective symmetry breaking in our branch-and-bound approach. We start with a definition and a lemma towards the proof.

Definition 6. Let V = {v_1, ..., v_n} be a set of vertices and let σ and σ′ be some total orders over V. Let k = min_{i: σ(i) ≠ σ′(i)} i be the first difference between the orders. If no such difference exists, we denote σ = σ′. Otherwise we denote σ < σ′ if and only if σ(k) < σ′(k).

Lemma 1. Let G = (V, E, σ) be an ordered decomposable DAG. If there are v_i, v_j ∈ V such that the pair (v_i, v_j) violates the preferred vertex order in G, then there exists an ordered decomposable DAG G′ = (V, E′, σ′), where
1. G′ belongs to the same equivalence class as G,
2. the pair (v_i, v_j) does not violate the preferred vertex order in G′, and
3. σ′ < σ.

Proof. We begin by defining a directed clique tree C = (𝒱, ℰ) over G. Given v_k ∈ V, let C_k = pa_G(v_k) ∪ {v_k} be the clique defined by v_k in G. The vertices of C are these cliques; we also add an empty set as a clique to make sure the cliques form a tree (and not a forest). Formally, 𝒱 = {C_k | v_k ∈ V} ∪ {∅}. Given v_k ∈ V where pa_G(v_k) ≠ ∅, let ρ_k = argmax_{v_ℓ ∈ pa_G(v_k)} σ⁻¹(ℓ) denote the parent of v_k in G that is in the least significant position in σ. Now, the edges of C are

ℰ = {(∅, C_k) | C_k = {v_k}, v_k ∈ V} ∪ {(C_ℓ, C_k) | v_ℓ = ρ_k, C_k ≠ {v_k}, v_k ∈ V}.

In words, if v_k ∈ V is a source vertex in G (i.e., C_k = {v_k}), then the parent of C_k is ∅ in C. Otherwise (i.e., C_k ≠ {v_k}), the parent of C_k is C_ℓ, where v_ℓ is the closest vertex to v_k in the order σ that satisfies C_ℓ ∩ pa_G(v_k) ≠ ∅. We see that all the requirements for clique trees hold for C: I. ⋃_{C∈𝒱} C = V; II. if {v_ℓ, v_k} ∈ E, then either {v_ℓ, v_k} ⊆ C_k or {v_ℓ, v_k} ⊆ C_ℓ; and III. due to the decomposability of G, we have C_a ∩ C_c ⊆ C_b on any path from C_a to C_c through C_b (the running intersection property).
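The clique tree construction in the proof is mechanical enough to state as code. A small sketch, assuming the ordered decomposable DAG is given by parent sets pa[v] and an order sigma listed from most to least significant position; all names here are ours.

def directed_clique_tree(sigma, pa):
    """Build the clique tree C: cliques C_k = pa(v_k) + {v_k}; each source
    clique hangs off the empty clique, every other C_k hangs off C_rho_k,
    where rho_k is v_k's parent in the least significant position of sigma."""
    pos = {v: i for i, v in enumerate(sigma)}              # sigma^{-1}
    clique = {v: frozenset(pa[v]) | {v} for v in sigma}
    parent = {}
    for v in sigma:
        if not pa[v]:
            parent[clique[v]] = frozenset()                # edge (empty, C_k)
        else:
            rho = max(pa[v], key=lambda u: pos[u])         # argmax sigma^{-1}
            parent[clique[v]] = clique[rho]                # edge (C_rho, C_k)
    return parent

# Toy ordered decomposable DAG: v1 -> v2, and {v1, v2} -> v3.
sigma = ["v1", "v2", "v3"]
pa = {"v1": set(), "v2": {"v1"}, "v3": {"v1", "v2"}}
for c, p in directed_clique_tree(sigma, pa).items():
    print(set(p) or "{}", "->", set(c))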
Now assume that there are v_i, v_j ∈ V such that the pair (v_i, v_j) violates the preferred vertex order in G; that is, we have i > j, pa_G(v_i) ⊆ pa_G(v_j), and a path from v_i to v_j in G. This means that there is a path from C_i to C_j in C as well. Let P ∈ 𝒱 be the parent vertex of C_i in C. We see that C_j exists in a subtree T of C that is separated from the rest of C by P, and where C_i is the root vertex. Let T′ be a new clique tree that is like T, but redirected so that C_j is the root vertex of T′. Let C′ be a new clique tree that is like C, but with T replaced by T′. We show that C′ is a valid clique tree. First of all, the vertices (cliques) of C′ are exactly the same as in C, so C′ clearly satisfies requirements I and II. As for requirement III, consider the non-trivial case where C_a, C_b ∈ C have a path from C_a to C_b through C_i in C. This means v_i ∉ C_a (due to the way C was constructed), and so we get

C_a ∩ C_b ⊆ C_i  ⟹  C_a ∩ C_b ⊆ C_i \ {v_i} = pa_G(v_i) ⊆ pa_G(v_j) ⊆ C_j,

where pa_G(v_i) ⊆ pa_G(v_j) holds by condition 2 of Definition 3. Therefore the running intersection property holds for C′. Let σ̃ be the total order by which C′ is ordered. Let G′ = (V, E′, σ̃) be a new ordered decomposable DAG that is equivalent to G, but where the edges E′ are arranged to follow the order σ̃. Finally, we see that G′ satisfies the conditions of the lemma: 1. The cliques of G′ are identical to those of G, so G′ belongs to the same equivalence class as G. 2. We have σ̃⁻¹(j) < σ̃⁻¹(i), and therefore there is no path from v_i to v_j in G′; thus the pair (v_i, v_j) does not violate the preferred vertex order in G′. 3. Let o = σ⁻¹(i). We have σ̃(o) = j < i = σ(o). Furthermore, the change from T to T′ in C′ did not affect any vertex whose position was earlier than o; therefore σ̃(k) = σ(k) for all k = 1...(o − 1). This implies σ̃ < σ.

Proof of Theorem 1. Consider the following procedure for finding G′.
1. Select v_i, v_j ∈ V where the pair (v_i, v_j) violates the preferred vertex order in G. If there are no such vertices, assign G′ ← G and terminate.
2. Let σ be the total order of the vertices of G. Construct an ordered decomposable DAG G̃ = (V, Ẽ, σ̃) such that I. the pair (v_i, v_j) does not violate the preferred vertex order in G̃, II. G̃ belongs to the same equivalence class as G, and III. σ̃ < σ. By Lemma 1, G̃ can be constructed from G.
3. Assign G ← G̃ and return to step 1.

It is clear that when the procedure terminates, G′ belongs to the same equivalence class as G and there are no violations of the preferred vertex order in G′. We also see that the total order of G (i.e., σ) is lexicographically strictly decreasing every time step 3 is reached.
There is a finite number of possible permutations (total orders), and therefore the procedure terminates. The existence of this procedure and its correctness prove that G′ exists.

References

[1] Haley J. Abel and Alun Thomas. Accuracy and computational efficiency of a graphical modeling approach to linkage disequilibrium estimation. Statistical Applications in Genetics and Molecular Biology, 143(10.1), 2017.
[2] Mark Bartlett and James Cussens. Integer linear programming for the Bayesian network structure learning problem. Artificial Intelligence, 244:258–271, 2017.
[3] Cassio P. de Campos and Qiang Ji. Efficient structure learning of Bayesian networks using constraints. Journal of Machine Learning Research, 12:663–689, 2011.
[4] David Maxwell Chickering. A transformational characterization of equivalent Bayesian network structures. In Proc. UAI, pages 87–98. Morgan Kaufmann, 1995.
[5] David Maxwell Chickering. Learning equivalence classes of Bayesian network structures. Journal of Machine Learning Research, 2:445–498, 2002.
[6] Jukka Corander, Tomi Janhunen, Jussi Rintanen, Henrik J. Nyman, and Johan Pensar. Learning chordal Markov networks by constraint satisfaction. In Proc. NIPS, pages 1349–1357, 2013.
[7] A. Philip Dawid and Steffen L. Lauritzen. Hyper Markov laws in the statistical analysis of decomposable graphical models. Annals of Statistics, 21(3):1272–1317, 1993.
[8] Cassio P. de Campos and Qiang Ji. Properties of Bayesian Dirichlet scores to learn Bayesian network structures. In Proc. AAAI, pages 431–436. AAAI Press, 2010.
[9] Petros Dellaportas and Jonathan J. Forster. Markov chain Monte Carlo model determination for hierarchical and graphical log-linear models. Biometrika, 86(3):615–633, 1999.
[10] Paolo Giudici and Peter J. Green. Decomposable graphical Gaussian model determination. Biometrika, 86(4):785, 1999.
[11] Peter J. Green and Alun Thomas. Sampling decomposable graphs using a Markov chain on junction trees. Biometrika, 100(1):91, 2013.
[12] Tomi Janhunen, Martin Gebser, Jussi Rintanen, Henrik Nyman, Johan Pensar, and Jukka Corander. Learning discrete decomposable graphical models via constraint optimization. Statistics and Computing, 27(1):115–130, 2017.
[13] Kustaa Kangas, Mikko Koivisto, and Teppo M. Niinimäki. Learning chordal Markov networks by dynamic programming. In Proc. NIPS, pages 2357–2365, 2014.
[14] Kustaa Kangas, Teppo Niinimäki, and Mikko Koivisto. Averaging of decomposable graphs by dynamic programming and sampling. In Proc. UAI, pages 415–424. AUAI Press, 2015.
[15] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[16] K. S. Sesh Kumar and Francis R. Bach. Convex relaxations for learning bounded-treewidth decomposable graphs. In Proc. ICML, volume 28 of JMLR Workshop and Conference Proceedings, pages 525–533. JMLR.org, 2013.
[17] Steffen L. Lauritzen and David J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. In Glenn Shafer and Judea Pearl, editors, Readings in Uncertain Reasoning, pages 415–448. Morgan Kaufmann Publishers Inc., 1990.
[18] Gérard Letac and Hélène Massam. Wishart distributions for decomposable graphs. The Annals of Statistics, 35(3):1278–1323, 2007.
[19] David Madigan, Jeremy York, and Denis Allard. Bayesian graphical models for discrete data. International Statistical Review / Revue Internationale de Statistique, pages 215–232, 1995.
[20] Brandon M. Malone and Changhe Yuan. A depth-first branch and bound algorithm for learning optimal Bayesian networks. In GKR 2013 Revised Selected Papers, volume 8323 of Lecture Notes in Computer Science, pages 111–122. Springer, 2014.
[21] Kari Rantanen. Learning score-optimal chordal Markov networks via branch and bound. Master's thesis, University of Helsinki, Finland, 2017.
[22] Tomi Silander and Petri Myllymäki. A simple approach for finding the globally optimal Bayesian network structure. In Proc. UAI, pages 445–452. AUAI Press, 2006.
[23] Nathan Srebro. Maximum likelihood bounded tree-width Markov networks. Artificial Intelligence, 143(1):123–138, 2003.
[24] Milan Studený and James Cussens. Towards using the chordal graph polytope in learning decomposable models. International Journal of Approximate Reasoning, 88:259–281, 2017.
[25] Joe Suzuki. Learning Bayesian belief networks based on the Minimum Description Length principle: An efficient algorithm using the B&B technique. In Proc. ICML, pages 462–470. Morgan Kaufmann, 1996.
[26] Joe Suzuki and Jun Kawahara. Branch and bound for regular Bayesian network structure learning. In Proc. UAI. AUAI Press, 2017.
[27] Claudia Tarantola. MCMC model determination for discrete graphical models. Statistical Modelling, 4(1):39–61, 2004.
[28] Jin Tian. A branch-and-bound algorithm for MDL learning Bayesian networks. In Proc. UAI, pages 580–588. Morgan Kaufmann, 2000.
[29] Peter van Beek and Hella-Franziska Hoffmann. Machine learning of Bayesian networks using constraint programming. In Proc. CP, volume 9255 of Lecture Notes in Computer Science, pages 429–445. Springer, 2015.
[30] Claudio J. Verzilli, Nigel Stallard, and John C. Whittaker. Bayesian graphical models for genomewide association studies. The American Journal of Human Genetics, 79(1):100–112, 2006.
[31] Ami Wiesel, Yonina C. Eldar, and Alfred O. Hero III. Covariance estimation in decomposable Gaussian graphical models. IEEE Transactions on Signal Processing, 58(3):1482–1492, 2010.
[32] Changhe Yuan and Brandon M. Malone. Learning optimal Bayesian networks: A shortest path perspective. Journal of Artificial Intelligence Research, 48:23–65, 2013.
Revenue Optimization with Approximate Bid Predictions

Andrés Muñoz Medina
Google Research
76 9th Ave, New York, NY 10011

Sergei Vassilvitskii
Google Research
76 9th Ave, New York, NY 10011

Abstract

In the context of advertising auctions, finding good reserve prices is a notoriously challenging learning problem. This is due to the heterogeneity of ad opportunity types, and the non-convexity of the objective function. In this work, we show how to reduce reserve price optimization to the standard setting of prediction under squared loss, a well understood problem in the learning community. We further bound the gap between the expected bid and revenue in terms of the average loss of the predictor. This is the first result that formally relates the revenue gained to the quality of a standard machine-learned model.

1 Introduction

A crucial task for revenue optimization in auctions is setting a good reserve (or minimum) price. Set it too low, and the sale may yield little revenue; set it too high, and there may not be anyone willing to buy the item. The celebrated work by Myerson [1981] shows how to optimally set reserves in second-price auctions, provided the value distribution of each bidder is known. In practice there are two challenges that make this problem significantly more complicated. First, the value distribution is never known directly; rather, the auctioneer can only observe samples drawn from it. Second, in the context of ad auctions, the items for sale (impressions) are heterogeneous, and there are literally trillions of different types of items being sold. It is therefore likely that a specific type of item has never been observed previously, and no information about its value is known.

A standard machine learning approach addressing the heterogeneity problem is to parametrize each impression by a feature vector, with the underlying assumption that bids observed from auctions with similar features will be similar. In online advertising, these features encode, for instance, the ad size and whether it's mobile or desktop. The question is, then, how to use the features to set a good reserve price for a particular ad opportunity.

On the face of it, this sounds like a standard machine learning question: given a set of features, predict the value of the maximum bid. The difficulty comes from the shape of the loss function. Much of the machine learning literature is concerned with optimizing well behaved loss functions, such as the squared loss or the hinge loss. The revenue function, on the other hand, is non-continuous and strongly non-concave, making a direct attack a challenging proposition. In this work we take a different approach and reduce the problem of finding good reserve prices to a prediction problem under the squared loss. In this way we can rely upon many widely available and scalable algorithms developed to minimize this objective. We proceed by using the predictor to define a judicious clustering of the data, and then compute the empirically maximizing reserve price for each group. Our reduction is simple and practical, and directly ties the revenue gained by the algorithm to the prediction error.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1.1 Related Work

Optimizing revenue in auctions has been a rich area of study, beginning with the seminal work of Myerson [1981], who introduced optimal auction design.
Follow-up work by Chawla et al. [2007] and Hartline and Roughgarden [2009], among others, refined his results to increasingly more complex settings, taking into account multiple items, diverse demand functions, and weaker assumptions on the shape of the value distributions. Most of the classical literature on revenue optimization focuses on the design of optimal auctions when the bidding distribution of buyers is known.

More recent work has considered the computational and information-theoretic challenges in learning optimal auctions from data. A long line of work [Cole and Roughgarden, 2015, Devanur et al., 2016, Dhangwatnotai et al., 2015, Morgenstern and Roughgarden, 2015, 2016] analyzes the sample complexity of designing optimal auctions. The main contribution of this direction is to show that under fairly general bidding scenarios, a near-optimal auction can be designed knowing only a polynomial number of samples from bidders' valuations. Other authors [Leme et al., 2016, Roughgarden and Wang, 2016] have focused on the computational complexity of finding optimal reserve prices from samples, showing that even for simple mechanisms the problem is often NP-hard to solve directly.

Another well studied approach to data-driven revenue optimization is that of online learning. Here, auctions occur one at a time, and the learning algorithm must compute prices as a function of the history of the algorithm. These algorithms generally make no distributional assumptions and measure their performance in terms of regret: the difference between the algorithm's performance and the performance of the best fixed reserve price in hindsight. Kleinberg and Leighton [2003] developed an online revenue optimization algorithm for posted-price auctions that achieves low regret. Their work was later extended to second-price auctions by Cesa-Bianchi et al. [2015].

A natural approach in both of these settings is to attempt to predict an optimal reserve price, equivalently the highest bid submitted by any of the buyers. While the problem of learning this reserve price is well understood for the simplistic model of buyers with i.i.d. valuations [Cesa-Bianchi et al., 2015, Devanur et al., 2016, Kleinberg and Leighton, 2003], the problem becomes much more challenging in practice, when the valuations of a buyer also depend on features associated with the ad opportunity (for instance, user demographics and publisher information). This problem is not nearly as well understood as its i.i.d. counterpart. Mohri and Medina [2014] provide learning guarantees and an algorithm based on DC programming to optimize revenue in second-price auctions with reserve. The proposed algorithm, however, does not easily scale to large auction data sets, as each iteration involves solving a convex optimization problem. A smoother version of this algorithm is given by Rudolph et al. [2016]. However, being a highly non-convex problem, neither algorithm provides a guarantee on the revenue attainable by the algorithm's output. Devanur et al. [2016] give sample complexity bounds on the design of optimal auctions with side information. However, the authors consider only cases where this side information is given by σ ∈ [0, 1]. More importantly, their proposed algorithm only works under the unverifiable assumption that the conditional distributions of bids given σ satisfy stochastic dominance.

Our results. We show that given a predictor of the bid with squared loss of η², we can construct a reserve function r that extracts all but g(η)
revenue, for a simple increasing function g (see Theorem 2 for the exact statement). To the best of our knowledge, this is the first result that ties the revenue one can achieve directly to the quality of a standard prediction task. Our algorithm for computing r is scalable, practical, and efficient.

Along the way we show what kinds of distributions are amenable to revenue optimization via reserve prices. We prove that when bids are drawn i.i.d. from a distribution F, the ratio between the mean bid and the revenue extracted with the optimum monopoly reserve scales as O(log Var(F)) (Theorem 5). This result refines the log h bound derived by Goldberg et al. [2001], and formalizes the intuition that reserve prices are more successful for low-variance distributions.

2 Setup

We consider a repeated posted-price auction setup where every auction is parametrized by a feature vector x ∈ X and a bid b ∈ [0, 1]. Let D be a distribution over X × [0, 1]. Let h : X → [0, 1] be a bid prediction function and denote by η² the squared loss incurred by h: E[(h(x) − b)²] = η². We assume h is given, and make no assumption on the structure of h or how it is obtained. Notice that while the existence of such an h is not guaranteed for all values of η, using historical data one could use one of multiple readily available regression algorithms to find the best hypothesis h.

Let S = ((x_1, b_1), ..., (x_m, b_m)) ∼ D be a set of m i.i.d. samples drawn from D and denote by S_X = (x_1, ..., x_m) its projection on X. Given a price p, let Rev(p, b) = p · 1_{b≥p} denote the revenue obtained when the bidder bids b. For a reserve price function r : X → [0, 1] we let

R(r) = E_{(x,b)∼D}[Rev(r(x), b)]  and  R̂(r) = (1/m) Σ_{(x,b)∈S} Rev(r(x), b)

denote the expected and empirical revenue of the reserve price function r. We also let B = E[b] and B̂ = (1/m) Σ_{i=1}^m b_i denote the population and empirical mean bid, and S(r) = B − R(r), Ŝ(r) = B̂ − R̂(r) denote the expected and empirical separation between bid values and the revenue. Notice that for a given reserve price function r, S(r) corresponds to revenue left on the table. Our goal is, given S and h, to find a function r that maximizes R(r) or, equivalently, minimizes S(r).
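To fix ideas, the following small sketch (with made-up synthetic data) computes the empirical quantities just defined: Rev(p, b) = p · 1_{b≥p}, the empirical revenue R̂(r), the mean bid B̂, and the separation Ŝ(r).

import numpy as np

def empirical_revenue(r, X, b):
    """R_hat(r) = (1/m) * sum of Rev(r(x_i), b_i)."""
    prices = np.array([r(x) for x in X])
    return float(np.mean(prices * (b >= prices)))

def empirical_separation(r, X, b):
    """S_hat(r) = B_hat - R_hat(r): revenue left on the table."""
    return float(np.mean(b)) - empirical_revenue(r, X, b)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                       # placeholder features
b = np.clip(0.5 + X.sum(axis=1) / 10 +
            rng.normal(scale=0.05, size=1000), 0.0, 1.0)
r = lambda x: 0.4                                    # a constant reserve
print(empirical_revenue(r, X, b), empirical_separation(r, X, b))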
2.1 Generalization Error

Note that in our setup we are only given samples from the distribution D, but aim to maximize the expected revenue. Understanding the difference between the empirical performance of an algorithm and its expected performance, also known as the generalization error, is a key tenet of learning theory. At a high level, the generalization error is a function of the training set size (larger training sets lead to smaller generalization error) and of the inherent complexity of the learning algorithm: simple rules such as linear classifiers generalize better than more complex ones.

In this paper we characterize the complexity of a class G of functions by its growth function Π. The growth function corresponds to the maximum number of binary labelings that can be obtained by G over all possible samples S_X. It is closely related to the VC-dimension when G takes values in {0, 1} and to the pseudo-dimension [Morgenstern and Roughgarden, 2015, Mohri et al., 2012] when G takes values in R. We can give a bound on the generalization error associated with minimizing the empirical separation over a class of functions G. The following theorem is an adaptation of Theorem 1 of [Mohri and Medina, 2014] to our particular setup.

Theorem 1. Let δ > 0. With probability at least 1 − δ over the choice of the sample S, the following bound holds uniformly for r ∈ G:

S(r) ≤ Ŝ(r) + 4 √(2 log(Π(G, m)) / m) + 2 √(log(1/δ) / (2m)).   (1)

Therefore, in order to minimize the expected separation S(r), it suffices to minimize the empirical separation Ŝ(r) over a class of functions G whose growth function scales polynomially in m.

3 Warmup

In order to better understand the problem at hand, we begin by introducing a straightforward mechanism for transforming the hypothesis function h into a reserve price function r with guarantees on its achievable revenue.

Lemma 1. Let r : X → [0, 1] be defined by r(x) := max(h(x) − η^(2/3), 0). The function r then satisfies S(r) ≤ η + 2η^(2/3).

The proof is a simple application of Jensen's and Markov's inequalities and is deferred to Appendix B. This surprisingly simple algorithm shows there are ways to obtain revenue guarantees from a simple regressor. To the best of our knowledge, this is the first guarantee of its kind.
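A quick numerical illustration of Lemma 1's reserve, on made-up data; the "predictor" here is just the bid plus Gaussian noise, and η is estimated empirically rather than assumed.

import numpy as np

rng = np.random.default_rng(1)
b = rng.uniform(0.2, 1.0, size=2000)                  # true bids in [0, 1]
h = b + rng.normal(scale=0.05, size=b.size)           # a noisy "predictor"
eta = float(np.sqrt(np.mean((h - b) ** 2)))           # empirical loss root
prices = np.maximum(h - eta ** (2.0 / 3.0), 0.0)      # r(x) = max(h(x) - eta^{2/3}, 0)
revenue = float(np.mean(prices * (b >= prices)))
separation = float(b.mean()) - revenue
print(f"mean bid {b.mean():.3f}  revenue {revenue:.3f}  "
      f"separation {separation:.3f}  bound {eta + 2 * eta ** (2 / 3):.3f}")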
4.1 Algorithm Description In this section we give an overview of the algorithm that uses both the predictor h and the set of samples in S to develop a pricing function r. Our approach has two steps. First we partition the set of feasible prices, 0 ? p ? 1, into k partitions, C1 , C2 , . . . , Ck . The exact boundaries between partitions depend on the samples S and their predicted values, as given by h. For each partition we find the price that maximizes the empirical revenue in the partition. We let r(x) return the empirically optimum price in the partition that contains h(x). For a more formal description, let Tk be the set of k-partitions of the interval [0, 1] that is: Tk = {t = (t0 , t1 , . . . , tk?1 , tk ) | 0 = t0 < . . . < tk = 1}. Pk?1 We define G(h, k) = {x 7? j=0 ri 1tj ?h(x)<tj+1 | rj ? [ti , tj+1 ] and t ? Tk }. A function in G(h, k) chooses k level sets of h and k reserve prices. Given x, price rj is chosen if x falls on the j-th level set. It remains to define the function rk ? G(h, k). Given a partition vector t ? Tk , let the partition C h = {C1h , . . . , Ckh } of X be given by Cjh = {x ? X |tj?1 < h(x) ? tj }. Let mj = |SX ? Cjh | be the number of elements that fall into the j-th partition. We define the predicted mean and variance of each group Cjh as 1 X 1 X ?hj = h(xi ) and (?jh )2 = (h(xi ) ? ?j )2 . mj m j h h xi ?Cj xi ?Cj 4 We are now ready to present algorithm RIC-h for computing rk ? Hk . Algorithm 1. Reserve Inference from Clusters Pk?1 1 h Compute th ? Tk that minimizes m j=0 mj ?j . Let C h = C1h , C2h , . . . , Ckh be the induced partitions. For each j ? 1, . . . , k, set rj = maxr r ? |{i|bi ? r ? xi ? Cjh }|. Pk?1 Return x 7? j=0 rj 1h(x)?Cjh . end Our main theorem states that the separation of rk is bounded by the cluster variance of C h . For a partition C = {C1 , . . . , Ck } of X let ?j denote the empirical variance of bids for auctions in Cj . We define the weighted empirical variance by: ?(C) : = k s X j=1 X (bi ? bi0 )2 = 2 i,i0 :xi ,xi0 ?Ck k X mj ? bj (2) j=1 Theorem 2. Let ? > 0 and let rk denote the output of Algorithm 1 then rk ? G(h, k) and with probability at least 1 ? ? over the samples S: ! r   1/2 2/3 1  1 log 1/? h 1/3 2 1/3 b b k ) ? (3B) b . ?(C ) ? (3B) +2 ? + S(r 2m 2k 2m Notice that our bound is data dependent and only in he worst case scenario it behaves like ? 2/3 . In general it could be much smaller. We also show that the complexity of G(h, k) admits a favorable bound. The proof is similar to that in [Morgenstern and Roughgarden, 2015]; we include it in Appendix E for completness. Theorem 3. The growth function of the class G(h, k) can be bounded as: ?(G(h, k), m) ? m2k?1 . kk b in terms of B to conclude: We can combine these results with Equation 1 and an easy bound on B Corollary 1. Let ? > 0 and let rk denote the output of Algorithm 1 then rk ? G(h, k) and with probability at least 1 ? ? over the samples S: r k log m   1  log 1/? 1/6 r k log m   h  2 1/3 1/3 ?(C ) b + +O ? (12B? ) +O 2/3 + . S(rk ) ? (3B) 2m m 2m m k Since B ? [0, 1], this implies that when k = ?(m3/7 ), the separation is bounded by 2.28? 2/3 plus ? ?2/7 ). additional error factors that go to 0 with the number of samples, m, as O(m 5 Bounding Separation In this section we prove the main bound motivating our algorithm. This bound relates the variance of the bid distribution and the maximum revenue that can be extracted when a buyer?s bids follow such distribution. It formally shows what makes a distribution amenable to revenue optimization. 
To gain intuition for the kind of bound we are striving for, consider a bid distribution F . If the variance of F is 0, that is F is a point mass at some value v, then setting a reserve price to v leads to no separation. On the other hand, consider the equal revenue distribution, with F (x) = 1? 1/x. Here any reserve price leads to revenue of 1. However, the distribution has unbounded expected bid and variance, so it is not too surprising that more revenue cannot be extracted. We make this connection precise, showing that after setting the optimal reserve price, the separation can be bounded by a function of the variance of the distribution. Given any bid distribution F over [0, 1] we denote by G(r) = 1 ? limr0 ?r? F (r0 ) the probability that a bid is greater than or equal to r. Finally, we will let R = maxr rG(r) denote the maximum revenue achievable when facing a bidder whose bids are drawn from distribution F . As before we denote by B = Eb?F [b] the mean bid and by S = B ? R the expected separation of distribution F . 5 S Theorem 4. Let ? 2 denote the variance of F . Then ? 2 ? 2R2 e R ? B 2 ? R2 . The proof of this theorem is highly technical and we present it in Appendix A. Corollary 2. The following bound holds for any distribution F: S ? (3R)1/3 ? 2/3 ? (3B)1/3 ? 2/3 The proof of this corollary follows immediately by an application of Taylor?s theorem to the bound of Theorem 4. It is also easy to show that this bound is tight (see Appendix D). 5.1 Approximating Maximum Revenue In their seminal work Goldberg et al. [2001] showed that when faced with a bidder drawing values distribution F on [1, M ] with mean B, an auctioneer setting the optimum monopoly reserve would recover at least ?(B/ log M ) revenue. We show how to adapt the result of Theorem 4 to refine this approximation ratio as a function of the variance of F . We defer the proof to Appendix B. Theorem 5. For any distribution F with mean B and variance ? 2 , the maximum revenue with  ?2 monopoly reserves, R, satisfies: B R ? 4.78 + 2 log 1 + B 2 Note that since ? 2 ? M 2 this always leads to a tighter bound on the revenue. 5.2 Partition of X Corollary 2 suggests clustering points in such a way that the variance of the bids in each cluster bj = is minimized. Given a partition C = {C1 , . . . , Ck } of X we denote by mj = |SX ? Cj |, B P P 1 1 2 2 b bj = mj i:xi ?Cj (bi ? Bj ) . Let also rj = argmaxp>0 p|{bi > p|xi ? Cj }| and i:xi ?Cj bi , ? mj bj = rj |{bi > rj |xi ? Cj }|. R  1/3  P 2/3 Pk k 1 b b Lemma 2. Let r(x) = bj 3B = j=1 rj 1x?Cj then S(r) ? j=1 mj ? m   1/3  1 b 3B 2m ?(C) . bj ?R bj , Corollary 2 applied to the empirical bid distribution in Cj yields Sbj ? Proof. Let Sbj = B m 1/3 2/3 b (3Bj ) ? bj . Multiplying by mj , summing over all clusters and using H?older?s inequality gives: k k k k  X 3m 1/3  X m 2/3 1 X 1 X b 1/3 2/3 j b j b S(r) = mj Sj ? (3Bj ) ? bj mj ? Bj ? bj . m j=1 m j=1 m m j=1 j=1 6 Clustering Algorithm b is fixed, we can find a function minimizing the In view of Lemma 2 and since the quantity B expected separation by finding a partition of X that minimizes the weighted variance ?(C) defined Section 4.1. From the definition of ?, this problem resembles a traditional k-means clustering problem with distance function d(xi , xi0 ) = (bi ?bi0 )2 . Thus, one could use one of several clustering algorithms to solve it. Nevertheless, in order to allocate a new point x ? X to a cluster, we would require access to the bid b which at evaluation time is unknown. 
Instead, we show how to utilize the predictions of h to define an almost optimal clustering of X . For any partition C = {C1 , . . . , Ck } of X define k s X X ?h (C) = j=1 (h(xi ) ? h(xi0 ))2 . i,i0 :xi ,xi0 ?Ck 1 Notice that 2m ?h (C) is the function minimized by Algorithm 1. The following lemma, proved in Appendix B, bounds the cluster variance achieved by clustering bids according to their predictions. 6 Pm 1 2 b2 , and let C ? denote the partition Lemma 3. Let h be a function such that m i=1 (h(xi ) ? bi ) ? ? h h ? that minimizes ?(C). If C minimizes ?h (C) then ?(C ) ? ?(C ) + 4mb ?. Pm 1 Corollary 3. Let rk be the output of Algorithm 1. If m j=1 (h(xi ) ? bi )2 ? ?b2 then: b k ) ? (3B) b 1/3 S(r  1 2/3 2/3 b 1/3 Big( 1 ?(C ? ) + 2b ?(C h ) ? (3B) ? . 2m 2m (3) Proof. It is easy to see that the elements Cjh of C h are of the form Cj = {x|tj ? h(x) ? tj+1 } for t ? Tk . Thus if rk is the hypothesis induced by the partition C h , then rk ? G(h, k). The result now follows by definition of ? and lemmas 2 and 3. The proof of Theorem 2 is now straightforward. Define a partition C by xi ? Cj if bi ? Since (bi ? bi0 )2 ? k12 for bi , bi0 ? Cj we have s k X m2j m = . ?(C) ? 2 k k j=1  j?1 k  , kj . (4) Furthermore since E[(h(x) ? b)2 ] ? ? 2 , Hoeffding?s inequality implies that with probability 1 ? ?: r m  1 X log 1/?  2 2 (h(xi ) ? bi ) ? ? + . (5) m i=1 2m In view of inequalities (4) and (5) as well as Corollary 3 we have: ! ! r r  1/2 2/3  1/2 2/3 1 1 log 1/? log 1/? 1/3 2 1/3 2 b k ) ? (3B) b b S(r ?(C)+2 ? + ? (3B) +2 ? + 2m 2m 2k 2m This completes the proof of the main result. To implement the algorithm, note that the problem of minimizing ?h (C) reduces to finding a partition t ? Tk such that the sum of the variances within the partitions is minimized. It is clear that it suffices to consider points tj in the set B = {h(x1 ), . . . , h(xm )}. With this observation, a simple dynamic program leads to a polynomial time algorithm with an O(km2 ) running time (see Appendix C). 7 Experiments We now compare the performance of our algorithm against the following baselines: 1. The offset algorithm presented in Section 3, where instead P of using the theoretical offset m ? 2/3 we find the optimal t maximizing the empirical revenue i=1 h(xi )?t)1h(xi )?t?bi . 2. The DC algorithm introduced by Mohri and Medina [2014], which represents the state of the art in learning a revenue optimal reserve price. Synthetic data. We begin by running experiments on synthetic data to demonstrate the regimes where each algorithm excels. We generate feature vectors xi ? R10 with coordinates sampled from a mixture of lognormal distributions with means ?1 = 0, ?2 = 1, variance ?1 = ?2 = 0.5 and mixture parameter p = 0.5. Let 1 ? Rd denote the vector with entries set to 1. Bids are generated according to two different scenarios: Linear Bids bi generated according to bi = max(x> i 1 + ?i , 0) where ?i is a Gaussian random variable with mean 0, and standard deviation ? ? {0.01, 0.1, 1.0, 2.0, 4.0}. Bimodal Bids bi generated according to the following rule: let si = max(x> i 1 + ?i , 0) if si > 30 then bi = 40 + ?i otherwise bi = si . Here ?i has the same distribution as ?i . The linear scenario demonstrates what happens when we have a good estimate of the bids. 
The bimodal scenario models a buyer, which for the most part will bid as a continuous function of features but that is interested in a particular set of objects (for instance retargeting buyers in online advertisement) for which she is willing to pay a much higher price. 7 (a) (b) (c) Figure 1: (a) Mean revenue of the three algorithms on the linear scenario. (b) Mean revenue of the three algorithms on the bimodal scenario. (c) Mean revenue on auction data. For each experiment we generated a training dataset Strain , a holdout set Sholdout and a test set Stest each with 16,000 examples. The function h used by RIC-h and the offset algorithm is found by training a linear regressor over Strain . For efficiency, we ran RIC-h algorithm on quantizations of predictions h(xi ). Quantized predictions belong to one of 1000 buckets over the interval [0, 50]. Finally, the choice of hyperparameters ? for the Lipchitz loss and k for the clustering algorithm was done by selecting the best performing parameter over the holdout set. Following the suggestions in [Mohri and Medina, 2014] we chose ? ? {0.001, 0.01, 0.1, 1.0} and k ? {2, 4, . . . , 24}. Figure 1(a),(b) shows the average revenue of the three approaches across 20 replicas of the experiment as a function of the log of ?. Revenue is normalized so that the DC algorithm revenue is 1.0 when ? = 0.01. The error bars at one standard deviation are indistinguishable in the plot. It is not surprising to see that in the linear scenario, the DC algorithm of [Mohri and Medina, 2014] and the offset algorithm outperform RIC-h under low noise conditions. Both algorithms will recover a solution close to the true weight vector 1. In this case the offset is minimal, thus recovering virtually all revenue. On the other hand, even if we set the optimal reserve price for every cluster, the inherent variance of each cluster makes us leave some revenue on the table. Nevertheless, notice that as the noise increases all three algorithms seem to achieve the same revenue. This is due to the fact that the variance in each cluster is comparable with the error in the prediction function h. The results are reversed for the bimodal scenario where RIC-h outperforms both algorithms under low noise. This is due to the fact that RIC-h recovers virtually all revenue obtained from high bids while the offset and DC algorithms must set conservative prices to avoid losing revenue from lower bids. Auction data. In practice, however, neither of the synthetic regimes is fully representative of the bidding patterns. In order to fully evaluate RIC-h, we collected auction bid data from AdExchange for 4 different publisher-advertiser pairs. For each pair we sampled 100,000 examples with a set of discrete and continuous features. The final feature vectors are in Rd for d ? [100, 200] depending on the publisher-buyer pair. For each experiment, we extract a random training sample of 20,0000 points as well as a holdout and test sample. We repeated this experiment 20 times and present the results on Figure 1 (c) where we have normalized the data so that the performance of the DC algorithm is always 1. The error bars represent one standard deviation from the mean revenue lift. Notice that our proposed algorithm achieves on average up to 30% improvement over the DC algorithm. Moreover, the simple offset strategy never outperforms the clustering algorithm, and in some cases achieves significantly less revenue. 
8 Conclusion

We provided a simple, scalable reduction of the problem of revenue optimization with side information to the well-studied problem of minimizing the squared loss. Our reduction provides the first polynomial time algorithm with a quantifiable bound on the achieved revenue. In the analysis of our algorithm we also provided the first variance-dependent lower bound on the revenue attained by setting optimal monopoly prices. Finally, we provided extensive empirical evidence of the advantages of RIC-h over the current state of the art.

References

Nicolò Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. Regret minimization for reserve prices in second-price auctions. IEEE Trans. Information Theory, 61(1):549–564, 2015.

Shuchi Chawla, Jason D. Hartline, and Robert D. Kleinberg. Algorithmic pricing via virtual valuations. In Proceedings of the 8th ACM Conference on Electronic Commerce (EC-2007), San Diego, California, USA, June 11-15, 2007, pages 243–251, 2007. doi: 10.1145/1250910.1250946.

Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. CoRR, abs/1502.00963, 2015.

Nikhil R. Devanur, Zhiyi Huang, and Christos-Alexandros Psomas. The sample complexity of auctions with side information. In Proceedings of STOC, pages 426–439, 2016.

Peerapong Dhangwatnotai, Tim Roughgarden, and Qiqi Yan. Revenue maximization with a single sample. Games and Economic Behavior, 91:318–333, 2015.

Andrew V. Goldberg, Jason D. Hartline, and Andrew Wright. Competitive auctions and digital goods. In Proceedings of the Twelfth Annual Symposium on Discrete Algorithms, January 7-9, 2001, Washington, DC, USA, pages 735–744, 2001.

Jason D. Hartline and Tim Roughgarden. Simple versus optimal mechanisms. In Proceedings of the 10th ACM Conference on Electronic Commerce (EC-2009), Stanford, California, USA, July 6-10, 2009, pages 225–234, 2009.

Robert D. Kleinberg and Frank Thomson Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Proceedings of FOCS, pages 594–605, 2003.

Renato Paes Leme, Martin Pál, and Sergei Vassilvitskii. A field guide to personalized reserve prices. In Proceedings of the 25th International Conference on World Wide Web, WWW 2016, Montreal, Canada, April 11-15, 2016, pages 1093–1102, 2016. doi: 10.1145/2872427.2883071.

Mehryar Mohri and Andrés Muñoz Medina. Learning theory and algorithms for revenue optimization in second-price auctions with reserve. In Proceedings of ICML, pages 262–270, 2014.

Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. The MIT Press, 2012. ISBN 026201825X, 9780262018258.

Jamie Morgenstern and Tim Roughgarden. On the pseudo-dimension of nearly optimal auctions. In Proceedings of NIPS, pages 136–144, 2015.

Jamie Morgenstern and Tim Roughgarden. Learning simple auctions. In Proceedings of COLT, pages 1298–1318, 2016.

R. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58–73, 1981.

Tim Roughgarden and Joshua R. Wang. Minimizing regret with multiple reserves. In Proceedings of the 2016 ACM Conference on Economics and Computation, EC '16, Maastricht, The Netherlands, July 24-28, 2016, pages 601–616, 2016. doi: 10.1145/2940716.2940792.

Maja R. Rudolph, Joseph G. Ellis, and David M. Blei. Objective variables for probabilistic revenue maximization in second-price auctions with reserve. In Proceedings of WWW 2016, pages 1113–1122, 2016.
Solving Most Systems of Random Quadratic Equations

Gang Wang, Georgios B. Giannakis, Yousef Saad, Jie Chen
Key Lab of Intell. Contr. and Decision of Complex Syst., Beijing Inst. of Technology
Digital Tech. Center & Dept. of Electrical and Computer Eng., Univ. of Minnesota
Department of Computer Science and Engineering, Univ. of Minnesota
{gangwang, georgios, saad}@umn.edu; [email protected]

Abstract

This paper deals with finding an n-dimensional solution x to a system of quadratic equations $y_i = |\langle a_i, x\rangle|^2$, $1 \le i \le m$, which in general is known to be NP-hard. We put forth a novel procedure that starts with a weighted maximal correlation initialization obtainable with a few power iterations, followed by successive refinements based on iteratively reweighted gradient-type iterations. The novel techniques distinguish themselves from prior works by the inclusion of a fresh (re)weighting regularization. For certain random measurement models, the proposed procedure returns the true solution x with high probability in time proportional to reading the data $\{(a_i; y_i)\}_{1\le i\le m}$, provided that the number m of equations is some constant $c > 0$ times the number n of unknowns, that is, $m \ge cn$. Empirically, the upshots of this contribution are: i) perfect signal recovery in the high-dimensional regime given only an information-theoretic limit number of equations; and, ii) (near-)optimal statistical accuracy in the presence of additive noise. Extensive numerical tests using both synthetic data and real images corroborate its improved signal recovery performance and computational efficiency relative to state-of-the-art approaches.

1 Introduction

One is often faced with solving quadratic equations of the form
$$y_i = |\langle a_i, x\rangle|^2, \quad \text{or equivalently,} \quad \psi_i = |\langle a_i, x\rangle|, \quad 1 \le i \le m \quad (1)$$
where $x \in \mathbb{R}^n/\mathbb{C}^n$ (hereafter, the symbol $A/B$ denotes either A or B) is the wanted unknown $n \times 1$ vector, while given are observations $\psi_i$ and feature vectors $a_i \in \mathbb{R}^n/\mathbb{C}^n$ that are collectively stacked in the data vector $\psi := [\psi_i]_{1\le i\le m}$ and the $m \times n$ sensing matrix $A := [a_i]_{1\le i\le m}$, respectively. Put differently, given information about the (squared) modulus of the inner products of the signal vector x and several known design vectors $a_i$, can one reconstruct exactly (up to a global phase factor) x, or alternatively, the missing phase of $\langle a_i, x\rangle$? In fact, much effort has been devoted to determining the number of such equations necessary and/or sufficient for the uniqueness of the solution x; see e.g., [1, 8]. It has been proved that $m \ge 2n - 1$ ($m \ge 4n - 4$) generic¹ (which includes the case of random vectors) real (complex) vectors $a_i$ are sufficient for uniquely determining an n-dimensional real (complex) vector x [1, Theorem 2.8], [8], while in the real case $m = 2n - 1$ is shown also necessary [1]. In this sense, the number $m = 2n - 1$ of equations as in (1) can be regarded as the information-theoretic limit for such a quadratic system to be uniquely solvable.

¹ It is out of the scope of the present paper to explain the meaning of generic vectors; interested readers are referred to [1].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In diverse physical sciences and engineering fields, it is impossible or very difficult to record phase measurements. The problem of recovering the signal or phase from magnitude measurements only, also commonly known as phase retrieval, emerges naturally [10, 11].
Relevant application domains include e.g., X-ray crystallography, astronomy, microscopy, ptychography, and coherent diffraction imaging [21]. In such setups, optical measurement and detection systems record solely the photon flux, which is proportional to the (squared) magnitude of the field, but not the phase. Problem (1) in its squared form, on the other hand, can be readily recast as an instance of nonconvex quadratically constrained quadratic programming, which subsumes as special cases several well-known combinatorial optimization problems involving Boolean variables, e.g., the NP-complete stone problem [2, Sec. 3.4.1]. A related task of this kind is that of estimating the mixture of linear regressions, where the latent membership indicators can be converted into the missing phases [29]. Although of simple form and practical relevance across different fields, solving systems of nonlinear equations is arguably the most difficult problem in all of numerical computation [19, Page 355].

Notation: Lower- (upper-) case boldface letters denote vectors (matrices), e.g., $a \in \mathbb{R}^n$ ($A \in \mathbb{R}^{m\times n}$). Calligraphic letters are reserved for sets. The floor operation $\lfloor c\rfloor$ gives the largest integer no greater than the given real quantity $c > 0$, the cardinality $|S|$ counts the number of elements in set S, and $\|x\|$ denotes the Euclidean norm of x. Since for any phase $\phi \in \mathbb{R}$, vectors $x \in \mathbb{C}^n$ and $e^{j\phi}x$ are indistinguishable given $\{\psi_i\}$ in (1), let $\mathrm{dist}(z, x) := \min_{\phi\in[0,2\pi)} \|z - xe^{j\phi}\|$ be the Euclidean distance of any estimate $z \in \mathbb{C}^n$ to the solution set $\{e^{j\phi}x\}_{0\le\phi<2\pi}$ of (1); in particular, $\phi = 0/\pi$ in the real case.

1.1 Prior contributions

Following the least-squares (LS) criterion (which coincides with the maximum likelihood (ML) one assuming additive white Gaussian noise), the problem of solving quadratic equations can be naturally recast as an empirical loss minimization
$$\underset{z\in\mathbb{R}^n/\mathbb{C}^n}{\text{minimize}}\ L(z) := \frac{1}{m}\sum_{i=1}^m \ell(z; \psi_i/y_i) \quad (2)$$
where one can choose to work with the amplitude-based loss $\ell(z;\psi_i) := (\psi_i - |\langle a_i, z\rangle|)^2/2$ [28, 30], or the intensity-based one $\ell(z;y_i) := (y_i - |\langle a_i, z\rangle|^2)^2/2$ [3], and its related Poisson likelihood $\ell(z;y_i) := y_i\log(|\langle a_i, z\rangle|^2) - |\langle a_i, z\rangle|^2$ [7]. Either way, the objective functional L(z) is nonconvex; hence, it is generally NP-hard and computationally intractable to compute the ML or LS estimate.

Minimizing the squared modulus-based LS loss in (2), several numerical polynomial-time algorithms have been devised via convex programming for certain choices of design vectors $a_i$ [4, 25]. Such convex paradigms first rely on the matrix-lifting technique to express all squared modulus terms as linear ones in a new rank-1 matrix variable, followed by solving a convex semi-definite program (SDP) after dropping the rank constraint. It has been established that perfect recovery and (near-)optimal statistical accuracy are achieved in noiseless and noisy settings respectively with an optimal-order number of measurements [4]. In terms of computational efficiency however, such lifting-based convex approaches entail storing and solving for an $n\times n$ semi-definite matrix from m general SDP constraints, whose worst-case computational complexity scales as $n^{4.5}\log(1/\epsilon)$ for $m \approx n$ [25], which is not scalable. Another recent line of convex relaxation [12], [13] reformulated the problem of phase retrieval as that of sparse signal recovery, and solved a linear program in the natural parameter vector domain. Although exact signal recovery can be established assuming an accurate enough anchor vector, its empirical performance is in general not competitive with state-of-the-art phase retrieval approaches.
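For illustration, the three candidate losses in (2) can be written compactly. The following is a NumPy sketch of ours for the real case; note that the Poisson expression above is a log-likelihood (to be maximized), so the sketch negates it to obtain a quantity to minimize.

import numpy as np

def amplitude_loss(z, A, psi):
    # (1/m) sum_i (psi_i - |a_i^T z|)^2 / 2
    return 0.5 * np.mean((psi - np.abs(A @ z)) ** 2)

def intensity_loss(z, A, y):
    # (1/m) sum_i (y_i - |a_i^T z|^2)^2 / 2
    return 0.5 * np.mean((y - (A @ z) ** 2) ** 2)

def neg_poisson_loglik(z, A, y, eps=1e-12):
    # negated Poisson log-likelihood: (1/m) sum_i |a_i^T z|^2 - y_i log|a_i^T z|^2
    ip2 = (A @ z) ** 2 + eps  # eps guards against log(0)
    return np.mean(ip2 - y * np.log(ip2))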
Recent proposals advocate suitably initialized iterative procedures for coping with certain nonconvex formulations directly; see e.g., algorithms abbreviated as AltMinPhase, (R/P)WF, (M)TWF, (S)TAF [16, 3, 7, 26, 28, 27, 30, 22, 6, 24], as well as a prox-linear algorithm [9]. These nonconvex approaches operate directly upon vector optimization variables, thus leading to significant computational advantages over their convex counterparts. With random features, they can be interpreted as performing stochastic optimization over acquired examples $\{(a_i;\psi_i/y_i)\}_{1\le i\le m}$ to approximately minimize the population risk functional $L(z) := \mathbb{E}_{(a_i,\psi_i/y_i)}[\ell(z;\psi_i/y_i)]$. It is well documented that minimizing nonconvex functionals is generally intractable due to the existence of multiple critical points [17]. Assuming Gaussian sensing vectors however, such nonconvex paradigms can provably locate the global optimum, and several of them also achieve optimal (statistical) guarantees. Specifically, starting with a judiciously designed initial guess, successive improvement is effected by means of a sequence of (truncated) (generalized) gradient-type iterations given by
$$z^{t+1} := z^t - \frac{\mu_t}{m}\sum_{i\in\mathcal{T}^{t+1}} \nabla\ell(z^t; \psi_i/y_i), \quad t = 0, 1, \ldots \quad (3)$$
where $z^t$ denotes the estimate returned by the algorithm at the t-th iteration, $\mu_t > 0$ is a learning rate that can be pre-selected or found via e.g., the backtracking line search strategy, and $\nabla\ell(z^t;\psi_i/y_i)$ represents the (generalized) gradient of the modulus- or squared modulus-based LS loss evaluated at $z^t$. Here, $\mathcal{T}^{t+1}$ denotes some time-varying index set signifying the per-iteration gradient truncation (a minimal sketch of one such iteration is given at the end of this subsection).

Although they achieve optimal statistical guarantees in both noiseless and noisy settings, state-of-the-art (convex and nonconvex) approaches studied under Gaussian designs empirically require, for stable recovery, a number of equations (several) times larger than the information-theoretic limit [7, 3, 30]. As a matter of fact, when there are sufficiently many measurements (on the order of n up to some polylog factors), the squared modulus-based LS functional admits benign geometric structure in the sense that [23]: i) all local minimizers are also global; and, ii) there always exists a negative directional curvature at every saddle point. In a nutshell, the grand challenge of tackling systems of random quadratic equations remains to develop algorithms capable of achieving perfect recovery and statistical accuracy when the number of measurements approaches the information limit.
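As a concrete (and deliberately generic) illustration of an iteration of the form (3) for the amplitude-based loss, consider the sketch below. The threshold rule, step size, and truncation constant are illustrative placeholders of ours, not the tuned choices of any particular algorithm.

import numpy as np

def truncated_gradient_step(z, A, psi, mu=0.6, gamma=0.7):
    """One iteration of the form (3) for the amplitude loss (real case).

    Keeps only indices whose ratio |a_i^T z| / psi_i exceeds gamma,
    a generic stand-in for the algorithm-specific truncation sets T^{t+1}.
    """
    m = len(psi)
    ip = A @ z                           # inner products a_i^T z
    keep = np.abs(ip) >= gamma * psi     # truncation set T^{t+1}
    # Generalized gradient of (psi_i - |a_i^T z|)^2 / 2 is
    # (a_i^T z - psi_i * sign(a_i^T z)) * a_i.
    resid = ip[keep] - psi[keep] * np.sign(ip[keep])
    return z - mu * (A[keep].T @ resid) / m

Algorithm-specific variants differ mainly in how the index set is constructed and in whether the summands are weighted; the reweighting of RAF described next replaces this hard 0/1 membership with soft weights.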
1.2 This work

Building upon but going beyond the scope of the aforementioned nonconvex paradigms, the present paper puts forward a novel iterative linear-time scheme, namely, one whose running time is proportional to that required by the processor to scan all the data $\{(a_i;\psi_i)\}_{1\le i\le m}$, that we term reweighted amplitude flow, and henceforth abbreviate as RAF. Our methodology is capable of solving noiseless random quadratic equations exactly, and of yielding an estimate of (near-)optimal statistical accuracy from noisy modulus observations. Exactness and accuracy hold with high probability and without extra assumptions on the unknown signal vector x, provided that the ratio m/n of the number of equations to that of the unknowns is larger than a certain constant. Empirically, our approach is shown able to ensure exact recovery of high-dimensional unstructured signals given a minimal number of equations, where m/n in the real case can be as small as 2. The new twist here is to leverage judiciously designed yet conceptually simple (re)weighting regularization techniques to enhance existing initializations and also gradient refinements. An informal depiction of our RAF methodology is given in two stages as follows, with rigorous details deferred to Section 3:

S1) Weighted maximal correlation initialization: Obtain an initializer $z^0$ maximally correlated with a carefully selected subset $S \subsetneq M := \{1, 2, \ldots, m\}$ of feature vectors $a_i$, whose contributions toward constructing $z^0$ are judiciously weighted by suitable parameters $\{w_i^0 > 0\}_{i\in S}$.

S2) Iteratively reweighted "gradient-like" iterations: Loop over $0 \le t \le T$:
$$z^{t+1} = z^t - \frac{\mu_t}{m}\sum_{i=1}^m w_i^t\,\nabla\ell(z^t;\psi_i) \quad (4)$$
for some time-varying weighting parameters $\{w_i^t \ge 0\}$, each possibly relying on the current iterate $z^t$ and the datum $(a_i;\psi_i)$.

Two attributes of the novel approach are worth highlighting. First, albeit being a variant of the spectral initialization devised in [28], the initialization here [cf. S1)] is distinct in the sense that a different importance is attached to each selected datum $(a_i;\psi_i)$. Likewise, the gradient flow [cf. S2)] weighs judiciously the search direction suggested by each datum $(a_i;\psi_i)$. In this manner, more robust initializations and more stable overall search directions can be constructed even based solely on a rather limited number of data samples. Moreover, with particular choices of the weights $w_i^t$ (e.g., taking 0/1 values), the developed methodology subsumes as special cases the recently proposed algorithms RWF [30] and TAF [28].

2 Algorithm: Reweighted Amplitude Flow

This section explains the intuition and basic principles behind each stage of the advocated RAF algorithm in detail. For analytical concreteness, we focus on the real Gaussian model with $x \in \mathbb{R}^n$,
Leveraging the strong law of large numbers and the rotational invariance of Gaussian ai vectors (the latter suffices to assume x = kxke1 , with e1 being the first canonical vector in Rn ), it is clear that m m m  2  1 X 1 X 2 1 X ?i = hai , kxke1 i = a2i,1 kxk2 ? kxk2 (5) m i=1 m i=1 m i=1 Pm 2 ? whereby kxk can be estimated to be P i=1 i /m. This estimate proves very accurate even with a 2 m limited number of data samples because i=1 ai,1/m is unbiased and tightly concentrated. The challenge thus lies in accurately estimating the direction of x, or seeking a unit vector maximally aligned with x. Toward this end, let us first present a variant of the initialization in [28]. Note that the larger the modulus ?i of the inner-product between ai and x is, the known design vector ai is deemed more correlated to the unknown solution x, hence bearing useful directional information of x. Inspired by this fact and having available data {(ai ; ?i )}1?i?m , one can sort all (absolute) correlation coefficients {?i }1?i?m in an ascending order, yielding ordered coefficients 0 < ?[m] ? ? ? ? ? ?[2] ? ?[1] . Sorting m records takes time proportional to O(m log m).2 Let S $ M denote the set of selected feature vectors ai to be used for computing the initialization, which is to be designed next. Fix a priori the cardinality |S| to some integer on the order of m, say, |S| := b3m/13c. It is then natural to define S to collect the ai vectors that correspond to one of the largest |S| correlation coefficients {?[i] }1?i?|S| , each of which can be thought of as pointing to (roughly) the direction of x. Approximating the direction of x therefore boils down to finding a vector to maximize its correlation with the subset S of selected directional vectors ai . Succinctly, the wanted approximation vector can be efficiently found as the solution of  1 X  2 1 X maximize hai , zi = z ? ai a?i z (6) |S| |S| kzk=1 i?S i?S ? where the superscript represents the transpose or the conjugate transpose that will be P clear from 2 m the context. Upon scaling the unity-norm solution of (6) by the norm estimate obtained i=1 ?i /m in (5), to match the magnitude of x, we will develop what we will henceforth refer to as maximal correlation initialization. As long as |S| is chosen on the order of m, the maximal correlation method outperforms the spectral ones in [3, 16, 7], and has comparable performance to the orthogonality-promoting method [28]. Its performance around the information-limit however, is still not the best that we can hope for. Recall from (6) that all selected directional vectors {ai }i?S are treated the same in terms of their contributions to constructing the initialization. Nevertheless, according to our starting principle, this ordering information carried by the selected ai vectors is not exploited by the initialization scheme in (6) and [28]. In other words, if for i, j ? S, the correlation coefficient of ?i with ai is larger 2 f (m) = O(g(m)) means that there exists a constant C > 0 such that |f (m)| ? C|g(m)|. 4 than that of ?j with aj , then ai is deemed more correlated (with x) than aj is, hence bearing more useful information about the direction of x. It is thus prudent to weigh more the selected ai vectors associated with larger ?i values. Given the ordering information ?[|S|] ? ? ? ? ? ?[2] ? ?[1] available from the sorting procedure, a natural way to achieve this goal is weighting each ai vector with simple monotonically increasing functions of ?i , say e.g., taking the weights wi0 := ?i? , ?i ? 
S with the 0 0 0 exponent parameter ? ? 0 chosen to maintain the wanted ordering w|S| ? ? ? ? ? w[2] ? w[1] . In a nutshell, a more flexible initialization strategy, that we refer to as weighted maximal correlation, can be summarized as follows  1 X  ?0 := arg max z ? z ?i? ai a?i z. (7) |S| kzk=1 i?S For any given  > 0, the power method or the Lanczos algorithm can be called for to find an -accurate solution to (7) in time proportional to O(n|S|) [20], assuming P a positive eigengap between the largest and the second largest eigenvalues of the matrix (1/|S|) i?S ?i? ai a?i , which is often true when {ai } are sampled from continuous distribution. The proposed P initialization can be obtained upon ?i2/m)? ?0 from (7) by the norm estimate in (5), to yield z0 := ( m scaling z z0 . By default, we take i=1 p 0 1 ? := /2 in all reported numerical implementations, yielding wi := |hai , xi| for all i ? S. Regarding the initialization procedure in (7), we next highlight two features, whereas technical details and theoretical performance guarantees are provided in Section 3: F1) The weights {wi0 } in the maximal correlation scheme enable leveraging useful information that each feature vector ai may bear regarding the direction of x. F2) Taking wi0 := ?i? for all i ? S and 0 otherwise, problem (7) can be equivalently rewritten as m 1 X  ?0 := arg max z ? z wi0 ai a?i z (8) m i=1 kzk=1 which subsumes previous initialization schemes with particular selections of weights {wi0 }. For instance, the spectral initialization in [16, 3] is recovered by choosing S := M, and wi0 := ?i2 for all 1 ? i ? m. For comparison, define dist(z, x) 1.5 Reweight. max. correlation . Spectral initialization kxk Trunc. spectral in TWF 1.4 Orthogonality promoting Throughout the paper, all simulated results were Trunc. spectral in RWF averaged over 100 Monte Carlo (MC) realizations, 1.3 and each simulated scheme was implemented with their pertinent default parameters. Figure 1 eval1.2 uates the performance of the developed initializa1.1 tion relative to several state-of-the-art strategies, and also with the information limit number of 1 data benchmarking the minimal number of samples required. It is clear that our initialization 0.9 is: i) consistently better than the state-of-the-art; 1,000 2,000 3,000 4,000 5,000 n: signal dimension (m=2n-1) and, ii) stable as n grows, which is in contrast to the instability encountered by the spectral ones [16, 3, 7, 30]. It is worth stressing that the more Figure 1: Relative initialization error for i.i.d. than 5% empirical advantage (relative to the best) ai ? N (0, I1,000 ), 1 ? i ? 1, 999. at the challenging information-theoretic benchmark is nontrivial, and is one of the main RAF upshots. This advantage becomes increasingly pronounced as the ratio m/n grows. Relative error Relative error := 2.2 Iteratively reweighted gradient flow For independent data obeying the real Gaussian model, the direction that TAF moves along in stage S2) presented earlier is given by the following (generalized) gradient [28]: 1 X 1 X ? a? z  ?`(z; ?i ) = ai z ? ?i i? ai (9) m m |ai z| i?T i?T 5 where the dependence on the iterate count t is neglected for notational brevity, and the convention ? ? a? i z/|ai z| := 0 is adopted when a z = 0. i Unfortunately, the (negative) gradient of the average in (9) generally may not point towards the true solution x unless the current iterate z is already very close to x. Therefore, moving along such a descent direction may not drag z closer to x. 
2.2 Iteratively reweighted gradient flow

For independent data obeying the real Gaussian model, the direction that TAF moves along in stage S2) presented earlier is given by the following (generalized) gradient [28]:
$$\frac{1}{m}\sum_{i\in T}\nabla\ell(z;\psi_i) = \frac{1}{m}\sum_{i\in T}\Big(a_i^* z - \psi_i\frac{a_i^* z}{|a_i^* z|}\Big)a_i \quad (9)$$
where the dependence on the iterate count t is neglected for notational brevity, and the convention $a_i^* z/|a_i^* z| := 0$ is adopted when $a_i^* z = 0$.

Unfortunately, the (negative) gradient of the average in (9) generally may not point towards the true solution x unless the current iterate z is already very close to x. Therefore, moving along such a descent direction may not drag z closer to x. To see this, consider an initial guess $z^0$ that has already landed in a basin of attraction (i.e., a region within which there is only a unique stationary point) of x. Certainly, there are summands $\big(a_i^* z - \psi_i\frac{a_i^* z}{|a_i^* z|}\big)a_i$ in (9) that could give rise to "bad/misleading" gradient directions due to the erroneously estimated signs $\frac{a_i^* z}{|a_i^* z|} \ne \frac{a_i^* x}{|a_i^* x|}$ [28], or $(a_i^* z)(a_i^* x) < 0$ [30]. Those gradients as a whole may drag z away from x, and hence out of the basin of attraction. Such an effect becomes increasingly severe as m approaches the information-theoretic limit of 2n − 1, thus rendering past approaches less effective in this regime. Although this issue is somewhat remedied by TAF with a truncation procedure, its efficacy is limited due to misses of bad gradients and mis-rejections of meaningful ones around the information limit.

To address this challenge, reweighted amplitude flow adopts suitable gradient directions from all data samples $\{(a_i;\psi_i)\}_{1\le i\le m}$ in a (timely) adaptive fashion, namely by introducing appropriate weights for all gradients to yield the update
$$z^{t+1} = z^t - \mu_t\nabla\ell_{\mathrm{rw}}(z^t), \quad t = 0, 1, \ldots \quad (10)$$
The reweighted gradient $\nabla\ell_{\mathrm{rw}}(z^t)$ evaluated at the current point $z^t$ is given as
$$\nabla\ell_{\mathrm{rw}}(z) := \frac{1}{m}\sum_{i=1}^m w_i\nabla\ell(z;\psi_i) \quad (11)$$
for suitable weights $\{w_i\}_{1\le i\le m}$ to be designed next. To that end, we observe that the truncation criterion [28]
$$T := \Big\{1 \le i \le m : \frac{|a_i^* z|}{|a_i^* x|} \ge \frac{1}{1+\gamma}\Big\} \quad (12)$$
with some given parameter $\gamma > 0$, suggests including only gradient components associated with $|a_i^* z|/|a_i^* x|$ of relatively large size. This is because gradients of sizable $|a_i^* z|/|a_i^* x|$ offer reliable and meaningful directions pointing to the truth x with large probability [28]. As such, the ratio $|a_i^* z|/|a_i^* x|$ can be viewed as a confidence score for the reliability or meaningfulness of the corresponding gradient $\nabla\ell(z;\psi_i)$. Recognizing that confidence can vary, it is natural to distinguish the contributions that different gradients make to the overall search direction. An easy way is to attach large weights to the reliable gradients, and small weights to the spurious ones. Assume without loss of generality that $0 \le w_i \le 1$ for all $1 \le i \le m$; otherwise, lump the normalization factor achieving this into the learning rate $\mu_t$. Building upon this observation and leveraging the gradient reliability confidence score $|a_i^* z|/|a_i^* x|$, the weight per gradient $\nabla\ell(z;\psi_i)$ in RAF is designed to be
$$w_i := \frac{1}{1 + \beta_i\big/\big(|a_i^* z|/|a_i^* x|\big)}, \quad 1 \le i \le m \quad (13)$$
in which $\{\beta_i > 0\}_{1\le i\le m}$ are some pre-selected parameters.

Regarding the proposed weighting criterion in (13), three remarks are in order, followed by the RAF algorithm summarized in Algorithm 1.

R1) The weights $\{w_i^t\}_{1\le i\le m}$ are time adapted to $z^t$. One can also interpret the reweighted gradient flow in (10) as performing a single gradient step to minimize the smooth reweighted loss $\frac{1}{m}\sum_{i=1}^m w_i^t\ell(z;\psi_i)$ with starting point $z^t$; see also [4] for related ideas successfully exploited in the iteratively reweighted least-squares approach to compressive sampling.

R2) Note that the larger $|a_i^* z|/|a_i^* x|$ is, the larger $w_i$ will be. More importance is attached to reliable gradients than to spurious ones. Gradients from almost all data points are judiciously accounted for, which is in sharp contrast to [28], where withdrawn gradients do not contribute the information they carry.

R3) At points z where $a_i^* z = 0$ for certain $i \in M$, the corresponding weight will be $w_i = 0$.
That is, the losses $\ell(z;\psi_i)$ in (2) that are nonsmooth at a point z will be eliminated, to prevent their contribution to the reweighted gradient update in (10). Hence, the convergence analysis of RAF can be considerably simplified, because it does not have to cope with the nonsmoothness of the objective function in (2).

2.3 Algorithmic parameters

To optimize the empirical performance and facilitate numerical implementations, the choice of pertinent algorithmic parameters of RAF is discussed separately here. The RAF algorithm entails four parameters. Our theory and all experiments are based on: i) $|S|/m \approx 0.25$; ii) $0 \le \beta_i \le 10$ for all $1 \le i \le m$; and, iii) $0 \le \alpha \le 1$. For convenience, a constant step size $\mu_t \equiv \mu > 0$ is suggested, but other step size rules such as backtracking line search with the reweighted objective work as well. As will be formalized in Section 3, RAF converges if the constant $\mu$ is not too large, with the upper bound depending in part on the selection of $\{\beta_i\}_{1\le i\le m}$. In the numerical tests presented in Sections 2 and 4, we take
$$|S| := \lfloor 3m/13\rfloor, \quad \beta_i \equiv \beta := 10, \quad \alpha := 0.5, \quad \text{and} \quad \mu := 2 \quad (14)$$
while larger step sizes $\mu > 0$ can be afforded for larger m/n values.

Algorithm 1 Reweighted Amplitude Flow
1: Input: Data $\{(a_i;\psi_i)\}_{1\le i\le m}$; maximum number of iterations T; step size $\mu_t = 2/6$ and weighting parameter $\beta_i = 10/5$ for the real/complex Gaussian model; $|S| = \lfloor 3m/13\rfloor$, and $\alpha = 0.5$.
2: Construct S to include the indices associated with the $|S|$ largest entries among $\{\psi_i\}_{1\le i\le m}$.
3: Initialize $z^0 := \sqrt{\sum_{i=1}^m \psi_i^2/m}\;\tilde{z}^0$, with $\tilde{z}^0$ being the unit principal eigenvector of
$$Y := \frac{1}{m}\sum_{i=1}^m w_i^0\, a_i a_i^*, \quad \text{where } w_i^0 := \begin{cases}\psi_i^\alpha, & i \in S \subseteq M\\ 0, & \text{otherwise}\end{cases} \quad \text{for all } 1 \le i \le m. \quad (15)$$
4: Loop: for t = 0 to T − 1
$$z^{t+1} = z^t - \frac{\mu_t}{m}\sum_{i=1}^m w_i^t\Big(a_i^* z^t - \psi_i\frac{a_i^* z^t}{|a_i^* z^t|}\Big)a_i, \quad \text{where } w_i^t := \frac{|a_i^* z^t|/\psi_i}{|a_i^* z^t|/\psi_i + \beta_i} \quad (16)$$
for all $1 \le i \le m$.
5: Output: $z^T$.
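Step 4 of Algorithm 1 is equally compact in code. The sketch below covers the real Gaussian case; it is illustrative Python of ours, not the authors' released MATLAB implementation. The values μ = 2 and β = 10 follow (14).

import numpy as np

def raf_refine(z, A, psi, n_iter=2000, mu=2.0, beta=10.0):
    """Reweighted gradient refinements, cf. Step 4 of Algorithm 1 (real case)."""
    m = len(psi)
    for _ in range(n_iter):
        ip = A @ z                      # a_i^T z^t
        ratio = np.abs(ip) / psi        # confidence scores, cf. (13)
        w = ratio / (ratio + beta)      # weights w_i^t of (16)
        sign = np.sign(ip)              # np.sign(0) = 0 matches the convention in (9)
        grad = A.T @ (w * (ip - psi * sign)) / m
        z = z - mu * grad
    return z

Together with the initializer sketched in Section 2.1, this reproduces the two-stage structure; each iteration costs two matrix-vector products, i.e., O(mn), consistent with the complexity discussion that follows.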
This in 7 conjunction with the per-iteration complexity O(mn) confirms that RAF solves exactly a quadratic system in time O(mn log 1/), which is linear in O(mn), the time required to read the entire data {(ai ; ?i )}1?i?m . Given the fact that the initialization stage can be performed in time O(n|S|) and |S| < m, the overall linear-time complexity of RAF is order-optimal. Proof of Theorem 1 is provided in the supplementary material. Simulated tests Our theoretical findings about RAF have been corroborated with comprehensive numerical tests, a sample of which are discussed next. Performance of RAF is evaluated relative to the state-of-the-art (T)WF, RWF, and TAF in terms of the empirical success rate among 100 MC trials, where a success will be declared for a trial if the returned estimate incurs error ? ? |Az T | ? 10?5 kxk 30 25 ! log10 L(zT ) 4 20 15 10 5 where the modulus operator | ? | is understood element-wise. The real Gaussian model and the 0 0 10 20 30 40 50 60 70 80 90 100 physically realizable CDPs were simulated in this section. For fairness, all schemes were impleRealization number mented with their suggested parameter values. The true signal vector x was randomly generated using T x ? N (0, I), and the i.i.d. sensing vectors ai Figure 2: Function value L(z ) by RAF for ai ? N (0, I). Each scheme obtained the initial 100 MC realizations when m = 2n ? 1. guess based on 200 power iterations, followed by a series of T = 2, 000 (truncated/reweighted) gradient iterations. All experiments were performed using MATLAB on an Intel CPU @ 3.4 GHz (32 GB RAM) computer. For reproducibility, the Matlab code of the RAF algorithm is publicly available at https://gangwg.github.io/RAF/. To demonstrate the power of RAF in the high-dimensional regime, the function value L(z) in (2) evaluated at the returned estimate z T for 100 independent trials is plotted (in negative logarithmic scale) in Fig. 2, where m = 2n ? 1 = 9, 999. It is self-evident that RAF succeeded in all trials even at this challenging information limit. To the best of our knowledge, RAF is the first algorithm that empirically recovers any solution exactly from a minimal number of random quadratic equations. Left panel in Fig. 3 further compares the empirical success rate of five schemes under the real Gaussian model with n = 1, 000 and m/n varying by 0.1 from 1 to 5. Evidently, the developed RAF achieves perfect recovery as soon as m is about 2n, where its competing alternatives do not work well. To demonstrate the stability and robustness of RAF in the presence of additive noise, the right panel in Fig. 3 depicts the normalized mean-square error NMSE := dist2 (z T , x) kxk2 as a function of the signal-to-noise ratio (SNR) for m/n taking values {3, 4, 5}. The noise model ?i = |hai , xi| + ?i , 1?i?m 2 with ? := [?i ]1?i?m ? N (0, ? Im ) was employed, where ? 2 was set such that certain SNR := 2 10 log10 (kAxk /m?2 ) values on the x-axis were achieved. To examine the efficacy and scalability of RAF in real-world conditions, the last experiment entails the Galaxy image 3 depicted by a three-way array X ? R1,080?1,920?3 , whose first two coordinates encode the pixel locations, and the third the RGB color bands. Consider the physically realizable CDP model with random masks [3]. Letting x ? Rn (n ? 2 ? 106 ) be a vectorization of a certain band of X, the CDP model with K masks is ? (k) = |F D (k) x|, 3 1 ? k ? K, Downloaded from http://pics-about-space.com/milky-way-galaxy. 
Figure 3: Real Gaussian model: empirical success rate of RAF, TAF, TWF, RWF, and WF versus m/n for $x \in \mathbb{R}^{1,000}$ (Left); and NMSE vs. SNR (dB) for m/n ∈ {3, 4, 5} (Right).

Implementing K = 4 masks, each algorithm performs, independently over each band, 100 power iterations for an initial guess, which was then refined by 100 gradient iterations. Recovered images of TAF (left) and RAF (right) are displayed in Fig. 4, whose relative errors were 1.0347 and 1.0715 × 10⁻³, respectively. WF and TWF returned images of corresponding relative errors 1.6870 and 1.4211, which are far away from the ground truth.

Figure 4: Recovered Galaxy images after 100 gradient iterations of TAF (Left); and of RAF (Right).

5 Conclusion

This paper developed a linear-time algorithm called RAF for solving systems of random quadratic equations. Our procedure consists of two stages: a weighted maximal correlation initializer attainable with a few power or Lanczos iterations, and a sequence of scalable reweighted gradient refinements for a nonconvex nonsmooth LS loss function. It was demonstrated that RAF achieves the optimal sample and computational complexity. Judicious numerical tests showcase its superior performance over state-of-the-art alternatives. Empirically, RAF solves a set of random quadratic equations with high probability so long as a unique solution exists. Promising extensions include studying robust and/or sparse phase retrieval and matrix recovery via (stochastic) reweighted amplitude flow counterparts, and in particular exploiting the power of (re)weighting regularization techniques to enable more general nonconvex optimization such as training deep neural networks [18].

Acknowledgments

G. Wang and G. B. Giannakis were partially supported by NSF grants 1500713 and 1514056. Y. Saad was partially supported by NSF grant 1505970. J. Chen was partially supported by the National Natural Science Foundation of China grants U1509215 and 61621063, and the Program for Changjiang Scholars and Innovative Research Team in University (IRT1208).

References

[1] R. Balan, P. Casazza, and D. Edidin, "On signal reconstruction without phase," Appl. Comput. Harmon. Anal., vol. 20, no. 3, pp. 345–356, May 2006.

[2] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. SIAM, 2001, vol. 2.

[3] E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval via Wirtinger flow: Theory and algorithms," IEEE Trans. Inf. Theory, vol. 61, no. 4, pp. 1985–2007, Apr. 2015.

[4] E. J. Candès, T. Strohmer, and V. Voroninski, "PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming," Appl. Comput. Harmon. Anal., vol. 66, no. 8, pp. 1241–1274, Nov. 2013.

[5] E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval from coded diffraction patterns," Appl. Comput. Harmon. Anal., vol. 39, no. 2, pp. 277–299, Sept. 2015.

[6] J. Chen, L. Wang, X. Zhang, and Q. Gu, "Robust Wirtinger flow for phase retrieval with arbitrary corruption," arXiv:1704.06256, 2017.

[7] Y. Chen and E. J.
Candès, "Solving random quadratic systems of equations is nearly as easy as solving linear systems," in Adv. on Neural Inf. Process. Syst., Montréal, Canada, 2015, pp. 739–747.

[8] A. Conca, D. Edidin, M. Hering, and C. Vinzant, "An algebraic characterization of injectivity in phase retrieval," Appl. Comput. Harmon. Anal., vol. 38, no. 2, pp. 346–356, Mar. 2015.

[9] J. C. Duchi and F. Ruan, "Solving (most) of a set of quadratic equalities: Composite optimization for robust phase retrieval," arXiv:1705.02356, 2017.

[10] J. R. Fienup, "Phase retrieval algorithms: A comparison," Appl. Opt., vol. 21, no. 15, pp. 2758–2769, Aug. 1982.

[11] R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction," Optik, vol. 35, pp. 237–246, Nov. 1972.

[12] T. Goldstein and S. Studer, "PhaseMax: Convex phase retrieval via basis pursuit," arXiv:1610.07531v1, 2016.

[13] P. Hand and V. Voroninski, "An elementary proof of convex phase retrieval in the natural parameter space via the linear program PhaseMax," arXiv:1611.03935, 2016.

[14] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Trans. Inf. Theory, vol. 56, no. 6, pp. 2980–2998, Jun. 2010.

[15] Y. M. Lu and G. Li, "Phase transitions of spectral initialization for high-dimensional nonconvex estimation," arXiv:1702.06435, 2017.

[16] P. Netrapalli, P. Jain, and S. Sanghavi, "Phase retrieval using alternating minimization," in Adv. on Neural Inf. Process. Syst., Stateline, NV, 2013, pp. 2796–2804.

[17] P. M. Pardalos and S. A. Vavasis, "Quadratic programming with one negative eigenvalue is NP-hard," J. Global Optim., vol. 1, no. 1, pp. 15–22, 1991.

[18] G. Pereyra, G. Tucker, J. Chorowski, Ł. Kaiser, and G. Hinton, "Regularizing neural networks by penalizing confident output distributions," arXiv:1701.06548, 2017.

[19] J. R. Rice, Numerical Methods in Software and Analysis. Academic Press, 1992.

[20] Y. Saad, Numerical Methods for Large Eigenvalue Problems: Revised Edition. SIAM, 2011.

[21] Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, "Phase retrieval with application to optical imaging: A contemporary overview," IEEE Signal Process. Mag., vol. 32, no. 3, pp. 87–109, May 2015.

[22] M. Soltanolkotabi, "Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization," arXiv:1702.06175, 2017.

[23] J. Sun, Q. Qu, and J. Wright, "A geometric analysis of phase retrieval," Found. Comput. Math., 2017 (to appear); see also arXiv:1602.06664, 2016.

[24] I. Waldspurger, "Phase retrieval with random Gaussian sensing vectors by alternating projections," arXiv:1609.03088, 2016.

[25] I. Waldspurger, A. d'Aspremont, and S. Mallat, "Phase recovery, MaxCut and complex semidefinite programming," Math. Program., vol. 149, no. 1, pp. 47–81, 2015.

[26] G. Wang and G. B. Giannakis, "Solving random systems of quadratic equations via truncated generalized gradient flow," in Adv. on Neural Inf. Process. Syst., Barcelona, Spain, 2016, pp. 568–576.

[27] G. Wang, G. B. Giannakis, and J. Chen, "Scalable solvers of random quadratic equations via stochastic truncated amplitude flow," IEEE Trans. Signal Process., vol. 65, no. 8, pp. 1961–1974, Apr. 2017.

[28] G. Wang, G. B. Giannakis, and Y. C. Eldar, "Solving systems of random quadratic equations via truncated amplitude flow," IEEE Trans. Inf. Theory, 2017 (to appear); see also arXiv:1605.08285, 2016.

[29] X. Yi, C. Caramanis, and S. Sanghavi, "Alternating minimization for mixed linear regression," in Proc. Intl. Conf. on Mach. Learn., Beijing, China, 2014, pp. 613–621.

[30] H. Zhang, Y. Zhou, Y. Liang, and Y. Chi, "Reshaped Wirtinger flow and incremental algorithm for solving quadratic system of equations," J. Mach. Learn. Res., 2017 (to appear); see also arXiv:1605.07719, 2016.
Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data

Wei-Ning Hsu, Yu Zhang, and James Glass
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology, Cambridge, MA 02139, USA
{wnhsu,yzhang87,glass}@csail.mit.edu

Abstract

We present a factorized hierarchical variational autoencoder, which learns disentangled and interpretable representations from sequential data without supervision. Specifically, we exploit the multi-scale nature of information in sequential data by formulating it explicitly within a factorized hierarchical graphical model that imposes sequence-dependent priors and sequence-independent priors on different sets of latent variables. The model is evaluated on two speech corpora to demonstrate, qualitatively, its ability to transform speakers or linguistic content by manipulating different sets of latent variables; and quantitatively, its ability to outperform an i-vector baseline for speaker verification and to reduce the word error rate by as much as 35% in mismatched train/test scenarios for automatic speech recognition tasks.

1 Introduction

Unsupervised learning is a powerful methodology that can leverage vast quantities of unannotated data in order to learn useful representations, which can then be incorporated into subsequent applications in either supervised or unsupervised fashion. One of the principal approaches to unsupervised learning is probabilistic generative modeling. Recently, there has been significant interest in three classes of deep probabilistic generative models: (1) variational autoencoders (VAEs) [23, 34, 22], (2) generative adversarial networks (GANs) [11], and (3) auto-regressive models [30, 39]; more recently, there are also studies combining multiple classes of models [6, 27, 26]. While GANs bypass any inference of latent variables, and auto-regressive models abstain from using latent variables, VAEs jointly learn an inference model and a generative model, allowing them to infer latent variables from observed data.

Despite successes with VAEs, understanding the underlying factors that the latent variables associate with is a major challenge. Some research focuses on the supervised or semi-supervised setting using VAEs [21, 17]. There is also research attempting to develop weakly supervised or unsupervised methods to learn disentangled representations, such as DC-IGN [25], InfoGAN [1], and β-VAE [13]. Yet another line of research analyzes the latent variables with labeled data after the model is trained [33, 15]. While there has been much research investigating static data, such as the aforementioned works, there is relatively little research on learning from sequential data [8, 3, 2, 9, 7, 18, 36]. Moreover, to the best of our knowledge, there has not been any attempt to learn disentangled and interpretable representations from sequential data without supervision. The information encoded in sequential data, such as speech, video, and text, is naturally multi-scaled; in speech, for example, information about the channel, speaker, and linguistic content is encoded in the statistics at the session, utterance, and segment levels, respectively. By leveraging this source of constraint, we can learn disentangled and interpretable factors in an unsupervised manner.
In this paper, we propose a novel factorized hierarchical variational autoencoder, which learns disentangled and interpretable latent representations from sequential data without supervision, by explicitly modeling the multi-scale information with a factorized hierarchical graphical model. The inference model is designed such that the model can be optimized at the segment level instead of at the sequence level, which may cause scalability issues when sequences become too long. A sequence-to-sequence neural network architecture is applied to better capture temporal relationships. We evaluate the proposed model on two speech datasets. Qualitatively, the model demonstrates an ability to factorize sequence-level and segment-level attributes into different sets of latent variables. Quantitatively, the model achieves 2.38% and 1.34% equal error rate on unsupervised and supervised speaker verification tasks respectively, which outperforms an i-vector baseline. On speech recognition tasks, it reduces the word error rate in mismatched train/test scenarios by up to 35%.

Figure 1: FHVAE (α = 0) decoding results of three combinations of latent segment variables z1 and latent sequence variables z2 from two utterances in Aurora-4: a clean one (top left) and a noisy one (bottom left). FHVAEs learn to encode local attributes, such as linguistic content, into z1, and global attributes, such as noise level, into z2. Therefore, by replacing z2 of a noisy utterance with z2 of a clean utterance, an FHVAE decodes a denoised utterance (middle right) that preserves the linguistic content. Reconstruction results of the clean and noisy utterances are also shown on the right. Audio samples are available at https://youtu.be/naJZITvCfI4.

The rest of the paper is organized as follows. In Section 2, we introduce our proposed model, and we describe the neural network architecture in Section 3. Experimental results are reported in Section 4. We discuss related work in Section 5, and conclude our work as well as discuss future research plans in Section 6. We have released the code for the model described in this paper at https://github.com/wnhsu/FactorizedHierarchicalVAE.

2 Factorized Hierarchical Variational Autoencoder

Generation of sequential data, such as speech, often involves multiple independent factors operating at different time scales. For instance, the speaker identity affects fundamental frequency (F0) and volume at the sequence level, while phonetic content only affects spectral contour and durations of formants at the segmental level. As a result of this multi-scale behavior, some attributes, such as F0 and volume, tend to have a smaller amount of variation within an utterance than between utterances, while other attributes, such as phonetic content, tend to have a similar amount of variation within and between utterances. We refer to the first type of attributes as sequence-level attributes, and to the other as segment-level attributes. In this work, we achieve disentanglement and interpretability by encoding the two types of attributes into latent sequence variables and latent segment variables respectively, where the former are regularized by a sequence-dependent prior and the latter by a sequence-independent prior.

We now formulate a generative process for speech and propose our factorized hierarchical variational autoencoder (FHVAE). Consider a dataset $\mathcal{D} = \{X^{(i)}\}_{i=1}^{M}$ consisting of $M$ i.i.d.
sequences, where $X^{(i)} = \{x^{(i,n)}\}_{n=1}^{N^{(i)}}$ is a sequence of $N^{(i)}$ observed variables. $N^{(i)}$ is referred to as the number of segments of the $i$-th sequence, and $x^{(i,n)}$ as the $n$-th segment of the $i$-th sequence. Note that a "segment" here refers to a variable of smaller temporal scale than the "sequence"; it is in fact a sub-sequence. We will drop the index $i$ whenever it is clear that we are referring to terms associated with a single sequence.

Figure 2: Graphical illustration of the proposed generative model (a) and inference model (b). Grey nodes denote the observed variables, and white nodes are the hidden variables.

We assume that each sequence $X$ is generated from a random process involving the latent variables $Z_1 = \{z_1^{(n)}\}_{n=1}^{N}$, $Z_2 = \{z_2^{(n)}\}_{n=1}^{N}$, and $\mu_2$. The generation process illustrated in Figure 2(a) is as follows: (1) an s-vector $\mu_2$ is drawn from a prior distribution $p_\theta(\mu_2)$; (2) $N$ i.i.d. latent sequence variables $\{z_2^{(n)}\}_{n=1}^{N}$ and latent segment variables $\{z_1^{(n)}\}_{n=1}^{N}$ are drawn from a sequence-dependent prior distribution $p_\theta(z_2 \mid \mu_2)$ and a sequence-independent prior distribution $p_\theta(z_1)$, respectively; (3) $N$ i.i.d. observed variables $\{x^{(n)}\}_{n=1}^{N}$ are drawn from a conditional distribution $p_\theta(x \mid z_1, z_2)$. The joint probability for a sequence is formulated in Eq. 1:

$$p_\theta(X, Z_1, Z_2, \mu_2) = p_\theta(\mu_2) \prod_{n=1}^{N} p_\theta(x^{(n)} \mid z_1^{(n)}, z_2^{(n)})\, p_\theta(z_1^{(n)})\, p_\theta(z_2^{(n)} \mid \mu_2). \tag{1}$$

Specifically, we formulate each of the right-hand-side terms as follows:

$$p_\theta(x \mid z_1, z_2) = \mathcal{N}\big(x \mid f_{\mu_x}(z_1, z_2), \operatorname{diag}(f_{\sigma^2_x}(z_1, z_2))\big),$$
$$p_\theta(z_1) = \mathcal{N}(z_1 \mid 0, \sigma_{z_1}^2 I), \qquad p_\theta(z_2 \mid \mu_2) = \mathcal{N}(z_2 \mid \mu_2, \sigma_{z_2}^2 I), \qquad p_\theta(\mu_2) = \mathcal{N}(\mu_2 \mid 0, \sigma_{\mu_2}^2 I),$$

where the priors over the s-vectors $\mu_2$ and the latent segment variables $z_1$ are centered isotropic multivariate Gaussian distributions, and the prior over the latent sequence variable $z_2$ conditioned on $\mu_2$ is an isotropic multivariate Gaussian centered at $\mu_2$. The conditional distribution of the observed variable $x$ is a multivariate Gaussian with a diagonal covariance matrix, whose mean and diagonal variance are parameterized by neural networks $f_{\mu_x}(\cdot,\cdot)$ and $f_{\sigma^2_x}(\cdot,\cdot)$ with inputs $z_1$ and $z_2$. We use $\theta$ to denote the set of parameters in the generative model.

This generative model is factorized in such a way that the latent sequence variables $z_2$ within a sequence are forced to be close to $\mu_2$, and therefore to each other, in Euclidean distance; they are thus encouraged to encode sequence-level attributes that may have larger variance across sequences but smaller variance within sequences. The constraint on the latent segment variables $z_1$ is imposed globally, and therefore encourages encoding of residual attributes whose variation is not distinguishable within versus between sequences.
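To make the generative process above concrete, the following is a minimal NumPy sketch of ancestral sampling from the FHVAE prior. The linear decoder, the dimensions, and the prior variances are illustrative assumptions of ours, not the paper's settings (the paper's decoder is an LSTM, described in Section 3).

import numpy as np

rng = np.random.default_rng(0)
D_Z1, D_Z2, D_X, N_SEG = 32, 32, 80, 20    # latent dims, observation dim, segments per sequence
VAR_MU2, VAR_Z1, VAR_Z2 = 1.0, 1.0, 0.25   # prior variances (assumed values)

# Stand-in decoder weights; the real model parameterizes the decoder with an LSTM.
W_dec = rng.normal(scale=0.1, size=(D_Z1 + D_Z2, D_X))

def decode_mean(z1, z2):
    """Mean of p(x | z1, z2); the observation variance is fixed to 1 here for simplicity."""
    return np.concatenate([z1, z2]) @ W_dec

def sample_sequence(n_seg=N_SEG):
    # (1) draw the s-vector mu2 ~ N(0, VAR_MU2 * I)
    mu2 = rng.normal(scale=np.sqrt(VAR_MU2), size=D_Z2)
    segments = []
    for _ in range(n_seg):
        # (2) latent sequence variable z2 ~ N(mu2, VAR_Z2 * I): stays close to mu2
        z2 = mu2 + rng.normal(scale=np.sqrt(VAR_Z2), size=D_Z2)
        #     latent segment variable z1 ~ N(0, VAR_Z1 * I): sequence-independent prior
        z1 = rng.normal(scale=np.sqrt(VAR_Z1), size=D_Z1)
        # (3) observation x ~ N(decode_mean(z1, z2), I)
        segments.append(decode_mean(z1, z2) + rng.normal(size=D_X))
    return mu2, np.stack(segments)

mu2, X = sample_sequence()
print(X.shape)  # (20, 80): one sequence of 20 segments

Because VAR_Z2 is small relative to VAR_MU2, all z2 in a sampled sequence cluster around that sequence's mu2, which is exactly the mechanism that pushes sequence-level attributes into z2.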
In the variational autoencoder framework, since exact posterior inference is intractable, an inference model $q_\phi(Z_1^{(i)}, Z_2^{(i)}, \mu_2^{(i)} \mid X^{(i)})$ that approximates the true posterior $p_\theta(Z_1^{(i)}, Z_2^{(i)}, \mu_2^{(i)} \mid X^{(i)})$ is introduced for variational inference [19]. We consider the inference model illustrated in Figure 2(b):

$$q_\phi(Z_1^{(i)}, Z_2^{(i)}, \mu_2^{(i)} \mid X^{(i)}) = q_\phi(\mu_2^{(i)}) \prod_{n=1}^{N^{(i)}} q_\phi(z_1^{(i,n)} \mid x^{(i,n)}, z_2^{(i,n)})\, q_\phi(z_2^{(i,n)} \mid x^{(i,n)}),$$
$$q_\phi(\mu_2^{(i)}) = \mathcal{N}\big(\mu_2^{(i)} \mid g_{\mu_2}(i), \sigma_{\tilde{\mu}_2}^2 I\big), \qquad q_\phi(z_2 \mid x) = \mathcal{N}\big(z_2 \mid g_{\mu_{z_2}}(x), \operatorname{diag}(g_{\sigma^2_{z_2}}(x))\big),$$
$$q_\phi(z_1 \mid x, z_2) = \mathcal{N}\big(z_1 \mid g_{\mu_{z_1}}(x, z_2), \operatorname{diag}(g_{\sigma^2_{z_1}}(x, z_2))\big),$$

where the posteriors over $\mu_2$, $z_1$, and $z_2$ are all multivariate diagonal Gaussian distributions. Note that the mean of the posterior distribution of $\mu_2$ is not directly inferred from $X$; instead, it is regarded as part of the inference model parameters, with one for each utterance, optimized during training. Therefore, $g_{\mu_2}(\cdot)$ can be seen as a lookup table, and we use $\tilde{\mu}_2^{(i)} = g_{\mu_2}(i)$ to denote the posterior mean of $\mu_2$ for the $i$-th sequence; we fix the posterior covariance matrix of $\mu_2$ for all sequences. Similar to the generative model, $g_{\mu_{z_2}}(\cdot)$, $g_{\sigma^2_{z_2}}(\cdot)$, $g_{\mu_{z_1}}(\cdot,\cdot)$, and $g_{\sigma^2_{z_1}}(\cdot,\cdot)$ are also neural networks, whose parameters, along with $g_{\mu_2}(\cdot)$, are denoted collectively by $\phi$. The variational lower bound for this inference model on the marginal likelihood of a sequence $X$ is derived as follows:

$$\mathcal{L}(\theta, \phi; X) = \sum_{n=1}^{N} \mathcal{L}(\theta, \phi; x^{(n)} \mid \tilde{\mu}_2) + \log p_\theta(\tilde{\mu}_2) + \text{const},$$
$$\mathcal{L}(\theta, \phi; x^{(n)} \mid \tilde{\mu}_2) = \mathbb{E}_{q_\phi(z_1^{(n)}, z_2^{(n)} \mid x^{(n)})}\big[\log p_\theta(x^{(n)} \mid z_1^{(n)}, z_2^{(n)})\big] - \mathbb{E}_{q_\phi(z_2^{(n)} \mid x^{(n)})}\big[D_{KL}\big(q_\phi(z_1^{(n)} \mid x^{(n)}, z_2^{(n)}) \,\big\|\, p_\theta(z_1^{(n)})\big)\big] - D_{KL}\big(q_\phi(z_2^{(n)} \mid x^{(n)}) \,\big\|\, p_\theta(z_2^{(n)} \mid \tilde{\mu}_2)\big).$$

The detailed derivation can be found in Appendix A. Because the approximated posterior of $\mu_2$ does not depend on the sequence $X$, the sequence variational lower bound $\mathcal{L}(\theta, \phi; X)$ can be decomposed into the sum of the conditional segment variational lower bounds $\mathcal{L}(\theta, \phi; x^{(n)} \mid \tilde{\mu}_2)$ over segments, plus the log prior probability of $\tilde{\mu}_2$ and a constant. Therefore, instead of sampling a batch at the sequence level to maximize the sequence variational lower bound, we can sample a batch at the segment level to maximize the segment variational lower bound:

$$\mathcal{L}(\theta, \phi; x^{(n)}) = \mathcal{L}(\theta, \phi; x^{(n)} \mid \tilde{\mu}_2) + \frac{1}{N} \log p_\theta(\tilde{\mu}_2) + \text{const}. \tag{2}$$

This approach provides better scalability when the sequences are extremely long, such that computing an entire sequence for a batched update is too computationally expensive.

In this paper we only introduce two scales of attributes; however, one can easily extend this model to more scales by introducing $\mu_k$ for $k = 2, 3, \dots$ (the index starts from 2 because we do not introduce the hierarchy to $z_1$) to constrain the prior distributions of latent variables at more scales, such as having a session-dependent or dataset-dependent prior.

2.1 Discriminative Objective

The idea of having sequence-specific priors for each sequence is to encourage the model to encode the sequence-level attributes and the segment-level attributes into different sets of latent variables. However, when $\mu_2 = 0$ for all sequences, the prior probability of the s-vector is maximized, and the KL divergence of the inferred posterior of $z_2$ is measured from the same conditional prior for all sequences. This would result in trivial s-vectors $\mu_2$, and $z_1$ and $z_2$ would then not be factorized to encode segment and sequence attributes respectively.

To encourage $z_2$ to encode sequence-level attributes, we use $z_2^{(i,n)}$, which is inferred from $x^{(i,n)}$, to infer the sequence index $i$ of $x^{(i,n)}$. We formulate the discriminative objective as:

$$\log p(i \mid z_2^{(i,n)}) = \log p(z_2^{(i,n)} \mid i) - \log \sum_{j=1}^{M} p(z_2^{(i,n)} \mid j) := \log p_\theta\big(z_2^{(i,n)} \mid \tilde{\mu}_2^{(i)}\big) - \log \sum_{j=1}^{M} p_\theta\big(z_2^{(i,n)} \mid \tilde{\mu}_2^{(j)}\big),$$

where $p(i)$ is assumed uniform. Combining the discriminative objective, weighted by a parameter $\alpha$, with the segment variational lower bound, the objective function to maximize becomes:

$$\mathcal{L}^{dis}(\theta, \phi; x^{(i,n)}) = \mathcal{L}(\theta, \phi; x^{(i,n)}) + \alpha \log p(i \mid z_2^{(i,n)}), \tag{3}$$

which we refer to as the discriminative segment variational lower bound.
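The bound in Eqs. 2-3 can be computed with single Monte Carlo samples and closed-form KL terms between diagonal Gaussians. Below is a hedged NumPy sketch of that computation; the function signatures, the unit prior variance for z1, and the default values of the z2 prior variance and alpha are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.special import logsumexp

def diag_gauss_logpdf(x, mean, logvar):
    """Log density of a diagonal Gaussian."""
    return -0.5 * np.sum(logvar + (x - mean) ** 2 / np.exp(logvar)
                         + np.log(2 * np.pi), axis=-1)

def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL( N(mu_q, diag e^logvar_q) || N(mu_p, diag e^logvar_p) )."""
    return 0.5 * np.sum(logvar_p - logvar_q
                        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
                        - 1.0, axis=-1)

def dis_segment_elbo(x, x_mu, x_logvar,      # decoder outputs for p(x | z1, z2)
                     z1_mu, z1_logvar,       # q(z1 | x, z2) parameters
                     z2_mu, z2_logvar, z2,   # q(z2 | x) parameters and one sample
                     mu2_table, seq_idx,     # lookup table of posterior means of mu2
                     var_z2=0.25, alpha=10.0):
    rec = diag_gauss_logpdf(x, x_mu, x_logvar)
    # KL to the sequence-independent prior p(z1) = N(0, I) (unit variance assumed)
    kl_z1 = kl_diag_gauss(z1_mu, z1_logvar,
                          np.zeros_like(z1_mu), np.zeros_like(z1_logvar))
    # KL to the sequence-dependent prior p(z2 | mu2) = N(mu2_i, var_z2 * I)
    kl_z2 = kl_diag_gauss(z2_mu, z2_logvar, mu2_table[seq_idx],
                          np.full_like(z2_mu, np.log(var_z2)))
    elbo = rec - kl_z1 - kl_z2
    # Discriminative term log p(i | z2): shared Gaussian constants cancel in the softmax,
    # so only the quadratic distances to each sequence's mu2 are needed.
    log_p_all = -0.5 * np.sum((z2 - mu2_table) ** 2, axis=-1) / var_z2
    log_disc = log_p_all[seq_idx] - logsumexp(log_p_all)
    return elbo + alpha * log_disc

The discriminative term is what penalizes the degenerate solution mu2 = 0 discussed above: it rewards z2 samples that sit closer to their own sequence's mu2 than to every other sequence's.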
2.2 Inferring S-Vectors During Testing

During testing, we may want to use the s-vector $\mu_2$ of an unseen sequence $\tilde{X} = \{\tilde{x}^{(n)}\}_{n=1}^{\tilde{N}}$ as the sequence-level attribute representation for tasks such as speaker verification. Since the exact maximum a posteriori estimation of $\mu_2$ is intractable, we approximate the estimation using the conditional segment variational lower bound as follows:

$$\hat{\mu}_2 = \operatorname*{argmax}_{\mu_2} \log p_\theta(\mu_2 \mid \tilde{X}) = \operatorname*{argmax}_{\mu_2} \Big( \sum_{n=1}^{\tilde{N}} \log p_\theta(\tilde{x}^{(n)} \mid \mu_2) + \log p_\theta(\mu_2) \Big) \approx \operatorname*{argmax}_{\mu_2} \sum_{n=1}^{\tilde{N}} \mathcal{L}(\theta, \phi; \tilde{x}^{(n)} \mid \mu_2) + \log p_\theta(\mu_2). \tag{4}$$

The closed-form solution of $\hat{\mu}_2$ can be derived by differentiating Eq. 4 with respect to $\mu_2$ (see Appendix B):

$$\hat{\mu}_2 = \frac{\sum_{n=1}^{\tilde{N}} g_{\mu_{z_2}}(\tilde{x}^{(n)})}{\tilde{N} + \sigma_{z_2}^2 / \sigma_{\mu_2}^2}. \tag{5}$$

3 Sequence-to-Sequence Autoencoder Model Architecture

In this section, we introduce the detailed neural network architectures for our proposed FHVAE. Let a segment $x = x_{1:T}$ be a sub-sequence of $X$ that contains $T$ time steps, with $x_t$ denoting the $t$-th time step of $x$. We use recurrent network architectures for the encoders, which capture the temporal relationship among time steps and generate a summarizing fixed-dimension vector after consuming an entire sub-sequence. Likewise, we adopt a recurrent network architecture for the decoder, which generates a frame step by step, conditioned on the latent variables $z_1$ and $z_2$. The complete network can be seen as a stochastic sequence-to-sequence autoencoder that stochastically encodes $x_{1:T}$ into $z_1$ and $z_2$, and stochastically decodes from them back to $x_{1:T}$.

Figure 3: Sequence-to-sequence factorized hierarchical variational autoencoder. Dashed lines indicate the sampling process using the reparameterization trick [23]. The encoders for $z_1$ and $z_2$ are pink and amber, respectively, while the decoder for $x$ is blue. Darker colors denote the recurrent neural networks, while lighter colors denote the fully-connected layers predicting the mean and log variance. (Best viewed in color.)

Figure 3 shows our proposed Seq2Seq-FHVAE architecture. Here we show the detailed formulation:

$$(h_{z_2,t}, c_{z_2,t}) = \mathrm{LSTM}\big(x_{t-1}, h_{z_2,t-1}, c_{z_2,t-1}; \theta_{\mathrm{LSTM},z_2}\big)$$
$$q_\phi(z_2 \mid x_{1:T}) = \mathcal{N}\big(z_2 \mid \mathrm{MLP}(h_{z_2,T}; \theta_{\mathrm{MLP}_\mu,z_2}), \operatorname{diag}(\exp(\mathrm{MLP}(h_{z_2,T}; \theta_{\mathrm{MLP}_{\sigma^2},z_2})))\big)$$
$$(h_{z_1,t}, c_{z_1,t}) = \mathrm{LSTM}\big([x_{t-1}; z_2], h_{z_1,t-1}, c_{z_1,t-1}; \theta_{z_1}\big)$$
$$q_\phi(z_1 \mid x_{1:T}, z_2) = \mathcal{N}\big(z_1 \mid \mathrm{MLP}(h_{z_1,T}; \theta_{\mathrm{MLP}_\mu,z_1}), \operatorname{diag}(\exp(\mathrm{MLP}(h_{z_1,T}; \theta_{\mathrm{MLP}_{\sigma^2},z_1})))\big)$$
$$(h_{x,t}, c_{x,t}) = \mathrm{LSTM}\big([z_1; z_2], h_{x,t-1}, c_{x,t-1}; \theta_x\big)$$
$$p_\theta(x_t \mid z_1, z_2) = \mathcal{N}\big(x_t \mid \mathrm{MLP}(h_{x,t}; \theta_{\mathrm{MLP}_\mu,x}), \operatorname{diag}(\exp(\mathrm{MLP}(h_{x,t}; \theta_{\mathrm{MLP}_{\sigma^2},x})))\big),$$

where LSTM refers to a long short-term memory recurrent neural network [14], MLP refers to a multi-layer perceptron, and the $\theta_\bullet$ are the related weight matrices. None of the neural network parameters are shared. We refer to this model as Seq2Seq-FHVAE. Log-likelihood and qualitative comparisons with alternative architectures can be found in Appendix D.
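Before moving on to the experiments, the following is a minimal PyTorch sketch of the Seq2Seq-FHVAE computation graph above, together with the closed-form s-vector estimate of Eq. 5. The layer sizes, the single-sample reparameterization, and feeding x_t (rather than the shifted x_{t-1}) to the encoders are simplifying assumptions of ours; the paper's exact configuration is in its Appendix C.

import torch
import torch.nn as nn

class Seq2SeqFHVAE(nn.Module):
    def __init__(self, x_dim=80, z_dim=32, h_dim=256):
        super().__init__()
        self.enc_z2 = nn.LSTM(x_dim, h_dim, batch_first=True)
        self.enc_z1 = nn.LSTM(x_dim + z_dim, h_dim, batch_first=True)
        self.dec = nn.LSTM(2 * z_dim, h_dim, batch_first=True)
        # fully-connected heads predicting posterior mean and log variance
        self.z2_head = nn.Linear(h_dim, 2 * z_dim)
        self.z1_head = nn.Linear(h_dim, 2 * z_dim)
        self.x_head = nn.Linear(h_dim, 2 * x_dim)

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    def forward(self, x):                       # x: (batch, T, x_dim)
        _, (h2, _) = self.enc_z2(x)             # q(z2 | x_{1:T})
        z2, mu_z2, lv_z2 = self.reparam(self.z2_head(h2[-1]))
        z2_rep = z2.unsqueeze(1).expand(-1, x.size(1), -1)
        _, (h1, _) = self.enc_z1(torch.cat([x, z2_rep], dim=-1))  # q(z1 | x, z2)
        z1, mu_z1, lv_z1 = self.reparam(self.z1_head(h1[-1]))
        z = torch.cat([z1, z2], dim=-1).unsqueeze(1).expand(-1, x.size(1), -1)
        out, _ = self.dec(z)                    # p(x_t | z1, z2), frame by frame
        x_mu, x_logvar = self.x_head(out).chunk(2, dim=-1)
        return x_mu, x_logvar, (mu_z1, lv_z1), (mu_z2, lv_z2)

    @staticmethod
    def estimate_s_vector(z2_means, var_z2=0.25, var_mu2=1.0):
        """Closed-form MAP estimate of the s-vector (Eq. 5); z2_means is (N, z_dim)."""
        return z2_means.sum(0) / (z2_means.size(0) + var_z2 / var_mu2)

For example, Seq2SeqFHVAE()(torch.randn(4, 20, 80)) returns the decoder moments and the posterior statistics needed by the bound in Eq. 2 for a batch of four 20-step segments.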
4 Experiments

We use speech, which inherently contains information at multiple scales, such as channel, speaker, and linguistic content, to test our model. Learning to disentangle the mixed information from the surface representation is essential for a wide variety of speech applications, for example noise-robust speech recognition [41, 38, 37, 16], speaker verification [5], and voice conversion [40, 29, 24].

The following two corpora are used for our experiments: (1) TIMIT [10], which contains broadband 16 kHz recordings of phonetically balanced read speech. A total of 6,300 utterances (5.4 hours) are presented, with 10 sentences from each of 630 speakers, of whom approximately 70% are male and 30% are female. (2) Aurora-4 [32], a broadband corpus designed for noisy speech recognition tasks, based on the Wall Street Journal corpus (WSJ0) [31]. Two microphone types, CLEAN/CHANNEL, are included, and six noise types are artificially added to both microphone types, which results in four conditions: CLEAN, CHANNEL, NOISY, and CHANNEL + NOISY. Two 14-hour training sets are used, where one is clean and the other is a mix of all four conditions. The same noise types and microphones are used to generate the development and test sets, which both consist of 330 utterances from all four conditions, resulting in 4,620 utterances in total for each set.

All speech is represented as a sequence of 80-dimensional Mel-scale filter bank (FBank) features or a 200-dimensional log-magnitude spectrum (only for audio reconstruction), computed every 10 ms. Mel-scale features are a popular auditory approximation for many speech applications [28]. We consider a sample x to be a 200 ms sub-sequence, which is on the order of the length of a syllable, and implies T = 20 for each x. For the Seq2Seq-FHVAE model, all the LSTM and MLP networks are one-layered, and Adam [20] is used for optimization. More details of the model architecture and training procedure can be found in Appendix C.

4.1 Qualitative Evaluation of the Disentangled Latent Variables

Figure 4: (left) Examples generated by varying different latent variables. (right) An illustration of harmonics and formants in filter bank images. The green block "A" contains four reconstructed examples. The red block "B" contains ten original sequences on the first row, with the corresponding reconstructed examples on the second row. The entry in the i-th row and j-th column of the blue block "C" is the example reconstructed using the latent segment variable z1 of the i-th row from block "A" and the latent sequence variable z2 of the j-th column from block "B".

To qualitatively study the factorization of information between the latent segment variable z1 and the latent sequence variable z2, we generate examples x by varying each of them respectively. Figure 4 shows, in block "C", 40 examples for all combinations of the 4 latent segment variables extracted from block "A" and the 10 latent sequence variables extracted from block "B". The top two examples from block "A" and the five leftmost examples from block "B" are from male speakers, while the rest are from female speakers, which show higher fundamental frequencies and harmonics. (The harmonics correspond to horizontal dark stripes in the figure; the more widely these stripes are spaced vertically, the higher the fundamental frequency of the speaker.)

Figure 5: FHVAE (α = 0) decoding results of three combinations of latent segment variables z1 and latent sequence variables z2 from one male-speaker utterance (top left) and one female-speaker utterance (bottom left) in Aurora-4. By replacing z2 of a male-speaker utterance with z2 of a female-speaker utterance, an FHVAE decodes a voice-converted utterance (middle right) that preserves the linguistic content. Audio samples are available at https://youtu.be/VMX3IZYWYdg.
We can observe that along each row in block "C", the linguistic phonetic-level content, which manifests itself in the form of the spectral contour, the temporal position of formants, and the relative position between formants, is very similar between elements; the speaker identity (e.g., the harmonic structure), however, changes. On the other hand, for each column we see that the speaker identity remains consistent, despite the change of linguistic content. The factorization of the sequence-level attributes and the segment-level attributes by our proposed Seq2Seq-FHVAE is clearly evident. In addition, we also show examples of modifying an entire utterance in Figures 1 and 5, which achieve denoising by replacing the latent sequence variables of a noisy utterance with those of a clean utterance, and voice conversion by replacing the latent sequence variables of one speaker with those of another speaker. Details of the operations applied to modify an entire utterance, as well as larger-sized examples for different α values, can be found in Appendix E. We also show additional latent space traversal experiments in Appendix H.

4.2 Quantitative Evaluation of S-Vectors: Speaker Verification

To quantify the performance of our model in disentangling the utterance-level attributes from the segment-level attributes, we present experiments on a speaker verification task on the TIMIT corpus to evaluate how well the estimated $\hat{\mu}_2$ encodes speaker-level information. (TIMIT is not a standard corpus for speaker verification, but it is a good corpus for showcasing the utterance-level attribute learned via this task, because the main attribute that is consistent within an utterance is speaker identity; in Aurora-4, both the speaker identity and the background noise are consistent within an utterance.) As a sanity check, we modify Eq. 5 to estimate an alternative s-vector based on the latent segment variables $z_1$ as follows: $\hat{\mu}_1 = \sum_{n=1}^{\tilde{N}} g_{\mu_{z_1}}(\tilde{x}^{(n)}) / (\tilde{N} + \sigma_{z_1}^2)$. We use the i-vector method [5] as the baseline, which is the representation used in most state-of-the-art speaker verification systems. I-vectors lie in a low-dimensional subspace of the Gaussian mixture model (GMM) mean supervector space, where the GMM is the universal background model (UBM) that models the generative process of speech. I-vectors, $\hat{\mu}_1$, and $\hat{\mu}_2$ can all be extracted without supervision; when speaker labels are available during training, techniques such as linear discriminant analysis (LDA) can be applied to further improve the linear separability of the representation. For all experiments, we use the fast scoring approach of [4], which uses cosine similarity as the similarity metric, and we report the equal error rate (EER). More details about the experimental settings can be found in Appendix F.

We compare different dimensions for both features, as well as different values of α in Eq. 3 for training the FHVAE models. The results in Table 1 show that the 16-dimensional s-vectors $\hat{\mu}_2$ outperform the i-vector baselines in both unsupervised (Raw) and supervised (LDA) settings for all α, as shown in the fourth column; the more discriminatively the FHVAE model is trained (i.e., the larger the α), the better the speaker verification results it achieves. Moreover, with an appropriately chosen dimension, a 32-dimensional $\hat{\mu}_2$ reaches an even lower EER of 1.34%. On the other hand, the negative results obtained using $\hat{\mu}_1$ also validate the success in disentangling utterance- and segment-level attributes.
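For reference, here is a small sketch of the cosine-similarity scoring [4] and the EER computation behind the numbers in Table 1 below. The trial format (one enrollment s-vector and one test s-vector per trial, with a 0/1 same-speaker label) is an assumption of ours about the evaluation protocol.

import numpy as np

def cosine_scores(enroll, test):
    """Row-wise cosine similarity between enrollment and test s-vectors (n, D)."""
    e = enroll / np.linalg.norm(enroll, axis=1, keepdims=True)
    t = test / np.linalg.norm(test, axis=1, keepdims=True)
    return np.sum(e * t, axis=1)

def equal_error_rate(scores, labels):
    """EER: the operating point where false-accept and false-reject rates cross."""
    order = np.argsort(-scores)                  # sort trials by descending score
    labels = np.asarray(labels)[order]
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    fr = 1.0 - np.cumsum(labels) / n_pos         # false-reject rate at each cutoff
    fa = np.cumsum(1 - labels) / n_neg           # false-accept rate at each cutoff
    i = np.argmin(np.abs(fr - fa))
    return (fr[i] + fa[i]) / 2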
Table 1: Comparison of speaker verification equal error rate (EER) on the TIMIT test set

Features | Dimension | α     | Raw    | LDA (12 dim) | LDA (24 dim)
i-vector | 48        | -     | 10.12% | 6.25%        | 5.95%
i-vector | 100       | -     | 9.52%  | 6.10%        | 5.50%
i-vector | 200       | -     | 9.82%  | 6.54%        | 6.10%
μ̂2      | 16        | 0     | 5.06%  | 4.02%        | -
μ̂2      | 16        | 10^-1 | 4.91%  | 4.61%        | -
μ̂2      | 16        | 10^0  | 3.87%  | 3.86%        | -
μ̂2      | 16        | 10^1  | 2.38%  | 2.08%        | -
μ̂2      | 32        | 10^1  | 2.38%  | 2.08%        | 1.34%
μ̂1      | 16        | 10^0  | 22.77% | 15.62%       | -
μ̂1      | 16        | 10^1  | 27.68% | 22.17%       | -
μ̂1      | 32        | 10^1  | 22.47% | 16.82%       | 17.26%

4.3 Quantitative Evaluation of the Latent Segment Variables: Domain-Invariant ASR

Speaker adaptation and robust speech recognition in automatic speech recognition (ASR) can often be seen as domain adaptation problems, where available labeled data is limited and the data distributions during training and testing are therefore mismatched. One approach to reduce the severity of this issue is to extract speaker/channel-invariant features for these tasks. As demonstrated in Section 4.2, the s-vector contains information about domains. Here we evaluate whether the latent segment variables contain domain-invariant linguistic information by evaluating on an ASR task: (1) train our proposed Seq2Seq-FHVAE using FBank features on a set that covers different domains; (2) train an LSTM acoustic model [12, 35, 42], on a set that covers only some of the domains, using the mean and log variance of the latent segment variable z1 extracted from the trained Seq2Seq-FHVAE; (3) test the ASR system on all domains. As a baseline, we also train the same ASR models using the FBank features alone. Detailed configurations are in Appendix G.

For TIMIT, we assume that male and female speakers constitute different domains, and show the results in Table 2. The first row of results shows the ASR model trained on all domains (speakers) using FBank features, as an upper bound. When trained on only male speakers, the phone error rate (PER) on female speakers increases by 16.1% for FBank features; however, for z1, despite a slight degradation on male speakers, the PER on the unseen domain (female speakers) improves by 6.6% compared to FBank features.

Table 2: TIMIT test phone error rate of acoustic models trained on different features and sets

ASR Train Set | FHVAE Train Set & Config. | Features | Male  | Female | All
Train All     | -                         | FBank    | 20.1% | 16.7%  | 19.1%
Train Male    | -                         | FBank    | 21.0% | 32.8%  | 25.2%
Train Male    | Train All, α = 10         | z1       | 22.0% | 26.2%  | 23.5%

On Aurora-4, four domains are considered: clean, noisy, channel, and noisy+channel (NC for short). We train the FHVAE on the development set for two reasons: (1) the FHVAE can be considered a general feature extractor, which can be trained on an arbitrary collection of data that does not necessarily include the data of the subsequent application; (2) the dev set of Aurora-4 contains the domain label for each utterance, so it is possible to control which domains have been observed by the FHVAE. Table 3 shows the word error rate (WER) results on Aurora-4, from which we can observe that the FBank representation suffers from severe domain-mismatch problems; specifically, the WER increases by 53.3% when noise is presented in mismatched microphone recordings (NC). In contrast, when the FHVAE is trained on data from all domains, using the latent segment variables as features reduces the WER by 16% to 35% compared to the baseline on mismatched domains, with less than 2% WER degradation on the matched domain.
In addition, β-VAEs [13] are trained on the same data as the FHVAE to serve as baseline feature extractors, from which we extract the latent variables z as the ASR features; the results are shown in the third to sixth rows. The β-VAE features outperform FBank in all mismatched domains, but are inferior to the latent segment variables z1 from the FHVAE in those domains. The results demonstrate the importance of learning not only disentangled, but also interpretable, representations, which can be achieved by our proposed FHVAE models. As a sanity check, we replace z1 with z2, the latent sequence variable, and train an ASR model, which results in terrible WER performance, as shown in the eighth row, as expected. Finally, we train another FHVAE on all domains excluding the combinatory NC domain, and show the results in the last row of Table 3. It can be observed that the latent segment variables still outperform the baseline features, with a 30% lower WER on the combined noise-and-channel data, even though this FHVAE has only seen noise and channel variation independently.

Table 3: Aurora-4 test word error rate of acoustic models trained on different features and sets

ASR Train Set | {FH-,β-}VAE Train Set & Config. | Features   | Clean  | Noisy  | Channel | NC     | All
Train All     | -                               | FBank      | 3.60%  | 7.06%  | 8.24%   | 18.49% | 11.80%
Train Clean   | -                               | FBank      | 3.47%  | 50.97% | 36.99%  | 71.80% | 55.51%
Train Clean   | Dev, β = 1                      | z (β-VAE)  | 4.95%  | 23.54% | 31.12%  | 46.21% | 32.47%
Train Clean   | Dev, β = 2                      | z (β-VAE)  | 3.57%  | 27.24% | 30.56%  | 48.17% | 34.75%
Train Clean   | Dev, β = 4                      | z (β-VAE)  | 3.89%  | 24.40% | 29.80%  | 47.87% | 33.38%
Train Clean   | Dev, β = 8                      | z (β-VAE)  | 5.32%  | 34.84% | 36.13%  | 58.02% | 42.76%
Train Clean   | Dev, α = 10                     | z1 (FHVAE) | 5.01%  | 16.42% | 20.29%  | 36.33% | 24.41%
Train Clean   | Dev, α = 10                     | z2 (FHVAE) | 41.08% | 68.73% | 61.89%  | 86.36% | 72.53%
Train Clean   | Dev\NC, α = 10                  | z1 (FHVAE) | 5.25%  | 16.52% | 19.30%  | 40.59% | 26.23%

5 Related Work

A number of prior publications have extended VAEs to model structured data by altering the underlying graphical model, either to dynamic Bayesian networks, such as SRNN [3] and VRNN [9], or to hierarchical models, such as the neural statistician [7] and SVAE [18]. These models have shown success in quantitatively increasing the log-likelihood, or in qualitatively generating reasonable structured data by sampling. However, it remains unclear whether independent attributes are disentangled in the latent space. Moreover, the learned latent variables in these models are not interpretable without manual inspection or the use of labeled data. In contrast, our work presents a VAE framework that addresses both problems by explicitly modeling the difference in the rates of temporal variation of attributes that operate at different scales.

Our work is also related to β-VAE [13] with respect to unsupervised learning of disentangled representations with VAEs. The boosted KL-divergence penalty imposed during β-VAE training encourages disentanglement of independent attributes, but does not provide interpretability without supervision. We demonstrate in our domain-invariant ASR experiments that learning interpretable representations is important for such applications, and can be achieved by our FHVAE model. In addition, the idea of boosting the KL-divergence regularization is complementary to our model, and could be integrated for better disentanglement.

6 Conclusions and Future Work

We introduce the factorized hierarchical variational autoencoder, which learns disentangled and interpretable representations for sequence-level and segment-level attributes without any supervision. We verify the disentangling ability both qualitatively and quantitatively on two speech corpora.
For future work, we plan to (1) extend the model to more levels of hierarchy, (2) investigate adversarial training for disentanglement, and (3) apply the model to other types of sequential data, such as text and videos.

References

[1] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172–2180, 2016.
[2] Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
[3] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2980–2988, 2015.
[4] Najim Dehak, Reda Dehak, Patrick Kenny, Niko Brümmer, Pierre Ouellet, and Pierre Dumouchel. Support vector machines versus fast scoring in the low-dimensional total variability space for speaker verification. In Interspeech, volume 9, pages 1559–1562, 2009.
[5] Najim Dehak, Patrick J Kenny, Réda Dehak, Pierre Dumouchel, and Pierre Ouellet. Front-end factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing, 19(4):788–798, 2011.
[6] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[7] Harrison Edwards and Amos Storkey. Towards a neural statistician. arXiv preprint arXiv:1606.02185, 2016.
[8] Otto Fabius and Joost R van Amersfoort. Variational recurrent auto-encoders. arXiv preprint arXiv:1412.6581, 2014.
[9] Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. In Advances in Neural Information Processing Systems, pages 2199–2207, 2016.
[10] John S Garofolo, Lori F Lamel, William M Fisher, Jonathon G Fiscus, and David S Pallett. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon technical report N, 93, 1993.
[11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[12] Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. Hybrid speech recognition with deep bidirectional LSTM. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 273–278. IEEE, 2013.
[13] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. 2016.
[14] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[15] Wei-Ning Hsu, Yu Zhang, and James Glass. Learning latent representations for speech generation and transformation. In Interspeech, pages 1273–1277, 2017.
[16] Wei-Ning Hsu, Yu Zhang, and James Glass. Unsupervised domain adaptation for robust speech recognition via variational autoencoder-based data augmentation. In Automatic Speech Recognition and Understanding (ASRU), 2017 IEEE Workshop on. IEEE, 2017.
[17] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. Controllable text generation. arXiv preprint arXiv:1703.00955, 2017.
[18] Matthew Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, pages 2946–2954, 2016.
[19] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[21] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
[22] Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. 2016.
[23] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[24] Tomi Kinnunen, Lauri Juvela, Paavo Alku, and Junichi Yamagishi. Non-parallel voice conversion using i-vector PLDA: Towards unifying speaker verification and transformation. In ICASSP, 2017.
[25] Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pages 2539–2547, 2015.
[26] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
[27] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
[28] Nelson Morgan, Hervé Bourlard, and Hynek Hermansky. Automatic speech recognition: An auditory perspective. In Speech Processing in the Auditory System, pages 309–338. Springer, 2004.
[29] Toru Nakashika, Tetsuya Takiguchi, and Yasuhiro Minami. Non-parallel training in voice conversion using an adaptive restricted Boltzmann machine. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 24(11):2032–2045, November 2016.
[30] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[31] Douglas B Paul and Janet M Baker. The design for the Wall Street Journal-based CSR corpus. In Proceedings of the Workshop on Speech and Natural Language, pages 357–362. Association for Computational Linguistics, 1992.
[32] David Pearce. Aurora working group: DSR front end LVCSR evaluation AU/384/02. PhD thesis, Mississippi State University, 2002.
[33] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[34] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[35] Hasim Sak, Andrew W Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Interspeech, pages 338–342, 2014.
[36] Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[37] Dmitriy Serdyuk, Kartik Audhkhasi, Philemon Brakel, Bhuvana Ramabhadran, Samuel Thomas, and Yoshua Bengio. Invariant representations for noisy speech recognition. CoRR, abs/1612.01928, 2016.
[38] Yusuke Shinohara. Adversarial multi-task learning of deep neural networks for robust speech recognition. In Interspeech, pages 2369–2372, 2016.
[39] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. CoRR, abs/1609.03499, 2016.
[40] Zhizheng Wu, Eng Siong Chng, and Haizhou Li. Conditional restricted Boltzmann machine for voice conversion. In ChinaSIP, 2013.
[41] Dong Yu, Michael Seltzer, Jinyu Li, Jui-Ting Huang, and Frank Seide. Feature learning in deep neural networks – studies on speech recognition tasks. arXiv preprint arXiv:1301.3605, 2013.
[42] Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yao, Sanjeev Khudanpur, and James Glass. Highway long short-term memory RNNs for distant speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5755–5759. IEEE, 2016.
Lookahead Bayesian Optimization with Inequality Constraints

Remi R. Lam
Massachusetts Institute of Technology, Cambridge, MA
rlam@mit.edu

Karen E. Willcox
Massachusetts Institute of Technology, Cambridge, MA
kwillcox@mit.edu

Abstract

We consider the task of optimizing an objective function subject to inequality constraints when both the objective and the constraints are expensive to evaluate. Bayesian optimization (BO) is a popular way to tackle optimization problems with expensive objective function evaluations, but it has mostly been applied to unconstrained problems. Several BO approaches have been proposed to address expensive constraints, but they are limited to greedy strategies that maximize the immediate reward. To address this limitation, we propose a lookahead approach that selects the next evaluation in order to maximize the long-term feasible reduction of the objective function. We present numerical experiments demonstrating the performance improvements of such a lookahead approach compared to several greedy BO algorithms, including constrained expected improvement (EIc) and predictive entropy search with constraints (PESC).

1 Introduction

Constrained optimization problems are often challenging to solve, due to complex interactions between the goals of minimizing (or maximizing) the objective function and satisfying the constraints. In particular, non-linear constraints can result in complicated feasible spaces, sometimes partitioned into disconnected regions. Such feasible spaces can be difficult to explore for a local optimizer, potentially preventing the algorithm from converging to a global solution. Global optimizers, on the other hand, are designed to tackle disconnected feasible spaces and the optimization of multi-modal objective functions. Such algorithms typically require a large number of evaluations to converge. This can be prohibitive when the evaluation of the objective function or the constraints is expensive, or when there is a finite budget of evaluations allocated for the optimization, as is often the case with expensive models. This evaluation budget typically results from resource scarcity, such as the restricted availability of a high-performance computer, finite financial resources to build prototypes, or even time when working toward a paper submission deadline.

Bayesian optimization (BO) [19] is a global optimization technique designed to address problems with expensive function evaluations. Its constrained extension, constrained Bayesian optimization (CBO), iteratively builds a statistical model of the objective function and the constraints. Based on this model, which leverages all the past evaluations, a utility function quantifies the merit of evaluating any design under consideration. At each iteration, a CBO algorithm evaluates the expensive objective function and constraints at the design that maximizes this utility function. In most existing methods, the utility function only quantifies the reward obtained over the immediate next step, and ignores the gains that could be collected at future steps. This results in greedy CBO algorithms. However, quantifying long-term rewards can be beneficial. For instance, in the presence of constraints, it could be valuable to learn the boundaries of the feasible space. In order to do so, it is likely that an infeasible design would need to be evaluated, bringing no immediate improvement,
but leading to long-term benefits. Such a strategy requires planning over several steps. Planning is also required to balance the so-called exploration-exploitation trade-off. Intuitively, in order to improve the statistical model, the beginning of the optimization should mainly be dedicated to exploring the design space, while the end of the optimization should focus on exploiting that statistical model to find the best design. To balance this trade-off in a principled way, the optimizer needs to plan ahead and be aware of the remaining evaluation budget.

To address the shortcomings of greedy algorithms, we propose a new lookahead formulation for CBO with a finite budget. This approach is aware of the remaining budget and can balance the exploration-exploitation trade-off in a principled way. In this formulation, the best optimization policy sequentially evaluates the designs yielding the maximum cumulated reward over multiple steps. This optimal policy is the solution of an intractable dynamic programming (DP) problem. We circumvent this issue by employing an approximate dynamic programming (ADP) algorithm, rollout, building on the unconstrained BO algorithm in [17]. Numerical examples illustrate the benefits of the proposed lookahead algorithm over several greedy ones, especially when the objective function is multi-modal and the feasible space has a complex topology.

The next section gives an overview of CBO and discusses some of the related work (Sec. 2). We then formulate the lookahead approach to CBO as a dynamic programming problem and demonstrate how to approximately solve it by adapting the rollout algorithm (Sec. 3). Numerical results are provided in Sec. 4. Finally, we present our conclusions in Sec. 5.

2 Constrained Bayesian Optimization

We consider the following optimization problem:

$$\text{(OPc)} \qquad x^\star = \operatorname*{argmin}_{x \in \mathcal{X}} f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \ \forall i \in \{1, \dots, I\}, \tag{1}$$

where $x$ is a $d$-dimensional vector of design variables. The design space $\mathcal{X}$ is a bounded subset of $\mathbb{R}^d$, $f : \mathcal{X} \to \mathbb{R}$ is the objective function, $I$ is the number of inequality constraints, and $g_i : \mathcal{X} \to \mathbb{R}$ is the $i$-th constraint function. The functions $f$ and $g_i$ are considered expensive to evaluate. We are interested in finding the minimizer $x^\star$ of the objective function $f$ subject to the non-linear constraints $g_i \le 0$ with a finite budget of $N$ evaluations. We refer to this problem as the original constrained problem (OPc).

Constrained Bayesian optimization (CBO) addresses the original constrained problem (OPc) by modeling the objective function $f$ and the constraints $g_i$ as realizations of stochastic processes. Typically, each expensive-to-evaluate function is modeled with an independent Gaussian process (GP). At every iteration $n$, new evaluations of $f$ and $g_i$ become available and augment a training set $S_n = \{(x_j, f(x_j), g_1(x_j), \dots, g_I(x_j))\}_{j=1}^{n}$. Using Bayes' rule, the statistical model is updated, and the posterior quantities of the GPs, conditioned on $S_n$, reflect the current representation of the unknown expensive functions. In particular, for any design $x$, the posterior mean $\mu_n(x; h)$ and the posterior variance $\sigma_n^2(x; h)$ of the GP associated with an expensive function $h \in \{f, g_1, \dots, g_I\}$ can be computed cheaply using a closed-form expression (see [24] for an overview of GPs).
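As an illustration of the closed-form posterior referred to above, here is a minimal NumPy sketch of GP regression with a squared-exponential kernel; the kernel choice, length-scale, unit prior variance, and noise level are illustrative assumptions of ours, not choices made by the paper.

import numpy as np

def sq_exp_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel matrix between rows of A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, X_star, noise=1e-6):
    """Posterior mean and variance at test points X_star for a zero-mean GP
    with unit prior variance, conditioned on the training set (X, y)."""
    K = sq_exp_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    K_star = sq_exp_kernel(X, X_star)
    mean = K_star.T @ alpha
    v = np.linalg.solve(L, K_star)
    var = np.clip(1.0 - (v * v).sum(axis=0), 0.0, None)
    return mean, var

# e.g., condition on three evaluations of a 1-D function and query a grid:
X = np.array([[0.1], [0.5], [0.9]]); y = np.array([0.3, -0.2, 0.5])
mu, var = gp_posterior(X, y, np.linspace(0, 1, 101)[:, None])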
CBO leverages this statistical model to quantify, in a cheap-to-evaluate utility function $U_n$, the usefulness of any design under consideration. The next design to evaluate is then selected by solving the following auxiliary problem (AP):

$$\text{(AP)} \qquad x_{n+1} = \operatorname*{argmax}_{x \in \mathcal{X}} U_n(x; S_n). \tag{2}$$

The vanilla CBO algorithm is summarized in Algorithm 1.

Algorithm 1: Constrained Bayesian Optimization
  Input: initial training set S_1, budget N
  for n = 1 to N do
    Construct GPs using S_n
    Update hyper-parameters
    Solve the AP for x_{n+1} = argmax_{x in X} U_n(x; S_n)
    Evaluate f(x_{n+1}), g_1(x_{n+1}), ..., g_I(x_{n+1})
    S_{n+1} = S_n ∪ {(x_{n+1}, f(x_{n+1}), g_1(x_{n+1}), ..., g_I(x_{n+1}))}
  end for

Many utility functions have been proposed in the literature. To decide which design to evaluate next, [27] proposed the use of the constrained expected improvement EIc, which, in the case of independent GPs, can be computed in closed form as the product of the expected improvement (obtained by considering the GP associated with the objective function) and the probabilities of feasibility associated with each constraint. This approach was later applied to machine learning applications [6] and extended to the multi-objective case [5]. Note that this method transforms an original constrained optimization problem into an unconstrained auxiliary problem by modifying the utility function.
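As a concrete illustration of the closed form just described, here is a short Python sketch of EIc for a minimization problem with independent GPs. The posterior moments at the candidate design are taken as inputs, and the helper name is ours, not from the paper.

import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sd_f, mu_g, sd_g, f_best):
    """EIc at one design: standard EI times the product of feasibility probabilities.
    mu_g, sd_g are arrays of posterior moments for the I constraints at that design."""
    z = (f_best - mu_f) / sd_f
    ei = (f_best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)      # expected improvement
    prob_feas = np.prod(norm.cdf(-np.asarray(mu_g) / np.asarray(sd_g)))  # P[g_i <= 0]
    return ei * prob_feas

# e.g., one objective GP and two constraint GPs evaluated at a candidate x:
# constrained_ei(mu_f=0.2, sd_f=0.5, mu_g=[0.1, -0.3], sd_g=[0.2, 0.4], f_best=0.0)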
In a similar spirit, [21] introduces a utility function that quantifies the benefit of evaluating a design by integrating its effect over the design space. The proposed utility function computes the expected reduction of the feasible domain below the best feasible value evaluated so far. This results in the expected volume of excursion criterion, which also requires approximation techniques to be computed. The former approaches revolve around computing a quantity based on improvement and require having at least one feasible design. Other strategies use information gain as the key element driving the optimization. [7] proposed a two-step approach for constrained BO when the objective and the constraints can be evaluated independently: the first step chooses the next location by maximizing the constrained EI [27]; the second step chooses whether to evaluate the objective or a constraint using an information gain metric (i.e., entropy search [12]). [13, 14] developed a strategy that simultaneously selects the design to be evaluated and the model to query (the objective or a constraint). The criterion used, predictive entropy search with constraints (PESC), is an extension of predictive entropy search (PES) [15]. One of the advantages of information gain-based methods stems from the fact that one does not need to start with a feasible design.

All aforementioned methods use myopic utilities to select the next design to evaluate, leading to suboptimal optimization strategies. In the unconstrained BO setting, multiple-step lookahead algorithms have been explored [20, 8, 18, 9, 17] and were shown to improve the performance of BO. To our knowledge, such lookahead strategies have not yet been addressed in the literature for constrained optimization, where they also have the potential to improve the performance of CBO algorithms.

3 Lookahead Formulation of CBO

In this section, we formulate CBO with a finite budget as a dynamic programming (DP) problem (Sec. 3.1). This leads to an optimal but computationally challenging optimization policy. To mitigate the cost of computing such a policy, we employ an approximate dynamic programming algorithm, rollout, and demonstrate how it can be adapted to CBO with a finite budget (Sec. 3.2).

3.1 Dynamic Programming Formulation

We seek an optimization policy which leads, after consumption of the evaluation budget, to the maximum feasible decrease of the objective function. Because the values of the expensive objective function and constraints are not known before their evaluations, it is impossible to quantify such a long-term reward within a cheap-to-evaluate utility function U_n. However, CBO endows the objective function and the constraints with a statistical model that can be interrogated to inform the optimizer of the likely values of f and g_i at a given design. This statistical model can be leveraged to simulate optimization scenarios over multiple steps and quantify their probabilities. Using this simulation mechanism, it is possible to quantify, in an average sense, the long-term reward achieved under a given optimization policy. The optimal policy is the solution of the DP problem that we formalize now.

Let n be the current iteration number of the CBO algorithm, and N the total budget of evaluations, or horizon. We refer to the future iterations of the optimization generated by simulation as stages. For any stage k ∈ {n, …, N}, all the information collected is contained in the training set S_k.
The function f and the I functions g_i are modeled with independent GPs. Their posterior quantities, conditioned on S_k, fully characterize our knowledge of f and g_i. Thus, we define the state of our knowledge at stage k to be the training set S_k ∈ Z_k. Based on the training set S_k, the simulation makes a decision regarding the next design x_{k+1} ∈ X to evaluate using an optimization policy. An optimization policy π = {π_1, …, π_N} is a sequence of rules, π_k : Z_k → X for k ∈ {1, …, N}, mapping a training set S_k to a design x_{k+1} = π_k(S_k).

In the simulations, the values f(x_{k+1}) and g_i(x_{k+1}) are unknown and are treated as uncertainties. We model those I + 1 uncertain quantities with I + 1 independent Gaussian random variables W^f_{k+1} and W^{g_i}_{k+1} based on the GPs:

W^f_{k+1} ~ N(μ_k(x_{k+1}; f), σ_k²(x_{k+1}; f)),    (3)
W^{g_i}_{k+1} ~ N(μ_k(x_{k+1}; g_i), σ_k²(x_{k+1}; g_i)),    (4)

where we recall that μ_k(x_{k+1}; φ) and σ_k²(x_{k+1}; φ) are the posterior mean and variance of the GP associated with any expensive function φ ∈ {f, g_1, …, g_I}, conditioned on S_k, at x_{k+1}. Then, the simulation generates an outcome. A simulated outcome w_{k+1} = (f_{k+1}, g¹_{k+1}, …, g^I_{k+1}) ∈ W ⊆ R^{I+1} is a sample of the (I+1)-dimensional random variable W_{k+1} = [W^f_{k+1}, W^{g_1}_{k+1}, …, W^{g_I}_{k+1}]. Note that simulating an outcome does not require evaluating the expensive f and g_i; in particular, f_{k+1} and g^i_{k+1} are not f(x_{k+1}) and g_i(x_{k+1}).

Once an outcome w_{k+1} = (f_{k+1}, g¹_{k+1}, …, g^I_{k+1}) is simulated, the system transitions to a new state S_{k+1}, governed by the system dynamic F_k : Z_k × X × W → Z_{k+1} given by:

S_{k+1} = F_k(S_k, x_{k+1}, w_{k+1}) = S_k ∪ {(x_{k+1}, f_{k+1}, g¹_{k+1}, …, g^I_{k+1})}.    (5)

Now that the simulation mechanism is defined, one needs a metric to assess the quality of a given optimization policy. At stage k, a stage-reward function r_k : Z_k × X × W → R quantifies the merit of querying a design if the outcome w_{k+1} = (f_{k+1}, g¹_{k+1}, …, g^I_{k+1}) occurs. This stage-reward is defined as the reduction of the objective function satisfying the constraints:

r_k(S_k, x_{k+1}, w_{k+1}) = max{0, f_best^{S_k} − f_{k+1}}    (6)

if g^i_{k+1} ≤ 0 for all i ∈ {1, …, I}, and r_k(·, ·, ·) = 0 otherwise, where f_best^{S_k} is the best feasible value at stage k. Thus, the expected (long-term) reward starting from training set S_n under optimization policy π is:

J_π(S_n) = E[ Σ_{k=n}^{N} r_k(S_k, π_k(S_k), w_{k+1}) ],    (7)

where the expectation is taken with respect to the (correlated) simulated values (w_{n+1}, …, w_{N+1}), and the state evolution is governed by Eq. 5. An optimal policy, π*, is a policy maximizing this long-term expected reward in the space of admissible policies Π:

J_{π*}(S_n) = max_{π ∈ Π} J_π(S_n).    (8)

The optimal reward J_{π*}(S_n) is given by Bellman's principle of optimality and can be computed using the DP recursive algorithm, working backward from k = N − 1 to k = n:

J_N(S_N) = max_{x_{N+1} ∈ X} E[r_N(S_N, x_{N+1}, w_{N+1})] = max_{x_{N+1} ∈ X} EI_c(x_{N+1}; S_N),
J_k(S_k) = max_{x_{k+1} ∈ X} E[r_k(S_k, x_{k+1}, w_{k+1}) + J_{k+1}(F_k(S_k, x_{k+1}, w_{k+1}))],    (9)

where each expectation is taken with respect to one simulated outcome vector w_{k+1}, and we have used the fact that E[r_k(S_k, x_{k+1}, w_{k+1})] = EI_c(x_{k+1}; S_k) is the constrained expected improvement, known in closed form [27]. The optimal reward is given by J_{π*}(S_n) = J_n(S_n). Thus, at iteration n of the CBO algorithm, the optimal policy selects the next design x_{n+1} that maximizes J_n(S_n) given by Eqs. 9. In other words, the best decision to make at iteration n maximizes, on average, the sum of the immediate reward r_n and the future long-term reward J_{n+1}(S_{n+1}) obtained by making optimal subsequent decisions. This is illustrated in Fig. 1, left panel.
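To make the simulation mechanism concrete, the following sketch draws a fantasy outcome from the independent Gaussians of Eqs. 3-4, applies the transition of Eq. 5, and evaluates the stage reward of Eq. 6. The dictionary-based interfaces are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_outcome(posteriors, x_next):
    """Draw a fantasy outcome w_{k+1} (Eqs. 3-4): one Gaussian sample per
    expensive function. `posteriors` maps a name ('f', 'g1', ...) to a
    callable x -> (mean, variance); this interface is an assumption."""
    w = {}
    for name, posterior in posteriors.items():
        mean, var = posterior(x_next)
        w[name] = rng.normal(mean, np.sqrt(max(var, 0.0)))
    return w

def transition(S, x_next, w):
    """System dynamic F_k (Eq. 5): augment the training set with the record."""
    return S + [(x_next, w)]

def stage_reward(f_best, w, constraint_names):
    """Stage reward r_k (Eq. 6): feasible decrease of the objective."""
    if all(w[g] <= 0.0 for g in constraint_names):
        return max(0.0, f_best - w["f"])
    return 0.0
```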
[Figure 1 omitted: schematic trees of the DP and rollout simulations over states S_k, designs x_{k+1} and outcomes w_{k+1}.]

Figure 1: Left: Tree illustrating the intractable DP formulation. Each black circle represents a training set and a design; each white circle is a training set. Dashed lines represent simulated outcomes resulting in expectations. The double arrows represent designs selected with the (unknown) optimal policy, leading to nested maximizations. Double arrows depict the bidirectional way information propagates when the optimal policy is built: each optimal decision depends on the previous steps and relies on the optimality of the future decisions. Right: Single arrows represent designs selected using a heuristic. This illustrates the unidirectional propagation of information when a known heuristic drives the simulations: each decision depends on the previous steps but is independent of the future ones. The absence of nested maximizations leads to a tractable formulation.

3.2 Rollout for Constrained Bayesian Optimization

The best optimization policy evaluates, at each iteration n of the CBO algorithm, the design x_{n+1} maximizing the optimal reward J_{π*}(S_n) (Eq. 8). This requires solving a problem with several nested maximizations and expectations (Eqs. 9), which is computationally intractable. To mitigate the cost of solving the DP algorithm, we employ an approximate dynamic programming (ADP) technique: rollout (see [2, 23] for an overview). Rollout selects the next design by maximizing a (suboptimal) long-term reward J_π̄. The reward is computed by simulating optimization scenarios over several future steps. However, the simulated steps are not controlled by the optimal policy π*. Instead, rollout uses a suboptimal policy π̄, i.e., a heuristic, to drive the simulation. This circumvents the need for nested maximizations (as illustrated in Fig. 1, right panel) and simplifies the computation of J_π̄ compared to J_{π*}. We now formalize the rollout algorithm, propose a heuristic π̄ adapted to the context of CBO with a finite budget, and detail further numerical approximations.

Let us consider iteration n of the CBO algorithm. The long-term reward J_π̄(S_n) induced by a (known) heuristic π̄ = {π̄_1, …, π̄_N}, starting from state S_n, is defined by Eq. 7. This can be rewritten as J_π̄(S_n) = H_n(S_n), where H_k is recursively defined, from k = N back to k = n, by:

H_{N+1}(S_{N+1}) = 0,
H_k(S_k) = E[r_k(S_k, π̄_k(S_k), w_{k+1}) + γ H_{k+1}(F_k(S_k, π̄_k(S_k), w_{k+1}))],    (10)

where each expectation is taken with respect to one simulated outcome vector w_{k+1}, and γ ∈ [0, 1] is a discount factor encouraging the early collection of reward. A discount factor γ = 0 leads to a greedy policy, focusing on immediate reward; in that case, the reward J_π̄ simplifies to the constrained expected improvement EI_c. A discount factor γ = 1, on the other hand, is indifferent to when the reward is collected. The fundamental simplification introduced by the rollout algorithm lies in the absence of nested maximizations in Eqs. 10. This is illustrated in Fig. 1, right panel. By applying a known heuristic, information only propagates forward: every simulated step depends on the previous steps, but is independent of the future simulated steps.
This is in contrast to the DP algorithm, illustrated in Fig. 1: because the optimal policy is not known, it needs to be built by solving a sequence of nested problems, so information propagates both forward and backward.

While H_n is simpler to compute than J_n, it still requires computing nested expectations for which there is no closed-form expression. To further alleviate the cost of computing the long-term reward, we introduce two numerical simplifications. First, we use a rolling horizon h ∈ N to decrease the number of future steps simulated. A rolling horizon h replaces the horizon N by Ñ = min{N, n+h}. Second, the expectations with respect to the (I+1)-dimensional Gaussian random variables are numerically approximated using Gauss-Hermite quadrature. We obtain the following formulation:

H̃_{Ñ+1}(S_{Ñ+1}) = 0,
H̃_k(S_k) = EI_c(π̄_k(S_k); S_k) + γ Σ_{q=1}^{N_q} α^(q) [H̃_{k+1}(F_k(S_k, π̄_k(S_k), w^(q)_{k+1}))],    (11)

where N_q is the number of quadrature weights α^(q) ∈ R and points w^(q)_{k+1} ∈ R^{I+1}. For all iterations n ∈ {1, …, N} and for all x_{n+1} ∈ X, we define the utility function of our rollout algorithm for CBO with a finite budget to be:

U_n(x_{n+1}; S_n) = EI_c(x_{n+1}; S_n) + γ Σ_{q=1}^{N_q} α^(q) [H̃_{n+1}(F_n(S_n, x_{n+1}, w^(q)_{n+1}))].    (12)

The heuristic π̄ is problem-dependent. A desirable heuristic combines two properties: (1) it is cheap to compute, and (2) it is a good approximation of the optimal policy π*. In the case of CBO with a finite budget, the heuristic π̄ ought to mimic the exploration-exploitation trade-off balanced by the optimal policy π*. To do so, we propose using a combination of greedy CBO algorithms: maximization of the constrained expected improvement (which has an exploratory behavior) and a constrained optimization based on the posterior means of the GPs (which has an exploitative behavior). For a given iteration n, we define the heuristic π̄ = {π̄_{n+1}, …, π̄_Ñ} such that for stages k ∈ {n+1, …, Ñ−1}, the policy component π̄_k : Z_k → X maps a state S_k to the design x_{k+1} satisfying:

x_{k+1} = argmax_{x ∈ X} EI_c(x; S_k).    (13)

The last policy component, π̄_Ñ : Z_Ñ → X, maps a state S_Ñ to x_{Ñ+1} such that:

x_{Ñ+1} = argmin_{x ∈ X} μ_Ñ(x; f)   s.t.   PF(x; S_Ñ) ≥ 0.99,    (14)

where PF is the probability of feasibility, known in closed form. Every evaluation of the utility function U_n requires O(N_q^h) applications of a heuristic component π̄_k. The heuristic that we propose optimizes a quantity that requires O(|S_k|²) of work.

To summarize, the proposed approach sequentially selects the next design to evaluate by maximizing the long-term reward induced by a heuristic. This rollout algorithm is a one-step lookahead formulation (one maximization) and is easier to solve than the N-step lookahead approach (N nested maximizations) presented in Sec. 3.1. Rollout is a closed-loop approach, where the information collected at a given stage of the simulation is used to simulate the next stages. The heuristic used in the rollout is problem-dependent, and we proposed using a combination of greedy CBO algorithms to construct such a heuristic. The computation of the utility function is detailed in Algorithm 2.

Algorithm 2 Rollout Utility Function
Function: utility(x, h, S)
    Construct GPs using S
    if h = 0 then
        U ← EI_c(x; S)
    else
        U ← EI_c(x; S)
        Generate N_q Gauss-Hermite quadrature weights α^(q) and points w^(q) associated with x
        for q = 1 to N_q do
            S′ ← S ∪ {(x, w^(q))}
            if h > 1 then
                x′ ← π̄(S′) using Eq. 13
            else
                x′ ← π̄(S′) using Eq. 14
            end if
            U ← U + γ α^(q) utility(x′, h − 1, S′)
        end for
    end if
    Output: U
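A direct Python transcription of Algorithm 2 might look as follows. This is a sketch under simplifying assumptions: GP hyper-parameters are kept fixed inside simulations (as in the paper's experiments), `ei_c` and `heuristic` are user-supplied callables implementing EI_c and Eqs. 13-14, and the (I+1)-dimensional quadrature is built as a tensor product of one-dimensional Gauss-Hermite rules.

```python
import itertools
import numpy as np

def rollout_utility(x, h, S, gps, heuristic, ei_c, gamma=0.9, n_q=3):
    """Rollout utility of Algorithm 2 (Eqs. 11-12), sketched.

    gps: dict mapping 'f', 'g1', ... to callables (x, S) -> (mean, variance);
    heuristic(S, last_stage) implements Eqs. 13-14; ei_c(x, S, gps) is the
    closed-form constrained EI. These interfaces are assumptions, not the
    authors' code."""
    U = ei_c(x, S, gps)
    if h == 0:
        return U
    # 1-D Gauss-Hermite rule adapted to N(0,1): nodes * sqrt(2), weights / sqrt(pi).
    t, w = np.polynomial.hermite.hermgauss(n_q)
    t, w = np.sqrt(2.0) * t, w / np.sqrt(np.pi)
    names = list(gps)
    stats = {name: gps[name](x, S) for name in names}   # posterior (mean, var) at x
    # Tensor-product quadrature over the I+1 independent Gaussians: n_q^(I+1)
    # points in total, matching the paper's N_q = 3^(I+1) when n_q = 3.
    for idx in itertools.product(range(n_q), repeat=len(names)):
        alpha = np.prod([w[i] for i in idx])            # quadrature weight alpha^(q)
        w_q = {name: stats[name][0] + np.sqrt(max(stats[name][1], 0.0)) * t[i]
               for name, i in zip(names, idx)}          # simulated outcome w^(q)
        S_next = S + [(x, w_q)]                         # transition F_k (Eq. 5)
        x_next = heuristic(S_next, last_stage=(h == 1)) # Eq. 13 if h > 1, Eq. 14 if h = 1
        U += gamma * alpha * rollout_utility(x_next, h - 1, S_next,
                                             gps, heuristic, ei_c, gamma, n_q)
    return U
```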
4 Results

In this section, we numerically investigate the proposed algorithm and demonstrate its performance on classic test functions and a reacting flow problem. To compare the performance of the different CBO algorithms tested, we use the utility gap metric [14]. At iteration n, the utility gap e_n measures the error between the optimum feasible value f* and the value of the objective function at a recommended design x*_n:

e_n = |f(x*_n) − f*|   if x*_n is feasible,
e_n = |ρ − f*|          otherwise,    (15)

where ρ is a user-defined penalty punishing infeasible recommendations. The recommended design x*_n differs from the design selected for evaluation x_n: it is the design that the algorithm would recommend if the optimization were to be stopped at iteration n, without early notice. We use the same system of recommendation as [14]:

x*_n = argmin_{x ∈ X} μ_n(x; f)   s.t.   PF(x; S_n) ≥ 0.975.    (16)

Note that the utility gap e_n is not guaranteed to decrease, because recommendations x*_n are not necessarily better with iterations; in particular, e_n is not the best error achieved in the training set S_n.
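The utility gap of Eq. 15 is straightforward to compute once a recommendation is available; a minimal sketch (the penalty value is user-defined):

```python
def utility_gap(x_rec, f, constraints, f_star, penalty):
    """Utility gap e_n of Eq. 15: error of the recommended design x_n^*,
    with a user-defined penalty when the recommendation is infeasible."""
    if all(g(x_rec) <= 0.0 for g in constraints):
        return abs(f(x_rec) - f_star)
    return abs(penalty - f_star)
```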
In the following numerical experiments, for the rollout algorithm, we use independent zero-mean GPs with an automatic relevance determination (ARD) squared-exponential kernel to model each expensive-to-evaluate function. In Algorithm 1, when the GPs are constructed, the vector of hyper-parameters θ_i associated with the i-th GP kernel is estimated by maximization of the marginal likelihood. However, to reduce the cost of computing U_n, the hyper-parameters are kept constant in the simulated steps (i.e., in Algorithm 2). To compute the expectations of Eqs. 11-12, we employ N_q = 3^{I+1} Gauss-Hermite quadrature weights and points, and we set the discount factor to γ = 0.9. Finally, at iteration n, the best value f_best^{S_n} is set to the minimum posterior mean μ_n(x; f) over the designs x in the training set S_n such that the posterior mean of each constraint is feasible. If no such point can be found, then f_best^{S_n} is set to the maximum of {μ_n(x; f) + 3σ_m} over the designs x in S_n, where σ_m² is the maximum variance of the GP associated with f. The EIC algorithm is computed as a special case of the rollout with rolling horizon h = 0, and we use the Spearmint package (https://github.com/HIPS/Spearmint/tree/PESC) to run the PESC algorithm. We additionally run a CBO algorithm, suggested by a reviewer, that selects the next design to evaluate based on the posterior means of the GPs:

x_{n+1} = argmin_{x ∈ X} μ_n(x; f)   s.t.   μ_n(x; g_i) ≤ 0, ∀ i ∈ {1, …, I}.    (17)

[Figure 2 omitted: median utility gap versus iteration for PESC, PM, EIc and rollout with h = 1, 2, 3.]

Figure 2: Left: Multi-modal objective and single constraint (P1). Right: Linear objective and multiple non-linear constraints (P2). Shaded region indicates the 95% confidence interval of the median statistic.

We refer to this algorithm as PM. We also compare the CBO algorithms to three local algorithms (SLSQP, MMA and COBYLA) and to one global evolutionary algorithm (ISRES). We now consider four problems with different design space dimensions d, several numbers of constraints I, and various topologies of the feasible space. The first three problems, P1-3, are analytic functions, while the last one, P4, uses a reacting flow model that requires solving a set of partial differential equations (PDEs) [4]. For P1 and P2, we use N = 40 evaluations (as in [6, 10]). For P3 and P4, we use a small number of iterations N = 60, which corresponds to situations where the functions are very expensive to evaluate (e.g., solving large systems of PDEs can take over a day on a supercomputer). The full description of the problems is available in the appendix. In Figs. 2-3, we show the median of the utility gap; the shadings represent the 95% confidence interval of the median computed by bootstrap. Other statistics of the utility gap are shown in the appendix.

For P1, the median utility gap for EIC, PESC, PM and the rollout algorithm with h ∈ {1, 2, 3} is shown in Fig. 2 (left panel). The PM algorithm does not improve its recommendations. This is not surprising, because PM focuses on exploitation (PM does not depend on the posterior variance), which can result in the algorithm failing to make further progress; such behavior has already been reported in [16] (Sec. 3). The three other CBO algorithms perform similarly in the first 10 iterations. PESC is the first to converge, to a utility gap of about 10^{-2.7}. The rollout performs better than or similarly to EIC. In the first 15 iterations, longer rolling horizons lead to slightly lower utility gaps. This is likely due to the more exploratory behavior associated with lookahead, which helps differentiate the global solution from the local ones. For the remaining iterations, the shorter rolling horizons reduce the utility gap faster than longer rolling horizons before reaching a plateau. EIC and rollout outperform PESC after 25 iterations. We note that EIC and rollout have essentially converged.

For P2, the median performance of EIC, PESC, PM and rollout with rolling horizon h ∈ {1, 2, 3} is shown in Fig. 2 (right panel). The PM algorithm reduces the utility gap in the first 10 iterations, but reaches a plateau at 10^{-1.7}. The three other CBO algorithms perform similarly up to iteration 15, where PESC reaches a plateau (results obtained for the PESC mean utility gap are consistent with [13]). This similarity may be explained by the fact that the local solutions are easily differentiable from the global one, leading to no advantage for exploratory behavior. In this example, the rollout algorithms all reached the same plateau at 10^{-3}, with longer horizons h taking more iterations to converge. EIC performs better than rollout with h = 2 before its performance slightly decreases, reaching a plateau at a larger utility gap of 10^{-2.6} (note that the utility gap is not computed with the best value observed so far and thus is not guaranteed to decrease). This increase of the median utility gap can be explained by the fact that a few runs change their recommendation from one local minimum to another, resulting in the change in the median utility gap. This is also reflected in the 95% confidence interval of the median, which further indicates that the statistic is sensitive to a few runs.

For P3, the median utility gap for the four CBO algorithms is shown in Fig. 3 (left panel). PM is rapidly outperformed by the other algorithms. The PESC algorithm is outperformed by EIC and rollout after 25 iterations. Again, we note that rollout with h = 1 obtains a lower utility gap than EIC at every iteration.
The rollout with h ∈ {2, 3} exhibits a different behavior: it starts decreasing the utility gap later in the optimization but achieves a better performance when the evaluation budget is consumed. Note that none of the algorithms has converged to the global solution, and the strong multi-modality of the objective and constraint function seems to favor exploratory behaviors.

[Figure 3 omitted: median utility gap versus iteration for PESC, PM, EIc and rollout with h = 1, 2, 3.]

Figure 3: Left: Multi-modal 4-d objective and constraint (P3). Right: Reacting flow problem (P4). The awareness of the remaining budget explains the sharp decrease in the last iterations for the rollout.

For the reacting flow problem P4, the median performances are shown in Fig. 3 (right panel). PM rapidly reaches a plateau at e_n ≈ 10^{1.3}. PESC rapidly reduces the utility gap, outperforming the other algorithms after 15 iterations. EIC and rollout perform similarly and slowly decrease the utility gap up to iteration 40, where EIC reaches a plateau and rollout continues to improve performance, slightly outperforming PESC at the end of the optimization.

The results are summarized in Table 1 and show that the rollout algorithm with different rolling horizons h (R-h) performs similarly to or favorably compared to the other algorithms.

Table 1: Log median utility gap log10(e_N). Statistics computed over m independent runs.

Prob  d  N   I  m    SLSQP  MMA    COBYLA  ISRES  PESC   PM     EIC    R-1    R-2    R-3
P1    2  40  1  500   0.59   0.59  -0.05   -0.19  -2.68   0.30  -4.45  -4.59  -4.52  -4.42
P2    2  40  2  500  -0.40  -0.40  -0.82   -0.70  -2.43  -1.76  -2.62  -2.99  -2.99  -2.99*
P3    4  60  1  500   2.15   3.06   3.06    1.68   1.66   1.79   1.60   1.48   1.31   1.35
P4    4  60  1  50    0.80   0.80   0.80    0.13   0.09   1.26   0.57  -0.10  -0.10   0.19

(*) For cost reasons, the median for h = 3 on P2 was computed with m = 100 independent runs instead of 500.

Based on the four previous examples, we notice that increasing the rolling horizon h does not necessarily improve the performance of the rollout algorithm. One possible reason stems from the fact that lookahead algorithms rely more on the statistical model than greedy algorithms do. Because this model is learned as the optimization unfolds, it is an imperfect model (in particular, the hyper-parameters of the GPs are updated after each iteration, but not after each stage of a simulated scenario). By simulating too many steps with the GPs, one may be relying over-confidently on the model. In some sense, the rolling horizon h, as well as the discount factor γ, can be interpreted as a form of regularization. The effect of a larger rolling horizon is problem-dependent, and experiment P3 suggests that multimodal problems in higher dimension may benefit from longer rolling horizons.

5 Conclusions

We proposed a new formulation for constrained Bayesian optimization with a finite budget of evaluations. The best optimization policy is defined as the one maximizing, on average, the cumulative feasible decrease of the objective function over multiple steps. This optimal policy is the solution of a dynamic programming problem that is intractable due to the presence of nested maximizations. To circumvent this difficulty, we employed the rollout algorithm. Rollout uses a heuristic to simulate optimization scenarios over several steps, thereby computing an approximation of the long-term reward.
This heuristic is problem-dependent and, in this paper, we proposed using a combination of cheap-to-evaluate greedy CBO algorithms to construct it. The proposed algorithm was numerically investigated and performed similarly to or favorably compared to constrained expected improvement (EIC) and predictive entropy search with constraints (PESC).

This work was supported in part by the AFOSR MURI on multi-information sources of multi-physics systems under Award Number FA9550-15-1-0038, program manager Dr. Jean-Luc Cambier.

References
[1] C. Audet, A. J. Booker, J. E. Dennis Jr, P. D. Frank, and D. W. Moore. A surrogate-model-based method for constrained optimization. AIAA paper, 4891, 2000.
[2] D. P. Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific, 1995.
[3] M. Björkman and K. Holmström. Global optimization of costly nonconvex functions using radial basis functions. Optimization and Engineering, 4(1):373–397, 2000.
[4] M. Buffoni and K. E. Willcox. Projection-based model reduction for reacting flows. In 40th Fluid Dynamics Conference and Exhibit, page 5008, 2010.
[5] P. Feliot, J. Bect, and E. Vazquez. A Bayesian approach to constrained single- and multi-objective optimization. Journal of Global Optimization, 67(1-2):97–133, 2017.
[6] J. Gardner, M. Kusner, K. Q. Weinberger, J. Cunningham, and Z. Xu. Bayesian optimization with inequality constraints. In T. Jebara and E. P. Xing, editors, Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 937–945. JMLR Workshop and Conference Proceedings, 2014.
[7] M. A. Gelbart, J. Snoek, and R. P. Adams. Bayesian optimization with unknown constraints. arXiv preprint arXiv:1403.5607, 2014.
[8] D. Ginsbourger and R. Le Riche. Towards Gaussian process-based optimization with finite time horizon. In mODa 9 – Advances in Model-Oriented Design and Analysis, pages 89–96. Springer, 2010.
[9] J. González, M. Osborne, and N. D. Lawrence. GLASSES: Relieving the myopia of Bayesian optimisation. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 790–799, 2016.
[10] R. B. Gramacy, G. A. Gray, S. Le Digabel, H. K. H. Lee, P. Ranjan, G. Wells, and S. M. Wild. Modeling an augmented Lagrangian for blackbox constrained optimization. Technometrics, 58(1):1–11, 2016.
[11] R. B. Gramacy and H. K. H. Lee. Optimization under unknown constraints. arXiv preprint arXiv:1004.4027, 2010.
[12] P. Hennig and C. J. Schuler. Entropy search for information-efficient global optimization. The Journal of Machine Learning Research, 13(1):1809–1837, 2012.
[13] J. M. Hernández-Lobato, M. A. Gelbart, R. P. Adams, M. W. Hoffman, and Z. Ghahramani. A general framework for constrained Bayesian optimization using information-based search. arXiv preprint arXiv:1511.09422, 2015.
[14] J. M. Hernández-Lobato, M. A. Gelbart, M. W. Hoffman, R. P. Adams, and Z. Ghahramani. Predictive entropy search for Bayesian optimization with unknown constraints. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015.
[15] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Advances in Neural Information Processing Systems, pages 918–926, 2014.
[16] D. R. Jones. A taxonomy of global optimization methods based on response surfaces.
Journal of Global Optimization, 21(4):345–383, 2001.
[17] R. R. Lam, K. E. Willcox, and D. H. Wolpert. Bayesian optimization with a finite budget: An approximate dynamic programming approach. In Advances in Neural Information Processing Systems, pages 883–891, 2016.
[18] C. K. Ling, K. H. Low, and P. Jaillet. Gaussian process planning with Lipschitz continuous reward functions: Towards unifying Bayesian optimization, active learning, and beyond. arXiv preprint arXiv:1511.06890, 2015.
[19] J. Mockus, V. Tiesis, and A. Zilinskas. The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2(117-129):2, 1978.
[20] M. A. Osborne, R. Garnett, and S. J. Roberts. Gaussian processes for global optimization. In 3rd International Conference on Learning and Intelligent Optimization (LION3), pages 1–15, 2009.
[21] V. Picheny. A stepwise uncertainty reduction approach to constrained global optimization. In AISTATS, pages 787–795, 2014.
[22] V. Picheny, R. B. Gramacy, S. Wild, and S. Le Digabel. Bayesian optimization under mixed constraints with a slack-variable augmented Lagrangian. In Advances in Neural Information Processing Systems, pages 1435–1443, 2016.
[23] W. B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality, volume 842. John Wiley & Sons, 2011.
[24] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
[25] R. G. Regis. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points. Engineering Optimization, 46(2):218–243, 2014.
[26] M. J. Sasena, P. Y. Papalambros, and P. Goovaerts. The use of surrogate modeling algorithms to exploit disparities in function computation time within simulation-based optimization. Constraints, 2:5, 2001.
[27] M. Schonlau, W. J. Welch, and D. R. Jones. Global versus local search in constrained optimization of computer models. Lecture Notes-Monograph Series, pages 11–25, 1998.
Hierarchical Methods of Moments

Matteo Ruffini, Universitat Politècnica de Catalunya ([email protected])
Guillaume Rabusseau, McGill University ([email protected])
Borja Balle, Amazon Research Cambridge

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Spectral methods of moments provide a powerful tool for learning the parameters of latent variable models. Despite their theoretical appeal, the applicability of these methods to real data is still limited due to a lack of robustness to model misspecification. In this paper we present a hierarchical approach to methods of moments to circumvent such limitations. Our method is based on replacing the tensor decomposition step used in previous algorithms with approximate joint diagonalization. Experiments on topic modeling show that our method outperforms previous tensor decomposition methods in terms of speed and model quality.

1 Introduction

Unsupervised learning of latent variable models is a fundamental machine learning problem. Algorithms for learning a variety of latent variable models, including topic models, hidden Markov models, and mixture models, are routinely used in practical applications for solving tasks ranging from representation learning to exploratory data analysis. For practitioners faced with the problem of training a latent variable model, the decades-old Expectation-Maximization (EM) algorithm [1] is still the tool of choice. Despite its theoretical limitations, EM owes its appeal to (i) the robustness of the maximum-likelihood principle to model misspecification, and (ii) the need, in most cases, to tune a single parameter: the dimension of the latent variables.

On the other hand, method of moments (MoM) algorithms for learning latent variable models via efficient tensor factorization algorithms have been proposed in the last few years [2-9]. Compared to EM, moment-based algorithms provide a stronger theoretical foundation for learning latent variable models. In particular, it is known that in the realizable setting the output of a MoM algorithm will converge to the parameters of the true model as the amount of training data increases. Furthermore, MoM algorithms only make a single pass over the training data, are highly parallelizable, and always terminate in polynomial time. However, despite their apparent advantages over EM, the adoption of MoM algorithms in practical applications is still limited. Empirical studies indicate that initializing EM with the output of a MoM algorithm can improve the convergence speed of EM by several orders of magnitude, yielding a very efficient strategy to accurately learn latent variable models [8-10]. In the case of relatively simple models this approach can be backed by intricate theoretical analyses [11]. Nonetheless, these strategies are not widely deployed in practice either.

The main reason why MoM algorithms are not adopted by practitioners is their lack of robustness to model misspecification. Even when combined with EM, MoM algorithms fail to provide an initial estimate for the parameters of a model leading to fast convergence when the learning problem is too far from the realizable setting. For example, this happens when the number of latent variables used in a MoM algorithm is too small to accurately represent the training data. In contrast, the model
obtained by standalone EM in this case is reasonable and desirable: when asked for a small number of latent variables, EM yields a model which is easy to interpret and can be useful for data visualization and exploration. For example, an important application of low-dimensional learning can be found in mixture models, where latent class assignments provided by a simple model can be used to split the training data into disjoint datasets to which EM is applied recursively to produce a hierarchical clustering [12, 13]. The tree produced by such a clustering procedure provides a useful aid in data exploration and visualization, even if the models learned at each branching point do not accurately represent the training data.

In this paper we develop a hierarchical method of moments that produces meaningful results even in misspecified settings. Our approach is different from previous attempts to design MoM algorithms for misspecified models. Instead of looking for convex relaxations of existing MoM algorithms like in [14-16], or analyzing the behavior of a MoM algorithm with a misspecified number of latent states like in [17, 18], we generalize well-known simultaneous diagonalization approaches to tensor decomposition by phrasing the problem as a non-convex optimization problem. Despite its non-convexity, the hierarchical nature of our method allows for a fast accurate solution based on low-dimensional grid search. We test our method on synthetic and real-world datasets on the topic modeling task, showcasing the advantages of our approach and obtaining meaningful results.

2 Moments, Tensors, and Latent Variable Models

This section starts by recalling the basic ideas behind methods of moments for learning latent variable models via tensor decompositions. Then we review existing tensor decomposition algorithms and discuss the effect of model misspecification on the output of such algorithms.

For simplicity we consider first a single topic model with k topics over a vocabulary with d words. A single topic model defines a generative process for text documents where first a topic Y ∈ [k] is drawn from some discrete distribution P[Y = i] = ω_i, and then each word X_t ∈ [d], 1 ≤ t ≤ T, in a document of length T is independently drawn from some distribution P[X_t = j | Y = i] = μ_{i,j} over words, conditioned on the document topic. The model is completely specified by the vector of topic proportions ω ∈ R^k and the word distributions μ_i ∈ R^d for each topic i ∈ [k]. We collect the word distributions of the model as the columns of a matrix M = [μ_1 ⋯ μ_k] ∈ R^{d×k}. It is convenient to represent the words in a document using one-hot encodings, so that X_t ∈ R^d is an indicator vector. With this notation, the conditional expectation of any word in a document drawn from topic i is E[X_t | Y = i] = μ_i, and the random vector X = Σ_{t=1}^T X_t is conditionally distributed as a multinomial random variable with parameters μ_i and T. Integrating over topics drawn from ω, we obtain the first moment of the distribution over words, M_1 = E[X_t] = Σ_i ω_i μ_i = Mω. Generalizing this argument to pairs and triples of distinct words in a document yields the matrix of second order moments and the tensor of third order moments of a single topic model:

M_2 = E[X_s ⊗ X_t] = Σ_i ω_i μ_i ⊗ μ_i ∈ R^{d×d},    (1)
M_3 = E[X_r ⊗ X_s ⊗ X_t] = Σ_i ω_i μ_i ⊗ μ_i ⊗ μ_i ∈ R^{d×d×d},    (2)

where ⊗ denotes the tensor (Kronecker) product between vectors. By defining the matrix Ω = diag(ω), one also obtains the expression M_2 = M Ω Mᵀ.

A method of moments for learning single topic models proceeds by (i) using a collection of n documents to compute empirical estimates M̂_1, M̂_2, M̂_3 of the moments, and (ii) using matrix and tensor decomposition methods to (approximately) factor these empirical moments and extract the model parameters from their decompositions. From the algorithmic point of view, the appeal of this scheme resides in the fact that step (i) requires a single pass over the data, which can be trivially parallelized using map-reduce primitives, while step (ii) only requires linear algebra operations whose running time is independent of n. The specifics of step (ii) will be discussed in Section 2.1.

Estimating moments M̂_m from data with the property that E[M̂_m] = M_m for m ∈ {1, 2, 3} is the essential requirement for step (i). In the case of single topic models, and more generally multi-view models, such estimations are straightforward. For example, a simple consistent estimator takes a collection of documents {x^(i)}_{i=1}^n and computes M̂_3 = (1/n) Σ_{i=1}^n x_1^(i) ⊗ x_2^(i) ⊗ x_3^(i) using the first three words from each document. More data-efficient estimators for datasets containing long documents can be found in the literature [19].
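This estimator is a few lines of numpy; the sketch below assumes each document is given as a sequence of at least three word indices (function names are ours):

```python
import numpy as np

def estimate_moments(docs, d):
    """Empirical M1, M2, M3 for a single topic model, using the first three
    words of each document; one-hot encoding is implicit via indexing."""
    M1 = np.zeros(d)
    M2 = np.zeros((d, d))
    M3 = np.zeros((d, d, d))
    n = len(docs)
    for doc in docs:
        w1, w2, w3 = doc[0], doc[1], doc[2]  # requires documents of length >= 3
        M1[w1] += 1.0 / n
        M2[w1, w2] += 1.0 / n
        M3[w1, w2, w3] += 1.0 / n
    return M1, M2, M3
```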
A method of moments for learning single topic models proceeds by (i) using a collection of n ? 1, M ? 2, M ? 3 of the moments, and (ii) using matrix and documents to compute empirical estimates M tensor decomposition methods to (approximately) factor these empirical moments and extract the model parameters from their decompositions. From the algorithmic point of view, the appeal of this scheme resides in the fact that step (i) requires a single pass over the data which can be trivially parallelized using map-reduce primitives, while step (ii) only requires linear algebra operations whose running time is independent of n. The specifics of step (ii) will be discussed in Section 2.1. ? m from data with the property that E[M ? m ] = Mm for m ? {1, 2, 3} is the Estimating moments M essential requirement for step (i). In the case of single topic models, and more generally multi-view models, such estimations are straightforward. For example, a simple consistent estimator takes a ? 3 = (1/n) Pn x(i) ? x(i) ? x(i) using the collection of documents {x(i) }ni=1 and computes M 2 3 i=1 1 2 first three words from each document. More data-efficient estimators for datasets containing long documents can be found in the literature [19]. For more complex models the method sketched above requires some modifications. Specifically, it is often necessary to correct the statistics directly observable from data in order to obtain vectors/matrices/tensors whose expectation over a training dataset exhibits precisely the relation with the parameters ? and M described above. For example, this is the case for Latent Dirichlet Allocation and mixtures of spherical Gaussians [4, 6]. For models with temporal dependence between observations, e.g. hidden Markov models, the method requires a spectral projection of observables to obtain moments behaving in a multi-view-like fashion [3, 20]. Nonetheless, methods of moments for these models and many others always reduces to the factorization of a matrix and tensor of the form M2 and M3 given above. 2.1 Existing Tensor Decomposition Algorithms Mathematically speaking, methods of moments attempt to solve the polynomial equations in ? and ? m into the expressions for their expectations M arising from plugging the empirical estimates M given above. Several approaches have been proposed to solve these non-linear systems of equations. The tensor power method (TPM) [2] starts with a whitening step where, given the SVD M2 = U SU > , the whitening matrix E = U S 1/2 ? Rd?k is used to transform M3 into a symmetric orthogonally Pk decomposable tensor T = i=1 ?i E ? ?i ? E ? ?i ? E ? ?i ? Rk?k?k . The weights ?i and vectors ?i are then recovered from T using a tensor power method and inverting the whitening step. The same whitening matrix is used in [3, 4], where the authors observe that the whitened slices of M3 are simultaneously diagonalized by the Moore-Penrose pseudoinverse of M ?1/2 . Indeed, since M2 = M ?M > = EE > , there exists a unique orthonormal matrix O ? Rk?k such that M ?1/2 = EO. Writing M3,r ? Rd?d for the rth slice of M3 across its second mode and mr for the rth row of M , it follows that M3,r = M ?1/2 diag(mr )?1/2 M > = EOdiag(mr )O> E > . Thus, the problem can be reduced to searching for the common diagonalizer O of the whitened slices of M3 defined as Hr = E ? M3,r E ?> = Odiag(mr )O> . (3) In the noiseless settings it is sufficient to diagonalize any of the slices M3,r . 
Lastly, the method proposed in [21] consists in directly performing simultaneous diagonalization of random linear combinations of slices of M_3, without any whitening step. This method, which in practice is slower than the others (see Section 4.1), can, under an incoherence assumption on the vectors μ_i, robustly recover the weights ω_i and vectors μ_i from the tensor M_3, even when it is not orthogonally decomposable.

2.2 The Misspecified Setting

All the methods listed in the previous section have been analyzed in the case where the algorithm only has access to noisy estimates of the moments. However, such analyses assume that the data was generated by a model from the hypothesis class, that the matrix M has rank k, and that this rank is known to the algorithm. In practice the dimension k of the latent variable can be cross-validated, but in many cases this is not enough: data may come from a model outside the class, or from a model with a very large true k. Besides, the moment estimates might be too noisy to provide reliable estimates for large numbers of latent variables. It is thus frequent to use these algorithms to estimate l < k latent variables. However, existing algorithms are not robust in this setting, as they have not been designed to work in this regime, and there is no theoretical explanation of what their outputs will be.

The methods relying on a whitening step [2-4] will perform the whitening using the matrix E_l† obtained from the low-rank SVD truncated at rank l: M_2 ≈ U_l S_l U_lᵀ = E_l E_lᵀ. TPM will use E_l to whiten the tensor M_3 into a tensor T_l ∈ R^{l×l×l}. However, when k > l, T_l may not admit a symmetric orthogonal decomposition (see the supplementary material for an example corroborating this statement). Consequently, it is not clear what TPM will return in this case, and there are no guarantees it will even converge. The methods from [3, 4] will compute the matrices H_{l,r} = E_l† M_{3,r} E_l†ᵀ for r ∈ [d], which may not be jointly diagonalizable, and in this case there is no theoretical justification of what the result of these algorithms will be. Similarly, the simultaneous diagonalization method proposed in [21] produces a matrix that nearly diagonalizes the slices of M_3, but no analysis is given for this setting.

3 Simultaneous Diagonalization Based on Whitening and Optimization

This section presents the main contribution of the paper: a simultaneous diagonalization algorithm based on whitening and optimization, which we call SIDIWO (Simultaneous Diagonalization based on Whitening and Optimization). When asked to produce l = k components in the noiseless setting, SIDIWO will return the same output as any of the methods discussed in Section 2.1. However, in contrast with those methods, SIDIWO will provide useful results with a clear interpretation even in a misspecified setting (l < k).

3.1 SIDIWO in the Realizable Setting

To derive our SIDIWO algorithm we first observe that, in the noiseless setting and when l = k, the pair (M, ω) returned by all methods described in Section 2.1 is the solution of the optimization problem given in the following lemma (the proofs of all results are provided in the supplementary material).

Lemma 3.1 Let M_{3,r} be the r-th slice across the second mode of the tensor M_3 from (2), with parameters (M, ω). Suppose rank(M) = k and let Ω = diag(ω). Then the matrix (MΩ^{1/2})† is the unique optimum (up to column rescaling) of the optimization problem

min_{D ∈ D_k} ( Σ_{i≠j} Σ_{r=1}^d (D M_{3,r} Dᵀ)²_{i,j} )^{1/2},    (4)

where D_k = {D : D = (EO_k)† for some O_k s.t. O_k O_kᵀ = I_k} and E is the whitening matrix defined in Section 2.1.

Remark 1 (The role of the constraint) Consider the cost function of Problem (4): in an unconstrained setting, there may be several matrices minimizing that cost. A trivial example is the zero matrix. A less trivial example is when the rows of D belong to the orthogonal complement of the column space of the matrix M. The constraint D = (EO_k)† for some orthonormal matrix O_k first excludes the zero matrix from the set of feasible solutions, and second guarantees that all feasible solutions lie in the space generated by the columns of M.

Algorithm 1 SIDIWO: Simultaneous Diagonalization based on Whitening and Optimization
Require: M_1, M_2, M_3, the number of latent states l
1: Compute an SVD of M_2 truncated at the l-th singular vector: M_2 ≈ U_l S_l U_lᵀ.
2: Define the matrix E_l = U_l S_l^{1/2} ∈ R^{d×l}.
3: Find the matrix D ∈ D_l optimizing Problem (4).
4: Find (M̃, ω̃) solving { M̃ Ω̃^{1/2} = D†, M̃ ω̃ = M_1 }.
5: return (M̃, ω̃)
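A minimal numpy sketch of Algorithm 1 follows. Step 3 is delegated to a user-supplied solver (for l = 2 it can be the grid search sketched in Section 3.2), and step 4 is resolved under the single topic model convention that the columns of M̃ lie on the simplex; this normalization is our assumption for illustration, not a detail spelled out in the paper.

```python
import numpy as np

def sidiwo(M1, M2, M3, l, solve_step3):
    """SIDIWO (Algorithm 1). `solve_step3(E, M3)` returns the l x d optimizer D
    of Problem (4)/(5)."""
    U, S, _ = np.linalg.svd(M2)
    E = U[:, :l] * np.sqrt(S[:l])            # step 2: E_l = U_l S_l^{1/2}
    D = solve_step3(E, M3)                    # step 3: optimize over D_l
    C = np.linalg.pinv(D)                     # columns c_i = mu_i * omega_i^{1/2}
    col_sums = C.sum(axis=0)                  # = omega_i^{1/2} if mu_i sums to 1
    M_tilde = C / col_sums                    # normalize columns (our convention;
                                              # noisy columns may need sign fixes)
    omega_tilde = col_sums ** 2
    return M_tilde, omega_tilde
```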
Problem (4) opens a new perspective on using simultaneous diagonalization to learn the parameters of a latent variable model. In fact, one can recover the pair (M, ω) from the relation MΩ^{1/2} = D† by first finding the optimal D and then individually retrieving M and ω by solving a linear system using the vector M_1. This approach, outlined in Algorithm 1, is an alternative to the ones presented in the literature up to now (even though, in the noiseless realizable setting, it will provide the same results). Similarly to existing methods, this approach requires knowing the number of latent states. We will however see in the next section that Algorithm 1 provides meaningful results even when a misspecified number of latent states l < k is used.

3.2 The Misspecified Setting

Algorithm 1 requires as inputs the low order moments M_1, M_2, M_3, along with the desired number of latent states l to recover. If l = k, it will return the exact model parameters (M, ω); we will now see that it also provides meaningful results when l < k. In this setting, Algorithm 1 returns a pair (M̃, ω̃) ∈ R^{d×l} × R^l such that the matrix D = (M̃Ω̃^{1/2})† is optimal for the optimization problem

min_{D ∈ D_l} ( Σ_{i≠j} Σ_{r=1}^d (D M_{3,r} Dᵀ)²_{i,j} )^{1/2}.    (5)

Analyzing the space of feasible solutions (Theorem 3.1) and the optimization function (Theorem 3.2), we will obtain theoretical guarantees on what SIDIWO returns when l < k, showing that the trivial solutions are not feasible and that, in the space of feasible solutions, SIDIWO's optima will approximate the true model parameters according to an intuitive geometric interpretation.

Remarks on the constraints. The first step consists in analyzing the space of feasible solutions D_l when l < k. The observations outlined in Remark 1 still hold in this setting: the zero solution and the matrices lying in the orthogonal complement of M are not feasible. Furthermore, the following theorem shows that other undesirable solutions will be avoided.

Theorem 3.1 Let D ∈ D_l with rows d_1, …, d_l, and let I_{r,s} denote the r × s identity matrix. The following facts hold under the hypotheses of Lemma 3.1:
1. For any row d_i, there exists at least one column of M such that ⟨d_i, μ_j⟩ ≠ 0.
2. The columns of any M̃ satisfying M̃Ω̃^{1/2} = D† are a linear combination of those of M, lying in the best-fit l-dimensional subspace of the space spanned by the columns of M.
3. Let σ be any permutation of {1, …, k}, and let M_σ and Ω_σ be obtained by permuting the columns of M and Ω according to σ. If ⟨μ_i, μ_j⟩ ≠ 0 for any i, j, then ((M_σ Ω_σ^{1/2}) I_{k,l})† ∉ D_l, and similarly I_{l,k}(M_σ Ω_σ^{1/2})† ∉ D_l.
The second point of Theorem 3.1 states that the feasible solutions lie in the best l-dimensional subspace approximating the one spanned by the columns of M. This has two interesting consequences. If the columns of M are not orthogonal, point 3 guarantees that M̃ cannot simply be a sub-block of the original M, but is rather a non-trivial linear combination of its columns lying in the best l-dimensional subspace approximating its column space: in the single topic model case with k topics, when asked to recover l < k topics, Algorithm 1 will not return a subset of the original k topics, but a matrix M̃ whose columns gather the original topics via a non-trivial linear combination; the original topics will all be represented in the columns of M̃, with different weights. When the columns of M are orthogonal, this space coincides with the space of the l columns of M associated with the l largest ω_i; in this setting, the matrix (M_σ Ω_σ^{1/2}) I_{k,l} (for some permutation σ) is a feasible solution and minimizes Problem (5). Thus, Algorithm 1 will recover the top l topics.

Interpreting the optima. Let M̃ be such that D = (M̃Ω̃^{1/2})† ∈ D_l is a minimizer of Problem (5). In order to better understand the relation between M̃ and the original matrix M, we will show that the cost function of Problem (5) can be written in an equivalent form that unveils a geometric interpretation.

Theorem 3.2 Let d_1, …, d_l denote the rows of D ∈ D_l and introduce the following optimization problem:

min_{D ∈ D_l} Σ_{i≠j} sup_{v ∈ V_M} Σ_{h=1}^k ⟨d_i, μ_h⟩⟨d_j, μ_h⟩ ω_h v_h,    (6)

where V_M = {v ∈ R^k : v = Mᵀλ, with ‖λ‖_2 ≤ 1}. Then this problem is equivalent to (5).

First, observe that the cost function in Equation (6) prefers D's such that the vectors u_i = [⟨d_i, √ω_1 μ_1⟩, …, ⟨d_i, √ω_k μ_k⟩], i ∈ [l], have disjoint support. This is a consequence of the sup over v ∈ V_M, and requires that, for each j, the entries ⟨d_i, √ω_j μ_j⟩ are close to zero for at least all but one of the various d_i. Consequently, each center will be almost orthogonal to all but one row of the optimal D; however, the number of centers is greater than the number of rows of D, so the same row d_i may be non-orthogonal to various centers.

For illustration, consider the single topic model: a solution D to Problem (6) would have rows that should be as orthogonal as possible to some topics and as aligned as possible to the others. In other words, for a given topic j, the optimization problem is trying to set ⟨d_i, √ω_j μ_j⟩ = 0 for all but one of the various d_i. Consequently, each column of the output M̃ of Algorithm 1 should be in essence aligned with some of the topics and orthogonal to the others. It is worth mentioning that the constraint set D_l forbids trivial solutions such as the zero matrix, the pseudo-inverse of any subset of l columns of MΩ^{1/2}, and any subset of l rows of (MΩ^{1/2})† (which would all have an objective value of 0). We remark that Theorem 3.2 does not require the matrix M to be full rank k: we only need its rank to be greater than or equal to l, in order to guarantee that the constraint set D_l is well defined.

An optimal solution when l = 2. While Problem (4) can be solved in general using an extension of the Jacobi technique [22, 23], we provide a simple and efficient method for the case l = 2. This method will then be used to perform hierarchical topic modeling in Section 4. When l = 2, Equation (5) can be solved optimally in a few simple steps; in fact, the following theorem shows that solving (5) is equivalent to minimizing a continuous function on the compact one-dimensional set I = [−1, 1], which can easily be done by gridding I. Using this in Step 3 of Algorithm 1, one can efficiently compute an arbitrarily good approximation of the optimal matrix D ∈ D_2.

Theorem 3.3 Consider the continuous function F(x) = c_1 x⁴ + c_2 x³√(1−x²) + c_3 x√(1−x²) + c_4 x² + c_5, where the coefficients c_1, …, c_5 are functions of the entries of M_2 and M_3. Let a be the minimizer of F on [−1, 1], and consider the matrix

O_a = [ a         √(1−a²) ]
      [ √(1−a²)   −a      ].

Then, the matrix D = (E_2 O_a)† is a minimizer of Problem (5) when l = 2.
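In practice, rather than reproducing the coefficients c_1, …, c_5 (which are not spelled out here), one can grid the parameter a directly against the objective of Problem (5) over the one-parameter family O_a; the following sketch does exactly that, and by Theorem 3.3 the gridded minimizer approximates the optimal D ∈ D_2 arbitrarily well.

```python
import numpy as np

def best_rotation_l2(E2, M3, grid_size=1000):
    """Grid search over a in [-1, 1] for the l = 2 case: evaluates the
    off-diagonal objective of Problem (5) at D = (E_2 O_a)^+ and returns the
    best candidate."""
    d = M3.shape[0]
    best_D, best_val = None, np.inf
    for a in np.linspace(-1.0, 1.0, grid_size):
        s = np.sqrt(max(1.0 - a * a, 0.0))
        O = np.array([[a, s], [s, -a]])         # orthonormal 2 x 2 matrix O_a
        D = np.linalg.pinv(E2 @ O)              # candidate feasible solution
        off = 0.0
        for r in range(d):
            H = D @ M3[:, r, :] @ D.T
            off += H[0, 1] ** 2 + H[1, 0] ** 2  # squared off-diagonal entries
        val = np.sqrt(off)
        if val < best_val:
            best_val, best_D = val, D
    return best_D
```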
An optimal solution when $l = 2$. While Problem (4) can be solved in general using an extension of the Jacobi technique [22, 23], we provide a simple and efficient method for the case $l = 2$. This method will then be used to perform hierarchical topic modeling in Section 4. When $l = 2$, Equation (5) can be solved optimally with a few simple steps; in fact, the following theorem shows that solving (5) is equivalent to minimizing a continuous function on the compact one-dimensional set $I = [-1, 1]$, which can easily be done by gridding $I$. Using this in Step 3 of Algorithm 1, one can efficiently compute an arbitrarily good approximation of the optimal matrix $D \in \mathcal{D}_2$.

Theorem 3.3 Consider the continuous function $F(x) = c_1 x^4 + c_2 x^3 \sqrt{1 - x^2} + c_3 x \sqrt{1 - x^2} + c_4 x^2 + c_5$, where the coefficients $c_1, \dots, c_5$ are functions of the entries of $M_2$ and $M_3$. Let $a$ be the minimizer of $F$ on $[-1, 1]$, and consider the matrix

  $O_a = \begin{pmatrix} a & \sqrt{1 - a^2} \\ \sqrt{1 - a^2} & -a \end{pmatrix}.$

Then the matrix $D = (E_2 O_a)^\dagger$ is a minimizer of Problem (5) when $l = 2$.
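A direct implementation of Theorem 3.3 is a one-dimensional grid search. The sketch below assumes that the coefficients $c_1, \dots, c_5$ have already been derived from $M_2$ and $M_3$ (their closed form is not reproduced here) and that E2 is the $d \times 2$ matrix from step 2 of Algorithm 1.

import numpy as np

def sidiwo_l2(E2, c, n_grid=100001):
    # Grid the univariate objective F on [-1, 1]; c = (c1, c2, c3, c4, c5).
    x = np.linspace(-1.0, 1.0, n_grid)
    s = np.sqrt(np.clip(1.0 - x ** 2, 0.0, None))
    F = c[0] * x**4 + c[1] * x**3 * s + c[2] * x * s + c[3] * x**2 + c[4]
    a = x[np.argmin(F)]                      # approximate minimizer of F on [-1, 1]
    sa = np.sqrt(max(0.0, 1.0 - a * a))
    Oa = np.array([[a, sa], [sa, -a]])       # orthonormal by construction
    return np.linalg.pinv(E2 @ Oa)           # D = (E2 Oa)^+, a 2 x d matrix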
4 Case Study: Hierarchical Topic Modeling

In this section, we show how SIDIWO can be used to efficiently recover hierarchical representations of latent variable models. Given a latent variable model with $k$ states, our method allows us to recover a pair $(\tilde{M}, \tilde{\omega})$ from estimates of the moments $M_1$, $M_2$ and $M_3$, where the $l$ columns of $\tilde{M}$ offer a synthetic representation of the $k$ original centers. We refer to these $l$ vectors as pseudo-centers: each pseudo-center is representative of a group of the original centers.

Consider the case $l = 2$. A dataset $C$ of $n$ samples can be split into two smaller subsets according to their similarity to the two pseudo-centers. Formally, this assignment is done using Maximum A Posteriori (MAP) estimation to find the pseudo-center giving maximum conditional likelihood to each sample. The splitting procedure can be iterated recursively to obtain a divisive binary tree, leading to a hierarchical clustering algorithm. While this hierarchical clustering method can be applied to any latent variable model that can be learned with the tensor method of moments (e.g. Latent Dirichlet Allocation), we present it here for the single topic model for the sake of simplicity. We consider a corpus $C$ of $n$ texts encoded as in Section 2, and we split $C$ into two smaller corpora according to their similarity to the two pseudo-centers in two steps: project the pseudo-centers on the simplex to obtain discrete probability distributions (using for example the method described in [24]), and use MAP assignment to assign each text $x$ to a pseudo-center. This process is summarized in Algorithm 2. Once the corpus $C$ has been split into two subsets $C_1$ and $C_2$, each of these subsets may still contain the full set of topics, but the topic distributions will differ in the two: topics similar to the first pseudo-center will be predominant in the first subset, the others in the second. By recursively iterating this process, we obtain a binary tree where topic distributions in nodes of higher depth are expected to be more concentrated on fewer topics.

Algorithm 2 Splitting a corpus into two parts
Require: A corpus of texts $C = (x^{(1)}, \dots, x^{(n)})$.
1: Estimate $M_1$, $M_2$ and $M_3$.
2: Recover $l = 2$ pseudo-centers with Algorithm 1.
3: Project the pseudo-centers to the simplex.
4: for $i \in [n]$ do
5:   Assign the text $x^{(i)}$ to the cluster $\mathrm{Cluster}(i) = \arg\max_j P[X = x^{(i)} \mid Y = j, \tilde{\omega}, \tilde{M}]$, where $P[X \mid Y = j, \tilde{\omega}, \tilde{M}]$ is the multinomial distribution associated to the $j$-th projected pseudo-center.
6: end for
7: return The cluster assignments Cluster.

Figure 1: Figure 1a provides a visualization of the topics used to generate the sample. Figure 1b represents the hierarchy recovered with the proposed method. Table 1c reports the average and standard deviation over 10 runs of the clustering accuracy for the various methods, along with average running times.

In the next sections, we assess the validity of this approach on both synthetic and real-world data. (All experiments in this section were performed in Python 2.7, using the numpy [25] library for linear algebra operations, with the exception of the method from [21], for which we used the authors' Matlab implementation: https://github.com/kuleshov/tensor-factorization. All experiments were run on a MacBook Pro with an Intel Core i5 processor.)

4.1 Experiment on Synthetic Data

In order to test the ability of SIDIWO to recover latent structures in data, we generate a dataset distributed as a single topic model (with a vocabulary of 100 words) whose 8 topics have the intrinsic hierarchical structure depicted in Figure 1a. In this figure, topics are on the x-axis, words on the y-axis, and green (resp. red) points represent high (resp. low) probability. We see for example that the first 4 topics are concentrated over the first half of the vocabulary, and that topics 1 and 2 have high probability on the first and third quarters of the words, while for the other two it is on the first and fourth. We generate 400 samples according to this model and we iteratively run Algorithm 2 to create a hierarchical binary tree with 8 leaves. We expect leaves to contain samples from a unique topic and internal nodes to gather similar topics. Results are displayed in Figure 1b, where each chart represents a node of the tree (child nodes lie below their parent) and contains the heatmap of the samples clustered in that node (the x-axis corresponds to samples and the y-axis to words; red points are infrequent words and clear points frequent ones). The results are as expected: each leaf contains samples from one of the topics and internal nodes group similar topics together. We compare the clustering accuracy of SIDIWO with other methods using the Adjusted Rand Index [26] of the partition of the data obtained at the leaves with respect to the one obtained using the true topics; comparisons are with the flat clustering on $k = 8$ topics with TPM, the method from [3] (SVD) and the one from [21] (Rand. Proj.). We repeat the experiment 10 times with different random samples and report the average results in Table 1c; SIDIWO always recovers the original topics almost perfectly, unlike the competing methods. One intuition for this improvement is that each split in the divisive clustering helps remove noise in the moments.

Figure 2: Experiment on the NIPS dataset. Figure 3: Experiment on the Wikipedia Mathematics Pages dataset.

4.2 Experiment on NIPS Conference Papers 1987-2015

We consider the full set of NIPS papers accepted between 1987 and 2015, containing $n = 11{,}463$ papers [27]. We assume that the papers are distributed according to a single topic model, we keep the $d = 3000$ most frequent words as vocabulary and we iteratively run Algorithm 2 to create a binary tree of depth 4. The resulting tree is shown in Figure 2, where each node contains the most relevant words of the cluster, and where the relevance [28] of a word $w \in C_{\text{node}} \subseteq C$ is defined by

  $r(w, C_{\text{node}}) = \lambda \log P[w \mid C_{\text{node}}] + (1 - \lambda) \log \frac{P[w \mid C_{\text{node}}]}{P[w \mid C]},$

where the weight parameter is set to $\lambda = 0.7$ and $P[w \mid C_{\text{node}}]$ (resp. $P[w \mid C]$) is the empirical frequency of $w$ in $C_{\text{node}}$ (resp. in $C$).
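The relevance score above is easy to compute from the per-node and corpus-wide empirical word frequencies. A minimal numpy sketch follows, assuming both are given as length-d frequency vectors; the smoothing constant eps is an assumption added to avoid taking the logarithm of zero.

import numpy as np

def relevance(p_node, p_corpus, lam=0.7, eps=1e-12):
    # lam is the weight parameter lambda; returns one score per vocabulary word.
    pn, pc = np.asarray(p_node) + eps, np.asarray(p_corpus) + eps
    return lam * np.log(pn) + (1.0 - lam) * np.log(pn / pc)

def top_words(p_node, p_corpus, vocab, k=10):
    # The k most relevant words of a node, as used to label the trees in Figures 2 and 3.
    r = relevance(p_node, p_corpus)
    return [vocab[i] for i in np.argsort(r)[::-1][:k]]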
The leaves' clustering and the whole hierarchy have a neat interpretation. Looking at the leaves, we can easily hypothesize the dominant topics for the 8 clusters. From left to right we have: [image processing, probabilistic models], [neuroscience, neural networks], [kernel methods, algorithms], [online optimization, reinforcement learning]. Also, each node of the lower levels gathers meaningful keywords, confirming the ability of the method to hierarchically find meaningful topics. The running time for this experiment was 59 seconds.

4.3 Experiment on Wikipedia Mathematics Pages

We consider a subset of the full Wikipedia corpus, containing all articles (n = 809 texts) from the following math-related categories: linear algebra, ring theory, stochastic processes and optimization. We remove a set of 895 stop-words, keep a vocabulary of d = 3000 words and run SIDIWO to perform hierarchical topic modeling (using the same methodology as in the previous section). The resulting hierarchical clustering is shown in Figure 3, where we see that each leaf is characterized by one of the dominant topics: [ring theory, linear algebra], [stochastic processes, optimization] (from left to right). It is interesting to observe that the first level of the clustering has separated pure mathematical topics from applied ones. The running time for this experiment was 6 seconds.

5 Conclusions and future works

We proposed a novel spectral algorithm (SIDIWO) that generalizes recent method of moments algorithms relying on tensor decomposition. While previous algorithms lack robustness to model misspecification, SIDIWO provides meaningful results even in misspecified settings. Moreover, SIDIWO can be used to perform hierarchical method of moments estimation for latent variable models. In particular, we showed through hierarchical topic modeling experiments on synthetic and real data that SIDIWO provides meaningful results while being very computationally efficient. A natural future work is to investigate the capability of the proposed hierarchical method to learn overcomplete latent variable models, a task that has received significant attention in the recent literature [29, 30]. We are also interested in comparing the learning performance of SIDIWO with that of other existing methods of moments in the realizable setting. On the applications side, we are interested in applying the methods developed in this paper to the healthcare analytics field, for instance to perform hierarchical clustering of patients using electronic healthcare records or more complex genetic data.

Acknowledgments

Guillaume Rabusseau acknowledges support of an IVADO postdoctoral fellowship. Borja Balle completed this work while at Lancaster University.

References
[1] Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), pages 1-38, 1977.
[2] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):2773-2832, 2014.
[3] Animashree Anandkumar, Daniel Hsu, and Sham M. Kakade. A method of moments for mixture models and hidden Markov models. In COLT, volume 1, page 4, 2012.
[4] Animashree Anandkumar, Yi-Kai Liu, Daniel J. Hsu, Dean P. Foster, and Sham M. Kakade. A spectral algorithm for Latent Dirichlet Allocation.
In NIPS, pages 917-925, 2012.
[5] Prateek Jain and Sewoong Oh. Learning mixtures of discrete product distributions using spectral decompositions. In COLT, pages 824-856, 2014.
[6] Daniel Hsu and Sham M. Kakade. Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. In ITCS, pages 11-20. ACM, 2013.
[7] Le Song, Eric P. Xing, and Ankur P. Parikh. A spectral algorithm for latent tree graphical models. In ICML, pages 1065-1072, 2011.
[8] Borja Balle, William L. Hamilton, and Joelle Pineau. Methods of moments for learning stochastic languages: Unified presentation and empirical comparison. In ICML, pages 1386-1394, 2014.
[9] Arun T. Chaganty and Percy Liang. Spectral experts for estimating mixtures of linear regressions. In ICML, pages 1040-1048, 2013.
[10] Raphael Bailly. Quadratic weighted automata: Spectral algorithm and likelihood maximization. Journal of Machine Learning Research, 20:147-162, 2011.
[11] Yuchen Zhang, Xi Chen, Denny Zhou, and Michael I. Jordan. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. In NIPS, pages 1260-1268, 2014.
[12] Michael Steinbach, George Karypis, Vipin Kumar, et al. A comparison of document clustering techniques. In KDD workshop on text mining, volume 400, pages 525-526. Boston, 2000.
[13] Sergio M. Savaresi and Daniel L. Boley. On the performance of bisecting K-means and PDDP. In SDM, pages 1-14. SIAM, 2001.
[14] Borja Balle, Ariadna Quattoni, and Xavier Carreras. Local loss optimization in operator models: a new insight into spectral learning. In ICML, pages 1819-1826, 2012.
[15] Borja Balle and Mehryar Mohri. Spectral learning of general weighted automata via constrained matrix completion. In NIPS, pages 2159-2167, 2012.
[16] Ariadna Quattoni, Borja Balle, Xavier Carreras, and Amir Globerson. Spectral regularization for max-margin sequence tagging. In ICML, pages 1710-1718, 2014.
[17] Alex Kulesza, N. Raj Rao, and Satinder Singh. Low-rank spectral learning. In Artificial Intelligence and Statistics, pages 522-530, 2014.
[18] Alex Kulesza, Nan Jiang, and Satinder Singh. Low-rank spectral learning with weighted loss functions. In Artificial Intelligence and Statistics, pages 517-525, 2015.
[19] Matteo Ruffini, Marta Casanellas, and Ricard Gavaldà. A new spectral method for latent variable models. arXiv preprint arXiv:1612.03409, 2016.
[20] Daniel Hsu, Sham M. Kakade, and Tong Zhang. A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences, 78(5):1460-1480, 2012.
[21] Volodymyr Kuleshov, Arun Chaganty, and Percy Liang. Tensor factorization via matrix factorization. In AISTATS, pages 507-516, 2015.
[22] Jean-Francois Cardoso and Antoine Souloumiac. Jacobi angles for simultaneous diagonalization. SIAM Journal on Matrix Analysis and Applications, 17(1):161-164, 1996.
[23] Angelika Bunse-Gerstner, Ralph Byers, and Volker Mehrmann. Numerical methods for simultaneous diagonalization. SIAM Journal on Matrix Analysis and Applications, 14(4):927-949, 1993.
[24] John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In ICML, pages 272-279, 2008.
[25] Stefan Van Der Walt, S. Chris Colbert, and Gael Varoquaux. The numpy array: a structure for efficient numerical computation. Computing in Science & Engineering, 13(2):22-30, 2011.
[26] Lawrence Hubert and Phipps Arabie. Comparing partitions. Journal of Classification, 2(1):193-218, 1985.
[27] Valerio Perrone, Paul A. Jenkins, Dario Spano, and Yee Whye Teh. Poisson random fields for dynamic feature models. arXiv preprint arXiv:1611.07460, 2016.
[28] Carson Sievert and Kenneth E. Shirley. LDAvis: A method for visualizing and interpreting topics. In ACL workshop on interactive language learning, visualization, and interfaces, 2014.
[29] Animashree Anandkumar, Rong Ge, and Majid Janzamin. Learning overcomplete latent variable models through tensor methods. In COLT, pages 36-112, 2015.
[30] Animashree Anandkumar, Rong Ge, and Majid Janzamin. Analyzing tensor power method dynamics in overcomplete regime. Journal of Machine Learning Research, 18(22):1-40, 2017.
Interpretable and Globally Optimal Prediction for Textual Grounding using Image Concepts

Raymond A. Yeh, Jinjun Xiong*, Minh N. Do, Wen-mei W. Hwu, Alexander G. Schwing
Department of Electrical Engineering, University of Illinois at Urbana-Champaign
* IBM Thomas J. Watson Research Center
[email protected], [email protected], [email protected], [email protected], [email protected]

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Textual grounding is an important but challenging task for human-computer interaction, robotics and knowledge mining. Existing algorithms generally formulate the task as selection from a set of bounding box proposals obtained from deep net based systems. In this work, we demonstrate that we can cast the problem of textual grounding into a unified framework that permits efficient search over all possible bounding boxes. Hence, the method is able to consider significantly more proposals and does not rely on a successful first stage hypothesizing bounding box proposals. Beyond that, we demonstrate that the trained parameters of our model can be used as word embeddings which capture spatial-image relationships and provide interpretability. Lastly, at the time of submission, our approach outperformed the current state-of-the-art methods on the Flickr 30k Entities and the ReferItGame dataset by 3.08% and 7.77% respectively.

1 Introduction

Grounding of textual phrases, i.e., finding bounding boxes in images which relate to textual phrases, is an important problem for human-computer interaction, robotics and mining of knowledge bases, three applications that are of increasing importance when considering autonomous systems and augmented and virtual reality environments. For example, we may want to guide an autonomous system by using phrases such as "the bottle on your left" or "the plate on the top shelf." While those phrases are easy to interpret for a human, they pose significant challenges for present-day textual grounding algorithms, as interpretation of those phrases requires an understanding of objects and their relations.

Existing approaches for textual grounding, such as [38, 15], take advantage of the cognitive performance improvements obtained from deep net features. More specifically, deep net models are designed to extract features from given bounding boxes and textual data, which are then compared to measure their fitness. To obtain suitable bounding boxes, many textual grounding frameworks, such as [38, 15], make use of region proposals. While being easy to obtain, automatic extraction of region proposals is limiting, because the performance of the visual grounding is inherently constrained by the quality of the proposal generation procedure.

In this work we describe an interpretable mechanism which additionally alleviates any issues arising due to a limited number of region proposals. Our approach is based on a number of "image concepts" such as semantic segmentations, detections and priors for any number of objects of interest. Based on those "image concepts," which are represented as score maps, we formulate textual grounding as a search over all possible bounding boxes. We find the bounding box with the highest accumulated score contained in its interior. The search for this box can be solved via an efficient branch and bound scheme akin to the seminal efficient subwindow search of Lampert et al. [25]. The learned weights can additionally be used as word embeddings. We are not aware of any method that solves textual grounding in a manner similar to our approach and hope to inspire future research into the direction of deep nets combined with powerful inference algorithms.

We evaluate our proposed approach on the challenging ReferItGame [20] and the Flickr 30k Entities dataset [35], obtaining results like the ones visualized in Fig. 1. At the time of submission, our approach outperformed state-of-the-art techniques on the ReferItGame and Flickr 30k Entities dataset by 7.77% and 3.08% respectively using the IoU metric. We also demonstrate that the trained parameters of our model can be used as a word embedding which captures spatial-image relationships and provides interpretability.

Figure 1: Results on the test set for grounding of textual phrases using our branch and bound based algorithm. Top row: Flickr 30k Entities dataset, with queries such as "A woman in a green shirt is getting ready to throw her bowling ball down the lane...," "Two women wearing hats covered in flowers are posing," and "Young man wearing a hooded jacket sitting on snow in front of mountain area." Bottom row: ReferItGame dataset (ground-truth box in green, predicted box in red), with queries such as "painting next to the two on the right," "person all the way to the left," and "second bike from right in front."
2 Related Work

Textual grounding: Related to textual grounding is work on image retrieval. Classical approaches learn a ranking function using recurrent neural nets [30, 6], metric learning [13], correlation analysis [22], or neural net embeddings [9, 21]. Beyond work in image retrieval, a variety of techniques have been considered to explicitly ground natural language in images and video. One of the first models in this area was presented in [31, 24]. The authors describe an approach that jointly learns visual classifiers and semantic parsers. Gong et al. [10] propose a canonical correlation analysis technique to associate images with descriptive sentences using a latent embedding space. Similar in spirit is work by Wang et al. [43], which learns a structure-preserving embedding for image-sentence retrieval; it can be applied to phrase localization using a ranking framework. In [11], text is generated for a set of candidate object regions and subsequently compared to a query. The reverse operation, i.e., generating visual features from query text which is subsequently matched to image regions, is discussed in [1]. In [23], 3D cuboids are aligned to a set of 21 nouns relevant to indoor scenes using a Markov random field based technique. A method for grounding of scene graph queries in images is presented in [17]. Grounding of dependency tree relations is discussed in [19] and reformulated using recurrent nets in [18]. Subject-Verb-Object phrases are considered in [40] to develop a visual knowledge extraction system; their algorithm reasons about the spatial consistency of the configurations of the involved entities. In [15, 29], caption generation techniques are used to score a set of proposal boxes, returning the highest ranking one. To avoid applying a text generation pipeline to bounding box proposals, [38] improve the phrase encoding using a long short-term memory (LSTM) [12] based deep net. Additional modeling of object context relationships was explored in [32, 14]. Video datasets, although not directly related to our work in this paper, were used for spatiotemporal language grounding in [27, 47]. Common datasets for visual grounding are the ReferItGame dataset [20] and the newly introduced Flickr 30k Entities dataset [34], which provides bounding box annotations for noun phrases of the original Flickr 30k dataset [46]. In contrast to all of the aforementioned methods, which are largely based on region proposals, we suggest usage of efficient subwindow search as a suitable inference engine.

Efficient subwindow search: Efficient subwindow search was proposed by Lampert et al. [25] for object localization. It is based on an extremely effective branch and bound scheme that can be applied to a large class of energy functions. The approach has been applied to very efficient deformable part models [45], object class detection [26], weakly supervised localization [5], indoor scene understanding [41], diverse object proposals [42], and also spatio-temporal object detection proposals [33].
Figure 2: Overview of our proposed approach, illustrated for a query such as "the left guy": we obtain word priors from the input query and take into account geometric features as well as semantic segmentation and detection features computed from the provided input image. We combine the three image cues to predict the four variables of the bounding box $y = (y_1, \dots, y_4)$.

3 Exact Inference for Grounding

We outline our approach for textual grounding in Fig. 2. In contrast to the aforementioned techniques for textual grounding, which typically use a small set of bounding box proposals, we formulate our language grounding approach as an energy minimization over a large number of bounding boxes. The search over a large number of bounding boxes allows us to retrieve an accurate bounding-box prediction for a given phrase and an image. Importantly, by leveraging efficient branch-and-bound techniques, we are able to find the global minimizer of a given energy function very effectively. Our energy is based on a set of "image concepts" like semantic segmentations, detections or image priors. All those concepts come in the form of score maps, which we combine linearly before searching for the bounding box containing the highest accumulated score over the combined score map. It is trivial to add additional information to our approach by adding additional score maps. Moreover, the linear combination of score maps reveals the importance of individual score maps for specific queries, as well as similarities between queries such as "skier" and "snowboarder." Hence the framework that we discuss in the following is easy to interpret and to extend to other settings.

General problem formulation: For simplicity we use $x$ to refer to both given input data modalities, i.e., $x = (Q, I)$, with query text $Q$ and image $I$; we differentiate them in the narrative. In addition, we define a bounding box $y$ via its top left corner $(y_1, y_2)$ and its bottom right corner $(y_3, y_4)$, and subsume the four variables of interest in the tuple $y = (y_1, \dots, y_4) \in \mathcal{Y} = \prod_{i=1}^{4} \{0, \dots, y_{i,\max}\}$. Every integral coordinate $y_i$, $i \in \{1, \dots, 4\}$, lies within the set $\{0, \dots, y_{i,\max}\}$, and $\mathcal{Y}$ denotes the product space of all four coordinates. For notational simplicity only, we assume all images to be scaled to identical dimensions, i.e., $y_{i,\max}$ is not dependent on the input data $x$.

Algorithm 1 Branch and bound inference for grounding
1: put pair $(\bar{E}(x, \mathcal{Y}, w), \mathcal{Y})$ into queue, set $\hat{\mathcal{Y}} = \mathcal{Y}$
2: repeat
3:   split $\hat{\mathcal{Y}} = \hat{\mathcal{Y}}_1 \cup \hat{\mathcal{Y}}_2$ with $\hat{\mathcal{Y}}_1 \cap \hat{\mathcal{Y}}_2 = \emptyset$
4:   put pair $(\bar{E}(x, \hat{\mathcal{Y}}_1, w), \hat{\mathcal{Y}}_1)$ into queue
5:   put pair $(\bar{E}(x, \hat{\mathcal{Y}}_2, w), \hat{\mathcal{Y}}_2)$ into queue
6:   retrieve the $\hat{\mathcal{Y}}$ having smallest $\bar{E}$
7: until $|\hat{\mathcal{Y}}| = 1$

Figure 3: Word priors for "left," "center," "right," and "floor" in (a), and the employed inference algorithm in (b).
We obtain a bounding box prediction $\hat{y}$ given our data $x$ by solving the energy minimization

  $\hat{y} = \arg\min_{y \in \mathcal{Y}} E(x, y, w)$   (1)

to global optimality. Note that $w$ refers to the parameters of our model. Despite the fact that we are "only" interested in a single bounding box, the product space $\mathcal{Y}$ is generally too large for exhaustive minimization of the energy specified in Eq. (1). Therefore, we pursue a branch-and-bound technique in the following.

To apply branch and bound, we assume that the energy function $E(x, y, w)$ depends on two sets of parameters $w = [w_t^\top, w_r^\top]^\top$, i.e., the top layer parameters $w_t$ of a neural net, and the remaining parameters $w_r$. In light of this decomposition, our approach requires the energy function to be of the following form: $E(x, y, w) = w_t^\top \phi(x, y, w_r)$. Note that the features $\phi(x, y, w_r)$ may still depend non-linearly on all but the top-layer parameters. This assumption does not pose a severe restriction, since almost all present-day deep net models typically obtain the logits $E(x, y, w)$ using a fully-connected layer or a convolutional layer with kernel size $1 \times 1$ as the last computation.

Energy Function Details: Our energy function $E(x, y, w)$ is based on a set of "image concepts," such as semantic segmentations of object categories, detections, or word priors, all of which we subsume in the set $\mathcal{C}$. Importantly, each image concept $c \in \mathcal{C}$ is attached a parametric score map $\bar{\phi}_c(x, w_r) \in \mathbb{R}^{W \times H}$ following the image width $W$ and height $H$. Note that those parametric score maps may depend non-linearly on some parameters $w_r$. Given a bounding box $y$, we use the scalar $\phi_c(x, y, w_r) \in \mathbb{R}$ to refer to the score accumulated within the bounding box $y$ of score map $\bar{\phi}_c(x, w_r)$.

To define the energy function we also introduce a set of words of interest, $\mathcal{S}$. Note that this set contains a special symbol denoting all other words not of interest for the considered task. We use the given query $Q$, which is part of the data $x$, to construct indicators $\alpha_s = \delta(s \in Q) \in \{0, 1\}$, denoting for every token $s \in \mathcal{S}$ its existence in the query $Q$, where $\delta$ denotes the indicator function. Based on this definition, we formulate the energy function as follows:

  $E(x, y, w) = \sum_{s \in \mathcal{S}: \alpha_s = 1}\; \sum_{c \in \mathcal{C}} w_{s,c}\, \phi_c(x, y, w_r),$   (2)

where $w_{s,c}$ is a parameter connecting a word $s \in \mathcal{S}$ to an image concept $c \in \mathcal{C}$. In other words, $w_t = (w_{s,c} : s \in \mathcal{S}, c \in \mathcal{C})$. This energy function results in a sparse $w_t$, which increases the speed of inference.

Score maps: The energy is given by a linear combination of accumulated score maps $\phi_c(x, y, w_r)$. In our case, we use $|\mathcal{C}| = k_1 + k_2 + k_3$ such maps, which capture three kinds of information: (i) $k_1$ word priors; (ii) $k_2$ geometric information cues; and (iii) $k_3$ image based segmentations and detections. We discuss each of those maps in the following.
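Since Eq. (2) is linear in the accumulated per-concept scores, evaluating the energy of a single box reduces to a weighted sum of box lookups. A minimal sketch follows, assuming w is a dictionary mapping (word, concept) pairs to the weights $w_{s,c}$ and box_sum returns the accumulated score of a map inside a box (e.g., via an integral image); both conventions are illustrative assumptions.

def energy(query_words, y, w, score_maps, box_sum):
    # query_words: the tokens s with indicator alpha_s = 1 for this query.
    # score_maps: dictionary mapping each concept c to its 2d score map.
    E = 0.0
    for s in query_words:
        for c, smap in score_maps.items():
            w_sc = w.get((s, c), 0.0)
            if w_sc != 0.0:                  # w_t is sparse, so most terms vanish
                E += w_sc * box_sum(smap, y)
    return E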
Table 1: Phrase localization performance on Flickr 30k Entities.
  Approach                          Accuracy (%)
  SCRC (2016) [15]                  27.80
  DSPE (2016) [44]                  43.89
  GroundeR (2016) [39]              47.81
  CCA (2017) [36]                   50.89
  Ours (Prior + Geo + Seg + Det)    51.63
  Ours (Prior + Geo + Seg + bDet)   53.97

Table 2: Phrase localization performance on ReferItGame.
  Approach                          Accuracy (%)
  SCRC (2016) [15]                  17.93
  GroundeR (2016) [39]              23.44
  GroundeR (2016) [39] + SPAT       26.93
  Ours (Prior + Geo)                25.56
  Ours (Prior + Geo + Seg)          33.36
  Ours (Prior + Geo + Seg + Det)    34.70

Table 3: Phrase localization performance over types on Flickr 30k Entities (accuracy in %).
  Type                  people  clothing  body parts  animals  vehicles  instruments  scene  other
  # Instances           5,656   2,306     523         518      400       162          1,619  3,374
  GroundeR (2016) [39]  61.00   38.12     10.33       62.55    68.75     36.42        58.18  29.08
  CCA (2017) [36]       64.73   46.88     17.21       65.83    68.75     37.65        51.39  31.77
  Ours                  68.71   46.83     19.50       70.07    73.75     39.50        60.38  32.45

For the top $k_1$ words in the training set we construct word prior maps like the ones shown in Fig. 3 (a). To obtain the prior for a particular word, we search a given training set for each occurrence of the word. With the corresponding subset of image-text pairs and the respective bounding box annotations at hand, we compute the average number of times a pixel is covered by a bounding box. To facilitate this operation, we scale each image to a predetermined size. Investigating the obtained word priors given in Fig. 3 (a) more carefully, it is immediately apparent that they provide accurate location information for many of the words.

The $k_2 = 2$ geometric cues provide the aspect ratio and the area of the hypothesized bounding box $y$. Note that the word priors and geometry features contain no information about the image specifics. To encode measurements dedicated to the image at hand, we take advantage of semantic segmentation and object detection techniques. The $k_3$ image based features are computed using deep neural nets as proposed by [4, 37, 2]. We obtain probability maps for a set of class categories, i.e., a subset of the nouns of interest. The feature $\phi$ accumulates the scores within the hypothesized bounding box $y$.

Inference: The algorithm to find the bounding box $\hat{y}$ with lowest energy as specified in Eq. (1) is based on an iterative decomposition of the output space $\mathcal{Y}$ [25], summarized in Fig. 3 (b). To this end we search across subsets of the product space $\mathcal{Y}$ and define for every coordinate $y_i$, $i \in \{1, \dots, 4\}$, a corresponding lower and upper bound, $y_{i,\text{low}}$ and $y_{i,\text{high}}$ respectively. More specifically, considering the initial set of all possible bounding boxes $\mathcal{Y}$, we divide it into two disjoint subsets $\hat{\mathcal{Y}}_1$ and $\hat{\mathcal{Y}}_2$, for example by constraining $y_1$ to $\{0, \dots, y_{1,\max}/2\}$ and $\{y_{1,\max}/2 + 1, \dots, y_{1,\max}\}$ for $\hat{\mathcal{Y}}_1$ and $\hat{\mathcal{Y}}_2$ respectively, while keeping all the other intervals unchanged. It is easy to see that we can repeat this decomposition by choosing the largest among the four intervals and recursively dividing it into two parts. Given such a repetitive decomposition strategy for the output space, and since the energy $E(x, y, w)$ for a bounding box $y$ is obtained using a linear combination of word priors and accumulated segmentation masks, we can design an efficient branch and bound based search algorithm to exactly solve the inference problem specified in Eq. (1). The algorithm proceeds by iteratively decomposing a product space $\hat{\mathcal{Y}}$ into two subspaces $\hat{\mathcal{Y}}_1$ and $\hat{\mathcal{Y}}_2$. For each subspace, the algorithm computes a lower bound $\bar{E}(x, \hat{\mathcal{Y}}_j, w)$ for the energy of all possible bounding boxes within the respective subspace. Intuitively, we then know that any bounding box within the subspace $\hat{\mathcal{Y}}_j$ has an energy no smaller than the lower bound. The algorithm proceeds by choosing the subspace with lowest lower bound until this subspace consists of a single element, i.e., until $|\hat{\mathcal{Y}}| = 1$. We summarize this algorithm in Alg. 1 (Fig. 3 (b)).
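The best-first search of Alg. 1 is compact to implement with a priority queue. Below is a minimal sketch, assuming integer box coordinates: an interval is a tuple of four inclusive (low, high) ranges, and lower_bound stands for any function satisfying the two conditions discussed next (it must underestimate the energy of every box in an interval and be exact on singletons).

import heapq

def branch_and_bound(full_space, lower_bound):
    # full_space = ((0, y1_max), (0, y2_max), (0, y3_max), (0, y4_max)).
    heap = [(lower_bound(full_space), full_space)]
    while True:
        bound, iv = heapq.heappop(heap)          # interval with smallest lower bound
        widths = [hi - lo for lo, hi in iv]
        i = max(range(4), key=widths.__getitem__)
        if widths[i] == 0:                       # a single box is left: it is optimal
            return tuple(lo for lo, _ in iv), bound
        lo, hi = iv[i]
        mid = (lo + hi) // 2
        for half in ((lo, mid), (mid + 1, hi)):  # split the widest coordinate interval
            child = iv[:i] + (half,) + iv[i + 1:]
            heapq.heappush(heap, (lower_bound(child), child))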
To this end, it remains to show how to compute a lower bound $\bar{E}(x, \hat{\mathcal{Y}}_j, w)$ on the energy for an output space, and to illustrate the conditions which guarantee convergence to the global minimum of the energy function. For the latter, we note that two conditions are required to ensure convergence to the optimum: (i) the bound of the considered product space has to lower-bound the true energy for each of its bounding box hypotheses $\hat{y} \in \hat{\mathcal{Y}}$, i.e., $\bar{E}(x, \hat{\mathcal{Y}}, w) \le E(x, \hat{y}, w)$ for all $\hat{y} \in \hat{\mathcal{Y}}$; (ii) the bound has to be exact for all possible bounding boxes $y \in \mathcal{Y}$, i.e., $\bar{E}(x, y, w) = E(x, y, w)$. Given those two conditions, global convergence of the algorithm summarized in Alg. 1 is apparent: upon termination we obtain an "interval" containing a single bounding box, and its energy is at least as low as the one for any other interval.

Figure 4: Results on the test set for grounding of textual phrases using our branch and bound based algorithm. Top row: Flickr 30k Entities dataset, with queries such as "The lady in the red car is crossing the bridge," "A dog and a cow play together inside the fence," and "A woman wearing the black sunglasses and blue jean jacket is smiling." Bottom row: ReferItGame dataset (ground-truth box in green, predicted box in red), with queries such as "person on the left," "black bottle front," and "floor on the bottom."

For the former, we note that bounds on score maps for bounding box intervals can be computed by considering either the largest or the smallest possible bounding box in the bounding box hypothesis $\hat{\mathcal{Y}}$, depending on whether the corresponding weight in $w_t$ is positive or negative and whether the feature maps contain only positive or only negative values. Intuitively, if the weight is positive and the feature mask contains only positive values, we obtain the smallest lower bound $\bar{E}(x, \hat{\mathcal{Y}}, w)$ by considering the content within the smallest possible bounding box. Note that the score maps do not necessarily contain only positive or only negative numbers. However, we can split any given score map into two separate score maps (i.e., one with only positive values, and another with only negative values) while applying the same weight.

It is important to note that the computation of the bound $\bar{E}(x, \hat{\mathcal{Y}}, w)$ has to be extremely efficient for the algorithm to run at a reasonable speed. However, computing the feature mask content for a bounding box is trivially possible using integral images. This results in a constant time evaluation of the bound, which is a necessity for the success of the branch and bound procedure.
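A sketch of such a constant-time bound for a single score map follows. The map is split once into its positive and negative parts, each is turned into an integral image, and the bound then only touches the smallest box contained in all hypotheses and the largest box containing all of them; the full bound sums this quantity over all maps with non-zero weight. The coordinate convention (inclusive rows y1..y3 and columns y2..y4) is an assumption for illustration.

import numpy as np

def integral_image(m):
    # Cumulative sums so that any rectangle sum costs O(1).
    return m.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y):
    y1, y2, y3, y4 = y
    if y3 < y1 or y4 < y2:
        return 0.0                            # empty box
    s = ii[y3, y4]
    if y1 > 0: s -= ii[y1 - 1, y4]
    if y2 > 0: s -= ii[y3, y2 - 1]
    if y1 > 0 and y2 > 0: s += ii[y1 - 1, y2 - 1]
    return s

def map_lower_bound(w_c, ii_pos, ii_neg, iv):
    # ii_pos / ii_neg: integral images of max(map, 0) and min(map, 0).
    (l1, h1), (l2, h2), (l3, h3), (l4, h4) = iv
    small = (h1, h2, l3, l4)                  # contained in every box of the interval
    big = (l1, l2, h3, h4)                    # contains every box of the interval
    if w_c >= 0:
        return w_c * (box_sum(ii_pos, small) + box_sum(ii_neg, big))
    return w_c * (box_sum(ii_pos, big) + box_sum(ii_neg, small))

For a singleton interval the small and the large box coincide, so the bound is exact, satisfying condition (ii) above.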
Learning the Parameters: With the branch and bound based inference procedure at hand, we now describe how to formulate the learning task; support-vector machine intuition can be applied. Formally, we are given a training set $D = \{(x, y)\}$ containing pairs of input data $x$ and ground-truth bounding boxes $y$. We want to find the parameters $w$ of the energy function $E(x, y, w)$ such that the energy of the ground truth is smaller than the energy of any other configuration. Negating this statement results in the following desiderata when including an additional margin term $L(\hat{y}, y)$, also known as the task loss, which measures the loss between the ground truth $y$ and another configuration $\hat{y}$:

  $-E(x, y, w) \ge -E(x, \hat{y}, w) + L(\hat{y}, y) \quad \forall \hat{y} \in \mathcal{Y}.$

Since we want to enforce this inequality for all configurations $\hat{y} \in \mathcal{Y}$, we can reduce the number of constraints by enforcing it for the highest scoring right hand side. We then design a cost function which penalizes violation of this requirement linearly, and obtain the following structured support vector machine based surrogate loss minimization:

  $\min_w\ \frac{C}{2}\|w\|_2^2 + \sum_{(x,y) \in D} \Big[ \max_{\hat{y} \in \mathcal{Y}} \big( -E(x, \hat{y}, w) + L(\hat{y}, y) \big) + E(x, y, w) \Big]$   (3)

where $C$ is a hyperparameter adjusting the squared norm regularization relative to the data term. For the task loss $L(\hat{y}, y)$ we use intersection over union (IoU).

Figure 5: Flickr 30k failure cases for the queries "her shoes," "a red shirt," and "a dirt bike" (green box: ground truth, red box: predicted).

By fixing the parameters $w_r$ and only learning the top layer parameters $w_t$, Eq. (3) is equivalent to the problem of training a structured SVM. We found the cutting-plane algorithm [16] to work well in our context. The cutting-plane algorithm involves solving the maximization task in Eq. (3). This maximization over the output space $\mathcal{Y}$ is commonly referred to as loss-augmented inference, and it is structurally similar to the inference task given in Eq. (1). Since maximization is identical to negated minimization, the computation of the bounds for the energy $E(x, \hat{y}, w)$ remains identical. To bound the IoU loss, we note that a quotient can be bounded by bounding the numerator and denominator independently: to lower bound the intersection of the ground-truth box with the hypothesis space we use the smallest hypothesized bounding box, and to upper bound the union of the ground-truth box with the hypothesis space we use the largest bounding box. Further, even though not employed to obtain the results in this paper, we mention that it is possible to backpropagate through the neural net parameters $w_r$ that influence the energy non-linearly. This underlines that our initial assumption is merely a construct to design an effective inference procedure.
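Loss-augmented inference thus additionally needs interval bounds on the IoU term. Following the quotient argument above, the intersection and the union can be bounded separately using the smallest and largest boxes of an interval. A sketch follows; since the text does not fully spell out in which direction the bounds are applied during the maximization, the snippet simply returns both a lower and an upper bound.

def area(y):
    y1, y2, y3, y4 = y
    return max(0, y3 - y1 + 1) * max(0, y4 - y2 + 1)

def intersection(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

def iou(a, b):
    inter = area(intersection(a, b))
    return inter / float(area(a) + area(b) - inter)

def iou_bounds(gt, iv):
    # gt: ground-truth box; iv: box interval as in the branch and bound sketch.
    (l1, h1), (l2, h2), (l3, h3), (l4, h4) = iv
    small, big = (h1, h2, l3, l4), (l1, l2, h3, h4)
    inter_lo = area(intersection(gt, small))      # area() returns 0 for empty boxes
    inter_hi = area(intersection(gt, big))
    union_hi = area(gt) + area(big) - inter_lo    # largest box, least overlap
    union_lo = max(area(gt), area(gt) + area(small) - inter_hi)
    return inter_lo / float(union_hi), min(1.0, inter_hi / float(union_lo))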
4 Experimental Evaluation

In the following we first provide additional details of our implementation before discussing the results of our approach.

Language processing: In order to process free-form textual phrases efficiently, we restrict the vocabulary to the top 200 most frequent training set words for the ReferItGame, and to the top 1000 most frequent training set words for Flickr 30k Entities; both choices cover about 90% of all phrases in the respective training sets. We map all remaining words to an additional token. We do not differentiate between uppercase and lowercase characters, and we also ignore punctuation.

Segmentation and detection maps: We employ semantic segmentation, object detection, and pose estimation. For segmentation, we use the DeepLab system [4], trained on the PASCAL VOC-2012 [8] semantic image segmentation task, to extract probability maps for 21 categories. For detection, we use the YOLO object detection system [37] to extract 101 categories, 21 trained on PASCAL VOC-2012 and 80 trained on MSCOCO [28]. For pose estimation, we use the system from [2] to extract the body part locations, then post-process to obtain the head, upper body, lower body, and hand regions. For the ReferItGame, we further fine-tuned the last layer of the DeepLab system to include the categories "sky," "ground," "building," "water," "tree," and "grass." For the Flickr 30k Entities, we also fine-tuned the last layer of the DeepLab system using the eight coarse-grained types and eleven colors from [36].

Preprocessing and post-processing: For the word prior feature maps and the semantic segmentation maps, we take an element-wise logarithm to convert the normalized feature counts into log-probabilities. The summation over a bounding box region then retains the notion of a joint log-probability. We also center the feature maps to be zero-mean, which corresponds to choosing an initial decision threshold. The feature maps are resized to a dimension of 64 x 64 for efficient computation, and the predicted box is scaled back to the original image dimension during evaluation. We re-center the prediction box by a constant amount determined using the validation set, as resizing truncates box coordinates to integers.

Efficient sub-window search implementation: In order for the efficient subwindow search to run at a reasonable speed, the lower bound on E needs to be computed as fast as possible. Observe that E(x, y, w) is a weighted sum of the feature maps over the region specified by a hypothesized bounding box. To make this computation efficient, we pre-compute integral images. Given an integral image, the computation for each bounding box is simply a look-up operation. This trick can similarly be applied to the geometric features: since we know the range of the ratios and areas of the bounding boxes ahead of time, we cache the results in a look-up table as well.

Figure 6: (a) Visualization of the trained weights $w_{s,c}$ for a subset of words $s$ (bicycle, bike, camera, cellphone, coffee, cup, drink, man, skier, snowboarder, woman) and segmentation concepts $c$ on Flickr 30k. (b) Visualization of the cosine similarity between word vectors $w_s$ and $w_{s'}$ on Flickr 30k.

The ReferItGame dataset consists of more than 99,000 regions from 20,000 images. Bounding boxes are assigned to natural language expressions. We use the same bounding boxes as [38] and the same training/test split, i.e., 10,000 images for testing, 9,000 images for training and 1,000 images for validation. The Flickr 30k Entities dataset consists of more than 275k bounding boxes from 31k images, where each bounding box is annotated with the corresponding natural language phrase. We use the same training, validation and testing split as in [35].

Quantitative evaluation: In Tab. 1 and Tab. 2 we quantitatively compare the results of our approach to recent state-of-the-art baselines, where Prior = word priors, Geo = geometric information, Seg = segmentation maps, Det = detection maps, and bDet = detection maps + body part detections. An example is considered correct if the predicted box overlaps with the ground-truth box by more than 0.5 IoU. We observe our approach to outperform competing methods by around 3% on the Flickr 30k Entities dataset and by around 7% on the ReferItGame dataset. We also provide an ablation study of the word and image information, as shown in Tab. 1 and Tab. 2. In Tab. 3 we analyze the results for each "phrase type" provided by the Flickr 30k Entities dataset. As can be seen, our system outperforms the state-of-the-art in all phrase types except clothing. We note that our results have since been surpassed by [3, 7], which fine-tune the entire network including the feature extraction; CCA, GroundeR and our approach use a fixed pre-trained network for extracting image features.

Qualitative evaluation: Next we evaluate our approach qualitatively. In Fig. 1 and Fig. 4 we show success cases. We observe that our method successfully captures a variety of objects and scenes. In Fig. 5 we illustrate failure cases. We observe that in a few cases the word prior may hurt the prediction (e.g., shoes are typically in the bottom half of the image). Also, our system may fail when the energy is not a linear combination of the feature scores: for example, the score of "dirt bike" should not be the score of "dirt" plus the score of "bike." We provide additional results in the supplementary material.
Learned parameters and word embedding: Recall that in Eq. (2) our model learns a parameter per phrase-word and concept pair, $w_{s,c}$. We visualize its magnitude in Fig. 6 (a) for a subset of words and concepts. As can be seen, $w_{s,c}$ is large when the phrase word and the concept are related (e.g., $s$ = ship and $c$ = boat). This demonstrates that our model successfully learns the relationship between phrase words and image concepts. It also means that the "word vector" $w_s = [w_{s,1}, w_{s,2}, \dots, w_{s,|\mathcal{C}|}]$ can be interpreted as a word embedding. Therefore, in Fig. 6 (b), we visualize the cosine similarity between pairs of word vectors. Expected groups of words form, for example (bicycle, bike), (camera, cellphone), (coffee, cup, drink), (man, woman), (snowboarder, skier). The word vectors capture the image-spatial relationship of the words, meaning that items which can be "replaced" in an image are similar (e.g., a "snowboarder" can be replaced with a "skier" and the overall image would still be reasonable).
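Because each row $w_s$ acts as an embedding, the similarity structure of Fig. 6 (b) can be reproduced with a few lines of numpy. A small sketch, assuming the learned weights are available as a matrix with one row per word:

import numpy as np

def cosine_similarities(W, eps=1e-12):
    # W: |S| x |C| weight matrix; returns the matrix of pairwise cosine similarities.
    U = W / np.maximum(np.linalg.norm(W, axis=1, keepdims=True), eps)
    return U @ U.T

def nearest_words(W, words, query, k=5):
    # Words whose learned vectors are closest to that of the query word.
    S = cosine_similarities(np.asarray(W, dtype=float))
    i = words.index(query)
    order = np.argsort(S[i])[::-1]
    return [words[j] for j in order if j != i][:k]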
Computational Efficiency: Overall, our method's inference speed is comparable to CCA and much faster than GroundeR. The inference time can be divided into three parts: (1) extracting image features, (2) extracting language features, and (3) computing scores. For extracting image features, GroundeR requires a forward pass through VGG16 for each image region, whereas CCA and our approach require a single forward pass, which can be done in 142.85 ms. For extracting language features, our method requires index lookups, which take a negligible amount of time (less than 1e-6 ms); CCA uses word2vec for processing the text, which takes 0.070 ms; GroundeR uses a long short-term memory net, which takes 0.7457 ms. Computing the scores with our C++ implementation takes 1.05 ms on a CPU. CCA needs to compare projections of the text and image features, which takes 13.41 ms on a GPU and 609 ms on a CPU. GroundeR uses a single fully connected layer, which takes 0.31 ms on a GPU.

5 Conclusion

We demonstrated a mechanism for grounding of textual phrases which provides interpretability, is easy to extend, and permits globally optimal inference. In contrast to existing approaches, which are generally based on a small set of bounding box proposals, we efficiently search over all possible bounding boxes. We think interpretability, i.e., the linking of word and image concepts, is an important concept, particularly for textual grounding, and deserves more attention.

Acknowledgments: This material is based upon work supported in part by the National Science Foundation under Grant No. 1718221. This work is supported by NVIDIA Corporation with the donation of a GPU. This work is supported in part by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM Cognitive Horizons Network.

References
[1] R. Arandjelovic and A. Zisserman. Multiple queries for large scale specific object retrieval. In Proc. BMVC, 2012.
[2] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. arXiv preprint arXiv:1611.08050, 2016.
[3] K. Chen*, R. Kovvuri*, and R. Nevatia. Query-guided regression network with context policy for phrase grounding. In Proc. ICCV, 2017. (* equal contribution)
[4] L.-C. Chen*, G. Papandreou*, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In Proc. ICLR, 2015. (* equal contribution)
[5] T. Deselaers, B. Alexe, and V. Ferrari. Weakly supervised localization and learning with generic knowledge. 2012.
[6] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proc. CVPR, 2015.
[7] K. Endo, M. Aono, E. Nichols, and K. Funakoshi. An attention-based regression model for grounding textual phrases in images. In Proc. IJCAI, 2017.
[8] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (VOC) challenge. IJCV, 2010.
[9] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, and T. Mikolov. Devise: A deep visual-semantic embedding model. In Proc. NIPS, 2013.
[10] Y. Gong, L. Wang, M. Hodosh, J. Hockenmaier, and S. Lazebnik. Improving image-sentence embeddings using large weakly annotated photo collections. In Proc. ECCV, 2014.
[11] S. Guadarrama, E. Rodner, K. Saenko, N. Zhang, R. Farrell, J. Donahue, and T. Darrell. Open-vocabulary object retrieval. In Proc. RSS, 2014.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[13] S. C. Hoi, W. Liu, M. R. Lyu, and W.-Y. Ma. Learning distance metrics with contextual constraints for image retrieval. In Proc. CVPR, 2006.
[14] R. Hu, M. Rohrbach, J. Andreas, T. Darrell, and K. Saenko. Modeling relationships in referential expressions with compositional modular networks. In Proc. CVPR, 2017.
[15] R. Hu, H. Xu, M. Rohrbach, J. Feng, K. Saenko, and T. Darrell. Natural language object retrieval. In Proc. CVPR, 2016.
[16] T. Joachims, T. Finley, and C.-N. J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27-59, 2009.
[17] J. Johnson, R. Krishna, M. Stark, L. J. Li, D. Shamma, M. Bernstein, and L. Fei-Fei. Image retrieval using scene graphs. In Proc. CVPR, 2015.
[18] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proc. CVPR, 2015.
[19] A. Karpathy, A. Joulin, and L. Fei-Fei. Deep fragment embeddings for bidirectional image sentence mapping. In Proc. NIPS, 2014.
[20] S. Kazemzadeh, V. Ordonez, M. Matten, and T. L. Berg. ReferItGame: Referring to objects in photographs of natural scenes. In Proc. EMNLP, 2014.
[21] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. In TACL, 2015.
[22] B. Klein, G. Lev, G. Sadeh, and L. Wolf. Fisher vectors derived from hybrid gaussian-laplacian mixture models for image annotation. arXiv preprint arXiv:1411.7399, 2014.
[23] C. Kong, D. Lin, M. Bansal, R. Urtasun, and S. Fidler. What are you talking about? Text-to-image coreference. In Proc. CVPR, 2014.
[24] J. Krishnamurthy and T. Kollar. Jointly learning to parse and perceive: connecting natural language to the physical world. In Proc. TACL, 2013.
[25] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Efficient subwindow search: A branch and bound framework for object localization. PAMI, 2009.
[26] A. Lehmann, B. Leibe, and L. V. Gool. Fast PRISM: Branch and bound Hough transform for object class detection. IJCV, 2011.
Visual semantic search: Retrieving videos via complex textual queries. In Proc. CVPR, 2014. [28] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Doll?r, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV. Springer, 2014. [29] J. Mao, J. Huang, A. Toshev, O. Camburu, A. Yuille, and K. Murphy. Generation and comprehension of unambiguous object descriptions. In Proc. CVPR, 2016. [30] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). In Proc. ICLR, 2015. [31] C. Matuszek, N. Fitzgerald, L. Zettlemoyer, L. Bo, and D. Fox. A joint model of language and perception for grounded attribute learning. In Proc. ICML, 2012. [32] V. K. Nagaraja, V. I. Morariu, and L. S. Davis. Modeling context between objects for referring expression understanding. In Proc. ECCV, 2016. [33] D. Oneata, J. Revaud, J. Verbeek, and C. Schmid. Spatio-temporal object detection proposals. In Proc. ECCV, 2014. [34] B. Plummer, L. Wang, C. Cervantes, J. Caicedo, J. Hockenmaier, and S. Lazebnik. Collecting region-tophrase correspondences for richer image-to- sentence models. In Proc. ICCV, 2015. 10 [35] B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In ICCV, 2015. [36] B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In IJCV, 2017. [37] J. Redmon and A. Farhadi. Yolo9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242, 2016. [38] A. Rohrbach, M. Rohrbach, R. Hu, T. DArrell, and B. Schiele. Grounding of Textual Phrases in Images by Reconstruction. In Proc. ECCV, 2016. [39] A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele. Grounding of textual phrases in images by reconstruction. In ECCV, 2016. [40] F. Sadeghi, S. K. Divvala, and A. Farhadi. Viske: Visual knowledge extraction and question answering by visual verification of relation phrases. In Proc. CVPR, 2015. [41] A. G. Schwing and R. Urtasun. Efficient Exact Inference for 3D Indoor Scene Understanding. In Proc. ECCV, 2012. [42] Q. Sun and D. Batra. Submodboxes: Near-optimal search for a set of diverse object proposals. In Proc. NIPS, 2015. [43] L. Wang, Y. Li, and S. Lazebnik. Learning deep structure-preserving image-text em- beddings. In Proc. CVPR, 2016. [44] L. Wang, Y. Li, and S. Lazebnik. Learning deep structure-preserving image-text embeddings. In CVPR, 2016. [45] J. Yan, Z. Lei, L. Wen, and S. Z. Li. The Fastest Deformable Part Model for Object Detection. In Proc. CVPR, 2014. [46] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. In Proc. TACL, 2014. [47] H. Yu and J. M. Siskind. Grounded language learning from video described with sen- tences. In Proc. ACL, 2013. 11
Revisit Fuzzy Neural Network: Demystifying Batch Normalization and ReLU with Generalized Hamming Network Lixin Fan [email protected] Nokia Technologies Tampere, Finland Abstract We revisit fuzzy neural network with a cornerstone notion of generalized hamming distance, which provides a novel and theoretically justified framework to re-interpret many useful neural network techniques in terms of fuzzy logic. In particular, we conjecture and empirically illustrate that the celebrated batch normalization (BN) technique actually adapts the "normalized" bias such that it approximates the rightful bias induced by the generalized hamming distance. Once the due bias is enforced analytically, neither the optimization of bias terms nor the sophisticated batch normalization is needed. Also in the light of generalized hamming distance, the popular rectified linear units (ReLU) can be treated as setting a minimal hamming distance threshold between network inputs and weights. This thresholding scheme, on the one hand, can be improved by introducing double-thresholding on both positive and negative extremes of neuron outputs. On the other hand, ReLUs turn out to be non-essential and can be removed from networks trained for simple tasks like MNIST classification. The proposed generalized hamming network (GHN) as such not only lends itself to rigorous analysis and interpretation within fuzzy logic theory but also demonstrates fast learning speed, well-controlled behaviour and state-of-the-art performances on a variety of learning tasks. 1 Introduction Since the early 1990s the integration of fuzzy logic and computational neural networks has given birth to fuzzy neural networks (FNN) [1]. While the formal fuzzy set theory provides a strict mathematical framework in which vague conceptual phenomena can be precisely and rigorously studied [2, 3, 4, 5], application-oriented fuzzy technologies lag far behind theoretical studies. In particular, fuzzy neural networks have only demonstrated limited successes on some toy examples such as [6, 7]. In order to catch up with the rapid advances in recent neural network developments, especially those with deep layered structures, it is the goal of this paper to demonstrate the relevance of FNN, and moreover, to provide a novel view on its non-fuzzy counterparts. Our revisiting of FNN is not merely for the fond remembrances of the golden age of "soft computing" [8]. Instead it provides a novel and theoretically justified perspective of neural computing, in which we are able to re-examine and demystify some useful techniques that were proposed to improve either the effectiveness or the efficiency of neural network training processes. Among many others, batch normalization (BN) [9] is probably the most influential yet mysterious trick, which significantly improved training efficiency by adapting to the change in the distribution of layers' inputs (coined internal covariate shift). Such adaptations, when viewed within the fuzzy neural network framework, can be interpreted as rectifications of the deficiencies of neuron outputs with respect to the rightful generalized hamming distance (see Definition 1) between inputs and neuron weights. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Once
the appropriate rectification is applied, the ill effects of internal covariate shift are automatically eradicated, and consequently, one is able to enjoy the fast training process without resorting to the sophisticated learning method used by BN. Another crucial component in neural computing, the rectified linear unit (ReLU), has been widely used due to its strong biological motivations and mathematical justifications [10, 11, 12]. We show that within the generalized hamming group endowed with the generalized hamming distance, ReLU can be regarded as setting a minimal hamming distance threshold between network inputs and weights. This novel view immediately leads us to an effective double-thresholding scheme to suppress fuzzy elements in the generalized hamming group. The proposed generalized hamming network (GHN) forms its foundation on the cornerstone notion of generalized hamming distance (GHD), which is essentially defined as h(x, w) := x + w − 2xw for any x, w ∈ R (see Definition 1). Its connection with the inferencing rule in neural computing is obvious: the last term (−2xw) corresponds to element-wise multiplications of neuron inputs and weights, and since we aim to measure the GHD between inputs x and weights w, the bias term then should take the value x + w. In this article we define any network whose neuron outputs fulfill requirement (3) as a generalized hamming network. Since the underlying GHD induces a fuzzy XOR logic, GHN lends itself to rigorous analysis within fuzzy logic theory (see Definition 4). Apart from its theoretical appeals, GHN also demonstrates appealing features in terms of fast learning speed, well-controlled behaviour and simple parameter settings (see Section 4). 1.1 Related Work Fuzzy logic and fuzzy neural network: the notion of fuzzy logic is based on the rejection of the fundamental principle of bivalence of classical logic, i.e. that any declarative sentence has only two possible truth values, true and false. Although the earliest connotation of fuzzy logic was attributed to Aristotle, the founder of classical logic [13], it was Zadeh's publication in 1965 that ignited the enthusiasm about the theory of fuzzy sets [2]. Since then mathematical developments have advanced to a very high standard and are still forthcoming to this day [3, 4, 5]. Fuzzy neural networks were proposed to take advantage of the flexible knowledge-acquiring capability of neural networks [1, 14]. In theory it was proved that fuzzy systems and certain classes of neural networks are equivalent and convertible with each other [15, 16]. In practice, however, successful applications of FNNs are limited to some toy examples only [6, 7]. Demystifying neural networks: efforts to interpret neural networks by means of propositional logic date back to McCulloch & Pitts' seminal paper [17]. Recent research along this line includes [18] and the references therein, in which First Order Logic (FOL) rules are encoded using soft logic on continuous truth values from the interval [0, 1]. These interpretations, albeit interesting, seldom explain effective neural network techniques such as batch normalization or ReLU. Recently [19] provided an improvement (and explanation) to batch normalization by removing dependencies in weight normalization between the examples in a minibatch. Binary-valued neural network: the Restricted Boltzmann Machine (RBM) was used to model an "ensemble of binary vectors"
and rose to prominence in the mid-2000s after fast learning algorithms were demonstrated by Hinton et al. [20, 21]. Recent binarized neural networks [22, 23] approximate standard CNNs by binarizing filter weights and/or inputs, with the aim of reducing computational complexity and memory consumption. The XNOR operation employed in [23] is limited to binary hamming distance and not readily applicable to non-binary neuron weights and inputs. Ensemble of binary patterns: the distributive property of GHD described in (1) provides an intriguing view on neural computing: even though real-valued patterns are involved in the computation, the computed GHD is strictly equivalent to the mean of binary hamming distances across two ensembles of binary patterns! This novel view illuminates the connection between generalized hamming networks and efficient binary features, which have long been used in various computer vision tasks, for instance, the celebrated Adaboost face detection [24], numerous binary features for key-point matching [25, 26] and binary codes for large database hashing [27, 28, 29, 30].

Figure 1: (a) h(a, b) has one fuzzy region near the identity element 0.5 (in white), two positively confident (in red) and two negatively confident (in blue) regions from above and below, respectively. (b) Fuzziness F(h(a, b)) = h(a, b) ⊕ h(a, b) has its maxima along a = 0.5 or b = 0.5. (c) μ(h(a, b)) : U → I, where μ(h) = 1/(1 + exp(0.5 − h)) is the logistic function used to assign membership to fuzzy set elements (see Definition 4). (d) Partial derivative of μ(h(a, b)). Note that the magnitudes of the gradient in the fuzzy region are non-negligible.

2 Generalized Hamming Distance

Definition 1. Let a, b, c ∈ U ⊆ R, and let the generalized hamming distance (GHD), denoted by ⊕, be the binary operator h : U × U → U; h(a, b) := a ⊕ b = a + b − 2·a·b. Then
(i) for U = {0, 1}, GHD de-generalizes to the binary hamming distance, with 0 ⊕ 0 = 0; 0 ⊕ 1 = 1; 1 ⊕ 0 = 1; 1 ⊕ 1 = 0;
(ii) for U = [0.0, 1.0], the unitary interval I, a ⊕ b ∈ I (closure); Remark: this case is referred to as the "restricted" hamming distance, in the sense that inverses of elements in I are not necessarily contained in I (see below for the definition of inverse);
(iii) for U = R, H := (R, ⊕) is a group satisfying the five abelian group axioms, and is thus referred to as the generalized hamming group or hamming group:
- a ⊕ b = (a + b − 2·a·b) ∈ R (closure);
- a ⊕ b = (a + b − 2·a·b) = b ⊕ a (commutativity);
- (a ⊕ b) ⊕ c = (a + b − 2·a·b) + c − 2·(a + b − 2·a·b)·c = a + (b + c − 2·b·c) − 2·a·(b + c − 2·b·c) = a ⊕ (b ⊕ c) (associativity);
- there exists e = 0 ∈ R such that e ⊕ a = a ⊕ e = (0 + a − 2·0·a) = a (identity element);
- for each a ∈ R \ {0.5}, there exists a⁻¹ := a/(2a − 1) such that a ⊕ a⁻¹ = a + a/(2a − 1) − 2a·a/(2a − 1) = 0 = e; and we define ∞ := (0.5)⁻¹ (inverse element).
Remark: note that 1 ⊕ a = 1 − a, which complements a. 0.5 is a fixed point, since for all a ∈ R, 0.5 ⊕ a = 0.5, and 0.5 ⊕ ∞ = 0 according to the definition.
(iv) GHD naturally leads to a measurement of fuzziness: F(a) := a ⊕ a, F : R → (−∞, 0.5], with F(a) ≥ 0 for all a ∈ [0, 1] and F(a) < 0 otherwise.
Therefore [0, 1] is referred to as the fuzzy region, in which F(0.5) = 0.5 has the maximal fuzziness and F(0) = F(1) = 0 are the two boundary points. The outer regions (−∞, 0] and [1, ∞) are the negative and positive confident regions, respectively. See Figure 1 (a) for the surface of h(a, b), which has one central fuzzy region, two positive confident and two negative confident regions.
(v) The direct sum of hamming groups is still a hamming group, H^L := ⊕_{l≤L} H_l: let x = {x_1, ..., x_L}, y = {y_1, ..., y_L} ∈ H^L be two group members; then the generalized hamming distance is defined as the arithmetic mean of the element-wise GHDs: G^L(x ⊕_L y) := (1/L)(x_1 ⊕ y_1 + ... + x_L ⊕ y_L). And let x̄ = (x_1 + ... + x_L)/L, ȳ = (y_1 + ... + y_L)/L be the arithmetic means of the respective elements; then G^L(x ⊕_L y) = x̄ + ȳ − (2/L)(x · y), where x · y = Σ_{l=1}^L x_l · y_l is the dot product. (Footnote 1: by this extension, it is R̄ = R ∪ {−∞, +∞} instead of R on which we have all group members.)
(vi) Distributive property: let X̄^M = (x^1 + ... + x^M)/M ∈ H^L be the element-wise arithmetic mean of a set of members x^m ∈ H^L, and let Ȳ^N be defined in the same vein. Then GHD is distributive:

G^L(X̄^M ⊕_L Ȳ^N) = (1/L) Σ_{l=1}^L (x̄_l ⊕ ȳ_l) = (1/(L·M·N)) Σ_{l=1}^L Σ_{m=1}^M Σ_{n=1}^N (x_l^m ⊕ y_l^n) = (1/(M·N)) Σ_{m=1}^M Σ_{n=1}^N G^L(x^m ⊕_L y^n).   (1)

Remark: in case x_l^m, y_l^n ∈ {0, 1}, i.e. for two sets of binary patterns, the mean of the binary hamming distances between the two sets can be efficiently computed as the GHD between the two real-valued patterns X̄^M, Ȳ^N. Conversely, a real-valued pattern can be viewed as the element-wise average of an ensemble of binary patterns.

3 Generalized Hamming Network

Despite the recent progress in deep learning, artificial neural networks have long been criticized for their "black box" nature: "they capture hidden relations between inputs and outputs with a highly accurate approximation, but no definitive answer is offered for the question of how they work" [16]. In this section we provide an interpretation of neural computing by showing that, if the condition specified in (3) is fulfilled, the outputs of each neuron can be strictly defined as the generalized hamming distance between inputs and weights. Moreover, the computation of GHD induces a fuzzy implication of the XOR connective, and therefore the inferencing of the entire network can be regarded as a logical calculus in the same vein as described in McCulloch & Pitts' seminal paper [17].

3.1 New perspective on neural computing

The bearing of the generalized hamming distance on neural computing is elucidated by looking at the negative of the GHD (see Definition 1) between inputs x ∈ H^L and weights w ∈ H^L, in which L denotes the length of the neuron weights, e.g. in convolution kernels:

−G^L(w ⊕_L x) = (2/L) w · x − (1/L) Σ_{l=1}^L w_l − (1/L) Σ_{l=1}^L x_l.   (2)

Divide (2) by the constant 2/L and let

b = −(1/2)(Σ_{l=1}^L w_l + Σ_{l=1}^L x_l);   (3)

then it becomes the familiar form (w · x + b) of neuron outputs, save the non-linear activation function. By enforcing the bias term to take the value given in (3), standard neuron outputs measure negatives of the GHD between inputs and weights. Note that, for each layer, the bias term Σ_{l=1}^L x_l is averaged over neighbouring neurons in each individual input image. The bias term Σ_{l=1}^L w_l is computed separately for each filter in fully connected or convolution layers. When weights are updated during the optimization, Σ_{l=1}^L w_l changes accordingly to keep up with the weights and maintain stable neuron outputs.
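As a quick sanity check of Eqs. (2)-(3), the identity −G^L(w ⊕_L x) = (2/L)(w · x + b), with the analytic bias of Eq. (3), can be verified numerically. The sketch below is my own illustration rather than code from the paper; the array size and random seed are arbitrary.

```python
import numpy as np

# Numerical check of Eqs. (2)-(3): with the bias set analytically by Eq. (3),
# the standard pre-activation w.x + b equals -(L/2) times the mean GHD.
rng = np.random.default_rng(0)
L = 8
x = rng.normal(size=L)                 # neuron inputs
w = rng.normal(size=L)                 # neuron weights

ghd = np.mean(w + x - 2.0 * w * x)     # G^L(w ⊕_L x), element-wise mean
b = -0.5 * (w.sum() + x.sum())         # rectified bias, Eq. (3)
pre_activation = w @ x + b             # familiar neuron output

assert np.allclose(-ghd, (2.0 / L) * pre_activation)
```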
We discuss below (re-)interpretations of neural computing in terms of GHD. Fuzzy inference: As illustrated in Definition 4, GHD induces a fuzzy XOR connective. Therefore the negative of GHD quantifies the degree of equivalence between inputs x and weights w (see Definition 4 of fuzzy XOR), i.e. the fuzzy truth value of the statement "x ≡ w", where ≡ denotes a fuzzy equivalence relation. For GHD with multiple layers stacked together, neighbouring neuron outputs from the previous layer are integrated to form composite statements, e.g. "(x_1^1 ≡ w_1^1, ..., x_i^1 ≡ w_i^1) ≡ w_j^2", where superscripts correspond to the two layers. Thus stacked layers form more complex, and hopefully more powerful, statements as the layer depth increases.

Figure 2: Left to right: mean, max and min of neuron outputs, with/without batch normalization (BN, WO_BN) and with the generalized hamming distance (XOR). Outputs are averaged over all 64 filters in the first convolution layer and plotted over 30 epochs of training of the MNIST network used in our experiment (see Section 4).

Batch normalization demystified: When a mini-batch of training samples X = {x^1, ..., x^M} is involved in the computation, due to the distributive property of GHD, the data-dependent bias term Σ_{l=1}^L x_l equals the arithmetic mean of the corresponding bias terms computed for each sample in the mini-batch, i.e. (1/M) Σ_{m=1}^M Σ_{l=1}^L x_l^m. It is almost impossible to maintain a constant scalar b that fulfils this requirement when the mini-batch changes, especially at deep layers of the network, whose inputs are influenced by the weights of incoming layers. The celebrated batch normalization (BN) technique therefore proposed a learning method to compensate for the change in the input vectors, with additional parameters γ, β to be learnt during training [9]. It is our conjecture that batch normalization is approximating this rightful bias through optimization, and this connection is empirically revealed in Figure 2, with very similar neuron outputs obtained by BN and GHD. Indeed they are highly correlated during the course of training (with Pearson correlation coefficient = 0.97), confirming our view that BN is attempting to influence the bias term according to (3). Once b is enforced to follow (3), neither the optimization of bias terms nor the sophisticated learning method of BN is needed. In the following section we illustrate a rectified neural network designed as such.

Rectified linear units (ReLU) redesigned: Due to its strong biological motivations [10] and mathematical justifications [11], the rectified linear unit (ReLU) is the most popular activation function used for deep neural networks [31]. If neuron outputs are rectified as generalized hamming distances, the activation function max(0, 0.5 − h(x, w)) simply sets a minimal hamming distance threshold of 0.5 (see Figure 1). Astute readers may immediately spot two limitations of this activation function: a) it only takes into account the negative confidence region while disregarding the positive confidence regions; b) it allows elements in the fuzzy regime near 0.5 to misguide the optimization with their non-negligible gradients. A straightforward remedy to ReLU is to suppress elements within the fuzzy region by setting outputs between [0.5 −
r, 0.5 + r] to 0.5, where r is a parameter that controls the acceptable fuzziness in neuron outputs. In particular, we may set the thresholds adaptively, e.g. [0.5 − r·O, 0.5 + r·O], where O is the maximal magnitude of the neuron outputs and the threshold ratio r is adjusted by the optimizer. This double-thresholding strategy effectively prevents noisy gradients of fuzzy elements, since 0.5 is a fixed point and x ⊕ 0.5 = 0.5 for any x. Empirically we found that this scheme, in tandem with the rectification (3), dramatically boosts training efficiency for challenging tasks such as CIFAR10/100 image classification. It must be noted, however, that the use of a non-linear activation as such is not essential for GHD-based neural computing. When the double-thresholding is switched off (by fixing r = 0), learning is prolonged for the challenging CIFAR10/100 image classification, but its influence on the simple MNIST classification is almost negligible (see Section 4 for experimental results).

3.2 Generalized hamming network with induced fuzzy XOR

Definition 2. A generalized hamming network (GHN) is any network consisting of neurons whose outputs h ∈ H^L are related to neuron inputs x ∈ H^L and weights w ∈ H^L by h = x ⊕_L w.

Remark: In case the bias term is computed directly from (3), such that h = x ⊕_L w is fulfilled strictly, the network is called a rectified GHN or simply a GHN. In other cases, where the bias terms approximate the rightful offsets (e.g. by batch normalization [9]), the trained network is called an approximated GHN. Compared with traditional neural networks, the optimization of bias terms is no longer needed in a GHN. Empirically, it is shown that the proposed GHN benefits from a fast and robust learning process that is on par with that of the batch-normalization approach, yet without resorting to the sophisticated learning of additional parameters (see Section 4 for experimental results). On the other hand, GHN also benefits from the rapid developments of neural computing techniques, in particular those employing parallel computing on GPUs. Due to this efficient implementation of GHNs, it is the first time that fuzzy neural networks have demonstrated state-of-the-art performances on learning tasks with large-scale datasets. Often neuron outputs are clamped by a logistic activation function to within the range [0, 1], so that outputs can be compared with the target labels in supervised learning. As shown below, GHD followed by such a non-linear activation actually induces a fuzzy XOR connective. We briefly review the basic notion of fuzzy sets used in our work and refer readers to [2, 32, 13] for thorough treatments and reviews of the topic.

Definition 3. Fuzzy Set: Let X be a universal set of elements x ∈ X; then a fuzzy set A is a set of pairs: A := {(x, μ_A(x)) | x ∈ X, μ_A(x) ∈ I}, in which μ_A : X → I is called the membership function (or grade of membership). Remark: In this work we let X be a Cartesian product of two sets, X = P × U, where P is a (2D or 3D) collection of neural nodes and U are real numbers in I or in R. We define the membership function μ_X(x) := μ_U(x_p) for all x = (p, x_p) ∈ X, such that it depends on x_p only. For the sake of brevity we abuse the notation and use μ(x), μ_X(x) and μ_U(x_p) interchangeably.

Definition 4. Induced fuzzy XOR: let two fuzzy set elements a, b ∈ U be assigned respective grades of membership by a membership function μ : U → I with μ(a) = i, μ(b) = j; then the generalized hamming distance h(a, b) : U × U → U induces a fuzzy XOR connective E : I ×
I → I whose membership function is given by

μ_E(i, j) = μ(h(μ⁻¹(i), μ⁻¹(j))).   (4)

Remark: For the restricted case U = I, the membership function can be trivially defined as the identity function μ = id_I, as proved in [4]. Remark: For the generalized case where U = R, the fuzzy membership μ can be defined by a sigmoid function such as the logistic, tanh or any function U → I. In this work we adopt the logistic function μ(a) = 1/(1 + exp(0.5 − a)), and the resulting fuzzy XOR connective is given by the following membership function:

μ_E(i, j) = 1 / (1 + exp(0.5 − μ⁻¹(i) ⊕ μ⁻¹(j))),   (5)

where μ⁻¹(a) = −ln(1/a − 1) + 1/2 is the inverse of μ(a). Following this analysis, it is possible to rigorously formulate the neuron computing of the entire network according to the inference rules of fuzzy logic theory (in the same vein as illustrated in [17]). Nevertheless, research along this line is out of the scope of the present article and will be reported elsewhere.

4 Performance evaluation

4.1 A case study with MNIST image classification

Overall performance: we tested a simple four-layered GHN (cv[1,5,5,16]-pool-cv[16,5,5,64]-pool-fc[1024]-fc[1024,10]) on the MNIST dataset, with 99.0% test accuracy obtained. For this relatively simple dataset, GHN is able to reach test accuracies above 0.95 within 1000 mini-batches at a learning rate of 0.1. This learning speed is on par with that of batch normalization (BN), but without resorting to the learning of additional parameters as in BN. It was also observed that a wide range of large learning rates (from 0.01 to 0.1) all resulted in similar final accuracies (see below). We ascribe this well-controlled, robust learning behaviour to the rectified bias terms enforced in GHNs.

Figure 3: Test accuracies of MNIST classification with the Generalized Hamming Network (GHN). Left: test accuracies without using non-linear activation (by setting r = 0). Middle: with r optimized for each layer. Right: with r optimized for each filter. Four learning rates, i.e. {0.1, 0.05, 0.025, 0.01}, are used for each case, with the final accuracy reported in brackets. Note that the numbers of mini-batches are on a logarithmic scale along the x-axis.

Influence of learning rate: This experiment compares performances with different learning rates, and Figure 3 (middle, right) shows that a very large learning rate (0.1) leads to much faster learning without the risk of divergence. A small learning rate (0.01) suffices to guarantee a comparable final test accuracy. Therefore we set the learning rate to a constant 0.1 for all experiments unless stated otherwise.

Influence of non-linear double-thresholding: The non-linear double-thresholding can be turned off by setting the threshold ratio r = 0 (see Section 3.1). Optionally, the parameter r is optimized automatically together with the neuron weights. Figure 3 (left) shows that the GHN without non-linear activation (r = 0) performs equally well compared with the cases where r is optimized (Figure 3 middle, right). There are no significant differences between the two settings for this relatively simple task.
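For concreteness, the double-thresholding activation introduced in Section 3.1 and controlled by r above can be sketched as follows. This is my own reading of the scheme rather than the authors' code; the adaptive band [0.5 − r·O, 0.5 + r·O] follows the text, while the function name and tie-handling are arbitrary choices.

```python
import numpy as np

# Sketch of the double-thresholding activation: neuron outputs falling in
# the fuzzy band around 0.5 are clamped to 0.5 (a fixed point of GHD), so
# their gradients no longer misguide the optimization.
def double_threshold(h, r):
    O = np.max(np.abs(h))               # maximal magnitude of neuron outputs
    lo, hi = 0.5 - r * O, 0.5 + r * O   # adaptive fuzzy band
    out = h.copy()
    out[(h > lo) & (h < hi)] = 0.5
    return out

h = np.array([-1.2, 0.45, 0.5, 0.55, 2.0])
print(double_threshold(h, r=0.1))       # r = 0 recovers the identity map
```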
4.2 CIFAR10/100 image classification

In this experiment, we tested a six-layered GHN (cv[3,3,3,64]-cv[64,5,5,256]-pool-cv[256,5,5,256]-pool-fc[1024]-fc[1024,512]-fc[1024,nclass]) on both the CIFAR10 (nclass = 10) and CIFAR100 (nclass = 100) datasets. Figure 4 shows that the double-thresholding scheme improves the learning efficiency dramatically for these challenging image classification tasks: when the parameter r is optimized for each feature filter, the number of iterations required to reach the same level of test accuracy is reduced by 1 to 2 orders of magnitude. It must be noted that the performances of such a simple generalized hamming network (89.3% for CIFAR10 and 60.1% for CIFAR100) are on par with many sophisticated networks reported in [33]. In our view, the rectified bias enforced by (3) can be readily applied to these sophisticated networks, although the resulting improvements may vary and remain to be tested.

4.3 Generative modelling with a Variational Autoencoder

In this experiment, we tested the effect of rectification in GHN applied to a generative modelling setting. One crucial difference is that the objective is now to minimize the reconstruction error instead of a classification error. It turns out that the double-thresholding scheme is no longer relevant in this setting and is thus not used in the experiment. The baseline network (784-400-400-20) used in this experiment is an improved implementation [34] of the influential paper [35], trained on the MNIST dataset of images of handwritten digits. We rectified the outputs following (3) and, instead of optimizing the lower bound of the log marginal likelihood as in [35], we directly minimize the reconstruction error. We also did not include weight regularization terms in the optimization, as they are unnecessary for GHN. Figure 5 (left) illustrates the reconstruction error with respect to the number of training steps (mini-batches). It is shown that the rectified generalized hamming network converges to a lower minimal reconstruction error than the baseline network, with about a 28% reduction. The rectification also leads to a faster convergence, which is in accordance with our observations in other experiments.

Figure 4: Left: GHN test accuracies of CIFAR10 classification (OPT THRES: parameter r optimized; WO THRES: without non-linear activation). Right: GHN test accuracies of CIFAR100 classification (OPT THRES: parameter r optimized; WO THRES: without non-linear activation).

Figure 5: Left: Reconstruction errors of the convolutional VAE with and without rectification. Right: Evaluation accuracies of sentence classification with and without GHN rectification.

4.4 Sentence classification

A simple CNN has been used for sentence-level classification tasks, and excellent results were demonstrated on multiple benchmarks [36]. The baseline network used in this experiment is a re-implementation of [36] made available from [37]. Figure 5 (right) plots the accuracy curves from both networks.
It was observed that the rectified GHN did improve the learning speed, but did not improve the final accuracy compared with the baseline network: both networks yielded a final evaluation accuracy around 74%, despite the training accuracy being almost 100%. The over-fitting in this experiment is probably due to the relatively small Movie Review dataset, with 10,662 example review sentences, half positive and half negative.

5 Conclusion

In summary, we proposed a rectified generalized hamming network (GHN) architecture which materializes a re-emerging principle of fuzzy logic inferencing. This principle has been extensively studied from a theoretical fuzzy logic point of view, but has been largely overlooked in the practical research of ANNs. The rectified neural network derives fuzzy logic implications with the underlying generalized hamming distances computed in neuron outputs. Bearing this rectified view in mind, we proposed to compute bias terms analytically, without resorting to sophisticated learning methods such as batch normalization. Moreover, we have shown that the rectified linear unit (ReLU) is theoretically non-essential and can be skipped for some easy tasks, while for challenging classification problems the double-thresholding scheme did improve the learning efficiency significantly. The simple architecture of GHN, on the one hand, lends itself to rigorous analysis, and this follow-up research will be reported elsewhere. On the other hand, GHN is the first fuzzy neural network of its kind to have demonstrated fast learning speed, well-controlled behaviour and state-of-the-art performances on a variety of learning tasks. By cross-checking existing networks against GHN, one is able to grasp the most essential ingredients of deep learning. It is our hope that this kind of comparative study will shed light on future deep learning research and eventually open the "black box" of artificial neural networks [16].

Acknowledgement

I am grateful to anonymous reviewers for their constructive comments to improve the quality of this paper. I greatly appreciate valuable discussions and support from colleagues at Nokia Technologies.

References

[1] M. M. Gupta and D. H. Rao. Invited review on the principles of fuzzy neural networks. Fuzzy Sets and Systems, 61:1–18, 1994.
[2] L. A. Zadeh. Fuzzy sets. Information and Control, 8:338–353, 1965.
[3] József Tick, János Fodor, and John Von Neumann. Fuzzy implications and inference process. Computing and Informatics, 24:591–602, 2005.
[4] Benjamín C. Bedregal, Renata H. S. Reiser, and Graçaliz P. Dimuro. Xor-implications and E-implications: classes of fuzzy implications based on fuzzy Xor. Electronic Notes in Theoretical Computer Science, 247:5–18, 2009.
[5] Krassimir Atanassov. On Zadeh's intuitionistic fuzzy disjunction and conjunction. NIFS, 17(1):1–4, 2011.
[6] Abhay B. Ulsari. Training artificial neural networks for fuzzy logic. Complex Systems, 6:443–457, 1992.
[7] Witold Pedrycz and Giancarlo Succi. fXOR fuzzy logic networks. Soft Computing, 7, 2002.
[8] H.-J. Zimmermann. Fuzzy set theory review. Advanced Review, 2010.
[9] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis R. Bach and David M. Blei, editors, ICML, volume 37, pages 448–456, 2015.
[10] R. Hahnloser, R. Sarpeshkar, M. Mahowald, R. J. Douglas, and H. S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405, 2000.
[11] R. Hahnloser and H. S. Seung.
Permitted and forbidden sets in symmetric threshold-linear networks. In NIPS, 2001.
[12] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Geoffrey Gordon, David Dunson, and Miroslav Dudík, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pages 315–323, 11–13 Apr 2011.
[13] R. Belohlavek, J. W. Dauben, and G. J. Klir. Fuzzy Logic and Mathematics: A Historical Perspective. Oxford University Press, 2017.
[14] P. Liu and H. X. Li. Fuzzy Neural Network Theory and Application. Series in Machine Perception and Artificial Intelligence. World Scientific, 2004.
[15] Jyh-Shing Roger Jang and Chuen-Tsai Sun. Functional equivalence between radial basis function networks and fuzzy inference systems. IEEE Trans. Neural Networks, 4(1):156–159, 1993.
[16] José Manuel Benítez, Juan Luis Castro, and Ignacio Requena. Are artificial neural networks black boxes? IEEE Trans. Neural Networks, 8(5):1156–1164, 1997.
[17] Warren McCulloch and Walter Pitts. A logical calculus of ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:127–147, 1943.
[18] Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard H. Hovy, and Eric P. Xing. Harnessing deep neural networks with logic rules. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers, 2016.
[19] Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. page 901, 2016.
[20] Geoffrey Hinton and Ruslan Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[21] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Johannes Fürnkranz and Thorsten Joachims, editors, Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814. Omnipress, 2010.
[22] Matthieu Courbariaux and Yoshua Bengio. Binarized neural network: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR, abs/1602.02830, 2016.
[23] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. CoRR, abs/1603.05279, 2016.
[24] Paul Viola and Michael J. Jones. Robust real-time face detection. Int. J. Comput. Vision, 57(2):137–154, May 2004.
[25] Michael Calonder, Vincent Lepetit, Christoph Strecha, and Pascal Fua. BRIEF: Binary robust independent elementary features. In Proceedings of the 11th European Conference on Computer Vision: Part IV, ECCV'10, pages 778–792, 2010.
[26] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, ICCV '11, pages 2564–2571, Washington, DC, USA, 2011.
[27] Brian Kulis and Trevor Darrell. Learning to hash with binary reconstructive embeddings. In Proceedings of the 22nd International Conference on Neural Information Processing Systems, NIPS'09, pages 1042–1050, 2009.
[28] Mohammad Norouzi and David M. Blei. Minimal loss hashing for compact binary codes. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 353–360, New York, NY, USA, 2011.
[29] Kevin Lin, Huei-Fang Yang, Jen-Hao Hsiao, and Chu-Song Chen.
Deep learning of binary hash codes for fast image retrieval. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2015.
[30] Mohammad Norouzi, David J. Fleet, and Ruslan R. Salakhutdinov. Hamming distance metric learning. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1061–1069. 2012.
[31] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[32] H.-J. Zimmermann. Fuzzy Set Theory and Its Applications. Kluwer Academic Publishers, Norwell, MA, USA, 2001.
[33] What is the class of this image? Discover the current state of the art in objects classification. http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html. Accessed: 2017-07-19.
[34] A baseline variational auto-encoder based on "Auto-encoding variational Bayes". https://github.com/y0ast/VAE-TensorFlow. Accessed: 2017-05-19.
[35] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. CoRR, abs/1312.6114, 2013.
[36] Yoon Kim. Convolutional neural networks for sentence classification. CoRR, abs/1408.5882, 2014.
[37] A baseline CNN network for sentence classification implemented with TensorFlow. https://github.com/dennybritz/cnn-text-classification-tf. Accessed: 2017-05-19.
Speeding Up Latent Variable Gaussian Graphical Model Estimation via Nonconvex Optimization

Pan Xu
Department of Computer Science, University of Virginia, Charlottesville, VA 22904
[email protected]

Jian Ma
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
[email protected]

Quanquan Gu
Department of Computer Science, University of Virginia, Charlottesville, VA 22904
[email protected]

Abstract

We study the estimation of the latent variable Gaussian graphical model (LVGGM), where the precision matrix is the superposition of a sparse matrix and a low-rank matrix. In order to speed up the estimation of the sparse plus low-rank components, we propose a sparsity constrained maximum likelihood estimator based on matrix factorization, and an efficient alternating gradient descent algorithm with hard thresholding to solve it. Our algorithm is orders of magnitude faster than the convex relaxation based methods for LVGGM. In addition, we prove that our algorithm is guaranteed to linearly converge to the unknown sparse and low-rank components up to the optimal statistical precision. Experiments on both synthetic and genomic data demonstrate the superiority of our algorithm over the state-of-the-art algorithms and corroborate our theory.

1 Introduction

For a d-dimensional Gaussian graphical model (i.e., multivariate Gaussian distribution) N(0, Σ*), the inverse of the covariance matrix Ω* = (Σ*)^{-1} (also known as the precision matrix or concentration matrix) measures the conditional dependence relationships between the marginal random variables [19]. When the number of observations is comparable to the ambient dimension of the Gaussian graphical model, additional structural assumptions are needed for consistent estimation. Sparsity is one of the most common structures imposed on the precision matrix in Gaussian graphical models (GGM), because it gives rise to a sparse graph, which characterizes the conditional dependence of the marginal variables. The problem of estimating the sparse precision matrix in Gaussian graphical models has been studied by a large body of literature [23, 29, 12, 28, 6, 34, 37, 38, 33]. However, real-world data may not follow a sparse GGM, especially when some of the variables are unobservable. To alleviate this problem, the latent variable Gaussian graphical model (LVGGM) [9, 24] has been studied, where the precision matrix of the observed variables is conditionally sparse given the latent (i.e., unobserved) variables, but marginally not sparse. It is well known that in LVGGM, the precision matrix Ω* can be represented as the superposition of a sparse matrix S* and a low-rank matrix L*, where the latent variables contribute to the low-rank component of the precision matrix. In other words, we have Ω* = S* + L*. In the learning problem of LVGGM, the goal is to estimate both the unknown sparse component S* and the low-rank component L* of the precision matrix simultaneously.

In the seminal work, Chandrasekaran et al. [9] proposed a maximum-likelihood estimator based on an ℓ1 norm penalty on the sparse matrix and a nuclear norm penalty on the low-rank matrix, and proved the model selection consistency for LVGGM estimation. Meng et al. [24] studied a similar penalized estimator, and derived Frobenius norm error bounds based on the restricted strong convexity [26] and the structural Fisher incoherence condition between the sparse and low-rank components.
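To make the decomposition concrete, the following minimal NumPy sketch builds a toy precision matrix of exactly this sparse-plus-low-rank form and samples data from the corresponding Gaussian. The sizes, the diagonally dominant choice of S* (which keeps Ω* positive definite), and all names are illustrative assumptions of ours, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
d, r, n = 50, 2, 1000

# Toy sparse component S*: diagonally dominant so that Omega* stays positive definite
S_star = np.diag(np.full(d, 5.0))
S_star[0, 1] = S_star[1, 0] = 0.8            # a few sparse off-diagonal entries

# Low-rank component L* = Z* Z*^T contributed by the r latent variables
Z_star = rng.normal(scale=0.3, size=(d, r))
L_star = Z_star @ Z_star.T

Omega_star = S_star + L_star                  # precision matrix of the observed variables
X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(Omega_star), size=n)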
Both of these methods for LVGGM estimation are based on penalized convex optimization problems, which can be solved by a log-determinant proximal point algorithm [32] or the alternating direction method of multipliers [22]. Due to the nuclear norm penalty, these convex optimization algorithms need a full singular value decomposition (SVD) to solve the proximal mapping of the nuclear norm at each iteration, which results in an extremely high time complexity of O(d^3). When d is large, as is often the case in the high-dimensional setting, the convex relaxation based methods are computationally intractable. It is worth noting that a full SVD cannot be accelerated by the power method [13] or other randomized SVD algorithms [15]; hence the O(d^3) cost is unavoidable whenever nuclear norm regularization is employed.

In this paper, in order to speed up learning LVGGM, we propose a novel sparsity constrained maximum likelihood estimator for LVGGM based on matrix factorization. Specifically, inspired by recent work on matrix factorization [18, 16, 44, 45, 11, 30], we propose to reparameterize the low-rank component L of the precision matrix as the product of smaller matrices, i.e., L = ZZ^T, where Z ∈ R^{d×r} and r ≪ d is the number of latent variables. This factorization captures the intrinsic low-rank structure of L and automatically ensures its low-rankness. We propose an alternating gradient descent algorithm with hard thresholding to solve the new estimator. We prove that the output of our algorithm is guaranteed to linearly converge to the unknown parameters up to the statistical precision. In detail, our algorithm enjoys O(d^2 r) per-iteration time complexity, which outperforms the O(d^3) per-iteration complexity of state-of-the-art LVGGM estimators based on nuclear norm penalty [9, 22, 24]. In addition, the estimators from our algorithm for LVGGM attain a max{O_P(√(s* log d/n)), O_P(√(rd/n))} statistical rate of convergence in terms of Frobenius norm, where s* is the conditional sparsity of the precision matrix (i.e., the sparsity of S*), and r is the number of latent variables (i.e., the rank of L*). This matches the minimax optimal convergence rate for LVGGM estimation [9, 1, 24]. Thorough experiments on both synthetic and breast cancer genomic datasets show that our algorithm is orders of magnitude faster than existing methods. It is also worth noting that, although our estimator and algorithm are designed for LVGGM, they are directly applicable to any Gaussian graphical model where the precision matrix is the sum of a sparse matrix and a low-rank matrix, and the theoretical guarantees of our algorithm still hold.

The remainder of this paper is organized as follows: In Section 2, we briefly review existing work that is relevant to our study. We present our estimator and algorithm in detail in Section 3, and the main theory in Section 4. In Section 5, we compare the proposed algorithm with the state-of-the-art algorithms on both synthetic data and real-world breast cancer data. Finally, we conclude this paper in Section 6.

Notation. For matrices A, B with commensurate dimensions, we use ⟨A, B⟩ = tr(A^T B) to denote their inner product and A ⊗ B to denote their Kronecker product. For a matrix A ∈ R^{d×d}, we denote its (ordered) singular values by σ_1(A) ≥ σ_2(A) ≥ ... ≥ σ_d(A) ≥ 0. We denote by A^{-1} the inverse of A, and by |A| its determinant. We use the notation ‖·‖ for various types of matrix norms, including the spectral norm ‖A‖_2 and the Frobenius norm ‖A‖_F.
We also use the following norms: ‖A‖_{0,0} = Σ_{i,j} 1(A_ij ≠ 0), ‖A‖_{∞,∞} = max_{1≤i,j≤d} |A_ij|, and ‖A‖_{1,1} = Σ_{i,j=1}^d |A_ij|. A constant is called an absolute constant if it does not depend on the parameters of the problem, e.g., the dimension and sample size. We write a ≲ b if a is less than b up to a constant.

2 Additional Related Work

Precision matrix estimation in sparse Gaussian graphical models (GGM) is commonly formulated as a penalized maximum likelihood estimation problem with ℓ_{1,1} norm regularization [12, 29, 28] (graphical Lasso) or regularization on the diagonal elements of the Cholesky decomposition of the precision matrix [17]. Due to the complex dependencies among marginal variables in many applications, the sparsity assumption on the precision matrix often does not hold. To relax this assumption, the conditional Gaussian graphical model (cGGM) was proposed in [41, 5] and the partial Gaussian graphical model (pGGM) was proposed in [42], both of which impose blockwise sparsity on the precision matrix and estimate multiple blocks therein. Despite the good interpretability of these models, they need access to both the observed variables and the latent variables for estimation. Another alternative is the latent variable Gaussian graphical model (LVGGM), which was proposed in [9] and later investigated in [22, 24]. Compared with cGGM and pGGM, the estimation of LVGGM does not need access to the latent variables and is therefore more flexible.

Another line of research related to ours is low-rank matrix estimation based on alternating minimization and gradient descent [18, 16, 44, 45, 11, 30, 3, 35, 43]. However, extending these methods to low-rank plus sparse matrix estimation as in LVGGM turns out to be highly nontrivial. The work most closely related to ours includes [14] and [40], which studied nonconvex optimization for low-rank plus sparse matrix estimation. However, those works are limited to robust PCA [8] and multi-task regression [1] in the noiseless setting. Due to the square loss in robust PCA, the sparse matrix S can be calculated by subtracting the low-rank matrix L from the observed data matrix. In LVGGM, by contrast, there is no closed-form solution for the sparse matrix due to the log-determinant term, and we need to use gradient descent to update S. On the other hand, both the algorithm in [40] and our algorithm have an initialization stage; yet our initialization algorithm is new and different from the initialization algorithm in [40] for robust PCA. Furthermore, our analysis of the initialization algorithm is built on the spikiness condition, which also differs from the analysis for robust PCA. The last but not least related work is the expectation maximization (EM) algorithm [2, 36], which shares a similar bivariate structure with our estimator. However, the proof technique used in [2, 36] is not directly applicable to our algorithm, due to the matrix factorization structure in our estimator. Moreover, to overcome the dependency issue between consecutive iterations in the proof, a sample splitting strategy [18, 16] was employed in [2, 36, 39], i.e., dividing the whole dataset into T pieces and using a fresh piece of data in each iteration. Unfortunately, the sample splitting technique results in a suboptimal statistical rate, incurring an extra factor of √T in the rate. In sharp contrast, our proof technique does not rely on sample splitting, because we are able to prove a uniform convergence result over a small neighborhood of the unknown parameters, which directly resolves the dependency issue.
3 The Proposed Estimator and Algorithm

In this section, we present a new estimator for LVGGM estimation, together with a new algorithm.

3.1 Latent Variable GGMs

Let X_O be the d-dimensional random vector of observed variables and X_L be the r-dimensional random vector of latent variables. We assume that the concatenated random vector X = (X_O^T, X_L^T)^T follows a multivariate Gaussian distribution with covariance matrix Σ̃ and sparse precision matrix Θ̃ = Σ̃^{-1}. It is proved in [10] that the observed data X_O follows a normal distribution with marginal covariance matrix Σ* = Σ̃_OO, which is the top-left block of Σ̃ corresponding to X_O. The precision matrix of X_O is then given by the Schur complement [13]:

Ω* = (Σ̃_OO)^{-1} = Θ̃_OO − Θ̃_OL Θ̃_LL^{-1} Θ̃_LO.   (3.1)

Since we can only observe X_O, the marginal precision matrix Ω* is generally not sparse. We define S* := Θ̃_OO and L* := −Θ̃_OL Θ̃_LL^{-1} Θ̃_LO. Then S* is sparse due to the sparsity of Θ̃. We do not impose any dependency restriction on X_O and X_L, and thus the matrices Θ̃_OL and Θ̃_LO can be potentially dense. We assume that the number of latent variables is smaller than the number of observed ones. Therefore, L* is low-rank and may be dense. Thus, the precision matrix of LVGGM can be written as

Ω* = S* + L*,   (3.2)

where ‖S*‖_{0,0} = s* and rank(L*) = r. We refer to [9] for a detailed discussion of LVGGM.

3.2 The Proposed Estimator

Suppose that we observe i.i.d. samples X_1, ..., X_n from N(0, Σ*). Our goal is to estimate the sparse component S* and the low-rank component L* of the unknown precision matrix Ω* in (3.2). The negative log-likelihood of the Gaussian graphical model is proportional, up to a constant, to the following sample loss function

p̄_n(S, L) = tr(Σ̂(S + L)) − log|S + L|,   (3.3)

where Σ̂ = (1/n) Σ_{i=1}^n X_i X_i^T is the sample covariance matrix, and |S + L| is the determinant of Ω = S + L. We employ the maximum likelihood principle to estimate S* and L*, which is equivalent to minimizing the negative log-likelihood in (3.3).

The low-rank structure of the precision matrix, i.e., of L, poses a great challenge for computation. A typical approach is to use a nuclear-norm regularized estimator, or a rank constrained estimator, to estimate L. However, such estimators involve a singular value decomposition at each iteration, which is computationally very expensive. To overcome this computational obstacle, we reparameterize L as the product of smaller matrices. More specifically, due to the symmetry of L, it can be reparameterized as L = ZZ^T, where Z ∈ R^{d×r}, and r > 0 is the number of latent variables and serves as a tuning parameter. This kind of reparameterization has recently been used in low-rank matrix estimation [18, 16, 44, 45, 11, 30] based on matrix factorization. We can then rewrite the sample loss function in (3.3) as the following objective function

q_n(S, Z) = tr(Σ̂(S + ZZ^T)) − log|S + ZZ^T|.   (3.4)

Based on (3.4), we propose a nonconvex estimator using sparsity constrained maximum likelihood:

min_{S,Z} q_n(S, Z)  subject to ‖S‖_{0,0} ≤ s,   (3.5)

where s > 0 is a tuning parameter that controls the sparsity of S.

3.3 The Proposed Algorithm

Due to the matrix factorization based reparameterization L = ZZ^T, the objective function in (3.5) is nonconvex. In addition, the sparsity constraint in (3.5) is nonconvex as well. Therefore, the estimation problem in (3.5) is essentially a nonconvex optimization problem.
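As a quick aside, the objective in (3.4) is cheap to evaluate numerically. The following sketch (the function name is ours) computes q_n(S, Z) given the sample covariance, assuming S + ZZ^T is positive definite so that the log-determinant is well defined:

import numpy as np

def q_n(S, Z, Sigma_hat):
    # Sample objective (3.4): tr(Sigma_hat (S + Z Z^T)) - log det(S + Z Z^T)
    Omega = S + Z @ Z.T
    sign, logdet = np.linalg.slogdet(Omega)
    assert sign > 0, "S + Z Z^T must be positive definite"
    return np.trace(Sigma_hat @ Omega) - logdet

Using slogdet rather than det avoids overflow for even moderate d; this is a numerical convenience on our part, not a requirement of the estimator.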
We propose to solve it by alternately performing gradient descent with respect to one parameter matrix while the other is held fixed. The detailed procedure is displayed in Algorithm 1, which consists of two stages.

In the initialization stage (Stage I), it outputs initial points Ŝ(0), Ẑ(0), which, as we will show later, are guaranteed to fall in small neighborhoods of S* and Z* respectively. Note that we need a matrix inversion in Line 3, whose complexity is O(d^3). Nevertheless, we only need to do this inversion once. In sharp contrast, convex relaxation approaches need a full SVD with O(d^3) complexity at each iteration, which is much more time consuming than our approach.

In the alternating gradient descent stage (Stage II), we iteratively estimate S while fixing Z, and then estimate Z while fixing S. Instead of solving each subproblem exactly, we perform one-step gradient descent for S and Z alternately, using step sizes η and η′. In Lines 6 and 8 of Algorithm 1, ∇_S q_n(S, Z) and ∇_Z q_n(S, Z) denote the partial gradients of q_n(S, Z) with respect to S and Z respectively. The choice of the step sizes will be made clear by our theory; in practice, one can also use line search to choose them. Due to the sparsity constraint ‖S‖_{0,0} ≤ s, we apply hard thresholding [4] right after the gradient descent step for S, in Line 7 of Algorithm 1. For a matrix S ∈ R^{d×d} and an integer s, the hard thresholding operator HT_s(S) preserves the s entries of largest magnitude in S and sets the remaining entries to zero. Algorithm 1 involves neither a singular value decomposition in each iteration nor the exact solution of an optimization problem, which makes it much faster than the convex relaxation based algorithms [9, 24]. The computational overhead of Algorithm 1 comes mainly from the calculation of the partial gradient with respect to Z, whose time complexity is O(rd^2). Therefore, our algorithm has a per-iteration complexity of O(rd^2).

4 Main Theory

We present our main theory in this section, which characterizes the convergence rate of Algorithm 1 and the statistical rate of its output. We begin with some definitions and assumptions that are necessary for our theoretical analysis.

Assumption 4.1. There is a constant ν > 0 such that 0 < 1/ν ≤ λ_min(Σ*) ≤ λ_max(Σ*) ≤ ν < ∞, where λ_min(Σ*) and λ_max(Σ*) are the minimal and maximal eigenvalues of Σ* respectively.

Assumption 4.1 requires the eigenvalues of the true covariance matrix Σ* to be finite and bounded below by a positive number, which is a standard assumption for Gaussian graphical models [29, 21, 28]. The relation Ω* = (Σ*)^{-1} between the covariance and precision matrices immediately yields 1/ν ≤ λ_min(Ω*) ≤ λ_max(Ω*) ≤ ν.

It is well understood that the estimation problem of the decomposition Ω* = S* + L* can be ill-posed: an identifiability issue arises when the low-rank matrix L* is also sparse [10, 7]. The concept of an incoherence condition, originally proposed for matrix completion [7], has been adopted in [9, 10]; it ensures that the low-rank matrix is not too sparse by restricting the degree of coherence between its singular vectors and the standard basis.

Algorithm 1 Alternating Thresholded Gradient Descent (AltGD) for LVGGM
1: Input: i.i.d. samples X_1, ..., X_n from LVGGM, max number of iterations T, and parameters η, η′, r, s.
Stage I: Initialization
2: Σ̂ = (1/n) Σ_{i=1}^n X_i X_i^T.
3: Ŝ(0) = HT_s(Σ̂^{-1}), which preserves the s largest magnitudes of Σ̂^{-1}.
4: Compute the SVD: Σ̂^{-1} − Ŝ(0) = UDU^T, where D is a diagonal matrix.
Let Ẑ(0) = U D_r^{1/2}, where D_r consists of the first r columns of D.
Stage II: Alternating Gradient Descent
5: for t = 0, ..., T − 1 do
6:   Ŝ(t+0.5) = Ŝ(t) − η ∇_S q_n(Ŝ(t), Ẑ(t));
7:   Ŝ(t+1) = HT_s(Ŝ(t+0.5)), which preserves the s largest magnitudes of Ŝ(t+0.5);
8:   Ẑ(t+1) = Ẑ(t) − η′ ∇_Z q_n(Ŝ(t), Ẑ(t));
9: end for
10: Output: Ŝ(T), Ẑ(T).

Later work such as [1, 25] relaxed this condition to a constraint on the spikiness ratio, and showed that the spikiness condition is milder than the incoherence condition. In our theory, we use the notion of spikiness as follows.

Assumption 4.2 (Spikiness Condition [25]). For a matrix L ∈ R^{d×d}, the spikiness ratio is defined as α_sp(L) := d‖L‖_{∞,∞}/‖L‖_F. For the low-rank matrix L* in (3.2), we assume that there exists a constant α* > 0 such that

‖L*‖_{∞,∞} = α_sp(L*) ‖L*‖_F / d ≤ α* ‖L*‖_F / d.   (4.1)

Since rank(L*) = r, we define σ_max = σ_1(L*) > 0 and σ_min = σ_r(L*) > 0 to be the maximal and minimal nonzero singular values of L* respectively.

We observe that the decomposition of the low-rank matrix L* in Section 3.2 is not unique, since L* = (Z*U)(Z*U)^T for any r × r orthogonal matrix U. Thus, we define the following solution set for Z:

U = {Z̃ ∈ R^{d×r} | Z̃ = Z*U for some U ∈ R^{r×r} with UU^T = U^T U = I_r}.   (4.2)

Note that σ_1(Z̃) = √σ_max and σ_r(Z̃) = √σ_min for any Z̃ ∈ U.

To measure the closeness between our estimator for Z and the unknown parameter Z*, we use the following distance d(·,·), which is invariant to rotation. A similar definition has been used in [45, 30, 40].

Definition 4.3. Define the distance between Z and Z* as d(Z, Z*) = min_{Z̃∈U} ‖Z − Z̃‖_F, where U is the solution set defined in (4.2).

At the core of our proof technique is a first-order stability condition on the population loss function. In detail, the population loss function is defined as the expectation of the sample loss function in (3.3):

p̄(S, L) = tr(Σ*(S + L)) − log|S + L|.   (4.3)

For ease of presentation, we define two balls around S* and Z* respectively:

B_F(S*, R) = {S ∈ R^{d×d} : ‖S − S*‖_F ≤ R},  B_d(Z*, R) = {Z ∈ R^{d×r} : d(Z, Z*) ≤ R}.

The first-order stability condition is then stated as follows.

Condition 4.4 (First-order Stability). Suppose S ∈ B_F(S*, R) and Z ∈ B_d(Z*, R) for some R > 0; by definition we have L = ZZ^T and L* = Z*Z*^T. The gradient of the population loss function with respect to S satisfies

‖∇_S p̄(S, L) − ∇_S p̄(S, L*)‖_F ≤ γ_1 ‖L − L*‖_F,

and the gradient of the population loss function with respect to L satisfies

‖∇_L p̄(S, L) − ∇_L p̄(S*, L)‖_F ≤ γ_2 ‖S − S*‖_F,

where γ_1, γ_2 > 0 are constants.

Condition 4.4 requires the population loss function to have a variant of Lipschitz continuity of the gradient. Note that the gradient is taken with respect to one variable (S or L), while the Lipschitz continuity is with respect to the other variable. Also, the Lipschitz property is required only between the true parameters S*, L* and arbitrary elements S ∈ B_F(S*, R) and L = ZZ^T with Z ∈ B_d(Z*, R). It should be noted that Condition 4.4, as verified in the appendix, is inspired by a similar condition originally introduced in [2]. We extend it to the loss function of LVGGM with both sparse and low-rank structures, where it plays an important role in the analysis.
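Before stating the main theorem, the pieces above can be tied together in a short NumPy sketch of Algorithm 1 and of the rotation-invariant distance in Definition 4.3. This is a minimal illustration under our own choices: the gradients ∇_S q_n = Σ̂ − (S + ZZ^T)^{-1} and ∇_Z q_n = 2(Σ̂ − (S + ZZ^T)^{-1})Z follow from (3.4), but the sketch recomputes a dense inverse in every iteration for clarity, whereas the paper's O(rd^2) per-iteration count refers to the dominant cost of the Z-gradient; the eigenvalue clipping in the initialization is also our own safeguard.

import numpy as np

def hard_threshold(S, s):
    # HT_s: keep the s largest-magnitude entries of S, zero out the rest
    flat = np.abs(S).ravel()
    if s >= flat.size:
        return S.copy()
    cutoff = np.partition(flat, flat.size - s)[flat.size - s]
    return np.where(np.abs(S) >= cutoff, S, 0.0)

def altgd(X, r, s, eta, eta_p, T):
    # Sketch of Algorithm 1 (AltGD): alternating thresholded gradient descent
    n, d = X.shape
    Sigma_hat = X.T @ X / n                      # line 2: sample covariance
    Omega0 = np.linalg.inv(Sigma_hat)            # line 3: one-time O(d^3) inversion
    S = hard_threshold(Omega0, s)
    # line 4: rank-r factor of the residual (eigendecomposition of a symmetric matrix)
    w, V = np.linalg.eigh(Omega0 - S)
    top = np.argsort(w)[::-1][:r]
    Z = V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))
    for _ in range(T):
        G = Sigma_hat - np.linalg.inv(S + Z @ Z.T)   # shared gradient term
        S_new = hard_threshold(S - eta * G, s)       # lines 6-7
        Z_new = Z - eta_p * 2.0 * G @ Z              # line 8
        S, Z = S_new, Z_new
    return S, Z

def dist_Z(Z, Z_star):
    # Definition 4.3 via orthogonal Procrustes: min over rotations U of ||Z - Z_star U||_F
    P, _, Qt = np.linalg.svd(Z_star.T @ Z)
    return np.linalg.norm(Z - Z_star @ (P @ Qt))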
The following theorem characterizes the theoretical properties of Algorithm 1.

Theorem 4.5. Suppose Assumptions 4.1 and 4.2 hold. Assume that the sample size satisfies n ≥ 484‖Σ*‖²_{∞,∞} r s* log d/(25R²σ²_min) and that the sparsity of the unknown sparse matrix satisfies s* ≤ 25d²R²σ²_min/(121rα*²), where R = min{√σ_max/4, 1/(2ν), σ_min/(6.5√σ_max)}. Then, with probability at least 1 − C/d, the initial points Ŝ(0), Ẑ(0) obtained by the initialization stage of Algorithm 1 satisfy

‖Ŝ(0) − S*‖_F ≤ R  and  d(Ẑ(0), Z*) ≤ R,   (4.4)

where C > 0 is an absolute constant. Furthermore, suppose Condition 4.4 holds. Let the step sizes satisfy η ≤ C_0/(σ_max ν²) and η′ ≤ C_0 σ_min/(σ_max ν⁴), and let the sparsity parameter satisfy s ≥ (4(1/(2√ρ) − 1)² + 1)s*, where C_0 > 0 is a constant that can be chosen arbitrarily small. Let ρ and τ be

ρ = max{1 − η σ_max/(2ν²), 1 − η′ σ_min/(4ν⁴)},  τ = max{48C_0² s* log d/(ν² n), 32C_0² σ_min r d/(ν⁶ n)}.

Then for any t ≥ 1, with probability at least 1 − C_1/d, the output of Algorithm 1 satisfies

max{‖Ŝ(t+1) − S*‖²_F, d²(Ẑ(t+1), Z*)} ≤ τ/(1 − ρ) + ρ^{t+1} R²,   (4.5)

where the first term on the right-hand side is the statistical error, the second term is the optimization error, and C_1 > 0 is an absolute constant.

In Theorem 4.5, ρ is the contraction parameter of the linear convergence rate, and it depends on the step sizes. Therefore, we can always choose sufficiently small step sizes, by choosing a small enough C_0, such that ρ is strictly between 0 and 1.

Remark 4.6. (4.4) suggests that, in order to ensure that the initial points returned by the initialization stage of Algorithm 1 fall in small neighborhoods of S* and Z*, we require n = O(s* log d), which essentially attains the optimal sample complexity for LVGGM estimation. In addition, we require s* ≲ d²/(rα*²), which means the unknown sparse matrix cannot be too dense.

Remark 4.7. (4.5) shows that the estimation error of the output of Algorithm 1 consists of two terms: the first term is the statistical error, and the second term is the optimization error. The statistical error comes from τ and scales as max{O_P(√(s* log d/n)), O_P(√(rd/n))}, where O_P(√(s* log d/n)) corresponds to the statistical error of S* and O_P(√(rd/n)) corresponds to the statistical error of L* (while the derived error bound in (4.5) is for Ẑ(t), it is of the same order as the error bound for L̂(t) by definition). This matches the minimax optimal rate of estimation errors in Frobenius norm for LVGGM estimation [9, 1, 24]. For the optimization error, note that σ_max and σ_min are fixed constants. For a sufficiently small constant C_0, we can always ensure ρ < 1, and this establishes the linear convergence rate of Algorithm 1. In fact, after T ≥ max{O(log(ν⁴n/(s* log d))), O(log(ν⁶n/(rd)))} iterations, the total estimation error of our algorithm achieves the same order as the statistical error.

Remark 4.8. Our statistical rate is sharp, because our theoretical analysis is conducted uniformly over a neighborhood of the true parameters S* and Z*, rather than relying on sample splitting. This is another big advantage of our approach over existing algorithms that are also built upon first-order stability [2, 36] but rely on the sample splitting technique.

5 Experiments

In this section, we present numerical results on both synthetic and real datasets to verify the theoretical properties of our algorithm and to compare it with the state-of-the-art methods.
However, the randomized SVD method still needs to compute a full SVD for nuclear norm regularization and in our experiments, we found that it is slower than the full SVD method implemented in [22]. Thus, we only report the results of the orignial convex relaxations in [9, 32, 22, 24]. The implementation of these two methods were downloaded from the authors? website. All numerical experiments were run in MATLAB R2015b on a laptop with Intel Core i5 2.7 GHz CPU and 8GB of RAM. 5.1 Synthetic Data In the synthetic experiment, we first validate the performance of our method on the latent variable GGM. Then we show that our method also performs well on a more general GGM where the precision matrix is the sum of an arbitrary sparse matrix S? and arbitrary low rank matrix L? . Specifically, we generated data according to the following two schemes: ? Scheme I: we generated data from the latent variable GGM defined in Section 3.1. In detail, the dimension of observed data is d and the number of latent variables is r. We randomly generated e 2 R(d+r)?(d+r) , with sparsity s? = 0.02d2 . According to a sparse positive definite matrix ? e 1:d;1:d and the low-rank component (3.1), the sparse component of the precision matrix is S? := ? ? 1e e e is L := ?1:d;(d+1):(d+r) [?(d+1):(d+r);(d+1):(d+r) ] ?(d+1):(d+r);1:d . Then we sampled data X1 , . . . , Xn from distribution N (0, (?? ) 1 ), where ?? = S? + L? is the true precision matrix. ? Scheme II: the dimension of observed data is d and the number of latent variables is r. S? is a symmetric positive definite matrix with entries randomly generated from [ 1, 1] with sparsity s? = 0.05d2 . L? = Z? Z?> , where Z? 2 Rd?r with entries randomly generated from [ 1, 1]. Then we sampled data X1 , . . . , Xn from multivariate normal distribution N (0, (?? ) 1 ) with ?? = S? + L? being the true precision matrix. Table 1: Scheme I: estimation errors of sparse and low-rank components S? and L? as well as the true precision matrix ?? in terms of Frobenius norm on different synthetic datasets. Data were generated from LVGGM and results were reported on 10 replicates in each setting. b (T ) kS S? k F b (T ) kL L? kF b (T ) k? ?? k F Time (s) 0.7350?0.0359 0.7563?0.0298 0.6236?0.0669 1.1610 1.1120 0.0250 0.0195?0.0046 0.0294?0.0041 0.0125?0.0000 0.9813?0.0192 1.0610?0.0134 0.8210?0.0143 35.7220 25.8010 0.4800 1.1620?0.0177 1.1867?0.0253 0.9016?0.0245 0.0224?0.0034 0.0356?0.0033 0.0167?0.0030 1.1639?0.0179 1.1869?0.0254 0.9021?0.0244 356.7360 156.5550 7.4740 1.4822?0.0302 1.5010?0.0240 1.3449?0.0073 0.0371?0.0052 0.0442?0.0068 0.0208?0.0014 1.4824?0.0120 1.5012?0.0240 1.3449?0.0084 33522.0200 21090.7900 445.6730 Setting Method d = 100, r = 2, n = 2000 PPA ADMM AltGD 0.7335?0.0352 0.7521?0.0288 0.6241?0.0668 0.0170?0.0125 0.0224?0.0115 0.0113?0.0014 d = 500, r = 5, n = 10000 PPA ADMM AltGD 0.9803?0.0192 1.0571?0.0135 0.8212?0.0143 d = 1000, r = 8, n = 2.5 ? 104 PPA ADMM AltGD d = 5000, r = 10, n = 2 ? 105 PPA ADMM AltGD In both schemes, we conducted experiments in different settings of d, n, s? and r. The step sizes of our method were set as ? = c1 /( max ? 2 ) and ? 0 = c1 min /( max ? 4 ) according to Theorem 4.5, where c1 = 0.25. The thresholding parameter s is set as c2 s? , where c2 > 1 was selected by 4-fold cross-validation. The regularization parameters for `1,1 -norm and nuclear norm in PPA and ADMM and the tuning parameter r in our algorithm were selected by 4-fold cross-validation. 
Under both schemes, we repeatedly generated 10 datasets for each setting of d, n, s* and r, and calculated the mean and standard error of the estimation errors. We summarize the results of Scheme I over the 10 replications in Table 1. Note that our algorithm AltGD outputs a slightly better estimator in terms of estimation errors compared with PPA and ADMM. It should also be noted that the methods do not differ much, because their statistical rates are of the same order. To demonstrate the efficiency of our algorithm, we also report the mean CPU time in the last column of Table 1. We observe significant speed-ups from our algorithm, which is almost 50 times faster than the convex ones. In particular, when the dimension d scales up to several thousand, the SVD computation in PPA and ADMM takes an enormous amount of time, and their computational cost therefore increases dramatically.

We report the averaged results of Scheme II over 10 repetitions in Table 2. Again, our method AltGD achieves comparable or slightly better estimators in terms of estimation errors in Frobenius norm compared against PPA and ADMM. Our method AltGD is nearly 50 times faster than the other two methods based on convex algorithms.

Table 2: Scheme II: estimation errors of the sparse and low-rank components S* and L*, as well as of the true precision matrix Ω*, in Frobenius norm on different synthetic datasets. Data were generated from a multivariate distribution whose precision matrix is the sum of an arbitrary sparse matrix and an arbitrary low-rank matrix. Results were reported on 10 replicates in each setting.

Setting | Method | ‖Ŝ(T) − S*‖_F | ‖L̂(T) − L*‖_F | ‖Ω̂(T) − Ω*‖_F | Time (s)
d=100, r=2, n=2000 | PPA | 0.5710±0.0319 | 0.6231±0.0261 | 0.8912±0.0356 | 1.6710
  | ADMM | 0.6198±0.0361 | 0.5286±0.0308 | 0.8588±0.0375 | 1.2790
  | AltGD | 0.5639±0.0905 | 0.4824±0.0323 | 0.7483±0.0742 | 0.0460
d=500, r=5, n=10000 | PPA | 0.8140±0.0157 | 0.7802±0.0104 | 1.1363±0.0131 | 43.8000
  | ADMM | 0.8140±0.0157 | 0.7803±0.0104 | 1.1363±0.0131 | 25.8980
  | AltGD | 0.6139±0.0198 | 0.7594±0.0111 | 0.9718±0.0146 | 0.8690
d=1000, r=8, n=2.5×10⁴ | PPA | 0.9235±0.0193 | 1.1985±0.0084 | 1.4913±0.0162 | 487.4900
  | ADMM | 0.9209±0.0212 | 1.2131±0.0084 | 1.4975±0.0171 | 163.9350
  | AltGD | 0.7249±0.0158 | 0.9651±0.0093 | 1.2029±0.0141 | 7.1360
d=5000, r=10, n=2×10⁵ | PPA | 1.1883±0.0091 | 1.0970±0.0022 | 1.3841±0.0083 | 44098.6710
  | ADMM | 1.2846±0.0089 | 1.1568±0.0023 | 1.5324±0.0085 | 20393.3650
  | AltGD | 1.0681±0.0034 | 1.0685±0.0023 | 1.2068±0.0032 | 287.8630

In addition, we illustrate the convergence rate of our algorithm in Figures 1(a) and 1(b), where the x-axis is the iteration number and the y-axis is the estimation error in Frobenius norm. Our algorithm converges in dozens of iterations, which confirms our theoretical guarantee of a linear convergence rate. We also plot the overall estimation errors against the scaled statistical errors of Ŝ(T) and L̂(T) under different configurations of d, n, s* and r in Figures 1(c) and 1(d). According to Theorem 4.5, ‖Ŝ(t) − S*‖_F and ‖L̂(t) − L*‖_F converge to the statistical errors as the number of iterations t grows, and these statistical errors are of order O(√(s* log d/n)) and O(√(rd/n)) respectively. We can see that the estimation errors grow linearly with the theoretical rate, which validates our theoretical guarantee of the minimax optimal statistical rate.
[Figure 1 (plots omitted): (a) estimation error for S*; (b) estimation error for L*; (c) r fixed, varying n, d and s*; (d) s* fixed, varying n, d and r. Panels (a)-(b): evolution of the estimation errors as the iteration number t grows, with the sparsity parameter s* set to 0.02·d² and varying d, n and r. Panels (c)-(d): estimation errors ‖Ŝ(T) − S*‖_F and ‖L̂(T) − L*‖_F versus the scaled statistical errors √(s* log d/n) and √(rd/n).]

5.2 Genomic Data

In this subsection, we apply our method to TCGA breast cancer gene expression data to infer a regulatory network. We downloaded the gene expression data from cBioPortal (http://www.cbioportal.org/). Here we focused on 299 breast cancer related transcription factors (TFs) and estimated the regulatory relationships for each pair of TFs over two breast cancer subtypes: luminal and basal. We compared our method AltGD
Specifically, the gray edges form the benchmark network, the red edges are those identified correctly and the green edges are those incorrectly inferred by each method. We can observe from Figure 2 that the methods based on LVGGMs are able to recover more edges accurately than graphical Lasso because of the intervention of latent variables. We remark that all the methods were not able to completely recover the entire regulatory network partly because we only used the gene expression data for TFs (instead of all genes) and the regulatory potential scores from the Cistome Cancer Database also used TF binding information. Due to space limit, we postpone additional experimental results to the appendix. 6 Conclusions In this paper, to speed up the learning of LVGGM, we proposed a sparsity constrained maximum likelihood estimator based on matrix factorization. We developed an efficient alternating gradient descent algorithm, and proved that the proposed algorithm is guaranteed to converge to the unknown sparse and low-rank matrices with a linear convergence rate up to the optimal statical error. Experiments on both synthetic and real world genomic data supported our theory. Acknowledgements We would like to thank the anonymous reviewers for their helpful comments. This research was sponsored in part by the National Science Foundation IIS-1652539, IIS-1717205 and IIS-1717206. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies. 3 4 http://www.cs.utexas.edu/~sustik/QUIC/ http://cistrome.org/CistromeCancer/ 9 References [1] Alekh Agarwal, Sahand Negahban, and Martin J Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. The Annals of Statistics, pages 1171?1197, 2012. [2] Sivaraman Balakrishnan, Martin J Wainwright, and Bin Yu. Statistical guarantees for the em algorithm: From population to sample-based analysis. arXiv preprint arXiv:1408.2156, 2014. [3] Srinadh Bhojanapalli, Anastasios Kyrillidis, and Sujay Sanghavi. Dropping convexity for faster semi-definite optimization. arXiv preprint, 2015. [4] Thomas Blumensath and Mike E Davies. Iterative hard thresholding for compressed sensing. Applied and computational harmonic analysis, 27(3):265?274, 2009. [5] T Tony Cai, Hongzhe Li, Weidong Liu, and Jichun Xie. Covariate-adjusted precision matrix estimation with an application in genetical genomics. Biometrika, page ass058, 2012. [6] Tony Cai, Weidong Liu, and Xi Luo. A constrained 1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106(494):594?607, 2011. [7] Emmanuel Candes and Benjamin Recht. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111?119, 2012. [8] Emmanuel J Cand?s, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011. [9] Venkat Chandrasekaran, Pablo A Parrilo, and Alan S Willsky. Latent variable graphical model selection via convex optimization. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 1610?1613. IEEE, 2010. [10] Venkat Chandrasekaran, Sujay Sanghavi, Pablo A Parrilo, and Alan S Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572?596, 2011. [11] Yudong Chen and Martin J Wainwright. Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees. 
arXiv preprint arXiv:1509.03025, 2015.

[12] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.

[13] Gene H. Golub and Charles F. Van Loan. Matrix Computations, volume 3. JHU Press, 2012.

[14] Quanquan Gu, Zhaoran Wang, and Han Liu. Low-rank and sparse structure pursuit via alternating minimization. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 600-609, 2016.

[15] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288, 2011.

[16] Moritz Hardt. Understanding alternating minimization for matrix completion. In FOCS, pages 651-660. IEEE, 2014.

[17] Jianhua Z. Huang, Naiping Liu, Mohsen Pourahmadi, and Linxu Liu. Covariance matrix selection and estimation via penalised normal likelihood. Biometrika, 93(1):85-98, 2006.

[18] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In STOC, pages 665-674, 2013.

[19] Steffen L. Lauritzen. Graphical Models, volume 17. Clarendon Press, 1996.

[20] Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, and Jarvis Haupt. Stochastic variance reduced optimization for nonconvex sparse learning, 2016.

[21] Han Liu, John Lafferty, and Larry Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10(Oct):2295-2328, 2009.

[22] Shiqian Ma, Lingzhou Xue, and Hui Zou. Alternating direction methods for latent variable Gaussian graphical model selection. Neural Computation, 25(8):2172-2198, 2013.

[23] Nicolai Meinshausen and Peter Bühlmann. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, pages 1436-1462, 2006.

[24] Zhaoshi Meng, Brian Eriksson, and Alfred O. Hero III. Learning latent variable Gaussian graphical models. arXiv preprint arXiv:1406.2721, 2014.

[25] Sahand Negahban and Martin J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 13(May):1665-1697, 2012.

[26] Sahand Negahban, Bin Yu, Martin J. Wainwright, and Pradeep K. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems, pages 1348-1356, 2009.

[27] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer Science & Business Media, 2013.

[28] Pradeep Ravikumar, Martin J. Wainwright, Garvesh Raskutti, Bin Yu, et al. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935-980, 2011.

[29] Adam J. Rothman, Peter J. Bickel, Elizaveta Levina, Ji Zhu, et al. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494-515, 2008.

[30] Stephen Tu, Ross Boczar, Mahdi Soltanolkotabi, and Benjamin Recht. Low-rank solutions of linear matrix equations via Procrustes flow. arXiv preprint arXiv:1507.03566, 2015.

[31] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.

[32] Chengjing Wang, Defeng Sun, and Kim-Chuan Toh. Solving log-determinant optimization problems by a Newton-CG primal proximal point algorithm. SIAM Journal on Optimization, 20(6):2994-3013, 2010.
[33] Lingxiao Wang and Quanquan Gu. Robust Gaussian graphical model estimation with arbitrary corruption. In International Conference on Machine Learning, pages 3617-3626, 2017.

[34] Lingxiao Wang, Xiang Ren, and Quanquan Gu. Precision matrix estimation in high dimensional Gaussian graphical models with faster rates. In Artificial Intelligence and Statistics, pages 177-185, 2016.

[35] Lingxiao Wang, Xiao Zhang, and Quanquan Gu. A unified computational and statistical framework for nonconvex low-rank matrix estimation. arXiv preprint arXiv:1610.05275, 2016.

[36] Zhaoran Wang, Quanquan Gu, Yang Ning, and Han Liu. High dimensional expectation-maximization algorithm: Statistical optimization and asymptotic normality. arXiv preprint arXiv:1412.8729, 2014.

[37] Pan Xu and Quanquan Gu. Semiparametric differential graph models. In Advances in Neural Information Processing Systems, pages 1064-1072, 2016.

[38] Pan Xu, Lu Tian, and Quanquan Gu. Communication-efficient distributed estimation and inference for transelliptical graphical models. arXiv preprint arXiv:1612.09297, 2016.

[39] Pan Xu, Tingting Zhang, and Quanquan Gu. Efficient algorithm for sparse tensor-variate Gaussian graphical models via gradient descent. In Artificial Intelligence and Statistics, pages 923-932, 2017.

[40] Xinyang Yi, Dohyung Park, Yudong Chen, and Constantine Caramanis. Fast algorithms for robust PCA via gradient descent. arXiv preprint arXiv:1605.07784, 2016.

[41] Jianxin Yin and Hongzhe Li. A sparse conditional Gaussian graphical model for analysis of genetical genomics data. The Annals of Applied Statistics, 5(4):2630, 2011.

[42] Xiao-Tong Yuan and Tong Zhang. Partial Gaussian graphical model estimation. IEEE Transactions on Information Theory, 60(3):1673-1687, 2014.

[43] Xiao Zhang, Lingxiao Wang, and Quanquan Gu. A nonconvex free lunch for low-rank plus sparse matrix recovery. arXiv preprint arXiv:1702.06525, 2017.

[44] Tuo Zhao, Zhaoran Wang, and Han Liu. A nonconvex optimization framework for low rank matrix estimation. In Advances in Neural Information Processing Systems, pages 559-567, 2015.

[45] Qinqing Zheng and John Lafferty. A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements. In Advances in Neural Information Processing Systems, pages 109-117, 2015.