Decomposable Submodular Function Minimization: Discrete and Continuous

Alina Ene*   Huy L. Nguyen†   László A. Végh‡

Abstract

This paper investigates connections between discrete and continuous approaches for decomposable submodular function minimization. We provide improved running time estimates for the state-of-the-art continuous algorithms for the problem using combinatorial arguments. We also provide a systematic experimental comparison of the two types of methods, based on a clear distinction between level-0 and level-1 algorithms.

1 Introduction

Submodular functions arise in a wide range of applications: graph theory, optimization, economics, game theory, to name a few. A function $f : 2^V \to \mathbb{R}$ on a ground set $V$ is submodular if $f(X) + f(Y) \ge f(X \cap Y) + f(X \cup Y)$ for all sets $X, Y \subseteq V$. Submodularity can also be interpreted as a diminishing returns property.

There has been significant interest in submodular optimization in the machine learning and computer vision communities. The submodular function minimization (SFM) problem arises in image segmentation and in MAP inference tasks in Markov Random Fields. Landmark results in combinatorial optimization give polynomial-time exact algorithms for SFM. However, the high-degree polynomial dependence in the running time is prohibitive for large-scale problem instances. The main objective in this context is to develop fast and scalable SFM algorithms. Instead of minimizing arbitrary submodular functions, several recent papers aim to exploit special structural properties of submodular functions arising in practical applications. This paper focuses on the popular model of decomposable submodular functions. These are functions that can be written as sums of several "simple" submodular functions defined on small supports.

Some definitions are needed to introduce our problem setting. Let $f : 2^V \to \mathbb{R}$ be a submodular function, and let $n := |V|$. We can assume w.l.o.g. that $f(\emptyset) = 0$. We are interested in solving the submodular function minimization problem:

    $\min_{S \subseteq V} f(S)$.    (SFM)

For a vector $y \in \mathbb{R}^V$ and a set $S \subseteq V$, we use the notation $y(S) := \sum_{v \in S} y(v)$. The base polytope of a submodular function is defined as

    $B(f) := \{ y \in \mathbb{R}^V : y(S) \le f(S) \ \forall S \subseteq V,\ y(V) = f(V) \}$.

One can optimize linear functions over $B(f)$ using the greedy algorithm. The SFM problem can be reduced to finding the minimum norm point of the base polytope $B(f)$ [10]:

    $\min \left\{ \tfrac{1}{2} \|y\|_2^2 : y \in B(f) \right\}$.    (Min-Norm)

* Department of Computer Science, Boston University, [email protected]
† College of Computer and Information Science, Northeastern University, [email protected]
‡ Department of Mathematics, London School of Economics, [email protected]

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

This reduction is the starting point of convex optimization approaches for SFM. We refer the reader to Sections 44-45 in [28] for concepts and results in submodular optimization, and to [2] for machine learning applications.
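To make the greedy algorithm mentioned above concrete, the following is a minimal sketch (our illustration, not the paper's code) of Edmonds' greedy algorithm for maximizing a linear function $\langle w, y \rangle$ over $B(f)$; the set function `cut` used in the example is an assumption chosen for illustration.

```python
def greedy_base_polytope_vertex(f, V, w):
    """Return the vertex y of B(f) maximizing <w, y>.

    Sort the ground set by decreasing weight and assign marginal gains:
    y(v_k) = f({v_1,...,v_k}) - f({v_1,...,v_{k-1}}).
    Assumes f is a submodular set-function oracle with f(emptyset) = 0.
    """
    order = sorted(V, key=lambda v: -w[v])
    y, prefix = {}, []
    prev = f(frozenset())  # = 0 by normalization
    for v in order:
        prefix.append(v)
        cur = f(frozenset(prefix))
        y[v] = cur - prev   # marginal gain of v in the greedy order
        prev = cur
    return y

# Example: the cut function of a single edge {a, b} with unit capacity.
def cut(S):
    return 1 if len({"a", "b"} & S) == 1 else 0

print(greedy_base_polytope_vertex(cut, ["a", "b"], {"a": 2.0, "b": -1.0}))
# {'a': 1, 'b': -1}: a vertex of B(cut)
```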
We assume that $f$ is given in the decomposition $f(S) = \sum_{i=1}^{r} f_i(S)$, where each $f_i : 2^V \to \mathbb{R}$ is a submodular function. Such functions are called decomposable or Sum-of-Submodular (SoS) in the literature. In the decomposable submodular function minimization (DSFM) problem, we aim to minimize a function given in such a decomposition.

We will make the following assumptions. For each $i \in [r]$, we assume that two oracles are provided: (i) a value oracle that returns $f_i(S)$ for any set $S \subseteq V$ in time $\mathrm{EO}_i$; and (ii) a quadratic minimization oracle $\mathcal{O}_i(w)$. For any input vector $w \in \mathbb{R}^n$, this oracle returns an optimal solution to (Min-Norm) for the function $f_i + w$, or equivalently, an optimal solution to $\min_{y \in B(f_i)} \|y + w\|_2^2$.⁴ We let $\Theta_i$ denote the running time of a single call to the oracle $\mathcal{O}_i$, $\Theta_{\max} := \max_{i \in [r]} \Theta_i$ denote the maximum time of an oracle call, and $\Theta_{\mathrm{avg}} := \frac{1}{r} \sum_{i \in [r]} \Theta_i$ denote the average time of an oracle call. We let $F_{i,\max} := \max_{S \subseteq V} |f_i(S)|$ and $F_{\max} := \max_{S \subseteq V} |f(S)|$ denote the maximum function values. For each $i \in [r]$, the function $f_i$ has an effective support $C_i$ such that $f_i(S) = f_i(S \cap C_i)$ for every $S \subseteq V$.

DSFM thus requires algorithms on two levels. The level-0 algorithms are the subroutines used to evaluate the oracles $\mathcal{O}_i$ for every $i \in [r]$. The level-1 algorithm minimizes the function $f$ using the level-0 algorithms as black boxes.

1.1 Prior work

SFM has had a long history in combinatorial optimization since the early 1970s, following the influential work of Edmonds [4]. The first polynomial-time algorithm was obtained via the ellipsoid method [14]; recent work presented substantial improvements using this approach [22]. Substantial work focused on designing strongly polynomial combinatorial algorithms [9, 15, 16, 25, 17, 27]. Still, designing practical algorithms for SFM that can be applied to large-scale problem instances remains an open problem.

Let us now turn to DSFM. Previous work mainly focused on level-1 algorithms. These can be classified as discrete and continuous optimization methods. The discrete approach builds on techniques of classical discrete algorithms for network flows and for submodular flows. Kolmogorov [21] showed that the problem can be reduced to submodular flow maximization, and also presented a more efficient augmenting path algorithm. Subsequent discrete approaches were given in [1, 7, 8]. Continuous approaches start with the convex programming formulation (Min-Norm). Gradient methods were applied to the decomposable setting in [5, 24, 30].

Less attention has been given to the level-0 algorithms. Some papers focus mainly on theoretical guarantees on the running time of level-1 algorithms, and treat the level-0 subroutines as black boxes (e.g. [5, 24, 21]). In other papers (e.g. [18, 30]), the model is restricted to functions $f_i$ of a simple specific type that are easy to minimize. An alternative assumption is that all $C_i$'s are small, of size at most $k$, so that these oracles can be evaluated by exhaustive search using $2^k$ value oracle calls (e.g. [1, 7]). Shanu et al. [29] use a block coordinate descent method for level-1 and make no assumptions on the functions $f_i$. The oracles are evaluated via the Fujishige-Wolfe minimum norm point algorithm [11, 31] for level-0.

Let us note that these experimental studies considered the level-0 and level-1 algorithms as a single "package". For example, Shanu et al. [29] compare the performance of their SoS Min-Norm algorithm to the continuous approach of Jegelka et al. [18] and the combinatorial approach of Arora et al. [1]. However, these implementations cannot be directly compared, since they use three different level-0 algorithms: Fujishige-Wolfe in SoS Min-Norm, a general QP solver for the algorithm of [18], and exhaustive search for [1]. For potentials of large support, Fujishige-Wolfe outperforms these other level-0 subroutines, hence the level-1 algorithms in [18, 1] could have compared more favorably using the same Fujishige-Wolfe subroutine.
⁴ For flow-type algorithms for DSFM, a slightly weaker oracle assumption suffices, namely an oracle returning a minimizer of $\min_{S \subseteq C_i} f_i(S) + w(S)$ for any given $w \in \mathbb{R}^{C_i}$. This oracle and the quadratic minimization oracle are reducible to each other: the former reduces to a single call to the latter, and one can implement the latter using $O(|C_i|)$ calls to the former (see e.g. [2]).

1.2 Our contributions

Our paper establishes connections between discrete and continuous methods for DSFM, and provides a systematic experimental comparison of these approaches. Our main theoretical contribution improves the worst-case complexity bound of the most recent continuous optimization methods [5, 24] by a factor of $r$, the number of functions in the decomposition. This is achieved by improving the bounds on the relevant condition numbers. Our proof exploits ideas from the discrete optimization approach. This provides not only better, but also considerably simpler arguments than the algebraic proof in [24].

The guiding principle of our experimental work is the clean conceptual distinction between the level-0 and level-1 algorithms, and the comparison of different level-1 algorithms using the same level-0 subroutines. We compare the state-of-the-art continuous and discrete algorithms: RCDM and ACDM from [5], and Submodular IBFS from [7]. We consider multiple options for the level-0 subroutines. For certain potential types, we use tailored subroutines exploiting the specific form of the problem. We also consider a variant of the Fujishige-Wolfe algorithm as a subroutine applicable to arbitrary potentials.

Our experimental results reveal the following tradeoff. Discrete algorithms on level-1 require more calls to the level-0 oracle, but less overhead computation. Hence using algorithms such as IBFS on level-1 can be significantly faster than gradient descent, as long as the potentials have fairly small supports. However, as the size of the potentials grows, or if we need to work with a generic level-0 algorithm, gradient methods are preferable. Gradient methods can also perform better for larger potentials due to weaker requirements on the level-0 subroutines: approximate level-0 subroutines suffice for them, whereas discrete algorithms require exact optimal solutions on level-0.

Paper outline. The rest of the paper is structured as follows. The level-1 algorithmic frameworks using discrete and convex optimization are described in Sections 2 and 3, respectively. Section 4 gives improved convergence guarantees for the gradient descent algorithms outlined in Section 3. Section 5 discusses the different types of level-0 algorithms and how they can be used together with the level-1 frameworks. Section 6 presents a brief overview of our experimental results. This is an extended abstract; the full paper is available at http://arxiv.org/abs/1703.01830.

2 Discrete optimization algorithms on level-1

In this section, we outline a level-1 algorithmic framework for DSFM that is based on a combinatorial framework first studied by Fujishige and Zhang [12] for submodular intersection. The submodular intersection problem is equivalent to DSFM for the sum of two functions, and the approach can be adapted and extended to the general DSFM problem with an arbitrary decomposition. We now give a brief description of the algorithmic framework. The full version exhibits submodular versions of the Edmonds-Karp and preflow-push algorithms.
Algorithmic framework. For a decomposable function $f$, every $x \in B(f)$ can be written as $x = \sum_{i=1}^{r} x_i$, where $\mathrm{supp}(x_i) \subseteq C_i$ and $x_i \in B(f_i)$ (see e.g. Theorem 44.6 in [28]). A natural algorithmic approach is to maintain an $x \in B(f)$ in such a representation, and iteratively update it using the combinatorial framework described below. DSFM can be cast as a maximum network flow problem in a network that is suitably defined based on the current point $x$. This can be viewed as an analogue of the residual graph in the max-flow/min-cut setting, and it is precisely the residual graph if the DSFM instance is a minimum cut instance.

The auxiliary graph. For an $x \in B(f)$ of the form $x = \sum_{i=1}^{r} x_i$, we construct the following directed auxiliary graph $G = (V, E)$, with $E = \bigcup_{i=1}^{r} E_i$ and capacities $c : E \to \mathbb{R}_+$. Here $E$ is a multiset union: we include parallel copies if the same arc occurs in multiple $E_i$. The arc sets $E_i$ are complete directed graphs (cliques) on $C_i$, and for an arc $(u, v) \in E_i$, we define

    $c(u, v) := \min \{ f_i(S) - x_i(S) : S \subseteq C_i,\ u \in S,\ v \notin S \}$.

This is the maximum value $\alpha$ such that $x_i' \in B(f_i)$, where $x_i'(u) = x_i(u) + \alpha$, $x_i'(v) = x_i(v) - \alpha$, and $x_i'(z) = x_i(z)$ for $z \notin \{u, v\}$.

Let $N := \{ v \in V : x(v) < 0 \}$ and $P := \{ v \in V : x(v) > 0 \}$. The algorithm aims to improve the current $x$ by updating along shortest directed paths from $N$ to $P$ with positive capacity; there are several ways to update the solution, and we discuss specific approaches (derived from maximum flow algorithms) in the full version. If no such directed path exists, then we let $S$ denote the set reachable from $N$ on directed paths with positive capacity; thus $S \cap P = \emptyset$. One can show that $S$ is a minimizer of the function $f$.

Updating along a shortest path $Q$ from $N$ to $P$ amounts to the following. Let $\alpha$ denote the minimum capacity of an arc on $Q$. If $(u, v) \in Q \cap E_i$, then we increase $x_i(u)$ by $\alpha$ and decrease $x_i(v)$ by $\alpha$. The crucial technical claim is the following. Let $d(u)$ denote the shortest path distance on positive capacity arcs from $u$ to the set $P$. Then an update along a shortest directed path from $N$ to $P$ results in a feasible $x \in B(f)$, and further, all distance labels $d(u)$ are non-decreasing. We refer the reader to Fujishige and Zhang [12] for a proof of this claim.

Level-1 algorithms based on the network flow approach. Using the auxiliary graph described above, and updating on shortest augmenting paths, one can generalize several maximum flow algorithms to a level-1 algorithm for DSFM. In particular, based on the preflow-push algorithm [13], one can obtain a strongly polynomial DSFM algorithm with running time $O(n^2 \Theta_{\max} \sum_{i=1}^{r} |C_i|^2)$. A scaling variant provides a weakly polynomial running time $O(n^2 \Theta_{\max} \log F_{\max} + n \sum_{i=1}^{r} |C_i|^3 \Theta_i)$. We defer the details to the full version of the paper. In our experiments, we use the submodular IBFS algorithm [7] as the main discrete level-1 algorithm; the same running time estimate as for preflow-push is applicable. If all $C_i$'s are small, of size $O(1)$, the running time is $O(n^2 r \Theta_{\max})$; note that $r = \Omega(n)$ in this case.
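As an illustration of the auxiliary graph capacity $c(u, v)$ above, the following minimal sketch (our illustration, not the paper's implementation) computes it by exhaustive search over subsets of $C_i$, which is viable only for small cliques; the concave-of-cardinality function in the example is an assumption chosen for illustration.

```python
from itertools import combinations

def arc_capacity(f_i, x_i, C_i, u, v):
    """c(u, v) = min{ f_i(S) - x_i(S) : S subset of C_i, u in S, v not in S }."""
    rest = [w for w in C_i if w not in (u, v)]
    best = float("inf")
    for k in range(len(rest) + 1):
        for comb in combinations(rest, k):
            S = frozenset(comb) | {u}          # u in S, v not in S by construction
            best = min(best, f_i(S) - sum(x_i[w] for w in S))
    return best

# Example on a 3-element clique with the submodular function
# f_i(S) = min(|S|, |C_i \ S|), a concave function of the cardinality.
C_i = ["a", "b", "c"]
f_i = lambda S: min(len(S), len(C_i) - len(S))
x_i = {"a": 0.0, "b": 0.0, "c": 0.0}          # the zero vector lies in B(f_i)
print(arc_capacity(f_i, x_i, C_i, "a", "b"))  # max feasible transfer from b to a: 1
```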
3 Convex optimization algorithms on level-1

3.1 Convex formulations for DSFM

Recall the convex quadratic program (Min-Norm) from the Introduction. This program has a unique optimal solution $s^*$, and the set $S = \{ v \in V : s^*(v) < 0 \}$ is the unique smallest minimizer of the SFM problem. We will refer to this optimal solution $s^*$ throughout the section.

In the DSFM setting, one can write (Min-Norm) in multiple equivalent forms [18]. For the first formulation, we let $\mathcal{P} := \prod_{i=1}^{r} B(f_i) \subseteq \mathbb{R}^{rn}$, and let $A \in \mathbb{R}^{n \times (rn)}$ denote the matrix

    $A := [\, I_n \ I_n \ \dots \ I_n \,]$  ($r$ times).

Note that, for every $y \in \mathcal{P}$, $Ay = \sum_{i=1}^{r} y_i$, where $y_i$ is the $i$-th block of $y$, and thus $Ay \in B(f)$. The problem (Min-Norm) can be reformulated for DSFM as follows:

    $\min \left\{ \tfrac{1}{2} \|Ay\|_2^2 : y \in \mathcal{P} \right\}$.    (Prox-DSFM)

The second formulation is the following. Let us define the subspace $\mathcal{A} := \{ a \in \mathbb{R}^{nr} : Aa = 0 \}$, and minimize its distance from $\mathcal{P}$:

    $\min \left\{ \|a - y\|_2^2 : a \in \mathcal{A},\ y \in \mathcal{P} \right\}$.    (Best-Approx)

The set of optimal solutions for both formulations (Prox-DSFM) and (Best-Approx) is the set $\mathcal{E} := \{ y \in \mathcal{P} : Ay = s^* \}$, where $s^*$ is the optimum of (Min-Norm). We note that, even though the solutions to (Best-Approx) are pairs of points $(a, y) \in \mathcal{A} \times \mathcal{P}$, the optimal solutions are uniquely determined by $y \in \mathcal{P}$, since the corresponding $a$ is the projection of $y$ onto $\mathcal{A}$.

3.2 Level-1 algorithms based on gradient descent

The gradient descent algorithms of [24, 5] provide level-1 algorithms for DSFM. We give a brief overview of these algorithms and refer the reader to the respective papers for more details.

The alternating projections algorithm. Nishihara et al. [24] minimize (Best-Approx) using alternating projections. The algorithm starts with a point $a^{(0)} \in \mathcal{A}$ and iteratively constructs a sequence $(a^{(k)}, x^{(k)})_{k \ge 0}$ by projecting onto $\mathcal{A}$ and $\mathcal{P}$:

    $x^{(k)} = \mathrm{argmin}_{x \in \mathcal{P}} \|a^{(k)} - x\|_2$,    $a^{(k+1)} = \mathrm{argmin}_{a \in \mathcal{A}} \|a - x^{(k)}\|_2$.

Random coordinate descent algorithms. Ene and Nguyen [5] minimize (Prox-DSFM) using random coordinate descent. The RCDM algorithm adapts the random coordinate descent algorithm of Nesterov [23] to (Prox-DSFM). In each iteration, the algorithm samples a block $i \in [r]$ uniformly at random and updates $x_i$ via a standard gradient descent step for smooth functions. ACDM, the accelerated version of the algorithm, presents a further enhancement using techniques from [6].
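The following is a minimal sketch (our illustration) of the alternating projections loop. The name `project_base` is a hypothetical stand-in for the level-0 oracle returning $\mathrm{argmin}_{y \in B(f_i)} \|y - z\|_2$, i.e., $\mathcal{O}_i$ with $w = -z$; the projection onto $\mathcal{A} = \{a : \sum_i a_i = 0\}$ subtracts the across-block mean, which follows from a short Lagrangian computation.

```python
import numpy as np

def alternating_projections(project_base, a0, iters):
    """a0 is an (r, n) array whose rows (blocks) sum to zero, so a0 lies in A."""
    a = a0.copy()
    x = a0.copy()
    for _ in range(iters):
        # project a onto P = B(f_1) x ... x B(f_r), block by block (level-0 calls)
        x = np.stack([project_base(i, a[i]) for i in range(a.shape[0])])
        # project x onto the subspace A = {a : sum_i a_i = 0}:
        # subtract the across-block mean from every block
        a = x - x.mean(axis=0, keepdims=True)
    s_estimate = x.sum(axis=0)   # Ax approximates the min-norm point s*
    return x, s_estimate
```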
3.3 Rates of convergence and condition numbers

The algorithms mentioned above enjoy a linear convergence rate despite the fact that the objective functions of (Best-Approx) and (Prox-DSFM) are not strongly convex. Instead, the works [24, 5] show that there are certain parameters that one can associate with the objective functions such that the convergence is at the rate $(1 - \delta)^k$, where $\delta \in (0, 1)$ is a quantity that depends on the appropriate parameter. Let us now define these parameters. Let $\mathcal{A}_0$ be the affine subspace $\mathcal{A}_0 := \{ a \in \mathbb{R}^{nr} : Aa = s^* \}$. Note that the set $\mathcal{E}$ of optimal solutions to (Prox-DSFM) and (Best-Approx) is $\mathcal{E} = \mathcal{P} \cap \mathcal{A}_0$. For $y \in \mathbb{R}^{nr}$ and a closed set $K \subseteq \mathbb{R}^{nr}$, we let $d(y, K) = \min \{ \|y - z\|_2 : z \in K \}$ denote the distance between $y$ and $K$.

The relevant parameter for the alternating projections algorithm is defined as follows.

Definition 3.1 ([24]). For every $y \in (\mathcal{P} \cup \mathcal{A}_0) \setminus \mathcal{E}$, let

    $\kappa(y) := \dfrac{d(y, \mathcal{E})}{\max \{ d(y, \mathcal{P}),\ d(y, \mathcal{A}_0) \}}$,  and  $\kappa_* := \sup \{ \kappa(y) : y \in (\mathcal{P} \cup \mathcal{A}_0) \setminus \mathcal{E} \}$.

The relevant parameter for the random coordinate descent algorithms is the following.

Definition 3.2 ([5]). For every $y \in \mathcal{P}$, let $y^* := \mathrm{argmin}_p \{ \|p - y\|_2 : p \in \mathcal{E} \}$ be the optimal solution to (Prox-DSFM) that is closest to $y$. We say that the objective function $\frac{1}{2}\|Ay\|_2^2$ of (Prox-DSFM) is restricted $\ell$-strongly convex if, for all $y \in \mathcal{P}$, we have $\|A(y - y^*)\|_2^2 \ge \ell \|y - y^*\|_2^2$. We define

    $\ell_* := \sup \left\{ \ell : \tfrac{1}{2}\|Ay\|_2^2 \text{ is restricted } \ell\text{-strongly convex} \right\}$.

The running time dependence of the algorithms on these parameters is given in the following theorems.

Theorem 3.3 ([24]). Let $(a^{(0)}, x^{(0)} = \mathrm{argmin}_{x \in \mathcal{P}} \|a^{(0)} - x\|_2)$ be the initial solution and let $(a^*, x^*)$ be an optimal solution to (Best-Approx). The alternating projections algorithm produces in

    $k = O\!\left( \kappa_*^2 \ln \dfrac{\|x^{(0)} - x^*\|_2}{\epsilon} \right)$

iterations a pair of points $a^{(k)} \in \mathcal{A}$ and $x^{(k)} \in \mathcal{P}$ that is $\epsilon$-optimal, i.e., $\|a^{(k)} - x^{(k)}\|_2^2 \le \|a^* - x^*\|_2^2 + \epsilon$.

Theorem 3.4 ([5]). Let $x^{(0)} \in \mathcal{P}$ be the initial solution and let $x^*$ be an optimal solution to (Prox-DSFM) that minimizes $\|x^{(0)} - x^*\|_2$. The random coordinate descent algorithm produces in

    $k = O\!\left( \dfrac{r}{\ell_*} \ln \dfrac{\|x^{(0)} - x^*\|_2}{\epsilon} \right)$

iterations a solution $x^{(k)}$ that is $\epsilon$-optimal in expectation, i.e., $\mathbb{E}\left[ \tfrac{1}{2} \|Ax^{(k)}\|_2^2 \right] \le \tfrac{1}{2} \|Ax^*\|_2^2 + \epsilon$. The accelerated coordinate descent algorithm produces in

    $k = O\!\left( r \sqrt{\tfrac{1}{\ell_*}} \ln \dfrac{\|x^{(0)} - x^*\|_2}{\epsilon} \right)$

iterations (specifically, $O\!\left( \ln \frac{\|x^{(0)} - x^*\|_2}{\epsilon} \right)$ epochs with $O\!\left( r \sqrt{1/\ell_*} \right)$ iterations in each epoch) a solution $x^{(k)}$ that is $\epsilon$-optimal in expectation, i.e., $\mathbb{E}\left[ \tfrac{1}{2} \|Ax^{(k)}\|_2^2 \right] \le \tfrac{1}{2} \|Ax^*\|_2^2 + \epsilon$.

3.4 Tight analysis for the condition numbers and running times

We provide a tight analysis for the condition numbers (the parameters $\kappa_*$ and $\ell_*$ defined above). This leads to improved upper bounds on the running times of the gradient descent algorithms.

Theorem 3.5. Let $\kappa_*$ and $\ell_*$ be the parameters defined in Definition 3.1 and Definition 3.2. We have $\kappa_* = \Theta(n \sqrt{r})$ and $\ell_* = \Theta(1/n^2)$.

Using our improved convergence guarantees, we obtain the following improved running time analyses.

Corollary 3.6. The total running time for obtaining an $\epsilon$-approximate solution⁵ is as follows.

- Alternating projections (AP): $O\!\left( n^2 r^2 \Theta_{\mathrm{avg}} \ln \frac{\|x^{(0)} - x^*\|_2}{\epsilon} \right)$.
- Random coordinate descent (RCDM): $O\!\left( n^2 r \Theta_{\mathrm{avg}} \ln \frac{\|x^{(0)} - x^*\|_2}{\epsilon} \right)$.
- Accelerated random coordinate descent (ACDM): $O\!\left( n r \Theta_{\mathrm{avg}} \ln \frac{\|x^{(0)} - x^*\|_2}{\epsilon} \right)$.

⁵ The algorithms considered here solve the optimization problem (Prox-DSFM). An $\epsilon$-approximate solution to an optimization problem $\min\{f(x) : x \in P\}$ is a solution $\bar{x} \in P$ satisfying $f(\bar{x}) \le f(x^*) + \epsilon$, where $x^* \in \mathrm{argmin}_{x \in P} f(x)$ is an optimal solution.

We can upper bound the diameter of the base polytope by $O(\sqrt{n} F_{\max})$ [19], and thus $\|x^{(0)} - x^*\|_2 = O(\sqrt{n} F_{\max})$. For integer-valued functions, a $\delta$-approximate solution can be converted to an exact optimum if $\delta = O(1/n)$ [2].

The upper bound on $\kappa_*$ and the lower bound on $\ell_*$ are shown in Theorem 4.2. The lower bound on $\kappa_*$ and the upper bound on $\ell_*$ in Theorem 3.5 follow from constructions in previous work, as explained next. Nishihara et al. showed that $\kappa_* \le nr$, and they give a family of minimum cut instances for which $\kappa_* = \Omega(n \sqrt{r})$. Namely, consider a graph with $n$ vertices and $m$ edges, and suppose for simplicity that the edges have integer capacities at most $C$. The cut function of the graph can be decomposed into functions corresponding to the individual edges, and thus $r = m$ and $\Theta_{\mathrm{avg}} = O(1)$. Already on simple cycle graphs, they show that the running time of AP is $\Omega(n^2 m^2 \ln(nC))$, which implies $\kappa_* = \Omega(n \sqrt{r})$. Using the same construction, it is easy to obtain the upper bound $\ell_* = O(1/n^2)$.

4 Tight convergence bounds for the convex optimization algorithms

In this section, we show that the combinatorial approach introduced in Section 2 can be applied to obtain better bounds on the parameters $\kappa_*$ and $\ell_*$ defined in Section 3. Besides giving a stronger bound, our proof is considerably simpler than the algebraic one using Cheeger's inequality in [24]. The key is the following lemma.

Lemma 4.1. Let $y \in \mathcal{P}$ and $s^* \in B(f)$. Then there exists a point $x \in \mathcal{P}$ such that $Ax = s^*$ and $\|x - y\|_2 \le \frac{\sqrt{n}}{2} \|Ay - s^*\|_1$.

Before proving this lemma, we show how it can be used to derive the bounds.

Theorem 4.2. We have $\kappa_* \le n \sqrt{r}/2 + 1$ and $\ell_* \ge 4/n^2$.

Proof: We start with the bound on $\kappa_*$. In order to bound $\kappa_*$, we need to upper bound $\kappa(y)$ for any $y \in (\mathcal{P} \cup \mathcal{A}_0) \setminus \mathcal{E}$. We distinguish between two cases: $y \in \mathcal{P} \setminus \mathcal{E}$ and $y \in \mathcal{A}_0 \setminus \mathcal{E}$.

Case I: $y \in \mathcal{P} \setminus \mathcal{E}$. The denominator in the definition of $\kappa(y)$ is equal to $d(y, \mathcal{A}_0) = \|Ay - s^*\|_2 / \sqrt{r}$.
This follows since the closest point $a = (a_1, \dots, a_r)$ to $y$ in $\mathcal{A}_0$ can be obtained as $a_i = y_i + (s^* - Ay)/r$ for each $i \in [r]$. Lemma 4.1 gives an $x \in \mathcal{P}$ such that $Ax = s^*$ and $\|x - y\|_2 \le \frac{\sqrt{n}}{2} \|Ay - s^*\|_1 \le \frac{n}{2} \|Ay - s^*\|_2$. Since $Ax = s^*$, we have $x \in \mathcal{E}$, and thus the numerator of $\kappa(y)$ is at most $\|x - y\|_2$. Thus $\kappa(y) \le \|x - y\|_2 / (\|Ay - s^*\|_2 / \sqrt{r}) \le n \sqrt{r}/2$.

Case II: $y \in \mathcal{A}_0 \setminus \mathcal{E}$. This means that $Ay = s^*$. The denominator of $\kappa(y)$ is equal to $d(y, \mathcal{P})$. For each $i \in [r]$, let $q_i \in B(f_i)$ be the point that minimizes $\|y_i - q_i\|_2$, and let $q = (q_1, \dots, q_r) \in \mathcal{P}$. Then $d(y, \mathcal{P}) = \|y - q\|_2$. Lemma 4.1 with $q$ in place of $y$ gives a point $x \in \mathcal{E}$ such that $\|q - x\|_2 \le \frac{\sqrt{n}}{2} \|Aq - s^*\|_1$. We have $\|Aq - s^*\|_1 = \|Aq - Ay\|_1 \le \sum_{i=1}^{r} \|q_i - y_i\|_1 = \|q - y\|_1 \le \sqrt{nr}\, \|q - y\|_2$. Thus $\|q - x\|_2 \le \frac{n \sqrt{r}}{2} \|q - y\|_2$. Since $x \in \mathcal{E}$, we have

    $d(y, \mathcal{E}) \le \|x - y\|_2 \le \|x - q\|_2 + \|q - y\|_2 \le \left( 1 + \frac{n \sqrt{r}}{2} \right) \|q - y\|_2 = \left( 1 + \frac{n \sqrt{r}}{2} \right) d(y, \mathcal{P})$.

Therefore $\kappa(y) \le 1 + n \sqrt{r}/2$, as desired.

Let us now prove the bound on $\ell_*$. Let $y \in \mathcal{P}$ and let $y^* := \mathrm{argmin}_p \{ \|p - y\|_2 : p \in \mathcal{E} \}$. We need to verify that $\|A(y - y^*)\|_2^2 \ge \frac{4}{n^2} \|y - y^*\|_2^2$. Again, we apply Lemma 4.1 to obtain a point $x \in \mathcal{P}$ such that $Ax = s^*$ and $\|x - y\|_2^2 \le \frac{n}{4} \|Ax - Ay\|_1^2 \le \frac{n^2}{4} \|Ax - Ay\|_2^2$. Since $Ax = s^*$, the definition of $y^*$ gives $\|y - y^*\|_2^2 \le \|x - y\|_2^2$. Using that $Ax = Ay^* = s^*$, we have $\|Ax - Ay\|_2 = \|Ay - Ay^*\|_2$, and the claim follows.

Proof of Lemma 4.1: We give an algorithm that transforms $y$ into a vector $x \in \mathcal{P}$ as in the statement, through a sequence of path augmentations in the auxiliary graph defined in Section 2. We initialize $x = y$ and maintain $x \in \mathcal{P}$ (and thus $Ax \in B(f)$) throughout. We define the sets of source and sink nodes as $N := \{ v \in V : (Ax)(v) < s^*(v) \}$ and $P := \{ v \in V : (Ax)(v) > s^*(v) \}$. Once $N = P = \emptyset$, we have $Ax = s^*$ and terminate. Note that since $Ax, s^* \in B(f)$, we have $\sum_v (Ax)(v) = \sum_v s^*(v) = f(V)$, and therefore $N = \emptyset$ is equivalent to $P = \emptyset$. The blocks of $x$ are denoted by $x = (x_1, x_2, \dots, x_r)$, with $x_i \in B(f_i)$.

Claim 4.3. If $N \neq \emptyset$, then there exists a directed path of positive capacity in the auxiliary graph between the sets $N$ and $P$.

Proof: We say that a set $T$ is $i$-tight if $x_i(T) = f_i(T)$. It is a simple consequence of submodularity that the intersection and union of two $i$-tight sets are also $i$-tight. For every $i \in [r]$ and every $u \in V$, we define $T_i(u)$ as the unique minimal $i$-tight set containing $u$. It is easy to see that for an arc $(u, v) \in E_i$, we have $c(u, v) > 0$ if and only if $v \in T_i(u)$. We note that if $u \notin C_i$, then $x_i(u) = f_i(\{u\}) = 0$ and thus $T_i(u) = \{u\}$.

Let $S$ be the set of vertices reachable from $N$ on a directed path of positive capacity in the auxiliary graph. For a contradiction, assume $S \cap P = \emptyset$. By the definition of $S$, we must have $T_i(u) \subseteq S$ for every $u \in S$ and every $i \in [r]$. Since the union of $i$-tight sets is also $i$-tight, we see that $S$ is $i$-tight for every $i \in [r]$, and consequently, $x(S) = f(S)$. On the other hand, since $N \subseteq S$, $S \cap P = \emptyset$, and $N \neq \emptyset$, we have $x(S) < s^*(S)$. Since $s^* \in B(f)$, we have $f(S) = x(S) < s^*(S) \le f(S)$, a contradiction. We conclude that $S \cap P \neq \emptyset$.

In every step of the algorithm, we take a shortest directed path $Q$ of positive capacity from $N$ to $P$, and update $x$ along this path. That is, if $(u, v) \in Q \cap E_i$, then we increase $x_i(u)$ by $\alpha$ and decrease $x_i(v)$ by $\alpha$, where $\alpha$ is the minimum capacity of an arc on $Q$.
Note that this is the same as running the Edmonds-Karp-Dinitz algorithm in the submodular auxiliary graph. Using the analysis of [12], one can show that this change maintains $x \in \mathcal{P}$, and that the algorithm terminates in finite (in fact, strongly polynomial) time. We defer the details to the full version of the paper.

It remains to bound $\|x - y\|_2$. At every path update, the change in the $\ell_\infty$-norm of $x$ is at most $\alpha$, and the change in the $\ell_1$-norm is at most $n\alpha$, since the length of the path is at most $n$. At the same time, $\sum_{v \in N} (s^*(v) - (Ax)(v))$ decreases by $\alpha$. Thus, $\|x - y\|_\infty \le \|Ay - s^*\|_1 / 2$ and $\|x - y\|_1 \le n \|Ay - s^*\|_1 / 2$. Using the inequality $\|p\|_2 \le \sqrt{\|p\|_1 \|p\|_\infty}$, we obtain $\|x - y\|_2 \le \frac{\sqrt{n}}{2} \|Ay - s^*\|_1$, completing the proof.

5 The level-0 algorithms

In this section, we briefly discuss the level-0 algorithms and the interface between the level-1 and level-0 algorithms.

Two-level frameworks via quadratic minimization oracles. Recall from the Introduction the assumption on the subroutines $\mathcal{O}_i(w)$ that find the minimum norm point in $B(f_i + w)$ for the input vector $w \in \mathbb{R}^n$, for each $i \in [r]$. The continuous methods in Section 3 directly use the subroutines $\mathcal{O}_i(w)$ for the alternating projection or coordinate descent steps. For the flow-based algorithms in Section 2, the main oracle query is to find the auxiliary graph capacity $c(u, v)$ of an arc $(u, v) \in E_i$ for some $i \in [r]$. This can easily be formulated as minimizing the function $f_i + w$ for an appropriate $w$ with $\mathrm{supp}(w) \subseteq C_i$. As explained at the beginning of Section 3, an optimal solution to (Min-Norm) immediately gives an optimal solution to the SFM problem for the same submodular function. Hence, the auxiliary graph capacity queries can be implemented via single calls to the subroutines $\mathcal{O}_i(w)$. Let us also remark that, while the functions $f_i$ are formally defined on the entire ground set $V$, their effective support is $C_i$, and thus it suffices to solve the quadratic minimization problems on the ground set $C_i$.

Whereas the discrete and continuous algorithms require the same type of oracles, there is an important difference between the two in terms of the exactness required of the oracle solutions. The discrete algorithms require exact values of the auxiliary graph capacities $c(u, v)$, as they must maintain $x_i \in B(f_i)$ throughout. Thus, the oracle must always return an optimal solution. The continuous algorithms are more robust, and return a solution with the required accuracy even if the oracle only returns an approximate solution. As discussed in Section 6, this difference leads to the continuous methods being applicable in settings where the combinatorial algorithms are prohibitively slow.

Level-0 algorithms. We now discuss specific algorithms for quadratic minimization over the base polytopes of the functions $f_i$. Several functions that arise in applications are "simple", meaning that there is a function-specific quadratic minimization subroutine that is very efficient. If a function-specific subroutine is not available, one can use a general-purpose submodular minimization algorithm. The works [1, 7] use brute force search as the subroutine for each $f_i$, with running time $2^{|C_i|} \mathrm{EO}_i$. However, this is applicable only for small $C_i$'s and is not suitable for our experiments, where the maximum clique size is quite large. As a general-purpose algorithm, we used the Fujishige-Wolfe minimum norm point algorithm [11, 31]. This provides an $\epsilon$-approximate solution in $O(|C_i| F_{i,\max}^2 / \epsilon)$ iterations, with an overall running time bound of $O((|C_i|^4 + |C_i|^2 \mathrm{EO}_i) F_{i,\max}^2 / \epsilon)$ [3].
The experimental running time of the Fujishige-Wolfe algorithm can be prohibitively large [20]. As we discuss in Section 6, by warm-starting the algorithm and performing only a small number of iterations, we were able to use the algorithm in conjunction with the gradient descent level-1 algorithms.

6 Experimental results

We evaluate the algorithms on energy minimization problems that arise in image segmentation. We follow the standard approach and model the image segmentation task of segmenting an object from the background as finding a minimum cost 0/1 labeling of the pixels. The total labeling cost is the sum of labeling costs corresponding to cliques, where a clique is a set of pixels. We refer to the labeling cost functions as clique potentials. The main focus of our experimental analysis is to compare the running times of the decomposable submodular minimization algorithms. Therefore we have chosen to use the simple hand-tuned potentials that were used in previous work: the edge-based costs [1] and the count-based costs defined in [29, 30]. Specifically, we used the following clique potentials in our experiments, all of which are submodular (two of them are sketched in code after this list):

- Unary potentials for each pixel. The unary potentials are derived from Gaussian Mixture Models of color features [26].
- Pairwise potentials for each edge of the 8-neighbor grid graph. For each graph edge $(i, j)$ between pixels $i$ and $j$, the cost of a labeling equals $0$ if the two pixels have the same label, and $\exp(-\|v_i - v_j\|_2)$ for different labels, where $v_i$ is the RGB color vector of pixel $i$.
- Square potentials for each $2 \times 2$ square of pixels. The cost of a labeling is the square root of the number of neighboring pixels that have different labels, as in [1].
- Region potentials. We use the algorithm from [30] to identify regions. For each region $C_i$, the labeling cost is $f_i(S) = |S| \cdot |C_i \setminus S|$, where $S$ and $C_i \setminus S$ are the subsets of $C_i$ labeled 0 and 1, respectively; see [29, 30].

We used five image segmentation instances to evaluate the algorithms.⁶ The experiments were carried out on a single computer with a 3.3 GHz Intel Core i5 processor and 8 GB of memory; we report times averaged over 10 trials. We performed several experiments with various combinations of potentials and parameters.
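The following is a minimal sketch (our illustration; the helper names are hypothetical) of the pairwise and region potentials above, written as set functions on the subset of pixels labeled 1.

```python
import math

def pairwise_potential(v_i, v_j):
    """Cost of an 8-neighbor grid edge (i, j) as a function of the two labels."""
    def cost(S):  # S is the subset of the two endpoints labeled 1
        same_label = len(S) in (0, 2)
        return 0.0 if same_label else math.exp(
            -sum((a - b) ** 2 for a, b in zip(v_i, v_j)) ** 0.5)
    return cost

def region_potential(C_i):
    """Count-based region cost f_i(S) = |S| * |C_i \\ S| from [30]; it depends
    only on |S|, which is what makes a tailored level-0 subroutine possible."""
    def cost(S):
        return len(S) * (len(C_i) - len(S))
    return cost

f = region_potential(range(10))
print(f(set(range(3))))  # 3 * 7 = 21
```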
⁶ The data is available at http://melodi.ee.washington.edu/~jegelka/cc/index.html and http://research.microsoft.com/en-us/um/cambridge/projects/visionimagevideoediting/segmentation/grabcut.htm

[Figure 1 panels: plant (all experiments), octopus (all experiments), penguin (all experiments), and plant (large cliques with Fujishige-Wolfe); each panel plots running time in seconds against #iterations / #functions for IBFS, RCDM, and ACDM on the mincut, small cliques, and large cliques instances.]

Figure 1: Running times in seconds on a subset of the instances. The results for the other instances are very similar and are deferred to the full version of the paper. The x-axis shows the number of iterations for the continuous algorithms. The IBFS algorithm is exact, and we display its running time as a flat line. In the first three plots, the running time of IBFS on the small cliques instances nearly coincides with its running time on the minimum cut instances. In the last plot, the running time of IBFS is missing, since it is computationally prohibitive to run it on those instances.

In the minimum cut experiments, we evaluated the algorithms on instances containing only unary and pairwise potentials; in the small cliques experiments, we used unary, pairwise, and square potentials. Finally, the large cliques experiments used all of the potentials above. Here, we used two different level-0 algorithms for the region potentials. Firstly, we used an algorithm specific to the particular potential, with running time $O(|C_i| \log |C_i| + |C_i| \mathrm{EO}_i)$. Secondly, we used the general Fujishige-Wolfe algorithm for level-0. This turned out to be significantly slower: it was prohibitive to run the algorithm to near-convergence. Hence, we could not implement IBFS in this setting, as it requires an exact solution. We were able to implement the coordinate descent methods with the following modification of Fujishige-Wolfe at level-0. At every iteration, we ran Fujishige-Wolfe for 10 iterations only, but we warm-started with the current solution $x_i \in B(f_i)$ for each $i \in [r]$. Interestingly, this turned out to be sufficient for the level-1 algorithm to make progress.

Summary of results. Figure 1 shows the running times for some of the instances; we defer the full experimental results to the full version of the paper. The IBFS algorithm is significantly faster than the gradient descent algorithms on all of the instances with small cliques. For all of the instances with larger cliques, IBFS (as well as other combinatorial algorithms) is no longer suitable if the only choice for the level-0 algorithm is a generic method such as the Fujishige-Wolfe algorithm. The experimental results suggest that in such cases, the coordinate descent methods together with a suitably modified Fujishige-Wolfe algorithm provide an approach for obtaining an approximate solution.
References

[1] C. Arora, S. Banerjee, P. Kalra, and S. Maheshwari. Generic cuts: An efficient algorithm for optimal inference in higher order MRF-MAP. In European Conference on Computer Vision, pages 17-30. Springer, 2012.
[2] F. Bach. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145-373, 2013.
[3] D. Chakrabarty, P. Jain, and P. Kothari. Provable submodular minimization using Wolfe's algorithm. In Advances in Neural Information Processing Systems, pages 802-809, 2014.
[4] J. Edmonds. Submodular functions, matroids, and certain polyhedra. Combinatorial Structures and Their Applications, pages 69-87, 1970.
[5] A. R. Ene and H. L. Nguyen. Random coordinate descent methods for minimizing decomposable submodular functions. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
[6] O. Fercoq and P. Richtárik. Accelerated, parallel, and proximal coordinate descent. SIAM Journal on Optimization, 25(4):1997-2023, 2015.
[7] A. Fix, T. Joachims, S. Min Park, and R. Zabih. Structured learning of sum-of-submodular higher order energy functions. In Proceedings of the IEEE International Conference on Computer Vision, pages 3104-3111, 2013.
[8] A. Fix, C. Wang, and R. Zabih. A primal-dual algorithm for higher-order multilabel Markov random fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1138-1145, 2014.
[9] L. Fleischer and S. Iwata. A push-relabel framework for submodular function minimization and applications to parametric optimization. Discrete Applied Mathematics, 131(2):311-322, 2003.
[10] S. Fujishige. Lexicographically optimal base of a polymatroid with respect to a weight vector. Mathematics of Operations Research, 5(2):186-196, 1980.
[11] S. Fujishige and S. Isotani. A submodular function minimization algorithm based on the minimum-norm base. Pacific Journal of Optimization, 7(1):3-17, 2011.
[12] S. Fujishige and X. Zhang. New algorithms for the intersection problem of submodular systems. Japan Journal of Industrial and Applied Mathematics, 9(3):369, 1992.
[13] A. V. Goldberg and R. E. Tarjan. A new approach to the maximum-flow problem. Journal of the ACM (JACM), 35(4):921-940, 1988.
[14] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1(2):169-197, 1981.
[15] S. Iwata. A faster scaling algorithm for minimizing submodular functions. SIAM Journal on Computing, 32(4):833-840, 2003.
[16] S. Iwata, L. Fleischer, and S. Fujishige. A combinatorial strongly polynomial algorithm for minimizing submodular functions. Journal of the ACM (JACM), 48(4):761-777, 2001.
[17] S. Iwata and J. B. Orlin. A simple combinatorial algorithm for submodular function minimization. In ACM-SIAM Symposium on Discrete Algorithms (SODA), 2009.
[18] S. Jegelka, F. Bach, and S. Sra. Reflection methods for user-friendly submodular optimization. In Advances in Neural Information Processing Systems (NIPS), 2013.
[19] S. Jegelka and J. A. Bilmes. Online submodular minimization for combinatorial structures. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 345-352, 2011.
[20] S. Jegelka, H. Lin, and J. A. Bilmes. On fast approximate submodular minimization. In Advances in Neural Information Processing Systems, pages 460-468, 2011.
[21] V. Kolmogorov. Minimizing a sum of submodular functions. Discrete Applied Mathematics, 160(15):2246-2258, 2012.
[22] Y. T. Lee, A. Sidford, and S. C.-w. Wong. A faster cutting plane method and its implications for combinatorial and convex optimization. In IEEE Foundations of Computer Science (FOCS), 2015.
[23] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
[24] R. Nishihara, S. Jegelka, and M. I. Jordan. On the convergence rate of decomposable submodular function minimization. In Advances in Neural Information Processing Systems (NIPS), pages 640-648, 2014.
[25] J. B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Mathematical Programming, 118(2):237-251, 2009.
[26] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG), 23(3):309-314, 2004.
[27] A. Schrijver. A combinatorial algorithm minimizing submodular functions in strongly polynomial time. Journal of Combinatorial Theory, Series B, 80(2):346-355, 2000.
[28] A. Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003.
[29] I. Shanu, C. Arora, and P. Singla. Min norm point algorithm for higher order MRF-MAP inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5365-5374, 2016.
[30] P. Stobbe and A. Krause. Efficient minimization of decomposable submodular functions. In Advances in Neural Information Processing Systems (NIPS), 2010.
[31] P. Wolfe. Finding the nearest point in a polytope. Mathematical Programming, 11(1):128-149, 1976.
Gauging Variational Inference

Sungsoo Ahn*   Michael Chertkov†   Jinwoo Shin*

* School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
†1 Theoretical Division, T-4 & Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
†2 Skolkovo Institute of Science and Technology, 143026 Moscow, Russia
{sungsoo.ahn, jinwoos}@kaist.ac.kr   [email protected]

Abstract

Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used in practice, where mean-field (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation, which modifies factors of the GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments indeed confirm that the proposed algorithms outperform and generalize MF and BP.

1 Introduction

Graphical Models (GM) express the factorization of joint multivariate probability distributions in statistics via a graph of relations between variables. The concept of GM has been developed and/or used successfully in information theory [1, 2], physics [3, 4, 5, 6, 7], artificial intelligence [8], and machine learning [9, 10]. Of the many inference problems one can formulate using a GM, computing the partition function (normalization), or equivalently computing marginal probability distributions, is the most important and universal inference task of interest. However, this paradigmatic problem is known to be computationally intractable in general, i.e., it is #P-hard even to approximate [11].

The Markov chain Monte Carlo (MCMC) method [12] is a classical approach addressing the inference task, but it typically suffers from exponentially slow mixing or large variance. Variational inference is an approach stating the inference task as an optimization. Hence, it does not have such issues of MCMC and is often more favorable. The mean-field (MF) [6] and belief propagation (BP) [13] methods are arguably the most popular algorithms of the variational type. They are distributed, fast, and overall very successful in practical applications, even though they are heuristics lacking systematic error control. This has motivated researchers to seek methods with some guarantees, e.g., providing lower bounds [14, 15] and upper bounds [16, 17, 15] for the partition function of a GM.

In another line of research, which this paper extends and contributes to, the so-called re-parametrizations [18], gauge transformations (GT) [19, 20], and holographic transformations [21, 22] were explored. This class of distinct, but related, transformations consists in modifying a GM by changing factors, associated with elements of the graph, continuously such that the partition function stays invariant.¹ In this paper, we choose to work with GT as the most general one among the three approaches.

¹ See [23, 24, 25] for discussions of relations between the aforementioned techniques.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Once applied to a GM, a gauge transformation transforms the original partition function, defined as a weighted series/sum over states, into a new one, dependent on the choice of gauges. In particular, a fixed point of BP minimizes the so-called Bethe free energy [26], and it can also be understood as an optimal GT [19, 20, 27, 28]. Moreover, fixing the GT in accordance with BP results in the so-called loop series expression for the partition function [19, 20]. In this paper we generalize [19, 20] and explore a more general class of GT: we develop a new gauge-optimization approach which results in "better" variational inference schemes than MF, BP and other related methods.

Contribution. The main contribution of this paper consists in developing two novel variational methods, called Gauged-MF (G-MF) and Gauged-BP (G-BP), providing lower bounds on the partition function of a GM. While MF minimizes the (exact) Gibbs free energy under (reduced) product distributions, G-MF does the same task by introducing an additional GT. Due to the additional degree of freedom in the optimization, G-MF improves the lower bound on the partition function provided by MF systematically. Similarly, G-BP generalizes BP, extending the interpretation of the latter as an optimization of the Bethe free energy over GT [19, 20, 27, 28], by imposing additional constraints on the GT, thus forcing all the terms in the resulting series for the partition function to remain non-negative. Consequently, G-BP results in a provable lower bound for the partition function, while BP does not (except for log-supermodular models [29]).

We prove that both G-MF and G-BP are exact for GMs defined over a single cycle of a special structure, which we call an "alternating cycle/loop", as well as over line graphs. The alternating cycle case is surprising, as it represents the simplest "counter-example" from [30] illustrating failures of MF and BP. For general GMs, we also establish that G-MF is better than, or at least as good as, G-BP. However, we also develop novel error correction schemes for G-BP such that the lower bound on the partition function provided by G-BP is improved systematically/sequentially, eventually outperforming G-MF at the expense of increased computational complexity. Such error correction schemes have been studied for improving BP by accounting for the loop series, which consists of positive and negative terms [31, 32]. In our design of G-BP, the corresponding series consists of only non-negative terms, which leads to easier systematic corrections to G-BP.

We also show that the proposed GT-based optimizations can be restated as smooth and unconstrained, thus allowing efficient solutions via algorithms of a gradient descent type or any generic optimization solver, such as IPOPT [33]. We experiment with IPOPT on complete GMs of relatively small size and on large GMs (up to 300 variables) of fixed degree. Our experiments indeed confirm that the newly proposed algorithms outperform and generalize MF and BP. Finally, we remark that all statements of the paper are made within the framework of the so-called Forney-style GMs [34], which is general as it allows interactions beyond pair-wise (i.e., high-order GMs) and includes other/alternative GM formulations based on factor graphs [35].

2 Preliminaries

2.1 Graphical model
Factor-graph model. Given an (undirected) bipartite factor graph $G = (\mathcal{X}, \mathcal{F}, \mathcal{E})$, a joint distribution of (binary) random variables $x = [x_v \in \{0,1\} : v \in \mathcal{X}]$ is called a factor-graph Graphical Model (GM) if it factorizes as follows:

    $p(x) = \dfrac{1}{Z} \prod_{a \in \mathcal{F}} f_a(x_{\partial a})$,

where the $f_a$ are non-negative functions called factor functions, $\partial a \subseteq \mathcal{X}$ consists of the nodes neighboring factor $a$, and the normalization constant $Z := \sum_{x \in \{0,1\}^{\mathcal{X}}} \prod_{a \in \mathcal{F}} f_a(x_{\partial a})$ is called the partition function. A factor-graph GM is called pair-wise if $|\partial a| \le 2$ for all $a \in \mathcal{F}$, and high-order otherwise. It is known that approximating the partition function is #P-hard in general [11].

Forney-style model. In this paper, we primarily use the Forney-style GM [34] instead of the factor-graph GM. Elementary random variables in the Forney-style GM are associated with edges of an undirected graph $G = (V, E)$. Then the random vector $x = [x_{ab} \in \{0,1\} : \{a,b\} \in E]$ is realized with the probability distribution

    $p(x) = \dfrac{1}{Z} \prod_{a \in V} f_a(x_a)$,    (1)

where $x_a$ is associated with the set of edges neighboring node $a$, i.e., $x_a = [x_{ab} : b \in \partial a]$, and $Z := \sum_{x \in \{0,1\}^E} \prod_{a \in V} f_a(x_a)$. As argued in [19, 20], the Forney-style GM constitutes a more universal/compact description of gauge transformations without any restriction of generality: given any factor-graph GM, one can construct an equivalent Forney-style GM (see the supplementary material).

2.2 Mean-field and belief propagation

We now introduce the two most popular methods for approximating the partition function: the mean-field and Bethe (i.e., belief propagation) approximation methods. Given any (Forney-style) GM $p(x)$ defined as in (1) and any distribution $q(x)$ over all variables, the Gibbs free energy is defined as

    $F_{\mathrm{Gibbs}}(q) := \sum_{x \in \{0,1\}^E} q(x) \log \dfrac{q(x)}{\prod_{a \in V} f_a(x_a)}$.    (2)

The partition function is related to the Gibbs free energy according to $-\log Z = \min_q F_{\mathrm{Gibbs}}(q)$, where the optimum is achieved at $q = p$ [35]. This optimization is over all valid probability distributions from the exponentially large space, and is obviously intractable.

In the case of the mean-field (MF) approximation, we minimize the Gibbs free energy over a family of tractable probability distributions factorized into the following product: $q(x) = \prod_{\{a,b\} \in E} q_{ab}(x_{ab})$, where each independent $q_{ab}(x_{ab})$ is a proper probability distribution, behaving as a (mean-field) proxy to the marginal of $q(x)$ over $x_{ab}$. By construction, the MF approximation provides a lower bound for $\log Z$.

In the case of the Bethe approximation, the so-called Bethe free energy approximates the Gibbs free energy [36]:

    $F_{\mathrm{Bethe}}(b) = \sum_{a \in V} \sum_{x_a \in \{0,1\}^{\partial a}} b_a(x_a) \log \dfrac{b_a(x_a)}{f_a(x_a)} - \sum_{\{a,b\} \in E} \sum_{x_{ab} \in \{0,1\}} b_{ab}(x_{ab}) \log b_{ab}(x_{ab})$,    (3)

where the beliefs $b = [b_a, b_{ab} : a \in V, \{a,b\} \in E]$ should satisfy the following "consistency" constraints:

    $0 \le b_a, b_{ab} \le 1$,  $\sum_{x_{ab} \in \{0,1\}} b_{ab}(x_{ab}) = 1$,  $\sum_{x'_a \setminus x_{ab} \in \{0,1\}^{\partial a}} b_a(x'_a) = b_{ab}(x_{ab})$  $\forall \{a,b\} \in E$.

Here, $x'_a \setminus x_{ab}$ denotes a vector with $x'_{ab} = x_{ab}$ fixed, and $\min_b F_{\mathrm{Bethe}}(b)$ is the Bethe estimate for $-\log Z$. The popular belief propagation (BP) distributed heuristic solves this optimization iteratively [36]. The Bethe approximation is exact over trees, i.e., $-\log Z = \min_b F_{\mathrm{Bethe}}(b)$. However, in the case of a general loopy graph, the BP estimate lacks approximation guarantees. It is known, however, that the result of the BP optimization lower bounds the log-partition function $\log Z$ if the factors are log-supermodular [29].
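The following is a minimal sketch (our illustration, not from the paper) of brute-force evaluation of the Forney-style partition function $Z$ in (1) on a tiny graph; the triangle GM and its factor tables are assumptions chosen for illustration.

```python
from itertools import product

def partition_function(V, E, factors):
    """factors[a] maps the tuple (x_ab for b in sorted neighbors of a) to f_a(x_a)."""
    nbrs = {a: sorted(b for e in E for b in e if a in e and b != a) for a in V}
    Z = 0.0
    for bits in product([0, 1], repeat=len(E)):
        x = dict(zip(E, bits))           # one joint configuration of edge variables
        w = 1.0
        for a in V:
            w *= factors[a](tuple(x[tuple(sorted((a, b)))] for b in nbrs[a]))
        Z += w
    return Z

# Example: a triangle with factors rewarding agreement of incident edge variables.
V = ["a", "b", "c"]
E = [("a", "b"), ("a", "c"), ("b", "c")]
factors = {a: (lambda xa: 2.0 if len(set(xa)) == 1 else 1.0) for a in V}
print(partition_function(V, E, factors))  # 28.0 by direct enumeration
```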
2.3 Gauge transformation

Gauge transformation (GT) [19, 20] is a family of linear transformations of the factor functions in (1) that leaves the partition function $Z$ invariant. It is defined with respect to a set of invertible $2 \times 2$ matrices $G_{ab}$ for $\{a,b\} \in E$, coined gauges:

    $G_{ab} = \begin{bmatrix} G_{ab}(0,0) & G_{ab}(0,1) \\ G_{ab}(1,0) & G_{ab}(1,1) \end{bmatrix}$.

The GM gauge transformed with respect to $\mathcal{G} = [G_{ab}, G_{ba} : \{a,b\} \in E]$ consists of factors expressed as

    $f_{a,\mathcal{G}}(x_a) = \sum_{x'_a \in \{0,1\}^{\partial a}} f_a(x'_a) \prod_{b \in \partial a} G_{ab}(x_{ab}, x'_{ab})$.

Here one treats the independent $x_{ab}$ and $x_{ba}$ equivalently for notational convenience, and $\{G_{ab}, G_{ba}\}$ is a conjugated pair of distinct matrices satisfying the gauge constraint $G_{ab}^{\top} G_{ba} = I$, where $I$ is the identity matrix. One can then prove the invariance of the partition function under the transformation:

    $Z = \sum_{x \in \{0,1\}^{|E|}} \prod_{a \in V} f_a(x_a) = \sum_{x \in \{0,1\}^{|E|}} \prod_{a \in V} f_{a,\mathcal{G}}(x_a)$.    (4)

Consequently, GT results in the gauge transformed distribution $p_{\mathcal{G}}(x) = \frac{1}{Z} \prod_{a \in V} f_{a,\mathcal{G}}(x_a)$. Note that some components of $p_{\mathcal{G}}(x)$ can be negative, in which case it is not a valid distribution.

We remark that the Bethe/BP approximation can be interpreted as a specific choice of GT [19, 20]. Indeed, any fixed point of BP corresponds to a special set of gauges making an arbitrarily picked configuration/state $x$ least sensitive to local variations of the gauge. Formally, the following non-convex optimization is known to be equivalent to the Bethe approximation:

    maximize over $\mathcal{G}$:  $\sum_{a \in V} \log f_{a,\mathcal{G}}(0, 0, \dots)$
    subject to:  $G_{ab}^{\top} G_{ba} = I$,  $\forall \{a,b\} \in E$,    (5)

and the BP-gauges correspond to stationary points of (5), with the objective equal to the respective (negative) Bethe free energy, i.e., $\sum_{a \in V} \log f_{a,\mathcal{G}}(0, 0, \dots) = -F_{\mathrm{Bethe}}$.
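The following is a minimal sketch (our illustration, not from the paper) that checks the gauge invariance (4) numerically on the triangle GM: draw a random invertible gauge $G_{ab}$ per edge, set its conjugate to $G_{ba} = (G_{ab}^{\top})^{-1}$, transform each factor per the definition of $f_{a,\mathcal{G}}$, and compare partition functions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
V, E = [0, 1, 2], [(0, 1), (0, 2), (1, 2)]
# factor tables: fac[a][x_e1, x_e2], incident edges taken in sorted order
fac = {a: rng.random((2, 2)) for a in V}

def Z(tables):
    total = 0.0
    for bits in product([0, 1], repeat=3):
        x = dict(zip(E, bits))
        total += (tables[0][x[(0, 1)], x[(0, 2)]]
                  * tables[1][x[(0, 1)], x[(1, 2)]]
                  * tables[2][x[(0, 2)], x[(1, 2)]])
    return total

G = {}
for (a, b) in E:
    G[(a, b)] = rng.random((2, 2)) + np.eye(2)   # invertible with high probability
    G[(b, a)] = np.linalg.inv(G[(a, b)].T)       # gauge constraint G_ab^T G_ba = I

def gauge(a, b1, b2):
    # f_{a,G}(x_a) = sum_{x'} f_a(x') G_{a,b1}(x_ab1, x'_ab1) G_{a,b2}(x_ab2, x'_ab2)
    return G[(a, b1)] @ fac[a] @ G[(a, b2)].T

gauged = {0: gauge(0, 1, 2), 1: gauge(1, 0, 2), 2: gauge(2, 0, 1)}
print(Z(fac), Z(gauged))  # the two values coincide up to floating point error
```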
However, the constraints of Step C can actually be eliminated. Indeed, one observes that the non-negativity constraint $f_{a,\mathcal{G}}(x_a) \ge 0$ is redundant, because each term $q_a(x_a) \log f_{a,\mathcal{G}}(x_a)$ in the optimization objective already prevents the factors from getting close to zero, thus keeping them positive. Equivalently, once the current $\mathcal{G}$ satisfies the non-negativity constraints, the objective $q_a(x_a) \log f_{a,\mathcal{G}}(x_a)$ acts as a log-barrier forcing the constraints to be satisfied at the next step of an iterative optimization procedure. Furthermore, the gauge constraint $G_{ab}^\top G_{ba} = I$ can also be removed by simply expressing one of the two gauges via the other, e.g., $G_{ba}$ via $(G_{ab}^\top)^{-1}$. Step C can then be solved by any unconstrained iterative optimization method of gradient-descent type. Finally, the additional (intermediate) procedure Step B is introduced to handle extreme cases in which $q_{ab}(x_{ab}) = 0$ for some $\{a,b\}$ at the optimum. We resolve the singularity by perturbing the distribution, setting zero probabilities to a small value $q_{ab}(x_{ab}) = \varepsilon$, where $\varepsilon > 0$ is sufficiently small.

Algorithm 1 Gauged mean-field
1: Input: GM defined over a graph $G = (V, E)$ with factors $\{f_a\}_{a \in V}$; a sequence of decreasing barrier terms $\delta_1 > \delta_2 > \cdots > \delta_T > 0$ (to handle extreme cases).
2: for $t = 1, 2, \cdots, T$ do
3:   Step A. Update $q$ by solving the mean-field approximation, i.e., solve the following optimization:
     $$\text{maximize}_{q} \ \sum_{a \in V} \sum_{x_a \in \{0,1\}^{\partial a}} q_a(x_a) \log f_{a,\mathcal{G}}(x_a) - \sum_{\{a,b\} \in E} \sum_{x_{ab} \in \{0,1\}} q_{ab}(x_{ab}) \log q_{ab}(x_{ab})$$
     $$\text{subject to} \ q(x) = \prod_{\{a,b\} \in E} q_{ab}(x_{ab}), \quad q_a(x_a) = \prod_{b \in \partial a} q_{ab}(x_{ab}), \ \forall a \in V.$$
4:   Step B. For factors with zero values, i.e., $q_{ab}(x_{ab}) = 0$, perturb by setting $q_{ab}(x'_{ab}) = \delta_t$ if $x'_{ab} = x_{ab}$, and $1 - \delta_t$ otherwise.
5:   Step C. Update $\mathcal{G}$ by solving the following optimization:
     $$\text{maximize}_{\mathcal{G}} \ \sum_{x \in \{0,1\}^E} q(x) \log \prod_{a \in V} f_{a,\mathcal{G}}(x_a) \qquad \text{subject to} \ G_{ab}^\top G_{ba} = I, \ \forall \{a,b\} \in E.$$
6: end for
7: Output: set of gauges $\mathcal{G}$ and product distribution $q$.

In summary, it is straightforward to check that Algorithm 1 converges to a local optimum of (6), similarly to other solvers developed for the mean-field and Bethe approximations. We also identify an important class of GMs on which Algorithm 1 provably outperforms both the MF and BP (Bethe) approximations. Specifically, we prove that the optimization (6) is exact when the graph is a line (a special case of a tree) and, somewhat surprisingly, when it is a single loop/cycle with an odd number of factors represented by negative definite matrices. The latter case is the so-called "alternating cycle" example, introduced in [30] as the simplest loopy example on which the MF and BP approximations perform quite badly. Formally, we state the following theorem, whose proof is given in the supplementary material.

Theorem 1. For a GM defined on any line graph or alternating cycle, the optimal objective of (6) is equal to the exact log-partition function, i.e., $\log Z$.

3.2 Gauged belief propagation

We start the discussion of the G-BP scheme by noticing that, according to [37], the G-MF gauge optimization (6) can be reduced to the BP/Bethe gauge optimization (5) by eliminating the non-negativity constraint $f_{a,\mathcal{G}}(x_a) \ge 0$ for each factor and replacing the product distribution $q(x)$ by
$$q(x) = \begin{cases} 1 & \text{if } x = (0,0,\cdots), \\ 0 & \text{otherwise.} \end{cases} \qquad (7)$$
Motivated by this observation, we propose the following G-BP optimization:
$$\text{maximize}_{\mathcal{G}} \quad \sum_{a \in V} \log f_{a,\mathcal{G}}(0,0,\cdots)$$
$$\text{subject to} \quad G_{ab}^\top G_{ba} = I \ \forall \{a,b\} \in E, \qquad f_{a,\mathcal{G}}(x_a) \ge 0 \ \forall a \in V,\ \forall x_a \in \{0,1\}^{\partial a}. \qquad (8)$$
The only difference between (5) and (8) is the addition of the non-negativity constraints on the factors in (8). Hence, (8) outputs a lower bound on the partition function, while (5) can be larger or smaller than $\log Z$. It is also easy to verify that (8) (for G-BP) is equivalent to (6) (for G-MF) with $q$ fixed to (7). Hence, we propose an algorithmic procedure for solving (8), formally described in Algorithm 2; it should be viewed as a modification of Algorithm 1 with $q$ replaced by (7) in Step A, and with a properly chosen log-barrier term in Step C. As discussed for Algorithm 1, it is straightforward to verify that Algorithm 2 also converges to a local optimum of (8), and one can replace $G_{ba}$ by $(G_{ab}^\top)^{-1}$ for each pair of conjugated matrices in order to build a convergent gradient-descent implementation of the optimization.

Algorithm 2 Gauged belief propagation
1: Input: GM defined over a graph $G = (V, E)$ with factors $\{f_a\}_{a \in V}$; a sequence of decreasing barrier terms $\delta_1 > \delta_2 > \cdots > \delta_T > 0$.
2: for $t = 1, 2, \cdots$ do
3:   Update $\mathcal{G}$ by solving the following optimization:
     $$\text{maximize}_{\mathcal{G}} \ \sum_{a \in V} \log f_{a,\mathcal{G}}(0,0,\cdots) + \delta_t \sum_{x \in \{0,1\}^E} q(x) \log \prod_{a \in V} f_{a,\mathcal{G}}(x_a) \qquad \text{subject to} \ G_{ab}^\top G_{ba} = I, \ \forall \{a,b\} \in E.$$
4: end for
5: Output: set of gauges $\mathcal{G}$.

Since fixing $q(x)$ eliminates a degree of freedom in (6), G-BP should perform worse than G-MF, i.e., (8) $\le$ (6). However, G-BP is still meaningful for the following reasons. First, Theorem 1 still holds for (8), i.e., the optimal $q$ of (6) is achieved at (7) for any line graph or alternating cycle (see the proof of Theorem 1 in the supplementary material). More importantly, G-BP can be corrected systematically. At a high level, the "error-correction" strategy consists in correcting the approximation error of (8) sequentially while maintaining the desired lower-bounding guarantee. The key idea is to decompose the error of (8) into partition functions of multiple GMs, and then repeatedly lower bound each partition function. Formally, we fix an arbitrary ordering of edges $e_1, \cdots, e_{|E|}$ and define a corresponding GM for each $e_i$ as follows: $p(x) = \frac{1}{Z_i} \prod_{a \in V} f_{a,\mathcal{G}}(x_a)$ for $x \in \mathcal{X}_i$, where $Z_i := \sum_{x \in \mathcal{X}_i} \prod_{a \in V} f_{a,\mathcal{G}}(x_a)$ and $\mathcal{X}_i := \{x : x_{e_i} = 1,\ x_{e_j} = 0,\ x_{e_k} \in \{0,1\}\ \forall j,k \text{ such that } 1 \le j < i < k \le |E|\}$. Namely, we consider the GMs obtained from sequential conditioning of $x_{e_1}, \cdots, x_{e_i}$ in the gauge-transformed GM. Next, recall that (8) maximizes and outputs the single configuration term $\prod_a f_{a,\mathcal{G}}(0,0,\cdots)$. Then, since $\mathcal{X}_i \cap \mathcal{X}_j = \emptyset$ and $\bigcup_{i=1}^{|E|} \mathcal{X}_i = \{0,1\}^E \setminus (0,0,\cdots)$, the error of (8) can be decomposed as follows:
$$Z - \prod_a f_{a,\mathcal{G}}(0,0,\cdots) = \sum_{i=1}^{|E|} \sum_{x \in \mathcal{X}_i} \prod_{a \in V} f_{a,\mathcal{G}}(x_a) = \sum_{i=1}^{|E|} Z_i. \qquad (9)$$
Now, one can run G-MF, G-BP, or any other method (e.g., MF) again to obtain a lower bound $\widehat{Z}_i$ of $Z_i$ for all $i$ and then output $\prod_{a \in V} f_{a,\mathcal{G}}(0,0,\cdots) + \sum_{i=1}^{|E|} \widehat{Z}_i$. However, such additional runs of optimization inevitably increase the overall complexity. Instead, one can also pick the single term $\prod_a f_{a,\mathcal{G}}(x^{(i)}_a)$ for $x^{(i)} = [x_{e_i} = 1,\ x_{e_j} = 0\ \forall j \ne i]$ from $\mathcal{X}_i$ as the choice of $\widehat{Z}_i$ just after solving (8) initially, and output
$$\prod_{a \in V} f_{a,\mathcal{G}}(0,0,\cdots) + \sum_{i=1}^{|E|} \prod_{a \in V} f_{a,\mathcal{G}}(x^{(i)}_a), \qquad x^{(i)} = [x_{e_i} = 1,\ x_{e_j} = 0\ \forall j \ne i], \qquad (10)$$
as a better lower bound for $Z$ than $\prod_{a \in V} f_{a,\mathcal{G}}(0,0,\cdots)$. This choice is based on the intuition that configurations partially different from $(0,0,\cdots)$ may also be significant, as they share most of their factor values with the zero configuration maximized in (8).
In fact, one can even choose more configurations (partially different from $(0,0,\cdots)$) at the cost of more computation, which always helps as it brings the approximation closer to the true partition function. In our experiments, we consider the additional configurations $\{x^{(i,i')} : x_{e_i} = 1,\ x_{e_{i'}} = 1,\ x_{e_j} = 0\ \forall j \ne i, i'\}$ for $i' = i, \cdots, |E|$, i.e., we output
$$\prod_{a \in V} f_{a,\mathcal{G}}(0,0,\cdots) + \sum_{i=1}^{|E|} \sum_{i'=i}^{|E|} \prod_{a \in V} f_{a,\mathcal{G}}(x^{(i,i')}_a), \qquad x^{(i,i')} = [x_{e_i} = 1,\ x_{e_{i'}} = 1,\ x_{e_j} = 0\ \forall j \ne i, i'], \qquad (11)$$
as a better lower bound of $Z$ than (10).

4 Experimental results

We report the results of our experiments with G-MF and G-BP introduced in Section 3. We also experiment with improved versions of G-BP that correct errors by accounting for single (10) and multiple (11) terms, as well as by applying G-BP (again) sequentially to each residual partition function $Z_i$. The error decreases, while the evaluation complexity increases, as we move from G-BP-single to G-BP-multiple and then to G-BP-sequential. To solve the proposed gauge optimizations, e.g., Step C of Algorithm 1, we use the generic optimization solver IPOPT [33]. Even though the gauge optimizations can be formulated as unconstrained optimizations, IPOPT runs faster on the original constrained versions in our experiments. (The running times of the implemented algorithms are reported in the supplementary material.) The unconstrained formulations, however, have strong future potential for the development of fast gradient-descent algorithms.

We generate random GMs with factors that depend on "interaction strength" parameters $\{\Delta_a\}_{a \in V}$ (akin to an inverse temperature) according to $f_a(x_a) = \exp(-\Delta_a |h_0(x_a) - h_1(x_a)|)$, where $h_0$ and $h_1$ count the numbers of 0 and 1 entries in $x_a$, respectively. Intuitively, we expect that as $|\Delta_a|$ increases, it becomes more difficult to approximate the partition function. See the supplementary material for additional information on how we generate the random models.

Figure 1: Averaged log-partition approximation error vs. interaction strength $\Delta$ for generic (non-log-supermodular) GMs on complete graphs of size 4, 5 and 6 (left, middle, right), where the average is taken over 20 random models.

Figure 2: Averaged log-partition approximation error vs. interaction strength $\Delta$ for log-supermodular GMs on complete graphs of size 4, 5 and 6 (left, middle, right), where the average is taken over 20 random models.

Figure 3: Averaged ratio of the log-partition function estimate compared to MF vs. graph size (i.e., number of factors) for generic (non-log-supermodular) GMs on 3-regular graphs (left) and grid graphs (right), where the average is taken over 20 random models.

Figure 4: Averaged ratio of the log-partition function estimate compared to MF vs. interaction strength $\Delta$ for log-supermodular GMs on 3-regular graphs of size 200 (left) and grid graphs of size 100 (right), where the average is taken over 20 random models.

In the first set of experiments, we consider relatively small complete graphs with two types of factors: random generic (non-log-supermodular) factors and log-supermodular (positive/ferromagnetic) factors. Recall that bare BP also provides a lower bound in the log-supermodular case [29], which makes the comparison between each proposed algorithm and BP informative. We use the log-partition approximation error, defined as $|\log Z - \log Z_{\mathrm{LB}}| / |\log Z|$, where $Z_{\mathrm{LB}}$ is the algorithm output (a lower bound of $Z$), to quantify each algorithm's performance.
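To make the experimental setup concrete, the snippet below (our own sketch; all names are illustrative, not from the paper's code) constructs one random factor of the form above and evaluates the error metric plotted in Figures 1 and 2.

```python
# Illustrative sketch of the random factor model and the reported metric.
import numpy as np

def random_factor(degree, delta):
    """f_a(x_a) = exp(-delta * |h0(x_a) - h1(x_a)|) over the 2**degree states."""
    f = np.empty([2] * degree)
    for idx in np.ndindex(*f.shape):
        h1 = sum(idx)            # number of 1s among node a's edge variables
        h0 = degree - h1         # number of 0s
        f[idx] = np.exp(-delta * abs(h0 - h1))
    return f

def log_partition_error(log_Z, log_Z_lb):
    """|log Z - log Z_LB| / |log Z|, the error plotted in Figures 1 and 2."""
    return abs(log_Z - log_Z_lb) / abs(log_Z)

print(random_factor(degree=3, delta=0.5))
print(log_partition_error(np.log(10.0), np.log(8.0)))
```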
In the first set of experiments, we deal with relatively small graphs, so the explicit computation of $Z$ (i.e., of the approximation error) is feasible. The results for the small graphs are illustrated in Figure 1 and Figure 2 for the non-log-supermodular and log-supermodular cases, respectively. Figure 1 shows that, as expected, G-MF always outperforms MF. Moreover, we observe that G-MF typically provides the tightest lower bound, unless it is outperformed by G-BP-multiple or G-BP-sequential. We remark that BP is not shown in Figure 1 because, in the non-log-supermodular case, it does not provide a lower bound in general. According to Figure 2, which shows the log-supermodular case, both G-MF and G-BP outperform MF, while G-BP-sequential outperforms all other algorithms. Notice that G-BP performs rather similarly to BP in the log-supermodular case, suggesting that the constraints distinguishing (8) from (5) are only mildly violated.

In the second set of experiments, we consider sparser, larger graphs of two types: 3-regular and grid graphs with up to 200 factors/300 variables. As in the first set of experiments, the same non-log-supermodular/log-supermodular factors are considered. Since computing the exact approximation error is not feasible for the large graphs, we instead measure the ratio of the estimate produced by the proposed algorithm to that of MF, i.e., $\log(Z_{\mathrm{LB}}/Z_{\mathrm{MF}})$, where $Z_{\mathrm{MF}}$ is the output of MF. Note that a larger value of this ratio indicates better performance. The results are reported in Figure 3 and Figure 4 for the non-log-supermodular and log-supermodular cases, respectively. In Figure 3, we observe that G-MF and G-BP-sequential outperform MF significantly, e.g., by a factor of up to $e^{14}$ on 3-regular graphs of size 200. We also observe that even bare G-BP outperforms MF. In Figure 4, the algorithms associated with G-BP outperform G-MF and MF (by a factor of up to $e^{25}$). This is because the choice of $q(x)$ for G-BP is favored by log-supermodular models: most configurations are concentrated around $(0,0,\cdots)$, matching the choice (7) of $q(x)$ for G-BP. One observes (again) that the performance of G-BP in the log-supermodular case is almost on par with BP. This implies that G-BP generalizes BP well: the former provides a lower bound of $Z$ for arbitrary GMs, while the latter does so only for log-supermodular GMs.

5 Conclusion

We explore the freedom in gauge transformations of GMs and develop novel variational inference methods which significantly improve the estimation of the partition function. We note that the GT methodology, applied here to improve MF and BP, can also be used to improve and extend the utility of other variational methods.

Acknowledgments

This work was supported in part by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-15-05-ETRI), the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-01778, Development of Explainable Human-level Deep Machine Learning Inference Framework), and the ICT R&D program of MSIP/IITP (2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion).

References

[1] Robert Gallager. Low-density parity-check codes. IRE Transactions on Information Theory, 8(1):21–28, 1962.
[2] Frank R. Kschischang and Brendan J. Frey. Iterative decoding of compound codes by probability propagation in graphical models.
IEEE Journal on Selected Areas in Communications, 16(2):219–230, 1998.
[3] Hans A. Bethe. Statistical theory of superlattices. Proceedings of the Royal Society of London A, 150:552, 1935.
[4] Rudolf E. Peierls. Ising's model of ferromagnetism. Proceedings of the Cambridge Philosophical Society, 32:477–481, 1936.
[5] Marc Mézard, Giorgio Parisi, and M. A. Virasoro. Spin Glass Theory and Beyond. Singapore: World Scientific, 1987.
[6] Giorgio Parisi. Statistical Field Theory, 1988.
[7] Marc Mézard and Andrea Montanari. Information, Physics, and Computation. Oxford University Press, Inc., New York, NY, USA, 2009.
[8] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 2014.
[9] Michael Irwin Jordan. Learning in Graphical Models, volume 89. Springer Science & Business Media, 1998.
[10] William T. Freeman, Egon C. Pasztor, and Owen T. Carmichael. Learning low-level vision. International Journal of Computer Vision, 40(1):25–47, 2000.
[11] Mark Jerrum and Alistair Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22(5):1087–1116, 1993.
[12] Ethem Alpaydin. Introduction to Machine Learning. MIT Press, 2014.
[13] Judea Pearl. Reverend Bayes on inference engines: A distributed hierarchical approach. Cognitive Systems Laboratory, School of Engineering and Applied Science, University of California, Los Angeles, 1982.
[14] Qiang Liu and Alexander T. Ihler. Negative tree reweighted belief propagation. arXiv preprint arXiv:1203.3494, 2012.
[15] Stefano Ermon, Ashish Sabharwal, Bart Selman, and Carla P. Gomes. Density propagation and improved bounds on the partition function. In Advances in Neural Information Processing Systems, pages 2762–2770, 2012.
[16] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
[17] Qiang Liu and Alexander T. Ihler. Bounding the partition function using Hölder's inequality. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 849–856, 2011.
[18] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Tree-based reparametrization framework for approximate estimation on graphs with cycles. IEEE Transactions on Information Theory, 49(5):1120–1146, 2003.
[19] Michael Chertkov and Vladimir Chernyak. Loop calculus in statistical physics and information science. Physical Review E, 73:065102(R), 2006.
[20] Michael Chertkov and Vladimir Chernyak. Loop series for discrete statistical models on graphs. Journal of Statistical Mechanics, page P06009, 2006.
[21] Leslie G. Valiant. Holographic algorithms. SIAM Journal on Computing, 37(5):1565–1594, 2008.
[22] Ali Al-Bashabsheh and Yongyi Mao. Normal factor graphs and holographic transformations. IEEE Transactions on Information Theory, 57(2):752–763, 2011.
[23] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1):1–305, 2008.
[24] G. David Forney Jr. and Pascal O. Vontobel. Partition functions of normal factor graphs. arXiv preprint arXiv:1102.0316, 2011.
[25] Michael Chertkov. Lecture notes on "Statistical inference in structured graphical models: Gauge transformations, belief propagation & beyond", 2016.
[26] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms.
IEEE Transactions on Information Theory, 51(7):2282–2312, 2005.
[27] Vladimir Y. Chernyak and Michael Chertkov. Loop calculus and belief propagation for q-ary alphabet: Loop tower. In Information Theory, 2007. ISIT 2007. IEEE International Symposium on, pages 316–320. IEEE, 2007.
[28] Ryuhei Mori. Holographic transformation, belief propagation and loop calculus for generalized probabilistic theories. In Information Theory (ISIT), 2015 IEEE International Symposium on, pages 1099–1103. IEEE, 2015.
[29] Nicholas Ruozzi. The Bethe partition function of log-supermodular graphical models. In Advances in Neural Information Processing Systems, pages 117–125, 2012.
[30] Adrian Weller, Kui Tang, Tony Jebara, and David Sontag. Understanding the Bethe approximation: when and how can it go wrong? In UAI, pages 868–877, 2014.
[31] Michael Chertkov, Vladimir Y. Chernyak, and Razvan Teodorescu. Belief propagation and loop series on planar graphs. Journal of Statistical Mechanics: Theory and Experiment, 2008(05):P05003, 2008.
[32] Sung-Soo Ahn, Michael Chertkov, and Jinwoo Shin. Synthesis of MCMC and belief propagation. In Advances in Neural Information Processing Systems, pages 1453–1461, 2016.
[33] Andreas Wächter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25–57, 2006.
[34] G. David Forney. Codes on graphs: Normal realizations. IEEE Transactions on Information Theory, 47(2):520–548, 2001.
[35] Martin Wainwright and Michael Jordan. Graphical models, exponential families, and variational inference. Technical Report 649, UC Berkeley, Department of Statistics, 2003.
[36] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Bethe free energy, Kikuchi approximations, and belief propagation algorithms. Advances in Neural Information Processing Systems, 13, 2001.
[37] Michael Chertkov and Vladimir Y. Chernyak. Loop series for discrete statistical models on graphs. Journal of Statistical Mechanics: Theory and Experiment, 2006(06):P06009, 2006.
Deep Recurrent Neural Network-Based Identification of Precursor microRNAs

Seunghyun Park (Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea; School of Electrical Engineering, Korea University, Seoul 02841, Korea)
Seonwoo Min (Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea)
Hyun-Soo Choi (Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea)
Sungroh Yoon† (Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea) [email protected]

† To whom correspondence should be addressed.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

MicroRNAs (miRNAs) are small non-coding ribonucleic acids (RNAs) which play key roles in post-transcriptional gene regulation. Direct identification of mature miRNAs is infeasible due to their short lengths, and researchers instead aim at identifying precursor miRNAs (pre-miRNAs). Many of the known pre-miRNAs have a distinctive stem-loop secondary structure, and structure-based filtering is usually the first step in predicting the possibility of a given sequence being a pre-miRNA. To identify new pre-miRNAs, which often have non-canonical structure, we need to consider additional features beyond structure. To obtain such additional characteristics, existing computational methods rely on manual feature extraction, which inevitably limits the efficiency, robustness, and generalization of computational identification. To address the limitations of existing approaches, we propose a pre-miRNA identification method that incorporates (1) a deep recurrent neural network (RNN) for automated feature learning and classification, (2) a multimodal architecture for seamless integration of prior knowledge (secondary structure), (3) an attention mechanism for improving the modeling of long-term dependence, and (4) an RNN-based class activation mapping for highlighting the learned representations that can contrast pre-miRNAs and non-pre-miRNAs. In our experiments with recent benchmarks, the proposed approach outperformed the compared state-of-the-art alternatives in terms of various performance metrics.

1 Introduction

MicroRNAs (miRNAs) play crucial roles in post-transcriptional gene regulation by binding to the 3′ untranslated region of target messenger RNAs [16]. Among the research problems related to miRNAs, the computational identification of miRNAs has been one of the most significant. The biogenesis of a miRNA consists of the primary miRNA stage, the precursor miRNA (pre-miRNA) stage, and the mature miRNA stage [17]. Mature miRNAs are usually short, having only 20–23 base pairs (bp), and it is difficult to identify them directly. Most computational approaches focus on detecting pre-miRNAs, since they are usually more identifiable: they are longer (approximately 80 bp) and have a distinctive stem-loop secondary structure. In terms of machine learning (ML), pre-miRNA identification can be viewed as a binary classification problem in which a given sequence must be classified as either a pre-miRNA or a non-pre-miRNA.
A variety of computational approaches for miRNA identification have been proposed; we can broadly classify them [18] into rule-based approaches, such as MIReNA [14], and ML-based approaches, which can be categorized into three groups in terms of the classification algorithm used: (1) MiPred [12], microPred [11], triplet-SVM [15], iMiRNA-SSF [38], miRNApre [39], and miRBoost [7], which use support vector machines; (2) MiRANN [1] and DP-miRNA [37], which use neural networks; and (3) CSHMM [13], which uses a context-sensitive hidden Markov model.

Known pre-miRNAs have distinctive structural characteristics, and therefore most computational methods make first-order decisions based on the secondary structure of the input RNA sequence. However, identifying new pre-miRNAs with non-canonical structures, subtle properties, or both requires the consideration of features other than secondary structure. Some authors [19] have even argued that the performance of ML-based tools depends more on the set of input features than on the ML algorithms used. The discovery of new features that are effective in pre-miRNA identification currently involves either searching for hand-crafted features (such as the frequency of nucleotide triplets in the loop, global and intrinsic folding attributes, stem length, and minimum free energy) or combining existing features. One recent study utilized 187 such features [7], another 48 features [11], most of which were manually prepared. Manual feature extraction requires ingenuity and inevitably limits the efficiency, robustness, and generalization of the resulting identification scheme. Furthermore, the neural network-based methods above use neural networks only for the classification of hand-designed features, not for feature learning.

Similar challenges exist in other disciplines. Recently, end-to-end deep learning approaches have been successfully applied to tasks such as speech and image recognition, largely eliminating manual feature engineering. Motivated by these successes, we propose a deep neural network-based pre-miRNA identification method, which we call deepMiRGene, to address the limitations of existing approaches. It incorporates the following key components:

1. A deep recurrent neural network (RNN) with long short-term memory (LSTM) units for RNA sequence modeling, automated feature learning, and robust classification based on the learned representations.
2. A multimodal architecture for seamless integration of prior knowledge (such as the importance of RNA secondary structure in pre-miRNA identification) with automatically learned features.
3. An attention mechanism for effective modeling of the long-term dependence between the primary structure (i.e., the sequence) and the secondary structure of RNA molecules.
4. An RNN-based class activation mapping (CAM) to highlight the learned representations in a way that contrasts pre-miRNAs and non-pre-miRNAs, in order to obtain biological insight.

We found that simply combining existing deep learning modules did not deliver satisfactory performance in our task. Our contribution can thus be seen as a novel pipeline with components optimized for handling RNA sequences and structures to predict (possibly subtle) pre-miRNA signals, rather than a mere assembly of pre-packaged components. Our search for an optimal set of RNN architectures and hyperparameters for pre-miRNA identification involved an exploration of the design space spanned by the components of our methodology.
The result of this research is a technique with demonstrable advantages over other state-of-the-art alternatives in terms of not only cross-validation results but also generalization ability (i.e., performance on test data). The source code for the proposed method is available at https://github.com/eleventh83/deepMiRGene.

2 Related Work

2.1 The Secondary Structure of a Pre-miRNA

The secondary structure of an RNA transcript represents the base-pairing interactions within that transcript. The usual secondary structure of a pre-miRNA is shown in Fig. 1, which shows that a pre-miRNA is a base-paired double helix rather than a single strand; this pairing is one of the most prominent features for pre-miRNA identification [12, 11].

Figure 1: (A) the sequence of a pre-miRNA (the example is ACGUGCCACGAUUCAACGUGGCACAG, with dot-bracket structure ..((((((((......))))))))..), and (B) the secondary structure of the given sequence. The dot-bracket notation in (A) describes RNA secondary structures: unpaired nucleotides are represented as '.'s and base-paired nucleotides as '('s and ')'s.

Figure 2: Overview of our method: #sample is the number of input sequences and l_seq is the maximum length of the input sequence; the dimension of intermediate data is labeled, e.g., (#sample, l_seq, 16). The pipeline proceeds from input sequences and folding states (secondary structures) through one-hot encoding and merging, then through two LSTM layers (L1_LSTM, L2_LSTM) with an attention (softmax) step, followed by three fully connected layers (L1_FC of size (l_seq·10)×400, L2_FC of size 400×100, L3_FC of size 100×2), each with dropout 0.1, to a softmax True/False output.

The secondary structure of a given sequence can be predicted by tools such as RNAfold [5], which is widely used. It constructs a thermodynamically stable secondary structure from a given RNA sequence by calculating the minimum free energy and the probable base pairings [20]. However, reliable pre-miRNA identification requires features other than the secondary structure to be considered, since false positives may be generated due to the limitations of structure prediction algorithms and the inherent unpredictability of these structures [21].

2.2 Deep Recurrent Neural Networks

RNNs are frequently used for sequential modeling and learning. RNNs process one element of input data at a time and implicitly store past information using cyclic connections of hidden units [8]. However, early RNNs often had difficulty in learning long-term dependencies because of the vanishing or exploding gradient problem [9]. Recent deep RNNs incorporate mechanisms to address this problem. Explicit memory units, such as LSTM units [10] or GRUs [3], are one such mechanism. An LSTM unit, for example, works as a sophisticated hidden unit that uses multiplicative gates to learn when to input, output, and forget, in addition to cyclic connections that store the state vector. A more recent innovation [2] is the attention mechanism. This can take various forms, but in our system a weighted combination of the output vectors at each point in time replaces the single final output vector of a standard RNN.
An attention mechanism of this sort helps learn long-term dependencies and also facilitates the interpretation of results, e.g., by showing how closely the output at a specific time point is related to the final output [30, 29, 2].

3 Methodology

Fig. 2 shows the proposed methodology of our system. The input consists of either a set of pre-miRNA sequences (in the training phase) or a test sequence (in the testing phase). The output for each input sequence is a two-dimensional (softmax) vector which indicates whether the input sequence encodes a pre-miRNA or not. In a preprocessing phase, we derive the secondary structure of the input sequence and then encode the sequence and its structure together into a 16-dimensional binary vector. The encoded vectors are then processed by the RNN architecture, consisting of LSTM layers and fully connected (FC) layers, and the attention mechanism. The pseudocode of our approach is available as Appendix A in the supplementary material.

3.1 Preprocessing

Preprocessing a set of input pre-miRNA sequences involves two tasks. First, RNAfold is used to obtain the secondary structure of each sequence; we have already noted the importance of this data. Each position in an RNA sequence is one of {A, C, G, U}, and the corresponding location in the secondary structure is one of {(, ), ., :}. This dot-bracket notation is shown in Fig. 1; the symbol ':' represents a position inside a loop (unpaired nucleotides surrounded by a stem). Let $x_s$ and $x_t$ denote an input sequence and its secondary structure, respectively. Then $x_s \in \{A, C, G, U\}^{|x_s|}$ and $x_t \in \{(, ), ., :\}^{|x_t|}$. Note that $|x_s| = |x_t|$.

Next, each input sequence $x_s$ is combined with its secondary structure $x_t$ into a numerical representation. This is a simple one-hot encoding [4], which gave better results in our experiments than a soft encoding (see Section 4). Our encoding scheme uses a 16-dimensional one-hot vector, in which position $i$ ($i = 0, 1, \ldots, 15$) is interpreted as follows:
$$\lfloor i/4 \rfloor = \begin{cases} 0 & \text{then } \mathrm{A} \\ 1 & \text{then } \mathrm{C} \\ 2 & \text{then } \mathrm{G} \\ 3 & \text{then } \mathrm{U} \end{cases} \qquad \text{and} \qquad i \,\%\, 4 = \begin{cases} 0 & \text{then } ( \\ 1 & \text{then } ) \\ 2 & \text{then } . \\ 3 & \text{then } : \end{cases}$$
The '%' symbol denotes the modulo operator. After preprocessing, the sequence $x_s$ and the structure $x_t$ are together represented by the matrix $X_s \in \{0,1\}^{|x_s| \times 16}$, each row of which is the 16-dimensional one-hot vector described above. For instance, $x_s = \mathrm{AUG}$ and $x_t = {(:)}$ are represented by the following $3 \times 16$ binary matrix, whose columns are grouped as A, C, G, U with within-group order (, ), ., ::
$$X_s = \begin{pmatrix} 1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0 \\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1 \\ 0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0 \end{pmatrix}.$$
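The encoding just described can be reproduced in a few lines. The sketch below is our own re-implementation (not taken from the released code) and reproduces the $3 \times 16$ example matrix above; the index of the active bit is $i = 4 \cdot (\text{base index}) + (\text{structure index})$.

```python
# One-hot encoding of an RNA sequence plus its dot-bracket structure
# into a |x_s| x 16 binary matrix; illustrative re-implementation.
import numpy as np

BASES = {'A': 0, 'C': 1, 'G': 2, 'U': 3}
STRUCTS = {'(': 0, ')': 1, '.': 2, ':': 3}

def encode(seq, struct):
    assert len(seq) == len(struct)
    X = np.zeros((len(seq), 16), dtype=np.int8)
    for pos, (s, t) in enumerate(zip(seq, struct)):
        X[pos, 4 * BASES[s] + STRUCTS[t]] = 1   # i = 4*base + structure
    return X

print(encode('AUG', '(:)'))   # active bits at indices 0, 15, 9, as above
```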
3.2 Neural Network Architecture

The main features of our neural network are the attention mechanism and the LSTM and FC layers.

1) LSTM layers: The purpose of these layers is the sequential modeling of the primary and secondary structure of the input pre-miRNA transcripts. We use two stacked LSTM layers, denoted by $L_1^{\mathrm{LSTM}}$ and $L_2^{\mathrm{LSTM}}$ respectively. $L_1^{\mathrm{LSTM}}$ takes the matrix $X_s$ produced in the preprocessing stage and returns a weight matrix $H_1$, as follows:
$$H_1 = L_1^{\mathrm{LSTM}}(X_s) \in \mathbb{R}^{|x_s| \times d_1}, \qquad (1)$$
where $d_1$ is the number of LSTM units in the first layer. Similarly, the second layer returns a second weight matrix $H_2$:
$$H_2 = L_2^{\mathrm{LSTM}}(H_1) \in \mathbb{R}^{|x_s| \times d_2}, \qquad (2)$$
where $d_2$ is the number of LSTM units in the second layer.

We apply an attention mechanism to the output of $L_2^{\mathrm{LSTM}}$ with the aim of learning the importance of each position of $x_s$. The neural network first learns an attention weight for each output of the second LSTM layer at each sequence position during training. These weights are collectively represented by a matrix $\Theta \in \mathbb{R}^{d_2 \times |x_s|}$. An attention weight matrix $A_{\mathrm{att}} \in \mathbb{R}^{|x_s| \times |x_s|}$ is then constructed as follows:
$$A_{\mathrm{att}} = H_2 \Theta. \qquad (3)$$
This yields the attention weight vector $\alpha_{\mathrm{att}}$:
$$\alpha_{\mathrm{att}} = \mathrm{softmax}(\mathrm{diag}(A_{\mathrm{att}})) \in \mathbb{R}^{|x_s|}, \qquad (4)$$
where the $i$-th element of $\alpha_{\mathrm{att}}$ corresponds to the attention weight for the $i$-th position of $x_s$. Then $H_{\mathrm{att}} \in \mathbb{R}^{|x_s| \times d_2}$, the attention-weighted representation of $H_2$, can be expressed as follows:
$$H_{\mathrm{att}} = H_2 \odot (\alpha_{\mathrm{att}} \otimes u_{d_2}), \qquad (5)$$
where $u_{d_2}$ is the $d_2$-dimensional unit vector, and $\odot$ and $\otimes$ respectively denote the element-wise multiplication and outer product operators. Finally, we reshape the matrix $H_{\mathrm{att}}$ by flattening it into a $(d_2 \cdot |x_s|)$-dimensional vector $\tilde{h}_{\mathrm{att}}$ for the sake of compatibility with third-party software. We use the standard nonlinearities (i.e., hyperbolic tangent and logistic sigmoid) inside each LSTM cell.

2) Fully connected layers: The neural network collects the outputs from the last LSTM layer and makes a final decision using three FC layers. We denote the operations performed by these three FC layers by $L_1^{\mathrm{FC}}$, $L_2^{\mathrm{FC}}$, and $L_3^{\mathrm{FC}}$, which allows us to represent the outputs of the three FC layers as $f_1 = L_1^{\mathrm{FC}}(\tilde{h}_{\mathrm{att}})$, $f_2 = L_2^{\mathrm{FC}}(f_1)$, and $\hat{y} = L_3^{\mathrm{FC}}(f_2)$, where $f_1 \in \mathbb{R}^{d_3}$ and $f_2 \in \mathbb{R}^{d_4}$ are intermediate vectors, and $\hat{y} \in \mathbb{R}^2$ denotes the final softmax output; $d_3$ and $d_4$ are the numbers of hidden nodes in the last two FC layers. The first two FC layers use logistic sigmoids as their activation functions, while the last FC layer uses the softmax function.

3.3 Training

We based our training objective on binary cross-entropy (also known as log loss). As will be explained in Section 4 (see Table 1), we encountered a class-imbalance problem in this study, since there are significantly more negative training examples (non-pre-miRNA sequences) than positives (known pre-miRNA sequences). We addressed this issue by augmenting the log-loss training objective with balanced class weights [31], so that the training error $E$ is expressed as follows:
$$E = -\frac{1}{b} \sum_i \left[ c^- y_i \log(\hat{y}_i) + c^+ (1 - y_i) \log(1 - \hat{y}_i) \right],$$
where $b$ is the mini-batch size (we used $b = 128$), and $y_i \in \{0,1\}$ is the class label provided in the training data ($y_i = 0$ for pre-miRNA; $y_i = 1$ for non-pre-miRNA); $c^-$ and $c^+$ represent the balanced class weights, given by
$$c_k = \frac{N}{2 n_k}, \qquad k \in \{-, +\}, \qquad (6)$$
where $N$ is the total number of training examples and $n_k$ is the number of examples in either the positive or the negative class.

We minimized $E$ using the Adam [6] gradient-descent method, which uses learning rates that adapt to the first and second moments of the gradients of each parameter. We tried other optimization methods (e.g., stochastic gradient descent [27] and RMSprop [28]), but they did not give better results. We used dropout regularization with an empirical setup. In the LSTM layers, the dropout parameter for the input gates and that for the recurrent connections were both set to 0.1. In the FC layers, we set the dropout parameter to 0.1. We tried batch normalization [22] but did not find it effective. All weights were randomly initialized in the range $[-0.05, 0.05]$. The numbers of hidden nodes in the LSTM ($d_1$, $d_2$) and FC ($d_3$, $d_4$) layers were determined by cross-validation as $d_1 = 20$, $d_2 = 10$, $d_3 = 400$, and $d_4 = 100$. The mini-batch size and the number of training epochs were set to 128 and 300, respectively.
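For concreteness, the balanced class weights of (6) and the weighted log-loss objective can be computed as in the following sketch (our own code; the label vector is made up for illustration).

```python
# Balanced class weights c_k = N / (2 * n_k) and the weighted log loss.
import numpy as np

def balanced_class_weights(labels):
    labels = np.asarray(labels)
    return {k: labels.size / (2.0 * np.sum(labels == k)) for k in (0, 1)}

def weighted_logloss(y, y_hat, c_neg, c_pos, eps=1e-12):
    """E = -(1/b) sum_i [c- y_i log(yhat_i) + c+ (1 - y_i) log(1 - yhat_i)]."""
    y = np.asarray(y, dtype=float)
    y_hat = np.clip(np.asarray(y_hat, dtype=float), eps, 1 - eps)
    return -np.mean(c_neg * y * np.log(y_hat) + c_pos * (1 - y) * np.log(1 - y_hat))

y = np.array([0, 0, 1, 1, 1, 1, 1, 1])   # 0: pre-miRNA, 1: non-pre-miRNA
w = balanced_class_weights(y)            # {0: 2.0, 1: 0.667}: rare class upweighted
print(weighted_logloss(y, np.full(8, 0.5), c_neg=w[1], c_pos=w[0]))
```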
4 Experimental Results

We used three public benchmark datasets [7] named human, cross-species, and new. The positive pre-miRNA sequences in all three datasets were obtained from miRBase [25] (release 18). For the negative training sets, we obtained noncoding RNAs other than pre-miRNAs, and exonic regions of protein-coding genes, from NCBI (http://www.ncbi.nlm.nih.gov), fRNAdb [23], NONCODE [24], and snoRNA-LBME-db [26].

Table 1: Numbers of sequences in the three benchmark datasets [7] used in this study. The median length of each dataset is given in brackets.

  Type \ Dataset        Human        Cross-species   New
  Positive examples     863 (85)     1677 (93)       690 (71)
  Negative examples     7422 (92)    8266 (96)       8246 (96)

Table 2: Performance evaluation of different pre-miRNA identification methods with cross-validation (CV) and test data, using sensitivity (SE), specificity (SP), positive predictive value (PPV), F-score, geometric mean (g-mean), area under the receiver operating characteristic curve (AUROC), and area under the precision-recall curve (AUPR).

Human:
  Methods              SE     SP     PPV    F-score  g-mean  AUROC  AUPR
  miRBoost (CV)        0.803  0.988  0.887  0.843    0.891   -      -
  CSHMM (CV)           0.713  0.777  0.559  0.570    0.673   -      -
  triplet-SVM (CV)     0.669  0.986  0.851  0.749    0.812   0.957  0.854
  microPred (CV)       0.763  0.989  0.888  0.820    0.869   0.974  0.890
  MIReNA (CV)          0.818  0.943  0.624  0.708    0.878   -      -
  Proposed (CV)        0.799  0.988  0.885  0.839    0.888   0.984  0.915
  miRBoost (test)      0.884  0.969  0.768  0.822    0.925   -      -
  CSHMM (test)         0.616  0.978  0.768  0.684    0.777   -      -
  triplet-SVM (test)   0.744  0.992  0.914  0.821    0.859   0.947  0.830
  microPred (test)     0.779  0.988  0.882  0.827    0.877   0.980  0.892
  MIReNA (test)        0.826  0.941  0.617  0.706    0.881   -      -
  Proposed (test)      0.822  0.992  0.919  0.868    0.903   0.981  0.918

Cross-species:
  Methods              SE     SP     PPV    F-score  g-mean  AUROC  AUPR
  miRBoost (CV)        0.861  0.977  0.884  0.872    0.917   -      -
  CSHMM (CV)           0.826  0.576  0.533  0.564    0.524   -      -
  triplet-SVM (CV)     0.735  0.967  0.819  0.775    0.843   0.943  0.869
  microPred (CV)       0.825  0.975  0.875  0.848    0.897   0.970  0.873
  MIReNA (CV)          0.766  0.952  0.765  0.765    0.854   -      -
  Proposed (CV)        0.886  0.982  0.911  0.898    0.933   0.985  0.927
  miRBoost (test)      0.856  0.844  0.526  0.651    0.850   -      -
  CSHMM (test)         0.749  0.960  0.791  0.769    0.848   -      -
  triplet-SVM (test)   0.760  0.977  0.870  0.812    0.862   0.952  0.908
  microPred (test)     0.814  0.985  0.919  0.863    0.896   0.963  0.906
  MIReNA (test)        0.796  0.950  0.764  0.780    0.870   -      -
  Proposed (test)      0.900  0.983  0.913  0.906    0.940   0.984  0.955

TP: true positive, TN: true negative, FP: false positive, FN: false negative. SE = TP/(TP + FN); SP = TN/(TN + FP); PPV (precision) = TP/(TP + FP); F-score = 2TP/(2TP + FP + FN); g-mean = √(SE × SP).

Note that we only acquired those datasets that had undergone redundancy removal and had their annotation corrected by the data owners. As shown in Table 1, the human dataset contains 863 human pre-miRNA sequences (positive examples) and 7422 non-pre-miRNA sequences (negative examples). The cross-species dataset contains 1677 pre-miRNA sequences collected from various species (e.g., human, mouse, and fly) and 8266 non-pre-miRNA sequences. The new dataset has 690 newly discovered pre-miRNA sequences, which are in miRBase releases 19 and 20, with 8246 non-pre-miRNA sequences. For the human and cross-species datasets, 10% of the data was randomly chosen as a clean test dataset (also known as a publication dataset) and was never used in training. Using the remaining 90% of each dataset, we carried out five-fold cross-validation for training and model selection. Note that the new dataset was used for testing purposes only, as described in Tran et al. [7]. Additional details of the experimental settings used can be found in Appendix B.
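The single-threshold metrics defined in the footnotes of Table 2 can be computed directly from confusion-matrix counts, as in this small sketch (ours; the counts shown are made up for illustration).

```python
# SE, SP, PPV, F-score, and g-mean from raw TP/TN/FP/FN counts.
import math

def metrics(tp, tn, fp, fn):
    se = tp / (tp + fn)                 # sensitivity (recall)
    sp = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                # positive predictive value (precision)
    f = 2 * tp / (2 * tp + fp + fn)     # F-score
    g = math.sqrt(se * sp)              # geometric mean of SE and SP
    return dict(SE=se, SP=sp, PPV=ppv, F_score=f, g_mean=g)

print(metrics(tp=80, tn=720, fp=22, fn=18))
```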
4.1 Validation and Test Performance Evaluation

We used seven evaluation metrics: sensitivity (SE), specificity (SP), positive predictive value (PPV), F-score, the geometric mean of SE and SP (g-mean), the area under the receiver operating characteristic curve (AUROC), and the area under the precision-recall curve (AUPR). Higher sensitivity indicates a more accurate pre-miRNA predictor, which is likely to assist the discovery of novel pre-miRNAs. Higher specificity indicates more effective filtering of pseudo pre-miRNAs, which increases the efficiency of biological experiments. Because they take account of results at different decision thresholds, AUROC and AUPR typically convey more information than basic metrics such as sensitivity, specificity, and PPV, which are computed at a single decision threshold. Note that miRBoost, MIReNA, and CSHMM do not provide decision values, so the AUROC and AUPR metrics cannot be obtained for these methods.

The results of a cross-validation performance comparison are shown in the upper half of Table 2, while the results of the test performance comparison are shown in the bottom half. For the human dataset, the cross-validation performance of our method was comparable to that of the others, but our method achieved the highest test performance in terms of F-score, AUROC, and AUPR. For the cross-species dataset, our method achieved the best overall performance in terms of both cross-validation and test evaluation results. Some tools, such as miRBoost, showed fair performance in terms of cross-validation but failed to deliver the same level of performance on the test data. These results suggest that our approach provides better generalization than the alternatives. The similarity of the cross-validation and test results suggests that overfitting was handled effectively.

Table 3: Evaluation of performance on the new dataset.

  Methods        SE     SP     PPV    F-score  g-mean  AUROC  AUPR
  miRBoost       0.921  0.936  0.609  0.733    0.928   -      -
  CSHMM          0.536  0.069  0.046  0.085    0.192   -      -
  triplet-SVM    0.721  0.981  0.759  0.740    0.841   0.934  0.766
  microPred      0.728  0.970  0.672  0.699    0.840   0.940  0.756
  MIReNA         0.450  0.941  0.392  0.419    0.650   -      -
  Proposed       0.917  0.964  0.682  0.782    0.941   0.981  0.808

Figure 3: Using both sequence and structure information gives the best performance on the human dataset. Each bar shows the metrics of the average cross-validation results.

Following the experimental setup used by Tran et al. [7], we also evaluated the proposed method on the new dataset, with a model trained on the cross-species dataset, as shown in Table 3, to assess the potential of our approach in the search for novel pre-miRNAs. Again, our method did not show the best performance in terms of basic metrics such as sensitivity and specificity, but it returned the best values of AUROC and AUPR. The results show that the proposed method can be used effectively to identify novel pre-miRNAs as well as to filter out pseudo pre-miRNAs. To evaluate the statistical significance of our approach, we applied a Kolmogorov–Smirnov test [40] to the classification scores produced by our method, grouped by true data labels. For the human, cross-species, and new datasets, the p-values we obtained were 5.23 × 10^-54, 6.06 × 10^-102, and 7.92 × 10^-49 respectively, indicating that the chance of these results occurring at random is very small indeed.
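The Kolmogorov–Smirnov check described above amounts to a standard two-sample test on the score distributions of the two label groups. A minimal SciPy sketch is given below (ours, with synthetic scores in place of the model's outputs).

```python
# Two-sample KS test on decision scores grouped by the true labels.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)          # hypothetical true labels
scores = np.where(labels == 0,                  # 0: pre-miRNA
                  rng.beta(5, 2, size=1000),    # positives score high
                  rng.beta(2, 5, size=1000))    # negatives score low

stat, p_value = ks_2samp(scores[labels == 0], scores[labels == 1])
print(stat, p_value)   # a tiny p-value means well-separated score distributions
```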
4.2 Effectiveness of Multimodal Learning

Our approach to the identification of pre-miRNAs takes both biological sequence information and secondary structure information into account. To assess the benefit of this multimodality, we measured the performance of our method when trained on the human dataset using only sequences or only secondary structures. As shown in Fig. 3, all of the performance metrics were higher when sequence and structure information were used together. Compared with the use of sequence or structure alone, the sensitivity of the multimodal approach was higher by 48 and 2 percentage points, respectively. For specificity, using both sequence and structure achieved a higher value (0.988) than using the sequence only (0.987) or the structure only (0.978). Similarly, in terms of F-score, multimodality gave a score (0.839) that is 29 and 5 percentage points higher than using the sequence only (0.649) or the structure only (0.795), respectively.

Figure 4: Attention-weighted RNN outputs with the human dataset. (A) Class activation mapping for predicted examples (negatives: non-pre-miRNAs). (B) Class activation mapping for predicted examples (positives: pre-miRNAs). (C) Stem-loop structure of a pre-miRNA (Homo sapiens miR-1-1).

Table 4: Performance of different types of neural network, assessed in terms of five-fold cross-validation results on the human dataset. The number of stacked layers is shown in brackets. ATT means that an attention mechanism was included, and BiLSTM is a bi-directional LSTM. The configuration that we finally adopted is shown in row 6.

  No.  Type                     SE     SP     F-score  g-mean
  1    1D-CNN(2)                0.745  0.978  0.771    0.853
  2    1D-CNN(2)+LSTM(2)        0.707  0.976  0.738    0.830
  3    1D-CNN(2)+LSTM(2)+ATT    0.691  0.979  0.739    0.822
  4    LSTM(2)                  0.666  0.988  0.751    0.810
  5    LSTM(1)+ATT              0.781  0.987  0.824    0.878
  6    LSTM(2)+ATT (proposed)   0.799  0.988  0.839    0.888
  7    BiLSTM(1)+ATT            0.783  0.987  0.827    0.879
  8    BiLSTM(2)+ATT            0.795  0.987  0.834    0.886

4.3 Gaining Insights by Analyzing Attention Weights

A key strength of our approach is its ability to learn the features useful for pre-miRNA identification from data. This improves efficiency and also has the potential to aid the discovery of subtle features that might be missed in manual feature design. However, learned features, which are implicitly represented by the trained weights of a deep model, come without intuitive significance. To address this issue, we experimented with the visualization of attention weights using class activation mapping [32], a technique originally proposed to interpret the operation of convolutional neural networks (CNNs) in image classification by highlighting discriminative regions. We modified the class activation mapping of RNNs to discover which parts of the sequential output are significant for identifying pre-miRNAs. We performed one-dimensional global average pooling (GAP) on the attention-weighted output $H_{\mathrm{att}}$ (see Section 3.2) to derive a $d_2$-dimensional weight vector $w_{\mathrm{gap}}$. We then multiplied $H_{\mathrm{att}}$ by $w_{\mathrm{gap}}$ to obtain a class activation map of size $|x_s|$ for each sequence sample.
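Reading the construction above literally (GAP over the length dimension of $H_{\mathrm{att}}$, then a weighted sum over the $d_2$ hidden units), the class activation map can be sketched as follows; this is our own interpretation in code, with illustrative shapes.

```python
# RNN class activation mapping: 1-D global average pooling over H_att,
# then a weighted combination over the d2 hidden units; sketch only.
import numpy as np

def rnn_cam(H_att):
    """H_att: |x_s| x d2 attention-weighted outputs -> length-|x_s| map."""
    w_gap = H_att.mean(axis=0)      # GAP along the sequence: d2-dim weights
    return H_att @ w_gap            # per-position class activation values

rng = np.random.default_rng(0)
H_att = rng.normal(size=(85, 10))   # e.g., an 85-nt sequence with d2 = 10
print(rnn_cam(H_att).shape)         # (85,)
```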
Fig. 4 (A) and Fig. 4 (B) show the resulting heatmap representations of class activation mapping on the human dataset for predicted negative and positive examples, respectively. Since sequences can have different lengths, we normalized the sequence lengths to 1 and present individual positions in a sequence between 0% and 100% on the x-axis. By comparing the plots in Fig. 4 (A) and (B), we can see that the class activation maps of the positive and negative data show clear differences, especially at the 20–50% sequence positions, within the red box in Fig. 4 (B). This region corresponds to the 5′ stem region of typical pre-miRNAs, as shown in Fig. 4 (C). It coincides with the location of a mature miRNA encoded within a pre-miRNA, suggesting that the data-driven features learned by our approach have revealed relevant characteristics of pre-miRNAs. The presence of certain nucleotide patterns in the mature miRNA region inside a pre-miRNA has recently been reported [33]. We anticipate that further interpretation of our data-driven features may assist in confirming such patterns, and also in discovering novel motifs in pre-miRNAs.

4.4 Additional Experiments

1) Architecture exploration: We explored various alternative network architectures, as listed in Table 4, which shows the performance of different network architectures annotated with the number of layers and the presence of an attention mechanism. Rows 1–3 of the table show results for CNNs with and without LSTM networks, rows 4–6 show results for LSTM networks, and rows 7–8 show results for bi-directional LSTM (BiLSTM) networks. More details can be found in Appendix C.1.

2) Additional results: Appendix C.2–4 presents further details of the hyperparameter tuning, the design decision between soft and hard encodings, and running-time comparisons.

5 Discussion

Given the importance of the secondary structure in pre-miRNA identification (e.g., see Section 4.2), we derived the secondary structure of each input sequence using RNAfold. We then combined the secondary structure information with the primary structure (i.e., the sequence) and sent the result to the RNN. A fully end-to-end approach to pre-miRNA identification, however, would need to learn even the secondary structure from the input sequences. Due to the limited number of known pre-miRNA sequences, this remains challenging future work. Our experimental results supported the effectiveness of a multimodal approach that considers sequences and structures together from an early stage of the pipeline. Incorporating other types of information would be possible and might improve performance further. For example, sequencing results from RNA-seq experiments reflect the expression levels and positions of each sequenced RNA [34], and conservation information would allow a phylogenetic perspective [35]. Such additional information could be integrated into the current framework by representing it as new network branches and merging them with the current data before the FC layers.

Our proposed method has the clear advantage over existing approaches that it does not require hand-crafted features. We still need to ensure that the learned features provide satisfactory performance, and they also need to have some biological meaning: biomedical researchers naturally hesitate to use a black-box methodology. Our method of visualizing attention weights provides a tool for opening that black box and assisting data-driven discovery.
Acknowledgments

This work was supported in part by the Samsung Research Funding Center of Samsung Electronics [No. SRFC-IT1601-05], the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) [No. 2016-0-00087], the Future Flagship Program funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea) [No. 10053249], the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning [No. 2016M3A7B4911115], and the Brain Korea 21 Plus Project in 2017.

References
[1] M. E. Rahman, et al. MiRANN: A reliable approach for improved classification of precursor microRNA using Artificial Neural Network model. Genomics, 99(4):189–194, 2012.
[2] K. Xu, et al. Show, attend and tell: Neural image caption generation with visual attention. In ICML, volume 14, pages 77–81, 2015.
[3] J. Chung, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[4] P. Baldi and S. Brunak. Chapter 6. Neural Networks: applications. In Bioinformatics: The Machine Learning Approach. MIT Press, 2001.
[5] I. L. Hofacker. Vienna RNA secondary structure server. Nucleic Acids Research, 31(13):3429–3431, 2003.
[6] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[7] V. D. T. Tran, et al. miRBoost: boosting support vector machines for microRNA precursor classification. RNA, 21(5):775–785, 2015.
[8] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[9] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[10] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] R. Batuwita and V. Palade. microPred: effective classification of pre-miRNAs for human miRNA gene prediction. Bioinformatics, 25(8):989–995, 2009.
[12] P. Jiang, et al. MiPred: classification of real and pseudo microRNA precursors using random forest prediction model with combined features. Nucleic Acids Research, 35(suppl 2):W339–W344, 2007.
[13] S. Agarwal, et al. Prediction of novel precursor miRNAs using a context-sensitive hidden Markov model (CSHMM). BMC Bioinformatics, 11(Suppl 1):S29, 2010.
[14] A. Mathelier and A. Carbone. MIReNA: finding microRNAs with high accuracy and no learning at genome scale and from deep sequencing data. Bioinformatics, 26(18):2226–2234, 2010.
[15] C. Xue, et al. Classification of real and pseudo microRNA precursors using local structure-sequence features and support vector machine. BMC Bioinformatics, 6(1):310, 2005.
[16] R. C. Lee, R. L. Feinbaum, and V. Ambros. The C. elegans heterochronic gene lin-4 encodes small RNAs with antisense complementarity to lin-14. Cell, 75(5):843–854, 1993.
[17] D. P. Bartel. MicroRNAs: genomics, biogenesis, mechanism, and function. Cell, 116(2):281–297, 2004.
[18] D. Kleftogiannis, et al. Where we stand, where we are moving: Surveying computational techniques for identifying miRNA genes and uncovering their regulatory role. Journal of Biomedical Informatics, 46(3):563–573, 2013.
[19] I. de O. N. Lopes, A. Schliep, and A. C. d. L. de Carvalho. The discriminant power of RNA features for pre-miRNA recognition. BMC Bioinformatics, 15(1):1, 2014.
[20] R. Lorenz, et al. ViennaRNA Package 2.0. Algorithms for Molecular Biology, 6(1):1, 2011.
[21] R. B. Lyngsø. Complexity of pseudoknot prediction in simple models. In Automata, Languages and Programming, pages 919–931. Springer, 2004.
[22] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[23] T. Kin, et al. fRNAdb: a platform for mining/annotating functional RNA candidates from non-coding RNA sequences. Nucleic Acids Research, 35(suppl 1):D145–D148, 2007.
[24] D. Bu, et al. NONCODE v3.0: integrative annotation of long noncoding RNAs. Nucleic Acids Research, page gkr1175, 2011.
[25] S. Griffiths-Jones, et al. miRBase: microRNA sequences, targets and gene nomenclature. Nucleic Acids Research, 34(suppl 1):D140–D144, 2006.
[26] L. Lestrade and M. J. Weber. snoRNA-LBME-db, a comprehensive database of human H/ACA and C/D box snoRNAs. Nucleic Acids Research, 34(suppl 1):D158–D162, 2006.
[27] L. Bottou. Stochastic gradient learning in neural networks. Proceedings of Neuro-Nîmes, 91(8), 1991.
[28] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
[29] O. Vinyals, et al. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773–2781, 2015.
[30] T. Rocktäschel, et al. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015.
[31] G. King and L. Zeng. Logistic regression in rare events data. Political Analysis, 9(2):137–163, 2001.
[32] B. Zhou, et al. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2921–2929, 2016.
[33] J. Starega-Roslan, P. Galka-Marciniak, and W. J. Krzyzosiak. Nucleotide sequence of miRNA precursor contributes to cleavage site selection by Dicer. Nucleic Acids Research, 43(22):10939–10951, 2015.
[34] M. R. Friedländer, et al. Discovering microRNAs from deep sequencing data using miRDeep. Nature Biotechnology, 26(4):407–415, 2008.
[35] N. Mendes, A. T. Freitas, and M.-F. Sagot. Current tools for the identification of miRNA genes and their targets. Nucleic Acids Research, 37(8):2419–2433, 2009.
[36] T. Mikolov, et al. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
[37] J. Thomas, S. Thomas, and L. Sael. DP-miRNA: An improved prediction of precursor microRNA using deep learning model. In Big Data and Smart Computing (BigComp), 2017 IEEE International Conference on, pages 96–99. IEEE, 2017.
[38] J. Chen, X. Wang, and B. Liu. iMiRNA-SSF: improving the identification of microRNA precursors by combining negative sets with different distributions. Scientific Reports, 6, 2016.
[39] L. Wei, et al. Improved and promising identification of human microRNAs by incorporating a high-quality negative set. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 11(1):192–201, 2014.
[40] H. W. Lilliefors. On the Kolmogorov-Smirnov test for normality with mean and variance unknown. Journal of the American Statistical Association, 62(318):399–402, 1967.
6,503
6,883
Robust Estimation of Neural Signals in Calcium Imaging

Hakan Inan, Stanford University, [email protected]
Murat A. Erdogdu, Microsoft Research, [email protected]
Mark J. Schnitzer, Stanford University, [email protected]

Abstract

Calcium imaging is a prominent technology in neuroscience research which allows for simultaneous recording of large numbers of neurons in awake animals. Automated extraction of neurons and their temporal activity from imaging datasets is an important step on the path to producing neuroscience results. However, nearly all imaging datasets contain gross contaminating sources, which could be due to the technology used or the underlying biological tissue. Although attempts have been made to better extract neural signals in limited gross contamination scenarios, there has been no effort to address contamination in full generality through statistical estimation. In this work, we proceed in a new direction and propose to extract cells and their activity using robust statistical estimation. Using the theory of M-estimation, we derive a minimax optimal robust loss, and also find a simple and practical optimization routine for this loss with provably fast convergence. We use our proposed robust loss in a matrix factorization framework to extract the neurons and their temporal activity in calcium imaging datasets. We demonstrate the superiority of our robust estimation approach over existing methods on both simulated and real datasets.

1 Introduction

Calcium imaging has become an indispensable tool in systems neuroscience research. It allows simultaneous imaging of the activity of very large ensembles of neurons in awake and even freely behaving animals [3, 4, 7]. It relies on fluorescence imaging of intracellular calcium activity reported by genetically encoded calcium indicators. A crucial task for a neuroscientist working with calcium imaging is to extract signals (i.e., temporal traces and spatial footprints of regions of interest) from the imaging dataset. This allows abstraction of useful information from a large dataset in a highly compressive manner, losing little to no information. Automating this process is highly desirable, as manual extraction of cells and their activities in large-scale datasets is prohibitively laborious and prone to flawed outcomes.

A variety of methods have been proposed for automated signal extraction in calcium imaging datasets, including ones based on matrix factorization [14, 15, 16, 17] and image segmentation [1, 11]. Some of these tools were tailored to 2-photon calcium imaging, where the signal-to-noise ratio is typically high and the background is fairly stable [3], whereas others targeted single-photon and microendoscopic imaging [4, 5], which are typically characterized by low SNR and large background fluctuations. Interestingly, least squares estimation has been the predominant paradigm among previous methods; yet there is no work that statistically addresses the generic nature of calcium imaging datasets, which includes non-Gaussian noise, non-cell background activity (e.g., neuropil), and overlapping cells not captured by algorithms (out-of-focus or foreground). As a consequence, the impact of such impurities inherent in calcium imaging on the accuracy of extracted signals has not been thoroughly investigated previously. This lack of focus on signal accuracy is worrisome, as cell extraction is a fairly early step in the research pipeline, and flawed signals may lead to incorrect scientific outcomes.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

In this work, we propose an approach which takes into account the practical nature of calcium imaging, and solves the signal extraction problem through robust estimation. First, we offer a mathematical abstraction of imaging datasets, and arrive at an estimator which is minimax robust, in the sense that is prevalent in the field of robust estimation. We then use this M-estimator to solve a matrix factorization problem, jointly yielding the temporal and spatial components of the extracted signals.

The main insight behind our robust estimation framework is that the signals present in imaging data are the superposition of many positive-amplitude sources, and a lower-amplitude noise component which is well modeled by a normal distribution. The majority of the components being positive stems from the fact that the underlying signals in calcium imaging are all made up of photons, and they elicit activity above a baseline as opposed to fluctuating around it. However, not all positive sources are cells that could be extracted by an algorithm (some could be neuropil, other noise, or non-captured cells); hence we model them as generic gross non-negative contamination sources. By using the machinery of robust estimation [8], we propose an M-estimator which is asymptotically minimax optimal for our setting.

We also propose a fast fixed-point optimization routine for solving our robust estimation problem. We show local linear convergence guarantees for our routine, and we demonstrate numerically that it converges very fast while having the same per-step cost as gradient descent. The fast optimizer allows for very fast automated cell extraction in large-scale datasets. Further, since the final form of our loss function is simple and optimization depends only on matrix algebra, it is highly amenable to GPU implementation, providing additional improvements.

We validate our robust estimation-based cell extraction algorithm on both synthetic and real datasets. We show that our method offers large accuracy improvements over non-robust techniques in realistic settings, which include classical scenarios such as overlapping cells and neuropil contamination. Particularly, our method significantly outperforms methods with non-robust reconstruction routines in metrics such as signal fidelity and crosstalk, which are crucial for steps subsequent to cell extraction.

2 M-Estimation under Gross Non-negative Contamination

In this section, we introduce our signal estimation machinery, based on the literature of robust M-estimation. The theory of M-estimation is well developed for symmetric and certain asymmetric contamination regimes [2, 8, 10, 13]; however, the existing theory does not readily suggest an optimal estimator suitable for finding the kind of signals present in fluorescence imaging of calcium in the brain. We first motivate and introduce a simple mathematical abstraction for this new regime, and then derive a minimax optimal M-estimator.

2.1 Noise Model & Mathematical Setting

For simplicity, we consider the setting of location estimation, which straightforwardly generalizes to multivariate regression. Considering the nature of contamination in calcium imaging datasets, we base our noise model on the following observation: the signal background is dominated by baseline activity which is well modeled by a normal distribution.
This type of noise stems from the random arrivals of photons from the background in the imaging setup governed by a Poisson process; this distribution very rapidly converges to a normal distribution. However, the signal background also contains other sources of noise such as neuropil activity, out-of-focus cells, and residual activity of overlapping cells not accounted for by the cell extraction method. The latter kind of contamination is very distinct from normal-type noise; it is non-negative (or above the signal baseline), its characteristics are rather irregular, and it may take on arbitrarily large values. Consequently, we model the data generation through an additive noise source which is normally distributed a 1 − ε fraction of the time, and free to be any positive value greater than a threshold otherwise:

    y_i = θ* + ε_i,                                              (1)

    ε_i ~ N(0, 1) w.p. 1 − ε,   ε_i ~ H_Δ w.p. ε,                (2)

    H_Δ ∈ 𝓗_Δ = {all distributions with support [Δ, ∞)}, Δ ≥ 0.

Figure 1: One-sided Huber. (a) Loss function of the one-sided Huber (ρ0) and its derivative (ψ0) for κ = 2. (b) One-sided Huber yields lower MSE compared to other known M-estimators under the distribution which causes the worst-case variance for any given estimator (for ε = 0.1).

In the above, θ* is the true parameter and is corrupted additively as in (1); ε_i is a standard normal with probability 1 − ε, and distributed according to an unknown distribution H_Δ with probability ε. In the spirit of full generality, we allow H_Δ to be any probability distribution with support above a set value Δ; in particular, it could be nonzero at arbitrarily large values. Therefore, ε could be interpreted as the gross contamination level. The parameter Δ could be interpreted as the minimum observed value of the positive contamination, although its exact value is insignificant outside our theoretical analysis. We denote the full noise distribution by F_HΔ, subscripted by H_Δ. Given the observations {y_i}_{i=1}^n, we estimate the true parameter θ* with θ̂ by considering an equivariant M-estimator as follows:

    θ̂ = argmin_θ Σ_{i=1}^n ρ(y_i − θ).                          (3)

Typically, M-estimators are characterized by ψ ≜ ρ'. In this paper, we are going to consider ψ's with specific properties that allow for efficient optimization and more general theoretical guarantees. Let us define a set

    Ψ = {ψ | ψ is non-decreasing}.

If we choose an estimator ψ ∈ Ψ, finding a point estimate θ̂ through (3) becomes equivalent to solving the first-order condition:

    Σ_{i=1}^n ψ(y_i − θ̂) = 0.                                   (4)

This is simply because the members of Ψ correspond to convex loss functions. Our focus is on such functions since they are typically easier to optimize and offer global optimality guarantees.

2.2 One-Sided Huber Estimator and its Asymptotic Minimax Optimality

We are interested in finding an M-estimator for our noise model which is robust to the variation in the noise distribution (H_Δ in particular) in the sense of minimizing the worst-case deviation from the true parameter, as measured by the mean squared error. We first introduce our proposed estimator, and then show that it is exactly optimal in the aforementioned minimax sense.

Definition 1 (One-sided Huber). Define an estimator ψ0 as follows:

    ψ0(y, κ) = y  if y < κ;   ψ0(y, κ) = κ  if y ≥ κ,            (5)

where κ is defined in terms of the contamination level ε according to

    Φ(κ) + g(κ)/κ = 1/(1 − ε),

with Φ(·) and g(·) denoting the distribution and the density functions of a standard normal variable, respectively.
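A small Python sketch of the quantities just defined: the estimator ψ0, its one-sided Huber loss ρ0 (quadratic below κ, linear above), and a numerical solve of the κ condition. The root-finding bracket is an implementation choice and not part of the paper.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def psi0(y, kappa):
    # one-sided Huber estimator: identity below kappa, clipped above
    return np.minimum(y, kappa)

def rho0(y, kappa):
    # corresponding loss: quadratic below kappa, linear above
    return np.where(y < kappa, 0.5 * y ** 2, kappa * y - 0.5 * kappa ** 2)

def kappa_from_eps(eps):
    # solve Phi(kappa) + g(kappa)/kappa = 1/(1 - eps) for kappa;
    # the left side decreases from +inf (kappa -> 0) to 1 (kappa -> inf)
    f = lambda k: norm.cdf(k) + norm.pdf(k) / k - 1.0 / (1.0 - eps)
    return brentq(f, 1e-6, 20.0)

print(kappa_from_eps(0.1))  # roughly 0.9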
We shall refer to ψ0 as the one-sided Huber estimator, and denote by ρ0(·, κ) its loss function (see Figure 1 for a visualization). Clearly, ψ0 ∈ Ψ, and therefore the loss function ρ0 is convex. Under the data generation model introduced in the previous section, we can now state an asymptotic minimax result for ψ0.

Proposition 2.1. One-sided Huber ψ0 yields an asymptotically unbiased M-estimator for F_HΔ = {(1 − ε)Φ + εH_Δ}. Further, ψ0 minimizes the worst-case asymptotic variance in F_HΔ, i.e.,

    ψ0 = arg inf_{ψ ∈ Ψ} sup_{F ∈ F_HΔ} V(ψ, F).

A proof for Proposition 2.1 is given in the supplementary material. Proposition 2.1 establishes that the one-sided Huber estimator has zero bias as long as the non-zero contamination is sufficiently larger than zero, and that it also achieves the best worst-case asymptotic variance.

We would like to offer a discussion comparing the one-sided Huber estimator with some other popular M-estimators, such as the sample mean (ℓ2 loss), the sample median (ℓ1 loss), Huber [8], and the sample quantile. First of all, the sample mean, the sample median, and Huber estimators all have symmetric loss functions and therefore suffer from bias. This is particularly detrimental for the sample mean and leads to unbounded MSE as the gross contamination tends to very large values. The bias problem may be eliminated using a quantile estimator whose quantile level is set according to ε. However, this estimator has higher asymptotic variance than the one-sided Huber. We present in Figure 1b a comparison of empirical mean squared errors for different estimators under the noise distribution which causes the worst asymptotic variance among distributions in F_HΔ.¹ The MSEs of the sample mean and the sample median quickly become dominated by their bias with increasing n.² Although the quantile estimator was set up to be unbiased, its MSE (or equivalently, variance) is greater than that of the one-sided Huber. These results corroborate the theoretical properties of the one-sided Huber estimator, and affirm it as a good fit for our setting.

Although we have not come across a previous study of the one-sided Huber estimator in this context, we should note that it is related to the technique in [12], where samples are assumed to be nonnegative, and summands in the sample mean estimator are shrunk when they are above a certain threshold (this technique is called winsorizing). However, their model and application are quite different from what we consider in this paper.

2.3 Generalization to Regression Setting

Here we introduce the regression setting which we will use for the remainder of the paper. We observe {y_i, x_i}_{i=1}^n, where x_i ∈ R^p could be either fixed or random, and the y_i are generated according to

    y_i = ⟨x_i, β*⟩ + ε_i^g + ε_i^h,

where β* ∈ R^p is the true parameter, and ε_i^h and ε_i^g are as previously defined. We estimate β* with

    β̂ = argmin_β f_κ(β) := Σ_{i=1}^n ρ0(y_i − ⟨x_i, β⟩, κ).     (6)

Classical M-estimation theory establishes, under certain regularity conditions, that the minimax optimality in Section 2.2 carries over to regression; we refer the reader to [9] for details.

¹ Refer to the proof of Proposition 2.1 for the form of this distribution.
² We omit Huber in this comparison since its MSE is also bias-dominated.
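The bias behavior discussed in Section 2.2 is easy to reproduce in simulation. The sketch below compares empirical MSEs under a simple gross-positive contamination (a shifted exponential at +10); this is an assumption made for illustration, not the worst-case distribution used for Figure 1b.

import numpy as np

rng = np.random.default_rng(0)
eps, n, trials = 0.1, 1000, 200
kappa = 0.9  # solved from eps via the condition in Definition 1 (see above)

def one_sided_huber_location(y, kappa, iters=100):
    theta = np.median(y)  # robust starting point
    for _ in range(iters):
        # fixed-point iteration for the first-order condition (4)
        theta += np.mean(np.minimum(y - theta, kappa))
    return theta

mse = {"mean": 0.0, "median": 0.0, "one-sided Huber": 0.0}
for _ in range(trials):
    y = rng.standard_normal(n)                       # true theta* = 0
    mask = rng.random(n) < eps
    y[mask] = 10.0 + rng.exponential(1.0, mask.sum())
    mse["mean"] += np.mean(y) ** 2 / trials
    mse["median"] += np.median(y) ** 2 / trials
    mse["one-sided Huber"] += one_sided_huber_location(y, kappa) ** 2 / trials
print(mse)  # the one-sided Huber error should be the smallest of the three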
3 Fast Fixed-point Solver for One-Sided Huber Loss

We are interested in solving the robust regression problem in (6) in the large-scale setting, due to the large field of view and length of most calcium imaging recordings. Hence, the solver for our problem should ideally be tractable for large n and also give as accurate an output as possible. To this end, we propose a fixed-point optimization method (Algorithm 1), which has a step cost equal to that of gradient descent, while converging to the optimum at rates more similar to Newton's method.

Algorithm 1: Fast Solver for the One-Sided Huber Loss
function fp_solve(X, Y, κ, τ)    // X = [x_1, ..., x_n]^T, Y = [y_1, ..., y_n]^T
  1. Compute X^+ = (X^T X)^{-1} X^T and β_LS = X^+ Y.
  2. Initialize β^(0) at random, set t = 0.
  3. while ‖β^(t+1) − β^(t)‖_2 ≥ τ do
         β^(t+1) = β_LS − X^+ max(0, Y − Xβ^(t) − κ)
         t ← t + 1.
  4. end while
  return β^(t).

The following proposition establishes the convergence of our solver.

Proposition 3.1. Let β* be the fixed point of Algorithm 1 for the problem (6), let λ_max and λ_min > 0 denote the extreme eigenvalues of Σ_{i=1}^n x_i x_i^T, and let max_i ‖x_i‖ ≤ k. Assume that for a subset of indices s ⊂ {1, 2, ..., n} there exists γ_s > 0 such that y_i − ⟨x_i, β*⟩ ≥ κ + γ_s, and denote the extreme eigenvalues of Σ_{i∈s} x_i x_i^T by λ̄_max and λ̄_min > 0, satisfying λ̄_max λ_max / λ_min² < 2. If the initial point β_0 is close to the true minimizer, i.e., ‖β_0 − β*‖_2 ≤ γ_s/k, then Algorithm 1 converges linearly:

    f_κ(β_t) − f_κ(β*) ≤ (1 − 2 λ_min/λ_max + λ̄_max λ̄_min/λ_min²)^t (f_κ(β_0) − f_κ(β*)).   (7)

A proof for Proposition 3.1 is given in the supplementary material. Our solver is second order in nature³; hence its convergence behavior should be close to that of Newton's method. However, there is one caveat: the second derivative of the one-sided Huber loss is not continuous. Therefore, one cannot expect to achieve a quadratic rate of convergence; this issue is commonly encountered in M-estimation. Nevertheless, Algorithm 1 converges very fast in practice. We compare our solver to Newton's method and gradient descent by simulating a regression setting where we synthesize a 100 x 100 movie frame (Y) with 100 neurons (see Section 5 for details). Then, given the ground truth cell images (X), we optimize for the fluorescence traces for the single frame (β) using the three algorithms. For our fixed-point solver, we use κ = 1. For gradient descent, we set the step size to the reciprocal of the largest eigenvalue of the Hessian (while not taking into account the time taken to compute it). Results are shown in Figure 2. Our solver has convergence behavior close to that of Newton's method, while taking much less time to achieve the same accuracy due to its small per-step cost. We would also like to note that estimating the entire matrix of fluorescence traces (or cell images) does not require any modification of Algorithm 1; hence, in practice, estimating entire matrices of components at once does not cause much computational burden. For Newton's method, every frame (or every pixel) requires a separate Hessian; runtime in this case scales at least linearly.

³ The interested reader is referred to the supplementary material for a more rigorous argument.

Figure 2: Our fixed point solver converges to the optimum at rates similar to Newton's method, while being more computationally efficient. (a) Optimality gap versus absolute time. (b) Optimality gap versus number of iterations. The fixed point solver achieves the same accuracy with notably faster speed compared to Newton's method and gradient descent.
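For reference, a direct numpy transcription of Algorithm 1; this is a sketch under the stated convergence assumptions, not the authors' GPU implementation, and the zero initialization and iteration cap are our own choices.

import numpy as np

def fp_solve(X, Y, kappa, tol=1e-9, max_iter=1000):
    # X: (n, p) design matrix; Y: (n,) targets, or (n, m) to solve m
    # problems at once (e.g., all frames of a movie), since every step
    # below acts column-wise on Y
    X_pinv = np.linalg.solve(X.T @ X, X.T)   # X^+ = (X^T X)^{-1} X^T
    beta_ls = X_pinv @ Y                     # least-squares solution
    beta = np.zeros_like(beta_ls)
    for _ in range(max_iter):
        beta_new = beta_ls - X_pinv @ np.maximum(0.0, Y - X @ beta - kappa)
        if np.linalg.norm(beta_new - beta) < tol:
            break
        beta = beta_new
    return beta_new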
4 Robust Automated Cell Extraction

We now introduce our proposed method for automated cell extraction via robust estimation. Our method is based on a matrix factorization framework, where we model the imaging data as the matrix product of a spatial and a temporal matrix with additive noise:

    M = ST + Σ.

In the above, M ∈ R^(dS×dT) is the movie matrix, and S ∈ R₊^(dS×m) and T ∈ R₊^(m×dT) are the nonnegative spatial and temporal matrices, respectively. Σ ∈ R^(dS×dT) is meant to model the normal noise corrupted with non-negative contamination, and Σ_ij has the same distribution as ε_i in (2) (up to the noise standard deviation). Our main contribution in this work is that we offer a method which estimates S and T using the one-sided Huber estimator, which provides optimal robustness against the non-negative contamination inherent in calcium imaging, as discussed in Section 2.

Our cell extraction algorithm starts by computing initial estimates for the matrices S and T. This is done by (1) detecting a cell peak from the time maximum of the movie, one cell at a time, (2) solving for the current cell's spatial and temporal components using the one-sided Huber estimator, and (3) repeating until a stopping criterion is reached. We detail this step in the supplementary material. After initial guesses for S and T are computed, the main update algorithm proceeds in a straightforward manner, where multiple alternating robust regression steps are performed using the one-sided Huber loss. At each step, new estimates of S and T are computed based on M and the current estimate of the other matrix.

Algorithm 2: Tractable and Robust Automated Cell Extraction
function EXTRACT(M, N, κ, τ)
  1. Initialize S^(0), T^(0), set t = 0.
  2. for t = 1 to N do
         T^(t+1) = fp_solve_nonneg(S^(t), M, κ, τ)
         S^(t+1) = fp_solve_nonneg(T^(t)^T, M^T, κ, τ)^T
         S^(t+1), T^(t+1) = remove_redundant(S^(t+1), T^(t+1))
  3. end for
  return S^(t), T^(t).

For computing the estimates, we use the fast fixed-point algorithm derived in Section 3. However, since we constrain S and T to be nonnegative matrices, the fixed-point solver cannot be used without constraints that enforce non-negativity. To this end, we combine our solver with fast-ADMM [6], a fast dual ascent method which solves for multiple objectives by consensus. We call the combined solver fp_solve_nonneg(). Note that, due to the symmetry between the two alternating steps, we use the same solver for computing both S and T. We do minimal post-processing at the end of each step to remove redundant components. Specifically, we identify and remove near-duplicate components in S or T, and we then eliminate components which have converged to zero. We repeat these steps alternately for a desired number of steps N.

The selection of κ depends on the positive contamination level; nevertheless, we have observed that precise tuning of κ is not necessary in practice. A range of [0.5, 1] times the standard deviation of the normally distributed noise is reasonable for κ in most practical cases. One should note, however, that although the robust estimator has a favorable mis-specification bias, the bias might become significant under crucially low SNR conditions. For instance, setting a small κ in such cases will likely lead to detrimental under-estimation. On the other hand, setting high κ values decreases the estimator's robustness (this makes the loss function approach the ℓ2 loss). Consequently, the advantage of robust estimation is expected to diminish in extremely low SNR regimes.
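The alternating scheme of Algorithm 2, written as a short Python skeleton. The two inner routines are passed in as functions, since their details (fast-ADMM consensus for nonnegativity, duplicate and zero-component removal) live in the paper and its supplement; this is a schematic sketch, not a full implementation.

def extract(M, N, kappa, tol, fp_solve_nonneg, remove_redundant, S, T):
    # M: movie matrix (dS x dT); S: (dS x m) images; T: (m x dT) traces
    for _ in range(N):
        T = fp_solve_nonneg(S, M, kappa, tol)         # robust update of traces
        S = fp_solve_nonneg(T.T, M.T, kappa, tol).T   # symmetric update of images
        S, T = remove_redundant(S, T)                 # drop duplicates and zeros
    return S, T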
Our algorithm has a highly favorable runtime in practice owing to the simplicity of its form. Furthermore, since the solver we use relies on basic matrix operations, we were able to produce a GPU implementation, allowing for a further reduction in runtime. Comparing our GPU implementation to other algorithms in their canonical forms naturally causes bias; therefore, we defer our runtime comparison results to the supplementary material. From here on, we shall call our algorithm EXTRACT.

5 Experiments

In this section, we perform experiments on both simulated and real data in order to establish the improved signal accuracy obtained using EXTRACT. We represent the signal accuracy with two quantities: (1) signal fidelity, which measures how closely a temporal (fluorescence trace) or spatial (cell image) signal matches its underlying ground truth, and (2) signal crosstalk, which quantifies interference from other sources, or noise. We primarily focus on temporal signals since they typically represent the entirety of the calcium movie for the steps subsequent to cell extraction. As opposed to using simple correlation-based metrics, we compute true and false positive detection rates based on estimated calcium events found via simple amplitude thresholding. We then present receiver operating characteristic (ROC) based metrics.

Figure 3: Performance comparison of EXTRACT vs. CNMF for movies with overlapping image sources. (a) Examples where a captured cell (circled in white) is overlapping with non-captured neighbors (circled in red). Ground truth traces are shown in black. EXTRACT finds images and traces that match closely with the ground truth, whereas CNMF admits notable crosstalk from neighbors both in its found cell images and traces. (b) An example maximum projection of an imaging movie in time. (c) An example ROC curve for X=0.4, computed by varying the event detection threshold and averaging TPR and FPR over single cells for each threshold. (d) Mean area under the ROC curve computed over 20 experiments for each initial fraction of true cells, X, and each iteration. EXTRACT consistently outperforms CNMF, with the performance lead becoming significant for lower X. Error bars are 1 s.e.m.

We compare EXTRACT to the two most widely used cell extraction methods: CNMF [16] and spatio-temporal ICA [14], the latter of which we will simply refer to as ICA. Both methods are matrix factorization methods like EXTRACT; CNMF estimates its temporal and spatial matrices alternatingly and jointly estimates traces and their underlying calcium event peaks, and ICA finds a single unmixing matrix which is then applied to the singular value decomposition (SVD) of the movie to jointly obtain traces and images.
CNMF uses a quadratic reconstruction loss with an ℓ1 penalty, whereas ICA uses a linear combination of movie data guided by high-order pixel statistics for reconstruction; hence, both can be considered non-robust estimation techniques.

Simulated data. For simulated movies, we use a field of view of size 50 by 50 pixels, and produce data with 1000 time frames. We simulate 30 neurons with gaussian-shaped images with standard deviations drawn uniformly from [3, 4.8]. We simulate the fluorescence traces using a Poisson process with rate 0.01 convolved with an exponential kernel with a time constant of 10 frames. We corrupt the movie with independent and normally distributed noise whose power is matched to the power of the neural activity so that the average pixel-wise SNR in cell regions is 1. We have re-run our experiments with different SNR levels in order to establish the independence of our key results from the noise level; we report them in the supplementary material. (A short code sketch of this simulation protocol is given below, after the experimental setup.)

5.1 Crosstalk between cells for robust vs. non-robust methods

As a first experiment, we demonstrate the consequences of a common phenomenon, namely cells with overlapping spatial weights. Overlapping cells do not pose a significant problem when their spatial components are correctly estimated; however, in reality, estimated images typically do not perfectly match their underlying excitation, or some overlapping cells might not even be captured by the extraction algorithm. In the latter two cases, crosstalk becomes a major issue, causing captured cells to carry false calcium activity in their fluorescence traces.

We try to reproduce the aforementioned scenarios by simulating movies and initializing the algorithms with a fraction of the ground truth cells. Our aim is to set up a controlled environment to (1) quantitatively investigate the crosstalk in the captured cell traces due to missing cells, and (2) observe the effect of alternating estimation on the final accuracy of estimates. In this case, the outputs of alternating estimation algorithms should deteriorate through the iteration loop, since they estimate their components based on imperfect estimates of each other. We select EXTRACT and CNMF for this experiment since they are both alternating estimation algorithms.
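Here is the promised sketch of the simulation protocol described under "Simulated data". Cell-center placement, the kernel length, and the noise scale (unit variance rather than the SNR-matched power used in the paper) are assumptions made to keep the example short.

import numpy as np

rng = np.random.default_rng(0)
H = W = 50; T_frames = 1000; n_cells = 30

# gaussian-shaped cell images on a 50x50 field of view
yy, xx = np.mgrid[0:H, 0:W]
images = []
for _ in range(n_cells):
    cy, cx = rng.uniform(5, 45, size=2)
    sd = rng.uniform(3, 4.8)
    img = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sd ** 2))
    images.append(img.ravel())
S = np.array(images).T                                 # (2500, 30)

# Poisson events (rate 0.01 per frame) convolved with an exponential kernel
kernel = np.exp(-np.arange(50) / 10.0)                 # time constant: 10 frames
spikes = (rng.random((n_cells, T_frames)) < 0.01).astype(float)
T = np.array([np.convolve(s, kernel)[:T_frames] for s in spikes])

M = S @ T + rng.standard_normal((H * W, T_frames))     # movie with gaussian noise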
We initialize the algorithms with 4 different fractions of ground truth cells: X = {0.2, 0.4, 0.6, 0.8}. We carry out 20 experiments for each X, and we perform 3 alternating estimation iterations for each algorithm. This number was chosen with the consideration that CNMF canonically performs 2 iterations on its initialized components. We report results for 6 iterations in the supplementary material. At the end of each iteration, we detect calcium events from the algorithms' fluorescence traces, and match them with the ground truth spikes to compute the event true positive rate (TPR) and event false positive rate (FPR).

Figure 3 summarizes the results of this experiment. At the end of the 3 iterations, EXTRACT produces images and traces that are visually closer to ground truth in the existence of non-captured neighboring cells with overlapping images (Figure 3a). Figure 3c shows the ROC curve from one instance of the experiment, computed by varying the threshold amplitude for detecting calcium events and plotting FPR against TPR for each threshold. We report quantitative performance by the area under the ROC curve (AUC). We average the AUCs over all the experiments performed for each condition, and report it separately for each iteration in Figure 3d. EXTRACT outperforms CNMF uniformly, and the performance gap becomes pronounced with a very low fraction of initially provided cells. This boost in signal accuracy over non-robust estimators (e.g., ones with a quadratic penalty) stands to validate our proposed robust estimator and its underlying model assumptions.

5.2 Cell extraction with neuropil contamination

In most calcium imaging datasets, data is contaminated with non-cellular calcium activity caused by neuropil. This may interfere with cell extraction by contaminating the cell traces, and by making it difficult to accurately locate the spatial components of cells. We study the effect of such contamination by simulating neural data and combining it with neuropil activity extracted from real two-photon imaging datasets. For this experiment, we use EXTRACT, CNMF, and ICA. For a fair comparison, we initialize all algorithms with the same set of initial estimates. We choose to use the greedy initializer of CNMF to eliminate any competitive advantage EXTRACT might have from using its native initializer. We perform 15 experiments with no neuropil, and 15 with added neuropil. We match the variance of the neuropil activity to that of the gaussian noise while keeping SNR constant. For each experiment, we compute (1) cell trace statistics based on the ROC curve as previously described, and (2) cell finding statistics based on precision, recall, and F1 metrics.

Figure 4: EXTRACT outperforms other algorithms in the existence of neuropil contamination. (a) Example traces from algorithm outputs overlaid on the ground truth traces. EXTRACT produces traces closest to the ground truth, admitting significantly less crosstalk compared to the others. (b) An example ROC curve for an instance with neuropil. (c) Mean area under the curve computed over 15 experiments, separately for with and without neuropil. EXTRACT shows better performance, and its performance is the most robust against neuropil contamination. (d) Average cell finding statistics (recall, precision, F1) over 15 experiments, computed separately for with and without neuropil. EXTRACT achieves better performance especially when there is neuropil contamination.

EXTRACT produces qualitatively more accurate fluorescence traces (Figure 4a), and it outperforms both CNMF and ICA quantitatively (Figure 4b,c), with the performance gap becoming more significant in the existence of neuropil contamination. Further, EXTRACT yields more true cells than the other methods, with fewer false positives, when there is neuropil (Figure 4d).
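For completeness, the ROC-based evaluation used in these experiments can be sketched as follows; the same-frame matching of detected and true events is a simplification made here for brevity, not necessarily the exact matching rule used in the paper.

import numpy as np

def roc_curve(trace, true_events, thresholds):
    # trace: (T,) fluorescence trace; true_events: (T,) boolean ground truth
    pts = []
    for th in thresholds:
        det = trace > th                  # amplitude-threshold event detection
        tpr = (det & true_events).sum() / max(true_events.sum(), 1)
        fpr = (det & ~true_events).sum() / max((~true_events).sum(), 1)
        pts.append((fpr, tpr))
    return np.array(pts)

def auc(pts):
    pts = pts[np.argsort(pts[:, 0])]      # sort by FPR
    return np.trapz(pts[:, 1], pts[:, 0]) # area via the trapezoidal rule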
5.3 Cell extraction from microendoscopic single-photon imaging data

Data generated using microendoscopic single-photon calcium imaging can be quite challenging due to low SNR and a fluctuating background (out-of-focus fluorescence activity, etc.). We put EXTRACT to the test in this data regime, using an imaging dataset recorded from the dorsal CA1 region of the mouse hippocampus [18], an area known to have high cell density. We compare EXTRACT with CNMF and ICA. For this experiment, the output of each algorithm was checked by human annotators, and cells were manually classified as true cells or false positives judging from the match of their temporal signal to the activity in the movie.

Figure 5: EXTRACT better estimates neural signals in microendoscopic single-photon imaging data. (a) The manually classified "good" cells for all 3 algorithms overlaid on the maximum of the imaging movie in time (EXTRACT: N=476, CNMF: N=329, ICA: N=272, where N is the total good cell count). (b) The fluorescence traces of the 3 algorithms belonging to the same cell. The cell has significantly low SNR compared to a neighbor cell which is also captured by all the methods. The time frames with arrows pointing to them are shown with a snapshot of the cell (circled in green) and its surrounding area. EXTRACT correctly assigns temporal activity to the cell of interest, while the other algorithms register false calcium activity from the neighboring cell.

EXTRACT successfully extracts the majority of the cells apparent in the maximum image of the movie over time, and is able to capture highly overlapping cells (Figure 5a). EXTRACT also accurately estimates the temporal activity. Figure 5b shows an instance of a dim cell with a high-SNR neighboring cell, both of which are captured by all three algorithms. While CNMF and ICA both falsely show activity when the neighbor is active, the EXTRACT trace seems immune to this type of contamination and is silent at such instants.

6 Conclusion

We presented an automated cell extraction algorithm for calcium imaging which uses a novel robust estimator. We arrived at our estimator by defining a generic data model and optimizing its worst-case performance. We proposed a fast solver for our estimation problem, which allows for tractable cell extraction in practice. As we have demonstrated in our experiments, our cell extraction algorithm, EXTRACT, is a powerful competitor to existing methods, performing well under different imaging modalities due to its generic nature.

References
[1] N. J. Apthorpe, A. J. Riordan, R. E. Aguilar, J. Homann, Y. Gu, D. W. Tank, and H. S. Seung. Automatic neuron detection in calcium imaging data using convolutional networks. arXiv preprint arXiv:1606.07372, 2016.
[2] J. R. Collins. Robust estimation of a location parameter in the presence of asymmetry. The Annals of Statistics, pages 68–85, 1976.
[3] W. Denk, J. H. Strickler, W. W. Webb, et al. Two-photon laser scanning fluorescence microscopy. Science, 248(4951):73–76, 1990.
[4] B. A. Flusberg, A. Nimmerjahn, E. D. Cocker, E. A. Mukamel, R. P. Barretto, T. H. Ko, L. D. Burns, J. C. Jung, and M. J. Schnitzer. High-speed, miniaturized fluorescence microscopy in freely moving mice. Nature Methods, 5(11):935, 2008.
[5] K. K. Ghosh, L. D. Burns, E. D. Cocker, A. Nimmerjahn, Y. Ziv, A. El Gamal, and M. J. Schnitzer. Miniaturized integration of a fluorescence microscope. Nature Methods, 8(10):871–878, 2011.
[6] T. Goldstein, B. O'Donoghue, S. Setzer, and R. Baraniuk. Fast alternating direction optimization methods. SIAM Journal on Imaging Sciences, 7(3):1588–1623, 2014.
[7] F. Helmchen and W. Denk. Deep tissue two-photon microscopy. Nature Methods, 2(12):932–940, 2005.
[8] P. J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73–101, 1964.
[9] P. J. Huber. Robust regression: asymptotics, conjectures and Monte Carlo. The Annals of Statistics, pages 799–821, 1973.
[10] L. A. Jaeckel. Robust estimates of location: Symmetry and asymmetric contamination. The Annals of Mathematical Statistics, pages 1020–1034, 1971.
[11] P. Kaifosh, J. D. Zaremba, N. B. Danielson, and A. Losonczy. SIMA: Python software for analysis of dynamic fluorescence imaging data. Frontiers in Neuroinformatics, 8:80, 2014.
[12] P. Kokic and P. Bell. Optimal winsorizing cutoffs for a stratified finite population estimator. Journal of Official Statistics, 10(4):419, 1994.
[13] R. D. Martin and R. H. Zamar. Efficiency-constrained bias-robust estimation of location. The Annals of Statistics, pages 338–354, 1993.
[14] E. A. Mukamel, A. Nimmerjahn, and M. J. Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 63(6):747–760, 2009.
[15] M. Pachitariu, A. M. Packer, N. Pettit, H. Dalgleish, M. Hausser, and M. Sahani. Extracting regions of interest from biological images with convolutional sparse block coding. In Advances in Neural Information Processing Systems, pages 1745–1753, 2013.
[16] E. A. Pnevmatikakis, D. Soudry, Y. Gao, T. A. Machado, J. Merel, D. Pfau, T. Reardon, Y. Mu, C. Lacefield, W. Yang, et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron, 89(2):285–299, 2016.
[17] P. Zhou, S. L. Resendez, G. D. Stuber, R. E. Kass, and L. Paninski. Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data. arXiv preprint arXiv:1605.07266, 2016.
[18] Y. Ziv, L. D. Burns, E. D. Cocker, E. O. Hamel, K. K. Ghosh, L. J. Kitch, A. El Gamal, and M. J. Schnitzer. Long-term dynamics of CA1 hippocampal place codes. Nature Neuroscience, 16(3):264–266, 2013.
6,504
6,884
State Aware Imitation Learning

Yannick Schroecker, College of Computing, Georgia Institute of Technology, [email protected]
Charles Isbell, College of Computing, Georgia Institute of Technology, [email protected]

Abstract

Imitation learning is the study of learning how to act given a set of demonstrations provided by a human expert. It is intuitively apparent that learning to take optimal actions is a simpler undertaking in situations that are similar to the ones shown by the teacher. However, imitation learning approaches do not tend to use this insight directly. In this paper, we introduce State Aware Imitation Learning (SAIL), an imitation learning algorithm that allows an agent to learn how to remain in states where it can confidently take the correct action and how to recover if it is led astray. Key to this algorithm is a gradient learned using a temporal difference update rule which leads the agent to prefer states similar to the demonstrated states. We show that estimating a linear approximation of this gradient yields similar theoretical guarantees to online temporal difference learning approaches and empirically show that SAIL can effectively be used for imitation learning in continuous domains with non-linear function approximators used for both the policy representation and the gradient estimate.

1 Introduction

One of the foremost challenges in the field of Artificial Intelligence is to program or train an agent to act intelligently without perfect information and in arbitrary environments. Many avenues have been explored to derive such agents, but one of the most successful and practical approaches has been to learn how to imitate demonstrations provided by a human teacher. Such imitation learning approaches provide a natural way for a human expert to program agents and are often combined with other approaches such as reinforcement learning to narrow the search space and to help find a near optimal solution. Success stories are numerous in the field of robotics [3], where imitation learning has long been a subject of research, but can also be found in software domains, with recent examples including AlphaGo [23], which learns to play the game of Go from a database of expert games before improving further, and the benchmark domain of Atari games, where imitation learning combined with reinforcement learning has been shown to significantly improve performance over pure reinforcement learning approaches [9].

Formally, we define the problem domain as a Markov decision process, i.e. by its states, actions and unknown Markovian transition probabilities p(s′|s, a) of taking action a in state s leading to state s′. Imitation learning aims to find a policy π(a|s) that dictates the action an agent should take in any state by learning from a set of demonstrated states S_D and the corresponding demonstrated actions A_D. The likely most straightforward approach to imitation learning is to employ a supervised learning algorithm such as neural networks in order to derive a policy, treating the demonstrated states and actions as training inputs and outputs respectively. However, while this can work well in practice and has a long history of successes starting with, among other examples, early ventures into autonomous driving [18], it also violates a key assumption of statistical supervised learning by having past predictions affect the distribution of inputs seen in the future.
It has been shown that agents trained this way have a tendency to take actions that lead them to states that are dissimilar from any encountered during training and in which the agent is less likely to have an accurate model of how to act [18, 19]. Deviations from the demonstrations based on limitations of the learning model or randomness in the domain are therefore amplified as time progresses. Several approaches exist that are capable of addressing this problem. Interactive imitation learning methods (e.g. [5, 19, 20]) address this problem directly but require continuing queries to the human teacher, which is often not practical. Inverse Reinforcement Learning (IRL) approaches attempt to learn the objective function that the demonstrations are optimizing and show better generalization capabilities. However, IRL approaches often require a model of the domain, can be limited by the representation of the reward function, and learn a policy only indirectly. A consequence of the latter is that small changes to the learned objective function can lead to large changes in the learned policy.

In this paper we introduce State Aware Imitation Learning (SAIL). SAIL aims to address the aforementioned problem by explicitly learning to reproduce demonstrated trajectories based on their states as well as their actions. Intuitively, if an agent trained with SAIL finds itself in a state similar to a demonstrated state, it will prefer actions that are similar to the demonstrated action, but it will also prefer to remain near demonstrated states where the trained policy is more likely to be accurate. An agent trained with SAIL will thus learn how to recover if it deviates from the demonstrated trajectories. We achieve this in a principled way by finding the maximum-a-posteriori (MAP) estimate of the complete trajectory. Thus, our objective is to find a policy which we define to be a parametric distribution π_θ(a|s) using parameters θ. Natural choices would be linear functions or neural networks. The MAP problem is then given by

argmax_θ p(θ|S_D, A_D) = argmax_θ [ log p(A_D|S_D, θ) + log p(S_D|θ) + log p(θ) ].   (1)

Note that this equation differs from the naive supervised approach, in which the second term log p(S_D|θ) is assumed to be independent from the current policy and is thus irrelevant to the optimization problem. Maximizing this term leads to the agent actively trying to reproduce states that are similar to the ones in S_D. It seems natural that additional information about the domain is necessary in order to learn how to reach these states. In this work, we obtain this information using unsupervised interactions with the environment. We would like to stress that our approach does not require further input from the human teacher, any additional measure of optimality, or any model of the environment. A key component of our algorithm is based on the work of Morimura et al. [15], who estimate a gradient of the distribution of states observed when following the current policy using a least squares temporal difference learning approach and use their results to derive an alternative policy gradient algorithm. We discuss their approach in detail in section 3.1 and extend the idea to an online temporal difference learning approach in section 3.2.
This adaptation gives us greater flexibility for our choice of function approximator and also provides a natural way to deal with an additional constraint to the optimization problem which we will introduce below. In section 3.3, we describe the full SAIL algorithm in detail and show that the estimated gradient can be used to derive a principled and novel imitation learning approach. We then evaluate our approach on a tabular domain in section 4.1, comparing our results to a purely supervised approach to imitation learning as well as to sample based inverse reinforcement learning. In section 4.2 we show that SAIL can successfully be applied to learn a neural network policy in a continuous bipedal walker domain and achieves significant improvements over supervised imitation learning in this domain.

2 Related works

One of the main problems SAIL is trying to address is the problem of remaining close to states where the agent can act with high confidence. We identify three different classes of imitation learning algorithms that address this problem either directly or indirectly under different assumptions and with different limitations. A specialized solution to this problem can be found in the field of robotics. Imitation learning approaches in robotics often do not aim to learn a full policy using general function approximators but instead try to predict a trajectory that the robot should follow. Trajectory representations such as Dynamic Movement Primitives [21] give the robot a sequence of states (or its derivatives) which the robot then follows using a given control law. The role of the control law is to drive the robot towards the demonstrated states, which is also a key objective of SAIL. However, this solution is highly domain specific and a controller needs to be chosen that fits the task and representation of the state space. It can, for example, be more challenging to use image based state representations. For a survey of imitation learning methods applied to robotics, see [3].

The second class of algorithms is what we will call iterative imitation learning algorithms. A key characteristic of these algorithms is that the agent actively queries the expert for demonstrations in states that it sees when executing its current policy. One of the first approaches in this class is SEARN [5]. When applied to imitation learning, SEARN starts by following the expert's action at every step, then iteratively uses the demonstrations collected during the last episode to train a new policy and collects new episodes by taking actions according to a mixture of all previously trained policies and the expert's actions. Over time SEARN learns to follow its mixture of policies and stops relying on the expert to decide which actions to take. Ross et al. [19] first proved that the pure supervised approach to imitation learning can lead to the error rate growing over time. To alleviate this issue they introduced a similar iterative algorithm called SMILe and proved that the error rate increases near linearly with respect to the time horizon. Building on this, Ross et al. introduced DAGGER [20]. DAGGER provides similar theoretical guarantees and empirically outperforms SMILe by augmenting a single training set during each iteration based on queries to the expert on the states seen during execution. DAGGER does not require previous policies to be stored in order to calculate a mixture.
Note that while these algorithms are guaranteed to address the issue of straying too far from demonstrations, they approach the problem from a different direction. Instead of preferring states on which the agent has demonstrations, these algorithms collect more demonstrations in states the agent actually sees during execution. This can be effective but requires additional interaction with the human teacher, which is often not cheaply available in practice.

As mentioned above, our approach also shares significant similarities with Inverse Reinforcement Learning (IRL) approaches [17]. IRL methods aim to derive a reward function for which the provided demonstrations are optimal. This reward function can then be used to compute a complete policy. Note that the IRL problem is known to be ill-formed, as a set of demonstrations can have an infinite number of corresponding reward functions. Successful approaches such as Maximum Entropy IRL (MaxEntIRL) [27] thus attempt to disambiguate between possible reward functions by reasoning explicitly about the distribution of both states and actions. In fact, Choi and Kim [4] argue that many existing IRL methods can be rewritten as finding the MAP estimate for the reward function given the provided demonstrations using different probabilistic models. This provides a direct link to our work, which maximizes the same objective but with respect to the policy as opposed to the reward function. A significant downside of many IRL approaches is that they require a model describing the dynamics of the world. However, sample based approaches exist. Boularias et al. [1] formulate an objective function similar to MaxEntIRL but find the optimal solution based on samples. Relative Entropy IRL (RelEntIRL) aims to find a reward function corresponding to a distribution over trajectories that matches the observed features while remaining within a relative entropy bound to the uniform distribution. While RelEntIRL can be effective, it is limited to linear reward functions. Few sample based methods exist that are able to learn non-linear reward functions. Recently, Finn et al. proposed Guided Cost Learning [6], which optimizes an objective based on MaxEntIRL using importance sampling and iterative refinement of the sample policy. Refinement is based on optimal control with learned models and is thus best suited for problems in domains in which such methods have been shown to work well, e.g. robotic manipulation tasks. A different direction for sample based IRL has been proposed by Klein et al., who treat the scores of a score-based classifier trained using the provided demonstration as a value function, i.e. the long-term expected reward, and use these values to derive a reward function. Structured Classification for IRL (SCIRL) [13] uses estimated feature expectations and linearity of the value function to derive the parameters of a linear reward function, while the more recent Cascaded Supervised IRL (CSI) [14] derives the reward function by training a Support Vector Machine based on the observed temporal differences. While non-linear classifiers could be used, the method is dependent on the interpretability of the score as a value function. Recently, Ho et al. [11] introduced an approach that aims to find a policy that implicitly maximizes a linear reward function but without the need to explicitly represent such a reward function.
Generative Adversarial Imitation Learning [10] uses a method similar to Generative Adversarial Networks [7] to extend this approach to nonlinear reward functions. The resulting algorithm trains a discriminator to distinguish between demonstration and sampled trajectory and uses the probability given by the discriminator as a reward to train a policy using reinforcement learning. The maximum likelihood approach presented here can be seen as an approximation of minimizing the KL divergence between the demonstrated states and actions and the reproduction by the learned policy. This can also be achieved by using the ratio of state-action probabilities p_D(a, s) / (d^{π_θ}(s) π_θ(a|s)) as a reward, which is a straightforward transformation of the output of the optimal discriminator [7]. Note however that this equality only holds assuming an infinite number of demonstrations. Furthermore note that, unlike the gradient network introduced in this paper, the discriminator needs to learn about the distribution of the expert's demonstrations. Finally, we would like to point out the similarities our work shares with meta learning techniques that learn the gradients (e.g. [12]) or determine the weight updates (e.g. [22], [8]) for a neural network. Similar to these meta learning approaches, we propose to estimate the gradient w.r.t. the policy. While a complete review of this work is beyond the scope of this paper, we believe that many of the techniques developed to address challenges in this field can be applicable to our work as well.

3 Approach

SAIL is a gradient ascent based algorithm for finding the true MAP estimate of the policy. A significant part of estimating the gradient ∇_θ log p(θ|S_D, A_D) will be estimating the gradient of the (stationary) state distribution induced by following the current policy. We write the stationary state distribution as d^{π_θ}(s), assume that the Markov chain is ergodic (i.e. the distribution exists) and review the work by Morimura et al. [15] on estimating its gradient ∇_θ log d^{π_θ}(s) in section 3.1. We outline our own online adaptation to retrieve this estimate in section 3.2 and use it in order to derive the full SAIL gradient ∇_θ log p(θ|S_D, A_D) in section 3.3.

3.1 A temporal difference approach to estimating ∇_θ log d^{π_θ}(s)

We first review the work by Morimura et al. [15], who first discovered a relationship between the gradient ∇_θ log d^{π_θ}(s) and value functions as used in the field of reinforcement learning. Morimura et al. showed that the gradient can be written recursively and decomposed into an infinite sum so that a corresponding temporal difference loss can be derived. By definition, the gradient of the stationary state distribution in a state s′ can be written in terms of prior states s and actions a:

∇_θ d^{π_θ}(s′) = ∫ ∇_θ [ d^{π_θ}(s) π_θ(a|s) p(s′|s, a) ] ds da.   (2)

Using ∇_θ (d^{π_θ}(s) π_θ(a|s) p(s′|s, a)) = p(s, a, s′) (∇_θ log d^{π_θ}(s) + ∇_θ log π_θ(a|s)) and dividing by d^{π_θ}(s′) on both sides, we obtain

0 = ∫ q(s, a|s′) ( ∇_θ log d^{π_θ}(s) + ∇_θ log π_θ(a|s) − ∇_θ log d^{π_θ}(s′) ) ds da,   (3)

where q denotes the reverse transition probabilities. This can be seen as an expected temporal difference error over the previous state and action, where the temporal difference error is defined as

δ(s, a, s′) := ∇_θ log d^{π_θ}(s) + ∇_θ log π_θ(a|s) − ∇_θ log d^{π_θ}(s′).   (4)

In the original work, Morimura et al. derive a least squares estimator for ∇_θ log d^{π_θ}(s′) based on minimizing the expected squared temporal difference error as well as a penalty to enforce the constraint E[∇_θ log d^{π_θ}(s)] = 0, ensuring d^{π_θ} remains a proper probability distribution, and apply it to policy gradient reinforcement learning.
In the following sections we formulate an online update rule to estimate the gradient, argue convergence in the linear case, and use the estimated gradient to derive a novel imitation learning algorithm.

3.2 Online temporal difference learning for ∇_θ log d^{π_θ}(s)

In this subsection we define the online temporal difference update rule for SAIL and show that convergence properties are similar to the case of average reward temporal difference learning [25]. Online temporal difference learning algorithms are computationally more efficient than their least squares batch counterparts and are essential when using high-dimensional non-linear function approximations to represent the gradient. We furthermore show that online methods give us a natural way to enforce the constraint E[∇_θ log d^{π_θ}(s)] = 0. We aim to approximate ∇_θ log d^{π_θ}(s) up to an unknown constant vector c and thus define our target as f*(s) := ∇_θ log d^{π_θ}(s) + c. We use a temporal difference update to learn a parametric approximation f_ω(s) ≈ f*(s). The update rule based on taking action a in state s and transitioning to state s′ is given by

ω_{k+1} = ω_k + α ∇_ω f_ω(s′) ( f_ω(s) + ∇_θ log π(a|s) − f_ω(s′) ).   (5)

Algorithm 1 State Aware Imitation Learning
1: function SAIL(γ, α_ω, α_θ, S_D, A_D)
2:   θ ← SupervisedTraining(S_D, A_D)
3:   for k ← 0..#Iterations do
4:     S_E, A_E ← CollectUnsupervisedEpisode(π_θ)
5:     ω ← ω + α_ω (1/|S_E|) Σ_{(s,a,s′) ∈ transitions(S_E, A_E)} ( f_ω(s) + ∇_θ log π_θ(a|s) − f_ω(s′) ) ∇_ω f_ω(s′)
6:     μ ← (1/|S_E|) Σ_{s ∈ S_E} f_ω(s)
7:     θ ← θ + α_θ ( (1/|S_D|) Σ_{(s,a) ∈ pairs(S_D, A_D)} ( ∇_θ log π_θ(a|s) + (f_ω(s) − μ) ) + ∇_θ log p(θ) )
8:   return θ

Note that if f_ω converges to an approximation of f*, then due to E[∇_θ log d^{π_θ}(s)] = 0 we have ∇_θ log d^{π_θ}(s) ≈ f_ω(s) − E[f_ω(s)], where the expectation can be estimated based on samples. While convergence of temporal difference methods is not guaranteed in the general case, some guarantees can be made in the case of linear function approximation f_ω(s) := ω^⊤ φ(s) [25]. We note that E[∇_θ log π(a|s)] = 0, and thus for each dimension of θ the update can be seen as a variation of average reward temporal difference learning where the scalar reward is replaced by the gradient vector ∇_θ log π(a|s) and f_ω is bootstrapped based on the previous state as opposed to the next. While the roles of current and next state in this update rule are reversed, which might suggest that updates should be done in reverse, the convergence results by Tsitsiklis and Van Roy [25] depend only on the limiting distribution of following the sample policy on the domain, which remains unchanged regardless of the ordering of updates [15]. It is therefore intuitively apparent that the convergence results still hold and that f_ω converges to an approximation of f*. We formalize this notion in Appendix A.

Introducing a discount factor. So far we related the update rule to average reward temporal difference learning, as this was a natural consequence of the assumptions we were making. However, in practice we found that a formulation analogous to discounted reward temporal difference learning may work better. While this can be seen as a biased but lower variance approximation to the average reward problem [26], a perhaps more satisfying justification can be obtained by reexamining the simplifying assumption that the sampled states are distributed by the stationary state distribution d^{π_θ}.
An alternative simplifying assumption is that the previous states are distributed by a mixture of the starting state distribution d_0(s_{−1}) and the stationary state distribution, p(s_{−1}) = (1 − γ) d_0(s_{−1}) + γ d^{π_θ}(s_{−1}) for γ ∈ [0, 1]. In this case, equation 3 has to be altered and we have

0 = ∫ p(s, a|s′) ( γ ∇_θ log d^{π_θ}(s) + (1 − γ) ∇_θ log d_0(s) + ∇_θ log π_θ(a|s) − ∇_θ log d^{π_θ}(s′) ) ds da.

Note that ∇_θ log d_0(s) = 0, and thus we recover the discounted update rule

ω_{k+1} = ω_k + α ∇_ω f(s′) ( γ f(s) + ∇_θ log π(a|s) − f(s′) ).   (6)

3.3 State aware imitation learning

Based on this estimate of ∇_θ log d^{π_θ} we can now derive the full State Aware Imitation Learning algorithm. SAIL aims to find the full MAP estimate as defined in Equation 1 via gradient ascent. The gradient decomposes into three parts:

∇_θ log p(θ|S_D, A_D) = ∇_θ log p(A_D|S_D, θ) + ∇_θ log p(S_D|θ) + ∇_θ log p(θ).   (7)

The first and last term make up the gradient used for gradient descent based supervised learning and can usually be computed analytically. To estimate ∇_θ log p(S_D|θ), we disregard information about the order of states and make the simplifying assumption that all states are drawn from the stationary distribution. Under this assumption, we can estimate ∇_θ log p(S_D|θ) = Σ_{s ∈ S_D} ∇_θ log d^{π_θ}(s) based on unsupervised transition samples using the approach described in section 3.2. The full SAIL algorithm thus maintains a current policy as well as an estimate of ∇_θ log p(S_D|θ) and iteratively

1. collects unsupervised state and action samples S_E and A_E from the current policy,
2. updates the gradient estimate using Equation 5 and estimates E[f_ω(s)] using the sample mean of the unsupervised states or an exponentially moving sample mean μ := (1/|S_E|) Σ_{s ∈ S_E} f_ω(s),
3. updates the current policy using the estimated gradient f_ω(s) − μ as well as the analytical gradients for ∇_θ log p(θ) and ∇_θ log p(A_D|S_D, θ). The SAIL gradient is given by

∇_θ log p(θ|S_D, A_D) = Σ_{(s,a) ∈ pairs(S_D, A_D)} ( f_ω(s) − μ + ∇_θ log p(a|s, θ) ) + ∇_θ log p(θ).

The full algorithm is also outlined in Algorithm 1 (a minimal code sketch of this loop follows below).

Figure 1: a) The sum of probabilities of taking the optimal action doubles over the baseline. b) The reward (± 2σ) obtained after 5000 iterations of SAIL is much closer to the optimal policy.

4 Evaluation

We evaluate our approach on two domains. The first domain is a harder variation of the tabular racetrack domain first used in [1] with 7425 states and 5 actions. In section 4.1.1, we use this domain to show that SAIL can improve on the policy learned by a supervised baseline and learn to act in states the policy representation does not generalize to. In section 4.1.2 we evaluate sample efficiency of an off-policy variant of SAIL. The tabular representation allows us to compare the results to RelEntIRL [1] as a baseline without restrictions arising from the chosen representation of the reward function. The second domain we use is a noisy variation of the bipedal walker domain found in OpenAI gym [2]. We use this domain to evaluate the performance of SAIL on tasks with continuous state and action spaces, using neural networks to represent the policy as well as the gradient estimate, and compare it against the supervised baseline using the same representations.
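The following is a minimal NumPy sketch of the loop in Algorithm 1 for a tabular softmax policy. The environment interface, the one-hot state features used for f_ω, and the step sizes are illustrative assumptions rather than the authors' implementation, and the prior term ∇_θ log p(θ) is omitted (i.e., a flat prior is assumed).

```python
import numpy as np

def sail(env, demo_states, demo_actions, n_states, n_actions,
         n_iters=500, alpha_omega=0.1, alpha_theta=0.01, episode_len=200):
    """Sketch of Algorithm 1 for a tabular softmax policy.

    `env` is assumed to expose reset() -> state and step(action) -> next_state.
    f_omega is linear in a one-hot state feature, so f_omega(s) = omega[s],
    a vector with one entry per policy parameter theta[s', a'].
    """
    theta = np.zeros((n_states, n_actions))             # policy logits
    omega = np.zeros((n_states, n_states * n_actions))  # f_omega(s) = omega[s]

    def policy(s):
        z = np.exp(theta[s] - theta[s].max())
        return z / z.sum()

    def grad_log_pi(s, a):
        # d/dtheta log pi(a|s): one-hot(a) minus pi(.|s), nonzero only in row s.
        g = np.zeros((n_states, n_actions))
        g[s] = -policy(s)
        g[s, a] += 1.0
        return g.ravel()

    for _ in range(n_iters):
        # 1. Collect one unsupervised episode with the current policy.
        transitions, s = [], env.reset()
        for _ in range(episode_len):
            a = np.random.choice(n_actions, p=policy(s))
            s_next = env.step(a)
            transitions.append((s, a, s_next))
            s = s_next

        # 2. Incremental form of line 5 (Equation 5): TD update for f_omega.
        #    For one-hot features, the gradient of f wrt omega[s'] is 1.
        for (s, a, s_next) in transitions:
            delta = omega[s] + grad_log_pi(s, a) - omega[s_next]
            omega[s_next] += alpha_omega * delta
        visited = [t[0] for t in transitions]
        mu = omega[visited].mean(axis=0)        # line 6: sample mean of f_omega

        # 3. Line 7: policy update with the SAIL gradient on the demonstrations
        #    (flat prior, so no grad-log-prior term).
        grad = np.zeros(n_states * n_actions)
        for s_d, a_d in zip(demo_states, demo_actions):
            grad += grad_log_pi(s_d, a_d) + (omega[s_d] - mu)
        theta += alpha_theta * (grad / len(demo_states)).reshape(theta.shape)

    return theta
```

A non-linear variant would replace the `omega` table with a network mapping states to vectors of the same dimensionality as θ and backpropagate the same TD error.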
4.1 Racetrack domain

We first evaluate SAIL on the racetrack domain. This domain is a more difficult variation of the domain used by Boularias et al. [1] and consists of a grid with 33 by 9 possible positions. Each position has 25 states associated with it, encoding the velocity (−2, −1, 0, +1, +2) in the x and y direction, which dictates the movement of the agent at each time step. The domain has 5 possible actions allowing the agent to increase or reduce its velocity in either direction or to keep its current velocity. Randomness is introduced to the domain using the notion of a failure probability, which is set to be 0.8 if the absolute velocity in either direction is 2 and 0.1 otherwise. The goal of the agent is to complete a lap around the track without going off-track, which we define to be the area surrounding the track (x = 0, y = 0, x > 31 or y > 6) as well as the inner rectangle (2 < x < 31 and 2 < y < 6). Note that unlike in [1], the agent has the ability to go off-track as opposed to being constrained by a wall and has to learn to move back on track if random chance makes it stray from it. Furthermore, the probability of going off-track is higher as the track is more narrow in this variation of the domain. This makes the domain more challenging to learn using imitation learning alone. (A code sketch of this transition model is given after the on-policy results below.)

Figure 2: Reward obtained using off-policy training. SAIL learns a near-optimal policy using only 1000 sample episodes. The scale is logarithmic on the x-axis after 5000 iterations (gray area).

For all our experiments, we use a set of 100 episodes collected from an oracle. To measure performance, we assign a score of −0.1 to being off-track, a score of 5 for completing the lap and −5 for crossing the finish line the wrong way. Note that this score is not used during training but is purely used to measure performance in this evaluation. We also use this score as a reward to derive an oracle.

4.1.1 On-policy results

For our first experiment, we compare SAIL against a supervised baseline. As the oracle is deterministic and the domain is tabular, this means taking the optimal action in states encountered as part of one of the demonstrated episodes and uniformly random actions otherwise. For the evaluation of SAIL, we initialize the policy to the supervised baseline and use the algorithm to improve the policy over 5000 iterations. At each iteration, 20 unsupervised sample episodes are collected to estimate the SAIL gradient, using plain stochastic gradient descent with a learning rate of 0.1 for the temporal difference update and RMSprop with a learning rate of 0.01 for updating the policy. Figure 1b shows that SAIL stably converges to a policy that significantly outperforms the supervised baseline. While we do not expect SAIL to act optimally in previously unseen states, but rather to exhibit recovery behavior, it is interesting to measure on how many states the learned policy agrees with the optimal policy, using a soft count for each state based on the probability of the optimal action. Figure 1a shows that the number of states in which the agent takes the optimal action roughly doubles its advantage over random chance and that the learned behavior is significantly closer to the optimal policy on states seen during execution.
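As referenced above, the following sketch reconstructs the racetrack transition model from the description in this subsection. The exact failure semantics (the action having no effect when it fails) and the clipping at the grid boundary are our assumptions; the paper specifies only the grid size, the velocity range, the failure probabilities, and the off-track region.

```python
import numpy as np

GRID_W, GRID_H, V_MAX = 33, 9, 2

def off_track(x, y):
    # Area surrounding the track plus the inner rectangle.
    return (x == 0 or y == 0 or x > 31 or y > 6
            or (2 < x < 31 and 2 < y < 6))

def step(state, action, rng=np.random.default_rng()):
    """state = (x, y, vx, vy); action = (dvx, dvy) with at most one
    nonzero component in {-1, +1}, covering the 5 actions."""
    x, y, vx, vy = state
    fail_prob = 0.8 if max(abs(vx), abs(vy)) == V_MAX else 0.1
    if rng.random() >= fail_prob:  # the velocity change only applies on success
        vx = int(np.clip(vx + action[0], -V_MAX, V_MAX))
        vy = int(np.clip(vy + action[1], -V_MAX, V_MAX))
    x = int(np.clip(x + vx, 0, GRID_W - 1))
    y = int(np.clip(y + vy, 0, GRID_H - 1))
    return (x, y, vx, vy)
```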
4.1.2 Off-policy sample efficiency

For our second experiment, we evaluate the sample efficiency of SAIL by reusing previous sample episodes. As a temporal difference method, SAIL can be adapted using any off-policy temporal difference learning technique. In this work we elected to use truncated importance weights [16] with emphatic decay [24]. We evaluate the performance of SAIL collecting one new unsupervised sample episode in each iteration, reusing the samples collected in the past 19 episodes, and compare the results against our implementation of Relative Entropy IRL [1]. We found that the importance sampling approach used by RelEntIRL makes interactions obtained by a pre-trained policy ineffective when using a tabular policy¹ and thus collect samples by taking actions uniformly at random. For comparability, we also evaluated SAIL using a fixed set of samples obtained by following a uniform policy. In this case, we found that the temporal-difference learning can become unstable in later iterations and thus decay the learning rate by a factor of 0.995 after each iteration. We vary the number of unsupervised sample episodes and show the score achieved by the trained policy in Figure 2. The score for RelEntIRL is measured by computing the optimal policy given the learned reward function. Note that this requires a model that is not normally available. We found that in this domain, depending on the obtained samples, RelEntIRL has a tendency to learn shortcuts through the off-track area. Since small changes in the reward function can lead to large changes in the final policy, we average the results for RelEntIRL over 20 trials and bound the total score from below by the score achieved using the supervised baseline. We can see that SAIL is able to learn a near optimal policy using a low number of sample episodes. We can furthermore see that SAIL using uniform samples is able to learn a good policy and outperform the RelEntIRL baseline reliably.

¹ The original work by Boularias et al. shows that a pre-trained sample policy can be used effectively if a trajectory based representation is used.

Figure 3: a) The bipedal walker has to traverse the plain, controlling the 4 noisy joint motors in its legs. b) Failure rate of SAIL over 1000 traversals compared to the supervised baseline. After 15000 iterations, SAIL traverses the plain far more reliably than the baseline.

4.2 Noisy bipedal walker

For our next experiment, we evaluate the performance of SAIL on a noisy variant of a two-dimensional bipedal walker domain (see Figure 3a). The goal of this domain is to learn a policy that enables the simulated robot to traverse a plain without falling. The state space in this domain consists of 4 dimensions for velocity in the x and y directions, angle of the hull, angular velocity, 8 dimensions for the position and velocity of the 4 joints in the legs, 2 dimensions that denote whether the leg has contact with the ground, and 10 dimensions corresponding to lidar readings, telling the robot about its surroundings. The action space is 4 dimensional and consists of the torque that is to be applied to each of the 4 joints. To make the domain more challenging, we also apply additional noise to each of the torques. The noise is sampled from a normal distribution with standard deviation of 0.1 and is kept constant for five consecutive frames at a time. The noise thus has the ability to destabilize the walker.
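A minimal sketch of this held-noise process, written as a Gym wrapper. The wrapper class and its name are ours; only the noise standard deviation (0.1), the five-frame hold, and the four-dimensional torque action come from the text.

```python
import gym
import numpy as np

class HeldTorqueNoise(gym.Wrapper):
    """Adds Gaussian torque noise that is resampled every `hold` frames
    and held fixed in between, as in the noisy walker domain above."""

    def __init__(self, env, std=0.1, hold=5):
        super().__init__(env)
        self.std, self.hold = std, hold
        self.noise, self.count = None, 0

    def reset(self, **kwargs):
        self.count = 0
        return self.env.reset(**kwargs)

    def step(self, action):
        if self.count % self.hold == 0:
            self.noise = np.random.normal(
                0.0, self.std, size=self.action_space.shape)
        self.count += 1
        return self.env.step(np.asarray(action) + self.noise)

env = HeldTorqueNoise(gym.make("BipedalWalker-v2"))
```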
Our goal in this experiment is to learn a continuous policy from demonstrations, mapping the state to torques and enabling the robot to traverse the plain reliably. As a demonstration, we provide a single successful crossing of the plain. The demonstration has been collected from an oracle that has been trained on the bipedal walker domain without additional noise and is therefore not optimal and prone to failure. Our main metric for success on this domain is failure rate, i.e. the fraction of times that the robot is not able to traverse the plain due to falling to the ground. While the reward metric used in [2] is more comprehensive, as it measures speed and control cost, it cannot be expected that a pure imitation learning approach can minimize control cost when trained with an imperfect demonstration that does not achieve this goal itself. Failure rate, on the other hand, can always be minimized by aiming to reproduce a demonstration of a successful traversal as well as possible. To represent our policy, we use a single shallow neural network with one hidden layer consisting of 100 nodes with tanh activation. We train this policy using a pure supervised approach as a baseline as well as with SAIL and contrast the results. During evaluation and supervised training, the output of the neural network is taken to be the exact torques, whereas SAIL requires a probabilistic policy. Therefore we add additional Gaussian noise, kept constant for 8 consecutive frames at a time. To train the network in a purely supervised approach, we use RMSprop over 3000 epochs with a batch size of 128 frames and a learning rate of 10⁻⁵. After the training process has converged, we found that the neural network trained with pure supervised learning fails 1650 times out of 5000 runs. To train the policy with SAIL, we first initialize it with the aforementioned supervised approach. The training is then followed up with training using the combined gradient estimated by SAIL until the failure rate stops decreasing. To represent the gradient of the logarithmic stationary distribution, we use a fully connected neural network with two hidden layers of 80 nodes each using ReLU activations. Each episode is split into mini-batches of 16 frames. The ∇_θ log d^{π_θ}-network is trained using RMSprop with a learning rate of 10⁻⁴, whereas the policy network is trained using RMSprop and a learning rate of 10⁻⁶, starting after the first 1000 episodes. As can be seen in Figure 3b, SAIL increases the success rate of 0.67 achieved by the baseline to 0.938 within 15000 iterations.

5 Conclusion

Imitation learning has long been a topic of active research. However, naive supervised learning has a tendency to lead the agent to states in which it cannot act with certainty, and alternative approaches either make additional assumptions or, in the case of IRL methods, address this problem only indirectly. In this work, we proposed a novel imitation learning algorithm that directly addresses this issue and learns a policy without relying on intermediate representations. We showed that the algorithm can generalize well and provides stable learning progress both in domains with a finite number of discrete states and in domains with continuous state and action spaces. We believe that explicit reasoning over states can be helpful even in situations where reproducing the distributions of states will not result in a desirable policy, and we see this as a promising direction for future research.
Acknowledgements

This work was supported by the Office of Naval Research under grant N000141410003.

References
[1] Abdeslam Boularias, Jens Kober, and Jan Peters. Relative Entropy Inverse Reinforcement Learning. International Conference on Artificial Intelligence and Statistics (AISTATS), 15:1-8, 2011.
[2] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016.
[3] Sonia Chernova and Andrea L. Thomaz. Robot learning from human teachers. Synthesis Lectures on Artificial Intelligence and Machine Learning, 8(3):1-121, 2014.
[4] Jaedeug Choi and Kee-Eung Kim. MAP Inference for Bayesian Inverse Reinforcement Learning. Neural Information Processing Systems (NIPS), 2011.
[5] Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning Journal (MLJ), 75(3):297-325, 2009.
[6] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization. International Conference on Machine Learning (ICML), 2016.
[7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[8] David Ha, Andrew Dai, and Quoc V. Le. HyperNetworks. arXiv preprint arXiv:1609.09106, 2016.
[9] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, and Audrunas Gruslys. Learning from Demonstrations for Real World Reinforcement Learning. arXiv preprint arXiv:1704.03732, 2017.
[10] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565-4573, 2016.
[11] Jonathan Ho, Jayesh Gupta, and Stefano Ermon. Model-free imitation learning with policy optimization. In International Conference on Machine Learning, pages 2760-2769, 2016.
[12] Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Decoupled Neural Interfaces using Synthetic Gradients. arXiv preprint arXiv:1608.05343, 2016.
[13] Edouard Klein, Matthieu Geist, Bilal Piot, and Olivier Pietquin. Inverse Reinforcement Learning through Structured Classification. Neural Information Processing Systems (NIPS), 2012.
[14] Edouard Klein, Bilal Piot, Matthieu Geist, and Olivier Pietquin. A cascaded supervised learning approach to inverse reinforcement learning. Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML/PKDD), 2013.
[15] Tetsuro Morimura, Eiji Uchibe, Junichiro Yoshimoto, Jan Peters, and Kenji Doya. Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning. Neural Computation, 22(2):342-376, 2010.
[16] Remi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. Safe and Efficient Off-Policy Reinforcement Learning. In Neural Information Processing Systems (NIPS), 2016.
[17] Andrew Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In International Conference on Machine Learning (ICML), 2000.
[18] Dean A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. Neural Information Processing Systems (NIPS), 1989.
[19] Stéphane Ross and J. Andrew Bagnell. Efficient Reductions for Imitation Learning. International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[20] Stéphane Ross, Geoffrey Gordon, and J. Andrew Bagnell. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning. International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
[21] Stefan Schaal. Robot learning from demonstration. Neural Information Processing Systems (NIPS), 1997.
[22] Juergen H. Schmidhuber. A self-referential weight matrix. International Conference on Artificial Neural Networks, 1993.
[23] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
[24] Richard S. Sutton, A. Rupam Mahmood, and Martha White. An emphatic approach to the problem of off-policy temporal-difference learning. Journal of Machine Learning Research (JMLR), 17:1-29, 2016.
[25] John N. Tsitsiklis and Benjamin Van Roy. Average cost temporal-difference learning. Automatica, 35:1799-1808, 1999.
[26] John N. Tsitsiklis and Benjamin Van Roy. On average versus discounted reward temporal-difference learning. Machine Learning, 49(2-3):179-191, 2002.
[27] Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum Entropy Inverse Reinforcement Learning. In AAAI Conference on Artificial Intelligence (AAAI), 2007.
Beyond Parity: Fairness Objectives for Collaborative Filtering

Sirui Yao, Department of Computer Science, Virginia Tech, Blacksburg, VA 24061, [email protected]
Bert Huang, Department of Computer Science, Virginia Tech, Blacksburg, VA 24061, [email protected]

Abstract

We study fairness in collaborative-filtering recommender systems, which are sensitive to discrimination that exists in historical data. Biased data can lead collaborative-filtering methods to make unfair predictions for users from minority groups. We identify the insufficiency of existing fairness metrics and propose four new metrics that address different forms of unfairness. These fairness metrics can be optimized by adding fairness terms to the learning objective. Experiments on synthetic and real data show that our new metrics can better measure fairness than the baseline, and that the fairness objectives effectively help reduce unfairness.

1 Introduction

This paper introduces new measures of unfairness in algorithmic recommendation and demonstrates how to optimize these metrics to reduce different forms of unfairness. Recommender systems study user behavior and make recommendations to support decision making. They have been widely applied in various fields to recommend items such as movies, products, jobs, and courses. However, since recommender systems make predictions based on observed data, they can easily inherit bias that may already exist. To address this issue, we first formalize the problem of unfairness in recommender systems and identify the insufficiency of demographic parity for this setting. We then propose four new unfairness metrics that address different forms of unfairness. We compare our fairness measures with non-parity on biased, synthetic training data and prove that our metrics can better measure unfairness. To improve model fairness, we provide five fairness objectives that can be optimized, each adding unfairness penalties as regularizers. Experimenting on real and synthetic data, we demonstrate that each fairness metric can be optimized without much degradation in prediction accuracy, but that trade-offs exist among the different forms of unfairness.

We focus on a frequently practiced approach for recommendation called collaborative filtering, which makes recommendations based on the ratings or behavior of other users in the system. The fundamental assumption behind collaborative filtering is that other users' opinions can be selected and aggregated in such a way as to provide a reasonable prediction of the active user's preference [7]. For example, if a user likes item A, and many other users who like item A also like item B, then it is reasonable to expect that the user will also like item B. Collaborative filtering methods would predict that the user will give item B a high rating. With this approach, predictions are made based on co-occurrence statistics, and most methods assume that the missing ratings are missing at random. Unfortunately, researchers have shown that sampled ratings have markedly different properties from the users' true preferences [21, 22]. Sampling is heavily influenced by social bias, which results in more missing ratings in some cases than others. This non-random pattern of missing and observed rating data is a potential source of unfairness. For the purpose of improving recommendation accuracy, there are collaborative filtering models
[2, 21, 25] that use side information to address the problem of imbalanced data, but in this work, to test the properties and effectiveness of our metrics, we focus on the basic matrix-factorization algorithm first. Investigating how these other models could reduce unfairness is one direction for future research. Throughout the paper, we consider a running example of unfair recommendation. We consider recommendation in education, and unfairness that may occur in areas with current gender imbalance, such as science, technology, engineering, and mathematics (STEM) topics. Due to societal and cultural influences, fewer female students currently choose careers in STEM. For example, in 2010, women accounted for only 18% of the bachelor's degrees awarded in computer science [3]. The underrepresentation of women causes historical rating data of computer-science courses to be dominated by men. Consequently, the learned model may underestimate women's preferences and be biased toward men. We consider the setting in which, even if the ratings provided by students accurately reflect their true preferences, the bias in which ratings are reported leads to unfairness.

The remainder of the paper is organized as follows. First, we review previous relevant work in Section 2. In Section 3, we formalize the recommendation problem, and we introduce four new unfairness metrics and give justifications and examples. In Section 4, we show that unfairness occurs as data gets more imbalanced, and we present results that successfully minimize each form of unfairness. Finally, Section 5 concludes the paper and proposes possible future work.

2 Related Work

As machine learning is being more widely applied in modern society, researchers have begun identifying the criticality of algorithmic fairness. Various studies have considered algorithmic fairness in problems such as supervised classification [20, 23, 28]. When aiming to protect algorithms from treating people differently for prejudicial reasons, removing sensitive features (e.g., gender, race, or age) can help alleviate unfairness but is often insufficient. Features are often correlated, so other unprotected attributes can be related to the sensitive features and therefore still cause the model to be biased [17, 29]. Moreover, in problems such as collaborative filtering, algorithms do not directly consider measured features and instead infer latent user attributes from their behavior.

Another frequently practiced strategy for encouraging fairness is to enforce demographic parity, which is to achieve statistical parity among groups. The goal is to ensure that the overall proportion of members in the protected group receiving positive (or negative) classifications is identical to the proportion of the population as a whole [29]. For example, in the case of a binary decision Ŷ ∈ {0, 1} and a binary protected attribute A ∈ {0, 1}, this constraint can be formalized as [9]

Pr{Ŷ = 1 | A = 0} = Pr{Ŷ = 1 | A = 1}.   (1)

Kamishima et al. [13-17] evaluate model fairness based on this non-parity unfairness concept, or try to solve the unfairness issue in recommender systems by adding a regularization term that enforces demographic parity. The objective penalizes the differences among the average predicted ratings of user groups. However, demographic parity is only appropriate when preferences are unrelated to the sensitive features. In tasks such as recommendation, user preferences are indeed influenced by sensitive features such as gender, race, and age [4, 6]. Therefore, enforcing demographic parity may significantly damage the quality of recommendations.

To address the issue of demographic parity, Hardt et al. [9] propose to measure unfairness with the true positive rate and true negative rate. This idea encourages what they refer to as equal opportunity and no longer relies on the implicit assumption of demographic parity that the target variable is independent of sensitive features. They propose that, in a binary setting, given a decision Ŷ ∈ {0, 1}, a protected attribute A ∈ {0, 1}, and the true label Y ∈ {0, 1}, the constraints are equivalent to [9]

Pr{Ŷ = 1 | A = 0, Y = y} = Pr{Ŷ = 1 | A = 1, Y = y},   y ∈ {0, 1}.   (2)

This constraint upholds fairness and simultaneously respects group differences. It penalizes models that only perform well on the majority groups. This idea is also the basis of the unfairness metrics we propose for recommendation.

Our running example of recommendation in education is inspired by the recent interest in using algorithms in this domain [5, 24, 27]. Student decisions about which courses to study can have significant impacts on their lives, so the usage of algorithmic recommendation in this setting has consequences that will affect society for generations. Coupling the importance of this application with the issue of gender imbalance in STEM [1] and challenges in retention of students with backgrounds underrepresented in STEM [8, 26], we find this setting a serious motivation to advance scientific understanding of unfairness, and of methods to reduce it, in recommendation.
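As an illustration, the two constraints above can be checked empirically from binary predictions. This sketch is ours; the function names and the random data are purely illustrative, and in the recommendation setting Ŷ could be a thresholded predicted rating.

```python
import numpy as np

def non_parity_gap(y_hat, a):
    """Empirical version of Eq. (1): difference in positive-prediction
    rates between the two groups (0 means demographic parity holds)."""
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def equal_opportunity_gap(y_hat, y, a, label=1):
    """Empirical version of Eq. (2) for one value of y: difference in
    Pr{y_hat = 1 | A, Y = label} between the two groups."""
    rate = lambda g: y_hat[(a == g) & (y == label)].mean()
    return abs(rate(0) - rate(1))

# Illustrative binary arrays: predictions, true labels, group membership.
rng = np.random.default_rng(0)
y_hat = rng.integers(0, 2, 1000)
y = rng.integers(0, 2, 1000)
a = rng.integers(0, 2, 1000)
print(non_parity_gap(y_hat, a), equal_opportunity_gap(y_hat, y, a))
```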
Enforcing demographic parity in such settings may therefore significantly damage the quality of recommendations. To address this shortcoming of demographic parity, Hardt et al. [9] propose to measure unfairness with the true positive rate and true negative rate. This idea encourages what they refer to as equal opportunity and no longer relies on the implicit assumption of demographic parity that the target variable is independent of the sensitive features. They propose that, in a binary setting, given a decision $\hat{Y} \in \{0, 1\}$, a protected attribute $A \in \{0, 1\}$, and the true label $Y \in \{0, 1\}$, the constraints are [9]
$$\Pr\{\hat{Y} = 1 \mid A = 0, Y = y\} = \Pr\{\hat{Y} = 1 \mid A = 1, Y = y\}, \quad y \in \{0, 1\}. \quad (2)$$
This constraint upholds fairness and simultaneously respects group differences. It penalizes models that only perform well on the majority groups. This idea is also the basis of the unfairness metrics we propose for recommendation.

Our running example of recommendation in education is inspired by the recent interest in using algorithms in this domain [5, 24, 27]. Student decisions about which courses to study can have significant impacts on their lives, so the usage of algorithmic recommendation in this setting has consequences that will affect society for generations. Coupling the importance of this application with the issue of gender imbalance in STEM [1] and challenges in the retention of students with backgrounds underrepresented in STEM [8, 26], we find this setting a serious motivation to advance scientific understanding of unfairness, and of methods to reduce it, in recommendation.

3 Fairness Objectives for Collaborative Filtering

This section introduces fairness objectives for collaborative filtering. We begin by reviewing the matrix-factorization method. We then describe the various fairness objectives we consider, providing formal definitions and a discussion of their motivations.

3.1 Matrix Factorization for Recommendation

We consider the task of collaborative filtering using matrix factorization [19]. We have a set of users indexed from 1 to m and a set of items indexed from 1 to n. For the ith user, let $g_i$ be a variable indicating which group the ith user belongs to. For example, it may indicate whether user i identifies as a woman, a man, or with a non-binary gender identity. For the jth item, let $h_j$ indicate the item group that it belongs to. For example, $h_j$ may represent a genre of a movie or the topic of a course. Let $r_{ij}$ be the preference score of the ith user for the jth item. The ratings can be viewed as entries in a rating matrix R.

The matrix-factorization formulation builds on the assumption that each rating can be represented as the product of vectors representing the user and item. With additional bias terms for users and items, this assumption can be summarized as
$$r_{ij} \approx p_i^\top q_j + u_i + v_j, \quad (3)$$
where $p_i$ is a d-dimensional vector representing the ith user, $q_j$ is a d-dimensional vector representing the jth item, and $u_i$ and $v_j$ are scalar bias terms for the user and item, respectively. The matrix-factorization learning algorithm seeks to learn these parameters from observed ratings X, typically by minimizing a regularized, squared reconstruction error:
$$J(P, Q, \mathbf{u}, \mathbf{v}) = \frac{1}{|X|} \sum_{(i,j) \in X} (y_{ij} - r_{ij})^2 + \frac{\lambda}{2}\left(\|P\|_F^2 + \|Q\|_F^2\right), \quad (4)$$
where $\mathbf{u}$ and $\mathbf{v}$ are the vectors of bias terms, $\|\cdot\|_F$ denotes the Frobenius norm, and
$$y_{ij} = p_i^\top q_j + u_i + v_j. \quad (5)$$
Strategies for minimizing this non-convex objective are well studied, and a general approach is to compute the gradient and use a gradient-based optimizer.
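To make Eqs. (3)-(5) concrete, here is a minimal NumPy sketch of the prediction and the objective. It is an illustrative reimplementation, not the authors' code; the function names and the boolean observation mask are our assumptions:

    import numpy as np

    def predict(P, Q, u, v):
        # y_ij = p_i^T q_j + u_i + v_j (Eq. 5), computed for all user-item pairs
        return P @ Q.T + u[:, None] + v[None, :]

    def objective(P, Q, u, v, R, mask, lam=1e-3):
        # Eq. 4: mean squared error over observed entries plus Frobenius regularization
        Y = predict(P, Q, u, v)
        err = ((Y - R) ** 2)[mask].mean()
        return err + 0.5 * lam * ((P ** 2).sum() + (Q ** 2).sum())

Here P is m-by-d, Q is n-by-d, u and v are the bias vectors, R holds the ratings, and mask marks the observed entries X.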
In our experiments, we use the Adam algorithm [18], which combines adaptive learning rates with momentum.

3.2 Unfair Recommendations from Underrepresentation

In this section, we describe a process through which matrix factorization leads to unfair recommendations, even when rating data accurately reflects users' true preferences. Such unfairness can occur with imbalanced data. We identify two forms of underrepresentation: population imbalance and observation bias. We later demonstrate that either leads to unfair recommendation, and that both forms together lead to worse unfairness. In our discussion, we use a running example of course recommendation, highlighting the effects of underrepresentation in STEM education.

Population imbalance occurs when different types of users occur in the dataset with varied frequencies. For example, we consider four types of users defined by two aspects. First, each individual identifies with a gender. For simplicity, we only consider binary gender identities, though in this example, it would also be appropriate to consider men as one gender group and women and all non-binary gender identities as the second group. Second, each individual is either someone who enjoys and would excel in STEM topics or someone who does not. Population imbalance occurs in STEM education when, because of systemic bias or other societal problems, there may be significantly fewer women who succeed in STEM (WS) than those who do not (W), and because of converse societal unfairness, there may be more men who succeed in STEM (MS) than those who do not (M). This four-way separation of user groups is not available to the recommender system, which instead may only know the gender group of each user, but not their proclivity for STEM.

Observation bias is a related but distinct form of data imbalance, in which certain types of users may have different tendencies to rate different types of items. This bias is often part of a feedback loop involving existing methods of recommendation, whether by algorithms or by humans. If an individual is never recommended a particular item, they will likely never provide rating data for that item. Therefore, algorithms will never be able to directly learn about this preference relationship. In the education example, if women are rarely recommended to take STEM courses, there may be significantly less training data about women in STEM courses.

We simulate these two types of data bias with two stochastic block models [11]. We create one block model that determines the probability that an individual in a particular user group likes an item in a particular item group. The group ratios may be non-uniform, leading to population imbalance. We then use a second block model to determine the probability that an individual in a user group rates an item in an item group. Non-uniformity in the second block model will lead to observation bias. Formally, let the matrix $L \in [0,1]^{|g| \times |h|}$ contain the block-model parameters for rating probability. For the ith user and the jth item, the probability that $r_{ij} = +1$ is $L_{(g_i, h_j)}$, and otherwise $r_{ij} = -1$. Moreover, let $O \in [0,1]^{|g| \times |h|}$ be such that the probability of observing $r_{ij}$ is $O_{(g_i, h_j)}$.

3.3 Fairness Metrics

In this section, we present four new unfairness metrics for preference prediction, all measuring a discrepancy between the prediction behavior for disadvantaged users and advantaged users. Each metric captures a different type of unfairness that may have different consequences.
We describe the mathematical formulation of each metric, its justification, and examples of the consequences the metric may indicate. We consider a binary group feature and refer to disadvantaged and advantaged groups, which may represent women and men in our education example.

The first metric is value unfairness, which measures inconsistency in signed estimation error across the user types, computed as
$$U_{\mathrm{val}} = \frac{1}{n} \sum_{j=1}^{n} \left| \left(\mathrm{E}_g[y]_j - \mathrm{E}_g[r]_j\right) - \left(\mathrm{E}_{\neg g}[y]_j - \mathrm{E}_{\neg g}[r]_j\right) \right|, \quad (6)$$
where $\mathrm{E}_g[y]_j$ is the average predicted score for the jth item from disadvantaged users, $\mathrm{E}_{\neg g}[y]_j$ is the average predicted score for advantaged users, and $\mathrm{E}_g[r]_j$ and $\mathrm{E}_{\neg g}[r]_j$ are the average ratings for the disadvantaged and advantaged users, respectively. Precisely, the quantity $\mathrm{E}_g[y]_j$ is computed as
$$\mathrm{E}_g[y]_j := \frac{1}{|\{i : ((i,j) \in X) \wedge g_i\}|} \sum_{i : ((i,j) \in X) \wedge g_i} y_{ij}, \quad (7)$$
and the other averages are computed analogously. Value unfairness occurs when one class of user is consistently given higher or lower predictions than their true preferences. If the errors in prediction are evenly balanced between overestimation and underestimation, or if both classes of users have the same direction and magnitude of error, the value unfairness becomes small. Value unfairness becomes large when predictions for one class are consistently overestimated and predictions for the other class are consistently underestimated. For example, in a course recommender, value unfairness may manifest as male students being recommended STEM courses even when they are not interested in STEM topics, and female students not being recommended STEM courses even if they are interested in STEM topics.

The second metric is absolute unfairness, which measures inconsistency in absolute estimation error across user types, computed as
$$U_{\mathrm{abs}} = \frac{1}{n} \sum_{j=1}^{n} \left| \left|\mathrm{E}_g[y]_j - \mathrm{E}_g[r]_j\right| - \left|\mathrm{E}_{\neg g}[y]_j - \mathrm{E}_{\neg g}[r]_j\right| \right|. \quad (8)$$
Absolute unfairness is unsigned, so it captures a single statistic representing the quality of prediction for each user type. If one user type has small reconstruction error and the other user type has large reconstruction error, one type of user has the unfair advantage of good recommendation, while the other user type has poor recommendation. In contrast to value unfairness, absolute unfairness does not consider the direction of error. For example, if female students are given predictions 0.5 points below their true preferences and male students are given predictions 0.5 points above their true preferences, there is no absolute unfairness. Conversely, if female students are given ratings that are off by 2 points in either direction while male students are rated within 1 point of their true preferences, absolute unfairness is high, while value unfairness may be low.

The third metric is underestimation unfairness, which measures inconsistency in how much the predictions underestimate the true ratings:
$$U_{\mathrm{under}} = \frac{1}{n} \sum_{j=1}^{n} \left| \max\{0, \mathrm{E}_g[r]_j - \mathrm{E}_g[y]_j\} - \max\{0, \mathrm{E}_{\neg g}[r]_j - \mathrm{E}_{\neg g}[y]_j\} \right|. \quad (9)$$
Underestimation unfairness is important in settings where missing recommendations are more critical than extra recommendations. For example, underestimation could lead to a top student not being recommended to explore a topic they would excel in. Conversely, the fourth new metric is overestimation unfairness, which measures inconsistency in how much the predictions overestimate the true ratings:
$$U_{\mathrm{over}} = \frac{1}{n} \sum_{j=1}^{n} \left| \max\{0, \mathrm{E}_g[y]_j - \mathrm{E}_g[r]_j\} - \max\{0, \mathrm{E}_{\neg g}[y]_j - \mathrm{E}_{\neg g}[r]_j\} \right|. \quad (10)$$
Overestimation unfairness may be important in settings where users may be overwhelmed by recommendations, so providing too many recommendations would be especially detrimental. For example, if users must invest large amounts of time to evaluate each recommended item, overestimating essentially costs the user time. Thus, uneven amounts of overestimation could cost one type of user more time than the other.
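The four group-difference metrics in Eqs. (6)-(10) reduce to a few lines of array code. The following is a minimal sketch of ours (not the authors' implementation), assuming the per-item group averages of Eq. (7) have already been computed as length-n arrays:

    import numpy as np

    def unfairness_metrics(Eg_y, Eg_r, Eng_y, Eng_r):
        """Eqs. 6-10 from per-item averages for the disadvantaged (g)
        and advantaged (ng) groups; each argument is indexed by item j."""
        err_g, err_ng = Eg_y - Eg_r, Eng_y - Eng_r
        U_val = np.abs(err_g - err_ng).mean()                                   # Eq. 6
        U_abs = np.abs(np.abs(err_g) - np.abs(err_ng)).mean()                   # Eq. 8
        U_under = np.abs(np.maximum(0, -err_g) - np.maximum(0, -err_ng)).mean() # Eq. 9
        U_over = np.abs(np.maximum(0, err_g) - np.maximum(0, err_ng)).mean()    # Eq. 10
        return U_val, U_abs, U_under, U_over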
Finally, a non-parity unfairness measure based on the regularization term introduced by Kamishima et al. [17] can be computed as the absolute difference between the overall average ratings of disadvantaged users and those of advantaged users:
$$U_{\mathrm{par}} = \left| \mathrm{E}_g[y] - \mathrm{E}_{\neg g}[y] \right|.$$
Each of these metrics has a straightforward subgradient and can be optimized by various subgradient optimization techniques. We augment the learning objective by adding a smoothed variation of a fairness metric based on the Huber loss [12], where the outer absolute value is replaced with the squared difference if it is less than 1. We solve for a local minimum, i.e.,
$$\min_{P, Q, \mathbf{u}, \mathbf{v}} \; J(P, Q, \mathbf{u}, \mathbf{v}) + U. \quad (11)$$
The smoothed penalty helps reduce discontinuities in the objective, making optimization more efficient. It is also straightforward to add a scalar trade-off term to weight the fairness penalty against the loss. In our experiments, we use equal weighting, so we omit the term from Eq. (11).

4 Experiments

We run experiments on synthetic data based on the simulated course-recommendation scenario and on real movie-rating data [10]. For each experiment, we investigate whether the learning objectives augmented with unfairness penalties successfully reduce unfairness.

4.1 Synthetic Data

In our synthetic experiments, we generate simulated course-recommendation data from a block model as described in Section 3.2. We consider four user groups g in {W, WS, M, MS} and three item groups h in {Fem, STEM, Masc}. The user groups can be thought of as women who do not enjoy STEM topics (W), women who do enjoy STEM topics (WS), men who do not enjoy STEM topics (M), and men who do (MS). The item groups can be thought of as courses that tend to appeal to most women (Fem), STEM courses, and courses that tend to appeal to most men (Masc).

[Figure 1: Average unfairness scores for standard matrix factorization on synthetic data generated from different underrepresentation schemes. For each metric, the four sampling schemes are uniform (U), biased observations (O), biased populations (P), and both biases (O+P). The reconstruction error and the first four unfairness metrics follow the same trend, while non-parity exhibits different behavior.]

Based on these groups, we consider the rating block model
$$L = \begin{array}{c|ccc} & \mathrm{Fem} & \mathrm{STEM} & \mathrm{Masc} \\ \hline \mathrm{W} & 0.8 & 0.2 & 0.2 \\ \mathrm{WS} & 0.8 & 0.8 & 0.2 \\ \mathrm{MS} & 0.2 & 0.8 & 0.8 \\ \mathrm{M} & 0.2 & 0.2 & 0.8 \end{array} \quad (12)$$
We also consider two observation block models: one with uniform observation probability across all groups, $O^{\mathrm{uni}} = [0.4]^{4 \times 3}$, and one with unbalanced observation probability, inspired by how students are often encouraged to take certain courses:
$$O^{\mathrm{bias}} = \begin{array}{c|ccc} & \mathrm{Fem} & \mathrm{STEM} & \mathrm{Masc} \\ \hline \mathrm{W} & 0.6 & 0.2 & 0.1 \\ \mathrm{WS} & 0.3 & 0.4 & 0.2 \\ \mathrm{MS} & 0.1 & 0.3 & 0.5 \\ \mathrm{M} & 0.05 & 0.5 & 0.35 \end{array} \quad (13)$$
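Sampling preferences and observations from these two block models is mechanical; the following sketch (an illustration of ours, with assumed input shapes) draws a rating matrix and an observation mask from L and O:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_block_model(L, O, user_groups, item_groups):
        """Sample ratings R in {-1, +1} and an observation mask from the block models.
        L[g, h]: probability a user in group g likes an item in group h (rating +1).
        O[g, h]: probability that the rating is observed at all."""
        like_p = L[np.ix_(user_groups, item_groups)]      # per-entry like-probabilities
        R = np.where(rng.random(like_p.shape) < like_p, 1, -1)
        mask = rng.random(like_p.shape) < O[np.ix_(user_groups, item_groups)]
        return R, mask

Here user_groups and item_groups are integer arrays mapping each user or item to its row or column of L and O.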
We define two different user group distributions: one in which each of the four groups is exactly a quarter of the population, and an imbalanced setting in which 0.4 of the population is in W, 0.1 in WS, 0.4 in MS, and 0.1 in M. This heavy imbalance is inspired by some of the severe gender imbalances in certain STEM areas today. For each experiment, we select an observation matrix and a user group distribution, generate 400 users and 300 items, and sample preferences and observations of those preferences from the block models. Training on these ratings, we evaluate on the remaining entries of the rating matrix, comparing the predicted rating against the true expected rating, $2L_{(g_i, h_j)} - 1$.

4.1.1 Unfairness from different types of underrepresentation

Using standard matrix factorization, we measure the various unfairness metrics under the different sampling conditions. We average over five random trials and plot the average scores in Fig. 1. We label the settings as follows: uniform user groups and uniform observation probabilities (U), uniform groups and biased observation probabilities (O), biased user group populations and uniform observations (P), and biased populations and biased observations (O+P). The statistics demonstrate that each type of underrepresentation contributes to various forms of unfairness. For all metrics except parity, there is a strict order of unfairness: uniform data is the most fair; biased observations is the next most fair; biased populations is worse; and biasing both the populations and the observations causes the most unfairness. The squared rating error also follows this same trend. In contrast, non-parity behaves differently, in that it is heavily amplified by biased observations but seems unaffected by biased populations. Note that although non-parity is high when the observations are imbalanced, one should actually expect non-parity in the labeled ratings because of the imbalance in the observations, so a high non-parity score does not necessarily indicate an unfair situation. The other unfairness metrics, on the other hand, describe examples of unfair behavior by the rating predictor. These tests verify that unfairness can occur with imbalanced populations or observations, even when the measured ratings accurately represent user preferences.

Table 1: Average error and unfairness metrics for synthetic data using different fairness objectives. The best scores and those that are statistically indistinguishable from the best are printed in bold. Each row represents a different unfairness penalty, and each column is the measured metric on the expected value of unseen ratings.

Penalty      Error              Value              Absolute           Underestimation    Overestimation     Non-Parity
None         0.317 ± 1.3e-02    0.649 ± 1.8e-02    0.443 ± 2.2e-02    0.107 ± 6.5e-03    0.544 ± 2.0e-02    0.362 ± 1.6e-02
Value        0.130 ± 1.0e-02    0.245 ± 1.4e-02    0.177 ± 1.5e-02    0.063 ± 4.1e-03    0.199 ± 1.5e-02    0.324 ± 1.2e-02
Absolute     0.205 ± 8.8e-03    0.535 ± 1.6e-02    0.267 ± 1.3e-02    0.135 ± 6.2e-03    0.400 ± 1.4e-02    0.294 ± 1.0e-02
Under        0.269 ± 1.6e-02    0.512 ± 2.3e-02    0.401 ± 2.4e-02    0.060 ± 3.5e-03    0.456 ± 2.3e-02    0.357 ± 1.6e-02
Over         0.130 ± 6.5e-03    0.296 ± 1.2e-02    0.172 ± 1.3e-02    0.074 ± 6.0e-03    0.228 ± 1.1e-02    0.321 ± 1.2e-02
Non-Parity   0.324 ± 1.3e-02    0.697 ± 1.8e-02    0.453 ± 2.2e-02    0.124 ± 6.9e-03    0.573 ± 1.9e-02    0.251 ± 1.0e-02

4.1.2 Optimization of unfairness metrics

As before, we generate rating data using the block model under the most imbalanced setting: the user populations are imbalanced, and the sampling rate is skewed.
We provide the sampled ratings to the matrix-factorization algorithms and evaluate on the remaining entries of the expected rating matrix. We again use two-dimensional vectors to represent the users and items, a regularization weight of $\lambda = 10^{-3}$, and optimize for 250 iterations using the full gradient. We generate three datasets each and measure the squared reconstruction error and the six unfairness metrics. The results are listed in Table 1. For each metric, we print in bold the best average score and any scores that are not statistically significantly distinct from it according to paired t-tests with threshold 0.05.

The results indicate that the learning algorithm successfully minimizes the unfairness penalties, generalizing to unseen, held-out user-item pairs. Moreover, reducing any unfairness metric does not lead to a significant increase in reconstruction error. The complexity of computing the unfairness metrics is similar to that of the error computation, which is linear in the number of ratings, so adding the fairness term approximately doubles the training time. In our implementation, learning with fairness terms takes longer because loops and backpropagation introduce extra overhead. For example, with synthetic data of 400 users and 300 items, it takes 13.46 seconds to train a matrix-factorization model without any unfairness term and 43.71 seconds for one with value unfairness.

While optimizing each metric leads to improved performance on itself (see the highlighted entries in Table 1), a few trends are worth noting. Optimizing any of our new unfairness metrics almost always reduces the other forms of unfairness. An exception is that optimizing absolute unfairness leads to an increase in underestimation. Value unfairness is closely related to underestimation and overestimation, since optimizing value unfairness is even more effective at reducing underestimation and overestimation than directly optimizing either of them. Also, optimizing value and overestimation are more effective at reducing absolute unfairness than directly optimizing it. Finally, optimizing parity unfairness leads to increases in all unfairness metrics except absolute unfairness and parity itself. These relationships among the metrics suggest a need for practitioners to decide which types of fairness are most important for their applications.
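The bolding rule used in Tables 1 and 3 (best mean plus anything not significantly worse under a paired t-test) is easy to reproduce; a hedged sketch, assuming SciPy and per-trial scores keyed by objective name:

    import numpy as np
    from scipy import stats

    def indistinguishable_from_best(scores, alpha=0.05):
        """scores: dict mapping objective name -> array of per-trial metric values.
        Returns names whose mean is best, or not significantly worse than the best
        (paired t-test, p >= alpha). Lower is better for error and unfairness."""
        means = {k: np.mean(v) for k, v in scores.items()}
        best = min(means, key=means.get)
        return [k for k, v in scores.items()
                if k == best or stats.ttest_rel(v, scores[best]).pvalue >= alpha]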
4.2 Real Data

We use the Movielens Million Dataset [10], which contains ratings (from 1 to 5) by 6,040 users of 3,883 movies. The users are annotated with demographic variables, including gender, and the movies are each annotated with a set of genres. We manually selected genres that feature different forms of gender imbalance and only consider movies that list these genres. Then we filter the users to only consider those who rated at least 50 of the selected movies. The genres we selected are action, crime, musical, romance, and sci-fi. We selected these genres because they each have a noticeable gender effect in the data. Women rate musical and romance films higher and more frequently than men. Women and men both score action, crime, and sci-fi films about equally, but men rate these films much more frequently. Table 2 lists these statistics in detail. After filtering by genre and rating frequency, we have 2,953 users and 1,006 movies in the dataset.

Table 2: Gender-based statistics of movie genres in the Movielens data.

Genre     Count   Ratings per female user   Ratings per male user   Average rating by women   Average rating by men
Romance   325     54.79                     36.97                   3.64                      3.55
Action    425     52.00                     82.97                   3.45                      3.45
Sci-Fi    237     31.19                     50.46                   3.42                      3.44
Musical   93      15.04                     10.83                   3.79                      3.58
Crime     142     17.45                     23.90                   3.65                      3.68

We run five trials in which we randomly split the ratings into training and testing sets, train each objective function on the training set, and evaluate each metric on the testing set. The average scores are listed in Table 3, where bold scores again indicate being statistically indistinguishable from the best average score.

Table 3: Average error and unfairness metrics for movie-rating data using different fairness objectives.

Penalty      Error              Value              Absolute           Underestimation    Overestimation     Non-Parity
None         0.887 ± 1.9e-03    0.234 ± 6.3e-03    0.126 ± 1.7e-03    0.107 ± 1.6e-03    0.153 ± 3.9e-03    0.036 ± 1.3e-03
Value        0.886 ± 2.2e-03    0.223 ± 6.9e-03    0.128 ± 2.2e-03    0.102 ± 1.9e-03    0.148 ± 4.9e-03    0.041 ± 1.6e-03
Absolute     0.887 ± 2.0e-03    0.235 ± 6.2e-03    0.124 ± 1.7e-03    0.110 ± 1.8e-03    0.151 ± 4.2e-03    0.023 ± 2.7e-03
Under        0.888 ± 2.2e-03    0.233 ± 6.8e-03    0.128 ± 1.8e-03    0.102 ± 1.7e-03    0.156 ± 4.2e-03    0.058 ± 9.3e-04
Over         0.885 ± 1.9e-03    0.234 ± 5.8e-03    0.125 ± 1.6e-03    0.112 ± 1.9e-03    0.148 ± 4.1e-03    0.015 ± 2.0e-03
Non-Parity   0.887 ± 1.9e-03    0.236 ± 6.0e-03    0.126 ± 1.6e-03    0.110 ± 1.7e-03    0.152 ± 3.9e-03    0.010 ± 1.5e-03

On real data, the results show that optimizing each unfairness metric leads to the best performance on that metric without a significant change in the reconstruction error. As with the synthetic data, optimizing value unfairness leads to the largest decrease in under- and overestimation. Optimizing non-parity again causes an increase or no change in almost all of the other unfairness metrics.

5 Conclusion

In this paper, we discussed various types of unfairness that can occur in collaborative filtering. We demonstrate that these forms of unfairness can occur even when the observed rating data is correct, in the sense that it accurately reflects the preferences of the users. We identify two forms of data bias that can lead to such unfairness. We then demonstrate that augmenting matrix-factorization objectives with these unfairness metrics as penalty functions enables a learning algorithm to minimize each of them. Our experiments on synthetic and real data show that minimization of these forms of unfairness is possible with no significant increase in reconstruction error. We also demonstrate a combined objective that penalizes both overestimation and underestimation. Minimizing this objective leads to small unfairness penalties for the other forms of unfairness. Using this combined objective may be a good approach for practitioners. However, no single objective was the best for all unfairness metrics, so it remains necessary for practitioners to consider precisely which form of fairness is most important in their application and to optimize that specific objective.

Future Work

While our work in this paper focused on improving fairness among users so that the model treats different groups of users fairly, we did not address fair treatment of different item groups. The model could be biased toward certain items, e.g., performing better at prediction for some items than others in terms of accuracy or over- and underestimation. Achieving fairness for both users and items may be important when considering that the items may also suffer from discrimination or bias, for example, when courses are taught by instructors with different demographics. Our experiments demonstrate that minimizing empirical unfairness generalizes, but this generalization is dependent on data density.
When ratings are especially sparse, the empirical fairness does not always generalize well to held-out predictions. We are investigating methods that are more robust to data sparsity in future work. Moreover, our fairness metrics assume that users rate items according to their true preferences. This assumption is likely to be violated in real data, since ratings can also be influenced by various environmental factors. For example, in education, a student's rating for a course also depends on whether the course has an inclusive and welcoming learning environment. However, addressing this type of bias may require additional information or external interventions beyond the provided rating data. Finally, we are investigating methods to reduce unfairness by directly modeling the two-stage sampling process we used to generate synthetic, biased data. We hypothesize that, by explicitly modeling the rating and observation probabilities as separate variables, we may be able to derive a principled, probabilistic approach to address these forms of data imbalance.

References

[1] D. N. Beede, T. A. Julian, D. Langdon, G. McKittrick, B. Khan, and M. E. Doms. Women in STEM: A gender gap to innovation. U.S. Department of Commerce, Economics and Statistics Administration, 2011.
[2] A. Beutel, E. H. Chi, Z. Cheng, H. Pham, and J. Anderson. Beyond globally optimal: Focused learning for improved recommendations. In Proceedings of the 26th International Conference on World Wide Web, pages 203-212. International World Wide Web Conferences Steering Committee, 2017.
[3] S. Broad and M. McGee. Recruiting women into computer science and information systems. Proceedings of the Association Supporting Computer Users in Education Annual Conference, pages 29-40, 2014.
[4] O. Chausson. Who watches what? Assessing the impact of gender and personality on film preferences. http://mypersonality.org/wiki/doku.php?id=movie_tastes_and_personality, 2010.
[5] M.-I. Dascalu, C.-N. Bodea, M. N. Mihailescu, E. A. Tanase, and P. Ordóñez de Pablos. Educational recommender systems and their application in lifelong learning. Behaviour & Information Technology, 35(4):290-297, 2016.
[6] T. N. Daymont and P. J. Andrisani. Job preferences, college major, and the gender gap in earnings. Journal of Human Resources, pages 408-428, 1984.
[7] M. D. Ekstrand, J. T. Riedl, J. A. Konstan, et al. Collaborative filtering recommender systems. Foundations and Trends in Human-Computer Interaction, 4(2):81-173, 2011.
[8] A. L. Griffith. Persistence of women and minorities in STEM field majors: Is it the school that matters? Economics of Education Review, 29(6):911-922, 2010.
[9] M. Hardt, E. Price, N. Srebro, et al. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pages 3315-3323, 2016.
[10] F. M. Harper and J. A. Konstan. The Movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19, 2016.
[11] P. W. Holland and S. Leinhardt. Local structure in social networks. Sociological Methodology, 7:1-45, 1976.
[12] P. J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, pages 73-101, 1964.
[13] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma. Enhancement of the neutrality in recommendation. In Decisions@RecSys, pages 8-14, 2012.
[14] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma. Efficiency improvement of neutrality-enhanced recommendation. In Decisions@RecSys, pages 1-8, 2013.
[15] T. Kamishima, S. Akaho, H. Asoh, and J.
Sakuma. Correcting popularity bias by enhancing recommendation neutrality. In RecSys Posters, 2014.
[16] T. Kamishima, S. Akaho, H. Asoh, and I. Sato. Model-based approaches for independence-enhanced recommendation. In Data Mining Workshops (ICDMW), 2016 IEEE 16th International Conference on, pages 860-867. IEEE, 2016.
[17] T. Kamishima, S. Akaho, and J. Sakuma. Fairness-aware learning through regularization approach. In 11th International Conference on Data Mining Workshops (ICDMW), pages 643-650. IEEE, 2011.
[18] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[19] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8), 2009.
[20] K. Lum and J. Johndrow. A statistical framework for fair predictive algorithms. arXiv preprint arXiv:1610.08077, 2016.
[21] B. Marlin, R. S. Zemel, S. Roweis, and M. Slaney. Collaborative filtering and the missing at random assumption. arXiv preprint arXiv:1206.5267, 2012.
[22] B. M. Marlin and R. S. Zemel. Collaborative prediction and ranking with non-random missing data. In Proceedings of the Third ACM Conference on Recommender Systems, pages 5-12. ACM, 2009.
[23] D. Pedreshi, S. Ruggieri, and F. Turini. Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 560-568. ACM, 2008.
[24] C. V. Sacin, J. B. Agapito, L. Shafti, and A. Ortigosa. Recommendation in higher education using data mining techniques. In Educational Data Mining, 2009.
[25] S. Sahebi and P. Brusilovsky. It takes two to tango: An exploration of domain pairs for cross-domain collaborative filtering. In Proceedings of the 9th ACM Conference on Recommender Systems, pages 131-138. ACM, 2015.
[26] E. Smith. Women into science and engineering? Gendered participation in higher education STEM subjects. British Educational Research Journal, 37(6):993-1014, 2011.
[27] N. Thai-Nghe, L. Drumond, A. Krohn-Grimberghe, and L. Schmidt-Thieme. Recommender system for predicting student performance. Procedia Computer Science, 1(2):2811-2819, 2010.
[28] M. B. Zafar, I. Valera, M. Gomez Rodriguez, and K. P. Gummadi. Fairness constraints: Mechanisms for fair classification. arXiv preprint arXiv:1507.05259, 2017.
[29] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, pages 325-333, 2013.
A PAC-Bayesian Analysis of Randomized Learning with Application to Stochastic Gradient Descent

Ben London, [email protected], Amazon

Abstract

We study the generalization error of randomized learning algorithms, focusing on stochastic gradient descent (SGD), using a novel combination of PAC-Bayes and algorithmic stability. Importantly, our generalization bounds hold for all posterior distributions on an algorithm's random hyperparameters, including distributions that depend on the training data. This inspires an adaptive sampling algorithm for SGD that optimizes the posterior at runtime. We analyze this algorithm in the context of our generalization bounds and evaluate it on a benchmark dataset. Our experiments demonstrate that adaptive sampling can reduce empirical risk faster than uniform sampling while also improving out-of-sample accuracy.

1 Introduction

Randomized algorithms are the workhorses of modern machine learning. One such algorithm is stochastic gradient descent (SGD), a first-order optimization method that approximates the gradient of the learning objective by a random point estimate, thereby making it efficient for large datasets. Recent interest in studying the generalization properties of SGD has led to several breakthroughs. Notably, Hardt et al. [10] showed that SGD is stable with respect to small perturbations of the training data, which let them bound the risk of a learned model. Related studies followed thereafter [13, 16]. Simultaneously, Lin and Rosasco [15] derived risk bounds which show that early stopping acts as a regularizer in multi-pass SGD (similar to studies of incremental gradient descent [19]).

In this paper, we take an alternative approach to existing work. Using a novel analysis that combines PAC-Bayes with algorithmic stability (reminiscent of [17]), we prove new generalization bounds (hence, risk bounds) for randomized learning algorithms, which apply to SGD under various assumptions on the loss function and optimization objective. Our bounds improve on related studies in two important ways. While some previous bounds for SGD [1, 10, 13, 16] hold in expectation over draws of the training data, our bounds hold with high probability. Further, existing high-probability bounds for randomized learning [6, 7] only apply to algorithms with fixed distributions (such as SGD with uniform sampling [15]); thanks to our PAC-Bayesian treatment, our bounds hold for all posterior distributions, meaning they support data-dependent randomization. The penalty for overfitting the posterior to the data is captured by the posterior's divergence from a fixed prior.

Our generalization bounds suggest a sampling strategy for SGD that adapts to the training data and model, focusing on useful examples while staying close to a uniform prior. We therefore propose an adaptive sampling algorithm that dynamically updates its distribution using multiplicative weight updates (similar to boosting [8, 21], focused online learning [22] and exponentiated gradient dual coordinate ascent [4]). The algorithm requires minimal tuning and works with any stochastic gradient update rule. We analyze the divergence of the adaptive posterior and conduct experiments on a benchmark dataset, using several combinations of update rule and sampling utility function. Our experiments demonstrate that adaptive sampling can reduce empirical risk faster than uniform sampling while also improving out-of-sample accuracy.
2 Preliminaries

Let $\mathcal{X}$ denote a compact input space; let $\mathcal{Y}$ denote a set of labels; and let $\mathcal{Z} \triangleq \mathcal{X} \times \mathcal{Y}$ denote their Cartesian product. We assume there exists an unknown, fixed distribution, $\mathcal{D}$, supported on $\mathcal{Z}$. Given a dataset of examples, $S \triangleq (z_1, \ldots, z_n) = ((x_1, y_1), \ldots, (x_n, y_n))$, drawn independently and identically from $\mathcal{D}$, we wish to learn a predictor, $\mathcal{X} \mapsto \mathcal{Y}$, from a hypothesis class, $\mathcal{H} \subseteq \{\mathcal{X} \mapsto \mathcal{Y}\}$. (We assume that $\mathcal{H}$ is parameterized by a subset of Euclidean space, and will thus sometimes treat hypotheses as vectors.) We have access to a deterministic learning algorithm, $A : \mathcal{Z}^n \times \Theta \to \mathcal{H}$, which, given $S$ and some hyperparameters, $\theta \in \Theta$, produces a hypothesis, $h \in \mathcal{H}$. We measure the quality of a hypothesis using a loss function, $L : \mathcal{H} \times \mathcal{Z} \to [0, M]$, which we assume is $M$-bounded¹ and $\lambda$-Lipschitz (see Appendix A for the definition). Let $L(A(S, \theta), z)$ denote the loss of a predictor that was output by $A(S, \theta)$ when applied to example $z$. Ultimately, we want the learning algorithm to have low expected loss on a random example; i.e., low risk, denoted $R(S, \theta) \triangleq \mathrm{E}_{z \sim \mathcal{D}}[L(A(S, \theta), z)]$. (The learning algorithm should always be clear from context.) Since this expectation cannot be computed, we approximate it by the average loss on the training data; i.e., the empirical risk, $\hat{R}(S, \theta) \triangleq \frac{1}{n} \sum_{i=1}^{n} L(A(S, \theta), z_i)$, which is what most learning algorithms attempt to minimize. By bounding the difference of the two, $G(S, \theta) \triangleq R(S, \theta) - \hat{R}(S, \theta)$, which we refer to as the generalization error, we obtain an upper bound on $R(S, \theta)$.

Throughout this document, we will view a randomized learning algorithm as a deterministic learning algorithm whose hyperparameters are randomized. For instance, stochastic gradient descent (SGD) performs a sequence of model updates, for $t = 1, \ldots, T$, of the form
$$h_t \leftarrow U_t(h_{t-1}, z_{i_t}) \triangleq h_{t-1} - \eta_t \nabla F(h_{t-1}, z_{i_t}),$$
using a sequence of random example indices, $\theta = (i_1, \ldots, i_T)$, sampled according to a distribution, $\mathbb{P}$, on $\Theta = \{1, \ldots, n\}^T$. The objective function, $F : \mathcal{H} \times \mathcal{Z} \to \mathbb{R}_+$, may be different from $L$; it is usually chosen as an optimizable upper bound on $L$, and need not be bounded. The parameter $\eta_t$ is a step size for the update at iteration $t$. SGD can be viewed as taking a dataset, $S$, drawing $\theta \sim \mathbb{P}$, then running a deterministic algorithm, $A(S, \theta)$, which executes the sequence of model updates. Since learning is randomized, we will deal with the expected loss over draws of random hyperparameters. We therefore overload the above notation for a distribution, $\mathbb{P}$, on the hyperparameter space, $\Theta$; let $R(S, \mathbb{P}) \triangleq \mathrm{E}_{\theta \sim \mathbb{P}}[R(S, \theta)]$, $\hat{R}(S, \mathbb{P}) \triangleq \mathrm{E}_{\theta \sim \mathbb{P}}[\hat{R}(S, \theta)]$, and $G(S, \mathbb{P}) \triangleq R(S, \mathbb{P}) - \hat{R}(S, \mathbb{P})$.

2.1 Relationship to PAC-Bayes

Conditioned on the training data, a posterior distribution, $\mathbb{Q}$, on the hyperparameter space, $\Theta$, induces a distribution on the hypothesis space, $\mathcal{H}$. If we ignore the learning algorithm altogether and think of $\mathbb{Q}$ as a distribution on $\mathcal{H}$ directly, then $\mathrm{E}_{h \sim \mathbb{Q}}[L(h, z)]$ is the Gibbs loss; that is, the expected loss of a random hypothesis. The Gibbs loss has been studied extensively using PAC-Bayesian analysis (also known simply as PAC-Bayes) [3, 9, 14, 18, 20]. In the PAC-Bayesian learning framework, we fix a prior distribution, $\mathbb{P}$, then receive some training data, $S \sim \mathcal{D}^n$, and learn a posterior distribution, $\mathbb{Q}$. PAC-Bayesian bounds frame the generalization error, $G(S, \mathbb{Q})$, as a function of the posterior's divergence from the prior, which penalizes overfitting the posterior to the training data.
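To make the view of SGD as a deterministic algorithm with a random hyperparameter (the index sequence theta) concrete, here is a minimal sketch of ours; the callables grad_F and step are assumptions standing in for the objective gradient and the step-size schedule:

    import numpy as np

    def sgd(S, theta, grad_F, h0, step):
        """Run A(S, theta): theta = (i_1, ..., i_T) is the random index sequence,
        step(t) returns eta_t, and grad_F(h, z) is the per-example gradient."""
        h = h0.copy()
        for t, i in enumerate(theta, start=1):
            h = h - step(t) * grad_F(h, S[i])  # h_t = h_{t-1} - eta_t grad F(h_{t-1}, z_{i_t})
        return h

    # A uniform prior P on {1,...,n}^T corresponds to sampling indices uniformly:
    # theta = np.random.default_rng(0).integers(0, n, size=T)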
In Section 4, we derive new upper bounds on $G(S, \mathbb{Q})$ using a novel PAC-Bayesian treatment. While traditional PAC-Bayes analyzes distributions directly on $\mathcal{H}$, we instead analyze distributions on $\Theta$. Thus, instead of applying the loss directly to a random hypothesis, we apply it to the output of a learning algorithm, whose inputs are a dataset and a random hyperparameter instantiation. This distinction is subtle, but important. In our framework, a random hypothesis is explicitly a function of the learning algorithm, whereas in traditional PAC-Bayes this dependence may only be implicit, for instance, if the posterior is given by random permutations of a learned hypothesis. The advantage of making the learning aspect explicit is that it isolates the source of randomness, which may help in analyzing the distribution of learned hypotheses. Indeed, it may be difficult to map the output of a randomized learning algorithm to a distribution on the hypothesis space. That said, the disadvantage of making learning explicit is that, due to the learning algorithm's dependence on the training data and hyperparameters, the generalization error could be sensitive to certain examples or hyperparameters. This condition is quantified with algorithmic stability, which we discuss next.

¹ Accommodating unbounded loss functions is possible [11], but requires additional assumptions.

3 Algorithmic Stability

Informally, algorithmic stability measures the change in loss when the inputs to a learning algorithm are perturbed; a learning algorithm is stable if small perturbations lead to proportional changes in the loss. In other words, a learning algorithm should not be overly sensitive to any single input. Stability is crucial for learnability [23], and has also been linked to differentially private learning [24]. In this section, we discuss several notions of stability tailored for randomized learning algorithms. From this point on, let $D_H(v, v') \triangleq \sum_{i=1}^{|v|} \mathbb{1}\{v_i \neq v'_i\}$ denote the Hamming distance.

3.1 Definitions of Stability

The literature traditionally measures stability with respect to perturbations of the training data. We refer to this general property as data stability. Data stability has been defined in many ways. The following definitions, originally proposed by Elisseeff et al. [6], are designed to accommodate randomized algorithms via an expectation over the hyperparameters, $\theta \sim \mathbb{P}$.

Definition 1 (Uniform Stability). A randomized learning algorithm, $A$, is $\beta_{\mathcal{Z}}$-uniformly stable with respect to a loss function, $L$, and a distribution, $\mathbb{P}$ on $\Theta$, if
$$\sup_{S, S' \in \mathcal{Z}^n : D_H(S, S') = 1} \; \sup_{z \in \mathcal{Z}} \; \mathrm{E}_{\theta \sim \mathbb{P}}\left[\, |L(A(S, \theta), z) - L(A(S', \theta), z)| \,\right] \le \beta_{\mathcal{Z}}.$$

Definition 2 (Pointwise Hypothesis Stability). For a given dataset, $S$, let $S^{i,z}$ denote the result of replacing the $i$th example with example $z$. A randomized learning algorithm, $A$, is $\beta_{\mathcal{Z}}$-pointwise hypothesis stable with respect to a loss function, $L$, and a distribution, $\mathbb{P}$ on $\Theta$, if
$$\sup_{i \in \{1, \ldots, n\}} \; \mathrm{E}_{S \sim \mathcal{D}^n} \, \mathrm{E}_{z \sim \mathcal{D}} \, \mathrm{E}_{\theta \sim \mathbb{P}}\left[\, |L(A(S, \theta), z_i) - L(A(S^{i,z}, \theta), z_i)| \,\right] \le \beta_{\mathcal{Z}}.$$

Uniform stability measures the maximum change in loss from replacing any single training example, whereas pointwise hypothesis stability measures the expected change in loss on a random example when said example is removed from the training data. It is easy to see that $\beta_{\mathcal{Z}}$-uniform stability implies $\beta_{\mathcal{Z}}$-pointwise hypothesis stability, but not vice versa. Thus, while uniform stability enables sharper bounds, pointwise hypothesis stability supports a wider range of learning algorithms.
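Although the definitions above involve suprema and exact expectations, the inner expectation over theta can be approximated empirically for a fixed pair of datasets differing in one example. The following is a speculative illustration of ours, not a procedure from the paper; run_sgd, loss, and all inputs are assumed callables:

    import numpy as np

    def estimate_data_stability(S, S_prime, z_test, run_sgd, loss,
                                n_draws=100, T=1000, seed=0):
        """Monte-Carlo estimate of E_theta |L(A(S,theta),z) - L(A(S',theta),z)|
        for one choice of S, S' (Hamming distance 1) and test point z (cf. Def. 1).
        The true coefficient is the supremum over all such choices."""
        rng = np.random.default_rng(seed)
        diffs = []
        for _ in range(n_draws):
            theta = rng.integers(0, len(S), size=T)  # shared index sequence for both runs
            h, h_prime = run_sgd(S, theta), run_sgd(S_prime, theta)
            diffs.append(abs(loss(h, z_test) - loss(h_prime, z_test)))
        return float(np.mean(diffs))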
In addition to data stability, we might also require stability with respect to changes in the hyperparameters. From this point forward, we will assume that the hyperparameter space, $\Theta$, decomposes into the product of $T$ subspaces, $\Theta = \prod_{t=1}^{T} \Theta_t$. For example, $\Theta$ could be the set of all sequences of example indices, $\{1, \ldots, n\}^T$, such as one would sample from in SGD.

Definition 3 (Hyperparameter Stability). A randomized learning algorithm, $A$, is $\beta_{\Theta}$-uniformly stable with respect to a loss function, $L$, if
$$\sup_{S \in \mathcal{Z}^n} \; \sup_{\theta, \theta' \in \Theta : D_H(\theta, \theta') = 1} \; \sup_{z \in \mathcal{Z}} \; |L(A(S, \theta), z) - L(A(S, \theta'), z)| \le \beta_{\Theta}.$$
When $A$ is both $\beta_{\mathcal{Z}}$-uniformly and $\beta_{\Theta}$-uniformly stable, we say that $A$ is $(\beta_{\mathcal{Z}}, \beta_{\Theta})$-uniformly stable.

Remark 1. For SGD, Definition 3 can be mapped to Bousquet and Elisseeff's [2] original definition of uniform stability using the resampled example sequence. Yet their generalization bounds would still not apply, because the resampled data is not i.i.d. and SGD is not a symmetric learning algorithm.

3.2 Stability of Stochastic Gradient Descent

For non-vacuous generalization bounds, we will need the data stability coefficient, $\beta_{\mathcal{Z}}$, to be of order $\tilde{O}(n^{-1})$. Additionally, certain results will require the hyperparameter stability coefficient, $\beta_{\Theta}$, to be of order $\tilde{O}(1/\sqrt{nT})$. (If $T = \Omega(n)$, as it often is, then $\beta_{\Theta} = \tilde{O}(T^{-1})$ suffices.) In this section, we review some conditions under which these requirements are satisfied by SGD. We rely on standard characterizations of the objective function, namely, convexity, Lipschitzness and smoothness, the definitions of which are deferred to Appendix A, along with all proofs from this section.

A recent study by Hardt et al. [10] proved that some special cases of SGD, when examples are sampled uniformly, with replacement, satisfy $\beta_{\mathcal{Z}}$-uniform stability (Definition 1) with $\beta_{\mathcal{Z}} = O(n^{-1})$. We extend their work (specifically, [10, Theorem 3.7]) in the following result for SGD with a convex objective function, when the step size is at most inversely proportional to the current iteration.

Proposition 1. Assume that the loss function, $L$, is $\lambda$-Lipschitz, and that the objective function, $F$, is convex, $\lambda$-Lipschitz and $\sigma$-smooth. Suppose SGD is run for $T$ iterations with a uniform sampling distribution, $\mathbb{P}$, and step sizes $\eta_t \in [0, \eta/t]$, for $\eta \in [0, 2/\sigma]$. Then, SGD is $\beta_{\mathcal{Z}}$-uniformly stable with respect to $L$ and $\mathbb{P}$, with
$$\beta_{\mathcal{Z}} \le \frac{2\lambda^2 \eta \,(\ln T + 1)}{n}. \quad (1)$$
When $T = \Omega(n)$, Equation 1 is $\tilde{O}(n^{-1})$, which is acceptable for proving generalization.

If we do not assume that the objective function is convex, we can borrow a result (with small modification²) from Hardt et al. [10, Theorem 3.8].

Proposition 2. Assume that the loss function, $L$, is $\lambda$-Lipschitz, and that the objective function, $F$, is $\lambda$-Lipschitz and $\sigma$-smooth. Suppose SGD is run for $T$ iterations with a uniform sampling distribution, $\mathbb{P}$, and step sizes $\eta_t \in [0, \eta/t]$, for $\eta \ge 0$. Then, SGD is $\beta_{\mathcal{Z}}$-uniformly stable with respect to $L$ and $\mathbb{P}$, with
$$\beta_{\mathcal{Z}} \le \frac{1 + (\sigma\eta)^{-1}}{n - 1} \left(2\lambda^2 \eta\right)^{\frac{1}{\sigma\eta + 1}} T^{\frac{\sigma\eta}{\sigma\eta + 1}}. \quad (2)$$
Assuming $T = \Omega(n)$, and ignoring constants that depend on $\lambda$, $\sigma$ and $\eta$, Equation 2 reduces to $O(n^{-\frac{1}{\sigma\eta + 1}})$. As $\sigma\eta$ approaches 1, the rate becomes $O(n^{-1/2})$, which, as will become evident in Section 4, yields generalization bounds that are suboptimal, or even vacuous. However, if $\sigma\eta$ is small, say, $\eta = (10\sigma)^{-1}$, then we get $O(n^{-\frac{10}{11}}) \approx O(n^{-1})$, which suffices for generalization. Since $\beta_{\mathcal{Z}}$-uniform stability implies $\beta_{\mathcal{Z}}$-pointwise hypothesis stability (Definition 2), the above bounds also hold for pointwise hypothesis stability.
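The contrast between the convex rate of Eq. (1) and the non-convex rate of Eq. (2) is easy to see numerically. A hedged sketch of ours, directly transcribing the two bounds:

    import numpy as np

    def prop1_stability(n, T, lam, eta):
        # Eq. 1: convex objective, eta_t <= eta/t
        return 2 * lam**2 * eta * (np.log(T) + 1) / n

    def prop2_stability(n, T, lam, sigma, eta):
        # Eq. 2: non-convex objective; rate degrades as sigma*eta grows
        q = sigma * eta
        return (1 + 1 / q) / (n - 1) * (2 * lam**2 * eta) ** (1 / (q + 1)) * T ** (q / (q + 1))

    n = T = 10_000
    print(prop1_stability(n, T, lam=1.0, eta=1.0))          # O(log(T)/n)
    print(prop2_stability(n, T, lam=1.0, sigma=1.0, eta=0.1))  # O(n^(-10/11)) regime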
Nonetheless, we can obtain even tighter bounds by adopting a data-dependent view. The following result for SGD with a convex objective function is adapted from a result by Kuzborskij and Lampert [13, Theorem 3].

Proposition 3. Assume that the loss function, $L$, is $\lambda$-Lipschitz, and that the objective function, $F$, is convex, $\lambda$-Lipschitz and $\sigma$-smooth. Suppose SGD starts from an initial model, $h_0$, and is run for $T$ iterations with a uniform sampling distribution, $\mathbb{P}$, and step sizes $\eta_t \in [0, \eta/t]$, for $\eta \in [0, 2/\sigma]$. Then, SGD is $\beta_{\mathcal{Z}}$-pointwise hypothesis stable with respect to $L$ and $\mathbb{P}$, with
$$\beta_{\mathcal{Z}} \le \frac{2\lambda\eta \,(\ln T + 1)\, \sqrt{2\sigma \, \mathrm{E}_{z \sim \mathcal{D}}[L(h_0, z)]}}{n}. \quad (3)$$
Importantly, Equation 3 depends on the risk of the initial model, $h_0$. If $h_0$ happens to be close to a global optimum, that is, a good first guess, then Equation 3 could be tighter than Equation 1. Kuzborskij and Lampert also proved a data-dependent bound for non-convex objective functions [13, Theorem 5], which, under certain conditions, might be tighter than Hardt et al.'s uniform stability bound (Proposition 2). Though not presented here, Kuzborskij and Lampert's bound is worth noting.

As we will later show, we can obtain stronger generalization guarantees by combining $\beta_{\mathcal{Z}}$-uniform stability with $\beta_{\Theta}$-uniform stability (Definition 3), provided $\beta_{\Theta} = \tilde{O}(1/\sqrt{nT})$. Prior stability analyses of SGD [10, 13] have not addressed this form of stability. Elisseeff et al. [6] proved $(\beta_{\mathcal{Z}}, \beta_{\Theta})$-uniform stability for certain bagging algorithms, but did not consider SGD. In light of Remark 1, it is tempting to map $\beta_{\Theta}$-uniform stability to Bousquet and Elisseeff's [2] uniform stability and thereby leverage their study of various regularized objective functions. However, their analysis crucially relies on exact minimization of the objective function, whereas SGD with a finite number of steps only finds an approximate minimizer. Thus, to our knowledge, no prior work applies to this problem. As a first step, we prove uniform stability, with respect to both data and hyperparameters, for SGD with a strongly convex objective function and decaying step sizes.

Proposition 4. Assume that the loss function, $L$, is $\lambda$-Lipschitz, and that the objective function, $F$, is $\gamma$-strongly convex, $\lambda$-Lipschitz and $\sigma$-smooth. Suppose SGD is run for $T$ iterations with a uniform sampling distribution, $\mathbb{P}$, and step sizes $\eta_t \triangleq (\gamma t + \sigma)^{-1}$. Then, SGD is $(\beta_{\mathcal{Z}}, \beta_{\Theta})$-uniformly stable with respect to $L$ and $\mathbb{P}$, with
$$\beta_{\mathcal{Z}} \le \frac{2\lambda^2}{\gamma n} \quad \text{and} \quad \beta_{\Theta} \le \frac{2\lambda^2}{\gamma T}. \quad (4)$$
When $T = \Omega(n)$, the $\beta_{\Theta}$ bound in Equation 4 is $O(1/\sqrt{nT})$, which supports good generalization.

² Hardt et al.'s definition of stability and theorem statement differ slightly from ours. See Appendix A.1.

4 Generalization Bounds

In this section, we present new generalization bounds for randomized learning algorithms. While prior work [6, 7] has addressed this topic, ours is the first PAC-Bayesian treatment (the benefits of which will be discussed momentarily). Recall that in the PAC-Bayesian framework, we fix a prior distribution, $\mathbb{P}$, on the hypothesis space, $\mathcal{H}$; then, given a sample of training data, $S \sim \mathcal{D}^n$, we learn a posterior distribution, $\mathbb{Q}$, also on $\mathcal{H}$. In our extension for randomized learning algorithms, $\mathbb{P}$ and $\mathbb{Q}$ are instead supported on the hyperparameter space, $\Theta$. Moreover, while traditional PAC-Bayes studies $\mathrm{E}_{h \sim \mathbb{Q}}[L(h, z)]$, we study the expected loss over draws of hyperparameters, $\mathrm{E}_{\theta \sim \mathbb{Q}}[L(A(S, \theta), z)]$. Our goal will be to upper-bound the generalization error of the posterior, $G(S, \mathbb{Q})$, which thereby
upper-bounds the risk, $R(S, \mathbb{Q})$, by a function of the empirical risk, $\hat{R}(S, \mathbb{Q})$. Importantly, our bounds are polynomial in $\delta^{-1}$, for a free parameter $\delta \in (0, 1)$, and hold with probability at least $1 - \delta$ over draws of a finite training dataset. This stands in contrast to related bounds [1, 10, 13, 16] that hold in expectation. While expectation bounds are useful for gaining insight into generalization behavior, high-probability bounds are sometimes preferred. Provided the loss is $M$-bounded, it is always possible to convert a high-probability bound of the form $\Pr_{S \sim \mathcal{D}^n}\{G(S, \mathbb{Q}) \le B(\delta)\} \ge 1 - \delta$ to an expectation bound of the form $\mathrm{E}_{S \sim \mathcal{D}^n}[G(S, \mathbb{Q})] \le B(\delta) + \delta M$.

Another useful property of PAC-Bayesian bounds is that they hold simultaneously for all posteriors, including those that depend on the training data. In Section 3, we assumed that hyperparameters were sampled according to a fixed distribution; for instance, sampling training example indices for SGD uniformly at random. However, in certain situations, it may be advantageous to sample according to a data-dependent distribution. Following the SGD example, suppose most training examples are easy to classify (e.g., far from the decision boundary), but some are difficult (e.g., near the decision boundary, or noisy). If we sample points uniformly at random, we might encounter mostly easy examples, which could slow progress on difficult examples. If we instead focus training on the difficult set, we might converge more quickly to an optimal hypothesis. Since our PAC-Bayesian bounds hold for all hyperparameter posteriors, we can characterize the generalization error of algorithms that optimize the posterior using the training data. Existing generalization bounds for randomized learning [6, 7], or SGD in particular [1, 10, 13, 15, 16], cannot address such algorithms. Of course, there is a penalty for overfitting the posterior to the data, which is captured by the posterior's divergence from the prior.

Our first PAC-Bayesian theorem requires the weakest stability condition, $\beta_{\mathcal{Z}}$-pointwise hypothesis stability, but the bound is sublinear in $\delta^{-1}$. Our second bound is polylogarithmic in $\delta^{-1}$, but requires the stronger stability conditions, $(\beta_{\mathcal{Z}}, \beta_{\Theta})$-uniform stability. All proofs are deferred to Appendix B.

Theorem 1. Suppose a randomized learning algorithm, $A$, is $\beta_{\mathcal{Z}}$-pointwise hypothesis stable with respect to an $M$-bounded loss function, $L$, and a fixed prior, $\mathbb{P}$ on $\Theta$. Then, for any $n \ge 1$ and $\delta \in (0, 1)$, with probability at least $1 - \delta$ over draws of a dataset, $S \sim \mathcal{D}^n$, every posterior, $\mathbb{Q}$ on $\Theta$, satisfies
$$G(S, \mathbb{Q}) \le \sqrt{\frac{\chi^2(\mathbb{Q} \,\|\, \mathbb{P}) + 1}{\delta} \left( \frac{2M^2}{n} + 12 M \beta_{\mathcal{Z}} \right)}, \quad (5)$$
where $\chi^2(\mathbb{Q} \,\|\, \mathbb{P}) \triangleq \mathrm{E}_{\theta \sim \mathbb{P}}\left[\left(\frac{\mathbb{Q}(\theta)}{\mathbb{P}(\theta)}\right)^2\right] - 1$ is the $\chi^2$ divergence from $\mathbb{P}$ to $\mathbb{Q}$.

Theorem 2. Suppose a randomized learning algorithm, $A$, is $(\beta_{\mathcal{Z}}, \beta_{\Theta})$-uniformly stable with respect to an $M$-bounded loss function, $L$, and a fixed product measure, $\mathbb{P}$ on $\Theta = \prod_{t=1}^{T} \Theta_t$. Then, for any $n \ge 1$, $T \ge 1$ and $\delta \in (0, 1)$, with probability at least $1 - \delta$ over draws of a dataset, $S \sim \mathcal{D}^n$, every posterior, $\mathbb{Q}$ on $\Theta$, satisfies
$$G(S, \mathbb{Q}) \le \beta_{\mathcal{Z}} + \sqrt{2 \left( D_{\mathrm{KL}}(\mathbb{Q} \,\|\, \mathbb{P}) + \ln \frac{2}{\delta} \right) \left( \frac{(M + 2n\beta_{\mathcal{Z}})^2}{n} + 4 T \beta_{\Theta}^2 \right)}, \quad (6)$$
where $D_{\mathrm{KL}}(\mathbb{Q} \,\|\, \mathbb{P}) \triangleq \mathrm{E}_{\theta \sim \mathbb{Q}}\left[\ln \frac{\mathbb{Q}(\theta)}{\mathbb{P}(\theta)}\right]$ is the KL divergence from $\mathbb{P}$ to $\mathbb{Q}$.

Since Theorems 1 and 2 hold simultaneously for all hyperparameter posteriors, they provide generalization guarantees for SGD with any sampling distribution. Note that the stability requirements only need to be satisfied by a fixed product measure, such as a uniform distribution. This simple sampling distribution can have $(O(n^{-1}), O(T^{-1}))$-uniform stability under certain conditions, as demonstrated in Section 3.2.
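Evaluating Theorem 2 numerically for given constants is a direct transcription of Eq. (6); a minimal sketch of ours (not the paper's code), with kl standing for the divergence term:

    import numpy as np

    def theorem2_bound(n, T, M, beta_Z, beta_Theta, kl, delta=0.05):
        """Right-hand side of Eq. (6); kl = D_KL(Q || P), zero for uniform sampling."""
        inner = (M + 2 * n * beta_Z) ** 2 / n + 4 * T * beta_Theta ** 2
        return beta_Z + np.sqrt(2 * (kl + np.log(2 / delta)) * inner)

    # Plugging in Proposition 4's coefficients recovers Corollary 1 below, e.g.,
    # with lam = gamma = M = 1 and T = n = 10^4:
    n = T = 10_000
    print(theorem2_bound(n, T, M=1.0, beta_Z=2/n, beta_Theta=2/T, kl=0.0))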
This simple sampling distribution can have (O(n^{-1}), O(T^{-1}))-uniform stability under certain conditions, as demonstrated in Section 3.2. In the following, we apply Theorem 2 to SGD with a strongly convex objective function, leveraging Proposition 4 to upper-bound the stability coefficients.

Corollary 1. Assume that the loss function, L, is M-bounded and λ-Lipschitz, and that the objective function, F, is γ-strongly convex, λ-Lipschitz and σ-smooth. Let P denote a uniform prior on {1, . . . , n}^T. Then, for any n ≥ 1, T ≥ 1 and δ ∈ (0, 1), with probability at least 1 − δ over draws of a dataset, S ∼ D^n, SGD with step sizes η_t ≜ (γt + σ)^{-1} and any posterior sampling distribution, Q on {1, . . . , n}^T, satisfies

    G(S, Q) \le \frac{2\lambda^2}{\gamma n} + \sqrt{2\left(D_{\mathrm{KL}}(Q \| P) + \ln\frac{2}{\delta}\right)\left(\frac{(M + 4\lambda^2/\gamma)^2}{n} + \frac{16\lambda^4}{\gamma^2 T}\right)}.

When the divergence is polylogarithmic in n, and T = Ω(n), the generalization bound is Õ(n^{-1/2}). In the special case of uniform sampling, the KL divergence is zero, yielding a O(n^{-1/2}) bound.

Importantly, Theorem 1 does not require hyperparameter stability, and is therefore of interest for analyzing non-convex objective functions, since it is not known whether uniform hyperparameter stability can be satisfied without (strong) convexity. One can use Equation 2 (or [13, Theorem 5]) to upper-bound β_Z in Equation 5 and thereby obtain a generalization bound for SGD with a non-convex objective function, such as neural network training. We leave this substitution to the reader.

Equation 6 holds with high probability over draws of a dataset, but the generalization error is an expected value over draws of hyperparameters. To obtain a bound that holds with high probability over draws of both data and hyperparameters, we consider posteriors that are product measures.

Theorem 3. Suppose a randomized learning algorithm, A, is (β_Z, β_Θ)-uniformly stable with respect to an M-bounded loss function, L, and a fixed product measure, P on Θ = ∏_{t=1}^T Θ_t. Then, for any n ≥ 1, T ≥ 1 and δ ∈ (0, 1), with probability at least 1 − δ over draws of a dataset, S ∼ D^n, and hyperparameters, θ ∼ Q, drawn from any posterior product measure, Q on Θ,

    G(S, \theta) \le \beta_Z + \beta_\Theta\sqrt{2T\ln\frac{2}{\delta}} + \sqrt{2\left(D_{\mathrm{KL}}(Q \| P) + \ln\frac{4}{\delta}\right)\left(\frac{(M + 2n\beta_Z)^2}{n} + 4T\beta_\Theta^2\right)}.    (7)

If β_Θ = O(1/√(nT)), then β_Θ √(2T ln(2/δ)) vanishes at a rate of O(n^{-1/2}). We can apply Theorem 3 to SGD in the same way we applied Theorem 2 in Corollary 1. Further, note that a uniform distribution is a product distribution. Thus, if we eschew optimizing the posterior, then the KL divergence disappears, leaving a O(n^{-1/2}) derandomized generalization bound for SGD with uniform sampling.³

³ We can achieve the same result by pairing Proposition 4 with Elisseeff et al.'s generalization bound for algorithms with (β_Z, β_Θ)-uniform stability [6, Theorem 15]. However, Elisseeff et al.'s bound only applies to fixed product measures on Θ, whereas Theorem 3 applies more generally to any posterior product measure, and when P = Q, Equation 7 is within a constant factor of Elisseeff et al.'s bound.
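For readers who want the algebra, Corollary 1 follows from Theorem 2 by substituting the Proposition 4 stability coefficients; this is a direct plug-in, spelled out here for convenience:

\[
\beta_Z \le \frac{2\lambda^2}{\gamma n}
\;\Longrightarrow\;
M + 2n\beta_Z \le M + \frac{4\lambda^2}{\gamma},
\qquad
\beta_\Theta \le \frac{2\lambda^2}{\gamma T}
\;\Longrightarrow\;
4T\beta_\Theta^2 \le \frac{16\lambda^4}{\gamma^2 T}.
\]

Inserting these two quantities into Equation 6 yields the displayed inequality.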
5 Adaptive Sampling for Stochastic Gradient Descent

The PAC-Bayesian theorems in Section 4 motivate data-dependent posterior distributions on the hyperparameter space. Intuitively, certain posteriors may improve, or speed up, learning from a given dataset. For instance, suppose certain training examples are considered valuable for reducing empirical risk; then, a sampling posterior for SGD should weight those examples more heavily than others, so that the learning algorithm can, probabilistically, focus its attention on the valuable examples. However, a posterior should also try to stay close to the prior, to control the divergence penalty in the generalization bounds. Based on this idea, we propose a sampling procedure for SGD (or any variant thereof) that constructs a posterior based on the training data, balancing the utility of the sampling distribution with its divergence from a uniform prior.

The algorithm operates alongside the learning algorithm, iteratively generating the posterior as a sequence of conditional distributions on the training data. Each iteration of training generates a new distribution conditioned on the previous iterations, so the posterior dynamically adapts to training. We therefore call our algorithm adaptive sampling SGD.

Algorithm 1 Adaptive Sampling SGD
Require: Examples, (z_1, . . . , z_n) ∈ Z^n; initial model, h_0 ∈ H; update rule, U_t : H × Z → H; utility function, f : Z × H → R; amplitude, α ≥ 0; decay, τ ∈ (0, 1).
1: (q_1, . . . , q_n) ← 1 ▷ Initialize sampling weights uniformly
2: for t = 1, . . . , T do
3:   i_t ∼ Q_t ∝ (q_1, . . . , q_n) ▷ Draw index i_t proportional to sampling weights
4:   h_t ← U_t(h_{t−1}, z_{i_t}) ▷ Update model
5:   q_{i_t} ← q_{i_t}^τ exp(α f(z_{i_t}, h_t)) ▷ Update sampling weight for i_t
6: return h_T

Algorithm 1 maintains a set of nonnegative sampling weights, (q_1, . . . , q_n), which define a distribution on the dataset. The posterior probability of the ith example in the tth iteration, given the previous iterations, is proportional to the ith weight: Q_t(i) ≜ Q(i_t = i | i_1, . . . , i_{t−1}) ∝ q_i. The sampling weights are initialized to 1, thereby inducing a uniform distribution. At each iteration, we draw an index, i_t ∼ Q_t, and use example z_{i_t} to update the model. We then update the weight for i_t multiplicatively as q_{i_t} ← q_{i_t}^τ exp(α f(z_{i_t}, h_t)), where: f(z_{i_t}, h_t) is a utility function of the chosen example and current model; α ≥ 0 is an amplitude parameter, which controls the aggressiveness of the update; and τ ∈ (0, 1) is a decay parameter, which lets the weight gradually forget past updates.

The multiplicative weight update (line 5) can be derived by choosing a sampling distribution for the next iteration, t + 1, that maximizes the expected utility while staying close to a reference distribution. Consider the following constrained optimization problem:

    \max_{Q_{t+1} \in \Delta^n} \; \sum_{i=1}^n Q_{t+1}(i) f(z_i, h_t) - \frac{1}{\alpha} D_{\mathrm{KL}}(Q_{t+1} \| Q_t^\tau).    (8)

The term Σ_{i=1}^n Q_{t+1}(i) f(z_i, h_t) is the expected utility under the new distribution, Q_{t+1}. This is offset by the KL divergence, which acts as a regularizer, penalizing Q_{t+1} for diverging from a reference distribution, Q_t^τ, where Q_t^τ(i) ∝ q_i^τ. The decay parameter, τ, controls the temperature of the reference distribution, allowing it to interpolate between the current distribution (τ = 1) and a uniform distribution (τ = 0). The amplitude parameter, α, scales the influence of the regularizer relative to the expected utility. We can solve Equation 8 analytically using the method of Lagrange multipliers, which yields

    Q^*_{t+1}(i) \propto Q_t^\tau(i) \exp(\alpha f(z_i, h_t) - 1) \propto q_i^\tau \exp(\alpha f(z_i, h_t)).

Updating q_i for all i = 1, . . . , n is impractical for large n, so we approximate the above solution by only updating the weight for the last sampled index, i_t, effectively performing coordinate ascent.
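For concreteness, here is a minimal Python sketch of Algorithm 1. The `update` and `utility` callables, and all names, are illustrative assumptions supplied by the caller; the naive per-iteration normalization is O(n), which the data structure discussed in the next section avoids.

```python
import numpy as np

def adaptive_sampling_sgd(examples, h0, update, utility, alpha, tau, T, rng=None):
    """Minimal sketch of Algorithm 1 (naive O(n) sampling per iteration).

    update(h, z)  -> new model, e.g. one SGD step on example z.
    utility(z, h) -> scalar utility of example z under model h.
    """
    rng = rng or np.random.default_rng()
    n = len(examples)
    q = np.ones(n)                            # line 1: uniform sampling weights
    h = h0
    for _ in range(T):                        # line 2
        i = rng.choice(n, p=q / q.sum())      # line 3: i_t ~ Q_t, proportional to q
        h = update(h, examples[i])            # line 4: model update
        q[i] = q[i] ** tau * np.exp(alpha * utility(examples[i], h))  # line 5
    return h
```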
The idea of tuning the empirical data distribution through multiplicative weight updates is reminiscent of AdaBoost [8] and focused online learning [22], but note that Algorithm 1 learns a single hypothesis, not an ensemble. In this respect, it is similar to SelfieBoost [21]. One could also draw parallels to exponentiated gradient dual coordinate ascent [4]. Finally, note that when the gradient estimate is unbiased (i.e., weighted by the inverse sampling probability), we obtain a variant of importance sampling SGD [25], though we do not necessarily need unbiased gradient estimates.

It is important to note that we do not actually need to compute the full posterior distribution (which would take O(n) time per iteration) in order to sample from it. Indeed, using an algorithm and data structure described in Appendix C, we can sample from and update the distribution in O(log n) time, using O(n) space. Thus, the additional iteration complexity of adaptive sampling is logarithmic in the size of the dataset, which is suitably efficient for learning from large datasets (a sketch of one such structure appears after the mini-batching remark below).

In practice, SGD is typically applied with mini-batching, whereby multiple examples are drawn at each iteration, instead of just one. Given the massive parallelism of today's computing hardware, mini-batching is simply a more efficient way to process a dataset, and can result in more accurate gradient estimates than single-example updates. Though Algorithm 1 is stated for single-example updates, it can be modified for mini-batching by replacing line 3 with multiple independent draws from Q_t, and line 5 with sampling weight updates for each unique⁴ example in the mini-batch.

⁴ If an example is drawn multiple times in a mini-batch, its sampling weight is only updated once.
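Appendix C is not reproduced in this excerpt; as a hedge, the following standard sum-tree sketch is one structure that achieves the stated O(log n) sample-and-update costs in O(n) space. The class name and layout are our own illustration, not the paper's implementation.

```python
import random

class SumTree:
    """Sum tree over n nonnegative weights: O(log n) update and sampling."""
    def __init__(self, n):
        self.n = n
        self.tree = [0.0] * (2 * n)          # leaves at positions n..2n-1

    def update(self, i, w):                  # set weight of example i to w
        j = i + self.n
        self.tree[j] = w
        j //= 2
        while j >= 1:                        # propagate partial sums to the root
            self.tree[j] = self.tree[2 * j] + self.tree[2 * j + 1]
            j //= 2

    def sample(self):                        # draw i with probability w_i / sum(w)
        u = random.random() * self.tree[1]
        j = 1
        while j < self.n:                    # descend toward the sampled leaf
            if u <= self.tree[2 * j]:
                j = 2 * j
            else:
                u -= self.tree[2 * j]
                j = 2 * j + 1
        return j - self.n

# In Algorithm 1: line 3 becomes i = tree.sample(), and line 5 becomes
# tree.update(i, q_i_new).
```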
5.1 Divergence Analysis

Recall that our generalization bounds use the posterior's divergence from a fixed prior to penalize the posterior for overfitting the training data. Thus, to connect Algorithm 1 to our bounds, we analyze the adaptive posterior's divergence from a uniform prior on {1, . . . , n}^T. This quantity reflects the potential cost, in generalization performance, of adaptive sampling. The goal of this section is to upper-bound the KL divergence resulting from Algorithm 1 in terms of interpretable, data-dependent quantities. All proofs are deferred to Appendix D.

Our analysis requires introducing some notation. Given a sequence of sampled indices, (i_1, . . . , i_t), let N_{i,t} ≜ |{t′ : t′ < t, i_{t′} = i}| denote the number of times that index i was chosen before iteration t. Let O_{i,j} denote the jth iteration in which i was chosen; e.g., if i was chosen at iterations 13 and 47, then O_{i,1} = 13 and O_{i,2} = 47. With these definitions, we can state the following bound, which exposes the influences of the utility function, amplitude and decay on the KL divergence.

Theorem 4. Fix a uniform prior, P, a utility function, f : Z × H → R, an amplitude, α ≥ 0, and a decay, τ ∈ (0, 1). If Algorithm 1 is run for T iterations, then its posterior, Q, satisfies

    D_{\mathrm{KL}}(Q \| P) \le \mathbb{E}_{(i_1, \ldots, i_T) \sim Q}\!\left[\frac{\alpha}{n}\sum_{t=2}^{T}\sum_{i=1}^{n}\left(\sum_{j=1}^{N_{i_t,t}} \tau^{N_{i_t,t}-j} f(z_{i_t}, h_{O_{i_t,j}}) - \sum_{k=1}^{N_{i,t}} \tau^{N_{i,t}-k} f(z_i, h_{O_{i,k}})\right)\right].    (9)

Equation 9 can be interpreted as measuring, on average, how the cumulative past utilities of each sampled index, i_t, differ from the cumulative utilities of any other index, i.⁵ When the posterior becomes too focused on certain examples, this difference is large. The accumulated utilities decay exponentially, with the rate of decay controlled by τ. The amplitude, α, scales the entire bound, which means that aggressive posterior updates may adversely affect generalization.

An interesting special case of Theorem 4 is when the utility function is nonnegative, which results in a simpler, more interpretable bound.

Theorem 5. Fix a uniform prior, P, a nonnegative utility function, f : Z × H → R₊, an amplitude, α ≥ 0, and a decay, τ ∈ (0, 1). If Algorithm 1 is run for T iterations, then its posterior, Q, satisfies

    D_{\mathrm{KL}}(Q \| P) \le \frac{\alpha}{1 - \tau}\sum_{t=1}^{T-1}\mathbb{E}_{(i_1, \ldots, i_t) \sim Q}\big[f(z_{i_t}, h_t)\big].    (10)

Equation 10 is simply the sum of expected utilities computed over T − 1 iterations of training, scaled by α/(1 − τ). The implications of this bound are interesting when the utility function is defined as the loss, f(z, h) ≜ L(h, z); then, if SGD quickly converges to a model with low maximal loss on the training data, it can reduce the generalization error.⁶ The caveat is that tuning the amplitude or decay to speed up convergence may actually counteract this effect. It is worth noting that similar guarantees hold for a mini-batch variant of Algorithm 1. The bounds are essentially unchanged, modulo notational intricacies.

6 Experiments

To demonstrate the effectiveness of Algorithm 1, we conducted several experiments with the CIFAR-10 dataset [12]. This benchmark dataset contains 60,000 (32 × 32)-pixel images from 10 object classes, with a standard, static partitioning into 50,000 training examples and 10,000 test examples. We specified the model class as the following convolutional neural network architecture: 32 (3 × 3) filters with rectified linear unit (ReLU) activations in the first and second layers, followed by (2 × 2) max-pooling and 0.25 dropout⁷; 64 (3 × 3) filters with ReLU activations in the third and fourth layers, again followed by (2 × 2) max-pooling and 0.25 dropout; finally, a fully-connected, 512-unit layer with ReLU activations and 0.5 dropout, followed by a fully-connected, 10-output softmax layer. We trained the network using the cross-entropy loss. We emphasize that our goal was not to achieve state-of-the-art results on the dataset; rather, to evaluate Algorithm 1 in a simple, yet realistic, application.

Following the intuition that sampling should focus on difficult examples, we experimented with two utility functions for Algorithm 1 based on common loss functions. For an example z = (x, y), with h(x, y) denoting the predicted probability of label y given input x, let f_0(z, h) ≜ 1{argmax_{y′∈Y} h(x, y′) ≠ y} and f_1(z, h) ≜ 1 − h(x, y). The first utility function, f_0, is the 0-1 loss; the second, f_1, is the L1 loss, which accounts for uncertainty in the most likely label. We combined these utility functions with two parameter update rules: standard SGD with decreasing step sizes, η_t ≜ η/(1 + εt) ≈ η/(εt), for η > 0 and ε > 0; and AdaGrad [5], a variant of SGD that automatically tunes a separate step size for each parameter. We used mini-batches of 100 examples per update. The combination of utility functions and update rules yields four adaptive sampling algorithms: AdaSamp-01-SGD, AdaSamp-01-AdaGrad, AdaSamp-L1-SGD and AdaSamp-L1-AdaGrad. We compared these to their uniform sampling counterparts, Unif-SGD and Unif-AdaGrad.

⁵ When N_{i,t} = 0 (i.e., i has not been sampled), a summation over j = 1, . . . , N_{i,t} evaluates to zero.
⁶ This interpretation concurs with ideas in [10, 22].
⁷ It can be shown that dropout improves data stability [10, Lemma 4.4].
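As a small illustration of the two utility functions, here is a sketch assuming the model exposes a vector of predicted class probabilities for an input (an interface assumption on our part; the paper's h(x, y) corresponds to probs[y]).

```python
import numpy as np

def f0(probs, y):
    """0-1 loss utility: 1 if the argmax prediction is wrong."""
    return float(np.argmax(probs) != y)

def f1(probs, y):
    """L1 loss utility: 1 - predicted probability of the true label."""
    return 1.0 - probs[y]
```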
We tuned all hyperparameters using random subsets of the training data for cross-validation. We then ran 10 trials of training and testing, using different seeds for the pseudorandom number generator at each trial to generate different random initializations⁸ and training sequences. Figures 1a and 1b plot learning curves of the average cross-entropy and accuracy, respectively, on the training data; Figure 1c plots the average accuracy on the test data. We found that all adaptive sampling variants reduced empirical risk (increased training accuracy) faster than their uniform sampling counterparts. Further, AdaGrad with adaptive sampling exhibited modest, yet consistent, improvements in test accuracy in early iterations of training. Figure 1d illustrates the effect of varying the amplitude parameter, α. Higher values of α led to faster empirical risk reduction, but lower test accuracy: a sign of overfitting the posterior to the data, which concurs with Theorems 4 and 5 regarding the influence of α on the KL divergence. Figure 1e plots the KL divergence from the conditional prior, P_t, to the conditional posterior, Q_t, given sampled indices (i_1, . . . , i_{t−1}); i.e., D_KL(Q_t ‖ P_t). The sampling distribution quickly diverged in early iterations, to focus on examples where the model erred, then gradually converged to a uniform distribution as the empirical risk converged.

Figure 1: Experimental results on CIFAR-10, averaged over 10 random initializations and training runs. (Best viewed in color.) Figure 1a plots learning curves of training cross-entropy (lower is better). Figures 1b and 1c, respectively, plot train and test accuracies (higher is better). Figure 1d highlights the impact of the amplitude parameter, α, on accuracy. Figure 1e plots the KL divergence from the conditional prior, P_t, to the conditional posterior, Q_t, given sampled indices (i_1, . . . , i_{t−1}).

7 Conclusions and Future Work

We presented new generalization bounds for randomized learning algorithms, using a novel combination of PAC-Bayes and algorithmic stability. The bounds inspired an adaptive sampling algorithm for SGD that dynamically updates the sampling distribution based on the training data and model. Experimental results with this algorithm indicate that it can reduce empirical risk faster than uniform sampling while also improving out-of-sample accuracy. Future research could investigate different utility functions and distribution updates, or explore the connections to related algorithms. We are also interested in providing stronger generalization guarantees, with polylogarithmic dependence on δ^{-1}, for non-convex objective functions, but proving Õ(1/√(nT))-uniform hyperparameter stability without (strong) convexity is difficult. We hope to address this problem in future work.

⁸ Each training algorithm started from the same initial model.

References

[1] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Neural Information Processing Systems, 2008.
[2] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499–526, 2002.
[3] O. Catoni. PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning, volume 56 of Institute of Mathematical Statistics Lecture Notes – Monograph Series. Institute of Mathematical Statistics, 2007.
[4] M. Collins, A. Globerson, T. Koo, X. Carreras, and P. Bartlett. Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. Journal of Machine Learning Research, 9:1775–1822, 2008.
[5] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[6] A. Elisseeff, T. Evgeniou, and M. Pontil. Stability of randomized learning algorithms. Journal of Machine Learning Research, 6:55–79, 2005.
[7] J. Feng, T. Zahavy, B. Kang, H. Xu, and S. Mannor. Ensemble robustness of deep learning algorithms. CoRR, abs/1602.02389, 2016.
[8] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, 1995.
[9] P. Germain, A. Lacasse, F. Laviolette, and M. Marchand. PAC-Bayesian learning of linear classifiers. In International Conference on Machine Learning, 2009.
[10] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In International Conference on Machine Learning, 2016.
[11] A. Kontorovich. Concentration in unbounded metric spaces and algorithmic stability. In International Conference on Machine Learning, 2014.
[12] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[13] I. Kuzborskij and C. Lampert. Data-dependent stability of stochastic gradient descent. CoRR, abs/1703.01678, 2017.
[14] J. Langford and J. Shawe-Taylor. PAC-Bayes and margins. In Neural Information Processing Systems, 2002.
[15] J. Lin and L. Rosasco. Optimal learning for multi-pass stochastic gradient methods. In Neural Information Processing Systems, 2016.
[16] J. Lin, R. Camoriano, and L. Rosasco. Generalization properties and implicit regularization for multiple passes SGM. In International Conference on Machine Learning, 2016.
[17] B. London, B. Huang, and L. Getoor. Stability and generalization in structured prediction. Journal of Machine Learning Research, 17(222):1–52, 2016.
[18] D. McAllester. PAC-Bayesian model averaging. In Computational Learning Theory, 1999.
[19] L. Rosasco and S. Villa. Learning with incremental iterative regularization. In Neural Information Processing Systems, 2015.
[20] M. Seeger. PAC-Bayesian generalisation error bounds for Gaussian process classification. Journal of Machine Learning Research, 3:233–269, 2002.
[21] S. Shalev-Shwartz. SelfieBoost: A boosting algorithm for deep learning. CoRR, abs/1411.3436, 2014.
[22] S. Shalev-Shwartz and Y. Wexler. Minimizing the maximal loss: How and why. In International Conference on Machine Learning, 2016.
[23] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. Journal of Machine Learning Research, 11:2635–2670, 2010.
[24] Y. Wang, J. Lei, and S. Fienberg. Learning with differential privacy: Stability, learnability and the sufficiency and necessity of ERM principle. Journal of Machine Learning Research, 17(183):1–40, 2016.
[25] P. Zhao and T. Zhang. Stochastic optimization with importance sampling for regularized loss minimization. In International Conference on Machine Learning, 2015.
Fully Decentralized Policies for Multi-Agent Systems: An Information Theoretic Approach

Roel Dobbe*
Electrical Engineering and Computer Science
University of California, Berkeley
Berkeley, CA 94720
[email protected]

David Fridovich-Keil*
Electrical Engineering and Computer Science
University of California, Berkeley
Berkeley, CA 94720
[email protected]

Claire Tomlin
Electrical Engineering and Computer Science
University of California, Berkeley
Berkeley, CA 94720
[email protected]

Abstract

Learning cooperative policies for multi-agent systems is often challenged by partial observability and a lack of coordination. In some settings, the structure of a problem allows a distributed solution with limited communication. Here, we consider a scenario where no communication is available, and instead we learn local policies for all agents that collectively mimic the solution to a centralized multi-agent static optimization problem. Our main contribution is an information theoretic framework based on rate distortion theory which facilitates analysis of how well the resulting fully decentralized policies are able to reconstruct the optimal solution. Moreover, this framework provides a natural extension that addresses which nodes an agent should communicate with to improve the performance of its individual policy.

1 Introduction

Finding optimal decentralized policies for multiple agents is often a hard problem hampered by partial observability and a lack of coordination between agents. The distributed multi-agent problem has been approached from a variety of angles, including distributed optimization [Boyd et al., 2011], game theory [Aumann and Dreze, 1974] and decentralized or networked partially observable Markov decision processes (POMDPs) [Oliehoek and Amato, 2016, Goldman and Zilberstein, 2004, Nair et al., 2005]. In this paper, we analyze a different approach consisting of a simple learning scheme to design fully decentralized policies for all agents that collectively mimic the solution to a common optimization problem, while having no access to a global reward signal and either no or restricted access to other agents' local state. This algorithm is a generalization of that proposed in our prior work [Sondermeijer et al., 2016] related to decentralized optimal power flow (OPF). Indeed, the success of regression-based decentralization in the OPF domain motivated us to understand when and how well the method works in a more general decentralized optimal control setting.

The key contribution of this work is to view decentralization as a compression problem, and then apply classical results from information theory to analyze performance limits.

* Indicates equal contribution.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
More specifically, we treat the ith agent's optimal action in the centralized problem as a random variable u*_i, and model its conditional dependence on the global state variables x = (x_1, . . . , x_n), i.e. p(u*_i | x), which we assume to be stationary in time. We now restrict each agent i to observe only the ith state variable x_i. Rather than solving this decentralized problem directly, we train each agent to replicate what it would have done with full information in the centralized case. That is, the vector of state variables x is compressed, and the ith agent must decompress x_i to compute some estimate û_i ≈ u*_i. In our approach, each agent learns a parameterized Markov control policy û_i = π̂_i(x_i) via regression. The π̂_i are learned from a data set containing local states x_i taken from historical measurements of system state x and corresponding optimal actions u*_i computed by solving an offline centralized optimization problem for each x.

In this context, we analyze the fundamental limits of compression. In particular, we are interested in unraveling the relationship between the dependence structure of u*_i and x and the corresponding ability of an agent with partial information to approximate the optimal solution, i.e. the difference (or distortion) between the decentralized action û_i = π̂_i(x_i) and u*_i. This type of relationship is well studied within the information theory literature as an instance of rate distortion theory [Cover and Thomas, 2012, Chapter 13]. Classical results in this field provide a means of finding a lower bound on the expected distortion as a function of the mutual information (or rate of communication) between u*_i and x_i. This lower bound is valid for each specified distortion metric, and for any arbitrary strategy of computing û_i from available data x_i. Moreover, we are able to leverage a similar result to provide a conceptually simple algorithm for choosing a communication structure (letting the regressor π̂_i depend on some other local states x_{j≠i}) in such a way that the lower bound on expected distortion is minimized. As such, our method generalizes [Sondermeijer et al., 2016] and provides a novel approach for the design and analysis of regression-based decentralized optimal policies for general multi-agent systems. We demonstrate these results on synthetic examples, and on a real example drawn from solving OPF in electrical distribution grids.
2 Related Work

Decentralized control has long been studied within the system theory literature, e.g. [Lunze, 1992, Siljak, 2011]. Recently, various decomposition based techniques have been proposed for distributed optimization based on primal or dual decomposition methods, which all require iterative computation and some form of communication with either a central node [Boyd et al., 2011] or neighbor-to-neighbor on a connected graph [Pu et al., 2014, Raffard et al., 2004, Sun et al., 2013]. Distributed model predictive control (MPC) optimizes a networked system composed of subsystems over a time horizon, which can be decentralized (no communication) if the dynamic interconnections between subsystems are weak, in order to achieve closed-loop stability as well as performance [Christofides et al., 2013]. The work of Zeilinger et al. [2013] extended this to systems with strong coupling by employing time-varying distributed terminal set constraints, which requires neighbor-to-neighbor communication.

Another class of methods models problems in which agents try to cooperate on a common objective without full state information as a decentralized partially observable Markov decision process (Dec-POMDP) [Oliehoek and Amato, 2016]. Nair et al. [2005] introduce networked distributed POMDPs, a variant of the Dec-POMDP inspired in part by the pairwise interaction paradigm of distributed constraint optimization problems (DCOPs). Although the specific algorithms in these works differ significantly from the regression-based decentralization scheme we consider in this paper, a larger difference is in problem formulation. As described in Sec. 3, we study a static optimization problem repeatedly solved at each time step. Much prior work, especially in optimal control (e.g. MPC) and reinforcement learning (e.g. Dec-POMDPs), poses the problem in a dynamic setting where the goal is to minimize cost over some time horizon. In the context of reinforcement learning (RL), the time horizon can be very long, leading to the well known tradeoff between exploration and exploitation; this does not appear in the static case. Additionally, many existing methods for the dynamic setting require an ongoing communication strategy between agents (though not all, e.g. [Peshkin et al., 2000]). Even one-shot static problems such as DCOPs tend to require complex communication strategies, e.g. [Modi et al., 2005].

Although the mathematical formulation of our approach is rather different from prior work, the policies we compute are similar in spirit to other learning and robotic techniques that have been proposed, such as behavioral cloning [Sammut, 1996] and apprenticeship learning [Abbeel and Ng, 2004], which aim to let an agent learn from examples. In addition, we see a parallel with recent work on information-theoretic bounded rationality [Ortega et al., 2015], which seeks to formalize decision-making with limited resources such as the time, energy, memory, and computational effort allocated for arriving at a decision. Our work is also related to swarm robotics [Brambilla et al., 2013], as it learns simple rules aimed to design robust, scalable and flexible collective behaviors for coordinating a large number of agents or robots.

Figure 1: (a) shows a connected graph corresponding to a distributed multi-agent system. The circles denote the local state x_i of an agent, the dashed arrow denotes its action u_i, and the double arrows denote the physical coupling between local state variables. (b) shows the Markov Random Field (MRF) graphical model of the dependency structure of all variables in the decentralized learning problem. Note that the state variables x_i and the optimal actions u*_i form a fully connected undirected network, and the local policy û_i only depends on the local state x_i.

3 General Problem Formulation

Consider a distributed multi-agent problem defined by a graph G = (N, E), with N denoting the nodes in the network with cardinality |N| = N, and E representing the set of edges between nodes. Fig. 1a shows a prototypical graph of this sort. Each node has a real-valued state vector x_i ∈ R^{α_i}, i ∈ N. A subset of nodes C ⊆ N, with cardinality |C| = C, are controllable and hence are termed "agents." Each of these agents has an action variable u_i ∈ R^{β_i}, i ∈ C. Let x = (x_1, . . . , x_N) ∈ R^{Σ_{i∈N} α_i} ≜ X denote the full network state vector and u ∈ R^{Σ_{i∈C} β_i} ≜ U the stacked network optimization variable. Physical constraints such as spatial coupling are captured through equality constraints g(x, u) = 0. In addition, the system is subject to inequality constraints h(x, u) ≤ 0 that incorporate limits due to capacity, safety, robustness, etc. We are interested in minimizing a convex scalar function f_o(x, u) that encodes objectives that are to be pursued cooperatively by all agents in the network, i.e. we want to find

    u^* = \arg\min_u f_o(x, u) \quad \text{s.t.} \quad g(x, u) = 0, \;\; h(x, u) \le 0.    (1)

Note that (1) is static in the sense that it does not consider the future evolution of the state x or the corresponding future values of cost f_o. We apply this static problem to sequential control tasks by repeatedly solving (1) at each time step.
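As a concrete, and entirely illustrative, instance of problem (1), the following sketch solves a small convex stand-in with cvxpy; the objective, coupling matrix, and limits are assumptions for demonstration, not the OPF model used later in the paper.

```python
import cvxpy as cp
import numpy as np

# Illustrative problem (1): quadratic objective, linear coupling g, box limits h.
rng = np.random.default_rng(0)
n_u = 3
A = rng.standard_normal((2, n_u))
b = A @ rng.uniform(-0.5, 0.5, n_u)        # choose b so the problem is feasible
x_obs = rng.standard_normal(n_u)           # (part of) the observed state x

u = cp.Variable(n_u)
objective = cp.Minimize(cp.sum_squares(u - x_obs))   # convex f_o(x, u)
constraints = [A @ u == b,                  # g(x, u) = 0
               cp.norm(u, "inf") <= 1.0]    # h(x, u) <= 0 (capacity limits)
cp.Problem(objective, constraints).solve()
print(u.value)                              # optimal action u*
```

Solving one such problem per historical state x[t] produces the training labels u*[t] used in the learning step below.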
Note that this simplification from an explicitly dynamic problem formulation (i.e. one in which the objective function incorporates future costs) is purely for ease of exposition and for consistency with the OPF literature as in [Sondermeijer et al., 2016]. We could also consider the optimal policy which solves a dynamic optimal control or RL problem, and the decentralized learning step in Sec. 3.1 would remain the same. Since (1) is static, applying the learned decentralized policies repeatedly over time may lead to dynamical instability. Identifying when this will and will not occur is a key challenge in verifying the regression-based decentralization method; however, it is beyond the scope of this work.

3.1 Decentralized Learning

We interpret the process of solving (1) as applying a well-defined function or stationary Markov policy π* : X → U that maps an input collective state x to the optimal collective control or action u*. We presume that this solution exists and can be computed offline. Our objective is to learn C decentralized policies û_i = π̂_i(x_i), one for each agent i ∈ C, based on T historical measurements of the states {x[t]}_{t=1}^T and the offline computation of the corresponding optimal actions {u*[t]}_{t=1}^T. Although each policy π̂_i individually aims to approximate u*_i based on local state x_i, we are able to reason about how well their collective action can approximate π*. Figure 2 summarizes the decentralized learning setup.

Figure 2: A flow diagram explaining the key steps of the decentralized regression method, depicted for the example system in Fig. 1a. We first collect data from a multi-agent system, and then solve the centralized optimization problem using all the data. The data is then split into smaller training and test sets for all agents to develop individual decentralized policies π̂_i(x_i) that approximate the optimal solution of the centralized problem. These policies are then implemented in the multi-agent system to collectively achieve a common global behavior.

More formally, we describe the dependency structure of the individual policies π̂_i : R^{α_i} → R^{β_i} with a Markov Random Field (MRF) graphical model, as shown in Fig. 1b. The û_i are only allowed to depend on local state x_i, while the u*_i may depend on the full state x. With this model, we can determine how information is distributed among different variables and what information-theoretic constraints the policies {π̂_i}_{i∈C} are subject to when collectively trying to reconstruct the centralized policy π*. Note that although we may refer to π* as globally optimal, this is not actually required for us to reason about how closely the π̂_i approximate π*. That is, our analysis holds even if (1) is solved using approximate methods. In a dynamical reformulation of (1), for example, π* could be generated using techniques from deep RL.
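The learning step in Fig. 2 amounts to one supervised regression per agent. A minimal sketch, assuming scikit-learn and an illustrative `local` map from agents to the state columns they observe (both are our assumptions, not the paper's implementation):

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_decentralized_policies(X_hist, U_star, local):
    """X_hist: T x dim(x) historical states; U_star: T x C optimal actions
    from the offline centralized solves; local[i]: columns of x agent i sees."""
    policies = {}
    for i in range(U_star.shape[1]):
        policies[i] = Ridge(alpha=1e-3).fit(X_hist[:, local[i]], U_star[:, i])
    return policies

# At run time, agent i acts on its own measurement alone:
# u_hat_i = policies[i].predict(x_new[local[i]].reshape(1, -1))
```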
3.2 A Rate-Distortion Framework

We approach the problem of how well the decentralized policies π̂_i can perform in theory from the perspective of rate distortion. Rate distortion theory is a sub-field of information theory which provides a framework for understanding and computing the minimal distortion incurred by any given compression scheme. In a rate distortion context, we can interpret the fact that the output of each individual policy π̂_i depends only on the local state x_i as a compression of the full state x. For a detailed overview, see [Cover and Thomas, 2012, Chapter 10]. We formulate the following variant of the classical rate distortion problem:

    D^* = \min_{p(\hat u \mid u^*)} \mathbb{E}[d(\hat u, u^*)] \quad \text{s.t.} \quad I(\hat u_i; u^*_j) \le I(x_i; u^*_j) \triangleq \rho_{ij}, \;\; I(\hat u_i; \hat u_j) \le I(x_i; x_j) \triangleq \gamma_{ij}, \;\; \forall i, j \in C,    (2)

where I(·, ·) denotes mutual information and d(·, ·) an arbitrary non-negative distortion measure. As usual, the minimum distortion between random variable u* and its reconstruction û may be found by minimizing over conditional distributions p(û | u*).

The novelty in (2) lies in the structure of the constraints. Typically, D* is written as a function D(R), where R is the maximum rate or mutual information I(û; u*). From Fig. 1b however, we know that pairs of reconstructed and optimal actions cannot share more information than is contained in the intermediate nodes in the graphical model, e.g. û_1 and u*_1 cannot share more information than x_1 and û_1. This is a simple consequence of the data processing inequality [Cover and Thomas, 2012, Thm. 2.8.1]. Similarly, the reconstructed optimal actions at two different nodes cannot be more closely related than the measurements x_i from which they are computed. The resulting constraints are fixed by the joint distribution of the state x and the optimal actions u*. That is, they are fully determined by the structure of the optimization problem (1) that we wish to solve.

We emphasize that we have made virtually no assumptions about the distortion function. For the remainder of this paper, we will measure distortion as the deviation between û_i and u*_i. However, we could also define it to be the suboptimality gap f_o(x, û) − f_o(x, u*), which may be much more complicated to compute. This definition could allow us to reason explicitly about the cost of decentralization, and it could address the valid concern that the optimal decentralized policy may bear no resemblance to π*. We leave further investigation for future work.

3.3 Example: Squared Error, Jointly Gaussian

To provide more intuition into the rate distortion framework, we consider an idealized example in which the x_i, u_i ∈ R. Let d(û, u*) = ‖û − u*‖₂² be the squared error distortion measure, and assume the state x and optimal actions u* to be jointly Gaussian. These assumptions allow us to derive an explicit formula for the optimal distortion D* and corresponding regression policies π̂_i. We begin by stating an identity for two jointly Gaussian X, Y ∈ R with correlation ρ:

    I(X; Y) \ge \iota \iff \rho^2 \ge 1 - e^{-2\iota},

which follows immediately from the definition of mutual information and the formula for the entropy of a Gaussian random variable. Taking ρ_{û_i,u*_i} to be the correlation between û_i and u*_i, σ²_{û_i} and σ²_{u*_i} to be the variances of û_i and u*_i respectively, and assuming that u*_i and û_i are of equal mean (unbiased policies π̂_i), we can show that the minimum distortion attainable is

    D^* = \min_{p(\hat u \mid u^*)} \mathbb{E}\big[\|u^* - \hat u\|_2^2\big] : \rho^2_{\hat u_i, u^*_i} \le 1 - e^{-2\rho_{ii}} = \rho^2_{u^*_i, x_i}, \;\forall i \in C    (3)
    = \min_{\{\sigma_{\hat u_i}\}, \{\rho_{\hat u_i, u^*_i}\}} \sum_i \sigma^2_{u^*_i} + \sigma^2_{\hat u_i} - 2\rho_{\hat u_i, u^*_i}\,\sigma_{\hat u_i}\,\sigma_{u^*_i} : \rho^2_{\hat u_i, u^*_i} \le \rho^2_{u^*_i, x_i}    (4)
    = \min_{\{\sigma_{\hat u_i}\}} \sum_i \sigma^2_{u^*_i} + \sigma^2_{\hat u_i} - 2\rho_{u^*_i, x_i}\,\sigma_{\hat u_i}\,\sigma_{u^*_i}    (5)
    = \sum_i \sigma^2_{u^*_i}\big(1 - \rho^2_{u^*_i, x_i}\big).    (6)
In (4), we have solved for the optimal correlations ρ_{û_i,u*_i}. Unsurprisingly, the optimal value turns out to be the maximum allowed by the mutual information constraint, i.e. û_i should be as correlated to u*_i as possible, and in particular as much as u*_i is correlated to x_i. Similarly, in (5) we solve for the optimal σ_{û_i}, with the result that at optimum, σ_{û_i} = ρ_{u*_i,x_i} σ_{u*_i}. This means that as the correlation between the local state x_i and the optimal action u*_i decreases, the variance of the estimated action û_i decreases as well. As a result, the learned policy will increasingly "bet on the mean" or "listen less" to its local measurement to approximate the optimal action.

Moreover, we may also provide a closed form expression for the regressor which achieves the minimum distortion D*. Since we have assumed that each u*_i and the state x are jointly Gaussian, we may write any u*_i as an affine function of x_i plus independent Gaussian noise. Thus, the minimum mean squared estimator is given by the conditional expectation

    \hat u_i = \hat\pi_i(x_i) = \mathbb{E}[u^*_i \mid x_i] = \mathbb{E}[u^*_i] + \rho_{u^*_i, x_i}\,\frac{\sigma_{u^*_i}}{\sigma_{x_i}}\,(x_i - \mathbb{E}[x_i]).    (7)

Thus, we have found a closed form expression for the best regressor π̂_i to predict u*_i from only x_i in the joint Gaussian case with squared error distortion. This result comes as a direct consequence of knowing the true parameterization of the joint distribution p(u*, x) (in this case, as a Gaussian).

3.4 Determining Minimum Distortion in Practice

Often in practice, we do not know the parameterization p(u* | x) and hence it may be intractable to determine D* and the corresponding decentralized policies π̂_i. However, if one can assume that p(u* | x) belongs to a family of parameterized functions (for instance universal function approximators such as deep neural networks), then it is theoretically possible to attain or at least approach minimum distortion for arbitrary non-negative distortion measures.

Practically, one can compute the mutual information constraint I(u*_i; x_i) from (2) to understand how much information a regressor π̂_i(x_i) has available to reconstruct u*_i. In the Gaussian case, we were able to compute this mutual information in closed form. For data from general distributions however, there is often no way to compute mutual information analytically. Instead, we rely on access to sufficient data {x[t], u*[t]}_{t=1}^T in order to estimate mutual informations numerically. In such situations (e.g. Sec. 5), we discretize the data and then compute mutual information with a minimax risk estimator, as proposed by Jiao et al. [2014].
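A quick Monte Carlo check of Equations 6 and 7 for a single scalar pair (x_i, u*_i); the covariance values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, var_u, var_x = 0.8, 2.0, 1.0
cov = [[var_x, rho * np.sqrt(var_x * var_u)],
       [rho * np.sqrt(var_x * var_u), var_u]]
x, u_star = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T

u_hat = rho * np.sqrt(var_u / var_x) * x    # Equation 7 (zero means)
mse = np.mean((u_star - u_hat) ** 2)
print(mse, var_u * (1 - rho ** 2))          # both approach D* = 0.72
```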
4 Allowing Restricted Communication

Suppose that a decentralized policy π̂_i suffers from insufficient mutual information between its local measurement x_i and the optimal action u*_i. In this case, we would like to quantify the potential benefits of communicating with other nodes j ≠ i in order to reduce the distortion limit D* from (2) and improve its ability to reconstruct u*_i. In this section, we present an information-theoretic solution to the problem of how to choose optimally which other data to observe, and we provide a lower bound-achieving solution for the idealized Gaussian case introduced in Sec. 3.3. We assume that in addition to observing its own local state x_i, each π̂_i is allowed to depend on at most k other x_{j≠i}.

Theorem 1. (Restricted Communication) If S_i is the set of k nodes j ≠ i ∈ N which û_i is allowed to observe in addition to x_i, then setting

    S_i = \arg\max_S \; I(u^*_i; x_i, \{x_j : j \in S\}) : |S| = k    (8)

minimizes the best-case expectation of any distortion measure. That is, this choice of S_i yields the smallest lower bound D* from (2) of any possible choice of S.

Proof. By assumption, S_i maximizes the mutual information between the observed local states {x_i, x_j : j ∈ S_i} and the optimal action u*_i. This mutual information is equivalent to the notion of rate R in the classical rate distortion theorem [Cover and Thomas, 2012]. It is well-known that the distortion rate function D(R) is convex and monotone decreasing in R. Thus, by maximizing mutual information R we are guaranteed to minimize distortion D(R), and hence D*.

Theorem 1 provides a means of choosing a subset of the state {x_j : j ≠ i} to communicate to each decentralized policy π̂_i that minimizes the corresponding best expected distortion D*. Practically speaking, this result may be interpreted as formalizing the following intuition: "the best thing to do is to transmit the most information." In this case, "transmitting the most information" corresponds to allowing π̂_i to observe the set S of nodes {x_j : j ≠ i} which contains the most information about u*_i. Likewise, by "best" we mean that S_i minimizes the best-case expected distortion D*, for any distortion metric d. As in Sec. 3.3, without making some assumption about the structure of the distribution of x and u*, we cannot guarantee that any particular regressor π̂_i will attain D*. Nevertheless, in a practical situation where sufficient data {x[t], u*[t]}_{t=1}^T is available, we can solve (8) by estimating mutual information [Jiao et al., 2014].

4.1 Example: Joint Gaussian, Squared Error with Communication

Here, we reexamine the joint Gaussian-distributed, mean squared error distortion case from Sec. 3.3, and apply Thm. 1. We will take u* ∈ R, x ∈ R^{10} and u*, x jointly Gaussian with zero mean and arbitrary covariance. The specific covariance matrix Σ of the joint distribution p(u*, x) is visualized in Fig. 3a. For simplicity, we show the squared correlation coefficients of Σ, which lie in [0, 1]. The boxed cells in Σ in Fig. 3a indicate that x_9 solves (8), i.e. j = 9 maximizes I(u*; x_1, x_j), the mutual information between the observed data and regression target u*. Intuitively, this choice of j is best because x_9 is highly correlated to u* and weakly correlated to x_1, which is already observed by û; that is, it conveys a significant amount of information about u* that is not already conveyed by x_1.

Figure 3b shows empirical results. Along the horizontal axis we increase the value of k, the number of additional variables x_j which regressor π̂_i observes. The vertical axis shows the resulting average distortion. We show results for a linear regressor of the form of (7) where we have chosen S_i optimally according to (8), as well as uniformly at random from all possible sets of unique indices. Note that the optimal choice of S_i yields the lowest average distortion D* for all choices of k. Moreover, the linear regressor of (7) achieves D* for all k, since we have assumed a Gaussian joint distribution.
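A sketch of the selection rule (8) on data, with the caveat that we substitute a simple histogram plug-in mutual information estimator for the minimax estimator of Jiao et al. [2014] (an implementation shortcut), and use exhaustive search, which is only practical for small k; all helper names are ours.

```python
import itertools
import numpy as np
from sklearn.metrics import mutual_info_score  # plug-in MI on discrete labels

def discretize(a, bins=16):
    return np.digitize(a, np.histogram_bin_edges(a, bins)[1:-1])

def select_nodes(u_star, X, i, k):
    """Return the set S_i maximizing estimated I(u*_i; x_i, {x_j : j in S})."""
    u_d = discretize(u_star)
    best_S, best_mi = None, -np.inf
    for S in itertools.combinations([j for j in range(X.shape[1]) if j != i], k):
        # Encode the joint observation (x_i, x_S) as one discrete symbol.
        cols = [discretize(X[:, j]) for j in (i, *S)]
        joint = np.ravel_multi_index(cols, [c.max() + 1 for c in cols])
        mi = mutual_info_score(u_d, joint)
        if mi > best_mi:
            best_S, best_mi = S, mi
    return best_S
```

Note that plug-in estimates of joint mutual information degrade quickly as k grows, which is precisely why the paper points to a minimax risk estimator.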
Figure 3: Results for optimal communication strategies on a synthetic Gaussian example. (a) shows squared correlation coefficients between u* and all x_i's; the boxed entries correspond to x_9, which was found to be optimal for k = 1. (b) shows that the optimal communication strategy of Thm. 1 achieves the lowest average distortion and outperforms the average over random strategies.

5 Application to Optimal Power Flow

In this case study, we aim to minimize the voltage variability in an electric grid caused by intermittent renewable energy sources and the increasing load caused by electric vehicle charging. We do so by controlling the reactive power output of distributed energy resources (DERs), while adhering to the physics of power flow and constraints due to energy capacity and safety. Recently, various approaches have been proposed, such as [Farivar et al., 2013] or [Zhang et al., 2014]. In these methods, DERs tend to rely on an extensive communication infrastructure, either with a central master node [Xu et al., 2017] or between agents leveraging local computation [Dall'Anese et al., 2014]. We study regression-based decentralization as outlined in Sec. 3 and Fig. 2 for the optimal power flow (OPF) problem [Low, 2014], as initially proposed by Sondermeijer et al. [2016]. We apply Thm. 1 to determine the communication strategy that minimizes optimal distortion to further improve the reconstruction of the optimal actions u*_i.

Solving OPF requires a model of the electricity grid describing both topology and impedances; this is represented as a graph G = (N, E). For clarity of exposition and without loss of generality, we introduce the linearized power flow equations over radial networks, also known as the LinDistFlow equations [Baran and Wu, 1989]:

    P_{ij} = \sum_{(j,k) \in E,\, k \ne i} P_{jk} + p^c_j - p^g_j,    (9a)
    Q_{ij} = \sum_{(j,k) \in E,\, k \ne i} Q_{jk} + q^c_j - q^g_j,    (9b)
    v_j = v_i - 2\,(r_{ij} P_{ij} + x_{ij} Q_{ij}).    (9c)

In this model, capitals P_{ij} and Q_{ij} represent real and reactive power flow on a branch from node i to node j for all branches (i, j) ∈ E, lower case p^c_i and q^c_i are the real and reactive power consumption at node i, and p^g_i and q^g_i are its real and reactive power generation. Complex line impedances r_{ij} + √(−1) x_{ij} have the same indexing as the power flows. The LinDistFlow equations use the squared voltage magnitude v_i, defined and indexed at all nodes i ∈ N. These equations are included as constraints in the optimization problem to enforce that the solution adheres to the laws of physics.

To formulate our decentralized learning problem, we take x_i ≜ (p^c_i, q^c_i, p^g_i) to be the local state variable and, for all controllable nodes, i.e. agents i ∈ C, we take u_i ≜ q^g_i, i.e. the reactive power generation can be controlled (v_i, P_{ij}, Q_{ij} are treated as dummy variables). We assume that for all nodes i ∈ N, consumption p^c_i, q^c_i and real power generation p^g_i are predetermined respectively by the demand and the power generated by a potential photovoltaic (PV) system. The action space is constrained by the reactive power capacity |u_i| = |q^g_i| ≤ q̄_i.
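A minimal sketch of how constraints (9a)-(9c) might be assembled for a radial feeder, assuming cvxpy and an illustrative parent-pointer encoding of the tree (the function and argument names are ours):

```python
import cvxpy as cp

def lindistflow_constraints(parent, r, x, p_c, q_c, p_g, q_g, v0=1.0):
    """parent[j] = upstream node of j (node 0 = substation); r[j], x[j] are the
    impedance of the branch into j; p_c, q_c, p_g are data; q_g is a decision
    variable. P[j], Q[j] are the flows on the branch into node j."""
    n = len(parent)
    P, Q, v = cp.Variable(n), cp.Variable(n), cp.Variable(n)
    children = [[j for j in range(1, n) if parent[j] == i] for i in range(n)]
    cons = [v[0] == v0]
    for j in range(1, n):
        i = parent[j]
        cons += [P[j] == sum(P[k] for k in children[j]) + p_c[j] - p_g[j],  # (9a)
                 Q[j] == sum(Q[k] for k in children[j]) + q_c[j] - q_g[j],  # (9b)
                 v[j] == v[i] - 2 * (r[j] * P[j] + x[j] * Q[j])]            # (9c)
    return cons, v
```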
[Figure 4 appears here.] Figure 4: Results for decentralized learning on an OPF problem. (a) shows an example result of decentralized learning: the shaded region represents the range of all voltages in a network over a full day. As compared to no control, the fully decentralized regression-based control reduces voltage variation and prevents constraint violation (dashed line). (b) shows that the optimal communication strategy S_i outperforms the average over random strategies on the mean squared error distortion metric. The regressors used are stepwise linear policies π̂_i with linear or quadratic features.

The OPF problem now reads

$$u^* = \underset{q_i^g,\;\forall i\in\mathcal{C}}{\arg\min} \;\sum_{i\in\mathcal{N}} \left|v_i - v_{\mathrm{ref}}\right| \qquad \text{s.t.}\;\; (9),\;\; q_i^g \le \bar q_i,\;\; \underline{v} \le v_i \le \overline{v}. \tag{10}$$

Following Fig. 2, we employ models of real electrical distribution grids (including the IEEE Test Feeders [IEEE PES, 2017]), which we equip with T historical readings {x[t]}_{t=1}^T of load and PV data, composed of real smart meter measurements sourced from Pecan Street Inc. [2017]. We solve (10) for all data, yielding a set of minimizers {u*[t]}_{t=1}^T. We then separate the overall data set into |C| smaller data sets {x_i[t], u_i^*[t]}_{t=1}^T, ∀i ∈ C, and train linear policies with feature kernels φ_i(·) and parameters θ_i of the form π̂_i(x_i) = θ_i^⊤ φ_i(x_i). Practically, the challenge is to select the best feature kernel φ_i(·). We extend earlier work, which showed that decentralized learning for OPF can be done satisfactorily via a hybrid forward- and backward-stepwise selection algorithm [Friedman et al., 2001, Chapter 3] that uses quadratic feature kernels.

Figure 4a shows the result for an electric distribution grid model based on a real network from Arizona. This network has 129 nodes and, in simulation, 53 nodes were equipped with a controllable DER (i.e. |N| = 129, |C| = 53). In Fig. 4a we show the voltage deviation from a normalized setpoint on a simulated network with data not used during training. The improvement over the no-control baseline is striking, and performance is nearly identical to the optimum achieved by the centralized solution. Concretely, we observed: (i) no constraint violations, and (ii) a suboptimality deviation of 0.15% on average, with a maximum deviation of 1.6%, as compared to the optimal policy π*. In addition, we applied Thm. 1 to the OPF problem for a smaller network [IEEE PES, 2017] in order to determine the optimal communication strategy that minimizes a squared error distortion measure. Fig. 4b shows the mean squared error distortion for an increasing number of observed nodes k, and shows how the optimal strategy outperforms an average over random strategies.
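As a sketch of the offline training step described above (solving (10) on historical data and regressing each u_i^* on the local state x_i), the snippet below fits one policy per controllable node. Plain quadratic features stand in for the paper's hybrid stepwise feature selection; all names and data structures are our own assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def train_decentralized_policies(X, U, controllable):
    """X[i]: (T, d_i) array of historical local states x_i[t];
    U[i]: (T,) array of OPF minimizers u_i*[t]. Returns one fitted
    policy per node i in `controllable`."""
    policies = {}
    for i in controllable:
        # Quadratic feature kernel followed by a linear map theta_i.
        model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        policies[i] = model.fit(X[i], U[i])
    return policies

# At run time, each node acts on purely local measurements:
# u_i = policies[i].predict(x_i.reshape(1, -1))
```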
6 Conclusions and Future Work

This paper generalizes the approach of Sondermeijer et al. [2016] to solve multi-agent static optimal control problems with decentralized policies that are learned offline from historical data. Our rate distortion framework facilitates a principled analysis of the performance of such decentralized policies and the design of optimal communication strategies to improve individual policies. These techniques work well on a model of a sophisticated real-world OPF example. There are still many open questions about regression-based decentralization. It is well known that strong interactions between different subsystems may lead to instability and suboptimality in decentralized control problems [Davison and Chang, 1990]. There are natural extensions of our work to address dynamic control problems more explicitly, and stability analysis is a topic of ongoing work. Also, analysis of the suboptimality of regression-based decentralization should be possible within our rate distortion framework. Finally, it is worth investigating the use of deep neural networks to parameterize both the distribution p(u*|x) and the local policies π̂_i in more complicated decentralized control problems with arbitrary distortion measures.

Acknowledgments

The authors would like to acknowledge Roberto Calandra for his insightful suggestions and feedback on the manuscript. This research is supported by NSF under the CPS Frontiers VehiCal project (1545126), by the UC-Philippine-California Advanced Research Institute under projects IIID-2016005 and IIID-2015-10, and by the ONR MURI Embedded Humans (N00014-16-1-2206). David Fridovich-Keil was also supported by the NSF GRFP.

References

P. Abbeel and A. Y. Ng. Apprenticeship Learning via Inverse Reinforcement Learning. In International Conference on Machine Learning, New York, NY, USA, 2004. ACM.
R. J. Aumann and J. H. Dreze. Cooperative games with coalition structures. International Journal of Game Theory, 3(4):217-237, Dec. 1974.
M. Baran and F. Wu. Optimal capacitor placement on radial distribution systems. IEEE Transactions on Power Delivery, 4(1):725-734, Jan. 1989.
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, July 2011.
M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo. Swarm robotics: a review from the swarm engineering perspective. Swarm Intelligence, 7(1):1-41, Mar. 2013.
P. D. Christofides, R. Scattolini, D. M. de la Peña, and J. Liu. Distributed model predictive control: A tutorial review and future research directions. Computers & Chemical Engineering, 51:21-41, 2013.
T. M. Cover and J. A. Thomas. Elements of information theory. John Wiley & Sons, 2012.
E. Dall'Anese, S. V. Dhople, and G. Giannakis. Optimal dispatch of photovoltaic inverters in residential distribution systems. Sustainable Energy, IEEE Transactions on, 5(2):487-497, 2014. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6719562.
E. J. Davison and T. N. Chang. Decentralized stabilization and pole assignment for general proper systems. IEEE Transactions on Automatic Control, 35(6):652-664, 1990.
M. Farivar, L. Chen, and S. Low. Equilibrium and dynamics of local voltage control in distribution systems. In 2013 IEEE 52nd Annual Conference on Decision and Control (CDC), pages 4329-4334, Dec. 2013. doi: 10.1109/CDC.2013.6760555.
J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning, volume 1. Springer Series in Statistics. Springer, Berlin, 2001.
C. V. Goldman and S. Zilberstein. Decentralized control of cooperative systems: Categorization and complexity analysis. J. Artif. Int. Res., 22(1):143-174, Nov. 2004. ISSN 1076-9757. URL http://dl.acm.org/citation.cfm?id=1622487.1622493.
IEEE PES. IEEE Distribution Test Feeders, 2017. URL http://ewh.ieee.org/soc/pes/dsacom/testfeeders/.
J. Jiao, K. Venkat, Y. Han, and T. Weissman. Minimax Estimation of Functionals of Discrete Distributions. arXiv preprint, June 2014. arXiv: 1406.6956.
S. Low. Convex Relaxation of Optimal Power Flow; Part I: Formulations and Equivalence. IEEE Transactions on Control of Network Systems, 1(1):15-27, Mar. 2014.
J. Lunze. Feedback Control of Large Scale Systems. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1992. ISBN 013318353X.
P. J. Modi, W.-M. Shen, M. Tambe, and M. Yokoo. Adopt: Asynchronous distributed constraint optimization with quality guarantees. Artif. Intell., 161(1-2):149-180, Jan. 2005.
ISSN 0004-3702. doi: 10.1016/j.artint.2004.09.003. URL http://dx.doi.org/10.1016/j.artint.2004.09.003.
R. Nair, P. Varakantham, M. Tambe, and M. Yokoo. Networked Distributed POMDPs: A synthesis of distributed constraint optimization and POMDPs. In AAAI, volume 5, pages 133-139, 2005.
F. A. Oliehoek and C. Amato. A Concise Introduction to Decentralized POMDPs. Springer International Publishing, 1st edition, 2016.
P. A. Ortega, D. A. Braun, J. Dyer, K.-E. Kim, and N. Tishby. Information-Theoretic Bounded Rationality. arXiv preprint, 2015. arXiv:1512.06789.
Pecan Street Inc. Dataport, 2017. URL http://www.pecanstreet.org/.
L. Peshkin, K.-E. Kim, N. Meuleau, and L. P. Kaelbling. Learning to cooperate via policy search. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, UAI'00, pages 489-496, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc. ISBN 1-55860-709-9. URL http://dl.acm.org/citation.cfm?id=2073946.2074003.
Y. Pu, M. N. Zeilinger, and C. N. Jones. Inexact fast alternating minimization algorithm for distributed model predictive control. In Conference on Decision and Control, Los Angeles, CA, USA, 2014. IEEE.
R. L. Raffard, C. J. Tomlin, and S. P. Boyd. Distributed optimization for cooperative agents: Application to formation flight. In Conference on Decision and Control, Nassau, The Bahamas, 2004. IEEE.
C. Sammut. Automatic construction of reactive control systems using symbolic machine learning. The Knowledge Engineering Review, 11(01):27-42, 1996.
D. D. Siljak. Decentralized control of complex systems. Dover Books on Electrical Engineering. Dover, New York, NY, 2011. URL http://cds.cern.ch/record/1985961.
O. Sondermeijer, R. Dobbe, D. B. Arnold, C. Tomlin, and T. Keviczky. Regression-based Inverter Control for Decentralized Optimal Power Flow and Voltage Regulation. In Power and Energy Society General Meeting, Boston, MA, USA, July 2016. IEEE.
A. X. Sun, D. T. Phan, and S. Ghosh. Fully decentralized AC optimal power flow algorithms. In Power and Energy Society General Meeting, Vancouver, Canada, July 2013. IEEE.
Y. Xu, Z. Y. Dong, R. Zhang, and D. J. Hill. Multi-Timescale Coordinated Voltage/Var Control of High Renewable-Penetrated Distribution Systems. IEEE Transactions on Power Systems, PP(99):1-1, 2017. ISSN 0885-8950. doi: 10.1109/TPWRS.2017.2669343.
M. N. Zeilinger, Y. Pu, S. Riverso, G. Ferrari-Trecate, and C. N. Jones. Plug and play distributed model predictive control based on distributed invariance and optimization. In Conference on Decision and Control, Florence, Italy, 2013. IEEE.
B. Zhang, A. Lam, A. Dominguez-Garcia, and D. Tse. An Optimal and Distributed Method for Voltage Regulation in Power Distribution Systems. IEEE Transactions on Power Systems, PP(99):1-13, 2014. ISSN 0885-8950. doi: 10.1109/TPWRS.2014.2347281.
Model-Powered Conditional Independence Test

Rajat Sen^{1,*}, Ananda Theertha Suresh^{2,*}, Karthikeyan Shanmugam^{3,*}, Alexandros G. Dimakis^{1}, and Sanjay Shakkottai^{1}
^1 The University of Texas at Austin   ^2 Google, New York   ^3 IBM Research, Thomas J. Watson Center

Abstract

We consider the problem of non-parametric Conditional Independence testing (CI testing) for continuous random variables. Given i.i.d samples from the joint distribution f(x, y, z) of continuous random vectors X, Y and Z, we determine whether X ⊥ Y | Z. We approach this by converting the conditional independence test into a classification problem. This allows us to harness very powerful classifiers like gradient-boosted trees and deep neural networks. These models can handle complex probability distributions and allow us to perform significantly better than the prior state of the art for high-dimensional CI testing. The main technical challenge in the classification problem is the need for samples from the conditional product distribution f^CI(x, y, z) = f(x|z) f(y|z) f(z) (which equals the joint distribution if and only if X ⊥ Y | Z) when given access only to i.i.d. samples from the true joint distribution f(x, y, z). To tackle this problem we propose a novel nearest neighbor bootstrap procedure and theoretically show that our generated samples are indeed close to f^CI in terms of total variational distance. We then develop theoretical results regarding the generalization bounds for classification for our problem, which translate into error bounds for CI testing. We provide a novel analysis of Rademacher-type classification bounds in the presence of non-i.i.d near-independent samples. We empirically validate the performance of our algorithm on simulated and real datasets and show performance gains over previous methods.

1 Introduction

Testing datasets for Conditional Independence (CI) has significant applications in several statistical/learning problems; among others, examples include discovering/testing for edges in Bayesian networks [15, 27, 7, 9], causal inference [23, 14, 29, 5] and feature selection through Markov Blankets [16, 31]. Given a triplet of random variables/vectors (X, Y, Z), we say that X is conditionally independent of Y given Z (denoted by X ⊥ Y | Z) if the joint distribution f_{X,Y,Z}(x, y, z) factorizes as f_{X,Y,Z}(x, y, z) = f_{X|Z}(x|z) f_{Y|Z}(y|z) f_Z(z). The problem of Conditional Independence Testing (CI Testing) can be defined as follows: given n i.i.d samples from f_{X,Y,Z}(x, y, z), distinguish between the two hypotheses H_0 : X ⊥ Y | Z and H_1 : ¬(X ⊥ Y | Z).

In this paper we propose a data-driven Model-Powered CI test. The central idea in a model-driven approach is to convert a statistical testing or estimation problem into a pipeline that utilizes the power of supervised learning models like classifiers and regressors; such pipelines can then leverage recent advances in classification/regression in high-dimensional settings. In this paper, we take such a model-powered approach (illustrated in Fig. 1), which reduces the problem of CI testing to binary classification. Specifically, the key steps of our procedure are as follows:

* Equal Contribution

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

[Figure 1 schematic: 3n original (X, Y, Z) samples; n of them form U_1 (label 1); the remaining 2n pass through the nearest-neighbor bootstrap to produce n samples U_2' (label 0); the shuffled, labeled samples form a training set D_r, used to train a classifier ĝ, and a test set D_e on which the test error L̂(ĝ, D_e) is measured.]
Figure 1: Illustration of our methodology. A part of the original samples are kept aside in U_1. The rest of the samples are used in our nearest-neighbor bootstrap to generate a data-set U_2' which is close to f^CI in distribution. The samples are labeled as shown and a classifier is trained on a training set; the test error is measured on a test set thereafter. If the test error is close to 0.5, then H_0 is not rejected; however, if the test error is low, then H_0 is rejected.

(i) Suppose we are provided 3n i.i.d samples from f_{X,Y,Z}(x, y, z). We keep aside n of these original samples in a set U_1 (refer to Fig. 1). The remaining 2n of the original samples are processed through our first module, the nearest-neighbor bootstrap (Algorithm 1 in our paper), which produces n simulated samples stored in U_2'. In Section 3, we show that these generated samples in U_2' are in fact close in total variational distance (defined in Section 3) to the conditionally independent distribution f^CI(x, y, z) ≜ f_{X|Z}(x|z) f_{Y|Z}(y|z) f_Z(z). (Note that only under H_0 does the equality f^CI(·) = f_{X,Y,Z}(·) hold; our method generates samples close to f^CI(x, y, z) under both hypotheses.)

(ii) Subsequently, the original samples kept aside in U_1 are labeled 1, while the new samples simulated from the nearest-neighbor bootstrap (in U_2') are labeled 0. The labeled samples (U_1 with label 1 and U_2' labeled 0) are aggregated into a data-set D. This set D is then broken into training and test sets D_r and D_e, each containing n samples.

(iii) Given the labeled training data-set (from step (ii)), we train powerful classifiers such as gradient boosted trees [6] or deep neural networks [17] which attempt to learn the classes of the samples. If the trained classifier has good accuracy over the test set, then intuitively it means that the joint distribution f_{X,Y,Z}(·) is distinguishable from f^CI (note that the generated samples labeled 0 are close in distribution to f^CI). Therefore, we reject H_0. On the other hand, if the classifier has accuracy close to random guessing, then f_{X,Y,Z}(·) is in fact close to f^CI, and we fail to reject H_0.

For independence testing (i.e., whether X ⊥ Y), classifiers were recently used in [19]. Their key observation was that given i.i.d samples (X, Y) from f_{X,Y}(x, y), if the Y coordinates are randomly permuted then the resulting samples exactly emulate the distribution f_X(x) f_Y(y). Thus the problem can be converted to a two-sample test between a subset of the original samples and the other, permuted subset; binary classifiers were then harnessed for this two-sample testing (for details see [19]). However, in the case of CI testing we need to emulate samples from f^CI. This is harder because the permutation of the samples needs to be Z-dependent (and Z can be high-dimensional). One of our key technical contributions is in proving that our nearest-neighbor bootstrap in step (i) achieves this task.

The advantage of this modular approach is that we can harness the power of classifiers (in step (iii) above), which have good accuracies in high dimensions. Thus, any improvement in the field of binary classification implies an advancement in our CI test. Moreover, there is added flexibility in choosing the best classifier based on domain knowledge about the data-generation process. Finally, our bootstrap is also efficient owing to fast algorithms for identifying nearest neighbors [24].
1.1 Main Contributions

(i) (Classification-based CI testing) We reduce the problem of CI testing to binary classification, as detailed in steps (i)-(iii) above and in Fig. 1. We simulate samples that are close to f^CI through a novel nearest-neighbor bootstrap (Algorithm 1), given access to i.i.d samples from the joint distribution. The problem of CI testing then reduces to a two-sample test between the original samples in U_1 and U_2', which can be effectively done by binary classifiers.

(ii) (Guarantees on bootstrapped samples) As mentioned in steps (i)-(iii), if the samples generated by the bootstrap (in U_2') are close to f^CI, then the CI testing problem reduces to testing whether the data-sets U_1 and U_2' are distinguishable from each other. We theoretically justify that this is indeed true. Let φ_{X,Y,Z}(x, y, z) denote the distribution of a sample produced by Algorithm 1 when it is supplied with 2n i.i.d samples from f_{X,Y,Z}(·). In Theorem 1, we prove that d_TV(φ, f^CI) = O(1/n^{1/d_z}) under appropriate smoothness assumptions. Here d_z is the dimension of Z and d_TV denotes total variational distance (Def. 1).

(iii) (Generalization bounds for classification under near-independence) The samples generated from the nearest-neighbor bootstrap do not remain i.i.d, but they are close to i.i.d. We quantify this property and go on to show generalization risk bounds for the classifier. Let us denote the class of functions encoded by the classifier by G, and let R̂ denote the probability of error of the optimal classifier ĝ ∈ G trained on the training set (Fig. 1). We prove that under appropriate assumptions,

$$r_0 - O\!\left(\frac{1}{n^{1/d_z}}\right) \;\le\; \hat{R} \;\le\; r_0 + O\!\left(\frac{1}{n^{1/d_z}}\right) + O\!\left(\sqrt{V}\left(\frac{1}{n^{1/3}} + \sqrt{\frac{2^{d_z}}{n}}\right)\right)$$

with high probability, up to log factors. Here r_0 = 0.5 (1 − d_TV(f, f^CI)) and V is the VC dimension [30] of the class G. Thus, when f is equivalent to f^CI (H_0 holds), the error rate of the classifier is close to 0.5, but when H_1 holds the loss is much lower. We provide a novel analysis of Rademacher complexity bounds [4] under near-independence, which is of independent interest.

(iv) (Empirical evaluation) We perform extensive numerical experiments where our algorithm outperforms the state of the art [32, 28]. We also apply our algorithm for analyzing CI relations in the protein signaling network data from the flow cytometry data-set [26]. In practice we observe that the performance with respect to the dimension of Z scales much better than expected from our worst-case theoretical analysis. This is because powerful binary classifiers perform well in high dimensions.

1.2 Related Work

In this paper we address the problem of non-parametric CI testing when the underlying random variables are continuous. The literature on non-parametric CI testing is vast; we review some of the recent work in this field that is most relevant to our paper. Most of the recent work in CI testing is kernel based [28, 32, 10]. Many of these works build on the study in [11], where non-parametric CI relations are characterized using covariance operators for Reproducing Kernel Hilbert Spaces (RKHS). KCIT [32] uses the partial association of regression functions relating X, Y, and Z. RCIT [28] is an approximate version of KCIT that attempts to improve running times when the number of samples is large. KCIPT [10] is perhaps most relevant to our work. In [10], a specific permutation of the samples is used to simulate data from f^CI; an expensive linear program needs to be solved in order to calculate the permutation.
On the other hand, we use a simple nearest-neighbor bootstrap, and further we provide theoretical guarantees about the closeness of the samples to f^CI in terms of total variational distance. Finally, the two-sample test in [10] is based on a kernel method [3], while we use binary classifiers for the same purpose. There has also been recent work on entropy estimation [13] using nearest neighbor techniques (used for density estimation); this can subsequently be used for CI testing by estimating the conditional mutual information I(X; Y | Z). Binary classification has been recently used for two-sample testing, in particular for independence testing [19]. Our analysis of generalization guarantees for classification is aimed at recovering guarantees similar to [4], but in a non-i.i.d setting. In this regard (non-i.i.d generalization guarantees), there has been recent work in proving Rademacher complexity bounds for β-mixing stationary processes [21]. This work also falls in the category of machine learning reductions, where the general philosophy is to reduce various machine learning settings like multi-class regression [2], ranking [1], reinforcement learning [18], and structured prediction [8] to that of binary classification.

2 Problem Setting and Algorithms

In this section we describe the algorithmic details of our CI testing procedure. We first formally define our problem. Then we describe our bootstrap algorithm for generating the data-set that mimics samples from f^CI. We give detailed pseudo-code for our CI testing process, which reduces the problem to that of binary classification. Finally, we suggest further improvements to our algorithm.

Problem Setting: The problem setting is that of non-parametric Conditional Independence (CI) testing given i.i.d samples from the joint distributions of random variables/vectors [32, 10, 28]. We are given 3n i.i.d samples from a continuous joint distribution f_{X,Y,Z}(x, y, z), where x ∈ R^{d_x}, y ∈ R^{d_y} and z ∈ R^{d_z}. The goal is to test whether X ⊥ Y | Z, i.e. whether f_{X,Y,Z}(x, y, z) factorizes as

f_{X,Y,Z}(x, y, z) = f_{X|Z}(x|z) f_{Y|Z}(y|z) f_Z(z) ≜ f^CI(x, y, z).

This is essentially a hypothesis testing problem with H_0 : X ⊥ Y | Z and H_1 : ¬(X ⊥ Y | Z). Note: for notational convenience, we will drop the subscripts when the context is evident; for instance, we may use f(x|z) in place of f_{X|Z}(x|z).

Nearest-Neighbor Bootstrap: Algorithm 1 is a procedure to generate a data-set U' consisting of n samples, given a data-set U of 2n i.i.d samples from the distribution f_{X,Y,Z}(x, y, z). The data-set U is broken into two equally sized partitions U_1 and U_2. Then, for each sample in U_1, we find the nearest neighbor in U_2 in terms of the Z coordinates. The Y-coordinates of the sample from U_1 are exchanged with the Y-coordinates of its nearest neighbor (in U_2); the modified sample is added to U'.

Algorithm 1 DataGen - Given a data-set U = U_1 ∪ U_2 of 2n i.i.d samples from f(x, y, z) (|U_1| = |U_2| = n), returns a new data-set U' having n samples.
1: function DataGen(U_1, U_2, 2n)
2:   U' = ∅
3:   for u in U_1 do
4:     Let v = (x', y', z') ∈ U_2 be the sample such that z' is the 1-Nearest Neighbor (1-NN) of z (in ℓ_2 norm) in the whole data-set U_2, where u = (x, y, z)
5:     Let u' = (x, y', z) and U' = U' ∪ {u'}.
6:   end for
7: end function
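Since this bootstrap is the heart of the method, a minimal Python sketch of Algorithm 1 is given below. The array layout (rows with [X | Y | Z] column blocks) and the function name are our own conventions, and the k-d tree is one concrete choice of the fast nearest-neighbor structures cited above [24].

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_bootstrap(U1, U2, dx, dy):
    """DataGen (Algorithm 1): for each sample in U1, replace its Y block
    with the Y block of its 1-nearest neighbor in U2, where neighbors
    are found in the Z coordinates under the l2 norm."""
    z_cols = slice(dx + dy, None)
    tree = cKDTree(U2[:, z_cols])
    _, idx = tree.query(U1[:, z_cols], k=1)   # 1-NN indices into U2
    U_boot = U1.copy()
    U_boot[:, dx:dx + dy] = U2[idx, dx:dx + dy]
    return U_boot
```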
One of our main results is that the samples in U', generated in Algorithm 1, mimic samples coming from the distribution f^CI. Suppose u = (x, y, z) ∈ U_1 is a sample such that f_Z(z) is not too small. In this case z' (the 1-NN sample from U_2) will not be far from z. Therefore, given a fixed z, under appropriate smoothness assumptions, y' will be close to an independent sample coming from f_{Y|Z}(y|z') ≈ f_{Y|Z}(y|z). On the other hand, if f_Z(z) is small, then z is a rare occurrence and will not contribute adversely.

CI Testing Algorithm: Now we introduce our CI testing algorithm, which uses Algorithm 1 along with binary classifiers. The pseudo-code is in Algorithm 2 (Classifier CI Test - CCIT).

Algorithm 2 CCITv1 - Given a data-set U of 3n i.i.d samples from f(x, y, z), returns whether X ⊥ Y | Z.
1: function CCIT(U, 3n, τ, G)
2:   Partition U into three disjoint partitions U_1, U_2 and U_3 of size n each, randomly.
3:   Let U_2' = DataGen(U_2, U_3, 2n) (Algorithm 1). Note that |U_2'| = n.
4:   Create the labeled data-set D := {(u, ℓ = 1)}_{u ∈ U_1} ∪ {(u', ℓ' = 0)}_{u' ∈ U_2'}.
5:   Divide the data-set D into train and test sets D_r and D_e respectively. Note that |D_r| = |D_e| = n.
6:   Let ĝ = argmin_{g ∈ G} L̂(g, D_r) := (1/|D_r|) Σ_{(u,ℓ) ∈ D_r} 1{g(u) ≠ ℓ}. This is empirical risk minimization for training the classifier (finding the best function in the class G).
7:   If L̂(ĝ, D_e) > 0.5 − τ, then conclude X ⊥ Y | Z; otherwise, conclude ¬(X ⊥ Y | Z).
8: end function

In Algorithm 2, the original samples in U_1 and the nearest-neighbor bootstrapped samples in U_2' should be almost indistinguishable if H_0 holds. However, if H_1 holds, then the classifier trained in Line 6 should be able to easily distinguish between the samples corresponding to different labels. In Line 6, G denotes the space of functions over which risk minimization is performed in the classifier. We will show (in Theorem 1) that the variational distance between the distribution of one of the samples in U_2' and f^CI(x, y, z) is very small for large n. However, the samples in U_2' are not exactly i.i.d, only close to i.i.d. Therefore, in practice for finite n there is a small bias b > 0, i.e. L̂(ĝ, D_e) ≈ 0.5 − b even when H_0 holds. The threshold τ needs to be greater than b in order for Algorithm 2 to function. In the next section, we present an algorithm where this bias is corrected.

Algorithm with Bias Correction: We present an improved bias-corrected version of our algorithm as Algorithm 3. As mentioned in the previous section, in Algorithm 2 the optimal classifier may be able to achieve a loss slightly less than 0.5 in the case of finite n, even when H_0 is true. However, the classifier is expected to distinguish between the two data-sets only based on the Y, Z coordinates, as the joint distribution of X and Z remains the same in the nearest-neighbor bootstrap. The key idea in Algorithm 3 is to train a classifier only using the Y and Z coordinates, denoted by ĝ'. As before, we also train another classifier using all the coordinates, which is denoted by ĝ. The test loss of ĝ' is expected to be roughly 0.5 − b, where b is the bias mentioned in the previous section. Therefore, we can just subtract this bias: when H_0 is true, L̂(ĝ', D_e') − L̂(ĝ, D_e) will be close to 0; however, when H_1 holds, L̂(ĝ, D_e) will be much lower, as the classifier ĝ has been trained leveraging the information encoded in all the coordinates.
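Algorithm 3 below formalizes this correction. As an illustration, the following sketch computes both test statistics, reusing nn_bootstrap from the previous sketch. It is a bare-bones stand-in for the released CCIT package, not the package itself: the XGBoost hyperparameters are arbitrary, there is no repeated bootstrapping, and all names are ours.

```python
import numpy as np
from xgboost import XGBClassifier

def classifier_test_errors(U, dx, dy, seed=0):
    """Build the labeled data-set of Algorithms 2/3 and return two test
    errors: one for a classifier seeing all coordinates (Algorithm 2's
    statistic) and one for a classifier restricted to the (Y, Z)
    coordinates (the bias estimate used by Algorithm 3)."""
    rng = np.random.default_rng(seed)
    U = U[rng.permutation(len(U))]
    n = len(U) // 3
    U1, U2, U3 = U[:n], U[n:2 * n], U[2 * n:3 * n]
    D = np.vstack([U1, nn_bootstrap(U2, U3, dx, dy)])   # labels 1 then 0
    labels = np.r_[np.ones(n), np.zeros(n)]
    perm = rng.permutation(2 * n)
    D, labels = D[perm], labels[perm]

    def test_error(cols):
        clf = XGBClassifier(n_estimators=100, max_depth=4)
        clf.fit(D[:n, cols], labels[:n])                       # train on D_r
        return np.mean(clf.predict(D[n:, cols]) != labels[n:]) # loss on D_e

    err_all = test_error(np.arange(D.shape[1]))   # uses X, Y and Z
    err_yz = test_error(np.arange(dx, D.shape[1]))  # Y and Z only
    return err_all, err_yz

def ccit(U, dx, dy, tau=0.05):
    """Bias-corrected decision rule: reject H0 when the full classifier
    beats the (Y, Z)-only classifier by more than tau."""
    err_all, err_yz = classifier_test_errors(U, dx, dy)
    return not (err_all < err_yz - tau)   # True: fail to reject H0 (CI)
```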
Algorithm 3 CCITv2 - Given a data-set U of 3n i.i.d samples, returns whether X ⊥ Y | Z.
1: function CCIT(U, 3n, τ, G)
2:   Perform Steps 1-5 as in Algorithm 2.
3:   Let D_r' = {((y, z), ℓ)}_{(u=(x,y,z), ℓ) ∈ D_r}. Similarly, let D_e' = {((y, z), ℓ)}_{(u=(x,y,z), ℓ) ∈ D_e}. These are the training and test sets without the X-coordinates.
4:   Let ĝ = argmin_{g ∈ G} L̂(g, D_r) := (1/|D_r|) Σ_{(u,ℓ) ∈ D_r} 1{g(u) ≠ ℓ}. Compute the test loss L̂(ĝ, D_e).
5:   Let ĝ' = argmin_{g ∈ G} L̂(g, D_r') := (1/|D_r'|) Σ_{(u,ℓ) ∈ D_r'} 1{g(u) ≠ ℓ}. Compute the test loss L̂(ĝ', D_e').
6:   If L̂(ĝ, D_e) < L̂(ĝ', D_e') − τ, then conclude ¬(X ⊥ Y | Z); otherwise, conclude X ⊥ Y | Z.
7: end function

3 Theoretical Results

In this section, we provide our main theoretical results. We first show that the distribution of any one of the samples generated in Algorithm 1 closely resembles that of a sample coming from f^CI. This result holds for a broad class of distributions f_{X,Y,Z}(x, y, z) which satisfy some smoothness assumptions. However, the samples generated by Algorithm 1 (U' in the algorithm) are not exactly i.i.d, only close to i.i.d. We quantify this and go on to show that empirical risk minimization over a class of classifier functions generalizes well using these samples. Before we formally state our results, we provide some useful definitions.

Definition 1. The total variational distance between two continuous probability distributions f(·) and g(·) defined over a domain X is d_TV(f, g) = sup_{p ∈ B} |E_f[p(X)] − E_g[p(X)]|, where B is the set of all measurable functions from X to [0, 1]. Here, E_f[·] denotes expectation under the distribution f.

We first prove that the distribution of any one of the samples generated in Algorithm 1 is close to f^CI in terms of total variational distance. We make the following assumptions on the joint distribution of the original samples, i.e. f_{X,Y,Z}(x, y, z).

Smoothness assumption on f(y|z): We assume a smoothness condition on f(y|z) that is a generalization of boundedness of the maximum eigenvalue of the Fisher information matrix of y w.r.t. z.

Assumption 1. For z ∈ R^{d_z} and a such that ‖a − z‖_2 ≤ ε_1, the generalized curvature matrix I_a(z) is

$$I_a(z)_{ij} = \left.\frac{\partial^2}{\partial z_i'\,\partial z_j'} \int f(y\,|\,z)\,\log\frac{f(y\,|\,z)}{f(y\,|\,z')}\,dy\,\right|_{z'=a} = \mathbb{E}\left[\left.-\frac{\partial^2}{\partial z_i'\,\partial z_j'}\log f(y\,|\,z')\right|_{z'=a} \,\middle|\, Z = z\right]. \tag{1}$$

We require that for all z ∈ R^{d_z} and all a such that ‖a − z‖_2 ≤ ε_1, λ_max(I_a(z)) ≤ β. Analogous assumptions have been made on the Hessian of the density in the context of entropy estimation [12].

Smoothness assumptions on f(z): We assume some smoothness properties of the probability density function f(z). The smoothness assumptions (in Assumption 2) are a subset of the assumptions made in [13] (Assumption 1, Page 5) for entropy estimation.

Definition 2. For any δ > 0, we define G(δ) = P(f(Z) ≤ δ). This is the probability mass of the distribution of Z in the areas where the p.d.f. is less than δ.

Definition 3 (Hessian matrix). Let H_f(z) denote the Hessian matrix of the p.d.f. f(z) with respect to z, i.e. H_f(z)_{ij} = ∂²f(z)/∂z_i ∂z_j, provided f is twice continuously differentiable at z.

Assumption 2. The probability density function f(z) satisfies the following: (1) f(z) is twice continuously differentiable and the Hessian matrix H_f satisfies ‖H_f(z)‖_2 ≤ c_{d_z} almost everywhere, where c_{d_z} depends only on the dimension; (2) ∫ f(z)^{1−1/d} dz ≤ c_3 for all d ≥ 2, where c_3 is a constant.

Theorem 1. Let (X, Y', Z) denote a sample in U_2' produced by Algorithm 1 by modifying the original sample (X, Y, Z) in U_1, when supplied with 2n i.i.d samples from the original joint distribution f_{X,Y,Z}(x, y, z). Let φ_{X,Y,Z}(x, y, z) be the distribution of (X, Y', Z). Under smoothness assumptions (1) and (2), for any ε < ε_1 and n large enough, we have

$$d_{TV}(\phi, f^{CI}) \;\le\; b(n) \;\triangleq\; \frac{c_3\,\beta}{2}\cdot\frac{2^{1/d_z}\,\Gamma(1/d_z)}{d_z\,(n\,\gamma_{d_z})^{1/d_z}} \;+\; \frac{\beta\,\epsilon^2}{4} \;+\; \frac{G\!\left(2 c_{d_z} \epsilon^2\right)}{4} \;+\; \exp\!\left(-\frac{n\,\gamma_{d_z}\,\epsilon^{d_z}}{2^{d_z+2}}\right) \;+\; G\!\left(2 c_{d_z} \epsilon^2\right).$$

Here, γ_d is the volume of the unit-radius ℓ_2 ball in R^d.
Theorem 1 characterizes the variational distance between the distribution of a sample generated in Algorithm 1 and the conditionally independent distribution f^CI. We defer the proof of Theorem 1 to Appendix A. Now, our goal is to characterize the misclassification error of the trained classifier in Algorithm 2 under both H_0 and H_1. Consider the distribution of the samples in the data-set D_r used for classification in Algorithm 2. Let q(x, y, z | ℓ = 1) be the marginal distribution of each sample with label 1, and similarly let q(x, y, z | ℓ = 0) denote the marginal distribution of the label-0 samples. Note that under our construction,

$$q(x, y, z \mid \ell = 1) = f_{X,Y,Z}(x, y, z) \;\begin{cases} = f^{CI}(x, y, z) & \text{if } H_0 \text{ holds} \\ \ne f^{CI}(x, y, z) & \text{if } H_1 \text{ holds,} \end{cases} \qquad q(x, y, z \mid \ell = 0) = \phi_{X,Y,Z}(x, y, z), \tag{2}$$

where φ_{X,Y,Z}(x, y, z) is as defined in Theorem 1. Note that even though the marginal of each sample with label 0 is φ_{X,Y,Z}(x, y, z) (Equation (2)), these samples are not exactly i.i.d, owing to the nearest neighbor bootstrap. We will go on to show that they are actually close to i.i.d, and therefore classification risk minimization generalizes similarly to the i.i.d results for classification [4]. First, we review standard definitions and results from classification theory [4].

Ideal Classification Setting: We consider an ideal classification scenario for CI testing and in the process define standard quantities in learning theory. Recall that G is the set of classifiers under consideration. Let q̄ be our ideal distribution for q, given by q̄(x, y, z | ℓ = 1) = f_{X,Y,Z}(x, y, z), q̄(x, y, z | ℓ = 0) = f^CI_{X,Y,Z}(x, y, z) and q̄(ℓ = 1) = q̄(ℓ = 0) = 0.5. In other words, this is the ideal classification scenario for testing CI. Let L(g(u), ℓ) be our loss function for a classifying function g ∈ G, for a sample u ≜ (x, y, z) with true label ℓ. In our algorithms the loss function is the 0-1 loss, but our results hold for any bounded loss function s.t. |L(g(u), ℓ)| ≤ |L|. For a distribution q̄
, n n (i) Rq (gS ) with probability at least 1 8 . Here V is the V.C. dimension of the classification function class, G is as defined in Def. 2, C is an universal constant and |L| is the bound on the absolute value of the loss. (ii) Suppose the loss is L(g(u), `) = 1g(u)6=` (s.t |L| ? 1). Further suppose the class of classifying functions is such that Rq (gq? ) ? r0 + ?. Here, r0 , 0.5(1 dT V (q(x, y, z|1), q(x, y, z|0))) is the risk of the Bayes optimal classifier when q(` = 1) = q(` = 0). This is the best loss that any classifier can achieve for this classification problem [4]. Under this setting, w.p at least 1 8 we have: 1 1 2 dT V (f, f CI ) b(n) 1 ? Rq (gS ) ? 1 2 2 dT V (f, f CI ) + b(n) +?+ 2 n where b(n) is as defined in Theorem 1. We prove Theorem 2 as Theorem 3 and Theorem 4 in the appendix. In part (i) of the theorem we prove that generalization bounds hold even when the samples are not exactly i.i.d. Intuitively, consider two sample inputs ui , uj 2 U1 , such that corresponding Z coordinates zi and zj are far away. Then we expect the resulting samples u0i and u0j (in U20 ) to be nearly-independent. By carefully capturing this notion of spatial near-independence, we prove generalization errors in Theorem 3. Part (ii) of the theorem essentially implies that the error of the trained classifier will be close to 0.5 (l.h.s) when f ? f CI (under H0 ). On the other hand under H1 if dT V (f, f CI ) > 1 , the error will be less than 0.5( + b(n)) + n which is small. 4 Empirical Results In this section we provide empirical results comparing our proposed algorithm and other state of the art algorithms. The algorithms under comparison are: (i) CCIT - Algorithm 3 in our paper where we use XGBoost [6] as the classifier. In our experiments, for each data-set we boot-strap the samples and run our algorithm B times. The results are averaged over B bootstrap runs1 . (ii) KCIT - Kernel CI test from [32]. We use the Matlab code available online. (iii) RCIT - Randomized CI Test from [28]. We use the R package that is publicly available. 1 The python package for our implementation can be found here (https://github.com/rajatsen91/CCIT). 7 4.1 Synthetic Experiments We perform the synthetic experiments in the regime of post-nonlinear noise similar to [32]. In our experiments X and Y are dimension 1, and the dimension of Z scales (motivated by causal settings and also used in [32, 28]). X and Y are generated according to the relation G(F (Z) + ?) where ? is a noise term and G is a non-linear function, when the H0 holds. In our experiments, the data is generated as follows: (i) when X ? ? Y |Z, then each coordinate of Z is a Gaussian with unit mean and variance, X = cos(aT Z + ?1 ) and Y = cos(bT Z + ?2 ). Here, a, b 2 Rdz and kak = kbk = 1. a,b are fixed while generating a single dataset. ?1 and ?2 are zero-mean Gaussian noise variables, which are independent of everything else. We set V ar(?1 ) = V ar(?2 ) = 0.25. (ii) when X ? 6 ? Y |Z, then everything is identical to (i) except that Y = cos(bT Z + cX + ?2 ) for a randomly chosen constant c 2 [0, 2]. In Fig. 2a, we plot the performance of the algorithms when the dimension of Z scales. For generating each point in the plot, 300 data-sets were generated with the appropriate dimensions. 
Half of them are according to H0 and the other half are from H1 Then each of the algorithms are run on these data-sets, and the ROC AUC (Area Under the Receiver Operating Characteristic curve) score is calculated from the true labels (CI or not CI) for each data-set and the predicted scores. We observe that the accuracy of CCIT is close to 1 for dimensions upto 70, while all the other algorithms do not scale as well. In these experiments the numberpof bootstraps per data-set for CCIT was set to B = 50. We set the threshold in Algorithm 3 to ? = 1/ n, which is an upper-bound on the expected variance of the test-statistic when H0 holds. 4.2 Flow-Cytometry Dataset We use our CI testing algorithm to verify CI relations in the protein network data from the flowcytometry dataset [26], which gives expression levels of 11 proteins under various experimental conditions. The ground truth causal graph is not known with absolute certainty in this data-set, however this dataset has been widely used in the causal structure learning literature. We take three popular learned causal structures that are recovered by causal discovery algorithms, and we verify CI relations assuming these graphs to be the ground truth. The three graph are: (i) consensus graph from [26] (Fig. 1(a) in [22]) (ii) reconstructed graph by Sachs et al. [26] (Fig. 1(b) in [22]) (iii) reconstructed graph in [22] (Fig. 1(c) in [22]). For each graph we generate CI relations as follows: for each node X in the graph, identify the set Z consisting of its parents, children and parents of children in the causal graph. Conditioned on this set Z, X is independent of every other node Y in the graph (apart from the ones in Z). We use this to create all CI conditions of these types from each of the three graphs. In this process we generate over 60 CI relations for each of the graphs. In order to evaluate false positives of our algorithms, we also need relations such that X ? 6 ? Y |Z. For, this we observe that if there is an edge between two nodes, they are never CI given any other conditioning set. For each graph we generate 50 such non-CI relations, where an edge X $ Y is selected at random and a conditioning set of size 3 is randomly selected from the remaining nodes. We construct 50 such negative examples for each graph. In Fig. 2, we display the performance of all three algorithms based on considering each of the three graphs as ground-truth. The algorithms are given access to observational data for verifying CI and non-CI relations. In Fig. 2b we display the ROC plot for all three algorithms for the data-set generated by considering graph (ii). In Table 2c we display the ROC AUC score for the algorithms for the three graphs. It can be seen that our algorithm outperforms the others in all three cases, even when the dimensionality of Z is fairly low (less than 10 in all cases). An interesting thing to note is that the edges (pkc-raf), (pkc-mek) and (pka-p38) are there in all the three graphs. However, all three CI testers CCIT, KCIT and RCIT are fairly confident that these edges should be absent. These edges may be discrepancies in the ground-truth graphs and therefore the ROC AUC of the algorithms are lower than expected. 8 1.0 CCIT RCIT KCIT ROC AUC 0.9 0.8 0.7 0.6 05 20 50 70 100 Dimension of Z (a) 150 (b) Algo. Graph (i) Graph (ii) Graph (iii) CCIT RCIT KCIT 0.6848 0.6448 0.6528 0.7778 0.7168 0.7416 0.7156 0.6928 0.6610 (c) Figure 2: In (a) we plot the performance of CCIT, KCIT and RCIT in the post-nonlinear noise synthetic data. 
5 Conclusion

In this paper we present a model-powered approach for CI testing by converting it into binary classification, thus empowering CI testing with powerful supervised learning tools like gradient boosted trees. We provide an efficient nearest-neighbor bootstrap which makes the reduction to classification possible. We provide theoretical guarantees on the bootstrapped samples, as well as risk generalization bounds for our classification problem under non-i.i.d near-independent samples. In conclusion, we believe that model-driven, data-dependent approaches can be extremely useful in general statistical testing and estimation problems, as they enable us to use powerful supervised learning tools.

Acknowledgments

This work is partially supported by NSF grants CNS 1320175 and NSF SaTC 1704778, ARO grants W911NF-17-1-0359 and W911NF-16-1-0377, and the US DoT supported D-STOP Tier 1 University Transportation Center.

References

[1] Maria-Florina Balcan, Nikhil Bansal, Alina Beygelzimer, Don Coppersmith, John Langford, and Gregory Sorkin. Robust reductions from ranking to classification. Learning Theory, pages 604-619, 2007.
[2] Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory Sorkin, and Alex Strehl. Conditional probability tree estimation analysis and algorithms. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 51-58. AUAI Press, 2009.
[3] Karsten M Borgwardt, Arthur Gretton, Malte J Rasch, Hans-Peter Kriegel, Bernhard Schölkopf, and Alex J Smola. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):e49-e57, 2006.
[4] Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323-375, 2005.
[5] Eliot Brenner and David Sontag. SparsityBoost: A new scoring function for learning Bayesian network structure. arXiv preprint arXiv:1309.6820, 2013.
[6] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785-794. ACM, 2016.
[7] Jie Cheng, David Bell, and Weiru Liu. Learning Bayesian networks from data: An efficient approach based on information theory. On World Wide Web at http://www.cs.ualberta.ca/~jcheng/bnpc.htm, 1998.
[8] Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 75(3):297-325, 2009.
[9] Luis M De Campos and Juan F Huete. A new approach for learning belief networks using independence criteria. International Journal of Approximate Reasoning, 24(1):11-37, 2000.
[10] Gary Doran, Krikamol Muandet, Kun Zhang, and Bernhard Schölkopf. A permutation-based kernel conditional independence test. In UAI, pages 132-141, 2014.
[11] Kenji Fukumizu, Francis R Bach, and Michael I Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces.
Journal of Machine Learning Research, 5(Jan):73-99, 2004.
[12] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Breaking the bandwidth barrier: Geometrical adaptive entropy estimation. In Advances in Neural Information Processing Systems, pages 2460-2468, 2016.
[13] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Demystifying fixed k-nearest neighbor information estimators. arXiv preprint arXiv:1604.03006, 2016.
[14] Markus Kalisch and Peter Bühlmann. Estimating high-dimensional directed acyclic graphs with the PC-algorithm. Journal of Machine Learning Research, 8(Mar):613-636, 2007.
[15] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[16] Daphne Koller and Mehran Sahami. Toward optimal feature selection. Technical report, Stanford InfoLab, 1996.
[17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[18] John Langford and Bianca Zadrozny. Reducing T-step reinforcement learning to classification. In Proc. of the Machine Learning Reductions Workshop, 2003.
[19] David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. arXiv preprint arXiv:1610.06545, 2016.
[20] Colin McDiarmid. On the method of bounded differences. Surveys in Combinatorics, 141(1):148-188, 1989.
[21] Mehryar Mohri and Afshin Rostamizadeh. Rademacher complexity bounds for non-i.i.d. processes. In Advances in Neural Information Processing Systems, pages 1097-1104, 2009.
[22] Joris Mooij and Tom Heskes. Cyclic causal discovery from continuous equilibrium data. arXiv preprint arXiv:1309.6849, 2013.
[23] Judea Pearl. Causality. Cambridge University Press, 2009.
[24] V Ramasubramanian and Kuldip K Paliwal. Fast k-dimensional tree algorithms for nearest neighbor search with application to vector quantization encoding. IEEE Transactions on Signal Processing, 40(3):518-531, 1992.
[25] Bero Roos. On the rate of multivariate Poisson convergence. Journal of Multivariate Analysis, 69(1):120-134, 1999.
[26] Karen Sachs, Omar Perez, Dana Pe'er, Douglas A Lauffenburger, and Garry P Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523-529, 2005.
[27] Peter Spirtes, Clark N Glymour, and Richard Scheines. Causation, Prediction, and Search. MIT Press, 2000.
[28] Eric V Strobl, Kun Zhang, and Shyam Visweswaran. Approximate kernel-based conditional independence tests for fast non-parametric causal discovery. arXiv preprint arXiv:1702.03877, 2017.
[29] Ioannis Tsamardinos, Laura E Brown, and Constantin F Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31-78, 2006.
[30] Vladimir N Vapnik and A Ya Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. In Measures of Complexity, pages 11-30. Springer, 2015.
[31] Eric P Xing, Michael I Jordan, Richard M Karp, et al. Feature selection for high-dimensional genomic microarray data. In ICML, volume 1, pages 601-608. Citeseer, 2001.
[32] Kun Zhang, Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Kernel-based conditional independence test and application in causal discovery. arXiv preprint arXiv:1202.3775, 2012.
Deep Voice 2: Multi-Speaker Neural Text-to-Speech

Sercan Ö. Arık* [email protected]
Andrew Gibiansky* [email protected]
Wei Ping* [email protected]
Gregory Diamos* [email protected]
John Miller* [email protected]
Jonathan Raiman* [email protected]
Kainan Peng* [email protected]
Yanqi Zhou* [email protected]

Baidu Silicon Valley Artificial Intelligence Lab
1195 Bordeaux Dr., Sunnyvale, CA 94089

Abstract

We introduce a technique for augmenting neural text-to-speech (TTS) with low-dimensional trainable speaker embeddings to generate different voices from a single model. As a starting point, we show improvements over the two state-of-the-art approaches for single-speaker neural TTS: Deep Voice 1 and Tacotron. We introduce Deep Voice 2, which is based on a similar pipeline to Deep Voice 1, but constructed with higher-performance building blocks, and demonstrate a significant audio quality improvement over Deep Voice 1. We improve Tacotron by introducing a post-processing neural vocoder, and demonstrate a significant audio quality improvement. We then demonstrate our technique for multi-speaker speech synthesis for both Deep Voice 2 and Tacotron on two multi-speaker TTS datasets. We show that a single neural TTS system can learn hundreds of unique voices from less than half an hour of data per speaker, while achieving high audio quality synthesis and preserving the speaker identities almost perfectly.

1 Introduction

Artificial speech synthesis, commonly known as text-to-speech (TTS), has a variety of applications in technology interfaces, accessibility, media, and entertainment. Most TTS systems are built with a single speaker voice, and multiple speaker voices are provided by having distinct speech databases or model parameters. As a result, developing a TTS system with support for multiple voices requires much more data and development effort than a system which only supports a single voice.

In this work, we demonstrate that we can build all-neural multi-speaker TTS systems which share the vast majority of parameters between different speakers. We show that not only can a single model generate speech from multiple different voices, but also that significantly less data is required per speaker than when training single-speaker systems. Concretely, we make the following contributions:

1. We present Deep Voice 2, an improved architecture based on Deep Voice 1 (Arik et al., 2017).
2. We introduce a WaveNet-based (Oord et al., 2016) spectrogram-to-audio neural vocoder, and use it with Tacotron (Wang et al., 2017) as a replacement for Griffin-Lim audio generation.
3. Using these two single-speaker models as a baseline, we demonstrate multi-speaker neural speech synthesis by introducing trainable speaker embeddings into Deep Voice 2 and Tacotron.

* Listed alphabetically.

We organize the rest of this paper as follows. Section 2 discusses related work and what makes the contributions of this paper distinct from prior work. Section 3 presents Deep Voice 2 and highlights the differences from Deep Voice 1. Section 4 explains our speaker embedding technique for neural TTS models and shows multi-speaker variants of the Deep Voice 2 and Tacotron architectures.
Section 5.1 quantifies the improvement for single-speaker TTS through a mean opinion score (MOS) evaluation, and Section 5.2 presents the synthesized audio quality of multi-speaker Deep Voice 2 and Tacotron via both MOS evaluation and a multi-speaker discriminator accuracy metric. Section 6 concludes with a discussion of the results and potential future work.

2 Related Work

We discuss the related work relevant to each of our claims in Section 1 in order, starting from single-speaker neural speech synthesis and moving on to multi-speaker speech synthesis and metrics for generative model quality.

With regards to single-speaker speech synthesis, deep learning has been used for a variety of subcomponents, including duration prediction (Zen et al., 2016), fundamental frequency prediction (Ronanki et al., 2016), acoustic modeling (Zen and Sak, 2015), and more recently autoregressive sample-by-sample audio waveform generation (e.g., Oord et al., 2016; Mehri et al., 2016). Our contributions build upon recent work in entirely neural TTS systems, including Deep Voice 1 (Arik et al., 2017), Tacotron (Wang et al., 2017), and Char2Wav (Sotelo et al., 2017). While these works focus on building single-speaker TTS systems, our paper focuses on extending neural TTS systems to handle multiple speakers with less data per speaker.

Our work is not the first to attempt a multi-speaker TTS system. For instance, in traditional HMM-based TTS synthesis (e.g., Yamagishi et al., 2009), an average voice model is trained using multiple speakers' data, which is then adapted to different speakers. DNN-based systems (e.g., Yang et al., 2016) have also been used to build average voice models, with i-vectors representing speakers as additional inputs and separate output layers for each target speaker. Similarly, Fan et al. (2015) uses a shared hidden representation among different speakers with speaker-dependent output layers predicting vocoder parameters (e.g., line spectral pairs, aperiodicity parameters, etc.). For further context, Wu et al. (2015) empirically studies DNN-based multi-speaker modeling. More recently, speaker adaptation has been tackled with generative adversarial networks (GANs) (Hsu et al., 2017).

We instead use trainable speaker embeddings for multi-speaker TTS. The approach was investigated in speech recognition (Abdel-Hamid and Jiang, 2013), but is a novel technique in speech synthesis. Unlike prior work which depends on fixed embeddings (e.g., i-vectors), the speaker embeddings used in this work are trained jointly with the rest of the model from scratch, and thus can directly learn the features relevant to the speech synthesis task. In addition, this work does not rely on per-speaker output layers or average voice modeling, which leads to higher-quality synthesized samples and lower data requirements (as there are fewer unique parameters per speaker to learn).

In order to evaluate the distinctiveness of the generated voices in an automated way, we propose using the classification accuracy of a speaker discriminator. Similar metrics such as an "Inception score" have been used for quantitative quality evaluations of GANs for image synthesis (e.g., Salimans et al., 2016). Speaker classification has been studied with both traditional GMM-based methods (e.g., Reynolds et al., 2000) and more recently with deep learning approaches (e.g., Li et al., 2017).

3 Single-Speaker Deep Voice 2

In this section, we present Deep Voice 2, a neural TTS system based on Deep Voice 1 (Arik et al., 2017).
We keep the general structure of Deep Voice 1 (Arik et al., 2017), as depicted in Fig. 1 (the corresponding training pipeline is depicted in Appendix A). Our primary motivation for presenting an improved single-speaker model is to use it as the starting point for a high-quality multi-speaker model.

One major difference between Deep Voice 2 and Deep Voice 1 is the separation of the phoneme duration and frequency models. Deep Voice 1 has a single model to jointly predict phoneme duration and frequency profile (voicedness and time-dependent fundamental frequency, F0). In Deep Voice 2, the phoneme durations are predicted first and then are used as inputs to the frequency model.

[Figure 1: Inference system diagram: text is first converted to phonemes via a pronunciation dictionary; phoneme durations are predicted; the durations are upsampled and F0 is generated; finally, F0 and the phonemes are fed to the vocal model to synthesize speech.]

In the subsequent subsections, we present the models used in Deep Voice 2. All models are trained separately using the hyperparameters specified in Appendix B. We will provide a quantitative comparison of Deep Voice 1 and Deep Voice 2 in Section 5.1.

3.1 Segmentation model

Estimation of phoneme locations is treated as an unsupervised learning problem in Deep Voice 2, similar to Deep Voice 1. The segmentation model is a convolutional-recurrent architecture with a connectionist temporal classification (CTC) loss (Graves et al., 2006) applied to classify phoneme pairs, which are then used to extract the boundaries between them. The major architecture changes in Deep Voice 2 are the addition of batch normalization and residual connections in the convolutional layers. Specifically, Deep Voice 1's segmentation model computes the output of each layer as

    h^(l) = relu(W^(l) * h^(l-1) + b^(l)),    (1)

where h^(l) is the output of the l-th layer, W^(l) is the convolution filterbank, b^(l) is the bias vector, and * is the convolution operator. In contrast, Deep Voice 2's segmentation model layers instead compute

    h^(l) = relu(h^(l-1) + BN(W^(l) * h^(l-1))),    (2)

where BN is batch normalization (Ioffe and Szegedy, 2015). In addition, we find that the segmentation model often makes mistakes for boundaries between silence phonemes and other phonemes, which can significantly reduce segmentation accuracy on some datasets. We introduce a small post-processing step to correct these mistakes: whenever the segmentation model decodes a silence boundary, we adjust the location of the boundary with a silence detection heuristic (see Footnote 2 in Section 3.3).
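As a concrete illustration of Eq. (2), the following is a minimal sketch of one such convolutional layer in PyTorch; the channel count and kernel width are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class SegmentationConvLayer(nn.Module):
    # One convolutional layer of the segmentation model, Eq. (2):
    # a residual connection around a batch-normalized convolution.
    def __init__(self, channels=256, kernel_size=5):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, h):
        # h: (batch, channels, time). Deep Voice 1 computed
        # relu(conv(h) + bias) alone, as in Eq. (1); Deep Voice 2 adds
        # the residual path and batch normalization.
        return torch.relu(h + self.bn(self.conv(h)))
```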
3.2 Duration Model

In Deep Voice 2, instead of predicting a continuous-valued duration, we formulate duration prediction as a sequence labeling problem. We discretize the phoneme duration into log-scaled buckets, and assign each input phoneme to the bucket label corresponding to its duration. We model the sequence by a conditional random field (CRF) with pairwise potentials at the output layer (Lample et al., 2016). During inference, we decode discretized durations from the CRF using the Viterbi forward-backward algorithm. We find that quantizing the duration prediction and introducing the pairwise dependence implied by the CRF improves synthesis quality.

3.3 Frequency Model

After decoding from the duration model, the predicted phoneme durations are upsampled from per-phoneme input features to per-frame inputs for the frequency model (see Footnote 3). The Deep Voice 2 frequency model consists of multiple layers: first, bidirectional gated recurrent unit (GRU) layers (Cho et al., 2014) generate hidden states from the input features. From these hidden states, an affine projection followed by a sigmoid nonlinearity produces the probability that each frame is voiced. The hidden states are also used to make two separate normalized F0 predictions. The first prediction, f_GRU, is made with a single-layer bidirectional GRU followed by an affine projection. The second prediction, f_conv, is made by adding up the contributions of multiple convolutions with varying convolution widths and a single output channel. Finally, the hidden state is used with an affine projection and a sigmoid nonlinearity to predict a mixture ratio ω, which is used to weigh the two normalized frequency predictions and combine them into

    f = ω · f_GRU + (1 − ω) · f_conv.    (3)

The normalized prediction f is then converted to the true frequency F0 prediction via

    F0 = μ_F0 + σ_F0 · f,    (4)

where μ_F0 and σ_F0 are, respectively, the mean and standard deviation of F0 for the speaker the model is trained on. We find that predicting F0 with a mixture of convolutions and a recurrent layer performs better than predicting with either one individually. We attribute this to the hypothesis that including the wide convolutions reduces the burden for the recurrent layers to maintain state over a large number of input frames, while processing the entire context information efficiently.

[Footnote 2: We compute the smoothed normalized audio power as p[n] = (x[n]^2 / x_max^2) * g[n], where x[n] is the audio signal, g[n] is the impulse response of a Gaussian filter, x_max is the maximum value of x[n], and * is one-dimensional convolution. We assign the silence phoneme boundaries when p[n] exceeds a fixed threshold. The optimal parameter values for the Gaussian filter and the threshold depend on the dataset and audio sampling rate.]

[Footnote 3: Each frame is ensured to be 10 milliseconds. For example, if a phoneme lasts 20 milliseconds, the input features corresponding to that phoneme will be repeated in 2 frames. If it lasts less than 10 milliseconds, it is extended to a single frame.]

3.4 Vocal Model

The Deep Voice 2 vocal model is based on a WaveNet architecture (Oord et al., 2016) with a two-layer bidirectional QRNN (Bradbury et al., 2017) conditioning network, similar to Deep Voice 1. However, we remove the 1 × 1 convolution between the gated tanh nonlinearity and the residual connection. In addition, we use the same conditioner bias for every layer of the WaveNet, instead of generating a separate bias for every layer as was done in Deep Voice 1. We find that these changes reduce model size by a factor of ~7 and speed up inference by ~25%, while yielding no perceptual change in quality; however, we do not focus on demonstrating these claims in this paper.
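Before moving to multi-speaker models, the frequency-model output of Eqs. (3)-(4) can be summarized in a short sketch; the tensor shapes are assumptions for illustration:

```python
import torch

def predict_f0(f_gru, f_conv, omega_logits, mu_f0, sigma_f0):
    # f_gru, f_conv, omega_logits: tensors of shape (batch, frames);
    # mu_f0, sigma_f0: per-speaker F0 mean and standard deviation
    # computed from the training data.
    omega = torch.sigmoid(omega_logits)          # mixture ratio in (0, 1)
    f = omega * f_gru + (1.0 - omega) * f_conv   # Eq. (3)
    return mu_f0 + sigma_f0 * f                  # Eq. (4): back to Hz
```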
4 Multi-Speaker Models with Trainable Speaker Embeddings

In order to synthesize speech from multiple speakers, we augment each of our models with a single low-dimensional speaker embedding vector per speaker. Unlike previous work, our approach does not rely on per-speaker weight matrices or layers. Speaker-dependent parameters are stored in a very low-dimensional vector, and thus there is near-complete weight sharing between speakers. We use speaker embeddings to produce recurrent neural network (RNN) initial states, nonlinearity biases, and multiplicative gating factors, used throughout the networks.

Speaker embeddings are initialized randomly with a uniform distribution over [−0.1, 0.1] and trained jointly via backpropagation; each model has its own set of speaker embeddings.

To encourage each speaker's unique voice signature to influence the model, we incorporate the speaker embeddings into multiple portions of the model. Empirically, we find that simply providing the speaker embeddings to the input layers does not work as well for any of the presented models besides the vocal model, possibly due to the high degree of residual connections present in the WaveNet and due to the difficulty of learning high-quality speaker embeddings. We observed that several patterns tend to yield high performance:

- Site-Specific Speaker Embeddings: For every use site in the model architecture, transform the shared speaker embedding to the appropriate dimension and form through an affine projection and a nonlinearity.
- Recurrent Initialization: Initialize recurrent layer hidden states with site-specific speaker embeddings.
- Input Augmentation: Concatenate a site-specific speaker embedding to the input at every timestep of a recurrent layer.
- Feature Gating: Multiply layer activations elementwise with a site-specific speaker embedding to render adaptable information flow. (We hypothesize that feature gating lets the model learn the union of all necessary features while allowing speaker embeddings to determine what features are used for each speaker and how much influence they will have on the activations.)

[Figure 2: Architecture for the multi-speaker (a) segmentation, (b) duration, and (c) frequency model.]

Next, we describe how speaker embeddings are used in each architecture.

4.1 Multi-Speaker Deep Voice 2

The Deep Voice 2 models have separate speaker embeddings for each model. Yet, they can be viewed as chunks of a larger speaker embedding, which are trained independently.
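The first and last of the patterns above can be made concrete with a short sketch; the softsign nonlinearity matches the one used in Eq. (6) below, while the dimensions are illustrative assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class SiteSpecificEmbedding(nn.Module):
    # Projects the shared speaker embedding to a site-specific vector
    # through an affine transform and a nonlinearity.
    def __init__(self, speaker_dim=16, site_dim=256):
        super().__init__()
        self.proj = nn.Linear(speaker_dim, site_dim)

    def forward(self, speaker_embedding):          # (batch, speaker_dim)
        return F.softsign(self.proj(speaker_embedding))

def feature_gate(activations, g):
    # Feature gating: elementwise multiply activations by the
    # site-specific embedding, broadcast over the time axis.
    # activations: (batch, channels, time); g: (batch, channels)
    return activations * g.unsqueeze(-1)
```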
4.1.1 Segmentation Model

In the multi-speaker segmentation model, we use feature gating in the residual connections of the convolution layers. Instead of Eq. (2), we multiply the batch-normalized activations by a site-specific speaker embedding:

    h^(l) = relu(h^(l-1) + BN(W^(l) * h^(l-1)) · g_s),    (5)

where g_s is a site-specific speaker embedding. The same site-specific embedding is shared for all the convolutional layers. In addition, we initialize each of the recurrent layers with a second site-specific embedding. Similarly, each layer shares the same site-specific embedding, rather than having a separate embedding per layer.

4.1.2 Duration Model

The multi-speaker duration model uses speaker-dependent recurrent initialization and input augmentation. A site-specific embedding is used to initialize RNN hidden states, and another site-specific embedding is provided as input to the first RNN layer by concatenating it to the feature vectors.

4.1.3 Frequency Model

The multi-speaker frequency model uses recurrent initialization, which initializes the recurrent layers (except for the recurrent output layer) with a single site-specific speaker embedding. As described in Section 3.3, the recurrent and convolutional output layers in the single-speaker frequency model predict a normalized frequency, which is then converted into the true F0 by a fixed linear transformation. The linear transformation depends on the mean and standard deviation of observed F0 for the speaker. These values vary greatly between speakers: male speakers, for instance, tend to have a much lower mean F0. To better adapt to these variations, we make the mean and standard deviation trainable model parameters and multiply them by scaling terms which depend on the speaker embeddings. Specifically, instead of Eq. (4), we compute the F0 prediction as

    F0 = μ_F0 · (1 + softsign(V_μ^T g_f)) + σ_F0 · (1 + softsign(V_σ^T g_f)) · f,    (6)

where g_f is a site-specific speaker embedding, μ_F0 and σ_F0 are trainable scalar parameters initialized to the F0 mean and standard deviation on the dataset, and V_μ and V_σ are trainable parameter vectors (see the sketch after Section 4.1.4).

[Figure 3: Tacotron with speaker conditioning in the encoder CBHG module and decoder, with two ways to convert spectrograms to audio: Griffin-Lim or our speaker-conditioned vocal model.]

4.1.4 Vocal Model

The multi-speaker vocal model uses only input augmentation, with the site-specific speaker embedding concatenated onto each input frame of the conditioner. This differs from the global conditioning suggested in Oord et al. (2016) and allows the speaker embedding to influence the local conditioning network as well.

Without speaker embeddings, the vocal model is still able to generate somewhat distinct-sounding voices because of the distinctive features provided by the frequency and duration models. Yet, having speaker embeddings in the vocal model increases the audio quality. We indeed observe that the embeddings converge to a meaningful latent space.
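A minimal sketch of the speaker-adapted F0 computation in Eq. (6); the tensor shapes are assumptions for illustration:

```python
import torch.nn.functional as F

def speaker_adapted_f0(f, g_f, mu0, sigma0, v_mu, v_sigma):
    # f: (batch, frames) normalized prediction from Eq. (3);
    # g_f: (batch, d) site-specific speaker embedding;
    # mu0, sigma0: trainable scalars initialized to the dataset F0 mean/std;
    # v_mu, v_sigma: trainable vectors of size d.
    mu = mu0 * (1.0 + F.softsign(g_f @ v_mu))           # per-speaker mean
    sigma = sigma0 * (1.0 + F.softsign(g_f @ v_sigma))  # per-speaker std
    return mu.unsqueeze(-1) + sigma.unsqueeze(-1) * f   # Eq. (6)
```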
4.2 Multi-Speaker Tacotron

In addition to extending Deep Voice 2 with speaker embeddings, we also extend Tacotron (Wang et al., 2017), a sequence-to-sequence character-to-waveform model. When training multi-speaker Tacotron variants, we find that model performance is highly dependent on model hyperparameters, and that some models often fail to learn attention mechanisms for a small subset of speakers. We also find that if the speech in each audio clip does not start at the same timestep, the models are much less likely to converge to a meaningful attention curve and recognizable speech; thus, we trim all initial and final silence in each audio clip. Due to the sensitivity of the model to hyperparameters and data preprocessing, we believe that additional tuning may be necessary to obtain maximal quality. Thus, our work focuses on demonstrating that Tacotron, like Deep Voice 2, is capable of handling multiple speakers through speaker embeddings, rather than comparing the quality of the two architectures.

4.2.1 Character-to-Spectrogram Model

The Tacotron character-to-spectrogram architecture consists of a convolution-bank-highway-GRU (CBHG) encoder, an attentional decoder, and a CBHG post-processing network. Due to the complexity of the architecture, we leave out a complete description and instead focus on our modifications.

We find that incorporating speaker embeddings into the CBHG post-processing network degrades output quality, whereas incorporating speaker embeddings into the character encoder is necessary. Without a speaker-dependent CBHG encoder, the model is incapable of learning its attention mechanism and cannot generate meaningful output (see Appendix D.2 for speaker-dependent attention visualizations). In order to condition the encoder on the speaker, we use one site-specific embedding as an extra input to each highway layer at each timestep and initialize the CBHG RNN state with a second site-specific embedding.

We also find that augmenting the decoder with speaker embeddings is helpful. We use one site-specific embedding as an extra input to the decoder pre-net, one extra site-specific embedding as the initial attention context vector for the attentional RNN, one site-specific embedding as the initial decoder GRU hidden state, and one site-specific embedding as a bias to the tanh in the content-based attention mechanism.

Model                    Samp. Freq.   MOS
Deep Voice 1             16 KHz        2.05 ± 0.24
Deep Voice 2             16 KHz        2.96 ± 0.38
Tacotron (Griffin-Lim)   24 KHz        2.57 ± 0.28
Tacotron (WaveNet)       24 KHz        4.17 ± 0.18

Table 1: Mean Opinion Score (MOS) evaluations with 95% confidence intervals of Deep Voice 1, Deep Voice 2, and Tacotron. Using the crowdMOS toolkit, batches of samples from these models were presented to raters on Mechanical Turk. Since batches contained samples from all models, the experiment naturally induces a comparison between the models.

4.2.2 Spectrogram-to-Waveform Model

The original Tacotron implementation in Wang et al. (2017) uses the Griffin-Lim algorithm to convert spectrograms to time-domain audio waveforms by iteratively estimating the unknown phases (see Footnote 6). We observe that minor noise in the input spectrogram causes noticeable estimation errors in the Griffin-Lim algorithm and the generated audio quality is degraded. To produce higher quality audio using Tacotron, instead of using Griffin-Lim, we train a WaveNet-based neural vocoder to convert from linear spectrograms to audio waveforms.
The model used is equivalent to the Deep Voice 2 vocal model, but takes linear-scaled log-magnitude spectrograms instead of phoneme identity and F0 as input. The combined Tacotron-WaveNet model is shown in Fig. 3. As we will show in Section 5.1, the WaveNet-based neural vocoder indeed significantly improves single-speaker Tacotron as well.

5 Results

In this section, we present the results on both single-speaker and multi-speaker speech synthesis using the described architectures. All model hyperparameters are presented in Appendix B.

5.1 Single-Speaker Speech Synthesis

We train Deep Voice 1, Deep Voice 2, and Tacotron on an internal English speech database containing approximately 20 hours of single-speaker data. The intermediate evaluations of models in Deep Voice 1 and Deep Voice 2 can be found in Table 3 within Appendix A. We run an MOS evaluation using the crowdMOS framework (Ribeiro et al., 2011) to compare the quality of samples (Table 1). The results show conclusively that the architecture improvements in Deep Voice 2 yield significant gains in quality over Deep Voice 1. They also demonstrate that converting Tacotron-generated spectrograms to audio using WaveNet is preferable to using the iterative Griffin-Lim algorithm.

5.2 Multi-Speaker Speech Synthesis

We train all the aforementioned models on the VCTK dataset with 44 hours of speech, which contains 108 speakers with approximately 400 utterances each. We also train all models on an internal dataset of audiobooks, which contains 477 speakers with 30 minutes of audio each (for a total of ~238 hours). The consistent sample quality observed from our models indicates that our architectures can easily learn hundreds of distinct voices with a variety of different accents and cadences. We also observe that the learned embeddings lie in a meaningful latent space (see Fig. 4 as an example and Appendix D for more details).

In order to evaluate the quality of the synthesized audio, we run MOS evaluations using the crowdMOS framework, and present the results in Table 2. We purposefully include ground truth samples in the set being evaluated, because the accents in the datasets are likely to be unfamiliar to our North American crowdsourced raters and will thus be rated poorly due to the accent rather than due to the model quality. By including ground truth samples, we are able to compare the MOS of the models with the ground truth MOS and thus evaluate the model quality rather than the data quality; however, the resulting MOS may be lower, due to the implicit comparison with the ground truth samples. Overall, we observe that the Deep Voice 2 model can approach an MOS value that is close to the ground truth, when the low sampling rate and companding/expanding are taken into account.

[Footnote 6: Estimation of the unknown phases is done by repeatedly converting between frequency- and time-domain representations of the signal using the short-time Fourier transform and its inverse, substituting the magnitude of each frequency component with the predicted magnitude at each step.]
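For reference, the phase-estimation loop described in Footnote 6 can be sketched in a few lines; the FFT size, hop length, and iteration count are illustrative assumptions:

```python
import numpy as np
import librosa

def griffin_lim(magnitude, n_fft=1024, hop=256, n_iters=50):
    # magnitude: linear-scale magnitude spectrogram, (1 + n_fft//2, frames).
    # Start from random phases and alternate between domains, keeping the
    # predicted magnitudes and the current phase estimate each round.
    angles = np.exp(2j * np.pi * np.random.rand(*magnitude.shape))
    for _ in range(n_iters):
        audio = librosa.istft(magnitude * angles, hop_length=hop)
        rebuilt = librosa.stft(audio, n_fft=n_fft, hop_length=hop)
        angles = np.exp(1j * np.angle(rebuilt))
    return librosa.istft(magnitude * angles, hop_length=hop)
```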
Dataset      Multi-Speaker Model                Samp. Freq.   MOS           Acc.
VCTK         Deep Voice 2 (20-layer WaveNet)    16 KHz        2.87 ± 0.13   99.9%
VCTK         Deep Voice 2 (40-layer WaveNet)    16 KHz        3.21 ± 0.13   100%
VCTK         Deep Voice 2 (60-layer WaveNet)    16 KHz        3.42 ± 0.12   99.7%
VCTK         Deep Voice 2 (80-layer WaveNet)    16 KHz        3.53 ± 0.12   99.9%
VCTK         Tacotron (Griffin-Lim)             24 KHz        1.68 ± 0.12   99.4%
VCTK         Tacotron (20-layer WaveNet)        24 KHz        2.51 ± 0.13   60.9%
VCTK         Ground Truth Data                  48 KHz        4.65 ± 0.06   99.7%
Audiobooks   Deep Voice 2 (80-layer WaveNet)    16 KHz        2.97 ± 0.17   97.4%
Audiobooks   Tacotron (Griffin-Lim)             24 KHz        1.73 ± 0.22   93.9%
Audiobooks   Tacotron (20-layer WaveNet)        24 KHz        2.11 ± 0.20   66.5%
Audiobooks   Ground Truth Data                  44.1 KHz      4.63 ± 0.04   98.8%

Table 2: MOS and classification accuracy for all multi-speaker models. To obtain MOS, we use the crowdMOS toolkit as detailed in Table 1. We also present classification accuracies of the speaker discriminative models (see Appendix E for details) on the samples, showing that the synthesized voices are as distinguishable as ground truth audio.

[Figure 4: Principal components of the learned speaker embeddings for the (a) 80-layer vocal model and (b) character-to-spectrogram model for the VCTK dataset. See Appendix D.3 for details.]

A multi-speaker TTS system with high sample quality but indistinguishable voices would result in high MOS, but fail to meet the desired objective of reproducing the input voices accurately. To show that our models not only generate high quality samples, but also generate distinguishable voices, we also measure the classification accuracy of a speaker discriminative model on our generated samples. The speaker discriminative model is a convolutional network trained to classify utterances to their speakers, trained on the same dataset as the TTS systems themselves. If the voices were indistinguishable (or the audio quality was low), the classification accuracy would be much lower for synthesized samples than it is for the ground truth samples. As we demonstrate in Table 2, the classification accuracy shows that samples generated from our models are as distinguishable as the ground truth samples (see Appendix E for more details). The classification accuracy is only significantly lower for Tacotron with WaveNet, and we suspect that generation errors in the spectrogram are exacerbated by the WaveNet, as it is trained with ground truth spectrograms.

6 Conclusion

In this work, we explore how entirely-neural speech synthesis pipelines may be extended to multi-speaker text-to-speech via low-dimensional trainable speaker embeddings. We start by presenting Deep Voice 2, an improved single-speaker model. Next, we demonstrate the applicability of our technique by training both multi-speaker Deep Voice 2 and multi-speaker Tacotron models, and evaluate their quality through MOS. In conclusion, we use our speaker embedding technique to create high quality text-to-speech systems and conclusively show that neural speech synthesis models can learn effectively from small amounts of data spread among hundreds of different speakers.

The results presented in this work suggest many directions for future research. Future work may test the limits of this technique and explore how many speakers these models can generalize to, how little data is truly required per speaker for high quality synthesis, whether new speakers can be added to a system by fixing model parameters and solely training new speaker embeddings, and whether the speaker embeddings can be used as a meaningful vector space, as is possible with word embeddings.

References

O. Abdel-Hamid and H. Jiang.
Fast speaker adaptation of hybrid NN/HMM model for speech recognition based on discriminative learning of speaker code. In ICASSP, 2013.
S. O. Arik, M. Chrzanowski, A. Coates, G. Diamos, A. Gibiansky, Y. Kang, X. Li, J. Miller, J. Raiman, S. Sengupta, and M. Shoeybi. Deep Voice: Real-time neural text-to-speech. In ICML, 2017.
J. Bradbury, S. Merity, C. Xiong, and R. Socher. Quasi-recurrent neural networks. In ICLR, 2017.
K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.
Y. Fan, Y. Qian, F. K. Soong, and L. He. Multi-speaker modeling and speaker adaptation for DNN-based TTS synthesis. In IEEE ICASSP, 2015.
A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In ICML, 2006.
C.-C. Hsu, H.-T. Hwang, Y.-C. Wu, Y. Tsao, and H.-M. Wang. Voice conversion from unaligned corpora using variational autoencoding Wasserstein generative adversarial networks. arXiv:1704.00849, 2017.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
G. Lample, M. Ballesteros, K. Kawakami, S. Subramanian, and C. Dyer. Neural architectures for named entity recognition. In Proc. NAACL-HLT, 2016.
C. Li, X. Ma, B. Jiang, X. Li, X. Zhang, X. Liu, Y. Cao, A. Kannan, and Z. Zhu. Deep Speaker: an end-to-end neural speaker embedding system. arXiv:1705.02304, 2017.
S. Mehri, K. Kumar, I. Gulrajani, R. Kumar, S. Jain, J. Sotelo, A. Courville, and Y. Bengio. SampleRNN: An unconditional end-to-end neural audio generation model. arXiv:1612.07837, 2016.
A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv:1609.03499, 2016.
D. A. Reynolds, T. F. Quatieri, and R. B. Dunn. Speaker verification using adapted Gaussian mixture models. Digital Signal Processing, 10(1-3):19-41, 2000.
F. Ribeiro, D. Florêncio, C. Zhang, and M. Seltzer. CrowdMOS: An approach for crowdsourcing mean opinion score studies. In IEEE ICASSP, 2011.
S. Ronanki, O. Watts, S. King, and G. E. Henter. Median-based generation of synthetic speech durations using a non-parametric approach. arXiv:1608.06134, 2016.
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
J. Sotelo, S. Mehri, K. Kumar, J. F. Santos, K. Kastner, A. Courville, and Y. Bengio. Char2Wav: End-to-end speech synthesis. In ICLR 2017 workshop submission, 2017.
Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio, et al. Tacotron: Towards end-to-end speech synthesis. In Interspeech, 2017.
Z. Wu, P. Swietojanski, C. Veaux, S. Renals, and S. King. A study of speaker adaptation for DNN-based speech synthesis. In Interspeech, 2015.
J. Yamagishi, T. Nose, H. Zen, Z.-H. Ling, T. Toda, K. Tokuda, S. King, and S. Renals. Robust speaker-adaptive HMM-based text-to-speech synthesis. IEEE Transactions on Audio, Speech, and Language Processing, 2009.
S. Yang, Z. Wu, and L. Xie. On the training of DNN-based average voice model for speech synthesis.
In Signal and Information Processing Association Annual Summit and Conference (APSIPA), Asia-Pacific, 2016.
H. Zen and H. Sak. Unidirectional long short-term memory recurrent neural network with recurrent output layer for low-latency speech synthesis. In IEEE ICASSP, 2015.
H. Zen, Y. Agiomyrgiannakis, N. Egberts, F. Henderson, and P. Szczepaniak. Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthesizers for mobile devices. arXiv:1606.06061, 2016.
Spiral Waves in Integrate-and-Fire Neural Networks

John G. Milton
Department of Neurology
The University of Chicago
Chicago, IL 60637

Po Hsiang Chu
Department of Computer Science
DePaul University
Chicago, IL 60614

Jack D. Cowan
Department of Mathematics
The University of Chicago
Chicago, IL 60637

Abstract

The formation of propagating spiral waves is studied in a randomly connected neural network composed of integrate-and-fire neurons with recovery period and excitatory connections using computer simulations. Network activity is initiated by periodic stimulation at a single point. The results suggest that spiral waves can arise in such a network via a sub-critical Hopf bifurcation.

1 Introduction

In neural networks activity propagates through populations, or layers, of neurons. This propagation can be monitored as an evolution of spatial patterns of activity. Thirty years ago, computer simulations on the spread of activity through 2-D randomly connected networks demonstrated that a variety of complex spatio-temporal patterns can be generated, including target waves and spirals (Beurle, 1956, 1962; Farley and Clark, 1961; Farley, 1965). The networks studied by these investigators correspond to inhomogeneous excitable media in which the probability of interneuronal connectivity decreases exponentially with distance. Although travelling spiral waves can readily be formed in excitable media by the introduction of non-uniform initial conditions (e.g. Winfree, 1987), this approach is not suitable for the study and classification of the dynamics associated with the onset of spiral wave formation. Here we show that spiral waves can "spontaneously" arise from target waves in a neural network in which activity is initiated by periodic stimulation at a single point. In particular, the onset of spiral wave formation appears to occur via a sub-critical Hopf bifurcation.

2 Methods

Computer simulations were used to simulate the propagation of activity from a centrally placed source in a neural network containing 100 x 100 neurons arranged on a square lattice with excitatory interactions. At t = 0 all neurons were at rest except the source. There were free boundary conditions and all simulations were performed on a SUN SPARC 1+ computer. The network was constructed by assuming that the probability, λ, of interneuronal connectivity was an exponentially decreasing function of distance, i.e. λ = β exp(−α|r|), where α = 0.6 and β = 1.5 are constants and |r| is the Euclidean interneuronal distance (on average each neuron makes 24 connections and ~1.3 connections per neuron, i.e. multiple connections occur). Once the connectivity was determined, it remained fixed throughout the simulation. The dynamics of each neuron were represented by an integrate-and-fire model possessing a "leaky" membrane potential and an absolute (1 time step) and relative refractory or recovery period, as described previously (Beurle, 1962; Farley, 1965; Farley and Clark, 1961): the membrane and threshold decay constants were, respectively, k_m = 0.3 msec^-1 and k_0 = 0.03 msec^-1. The time step of the network was taken as 1 msec, and it was assumed that during this time a neuron transmits excitation to all other neurons connected to it.
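A minimal sketch of the connectivity construction just described follows; the distance cutoff is an implementation shortcut rather than part of the original model, since the connection probability is already negligible beyond it:

```python
import numpy as np

def build_connections(n=100, alpha=0.6, beta=1.5, window=8, seed=0):
    # Each ordered pair (i, j) is connected with probability
    # min(1, beta * exp(-alpha * |r|)), with |r| the Euclidean lattice
    # distance. Pairs farther apart than `window` are skipped; the
    # probability there is already below ~1%.
    rng = np.random.default_rng(seed)
    conn = {}
    for x in range(n):
        for y in range(n):
            targets = []
            for dx in range(-window, window + 1):
                for dy in range(-window, window + 1):
                    tx, ty = x + dx, y + dy
                    if (dx, dy) == (0, 0) or not (0 <= tx < n and 0 <= ty < n):
                        continue
                    p = min(1.0, beta * np.exp(-alpha * np.hypot(dx, dy)))
                    if rng.random() < p:
                        targets.append((tx, ty))
            conn[(x, y)] = targets
    return conn
```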
3 Results

We illustrate the dynamics of a particular network as a function of the magnitude of the excitatory interneuronal excitation, E, when all other parameters are fixed. When E < 0.2, no activity propagates from the central source. For 0.2 < E < 0.58, target waves regularly emanate from the centrally placed source (Figure 1a). For E ≥ 0.58, the activity patterns, once established, persisted even when the source was turned off. Complex spiral waves occurred when 0.58 < E < 0.63 (Figures 1b-1d). In these cases spiral meandering, spiral tip break-up and the formation of new spirals (some with multiple arms) occur continuously. Eventually the spirals tend to migrate out of the network. For E ≥ 0.63, only disorganized spatial patterns occurred without clearly distinguishable wave fronts, except initially (Figures 1e-f).

[Figure 1: Representative examples of the spatial pattern of neural activity as a function of E: (a) E = 0.45, (b-e) E = 0.58 and (f) E = 0.72. Color code: gray = quiescent, white = activated, black = relatively refractory. See text for details.]

[Figure 2: Plot of the fraction of neurons firing per unit time for different values of E: (a) 0.45, (b) 0.58, and (c) 0.72. At t = 0 all neurons except the central source are quiescent. At t = 500 (indicated by the arrow) the source is shut off. The region indicated by (*) corresponds to an epoch in which spiral tip breakup occurs.]

The temporal dynamics of the network can be examined by plotting the fraction F of neurons that fire as a function of time. As E is increased through target waves (Figure 2a) to spiral waves (Figure 2b) to disorganized patterns (Figure 2c), the fluctuations in F become less regular, the mean value increases and the amplitude decreases. On closer inspection it can be seen that during spiral wave propagation (Figure 2b) the time series for F undergoes amplitude modulation, as reported previously (Farley, 1965). The interval of low-amplitude, very irregular fluctuations in F (* in Figure 2b) corresponds to a period of spiral tip breakup (Figure 1c).

The appearance of spiral waves is typically preceded by 20-30 target waves. The formation of a spiral wave appears to occur in two steps. First there is an increase in the minimum value of F, which begins at t ≈ 420 and more abruptly occurs at t ≈ 460 (Figure 2b). The target waves first become asymmetric and then activity propagates from the source region without the more centrally located neurons first entering the quiescent state (Figure 3c). At this time the spatially coherent wave front of the target waves becomes replaced by a disordered, noncoherent distribution of active and refractory neurons. Secondly, the dispersed network activity begins to coalesce (Figures 3c and 3d) until at t ≈ 536 the first identifiable spiral occurs (Figure 3e).

[Figure 3: Spatial patterns of network activity at different generation times t: (a) 175, (b) 345, (c) 465, (d) 503, (e) 536, and (f) 749. At t = 0 all neurons except the central source are quiescent.]

It was found that only 4 out of 20 networks constructed with the same α, β produced spiral waves for E = 0.58 with periodic central point stimulation (simulations, in some cases, ran up to 50,000 generations). However, for all 20 networks, spiral waves could be obtained by the use of non-uniform initial conditions. Moreover, for those networks in which spiral waves occurred, the generation at which they formed differed.
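For concreteness, a per-timestep update consistent with the constants quoted in Methods can be sketched as follows; the exact reset and threshold rules are not fully specified above, so the decay forms and the threshold parameters here are assumptions:

```python
import numpy as np

def update(v, theta, refractory, spikes_in, k_m=0.3, k_0=0.03,
           theta_rest=1.0, theta_boost=5.0):
    # One 1-msec step for all neurons at once. v, theta: arrays of
    # membrane potentials and thresholds; refractory: boolean array
    # marking neurons that fired on the previous step (absolute
    # refractory period); spikes_in: total excitatory input, i.e. E
    # times the number of presynaptic neurons that fired last step.
    v = v * np.exp(-k_m) + spikes_in                          # leaky integration
    theta = theta_rest + (theta - theta_rest) * np.exp(-k_0)  # threshold decay
    fired = (v >= theta) & ~refractory
    v = np.where(fired, 0.0, v)                               # reset after a spike
    theta = np.where(fired, theta + theta_boost, theta)       # relative refractoriness
    return v, theta, fired
```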
These observations emphasize that small fluctuations in the local connectivity of neurons likely play a major role in governing the dynamics of the network.

4 Discussion
Self-maintaining spiral waves can arise in an inhomogeneous neural network with uniform initial conditions. Initially, well-formed target waves emanate periodically from the centrally placed source. Eventually, provided that E is in a critical range (Figures 1 & 3), the target waves may break up and be replaced by spiral waves. The necessary conditions for spiral wave formation are that: 1) the network be sufficiently tightly connected (Farley, 1965; Farley and Clark, 1961) and 2) the probability of interneuronal connectivity should decrease with distance (unpublished observations). As the network is made more tightly connected, the probability that self-maintained activity arises increases, provided that E is in the appropriate range (unpublished observations). These criteria are not sufficient to ensure that self-maintained activity, including spiral waves, will form in a given realization of the neural network. It has previously been shown that partially formed spiral-like waves can arise from periodic point stimulation in a model excitable medium in which the inhomogeneity arises from a dispersion of refractory times, k_θ^-1 (Kaplan et al., 1988).

Integrate-and-fire neural networks have two stable states: a state in which all neurons are at rest, and another associated with spiral waves. Target waves represent a transient response to perturbations away from the stable rest state. Since the neurons have memory (i.e. there is a relative refractory state with k_θ << k_m), the mean threshold and membrane potential of the network evolve with time. As a consequence the mean fraction of firing neurons slowly increases (Figure 2b). Our simulations suggest that at some point, provided that the connectivity of the network is suitable, the rest state suddenly becomes unstable and is replaced by a stable spiral wave. This exchange of stability is typical of a sub-critical Hopf bifurcation.

Although complex but organized spatio-temporal patterns of spreading activity can readily be generated by a randomly connected neural network, the significance of these phenomena, if any, is not presently clear. On the one hand, it is not difficult to imagine that these spatio-temporal dynamics could be related to phenomena ranging from the generation of the EEG, to the spread of epileptic and migraine-related activity, the transmission of visual images in the cortex, and the formation of patterns and learning by artificial neural networks. On the other hand, the occurrence of such phenomena in artificial neural nets could conceivably hinder efficient learning, for example by slowing convergence. Continued study of the properties of these networks will clearly be necessary before these issues can be resolved.

Acknowledgements
The authors acknowledge useful discussions with Drs. G. B. Ermentrout, L. Glass and D. Kaplan, and financial support from the National Institutes of Health (JM), the Brain Research Foundation (JDC, JM), and the Office of Naval Research (JDC).

References
R. L. Beurle. (1956) Properties of a mass of cells capable of regenerating pulses. Phil. Trans. Roy. Soc. Lond. 240 B, 55-94.
R. L. Beurle. (1962) Functional organization in random networks. In Principles of Self-Organization, H. von Foerster and G. W. Zopf, eds., pp 291-314. New York: Pergamon Press.
B. G. Farley. (1965) A neuronal network model and the "slow potentials" of electrophysiology. Comp. Biomed. Res. 2, 265-294.
B. G. Farley & W. A. Clark. (1961) Activity in networks of neuron-like elements. In Information Theory, C. Cherry, ed., pp 242-251. Washington: Butterworths.
D. T. Kaplan, J. M. Smith, B. E. H. Saxberg & R. J. Cohen. (1988) Nonlinear dynamics in cardiac conduction. Math. Biosci. 90, 19-48.
A. T. Winfree. (1987) When Time Breaks Down. Princeton, N.J.: Princeton University Press.
Variance-based Regularization with Convex Objectives

Hongseok Namkoong, Stanford University, hnamk@stanford.edu
John C. Duchi, Stanford University, jduchi@stanford.edu

Abstract
We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds off of techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.

1 Introduction
Let X be a sample space, P a distribution on X, and Θ a parameter space. For a loss function ℓ : Θ × X → R, consider the problem of finding θ ∈ Θ minimizing the risk

  R(θ) := E[ℓ(θ, X)] = ∫ ℓ(θ, x) dP(x)   (1)

given a sample {X_1, ..., X_n} drawn i.i.d. according to the distribution P. Under appropriate conditions on the loss ℓ, parameter space Θ, and random variables X, a number of researchers [1, 6, 12, 7, 3] have shown results of the form that, with high probability,

  R(θ) ≤ (1/n) Σ_{i=1}^n ℓ(θ, X_i) + C_1 √(Var(ℓ(θ, X))/n) + C_2/n   for all θ ∈ Θ,   (2)

where C_1 and C_2 depend on the parameters of problem (1) and the desired confidence guarantee. Such bounds justify empirical risk minimization, which chooses θ̂_n to minimize (1/n) Σ_{i=1}^n ℓ(θ, X_i) over θ ∈ Θ. Further, these bounds showcase a tradeoff between bias and variance, where we identify the bias (or approximation error) with the empirical risk (1/n) Σ_{i=1}^n ℓ(θ, X_i), while the variance arises from the second term in the bound. Considering the bias-variance tradeoff (2) in statistical learning, it is natural to instead choose θ to directly minimize a quantity trading between approximation and estimation error:

  (1/n) Σ_{i=1}^n ℓ(θ, X_i) + C √(Var_{P̂_n}(ℓ(θ, X))/n),   (3)

where Var_{P̂_n} denotes the empirical variance. Maurer and Pontil [16] consider this idea, giving guarantees on the convergence and good performance of such a procedure. Unfortunately, even when the loss ℓ is convex in θ, the formulation (3) is generally non-convex, which limits the applicability of procedures that minimize the variance-corrected empirical risk (3).

In this paper, we develop an approach based on Owen's empirical likelihood [19] and ideas from distributionally robust optimization [4, 5, 10] that, whenever the loss ℓ is convex, provides a tractable convex formulation closely approximating the penalized risk (3). We give a number of theoretical guarantees and empirical evidence for its performance.

To describe our approach, we require a few definitions. For a convex function φ : R_+ → R with φ(1) = 0, D_φ(P||Q) = ∫_X φ(dP/dQ) dQ is the φ-divergence between distributions P and Q defined on X. Throughout this paper, we use φ(t) = (1/2)(t − 1)^2, which gives the χ^2-divergence. Given ρ ≥ 0 and an i.i.d. sample X_1, . . .
, X_n, we define the ρ-neighborhood of the empirical distribution

  P_n := { distributions P s.t. D_φ(P||P̂_n) ≤ ρ/n },

where P̂_n denotes the empirical distribution of the sample {X_i}_{i=1}^n, and our choice φ(t) = (1/2)(t − 1)^2 means that every P ∈ P_n has support {X_i}_{i=1}^n. We then define the robustly regularized risk

  R_n(θ, P_n) := sup_{P ∈ P_n} E_P[ℓ(θ, X)] = sup_P { E_P[ℓ(θ, X)] : D_φ(P||P̂_n) ≤ ρ/n }.   (4)

As it is the supremum of a family of convex functions, the robust risk θ ↦ R_n(θ, P_n) is convex in θ regardless of the value of ρ ≥ 0, whenever the original loss ℓ(θ; X) is convex and Θ is a convex set. Namkoong and Duchi [18] propose a stochastic procedure for minimizing (4) almost as fast as stochastic gradient descent. See Appendix C for a detailed account of an alternative method.

We show that the robust risk (4) provides an excellent surrogate for the variance-regularized quantity (3) in a number of ways. Our first result (Thm. 1 in Sec. 2) is that for bounded loss functions,

  R_n(θ, P_n) = E_{P̂_n}[ℓ(θ, X)] + √((2ρ/n) Var_{P̂_n}(ℓ(θ, X))) + ε_n(θ),   (5)

where ε_n(θ) ≤ 0 and is O(1/n) uniformly in θ. We show that when ℓ(θ, X) has suitably large variance, we have ε_n = 0 with high probability. With the expansion (5) in hand, we can show a number of finite-sample convergence guarantees for the robustly regularized estimator

  θ̂_n^rob ∈ argmin_{θ ∈ Θ} sup_P { E_P[ℓ(θ, X)] : D_φ(P||P̂_n) ≤ ρ/n }.   (6)

Based on the expansion (5), solutions θ̂_n^rob of problem (6) enjoy automatic finite-sample optimality certificates: for ρ ≥ 0, with probability at least 1 − C_1 exp(−ρ) we have

  E[ℓ(θ̂_n^rob; X)] ≤ R_n(θ̂_n^rob; P_n) + C_2 ρ/n = inf_{θ ∈ Θ} R_n(θ, P_n) + C_2 ρ/n,

where C_1, C_2 are constants (which we specify) that depend on the loss ℓ and domain Θ. That is, with high probability the robust solution has risk no worse than the optimal finite-sample robust objective, up to an O(ρ/n) error term. To guarantee a desired level of risk performance with probability 1 − δ, we may specify the robustness penalty ρ = O(log(1/δ)). Secondly, we show that the procedure (6) allows us to automatically and near-optimally trade between approximation and estimation error (bias and variance), so that

  E[ℓ(θ̂_n^rob; X)] ≤ inf_{θ ∈ Θ} { E[ℓ(θ; X)] + 2 √((2ρ/n) Var(ℓ(θ; X))) } + C ρ/n

with high probability. When there are parameters θ with small risk R(θ) (relative to the optimal parameter θ*) and small variance Var(ℓ(θ, X)), this guarantees that the excess risk R(θ̂_n^rob) − R(θ*) is essentially of order O(ρ/n), where ρ governs our desired confidence level. We give an explicit example in Section 3.2 where our robustly regularized procedure (6) converges at rate O(log n/n), compared to O(1/√n) for empirical risk minimization.

Bounds that trade between risk and variance are known in a number of cases in the empirical risk minimization literature [15, 22, 1, 2, 6, 3, 7, 12], which is relevant when one wishes to achieve "fast rates" of convergence for statistical learning algorithms. In many cases, such tradeoffs require either conditions such as the Mammen-Tsybakov noise condition [15, 6] or localization results [3, 1, 17] made possible by curvature conditions that relate the risk and variance. The robust solutions (6) enjoy a variance-risk tradeoff that is different but holds essentially without conditions except compactness of Θ. We show in Section 3.3 that the robust solutions enjoy fast rates of convergence under typical curvature conditions on the risk R.
We complement our theoretical results in Section 4, where we conclude by providing two experiments comparing empirical risk minimization (ERM) strategies to robustly-regularized risk minimization (6). These results validate our theoretical predictions, showing that the robust solutions are a practical alternative to empirical risk minimization. In particular, we observe that the robust solutions outperform their ERM counterparts on "harder" instances with higher variance. In classification problems, for example, the robustly regularized estimators exhibit an interesting tradeoff, where they improve performance on rare classes (where ERM usually sacrifices performance to improve the common cases, increasing variance slightly) at minor cost in performance on common classes.

2 Variance Expansion
We begin our study of the robust regularized empirical risk R_n(θ, P_n) by showing that it is a good approximation to the empirical risk plus a variance term (5). Although the variance of the loss is in general non-convex, the robust formulation (6) is a convex optimization problem for variance regularization whenever the loss function is convex [cf. 11, Prop. 2.1.2]. To gain intuition for the variance expansion that follows, consider the following equivalent formulation of the robust objective sup_{P ∈ P_n} E_P[Z]:

  maximize_p Σ_{i=1}^n p_i z_i  subject to  p ∈ P_n = { p ∈ R_+^n : (1/2)‖np − 1‖_2^2 ≤ ρ, ⟨1, p⟩ = 1 },   (7)

where z ∈ R^n is a vector. For simplicity, let s_n^2 = (1/n)‖z‖_2^2 − (z̄)^2 = (1/n)‖z − z̄‖_2^2 denote the empirical "variance" of the vector z, where z̄ = (1/n)⟨1, z⟩ is the mean value of z. Then, by introducing the variable u = p − (1/n)1, the objective in problem (7) satisfies ⟨p, z⟩ = z̄ + ⟨u, z⟩ = z̄ + ⟨u, z − z̄⟩ because ⟨u, 1⟩ = 0. Thus problem (7) is equivalent to solving

  maximize_{u ∈ R^n} z̄ + ⟨u, z − z̄⟩  subject to  ‖u‖_2^2 ≤ 2ρ/n^2, ⟨1, u⟩ = 0, u ≥ −(1/n)1.

Notably, by the Cauchy-Schwarz inequality, we have ⟨u, z − z̄⟩ ≤ (√(2ρ)/n)‖z − z̄‖_2 = √(2ρ s_n^2/n), and equality is attained if and only if

  u_i = √(2ρ)(z_i − z̄) / (n‖z − z̄‖_2) = √(2ρ)(z_i − z̄) / (n √(n s_n^2)).

It is possible to choose such u_i while satisfying the constraint u_i ≥ −1/n if and only if

  min_{i ∈ [n]} √(2ρ)(z_i − z̄) / √(n s_n^2) ≥ −1.   (8)

Thus, if inequality (8) holds for the vector z, that is, if there is enough variance in z, we have

  sup_{p ∈ P_n} ⟨p, z⟩ = z̄ + √(2ρ s_n^2/n).

For losses ℓ(θ, X) with enough variance relative to ℓ(θ, X_i) − E_{P̂_n}[ℓ(θ, X_i)], that is, those satisfying inequality (8), we therefore have

  R_n(θ, P_n) = E_{P̂_n}[ℓ(θ, X)] + √((2ρ/n) Var_{P̂_n}(ℓ(θ, X))).

A slight elaboration of this argument, coupled with the application of a few concentration inequalities, yields the next theorem. Recall that φ(t) = (1/2)(t − 1)^2 in our definition of the φ-divergence.

Theorem 1. Let Z be a random variable taking values in [M_0, M_1], where M = M_1 − M_0, and fix ρ ≥ 0. Then

  ( √((2ρ/n) Var_{P̂_n}(Z)) − 2Mρ/n )_+ ≤ sup_P { E_P[Z] : D_φ(P||P̂_n) ≤ ρ/n } − E_{P̂_n}[Z] ≤ √((2ρ/n) Var_{P̂_n}(Z)).   (9)

If n ≥ max{24ρ, 16M^2/Var(Z)} and we set t_n = (1 − √(2M^2/(n Var(Z)))) √(Var(Z)/18), then

  sup_{P : D_φ(P||P̂_n) ≤ ρ/n} E_P[Z] = E_{P̂_n}[Z] + √((2ρ/n) Var_{P̂_n}(Z))   (10)

with probability at least 1 − exp(−n t_n^2/(2M^2)) − exp(−n Var(Z)/(36M^2)).

See Appendix A.1 for the proof. Inequality (9) and the exact expansion (10) show that, at least for bounded loss functions ℓ, the robustly regularized risk (4) is a natural (and convex) surrogate for empirical risk plus standard deviation of the loss, and the robust formulation approximates exact variance regularization with a convex penalty.
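The closed form above is straightforward to implement. The following is a minimal sketch of the inner supremum in problem (7): it applies the exact formula when condition (8) holds and falls back to a generic small-n solver otherwise. The function name and the SLSQP fallback are our illustrative choices, not the optimization method proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def robust_mean(z, rho):
    """sup of <p, z> over {p >= 0, sum(p) = 1, (1/2)||n*p - 1||^2 <= rho},
    i.e. the chi-squared ball D_phi(P || P_hat_n) <= rho/n from problem (7)."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    zbar = z.mean()
    s2 = np.mean((z - zbar) ** 2)            # empirical variance s_n^2
    if s2 == 0:
        return zbar
    # Condition (8): the Cauchy-Schwarz maximizer already satisfies p >= 0.
    if np.sqrt(2 * rho) * (z - zbar).min() / np.sqrt(n * s2) >= -1:
        return zbar + np.sqrt(2 * rho * s2 / n)
    # Fallback for small n: solve the constrained problem directly.
    cons = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "ineq", "fun": lambda p: rho - 0.5 * np.sum((n * p - 1.0) ** 2)},
    ]
    res = minimize(lambda p: -p @ z, np.ones(n) / n,
                   bounds=[(0.0, 1.0)] * n, constraints=cons, method="SLSQP")
    return -res.fun
```

Calling robust_mean on the vector of per-example losses (ℓ(θ, X_1), ..., ℓ(θ, X_n)) evaluates R_n(θ, P_n) at a fixed θ.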
We also provide a uniform variant of Theorem 1 based on the standard notion of the covering number, which we now define. Let V be a vector space with (semi)norm ‖·‖ on V, and let V ⊂ V. We say a collection v_1, ..., v_N ⊂ V is an ε-cover of V if for each v ∈ V there exists v_i such that ‖v − v_i‖ ≤ ε. The covering number of V with respect to ‖·‖ is then

  N(V, ε, ‖·‖) := inf { N ∈ N : there is an ε-cover of V with respect to ‖·‖ }.

Now, let F be a collection of functions f : X → R, and define the L_∞(X)-norm by ‖f − g‖_{L_∞(X)} := sup_{x ∈ X} |f(x) − g(x)|. Although we state our results abstractly, we typically take F := {ℓ(θ, ·) | θ ∈ Θ}. As a motivating example, we give the following standard bound on the covering number of Lipschitz losses [24].

Example 1: Let Θ ⊂ R^d and assume that ℓ : Θ × X → R is L-Lipschitz in θ with respect to the ℓ_2-norm for all x ∈ X, meaning that |ℓ(θ, x) − ℓ(θ', x)| ≤ L‖θ − θ'‖_2. Then, taking F = {ℓ(θ, ·) : θ ∈ Θ}, any ε-covering {θ_1, ..., θ_N} of Θ in ℓ_2-norm guarantees that min_i |ℓ(θ, x) − ℓ(θ_i, x)| ≤ Lε for all θ, x. That is,

  N(F, ε, ‖·‖_{L_∞(X)}) ≤ N(Θ, ε/L, ‖·‖_2) ≤ (1 + diam(Θ)L/ε)^d,

where diam(Θ) = sup_{θ,θ' ∈ Θ} ‖θ − θ'‖_2. Thus ℓ_2-covering numbers of Θ control L_∞-covering numbers of the family F.

With this definition, we provide a result showing that the variance expansion (5) holds uniformly for all functions with enough variance.

Theorem 2. Let F be a collection of bounded functions f : X → [M_0, M_1], where M = M_1 − M_0, and let ρ ≥ 0 be a constant. Define F^ρ := { f ∈ F : Var(f) ≥ 2ρM^2/n }, and let t_n^ρ be as t_n in Theorem 1 with Var(Z) replaced by 2ρM^2/n. If n ≥ 2, then with probability at least 1 − N(F, M/32, ‖·‖_{L_∞(X)}) exp(−n (t_n^ρ)^2/(2M^2)), we have, for all f ∈ F^ρ,

  sup_{P : D_φ(P||P̂_n) ≤ ρ/n} E_P[f(X)] = E_{P̂_n}[f(X)] + √((2ρ/n) Var_{P̂_n}(f(X))).   (11)

We prove the theorem in Section A.2. Theorem 2 shows that the variance expansion of Theorem 1 holds uniformly for all functions f with sufficient variance. See Duchi, Glynn, and Namkoong [10] for an asymptotic analogue of the equality (11) for heavier-tailed random variables.

3 Optimization by Minimizing the Robust Loss
Based on the variance expansions in the preceding section, we show that the robust solution (6) automatically trades between approximation and estimation error. In addition to the ‖·‖_{L_∞(X)}-covering numbers defined in the previous section, we use the tighter notion of empirical ℓ_∞-covering numbers. For x ∈ X^n, define F(x) = {(f(x_1), ..., f(x_n)) : f ∈ F} and the empirical ℓ_∞-covering numbers N_∞(F, ε, n) := sup_{x ∈ X^n} N(F(x), ε, ‖·‖_∞), which bound the number of ℓ_∞-balls of radius ε required to cover F(x). Note that we always have N_∞(F, ε, n) ≤ N(F, ε, ‖·‖_{L_∞(X)}). Typically, we consider the function class F := {ℓ(θ, ·) : θ ∈ Θ}, though we state our minimization results abstractly. Although the result below is stated in terms of covering numbers for ease of exposition, a variant holds depending on localized Rademacher averages [1] of the class F, which can yield tighter guarantees (we omit such results for lack of space). We prove the following theorem in Section A.3.

Theorem 3. Let F be a collection of functions f : X → [M_0, M_1] with M = M_1 − M_0. Define the empirical minimizer

  f̂ ∈ argmin_{f ∈ F} sup_P { E_P[f(X)] : D_φ(P||P̂_n) ≤ ρ/n }.

Then for ρ ≥ 9t, with probability at least 1 − 2(N(F, ε, ‖·‖_{L_∞(X)}) + 1)e^{−t},

  E[f̂(X)] ≤ sup_{P : D_φ(P||P̂_n) ≤ ρ/n} E_P[f̂(X)] + 7Mρ/n + (2 + √(2t/n)) ε   (12a)
         ≤ inf_{f ∈ F} { E[f] + 2√((2ρ/n) Var(f)) } + 11Mρ/n + (2 + √(2t/n)) ε.   (12b)

Further, for n ≥ t ≥ log 12 and ρ ≥ 9t, with probability at least 1 − 2(3N_∞(F, ε, 2n) + 1)e^{−t},

  E[f̂(X)] ≤ sup_{P : D_φ(P||P̂_n) ≤ ρ/n} E_P[f̂(X)] + 11Mρ/(3n) + (2 + 4√(2t/n)) ε   (13a)
         ≤ inf_{f ∈ F} { E[f] + 2√((2ρ/n) Var(f)) } + 19Mρ/(3n) + (2 + 4√(2t/n)) ε.   (13b)

Unlike analogous results for empirical risk minimization [6], Theorem 3 does not require the self-bounding type assumption Var(f) ≤ B E[f]. A consequence of this is that when v = Var(f*) is small, where f* ∈ argmin_{f ∈ F} E[f], we achieve O(1/n + √(v/n)) (fast) rates of convergence. This condition is different from the typical conditions required for empirical risk minimization to have fast rates of convergence, highlighting the possibilities of variance-based regularization. It will be interesting to understand appropriate low-noise conditions (e.g. the Mammen-Tsybakov noise condition [15, 6]) guaranteeing good performance. Additionally, the robust objective R_n(θ, P_n) is an empirical likelihood confidence bound on the population risk [10], and as empirical likelihood confidence bounds are self-normalizing [19], other fast-rate generalizations may exist.

3.1 Consequences of Theorem 3
We now turn to a number of corollaries that expand on Theorem 3 to investigate its consequences. Our first corollary shows that Theorem 3 applies to standard Vapnik-Chervonenkis (VC) classes. As VC dimension is preserved through composition, this result also extends to the procedure (6) in typical empirical risk minimization scenarios. See Section A.4 for its proof.

Corollary 3.1. In addition to the conditions of Theorem 3, let F have finite VC-dimension VC(F). Then for a numerical constant c < ∞, the bounds (13) hold with probability at least 1 − (c (16Mne/ε)^{VC(F)} + 2) e^{−t}.

Next, we focus more explicitly on the estimator θ̂_n^rob defined by minimizing the robust regularized risk (6). Let us assume that Θ ⊂ R^d, and that we have a typical linear modeling situation, where a loss h is applied to an inner product, that is, ℓ(θ, x) = h(θ^T x). In this case, by making the substitution of the class F = {ℓ(θ, ·) : θ ∈ Θ} in Corollary 3.1, we have VC(F) ≤ d, and we obtain the following corollary. Recall the definition (1) of the population risk R(θ) = E[ℓ(θ, X)], the uncertainty set P_n = {P : D_φ(P||P̂_n) ≤ ρ/n}, and that R_n(θ, P_n) = sup_{P ∈ P_n} E_P[ℓ(θ, X)]. By setting ε = M/n in Corollary 3.1, we obtain the following result.

Corollary 3.2. Let the conditions of the previous paragraph hold and assume that ℓ(θ, x) ∈ [0, M] for all θ ∈ Θ, x ∈ X. Then if n ≥ 9 log 12,

  R(θ̂_n^rob) ≤ R_n(θ̂_n^rob, P_n) + 11Mρ/(3n) + 4M/n ≤ inf_{θ ∈ Θ} { R(θ) + 2√((2ρ/n) Var(ℓ(θ; X))) } + 11Mρ/n

with probability at least 1 − 2 exp(c_1 d log n − c_2 ρ), where the c_i are universal constants with c_2 ≥ 1/9.

Unpacking Theorem 3 and Corollary 3.2 a bit, the first result (13a) provides a high-probability guarantee that the true expectation E[f̂] cannot be more than O(1/n) worse than its robustly-regularized empirical counterpart, that is, R(θ̂_n^rob) ≤ R_n(θ̂_n^rob, P_n) + O(ρ/n), which is (roughly) a consequence of uniform variants of Bernstein's inequality. The second result (13b) guarantees the convergence of the empirical minimizer to a parameter with risk at most O(1/n) larger than the best possible variance-corrected risk. In the case that the losses take values in [0, M], we have Var(ℓ(θ, X)) ≤ M R(θ), and thus for ε = 1/n in Theorem 3 we obtain

  R(θ̂_n^rob) ≤ R(θ*) + C √(Mρ R(θ*)/n) + C Mρ/n,
a type of result well-known and achieved by empirical risk minimization for bounded nonnegative losses [6, 26, 25]. In some scenarios, however, the variance may satisfy Var(ℓ(θ, X)) << M R(θ), yielding improvements.

To give an alternative variant of Corollary 3.2, let Θ ⊂ R^d and assume that for each x ∈ X, inf_{θ ∈ Θ} ℓ(θ, x) = 0 and that ℓ is L-Lipschitz in θ. If D := diam(Θ) = sup_{θ,θ' ∈ Θ} ‖θ − θ'‖ < ∞, then 0 ≤ ℓ(θ, x) ≤ L diam(Θ) =: M.

Corollary 3.3. Let the conditions of the preceding paragraph hold. Set t = ρ = log 2n + d log(2nDL) and ε = 1/n in Theorem 3, and assume that D ≲ n^k and L ≲ n^k for a numerical constant k. With probability at least 1 − 1/n,

  E[ℓ(θ̂_n^rob; X)] = R(θ̂_n^rob) ≤ inf_{θ ∈ Θ} { R(θ) + C √((d/n) Var(ℓ(θ, X)) log n) } + C dLD log n / n,

where C is a numerical constant.

3.2 Beating empirical risk minimization
We now provide an example in which the robustly-regularized estimator (6) exhibits a substantial improvement over empirical risk minimization. We expect the robust approach to offer performance benefits in situations in which the empirical risk minimizer is highly sensitive to noise, say, because the losses are piecewise linear and slight under- or over-estimates of slope may significantly degrade solution quality. With this in mind, we construct a toy 1-dimensional example, estimating the median of a distribution supported on X = {−1, 0, 1}, in which the robustly-regularized estimator has convergence rate log n/n, while empirical risk minimization is at best 1/√n. Define the loss ℓ(θ; x) = |θ − x| − |x|, and for δ ∈ (0, 1) let the distribution P be defined by

  P(X = 1) = (1 − δ)/2,  P(X = −1) = (1 − δ)/2,  P(X = 0) = δ.

Then for θ ∈ R, the risk of the loss is R(θ) = δ|θ| + ((1 − δ)/2)|θ − 1| + ((1 − δ)/2)|θ + 1| − (1 − δ). By symmetry, it is clear that θ* := argmin_θ R(θ) = 0, which satisfies R(θ*) = 0. (Note that ℓ(θ, x) = ℓ(θ, x) − ℓ(θ*, x).) Without loss of generality, we assume that Θ = [−1, 1]. Define the empirical risk minimizer and the robust solution

  θ̂^erm := argmin_{θ ∈ [−1,1]} E_{P̂_n}[ℓ(θ, X)] = argmin_{θ ∈ R} E_{P̂_n}[|θ − X|],  θ̂_n^rob ∈ argmin_{θ ∈ Θ} R_n(θ, P_n).

Intuitively, if too many of the observations satisfy X_i = 1 or too many satisfy X_i = −1, then θ̂^erm will be either 1 or −1; for small δ, such events become reasonably probable. On the other hand, we have ℓ(θ*; x) = 0 for all x ∈ X, so that Var(ℓ(θ*; X)) = 0, and variance regularization achieves the rate O(log n/n), as opposed to the empirical risk minimizer's O(1/√n). See Section A.6 for the proof.
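Before the formal statement, a quick numerical check of this construction is easy to run. The sample size, δ, the grid over θ, and the reuse of the robust_mean sketch from Section 2's code block are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 200, 0.05
rho = 3 * np.log(n)
X = rng.choice([-1.0, 0.0, 1.0], size=n,
               p=[(1 - delta) / 2, delta, (1 - delta) / 2])
thetas = np.linspace(-1, 1, 201)

def pop_risk(t):
    # Closed-form population risk R(theta) for this three-point distribution.
    return (delta * abs(t) + (1 - delta) / 2 * (abs(t - 1) + abs(t + 1))
            - (1 - delta))

losses = np.abs(thetas[:, None] - X[None, :]) - np.abs(X)[None, :]   # l(theta; X_i)
theta_erm = thetas[losses.mean(axis=1).argmin()]
theta_rob = thetas[np.argmin([robust_mean(z, rho) for z in losses])]  # sketch above
print(pop_risk(theta_erm), pop_risk(theta_rob))   # compare excess risks
```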
Our below result depends on a local notion of Rademacher complexity. For i.i.d. random signs "i 2 {?1}, the empirical Rademacher complexity of a function class F ? {f : X ! R} is ? n 1X Rn F := E sup "i f (Xi ) | X . f 2F n i=1 Although we state our results abstractly, we typically ptake F := {`(?, ?) | ? 2 ?}. For example, when F is a VC-class, we typically have E[Rn F] . VC(F)/n. Many other bounds on E[Rn F] are possible [2, 24, Ch. 2]. For A ? ? let Rn (A) denote the Rademacher complexity of the localized process {x 7! `(?; x) `(?S (?); x) : ? 2 A}. We then have the following result, whose proof we provide in Section A.7. Theorem 4. Let ? ? Rd be convex and let `(?; x) be convex and L-Lipshitz for all x 2 X . For constants > 0, > 1, and r > 0, assume that R satisfies R(?) Let t > 0. If 0 ? ? ? ? ? then P(Sb? ? S 2? ) 4 inf R(?) ?2? 1 2 8L2 ? n 1 dist(?, S) for all ? such that dist(?, S) ? r. (14) ?1 r (15) r satisfies ? 2( 1) ? ? 2 1 1 ? and 2 2? 2E[Rn (S )] + L e t , and inequality (15) holds for all ? & ( L 2 ? 2? (t+?+d) 2( ) 2/ n 2t , n 1) . Experiments We present two real classification experiments to carefully compare standard empirical risk minimization (ERM) to the variance-regularized approach we present. Empirically, we show that the ERM estimator ?berm performs poorly on rare classes with (relatively) more variance, where the robust solution achieves improved classification performance on rare instances. In all our experiments, this occurs with little expense over the more common instances. 4.1 Protease cleavage experiments For our first experiment, we compare our robust regularization procedure to other regularizers using the HIV-1 protease cleavage dataset from the UCI ML-repository [14]. In this binary classification task, one is given a string of amino acids (a protein) and a featurized representation of the string of dimension d = 50960, and the goal is to predict whether the HIV-1 virus will cleave the amino acid sequence in its central position. We have a sample of n = 6590 observations of this process, where the class labels are somewhat skewed: there are 1360 examples with label Y = +1 (HIV-1 cleaves) and 5230 examples with Y = 1 (does not cleave). 7 (a) test error (b) rare class (Yi = +1) (c) common class (Yi = 1) Figure 1: HIV-1 Protease Cleavage plots (2-standard error confidence bars). Comparison of misclassification test error rates among different regularizers. We use the logistic loss `(?; (x, y)) = log(1 + exp( y?> x)). We compare the performance of different constraint sets ? by taking ? = ? 2 Rd : a1 k?k1 + a2 k?k2 ? r , which is equivalent to elastic net regularization [27], while varying a1 , a2 , and r. We experiment with `1 -constraints (a1 = 1, a2 = 0) with r 2 {50, 100, 500, 1000, 5000}, `2 -constraints (a1 = 0, a2 = 1) with r 2 {5, 10, 50, 100, 500}, elastic net (a1 = 1, a2 = 10) with r 2 {102 , 2 ? 102 , 103 , 2 ? 103 , 104 }, our robust regularizer with ? 2 {102 , 103 , 104 , 5 ? 104 , 105 } and our robust regularizer coupled with the `1 -constraint (a1 = 1, a2 = 0) with r = 100. Though we use a convex surrogate (logistic loss), we measure performance of the classifiers using the zero-one (misclassification) loss 1{sign(?T x)y ? 0}. For validation, we perform 50 experiments, where in each experiment we randomly select 9/10 of the data to train the model, evaluating its performance on the held out 1/10 fraction (test). We plot results summarizing these experiments in Figure 1. 
The horizontal axis in each figure indexes our choice of regularization value (so ?Regularizer = 1? for the `1 -constrained problem corresponds to r = 50). The figures show that the robustly regularized risk provides a different type of protection against overfitting than standard regularization or constraint techniques do: while other regularizers underperform in heavily constrained settings, the robustly regularized estimator ?bnrob achieves low classification error for all values of ?. Notably, even when coupled with a fairly stringent `1 -constraint (r = 100), robust regularization has performance better than `1 except for large values r, especially on the rare label Y = +1. We investigate the effects of the robust regularizer with a slightly different perspective in Table 1, where we use ? = {? : k?k1 ? 100} for the constraint set for each experiment. We give error rates and logistic risk values for the different procedures, averaged over 50 independent runs. We note that all gaps are significant at the 3-standard error level. We see that the ERM solutions achieve good performance on the common class (Y = 1) but sacrifice performance on the uncommon class. As we increase ?, performance of the robust solution ?bnrob on the rarer label Y = +1 improves, while the error rate on the common class degrades a small (insignificant) amount. Table 1: HIV-1 Cleavage Error error (%) error (Y = +1) test train test train test 0.1706 5.52 6.39 17.32 18.79 0.1763 4.99 5.92 15.01 17.04 0.1944 4.5 5.92 13.35 16.33 0.3031 2.39 5.67 7.18 14.65 risk ? erm 100 1000 10000 4.2 train 0.1587 0.1623 0.1777 0.283 error (Y train 2.45 2.38 2.19 1.15 = 1) test 3.17 3.02 3.2 3.32 Document classification in the Reuters corpus For our second experiment, we consider a multi-label classification problem with a reasonably large dataset. The Reuters RCV1 Corpus [13] has 804,414 examples with d = 47,236 features, where feature j is an indicator variable for whether word j appears in a given document. The goal is to classify documents as a subset of the 4 categories where documents are labeled with a subset of those. As documents can belong to multiple categories, we fit binary classifiers on each of the four 8 (a) (b) (c) Figure 2: Reuters corpus experiment. (a) Logistic risks. (b) Recall. (c) Recall on Economics (rare). categories. Each category has different number of documents (Corporate: 381, 327, Economics: 119, 920, Government: 239, 267, Markets: 204, 820) In this experiment, we expect the robust solution to outperform ERM on the rarer category (Economics), as the robustification (6) naturally upweights rarer (harder) instances, which disproportionally affect variance?as in the previous experiment. For each category k 2 {1, 2, 3, 4}, we use the logistic loss `(?k ; (x, y)) = log(1 + exp( y?k> x)). For each binary classifier, we use the `1 constraint set ? = ? 2 Rd : k?k1 ? 1000 . To evaluate performance on this multi-label problem, we use precision (ratio of the number of correct positive labels to the number classified as positive) and recall (ratio of the number of correct positive labels to the number of actual positive labels). We partition the data into ten equally-sized sub-samples and perform ten validation experiments, where in each experiment we use one of the ten subsets for fitting the logistic models and the remaining nine partitions as a test set to evaluate performance. 
In Figure 2, we summarize the results of our experiment averaged over the 10 runs, with 2-standard-error bars (computed across the folds). To facilitate comparison across the document categories, we give exact values of these averages in Tables 2 and 3. Both θ̂_n^rob and θ̂^erm have reasonably high precision across all categories, with increasing ρ giving a mild improvement in precision (from .93 ± .005 to .94 ± .005). On the other hand, we observe in Figure 2(c) that ERM has low recall (.69 on test) for the Economics category, which contains about 15% of documents. As we increase ρ from 0 (ERM) to 10^5, we see a smooth and substantial improvement in recall for this rarer category (without significant degradation in precision). This improvement in recall amounts to reducing variance in predictions on the rare class. This precision and recall improvement comes in spite of the increase in the average binary logistic risk for each of the 4 classes. In Figure 2(a), we plot the average binary logistic loss (on train and test sets) averaged over the 4 categories, as well as the upper confidence bound R_n(θ, P_n), as we vary ρ. The variance-reducing effects of robust regularization appear to improve the performance of the binary logistic loss as a surrogate for the true error rate.

Table 2: Reuters corpus precision (%)

ρ    | Precision train, test | Corporate train, test | Economics train, test | Government train, test | Markets train, test
erm  | 92.72, 92.70          | 93.55, 93.55          | 89.02, 89.00          | 94.10, 94.12           | 92.88, 92.94
1E3  | 92.97, 92.95          | 93.31, 93.33          | 87.84, 87.81          | 93.73, 93.76           | 92.56, 92.62
1E4  | 93.45, 93.45          | 93.58, 93.61          | 87.60, 87.58          | 93.77, 93.80           | 92.71, 92.75
1E5  | 94.17, 94.16          | 94.18, 94.19          | 86.55, 86.56          | 94.07, 94.09           | 93.16, 93.24
1E6  | 91.20, 91.19          | 92.00, 92.02          | 74.81, 74.80          | 91.19, 91.25           | 89.98, 90.18

Table 3: Reuters corpus recall (%)

ρ    | Recall train, test | Corporate train, test | Economics train, test | Government train, test | Markets train, test
erm  | 90.97, 90.96       | 90.20, 90.25          | 67.53, 67.56          | 90.49, 90.49           | 88.77, 88.78
1E3  | 91.72, 91.69       | 90.83, 90.86          | 70.42, 70.39          | 91.26, 91.23           | 89.62, 89.58
1E4  | 92.40, 92.39       | 91.47, 91.54          | 72.38, 72.36          | 91.76, 91.76           | 90.48, 90.45
1E5  | 93.46, 93.44       | 92.65, 92.71          | 76.79, 76.78          | 92.26, 92.21           | 91.46, 91.47
1E6  | 93.10, 93.08       | 92.00, 92.04          | 79.84, 79.71          | 91.89, 91.90           | 92.00, 91.97

Acknowledgments
We thank Feng Ruan for pointing out a simple proof of Theorem 1. JCD and HN were partially supported by the SAIL-Toyota Center for AI Research, and HN was partially supported by a Samsung Fellowship. JCD was also partially supported by National Science Foundation award NSF-CAREER-1553086 and the Sloan Foundation.

References
[1] P. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4):1497-1537, 2005.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002.
[3] P. L. Bartlett, M. I. Jordan, and J. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101:138-156, 2006.
[4] A. Ben-Tal, D. den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 59(2):341-357, 2013.
[5] D. Bertsimas, V. Gupta, and N. Kallus. Robust SAA. arXiv:1408.4445 [math.OC], 2014. URL http://arxiv.org/abs/1408.4445.
[6] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: a survey of some recent advances. ESAIM: Probability and Statistics, 9:323-375, 2005.
[7] S. Boucheron, G. Lugosi, and P. Massart.
Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[8] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[9] J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[10] J. C. Duchi, P. W. Glynn, and H. Namkoong. Statistics of robust optimization: A generalized empirical likelihood approach. arXiv:1610.03425 [stat.ML], 2016. URL https://arxiv.org/abs/1610.03425.
[11] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I & II. Springer, New York, 1993.
[12] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. Annals of Statistics, 34(6):2593-2656, 2006.
[13] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397, 2004.
[14] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[15] E. Mammen and A. B. Tsybakov. Smooth discrimination analysis. Annals of Statistics, 27:1808-1829, 1999.
[16] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample variance penalization. In Proceedings of the Twenty Second Annual Conference on Computational Learning Theory, 2009.
[17] S. Mendelson. Learning without concentration. In Proceedings of the Twenty Seventh Annual Conference on Computational Learning Theory, 2014.
[18] H. Namkoong and J. C. Duchi. Stochastic gradient methods for distributionally robust optimization with f-divergences. In Advances in Neural Information Processing Systems 29, 2016.
[19] A. B. Owen. Empirical Likelihood. CRC Press, 2001.
[20] P. Samson. Concentration of measure inequalities for Markov chains and Φ-mixing processes. Annals of Probability, 28(1):416-461, 2000.
[21] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory. SIAM and Mathematical Programming Society, 2009.
[22] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, pages 135-166, 2004.
[23] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[24] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, New York, 1996.
[25] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[26] V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, XVI(2):264-280, 1971.
[27] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67(2):301-320, 2005.
[28] A. Zubkov and A. Serov. A complete proof of universal inequalities for the distribution function of the binomial law. Theory of Probability & Its Applications, 57(3):539-544, 2013.
Deep Lattice Networks and Partial Monotonic Functions

Seungil You, David Ding, Kevin Canini, Jan Pfeifer, Maya R. Gupta
Google Research, 1600 Amphitheatre Parkway, Mountain View, CA 94043
{siyou,dwding,canini,janpf,mayagupta}@google.com

Abstract
We propose learning deep models that are monotonic with respect to a user-specified set of inputs by alternating layers of linear embeddings, ensembles of lattices, and calibrators (piecewise linear functions), with appropriate constraints for monotonicity, and jointly training the resulting network. We implement the layers and projections with new computational graph nodes in TensorFlow and use the Adam optimizer and batched stochastic gradients. Experiments on benchmark and real-world datasets show that six-layer monotonic deep lattice networks achieve state-of-the-art performance for classification and regression with monotonicity guarantees.

1 Introduction
We propose building models with multiple layers of lattices, which we refer to as deep lattice networks (DLNs). While we hypothesize that DLNs may generally be useful, we focus on the challenge of learning flexible partially-monotonic functions, that is, models that are guaranteed to be monotonic with respect to a user-specified subset of the inputs. For example, if one is predicting whether to give someone else a loan, we expect, and would like to constrain, the prediction to be monotonically increasing with respect to the applicant's income if all other features are unchanged. Imposing monotonicity acts as a regularizer, improves generalization to test data, and makes the end-to-end model more interpretable, debuggable, and trustworthy.

To learn more flexible partial monotonic functions, we propose architectures that alternate three kinds of layers: linear embeddings, calibrators, and ensembles of lattices, each of which is trained discriminatively to optimize a structural risk objective and obey any given monotonicity constraints. See Fig. 2 for an example DLN with nine such layers. Lattices are interpolated look-up tables, as shown in Fig. 1. Lattices have been shown to be an efficient nonlinear function class that can be constrained to be monotonic by adding appropriate sparse linear inequalities on the parameters [1], and can be trained in a standard empirical risk minimization framework [2, 1]. Recent work showed lattices could be jointly trained as an ensemble to learn flexible monotonic functions for an arbitrary number of inputs [3].

Calibrators are one-dimensional lattices, which nonlinearly transform a single input [1]; see Fig. 1 for an example. They have been used to pre-process inputs in two-layer models: calibrators-then-linear models [4], calibrators-then-lattice models [1], and calibrators-then-ensemble-of-lattices models [3]. Here, we extend their use to discriminatively normalize between other layers of the deep model, as well as to act as a pre-processing layer. We also find that using a calibrator for the last layer can help nonlinearly transform the outputs to better match the labels.

We first describe the proposed DLN layers in detail in Section 2. In Section 3, we review more related work in learning flexible partial monotonic functions. We provide theoretical results characterizing the flexibility of the DLN in Section 4, followed by details on our open-source TensorFlow implementation and numerical optimization choices in Section 5.
Figure 1: Left: Example calibrator (1-d lattice) with fixed input range [−10, 10] and five fixed uniformly-spaced keypoints and corresponding discriminatively-trained outputs (look-up table values). Middle: Example lattice on three inputs with fixed input range [0, 1]^3, with 8 discriminatively-trained parameters (shown as gray values), each corresponding to one of the 2^3 vertices of the unit hypercube. The parameters are linearly interpolated for any input in [0, 1]^3 to form the lattice function's output. If the parameters are increasing in any direction, then the function is monotonic increasing in that direction. In this example, the gray-value parameters get lighter in all three directions, so the function is monotonic increasing in all three inputs. Right: Three examples of lattice values are shown in italics, each interpolated from the 8 lattice parameters.

Figure 2: Illustration of a nine-layer DLN: calibrators, linear embedding (with W^m ≥ 0 on the monotonic inputs), calibrators, ensemble of lattices, calibrators, ensemble of lattices, calibrators, lattice, calibrator.

Experimental results demonstrate the potential on benchmark and real-world scenarios in Section 6.

2 Deep Lattice Network Layers
We describe in detail the three types of layers we propose for learning flexible functions that can be constrained to be monotonic with respect to any subset of the inputs. Without loss of generality, we assume monotonic means monotonic non-decreasing (one can flip the sign of an input if non-increasing monotonicity is desired). Let x_t ∈ R^{D_t} be the input vector to the tth layer, with D_t inputs, and let x_t[d] denote the dth input for d = 1, ..., D_t. Table 1 summarizes the parameters and hyperparameters for each layer. For notational simplicity, in some places we drop the index t if it is clear from the context. We also denote by x_t^m the subset of x_t that is to be monotonically constrained, and by x_t^n the subset of x_t that is non-monotonic.

Linear Embedding Layer: Each linear embedding layer consists of two linear matrices, one matrix W_t^m ∈ R^{D^m_{t+1} × D^m_t} that linearly embeds the monotonic inputs x_t^m, and a separate matrix W_t^n ∈ R^{(D_{t+1} − D^m_{t+1}) × (D_t − D^m_t)} that linearly embeds the non-monotonic inputs x_t^n, and one bias vector b_t. To preserve monotonicity of the embedded vector W_t^m x_t^m, we impose the linear inequality constraints

  W_t^m[i, j] ≥ 0 for all (i, j).   (1)

The output of the linear embedding layer is

  x_{t+1} = [ x_{t+1}^m ; x_{t+1}^n ] = [ W_t^m x_t^m ; W_t^n x_t^n ] + b_t.

Only the first D^m_{t+1} coordinates of x_{t+1} need to be monotonic inputs to the (t+1)th layer. These two linear embedding matrices and the bias vector are discriminatively trained.

Calibration Layer: Each calibration layer consists of a separate one-dimensional piecewise linear transform for each input at that layer, c_{t,d}(x_t[d]), that maps R to [0, 1], so that

  x_{t+1} := [c_{t,1}(x_t[1])  c_{t,2}(x_t[2])  ···  c_{t,D_t}(x_t[D_t])]^T.

Here each c_{t,d} is a 1D lattice with K key-value pairs (a ∈ R^K, b ∈ R^K), and the function for each input is linearly interpolated between the two b values corresponding to the input's surrounding a values. An example is shown on the left in Fig. 1.
Each 1D calibration function is equivalent to a sum of weighted-and-shifted rectified linear units (ReLUs); that is, a calibrator function c(x[d]; a, b) can be equivalently expressed as

  c(x[d]; a, b) = Σ_{k=1}^K α[k] ReLU(x[d] − a[k]) + b[1],   (2)

where

  α[k] = (b[k+1] − b[k])/(a[k+1] − a[k]) − (b[k] − b[k−1])/(a[k] − a[k−1])   for k = 2, ..., K − 1,
  α[1] = (b[2] − b[1])/(a[2] − a[1]),
  α[K] = −(b[K] − b[K−1])/(a[K] − a[K−1]).

However, enforcing monotonicity and boundedness constraints on the calibrator output is much simpler with the (a, b) parameterization of each keypoint's input-output values, as we discuss shortly. Before training the DLN, we fix the input range of each calibrator to [a_min, a_max], and we fix the K keypoints a ∈ R^K to be uniformly spaced over [a_min, a_max]. Inputs that fall outside [a_min, a_max] are clipped to that range. The calibrator output parameters b ∈ [0, 1]^K are discriminatively trained.

For monotonic inputs, we can constrain the calibrator functions to be monotonic by constraining the calibrator parameters b ∈ [0, 1]^K to be monotonic, i.e., by adding the linear inequality constraints

  b[k] ≤ b[k + 1] for k = 1, ..., K − 1   (3)

to the training objective [3]. We also experimented with constraining all calibrators to be monotonic (even for non-monotonic inputs) for more stable/regularized training.

Ensemble of Lattices Layer: Each ensemble-of-lattices layer consists of G lattices. Each lattice is a linearly interpolated multidimensional look-up table; for an example, see the middle and right pictures in Fig. 1. Each S-dimensional look-up table takes inputs over the S-dimensional unit hypercube [0, 1]^S, and has 2^S parameters θ ∈ R^{2^S}, specifying the lattice's output for each of the 2^S vertices of the unit hypercube. Inputs in between the vertices are linearly interpolated, which forms a smooth but nonlinear function over the unit hypercube. Two interpolation methods have been used: multilinear interpolation and simplex interpolation [1] (also known as the Lovász extension [5]). We use multilinear interpolation for all our experiments, which can be expressed as φ(x)^T θ, where the nonlinear feature transformation φ(x) : [0, 1]^S → [0, 1]^{2^S} gives the 2^S linear interpolation weights that input x puts on each of the 2^S parameters θ, so that the interpolated value for x is φ(x)^T θ, with

  φ(x)[j] = Π_{d=1}^S x[d]^{v_j[d]} (1 − x[d])^{1 − v_j[d]},

where v_j[·] ∈ {0, 1} is the coordinate vector of the jth vertex of the unit hypercube, for j = 1, ..., 2^S. For example, when S = 2, v_1 = (0, 0), v_2 = (0, 1), v_3 = (1, 0), v_4 = (1, 1), and φ(x) = ((1 − x[1])(1 − x[2]), (1 − x[1])x[2], x[1](1 − x[2]), x[1]x[2]).

The ensemble-of-lattices layer produces G outputs, one per lattice. When initializing the DLN, if the (t+1)th layer is an ensemble of lattices, we randomly permute the outputs of the previous layer to be assigned to the G_{t+1} · S_{t+1} inputs of the ensemble.
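The calibrator and lattice primitives are compact enough to state in code. The following is a minimal Python sketch (the function names and the clipping convention are ours) that also numerically checks the ReLU-sum equivalence (2):

```python
import numpy as np

def calibrate(x, a, b):
    """1D calibrator: piecewise-linear interpolation of keypoints (a, b),
    with inputs clipped to [a[0], a[-1]] as described in the text."""
    return np.interp(np.clip(x, a[0], a[-1]), a, b)

def calibrate_relu(x, a, b):
    """The same calibrator written as a sum of weighted, shifted ReLUs, eq. (2)."""
    slopes = np.diff(b) / np.diff(a)
    alpha = np.concatenate(([slopes[0]], np.diff(slopes), [-slopes[-1]]))
    return alpha @ np.maximum(np.asarray(x) - a[:, None], 0.0) + b[0]

a = np.linspace(-10, 10, 5)              # five uniformly spaced keypoints
b = np.array([0.0, 0.4, 0.3, 0.8, 1.0])  # discriminatively trained output values
xs = np.linspace(-12, 12, 49)
assert np.allclose(calibrate(xs, a, b), calibrate_relu(xs, a, b))

def multilinear_weights(x):
    """phi(x): the 2^S multilinear interpolation weights for x in [0, 1]^S.
    Coordinate d of vertex j is bit d of j (first input = least significant)."""
    w = np.ones(1)
    for xd in x:
        w = np.concatenate(((1.0 - xd) * w, xd * w))
    return w

theta = np.arange(8.0)   # parameters increasing in every direction => monotonic
x = np.array([0.2, 0.0, 0.4])
value = multilinear_weights(x) @ theta   # lattice output phi(x)^T theta
```

The α[K] term makes the total slope zero beyond the last keypoint, which is why the ReLU form reproduces the clipped calibrator exactly.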
Each lattice is constrained to be monotonic by enforcing monotonicity constraints on each pair of lattice parameters that are adjacent in the monotonic directions; for details see Gupta et al. [1].

End-to-end monotonicity: The DLN is constructed to preserve end-to-end monotonicity with respect to a user-specified subset of the inputs. As we described, the parameters of each component (matrix, calibrator, lattice) can be constrained to make it monotonic with respect to a subset of its inputs by satisfying certain linear inequality constraints [1]. Also, if a component has a monotonic input, then the output of that component is treated as a monotonic input to the following layer. Because the composition of monotonic functions is monotonic, the constructed DLN belongs to the partial monotonic function class. The arrows in Figure 2 illustrate this construction, i.e., how the t-th layer's output becomes a monotonic input to the (t + 1)-th layer.

2.1 Hyperparameters

We detail the hyperparameters for each type of DLN layer in Table 1. Some of these hyperparameters constrain each other, since the number of outputs from each layer must equal the number of inputs to the next layer; for example, if you have a linear embedding layer with D_{t+1} = 1000 outputs, then there are 1000 inputs to the next layer, and if that next layer is a lattice ensemble, its hyperparameters must obey G_t × S_t = 1000.

3 Related Work

Low-dimensional monotonic models have a long history in statistics, where they are called shape constraints and often use isotonic regression [6]. Learning monotonic single-layer neural nets by constraining the neural-net weights to be positive dates back to Archer and Wang in 1993 [7], and that basic idea has been re-visited by others [8, 9, 10, 11], but with some negative results about the obtainable flexibility, even with multiple hidden layers [12]. Sill [13] proposed a three-layer monotonic network that used a monotonic linear embedding and max-and-min pooling. Daniels and Velikova [12] extended Sill's result to learn a partial monotonic function by combining min-max pooling, also known as adaptive logic networks [14], with a partial monotonic linear embedding, and showed that their proposed architecture is a universal approximator for partial monotone functions. None of these prior neural networks were demonstrated on problems with more than D = 10 features, nor trained on more than a few thousand examples. For our experiments we implemented a positive neural network and a min-max-pooling network [12] with TensorFlow.

This paper extends recent work on learning multidimensional flexible partial monotonic two-layer networks consisting of a layer of calibrators followed by an ensemble of lattices [3], with parameters appropriately constrained for monotonicity, which built on earlier work of Gupta et al. [1]. This work differs in three key regards. First, we alternate layers to form a deeper, and hence potentially more flexible, network. Second, a key question addressed in Canini et al. [3] is how to decide which features should be put together in each lattice in their ensemble. They found that random assignment worked well, but required large ensembles. They showed that smaller (and hence faster) models with the same accuracy could be
trained by using a heuristic pre-processing step they proposed (crystals) to identify which features interact nonlinearly. This pre-processing step requires training a lattice for each pair of inputs to judge that pair's strength of interaction, which scales as O(D^2), and we found it can be a large fraction of the overall training time for D > 50. We solve this problem of determining which inputs should interact in each lattice by using a linear embedding layer before an ensemble-of-lattices layer to discriminatively and adaptively learn, during training, how to map the features to the first ensemble-layer lattices' inputs. This strategy also means each input to a lattice can be a linear combination of the features. This use of a jointly trained linear embedding is the second key difference from that prior work [3]. The third difference is that in previous work [4, 1, 3], the calibrator keypoint values were fixed a priori based on the quantiles of the features, which is challenging to do for the calibration layers mid-DLN, because the quantiles of their inputs are evolving during training. Instead, we fix the keypoint values uniformly over the bounded calibrator domain.

4 Function Class of Deep Lattice Networks

We offer some results and hypotheses about the function class of deep lattice networks, depending on whether the lattices are interpolated with multilinear interpolation (which forms multilinear polynomials) or simplex interpolation (which forms locally linear surfaces).

4.1 Cascaded multilinear lookup tables

We show that a deep lattice network made up only of cascaded layers of lattices (without intervening layers of calibrators or linear embeddings) is equivalent to a single lattice defined on the D input features if multilinear interpolation is used. It is easy to construct counter-examples showing that this result does not hold for simplex-interpolated lattices.

Lemma 1. Suppose that a lattice has L inputs that can each be expressed in the form θ_i^T φ(x[s_i]), where the s_i are mutually disjoint and φ represents multilinear interpolation weights. Then the output can be expressed in the form θ̄^T φ(x[∪_i s_i]). That is, the lattice preserves the functional form of its inputs, changing only the values of the coefficients θ̄ and the linear interpolation weights φ.

Proof. Each input i of the lattice can be expressed in the following form:

    f_i = θ_i^T φ(x[s_i]) = Σ_{k=1}^{2^{|s_i|}} θ_i[v_k] Π_{d ∈ s_i} x[d]^{v_k[d]} (1 − x[d])^{1 − v_k[d]}.

This is a multilinear polynomial on x[s_i]. The output can be expressed in the following form:

    F = Σ_{j=1}^{2^L} θ[v_j] Π_{i=1}^{L} f_i^{v_j[i]} (1 − f_i)^{1 − v_j[i]}.

Note the product in the expression: f_i and 1 − f_i are both multilinear polynomials, but within each term of the product only one of them is present, since one of the two has exponent 0 and the other has exponent 1. Furthermore, since each f_i is a function of a different subset of x, we conclude that the entire product is a multilinear polynomial. Since the sum of multilinear polynomials is still a multilinear polynomial, we conclude that F is a multilinear polynomial. Any multilinear polynomial on k variables can be converted to a k-dimensional multilinear lookup table, which concludes the proof.

Lemma 1 can be applied inductively to every layer of cascaded lattices down to the final output F(x). We have shown that cascaded lattices using multilinear interpolation are equivalent to a single multilinear lattice defined on all D features.
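Lemma 1 is easy to sanity-check numerically: a multilinear function is affine in each single coordinate, so its value at the midpoint of any coordinate must equal the average of the two endpoint values. Below is a small sketch checking this necessary condition for a two-layer cascade, with randomly chosen, purely illustrative lattice parameters.

```python
import itertools
import numpy as np

def lattice(x, theta):
    """Multilinear interpolation of lattice parameters theta over [0,1]^S."""
    S = len(x)
    verts = itertools.product([0, 1], repeat=S)
    phi = [np.prod([x[d] if v[d] else 1.0 - x[d] for d in range(S)]) for v in verts]
    return float(np.dot(phi, theta))

rng = np.random.default_rng(0)
inner1, inner2, outer = rng.random(4), rng.random(4), rng.random(4)

def cascade(x):
    """Two 2-input lattices on disjoint inputs s1 = {0,1}, s2 = {2,3},
    cascaded into an outer 2-input lattice; x lies in [0,1]^4."""
    f1 = lattice(x[:2], inner1)
    f2 = lattice(x[2:], inner2)
    return lattice(np.array([f1, f2]), outer)

x = rng.random(4)
for d in range(4):  # affine in each coordinate <=> midpoint = endpoint average
    lo, hi, mid = x.copy(), x.copy(), x.copy()
    lo[d], hi[d], mid[d] = 0.0, 1.0, 0.5
    assert np.isclose(cascade(mid), 0.5 * (cascade(lo) + cascade(hi)))
print("cascade is affine in every single coordinate, as Lemma 1 predicts")
```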
4.2 Universal approximation of partial monotone functions

Theorem 4.1 in [12] states that a partial monotone linear embedding followed by min- and max-pooling can approximate any partial monotone function on the hypercube to arbitrary precision, given sufficiently high embedding dimension. We show in the next lemma that simplex-interpolated lattices can represent min- or max-pooling. Thus one can use a DLN constructed with a linear embedding layer followed by two cascaded simplex-interpolated lattice layers to approximate any partial monotone function on the hypercube.

Lemma 2. Let θ_min = (0, 0, ⋯, 0, 1) ∈ R^{2^n} and θ_max = (1, 0, ⋯, 0) ∈ R^{2^n}, and let φ_simplex be the simplex interpolation weights. Then

    min(x[1], ⋯, x[n]) = φ_simplex(x)^T θ_min,
    max(x[1], ⋯, x[n]) = φ_simplex(x)^T θ_max.

Proof. From the definition of simplex interpolation [1], φ_simplex(x)^T θ is a weighted sum of the 2^n parameters whose weights are determined by the sorted order π such that x[π[1]] ≥ ⋯ ≥ x[π[n]], and due to the sparsity of θ_min and θ_max, they select the min and the max, respectively.

4.3 Locally linear functions

If simplex interpolation [1] (aka the Lovász extension) is used, the deep lattice network produces a locally linear function, because each layer is locally linear, and compositions of locally linear functions are locally linear. Note that a D-input lattice interpolated with simplex interpolation has D! linear pieces [1]. If one cascades an ensemble of D lattices into a lattice, then the number of possible locally linear pieces is of the order O((D!)!).

5 Numerical Optimization Details for the DLN

Operators: We implemented the 1D calibrators and multilinear interpolation over a lattice as new C++ operators in TensorFlow [15] and express each layer as a computational-graph node using these new and existing TensorFlow operators. Our implementation is open sourced and can be found at https://github.com/tensorflow/lattice. We use the Adam optimizer [16] and batched stochastic gradients to update the model parameters. After each batched gradient update, we project the parameters to satisfy their monotonicity constraints. The linear embedding layer's constraints are element-wise non-negativity constraints, so its projection clips each negative component to zero; this can be done in O(# of elements in a monotonic linear embedding matrix). The projection for each calibrator is isotonic regression with a chain ordering, which we implement with the pool-adjacent-violators algorithm [17] for each calibrator; this can be done in O(# of calibration keypoints). The projection for each lattice is isotonic regression with a partial ordering that imposes O(S 2^S) linear constraints for each lattice [1]. We solved it with consensus optimization and the alternating direction method of multipliers [18] to parallelize the projection computations, with a convergence criterion of ε = 10^{−7}; this can be done in O(S 2^S log(1/ε)).
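The first two projections are simple enough to sketch. Below is a minimal NumPy version of the non-negativity clipping and of pool-adjacent-violators for the calibrator chain constraint; this is an illustrative re-implementation, not the released TensorFlow operator code, and the lattice projection via ADMM is omitted.

```python
import numpy as np

def project_embedding(W):
    """Monotonic linear embedding constraint (1): clip negatives to zero."""
    return np.maximum(W, 0.0)

def project_calibrator(b):
    """Pool-adjacent-violators: project keypoint outputs b onto the chain
    order b[k] <= b[k+1] of constraint (3) (isotonic regression)."""
    blocks = []  # each block is [mean, size]; merge while the order is violated
    for v in np.asarray(b, dtype=float):
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    return np.concatenate([np.full(n, m) for m, n in blocks])

print(project_calibrator([0.1, 0.5, 0.3, 0.2, 0.9]))
# [0.1, 1/3, 1/3, 1/3, 0.9]: the violating run 0.5, 0.3, 0.2 pooled to its mean
```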
Initialization: For the linear embedding layers, we initialize each component of the linear embedding matrix with IID Gaussian noise N(2, 1). The initial mean of 2 biases the initial parameters to be positive, so that they are not clipped to zero by the first monotonicity projection. However, because the calibration layer before the linear embedding outputs values in [0, 1] and is thus expected to have output E[x_t] = 0.5, initializing the linear embedding with a mean of 2 introduces an initial bias: E[x_{t+1}] = E[W_t x_t] = D_t. To counteract that, we initialize each component of the bias vector b_t to −D_t, so that the initial expected output of the linear layer is E[x_{t+1}] = E[W_t x_t + b_t] = 0. We initialize each lattice's parameters to be a linear function spanning [0, 1], and add IID Gaussian noise N(0, 1/S^2) to each parameter, where S is the number of inputs to the lattice. We initialize each calibrator to be a linear function that maps [x_min, x_max] to [0, 1] (and do not add any noise).

6 Experiments

We present results on the same benchmark dataset (Adult) with the same monotonic features as in Canini et al. [3], and for three problems from Google where the monotonicity constraints were specified by product groups. For each experiment, every model considered is trained with monotonicity guarantees on the same set of inputs. See Table 2 for a summary of the datasets.

Table 2: Dataset Summary

    Dataset      Type      # Features (# Monotonic)   # Training   # Validation   # Test
    Adult        Classify  90 (4)                     26,065       6,496          16,281
    User Intent  Classify  49 (19)                    241,325      60,412         176,792
    Rater Score  Regress   10 (10)                    1,565,468    195,530        195,748
    Usefulness   Classify  9 (9)                      62,220       7,764          7,919

Table 3: User Intent Case Study Results

                     Validation Accuracy   Test Accuracy   # Parameters   G × S
    DLN              74.39%                72.48%          27,903         30 × 5D
    Crystals         74.24%                72.01%          15,840         80 × 7D
    Min-Max network  73.89%                72.02%          31,500         90 × 7D

For the classification problems we used the logistic loss, and for the regression the squared error. For each problem, we used a validation set to optimize the hyperparameters of each model architecture: the learning rate, the number of training steps, etc. For an ensemble of lattices, we tune the number of lattices, G, and the number of inputs to each lattice, S. All calibrators for all models used a fixed number of 100 keypoints, with [−100, 100] as the input range. In all experiments, we use the six-layer DLN architecture Calibrators → Linear Embedding → Calibrators → Ensemble of Lattices → Calibrators → Linear Embedding, and validate the number of lattices in the ensemble, G, the number of inputs to each lattice, S, the Adam step size, and the number of loops. For crystals [3] we validated the number of lattices in the ensemble, G, and the number of inputs to each lattice, S, as well as the Adam step size and number of loops. For the min-max network [12], we validated the number of groups, G, and the dimension of each group, S, as well as the Adam step size and number of loops. For datasets where all features are monotonic, we also train a deep neural network with a non-negative weight matrix and ReLU activation units, with a final fully connected layer with a non-negative weight matrix, which we call a monotonic DNN, akin to the proposals of [7, 8, 9, 10, 11]; we tune the depth of the hidden layers, G, and the number of activation units in each layer, S. All the result tables are sorted by validation accuracy and contain an additional column for the chosen hyperparameters; 2 × 5D means G = 2 and S = 5.

6.1 User Intent Case Study (Classification)

For this real-world Google problem, the task is to classify the user intent. This experiment is set up to test generalization ability on non-IID test data. The train and validation examples are collected from the U.S., and the test set is collected from 20 other countries; as a result of this difference between the train/validation and test distributions, there is a notable difference between the validation and the test accuracy. The results in Table 3 show a 0.5% gain in test accuracy for the DLN.
6.2 Adult Benchmark Dataset (Classification)

We compare accuracy on the benchmark Adult dataset [19], where a model predicts whether a person's income is at least $50,000. Following Canini et al. [3], we require all models to be monotonically increasing in capital-gain, weekly hours of work, education level, and the gender wage gap. We used one-hot encoding for the other categorical features, for 90 features in total. We randomly split the usual train set [19] 80-20, trained over the 80%, and validated over the 20%.

Table 4: Adult Results

                     Validation Accuracy   Test Accuracy   # Parameters   G × S
    DLN              86.50%                86.08%          40,549         70 × 5D
    Crystals         86.02%                85.87%          3,360          60 × 4D
    Min-Max network  85.28%                84.63%          57,330         70 × 9D

Results in Table 4 show the DLN provides better validation and test accuracy than the min-max network or crystals.

6.3 Rater Score Prediction Case Study (Regression)

For this real-world Google problem, we train a model to predict a rater score for a candidate result, where each rater score is averaged over 1-5 raters and takes on 5-25 possible real values. All 10 features are required to be monotonic. Results in Table 5 show the DLN has very similar test MSE to the two-layer crystals model, and much better MSE than the other monotonic networks.

Table 5: Rater Score Prediction (Monotonic Features Only) Results

                     Validation MSE   Test MSE   # Parameters   G × S
    DLN              1.2078           1.2096     81,601         50 × 9D
    Crystals         1.2101           1.2109     1,980          10 × 7D
    Min-Max network  1.3474           1.3447     5,500          100 × 5D
    Monotonic DNN    1.3920           1.3939     2,341          20 × 100D

6.4 Usefulness Case Study (Classifier)

For this real-world Google problem, we train a model to predict whether a candidate result adds useful information given the presence of another result. All 9 features are required to be monotonic. Table 6 shows the DLN has slightly better validation and test accuracy than crystals, and both are notably better than the min-max network or the positive-weight DNN.

Table 6: Usefulness Results

                     Validation Accuracy   Test Accuracy   # Parameters   G × S
    DLN              66.08%                65.26%          81,051         50 × 9D
    Crystals         65.45%                65.13%          9,920          80 × 6D
    Min-Max network  64.62%                63.65%          4,200          70 × 6D
    Monotonic DNN    64.27%                62.88%          2,012          1 × 1000D

7 Conclusions

In this paper, we proposed combining three types of layers, (1) calibrators, (2) linear embeddings, and (3) multidimensional lattices, to produce a new class of models we call deep lattice networks, which combines the flexibility of deep networks with the regularization, interpretability, and debuggability advantages that come with being able to impose monotonicity constraints on some inputs.

References

[1] M. R. Gupta, A. Cotter, J. Pfeifer, K. Voevodski, K. Canini, A. Mangylov, W. Moczydlowski, and A. Van Esbroeck. Monotonic calibrated interpolated look-up tables. Journal of Machine Learning Research, 17(109):1-47, 2016.
[2] E. K. Garcia and M. R. Gupta. Lattice regression. In Advances in Neural Information Processing Systems (NIPS), 2009.
[3] K. Canini, A. Cotter, M. M. Fard, M. R. Gupta, and J. Pfeifer. Fast and flexible monotonic functions with ensembles of lattices. Advances in Neural Information Processing Systems (NIPS), 2016.
[4] A. Howard and T. Jebara. Learning monotonic transformations for classification. Advances in Neural Information Processing Systems (NIPS), 2007.
[5] L. Lovász. Submodular functions and convexity. In Mathematical Programming: The State of the Art, pages 235-257. Springer, 1983.
[6] P. Groeneboom and G. Jongbloed. Nonparametric estimation under shape constraints. Cambridge Press, New York, USA, 2014.
[7] N. P. Archer and S. Wang. Application of the back propagation neural network algorithm with monotonicity constraints for two-group classification problems. Decision Sciences, 24(1):60-75, 1993.
[8] S. Wang. A neural network method of density estimation for univariate unimodal data. Neural Computing & Applications, 2(3):160-167, 1994.
[9] H. Kay and L. H. Ungar. Estimating monotonic functions and their bounds. AIChE Journal, 46(12):2426-2434, 2000.
[10] C. Dugas, Y. Bengio, F. Bélisle, C. Nadeau, and R. Garcia. Incorporating functional knowledge in neural networks. Journal of Machine Learning Research, 2009.
[11] A. Minin, M. Velikova, B. Lang, and H. Daniels. Comparison of universal approximators incorporating partial monotonicity by structure. Neural Networks, 23(4):471-475, 2010.
[12] H. Daniels and M. Velikova. Monotone and partially monotone neural networks. IEEE Trans. Neural Networks, 21(6):906-917, 2010.
[13] J. Sill. Monotonic networks. Advances in Neural Information Processing Systems (NIPS), 1998.
[14] W. W. Armstrong and M. M. Thomas. Adaptive logic networks. Handbook of Neural Computation, Section C1.8, IOP Publishing and Oxford U. Press, ISBN 0 7503 0312 3, 1996.
[15] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[16] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[17] M. Ayer, H. D. Brunk, G. M. Ewing, W. T. Reid, and E. Silverman. An empirical distribution function for sampling with incomplete information. The Annals of Mathematical Statistics, 26(4):641-647, 1955.
[18] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[19] C. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
Continual Learning with Deep Generative Replay

Hanul Shin
Massachusetts Institute of Technology
SK T-Brain
[email protected]

Jung Kwon Lee*, Jaehong Kim*, Jiwon Kim
SK T-Brain
{jklee,xhark,jk}@sktbrain.com

* Equal Contribution

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and, even worse, is often infeasible in real-world applications where the access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain, we propose Deep Generative Replay, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks.

1 Introduction

One distinctive ability of humans and large primates is to continually learn new skills and accumulate knowledge throughout the lifetime [6]. Even in small vertebrates such as rodents, established connections between neurons seem to last more than a year [13]. Besides, primates incorporate new information and expand their cognitive abilities without seriously perturbing past memories. This flexible memory system results from a good balance between synaptic plasticity and stability [1].

Continual learning in deep neural networks, however, suffers from a phenomenon called catastrophic forgetting [22], in which a model's performance on previously learned tasks abruptly degrades when it is trained for a new task. In artificial neural networks, inputs coincide with the outputs by implicit parametric representation, so training them towards a new objective can cause almost complete forgetting of former knowledge. This problem has been a key obstacle to continual learning for deep neural networks through sequential training on multiple tasks.

Previous attempts to alleviate catastrophic forgetting often relied on an episodic memory system that stores past data [31]. In particular, recorded examples are regularly replayed with real samples drawn from the new task, and the network parameters are jointly optimized. While a network trained in this manner performs as well as separate networks trained solely on each task [29], a major drawback of the memory-based approach is that it requires large working memory to store and replay past inputs. Moreover, such data storage and replay may not be viable in some real-world situations.

Notably, humans and large primates learn new knowledge even from limited experiences and still retain past memories. While several biological mechanisms contribute to this at multiple levels, the most apparent distinction between primate brains and artificial neural networks is the existence of separate, interacting memory systems [26]. The Complementary Learning Systems (CLS) theory illustrates the significance of dual memory systems involving the hippocampus and the neocortex. The hippocampal system rapidly encodes recent experiences, and the memory trace that lasts for
a short period is reactivated during sleep or conscious and unconscious recall [8]. The memory is consolidated in the neocortex through activation synchronized with multiple replays of the encoded experience [27], a mechanism which inspired the use of experience replay [23] in training reinforcement learning agents.

Recent evidence suggests that the hippocampus is more than a simple experience replay buffer. Reactivation of the memory traces yields rather flexible outcomes. Altering the reactivation causes a defect in the consolidated memory [35], while co-stimulating certain memory traces in the hippocampus creates a false memory that was never experienced [28]. These properties suggest that the hippocampus is better paralleled with a generative model than with a replay buffer. Specifically, deep generative models such as deep Boltzmann machines [32] or variational autoencoders [17] can generate high-dimensional samples that closely match observed inputs.

We now propose an alternative approach to sequentially train deep neural networks without referring to past data. In our deep generative replay framework, the model retains previously acquired knowledge by the concurrent replay of generated pseudo-data. In particular, we train a deep generative model in the generative adversarial networks (GANs) framework [10] to mimic past data. Generated data are then paired with the corresponding responses from the past task solver to represent old tasks. Called the scholar model, the generator-solver pair can produce fake data and desired target pairs as much as needed, and when presented with a new task, these produced pairs are interleaved with new data to update the generator and solver networks. Thus, a scholar model can both learn the new task without forgetting its own knowledge and teach other models with generated input-target pairs, even when the network configuration is different.

As deep generative replay supported by the scholar network retains the knowledge without revisiting actual past data, this framework can be employed in various practical situations involving privacy issues. Recent advances in training generative adversarial networks suggest that the trained models can reconstruct the real data distribution in a wide range of domains. Although we tested our models on image classification tasks, our model can be applied to any task as long as the trained generator reliably reproduces the input space.

2 Related Works

The term catastrophic forgetting or catastrophic interference was first introduced by McCloskey and Cohen in the 1980s [22]. They claimed that catastrophic interference is a fundamental limitation of neural networks and a downside of their high generalization ability. While the cause of catastrophic forgetting has not been studied analytically, it is known that neural networks parameterize the internal features of inputs, and training the networks on new samples causes alterations in already established representations. Several works illustrate the empirical consequences in sequential learning settings [7, 29] and provide a few primitive solutions [16, 30], such as replaying all previous data.

2.1 Comparable methods

A branch of works assumes a particular situation where access to previous data is limited to the current task [12, 18, 20]. These works focus on optimizing network parameters while minimizing alterations to already consolidated weights. It is suggested that regularization methods such as dropout [33] and L2 regularization help reduce interference with new learning [12].
Furthermore, elastic weight consolidation (EWC), proposed in [18], demonstrates that protecting certain weights based on their importance to the previous tasks tempers the performance loss.

Other attempts to sequentially train a deep neural network capable of solving multiple tasks reduce catastrophic interference by augmenting the networks with task-specific parameters. In general, layers close to the inputs are shared to capture universal features, and independent output layers produce task-specific outputs. Although separate output layers are free of interference, alteration of earlier layers still causes some performance loss on older tasks. Lowering the learning rates on some parameters is also known to reduce forgetting [9].

A recently proposed method called Learning without Forgetting (LwF) [21] addresses the problem of sequential learning in image classification tasks while minimizing alteration of shared network parameters. In this framework, the network's response to new task inputs prior to fine-tuning indirectly represents knowledge about the old tasks and is maintained throughout the learning process.

2.2 Complementary Learning Systems (CLS) theory

A handful of works are devoted to designing complementary network architectures to alleviate catastrophic forgetting. When the training data for previous tasks are not accessible, only pseudo-inputs and pseudo-targets produced by a memory network can be fed into the task network. Called a pseudorehearsal technique, this method is claimed to maintain old input-output patterns without accessing real data [31]. When the tasks are as elementary as coupling two binary patterns, simply feeding random noises and corresponding responses suffices [2]. A more recent work proposes an architecture that resembles the structure of the hippocampus to facilitate continual learning for more complex data such as small binary pixel images [15]. However, none of them demonstrates scalability to high-dimensional inputs similar to those that appear in the real world, due to the difficulty of generating meaningful high-dimensional pseudo-inputs without further supervision.

Our generative replay framework differs from the aforementioned pseudorehearsal techniques in that the fake inputs are generated from the learned past input distribution. Generative replay has several advantages over other approaches because the network is jointly optimized using an ensemble of generated past data and real current data. The performance is therefore equivalent to joint training on accumulated real data as long as the generator recovers the input distribution. The idea of generative replay also appears in Mocanu et al. [24], in which they trained a Restricted Boltzmann Machine to recover the past input distribution.

2.3 Deep Generative Models

A generative model is any model that generates observable samples. Specifically, we consider deep generative models based on deep neural networks that maximize the likelihood of generated samples being in a given real distribution [11]. Some deep generative models, such as variational autoencoders [17] and GANs [10], are able to mimic complex samples like images.

The GANs framework defines a zero-sum game between a generator G and a discriminator D. While the discriminator learns to distinguish the generated samples from real samples by comparing the two data distributions, the generator learns to mimic the real distribution as closely as possible. The objective of the two networks is thereby defined as:

    min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))].
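To unpack the minimax objective, here is a minimal PyTorch sketch of the two per-step losses one would minimize in practice (using the common non-saturating variant for G); D and G are placeholder networks, and note that the experiments below actually use WGAN-GP [14] rather than this vanilla form.

```python
import torch
import torch.nn.functional as F

def gan_losses(D, G, x_real, z):
    """Losses realizing min_G max_D V(D, G). D outputs one logit per sample;
    G maps noise z ~ p_z to fake samples."""
    x_fake = G(z)
    d_real, d_fake = D(x_real), D(x_fake.detach())
    # Discriminator ascends V: push D(x) -> 1 on real, D(G(z)) -> 0 on fake.
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    # Generator: non-saturating heuristic, push D(G(z)) -> 1.
    d_fake_g = D(x_fake)
    g_loss = F.binary_cross_entropy_with_logits(d_fake_g, torch.ones_like(d_fake_g))
    return d_loss, g_loss
```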
3 Generative Replay

We first define several terminologies. In our continual learning framework, we define the sequence of tasks to be solved as a task sequence T = (T_1, T_2, ⋯, T_N) of N tasks.

Definition 1. A task T_i is to optimize a model towards an objective on a data distribution D_i, from which the training examples (x_i, y_i) are drawn.

Next, we call our model a scholar, as it is capable of learning a new task and teaching its knowledge to other networks. Note that the term scholar differs from the standard notion of the teacher-student framework of ensemble models [5], in which the networks either teach or learn only.

Definition 2. A scholar H is a tuple ⟨G, S⟩, where a generator G is a generative model that produces real-like samples and a solver S is a task-solving model parameterized by θ.

The solver has to perform all tasks in the task sequence T. The full objective is thereby given as minimizing the unbiased sum of losses over all tasks in the task sequence, E_{(x,y)∼D}[L(S(x; θ), y)], where D is the entire data distribution and L is a loss function. While being trained for task T_i, the model is fed with samples drawn from D_i.

3.1 Proposed Method

We consider sequential training of our scholar model. However, training a single scholar model while referring to the most recent copy of the network is equivalent to training a sequence of scholar models (H_i)_{i=1}^N, where the n-th scholar H_n (n > 1) learns the current task T_n and the knowledge of the previous scholar H_{n−1}. Therefore, we describe our full training procedure as in Figure 1(a).

Training a scholar model from another scholar involves two independent procedures: training the generator and training the solver. First, the new generator receives the current task input x and replayed inputs x′ from previous tasks. Real and replayed samples are mixed at a ratio that depends on the desired importance of the new task compared to the older tasks. The generator learns to reconstruct the cumulative input space, and the new solver is trained to couple the inputs and targets drawn from the same mix of real and replayed data. Here, the replayed target is the past solver's response to a replayed input. Formally, the loss function of the i-th solver is given as

    L_train(θ_i) = r E_{(x,y)∼D_i}[L(S(x; θ_i), y)] + (1 − r) E_{x′∼G_{i−1}}[L(S(x′; θ_i), S(x′; θ_{i−1}))],    (1)

where θ_i are the network parameters of the i-th scholar and r is the ratio of mixing real data. As we aim to evaluate the model on the original tasks, the test loss differs from the training loss:

    L_test(θ_i) = r E_{(x,y)∼D_i}[L(S(x; θ_i), y)] + (1 − r) E_{(x,y)∼D_past}[L(S(x; θ_i), y)],    (2)

where D_past is the cumulative distribution of past data. The second loss term is ignored in both functions when i = 1, because there is no replayed data to refer to for the first solver.

We build our scholar model with a solver that has a suitable architecture for solving a task sequence, and a generator trained in the generative adversarial networks framework. However, our framework can employ any deep generative model as a generator.
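A minimal PyTorch sketch of one solver update implementing Eq. (1) follows; `old_generator.sample` is a hypothetical interface for drawing replayed inputs x′, and the optimizer step is left to the caller.

```python
import torch

def solver_replay_loss(solver, old_solver, old_generator, x, y, loss_fn, r=0.5):
    """Eq. (1): mix the loss on real pairs (x, y) from the current task with
    the loss on replayed pairs (x', y'), y' = old_solver(x'), at ratio r."""
    with torch.no_grad():
        x_replay = old_generator.sample(len(x))  # hypothetical sampler API
        y_replay = old_solver(x_replay)          # past solver's responses
    return (r * loss_fn(solver(x), y)
            + (1.0 - r) * loss_fn(solver(x_replay), y_replay))
```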
Figure 1: Sequential training of scholar models. (a) Training a sequence of scholar models is equivalent to continuous training of a single scholar while referring to its most recent copy. (b) A new generator is trained to mimic a mixed data distribution of real samples x and replayed inputs x′ from the previous generator. (c) A new solver learns from real input-target pairs (x, y) and replayed input-target pairs (x′, y′), where the replayed response y′ is obtained by feeding generated inputs into the previous solver.

3.2 Preliminary Experiment

Prior to our main experiments, we show that the trained scholar model alone suffices to train an empty network. We tested our model on classifying the MNIST handwritten digit database [19]. A sequence of scholar models was trained from scratch through generative replay from the previous scholar. The accuracy on classifying the full test data is shown in Table 1. We observed that the scholar model transfers knowledge without losing information.

Table 1: Test accuracy of the sequentially learned solvers measured on the full test data from the MNIST database. The first solver learned from real data, and subsequent solvers learned from previous scholar networks.

                  Solver1 → Solver2 → Solver3 → Solver4 → Solver5
    Accuracy(%)   98.81%    98.64%    98.58%    98.53%    98.56%

4 Experiments

In this section, we show the applicability of the generative replay framework in various sequential learning settings. Generative replay based on a trained scholar network is superior to other continual learning approaches in that the quality of the generative model is the only constraint on the task performance. In other words, training the networks with generative replay is equivalent to joint training on the entire data when the generative model is optimal. To draw the best possible results, we used the WGAN-GP [14] technique for training the generator.

As a base experiment, we test whether generative replay enables sequential learning while compromising performance on neither the old tasks nor a new task. In Section 4.1, we sequentially train the networks on independent tasks to examine the extent of forgetting. In Section 4.2, we train the networks on two different yet related domains. We demonstrate that generative replay not only enables continual learning with our design of the scholar network but is also compatible with other known structures. In Section 4.3, we show that our scholar network can gather knowledge from different tasks to perform a meta-task, by training the network on disjoint subsets of the training data.

We compare the performance of the solver trained with variants of replay methods. Our model with generative replay is denoted in the figures as GR. We specify the upper bound by assuming the situation where the generator is perfect; in this case we replay actual past data paired with the predicted targets from the old solver network, denoted ER for exact replay. We also consider the opposite case, where the generated samples do not resemble the real distribution at all; this case is denoted Noise. A baseline of a naively trained solver network is denoted None. We use the same notation throughout this section.

4.1 Learning independent tasks

The most common experimental formulation used in the continual learning literature [34, 18] is a simple image classification problem where the inputs are images from the MNIST handwritten digit database [19], but the pixel values of the inputs are shuffled by a random permutation sequence unique to each task. The solver has to classify the permuted inputs into the original classes. Since most, if not all, pixels are switched between the tasks, the tasks are technically independent of each other, which makes this setting a good measure of the memory retention strength of a network.
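To be concrete about the setup, here is a small sketch of the task construction (with a stand-in array in place of real MNIST images):

```python
import numpy as np

def permuted_task(images, seed):
    """One 'permuted MNIST' task: apply a fixed random pixel permutation,
    unique to the task, to every flattened image."""
    perm = np.random.default_rng(seed).permutation(images.shape[1])
    return images[:, perm]

images = np.random.rand(8, 784)  # stand-in for (N, 784) flattened MNIST digits
tasks = [permuted_task(images, seed=t) for t in range(3)]  # one task per seed
```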
Figure 2: Results on MNIST pixel permutation tasks; panels (a) and (b). (a) Test performance on each task during sequential training. Performance on previous tasks dropped without replaying real or meaningful fake data. (b) Average test accuracy on the learnt tasks. Higher accuracy is achieved when the replayed inputs better resemble the real data.

We observed that generative replay maintains past knowledge by recalling former task data. In Figure 2(a), the solver with generative replay (orange) maintained the former task performances throughout sequential training on multiple tasks, in contrast to the naively trained solver (violet). The average accuracy measured on cumulative tasks is illustrated in Figure 2(b). While the solver with generative replay achieved almost full performance on trained tasks, sequential training of the solver alone incurred catastrophic forgetting (violet). Replaying random Gaussian noise paired with recorded responses did not help temper the performance loss (pink).

4.2 Learning new domains

Training independent tasks on the same network is inefficient because no information is shared. We thus demonstrate the merit of our model in more reasonable settings where the model benefits from solving multiple tasks.

A model operating in multiple domains has several advantages over a model that only works in a single domain. First, the knowledge of one domain can help better and faster understanding of other domains if the domains are not completely independent. Second, generalization over multiple domains may result in more universal knowledge that is applicable to unseen domains. Such a phenomenon is also observed in infants learning to categorize objects [3, 4]. Encountering similar but diverse objects, young children can infer the properties shared within the category, and can make a guess at which category a new object may belong to.

We tested whether the model can incorporate the knowledge of a new domain with generative replay. In particular, we sequentially trained our model on classifying the MNIST and Street View House Numbers (SVHN) datasets [25], and vice versa. Experimental details are provided in the supplementary materials.

Figure 3: Accuracy on classifying samples from two different domains; panels (a) MNIST to SVHN and (b) SVHN to MNIST. (a) The models are trained on MNIST and then on the SVHN dataset, or (b) vice versa. When the previous data are recalled by generative replay (orange), knowledge of the first domain is retained as if the real inputs with predicted responses were replayed (green). Sequential training of the solver alone incurs forgetting in the former domain, thereby resulting in low average performance (violet).

Figure 4: Samples from the trained generator in the MNIST-to-SVHN experiment after training on the SVHN dataset for 1000, 2000, 5000, 10000, and 20000 iterations. The samples are diverted into ones that mimic either SVHN or MNIST input images.

Figure 3 illustrates the performance on the original task (thick curves) and the new task (dim curves). A solver trained alone lost its performance on the old task when no data were replayed (purple). Since MNIST and SVHN input data share a similar spatial structure, the performance on the former task did not drop to zero, yet the decline was critical.
In contrast, the solver with generative replay (orange) maintained its performance on the first task while accomplishing the second one. The results were no worse than replaying past real inputs paired with predicted responses from the old solver (green). In both cases, the model trained without any replay data achieved slightly better performance on the new task, as the network was solely optimized to solve the second task.

Generative replay is compatible with other continual learning models as well. For instance, Learning without Forgetting (LwF), which replays current task inputs to evoke past knowledge, can be augmented with generative models that produce samples similar to former task inputs. Because LwF requires the context information of which task is being performed in order to use its task-specific output layers, we tested the performance separately on each task. Note that our scholar model with generative replay does not need the task context. In Figure 5, we compare the performance of the LwF algorithm with a variant LwF-GR, in which task-specific generated inputs are fed in to maintain the older network's responses. We used the same training regime as proposed in the original literature, namely warming up the new network head for some amount of time and then fine-tuning the whole network. The solver trained with the original LwF algorithm loses performance on the first task when fine-tuning begins, due to alteration of the shared network (green). However, with generative replay, the network maintains most of the past knowledge (orange).

Figure 5: Performance of LwF and LwF augmented with generative replay (LwF-GR) on classifying samples from each domain. The networks were trained on SVHN and then on the MNIST database. Test accuracy on the SVHN classification task (thick curves) dropped when the shared parameters were fine-tuned, but generative replay greatly tempered the loss (orange). Both networks achieved high accuracy on MNIST classification (dim curves).

4.3 Learning new classes

To illustrate that generative replay can recollect past knowledge even when the inputs and targets are highly biased between the tasks, we propose a new experiment in which the network is sequentially trained on disjoint data. In particular, we assume a situation where the agent can access examples of only a few classes at a time. The agent eventually has to correctly classify examples from all classes after being sequentially trained on mutually exclusive subsets of classes (a sketch of the split follows below).

We tested the networks on the MNIST handwritten digit database. Note that training artificial neural networks independently on classes is difficult in standard settings, as the network responses may change to match the new target distribution. Hence, replaying inputs and outputs that represent the former input and target distributions is necessary to train a balanced network.

We thus compare the variants described earlier in this section from the perspective of whether the input and target distributions of the cumulative real data are recovered. For the ER and GR models, both the input and target distributions represent the cumulative distribution. The Noise model maintains the cumulative target distribution, but its input distribution only mirrors the current distribution. The None model has the current distribution for both.

Figure 6: The models were sequentially trained on 5 tasks, where each task is defined as classifying MNIST images belonging to 2 out of the 10 labels. In this case, the networks are given examples of 0 and 1 during the first task, 2 and 3 for the second, and so on. Only our networks achieved test performance close to the upper bound.
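The class split described above can be sketched as follows (indices only; loading MNIST itself is omitted):

```python
import numpy as np

def disjoint_class_tasks(labels, classes_per_task=2):
    """Split a labeled dataset into tasks over mutually exclusive class
    subsets, e.g. digits {0,1}, {2,3}, ..., {8,9} as in this experiment."""
    classes = np.unique(labels)
    return [np.where(np.isin(labels, classes[t:t + classes_per_task]))[0]
            for t in range(0, len(classes), classes_per_task)]

labels = np.repeat(np.arange(10), 3)                     # toy stand-in labels
print([len(ix) for ix in disjoint_class_tasks(labels)])  # [6, 6, 6, 6, 6]
```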
In Figure 6, we divided the MNIST dataset into 5 disjoint subsets, each of which contains samples from only 2 classes. When the networks are sequentially trained on these subsets, we observed that a naively trained classifier completely forgot previous classes and only learned the new subset of data (purple). Recovering only the past output distribution without a meaningful input distribution did not help retain knowledge, as evidenced by the model with a noise generator (pink). When both the input and output distributions are reconstructed, generative replay evoked previously learnt classes, and the model was able to discriminate all encountered classes (orange).

Figure 7: Generated samples from the trained generator after tasks 1, 2, 3, 4, and 5. The generator is trained to reproduce the cumulative data distribution.

Because we assume that the past data are completely discarded, we trained the generator to mimic both the current inputs and the generated samples from the previous generator. The generator thus reproduces the cumulative input distribution of all encountered examples so far. As shown in Figure 7, the generated samples from the trained generator include examples equally from the encountered classes.

5 Discussion

We introduce the deep generative replay framework, which allows sequential learning on multiple tasks by generating and rehearsing fake data that mimic former training examples. The trained scholar model, comprising a generator and a solver, serves as a knowledge base of a task. Although we described a cascade of knowledge transfer between a sequence of scholar models, a small change in the formulation yields solutions to other topically relevant problems. For instance, if the previous scholar model is just a past copy of the same network, it can learn multiple tasks without explicitly partitioning the training procedure.

As comparable approaches, regularization methods such as EWC and careful training of the shared parameters as in LwF have shown that catastrophic forgetting can be alleviated by protecting the former knowledge of the network. However, regularization approaches constrain the network with additional loss terms for protecting weights, so they potentially suffer from a tradeoff between the performances on new and old tasks. To guarantee good performance on both tasks, one should train a network that is much larger than normally needed. Also, the network has to maintain the same structure throughout all tasks when the constraint is given specific to each parameter, as in EWC. The drawbacks of the LwF framework are also twofold: the performance highly depends on the relevance of the tasks, and the training time for one task increases linearly with the number of former tasks.

The deep generative replay mechanism benefits from the fact that it maintains the former knowledge solely with input-target pairs produced from the saved networks, so it allows easy balancing of the former and new task performances and flexible knowledge transfer. Most importantly, the network is jointly optimized towards the task objectives, and is hence guaranteed to achieve full performance when the former input spaces are recovered by the generator. One defect of the generative replay framework is that the efficacy of the algorithm heavily depends on the quality of the generator. Indeed, we observed some performance loss while training the model on the SVHN dataset in the same setting as employed in Section 4.3.
Detailed analysis is provided in the supplementary materials.

We acknowledge that EWC, LwF, and our method are not completely exclusive, as they contribute to memory retention at different levels. Nevertheless, each method poses some constraints on the training procedure or network configuration, and there is no straightforward mixture of any two frameworks. We believe a good mix of the three frameworks would give a better solution to this chronic problem in continual learning.

Future work on generative replay may extend to the reinforcement learning domain, or to the form of a continuously evolving network that maintains knowledge from a past copy of itself. Also, we expect that improvements in training deep generative models will directly aid the performance of the generative replay framework on more complex domains.

Acknowledgement

We would like to thank Hyunsoo Kim, Risto Vuorio, Joon Hyuk Yang, Junsik Kim and our reviewers for their valuable feedback and discussion that greatly assisted this research.

References

[1] W. C. Abraham and A. Robins. Memory retention: the synaptic stability versus plasticity dilemma. Trends in Neurosciences, 28(2):73-78, 2005.
[2] B. Ans and S. Rousset. Avoiding catastrophic forgetting by coupling two reverberating neural networks. Comptes Rendus de l'Académie des Sciences - Series III - Sciences de la Vie, 320(12):989-997, 1997.
[3] D. A. Baldwin, E. M. Markman, and R. L. Melartin. Infants' ability to draw inferences about nonobvious object properties: Evidence from exploratory play. Child Development, 64(3):711-728, 1993.
[4] M. H. Bornstein and M. E. Arterberry. The development of object categorization in young children: Hierarchical inclusiveness, age, perceptual attribute, and group versus individual analyses. Developmental Psychology, 46(2):350, 2010.
[5] T. G. Dietterich. Ensemble methods in machine learning. In International Workshop on Multiple Classifier Systems, pages 1-15. Springer, 2000.
[6] J. Fagot and R. G. Cook. Evidence for large long-term memory capacities in baboons and pigeons and its implications for learning and the evolution of cognition. Proceedings of the National Academy of Sciences, 103(46):17564-17567, 2006.
[7] R. M. French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128-135, 1999.
[8] H. Gelbard-Sagiv, R. Mukamel, M. Harel, R. Malach, and I. Fried. Internally generated reactivation of single neurons in human hippocampus during free recall. Science, 322(5898):96-101, 2008.
[9] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580-587, 2014.
[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
[11] I. J. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. CoRR, abs/1701.00160, 2017.
[12] I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.
[13] J. Grutzendler, N. Kasthuri, and W.-B. Gan. Long-term dendritic spine stability in the adult cortex. Nature, 420(6917):812-816, 2002.
[14] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.
[15] M. Hattori. A biologically inspired dual-network memory model for reduction of catastrophic forgetting. Neurocomputing, 134:262–268, 2014.
[16] G. E. Hinton and D. C. Plaut. Using fast weights to deblur old memories. In Proceedings of the Ninth Annual Conference of the Cognitive Science Society, pages 177–186, 1987.
[17] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[18] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.
[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[20] S.-W. Lee, J.-H. Kim, J.-W. Ha, and B.-T. Zhang. Overcoming catastrophic forgetting by incremental moment matching. arXiv preprint arXiv:1703.08475, 2017.
[21] Z. Li and D. Hoiem. Learning without forgetting. In European Conference on Computer Vision, pages 614–629. Springer, 2016.
[22] M. McCloskey and N. J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109–165, 1989.
[23] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[24] D. C. Mocanu, M. T. Vega, E. Eaton, P. Stone, and A. Liotta. Online contrastive divergence with generative replay: Experience replay without storing data. CoRR, abs/1610.05555, 2016.
[25] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, page 5, 2011.
[26] R. C. O'Reilly and K. A. Norman. Hippocampal and neocortical contributions to memory: Advances in the complementary learning systems framework. Trends in Cognitive Sciences, 6(12):505–510, 2002.
[27] J. O'Neill, B. Pleydell-Bouverie, D. Dupret, and J. Csicsvari. Play it again: reactivation of waking experience and memory. Trends in Neurosciences, 33(5):220–229, 2010.
[28] S. Ramirez, X. Liu, P.-A. Lin, J. Suh, M. Pignatelli, R. L. Redondo, T. J. Ryan, and S. Tonegawa. Creating a false memory in the hippocampus. Science, 341(6144):387–391, 2013.
[29] R. Ratcliff. Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions. Psychological Review, 97(2):285–308, 1990.
[30] A. Robins. Catastrophic forgetting in neural networks: the role of rehearsal mechanisms. In Artificial Neural Networks and Expert Systems, 1993. Proceedings., First New Zealand International Two-Stream Conference on, pages 65–68. IEEE, 1993.
[31] A. Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):123–146, 1995.
[32] R. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In Artificial Intelligence and Statistics, pages 448–455, 2009.
[33] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[34] R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, and J. Schmidhuber. Compete to compute. In Advances in Neural Information Processing Systems, pages 2310–2318, 2013.
[35] R. Stickgold and M. P. Walker. Sleep-dependent memory consolidation and reconsolidation. Sleep Medicine, 8(4):331–343, 2007.
AIDE: An algorithm for measuring the accuracy of probabilistic inference algorithms

Marco F. Cusumano-Towner, Probabilistic Computing Project, Massachusetts Institute of Technology, [email protected]
Vikash K. Mansinghka, Probabilistic Computing Project, Massachusetts Institute of Technology, [email protected]

Abstract

Approximate probabilistic inference algorithms are central to many fields. Examples include sequential Monte Carlo inference in robotics, variational inference in machine learning, and Markov chain Monte Carlo inference in statistics. A key problem faced by practitioners is measuring the accuracy of an approximate inference algorithm on a specific data set. This paper introduces the auxiliary inference divergence estimator (AIDE), an algorithm for measuring the accuracy of approximate inference algorithms. AIDE is based on the observation that inference algorithms can be treated as probabilistic models and the random variables used within the inference algorithm can be viewed as auxiliary variables. This view leads to a new estimator for the symmetric KL divergence between the approximating distributions of two inference algorithms. The paper illustrates the application of AIDE to algorithms for inference in regression, hidden Markov, and Dirichlet process mixture models. The experiments show that AIDE captures the qualitative behavior of a broad class of inference algorithms and can detect failure modes of inference algorithms that are missed by standard heuristics.

1 Introduction

Approximate probabilistic inference algorithms are central to diverse disciplines, including statistics, robotics, machine learning, and artificial intelligence. Popular approaches to approximate inference include sequential Monte Carlo, variational inference, and Markov chain Monte Carlo. A key problem faced by practitioners is measuring the accuracy of an approximate inference algorithm on a specific data set. The accuracy is influenced by complex interactions between the specific data set in question, the model family, the algorithm tuning parameters such as the number of iterations, and any associated proposal distributions and/or approximating variational family. Unfortunately, practitioners assessing the accuracy of inference have to rely on heuristics that are either brittle or specialized for one type of algorithm [1], or both. For example, log marginal likelihood estimates can be used to assess the accuracy of sequential Monte Carlo and variational inference, but these estimates can fail to significantly penalize an algorithm for missing a posterior mode. Expectations of probe functions do not assess the full approximating distribution, and they require design specific to each model.

This paper introduces an algorithm for estimating the symmetrized KL divergence between the output distributions of a broad class of exact and approximate inference algorithms. The key idea is that inference algorithms can be treated as probabilistic models and the random variables used within the inference algorithm can be viewed as latent variables. We show how sequential Monte Carlo, Markov chain Monte Carlo, rejection sampling, and variational inference can be represented in a common mathematical formalism based on two new concepts: generative inference models and meta-inference algorithms.
Using this framework, we introduce the Auxiliary Inference Divergence Estimator (AIDE), which estimates the symmetrized KL divergence between the output distributions of two inference algorithms that have both been endowed with a meta-inference algorithm. We also show that the conditional SMC update of Andrieu et al. [2] and the reverse AIS Markov chain of Grosse et al. [3] are both special cases of a 'generalized conditional SMC update', which we use as a canonical meta-inference algorithm for SMC. AIDE is a practical tool for measuring the accuracy of SMC and variational inference algorithms relative to gold-standard inference algorithms. Note that this paper does not provide a practical solution to the MCMC convergence diagnosis problem. Although in principle AIDE can be applied to MCMC, to do so in practice will require more accurate meta-inference algorithms for MCMC to be developed.

[Figure 1 schematic: a gold-standard inference algorithm (with N_g inference runs and M_g meta-inference runs) and a target inference algorithm, the algorithm being measured (with N_t inference runs and M_t meta-inference runs), feed into AIDE, which outputs a symmetrized KL divergence estimate D̂ ≈ DKL(gold-standard || target) + DKL(target || gold-standard).]
Figure 1: Using AIDE to estimate the accuracy of a target inference algorithm relative to a gold-standard inference algorithm. AIDE is a Monte Carlo estimator of the symmetrized Kullback-Leibler (KL) divergence between the output distributions of two inference algorithms. AIDE uses meta-inference: inference over the internal random choices made by an inference algorithm.

[Figure 2 plots: AIDE estimate (nats) against the number of particles (sequential Monte Carlo), 1 + the number of transitions (Metropolis-Hastings), and the number of gradient steps (variational inference), with curves for M_t = 10^0, 10^1, 10^3.]
Figure 2: AIDE applies to SMC, variational, and MCMC algorithms. Left: AIDE estimates for SMC converge to zero, as expected. Right: AIDE estimates for variational inference converge to a nonzero asymptote that depends on the variational family. Middle: The symmetrized divergence between MH and the posterior converges to zero, but AIDE over-estimates the divergence in expectation. Although increasing the number of meta-inference runs M_t reduces the bias of AIDE, AIDE is not yet practical for measuring MH accuracy due to inaccurate meta-inference for MH.

2 Background

Consider a generative probabilistic model with latent variables X and observed variables Y. We denote assignments to these variables by x ∈ X and y ∈ Y. Let p(x, y) denote the joint density of the generative model. The posterior density is p(x|y) := p(x, y)/p(y), where p(y) = ∫ p(x, y) dx is the marginal likelihood, or 'evidence'. Sampling-based approximate inference strategies, including Markov chain Monte Carlo (MCMC, [4, 5]), sequential Monte Carlo (SMC, [6]), annealed importance sampling (AIS, [7]) and importance sampling with resampling (SIR, [8, 9]), generate samples of the latent variables that are approximately distributed according to p(x|y).
Use of a sampling-based inference algorithm is often motivated by theoretical guarantees of exact convergence to the posterior in the limit of infinite computation (e.g. the number of transitions in a Markov chain, or the number of importance samples in SIR). However, how well the sampling distribution approximates the posterior distribution for finite computation is typically difficult to analyze theoretically or to estimate empirically with confidence.

Variational inference [10] explicitly minimizes the approximation error of the approximating distribution q_θ(x) over parameters θ of a variational family. The error is usually quantified using the Kullback-Leibler (KL) divergence from the approximation q_θ(x) to the posterior p(x|y), denoted DKL(q_θ(x) || p(x|y)). Unlike sampling-based approaches, variational inference does not generally give exact results for infinite computation, because the variational family does not include the posterior. Minimizing the KL divergence is performed by maximizing the 'evidence lower bound' (ELBO) L = log p(y) − DKL(q_θ(x) || p(x|y)) over θ. Since log p(y) is usually unknown, the actual error (the KL divergence) of a variational approximation is also unknown.

3 Estimating the symmetrized KL divergence between inference algorithms

This section defines our mathematical formalism for analyzing inference algorithms; shows how to represent SMC, MCMC, rejection sampling, and variational inference in this formalism; and introduces the Auxiliary Inference Divergence Estimator (AIDE), an algorithm for estimating the symmetrized KL divergence between two inference algorithms.

3.1 Generative inference models and meta-inference algorithms

We define an inference algorithm as a procedure that produces a single approximate posterior sample. Repeated runs of the algorithm give independent samples. For each inference algorithm, there is an 'output density' q(x) that represents the probability that the algorithm returns a given sample x on any given run of the algorithm. Note that q(x) depends on the observations y that define the inference problem, but we suppress that in the notation. The inference algorithm is accurate when q(x) ≈ p(x|y) for all x. We denote a sample produced by running the algorithm by x ~ q(x).

A naive simple Monte Carlo estimator of the KL divergence between the output distributions of two inference algorithms requires the output densities of both algorithms. However, it is typically intractable to compute the output densities of sampling-based inference algorithms like MCMC and SMC, because that would require marginalizing over all possible values that the random variables drawn during the algorithm could take. A similar difficulty arises when computing the marginal likelihood p(y) of a generative probabilistic model p(x, y). This suggests that we treat the inference algorithm as a probabilistic model, estimate its output density using ideas from marginal likelihood estimation, and use these estimates in a Monte Carlo estimator of the divergence. We begin by making the analogy between an inference algorithm and a probabilistic model explicit:

Definition 3.1 (Generative inference model). A generative inference model is a tuple (U, X, q) where q(u, x) is a joint density defined on U × X. A generative inference model models an inference algorithm if the output density of the inference algorithm is the marginal likelihood q(x) = ∫ q(u, x) du of the model for all x. An element u ∈ U represents a complete assignment to the internal random variables within the inference algorithm, and is called a 'trace'. The ability to simulate from q(u, x) is required, but the ability to compute the density q(u, x) is not.
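In code, Definition 3.1 (together with the meta-inference algorithm of Definition 3.2 below) suggests a minimal programming interface: an object that runs the inference algorithm while recording its internal random choices, and a companion object for guessing traces and evaluating ξ. The sketch below is our own illustration with hypothetical names, working in log space for numerical stability.

```python
from typing import Any, Tuple

class GenerativeInferenceModel:
    """Wraps an inference algorithm so that a run yields (trace u, output x)."""
    def run(self, rng) -> Tuple[Any, Any]:
        # Run the algorithm, record the internal random choices u and the
        # output x, i.e. draw (u, x) ~ q(u, x).
        raise NotImplementedError

class MetaInference:
    """A meta-inference algorithm (r, xi) for a generative inference model."""
    def sample_trace(self, x, rng) -> Any:
        # Draw u ~ r(u; x): guess how the algorithm could have produced x.
        raise NotImplementedError
    def log_xi(self, u, x) -> float:
        # Return log xi(u, x) = log Z + log q(u, x) - log r(u; x).
        raise NotImplementedError
```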
A simulation, denoted u, x ~ q(u, x), may be obtained by running the inference algorithm and recording the resulting trace u and output x. (The trace data structure could in principle be obtained by writing the inference algorithm in a probabilistic programming language like Church [11], but the computational overhead would be high.) A generative inference model can be understood as a generative probabilistic model where the u are the latent variables and the x are the observations. Note that two different generative inference models may use different representations for the internal random variables of the same inference algorithm. In practice, constructing a generative inference model from an inference algorithm amounts to defining the set of internal random variables. For marginal likelihood estimation in a generative inference model, we use a 'meta-inference' algorithm:

Definition 3.2 (Meta-inference algorithm). For a given generative inference model (U, X, q), a meta-inference algorithm is a tuple (r, ξ), where r(u; x) is a density on traces u ∈ U of the inference algorithm, indexed by outputs x ∈ X of the inference algorithm, and where ξ(u, x) is the following function of u and x, for some Z > 0:

    ξ(u, x) := Z · q(u, x) / r(u; x)    (1)

We require the ability to sample u ~ r(u; x) given a value for x, and the ability to evaluate ξ(u, x) given u and x. We call a procedure for sampling from r(u; x) a 'meta-inference sampler'. We do not require the ability to evaluate the density r(u; x). A meta-inference algorithm is considered accurate for a given x if r(u; x) ≈ q(u|x) for all u. Conceptually, a meta-inference sampler tries to answer the question 'how could my inference algorithm have produced this output x?'

Note that if it is tractable to evaluate the marginal likelihood q(x) of the generative inference model up to a normalizing constant, then it is not necessary to represent internal random variables for the inference algorithm, and a generative inference model can define the trace as an empty token u = () with U = {()}. In this case, the meta-inference algorithm has r(u; x) = 1 for all x and ξ(u, x) = Z q(x).

3.2 Examples

We now show how to construct generative inference models and corresponding meta-inference algorithms for SMC, AIS, MCMC, SIR, rejection sampling, and variational inference. The meta-inference algorithms for AIS, MCMC, and SIR are derived as special cases of a generic SMC meta-inference algorithm.

Sequential Monte Carlo. We consider a general class of SMC samplers introduced by Del Moral et al. [6], which can be used for approximate inference in both sequential state space and non-sequential models. We briefly summarize a slightly restricted variant of the algorithm here, and refer the reader to the supplement and Del Moral et al. [6] for full details. The SMC algorithm propagates P weighted particles through T steps, using proposal kernels k_t and multinomial resampling based on weight functions w_1(x_1) and w_t(x_{t-1}, x_t) for t > 1 that are defined in terms of 'backwards kernels' ℓ_t for t = 2 ... T. Let x_t^i, w_t^i and W_t^i denote the value, unnormalized weight, and normalized weight of particle i at time t, respectively. We define the output sample x of SMC as a single draw from the particle approximation at the final time step, which is obtained by sampling a particle index I_T ~ Categorical(W_T^{1:P}), where W_T^{1:P} denotes the vector of weights (W_T^1, ..., W_T^P), and then setting x ← x_T^{I_T}.
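As a concrete warm-up, the sketch below instantiates this machinery for the simplest member of the family, a single step with multinomial resampling (this is importance sampling with resampling, treated below as the T = 1 special case). It is our own illustrative sketch; log_joint and the proposal callables are assumed to be supplied by the user.

```python
import numpy as np
from scipy.special import logsumexp

def run_sir(log_joint, propose, log_proposal, P, rng):
    """Single-step SMC (SIR): return (trace u, output x) ~ q(u, x)."""
    particles = np.array([propose(rng) for _ in range(P)])
    log_w = np.array([log_joint(z) - log_proposal(z) for z in particles])
    W = np.exp(log_w - logsumexp(log_w))      # normalized weights
    I = rng.choice(P, p=W)                    # output particle index I_1
    return (particles, I), particles[I]

def sample_trace_sir(x, propose, P, rng):
    """Canonical meta-inference r(u; x): guess a trace that outputs x."""
    I = rng.integers(P)                       # which slot produced x
    particles = np.array([propose(rng) for _ in range(P)])
    particles[I] = x                          # plant the output in slot I
    return (particles, I)

def log_xi_sir(u, x, log_joint, log_proposal):
    """log xi(u, x) = log p(x, y) - log p_hat(y), with Z = 1, where
    p_hat(y) is the usual importance-sampling evidence estimate."""
    particles, _ = u
    log_w = np.array([log_joint(z) - log_proposal(z) for z in particles])
    log_phat = logsumexp(log_w) - np.log(len(particles))
    return log_joint(x) - log_phat
```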
The generative inference model uses traces of the form u = (x, a, I_T), where x contains the values of all particles at all time steps and where a (for 'ancestor') contains the index a_t^i ∈ {1 ... P} of the parent of particle x_{t+1}^i, for each particle i and each time step t = 1 ... T − 1. Algorithm 1 defines a canonical meta-inference sampler for this generative inference model that takes as input a latent sample x and generates an SMC trace u ~ r(u; x) as output. The meta-inference sampler first generates an ancestral trajectory of particles (x_1^{I_1}, x_2^{I_2}, ..., x_T^{I_T}) that terminates in the output sample x, by sampling sequentially from the backward kernels ℓ_t, starting from x_T^{I_T} = x. Next, it runs a conditional SMC update [2] conditioned on the ancestral trajectory. For this choice of r(u; x) and for Z = 1, the function ξ(u, x) is closely related to the marginal likelihood estimate p̂(y) produced by the SMC scheme: ξ(u, x) = p(x, y)/p̂(y). See the supplement for the derivation. (AIDE also applies to approximate inference algorithms for undirected probabilistic models; the marginal likelihood estimate is replaced with the estimate of the partition function.)

Algorithm 1 Generalized conditional SMC (a canonical meta-inference sampler for SMC)
Require: latent sample x, SMC parameters
  I_T ~ Uniform(1 ... P)
  x_T^{I_T} ← x
  for t ← T − 1 ... 1 do
    I_t ~ Uniform(1 ... P)
    x_t^{I_t} ~ ℓ_{t+1}(·; x_{t+1}^{I_{t+1}})        ▷ sample from backward kernel
  for i ← 1 ... P do
    if i ≠ I_1 then x_1^i ~ k_1(·)
    w_1^i ← w_1(x_1^i)
  for t ← 2 ... T do
    W_{t−1}^{1:P} ← w_{t−1}^{1:P} / (Σ_{i=1}^P w_{t−1}^i)
    for i ← 1 ... P do
      if i = I_t then a_{t−1}^i ← I_{t−1} else a_{t−1}^i ~ Categorical(W_{t−1}^{1:P})
      if i ≠ I_t then x_t^i ~ k_t(·; x_{t−1}^{a_{t−1}^i})
      w_t^i ← w_t(x_{t−1}^{a_{t−1}^i}, x_t^i)
  u ← (x, a, I_T)
  return u                                           ▷ return an SMC trace
[Figure beside Algorithm 1: a T = 3, P = 4 example showing the planted ancestral trajectory (x_1^{I_1}, x_2^{I_2}, x_3^{I_3} with I_1 = 1, I_2 = 3, I_3 = 2), the backward kernels ℓ_2 and ℓ_3, and the latent sample x that is the input to the meta-inference sampler.]

Annealed importance sampling. When a single particle is used (P = 1), and when each forward kernel k_t satisfies detailed balance for some intermediate density, the SMC algorithm simplifies to annealed importance sampling (AIS, [7]), and the canonical SMC meta-inference (Algorithm 1) consists of running the forward kernels in reverse order, as in the reverse annealing algorithm of Grosse et al. [3, 12]. The canonical meta-inference algorithm is accurate (r(u; x) ≈ q(u|x)) if the AIS Markov chain is kept close to equilibrium at all times. This is achieved if the intermediate densities form a sufficiently fine-grained sequence. See the supplement for the analysis.

Markov chain Monte Carlo. We define each run of an MCMC algorithm as producing a single output sample x that is the iterate of the Markov chain produced after a predetermined number of burn-in steps has passed. We also assume that each MCMC transition operator satisfies detailed balance with respect to the posterior p(x|y). Then, this is formally a special case of AIS. However, unless the Markov chain was initialized near the posterior p(x|y), the chain will be far from equilibrium during the burn-in period, and the AIS meta-inference algorithm will be inaccurate.
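The sketch below is an illustrative Python rendering of Algorithm 1 for scalar states, covering the general case that specializes to the AIS and MCMC settings above; the kernels k and ell and the log-weight function w are assumed to be supplied by the user, and this is our own sketch rather than the authors' implementation.

```python
import numpy as np

def conditional_smc_trace(x_out, T, P, k, ell, w, rng):
    """Sample an SMC trace u ~ r(u; x_out) around a planted output x_out.
    k(t, x_prev, rng) draws from the proposal kernel, ell(t, x_next, rng)
    from the backward kernel, and w(t, x_prev, x) returns a log weight."""
    xs = np.empty((T, P))                 # particle values x_t^i
    anc = np.zeros((T - 1, P), dtype=int) # ancestor indices a_t^i
    I = np.empty(T, dtype=int)            # planted trajectory indices I_t
    I[T - 1] = rng.integers(P)
    xs[T - 1, I[T - 1]] = x_out
    for t in range(T - 2, -1, -1):        # backward pass plants trajectory
        I[t] = rng.integers(P)
        xs[t, I[t]] = ell(t + 1, xs[t + 1, I[t + 1]], rng)
    log_w = np.empty((T, P))              # conditional SMC forward pass
    for i in range(P):
        if i != I[0]:
            xs[0, i] = k(0, None, rng)
        log_w[0, i] = w(0, None, xs[0, i])
    for t in range(1, T):
        W = np.exp(log_w[t - 1] - log_w[t - 1].max())
        W /= W.sum()
        for i in range(P):
            a = I[t - 1] if i == I[t] else rng.choice(P, p=W)
            anc[t - 1, i] = a
            if i != I[t]:
                xs[t, i] = k(t, xs[t - 1, a], rng)
            log_w[t, i] = w(t, xs[t - 1, a], xs[t, i])
    return xs, anc, I[T - 1]              # the trace u = (x, a, I_T)
```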
Importance sampling with resampling. Importance sampling with resampling, or SIR [8], can be seen as a special case of SMC if we set the number of steps to one (T = 1). The trace of the SIR algorithm is then the set of particles x_1^i for i ∈ {1, ..., P} and the output particle index I_1. Given output sample x, the canonical SMC meta-inference sampler simply samples I_1 ~ Uniform(1 ... P), sets x_1^{I_1} ← x, and samples the other P − 1 particles from the importance distribution k_1(x).

Rejection sampling. To model a rejection sampler for a posterior distribution p(x|y), we assume it is tractable to evaluate the unnormalized posterior density p(x, y). We define U = {()} as described in Section 3.1. For meta-inference, we define Z = p(y) so that ξ(u, x) = p(y) p(x|y) = p(x, y). It is not necessary to represent the internal random variables of the rejection sampler.

Variational inference. We suppose a variational approximation q_θ(x) has been computed through optimization over the variational parameters θ. We assume that it is possible to sample from the variational approximation and to evaluate its normalized density. Then, we use U = {()}, Z = 1, and ξ(u, x) = q_θ(x). Note that this formulation also applies to amortized variational inference algorithms, which reuse the parameters θ for inference across different observation contexts y.

3.3 The auxiliary inference divergence estimator

Consider a probabilistic model p(x, y), a set of observations y, and two inference algorithms that approximate p(x|y). One of the two inference algorithms is considered the 'gold-standard', and has a generative inference model (U, X, q_g) and a meta-inference algorithm (r_g, ξ_g). The second algorithm is considered the 'target' algorithm, with a generative inference model (V, X, q_t) (we denote a trace of the target algorithm by v ∈ V) and a meta-inference algorithm (r_t, ξ_t). This section shows how to estimate an upper bound on the symmetrized KL divergence between q_g(x) and q_t(x), which is:

    DKL(q_g(x) || q_t(x)) + DKL(q_t(x) || q_g(x)) = E_{x~q_g(x)}[log(q_g(x)/q_t(x))] + E_{x~q_t(x)}[log(q_t(x)/q_g(x))]    (2)

We take a Monte Carlo approach. Simple Monte Carlo applied to Equation (2) requires that q_g(x) and q_t(x) can be evaluated, which would prevent the estimator from being used when either inference algorithm is sampling-based. Algorithm 2 gives the Auxiliary Inference Divergence Estimator (AIDE), an estimator of the symmetrized KL divergence that only requires evaluation of ξ_g(u, x) and ξ_t(v, x), and not q_g(x) or q_t(x), permitting its use with sampling-based inference algorithms.

Algorithm 2 Auxiliary Inference Divergence Estimator (AIDE)
Require: gold-standard inference model and meta-inference algorithm (U, X, q_g) and (r_g, ξ_g);
  target inference model and meta-inference algorithm (V, X, q_t) and (r_t, ξ_t);
  number of runs N_g of the gold-standard algorithm and M_g of its meta-inference sampler;
  number of runs N_t of the target algorithm and M_t of its meta-inference sampler.
  for n ← 1 ... N_g do
    u_{n,1}, x_n ~ q_g(u, x)        ▷ run gold-standard algorithm, record trace u_{n,1} and output x_n
    for m ← 2 ... M_g do
      u_{n,m} ~ r_g(u; x_n)         ▷ run meta-inference sampler for gold-standard algorithm, on input x_n
    for m ← 1 ... M_t do
      v_{n,m} ~ r_t(v; x_n)         ▷ run meta-inference sampler for target algorithm, on input x_n
  for n ← 1 ... N_t do
    v′_{n,1}, x′_n ~ q_t(v, x)      ▷ run target algorithm, record trace v′_{n,1} and output x′_n
    for m ← 2 ... M_t do
      v′_{n,m} ~ r_t(v; x′_n)       ▷ run meta-inference sampler for target algorithm, on input x′_n
    for m ← 1 ... M_g do
      u′_{n,m} ~ r_g(u; x′_n)       ▷ run meta-inference sampler for gold-standard algorithm, on input x′_n
  D̂ ← (1/N_g) Σ_{n=1}^{N_g} log [ ((1/M_g) Σ_{m=1}^{M_g} ξ_g(u_{n,m}, x_n)) / ((1/M_t) Σ_{m=1}^{M_t} ξ_t(v_{n,m}, x_n)) ]
     + (1/N_t) Σ_{n=1}^{N_t} log [ ((1/M_t) Σ_{m=1}^{M_t} ξ_t(v′_{n,m}, x′_n)) / ((1/M_g) Σ_{m=1}^{M_g} ξ_g(u′_{n,m}, x′_n)) ]
  return D̂                          ▷ D̂ is an estimate of DKL(q_g(x) || q_t(x)) + DKL(q_t(x) || q_g(x))
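A direct transcription of Algorithm 2 in terms of the hypothetical interface sketched in Section 3.1 might look as follows; the averages over ξ values are computed in log space with logsumexp for numerical stability, and this is our own sketch rather than the authors' implementation.

```python
import numpy as np
from scipy.special import logsumexp

def aide(gold, gold_meta, target, target_meta, Ng, Nt, Mg, Mt, rng):
    """Monte Carlo sketch of Algorithm 2. `gold` and `target` expose
    run(rng) -> (u, x); the meta objects expose sample_trace(x, rng)
    and log_xi(u, x). Returns the symmetrized KL divergence estimate."""
    def half(model, meta, other_meta, N, M, M_other):
        total = 0.0
        for _ in range(N):
            u1, x = model.run(rng)
            # log of (1/M) sum_m xi(u_m, x), reusing the run's own trace
            lx = [meta.log_xi(u1, x)] + \
                 [meta.log_xi(meta.sample_trace(x, rng), x)
                  for _ in range(M - 1)]
            # log of (1/M_other) sum_m xi over the other algorithm's traces
            lx_other = [other_meta.log_xi(other_meta.sample_trace(x, rng), x)
                        for _ in range(M_other)]
            total += (logsumexp(lx) - np.log(M)) \
                   - (logsumexp(lx_other) - np.log(M_other))
        return total / N
    return half(gold, gold_meta, target_meta, Ng, Mg, Mt) \
         + half(target, target_meta, gold_meta, Nt, Mt, Mg)
```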
The generic AIDE algorithm above is defined in terms of abstract generative inference models and meta-inference algorithms. For concreteness, the supplement contains the AIDE algorithm specialized to the case where the gold-standard is AIS and the target is a variational approximation.

Theorem 1. The estimate D̂ produced by AIDE is an upper bound on the symmetrized KL divergence in expectation, and the expectation is nonincreasing in the AIDE parameters M_g and M_t.

See the supplement for the proof. Briefly, AIDE estimates an upper bound on the symmetrized divergence in expectation because it uses unbiased estimates of q_t(x_n) and q_g(x_n)^{-1} for x_n ~ q_g(x), and unbiased estimates of q_g(x′_n) and q_t(x′_n)^{-1} for x′_n ~ q_t(x). For M_g = 1 and M_t = 1, AIDE over-estimates the true symmetrized divergence by the following bias:

    E[D̂] − (DKL(q_g(x) || q_t(x)) + DKL(q_t(x) || q_g(x)))
      = E_{x~q_g(x)}[DKL(q_g(u|x) || r_g(u; x)) + DKL(r_t(v; x) || q_t(v|x))]
      + E_{x~q_t(x)}[DKL(q_t(v|x) || r_t(v; x)) + DKL(r_g(u; x) || q_g(u|x))]    (3)

Note that this expression involves KL divergences between the meta-inference sampling densities (r_g(u; x) and r_t(v; x)) and the posteriors in their respective generative inference models (q_g(u|x) and q_t(v|x)). Therefore, the approximation error of meta-inference determines the bias of AIDE. When both meta-inference algorithms are exact (r_g(u; x) = q_g(u|x) for all u and x, and r_t(v; x) = q_t(v|x) for all v and x), AIDE is unbiased. As M_g or M_t are increased, the bias decreases (see Figure 2 and Figure 4 for examples). If the generative inference model for one of the algorithms does not use a trace (i.e. U = {()} or V = {()}), then that algorithm does not contribute a KL divergence term to the bias in Equation (3). The analysis of AIDE is equivalent to that of Grosse et al. [12] when the target algorithm is AIS, M_t = M_g = 1, and the gold-standard inference algorithm is a rejection sampler.

[Figure 3 plots: left, a bimodal posterior density with kernel estimates of SIR output densities for two proposals; middle, AIDE estimate (nats) vs. number of particles; right, log marginal likelihood estimates (nats) vs. number of particles.]
Figure 3: AIDE detects when an inference algorithm misses a posterior mode. Left: A bimodal posterior density, with kernel estimates of the output densities of importance sampling with resampling (SIR) using two proposals. The 'broad' proposal (blue) covers both modes, and the 'offset' proposal (pink) misses the 'L' mode. Middle: AIDE detects the missing mode in offset-proposal SIR. Right: Log marginal likelihood estimates suggest that the offset-proposal SIR is nearly converged.

4 Related Work

Diagnosing the convergence of approximate inference is a long-standing problem. Most existing work is either tailored to specific inference algorithms [13], designed to detect lack of exact convergence [1], or both. Estimators of the non-asymptotic approximation error of general approximate inference algorithms have received less attention. Gorham and Mackey [14] propose an approach that applies to arbitrary sampling algorithms but relies on special properties of the posterior density such as log-concavity. Our approach does not rely on special properties of the posterior distribution.
Our work is most closely related to Bounding Divergences with REverse Annealing (BREAD, [12]), which also estimates upper bounds on the symmetric KL divergence between the output distribution of a sampling algorithm and the posterior distribution. AIDE differs from BREAD in two ways. First, whereas BREAD handles single-particle SMC samplers and annealed importance sampling (AIS), AIDE handles a substantially broader family of inference algorithms, including SMC samplers with both resampling and rejuvenation steps, AIS, variational inference, and rejection samplers. Second, BREAD estimates divergences between the target algorithm's sampling distribution and the posterior distribution, but the exact posterior samples necessary for BREAD's theoretical properties are only readily available when the observations y that define the inference problem are simulated from the generative model. Instead, AIDE estimates divergences against an exact or approximate gold-standard sampler on real (non-simulated) inference problems. Unlike BREAD, AIDE can be used to evaluate inference in both generative and undirected models.

AIDE estimates the error of sampling-based inference using a mathematical framework with roots in variational inference. Several recent works have treated sampling-based inference algorithms as variational approximations. The Monte Carlo Objective (MCO) formalism of Maddison et al. [15] is closely related to our formalism of generative inference models and meta-inference algorithms: indeed, a generative inference model and a meta-inference algorithm with Z = 1 give an MCO defined by L(y, p) = E_{u,x~q(u,x)}[log(p(x, y)/ξ(u, x))], where y denotes observed data. In independent and concurrent work to our own, Naesseth et al. [16], Maddison et al. [15] and Le et al. [17] treat SMC as a variational approximation using constructions similar to ours. In earlier work, Salimans et al. [18] recognized that MCMC samplers can be treated as variational approximations. However, these works are concerned with optimization of variational objective functions instead of estimation of KL divergences, and do not involve generating a trace of a sampler from its output.

5 Experiments

5.1 Comparing the bias of AIDE for different types of inference algorithms

We used a Bayesian linear regression inference problem, where exact posterior sampling is tractable, to characterize the bias of AIDE when applied to three different types of target inference algorithms: sequential Monte Carlo (SMC), Metropolis-Hastings (MH), and variational inference. For the gold-standard algorithm we used a posterior sampler with a tractable output density q_g(x), which does not introduce bias into AIDE, so that AIDE's bias could be completely attributed to the approximation error of meta-inference for each target algorithm. Figure 2 shows the results. The bias of AIDE is acceptable for SMC, and AIDE is unbiased for variational inference, but better meta-inference algorithms for MCMC are needed to make AIDE practical for estimating the accuracy of MH.

5.2 Evaluating approximate inference in a hidden Markov model

We applied AIDE to measure the approximation error of SMC algorithms for posterior inference in a hidden Markov model (HMM). Because exact posterior inference in this HMM is tractable via dynamic programming, we used this opportunity to compare AIDE estimates obtained using the exact posterior as the gold-standard with AIDE estimates obtained using a 'best-in-class' SMC algorithm as the gold-standard. Figure 4 shows the results, which indicate that AIDE estimates using an approximate gold-standard algorithm can be nearly identical to AIDE estimates obtained with an exact posterior gold-standard.
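For concreteness, the earlier sketches can be wired together on a toy problem. The following is an entirely hypothetical check in the spirit of these experiments (not one of the paper's experiments), comparing a many-particle SIR gold-standard against a few-particle SIR target on a one-dimensional Gaussian posterior, reusing the aide sketch above.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

class SIRAlgorithm:
    """SIR as a generative inference model plus its own meta-inference,
    for a toy N(1, 0.5^2) posterior with a N(0, 2^2) proposal."""
    def __init__(self, P):
        self.P = P
    def _log_w(self, xs):
        return norm.logpdf(xs, 1.0, 0.5) - norm.logpdf(xs, 0.0, 2.0)
    def run(self, rng):
        xs = rng.normal(0.0, 2.0, self.P)
        lw = self._log_w(xs)
        W = np.exp(lw - logsumexp(lw))
        I = rng.choice(self.P, p=W)
        return (xs, I), xs[I]
    def sample_trace(self, x, rng):
        xs = rng.normal(0.0, 2.0, self.P)
        I = rng.integers(self.P)
        xs[I] = x
        return (xs, I)
    def log_xi(self, u, x):
        xs, _ = u
        log_phat = logsumexp(self._log_w(xs)) - np.log(self.P)
        return norm.logpdf(x, 1.0, 0.5) - log_phat

rng = np.random.default_rng(0)
gold, target = SIRAlgorithm(P=1000), SIRAlgorithm(P=5)
d_hat = aide(gold, gold, target, target, Ng=200, Nt=200, Mg=1, Mt=1, rng=rng)
print(f"estimated symmetrized KL bound: {d_hat:.3f} nats")
```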
[Figure 4 panels: ground-truth latent states and posterior marginals of the HMM (state vs. time); output marginals for target SMC algorithms A (prior proposal, 1 particle), B (prior proposal, 10 particles), and C (optimal proposal, 100 particles) and for the SMC gold-standard (optimal proposal, 1000 particles); and AIDE estimates (nats) vs. number of particles, measured against both the exact posterior gold-standard and the SMC gold-standard, for prior and optimal proposals with M_t = 1 and M_t = 100 meta-inference runs.]
Figure 4: Comparing use of an exact posterior as the gold-standard and a 'best-in-class' approximate algorithm as the gold-standard, when measuring the accuracy of target inference algorithms with AIDE. We consider inference in an HMM, so that exact posterior sampling is tractable using dynamic programming. Left: Ground-truth latent states, posterior marginals, and marginals of the output of a gold-standard and three target SMC algorithms (A, B, C) for a particular observation sequence. Right: AIDE estimates using the exact gold-standard and using the SMC gold-standard are nearly identical. The estimated divergence bounds decrease as the number of particles in the target sampler increases. The optimal proposal outperforms the prior proposal. Increasing M_t tightens the estimated divergence bounds. We used M_g = 1.

[Figure 5 panels: left, AIDE estimates (nats) vs. number of particles, where likelihood weighting with 1 particle appears least accurate; right, a heuristic diagnostic (average number of clusters) vs. number of particles, under which the same algorithm appears accurate; curves for SMC with prior proposal (0 rejuvenation sweeps), SMC with optimal proposal (0 and 4 rejuvenation sweeps), and the gold-standard.]
Figure 5: Contrasting AIDE against a heuristic convergence diagnostic for evaluating the accuracy of approximate inference in a Dirichlet process mixture model (DPMM). The heuristic compares the expected number of clusters under the target algorithm to the expectation under the gold-standard algorithm [19]. White circles identify single-particle likelihood-weighting, which samples from the prior. AIDE clearly indicates that single-particle likelihood-weighting is inaccurate, but the heuristic suggests it is accurate. Probe functions like the expected number of clusters can be error-prone measures of convergence because they only track convergence along a specific projection of the distribution. In contrast, AIDE estimates a joint KL divergence. Shaded areas in both plots show the standard error. The amount of target inference computation used is the same for the two techniques, although AIDE performs a gold-standard meta-inference run for each target inference run.

5.3 Comparing AIDE to alternative inference evaluation techniques

A key feature of AIDE is that it applies to different types of inference algorithms.
We compared AIDE to two existing techniques for evaluating the accuracy of inference algorithms that share this feature: (1) comparing log marginal likelihood (LML) estimates made by a target algorithm against LML estimates made by a gold-standard algorithm, and (2) comparing the expectation of a probe function under the approximating distribution to the same expectation under the gold-standard distribution [19]. Figure 3 shows a comparison of AIDE to LML on an inference problem where the posterior is bimodal. Figure 5 shows a comparison of AIDE to a 'number of clusters' probe function on a Dirichlet process mixture model inference problem for a synthetic data set. We also used AIDE to evaluate the accuracy of several SMC algorithms for DPMM inference on a real data set of galaxy velocities [20], relative to an SMC gold-standard. This experiment is described in the supplement due to space constraints.

6 Discussion

AIDE makes it practical to estimate bounds on the error of a broad class of approximate inference algorithms, including sequential Monte Carlo (SMC), annealed importance sampling (AIS), sampling importance resampling (SIR), and variational inference. AIDE's reliance on a gold-standard inference algorithm raises two questions that merit discussion:

If we already had an acceptable gold-standard, why would we want to evaluate other inference algorithms? Gold-standard algorithms such as very long MCMC runs, SMC runs with hundreds of thousands of particles, or AIS runs with a very fine annealing schedule are often too slow to use in production. AIDE makes it possible to use gold-standard algorithms during an offline design and evaluation phase to quantitatively answer questions like 'how few particles or rejuvenation steps or samples can I get away with?' or 'is my fast variational approximation good enough?'. AIDE can thus help practitioners confidently apply Monte Carlo techniques in challenging, performance-constrained applications, such as probabilistic robotics or web-scale machine learning. In future work we think it will be valuable to build probabilistic models of AIDE estimates, conditioned on features of the data set, to learn offline which problem instances are easy or hard for different inference algorithms. This may help practitioners bridge the gap between offline evaluation and production more rigorously.

How do we ensure that the gold-standard is accurate enough for the comparison with it to be meaningful? This is an intrinsically hard problem: we are not sure that near-exact posterior inference is really feasible for most interesting classes of models. In practice, we think that gold-standard inference algorithms will be calibrated based on a mix of subjective assumptions and heuristic testing, much like models themselves are tested. For example, users could initially build confidence in a gold-standard algorithm by estimating the symmetric KL divergence from the posterior on simulated data sets (following the approach of Grosse et al. [12]), and then use AIDE with the trusted gold-standard for a focused evaluation of target algorithms on real data sets of interest. We do not think the subjectivity of the gold-standard assumption is a unique limitation of AIDE.

A limitation of AIDE is that its bias depends on the accuracy of meta-inference, i.e. inference over the auxiliary random variables used by an inference algorithm.
We currently lack an accurate meta-inference algorithm for MCMC samplers that do not employ annealing, and therefore AIDE is not yet suitable for use as a general MCMC convergence diagnostic. Research on new meta-inference algorithms for MCMC, and comparisons to standard convergence diagnostics [21, 22], are needed. Other areas for future work include understanding how the accuracy of meta-inference depends on the parameters of an inference algorithm and, more generally, what makes an inference algorithm amenable to efficient meta-inference. Note that AIDE does not rely on asymptotic exactness of the inference algorithm being evaluated. An interesting area of future work is in using AIDE to study the non-asymptotic error of scalable but asymptotically biased sampling algorithms [23]. It also seems fruitful to connect AIDE to results from theoretical computer science, including the computability [24] and complexity [25-28] of probabilistic inference. It should be possible to study the computational tractability of approximate inference empirically using AIDE estimates, as well as theoretically using a careful treatment of the variance of these estimates. It also seems promising to use ideas from AIDE to develop Monte Carlo program analyses for samplers written in probabilistic programming languages.

Acknowledgments

This research was supported by DARPA (PPAML program, contract number FA8750-14-2-0004), IARPA (under research contract 2015-15061000003), the Office of Naval Research (under research contract N000141310333), the Army Research Office (under agreement number W911NF-13-10212), and gifts from Analog Devices and Google. This research was conducted with Government support under and awarded by DoD, Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a.

References

[1] Mary Kathryn Cowles and Bradley P Carlin. Markov chain Monte Carlo convergence diagnostics: a comparative review. Journal of the American Statistical Association, 91(434):883–904, 1996.
[2] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010.
[3] Roger B Grosse, Zoubin Ghahramani, and Ryan P Adams. Sandwiching the marginal likelihood using bidirectional Monte Carlo. arXiv preprint arXiv:1511.02543, 2015.
[4] Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092, 1953.
[5] W Keith Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, 1970.
[6] Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3):411–436, 2006.
[7] Radford M Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[8] Donald B Rubin et al. Using the SIR algorithm to simulate posterior distributions. Bayesian Statistics, 3(1):395–402, 1988.
[9] Adrian FM Smith and Alan E Gelfand. Bayesian statistics without tears: a sampling–resampling perspective. The American Statistician, 46(2):84–88, 1992.
[10] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[11] Noah Goodman, Vikash Mansinghka, Daniel M Roy, Keith Bonawitz, and Joshua Tenenbaum. Church: a language for generative models with non-parametric memoization and approximate inference. In Uncertainty in Artificial Intelligence, 2008.
[12] Roger B Grosse, Siddharth Ancha, and Daniel M Roy. Measuring the reliability of MCMC inference with bidirectional Monte Carlo. In Advances in Neural Information Processing Systems, pages 2451–2459, 2016.
[13] Augustine Kong. A note on importance sampling using standardized weights. University of Chicago, Dept. of Statistics, Tech. Rep, 348, 1992.
[14] Jackson Gorham and Lester Mackey. Measuring sample quality with Stein's method. In Advances in Neural Information Processing Systems, pages 226–234, 2015.
[15] Chris J Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Whye Teh. Filtering variational objectives. arXiv preprint arXiv:1705.09279, 2017.
[16] Christian A Naesseth, Scott W Linderman, Rajesh Ranganath, and David M Blei. Variational sequential Monte Carlo. arXiv preprint arXiv:1705.11140, 2017.
[17] Tuan Anh Le, Maximilian Igl, Tom Jin, Tom Rainforth, and Frank Wood. Auto-encoding sequential Monte Carlo. arXiv preprint arXiv:1705.10306, 2017.
[18] Tim Salimans, Diederik Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1218–1226, 2015.
[19] Yener Ulker, Bilge Günsel, and Taylan Cemgil. Sequential Monte Carlo samplers for Dirichlet process mixtures. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 876–883, 2010.
[20] Michael J Drinkwater, Quentin A Parker, Dominique Proust, Eric Slezak, and Hernán Quintana. The large scale distribution of galaxies in the Shapley supercluster. Publications of the Astronomical Society of Australia, 21(1):89–96, 2004.
[21] Andrew Gelman and Donald B Rubin. Inference from iterative simulation using multiple sequences. Statistical Science, pages 457–472, 1992.
[22] John Geweke. Getting it right: Joint distribution tests of posterior simulators. Journal of the American Statistical Association, 99(467):799–804, 2004.
[23] Elaine Angelino, Matthew James Johnson, Ryan P Adams, et al. Patterns of scalable Bayesian inference. Foundations and Trends in Machine Learning, 9(2-3):119–247, 2016.
[24] Nathanael L Ackerman, Cameron E Freer, and Daniel M Roy. On the computability of conditional probability. arXiv preprint arXiv:1005.3014, 2010.
[25] Cameron E Freer, Vikash K Mansinghka, and Daniel M Roy. When are probabilistic programs probably computationally tractable? In NIPS Workshop on Advanced Monte Carlo Methods with Applications, 2010.
[26] Jonathan H Huggins and Daniel M Roy. Convergence of sequential Monte Carlo-based sampling methods. arXiv preprint arXiv:1503.00966, 2015.
[27] Sourav Chatterjee and Persi Diaconis. The sample size required in importance sampling. arXiv preprint arXiv:1511.01437, 2015.
[28] S Agapiou, Omiros Papaspiliopoulos, D Sanz-Alonso, AM Stuart, et al. Importance sampling: Intrinsic dimension and computational cost. Statistical Science, 32(3):405–431, 2017.
Learning Causal Structures Using Regression Invariance

AmirEmad Ghassami†‡, Saber Salehkaleybar‡, Negar Kiyavash†‡, Kun Zhang⋆
† Department of ECE, University of Illinois at Urbana-Champaign, Urbana, USA.
‡ Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, USA.
⋆ Department of Philosophy, Carnegie Mellon University, Pittsburgh, USA.
{ghassam2,sabersk,kiyavash}@illinois.edu, [email protected]

Abstract

We study causal discovery in a multi-environment setting, in which the functional relations for producing the variables from their direct causes remain the same across environments, while the distribution of exogenous noises may vary. We introduce the idea of using the invariance of the functional relations of the variables to their causes across a set of environments for structure learning. We define a notion of completeness for a causal inference algorithm in this setting and prove the existence of such an algorithm by proposing the baseline algorithm. Additionally, we present an alternate algorithm that has significantly improved computational and sample complexity compared to the baseline algorithm. Experiment results show that the proposed algorithm outperforms the other existing algorithms.

1 Introduction

Causal structure learning is a fundamental problem in machine learning with applications in multiple fields such as biology, economics, epidemiology, and computer science. When performing interventions in the system is not possible or too expensive (observation-only setting), the main approach to identifying the direction of influences and learning the causal structure is to run a constraint-based or a score-based causal discovery algorithm over the data. In this case, a "complete" observational algorithm allows learning the causal structure to the extent possible, which is the Markov equivalence class of the ground-truth structure. When the experimenter is capable of intervening in the system to see the effect of varying one variable on the other variables (interventional setting), the causal structure can be learned exactly. In this setting, the most common identification procedure considers the variables whose distributions have varied to be the descendants of the intervened variable, and hence the causal structure is reconstructed by performing interventions on different variables in the system [4, 11]. However, due to issues such as cost constraints and the infeasibility of performing certain interventions, the experimenter is usually not capable of performing arbitrary interventions.

In many real-life systems, due to changes in the variables of the environment, the data generating distribution varies over time. Considering the setup after each change as a new environment, our goal is to exploit the differences across environments to learn the underlying causal structure. We consider a multi-environment setting, in which the functional relations for producing the variables from their parents remain the same across environments, while the distribution of exogenous noises may vary. Note that the standard interventional setting can be viewed as a special case of the multi-environment setting in which the location and distribution of the changes across environments are designed by the experimenter. Furthermore, as will be seen in Figure 1(a), there are cases where the ordinary interventional approaches cannot take advantage of changes across environments, while these changes can be utilized to learn the causal structure uniquely.
The multi-environment setting was also studied in [35, 23, 37]; we put our work into perspective relative to these in the Related Work below.

We focus on linear structural equation models (SEMs) with additive noise [1] as the underlying data generating model (see Section 2 for details). Note that this model is one of the most problematic models in the literature of causal inference: if the noises follow a Gaussian distribution, then for many structures none of the existing observational approaches can identify the underlying causal structure uniquely¹. The main idea in our proposed approach is to utilize the changes in the regression coefficients, resulting from the changes across the environments, to distinguish causes from effects. Our approach is able to identify causal structures that were not identifiable using observational approaches, from information that was not usable in existing interventional approaches.

Figure 1 shows two simple examples that illustrate this point. In this figure, a directed edge from variable $X_i$ to $X_j$ implies that $X_i$ is a direct cause of $X_j$, and a change of an exogenous noise across environments is denoted by the flash sign. [Figure 1: Simple examples of identifiable structures using the proposed approach.]

Consider the structure in Figure 1(a), with equations $X_1 = N_1$ and $X_2 = aX_1 + N_2$, where $N_1 \sim \mathcal{N}(0, \sigma_1^2)$ and $N_2 \sim \mathcal{N}(0, \sigma_2^2)$ are independent mean-zero Gaussian exogenous noises. Suppose we are interested in finding out which variable is the cause and which is the effect, and we are given two environments across which the exogenous noises of both $X_1$ and $X_2$ are varied. Denoting the regression coefficient resulting from regressing $X_i$ on $X_j$ by $\beta_{X_j}(X_i)$, in this case we have
$$\beta_{X_2}(X_1) = \frac{\mathrm{Cov}(X_1, X_2)}{\mathrm{Var}(X_2)} = \frac{a\sigma_1^2}{a^2\sigma_1^2 + \sigma_2^2}, \qquad \beta_{X_1}(X_2) = \frac{\mathrm{Cov}(X_1, X_2)}{\mathrm{Var}(X_1)} = a.$$
Therefore, except for pathological values of the variances of the exogenous noises in the two environments, the regression coefficient obtained by regressing the cause variable on the effect variable varies between the two environments, while the regression coefficient obtained by regressing the effect variable on the cause variable remains the same. Hence, the cause is distinguishable from the effect. Note that the structures $X_1 \to X_2$ and $X_2 \to X_1$ are in the same Markov equivalence class and hence are not distinguishable using merely conditional independence tests. Also, since the exogenous noises of both variables have changed, ordinary interventional tests are not capable of using the information of these two environments to distinguish between the two structures [5]. Moreover, as will be explained shortly (see Related Work), since the exogenous noise of the target variable has changed, the invariant prediction method [23] cannot discern the correct structure either.
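The two displayed coefficients are easy to check numerically. The following is a minimal sketch with an arbitrarily chosen coefficient and noise variances (all values are hypothetical, not taken from the paper):

```python
# Population regression coefficients for X1 = N1, X2 = a*X1 + N2.
a = 1.5

def coefs(s1_sq, s2_sq):
    """Return (beta_{X2}(X1), beta_{X1}(X2)) for noise variances s1_sq, s2_sq."""
    b_x2_x1 = a * s1_sq / (a**2 * s1_sq + s2_sq)   # regress X1 on X2
    b_x1_x2 = a                                    # regress X2 on X1
    return b_x2_x1, b_x1_x2

print(coefs(1.0, 1.0))  # environment E1: (0.4615..., 1.5)
print(coefs(2.0, 0.5))  # environment E2: (0.6, 1.5); only the
                        # cause-on-effect coefficient changed
```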
As another example, consider the structure in Figure 1(b). Suppose the exogenous noise of $X_1$ is varied across the two environments. Similar to the previous example, it can be shown that $\beta_{X_2}(X_1)$ varies across the two environments while $\beta_{X_1}(X_2)$ remains the same. This implies that the edge between $X_1$ and $X_2$ is directed from the former to the latter. Similarly, $\beta_{X_3}(X_2)$ varies across the two environments while $\beta_{X_2}(X_3)$ remains the same. This implies that $X_2$ is the parent of $X_3$. Therefore, the structure in Figure 1(b) is distinguishable using the proposed identification approach. Note that the invariant prediction method cannot identify the relation between $X_2$ and $X_3$, and conditional independence tests are also not able to distinguish this structure.

Related Work. The main approach to learning the causal structure in the observational setting is to run a constraint-based or a score-based algorithm over the data. The constraint-based approach [33, 21] is based on performing statistical tests to learn conditional independencies among the variables, along with applying the Meek rules introduced in [36]. The IC and IC* [21], PC, and FCI [33] algorithms are among the well-known examples of this approach. In the score-based approach, first a hypothesis space of potential models along with a scoring function is defined. The scoring function measures how well the model fits the observed data. Then the highest-scoring structure is chosen as the output (usually via greedy search). The Greedy Equivalence Search (GES) algorithm [20, 2] is an example of the score-based approach. Such purely observational approaches reconstruct the causal graph only up to Markov equivalence classes. Thus, the directions of some edges may remain unresolved. There are studies which attempt to identify the exact causal structure by restricting the model class [32, 12, 24, 22]. Most of such works consider an SEM with independent noise. The LiNGAM method [32] is a potent approach capable of structure learning in the linear SEM model with additive noise², as long as the distribution of the noise is not Gaussian. The authors of [12] and [38] showed that a nonlinear SEM with additive noise, and even the post-nonlinear causal model, along with some mild conditions on the functions and data distributions, are not symmetric in the cause and effect. There is also a line of work on causal structure learning in models where each vertex of the graph represents a random process [26, 34, 25, 6, 7, 16]. In such models, a temporal relationship is considered among the variables, and it is usually assumed that there is no instantaneous influence among the processes.

In the interventional approach to causal structure learning, the experimenter picks specific variables and attempts to learn their relation with other variables by observing the effect of perturbing those variables on the distribution of others. In recent works, bounds on the required number of interventions for complete discovery of causal relationships, as well as passive and adaptive algorithms for minimizing the number of experiments, were derived [5, 9, 10, 11, 31].

In this work we assume that the functional relations of the variables to their direct causes across a set of environments are invariant. Similar assumptions have been considered in other work [3, 30, 14, 13, 29, 23]. Specifically, [3], which studied finding the causal relation between two variables related to each other by an invertible function, assumes that "the distribution of the cause and the function mapping cause to effect are independent since they correspond to independent mechanisms of nature". There is little work on the multi-environment setup [35, 23, 37]. In [35], the authors analyzed the classes of structures that are equivalent relative to a stream of distributions and presented algorithms that output graphical representations of these equivalence classes.

¹ As noted in [12], "nonlinearities can play a role similar to that of non-Gaussianity", and both lead to exact structure recovery.
² There are extensions of LiNGAM beyond the linear model [38].
They assumed that changing the distribution of a variable varies the marginal distribution of all its descendants. Naturally, this also assumes access to enough samples to test each variable for a marginal distribution change. This approach cannot identify the causal relations among variables which are affected by environment changes in the same way. The most closely related work to our approach is the invariant prediction method [23], which utilizes different environments to estimate the set of predictors of a target variable. In that work, it is assumed that the exogenous noise of the target variable does not vary among the environments. In fact, the method crucially relies on this assumption, as it adds variables to the estimated predictor set only if they are necessary to keep the distribution of the target variable's noise fixed. Besides high computational complexity, the invariant prediction framework may result in a set which does not contain all the parents of the target variable. Additionally, the optimal predictor set (the output of the algorithm) is not necessarily unique. We will show that in many cases our proposed approach can overcome both of these issues. Recently, the authors of [37] considered the setting in which changes in the mechanisms of variables prevent ordinary conditional-independence-based algorithms from discovering the correct structure. The authors modeled these changes as multiple environments and proposed a general solution for a non-parametric model which first detects the variables whose mechanism changed and then finds causal relations among variables using conditional independence tests. Due to the generality of the model, this method may require a high number of samples.

Contribution. We propose a novel causal structure learning framework, which is capable of uniquely identifying causal structures that were not identifiable using observational approaches, from information that was not usable in existing interventional approaches. The main contribution of this work is to introduce the idea of using the invariance of the functional relations of the variables to their direct causes across a set of environments. This implies using the invariance of regression coefficients, in the special case of linear SEMs, for distinguishing the causes from the effects. We define a notion of completeness for a causal inference algorithm in this setting and prove the existence of such an algorithm by proposing the baseline algorithm (Section 3). Additionally, we present an alternate algorithm (Section 4) which has significantly improved computational and sample complexity compared to the baseline algorithm.

2 Regression-Based Causal Structure Learning

Definition 1. Consider a directed graph $G = (V, E)$ with vertex set $V$ and set of directed edges $E$. $G$ is a DAG if it is a finite graph with no directed cycles. A DAG $G$ is called causal if its vertices represent random variables $V = \{X_1, \ldots, X_n\}$ and a directed edge $(X_i, X_j)$ indicates that variable $X_i$ is a direct cause of variable $X_j$.

We consider a linear SEM [1] as the underlying data generating model. In such a model, the value of each variable $X_j \in V$ is determined by a linear combination of the values of its causal parents $\mathrm{PA}(X_j)$ plus an additive exogenous noise $N_j$, as follows:
$$X_j = \sum_{X_i \in \mathrm{PA}(X_j)} b_{ji} X_i + N_j, \qquad \forall j \in \{1, \ldots, p\}, \tag{1}$$
where the $N_j$'s are jointly independent. This model can be represented by the single matrix equation $X = BX + N$. Further, we can write
$$X = AN, \tag{2}$$
where $A = (I - B)^{-1}$.
This implies that each variable $X \in V$ can be written as a linear combination of the exogenous noises in the system. We assume that in our model all variables are observable. Also, we focus on zero-mean Gaussian exogenous noise; the proposed approach can otherwise be extended to any arbitrary distribution for the exogenous noise in the system. The following definitions will be used throughout the paper.

Definition 2. The graph union of a set $\mathcal{G}$ of mixed graphs³ over a common skeleton is a mixed graph with the same skeleton as the members of $\mathcal{G}$ which contains the directed edge $(X, Y)$ if there exists $G \in \mathcal{G}$ such that $(X, Y) \in E(G)$ and there is no $G' \in \mathcal{G}$ such that $(Y, X) \in E(G')$. The rest of the edges remain undirected.

Definition 3. Causal DAGs $G_1$ and $G_2$ over $V$ are Markov equivalent if every distribution that is compatible with one of the graphs is also compatible with the other. Markov equivalence is an equivalence relationship over the set of all graphs over $V$ [17]. The graph union of all DAGs in the Markov equivalence class of a DAG $G$ is called the essential graph of $G$ and is denoted by $\mathrm{Ess}(G)$.

We consider a multi-environment setting consisting of $N$ environments $\mathcal{E} = \{E_1, \ldots, E_N\}$. The structure of the causal DAG and the functional relations for producing the variables from their parents (the matrix $B$) remain the same across all environments; the exogenous noises may vary though. For a pair of environments $E_i, E_j \in \mathcal{E}$, let $I_{ij}$ be the set of variables whose exogenous noise has changed between the two environments. Given $I_{ij}$, for any DAG $G$ consistent with the essential graph⁴ obtained from an observational algorithm, define the regression invariance set as follows:
$$R(G, I_{ij}) := \{(X, S) : X \in V,\ S \subseteq V \setminus \{X\},\ \beta^{(i)}_S(X) = \beta^{(j)}_S(X)\},$$
where $\beta^{(i)}_S(X)$ and $\beta^{(j)}_S(X)$ are the regression coefficients of regressing variable $X$ on $S$ in environments $E_i$ and $E_j$, respectively. In words, $R(G, I_{ij})$ contains all pairs $(X, S)$, $X \in V$, $S \subseteq V \setminus \{X\}$, such that if we regress $X$ on $S$, the regression coefficients do not change between $E_i$ and $E_j$.

Definition 4. Given $I$, the set of variables whose exogenous noise has changed between two environments, DAGs $G_1$ and $G_2$ are called I-distinguishable if $R(G_1, I) \neq R(G_2, I)$.

We make the following assumption on the distributions of the exogenous noises.

Assumption 1 (Regression Stability Assumption). For a given set $I$ and structure $G$, there exists $\delta_0 > 0$ such that for all $0 < \delta \leq \delta_0$, perturbing the variances of the exogenous noises by $\delta$ does not change the regression invariance set $R(G, I)$.

The purpose of Assumption 1 is to rule out pathological values of the variances of the exogenous noises in the two environments which create special regression relations. For instance, in Example 1 below, $\beta^{(1)}_{X_2}(X_1) = \beta^{(2)}_{X_2}(X_1)$ only if $\sigma_1^2 \tilde{\sigma}_2^2 = \sigma_2^2 \tilde{\sigma}_1^2$, where $\sigma_i^2$ and $\tilde{\sigma}_i^2$ are the variances of the exogenous noise of $X_i$ in environments $E_1$ and $E_2$, respectively. Note that this special relation between $\sigma_1^2$, $\tilde{\sigma}_1^2$, $\sigma_2^2$, and $\tilde{\sigma}_2^2$ has Lebesgue measure zero in the set of all possible values for the variances.

We give the following examples as applications of our approach.

Example 1. Consider DAGs $G_1 : X_1 \to X_2$ and $G_2 : X_1 \leftarrow X_2$. For $I = \{X_1\}$, $I = \{X_2\}$, or $I = \{X_1, X_2\}$, calculating the regression coefficients as explained in Section 1, we see that $(X_1, \{X_2\}) \notin R(G_1, I)$ but $(X_1, \{X_2\}) \in R(G_2, I)$. Hence $G_1$ and $G_2$ are I-distinguishable. As mentioned in Section 1, structures $G_1$ and $G_2$ are not distinguishable using observational tests. A numerical sketch of this computation is given below.

³ A mixed graph contains both directed and undirected edges.
⁴ A DAG $G$ is consistent with a mixed graph $M$ if they have the same skeleton and $G$ does not contain an edge $(X, Y)$ while $M$ contains $(Y, X)$.
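The following minimal sketch makes the regression invariance set concrete under stated assumptions: it samples the SEM of Eq. (2) in two environments that differ only in their noise variances, estimates $\beta_S(X)$ by ordinary least squares in each environment, and flags pairs whose coefficients are (approximately) invariant. The tolerance-based comparison is an illustrative stand-in for the formal hypothesis test of Section 3, and all names are our own.

```python
import numpy as np
from itertools import chain, combinations

def sample_sem(B, noise_vars, n, rng):
    """Draw n samples of X = (I - B)^{-1} N (Eq. (2)) with independent
    zero-mean Gaussian noises of the given variances."""
    p = B.shape[0]
    N = rng.normal(0.0, np.sqrt(noise_vars), size=(n, p))
    A = np.linalg.inv(np.eye(p) - B)
    return N @ A.T                                # rows are samples of X

def beta(X, target, S):
    """OLS coefficients of regressing X[:, target] on X[:, S]."""
    coef, *_ = np.linalg.lstsq(X[:, S], X[:, target], rcond=None)
    return coef

rng = np.random.default_rng(0)
# Example 1 with G1: X1 -> X2, coefficient a = 1.5; the noises of both
# variables change between the environments, i.e., I = {X1, X2}.
B = np.array([[0.0, 0.0],
              [1.5, 0.0]])                        # B[j, i] = b_ji
X_e1 = sample_sem(B, [1.0, 1.0], 200_000, rng)   # environment E1
X_e2 = sample_sem(B, [2.0, 0.5], 200_000, rng)   # environment E2

for target in range(2):
    others = [v for v in range(2) if v != target]
    subsets = chain.from_iterable(combinations(others, k)
                                  for k in range(1, len(others) + 1))
    for S in map(list, subsets):
        b1, b2 = beta(X_e1, target, S), beta(X_e2, target, S)
        status = "invariant" if np.allclose(b1, b2, atol=0.02) else "changed"
        print(target, S, status, b1, b2)
# The pair (X1, {X2}) is flagged "changed" while (X2, {X1}) is
# "invariant", matching R(G1, I) in Example 1.
```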
Continuing Example 1, in the case of $I = \{X_1, X_2\}$, the invariant prediction approach and the ordinary interventional tests (in which the experimenter expects that a change in the distribution of the effect would not perturb the marginal distribution of the cause variable) are not capable of distinguishing the two structures either. [Figure 2: DAGs related to Example 3.]

Example 2. Consider the DAG $G$ in Figure 1(b) with $I = \{X_1\}$. Consider an alternative DAG $G'$ in which, compared to $G$, the directed edge $(X_1, X_2)$ is replaced by $(X_2, X_1)$, and a DAG $G''$ in which, compared to $G$, the directed edge $(X_2, X_3)$ is replaced by $(X_3, X_2)$. Since $(X_2, \{X_1\}) \in R(G, I)$ while this pair is not in $R(G', I)$, and $(X_2, \{X_3\}) \notin R(G, I)$ while this pair belongs to $R(G'', I)$, the structure of $G$ is also distinguishable using the proposed identification approach. Note that the directions of the edges of $G$ are not distinguishable using an observational test, as $G$ has two other DAGs in its equivalence class. Also, the invariant prediction method cannot identify the relation between $X_2$ and $X_3$, since it can keep the variance of the noise of $X_3$ fixed by setting the predictor set to either $\{X_2\}$ or $\{X_1\}$, which have empty intersection.

Example 3. Consider the structure in Figure 2(a) with $I = \{X_2\}$. All six possible triangle DAGs are I-distinguishable from this structure, and hence, with two environments differing in the exogenous noise of $X_2$, this triangle DAG can be identified. Note that all the triangle DAGs are in the same Markov equivalence class; hence, using the information of one environment alone, the observation-only setting cannot lead to identification, which makes this structure challenging to deal with [8]. For $I = \{X_1\}$, the structure in Figure 2(b) is not I-distinguishable from the triangle DAG in which the direction of the edge $(X_2, X_3)$ is flipped. These two DAGs are also not distinguishable using the invariant prediction method or the usual intervention analysis with an intervention on $X_1$.

Let $G^*$ be the ground-truth DAG structure. Define $\mathcal{G}(G^*, I) := \{G : R(G, I) = R(G^*, I)\}$, which is the set of all DAGs which are not I-distinguishable from $G^*$. Using this set, we form the mixed graph $M(G^*, I)$ over $V$ as the graph union of the members of $\mathcal{G}(G^*, I)$.

Definition 5. Let $P_i$ be the joint distribution over the set of variables $V$ in environment $E_i \in \mathcal{E}$. An algorithm $\mathcal{A} : (\{P_i\}_{i=1}^N) \to M$, which gets the joint distributions over $V$ in the environments $\mathcal{E} = \{E_i\}_{i=1}^N$ as input and returns a mixed graph, is regression invariance complete if for any pair of environments $E_i$ and $E_j$, with $I_{ij}$ the set of variables whose exogenous noise has changed between $E_i$ and $E_j$, the set of directed edges of $M(G^*, I_{ij})$ is a subset of the set of directed edges of the output of $\mathcal{A}$. In Section 3 we will introduce a structure learning algorithm which is complete in the sense of Definition 5.
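The baseline algorithm of the next section searches over DAGs consistent with the essential graph, in the sense of footnote 4. A brute-force sketch of that search space under stated assumptions (graphs are encoded as edge sets; the encoding and helper names are ours):

```python
from itertools import product

def is_acyclic(nodes, edges):
    """Kahn's algorithm: True iff the directed graph has no cycle."""
    indeg = {v: 0 for v in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for (a, b) in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)

def consistent_dags(nodes, directed, undirected):
    """All DAGs sharing the skeleton of the mixed graph that keep its
    directed edges and orient each undirected edge either way (the
    consistency notion of footnote 4 does not exclude new v-structures)."""
    for choice in product([0, 1], repeat=len(undirected)):
        edges = set(directed) | {(e if c == 0 else e[::-1])
                                 for e, c in zip(undirected, choice)}
        if is_acyclic(nodes, edges):
            yield edges

# Fully undirected skeleton X1 - X2 - X3: four consistent DAGs.
for dag in consistent_dags([1, 2, 3], directed=[], undirected=[(1, 2), (2, 3)]):
    print(sorted(dag))
```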
3 Existence of Complete Algorithms

In this section we show the existence of a complete algorithm (in the sense of Definition 5) for learning the causal structure among a set of variables $V$ whose dynamics satisfy the SEM in (1). The pseudo-code of the algorithm is presented in Algorithm 1. Suppose $G^*$ is the ground-truth structure.

Algorithm 1 The Baseline Algorithm
  Input: Joint distribution over $V$ in environments $\mathcal{E} = \{E_i\}_{i=1}^N$.
  Obtain $\mathrm{Ess}(G^*)$ by running a complete observational algorithm.
  for each pair of environments $\{E_i, E_j\} \subseteq \mathcal{E}$ do
    Obtain $R_{ij} = \{(Y, S) : Y \in V,\ S \subseteq V \setminus \{Y\},\ \beta^{(i)}_S(Y) = \beta^{(j)}_S(Y)\}$.
    $I_{ij} = \mathrm{ChangeFinder}(E_i, E_j)$.
    $\mathcal{G}_{ij} = \mathrm{ConsistentFinder}(\mathrm{Ess}(G^*), R_{ij}, I_{ij})$.
    $M_{ij} = \bigcup_{G \in \mathcal{G}_{ij}} G$.
  end for
  $M_{\mathcal{E}} = \bigcup_{1 \leq i, j \leq N} M_{ij}$.
  Apply the Meek rules on $M_{\mathcal{E}}$ to get $\hat{M}$.
  Output: Mixed graph $\hat{M}$.

The algorithm first runs a complete observational algorithm to obtain the essential graph $\mathrm{Ess}(G^*)$. For each pair of environments $\{E_i, E_j\} \subseteq \mathcal{E}$, the algorithm first calculates the regression coefficients $\beta^{(i)}_S(Y)$ and $\beta^{(j)}_S(Y)$ for all $Y \in V$ and $S \subseteq V \setminus \{Y\}$, and forms the regression invariance set $R_{ij}$, which contains the pairs $(Y, S)$ for which the regression coefficients did not change between $E_i$ and $E_j$. Note that ideally $R_{ij}$ equals $R(G^*, I_{ij})$. Next, using the function ChangeFinder(·), we discover the set $I_{ij}$, which is the set of variables whose exogenous noises have varied between the two environments. Then, using the function ConsistentFinder(·), we find $\mathcal{G}_{ij}$, the set of all DAGs $G$ that are consistent with $\mathrm{Ess}(G^*)$ and satisfy $R(G, I_{ij}) = R_{ij}$; this set is ideally equal to $\mathcal{G}(G^*, I_{ij})$. After taking the union of the graphs in $\mathcal{G}_{ij}$, we form $M_{ij}$, the mixed graph containing all causal relations distinguishable from the given regression information between the two environments. This graph is ideally equal to $M(G^*, I_{ij})$. After obtaining $M_{ij}$ for all pairs of environments, the algorithm forms a mixed graph $M_{\mathcal{E}}$ by taking the graph union of the $M_{ij}$'s. We apply the Meek rules on $M_{\mathcal{E}}$ to find all extra orientations and output $\hat{M}$. Since for each pair of environments we search over all DAGs, and we take the graph union of the $M_{ij}$'s, the baseline algorithm is complete in the sense of Definition 5.

Obtaining the set $R_{ij}$: For a given significance level $\alpha$, we now show how the set $R_{ij}$ can be obtained so that the total probability of false rejection is less than $\alpha$. For given $Y \in V$ and $S \subseteq V \setminus \{Y\}$ in the environments $E_i$ and $E_j$, we define the null hypothesis $H^{ij}_{0,Y,S}$ as follows:
$$H^{ij}_{0,Y,S} :\ \exists \beta \in \mathbb{R}^{|S|} \text{ such that } \beta^{(i)}_S(Y) = \beta \text{ and } \beta^{(j)}_S(Y) = \beta. \tag{3}$$
Let $\hat{\beta}^{(i)}_S(Y)$ and $\hat{\beta}^{(j)}_S(Y)$ be the estimates of $\beta^{(i)}_S(Y)$ and $\beta^{(j)}_S(Y)$, respectively, obtained using the ordinary least squares estimator, and define the test statistic
$$T := \big(\hat{\beta}^{(i)}_S(Y) - \hat{\beta}^{(j)}_S(Y)\big)^{\top} \big(s_i^2 \hat{\Sigma}_i^{-1} + s_j^2 \hat{\Sigma}_j^{-1}\big)^{-1} \big(\hat{\beta}^{(i)}_S(Y) - \hat{\beta}^{(j)}_S(Y)\big) \big/ |S|, \tag{4}$$
where $s_i^2$ and $s_j^2$ are unbiased estimates of the variances of $Y - (X_S)^{\top}\beta^{(i)}_S(Y)$ and $Y - (X_S)^{\top}\beta^{(j)}_S(Y)$ in environments $E_i$ and $E_j$, respectively, and $\hat{\Sigma}_i$ and $\hat{\Sigma}_j$ are the sample estimates of $\mathbb{E}[X_S (X_S)^{\top}]$ in environments $E_i$ and $E_j$, respectively. If the null hypothesis is true, then $T \sim F(|S|, n - |S|)$, where $F(\cdot, \cdot)$ is the F-distribution (see the supplementary material for details). We set the p-value threshold of our test to $\alpha / (p \cdot (2^{p-1} - 1))$. Hence, by testing all null hypotheses $H^{ij}_{0,Y,S}$ for every $Y \in V$ and $S \subseteq V \setminus \{Y\}$, we can obtain the set $R_{ij}$ with total probability of false rejection less than $\alpha$.
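A rough sketch of this coefficient-equality test under stated assumptions. The scaling of the covariance term is our reading of Eq. (4) (we use the estimated coefficient covariances $s^2 (X^{\top}X)^{-1}$), and the SciPy-based p-value computation and all names are illustrative choices:

```python
import numpy as np
from scipy import stats

def coef_equality_pvalue(X_i, y_i, X_j, y_j):
    """Wald/F-type p-value for H0: the population OLS coefficients of
    y on the columns of X are identical in the two environments."""
    k = X_i.shape[1]

    def ols(X, y):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ b
        s2 = r @ r / (len(y) - k)               # unbiased residual variance
        return b, s2 * np.linalg.inv(X.T @ X)   # estimated Cov of b

    b_i, V_i = ols(X_i, y_i)
    b_j, V_j = ols(X_j, y_j)
    d = b_i - b_j
    T = d @ np.linalg.solve(V_i + V_j, d) / k
    n = min(len(y_i), len(y_j))
    return stats.f.sf(T, k, n - k)

rng = np.random.default_rng(1)
X_i = rng.normal(size=(5000, 1))
y_i = 1.5 * X_i[:, 0] + rng.normal(size=5000)
X_j = 2 * rng.normal(size=(5000, 1))
y_j = 1.5 * X_j[:, 0] + 0.5 * rng.normal(size=5000)
print(coef_equality_pvalue(X_i, y_i, X_j, y_j))  # large: coefficient invariant
```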
Function ChangeFinder(·): We use Lemma 1 to find the set $I_{ij}$.

Lemma 1. Given environments $E_i$ and $E_j$, for a variable $Y \in V$: if $\mathbb{E}[(Y - (X_S)^{\top}\beta^{(i)}_S(Y))^2 \mid E_i] \neq \mathbb{E}[(Y - (X_S)^{\top}\beta^{(j)}_S(Y))^2 \mid E_j]$ for all $S \subseteq N(Y)$, where $N(Y)$ is the set of neighbors of $Y$, then the variance of the exogenous noise $N_Y$ has changed between the two environments. Otherwise, the variance of $N_Y$ is unchanged.

See the supplementary material for the proof. Based on Lemma 1, for any variable $Y$, we try to find a set $S \subseteq N(Y)$ for which the variance of $Y - (X_S)^{\top}\beta_S(Y)$ remains fixed between $E_i$ and $E_j$ by testing the following null hypothesis:
$$\bar{H}^{ij}_{0,Y,S} :\ \exists \sigma \in \mathbb{R} \text{ s.t. } \mathbb{E}[(Y - (X_S)^{\top}\beta^{(i)}_S(Y))^2 \mid E_i] = \sigma^2 \ \text{ and } \ \mathbb{E}[(Y - (X_S)^{\top}\beta^{(j)}_S(Y))^2 \mid E_j] = \sigma^2.$$
In order to test this null hypothesis, we can compute the variance of $Y - (X_S)^{\top}\beta^{(i)}_S(Y)$ in $E_i$ and of $Y - (X_S)^{\top}\beta^{(j)}_S(Y)$ in $E_j$ and test whether these variances are equal using an F-test. If the p-value of the test for the set $S$ is less than $\alpha / (p \cdot 2^{\Delta})$, where $\Delta$ is the maximum degree of the causal graph, then we reject the null hypothesis $\bar{H}^{ij}_{0,Y,S}$. If we reject the tests $\bar{H}^{ij}_{0,Y,S}$ for all $S \subseteq N(Y)$, then we add $Y$ to the set $I_{ij}$. Since we perform at most $p \cdot 2^{\Delta}$ tests (for each variable, at most $2^{\Delta}$ tests), we can obtain the set $I_{ij}$ with total probability of false rejection less than $\alpha$.
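A rough sketch of this residual-variance test for a single variable, under stated assumptions (a two-sided F-test for equality of variances; the subset enumeration and all helper names are ours):

```python
import numpy as np
from itertools import chain, combinations
from scipy import stats

def residual_variance(X, y):
    """Unbiased variance of the OLS residual of y on the columns of X."""
    if X.shape[1] == 0:                       # S empty: no regressors
        return y @ y / (len(y) - 1)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    return r @ r / (len(y) - X.shape[1])

def noise_changed(data_i, data_j, y, nbrs, alpha_local):
    """True iff the residual variance of Y differs across the two
    environments for every S subset of N(Y), as in Lemma 1."""
    subsets = chain.from_iterable(combinations(nbrs, k)
                                  for k in range(len(nbrs) + 1))
    for S in map(list, subsets):
        v_i = residual_variance(data_i[:, S], data_i[:, y])
        v_j = residual_variance(data_j[:, S], data_j[:, y])
        dfn, dfd = len(data_i) - len(S), len(data_j) - len(S)
        F = v_i / v_j
        p = 2 * min(stats.f.cdf(F, dfn, dfd), stats.f.sf(F, dfn, dfd))
        if p >= alpha_local:                  # some S keeps the variance fixed
            return False
    return True
```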
Function ConsistentFinder(·): Let $D_{st}$ be the set of all directed paths from variable $X_s$ to variable $X_t$. For any directed path $d \in D_{st}$, we define the weight of $d$ as $w_d := \prod_{(u,v) \in d} b_{vu}$, where the $b_{vu}$ are the coefficients in (1). With this definition, it can be seen that the entry $(t, s)$ of the matrix $A$ in (2) equals $[A]_{ts} = \sum_{d \in D_{st}} w_d$. Thus, the entries of the matrix $A$ are multivariate polynomials in the entries of $B$. Furthermore,
$$\beta^{(i)}_S(Y) = \mathbb{E}[X_S (X_S)^{\top} \mid E_i]^{-1}\, \mathbb{E}[X_S Y \mid E_i] = (A_S \Lambda_i A_S^{\top})^{-1} A_S \Lambda_i A_Y^{\top}, \tag{5}$$
where $A_S$ and $A_Y$ are the rows of the matrix $A$ corresponding to the set $S$ and to $Y$, respectively, and $\Lambda_i$ is a diagonal matrix with $[\Lambda_i]_{kk} = \mathbb{E}[(N_k)^2 \mid E_i]$. Therefore, the entries of the vector $\beta^{(i)}_S(Y)$ are rational functions of the entries of $B$ and $\Lambda_i$. Hence, the entries of the Jacobian matrix of $\beta^{(i)}_S(Y)$ with respect to the diagonal entries of $\Lambda_i$ are also rational expressions of these parameters.

In the function ConsistentFinder(·), we select any directed graph $G$ consistent with $\mathrm{Ess}(G^*)$ and set $b_{vu} = 0$ if $(u, v) \notin G$. In order to check whether $G$ is in $\mathcal{G}_{ij}$, we initially set $R(G, I_{ij}) = \emptyset$. Then we compute the Jacobian matrix of $\beta^{(i)}_S(Y)$ parametrically for every $Y \in V$ and $S \subseteq V \setminus \{Y\}$. As noted above, the entries of the Jacobian matrix can be obtained as rational expressions of the entries of $B$ and $\Lambda_i$. If all columns of the Jacobian matrix corresponding to the elements of $I_{ij}$ are zero, then $\beta^{(i)}_S(Y)$ does not change when the variances of the exogenous noises in $I_{ij}$ vary, and hence we add $(Y, S)$ to the set $R(G, I_{ij})$. After checking all $Y \in V$ and $S \subseteq V \setminus \{Y\}$, we add the graph $G$ to $\mathcal{G}_{ij}$ if $R(G, I_{ij}) = R_{ij}$.

4 LRE Algorithm

The baseline algorithm of Section 3 was presented to prove the existence of complete algorithms, but it is not practical due to its high computational and sample complexity. In this section we present the Local Regression Examiner (LRE) algorithm, an alternative and much more efficient algorithm for learning the causal structure among a set of variables $V$. Its pseudo-code is presented in Algorithm 2.

Algorithm 2 LRE Algorithm
  Input: Joint distribution over $V$ in environments $\mathcal{E} = \{E_i\}_{i=1}^N$.
  Stage 1: Obtain $\mathrm{Ess}(G^*)$ by running a complete observational algorithm, and for all $X \in V$, form $\mathrm{PA}(X)$, $\mathrm{CH}(X)$, $\mathrm{UK}(X)$.
  Stage 2:
  for each pair of environments $\{E_i, E_j\} \subseteq \mathcal{E}$ do
    for all $Y \in V$ do
      for each $X \in \mathrm{UK}(Y)$ do
        Compute $\beta^{(i)}_X(Y)$, $\beta^{(j)}_X(Y)$, $\beta^{(i)}_Y(X)$, and $\beta^{(j)}_Y(X)$.
        if $\beta^{(i)}_X(Y) \neq \beta^{(j)}_X(Y)$, but $\beta^{(i)}_Y(X) = \beta^{(j)}_Y(X)$ then
          Set $X$ as a child of $Y$ and set $Y$ as a parent of $X$.
        else if $\beta^{(i)}_X(Y) = \beta^{(j)}_X(Y)$, but $\beta^{(i)}_Y(X) \neq \beta^{(j)}_Y(X)$ then
          Set $X$ as a parent of $Y$ and set $Y$ as a child of $X$.
        else if $\beta^{(i)}_X(Y) \neq \beta^{(j)}_X(Y)$, and $\beta^{(i)}_Y(X) \neq \beta^{(j)}_Y(X)$ then
          Find a minimum set $S \subseteq N(Y) \setminus \{X\}$ such that $\beta^{(i)}_{S \cup \{X\}}(Y) = \beta^{(j)}_{S \cup \{X\}}(Y)$.
          if no such $S$ exists then
            Set $X$ as a child of $Y$ and set $Y$ as a parent of $X$.
          else if $\beta^{(i)}_S(Y) \neq \beta^{(j)}_S(Y)$ then
            For all $W \in \{X\} \cup S$, set $W$ as a parent of $Y$ and set $Y$ as a child of $W$.
          else
            For all $W \in S$, set $W$ as a parent of $Y$ and set $Y$ as a child of $W$.
          end if
        end if
      end for
    end for
  end for
  Stage 3: Apply the Meek rules on the resulting mixed graph to obtain $\hat{M}$.
  Output: Mixed graph $\hat{M}$.

We make use of the following result in this algorithm.

Lemma 2. Consider adjacent variables $X, Y \in V$ in the causal structure $G$. For a pair of environments $E_i$ and $E_j$, if $(X, \{Y\}) \in R(G, I_{ij})$ but $(Y, \{X\}) \notin R(G, I_{ij})$, then $Y$ is a parent of $X$.

See the supplementary material for the proof. The LRE algorithm consists of three stages. In the first stage, similar to the baseline algorithm, it runs a complete observational algorithm to obtain the essential graph. Then, for each variable $X \in V$, it forms the set of $X$'s discovered parents $\mathrm{PA}(X)$ and discovered children $\mathrm{CH}(X)$, and leaves the remaining neighbors as unknown in $\mathrm{UK}(X)$. In the second stage, the goal is that for each variable $Y \in V$, we find $Y$'s relation to its neighbors in $\mathrm{UK}(Y)$ based on the invariance of its regression on its neighbors across each pair of environments. To do so, for each pair of environments, after fixing a target variable $Y$ and for each of its neighbors $X \in \mathrm{UK}(Y)$, the regression coefficients of $X$ on $Y$ and of $Y$ on $X$ are calculated. We then face one of the following cases:

• If neither is changing, we do not make any decision about the relationship of $X$ and $Y$. This case is similar to having only one environment, as in the setup of [32].
• If one is changing and the other is unchanged, Lemma 2 implies that the variable which fixes the coefficient as the regressor is the parent.
• If both are changing, we look for an auxiliary set $S$ among $Y$'s neighbors, with a minimum number of elements, for which $\beta^{(i)}_{S \cup \{X\}}(Y) = \beta^{(j)}_{S \cup \{X\}}(Y)$. If no such $S$ is found, this implies that $X$ is a child of $Y$. Otherwise, if $S$ and $X$ are both required in the regressor set to fix the coefficient, we set $\{X\} \cup S$ as parents of $Y$; if $X$ is not required in the regressor set to fix the coefficient, although we still set $S$ as parents of $Y$, we do not make any decision regarding the relation of $X$ and $Y$ (Example 3 with $I = \{X_1\}$ is an instance of this case); the stage-2 decision logic is sketched at the end of this section.

[Figure 3: Comparison of the performance of the LRE, PC, IP, and LiNGAM algorithms: (a) Error ratio, (b) CW ratio, (c) CU ratio.]

After adding the discovered relationships to the initial mixed graph, in the third stage we apply the Meek rules on the resulting mixed graph to find all extra possible orientations and output $\hat{M}$.

Analysis of LRE Algorithm. We can use the hypothesis test in (3) to test whether the two vectors $\beta^{(i)}_S(Y)$ and $\beta^{(j)}_S(Y)$ are equal for any $Y \in V$ and $S \subseteq N(Y)$. If the p-value for the set $S$ is less than $\alpha / (p \cdot (2^{\Delta} - 1))$, then we reject the null hypothesis $H^{ij}_{0,Y,S}$. By doing so, we obtain the output with total probability of false rejection less than $\alpha$. Regarding the computational complexity, since for each pair of environments we perform in the worst case $(2^{\Delta} - 1)$ hypothesis tests for each variable $Y \in V$, and considering that we have $\binom{N}{2}$ pairs of environments, the computational complexity of the LRE algorithm is of order $\binom{N}{2}\, p\, (2^{\Delta} - 1)$. Therefore, the bottleneck in the complexity of LRE is the requirement of running a complete observational algorithm in its first stage.
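A compact sketch of the stage-2 decision rules of Algorithm 2, under stated assumptions: the boolean oracle changed(target, regressors) stands in for the hypothesis test of Eq. (3), and all names are illustrative:

```python
from itertools import combinations

def orient(Y, X, neighbors_Y, changed):
    """Return 'X->Y', 'Y->X', a set of parents of Y, or None, following
    stage 2 of the LRE algorithm. changed(target, regressors) is True
    iff the coefficients of regressing target on regressors differ
    across the two environments."""
    cXY = changed(Y, (X,))           # regress Y on X
    cYX = changed(X, (Y,))           # regress X on Y
    if not cXY and not cYX:
        return None                   # no decision possible
    if cXY and not cYX:
        return "Y->X"                 # Y fixes the coefficient: Y is parent
    if not cXY and cYX:
        return "X->Y"
    # Both changed: search for a minimal auxiliary set S.
    others = [v for v in neighbors_Y if v != X]
    for k in range(1, len(others) + 1):
        for S in combinations(others, k):
            if not changed(Y, S + (X,)):
                if changed(Y, S):
                    return {X, *S}    # {X} u S are parents of Y
                return set(S)         # S are parents; X left undecided
    return "Y->X"                     # no such S: X is a child of Y
```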
5 Experiments

We evaluate the performance of the LRE algorithm by testing it on both synthetic and real data. As seen in the pseudo-code of Algorithm 2, LRE has three stages, where in the first stage a complete observational algorithm is run. In our simulations, we used the PC algorithm⁵ [33], which is known to have a complexity of order $O(p^{\Delta})$ when applied to a graph of order $p$ with degree bound $\Delta$.

⁵ We use the pcalg package [15] to run the PC algorithm on a set of random variables.

Synthetic Data. We generated 100 DAGs of order $p = 10$ by first selecting a causal order for the variables and then connecting each pair of variables with probability 0.25. We generated data from a linear Gaussian SEM with coefficients drawn uniformly at random from $[0.1, 2]$, and with the variance of each exogenous noise drawn uniformly at random from $[0.1, 4]$. For each variable of each structure, $10^5$ samples were generated. In our simulation, we only considered a scenario in which we have two environments $E_1$ and $E_2$, where in the second environment the exogenous noises of $|I_{12}|$ variables were varied. The perturbed variables were chosen uniformly at random.

Figure 3 shows the performance of the LRE algorithm. Define a link to be any directed or undirected edge. The Error ratio is calculated as follows:
$$\text{Error ratio} := \big(|\text{miss-detected links}| + |\text{extra detected links}| + |\text{correctly detected but wrongly directed edges}|\big) \big/ \tbinom{p}{2}.$$
Among the correctly detected links, define $C := |\text{correctly directed edges}|$, $W := |\text{wrongly directed edges}|$, and $U := |\text{undirected edges}|$. The CW and CU ratios are then obtained as $\text{CW ratio} := C / (C + W)$ and $\text{CU ratio} := C / (C + U)$.
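A small sketch of these metrics under stated assumptions (the ground truth is a set of directed edges; the predicted mixed graph is split into directed and undirected edge sets; the encoding is our own choice):

```python
from math import comb

def ratios(true_edges, pred_dir, pred_undir, p):
    """Error, CW, and CU ratios for a predicted mixed graph against a
    ground-truth DAG, as defined above."""
    true_links = {frozenset(e) for e in true_edges}
    pred_links = ({frozenset(e) for e in pred_dir}
                  | {frozenset(e) for e in pred_undir})
    missed = len(true_links - pred_links)
    extra = len(pred_links - true_links)
    C = sum(e in true_edges for e in pred_dir
            if frozenset(e) in true_links)
    W = sum(e not in true_edges for e in pred_dir
            if frozenset(e) in true_links)
    U = sum(frozenset(e) in true_links for e in pred_undir)
    error = (missed + extra + W) / comb(p, 2)
    cw = C / (C + W) if C + W else float("nan")
    cu = C / (C + U) if C + U else float("nan")
    return error, cw, cu

# Truth: X1 -> X2 -> X3; prediction orients one edge correctly and
# one wrongly, with no missing or extra links.
print(ratios({(1, 2), (2, 3)}, pred_dir={(1, 2), (3, 2)},
             pred_undir=set(), p=3))   # (0.333..., 0.5, 1.0)
```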
As seen in Figure 3, only one change in the second environment (i.e., $|I_{12}| = 1$) increases the CU ratio of LRE by 8 percent compared to the PC algorithm. Also, the main source of error in the LRE algorithm results from the application of the PC algorithm. We also compared the Error ratio and CW ratio of the LRE algorithm with Invariant Prediction (IP) [23] and LiNGAM [32] (since there are no undirected edges in the output of IP and LiNGAM, the CU ratio of both would be one). For LiNGAM, we combined the data from the two environments as the input; therefore, the distribution of the exogenous noise of the variables in $I_{12}$ is no longer Gaussian. As can be seen in Figure 3(a), the Error ratio of IP increases with the size of $I_{12}$. This is mainly because the IP approach assumes that the distribution of the exogenous noise of the target variable does not change, which may be violated as $|I_{12}|$ increases. The simulations show that the Error ratio of LiNGAM is approximately twice that of LRE and PC. We also see that LRE performed better than LiNGAM and IP in terms of the CW ratio.

Real Data. a) We considered the dataset of educational attainment of teenagers [27]. The dataset was collected from 4739 pupils from about 1100 US high schools, with 13 attributes including gender, race, base-year composite test score, family income, whether the parent attended college, and county unemployment rate. We split the dataset into two parts, where the first part includes data from all pupils who live closer than 10 miles to some 4-year college. In our experiment, we tried to identify the potential causes that influence the years of education the pupils received. We ran the LRE algorithm on the two parts of the data as two environments, with a significance level of 0.01, and obtained the following attributes as a possible set of parents of the target variable: base-year composite test score, whether the father was a college graduate, race, and whether the school was in an urban area. The IP method [23] also showed that the first two attributes have significant effects on the target variable.

b) We evaluated the performance of the LRE algorithm on gene regulatory networks (GRNs). A GRN is a collection of biological regulators that interact with each other. In a GRN, transcription factors are the main players that activate genes. The interactions between transcription factors and regulated genes in a species' genome can be represented by a directed graph, in which links are drawn whenever a transcription factor regulates a gene's expression. Moreover, some vertices have both functions, i.e., they are both a transcription factor and a regulated gene. We considered the GRNs in the "DREAM 3 In Silico Network" challenge, conducted in 2008 [19]. The networks in this challenge were extracted from known biological interaction networks, and their structures are available in the open-source tool "GeneNetWeaver (GNW)" [28]. Since we knew the true causal structures of these GRNs, we obtained $\mathrm{Ess}(G^*)$ and gave it as an input to the LRE algorithm. Furthermore, we used the GNW tool to get 10000 measurements of steady-state levels for every gene in the networks. In order to obtain measurements from the second environment, we increased the coefficients of the exogenous noise terms from 0.05 to 0.2 in the GNW tool. [Figure 4: Performance of the LRE algorithm in GRNs from the DREAM 3 challenge. All five networks have 10 genes, and the total number of edges in each network (from left to right) is 11, 15, 10, 25, and 22, respectively.] Figure 4 depicts the performance of the LRE algorithm in five networks extracted from GRNs of E. coli and yeast. The green, red, and yellow bars for each network show the numbers of correctly directed edges, wrongly directed edges, and undirected edges, respectively. Note that since we know the correct $\mathrm{Ess}(G^*)$, there are no miss-detected or extra detected links. As can be seen, the LRE algorithm has fairly good accuracy (84% on average over all five networks) when it decides to orient an edge.

6 Conclusion

We studied the problem of causal structure learning in a multi-environment setting, in which the functional relations for producing the variables from their parents remain the same across environments, while the distribution of exogenous noises may vary. We defined a notion of completeness for a causal inference algorithm in this setting and proved the existence of such an algorithm. We proposed an efficient algorithm with low computational and sample complexity and evaluated its performance by testing it on synthetic and real data. The results show that the proposed algorithm outperforms the other existing algorithms.

References

[1] K. A. Bollen. Structural Equations with Latent Variables. Wiley Series in Probability and Mathematical Statistics, Applied Probability and Statistics Section. Wiley, 1989.
[2] D. M. Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3(Nov):507–554, 2002.
[3] P. Daniusis, D. Janzing, J. Mooij, J. Zscheischler, B. Steudel, K. Zhang, and B. Schölkopf. Distinguishing causes from effects using nonlinear acyclic causal models. In Proc. 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010), 2010.
[4] F. Eberhardt. Causation and Intervention. Unpublished doctoral dissertation, Carnegie Mellon University, 2007.
[5] F. Eberhardt, C. Glymour, and R. Scheines. On the number of experiments sufficient and in the worst case necessary to identify all causal relations among n variables. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI-05), pages 178–184, 2005.
[6] J. Etesami and N. Kiyavash. Directed information graphs: A generalization of linear dynamical graphs. In American Control Conference (ACC), pages 2563–2568. IEEE, 2014.
[7] J. Etesami, N. Kiyavash, and T. Coleman. Learning minimal latent directed information polytrees. Neural Computation, 2016.
[8] A. Ghassami and N. Kiyavash. Interaction information for causal inference: The case of directed triangle. In IEEE International Symposium on Information Theory (ISIT), 2017.
[9] A. Ghassami, S. Salehkaleybar, and N. Kiyavash. Optimal experiment design for causal discovery from fixed number of experiments. arXiv preprint arXiv:1702.08567, 2017.
[10] A. Ghassami, S. Salehkaleybar, N. Kiyavash, and E. Bareinboim. Budgeted experiment design for causal structure learning. arXiv preprint arXiv:1709.03625, 2017.
[11] A. Hauser and P. Bühlmann. Two optimal strategies for active learning of causal models from interventional data. International Journal of Approximate Reasoning, 55(4):926–939, 2014.
[12] P. O. Hoyer, D. Janzing, J. M. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise models. In Advances in Neural Information Processing Systems, pages 689–696, 2009.
[13] D. Janzing, J. Mooij, K. Zhang, J. Lemeire, J. Zscheischler, P. Daniušis, B. Steudel, and B. Schölkopf. Information-geometric approach to inferring causal directions. Artificial Intelligence, 182:1–31, 2012.
[14] D. Janzing and B. Schölkopf. Causal inference using the algorithmic Markov condition. IEEE Transactions on Information Theory, 56(10):5168–5194, 2010.
[15] M. Kalisch, M. Mächler, D. Colombo, M. H. Maathuis, P. Bühlmann, et al. Causal inference using graphical models with the R package pcalg. Journal of Statistical Software, 47(11):1–26, 2012.
[16] S. Kim, C. J. Quinn, N. Kiyavash, and T. P. Coleman. Dynamic and succinct statistical analysis of neuroscience data. Proceedings of the IEEE, 102(5):683–698, 2014.
[17] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[18] H. Lütkepohl. New Introduction to Multiple Time Series Analysis. Springer Science & Business Media, 2005.
[19] D. Marbach, T. Schaffter, C. Mattiussi, and D. Floreano. Generating realistic in silico gene networks for performance assessment of reverse engineering methods. Journal of Computational Biology, 16(2):229–239, 2009.
[20] C. Meek. Graphical models: Selecting causal and statistical models. PhD thesis, Carnegie Mellon University, 1997.
[21] J. Pearl. Causality. Cambridge University Press, 2009.
[22] J. Peters and P. Bühlmann. Identifiability of Gaussian structural equation models with equal error variances. Biometrika, 101(1):219–228, 2014.
[23] J. Peters, P. Bühlmann, and N. Meinshausen. Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(5):947–1012, 2016.
[24] J. Peters, J. M. Mooij, D. Janzing, B. Schölkopf, et al. Causal discovery with continuous additive noise models. Journal of Machine Learning Research, 15(1):2009–2053, 2014.
[25] C. J. Quinn, N. Kiyavash, and T. P. Coleman. Efficient methods to compute optimal tree approximations of directed information graphs. IEEE Transactions on Signal Processing, 61(12):3173–3182, 2013.
[26] C. J. Quinn, N. Kiyavash, and T. P. Coleman. Directed information graphs. IEEE Transactions on Information Theory, 61(12):6887–6909, 2015.
[27] C. E. Rouse. Democratization or diversion? The effect of community colleges on educational attainment. Journal of Business & Economic Statistics, 13(2):217–224, 1995.
[28] T. Schaffter, D. Marbach, and D. Floreano. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods. Bioinformatics, 27(16):2263–2270, 2011.
[29] B. Schölkopf, D. Janzing, J. Peters, E. Sgouritsa, K. Zhang, and J. Mooij. On causal and anticausal learning. In Proceedings of the 29th International Conference on Machine Learning (ICML), pages 1255–1262, 2012.
[30] E. Sgouritsa, D. Janzing, P. Hennig, and B. Schölkopf. Inference of cause and effect with unsupervised inverse regression. In AISTATS, 2015.
[31] K. Shanmugam, M. Kocaoglu, A. G. Dimakis, and S. Vishwanath. Learning causal graphs with small interventions. In Advances in Neural Information Processing Systems, pages 3195–3203, 2015.
[32] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7(Oct):2003–2030, 2006.
[33] P. Spirtes, C. N. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2000.
[34] J. Sun, D. Taylor, and E. M. Bollt. Causal network inference by optimal causation entropy. SIAM Journal on Applied Dynamical Systems, 14(1):73–106, 2015.
[35] J. Tian and J. Pearl. Causal discovery from changes. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 512–521. Morgan Kaufmann Publishers Inc., 2001.
[36] T. Verma and J. Pearl. An algorithm for deciding if a set of observed independencies has a causal explanation. In Proceedings of the Eighth International Conference on Uncertainty in Artificial Intelligence, pages 323–330. Morgan Kaufmann Publishers Inc., 1992.
[37] K. Zhang, B. Huang, J. Zhang, C. Glymour, and B. Schölkopf. Causal discovery in the presence of distribution shift: Skeleton estimation and orientation determination. In Proc. International Joint Conference on Artificial Intelligence (IJCAI 2017), 2017.
[38] K. Zhang and A. Hyvärinen. On the identifiability of the post-nonlinear causal model. In Proc. 25th Conference on Uncertainty in Artificial Intelligence (UAI 2009), Montreal, Canada, 2009.
6,516
6,895
Online Influence Maximization under Independent Cascade Model with Semi-Bandit Feedback

Zheng Wen, Adobe Research, [email protected]
Branislav Kveton, Adobe Research, [email protected]
Michal Valko, SequeL team, INRIA Lille - Nord Europe, [email protected]
Sharan Vaswani, University of British Columbia, [email protected]

Abstract

We study the online influence maximization problem in social networks under the independent cascade model. Specifically, we aim to learn the set of "best influencers" in a social network online while repeatedly interacting with it. We address the challenges of (i) combinatorial action space, since the number of feasible influencer sets grows exponentially with the maximum number of influencers, and (ii) limited feedback, since only the influenced portion of the network is observed. Under a stochastic semi-bandit feedback, we propose and analyze IMLinUCB, a computationally efficient UCB-based algorithm. Our bounds on the cumulative regret are polynomial in all quantities of interest, achieve near-optimal dependence on the number of interactions and reflect the topology of the network and the activation probabilities of its edges, thereby giving insights on the problem complexity. To the best of our knowledge, these are the first such results. Our experiments show that in several representative graph topologies, the regret of IMLinUCB scales as suggested by our upper bounds. IMLinUCB permits linear generalization and thus is both statistically and computationally suitable for large-scale problems. Our experiments also show that IMLinUCB with linear generalization can lead to low regret in real-world online influence maximization.

1 Introduction

Social networks are increasingly important as media for spreading information, ideas, and influence. Computational advertising studies models of information propagation or diffusion in such networks [16, 6, 10]. Viral marketing aims to use this information propagation to spread awareness about a specific product. More precisely, agents (marketers) aim to select a fixed number of influencers (called seeds or source nodes) and provide them with free products or discounts. They expect that these users will influence their neighbours and, transitively, other users in the social network to adopt the product. This will thus result in information propagating across the network as more users adopt or become aware of the product. The marketer has a budget on the number of free products and must choose seeds in order to maximize the influence spread, which is the expected number of users that become aware of the product. This problem is referred to as influence maximization (IM) [16].

For IM, the social network is modeled as a directed graph with the nodes representing users, and the edges representing relations (e.g., friendships on Facebook, following on Twitter) between them. Each directed edge (i, j) is associated with an activation probability w̄(i, j) that models the strength of influence that user i has on user j. We say a node j is a downstream neighbor of node i if there is a directed edge (i, j) from i to j. The IM problem has been studied under a number of diffusion models [16, 13, 23]. The best known and studied are the models in [16], and in particular the independent cascade (IC) model. In this work, we assume that the diffusion follows the IC model and describe it next.
After the agent chooses a set of source nodes S, the independent cascade model defines a diffusion (influence) process: at the beginning, all nodes in S are activated (influenced); subsequently, every activated node i can activate its downstream neighbor j with probability w̄(i, j) once, independently of the history of the process. This process runs until no activations are possible. In the IM problem, the goal of the agent is to maximize the expected number of the influenced nodes subject to a cardinality constraint on S. Finding the best set S is an NP-hard problem, but under common diffusion models including IC, it can be efficiently approximated to within a factor of 1 − 1/e [16].

In many social networks, however, the activation probabilities are unknown. One possibility is to learn these from past propagation data [25, 14, 24]. However, in practice, such data are hard to obtain and the large number of parameters makes this learning challenging. This motivates the learning framework of IM bandits [31, 28, 29], where the agent needs to learn to choose a good set of source nodes while repeatedly interacting with the network. Depending on the feedback to the agent, the IM bandits can have (1) full-bandit feedback, where only the number of influenced nodes is observed; (2) node semi-bandit feedback, where the identity of influenced nodes is observed; or (3) edge semi-bandit feedback, where the identity of influenced edges (edges going out from influenced nodes) is observed. In this paper, we give results for the edge semi-bandit feedback model, where we observe, for each influenced node, the downstream neighbors that this node influences. Such feedback is feasible to obtain in most online social networks. These networks track activities of users, for instance, when a user retweets a tweet of another user. They can thus trace the propagation (of the tweet) through the network, thereby obtaining edge semi-bandit feedback.

The IM bandits problem combines two main challenges. First, the number of actions (possible sets S) grows exponentially with the cardinality constraint on S. Second, the agent can only observe the influenced portion of the network as feedback. Although IM bandits have been studied in the past [21, 8, 31, 5, 29] (see Section 6 for an overview and comparison), there are a number of open challenges [28]. One challenge is to identify reasonable complexity metrics that depend on both the topology and activation probabilities of the network and characterize the information-theoretic complexity of the IM bandits problem. Another challenge is to develop learning algorithms such that (i) their performance scales gracefully with these metrics and (ii) they are computationally efficient and can be applied to large social networks with millions of users.

In this paper, we address these two challenges under the IC model with access to edge semi-bandit feedback. We refer to our model as an independent cascade semi-bandit (ICSB). We make four main contributions. First, we propose IMLinUCB, a UCB-like algorithm for ICSBs that permits linear generalization and is suitable for large-scale problems. Second, we define a new complexity metric, referred to as maximum observed relevance for ICSB, which depends on the topology of the network and is a non-decreasing function of the activation probabilities. The maximum observed relevance C_* can also be upper bounded based on the network topology or the size of the network in the worst case.
However, in real-world social networks, due to the relatively low activation probabilities [14], C_* attains much smaller values as compared to the worst-case upper bounds. Third, we bound the cumulative regret of IMLinUCB. Our regret bounds are polynomial in all quantities of interest and have near-optimal dependence on the number of interactions. They reflect the structure and activation probabilities of the network through C_* and do not depend on inherently large quantities, such as the reciprocal of the minimum probability of being influenced (unlike [8]) and the cardinality of the action set. Finally, we evaluate IMLinUCB on several problems. Our empirical results on simple representative topologies show that the regret of IMLinUCB scales as suggested by our topology-dependent regret bounds. We also show that IMLinUCB with linear generalization can lead to low regret in real-world online influence maximization problems.

2 Influence Maximization under the Independent Cascade Model

In this section, we define notation and give the formal problem statement for the IM problem under the IC model. Consider a directed graph G = (V, E) with a set V = {1, 2, ..., L} of L = |V| nodes, a set E = {1, 2, ..., |E|} of directed edges, and an arbitrary binary weight function w : E → {0, 1}. We say that a node v₂ ∈ V is reachable from a node v₁ ∈ V under w if there is a directed path p = (e₁, e₂, ..., e_l) from v₁ to v₂ in G satisfying w(e_i) = 1 for all i = 1, 2, ..., l, where e_i is the i-th edge in p (see footnote 1 below). For a given source node set S ⊆ V and w, we say that node v ∈ V is influenced if v is reachable from at least one source node in S under w; and we denote the number of influenced nodes in G by f(S, w). By definition, the nodes in S are always influenced.

The influence maximization (IM) problem is characterized by a triple (G, K, w̄), where G is a given directed graph, K ≤ L is the cardinality of source nodes, and w̄ : E → [0, 1] is a probability weight function mapping each edge e ∈ E to a real number w̄(e) ∈ [0, 1]. The agent needs to choose a set of K source nodes S ⊆ V based on (G, K, w̄). Then a random binary weight function w, which encodes the diffusion process under the IC model, is obtained by independently sampling a Bernoulli random variable w(e) ∼ Bern(w̄(e)) for each edge e ∈ E. The agent's objective is to maximize the expected number of the influenced nodes: $\max_{S:\,|S|=K} \bar f(S, \bar w)$, where $\bar f(S, \bar w) = \mathbb{E}_w[f(S, w)]$ is the expected number of influenced nodes when the source node set is S and w is sampled according to w̄ (see footnote 2 below).

It is well known that the (offline) IM problem is NP-hard [16], but it can be approximately solved by approximation/randomized algorithms [6] under the IC model. In this paper, we refer to such algorithms as oracles to distinguish them from the machine learning algorithms discussed in the following sections. Let S^opt be the optimal solution of this problem, and S* = ORACLE(G, K, w̄) be the (possibly random) solution of an oracle ORACLE. For any α, γ ∈ [0, 1], we say that ORACLE is an (α, γ)-approximation oracle for a given (G, K) if, for any w̄, $\bar f(S^*, \bar w) \ge \alpha \bar f(S^{opt}, \bar w)$ with probability at least γ. Notice that this further implies that $\mathbb{E}[\bar f(S^*, \bar w)] \ge \alpha\gamma \bar f(S^{opt}, \bar w)$. We say an oracle is exact if α = γ = 1.

3 Influence Maximization Semi-Bandit

In this section, we first describe the IM semi-bandit problem. Next, we state the linear generalization assumption and describe IMLinUCB, our UCB-based semi-bandit algorithm.
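A minimal Python sketch (ours, not the authors' code) of the diffusion process just defined: the first function lazily samples a binary weight function w ∼ Bern(w̄) while spreading influence from S, and the second Monte Carlo estimates f̄(S, w̄) = E_w[f(S, w)]. The 4-node graph and its probabilities are invented for illustration.

```python
import random

def simulate_spread(adj, wbar, S, rng=random):
    """One IC realization: lazily sample w(e) ~ Bern(wbar[e]) while spreading
    influence from the seed set S; returns the set of influenced nodes."""
    influenced, frontier = set(S), list(S)
    while frontier:
        u = frontier.pop()
        for v in adj.get(u, []):
            if v not in influenced and rng.random() < wbar[(u, v)]:
                influenced.add(v)
                frontier.append(v)
    return influenced

def estimate_fbar(adj, wbar, S, n_samples=10_000):
    """Monte Carlo estimate of fbar(S, wbar) = E_w[f(S, w)]."""
    return sum(len(simulate_spread(adj, wbar, S)) for _ in range(n_samples)) / n_samples

# Hypothetical 4-node instance.
adj = {1: [2, 3], 2: [4], 3: [4]}
wbar = {(1, 2): 0.5, (1, 3): 0.5, (2, 4): 0.3, (3, 4): 0.3}
print(estimate_fbar(adj, wbar, S={1}))   # roughly 1 + 0.5 + 0.5 + P(4 influenced)
```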
3.1 Protocol

The independent cascade semi-bandit (ICSB) problem is also characterized by a triple (G, K, w̄), but w̄ is unknown to the agent. The agent interacts with the independent cascade semi-bandit for n rounds. At each round t = 1, 2, ..., n, the agent first chooses a source node set S_t ⊆ V with cardinality K based on its prior information and past observations. Influence then diffuses from the nodes in S_t according to the IC model. Similarly to the previous section, this can be interpreted as the environment generating a binary weight function w_t by independently sampling w_t(e) ∼ Bern(w̄(e)) for each e ∈ E. At round t, the agent receives the reward f(S_t, w_t), which is equal to the number of nodes influenced at that round. The agent also receives edge semi-bandit feedback from the diffusion process. Specifically, for any edge e = (u₁, u₂) ∈ E, the agent observes the realization of w_t(e) if and only if the start node u₁ of the directed edge e is influenced in the realization w_t. The agent's objective is to maximize the expected cumulative reward over the n steps.

3.2 Linear generalization

Since the number of edges in real-world social networks tends to be in the millions or even billions, we need to exploit some generalization model across activation probabilities to develop efficient and deployable learning algorithms. In particular, we assume that there exists a linear-generalization model for the probability weight function w̄. That is, each edge e ∈ E is associated with a known feature vector x_e ∈ ℝ^d (here d is the dimension of the feature vector) and there is an unknown coefficient vector θ* ∈ ℝ^d such that for all e ∈ E, w̄(e) is "well approximated" by x_e^T θ*. Formally, we assume that ε = max_{e∈E} |w̄(e) − x_e^T θ*| is small. In Section 5.2, we see that such a linear generalization leads to efficient learning in real-world networks. Note that all vectors in this paper are column vectors.

[1] As is standard in graph theory, a directed path is a sequence of directed edges connecting a sequence of distinct nodes, under the restriction that all edges are directed in the same direction.
[2] Notice that the definitions of f(S, w) and f̄(S, w̄) are consistent in the sense that if w̄ ∈ {0, 1}^{|E|}, then f(S, w̄) = f̄(S, w̄) with probability 1.

Algorithm 1 IMLinUCB: Influence Maximization Linear UCB
Input: graph G, source node set cardinality K, oracle ORACLE, feature vectors x_e, and algorithm parameters σ, c > 0
Initialization: B_0 ← 0 ∈ ℝ^d, M_0 ← I ∈ ℝ^{d×d}
for t = 1, 2, ..., n do
  1. set θ̄_{t−1} ← σ^{−2} M_{t−1}^{−1} B_{t−1} and the UCBs as U_t(e) ← Proj_{[0,1]}( x_e^T θ̄_{t−1} + c √(x_e^T M_{t−1}^{−1} x_e) ) for all e ∈ E
  2. choose S_t ← ORACLE(G, K, U_t), and observe the edge-level semi-bandit feedback
  3. update statistics:
     (a) initialize M_t ← M_{t−1} and B_t ← B_{t−1}
     (b) for all observed edges e ∈ E, update M_t ← M_t + σ^{−2} x_e x_e^T and B_t ← B_t + x_e w_t(e)

Similar to the existing approaches for linear bandits [1, 9], we exploit the linear generalization to develop a learning algorithm for ICSB. Without loss of generality, we assume that ‖x_e‖₂ ≤ 1 for all e ∈ E. Moreover, we use X ∈ ℝ^{|E|×d} to denote the feature matrix, i.e., the row of X associated with edge e is x_e^T. Note that if a learning agent does not know how to construct good features, it can always choose the naive feature matrix X = I ∈ ℝ^{|E|×|E|} and have no generalization model across edges. We refer to the special case X = I ∈ ℝ^{|E|×|E|} as the tabular case.
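To make Algorithm 1 concrete, here is a self-contained Python sketch (ours; not the authors' implementation) of IMLinUCB on an invented 3-node line graph 0 → 1 → 2, in the tabular case and with an exact oracle written for this specific toy instance; the constant c is set heuristically rather than by Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(0)

edges = [(0, 1), (1, 2)]               # toy line graph 0 -> 1 -> 2
w_true = np.array([0.6, 0.4])          # unknown mean weights wbar(e)
X = np.eye(2)                          # tabular feature matrix (d = |E|)
d, sigma, c, n = 2, 1.0, 1.0, 500

def oracle(U):
    # Exact oracle for this instance with K = 1: seeding node 0 reaches node 1
    # w.p. U[0] and node 2 w.p. U[0] * U[1]; compare with seeding node 1 or 2.
    return int(np.argmax([1 + U[0] + U[0] * U[1], 1 + U[1], 1]))

M, B = np.eye(d), np.zeros(d)
for t in range(n):
    Minv = np.linalg.inv(M)
    theta = Minv @ B / sigma**2                                # bar-theta_{t-1}
    U = np.clip(X @ theta + c * np.sqrt(np.sum(X @ Minv * X, axis=1)), 0, 1)
    seed = oracle(U)                                           # S_t (singleton)
    influenced, w_t = {seed}, rng.random(2) < w_true           # realization w_t
    for e, (u, v) in enumerate(edges):                         # topological order
        if u in influenced:                                    # edge e is observed
            if w_t[e]:
                influenced.add(v)
            M += np.outer(X[e], X[e]) / sigma**2               # update statistics
            B += X[e] * w_t[e]
print("estimated weights:", np.linalg.inv(M) @ B / sigma**2)   # approaches w_true
```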
3.3 IMLinUCB algorithm

In this section, we propose Influence Maximization Linear UCB (IMLinUCB), detailed in Algorithm 1. Notice that IMLinUCB represents its past observations as a positive-definite (Gram) matrix M_t ∈ ℝ^{d×d} and a vector B_t ∈ ℝ^d. Specifically, let X_t be a matrix whose rows are the feature vectors of all observed edges in t steps and Y_t be a binary column vector encoding the realizations of all observed edges in t steps. Then M_t = I + σ^{−2} X_t^T X_t and B_t = X_t^T Y_t. At each round t, IMLinUCB operates in three steps: First, it computes an upper confidence bound U_t(e) for each edge e ∈ E. Note that Proj_{[0,1]}(·) projects a real number into the interval [0, 1] to ensure that U_t ∈ [0, 1]^{|E|}. Second, it chooses a set of source nodes based on the given ORACLE and U_t, which is also a probability-weight function. Finally, it receives the edge semi-bandit feedback and uses it to update M_t and B_t. It is worth emphasizing that IMLinUCB is computationally efficient as long as ORACLE is computationally efficient. Specifically, at each round t, the computational complexities of both Step 1 and Step 3 of IMLinUCB are O(|E|d²) (see footnote 3 below). It is worth pointing out that in the tabular case, IMLinUCB reduces to CUCB [7], in the sense that the confidence radii in IMLinUCB are the same as those in CUCB, up to logarithmic factors. That is, CUCB can be viewed as a special case of IMLinUCB with X = I.

3.4 Performance metrics

Recall that the agent's objective is to maximize the expected cumulative reward, which is equivalent to minimizing the expected cumulative regret. The cumulative regret is the loss in reward (accumulated over rounds) due to the lack of knowledge of the activation probabilities. Observe that in each round t, IMLinUCB needs to use an approximation/randomized algorithm ORACLE for solving the offline IM problem. Naturally, this can lead to O(n) cumulative regret, since at each round there is a non-diminishing regret due to the approximation/randomized nature of ORACLE. To analyze the performance of IMLinUCB in such cases, we define a more appropriate performance metric, the scaled cumulative regret, as $R^\eta(n) = \sum_{t=1}^{n} \mathbb{E}[R_t^\eta]$, where n is the number of steps, η > 0 is the scale, and $R_t^\eta = f(S^{opt}, w_t) - \frac{1}{\eta} f(S_t, w_t)$ is the η-scaled realized regret at round t. When η = 1, R^η(n) reduces to the standard expected cumulative regret R(n).

[3] Notice that in a practical implementation, we store M_t^{−1} instead of M_t. Moreover, M_t ← M_t + σ^{−2} x_e x_e^T is equivalent to $M_t^{-1} \leftarrow M_t^{-1} - \frac{M_t^{-1} x_e x_e^T M_t^{-1}}{x_e^T M_t^{-1} x_e + \sigma^2}$.

[Figure 1: a. Bar graph on 8 nodes. b. Star graph on 4 nodes. c. Ray graph on 10 nodes. d. Grid graph on 9 nodes. Each undirected edge denotes two directed edges in opposite directions.]

4 Analysis

In this section, we give a regret bound for IMLinUCB for the case when w̄(e) = x_e^T θ* for all e ∈ E, i.e., when the linear generalization is perfect. Our main contribution is a regret bound that scales with a new complexity metric, maximum observed relevance, which depends on both the topology of G and the probability weight function w̄, and is defined in Section 4.1. We highlight this because most known results for this problem are worst case, and some of them do not depend on the probability weight function at all.

4.1 Maximum observed relevance

We start by defining some terminology. For a given directed graph G = (V, E) and source node set S ⊆ V, we say an edge e ∈ E is relevant to a node v ∈ V \ S under S if there exists a path p from a source node s ∈
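Footnote 3's rank-one inverse update can be checked directly; a minimal sketch (ours), verified against a recomputed inverse:

```python
import numpy as np

def rank_one_inverse_update(Minv, x, sigma=1.0):
    """Sherman-Morrison form of M <- M + sigma^{-2} x x^T, maintained on M^{-1}:
    M^{-1} <- M^{-1} - (M^{-1} x x^T M^{-1}) / (x^T M^{-1} x + sigma^2)."""
    Mx = Minv @ x
    return Minv - np.outer(Mx, Mx) / (x @ Mx + sigma**2)

rng = np.random.default_rng(1)
M, x = np.eye(3), rng.random(3)
assert np.allclose(rank_one_inverse_update(np.linalg.inv(M), x),
                   np.linalg.inv(M + np.outer(x, x)))   # sigma = 1 here
```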
S to v such that (1) e ∈ p and (2) p does not contain another source node other than s. Notice that with a given S, whether or not a node v ∈ V \ S is influenced only depends on the binary weights w on its relevant edges. For any edge e ∈ E, we define N_{S,e} as the number of nodes in V \ S it is relevant to, and define P_{S,e} as the conditional probability that e is observed given S,

$N_{S,e} = \sum_{v \in V \setminus S} \mathbb{1}\{e \text{ is relevant to } v \text{ under } S\} \quad\text{and}\quad P_{S,e} = P(e \text{ is observed} \mid S). \qquad (1)$

Notice that N_{S,e} only depends on the topology of G, while P_{S,e} depends on both the topology of G and the probability weight w̄. The maximum observed relevance C_* is defined as the maximum (over S) 2-norm of the N_{S,e}'s weighted by the P_{S,e}'s,

$C_* = \max_{S:\,|S|=K} \sqrt{\textstyle\sum_{e \in E} N_{S,e}^2\, P_{S,e}}. \qquad (2)$

As is detailed in the proof of Lemma 1 in Appendix A, C_* arises in the step where the Cauchy-Schwarz inequality is applied. Note that C_* also depends on both the topology of G and the probability weight w̄. However, C_* can be bounded from above based only on the topology of G or the size of the problem, i.e., L = |V| and |E|. Specifically, we have

$C_* \le C_G = \max_{S:\,|S|=K} \sqrt{\textstyle\sum_{e \in E} N_{S,e}^2} \le (L-K)\sqrt{|E|} = O\big(L\sqrt{|E|}\big) = O(L^2), \qquad (3)$

where C_G is the maximum/worst-case (over w̄) C_* for the directed graph G, and the maximum is obtained by setting w̄(e) = 1 for all e ∈ E. Since C_G is worst-case, it might be very far away from C_* if the activation probabilities are small. Indeed, this is what we expect in typical real-world situations. Notice also that if max_{e∈E} w̄(e) → 0, then P_{S,e} → 0 for all e ∉ E(S) and P_{S,e} = 1 for all e ∈ E(S), where E(S) is the set of edges with start node in S; hence we have $0 \le C_* \le C_G^0 = \max_{S:\,|S|=K} \sqrt{\sum_{e \in E(S)} N_{S,e}^2}$. In particular, if K is small, C_G^0 is much less than C_G in many topologies. For example, in a complete graph with K = 1, C_G = Θ(L²) while C_G^0 = Θ(L^{3/2}). Finally, it is worth pointing out that there exist situations (G, w̄) such that C_* = Θ(L²). One such example is when G is a complete graph with L nodes and w̄(e) = L/(L + 1) for all edges e in this graph.

To give more intuition, in the rest of this subsection, we illustrate how C_G, the worst-case C_*, varies with four graph topologies in Figure 1: bar, star, ray, and grid, as well as two other topologies: general tree and complete graph. We fix the node set V = {1, 2, ..., L} for all graphs. The bar graph (Figure 1a) is a graph where nodes i and i + 1 are connected when i is odd. The star graph (Figure 1b) is a graph where node 1 is central and all remaining nodes i ∈ V \ {1} are connected to it. The distance between any two of these nodes is 2. The ray graph (Figure 1c) is a star graph with $k = \lceil \sqrt{L-1} \rceil$ arms, where node 1 is central and each arm contains either ⌈(L−1)/k⌉ or ⌊(L−1)/k⌋ nodes connected in a line. The distance between any two nodes in this graph is O(√L). The grid graph (Figure 1d) is a classical non-tree graph with O(L) edges.

To see how C_G varies with the graph topology, we start with the simplified case K = |S| = 1. In the bar graph (Figure 1a), only one edge is relevant to a node v ∈ V \ S and all the other edges are not relevant to any nodes. Therefore, C_G = O(1). In the star graph (Figure 1b), for any s, at most one edge is relevant to at most L − 1 nodes and the remaining edges are relevant to at most one node. In this case, $C_G \le \sqrt{L^2 + L} = O(L)$. In the ray graph (Figure 1c), for any s, at most O(√L) edges are relevant to at most L nodes and the remaining L edges are relevant to at most O(√L) nodes.
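A brute-force sketch (ours) of N_{S,e} and of the worst-case quantity C_G of Equations 1-3; it enumerates simple paths, so it is only meant for tiny illustrative graphs such as the star of Figure 1b.

```python
import itertools, math
from collections import defaultdict

def relevant_sets(adj, S):
    """For each edge e, the set of nodes in V \\ S that e is relevant to,
    by enumerating simple paths from every source (exponential; tiny graphs only)."""
    S, rel = set(S), defaultdict(set)
    def dfs(s, visited, path_edges, u):
        for v in adj.get(u, []):
            if v in visited or v in S - {s}:     # stay simple; avoid other sources
                continue
            edges = path_edges + [(u, v)]
            if v not in S:
                for e in edges:                  # every edge on the path is relevant to v
                    rel[e].add(v)
            dfs(s, visited | {v}, edges, v)
    for s in S:
        dfs(s, {s}, [], s)
    return rel

def worst_case_CG(V, adj, K):
    """C_G = max_{|S|=K} sqrt(sum_e N_{S,e}^2), i.e. C_* with every P_{S,e} = 1."""
    return max(math.sqrt(sum(len(vs) ** 2 for vs in relevant_sets(adj, S).values()))
               for S in itertools.combinations(V, K))

# Star on 4 nodes (Figure 1b): node 1 central; each undirected edge is two arcs.
adj = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1]}
print(worst_case_CG([1, 2, 3, 4], adj, K=1))   # sqrt(9 + 1 + 1): seeded at a leaf
```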
In this case, $C_G = O\big(\sqrt{\sqrt{L}\,L^2 + L \cdot L}\big) = O(L^{5/4})$. Finally, recall that for all graphs we can bound C_G by O(L√|E|), regardless of K. Hence, for the grid graph (Figure 1d) and a general tree graph, C_G = O(L^{3/2}) since |E| = O(L); for the complete graph, C_G = O(L²) since |E| = O(L²). Clearly, C_G varies widely with the topology of the graph. The second column of Table 1 summarizes how C_G varies with the above-mentioned graph topologies for general K = |S|.

4.2 Regret guarantees

Consider C_* defined in Section 4.1 and recall the worst-case upper bound $C_* \le (L-K)\sqrt{|E|}$; we have the following regret guarantees for IMLinUCB.

Theorem 1. Assume that (1) w̄(e) = x_e^T θ* for all e ∈ E and (2) ORACLE is an (α, γ)-approximation algorithm. Let D be a known upper bound on ‖θ*‖₂; if we apply IMLinUCB with σ = 1 and

$c = \sqrt{d \log\Big(1 + \frac{n|E|}{d}\Big) + 2 \log\big(n(L+1-K)\big)} + D, \qquad (4)$

then we have

$R^{\alpha\gamma}(n) \le \frac{2cC_*}{\alpha\gamma} \sqrt{d\, n |E| \log_2\Big(1 + \frac{n|E|}{d}\Big)} + 1 = \tilde O\Big(d\, C_* \sqrt{|E|\, n}\,/(\alpha\gamma)\Big) \qquad (5)$

$\le \tilde O\Big(d\,(L-K)\,|E|\,\sqrt{n}\,/(\alpha\gamma)\Big). \qquad (6)$

Moreover, if the feature matrix X = I ∈ ℝ^{|E|×|E|} (i.e., the tabular case), we have

$R^{\alpha\gamma}(n) \le \frac{2cC_*}{\alpha\gamma} \sqrt{n |E| \log_2(1+n)} + 1 = \tilde O\Big(\sqrt{|E|}\, C_* \sqrt{n}\,/(\alpha\gamma)\Big) \qquad (7)$

$\le \tilde O\Big((L-K)\,|E|^{3/2}\,\sqrt{n}\,/(\alpha\gamma)\Big). \qquad (8)$

Please refer to Appendix A for the proof of Theorem 1, which we outline in Section 4.3. We now briefly comment on the regret bounds in Theorem 1.

Topology-dependent bounds: Since C_* is topology-dependent, the regret bounds in Equations 5 and 7 are also topology-dependent. Table 1 summarizes the regret bounds for each topology (see footnote 4) discussed in Section 4.1. Since the regret bounds in Table 1 are the worst-case regret bounds for a given topology, more general topologies have larger regret bounds. For instance, the regret bounds for the tree are larger than their counterparts for the star and ray, since star and ray are special trees. The grid and tree can also be viewed as special complete graphs by setting w̄(e) = 0 for some e ∈ E, hence the complete graph has larger regret bounds. Again, in practice we expect C_* to be far smaller due to activation probabilities.

[4] The regret bound for the bar graph is based on Theorem 2 in the appendix, which is a stronger version of Theorem 1 for disconnected graphs.

Table 1: C_G and worst-case regret bounds for different graph topologies.

topology         C_G (worst-case C_*)    R^{αγ}(n) for general X      R^{αγ}(n) for X = I
bar graph        O(√K)                   Õ(dK√n/(αγ))                 Õ(L√(Kn)/(αγ))
star graph       O(L√K)                  Õ(dL^{3/2}√(Kn)/(αγ))        Õ(L²√(Kn)/(αγ))
ray graph        O(L^{5/4}√K)            Õ(dL^{7/4}√(Kn)/(αγ))        Õ(L^{9/4}√(Kn)/(αγ))
tree graph       O(L^{3/2})              Õ(dL²√n/(αγ))                Õ(L^{5/2}√n/(αγ))
grid graph       O(L^{3/2})              Õ(dL²√n/(αγ))                Õ(L^{5/2}√n/(αγ))
complete graph   O(L²)                   Õ(dL³√n/(αγ))                Õ(L⁴√n/(αγ))

Tighter bounds in tabular case and under exact oracle: Notice that for the tabular case with feature matrix X = I and d = |E|, regret bounds tighter by Õ(√|E|) are obtained in Equations 7 and 8. Also notice that the Õ(1/(αγ)) factor is due to the fact that ORACLE is an (α, γ)-approximation oracle. If ORACLE solves the IM problem exactly (i.e., α = γ = 1), then R^{αγ}(n) = R(n).

Tightness of our regret bounds: First, note that our regret bound in the bar case with K = 1 matches the regret bound of the classic LinUCB algorithm. Specifically, with perfect linear generalization, this case is equivalent to a linear bandit problem with L arms and feature dimension d. From Table 1, our regret bound in this case is Õ(d√n), which matches the known regret bound of LinUCB that can be obtained by the technique of [1].
Second, we briefly discuss the tightness of the regret bound in Equation 6 for a general graph with L nodes and |E| edges. Note that the Õ(√n) dependence on time is near-optimal, and the Õ(d) dependence on feature dimension is standard in linear bandits [1, 33], since Õ(√d) results are only known for impractical algorithms. The Õ(L − K) factor is due to the fact that the reward in this problem ranges from K to L, rather than from 0 to 1. To explain the Õ(|E|) factor in this bound, notice that one Õ(√|E|) factor is due to the fact that at most O(|E|) edges might be observed at each round (see Theorem 3), and is intrinsic to the problem, similarly to combinatorial semi-bandits [19]; another Õ(√|E|) factor is due to linear generalization (see Lemma 1) and might be removed by better analysis. We conjecture that our Õ(d(L−K)|E|√n/(αγ)) regret bound in this case is at most Õ(√(|E|d)) away from being tight.

4.3 Proof sketch

We now outline the proof of Theorem 1. For each round t ≤ n, we define the favorable event $\xi_{t-1} = \big\{ |x_e^T(\bar\theta_{\tau-1} - \theta^*)| \le c \sqrt{x_e^T M_{\tau-1}^{-1} x_e},\ \forall e \in E,\ \forall \tau \le t \big\}$, and the unfavorable event $\bar\xi_{t-1}$ as the complement of $\xi_{t-1}$. If we decompose $\mathbb{E}[R_t^{\alpha\gamma}]$, the (αγ)-scaled expected regret at round t, over events $\xi_{t-1}$ and $\bar\xi_{t-1}$, and bound $R_t^{\alpha\gamma}$ on event $\bar\xi_{t-1}$ using the naive bound $R_t^{\alpha\gamma} \le L - K$, then

$\mathbb{E}[R_t^{\alpha\gamma}] \le P(\xi_{t-1})\, \mathbb{E}[R_t^{\alpha\gamma} \mid \xi_{t-1}] + P(\bar\xi_{t-1})\, [L - K].$

By choosing c as specified by Equation 4, we have $P(\bar\xi_{t-1})[L-K] < 1/n$ (see Lemma 2 in the appendix). On the other hand, notice that by definition of $\xi_{t-1}$, we have w̄(e) ≤ U_t(e) for all e ∈ E under event $\xi_{t-1}$. Using the monotonicity of f̄ in the probability weight, and the fact that ORACLE is an (α, γ)-approximation algorithm, we have $\mathbb{E}[R_t^{\alpha\gamma} \mid \xi_{t-1}] \le \mathbb{E}[\bar f(S_t, U_t) - \bar f(S_t, \bar w) \mid \xi_{t-1}]/(\alpha\gamma)$. The next observation is that, from the linearity of expectation, the gap $\bar f(S_t, U_t) - \bar f(S_t, \bar w)$ decomposes over nodes v ∈ V \ S_t. Specifically, for any source node set S ⊆ V, any probability weight function w̄ : E → [0, 1], and any node v ∈ V, we define f̄(S, w̄, v) as the probability that node v is influenced if the source node set is S and the probability weight is w̄. Hence, we have

$\bar f(S_t, U_t) - \bar f(S_t, \bar w) = \sum_{v \in V \setminus S_t} \big[ \bar f(S_t, U_t, v) - \bar f(S_t, \bar w, v) \big].$

[Figure 2: Experimental results. (a) Stars and rays: the log-log plots of the n-step regret of IMLinUCB in two graph topologies after n = 10⁴ steps; the number of nodes L and the mean edge weight are varied. (b) Subgraph of Facebook network: cumulative regret of CUCB and IMLinUCB with d = 10 over 5000 rounds.]
V \ St , where ESt ,v is the set of edges relevant to v and Ot (e) is the event that edge e is observed at round t. Based on Equation 9, we can prove Theorem 1 using the standard linear-bandit techniques (see Appendix A). 5 Experiments In this section, we present a synthetic experiment in order to empirically validate our upper bounds on the regret. Next, we evaluate our algorithm on a real-world Facebook subgraph. 5.1 Stars and rays In the first experiment, we evaluate IMLinUCB on undirected stars and rays (Figure 1) and validate that the regret grows with the number of nodes L and the maximum observed relevance C? as shown in Table 1. We focus on the tabular case (X = I) with K = |S| = 1, where the IM problem can be solved exactly. We vary the number of nodes L; and edge weight w(e) = ?, which is the same for all edges e. We run IMLinUCB for n = 104 steps and verify that it converges to the optimal solution in each experiment. We report the n-step regret of IMLinUCB for 8 ? L ? 32 in Figure 2a. Recall that e 2 ) for star and R(n) = O(L e 94 ) for ray. from Table 1, R(n) = O(L We numerically estimate the growth of regret in L, the exponent of L, in the log-log space of L and regret. In particular, since log(f (L)) = p log(L) + log(c) for any f (L) = cLp and c > 0, both p and log(c) can be estimated by linear regression in the new space. For star graphs with ? = 0.8 and ? = 0.7, our estimated growth are respectively O(L2.040 ) and O(L2.056 ), which are close to the e 2 ). For ray graphs with ? = 0.8 and ? = 0.7, our estimated growth are respectively expected O(L 2.488 e 94 ). This shows that maximum O(L ) and O(L2.467 ), which are again close to the expected O(L observed relevance C? proposed in Section 4.1 is a reasonable complexity metric for these two topologies. 5.2 Subgraph of Facebook network In the second experiment, we demonstrate the potential performance gain of IMLinUCB in realworld influence maximization semi-bandit problems by exploiting linear generalization across edges. Specifically, we compare IMLinUCB with CUCB in a subgraph of Facebook network from [22]. The subgraph has L = |V| = 327 nodes and |E| = 5038 directed edges. Since the true probability weight 8 function w is not available, we independently sample w(e)?s from the uniform distribution U (0, 0.1) and treat them as ground-truth. Note that this range of probabilities is guided by empirical evidence in [14, 3]. We set n = 5000 and K = 10 in this experiment. For IMLinUCB, we choose d = 10 and generate edge feature xe ?s as follows: we first use node2vec algorithm [15] to generate a node feature in <d for each node v ? V; then for each edge e, we generate xe as the element-wise product of node features of the two nodes connected to e. Note that the linear generalization in this experiment is imperfect in the sense that min??<d maxe?E |w(e) ? xTe ?| > 0. For both CUCB and IMLinUCB, we choose ORACLE as the state-of-the-art offline IM algorithm proposed in [27]. To compute the cumulative regret, we compare against a fixed seed set S ? obtained by using the true w as input to the oracle proposed in [27]. We average the empirical cumulative regret over 10 independent runs, and plot the results in Figure 2b. The experimental results show that compared with CUCB, IMLinUCB can significantly reduce the cumulative regret by exploiting linear generalization across w(e)?s. 6 Related Work There exist prior results on IM semi-bandits [21, 8, 31]. First, Lei et al. [21] gave algorithms for the same feedback model as ours. 
The algorithms are not analyzed and cannot solve large-scale problems because they estimate each edge weight independently. Second, our setting is a special case of stochastic combinatorial semi-bandits with a submodular reward function and stochastically observed edges [8]. Their work is the closest related work. Their gap-dependent and gap-free bounds are both problematic because they depend on the reciprocal of the minimum observation probability p* of an edge: consider a line graph with |E| edges where all edge weights are 0.5; then 1/p* is 2^{|E|−1}. On the other hand, our derived regret bounds in Theorem 1 are polynomial in all quantities of interest. A very recent result of Wang and Chen [32] removes the 1/p* factor in [8] for the tabular case and presents a worst-case bound of Õ(L|E|√n), which in the tabular complete graph case improves over our result by Õ(L). On the other hand, their analysis does not give the structural guarantees that we provide with the maximum observed relevance C_*, which obtains potentially much better results for the case at hand and gives insights into the complexity of IM bandits. Moreover, both Chen et al. [8] and Wang and Chen [32] do not consider generalization models across edges or nodes, and therefore their proposed algorithms are unlikely to be practical for real-world social networks. In contrast, our proposed algorithm scales to large problems by exploiting linear generalization across edges.

IM bandits for different influence models and settings: There exist a number of extensions and related results for IM bandits. We only mention the most related ones (see [28] for a recent survey). Vaswani et al. [31] proposed a learning algorithm for a different and more challenging feedback model, where the learning agent observes influenced nodes but not the edges, but they do not give any guarantees. Carpentier and Valko [5] give a minimax optimal algorithm for IM bandits, but only consider a local model of influence with a single source, where a cascade of influences never happens. In related networked bandits [11], the learner chooses a node and its reward is the sum of the rewards of the chosen node and its neighborhood. The problem gets more challenging when we allow the influence probabilities to change [2], when we allow the seed set to be chosen adaptively [30], or when we consider a continuous model [12]. Furthermore, Singla et al. [26] treat the IM setting with additional observability constraints, where we face a restriction on which nodes we can choose at each round. This setting is also related to volatile multi-armed bandits, where the set of possible arms changes [4]. Vaswani et al. [29] proposed a diffusion-independent algorithm for IM semi-bandits with a wide range of diffusion models, based on the maximum-reachability approximation. Despite its wide applicability, the maximum-reachability approximation introduces an additional approximation factor into the scaled regret bounds. As they have discussed, this approximation factor can be large in some cases. Lagrée et al. [20] treat a persistent extension of IM bandits, where some nodes become persistent over the rounds and no longer yield rewards. This work is also a generalization and extension of recent work on cascading bandits [17, 18, 34], since cascading bandits can be viewed as variants of online influence maximization problems with special topologies (chains).
Acknowledgements

The research presented was supported by the French Ministry of Higher Education and Research, the Nord-Pas-de-Calais Regional Council, and French National Research Agency projects ExTra-Learn (n.ANR-14-CE24-0010-01) and BoB (n.ANR-16-CE23-0003). We would also like to thank Dr. Wei Chen and Mr. Qinshi Wang for pointing out a mistake in an earlier version of this paper.

References

[1] Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. In Neural Information Processing Systems, 2011.
[2] Yixin Bao, Xiaoke Wang, Zhi Wang, Chuan Wu, and Francis C. M. Lau. Online influence maximization in non-stationary social networks. In International Symposium on Quality of Service, 2016.
[3] Nicola Barbieri, Francesco Bonchi, and Giuseppe Manco. Topic-aware social influence propagation models. Knowledge and Information Systems, 37(3):555-584, 2013.
[4] Zahy Bnaya, Rami Puzis, Roni Stern, and Ariel Felner. Social network search as a volatile multi-armed bandit problem. Human Journal, 2(2):84-98, 2013.
[5] Alexandra Carpentier and Michal Valko. Revealing graph bandits for maximizing local influence. In International Conference on Artificial Intelligence and Statistics, 2016.
[6] Wei Chen, Chi Wang, and Yajun Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In Knowledge Discovery and Data Mining, 2010.
[7] Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework, results and applications. In International Conference on Machine Learning, 2013.
[8] Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit and its extension to probabilistically triggered arms. Journal of Machine Learning Research, 17, 2016.
[9] Varsha Dani, Thomas P. Hayes, and Sham M. Kakade. Stochastic linear optimization under bandit feedback. In Conference on Learning Theory, 2008.
[10] David Easley and Jon Kleinberg. Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Cambridge University Press, 2010.
[11] Meng Fang and Dacheng Tao. Networked bandits with disjoint linear payoffs. In International Conference on Knowledge Discovery and Data Mining, 2014.
[12] Mehrdad Farajtabar, Xiaojing Ye, Sahar Harati, Le Song, and Hongyuan Zha. Multistage campaigning in social networks. In Neural Information Processing Systems, 2016.
[13] Manuel Gomez Rodriguez and Bernhard Schölkopf. Influence maximization in continuous time diffusion networks. In International Conference on Machine Learning, 2012.
[14] Amit Goyal, Francesco Bonchi, and Laks V. S. Lakshmanan. Learning influence probabilities in social networks. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, pages 241-250. ACM, 2010.
[15] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Knowledge Discovery and Data Mining. ACM, 2016.
[16] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Knowledge Discovery and Data Mining, page 137, 2003.
[17] Branislav Kveton, Csaba Szepesvári, Zheng Wen, and Azin Ashkan. Cascading bandits: Learning to rank in the cascade model. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
[18] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári. Combinatorial cascading bandits. In Advances in Neural Information Processing Systems 28, pages 1450-1458, 2015.
[19] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári.
Tight regret bounds for stochastic combinatorial semi-bandits. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.
[20] Paul Lagrée, Olivier Cappé, Bogdan Cautis, and Silviu Maniu. Effective large-scale online influence maximization. In International Conference on Data Mining, 2017.
[21] Siyu Lei, Silviu Maniu, Luyi Mo, Reynold Cheng, and Pierre Senellart. Online influence maximization. In Knowledge Discovery and Data Mining, 2015.
[22] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, 2014.
[23] Yanhua Li, Wei Chen, Yajun Wang, and Zhi-Li Zhang. Influence diffusion dynamics and influence maximization in social networks with friend and foe relationships. In ACM International Conference on Web Search and Data Mining. ACM, 2013.
[24] Praneeth Netrapalli and Sujay Sanghavi. Learning the graph of epidemic cascades. In ACM SIGMETRICS Performance Evaluation Review, volume 40, pages 211-222. ACM, 2012.
[25] Kazumi Saito, Ryohei Nakano, and Masahiro Kimura. Prediction of information diffusion probabilities for independent cascade model. In Knowledge-Based Intelligent Information and Engineering Systems, pages 67-75, 2008.
[26] Adish Singla, Eric Horvitz, Pushmeet Kohli, Ryen White, and Andreas Krause. Information gathering in networks via active exploration. In International Joint Conferences on Artificial Intelligence, 2015.
[27] Youze Tang, Xiaokui Xiao, and Shi Yanchen. Influence maximization: Near-optimal time complexity meets practical efficiency. 2014.
[28] Michal Valko. Bandits on graphs and structures. Habilitation, École normale supérieure de Cachan, 2016.
[29] Sharan Vaswani, Branislav Kveton, Zheng Wen, Mohammad Ghavamzadeh, Laks V. S. Lakshmanan, and Mark Schmidt. Model-independent online learning for influence maximization. In International Conference on Machine Learning, 2017.
[30] Sharan Vaswani and Laks V. S. Lakshmanan. Adaptive influence maximization in social networks: Why commit when you can adapt? Technical report, 2016.
[31] Sharan Vaswani, Laks V. S. Lakshmanan, and Mark Schmidt. Influence maximization with bandits. In NIPS Workshop on Networks in the Social and Information Sciences, 2015.
[32] Qinshi Wang and Wei Chen. Improving regret bounds for combinatorial semi-bandits with probabilistically triggered arms and its applications. In Neural Information Processing Systems, 2017.
[33] Zheng Wen, Branislav Kveton, and Azin Ashkan. Efficient learning in large-scale combinatorial semi-bandits. In International Conference on Machine Learning, 2015.
[34] Shi Zong, Hao Ni, Kenny Sung, Nan Rosemary Ke, Zheng Wen, and Branislav Kveton. Cascading bandits for large-scale recommendation problems. In Uncertainty in Artificial Intelligence, 2016.
Near Minimax Optimal Players for the Finite-Time 3-Expert Prediction Problem

Yasin Abbasi-Yadkori, Adobe Research
Peter L. Bartlett, UC Berkeley
Victor Gabillon, Queensland University of Technology

Abstract

We study minimax strategies for the online prediction problem with expert advice. It has been conjectured that a simple adversary strategy, called COMB, is near optimal in this game for any number of experts. Our results and new insights make progress in this direction by showing that, up to a small additive term, COMB is minimax optimal in the finite-time three expert problem. In addition, we provide for this setting a new near minimax optimal COMB-based learner. Prior to this work, in this problem, learners obtaining the optimal multiplicative constant in their regret rate were known only when K = 2 or K → ∞. We characterize, when K = 3, the regret of the game scaling as $\sqrt{8T/(9\pi)} \pm \log(T)$, which gives for the first time the optimal constant in the leading ($\sqrt{T}$) term of the regret.

1 Introduction

This paper studies the online prediction problem with expert advice. This is a fundamental problem of machine learning that has been studied for decades, going back at least to the work of Hannan [12] (see [4] for a survey). As it studies prediction under adversarial data, the designed algorithms are known to be robust and are commonly used as building blocks of more complicated machine learning algorithms with numerous applications. Thus, elucidating the yet unknown optimal strategies has the potential to significantly improve the performance of these higher level algorithms, in addition to providing insight into a classic prediction problem.

The problem is a repeated two-player zero-sum game between an adversary and a learner. At each of the T rounds, the adversary decides the quality/gain of K experts' advice, while simultaneously the learner decides to follow the advice of one of the experts. The objective of the adversary is to maximize the regret of the learner, defined as the difference between the total gain of the learner and the total gain of the best fixed expert.

Open Problems and our Main Results. Previously this game has been solved asymptotically as both T and K tend to ∞: asymptotically, the upper bound on the performance of the state-of-the-art Multiplicative Weights Algorithm (MWA) for the learner matches the optimal multiplicative constant of the asymptotic minimax optimal regret rate $\sqrt{(T/2)\log K}$ [3]. However, for finite K, this asymptotic quantity actually overestimates the finite-time value of the game. Moreover, Gravin et al. [10] proved a matching lower bound $\sqrt{(T/2)\log K}$ on the regret of the classic version of MWA, additionally showing that the optimal learner does not belong to an extended MWA family. Already, Cover [5] proved that the value of the game is of order $\sqrt{T/(2\pi)}$ when K = 2, meaning that the regret of a MWA learner is 47% larger than that of the optimal learner in this case. Therefore the question of optimality remains open for non-asymptotic K, which covers the typical cases in applications, and therefore progress in this direction is important.

In studying a related setting with K = 3, where T is sampled from a geometric distribution with parameter δ, Gravin et al. [9] conjectured that, for any K, a simple adversary strategy, called the COMB adversary, is asymptotically optimal (T → ∞, or when δ → 0), and also excessively competitive for finite-time fixed T. The COMB strategy sorts the experts based on their cumulative
The C OMB strategy sorts the experts based on their cumulative 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. gains and, with probability one half, assigns gain one to each expert in an odd position and gain zero to each expert in an even position. With probability one half, the zeros and ones are swapped. The simplicity and elegance of this strategy, combined with its almost optimal performance makes it very appealing and calls for a more extensive study of its properties. Our results and new insights make progress in this direction by showing that, for any ?xed T and up to small additive terms, C OMB is minimax optimal in the ?nite-time three expert problem. Additionally and with similar guarantees, we provide for this setting a new near minimax optimal C OMB-based learner. For K = 3, the regret of a MWA learner is 39% larger than our ?new optimal learner.2 In this paper we also characterize, when K = 3, the regret of the game as 8/(9?)T ? log(T ) which ? gives for the ?rst time the optimal constant in the leading ( T ) term of the regret. Note that the state-of-the-art non-asymptotic lower bound in [15] on the value of this problem is non informative as the lower bound for the case of K = 3 is a negative quantity. Related Works and Challenges. For the case of K = 3, Gravin et al. [9] proved the exact minimax optimality of a C OMB-related adversary in the geometrical setting, i.e. where T is not ?xed in advance but rather sampled from a geometric distribution with parameter ?. However the connection between the geometrical setting and the original ?nite-time setting is not well understood, even asymptotically (possibly due to the large variance of geometric distributions with small ?). Addressing this issue, in Section 7 of [8], Gravin et al. formulate the ?Finite vs Geometric Regret? conjecture which states that the value of the game in the geometrical setting, V? , and the value of the game in the ?nite-time setting, VT , verify VT = ?2? V?=1/T . We resolve here the conjecture for K = 3. Analyzing the ?nite-time expert problem raises new challenges compared to the geometric setting. In the geometric setting, at any time (round) t of the game, the expected number of remaining rounds before the end of the game is constant (does not depend on the current time t). This simpli?es the problem to the point that, when K = 3, there exists an exactly minimax optimal adversary that ignores the time t and the parameter ?. As noted in [9], and noticeable from solving exactly small instances of the game with a computer, in the ?nite-time case, the exact optimal adversary seems to depend in a complex manner on time and state. It is therefore natural to compromise for a simpler adversary that is optimal up to a small additive error term. Actually, based on the observation of the restricted computer-based solutions, the additive error term of C OMB seems to vanish with larger T . Tightly controlling the errors made by C OMB is a new challenge with respect to [9], where the solution to the optimality equations led directly to the exact optimal adversary. The existence of such equations in the geometric setting crucially relies on the fact that the value-to-go of a given policy in a given state does not depend on the current time t (because geometric distributions are memoryless). 
To control the errors in the ?nite-time setting, our new approach solves the game by backward induction showing the approximate greediness of C OMB with respect to itself (read Section 2.1 for an overview of our new proof techniques and their organization). We use a novel exchangeability property, new connections to random walks and a close relation that we develop between C OMB and a T WIN -C OMB strategy. Additional connections with new related optimal strategies and random walks are used to compute the value of the game (Theorem 2). We discuss in Section 6 how our new techniques have more potential to extend to an arbitrary number of arms, than those of [9]. Additionally, we show how the approximate greediness of C OMB with respect to itself is key to proving that a learner based directly on the C OMB adversary is itself quasi-minimax-optimal. This is the ?rst work to extend to the approximate case, approaches used to designed exactly optimal players in related works. In [2] a probability matching learner is proven optimal under the assumption that the adversary is limited to a ?xed cumulative loss for the best expert. In [14] and [1], the optimal learner relies on estimating the value-to-go of the game through rollouts of the optimal adversary?s plays. The results in these papers were limited to games where the optimal adversary was only playing canonical unit vector while our result holds for general gain vectors. Note also that a probability matching learner is optimal in [9]. Notation: Let [a : b] = {a, a + 1, . . . , b} with a, b ? N, a ? b, and [a] = [1 : a]. For a vector w ? Rn , n ? N, ?w?? = maxk?[n] |wk |. A vector indexed by both a time t and a speci?c element index k is wt,k . An undiscounted Markov Decision Process (MDP) [13, 16] M is a 4-tuple ?S, A, r, p?. S is the state space, A is the set of actions, r : S ? A ? R is the reward function, and the transition model p(?|s, a) gives the probability distribution over the next state when action a is taken in state s. A state is denoted by s or st if it is taken at time t. An action is denoted by a or at . 2 2 The Game We consider a game, composed of T rounds, between two players, called a learner and an adversary. At each time/round t the learner chooses an index It ? [K] from a distribution pt on the K arms. Simultaneously, the adversary assigns a binary gain to each of the arms/experts, possibly at random from a distribution A? t , and we denote the vector of these gains by gt ? {0, 1}K . The adversary and the learner then observe It and gt . For simplicity we use the notation g[t] = (gs )s=1,...,t . The value of one realization of such a game is the cumulative regret de?ned as ? ? T T ?? ? ? ? ? gt ? ? gt,It . RT = ? ? ? t=1 ? t=1 A state s ? S = (N ? {0})K is a K-dimensional vector such that the k-th element is the cumulative sum of gains dealt by the adversary on arm k before the current time t. Here the state does not include ?t?1 t but is typically denoted for a speci?c time t as st and computed as st = t? =1 gt? . This de?nition is motivated by the fact that there exist minimax strategies for both players that rely solely on the state and time information as opposed to the complete history of plays, g[t] ? I[t] . In state s, the set of leading experts, i.e., those with maximum cumulative gain, is X(s) = {k ? [K] : sk = ?s?? }. We use ? 
to denote the (possibly non-stationary) strategy/policy used by the adversary, i.e., for any input state s and time t it outputs the gain distribution ?(s, t) played by the adversary at time t in state s. Similarly we use p? to denote the strategy of the learner. As the state depends only on the adversary plays, we can sample a state s at time t from ?. T ? the expected regret of the game, Vp,? Given an adversary ? and a learner p, ? , is T Vp,? = Eg[T ] ??,I[T ] ?p? [RT ] . The learner tries to minimize the expected regret while the adversary ? tries to maximize it. The value of the game is the minimax value VT de?ned by T T VT = min max Vp,? = max min Vp,? ? ? . ? p ? ? ? p In this work, we are interested in the search for optimal minimax strategies, which are adversary T ?? , such that VT = max? Vp?T? ,? . strategies ? ? such that VT = minp? Vp,? ? ? and learner strategies p 2.1 Summary of our Approach to Obtain the Near Greediness of C OMB Most of our material is new. First, Section 3 recalls that Gravin et al. [9] have shown that the search for the optimal adversary ? ? can be restricted to the ?nite family of balanced strategies (de?ned in the next section). When K = 3, the action space of a balanced adversary is limited to seven stochastic ? 2, ? {}, {123}} (see Section 5.1 for their ? , C? , V? , 1, actions (gain distributions), denoted by B? 3 = {W description). The C OMB adversary repeats the gain distribution C? at each time and in any state. In Section 4 we provide an explicit formulation of the problem as ?nding ? ? inside an MDP with a speci?c reward function. Interestingly, we observe that another adversary, which we call T WIN ? , has the same value as ?C (Section 5.1). C OMB and denote by ?W , which repeats the distribution W To control the errors made by C OMB, the proof uses a novel and intriguing exchangeability property (Section 5.2). This exchangeability property holds thanks to the surprising role played by the T WIN ?, C OMB strategy. For any distributions A? ? B? 3 there exists a distribution D? , mixture of C? and W ? and then A? in terms of such that for almost all states, playing A? and then D? is the same as playing W the expected reward and the probabilities over the next states after these two steps. Using Bellman operators, this can be concisely written as: for any (value) function f : S ?? R, in (almost) any state s, we have that [TA? [TD? f ]](s) = [TW? [TA? f ]](s). We solve the MDP with a backward induction in time from t = T . We show that playing C? at time t is almost greedy with respect to playing ?C in later rounds t? > t. The greedy error is de?ned as the difference of expected reward between always playing ?C and playing the best (greedy) ?rst action before playing C OMB. Bounding how these errors accumulate through the rounds relates the value of C OMB to the value of ? ? (Lemma 16). To illustrate the main ideas, let us ?rst make two simplifying (but unrealistic) assumptions at time t: C OMB has been proven greedy w.r.t. itself in rounds t? > t and the exchangeability holds in all states. Then we would argue at time t that by the exchangeability property, instead of optimizing the greedy 3 ? A? C? . . . C? . Then action w.r.t. C OMB as maxA? ?B? 3 A? C? . . . C? , we can study the optimizer of maxA? ?B? 3 W we use the induction property to conclude that C? is the solution of the previous optimization problem. Unfortunately, the exchangeability property does not hold in one speci?c state denoted by s? . 
What saves us though is that we can directly compute the error of greedi?cation of any gain distribution with respect to C OMB in s? and show that it diminishes exponentially fast as T ? t, the number of rounds remaining, increases (Lemma 7). This helps us to control how the errors accumulate during the induction. From one given state st ?= s? at time t, ?rst, we use the exchangeability property once when trying to assess the ?quality? of an action A? as a greedy action w.r.t. C OMB. This leads us to consider the quality of playing A? in possibly several new states {st+1 } at time t + 1 reached following T WIN -C OMB in s. We use our exchangeability property repeatedly, starting from the state st until a subsequent state reaches s? , say at time t? , where we can substitute the exponentially decreasing greedy error computed at this time t? in s? . Here the subsequent states are the states reached after having played T WIN -C OMB repetitively starting from the state st . If s? is never reached we use the fact that C OMB is an optimal action everywhere else in the last round. The problem is then to determine at which time t? , starting from any state at time t and following a T WIN -C OMB strategy, we hit s? for the ?rst time. This is translated into a classical gambler?s ruin problem, which concerns the hitting times of a simple random walk (Section 5.3). Similarly the value of the game is computed using the study of the expected number of equalizations of a simple random walk (Theorem 5.1). 3 Solving for the Adversary Directly In this section, we recall the results from [9] that, for arbitrary K, permit us to directly search for the minimax optimal adversary in the restricted set of balanced adversaries while ignoring the learner. De?nition 1. A gain distribution A? is balanced if there exists a constant cA? , the mean gain of A? , such that ?k ? [K], cA? = Eg|A? [gk ]. A balanced adversary uses exclusively balanced gain distributions. Lemma 1 (Claim 5 in [9]). There exists a minimax optimal balanced adversary. Use B to denote the set of all balanced strategies and B? to denote the set of all balanced gain distributions. Interestingly, as demonstrated in [9], a balanced adversary ? in?icts the same regret T ? ? Vp,? on every learner: If ? ? B, then ?VT? ? R : ?p, ? = VT . (See Lemma 10) Therefore, given an adversary strategy ?, we can de?ne the value-to-go Vt?0 (s) associated with ? from time t0 in state s, Vt?0 (s) = E ?sT +1 ?? ? sT +1 T ? t=t0 ? ? E c?(st ,t) , st st+1 ? P (.|st , ?(st , t), st0 = s). Another reduction comes from the fact that the set of balanced gain distributions can be seen as a convex combination of a ?nite set of balanced distributions [9, Claim 2 and 3]. We call this limited set the atomic gain distributions. Therefore the search for ? ? can be limited to this set. The set of convex combinations of the m distributions A? 1 , . . . A? m is denoted by ?(A? 1 , . . . A? m ). 4 Reformulation as a Markovian Decision Problem In this section we formulate, for arbitrary K, the maximization problem over balanced adversaries as an undiscounted MDP problem ?S, A, r, p?. The state space S was de?ned in Section 2 and the action space is the set of atomic balanced distributions as discussed in Section 3. The transition model is de?ned by p(.|s, D? ), which is a probability distribution over states given the current state s and a balanced distribution over gains D? . 
In this model, the transition dynamics are deterministic and entirely controlled by the adversary?s action choices. However, the adversary is forced to choose stochastic actions (balanced gain distributions). The maximization problem can therefore also be thought of as designing a balanced random walk on states so as to maximize a sum of rewards (that are yet to be de?ned). First, we de?ne PA? the transition probability operator with respect to a gain distribution A? . Given function f : S ?? R, PA? returns [PA? f ](s) = E[f (s? )|s? ? p(.|s, A? )] = E [f (s + g)]. g?s,A? g is sampled in s according to A? . Given A? in s, the per-step regret is denoted by rA? (s) and de?ned as rA? (s) = E ?s? ?? ? ?s?? ? cA? . s? |s,A? 4 Given an adversary? strategy ?, starting in s at time t0 , the? cumulative per-step regret is ?T V?t?0 (s) = t=t0 E r?(?,t) (st ) | st+1 ? p(.|st , ?(st , t), st0 = s) . The action-value function of ? at (s, D? ) and t is the expected sum of rewards received by starting from s, taking action D? , and then ? ?t (st , D? ) = E [ ?T? rA? (st ) | A? 0 = D? , st+1 ? p(?|st , A? t ), A? t+1 = ?(st+1 , t + 1)]. following ?: Q t =t t ? The Bellman operator of A? , TA? , is [TA? f ](s) = rA? (s) + [PA? f ](s). with [T?(s,t) V?t+1 ](s) = V?t? (s). This per-step regret, rA? (s), depends on s and A? and not on the time step t. Removing the time from the picture permits a simpli?ed view of the problem that leads to a natural formulation of the exchangeability property that is independent of the time t. Crucially, this decomposition of the regret into per-step regrets is such that maximizing V?t?0 (s) over adversaries ? is equivalent, for all time t0 and s, to maximizing over adversaries the original value of the game, the regret Vt?0 (s) (Lemma 2). Lemma 2. For any adversary strategy ? and any state s and time t0 , V ? (s) = V? ? (s) + ?s? . t0 t0 ? The proof of Lemma 2 is in Section 8. In the following, our focus will be on maximizing V?t? (s) in any state s. We now show some basic properties of the per-step regret that holds for an arbitrary number of experts K and discuss their implications. The proofs are in Section 9. ? for all s, t , we have 0 ? rA? (s) ? 1. Furthermore if |X(s)|= 1, rA? (s) = 0. Lemma 3. Let A? ? B, Lemma 3 shows that a state s in which the reward is not zero contains at least two equal leading experts, |X(s)|> 1. Therefore the goal of maximizing the reward can be rephrased into ?nding a policy that visits the states with |X(s)|> 1 as often as possible, while still taking into account that the per-step reward increases with |X(s)|. The set of states with |X(s)|> 1 is called the ?reward wall?. Lemma 4. In any state s, with |X(s)|= 2, for any balanced gain distribution D? such that with probability one exactly one of the leading expert receives a gain of 1, rD? (s) = maxA? ?B? rA? (s). 5 The Case of K = 3 5.1 Notations in the 3-Experts Case, the C OMB and the T WIN -C OMB Adversaries First we de?ne the state space in the 3-expert case. The experts are sorted with respect to their cumulative gains and are named in decreasing order, the leading expert, the middle expert and the lagging expert. As mentioned in [9], in our search for the minimax optimal adversary, it is suf?cient for any K to describe our state only using dij that denote the difference between the cumulative gains of consecutive sorted experts i and j = i + 1. Here, i denotes the expert with ith largest cumulative gains, and hence dij ? 0 for all i < j. 
Therefore one notation for a state, that will be used throughout this section, is s = (x, y) = (d12 , d23 ). We distinguish four types of states C1 , C2 , C3 , C4 as detailed below in Figure 1. In the same ?gure, in the center, the states are represented on a 2d-grid. C4 contains only the state denoted s? = (0, 0). Reward Wall s ? C1 , d12 > 0, d23 > 0 s ? C2 , d12 = 0, d23 > 0 s ? C3 , d12 > 0, d23 = 0 s ? C4 , d12 = 0, d23 = 0 d23 2 1 1 1 2 1 1 1 2 1 1 1 4 3 3 3 d12 Atomic A? {1}{23} {2}{13} {3}{12} {1}{2}{3} {12}{13}{23} Symbol ? W ? C ? V 1? 2? cA? 1/2 1/2 1/2 1/3 2/3 Figure 1: 4 types of states (left), their location on the 2d grid of states (center) and 5 atomic A? (right) Concerning the action space, the gain distributions use brackets. The group of arms in the same bracket receive gains together and each group receive gains with equal probability. For instance, {1}{2}{3} exclusively deals a gain to expert 1 (leading expert) with probability 1/3, expert 2 (middle expert) with probability 1/3, and expert 3 (lagging expert) with probability 1/3, whereas {1}{23} means dealing a gain to expert 1 alone with probability 1/2 and experts 2 and 3 together with probability 1/2. As discussed in Section 3, we are searching for a ? ? using mixtures of atomic balanced distributions. ? 2, ? C? , W ? , {}, {123}} When K = 3 there are seven atomic distributions, denoted by B? 3 = {V? , 1, and described in Figure 1 (right). Moreover, in Figure 2, we report in detail?in a table (left) and 5 s rC? (s) C1 C2 C3 C4 0 1/2 0 1/2 Distribution of next state s? ? p(?|s, C? ) with s = (x, y) P (s? = (x?1, y+1)) = P (s? = (x+1, y?1)) = .5 P (s? = (x + 1, y)) = P (s? = (x + 1, y ? 1)) = .5 P (s? = (x, y + 1)) = P (s? = (x ? 1, y + 1)) = .5 P (s? = (x, y + 1)) = P (s? = (x + 1, y)) = .5 d23 .5 d12 1 .5 0 2 0 3 .5 1/2 .5 .5 .5 .5 4 .5 1/2 Figure 2: The per-step regret and transition probabilities of the gain distribution C? an illustration (right) on the 2-D state grid?the properties of the C OMB gain distribution C? . The remaining atomic distributions are similarly reported in the appendix in Figures 5 to 8. In the case of three experts, the C OMB distribution is simply playing {2}{13} in any state. We use ? to denote the strategy that plays {1}{23} in any state and refer to it as the T WIN -C OMB strategy. W The C OMB and T WIN -C OMB strategies (as opposed to the distributions) repeat their respective gain distributions in any state and any time. They are respectively denoted ?C , ?W . The Lemma 5 shows that the C OMB strategy ?C , the T WIN -C OMB strategy ?W and therefore any mixture of both, have the same expected cumulative per-step regret. The proof is reported to Section 11. Lemma 5. For all states s at time t, we have V?t?C (s) = V?t?W (s). 5.2 The Exchangeability Property ? ) such that for any s ?= s? , and for any f : S ?? R, Lemma 6. Let A? ? B? 3 , there exists D? ? ?(C? , W [TA? [TD? f ]](s) = [TW? [TA? f ]](s). ? , A? = {} or A? = {123}, use D? = W ? . If A? = C? , use Lemma 11 and 12. Proof. If A? = W ? ) with s ? C3 then s? ? C3 ? C4 . Case 1. A? = V? : V? is equal to C? in C3 ? C4 and if s? ? p(.|s, W So when s ? C3 we reuse the case A? = C? above. When s ? C1 ? C2 , we consider two cases. ? which is {1}{23}. If s? ? p(.|s, V? ) with s ? C2 then Case 1.1. s ?= (0, 1): We choose D? = W s? ? C2 . Similarly, if s? ? p(.|s, V? ) with s ? C1 then s? ? C1 ? C3 . Moreover D? modi?es similarly the coordinates (d12 , d23 ) of s ? C1 and s ? C3 . 
Therefore the effect in terms of transition probability and reward of D? is the same whether it is done before or after the actions chosen by V? . If s? ? p(.|s, D? ) with s ? C1 ? C2 then s? ? C1 ? C2 . Moreover V? modi?es similarly the coordinates (d12 , d23 ) of s ? C1 and s ? C2 . Therefore the effect in terms of the transition probability of V? is the same whether it is done before or after the action D? . In terms of reward, notice that in the states s ? C1 ? C2 , V? has 0 per-step regret and using V? does not make s? leave or enter the reward wall. ? . One can check from the tables in Figures 7 and 8 that Case 1.2 st = (0, 1): We can chose D? = W exchangebility holds. Additionally we provide an illustration of the exchangeability equality in the 2d-grid in Figure 1. The starting state s = (0, 1), is graphically represented by . We show on the grid the effect of the gain distribution V? (in dashed red) followed (left picture) or preceded (right picture) by the gain distribution D? (in plain blue). The illustration shows that V? ?D? and D? ?V? lead to the same ?nal states ( ) with equal probabilities. The rewards are displayed on top of the pictures. Their color corresponds to the actions, the probabilities are in italic, and the rewards are in roman. ? The proof is similar and is reported in Section 12 of the appendix. Case 2 & 3. A? = 1? & A? = 2: 6 5.3 Approximate Greediness of C OMB, Minimax Players and Regret The greedy error of the gain distribution D? in state s at time t is ? D ? ?t C (s, A? ) ? Q ? ?t C (s, D? ). = max Q ?s,t ?3 ? ?B A ? D Let ?tD? = maxs?S ?s,t denote the maximum greedy error of the gain distribution D? at time t. The C OMB greedy error in s? is controlled by the following lemma proved in Section 13.1. Missing proofs from this section are in the appendix in Section 13.2. ? ? ? ?sD? ,t ? 1 1 T ?t . ? , C? , V? , 1}, Lemma 7. For any t ? [T ] and gain distribution D? ? {W 6 2 ? The following proposition shows how we can index the states in the 2d-grid as a one dimensional line over which the T WIN C OMB strategy behaves very similarly to a simple random walk. Figure 3 (top) illustrates this random walk on the 2d-grid and the indexing scheme (the yellow stickers). Proposition 1. Index a state s = (x, y) by is = x + 2y irrespective of the time. Then for any state s ?= s? , and s? ? ? ) we have that P (is? = is ?1) = P (is? = is +1) = 12 . p(?|s, W d23 2 2 2 .5 4 1 6 .5 4 1 .5 1 2 .5 3 .5 7 5 3 1 1 1 1 3 .5 8 6 4 2 1 1 1 3 1 9 1 7 1 5 3 .5 3 10 8 d12 6 4 .5 Consider a random walk that starts from state s0 = s and is gend23 d12 .5 .5 ? ). De?ne erated by the T WIN -C OMB strategy, st+1 ? p(.|st , W 3 3 3 4 3 0 1 3 4 1 2 the random variable T?,s = min{t ? N?{0} : st = s? }. This random variable is the number of steps of the random walk before hitting s? for the ?rst time. Then, let P? (s, t) be the proba- Figure 3: Numbering T WIN -C OMB bility that s? is reached after t steps: P? (s, t) = P (T?,s = t). (top) & ?G random walks (bottom) Lemma 8 controls the C OMB greedy error in st in relation to P? (s, t). Lemma 9 derives a state-independent upper-bound for P? (s, t). Lemma 8. For any time t ? [T ] and state s, ? ?T ?t? T ? 1 1 ? C ? P? (s, t ? t) . ?s,t ? 6 2 ? t =t Proof. If s = s? , this is a direct application of Lemma 7 as P? (s? , t? ) = 0 for t? > 0. When s ?= s? , the following proof is by induction. Initialization: Let t = T . At the last round only the last per-step regret matters (for all states s, ? ?t C (s, D? ) = rD? (s)). As s ?= s? 
, s is such that |X(s)|? 2 then rD? (s) = max ? ? rA? (s) because of Q A ?B Lemma 4 and Lemma 3. Therefore the statement holds. Induction: Let t < T . We assume the statement is true at time t + 1. We distinguish two cases. For all gain distributions D? ? B? 3 , (b) ?C ?C ? ?t C (s, D? ) (a) ? ?C (., D? )](s) = [TD? [TE? V?t+2 ]](s) = [TW? [TD? V?t+2 ]](s) = [TW? Q Q t+1 ? ?T ?t1 T ? (c) 1 1 ? ?C (., A? )](s) ? ? [TW? max Q [P P (., t ? t ? 1) ](s) ? ? 1 W t+1 ?3 6 2 ? ?B A t =t+1 1 (d) ? ?C (., A? )](s) ? ? max [TW? Q t+1 ?3 ? ?B A (b) ? ?t C (s, A? ) ? = max Q ?3 ? ?B A (e) ? ?t C (s, A? ) ? = max Q ?3 ? ?B A T ? t1 ? ?T ?t1 T ? 1 1 [PW? P? (., t1 ? t ? 1)](s) 6 2 t =t+1 1 1 6 =t+1 ? ?T ?t1 1 [PW? P? (., t1 ? t ? 1)](s) 2 ? ?T ?t1 T ? 1 1 P? (s, t1 ? t) 6 2 t =t 1 7 ? ) and this step holds because of Lemma 5, (b) holds where in (a) E? is any distribution in ?(C? , W because of the exchangeability property of Lemma 6, (c) is true by induction and monotonicity of Bellman operator, in (d) the max operators change from being speci?c to any next state s? at time t + 1 to being just one max operator that has to choose a single optimal gain distribution in state s at time t, (e) holds by de?nition as for any t2 , (here the last equality holds because s ?= s? ) [PW? P? (., t2 )](s) = Es? ?p(.|s,W? ) [P? (s? , t2 )] = Es? ?p(.|s,W? ) [P (T?,s? = t2 )] = P? (s, t2 + 1). Lemma 9. For t > 0 and any s, 2 P? (s, t) ? t ? 2 . ? Proof. Using the connection between the T WIN -C OMB strategy and a simple random walk in Proposition 1, a formula can be found for P? (s, t) from the classical ?Gambler?s ruin? problem, where one wants to know the probability that the gambler reaches ruin (here state s? ) at any time t given an initial capital in dollars (here is as de?ned in Proposition 1). The gambler has an equal probability to win or lose one dollar at each round and has no upper bound on ?his capital during the ? game. Using [7] (Chapter XIV, Equation 4.14) or [18] we have P? (s, t) = its t+it s 2?t , where the 2 binomial coef?cient is 0 if t and is are not of the same parity. The technical Lemma 14 completes the proof. We now state our main result, connecting the value of the C OMB adversary to the value of the game. T ? minp? Vp,? Theorem 1. Let K = 3, the regret of C OMB strategies against any learner p, ? C , satis?es 2 T min Vp,? ? C ? VT ? 12 log (T + 1) . ? p We also characterize the minimax regret of the game. Theorem 2. Let K = 3, for even T , we have that ? ? ? ? ? ? ?VT ? T + 2 T /2 + 1 ? ? 12 log2 (T + 1), ? T T /2 + 1 3 ? 2 ? with ? ? ? T + 2 T /2 + 1 8T . ? T /2 + 1 3 ? 2T 9? In Figure 4 we introduce a C OMB-based learner that is denoted by p?C . Here a state is represented by a vector of 3 integers. The three arms/experts are ordered as (1) (2) (3), breaking ties arbitrarily. We connect the value of the C OMB-based learner to the value of the game. ?C pt,(1) (s) = Vt+1 (s+e(1) )?Vt?C (s) ?C Theorem 3. Let K = 3, the regret of C OMB-based pt,(2) (s) = Vt+1 (s+e(2) )?Vt?C (s) learner against any adversary ?, max? Vp?TC ,? , satis?es pt,(3) (s) = 1 ? pt,(1) (s) ? pt,(2) (s) max Vp?TC ,? ? VT + 36 log2 (T + 1) . ? Figure 4: A C OMB learner, p?C Similarly to [2] and [14], this strategy can be ef?ciently computed using rollouts/simulations from the C OMB adversary in order to estimate the value Vt?C (s) of ?C in s at time t. 6 Discussion and Future Work The main objective is to generalize our new proof techniques to higher dimensions. 
In our case, the MDP formulation and all the results in Section 4 already holds for general K. Interestingly, Lemma 3 and 4 show that the C OMB distribution is the balanced distribution with highest per-step regret in all the states s such that |X(s)|? 2, for arbitrary K. Then assuming an ideal exchangeability property that gives maxA? ?B? A? C? . . . C? = maxA? ?B? C? C? . . . C? A? , a distribution would be greedy w.r.t the C OMB strategy at an early round of the game if it maximizes the per-step regret at the last round of the game. The C OMB policy speci?cally tends to visit almost exclusively states |X(s)|? 2, states where C OMB itself is the maximizer of the per-step regret (Lemma 3). This would give that C OMB is greedy w.r.t. itself and therefore optimal. To obtain this result for larger K, we will need to extend the exchangeability property to higher K and therefore understand how the C OMB and T WIN -C OMB families extend to higher dimensions. One could also borrow ideas from the link with pde approaches made in [6]. 8 Acknowledgements We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the Australian Research Council through an Australian Laureate Fellowship (FL110100281) and through the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS). We would like to thank Nate Eldredge for pointing us to the results in [18]! References [1] Jacob Abernethy and Manfred K. Warmuth. Repeated games against budgeted adversaries. In Advances in Neural Information Processing Systems (NIPS), pages 1?9, 2010. [2] Jacob Abernethy, Manfred K. Warmuth, and Joel Yellin. Optimal strategies from random walks. In 21st Annual Conference on Learning Theory (COLT), pages 437?446, 2008. [3] Nicol? Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. Journal of the ACM (JACM), 44(3):427?485, 1997. [4] Nicol? Cesa-Bianchi and G?bor Lugosi. Prediction, learning, and games. Cambridge university press, 2006. [5] Thomas M. Cover. Behavior of sequential predictors of binary sequences. In 4th Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, pages 263?272, 1965. [6] Nadeja Drenska. A pde approach to mixed strategies prediction with expert advice. http://www.gtcenter.org/Downloads/Conf/Drenska2708.pdf. (Extended abstract). [7] Willliam Feller. An Introduction to Probability Theory and its Applications, volume 2. John Wiley & Sons, 2008. [8] Nick Gravin, Yuval Peres, and Balasubramanian Sivan. Towards optimal algorithms for prediction with expert advice. In arXiv preprint arXiv:1603.04981, 2014. [9] Nick Gravin, Yuval Peres, and Balasubramanian Sivan. Towards optimal algorithms for prediction with expert advice. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 528?547, 2016. [10] Nick Gravin, Yuval Peres, and Balasubramanian Sivan. Tight Lower Bounds for Multiplicative Weights Algorithmic Families. In 44th International Colloquium on Automata, Languages, and Programming (ICALP), volume 80, pages 48:1?48:14, 2017. [11] Charles Miller Grinstead and James Laurie Snell. Introduction to probability. American Mathematical Soc., 2012. [12] James Hannan. Approximation to bayes risk in repeated play. Contributions to the Theory of Games, 3:97?139, 1957. [13] Ronald A. Howard. Dynamic Programming and Markov Processes. The MIT Press, Cambridge, MA, 1960. [14] Haipeng Luo and Robert E. 
Schapire. Towards minimax online learning with unknown time horizon. In Proceedings of The 31st International Conference on Machine Learning (ICML), pages 226?234, 2014. [15] Francesco Orabona and D?vid P?l. Optimal non-asymptotic lower bound on the minimax regret of learning with expert advice. arXiv preprint arXiv:1511.02176, 2015. [16] Martin L. Puterman. Markov Decision Processes. Wiley, New York, 1994. [17] Pantelimon Stanica. Good lower and upper bounds on binomial coef?cients. Journal of Inequalities in Pure and Applied Mathematics, 2(3):30, 2001. [18] Remco van der Hofstad and Michael Keane. An elementary proof of the hitting time theorem. The American Mathematical Monthly, 115(8):753?756, 2008. 9
6896 |@word middle:2 version:1 pw:3 seems:2 open:2 simulation:1 crucially:2 queensland:1 simplifying:1 decomposition:1 jacob:2 reduction:1 initial:1 contains:2 exclusively:3 interestingly:3 current:4 surprising:1 luo:1 yet:2 intriguing:1 written:1 john:1 ronald:1 additive:4 subsequent:2 informative:1 designed:2 v:1 stationary:1 half:2 greedy:13 alone:1 warmuth:3 ith:1 manfred:3 gure:1 location:1 org:1 simpler:1 rc:1 mathematical:3 c2:10 direct:1 symposium:1 inside:1 introduce:1 lagging:2 manner:1 excellence:1 ra:9 expected:8 behavior:1 bility:1 yasin:1 bellman:3 decreasing:2 balasubramanian:3 td:5 resolve:1 estimating:1 moreover:4 notation:4 maximizes:1 what:1 xed:5 maxa:5 st0:2 guarantee:1 berkeley:1 every:1 tie:1 exactly:4 hit:1 control:4 unit:1 grant:1 overestimate:1 before:6 t1:8 understood:1 sd:1 tends:1 analyzing:1 solely:1 xiv:1 lugosi:1 chose:1 downloads:1 initialization:1 studied:1 limited:5 omb:60 atomic:7 regret:34 block:1 nite:12 thought:1 matching:3 close:1 operator:6 risk:1 greediness:4 equalization:1 www:1 equivalent:1 deterministic:1 demonstrated:1 center:2 maximizing:4 missing:1 go:3 graphically:1 starting:6 convex:2 survey:1 formulate:2 d12:11 simplicity:2 automaton:1 assigns:2 helmbold:1 pure:1 insight:3 haussler:1 borrow:1 his:1 classic:2 proving:1 searching:1 coordinate:2 controlling:1 play:5 pt:6 exact:3 programming:2 us:2 designing:1 pa:4 element:2 erated:1 bottom:1 role:1 preprint:2 solved:1 highest:1 balanced:20 mentioned:1 feller:1 colloquium:1 reward:16 dynamic:2 raise:1 depend:3 solving:2 tight:1 compromise:1 learner:31 translated:1 vid:1 represented:3 chapter:1 forced:1 fast:1 describe:1 abernethy:2 larger:4 solve:1 say:1 itself:6 online:3 sequence:1 cients:1 realization:1 icts:1 description:1 haipeng:1 rst:8 undiscounted:2 leave:1 help:1 illustrate:1 develop:1 odd:1 received:1 noticeable:1 progress:3 solves:1 soc:1 signi:1 come:1 australian:3 direction:3 stochastic:2 material:1 wall:3 gravin:8 snell:1 proposition:4 elementary:1 frontier:1 hold:12 ruin:3 algorithmic:1 claim:2 pointing:1 optimizer:1 consecutive:1 early:1 diminishes:1 lose:1 council:2 largest:1 mit:1 always:1 rather:1 exchangeability:14 focus:1 check:1 adversarial:1 dollar:2 typically:1 relation:2 going:1 quasi:1 interested:1 fl110100281:1 issue:1 colt:1 denoted:11 art:1 uc:1 equal:5 once:1 never:1 having:1 beach:1 icml:1 theart:1 future:1 report:1 t2:5 roman:1 modi:2 composed:1 simultaneously:2 tightly:1 rollouts:2 proba:1 organization:1 satis:2 elucidating:1 joel:1 mixture:3 bracket:2 implication:1 tuple:1 respective:1 indexed:1 walk:12 instance:2 markovian:1 cover:2 yoav:1 maximization:2 addressing:1 predictor:1 dij:2 seventh:1 characterize:3 reported:3 connect:1 combined:1 chooses:1 st:31 thanks:1 international:2 fundamental:1 siam:1 cantly:1 michael:1 together:2 connecting:1 gabillon:1 abbasi:1 cesa:2 opposed:2 choose:3 possibly:4 conf:1 expert:42 american:2 leading:7 return:1 account:1 potential:2 de:17 wk:1 matter:1 depends:2 multiplicative:4 try:2 later:1 view:1 reached:4 competitive:1 sort:1 red:1 complicated:1 start:1 bayes:1 contribution:1 minimize:1 ass:1 variance:1 miller:1 yellow:1 vp:11 dealt:1 generalize:1 bor:1 cation:1 history:1 reach:2 coef:2 ed:1 against:3 james:2 elegance:1 proof:14 associated:1 gain:44 sampled:3 proved:4 recall:2 color:1 actually:2 back:1 higher:4 ta:6 follow:1 formulation:3 done:2 though:1 keane:1 furthermore:1 just:1 until:1 receives:1 maximizer:1 quality:3 mdp:5 usa:1 building:1 excessively:1 verify:1 effect:3 true:2 hence:1 equality:2 read:1 memoryless:1 
eg:2 deal:1 round:14 puterman:1 game:32 during:2 noted:1 trying:1 pdf:1 complete:1 geometrical:3 meaning:1 novel:2 ef:1 charles:1 behaves:1 preceded:1 overview:1 exponentially:2 volume:2 belong:1 extend:4 discussed:2 accumulate:2 refer:1 monthly:1 cambridge:2 enter:1 rd:3 grid:7 mathematics:1 similarly:8 centre:1 gratefully:1 language:1 gt:5 conjectured:2 optimizing:1 inequality:1 binary:2 arbitrarily:1 vt:19 der:1 victor:1 nition:3 seen:1 additional:1 simpli:2 speci:6 determine:1 maximize:3 nate:1 dashed:1 ii:1 relates:1 hannan:2 technical:1 match:1 repetitively:1 long:1 pde:2 concerning:1 visit:2 controlled:2 adobe:1 prediction:9 basic:1 arxiv:4 c1:11 receive:2 addition:2 whereas:1 want:1 fellowship:1 else:1 completes:1 swapped:1 tend:1 prague:1 call:3 integer:1 ciently:1 near:5 ideal:1 idea:2 gambler:4 t0:8 whether:2 motivated:1 bartlett:1 reuse:1 peter:1 york:1 action:19 repeatedly:1 greedi:1 detailed:1 schapire:2 http:1 exist:1 canonical:1 nsf:1 notice:1 per:13 blue:1 discrete:1 rephrased:1 group:2 key:1 four:1 reformulation:1 sivan:3 capital:2 budgeted:1 nal:1 backward:2 asymptotically:4 sum:4 yellin:1 everywhere:1 soda:1 named:1 family:4 almost:5 throughout:1 decision:4 appendix:3 scaling:1 entirely:1 bound:9 followed:1 played:3 distinguish:2 g:1 annual:2 optimality:3 min:4 martin:1 conjecture:2 ned:10 numbering:1 according:1 combination:2 son:1 appealing:1 tw:6 acems:1 restricted:3 indexing:1 taken:2 equation:3 previously:1 remains:1 discus:2 know:1 end:1 studying:1 permit:2 observe:2 save:1 yadkori:1 existence:1 original:2 substitute:1 denotes:1 remaining:3 include:1 top:3 binomial:2 thomas:1 log2:2 cally:1 classical:2 objective:2 already:2 quantity:2 question:1 strategy:34 rt:2 italic:1 win:16 link:1 thank:1 seven:2 argue:1 induction:7 assuming:1 index:4 illustration:3 providing:1 unfortunately:1 robert:2 statement:2 gk:1 negative:1 policy:4 unknown:2 twenty:1 bianchi:2 upper:4 observation:1 francesco:1 markov:3 howard:1 finite:2 acknowledge:1 displayed:1 maxk:1 extended:2 peres:3 rn:1 arbitrary:5 david:2 extensive:1 connection:4 c3:9 nick:3 c4:6 concisely:1 nip:2 adversary:49 below:1 challenge:3 max:14 unrealistic:1 natural:2 rely:1 arm:6 minimax:20 scheme:1 improve:1 technology:1 numerous:1 ne:4 picture:4 nding:2 irrespective:1 prior:1 geometric:8 acknowledgement:1 nicol:2 asymptotic:5 freund:1 loss:1 icalp:1 mixed:1 suf:1 proven:2 s0:1 minp:2 playing:10 summary:1 repeat:3 last:5 parity:1 understand:1 taking:2 van:1 plain:1 dimension:2 transition:7 cumulative:10 ignores:1 commonly:1 made:3 approximate:4 laureate:1 dealing:1 monotonicity:1 decides:2 d23:10 conclude:1 search:5 decade:1 sk:1 table:2 additionally:4 robust:1 ca:5 ignoring:1 obtaining:1 laurie:1 complex:1 main:4 bounding:1 repeated:3 advice:9 cient:2 wiley:2 position:2 explicit:1 vanish:1 breaking:1 theorem:6 removing:1 formula:1 showing:4 symbol:1 concern:1 derives:1 exists:5 sequential:1 te:1 illustrates:1 horizon:1 tc:2 led:1 simply:1 jacm:1 hitting:3 ordered:1 corresponds:1 relies:2 acm:2 ma:1 goal:1 sorted:2 towards:3 orabona:1 change:1 typical:1 yuval:3 wt:1 lemma:28 called:4 total:2 e:7 player:6 support:1
6,518
6,897
Reinforcement Learning under Model Mismatch Aurko Roy1 , Huan Xu2 , and Sebastian Pokutta2 1 Google ,? Email: [email protected] Georgia Institute of Technology, Atlanta, GA, USA. Email: [email protected] 2 ISyE, Georgia Institute of Technology, Atlanta, GA, USA. Email: [email protected] 2 ISyE, Abstract We study reinforcement learning under model misspecification, where we do not have access to the true environment but only to a reasonably close approximation to it. We address this problem by extending the framework of robust MDPs of [1, 15, 11] to the model-free Reinforcement Learning setting, where we do not have access to the model parameters, but can only sample states from it. We define robust versions of Q-learning, SARSA, and TD-learning and prove convergence to an approximately optimal robust policy and approximate value function respectively. We scale up the robust algorithms to large MDPs via function approximation and prove convergence under two different settings. We prove convergence of robust approximate policy iteration and robust approximate value iteration for linear architectures (under mild assumptions). We also define a robust loss function, the mean squared robust projected Bellman error and give stochastic gradient descent algorithms that are guaranteed to converge to a local minimum. 1 Introduction Reinforcement learning is concerned with learning a good policy for sequential decision making problems modeled as a Markov Decision Process (MDP), via interacting with the environment [20, 18]. In this work we address the problem of reinforcement learning from a misspecified model. As a motivating example, consider the scenario where the problem of interest is not directly accessible, but instead the agent can interact with a simulator whose dynamics is reasonably close to the true problem. Another plausible application is when the parameters of the model may evolve over time but can still be reasonably approximated by an MDP. To address this problem we use the framework of robust MDPs which was proposed by [1, 15, 11] to solve the planning problem under model misspecification. The robust MDP framework considers a class of models and finds the robust optimal policy which is a policy that performs best under the worst model. It was shown by [1, 15, 11] that the robust optimal policy satisfies the robust Bellman equation which naturally leads to exact dynamic programming algorithms to find an optimal policy. However, this approach is model dependent and does not immediately generalize to the model-free case where the parameters of the model are unknown. Essentially, reinforcement learning is a model-free framework to solve the Bellman equation using samples. Therefore, to learn policies from misspecified models, we develop sample based methods to solve the robust Bellman equation. In particular, we develop robust versions of classical reinforcement learning algorithms such as Q-learning, SARSA, and TD-learning and prove convergence to an approximately optimal policy under mild assumptions on the discount factor. We also show that ? Work done while at Georgia Tech 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. the nominal versions of these iterative algorithms converge to policies that may be arbitrarily worse compared to the optimal policy. We also scale up these robust algorithms to large scale MDPs via function approximation, where we prove convergence under two different settings. 
Under a technical assumption similar to [5, 24] we show convergence of robust approximate policy iteration and value iteration algorithms for linear architectures. We also study function approximation with nonlinear architectures, by defining an appropriate mean squared robust projected Bellman error (MSRPBE) loss function, which is a generalization of the mean squared projected Bellman error (MSPBE) loss function of [22, 21, 6]. We propose robust versions of stochastic gradient descent algorithms as in [22, 21, 6] and prove convergence to a local minimum under some assumptions for function approximation with arbitrary smooth functions. Contribution. In summary we have the following contributions: 1. We extend the robust MDP framework of [1, 15, 11] to the model-free reinforcement learning setting. We then define robust versions of Q-learning, SARSA, and TD-learning and prove convergence to an approximately optimal robust policy. 2. We also provide robust reinforcement learning algorithms for the function approximation case and prove convergence of robust approximate policy iteration and value iteration algorithms for linear architectures. We also define the MSRPBE loss function which contains the robust optimal policy as a local minimum and we derive stochastic gradient descent algorithms to minimize this loss function as well as establish convergence to a local minimum in the case of function approximation by arbitrary smooth functions. 3. Finally, we demonstrate empirically the improvement in performance for the robust algorithms compared to their nominal counterparts. For this we used various Reinforcement Learning test environments from OpenAI [9] as benchmark to assess the improvement in performance as well as to ensure reproducibility and consistency of our results. Related Work. Recently, several approaches have been proposed to address model performance due to parameter uncertainty for Markov Decision Processes (MDPs). A Bayesian approach was proposed by [19] which requires perfect knowledge of the prior distribution on transition matrices. Other probabilistic and risk based settings were studied by [10, 25, 23] which propose various mechanisms to incorporate percentile risk into the model. A framework for robust MDPs was first proposed by [1, 15, 11] who consider the transition matrices to lie in some uncertainty set and proposed a dynamic programming algorithm to solve the robust MDP. Recent work by [24] extended the robust MDP framework to the function approximation setting where under a technical assumption the authors prove convergence to an optimal policy for linear architectures. Note that these algorithms for robust MDPs do not readily generalize to the model-free reinforcement learning setting where the parameters of the environment are not explicitly known. For reinforcement learning in the non-robust model-free setting, several iterative algorithms such as Q-learning, TD-learning, and SARSA are known to converge to an optimal policy under mild assumptions, see [4] for a survey. Robustness in reinforcement learning for MDPs was studied by [13] who introduced a robust learning framework for learning with disturbances. Similarly, [16] also studied learning in the presence of an adversary who might apply disturbances to the system. However, for the algorithms proposed in [13, 16] no theoretical guarantees are known and there is only limited empirical evidence. 
Another recent work on robust reinforcement learning is [12], where the authors propose an online algorithm with certain transitions being stochastic and the others being adversarial and the devised algorithm ensures low regret. For the case of reinforcement learning with large MDPs using function approximations, theoretical guarantees for most TD-learning based algorithms are only known for linear architectures [2]. Recent work by [6] extended the results of [22, 21] and proved that a stochastic gradient descent algorithm minimizing the mean squared projected Bellman equation (MSPBE) loss function converges to a local minimum, even for nonlinear architectures. However, these algorithms do not apply to robust MDPs; in this work we extend these algorithms to the robust setting. 2 2 Preliminaries We consider an infinite horizon Markov Decision Process (MDP) [18] with finite state space X of size n and finite action space A of size m. At every time step t the agent is in a state i ? X and can choose an action a ? A incurring a cost ct (i, a). We will make the standard assumption that future cost is discounted, see e.g., [20], with a discount factor ? < 1 applied to future costs, i.e., ct (i, a) := ?t c(i, a), where c(i, a) is a fixed constant independent of the time step t for i ? X and a ? A. The states transition according to probability transition matrices ? := { P a } a?A which depends only on their last taken action a. A policy of the agent is a sequence ? = (a0 , a1 , . . . ), where every at (i ) corresponds to an action in A if the system is in state i at time t. For every policy ?, we have a corresponding value function v? ? Rn , where v? (i ) for a state i ? X measures the expected cost of that state if the agent were to follow policy ?. This can be expressed by the recurrence relation v? (i ) := c(i, a0 (i )) + ?E j?X [v? ( j)] . (1) The goal is to devise algorithms to learn an optimal policy ? ? that minimizes the expected total cost: Definition 2.1 (Optimal policy). Given an MDP with state space X , action space A and transition matrices P a , let ? be the strategy space of all possibile policies. Then an optimal policy ? ? is one that minimizes the expected total cost, i.e., " # ? ? := arg min E ? ?? ? ? ?t c(it , at (it )) . (2) t =0 In the robust case we will assume as in [15, 11] that the transition matrices P a are not fixed and may come from some uncertainty region P a and may be chosen adversarially by nature in future runs of the model. In this setting, [15, 11] prove the following robust analogue of the Bellman recursion. A policy of nature is a sequence ? := (P0 , P1 , . . . ) where every Pt ( a) ? P a corresponds to a transition probability matrix chosen from P a . Let T denote the set of all such policies of nature. In other words, a policy ? ? T of nature is a sequence of transition matrices that may be played by  it in response to the actions of the agent. For any set P ? Rn and vector v ? Rn , let ?P (v) := sup p> v | p ? P be the support function of the set P. For a state i ? X , let Pia be the projection onto the ith row of P a . Theorem 2.2. [15] We have the following perfect duality relation " # " # min max E? ? ?? ? ?T ? ? ?t c (it , at (it )) = max min E? ? ?T ? ?? t =0 ? ? ?t c (it , at (it )) . (3) t =0 The optimal value function v? ? corresponding to the optimal policy ? ? satisfies   v? ? (i ) = min c(i, a) + ??P a (v? ? ) , (4) and ? ? can then be obtained in a greedy fashion, i.e., n o a? (i ) ? arg min c(i, a) + ??P a (v) . 
(5) i a?A i a?A The main shortcoming of this approach is that it does not generalize to the model free case where the transition probabilities are not explicitly known but rather the agent can only sample states according to these probabilities. In the absence of this knowledge, we cannot compute the support functions of the uncertainty sets Pia . On the other hand it is often easy to have a confidence region Uia , e.g., a ball or an ellipsoid, corresponding to every state-action pair i ? X , a ? A that quantifies our uncertainty in the simulation, with the uncertainty set Pia being the confidence region Uia centered around the unknown simulator probabilities. Formally, we define the uncertainty sets corresponding to every state action pair in the following fashion. Definition 2.3 (Uncertainty sets). Corresponding to every state-action pair (i, a) we have a confidence region Uia so that the uncertainty region Pia of the probability transition matrix corresponding to (i, a) is defined as Pia := { x + pia | x ? Uia } , pia (6) where is the unknown state transition probability vector from the state i ? X to every other state in X given action a during the simulation. 3  As a simple example, we have the ellipsoid Uia := x | x> Aia x ? 1, ?i?X xi = 0 for some n ? n psd matrix Aia with the uncertainty set Pia being Pia := x + pia | x ? Uia , where pia is the unknown simulator state transition probability vector with which the agent transitioned to a new state during training. Note that while it may easy to come up with good descriptions of the confidence region Uia , the approach of [15, 11] breaks down since we have no knowledge of pia and merely observe the new state j sampled from this distribution. In the following sections we develop robust versions of Q-learning, SARSA, and TD-learning which are guaranteed to converge to an approximately optimal policy that is robust with respect to this confidence region. The robust versions of these iterative algorithms involve an additional linear optimization step over the set Uia , which in the case of Uia = {k x k2 ? r } simply corresponds to adding fixed noise during every update. In later sections we will extend it to the function approximation case where we study linear architectures as well as nonlinear architectures; in the latter case we derive new stochastic gradient descent algorithms for computing approximately robust policies. 3 Robust exact dynamic programming algorithms In this section we develop robust versions of exact dynamic programming algorithms such as Qlearning, SARSA, and TD-learning. These methods are suitable for small MDPs where the size n of the state space is not too large. Note that confidence region Uia must also be constrained to lie within the probability simplex ?n . However since we do not have knowledge of the simulator probabilities pia , we do not know how far away pia is from the boundary of ?n and so the algorithms will make ca where we drop the requirement of U ca ? ?n , to compute the use of a proxy confidence region U i i robust optimal policies. With a suitable choice of step lengths and discount factors we can prove a convergence to an approximately optimal Ui -robust policy where the approximation depends on the ca and the true confidence region U a . Below we difference between the unconstrained proxy region U i i give specific examples of possible choices for simple confidence regions. Ellipsoid: Let { Aia }i,a be a sequence of n ? n psd matrices. 
Then we can define the confidence region as ( ) > a a a a Ui := x x Ai x ? 1, ? xi = 0, ? pij ? x j ? 1 ? pij , ? j ? X . (7) i ?X a a := Note  a that Ui has some additional linear constraints soa that the uncertainty set Pi a pi + x | x ? Ui lies inside ?n . Since we do not know pi , we will make use of the proxy conca := { x | x > A a x ? 1, ?i?X xi = 0}. In particular when A a = r ?1 In for every fidence region U i i i i ? X , a ? A then this corresponds to a spherical confidence interval of [?r, r ] in every direction. In other words, each uncertainty set Pia is an `2 ball of radius r. Parallelepiped: Let { Bia }i,a be a sequence of n ? n invertible matrices. Then we can define the confidence region as ( ) Uia := x k Bia x k1 ? 1, ? xi = 0, ? pija ? x j ? 1 ? pija , ? j ? X . (8) i ?X ca without the ? p a ? x j ? 1 ? p a As before, we will use the unconstrained parallelepiped U ij ij i constraints, as a proxy for Uia since we do not have knowledge pia . In particular if Bia = D for a ca corresponds to a rectangle. In particular if diagonal matrix D, then the proxy confidence region U i every diagonal entry is r, then every uncertainty set Pia is an `1 ball of radius r. 3.1 Robust Q-learning Let us recall the notion of a Q-factor of a state-action pair (i, a) and a policy ? which in the non-robust setting is defined as Q(i, a) := c(i, a) + E j?X [v( j)] , 4 (9) where v is the value function of the policy ?. In other words, the Q-factor represents the expected cost if we start at state i, use the action a and follow the policy ? subsequently. One may similarly define the robust Q-factors using a similar interpretation and the minimax characterization of Theorem 2.2. Let Q? denote the Q-factors of the optimal robust policy and let v? ? Rn be its value function. Note that we may write the value function in terms of the Q-factors as v? = mina?A Q? (i, a). From Theorem 2.2 we have the following expression for Q? : Q? (i, a) = c(i, a) + ??P a (v? ) (10) i = c(i, a) + ??Uia (v? ) + ? ? pija min Q? ( j, a0 ), (11) a0 ?A j?X where equation (11) follows from Definition 2.3. For an estimate Qt of Q? , let vt ? Rn be its value vector, i.e., vt (i ) := mina?A Qt (i, a). The robust Q-iteration is defined as:   0 Qt (i, a) := (1 ? ?t ) Qt?1 (i, a) + ?t c(i, a) + ??Uca (vt?1 ) + ? min Qt?1 ( j, a ) , (12) a0 ?A pija using i where a state j ? X is sampled with the unknown transition probability the simulator. Note that the robust Q-iteration of equation (12) involves an additional linear optimization step to compute ca . We will prove that iterating the support function ?Uca (vt ) of vt over the proxy confidence region U i i equation (12) converges to an approximately optimal policy. The following definition introduces the notion of an ?-optimal policy, see e.g., [4]. The error factor ? is also referred to as the amplification factor. We will treat the Q-factors as a |X | ? |A| matrix in the definition so that its `? norm is defined as usual. Definition 3.1 (?-optimal policy). A policy ? with Q-factors Q 0 is ?-optimal with respect to the ? 0 ? ? optimal policy ? with corresponding Q-factors Q if Q ? Q ? ? ? kQ? k? . The following simple lemma allows us to decompose the optimization of a linear function over the ca in terms of linear optimization over P a , U a , and U ca . proxy uncertainty set P i i i i Lemma 3.2. Let v ? Rn be any vector and let ? ai := maxy?Uca minx?U a ky ? x k1 . Then we have i ?Pca (v) ? ?P a (v) + ? ai kvk? . 
i i i The following theorem proves that under a suitable choice of step lengths ?t and discount factor ?, the iteration of equation (12) converges to an ?-approximately optimal policy with respect to the confidence regions Uia . Theorem 3.3. Let the step lengths ?t of the Q-iteration algorithm be chosen such that ?? t = 0 ?t = ? 2 < ? and let the discount factor ? < 1. Let ? a be as in Lemma 3.2 and let ? := and ?? ? t =0 t i maxi?X ,a?A ? ai . If ? (1 + ?) < 1 then with probability 1 the iteration of equation (12) converges to ?? an ?-optimal policy where ? := 1??(1+ ?) . Remark 3.4. If ? = 0 then note that by Theorem 3.3, the robust Q-iterations converge to the exact optimal Q-factors since ? = 0. Since ? ai := maxy?Uca minx?U a ky ? x k1 , it follows that ? = 0 iff i i ca = U a for every i ? X , a ? A. This happens when the confidence region is small enough so that U i i the simplex constraints ? pija ? x j ? 1 ? pija ? j ? X in the description of Pia become redundant for every i ? X , a ? A. Equivalently every pia is ?far? from the boundary of the simplex ?n compared to the size of the confidence region Uia . Remark 3.5. Note that simply using the nominal Q-iteration without the ?Uca (v) term does not i ? 0 guarantee 0 convergence to Q . Indeed, the nominal Q-iterations converge to Q-factors Q where Q ? Q? may be arbitrary large. This follows easily from observing that ? | Q0 (i, a) ? Q? (i, a)| = ?Uca (v? ) (13) i , where v? is the value function of Q? and so 0 Q ? Q? = max ? ca (v? ) ? U i ?X ,a?A which can be as high as kv? k? = k Q? k? . 5 i (14) 3.2 Robust TD-Learning Let (i0 , i1 , . . . ) be a trajectory of the agent, where im denotes the state of the agent at time step m. The main idea behind the TD(?)-learning method is to estimate the value function v? of a policy ? using the temporal difference errors dm defined as dm := c(im , ? (im )) + ?vt (im+1 ) ? vt (im ). (15) For a parameter ? ? (0, 1), the TD-learning iteration is defined in terms of the temporal difference errors as ! ? vt+1 (ik ) := vt (ik ) + ?t ? (??)m?k dm . (16) m=k ca for every temporal difference In the robust setting, we have a confidence region Uia with proxy U i error, which leads us to define the robust temporal difference errors as dem := dm + ?? \ ( v t ), ? (i ) Uim (17) m where dm is the non-robust temporal difference. The robust TD-update is the usual TD-update, with the robust temporal difference errors df m replacing the usual temporal difference error dm . We define an ?-suboptimal value function for a fixed policy ? similar to Definition 3.1. Definition 3.6 (?-approximate value function). Given a policy ?, we say that a vector v0 ? Rn is an ?-approximation of v? if kv0 ? v? k? ? ? kv? k? . The following theorem guarantees convergence of the robust TD-iteration to an approximate value function for ?. We refer the reader to the supplementary material for a proof. Theorem 3.7. Let ? ai be as in Lemma 3.2 and let ? := maxi?X ,a?A ? ai . Let ? := 1????? . If ? (1 + ??) < 1 then the robust TD-iteration converges to an ?-approximate value function, where ? := ?? ca is the same as the true . In particular if ? ai = ? = 0, i.e., the proxy confidence region U i 1?? (1+??) a confidence region Ui , then the convergence is exact, i.e., ? = 0. 4 Robust Reinforcement Learning with function approximation In Section 3 we derived robust versions of exact dynamic programming algorithms such as Q-learning, SARSA and TD-learning respectively. 
If the state space X of the MDP is large then it is prohibitive to maintain a lookup table entry for every state. A standard approach for large scale MDPs is to use the approximate dynamic programming (ADP) framework [17]. In this setting, the problem is parametrized by a smaller dimensional vector ? ? Rd where d  n = |X |. The natural generalizations of Q-learning, SARSA, and TD-learning algorithms of Section 3 are via the projected Bellman equation, where we project back to the space spanned by all the parameters in ? ? Rd , since they are the value functions representable by the model. Convergence for these algorithms even in the non-robust setting are known only for linear architectures, see e.g., [2]. Recent work by [6] proposed stochastic gradient descent algorithms with convergence guarantees for smooth nonlinear function architectures, where the problem is framed in terms of minimizing a loss function. We give robust versions of both these approaches. 4.1 Robust approximations with linear architectures In the approximate setting with linear architectures, we approximate the value function v? of a policy ? by ?? where ? ? Rd and ? is a n ? d feature matrix with rows ?( j) for every n state j ? X o representing its feature vector. Let S be the span of the columns of ?, i.e., S := ?? | ? ? Rd . ? (i ) Define the operator T? : Rn ? Rn as ( T? v)(i ) := c(i, ? (i )) + ? ? j?X pij v( j), so that the true value function v? satisfies T? v? = v? . A natural approach towards estimating v? given a current estimate ??t is to compute T? (??t ) and project it back to S to get the next parameter ?t+1 . The motivation behind such an iteration is the fact that the true value function is a fixed point of 6 this operation if it belonged to the subspace S. This gives rise to the projected Bellman equation where the projection ? is typically taken with respect to a weighted Euclidean norm k?k? , i.e., k x k? = ?i?X ? i xi2 , where ? is some probability distribution over the states X . In the model free case, where we do not have explicit knowledge of the transition probabilities, various methods like LSTD(?), LSPE(?), TD(?) have been proposed [3, 8, 7, 14, 22, 21]. The key idea behind proving convergence for these methods is to show that the mapping ?T? is a contraction mapping with respect to the k?k? for some distribution ? over the states X . While the operator T? in the non-robust case is linear and is a contraction in the `? norm as in Section 3, the projection operator with respect to such norms is not guaranteed to be a contraction. However, it is known that if ? is the steady state distribution of the policy ? under evaluation, then ? is non-expansive in k?k? [4, 2]. In the robust setting, we have the same methods but with the robust Bellman operators T? defined as ( T? v)(i ) := c(i, ? (i )) + ?? ?(i) (v). Since we do not have access to the simulator probabilities pia , Pi ca as in Section 3, with the proxy operator denoted by T c we will use a proxy set P ? . While the iterative i methods of the non-robust setting generalize via the robust operator T? and the robust projected Bellman equation ?? = ?T? (?? ), it is however not clear how to choose the distribution ? under which the projected operator ?T? is a contraction in order to show convergence. Let ? be the steady b of the MDP with transition probability matrix P?b . We state distribution of the exploration policy ? make the following assumption on the discount factor ? as in [24]. Assumption 4.1. For every state i ? X and action a ? 
A, there exists a constant ? ? (0, 1) such that for any p ? Pia we have ?p j ? ?Pij?b for every j ? X . Assumption 4.1 might appear artificially restrictive; however, it is necessary to prove that ?T? is a contraction. While [24] require this assumption for proving convergence of robust MDPs, a similar assumption is also required in proving convergence of off-policy Reinforcement Learning methods of b which is not necessarily the same as [5] where the states are sampled from an exploration policy ? the policy ? under evaluation. Note that in the robust setting, all methods are necessarily off-policy since the transition matrices are not fixed for a given policy. The following lemma is an ?-weighted Euclidean norm version of Lemma 3.2. Lemma 4.2. Let v ? Rn be any vector and let ? ai := ?Pca (v) ? ?P a (v) + ? ai kvk? , where ? min := mini?X ? i . i max ca y ?U i minx?U a ky? x k? i ? min . Then we have i The following theorem shows that the robust projected Bellman equation is a contraction under some assumptions on the discount factor ?. ? (i ) Theorem 4.3. Let ? ai be as in Lemma 4.2 and let ? := maxi?X ? i . If the discount factor ? satisfies c Assumption 4.1 and ?2 + ?2 ?2 < 12 , then the operator T ? is a contraction with respect to k?k? . In 0 d other words for any two ?, ? ? R , we have 2   2 2 c 0 2 2 2 c ?? ? ?? 0 ? < ?? ? ?? 0 ? . (18) T? (?? ) ? T ? ( ?? ) ? 2 ? + ? ? ? [ ? (i ) ? (i ) If ? i = ? = 0 so that Ui = Ui , then we have a simpler contraction under the assumption that ? < 1. The following corollary shows that the solution to the proxy projected Bellman equation converges to a solution that is not too far away from the true value function v? . Corollary 4.4. Let Assumption 4.1 hold and let ? be as in Theorem 4.3. Let ve? be the fixed point of c c e? = ve? . Let vb? be the fixed the projected Bellman equation for the proxy operator T ? , i.e., ? T ?v c c b? = vb? . Let v? be the true value function of the policy ?, point of the proxy operator T ? , i.e., T ?v i.e., T? v? = v? . Then it follows that ?? kv? k? + k?v? ? v? k? p . (19) kve? ? v? k? ? 1 ? 2 ( ?2 + ? 2 ?2 ) 7 In particular if ? i = ? = 0 i.e., the proxy confidence region is actually the true confidence region, then the proxy projected Bellman equation has a solution satisfying kve? ? v? k? ? k?v? ?v? k? . 1? ? Theorem 4.3 guarantees that the robust projected Bellman iterations of LSTD(?), LSPE(?) and TD(?)-methods converge, while Corollary 4.4 guarantees that the solution it coverges to is not too far away from the true value function v? . 4.2 Robust approximations with nonlinear architectures In this section we consider the situation where the function approximator v? is a smooth but not necessarily linear function of ?. This section generalizes the results of [6] to the robust setting with confidence regions. We define robust analogues of the nonlinear GTD2 and nonlinear TDC algorithms respectively. n o Let M := v? | ? ? Rd be the manifold spanned by all possible value functions representable by our model and let PM? be the tangent plane of M at ?. Let space, i.e., n T M? be the tangent o d the translation of PM? to the origin. In other words, T M? := ?? u | u ? R , where ?? is an n ? d matrix with entries ?? (i, j) := ? ?? j v? (i ). In the nonlinear case, we project on to the tangent space T M? , since projections on to M is computationally hard. We denote this projection by ?? and it is also with respect to a weighted Euclidean norm k?k? . 
4.2 Robust approximations with nonlinear architectures

In this section we consider the situation where the function approximator $v_\theta$ is a smooth but not necessarily linear function of $\theta$. This section generalizes the results of [6] to the robust setting with confidence regions. We define robust analogues of the nonlinear GTD2 and nonlinear TDC algorithms, respectively. Let $M := \{v_\theta \mid \theta \in \mathbb{R}^d\}$ be the manifold spanned by all possible value functions representable by our model, and let $PM_\theta$ be the tangent plane of $M$ at $\theta$. Let $TM_\theta$ be the tangent space, i.e., the translation of $PM_\theta$ to the origin. In other words, $TM_\theta := \{\Phi_\theta u \mid u \in \mathbb{R}^d\}$, where $\Phi_\theta$ is an $n \times d$ matrix with entries $\Phi_\theta(i, j) := \frac{\partial}{\partial \theta_j} v_\theta(i)$. In the nonlinear case, we project onto the tangent space $TM_\theta$, since projection onto $M$ is computationally hard. We denote this projection by $\Pi_\theta$; it is also taken with respect to a weighted Euclidean norm $\|\cdot\|_\xi$.

The mean squared projected Bellman equation (MSPBE) loss function was proposed by [6] as an extension of [22, 21]: $\mathrm{MSPBE}(\theta) = \|v_\theta - \Pi_\theta T_\pi v_\theta\|^2_\xi$, where we now project onto the tangent space $TM_\theta$. Since the number $n$ of states is prohibitively large, we want stochastic gradient algorithms that run in time polynomial in $d$. Therefore, we assume that the confidence region of every state-action pair is the same: $U^a_i = U$ and $\hat{U}^a_i = \hat{U}$. The robust version of the MSPBE loss function, the mean squared robust projected Bellman equation (MSRPBE) loss, can then be defined in terms of the robust Bellman operator with the proxy confidence region $\hat{U}^{\pi(i)}_i$ and the proxy uncertainty set $\hat{P}^{\pi(i)}_i$ as
$$\mathrm{MSRPBE}(\theta) = \left\|v_\theta - \Pi_\theta \hat{T}_\pi v_\theta\right\|^2_\xi. \quad (20)$$
In order to derive stochastic gradient descent algorithms for minimizing the MSRPBE loss function, we need to take the gradient of $\sigma_P(v_\theta)$ for a convex set $P$. The gradient $\nabla_\theta \sigma_P$ is given by
$$\nabla_\theta \sigma_P(\theta) := \nabla_\theta \max_{y \in P} y^\top v_\theta = \Phi^\top_\theta \arg\max_{y \in P} y^\top v_\theta, \quad (21)$$
where $\phi_\theta(i) := \nabla v_\theta(i)$. Let us denote $\phi_\theta(i)$ simply by $\phi$ and $\phi_\theta(i')$ by $\phi'$, where $i'$ is the next sampled state, and let $\hat{U}$ denote the proxy confidence region $\hat{U}^{\pi(i)}_i$ of state $i$ and the policy $\pi$ under evaluation. Let
$$h(\theta, u) := -\mathbb{E}\left[\left(\tilde{d}_\theta - \phi^\top u\right) \nabla^2 v_\theta(i)\, u\right], \quad (22)$$
where $\tilde{d}$ is the robust temporal difference error. As in [6], we may express $\nabla \mathrm{MSRPBE}(\theta)$ in terms of $h(\theta, w)$, where $w = \mathbb{E}\left[\phi\phi^\top\right]^{-1} \mathbb{E}[\tilde{d}\phi]$. We refer the reader to the supplementary material for the details. This leads us to the following robust analogues of nonlinear GTD2 and nonlinear TDC, where we update the estimators $w_k$ of $w$ as $w_{k+1} := w_k + \beta_k \left(\tilde{d}_k - \phi_k^\top w_k\right) \phi_k$, with the parameters $\theta_k$ being updated on a slower timescale as
$$\theta_{k+1} := \Gamma\left(\theta_k + \alpha_k \left\{\left(\phi_k - \gamma\nabla_\theta\sigma_{\hat{U}}(\theta_k)\right)\left(\phi_k^\top w_k\right) - h_k\right\}\right) \quad \text{robust-nonlinear-GTD2}, \quad (23)$$
$$\theta_{k+1} := \Gamma\left(\theta_k + \alpha_k \left\{\tilde{d}_k \phi_k - \gamma\nabla_\theta\sigma_{\hat{U}}(\theta_k)\left(\phi_k^\top w_k\right) - h_k\right\}\right) \quad \text{robust-nonlinear-TDC}, \quad (24)$$
where $h_k := \left(\tilde{d}_k - \phi_k^\top w_k\right) \nabla^2 v_{\theta_k}(i_k)\, w_k$, $\nabla_\theta\sigma_{\hat{U}}(\theta_k)$ plays the role of the next-state feature $\gamma\phi'_k$ of the non-robust updates, and $\Gamma$ is a projection into an appropriately chosen compact set $C$ with a smooth boundary as in [6]. Under the assumption of Lipschitz continuous gradients, and under suitable assumptions on the step lengths $\alpha_k$ and $\beta_k$ and the confidence region $\hat{U}$, the updates of equations (23) and (24) converge with probability 1 to a local optimum of $\mathrm{MSRPBE}(\theta)$. See the supplementary material for the exact statement and proof of convergence. Note that in general computing $\nabla\sigma_{\hat{U}}(\theta)$ would take time polynomial in $n$, but it can be done in $O(d^2)$ time using a rank-$d$ approximation to $\hat{U}$.

5 Experiments

We implemented robust versions of Q-learning and SARSA as in Section 3 and evaluated their performance against the nominal algorithms using the OpenAI gym framework [9]. To test the performance of the robust algorithms, we perturb the models slightly by choosing, with a small probability $p$, a random state after every action. The size of the confidence region $U^a_i$ for the robust model is chosen by 10-fold cross validation via line search. After the value functions are learned for the robust and the nominal algorithms, we evaluate their performance on the true environment. We compare both the cumulative reward as well as the tail distribution function (the complementary cumulative distribution function) as in [24], which for every $a$ plots the probability that the algorithm earned a reward of at least $a$. Note that there is a tradeoff between the performance of the robust and the nominal algorithms as the value of $p$ varies, due to the presence of the $\alpha$ term in the convergence results. See Figure 1 for a comparison.
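This experimental protocol can be sketched in a few lines. The code below assumes the pre-0.26 gym step/reset API; the wrapper, the r-contamination worst case used for the robust backup (with probability r, an adversary moves to the lowest-value state), and all names are our own illustrative choices, not the authors' implementation.

```python
import gym
import numpy as np

class RandomJumpWrapper(gym.Wrapper):
    """With probability p, teleport to a uniformly random state after a step
    (toy_text envs such as FrozenLake expose the state as `unwrapped.s`)."""
    def __init__(self, env, p):
        super().__init__(env)
        self.p = p
    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if not done and np.random.rand() < self.p:
            obs = self.env.observation_space.sample()
            self.env.unwrapped.s = obs
        return obs, reward, done, info

def robust_q_learning(env, r=0.05, gamma=0.95, lr=0.1, eps=0.1,
                      episodes=5000):
    nS, nA = env.observation_space.n, env.action_space.n
    Q = np.zeros((nS, nA))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = env.action_space.sample() if np.random.rand() < eps \
                else int(np.argmax(Q[s]))
            s2, rew, done, _ = env.step(a)
            # Worst case over an r-contamination uncertainty set.
            v_robust = (1 - r) * Q[s2].max() + r * Q.max(axis=1).min()
            Q[s, a] += lr * (rew + gamma * (0.0 if done else v_robust)
                             - Q[s, a])
            s = s2
    return Q

train_env = RandomJumpWrapper(gym.make("FrozenLake-v0"), p=0.01)
Q = robust_q_learning(train_env)
```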
More figures and detailed results are included in the supplementary material.

Figure 1: Line search, tail distribution, and cumulative rewards during the transient phase of robust vs. nominal Q-learning on FrozenLake-v0 with p = 0.01. Note that the instability of the reward as a function of the size of the uncertainty set (left) is due to the small sample size used in the line search.

Acknowledgments

The authors would like to thank Guy Tennenholtz and the anonymous reviewers for helping improve the presentation of the paper.

References

[1] J. A. Bagnell, A. Y. Ng, and J. G. Schneider. Solving uncertain Markov decision processes. 2001.
[2] D. P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3):310–335, 2011.
[3] D. P. Bertsekas and S. Ioffe. Temporal differences-based policy iteration and applications in neuro-dynamic programming. Lab. for Info. and Decision Systems Report LIDS-P-2349, MIT, Cambridge, MA, 1996.
[4] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-dynamic programming: an overview. In Decision and Control, 1995, Proceedings of the 34th IEEE Conference on, volume 1, pages 560–564. IEEE, 1995.
[5] D. P. Bertsekas and H. Yu. Projected equation methods for approximate solution of large linear systems. Journal of Computational and Applied Mathematics, 227(1):27–50, 2009.
[6] S. Bhatnagar, D. Precup, D. Silver, R. S. Sutton, H. R. Maei, and C. Szepesvári. Convergent temporal-difference learning with arbitrary smooth function approximation. In Advances in Neural Information Processing Systems, pages 1204–1212, 2009.
[7] J. A. Boyan. Technical update: Least-squares temporal difference learning. Machine Learning, 49(2-3):233–246, 2002.
[8] S. J. Bradtke and A. G. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22(1-3):33–57, 1996.
[9] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI gym. arXiv preprint arXiv:1606.01540, 2016.
[10] E. Delage and S. Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1):203–213, 2010.
[11] G. N. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, 2005.
[12] S. H. Lim, H. Xu, and S. Mannor. Reinforcement learning in robust Markov decision processes. In Advances in Neural Information Processing Systems, pages 701–709, 2013.
[13] J. Morimoto and K. Doya. Robust reinforcement learning. Neural Computation, 17(2):335–359, 2005.
[14] A. Nedić and D. P. Bertsekas. Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems, 13(1):79–110, 2003.
[15] A. Nilim and L. El Ghaoui. Robustness in Markov decision problems with uncertain transition matrices. In NIPS, pages 839–846, 2003.
[16] L. Pinto, J. Davidson, R. Sukthankar, and A. Gupta. Robust adversarial reinforcement learning. arXiv preprint arXiv:1703.02702, 2017.
[17] W. B. Powell. Approximate Dynamic Programming: Solving the curses of dimensionality, volume 703. John Wiley & Sons, 2007.
[18] M. L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
[19] A. Shapiro and A. Kleywegt. Minimax analysis of stochastic problems. Optimization Methods and Software, 17(3):523–542, 2002.
[20] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.
[21] R. S. Sutton, H. R. Maei, D. Precup, S. Bhatnagar, D. Silver, C. Szepesvári, and E. Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 993–1000. ACM, 2009.
[22] R. S. Sutton, H. R. Maei, and C. Szepesvári. A convergent O(n) temporal-difference algorithm for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems, pages 1609–1616, 2009.
[23] A. Tamar, Y. Glassner, and S. Mannor. Optimizing the CVaR via sampling. arXiv preprint arXiv:1404.3862, 2014.
[24] A. Tamar, S. Mannor, and H. Xu. Scaling up robust MDPs using function approximation. In ICML, volume 32, page 2014, 2014.
[25] W. Wiesemann, D. Kuhn, and B. Rustem. Robust Markov decision processes. Mathematics of Operations Research, 38(1):153–183, 2013.
Hierarchical Attentive Recurrent Tracking

Adam R. Kosiorek, Department of Engineering Science, University of Oxford, adamk@robots.ox.ac.uk
Alex Bewley, Department of Engineering Science, University of Oxford, bewley@robots.ox.ac.uk
Ingmar Posner, Department of Engineering Science, University of Oxford, ingmar@robots.ox.ac.uk

Abstract

Class-agnostic object tracking is particularly difficult in cluttered environments, as target-specific discriminative models cannot be learned a priori. Inspired by how the human visual cortex employs spatial attention and separate "where" and "what" processing pathways to actively suppress irrelevant visual features, this work develops a hierarchical attentive recurrent model for single object tracking in videos. The first layer of attention discards the majority of the background by selecting a region containing the object of interest, while the subsequent layers tune in on visual features particular to the tracked object. This framework is fully differentiable and can be trained in a purely data driven fashion by gradient methods. To improve training convergence, we augment the loss function with terms for auxiliary tasks relevant for tracking. Evaluation of the proposed model is performed on two datasets: pedestrian tracking on the KTH activity recognition dataset and the more difficult KITTI object tracking dataset.

1 Introduction

In computer vision, designing an algorithm for model-free tracking of anonymous objects is challenging, since no target-specific information can be gathered a priori, and yet the algorithm has to handle target appearance changes, varying lighting conditions and occlusion. To make it even more difficult, the tracked object often constitutes but a small fraction of the visual field. The remaining parts may contain distractors, which are visually salient objects resembling the target but holding no relevant information. Despite this fact, recent models often process the whole image, which exposes them to noise and increases the associated computational cost, or they use heuristic methods to decrease the size of the search regions. This is in contrast to human visual perception, which does not process the visual field in its entirety, but rather acknowledges it briefly and focuses on processing small fractions thereof, which we dub visual attention.

Attention mechanisms have recently been explored in machine learning in a wide variety of contexts [27, 14], often providing new capabilities to machine learning algorithms [11, 12, 7]. While they improve efficiency [22] and performance on state-of-the-art machine learning benchmarks [27], their architecture is much simpler than that of the mechanisms found in the human visual cortex [5]. Attention has also been long studied by neuroscientists [18], who believe that it is crucial for visual perception and cognition [4], since it is inherently tied to the architecture of the visual cortex and can affect the information flow inside it. Whenever more than one visual stimulus is present in the receptive field of a neuron, all the stimuli compete for computational resources due to the limited processing capacity. Visual attention can lead to suppression of distractors by reducing the size of the receptive field of a neuron and by increasing sensitivity at a given location in the visual field (spatial attention).

Figure 1: KITTI image with the ground-truth and predicted bounding boxes and an attention glimpse.
The lower row corresponds to the hierarchical attention of our model: the 1st layer extracts an attention glimpse (a), the 2nd layer uses appearance attention to build a location map (b), and the 3rd layer uses the location map to suppress distractors, visualised in (c).

Visual attention can also amplify activity in different parts of the cortex, which are specialised in processing different types of features, leading to response enhancement with respect to those features (appearance attention). The functional separation of the visual cortex is most apparent in two distinct processing pathways. After leaving the eye, the sensory inputs enter the primary visual cortex (known as V1) and then split into the dorsal stream, responsible for estimating spatial relationships (where), and the ventral stream, which targets appearance-based features (what).

Inspired by the general architecture of the human visual cortex and the role of attention mechanisms, this work presents a biologically-inspired recurrent model for single object tracking in videos (cf. Section 3). Tracking algorithms typically use simple motion models and heuristics to decrease the size of the search region. It is interesting to see whether neuroscientific insights can aid our computational efforts, thereby improving the efficiency and performance of single object tracking. It is worth noting that visual attention can be induced by the stimulus itself (due to, e.g., high contrast) in a bottom-up fashion, or by back-projections from other brain regions and working memory as a top-down influence. The proposed approach exploits this property to create a feedback loop that steers the three layers of visual attention mechanisms in our hierarchical attentive recurrent tracking (HART) framework, see Figure 1. The first stage immediately discards spatially irrelevant input, while later stages focus on producing target-specific filters to emphasise visual features particular to the object of interest. The resulting framework is end-to-end trainable, and we resort to maximum likelihood estimation (MLE) for parameter learning. This follows from our interest in estimating the distribution over object locations in a sequence of images, given the initial location from whence our tracking commenced. Formally, given a sequence of images $x_{1:T} \in \mathbb{R}^{H \times W \times C}$, where the superscript denotes the height, width and the number of channels of the image, respectively, and an initial location for the tracked object given by a bounding box $b_1 \in \mathbb{R}^4$, the conditional probability distribution factorises as
$$p(b_{2:T} \mid x_{1:T}, b_1) = \int p(h_1 \mid x_1, b_1) \prod_{t=2}^{T} \int p(b_t \mid h_t)\, p(h_t \mid x_t, b_{t-1}, h_{t-1})\, \mathrm{d}h_t\, \mathrm{d}h_1, \quad (1)$$
where we assume that the motion of an object can be described by a Markovian state $h_t$. Our bounding box estimates $\hat{b}_{2:T}$ are given by the MLE of the model parameters. In sum, our contributions are threefold. Firstly, a hierarchy of attention mechanisms that leads to suppressing distractors and to computational efficiency is introduced. Secondly, a biologically plausible combination of attention mechanisms and recurrent neural networks is presented for object tracking. Finally, our attention-based tracker is demonstrated using real-world sequences in challenging scenarios where previous recurrent attentive trackers have failed.
Next we briefly review related work (Section 2) before describing how information flows through the components of our hierarchical attention in Section 3. Section 4 details the losses applied to guide the attention. Section 5 presents experiments on the KTH and KITTI datasets with comparisons to related attention-based trackers. Section 6 discusses the results and intriguing properties of our framework, and Section 7 concludes the work. Code and results are available online (https://github.com/akosiorek/hart).

2 Related Work

A number of recent studies have demonstrated that visual content can be captured through a sequence of spatial glimpses or foveation [22, 12]. Such a paradigm has the intriguing property that the computational complexity is proportional to the number of steps, as opposed to the image size. Furthermore, the fovea centralis in the retina of primates is structured with maximum visual acuity in the centre and decaying resolution towards the periphery; Cheung et al. [4] show that if spatial attention is capable of zooming, a regular grid sampling is sufficient. Jaderberg et al. [14] introduced the spatial transformer network (STN), which provides a fully differentiable means of transforming feature maps, conditioned on the input itself. Eslami et al. [7] use the STN as a form of attention in combination with a recurrent neural network (RNN) to sequentially locate and identify objects in an image. Moreover, Eslami et al. [7] use a latent variable to estimate the presence of additional objects, allowing the RNN to adapt the number of time-steps based on the input. Our spatial attention mechanism is based on the two-dimensional Gaussian grid filters of [16], which are both fully differentiable and more biologically plausible than the STN.

Whilst focusing on a specific location has its merits, focusing on particular appearance features might be just as important. A policy with feedback connections can learn to adjust the filters of a convolutional neural network (CNN), thereby adapting them to features present in the current image and improving accuracy [25]. De Brabandere et al. [6] introduced the dynamic filter network (DFN), where the filters of a CNN are computed on-the-fly conditioned on input features, which can reduce model size without performance loss. Karl et al. [17] showed that input-dependent state transitions can be helpful for learning a latent Markovian state-space system. While not the focus of this work, we follow this concept in estimating the expected appearance of the tracked object.

In the context of single object tracking, both attention mechanisms and RNNs appear to be perfectly suited, yet their success has mostly been limited to simple monochromatic sequences with plain backgrounds [16]. Cheung [3] applied STNs [14] as attention mechanisms for real-world object tracking, but failed due to exploding gradients, potentially arising from the difficulty of the data. Ning et al. [23] achieved competitive performance by using features from an object detector as inputs to a long short-term memory network (LSTM), but require processing of the whole image at each time-step. Two recent state-of-the-art trackers employ convolutional Siamese networks, which can be seen as an RNN unrolled over two time-steps [13, 26]. Both methods explicitly process small search areas around the previous target position to produce a bounding box offset [13] or a correlation response map with the maximum corresponding to the target position [26]. We also acknowledge the recent work of Gordon et al.
[10], which employs an RNN-based model and uses explicit cropping and warping as a form of non-differentiable spatial attention ([10] only became available at the time of submitting this paper). The work presented in this paper is closest to [16], with which we share a similar spatial attention mechanism that is guided through an RNN to effectively learn a motion model spanning multiple time-steps. The next section describes our additional attention mechanisms in relation to their biological counterparts.

3 Hierarchical Attention

Figure 2: Hierarchical Attentive Recurrent Tracking. Spatial attention extracts a glimpse $g_t$ from the input image $x_t$. V1 and the ventral stream extract appearance-based features $\nu_t$, while the dorsal stream computes a foreground/background segmentation $s_t$ of the attention glimpse. Masked features $v_t$ contribute to the working memory $h_t$. The LSTM output $o_t$ is then used to compute the attention $a_{t+1}$, the appearance $\alpha_{t+1}$ and a bounding box correction $\Delta\hat{b}_t$. Dashed lines correspond to temporal connections, while solid lines describe information flow within one time-step.

Figure 3: Architecture of the appearance attention. V1 is implemented as a CNN shared between the dorsal stream (DFN) and the ventral stream (CNN). The symbol $\odot$ represents the Hadamard product and implements masking of visual features by the foreground/background segmentation.

Inspired by the architecture of the human visual cortex, we structure our system around a working memory responsible for storing the motion pattern and an appearance description of the tracked object. If both quantities were known, it would be possible to compute the expected location of the object at the next time step. Given a new frame, however, it is not immediately apparent which visual features correspond to the appearance description. If we were to pass them on to an RNN, it would have to implicitly solve a data association problem. As this is non-trivial, we prefer to model it explicitly by outsourcing the computation to a separate processing stream conditioned on the expected appearance. This results in a location map, making it possible to neglect features inconsistent with our memory of the tracked object. We now describe the information flow in our model.

Given attention parameters $a_t$, the spatial attention module extracts a glimpse $g_t$ from the input image $x_t$. We then apply appearance attention, parametrised by the appearance $\alpha_t$ and comprised of V1 and the dorsal and ventral streams, to obtain object-specific features $v_t$, which are used to update the hidden state $h_t$ of an LSTM. The LSTM's output is then decoded to predict both spatial and appearance attention parameters for the next time-step, along with a bounding box correction $\Delta\hat{b}_t$ for the current time-step. Spatial attention is driven by the top-down signal $a_t$, while appearance attention depends on top-down ($\alpha_t$) as well as bottom-up (contents of the glimpse $g_t$) signals. Bottom-up signals have local influence and depend on stimulus salience at a given location, while top-down signals incorporate global context into local processing. This attention hierarchy, further enhanced by recurrent connections, mimics that of the human visual cortex [18]. We now describe the individual components of the system.
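The information flow just described (cf. Figure 2) can be summarised as one time-step of computation. The skeleton below is our own illustration, not the authors' implementation: every module is a placeholder callable, and the dummy modules and the value of the learnable scale c are arbitrary stand-ins used only to make the sketch runnable.

```python
import numpy as np

def hart_step(x_t, a_t, alpha_t, h_prev, modules, c=0.1):
    """One HART time-step, mirroring Figure 2."""
    g_t = modules["spatial_attention"](x_t, a_t)   # attention glimpse
    feats = modules["v1"](g_t)                     # shared V1 features
    nu_t = modules["ventral"](feats)               # appearance features
    s_t = modules["dorsal"](feats, alpha_t)        # location map
    v_t = modules["combine"](nu_t * s_t)           # masked features
    o_t, h_t = modules["lstm"](v_t, h_prev)        # working memory update
    alpha_next, delta_a, delta_b = modules["readout"](o_t, s_t)
    a_next = a_t + np.tanh(c) * delta_a            # attention update
    b_t = a_t + delta_b                            # bounding box estimate
    return b_t, a_next, alpha_next, h_t

# Dummy stand-in modules, just to exercise the dataflow.
rng = np.random.default_rng(0)
dummy = {
    "spatial_attention": lambda x, a: x[:28, :28],
    "v1":      lambda g: g,
    "ventral": lambda f: f,
    "dorsal":  lambda f, alpha: 1.0 / (1.0 + np.exp(-f)),
    "combine": lambda m: m.mean(keepdims=True).ravel(),
    "lstm":    lambda v, h: (v, 0.9 * h + 0.1 * v.mean()),
    "readout": lambda o, s: (0.0, rng.normal(size=4), rng.normal(size=4)),
}
x = rng.random((120, 160))
b, a, alpha, h = hart_step(x, a_t=np.zeros(4), alpha_t=0.0,
                           h_prev=np.zeros(1), modules=dummy)
```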
Spatial Attention. Our spatial attention mechanism is similar to the one used by Kahou et al. [16]. Given an input image $x_t \in \mathbb{R}^{H \times W}$, it creates two matrices $A^x_t \in \mathbb{R}^{w \times W}$ and $A^y_t \in \mathbb{R}^{h \times H}$, respectively. Each matrix contains one Gaussian per row; the widths and positions of the Gaussians determine which parts of the image are extracted as the attention glimpse. Formally, the glimpse $g_t \in \mathbb{R}^{h \times w}$ is defined as
$$g_t = A^y_t\, x_t \left(A^x_t\right)^\top. \quad (2)$$
Attention is described by the centres $\mu$ of the Gaussians, their variances $\sigma^2$ and the strides $\delta$ between the centres of the Gaussians of consecutive rows of the matrix, one for each axis. In contrast to the work by Kahou et al. [16], only the centres and strides are estimated from the hidden state of the LSTM, while the variance depends solely on the stride. This prevents the excessive aliasing caused when a small variance (compared to the strides) is predicted during training, leading to smoother convergence. The relationship between variance and stride is approximated using linear regression with polynomial basis functions (up to 4th order) before training the whole system. The glimpse size we use depends on the experiment.

Appearance Attention. This stage transforms the attention glimpse $g_t$ into a fixed-dimensional vector $v_t$ comprising appearance and spatial information about the tracked object. Its architecture depends on the experiment. In general, however, we implement $V1 : \mathbb{R}^{h \times w} \to \mathbb{R}^{h_v \times w_v \times c_v}$ as a number of convolutional and max-pooling layers. These are shared among the later processing stages, which corresponds to the primary visual cortex in humans [5]. Processing then splits into the ventral and dorsal streams. The ventral stream is implemented as a CNN; it handles visual features and outputs feature maps $\nu_t$. The dorsal stream, implemented as a DFN, is responsible for handling spatial relationships. Let $\mathrm{MLP}(\cdot)$ denote a multi-layered perceptron. The dorsal stream uses the appearance $\alpha_t$ to dynamically compute convolutional filters $\psi^{a \times b \times c \times d}_t$, where the superscript denotes the size of the filters and the number of input and output feature maps, as
$$\Psi_t = \left\{\psi^{a_i \times b_i \times c_i \times d_i}_t\right\}_{i=1}^{K} = \mathrm{MLP}(\alpha_t). \quad (3)$$
The filters with corresponding nonlinearities form $K$ convolutional layers applied to the output of V1. Finally, a convolutional layer with a $1 \times 1$ kernel and a sigmoid non-linearity is applied to transform the output into a spatial Bernoulli distribution $s_t$. Each value in $s_t$ represents the probability of the tracked object occupying the corresponding location.

The location map of the dorsal stream is combined with the appearance-based features extracted by the ventral stream, to imitate the distractor-suppressing behaviour of the human brain. It also prevents drift and allows occlusion handling, since the object appearance is not overwritten in the hidden state when the input does not contain features particular to the tracked object. The outputs of both streams are combined as³
$$v_t = \mathrm{MLP}(\mathrm{vec}(\nu_t \odot s_t)), \quad (4)$$
with $\odot$ being the Hadamard product.

State Estimation. Our approach relies on being able to predict future object appearance and location, and it therefore heavily depends on state estimation. We use an LSTM, which can learn to trade off spatio-temporal and appearance information in a data-driven fashion. It acts like a working memory, enabling the system to be robust to occlusions and oscillating object appearance, e.g., when an object rotates and comes back to its original orientation.
$$o_t, h_t = \mathrm{LSTM}(v_t, h_{t-1}), \quad (5)$$
$$\alpha_{t+1}, \Delta a_{t+1}, \Delta\hat{b}_t = \mathrm{MLP}(o_t, \mathrm{vec}(s_t)), \quad (6)$$
$$a_{t+1} = a_t + \tanh(c)\, \Delta a_{t+1}, \quad (7)$$
$$\hat{b}_t = a_t + \Delta\hat{b}_t. \quad (8)$$
Equations (5) to (8) detail the state updates. Spatial attention at time $t$ is formed as a cumulative sum of attention updates from times $t = 1$ to $t = T$, where $c$ is a learnable parameter initialised to a small value to constrain the size of the updates early in training. Since the spatial attention mechanism is trained to predict where the object is going to go (Section 4), the bounding box $\hat{b}_t$ is estimated relative to the attention at time $t$.

4 Loss

We train our system by minimising a loss function comprised of a tracking loss term, a set of terms for auxiliary tasks and regularisation terms. The auxiliary tasks are essential for real-world data, since convergence does not occur without them. They also speed up learning and lead to better performance on simpler datasets. Unlike the auxiliary tasks used by Jaderberg et al. [15], ours are relevant for our main objective: object tracking. In order to limit the number of hyperparameters, we automatically learn the loss weighting. The loss is given by
$$L_{\mathrm{HART}}(D, \theta) = \lambda_t L_t(D, \theta) + \lambda_s L_s(D, \theta) + \lambda_a L_a(D, \theta) + R(\lambda) + \beta R(D, \theta), \quad (9)$$
with dataset $D = \left\{\left(x^i_{1:T}, b^i_{1:T}\right)\right\}_{i=1}^{M}$, network parameters $\theta$, regularisation terms $R$, adaptive weights $\lambda = \{\lambda_t, \lambda_s, \lambda_a\}$ and a regularisation weight $\beta$. We now present and justify the components of our loss, where the expectations $\mathbb{E}[\cdot]$ are evaluated as an empirical mean over a minibatch of samples $\left\{\left(x^i_{1:T}, b^i_{1:T}\right)\right\}_{i=1}^{M}$, where $M$ is the batch size.

Tracking. To achieve the main tracking objective (localising the object in the current frame), we base the first loss term on the Intersection-over-Union (IoU) of the predicted bounding box w.r.t. the ground truth, where the IoU of two bounding boxes is defined as $\mathrm{IoU}(a, b) = \frac{a \cap b}{a \cup b} = \frac{\text{area of overlap}}{\text{area of union}}$. The IoU is invariant to object and image scale, making it a suitable proxy for measuring the quality of localisation. Even though it (or an exponential thereof) does not correspond to any probability distribution (as it cannot be normalised), it is often used for evaluation [20]. We follow the work by Yu et al. [28] and express the loss term as the negative log of the IoU:
$$L_t(D, \theta) = \mathbb{E}_{p(\hat{b}_{1:T} \mid x_{1:T}, b_1)}\left[-\log \mathrm{IoU}(\hat{b}_t, b_t)\right], \quad (10)$$
with the IoU clipped for numerical stability.

³ $\mathrm{vec} : \mathbb{R}^{m \times n} \to \mathbb{R}^{mn}$ is the vectorisation operator, which stacks the columns of a matrix into a column vector.
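Two ingredients defined above lend themselves to a minimal numpy sketch: the Gaussian-grid glimpse of eq. (2) and the negative-log-IoU loss of eq. (10). The parameterisation below is our own toy choice (fixed centres, strides and variances, and an (y, x, h, w) box convention); in the model, centres and strides are predicted and the variance is a learned function of the stride.

```python
import numpy as np

def gaussian_rows(size_out, size_in, center, stride, sigma):
    # One Gaussian per output row, centred at center + (i - size_out/2) * stride.
    mu = center + stride * (np.arange(size_out) - size_out / 2.0)
    x = np.arange(size_in)
    A = np.exp(-((x[None, :] - mu[:, None]) ** 2) / (2.0 * sigma ** 2))
    return A / (A.sum(axis=1, keepdims=True) + 1e-8)

def glimpse(image, center, stride, sigma, out_hw):
    h, w = out_hw
    Ay = gaussian_rows(h, image.shape[0], center[0], stride[0], sigma[0])
    Ax = gaussian_rows(w, image.shape[1], center[1], stride[1], sigma[1])
    return Ay @ image @ Ax.T                 # eq. (2): g = A^y x (A^x)^T

def iou(a, b):
    # Intersection-over-Union of two (y, x, h, w) boxes.
    ih = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iw = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ih * iw
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def tracking_loss(pred_box, true_box, eps=1e-4):
    # eq. (10) for one frame, with the IoU clipped for numerical stability.
    return -np.log(np.clip(iou(pred_box, true_box), eps, 1.0))

img = np.random.rand(120, 160)
g = glimpse(img, center=(60, 80), stride=(2.0, 2.0), sigma=(1.5, 1.5),
            out_hw=(28, 28))
print(g.shape, tracking_loss((10, 10, 50, 40), (12, 8, 48, 44)))
```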
Figure 4: Tracking results on the KTH dataset [24]. Starting with the first initialisation frame, where all three boxes overlap exactly, time flows from left to right, showing every 16th frame of the sequence captured at 25fps. The colour coding follows from Figure 1. The second row shows attention glimpses multiplied with the appearance attention.

Spatial Attention. Spatial attention singles out the tracked object from the image. To estimate its parameters, the system has to predict the object's motion. In case of an error, especially when the attention glimpse does not contain the tracked object, it is difficult to recover. As the probability of such an event increases with decreasing glimpse size, we employ two loss terms. The first one constrains the predicted attention to cover the bounding box, while the second one prevents it from becoming too large; the logarithmic arguments are appropriately clipped to avoid numerical instabilities:
$$L_s(D, \theta) = \mathbb{E}_{p(a_{1:T} \mid x_{1:T}, b_1)}\left[-\log\frac{\mathrm{area}(a_t \cap b_t)}{\mathrm{area}(b_t)} - \log\left(1 - \mathrm{IoU}(a_t, x_t)\right)\right]. \quad (11)$$

Appearance Attention. The purpose of appearance attention is to suppress distractors while keeping a full view of the tracked object, e.g., to focus on a particular pedestrian moving within a group. To guide this behaviour, we put a loss on appearance attention that encourages picking out only the tracked object. Let $\tau(a_t, b_t) : \mathbb{R}^4 \times \mathbb{R}^4 \to \{0, 1\}^{h_v \times w_v}$ be a target function. Given the bounding box $b$ and attention $a$, it outputs a binary mask of the same size as the output of V1. The mask corresponds to the glimpse $g$, with values equal to one at every location where the bounding box overlaps with the glimpse, and equal to zero otherwise. If we take $H(p, q) = -\sum_z p(z) \log q(z)$ to be the cross-entropy, the loss reads
$$L_a(D, \theta) = \mathbb{E}_{p(a_{1:T}, s_{1:T} \mid x_{1:T}, b_1)}\left[H(\tau(a_t, b_t), s_t)\right]. \quad (12)$$

Regularisation. We apply L2 regularisation to the model parameters $\theta$ and to the expected value of the dynamic parameters $\Psi_t(\alpha_t)$ as $R(D, \theta) = \frac{1}{2}\|\theta\|^2_2 + \frac{1}{2}\left\|\mathbb{E}_{p(\alpha_{1:T} \mid x_{1:T}, b_1)}\left[\Psi_t \mid \alpha_t\right]\right\|^2_2$.

Adaptive Loss Weights. To avoid hyper-parameter tuning, we follow the work by Kendall et al. [19] and learn the loss weighting $\lambda$. After initialising the weights with a vector of ones, we add the following regularisation term to the loss function: $R(\lambda) = \sum_i \log\left(\lambda^{-1}_i\right)$.

5 Experiments

5.1 KTH Pedestrian Tracking

Kahou et al. [16] performed a pedestrian tracking experiment on the KTH activity recognition dataset [24] as a real-world case-study. We replicate this experiment for comparison. We use code provided by the authors for data preparation, and we also use their pre-trained feature extractor. Unlike them, we did not need to upscale the ground-truth bounding boxes by a factor of 1.5 and then downscale them again for evaluation. We follow the authors and set the glimpse size to $(h, w) = (28, 28)$. We replicate the training procedure exactly, with the exception of using the RMSProp optimiser [9] with a learning rate of $3.33 \times 10^{-5}$ and momentum set to 0.9, instead of stochastic gradient descent with momentum. The original work reported an average IoU of 55.03% on test data, while the presented work achieves an average IoU score of 77.11%, reducing the relative error by almost a factor of two. Figure 4 presents qualitative results.

5.2 Scaling to Real-World Data: KITTI

Since we demonstrated that pedestrian tracking is feasible using the proposed architecture, we proceed to evaluate our model in a more challenging multi-class scenario on the KITTI dataset [8]. It consists of 21 high resolution video sequences with multiple instances of the same class posing as potential distractors. We split all sequences into 80/20 sequences for the train and test sets, respectively. As images in this dataset are much more varied, we implement V1 as the first three convolutional layers of a modified AlexNet [1]. The original AlexNet takes inputs of size 227 × 227 and downsizes them to 14 × 14 after the conv3 layer. Since too low a resolution would result in low tracking performance, and we did not want to upsample the extracted glimpse, we decided to replace the initial stride of four with one and to skip one of the max-pooling operations to conserve the spatial dimensions. This way, our feature map has a size of 14 × 14 × 384 with an input glimpse of size (h, w) = (56, 56). We apply dropout with probability 0.25 at the end of V1. The ventral stream is comprised of a single convolutional layer with a 1 × 1 kernel and five output feature maps. The dorsal stream has two dynamic filter layers with kernels of size 1 × 1 and 3 × 3, respectively, and five feature maps each. We used 100 hidden units in the RNN with orthogonal initialisation and Zoneout [21] with the probability set to 0.05. The system was trained via curriculum learning [2], by starting with sequences of length five and increasing the sequence length every 13 epochs, with the epoch length decreasing as the sequence length grows. We used the same optimisation settings, with the exception of the learning rate, which we set to $3.33 \times 10^{-6}$.

Figure 5: IoU curves on KITTI over 60 time-steps. HART (train) shows evaluation on the train set (we do not overfit).

Table 1: Average IoU on KITTI over 60 time-steps.

Method              Avg. IoU
Kahou et al. [16]   0.14
Spatial Att         0.60
App Att             0.78
HART                0.81

Table 1 and Figure 5 contain the results of different variants of our model and of the related RATM tracker by Kahou et al. [16]. Spatial Att does not use appearance attention, nor a loss on the attention parameters. App Att does not apply any loss on appearance attention, while HART uses all of the described modules; it is also our biggest model, with 1.8 million parameters. Qualitative results in the form of a video with bounding boxes and attention are available online (https://youtu.be/Vvkjm0FRGSs).

We implemented the RATM tracker of Kahou et al. [16] and trained it with the same hyperparameters as our framework, since the two are closely related. It failed to learn even with the initial curriculum of five time-steps, as RATM cannot integrate the frame $x_t$ into the estimate of $b_t$ (it predicts the location at the next time-step). Furthermore, it uses the feature-space distance between ground-truth and predicted attention glimpses as the error measure, which is insufficient on a dataset with rich backgrounds. It did better when we initialised its feature extractor with the weights of our trained model but, despite passing a few stages of the curriculum, it achieved very poor final performance.

6 Discussion

The experiments in the previous section show that it is possible to track real-world objects with a recurrent attentive tracker. While similar to the tracker by Kahou et al. [16], our approach uses additional building blocks, specifically: (i) a bounding-box regression loss, (ii) a loss on spatial attention, (iii) appearance attention with an additional loss term, and (iv) a combination of all of these in a unified approach. We now discuss the properties of these modules.

Spatial Attention Loss prevents Vanishing Gradients. Our early experiments suggest that using only the tracking loss causes an instance of the vanishing gradient problem. Early in training, the system is not able to estimate the object's motion correctly, leading to cases where the extracted glimpse does not contain the tracked object or contains only a part thereof. In such cases, the supervisory signal is only weakly correlated with the model's input, which prevents learning. Even when the object is contained within the glimpse, the gradient path from the loss function is rather long, since any teaching signal has to pass to the previous timestep through the feature extractor stage. Penalising the attention parameters directly seems to solve this issue.

(a) The model with the appearance attention loss (top) learns to focus on the tracked object, which prevents an ID swap when a pedestrian is occluded by another one (bottom). (b) Three examples of glimpses and location maps for models with and without the appearance loss (left to right). The attention loss forces the appearance attention to pick out only the tracked object, thereby suppressing distractors.

Figure 6: Glimpses and corresponding location maps for models trained with and without the appearance loss. The appearance loss encourages the model to learn a foreground/background segmentation of the input glimpse.

Is Appearance Attention Loss Necessary? Given enough data and sufficiently high model capacity, appearance attention should be able to filter out irrelevant input features before updating the working memory. In general, however, this behaviour can be achieved faster if the model is constrained to do so by an appropriate loss. Figure 6 shows examples of glimpses and the corresponding location maps for a model with and without the loss on appearance attention. In Figure 6a, the model with the loss on appearance attention is able to track a pedestrian even after it was occluded by another human. Figure 6b shows that, when not penalised, the location map might not be very object-specific and can miss the object entirely (right-most figure). By using the appearance attention loss, we not only improve results but also make the model more interpretable.

Spatial Attention Bias is Always Positive. To condition the system on the object's appearance and make it independent of the starting location, we translate the initial bounding box to attention parameters, to which we add a learnable bias, and create the hidden state of the LSTM from the corresponding visual features. In our experiments, this bias always converged to positive values, favouring an attention glimpse slightly larger than the object's bounding box. It suggests that, while discarding irrelevant features is desirable for object tracking, the system as a whole learns to trade off attention responsibility between the spatial and appearance-based attention modules.

7 Conclusion

Inspired by the cascaded attention mechanisms found in the human visual cortex, this work presented a neural attentive recurrent tracking architecture suited to the task of object tracking. Beyond the biological inspiration, the proposed approach has a desirable computational cost and increased interpretability due to location maps, which select features essential for tracking. Furthermore, by introducing a set of auxiliary losses we are able to scale to challenging real-world data, outperforming predecessor attempts and approaching state-of-the-art performance. Future research will look into extending the proposed approach to multi-object tracking since, unlike many single object trackers, the recurrent nature of the proposed tracker offers the ability to attend to each object in turn.

Acknowledgements

We would like to thank Oiwi Parker Jones and Martin Engelcke for discussions and valuable insights and Neil Dhir for his help with editing the paper. Additionally, we would like to acknowledge the support of the UK's Engineering and Physical Sciences Research Council (EPSRC) through the Programme Grant EP/M019918/1 and the Doctoral Training Award (DTA). The donation from Nvidia of the Titan Xp GPU used in this work is also gratefully acknowledged.

References

[1] A. Krizhevsky, I. Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
[2] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In ICML, New York, New York, USA, 2009. ACM Press.
[3] Brian Cheung. Neural attention for object tracking. In GPU Technol. Conf., 2016.
[4] Brian Cheung, Eric Weiss, and Bruno Olshausen. Emergence of foveal image sampling from learning to attend in visual scenes. ICLR, 2017.
[5] Peter Dayan and L. F. Abbott. Theoretical neuroscience: computational and mathematical modeling of neural systems. Massachusetts Institute of Technology Press, 2001.
[6] Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. NIPS, 2016.
[7] S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Koray Kavukcuoglu, and Geoffrey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. In NIPS, 2016.
[8] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. Int. J. Rob. Res., 32(11):1231–1237, Sep 2013.
[9] Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Overview of mini-batch gradient descent, 2012.
[10] Daniel Gordon, Ali Farhadi, and Dieter Fox. Re3: Real-time recurrent regression networks for object tracking. In arXiv:1705.06368, 2017.
[11] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, Oct 2016.
[12] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. ICML, 2015.
[13] David Held, Sebastian Thrun, and Silvio Savarese. Learning to track at 100 FPS with deep regression networks. In ECCV Workshops. Springer, 2016.
[14] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In NIPS, 2015.
[15] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In arXiv:1611.05397, 2016.
[16] Samira Ebrahimi Kahou, Vincent Michalski, and Roland Memisevic. RATM: Recurrent attentive tracking model. CVPR Workshops, 2017.
[17] Maximilian Karl, Maximilian Soelch, Justin Bayer, and Patrick van der Smagt. Deep variational Bayes filters: Unsupervised learning of state space models from raw data. In ICLR, 2017.
[18] Sabine Kastner and Leslie G. Ungerleider. Mechanisms of visual attention in the human cortex. Annu. Rev. Neurosci., 23(1):315–341, 2000.
[19] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. arXiv:1705.07115, May 2017.
[20] Matej Kristan, Jiri Matas, Aleš Leonardis, Michael Felsberg, Luka Čehovin, Gustavo Fernández, Tomáš Vojíř, Gustav Häger, Georg Nebehay, Roman Pflugfelder, Abhinav Gupta, Adel Bibi, Alan Lukežič, Alvaro Garcia-Martin, Amir Saffari, Philip H. S. Torr, Qiang Wang, Rafael Martin-Nieto, Rengarajan Pelapur, Richard Bowden, Chun Zhu, Stefan Becker, Stefan Duffner, Stephen L. Hicks, Stuart Golodetz, Sunglok Choi, Tianfu Wu, Thomas Mauthner, Tony Pridmore, Weiming Hu, Wolfgang Hübner, Xiaomeng Wang, Xin Li, Xinchu Shi, Xu Zhao, Xue Mei, Yao Shizeng, Yang Hua, Yang Li, Yang Lu, Yuezun Li, Zhaoyun Chen, Zehua Huang, Zhe Chen, Zhe Zhang, Zhenyu He, and Zhibin Hong. The Visual Object Tracking VOT2016 challenge results. In ECCV Workshops, 2016.
[21] David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, and Chris Pal. Zoneout: Regularizing RNNs by randomly preserving hidden activations. In ICLR, 2017.
[22] Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent models of visual attention. In NIPS, 2014.
[23] Guanghan Ning, Zhi Zhang, Chen Huang, Zhihai He, Xiaobo Ren, and Haohong Wang. Spatially supervised recurrent convolutional neural networks for visual object tracking. arXiv preprint arXiv:1607.05781, 2016.
[24] Christian Schuldt, Ivan Laptev, and Barbara Caputo. Recognizing human actions: A local SVM approach. In ICPR. IEEE, 2004.
[25] Marijn Stollenga, Jonathan Masci, Faustino Gomez, and Juergen Schmidhuber. Deep networks with internal selective attention through feedback connections. arXiv preprint, 2014.
[26] Jack Valmadre, Luca Bertinetto, João F. Henriques, Andrea Vedaldi, and Philip H. S. Torr. End-to-end representation learning for correlation filter based tracking. In CVPR, 2017.
[27] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a foreign language. In NIPS, 2015.
[28] Jiahui Yu, Yuning Jiang, Zhangyang Wang, Zhimin Cao, and Thomas Huang. UnitBox: An advanced object detection network. In Proc. 2016 ACM Multimedia Conf., pages 516–520. ACM, 2016.
6,520
6,899
Tomography of the London Underground: a Scalable Model for Origin-Destination Data

Nicolò Colombo, Department of Statistical Science, University College London, nicolo.colombo@ucl.ac.uk
Ricardo Silva, The Alan Turing Institute and Department of Statistical Science, University College London, ricardo.silva@ucl.ac.uk
Soong Kang, School of Management, University College London, s.kang@ucl.ac.uk

Abstract

The paper addresses the classical network tomography problem of inferring local traffic given origin-destination observations. Focusing on large, complex public transportation systems, we build a scalable model that exploits input-output information to estimate the unobserved link/station loads and the users' path preferences. Based on the reconstruction of the users' travel time distribution, the model is flexible enough to capture possibly different path-choice strategies and correlations between users travelling on similar paths at similar times. The corresponding likelihood function is intractable for medium or large-scale networks and we propose two distinct strategies, namely the exact maximum-likelihood inference of an approximate but tractable model and the variational inference of the original intractable model. As an application of our approach, we consider the emblematic case of the London underground network, where a tap-in/tap-out system tracks the starting/exit time and location of all journeys in a day. A set of synthetic simulations and real data provided by Transport for London are used to validate and test the model on the predictions of observable and unobservable quantities.

1 Introduction

In the last decades, networks have been playing an increasingly important role in our everyday lives [1, 2, 3, 4, 5, 6]. Most of the time, networks cannot be inspected directly and their properties must be reconstructed from end-point or partial and local observations [7, 8]. The problem has been referred to as network "tomography", a medical word denoting clinical techniques that produce detailed images of the interior of the body from external signals [9, 10]. Nowadays the concept of tomography has gained wider meaning and the idea applies, in different forms, to many kinds of communication and transportation networks [11, 12, 13]. In particular, as the availability of huge amounts of data has grown exponentially, network tomography has become an important branch of statistical modelling [14, 15, 16, 17, 8]. However, due to the complexity of the task, existing methods are usually designed only for small networks and become intractable for most real-world applications (see [7, 18] for a discussion on this point). The case of large public transportation networks has attracted special attention since massive datasets of input-output single-user data have been produced by the tap-in and tap-out systems installed in big cities such as London, Singapore and Beijing [19, 20, 18, 21].

Depending on the available measurements, two complementary formulations of network tomography have been considered: (i) the reconstruction of origin-destination distributions from local and partial traffic observations [11, 14, 9, 15, 16] and (ii) the estimation of the link and node loads from input-output information [22, 23, 24]. In practice, knowledge of the unobserved quantities may help design structural improvements of the network or be used to predict the system's behaviour in case of disruptions [25, 26, 13, 27, 28].
Focusing on the second (also referred to as "dual") formulation of the tomography problem, this paper addresses the challenging case where both the amount of data and the size of the network are large. When only aggregated data are observable, traffic flows over a given network can also be analysed by methods such as collective graphical models for diffusion dynamics [29, 30].

An important real-world application of dual network tomography is reconstructing the traffic of bits sent from a source node to a destination node in a network of servers, terminals and routers. The usual assumption in those cases is a tree-structured network, and models infer the bit trajectories from a series of local delays, i.e. loss functions defined at each location in the network [22, 23, 24]. The posterior of the travel time distribution at each intermediate position along the path is then used to reconstruct the unobserved local loads, i.e. the number of packets at a given node and time.

We extend and apply this general idea to urban public transportation systems. The traffic to be estimated is the flow of people travelling across the system during a day, i.e. the number of people at a given location and time (station/link load). The nodes of the network are (> 100) underground stations, connected via (≈ 10) partially overlapping underground "lines", which can be viewed as interacting "layers" of connectivity [31]. The observations are single-user records with information about the origin, destination, starting time and exit time of each journey. Two key unobserved quantities to be estimated are (i) the users' path preferences for a given origin-destination pair [32, 28] and (ii) the station/link loads [33, 34, 35]. Put together, a model for the users' path preferences and a precise estimation of the local train loads can help detect network anomalies or predict the behaviour of the system in case of previously unobserved disruptions [18, 27, 21].

Compared with the classical communication-network case, modelling a complex transportation system requires three challenging extensions: (i) the network structure is a multi-layer (loopy) network, where users are allowed to "change line" at those nodes that are shared by different layers; (ii) the user's choice between many feasible paths follows rules that can go far beyond simple length-related schemes; (iii) harder physical constraints (the train timetable, for example) may create high correlations between users travelling on the same path at similar times. Taking into account these peculiar features of transportation networks, while keeping the model scalable with respect to both the size of the network and the dataset, is the main contribution of this work.

Model outline. We represent the transportation system by a sparse graph, where each node is associated with an underground station and each edge with a physical connection between two stations. The full network is the sum of simple sub-graphs (lines) connected by sets of shared nodes (where the users are allowed to change line) [31]. For a given origin-destination pair, there may exist a finite number of possible simple (non-redundant) trajectories, corresponding to distinct line-change strategies. The unobserved user's choice is treated as a latent variable taking values over the set of all feasible paths between the origin and destination, as sketched in the code below. The corresponding probability distribution may depend on the length of the path, i.e. the number of nodes crossed by the path, or any other arbitrary feature of the path.
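To make the notion of the feasible-path set Π_od concrete, here is a brute-force Python sketch (not from the paper) that enumerates simple routes between two stations by depth-first search, up to a hop budget. A real implementation would instead perturb the shortest path, as Section 4.1 does; this version is only illustrative.

```python
def simple_paths(adj, o, d, max_len):
    """Enumerate simple (non-redundant) paths from o to d.

    adj: dict mapping a station to the list of its neighbouring stations.
    max_len: hop budget, keeping the enumeration finite and small.
    """
    out, stack = [], [(o, [o])]
    while stack:
        x, path = stack.pop()
        if x == d:
            out.append(path)
            continue
        if len(path) >= max_len:
            continue
        for y in adj[x]:
            if y not in path:        # keep the trajectory simple
                stack.append((y, path + [y]))
    return out
```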
the number of nodes crossed by the path, or any other arbitrary feature of the path. In our multi-layers setup, for example, it is natural to include a ?depth? parameter taking into account the number of layers visited, i.e. the number of lines changes. For any feasible path ? = [?1 , . . . , ?` ], the travel time at the intermediate stations is defined by the recursive relation t(?i ) ? t(?i?1 ) + Poisson(a(?i?1 , so + t(?i?1 ), ?)) i = 1, . . . , ` (1) where t(x) the is travel time at location x ? {?1 , . . . , ?` }, so is the starting time, a = a(x, so +t(x), ?) are local delays that depend on the location, x, the absolute time so + t(x) and the path ?. The choice of the Poisson distribution is convenient 1 in this framework due to its simple single-parameter form and the fact that t(x) is an integer in the dataset that motivates this work (travel time is recorded in minutes). The dependence on ? allows including global path-related features, such as, for example, an extra delay associated to each line change along the path or the time spent by the user while walking through the origin and destination stations. The dependence on so and t(x) is what ensures 1 Other options include negative binomial and shifted geometric distributions 2 the scalability of the model because all users can be treated independently given their starting time. The likelihood associated with all journeys in a day has a factorised form (1) (N ) (N ) p(td , . . . , td |s(1) o , . . . , so ) = N Y (n) p(td |s(n) o ) (2) n=1 (n) where td is the total travel time of the nth user and N the total number of users in a day and each (n) (n) p(td |so ) depends only locally on the model parameters, i.e. on the delay functions associated with the nodes crossed by the corresponding path. The drawback is that an exact computation of (2) is intractable and one needs approximate inference methods to identify the model parameters from the data. We address the inference problem in two complementary ways. The first one is a model-approximation method, where we perform the exact inference of the approximate (tractable) model t(?i ) ? t(?i?1 ) + Poisson(a(?i?1 , so + t?i?1 , ?)) i = 1, . . . , ` (3) where t?i?1 is a deterministic function of the model parameters that is defined by the difference equation t?i = t?i?1 + a(?i?1 , so + t?i?1 , ?) i = 1, . . . , ` (4) The second one is a variational inference approach where we maximise a lower bound of the intractable likelihood associated with (1). In both cases, we use stochastic gradient updates to solve iteratively the corresponding non-convex optimization. Since the closed form solution of (4) is in general not available, the gradients of the objective functions cannot be computed explicitly. At each iteration, they are obtained recursively from a set of difference equations derived from (4), following a scheme that can be seen as a simple version of the back-propagation method used to train neural networks. Finally, we initialize the iterative algorithms by means of a method of moments estimation of the time-independent part of the delay functions. Choosing a random distribution over the feasible paths, this is obtained from the empirical moments of the travel time distribution (of the approximate model (10)) by solving a convex optimization problem. London underground experiments The predictive power of our model is tested via a series of synthetic and real-world experiments based on the London underground network. 
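To make the recursion concrete, the following is a minimal Python sketch (not from the paper) that samples travel times from (1) and computes the deterministic times t̄ of (4). The delay function `a` is a hypothetical placeholder: its parametric form is only fixed later, in Section 4.1.

```python
import numpy as np

def a(x, s, path):
    # Hypothetical local delay (in minutes) at node x when reached at
    # absolute time s: a constant plus a mild daily modulation.
    return 2.0 + 0.5 * np.cos(2 * np.pi * s / 1440.0)

def sample_travel_time(path, s_o, rng):
    """Draw one total travel time along `path` via the nested Poisson model (1)."""
    t = 0  # travel time accumulated so far (integer minutes)
    for i in range(1, len(path)):
        t += rng.poisson(a(path[i - 1], s_o + t, path))
    return t

def deterministic_travel_time(path, s_o):
    """The tractable recursion (4): replace each random delay by its mean."""
    t_bar = 0.0
    for i in range(1, len(path)):
        t_bar += a(path[i - 1], s_o + t_bar, path)
    return t_bar

rng = np.random.default_rng(0)
path = ["A", "B", "C", "D"]                    # a feasible station sequence
samples = [sample_travel_time(path, 480, rng) for _ in range(10000)]
print(np.mean(samples), deterministic_travel_time(path, 480))
```

Under a nearly time-constant delay function, as here, the sample mean and the deterministic recursion agree closely, which is what makes the approximation of Section 3.1 plausible.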
London underground experiments. The predictive power of our model is tested via a series of synthetic and real-world experiments based on the London underground network. All details of the multi-layer structure of the network can be found in [36]. In the training step we use input-output data that contain the origin, the destination, the starting time and the exit time of each (pseudonymised) user of the system. This kind of data is produced nowadays by tap-in/tap-out smart-card systems such as the Oyster Card system in London [19]. The trained models can then be used to predict the unobserved number of people travelling through a given station at a given time of day, as well as the user's path preferences for a given origin-destination pair. In the synthetic experiments, we compared the model estimates with the values produced by the "ground truth" (a set of random parameters used to generate the synthetic data) and tested the performance of the two proposed inference methods. In the real-world experiment, we used original pseudonymised data provided by Transport for London. The dataset consisted of more than 500,000 origin-destination records, from journeys realised in a single day on the busiest part of the London underground network (Zones 1 and 2, see [36]), and a subset of NetMIS records [37] from the same day. NetMIS data contain real-time information about the trains transiting through a given station and, for a handful of major underground stations (all of them on the Victoria line), include a quantitative estimate of the real-time train weights. The latter can be interpreted as a proxy of the real-time (unobserved) number of people travelling through the corresponding nodes of the network and used to evaluate the model's predictions quantitatively. The model has also been tested on an out-of-sample Oyster-card dataset by comparing expected and observed travel times between a selection of station pairs. Unfortunately, we are not aware of any existing algorithm that would be applicable for a fair comparison in similar settings.

2 Travel time model

Let o, d and s_o be the origin, the destination and the starting time of a user travelling through the system. Let Π_od be the set of all feasible paths between o and d. Then the probability of observing a travel time t_d is a mixture of probability distributions

p(t_d) = Σ_{π ∈ Π_od} p_path(π) p(t_d | π),   p_path(π) = e^{−L(π)} / Σ_{σ ∈ Π_od} e^{−L(σ)},   (5)

where the conditional p(t_d | π) can be interpreted as the travel time probability over a particular path, p_path(π) is the probability of choosing that particular path and L(π) is some arbitrary "effective length" of the path π. According to (1), the conditionals p(t_d | π) are complicated convolutions of Poisson distributions. An equivalent but more intuitive formulation is

t_d = Σ_{i=2}^{ℓ(π)} r_i,   r_i ∼ Poisson(a(π_{i−1}, s_o + Σ_{k=2}^{i−1} r_k, π)),   π ∼ P_path(L(π)),   (6)

where the travel time t_d is explicitly expressed as the sum of the local delays, r_i = t(π_i) − t(π_{i−1}), along a feasible path π ∈ Π_od. Since the time at the intermediate positions, i.e. t(π_i) for i ≠ 1, ℓ, is not observed, the local delays r_2, …, r_{ℓ(π)} are treated as hidden variables. Letting ℓ̄ = max_{π ∈ Π_od} ℓ(π), the complete likelihood is

p(r_1, …, r_ℓ̄, π) = p(r_1, …, r_ℓ̄ | π) p_path(π),   p(r_1, …, r_ℓ̄ | π) = ∏_{i=1}^{ℓ̄} e^{−λ_i} λ_i^{r_i} / r_i!,   (7)

where λ_i = a(π_{i−1}, s_o + Σ_{k=2}^{i−1} r_k, π) if i ≤ ℓ(π) and λ_i = 0 if i > ℓ(π). Marginalizing over all hidden variables one obtains the explicit form of the conditional distributions in the mixture (5), i.e.

p(t_d | π) = Σ_{r_2=0}^{∞} ⋯ Σ_{r_ℓ̄=0}^{∞} δ(t_d − Σ_{i=2}^{ℓ̄} r_i) ∏_{i=2}^{ℓ̄} e^{−λ_i} λ_i^{r_i} / r_i!.   (8)

Since λ_i = λ_i(r_{i−1}, …, r_2) for each i = 2, …, ℓ, the evaluation of each conditional probability requires performing an (ℓ − 1)-dimensional infinite sum, which is numerically intractable and makes an exact maximum-likelihood approach infeasible. (An exact evaluation of the moments

⟨t_d^n⟩ = Σ_{t=0}^{∞} t^n p(t) = Σ_{π ∈ Π_od} p_path(π) Σ_{r_2=0}^{∞} ⋯ Σ_{r_ℓ̄=0}^{∞} (Σ_{i=2}^{ℓ̄} r_i)^n ∏_{i=2}^{ℓ̄} e^{−λ_i} λ_i^{r_i} / r_i!   (9)

is also intractable.)
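As an illustration of the path-choice term in (5), the sketch below evaluates the softmax weights p_path(π) for a hypothetical pair of feasible paths. The effective length used here anticipates the linear form L(π) = β_1 ℓ(π) + β_2 c(π) adopted in Section 4.1; the parameter values β_1, β_2 are placeholders, not fitted values.

```python
import numpy as np

def effective_length(path, n_changes, beta1=0.3, beta2=1.0):
    # L(pi) = beta1 * length + beta2 * number of line changes (Section 4.1)
    return beta1 * len(path) + beta2 * n_changes

def path_probabilities(paths):
    """Softmax over negated effective lengths, Eq. (5).

    paths: list of (node_sequence, number_of_line_changes) pairs."""
    L = np.array([effective_length(p, c) for p, c in paths])
    w = np.exp(-(L - L.min()))       # subtract the min for numerical stability
    return w / w.sum()

paths = [(["A", "B", "C", "D"], 0),          # direct route
         (["A", "B", "E", "C", "D"], 1)]     # longer route with one change
print(path_probabilities(paths))             # mixture weights p_path in Eq. (5)
```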
3 Inference

An exact maximum-likelihood estimation of the model parameters in a(x, s, π) and L(π) is infeasible due to the intractability of the evidence (8). One possibility is to use a Monte Carlo approximation of the exact evidence (8) by sampling from the nested Poisson distributions. In this section we propose two alternative methods that do not require sampling from the target distribution. The first method is based on the exact inference of an approximate but tractable model, which depends on the same parameters as the original one (the "reference" model (6)) but is such that the local delays become independent given the path and the starting time. The second approach consists of an approximate variational inference of (6), with the variational posterior distribution defined in terms of the deterministic model (4).

3.1 Exact inference of an approximate model

We consider the approximation of the reference model (6) defined by

t_d = Σ_{i=2}^{ℓ(π)} r_i,   r_i ∼ Poisson(a(π_{i−1}, s_o + t̄_{i−1}, π)),   π ∼ P_path(L(π)),   (10)

where the t̄_i are obtained recursively from (4). In this case the ℓ(π) − 1 local delays r_i are decoupled and the complete likelihood is given by

p(r_1, …, r_ℓ̄, π) = p(r_1, …, r_ℓ̄ | π) p_path(π),   p(r_1, …, r_ℓ̄ | π) = ∏_{i=1}^{ℓ̄} e^{−λ̄_i} λ̄_i^{r_i} / r_i!,   (11)

where λ̄_i = a(π_{i−1}, s_o + t̄_{i−1}(π), π) if i ≤ ℓ(π) and λ̄_i = 0 if i > ℓ(π). Noting that t_d is then a sum of independent Poisson random variables, we have

p(t_d) = Σ_{π ∈ Π_od} p_path(π) Σ_{r_2=0}^{t_d} ⋯ Σ_{r_ℓ̄=0}^{t_d} δ(t_d − Σ_{i=2}^{ℓ̄} r_i) ∏_{i=2}^{ℓ̄} e^{−λ̄_i} λ̄_i^{r_i} / r_i! = Σ_{π ∈ Π_od} p_path(π) e^{−t̄_ℓ̄} t̄_ℓ̄^{t_d} / t_d!,   (12)

where we have used Σ_{i=2}^{ℓ̄} λ̄_i = t̄_ℓ̄. The parameters in the model functions a and L can then be identified with the solution of the following non-convex maximization problem:

max_{a,L} Σ_{o=1}^{D} Σ_{d=1}^{D} Σ_{s_o=0}^{T−1} Σ_{s_d=s_o}^{T} N(o, d, s_o, s_d) log p(s_d − s_o),   (13)

where N(o, d, s_o, s_d) is the number of users travelling from o to d with entry and exit times s_o and s_d respectively.
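The tractable evidence (12) is just a mixture of Poisson pmfs, one per feasible path, so it can be coded in a few lines. The sketch below assumes SciPy is available for the Poisson pmf; the record and mixture layouts are our simplifying assumptions, not the paper's data structures.

```python
import numpy as np
from scipy.stats import poisson   # assumes SciPy is available

def approx_evidence(t_d, path_probs, means):
    """Eq. (12): mixture of Poisson pmfs over the feasible paths.

    path_probs[k] is p_path of the k-th path (the softmax in Eq. (5));
    means[k] is its deterministic arrival time t_bar_ell from Eq. (4)."""
    return sum(q * poisson.pmf(t_d, mu) for q, mu in zip(path_probs, means))

def neg_log_likelihood(records, mixtures):
    """Negated objective (13). records: (o, d, s_o, s_d) journey tuples;
    mixtures maps (o, d, s_o) to a (path_probs, means) pair."""
    return -sum(
        np.log(approx_evidence(s_d - s_o, *mixtures[(o, d, s_o)]) + 1e-300)
        for o, d, s_o, s_d in records)
```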
3.2 Variational inference of the original model

We define the approximate posterior distribution

q(r, π) = q(r | π) q_path(π),   q(r | π) = p_multi(r; t_d, η),   q_path(π) = e^{−L̃(π, t_d)} / Σ_{σ ∈ Π_od} e^{−L̃(σ, t_d)},   (14)

where we have defined r = [r_2, …, r_ℓ̄] and η_i = (t̄_i − t̄_{i−1}) / t̄_ℓ̄, with t̄_i = t̄_{i−1} for all ℓ(π) < i ≤ ℓ̄, and p_multi(r; t_d, η) = δ(t_d − Σ_{i=2}^{ℓ̄} r_i) t_d! ∏_{i=2}^{ℓ̄} η_i^{r_i} / r_i!. The function L̃(π, t_d) depends on the path π and the observed travel time t_d. Except for the corrected length L̃(π, t_d), the variational distribution (14) shares the same parameters over all data points and can be used directly to evaluate the likelihood lower bound (ELBO) L = E_q(log p(t_d)) − E_q(log q); similar "amortised" approaches have been used elsewhere to make approximate inference scalable [38, 39]. One has

L(o, d, s_o, t_d) = −log t_d! + Σ_{π ∈ Π_od} q_path(π) log (p_path(π) / q_path(π)) + Σ_{π ∈ Π_od} q_path(π) Σ_{i=2}^{ℓ̄} L_i(π),

L_i(π) = Σ_{r_2=0}^{t_d} ⋯ Σ_{r_ℓ̄=0}^{t_d} p_multi(r; t_d, η) (−λ_i + r_i log (λ_i / η_i)),   (15)

with λ_i = a(π_{i−1}, s_o + Σ_{k=2}^{i−1} r_k) and η_i = a(π_{i−1}, s_o + t̄_{i−1}) / t̄_ℓ̄ if i ≤ ℓ(π), and λ_i = 0 = η_i if i > ℓ(π). The exact evaluation of each L_i(π) is still intractable due to the multi-dimensional sum. However, since for any π and i = 2, …, ℓ, λ_i depends only on the "previous" delays, we can define

η_past = t̄_{i−1} / t̄_ℓ̄,   η_future = (t̄_ℓ̄ − t̄_i) / t̄_ℓ̄,   λ_i = a(π_{i−1}, s_o + r_past),   (16)

where r_past = r_2 + ⋯ + r_{i−1} and r_future = r_{i+1} + ⋯ + r_ℓ̄, and by the grouping property of the multinomial distribution we obtain

L_i(π) = Σ_{r_i=0}^{t_d} Σ_{r_future=0}^{t_d} p_multi(r^{(i)}; t_d, η^{(i)}) (−λ_i + r_i log (λ_i / η_i)),   (17)

where r^{(i)} = [r_past, r_i, r_future] and η^{(i)} = [η_past, η_i, η_future]. Every L_i(π) can now be computed in O(t_d^3) operations, and the model parameters are identified with the solution of the following non-convex optimization problem:

max_{a,L,L̃} Σ_{o=1}^{D} Σ_{d=1}^{D} Σ_{s_o=0}^{T−1} Σ_{s_d=s_o}^{T} N(o, d, s_o, s_d) L(o, d, s_o, s_d − s_o).   (18)

[Figure 1: On the left, stochastic iterative solution of (18) (VI) and (13) (ML) for the synthetic dataset; at each iteration the prediction error is measured on a small out-of-sample dataset (prediction error vs. log runtime). On the right, distance from the ground truth of the uniform distribution (x-axis) and of the models' path probabilities (y-axis) for various origin-destination pairs; the legend reports the total distance from the ground truth.]

Stochastic gradient descent. Both (13) and (18) consist of O(D^2 T^2) terms, and the estimation of the exact gradient at each iteration can be expensive for large networks (D ≫ 1) or fine time resolutions (T ≫ 1). A common practice in this case is to use a stochastic approximation of the gradient, where only a random selection of origin-destination pairs and starting times is used. Note that each L(o, d, s_o, t_d) depends on a(x, s, π) only if location x is crossed by at least one of the feasible paths between o and d.

Initialization. The analytic form of the first moment of (12), ⟨t_d⟩_{s_o} = Σ_{t_d=1}^{∞} t_d p(t_d) = Σ_{π ∈ Π_od} p_path(π) t̄_{ℓ(π)}, can be used to obtain a partial initialization of the iterative algorithms via a simple moment-matching technique. We assume that, averaging over all possible starting times, the system behaves like a simple communication network with constant delays at each node or, equivalently, that a(x, s, π) = θ(x) + V(x, s, π) with Σ_{s=0}^{T} V(x, s, π) = 0. In this case an initialization of θ(x) is obtained by solving

min_θ Σ_{o=1}^{D} Σ_{d=1}^{D} (t_od − Σ_{π ∈ Π_od} p_path(π) Σ_{k=1}^{ℓ(π)−1} θ(π_k))^2,   (19)

where t_od = (1/Z) Σ_{s_o=0}^{T−1} Σ_{s_d=s_o}^{T} N(o, d, s_o, s_d)(s_d − s_o), with Z = Σ_{s_o=0}^{T−1} Σ_{s_d=s_o}^{T} N(o, d, s_o, s_d), is the "averaged" empirical moment computed from the data. Note that (19) is convex for any fixed choice of p_path(π).

Total derivatives. All terms in (13) and (18) are of the form g = g(θ, t̄_i), where θ denotes the model parameters and t̄_i = t̄_i(θ) is defined by the difference equation (4). Since t̄_i is not available as an explicit function of θ, it is not possible to write g = g(θ) or compute its gradient ∇_θ g directly. A way out is to compute the total derivative of g with respect to θ, i.e.

dg(θ, t̄_i)/dθ = ∂g(θ, t̄_i)/∂θ + (∂g(θ, t̄_i)/∂t̄_i) (dt̄_i/dθ),   (20)

where dt̄_i/dθ, for i = 1, …, ℓ, can be obtained from the iterative integration of

dt̄_i/dθ = dt̄_{i−1}/dθ + ∂a(x, s, θ)/∂θ + (∂a(x, s, θ)/∂s)|_{s=t̄_{i−1}} · dt̄_{i−1}/dθ,   i = 1, …, ℓ,   (21)

which is implied by (4).
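The forward recursion (21) can be coded directly. The sketch below propagates dt̄_i/dθ alongside t̄_i, assuming the delay function and its partial derivatives with respect to the parameters and to time are supplied as callables (their names are ours, not the paper's).

```python
import numpy as np

def travel_time_with_gradient(path, s_o, theta, a, da_dtheta, da_ds):
    """Compute t_bar_ell from Eq. (4) and its sensitivity from Eq. (21).

    a(x, s, theta)        -> scalar delay
    da_dtheta(x, s, theta)-> array, partial of a w.r.t. the parameter vector
    da_ds(x, s, theta)    -> scalar, partial of a w.r.t. the time argument
    """
    t_bar, dt_bar = 0.0, np.zeros_like(theta)
    for i in range(1, len(path)):
        x, s = path[i - 1], s_o + t_bar
        # Eq. (21): chain rule through both the explicit parameter
        # dependence and the time argument s = s_o + t_bar_{i-1}.
        dt_bar = dt_bar + da_dtheta(x, s, theta) + da_ds(x, s, theta) * dt_bar
        t_bar = t_bar + a(x, s, theta)
    return t_bar, dt_bar   # plug dt_bar into Eq. (20) for any objective g
```

Note the update order: dt_bar must be advanced with the previous t_bar still in scope, mirroring the dependence of (21) on dt̄_{i−1}/dθ and t̄_{i−1}.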
4 Experiments

The method described in the previous sections is completely general and, except for the initialization step, no special form of the model functions is assumed. In order to capture a few key features of a large transportation system and apply the model to the tomography of the London underground, we have chosen the specific parametrization of the functions L(π) and a(x, s, π) given in Section 4.1. The parametrised model has then been trained and tested on a series of synthetic and real-world datasets, as described in Section 4.2.

[Figure 2: On the left, travel times predicted by the VI model (in blue) and the ML model (in red) of Figure 1 and by the ground-truth model (in green), plotted against the starting time for a selection of origin-destination pairs (Kings Cross LU to Oxford Circus, Oxford Circus to Waterloo LU, Waterloo LU to Paddington LU, Paddington LU to Kings Cross LU); the legend reports the normalised total distance ||v_exp − v_true|| / ||v_true|| between the model's and the ground truth's predictions. On the right, station loads predicted by the ground truth (in green), the VI model (in blue) and the ML model (in red). The three models and a reduced dataset of N = 10000 true origin, destination and starting-time records were used to simulate the trajectories of N synthetic users; for each model, the simulated trajectories give the users' expected positions at all times (the position is set to 0 if the user is not yet in the system or has finished the journey), from which the total number of people at a given station at a given time is computed. The reported score is the total distance between the model's and the ground truth's normalised predictions, where for station x the normalised load vector is v_x / 1^T v_x, with v_x(s) the number of people at station x at time s.]

4.1 Parametrization

For each origin o and destination d, we have reduced the set of all feasible paths Π_od to a small set including the shortest path and a few perturbations of it (obtained by forcing different choices at the line-change points). Let C(π) ∈ {0, 1}^ℓ be such that C(π_i) = 1 if the user changes line at π_i and zero otherwise. To parametrize the path probability (5) we chose L(π) = β_1 ℓ(π) + β_2 c(π), where ℓ(π) = |π|, c(π) = Σ_i C(π_i) and β_1, β_2 ∈ R are free parameters. The posterior-corrected effective length L̃(π, t_d) in (14) was defined as

L̃(π) = β̃_ℓ ℓ(π) + β̃_c c(π),   β̃_i = β_{i1} + β_{i2} u + β_{i3} u^{−1},   u = t̂_d^{−2} (t_d − t̂_d),   i = ℓ, c,   (22)
where t_d is the observed travel time, t̂_d = Σ_{o,d,s_o,s_d} N(o, d, s_o, s_d)(s_d − s_o), and β_{ij} ∈ R, i = ℓ, c and j = 1, 2, 3, are extra free parameters. A regularization term λ(‖β‖^2 + Σ_{i=ℓ,c} ‖β̃_i‖^2), with λ = 1/80, was added to help the convergence of the stochastic algorithm. We let the local time-dependent delay at location x and time s be

a(x, s, π) = softplus(θ(x) + V(x, s) + W(x, π)),

V = Σ_{i=1}^{N_ω} Σ_{j=1}^{N_φ} c_{ij}(x) cos(ω_i s + φ_j),   W = Σ_{i=1}^{ℓ} γ(x) δ_{x,π_i} C(π_i) + ρ(x)(δ_{x,π_1} + δ_{x,π_ℓ}),   (23)

where θ(x), γ(x), ρ(x) ∈ R and c(x) ∈ R^{N_ω × N_φ} are free parameters and {ω_1, …, ω_{N_ω}} and {φ_1, …, φ_{N_φ}} are two sets of library frequencies and phases. In the synthetic simulations, we restricted the London underground network [36] to Zone 1 (63 stations), chose N_ω = 5 = N_φ and set W = 0. For the real-data experiments we considered Zones 1 and 2 (131 stations), N_ω = 10, N_φ = 5 and W ≠ 0.
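A sketch of the parametrised delay (23) follows. The dictionary-based parameter layout and the placeholder frequency/phase libraries are our assumptions, not part of the paper; only the functional form (static station term, truncated Fourier expansion in absolute time, and path-dependent line-change and entry/exit terms inside a softplus) comes from (23).

```python
import numpy as np

def softplus(z):
    # Numerically stable log(1 + exp(z))
    return np.log1p(np.exp(-abs(z))) + max(z, 0.0)

def delay(x, s, path, changes, params, omegas, phis):
    """Eq. (23) with a hypothetical parameter layout.

    params: dict with per-station entries
      "theta"[x] (float), "c"[x] (N_omega x N_phi array),
      "gamma"[x], "rho"[x] (floats).
    changes: set of stations where this path changes line.
    """
    theta, c, gamma, rho = (params[k] for k in ("theta", "c", "gamma", "rho"))
    V = sum(c[x][i, j] * np.cos(omegas[i] * s + phis[j])
            for i in range(len(omegas)) for j in range(len(phis)))
    W = gamma[x] * (x in changes) + rho[x] * (x in (path[0], path[-1]))
    return softplus(theta[x] + V + W)
```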
[Figure 3: Travel times predicted by a random model (top), by the initialization model (middle) obtained from (19), and by the ML model (bottom), scattered against the observed travel times of an out-of-sample test dataset (real data). The plots in the first three columns show the prediction error of each model on three subsets of the test sample, S_short, S_medium and S_long, consisting respectively of short, medium-length and long journeys; the plots in the last column show the prediction error on the whole test dataset S_all = S_short + S_medium + S_long. The reported score is the relative prediction error ||v_exp − v_true|| / ||v_true||, with v_exp(n) and v_true(n) the expected and observed travel times for the n-th journey in S_i, i ∈ {short, medium, long, all}.]

[Figure 4: Station loads obtained from NetMIS data (in blue) and predicted by the model (in red) for a selection of stations (Euston LU, Finsbury Park, Green Park, Kings Cross LU, Oxford Circus, Stockwell, Victoria LU, Warren Street, Pimlico, Vauxhall LU). NetMIS data contain information about the time period during which a train was at a station and an approximate weight score of the train; at time s, a proxy of the load at a given station is obtained by summing the scores of all trains present at that station at time s. To make the weight scores and the model predictions comparable, both quantities were divided by the area under the corresponding curves (proportional to the number of people travelling through the selected stations during the day). The reported score is the relative prediction error ||v_exp − v_true|| / ||v_true||, with v_exp(s) the (normalised) expected number of people at the station at time s and v_true(s) the (normalised) weight score obtained from the NetMIS data.]

4.2 Methods and discussion

Synthetic and real-world numerical experiments have been performed in order to: (i) understand how reliable the proposed approximation method is compared to a more standard approach (variational inference), (ii) provide quantitative tests of our inference algorithm on the prediction of key unobservable quantities from a ground-truth model and (iii) assess the scalability and applicability of our method by modelling the traffic of a large-scale real-world system. Both the synthetic and the real-world experiments were based on the London underground network [36]. Synthetic data were generated from the true origins, destinations and starting times by simulating the trajectories with the ground-truth (random) model described in Section 4.1. On this dataset, we compared the training performance of the variational inference and maximum-likelihood approaches by measuring the prediction error on an out-of-sample dataset at each stochastic iteration (Figure 1, left). The two trained models were then tested against the ground truth on predicting (i) the total travel time (Figure 2, left), (ii) the shape of the users' path preferences (Figure 1, right) and (iii) the local loads (Figure 2, right).
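For the load evaluation used in Figures 2 and 4, the counting procedure can be sketched as follows: simulated journeys are turned into per-station, per-minute occupancy counts and normalised by their area, as the captions describe. Here `sample_trajectory` is a hypothetical helper standing in for a draw of (station, entry time, exit time) triples from the fitted model.

```python
import numpy as np

def station_loads(records, sample_trajectory, n_stations, horizon=1440):
    """Per-station, per-minute occupancy counts from simulated journeys.

    records: (origin, destination, s_o) tuples;
    sample_trajectory(o, d, s_o): iterable of (station, t_in, t_out)."""
    load = np.zeros((n_stations, horizon))
    for o, d, s_o in records:
        for station, t_in, t_out in sample_trajectory(o, d, s_o):
            lo, hi = int(t_in), min(int(t_out) + 1, horizon)
            load[station, lo:hi] += 1
    # Normalise each station's profile by its area, as in Figure 4
    area = load.sum(axis=1, keepdims=True)
    return np.divide(load, area, out=np.zeros_like(load), where=area > 0)
```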
In the real-world experiments, we trained the model on a dataset of smart-card origin-destination data (pseudonymised Oyster Card records from 21st October 2013, provided by Transport for London; the data shown in Figures 3 and 4 are not publicly available, but a reduced database containing similar records can be downloaded from [19]) and then tested the prediction of the total travel time on a small out-of-sample set of journeys (Figure 3). In this case we compared the model predictions with an indirect estimate obtained from NetMIS records of the same day (Figure 4). NetMIS data contain a partial reconstruction of the actual positions and weights of the trains, and it is possible to combine them to estimate the load of a given station at any given time of day. Since full train information was recorded only on one of the 11 underground lines of the network (the Victoria line), we restricted the comparison to a small set of stations.

The two inference methods (VI for (18) and ML for (13)) obtained good and statistically similar scores on recovering the ground-truth model predictions (Figure 2). ML was trained orders of magnitude faster and was almost as accurate as VI at reproducing the users' path preferences (see Figure 1). Since the performance of ML and VI proved statistically equivalent, only ML was used in the real-data experiments. On the prediction of out-of-sample travel times, ML outperformed both a random model and the constant model used for the initialization (a(x, s, π) = θ(x), with θ(x) obtained from (19) with uniform p_path). In particular, when all journeys in the test dataset are considered, ML outperforms the baseline method with a 24% improvement. The only sub-case where ML does worse (8% less accurate) is the small subset of long journeys (see Figure 3). These are journeys where (i) something unusual happens to the user or (ii) the user visits a lot of stations. In the latter case, a constant-delay model (such as our initialization model) may perform well, because we can expect some averaging over the time variability of all visited stations. Figure 4 shows that ML was able to reproduce the shape and relative magnitude of the "true" time distributions obtained from the NetMIS data. For a more quantitative comparison, we computed the normalised distance (reported at the top of the red plots in Figure 4) between the observed and predicted loads over the day.

5 Conclusions

We have proposed a new scalable method for the tomography of large-scale networks from input-output data. Based on the prediction of the users' travel times, the model allows an estimation of the unobserved path preferences and station loads. Since the original model is intractable, we have proposed and compared two different approximate inference schemes. The model has been tested on both synthetic and real data from the London underground. On synthetic data, we trained two distinct models with the proposed approximate inference techniques and compared their performance against the ground truth; both could successfully reproduce the outputs of the ground truth on observable and unobservable quantities. Trained on real data via stochastic gradient descent, the model outperforms a simple constant-delay model on predicting out-of-sample travel times and produces reasonable estimates of the unobserved station loads. In general, the training step could be made more efficient by a careful design of the mini-batches used in the stochastic optimization.
More precisely, since each term in (13) or (18) involves only a very restricted set of parameters (depending on the set of feasible paths between the corresponding origin and destination), the inference could be radically improved by stratified sampling techniques, as described for example in [40, 41, 42].

Acknowledgments

We thank Transport for London for kindly providing access to data. This work has been funded by EPSRC grant EP/N020723/1. RS also acknowledges support by The Alan Turing Institute under the EPSRC grant EP/N510129/1 and the Alan Turing Institute-Lloyd's Register Foundation programme on Data-Centric Engineering.

References

[1] Everett M. Rogers and D. Lawrence Kincaid. Communication networks: toward a new paradigm for research. 1981.
[2] Stanley Wasserman and Katherine Faust. Social network analysis: Methods and applications, volume 8. Cambridge University Press, 1994.
[3] Michael G. H. Bell and Yasunori Iida. Transportation network analysis. 1997.
[4] Mark E. J. Newman. The structure and function of complex networks. SIAM Review, 45(2):167-256, 2003.
[5] Mark Newman, Albert-László Barabási, and Duncan J. Watts. The structure and dynamics of networks. Princeton University Press, 2011.
[6] Nicholas A. Christakis and James H. Fowler. Social contagion theory: examining dynamic social networks and human behavior. Statistics in Medicine, 32(4):556-577, 2013.
[7] Mark Coates, Alfred Hero, Robert Nowak, and Bin Yu. Large scale inference and tomography for network monitoring and diagnosis. IEEE Signal Processing Magazine, 2001.
[8] Edoardo M. Airoldi and Alexander W. Blocker. Estimating latent processes on a network from indirect measurements. Journal of the American Statistical Association, 108(501):149-164, 2013.
[9] Yehuda Vardi. Network tomography: Estimating source-destination traffic intensities from link data. Journal of the American Statistical Association, 91(433):365-377, 1996.
[10] Rui Castro, Mark Coates, Gang Liang, Robert Nowak, and Bin Yu. Network tomography: Recent developments. Statistical Science, pages 499-517, 2004.
[11] Luis G. Willumsen. Estimation of an OD matrix from traffic counts: a review. 1978.
[12] Nathan Eagle, Alex Sandy Pentland, and David Lazer. Inferring friendship network structure by using mobile phone data. Proceedings of the National Academy of Sciences, 106(36):15274-15278, 2009.
[13] Yu Zheng, Licia Capra, Ouri Wolfson, and Hai Yang. Urban computing: concepts, methodologies, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 5(3):38, 2014.
[14] Robert J. Vanderbei and James Iannone. An EM approach to OD matrix estimation. Technical Report SOR 94-04, Princeton University, 1994.
[15] Claudia Tebaldi and Mike West. Bayesian inference on network traffic using link count data. Journal of the American Statistical Association, 93(442):557-573, 1998.
[16] Jin Cao, Drew Davis, Scott Vander Wiel, and Bin Yu. Time-varying network tomography: router link data. Journal of the American Statistical Association, 95(452):1063-1075, 2000.
[17] Yolanda Tsang, Mark Coates, and Robert Nowak. Nonparametric internet tomography. In Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on, volume 2, pages II-2045. IEEE, 2002.
[18] Ricardo Silva, Soong Moon Kang, and Edoardo M. Airoldi.
Predicting traffic volumes and estimating the effects of shocks in massive transportation systems. Proceedings of the National Academy of Sciences, 112(18):5643-5648, 2015.
[19] Transport for London. Official website. https://tfl.gov.uk/.
[20] Camille Roth, Soong Moon Kang, Michael Batty, and Marc Barthélemy. Structure of urban movements: polycentric activity and entangled hierarchical flows. PLoS ONE, 6(1):e15923, 2011.
[21] Chen Zhong, Michael Batty, Ed Manley, Jiaqiu Wang, Zijia Wang, Feng Chen, and Gerhard Schmitt. Variability in regularity: Mining temporal mobility patterns in London, Singapore and Beijing using smart-card data. PLoS ONE, 11(2):e0149222, 2016.
[22] Ramón Cáceres, Nick G. Duffield, Joseph Horowitz, and Donald F. Towsley. Multicast-based inference of network-internal loss characteristics. IEEE Transactions on Information Theory, 45(7):2462-2480, 1999.
[23] Mark J. Coates and Robert David Nowak. Network loss inference using unicast end-to-end measurement. In ITC Conference on IP Traffic, Modeling and Management, pages 28-1, 2000.
[24] F. Lo Presti, Nick G. Duffield, Joseph Horowitz, and Don Towsley. Multicast-based inference of network-internal delay distributions. IEEE/ACM Transactions on Networking, 10(6):761-775, 2002.
[25] Llewellyn Michael Kraus Boelter and Melville Campbell Branch. Urban planning, transportation, and systems analysis. Proceedings of the National Academy of Sciences, 46(6):824-831, 1960.
[26] Jayanth R. Banavar, Amos Maritan, and Andrea Rinaldo. Size and form in efficient transportation networks. Nature, 399(6732):130-132, 1999.
[27] Haodong Yin, Baoming Han, Dewei Li, Jianjun Wu, and Huijun Sun. Modeling and simulating passenger behavior for a station closure in a rail transit network. PLoS ONE, 11(12):e0167126, 2016.
[28] Junbo Zhang, Yu Zheng, and Dekang Qi. Deep spatio-temporal residual networks for citywide crowd flows prediction. arXiv preprint arXiv:1610.00081, 2016.
[29] Akshat Kumar, Daniel Sheldon, and Biplav Srivastava. Collective diffusion over networks: Models and inference. arXiv preprint arXiv:1309.6841, 2013.
[30] Jiali Du, Akshat Kumar, and Pradeep Varakantham. On understanding diffusion dynamics of patrons at a theme park. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems, pages 1501-1502. International Foundation for Autonomous Agents and Multiagent Systems, 2014.
[31] Maciej Kurant and Patrick Thiran. Layered complex networks. Physical Review Letters, 96(13):138701, 2006.
[32] Yu Zheng and Xiaofang Zhou. Computing with spatial trajectories. Springer Science and Business Media, 2011.
[33] A. Nuzzolo, U. Crisalli, L. Rosati, and A. Ibeas. STOP: a short term transit occupancy prediction tool for APTIS and real time transit management systems. In Intelligent Transportation Systems (ITSC), 2013 16th International IEEE Conference on, pages 1894-1899. IEEE, 2013.
[34] Bo Friis Nielsen, Laura Frølich, Otto Anker Nielsen, and Dorte Filges. Estimating passenger numbers in trains using existing weighing capabilities. Transportmetrica A: Transport Science, 10(6):502-517, 2014.
[35] Gilles Vandewiele, Pieter Colpaert, Olivier Janssens, Joachim Van Herwegen, Ruben Verborgh, Erik Mannens, Femke Ongenae, and Filip De Turck. Predicting train occupancies based on query logs and external data sources. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 1469-1474. International World Wide Web Conferences Steering Committee, 2017.
[36] Transport for London. Tube map.
https://tfl.gov.uk/cdn/static/cms/documents/standard-tubemap.pdf.
[37] Transport for London. NetMIS dataset. http://lu.uat.cds.co.uk/Ops_maintenance/Library_tools/Apps_tools/696.html.
[38] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[39] Samuel Gershman and Noah Goodman. Amortized inference in probabilistic reasoning. In CogSci, 2014.
[40] Prem K. Gopalan, Sean Gerrish, Michael Freedman, David M. Blei, and David M. Mimno. Scalable inference of overlapping communities. In Advances in Neural Information Processing Systems, pages 2249-2257, 2012.
[41] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling for regularized loss minimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1-9, 2015.
[42] Olivier Canévet, Cijo Jose, and François Fleuret. Importance sampling tree for large-scale empirical expectation. In International Conference on Machine Learning, pages 1454-1462, 2016.
SPATIAL ORGANIZATION OF NEURAL NETWORKS: A PROBABILISTIC MODELING APPROACH

A. Stafylopatis, M. Dikaiakos, D. Kontoravdis
National Technical University of Athens, Department of Electrical Engineering, Computer Science Division, 15773 Zographos, Athens, Greece.

ABSTRACT

The aim of this paper is to explore the spatial organization of neural networks under Markovian assumptions, in what concerns the behaviour of individual cells and the interconnection mechanism. Space-organizational properties of neural nets are very relevant in image modeling and pattern analysis, where spatial computations on stochastic two-dimensional image fields are involved. As a first approach we develop a random neural network model, based upon simple probabilistic assumptions, whose organization is studied by means of discrete-event simulation. We then investigate the possibility of approximating the random network's behaviour by using an analytical approach originating from the theory of general product-form queueing networks. The neural network is described by an open network of nodes, in which customers moving from node to node represent stimulations, and connections between nodes are expressed in terms of suitably selected routing probabilities. We obtain the solution of the model under different disciplines affecting the time spent by a stimulation at each node visited. Results concerning the distribution of excitation in the network as a function of network topology and external stimulation arrival pattern are compared with measures obtained from the simulation and validate the approach followed.

INTRODUCTION

Neural net models have been studied for many years in an attempt to achieve brain-like performance in computing systems. These models are composed of a large number of interconnected computational elements, and their structure reflects our present understanding of the organizing principles of biological nervous systems. In the beginning, neural nets, or other equivalent models, were intended to represent the logic arising in certain situations rather than to provide an accurate description in a realistic context. However, in the last decade or so the knowledge of what goes on in the brain has increased tremendously. New discoveries in natural systems make it now reasonable to examine the possibilities of using modern technology in order to synthesize systems that have some of the properties of real neural systems [8, 9, 10, 11].

In the original neural net model developed in 1943 by McCulloch and Pitts [1, 2], the network is made of many interacting components, known as the "McCulloch-Pitts cells" or "formal neurons", which are simple logical units with two possible states, changing state according to a threshold function of their inputs. Related automata models have been used later for gene control systems (genetic networks) [3], in which genes are represented as binary automata changing state according to boolean functions of their inputs. Boolean networks constitute a more general model, whose dynamical behaviour has been studied extensively. Due to the large number of elements, the exact structure of the connections and the functions of individual components are generally unknown and assumed to be distributed at random. Several studies on these random boolean networks [5, 6] have shown that they exhibit a surprisingly stable behaviour in what concerns their temporal and spatial organization.
However, very few formal analytical results are available, since most studies concern statistical descriptions and computer simulations. The temporal and spatial organization of random boolean networks is of particular interest in the attempt of understanding the properties of such systems, and models originating from the theory of stochastic processes 13 seem to be very useful.

Spatial properties of neural nets are most important in the field of image recognition 12. In the biological eye, a level-normalization computation is performed by the layer of horizontal cells, which are fed by the immediately preceding layer of photoreceptors. The horizontal cells take the outputs of the receptors and average them spatially, this average being weighted on a nearest-neighbor basis. This procedure corresponds to a mechanism for determining the brightness level of pixels in an image field by using an array of processing elements. The principle of local computation is usually adopted in models used for representing and generating textured images. Among the stochastic models applied to analyzing the parameters of image fields, the random Markov field model 7,14 seems to give a suitably structured representation, which is mainly due to the application of the markovian property in space. This type of modeling constitutes a promising alternative in the study of spatial organization phenomena in neural nets.

The approach taken in this paper aims to investigate some aspects of spatial organization under simple stochastic assumptions. In the next section we develop a model for random neural networks assuming boolean operation of individual cells. The behaviour of this model, obtained through simulation experiments, is then approximated by using techniques from the theory of queueing networks. The approximation yields quite interesting results as illustrated by various examples.

THE RANDOM NETWORK MODEL

We define a random neural network as a set of elements or cells, each one of which can be in one of two different states: firing or quiet. Cells are interconnected to form an NxN grid, where each grid point is occupied by a cell. We shall consider only connections between neighbors, so that each cell is connected to 4 among the other cells: two input and two output cells (the output of a cell is equal to its internal state and it is sent to its output cells, which use it as one of their inputs). The network topology is thus specified by its incidence matrix A of dimension MxM, where M=N². This matrix takes a simple form in the case of the neighbor-connection considered here. We further assume a periodic structure of connections in what concerns inputs and outputs; we will be interested in the following two types of networks depending upon the period of reproduction for elementary square modules 5, as shown in Fig.1:

- Propagative nets (Period 1)
- Looping nets (Period 2)

Fig.1. (a) Propagative connections, (b) Looping connections

At the edges of the grid, circular connections are established (modulo N), so that the network can be viewed as supported by a torus. The operation of the network is non-autonomous: changes of state are determined by both the interaction among cells and the influence of external stimulations. We assume that stimulations arrive from the outside world according to a Poisson process with parameter λ.
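Before specifying the cell dynamics, the two connection schemes of Fig.1 can be made concrete. The following Python sketch (our own illustration) enumerates the two output neighbors of each cell on the N×N torus. The propagative wiring is unambiguous from the text; the period-2 looping wiring shown is only one plausible reading of Fig.1, since the exact module layout is not recoverable from the text alone.

```python
def output_neighbors(N, looping=False):
    """Two output neighbors of each cell on an N x N toroidal grid.

    Cells are labeled k = row * N + col.  Propagative nets (period 1):
    every cell feeds the cell below it and the cell to its right.
    Looping nets (period 2): the feed direction alternates on a 2 x 2
    module so that excitation can circulate locally; this wiring is an
    assumed reconstruction, not verified against the original figure.
    """
    out = {}
    for r in range(N):
        for c in range(N):
            sr = sc = 1
            if looping and (r + c) % 2 == 1:
                sr = sc = -1               # reversed directions, period 2
            out[r * N + c] = (((r + sr) % N) * N + c,   # vertical output
                              r * N + (c + sc) % N)     # horizontal output
    return out
```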
Each arriving stimulation is associated with exactly one cell of the network; the cell concerned is determined by means of a given discrete probability distribution q_i (1≤i≤M), considering a one-dimensional labeling of the M cells. The operation of each individual cell is asynchronous and can be described in terms of the following rules:

- A quiet cell moves to the firing state if it receives an arriving stimulation or if a boolean function of its inputs becomes true.
- A firing cell moves to the quiet state if a boolean function of its inputs becomes false.
- Changes of state imply a reaction delay of the cell concerned; these delays are independent identically distributed random variables following a negative exponential distribution with parameter V.

According to these rules, the operation of a cell can be viewed as illustrated by Fig.2, where the horizontal axis represents time and the numbers 0, 1, 2 and 3 represent phases of an operation cycle. Phases 1 and 3, as indicated in Fig.2, correspond to reaction delays. In this sense, the quiet and firing states, as defined in the beginning of this section, represent the aggregates of phases 0,1 and 2,3 respectively. External stimulations affect the receiving cell only when it is in phase 0; otherwise we consider that the stimulation is lost. In the same way, we assume that changes of the value of the input boolean function do not affect the operation of the cell during phases 1 and 3. The conditions are checked only at the end of the respective reaction delay.

Fig.2. Changes of state for individual cells

The above defined model includes some features of the original McCulloch-Pitts cells 1,2. In fact, it represents an asynchronous counterpart of the latter, in which boolean functions are considered instead of threshold functions. However, it can be shown that any McCulloch and Pitts neural network can be implemented by a boolean network designed in an appropriate fashion 5. In what follows, we will consider that the firing condition for each individual cell is determined by an "or" function of its inputs.

Under the assumptions adopted, the evolution of the network in time can be described by a continuous-parameter Markov process. However, the size of the state-space and the complexity of the system are such that no analytical solution is tractable. The spatial organization of the network could be expressed in terms of the steady-state probability distribution for the Markov process. A more useful representation is provided by the marginal probability distributions for all cells in the network, or equivalently by the probability of being in the firing state for each cell. This measure expresses the level of excitation for each point in the grid.

We have studied the behaviour of the above model by means of simulation experiments for various cases depending upon the network size, the connection type, the distribution of external stimulation arrivals on the grid, and the parameters λ and V. Some examples are illustrated in the last section, in comparison with results obtained using the approach discussed in the next section. The estimations obtained concern the probability of being in the firing state for all cells in the network.
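As an illustration of these rules, the following time-discretized sketch (our own; the authors' simulator is not described in code) approximates the four-phase cell dynamics on a propagative torus. Exponential delays with parameter V are approximated by per-step completion probabilities V·dt, and Poisson arrivals with rate λ are routed by q; all names are ours.

```python
import numpy as np

def simulate_grid(N=10, lam=1.0, V=1.0, steps=200_000, dt=0.01, q=None, seed=0):
    """Estimate the per-cell probability of being in the firing aggregate
    (phases 2,3) for a propagative N x N torus under the OR firing rule.
    Phases: 0 quiet, 1 quiet->firing delay, 2 firing, 3 firing->quiet delay.
    """
    rng = np.random.default_rng(seed)
    M = N * N
    q = np.full(M, 1.0 / M) if q is None else np.asarray(q, dtype=float)
    # propagative inputs: each cell listens to the cells above and to its left
    ins = [(((r - 1) % N) * N + c, r * N + (c - 1) % N)
           for r in range(N) for c in range(N)]
    phase = np.zeros(M, dtype=int)
    firing_time = np.zeros(M)
    for _ in range(steps):
        firing = phase >= 2
        or_in = np.array([firing[a] or firing[b] for a, b in ins])
        hits = rng.choice(M, size=rng.poisson(lam * dt), p=q)  # stimulations
        done = rng.random(M) < V * dt       # reaction-delay completions
        for k in range(M):
            if phase[k] == 0 and (or_in[k] or k in hits):
                phase[k] = 1                # quiet -> delayed firing
            elif phase[k] == 1 and done[k]:
                phase[k] = 2
            elif phase[k] == 2 and not or_in[k]:
                phase[k] = 3                # firing -> delayed quiet
            elif phase[k] == 3 and done[k]:
                phase[k] = 0
        firing_time += firing * dt
    return (firing_time / (steps * dt)).reshape(N, N)
```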
The simulation was implemented according to the "batched means" method; each run was carried out until the width of the 95% confidence interval was less than 10% of the estimated mean value for each cell, or until a maximum number of events had been simulated, depending upon the size of the network.

THE ANALYTICAL APPROACH

The neural network model considered in the previous section exhibited the markovian property in both time and space. Markovianity in space, expressed by the principle of "neighbor-connections", is the basic feature of Markov random fields 7,14, as already discussed. Our idea is to attempt an approximation of the random neural network model by using a well-known model, which is markovian in time, and applying the constraint of markovianity in space. The model considered is an open queueing network, which belongs to the general class of queueing networks admitting a product-form solution 4. In fact, one could distinguish several common features in the two network models.

A neural network, in general, receives information in the form of external stimulation signals and performs some computation on this information, which is represented by changes of its state. The operation of the network can be viewed as a flow of excitement among the cells, and the spatial distribution of this excitement represents the response of the network to the information received. This kind of operation is particularly relevant in the processing of image fields. On the other hand, in queueing networks, composed of a number of service station nodes, customers arrive from the outside world and spend some time in the network, during which they move from node to node, waiting and receiving service at each node visited. Following the external arrival pattern, the interconnection of nodes and the other network parameters, the operation of the network is characterized by a distribution of activity among the nodes.

Let us now consider a queueing network, where nodes represent cells and customers represent stimulations moving from cell to cell following the topology of the network. Our aim is to define the network's characteristics in a way to match those of the neural net model as much as possible. Our queueing network model is completely specified by the following assumptions:

- The network is composed of M=N² nodes arranged on an NxN rectangular grid, as in the previous case. Interconnections are expressed by means of a matrix R of routing probabilities: r_ij (1≤i,j≤M) represents the probability that a stimulation (customer) leaving node i will next visit node j. Since it is an open network, after visiting an arbitrary number of cells, stimulations may eventually leave the network. Let r_i0 denote the probability of leaving the network upon leaving node i. In what follows, we will assume that r_i0 = s for all nodes. In what concerns the routing probabilities r_ij, they are determined by the two interconnection schemata considered in the previous section (propagative and looping connections): each node i has two output nodes j, for which the routing probabilities are equally distributed. Thus, r_ij = (1-s)/2 for the two output nodes of i and equal to zero for all other nodes in the network.
- External stimulation arrivals follow a Poisson process with parameter λ and are routed to the nodes according to the probability distribution q_i (1≤i≤M) as in the previous section.
- Stimulations receive a "service time" at each node visited.
Service times are independent identically distributed random variables, which are exponentially distributed with parameter V. The time spent by a stimulation at a node depends also upon the "service discipline" adopted. We shall consider two types of service disciplines according to the general queueing network model 4: the first-come-first-served (FCFS) discipline, where customers are served in the order of their arrival to the node, and the infinite-server (IS) discipline, where a customer's service is immediately assumed upon arrival to the node, as if there were always a server available for each arriving customer (the second type includes no waiting delay). We will refer to the above two types of nodes as type 1 and type 2 respectively. In either case, all nodes of the network will be of the same type.

The steady-state solution of the above network is a straightforward application of the general BCMP theorem 4 under the simple assumptions considered. The state of the system is described by the vector (k1,k2,...,kM), where k_i is the number of customers present at node i. We first define the traffic intensity ρ_i for each node i as

ρ_i = λ e_i / V,  i = 1,2,...,M  (1)

where the quantities {e_i} are the solution of the following set of linear equations:

e_i = q_i + Σ_{j=1}^{M} e_j r_ji,  i = 1,2,...,M  (2)

It can be easily seen that, in fact, e_i represents the average number of visits a customer makes to node i during his sojourn in the network. The existence of a steady-state distribution for the system depends on the solution of the above set. Following the general theorem 4, the joint steady-state distribution takes the form of a product of independent distributions for the nodes:

p(k1,k2,...,kM) = p1(k1) p2(k2) ··· pM(kM)  (3)

where

p_i(k_i) = (1 − ρ_i) ρ_i^{k_i}  (Type 1)
p_i(k_i) = (ρ_i^{k_i} / k_i!) e^{−ρ_i}  (Type 2)  (4)

provided that the stability condition ρ_i < 1 is satisfied for type 1 nodes. The product-form solution of this type of network expresses the idea of global and local balance which is characteristic of ergodic Markov processes. We can then proceed to deriving the desired measure for each node in the network; we are interested in the probability of being active for each node, which can be interpreted as the probability that at least one customer is present at the node:

P(k_i > 0) = 1 − p_i(0) = ρ_i  (Type 1)
P(k_i > 0) = 1 − p_i(0) = 1 − e^{−ρ_i}  (Type 2)  (5)

The variation in space of the above quantity will be studied with respect to the corresponding measure obtained from simulation experiments for the neural network model.

NUMERICAL AND SIMULATION EXAMPLES

Simulations and numerical solutions of the queueing network model were run for different values of the parameters. The network sizes considered are relatively small but can provide useful information on the spatial organization of the networks. For both types of service discipline discussed in the previous section, the approach followed yields a very good approximation of the network's organization in most regions of the rectangular grid.

Fig.3. A 10x10 network with λ=1, V=1 and propagative connections. External stimulations are uniformly distributed over a 3x3 square on the upper left corner of the grid. (a) Simulation (b) Queueing network approach with s=0.05 and type 2 nodes.

Fig.4. The network of Fig.3 with λ=2. (a) Simulation (b) Queueing network approach with s=0.08 and type 2 nodes.

The choice of the probability s of leaving the network plays a critical role in the behaviour of the queueing model, and must have a non-zero value in order for the network to be stable.
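Equations (1)-(5) are straightforward to evaluate numerically. The sketch below (ours; the function and variable names are not from the paper) builds the grid routing matrix with leak probability s, solves the linear system (2), and returns the activity probabilities (5) for either node type.

```python
import numpy as np

def routing_matrix(out_neighbors, M, s=0.05):
    """R[i, j] = (1 - s) / 2 for the two output nodes of i, zero otherwise;
    the remaining probability s is the chance of leaving the network."""
    R = np.zeros((M, M))
    for i, (a, b) in out_neighbors.items():   # e.g. from output_neighbors()
        R[i, a] += (1.0 - s) / 2.0
        R[i, b] += (1.0 - s) / 2.0
    return R

def activity_probabilities(R, q, lam, V, node_type=2):
    """Solve e_i = q_i + sum_j e_j r_ji (equation (2)), form rho_i as in
    equation (1), and return P(k_i > 0) as in equation (5)."""
    e = np.linalg.solve(np.eye(len(q)) - R.T, q)
    rho = lam * e / V
    if node_type == 1:                        # FCFS nodes
        assert np.all(rho < 1.0), "stability condition rho_i < 1 violated"
        return rho
    return 1.0 - np.exp(-rho)                 # infinite-server nodes
```

With a stimulation distribution q concentrated on a corner of the grid, this computation should reproduce the qualitative clustering of excitation reported in Figures 3-7.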
Good results are obtained for very small values of s; in fact, this parameter represents the phenomenon of excitation being "lost" somewhere in the network. Graphical representations for various cases are shown in Figures 3-7. We have used a coloring of five "grey levels", defined by dividing into five segments the interval between the smallest and the largest value of the probability on the grid; the normalization is performed with respect to simulation results. This type of representation is less accurate than directly providing numerical values, but is clearer for describing the organization of the system. In each case, the results shown for the queueing model concern only one type of nodes, the one that best fits the simulation results, which is type 2 in the majority of cases examined. The graphical representation illustrates the structuring of the distribution of excitation on the grid in terms of functionally connected regions of high and low excitation. We notice that clustering of nodes mainly follows the spatial distribution of external stimulations and is more sharply structured in the case of looping connections.

Fig.5. A 10x10 network with λ=1, V=1 and looping connections. External stimulations are uniformly distributed over a 4x4 square on the center of the grid. (a) Simulation (b) Queueing network approach with s=0.07 and type 2 nodes.

Fig.6. The network of Fig.5 with λ=0.5. (a) Simulation (b) Queueing network approach with s=0.03 and type 2 nodes.

CONCLUSION

We have developed in this paper a simple continuous-time probabilistic model of neural nets in an attempt to investigate their spatial organization. The model incorporates some of the main features of the McCulloch-Pitts "formal neurons" model and assumes boolean operation of the elementary cells. The steady-state behaviour of the model was approximated by means of a queueing network model with suitably chosen parameters. Results obtained from the solution of the above approximation were compared with simulation results of the initial model, which validate the approximation. This simplified approach is a first step in an attempt to study the organizational properties of neural nets by means of markovian modeling techniques.

Fig.7. A 16x16 network with λ=1, V=1 and looping connections. External stimulations are uniformly distributed over two 4x4 squares on the upper left and lower right corners of the grid. (a) Simulation (b) Queueing network approach with s=0.05 and type 1 nodes.

REFERENCES

1. W. S. McCulloch, W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity", Bull. of Math. Biophysics 5, 115-133 (1943).
2. M. L. Minsky, Computation: Finite and Infinite Machines (Prentice Hall, 1967).
3. S. Kauffman, "Behaviour of Randomly Constructed Genetic Nets", in Towards a Theoretical Biology, Ed. C. H. Waddington (Edinburgh University Press, 1970).
4. F. Baskett, K. M. Chandy, R. R. Muntz, F. G. Palacios, "Open, Closed and Mixed Networks of Queues with Different Classes of Customers", J. ACM, 22 (1975).
5. H. Atlan, F. Fogelman-Soulie, J. Salomon, G. Weisbuch, "Random Boolean Networks", Cyb. and Syst. 12 (1981).
6. F. Fogelman-Soulie, E. Goles-Chacc, G. Weisbuch, "Specific Roles of the Different Boolean Mappings in Random Networks", Bull. of Math. Biology, Vol. 44, No 5 (1982).
7. G. R. Cross, A. K.
Jain, "Markov Random Field Texture Models", IEEE Trans. on PAMI, Vol. PAMI-5, No 1 (1983). E. R. Kandel, J. H. Schwartz, Principles of Neural Science, (Elsevier, N.Y., 1985). J. J. Hopfield, D. W. Tank, "Computing with Neural Circuits: A Model", Sc i ence, Vol. 233, 625-633 (1986). Y. S. Abu-Mostafa, D. Psaltis, "Optical Neural Computers", Scient. Amer., 256, 88-95 (1987). R. P. Lippmann, "An Introduction to Computing with Neural Nets", IEEE ASSP Mag. (Apr. 1987). C. A. Mead, "Neural Hardware for Vision", Eng. and Scie. (June 1987) . E. Gelenbe, A. Stafylopatis, "Temporal Behaviour of Neural Networks", IEEE First Intern. Conf. on Neural Networks, San Diego, CA (June 1987). L. Onural, "A Systematic Procedure to Generate Connected Binary Fractal Patterns with Resolution-varying Texture", Sec. Intern. Sympt. on Compo and Inform. Sciences, Istanbul, Turkey (Oct. 1987) .
A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization

Gert Cauwenberghs
California Institute of Technology
Mail-Code 128-95
Pasadena, CA 91125
E-mail: gert@cco.caltech.edu

Abstract

A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error. A substantially faster learning speed is hence allowed. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks.

1 Background and Motivation

We address general optimization tasks that require finding a set of constant parameter values p_i that minimize a given error functional ε(p). For supervised learning, the error functional consists of some quantitative measure of the deviation between a desired state x^T and the actual state of a network x, resulting from an input y and the parameters p. In such a context the components of p consist of the connection strengths, thresholds and other adjustable parameters in the network. A typical specification for the error in learning a discrete set of pattern associations (y^(a), x^T(a)) for a steady-state network is the Mean Square Error (MSE)

ε(p) = Σ_a |x^T(a) − x^(a)|²  (1)

and similarly, for learning a desired response (y(t), x^T(t)) in a dynamic network,

ε(p) = ∫ |x^T(t) − x(t)|² dt  (2)

For ε(p) to be uniquely defined in the latter dynamic case, initial conditions x(t_init) need to be specified. A popular method for minimizing the error functional is steepest error descent (gradient descent) [1]-[6]

Δp = −η ∂ε/∂p  (3)

Iteration of (3) leads asymptotically to a local minimum of ε(p), provided η is strictly positive and small. The computation of the gradient is often cumbersome, especially for time-dependent problems [2]-[5], and is even ill-posed for analog hardware learning systems that unavoidably contain unknown process impurities. This calls for error descent methods avoiding calculation of the gradients but rather probing the dependence of the error on the parameters directly. Methods that use some degree of explicit internal information other than the adjustable parameters, such as Madaline III [6] which assumes a specific feedforward multi-perceptron network structure and requires access to internal nodes, are therefore excluded. Two typical methods which satisfy the above condition are illustrated below:

- Weight Perturbation [7], a simple sequential parameter perturbation technique. The method updates the individual parameters in sequence, by measuring the change in error resulting from a perturbation of a single parameter and adjusting that parameter accordingly. This technique effectively measures the components of the gradient sequentially, which for a complete knowledge of the gradient requires as many computation cycles as there are parameters in the system.
- Model-Free Distributed Learning [8], which is based on the "M.I.T." rule in adaptive control [9].
Inspired by analog hardware, the distributed algorithm makes use of time-varying perturbation signals π_i(t) supplied in parallel to the parameters p_i, and correlates these π_i(t) with the instantaneous network response ε(p + π) to form an incremental update Δp_i. Unfortunately, the distributed model-free algorithm does not support learning of dynamic features (2) in networks with delays, and the learning speed degrades sensibly with increasing number of parameters [8].

2 Stochastic Error-Descent: Formulation and Properties

The algorithm we investigate here combines both above methods, yielding a significant improvement in performance over both. Effectively, at every epoch the constructed algorithm decreases the error along a single randomly selected direction in the parameter space. Each such decrement is performed using a single synchronous parallel parameter perturbation per epoch. Let p̂ = p + π with parallel perturbations π_i selected from a random distribution. The perturbations π_i are assumed reasonably small, but not necessarily mutually orthogonal. For a given single random instance of the perturbation π, we update the parameters with the rule

Δp = −μ ε̂ π,  (4)

where the scalar

ε̂ = ε(p̂) − ε(p)  (5)

is the error contribution due to the perturbation π, and μ is a small strictly positive constant. Obviously, for a sequential activation of the π_i, the algorithm reduces to the weight perturbation method [7]. On the other hand, by omitting ε(p) in (5) the original distributed model-free method [8] is obtained. The subtraction of the unperturbed reference term ε(p) in (5) contributes a significant increase in speed over the original method. Intuitively, the incremental error ε̂ specified in (5) isolates the specific contribution due to the perturbation, which is obviously more relevant than the total error, which includes a bias ε(p) unrelated to the perturbation π. This bias necessitates stringent zero-mean and orthogonality conditions on the π_i and requires many perturbation cycles in order to effect a consistent decrease in the error [8].¹ An additional difference concerns the assumption on the dynamics of the perturbations π_i. By fixing the perturbation π during every epoch in the present method, the dynamics of the π_i no longer interfere with the time delays of the network, and dynamic optimization tasks as in (2) come within reach.

The rather simple and intuitive structure (4) and (5) of the algorithm is somewhat reminiscent of related models for reinforcement learning, and likely finds parallels in other fields as well. Random direction and line-search error-descent algorithms for trajectory learning have been suggested and analyzed by P. Baldi [12]. As a matter of coincidence, independent derivations of basically the same algorithm but from different approaches are presented in this volume as well [13],[14]. Rather than focussing on issues of originality, we proceed by analyzing the virtues and scaling properties of this method. We directly present the results below, and defer the formal derivations to the appendix.

2.1 The algorithm performs gradient descent on average, provided that the perturbations π_i are mutually uncorrelated with uniform auto-variance, that is E(π_i π_j) = σ² δ_ij with σ the perturbation strength. The effective gradient descent learning rate corresponding to (3) equals η_eff = μσ².

Hence on average the learning trajectory follows the steepest path of error descent.

¹ An interesting noise-injection variant on the model-free distributed learning paradigm of [8], presented in [10], avoids the bias due to the offset level ε(p) as well, by differentiating the perturbation and error signals prior to correlating them to construct the parameter increments. A complete demonstration of an analog VLSI system based on this approach is presented in this volume [11]. As a matter of fact, the modified noise-injection algorithm corresponds to a continuous-time version of the algorithm presented here, for networks and error functionals free of time-varying features.
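A minimal Python sketch of the update (4)-(5) follows (ours; the paper defines the rule mathematically only), using the binary perturbations π_i = ±σ discussed later in the text:

```python
import numpy as np

def sed_step(p, error_fn, mu, sigma, rng):
    """One epoch of stochastic error-descent: perturb all parameters in
    parallel, measure the incremental error (5), and correlate it with the
    perturbation to form the update (4).  error_fn is a black box returning
    the scalar error; no gradient or internal structure is needed.
    """
    pi = sigma * rng.choice([-1.0, 1.0], size=p.shape)
    e_hat = error_fn(p + pi) - error_fn(p)   # equation (5)
    return p - mu * e_hat * pi               # equation (4)
```

Note that each epoch costs exactly two error evaluations, regardless of the number of parameters; this is what yields the O(P) per-epoch cost quoted later, since each evaluation itself typically scales as O(P).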
The stochasticity of the parameter perturbations gives rise to fluctuations around the mean path of descent, injecting diffusion in the learning process. However, the individual fluctuations satisfy the following desirable regularity:

2.2 The error ε(p) always decreases under an update (4) for any π, provided that |π|² is "small", and μ is strictly positive and "small".

Therefore, the algorithm is guaranteed to converge towards local error minima just like gradient descent, as long as the perturbation vector π statistically explores all directions of the parameter space, provided the perturbation strength and learning rate are sufficiently small. This property holds only for methods which bypass the bias due to the offset error term ε(p) for the calculation of the updates, as is performed here by subtraction of the offset in (5). The guaranteed decrease in error of the update (4) under any small, single instance of the perturbation π removes the need of averaging multiple trials obtained by different instances of π in order to reduce turbulence in the learning dynamics. We intentionally omit any smoothing operation on the constructed increments (4) prior to effecting the updates Δp_i, unlike the estimation of the true gradient in [8],[10],[13] by essentially accumulating and averaging contributions (4) over a large set of random perturbations. Such averaging is unnecessary here (and in [13]) since each individual increment (4) contributes a decrease in error, and since the smoothing of the ragged downward trajectory on the error surface is effectively performed by the integration of the incremental updates (4) anyway. Furthermore, from a simple analysis it follows that such averaging is actually detrimental to the effective speed of convergence.²

² Sure enough, averaging say M instances of (4) for different random perturbations will improve the estimate of the gradient by decreasing its variance. However, the variance of the update Δp decreases by a factor of M, allowing an increase in learning rate by only a factor of M^{1/2}, while to that purpose M network evaluations are required. In terms of total computation efforts, the averaged method is hence a factor M^{1/2} slower.

For a correct measure of the convergence speed of the algorithm relative to that of other methods, we studied the boundaries of learning stability regions specifying maximum learning rates for the different methods. The analysis reveals the following scaling properties with respect to the size of the trained network, characterized by the number of adjustable parameters P:

2.3 The maximum attainable average speed of the algorithm is a factor P^{1/2} slower than that of pure gradient descent, as opposed to the maximum average speed of sequential weight perturbation which is a factor P slower than gradient descent.

The reduction in speed of the algorithm vs. gradient descent by the square root of the number of parameters can be understood as well from an information-theoretical point of view using physical arguments. At each epoch, the stochastic algorithm applies perturbations in all P dimensions, injecting information in P different "channels".
However, only scalar information about the global response of the network to the perturbations is available at the outside, through a single "channel". On average, such an algorithm can extract knowledge about the response of the network in at most P^{1/2} effective dimensions, where the upper limit is reached only if the perturbations are truly statistically independent, exploiting the full channel capacity. In the worst case the algorithm only retains scalar information through a single, low-bandwidth channel, which is e.g. the case for the sequential weight perturbation algorithm. Hence, the stochastic algorithm achieves a speed-up of a factor P^{1/2} over the technique of sequential weight perturbation, by using parallel statistically independent perturbations as opposed to serial single perturbations. The original model-free algorithm by Dembo and Kailath [8] does not achieve this P^{1/2} speed-up over the sequential perturbation method (and may even do worse), partly because the information about the specific error contribution by the perturbations is contaminated due to the constant error bias signal ε(p).

Note that up to here the term "speed" was defined in terms of the number of epochs, which does not necessarily directly relate to the physical speed, in terms of the total number of operations. An equally important factor in speed is the amount of computation involved per epoch to obtain values for the updates (3) and (4). For the stochastic algorithm, the most intensive part of the computation involved at every epoch is the evaluation of ε(p) for two instances of p in (5), which typically scales as O(P) for neural networks. The remaining operations relate to the generation of random perturbations π_i and the calculation of the correlations in (4), scaling as O(P) as well. Hence, for an accurate comparison of the learning speed, the scaling of the computations involved in a single gradient descent step needs to be balanced against the computation effort by the stochastic method corresponding to an equivalent error descent rate, which combining both factors scales as O(P^{3/2}). An example where the scaling for this computation balances in favor of the stochastic error-descent method, due to the expensive calculation of the full gradient, will be demonstrated below for dynamic trajectory learning. More importantly, the intrinsic parallelism, fault tolerance and computational simplicity of the stochastic algorithm are especially attractive with hardware implementations in mind. The complexity of the computations can be furthermore reduced by picking a binary random distribution for the parallel perturbations, π_i = ±σ with equal probability for both polarities, simplifying the multiply operations in the parameter updates. In addition, powerful techniques exist to generate large-scale streams of pseudo-random bits in VLSI [15].

3 Numerical Simulations

For a test of the learning algorithm on time-dependent problems, we selected dynamic trajectory learning (a "Figure 8") as a representative example [2].
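Before the trajectory benchmark, a toy comparison of the two methods on a quadratic error surface (entirely our own example) illustrates the matched-rate setup η_eff = μσ² = η that is used in the simulations below:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 4.0])                 # toy quadratic error surface
err = lambda p: 0.5 * p @ A @ p
grad = lambda p: A @ p

eta, sigma = 0.02, 1e-3
mu = eta / sigma ** 2                        # so that eta_eff = mu*sigma^2 = eta
p_gd = p_sed = np.ones(3)
for _ in range(500):
    p_gd = p_gd - eta * grad(p_gd)           # exact gradient descent (3)
    pi = sigma * rng.choice([-1.0, 1.0], 3)
    e_hat = err(p_sed + pi) - err(p_sed)     # equation (5)
    p_sed = p_sed - mu * e_hat * pi          # equation (4)
print(err(p_gd), err(p_sed))                 # comparable descent on average
```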
Several exact gradient methods based on an error functional of the form (2) exist [2]-[5],³ with a computational complexity scaling as either O(P) per epoch for an off-line method [2] (requiring history storage over the complete time interval of the error functional), or as O(P²) [3] and recently as O(P^{3/2}) [4]-[5] per epoch for an on-line method (with only most current history storage). The stochastic error-descent algorithm provides an on-line alternative with an O(P) per epoch complexity. As a consequence, including the extra P^{1/2} factor for the convergence speed relative to gradient descent, the overall computation complexity of the stochastic error-descent still scales like the best on-line exact gradient method currently available.

³ The distinction between on-line and off-line methods here refers to issues of time in the computation. On-line methods process incoming data strictly in the order it is received, while off-line methods require extensive access to previously processed data. On-line methods are therefore more desirable for real-time learning applications.

For the simulations, we compared several runs of the stochastic method with a single run of an exact gradient-descent method, all runs starting from the same initial conditions. For a meaningful comparison, the equivalent learning rate for stochastic descent, η_eff = μσ², was set to η, resulting in equal average speeds. We implemented binary random perturbations π_i = ±σ with σ = 1×10⁻³. We used the network topology, the teacher forcing mechanism, the values for the learning parameters and the values for the initial conditions from [4], case 4, except for η (and η_eff) which we reduced from 0.1 to 0.05 to avoid strong instabilities in the stochastic sessions. Each epoch represents one complete period of the figure eight. We found no local minima for the learning problem, and all sessions converged successfully within 4000 epochs as shown in Fig. 1 (a). The occasional upward transitions in the stochastic error are caused by temporary instabilities due to the elevated value of the learning rate. At lower values of the learning rate, we observed significantly less frequent and articulate upward transitions. The measured distribution for the decrements in error at η_eff = 0.01 is given in Fig. 1 (b). The values of the stochastic error decrements in the histogram are normalized to the mean of the distribution, i.e. the error decrements by gradient descent (8). As expected, the error decreases at practically all times with an average rate equal to that of gradient descent, but the largest fraction of the updates cause little change in error.

Figure 1: Exact Gradient and Stochastic Error-Descent Methods for the Figure "8" Trajectory. (a) Convergence Dynamics (η = 0.05). (b) Distribution of the Error Decrements (η = 0.01).

4 Conclusion

The above analysis and examples serve to demonstrate the solid performance of the error-descent algorithm, in spite of its simplicity and the minimal requirements on explicit knowledge of internal structure.
While the functional simplicity and fault-tolerance of the algorithm is particularly suited for hardware implementations, on conventional digital computers its efficiency compares favorably with pure gradient descent methods for certain classes of networks and optimization problems, owing to the involved effort to obtain full gradient information. The latter is particularly true for complex optimization problems, such as for trajectory learning and adaptive control, with expensive scaling properties for the calculation of the gradient. In particular, the discrete formulation of the learning dynamics, decoupled from the dynamics of the network, enables the stochastic error-descent algorithm to handle dynamic networks and time-dependent optimization functionals gracefully.

Appendix: Formal Analysis

We analyze the algorithm for small perturbations π_i, by expanding (5) into a Taylor series around p:

ε̂ = Σ_j (∂ε/∂p_j) π_j + O(|π|²),  (6)

where the ∂ε/∂p_j represent the components of the true error gradient, reflecting the physical structure of the network. Substituting (6) in (4) yields:

Δp_i = −μ Σ_j (∂ε/∂p_j) π_i π_j + O(|π|²) π_i.  (7)

For mutually uncorrelated perturbations π_i with uniform variance σ², E(π_i π_j) = σ² δ_ij, the parameter vector on average changes as

E(Δp) = −μ σ² ∂ε/∂p + O(σ³).  (8)

Hence, on average the algorithm performs pure gradient descent as in (3), with an effective learning rate η = μσ². The fluctuations of the parameter updates (7) with respect to their average (8) give rise to diffusion in the error-descent process. Nevertheless, regardless of these fluctuations the error will always decrease under the updates (4), provided that the increments Δp_i are sufficiently small (μ small):

Δε = Σ_i (∂ε/∂p_i) Δp_i + O(|Δp|²) ≈ −μ Σ_i Σ_j (∂ε/∂p_i)(∂ε/∂p_j) π_i π_j ≈ −μ ε̂² ≤ 0.  (9)

Note that this is a direct consequence of the offset bias subtraction in (5), and (9) is no longer valid when the compensating reference term ε(p) in (5) is omitted. The algorithm will converge towards local error minima just like gradient descent, as long as the perturbation vector π statistically explores all directions of the parameter space. In principle, statistical independence of the π_i is not required to ensure convergence, though in the case of cross-correlated perturbations the learning trajectory (7) does not on average follow the steepest path (8) towards the optima, resulting in slower learning.

The constant μ cannot be increased arbitrarily to boost the speed of learning. The value of μ is constrained by the allowable range for |Δp| in (9). The maximum level for |Δp| depends on the steepness and nonlinearity of the error functional ε, but is largely independent of which algorithm is being used. A value of |Δp| exceeding the limit will likely cause instability in the learning process, just as it would for an exact gradient descent method. The constraint on |Δp| allows us to formulate the maximum attainable speed of the stochastic algorithm, relative to that of other methods. From (4),

|Δp|² = μ² |π|² ε̂² ≈ P μ² σ² ε̂²,  (10)

where P is the number of parameters. The approximate equality at the end of (10) holds for large P, and results from the central limit theorem for |π|², with E(π_i π_j) = σ² δ_ij. From (6), the expected value of (10) is

E(|Δp|²) = P (μσ²)² |∂ε/∂p|².  (11)
The maximum attainable value for μ can be expressed in terms of the maximum value of η for gradient descent learning. Indeed, from a worst-case analysis of (3),

|Δp|²_max = η²_max |∂ε/∂p|²_max,  (12)

and from a similar worst-case analysis of (11) we obtain √P μ_max σ² ≈ η_max to a first order approximation. With the derived value for μ_max, the maximum effective learning rate η_eff associated with the mean field equation (8) becomes η_eff = P^{−1/2} η_max for the stochastic method, as opposed to η_max for the exact gradient method. This implies that on average and under optimal conditions the learning process for the stochastic error-descent method is a factor P^{1/2} slower than optimal gradient descent. From similar arguments, it can be shown that for sequential perturbations π_i the effective learning rate for the mean field gradient descent satisfies η_eff = P^{−1} η_max. Hence under optimal conditions the sequential weight perturbation technique is a factor P slower than optimal gradient descent.

Acknowledgements

We thank J. Alspector, P. Baldi, B. Flower, D. Kirk, M. van Putten, A. Yariv, and many other individuals for valuable suggestions and comments on the work presented here.

References

[1] D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning Internal Representations by Error Propagation," in Parallel Distributed Processing, Explorations in the Microstructure of Cognition, vol. 1, D.E. Rumelhart and J.L. McClelland, eds., Cambridge, MA: MIT Press, 1986.
[2] B.A. Pearlmutter, "Learning State Space Trajectories in Recurrent Neural Networks," Neural Computation, vol. 1 (2), pp 263-269, 1989.
[3] R.J. Williams and D. Zipser, "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks," Neural Computation, vol. 1 (2), pp 270-280, 1989.
[4] N.B. Toomarian and J. Barhen, "Learning a Trajectory using Adjoint Functions and Teacher Forcing," Neural Networks, vol. 5 (3), pp 473-484, 1992.
[5] J. Schmidhuber, "A Fixed Size Storage O(n³) Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks," Neural Computation, vol. 4 (2), pp 243-248, 1992.
[6] B. Widrow and M.A. Lehr, "30 years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation," Proc. IEEE, vol. 78 (9), pp 1415-1442, 1990.
[7] M. Jabri and B. Flower, "Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayered Networks," IEEE Trans. Neural Networks, vol. 3 (1), pp 154-157, 1992.
[8] A. Dembo and T. Kailath, "Model-Free Distributed Learning," IEEE Trans. Neural Networks, vol. 1 (1), pp 58-70, 1990.
[9] H.P. Whitaker, "An Adaptive System for the Control of Aircraft and Spacecraft," in Institute for Aeronautical Sciences, pap. 59-100, 1959.
[10] B.P. Anderson and D.A. Kerns, "Using Noise Injection and Correlation in Analog Hardware to Estimate Gradients," submitted, 1992.
[11] D. Kirk, D. Kerns, K. Fleischer, and A. Barr, "Analog VLSI Implementation of Gradient Descent," in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufman Publishers, vol. 5, 1993.
[12] P. Baldi, "Learning in Dynamical Systems: Gradient Descent, Random Descent and Modular Approaches," JPL Technical Report, California Institute of Technology, 1992.
[13] J. Alspector, R. Meir, B. Yuhas, and A.
Jayakumar, "A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks," in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufman Publishers, vol. 5, 1993. [14] B. Flower and M. labri, "Summed Weight Neuron Perturbation: An O(n) Improvement over Weight Perturbation," in Advances in Neural Information Processing Systems, San Mateo, CA: Morgan Kaufman Publishers, vol. 5, 1993. [15] J. Alspector, l.W. Gannett, S. Haber, M.B. Parker, and R. Chu, "A VLSI-Efficient Technique for Generating Multiple Uncorrelated Noise Sources and Its Application to Stochastic Neural Networks," IEEE T. Circuits and Systems, 38 (1), pp 109-123, 1991. 251 PART III CONTROL, NAVIGATION, AND PLANNING
Rotting Bandits

Nir Levine
Electrical Engineering Department
The Technion, Haifa 32000, Israel
levin.nir1@gmail.com

Koby Crammer
Electrical Engineering Department
The Technion, Haifa 32000, Israel
koby@ee.technion.ac.il

Shie Mannor
Electrical Engineering Department
The Technion, Haifa 32000, Israel
shie@ee.technion.ac.il

Abstract

The Multi-Armed Bandits (MAB) framework highlights the trade-off between acquiring new knowledge (Exploration) and leveraging available knowledge (Exploitation). In the classical MAB problem, a decision maker must choose an arm at each time step, upon which she receives a reward. The decision maker's objective is to maximize her cumulative expected reward over the time horizon. The MAB problem has been studied extensively, specifically under the assumption of the arms' reward distributions being stationary, or quasi-stationary, over time. We consider a variant of the MAB framework, which we termed Rotting Bandits, where each arm's expected reward decays as a function of the number of times it has been pulled. We are motivated by many real-world scenarios such as online advertising, content recommendation, crowdsourcing, and more. We present algorithms, accompanied by simulations, and derive theoretical guarantees.

1 Introduction

One of the most fundamental trade-offs in stochastic decision theory is the well celebrated Exploration vs. Exploitation dilemma. Should one acquire new knowledge at the expense of possible sacrifice in the immediate reward (Exploration), or leverage past knowledge in order to maximize instantaneous reward (Exploitation)? Solutions that have been demonstrated to perform well are those which succeed in balancing the two. First proposed by Thompson [1933] in the context of drug trials, and later formulated in a more general setting by Robbins [1985], MAB problems serve as a distilled framework for this dilemma.

In the classical setting of the MAB, at each time step, the decision maker must choose (pull) between a fixed number of arms. After pulling an arm, she receives a reward which is a realization drawn from the arm's underlying reward distribution. The decision maker's objective is to maximize her cumulative expected reward over the time horizon. An equivalent objective, more typically studied, is the regret, which is defined as the difference between the optimal cumulative expected reward (under full information) and that of the policy deployed by the decision maker.

The MAB formulation has been studied extensively, and was leveraged to formulate many real-world problems. Some examples for such modeling are online advertising [Pandey et al., 2007], routing of packets [Awerbuch and Kleinberg, 2004], and online auctions [Kleinberg and Leighton, 2003]. Most past work (Section 6) on the MAB framework has been performed under the assumption that the underlying distributions are stationary, or possibly quasi-stationary. In many real-world scenarios, this assumption may seem simplistic. Specifically, we are motivated by real-world scenarios where the expected reward of an arm decreases over the time instances that it has been pulled. We term this variant Rotting Bandits. For motivational purposes, we present the following two examples.

- Consider an online advertising problem where an agent must choose which ad (arm) to present (pull) to a user. It seems reasonable that the effectiveness (reward) of a specific ad on a user would deteriorate over exposures.
Similarly, in the content recommendation context, Agarwal et al. [2009] showed that articles' CTR decays over the number of exposures.
- Consider the problem of assigning projects through crowdsourcing systems [Tran-Thanh et al., 2012]. Given that the assignments primarily require human perception, subjects may fall into boredom and their performance would decay (e.g., license plate transcriptions [Du et al., 2013]).

As opposed to the stationary case, where the optimal policy is to always choose some specific arm, in the case of Rotting Bandits the optimal policy consists of choosing different arms. This results in the notion of adversarial regret vs. policy regret [Arora et al., 2012] (see Section 6). In this work we tackle the harder problem of minimizing the policy regret.

The main contributions of this paper are the following:
- Introducing a novel, real-world oriented MAB formulation, termed Rotting Bandits.
- Presenting an easy-to-follow algorithm for the general case, accompanied by theoretical guarantees.
- Refining the theoretical guarantees for the case of existing prior knowledge on the rotting models, accompanied by suitable algorithms.

The rest of the paper is organized as follows: in Section 2 we present the model and relevant preliminaries. In Section 3 we present our algorithm along with theoretical guarantees for the general case. In Section 4 we do the same for the parameterized case, followed by simulations in Section 5. In Section 6 we review related work, and conclude with a discussion in Section 7.

2 Model and Preliminaries

We consider the problem of Rotting Bandits (RB); an agent is given $K$ arms and at each time step $t = 1, 2, \ldots$ one of the arms must be pulled. We denote the arm that is pulled at time step $t$ as $i(t) \in [K] = \{1, \ldots, K\}$. When arm $i$ is pulled for the $n$-th time, the agent receives a time-independent, $\sigma^2$ sub-Gaussian random reward, $r_t$, with mean $\mu_i(n)$.(1)

In this work we consider two cases: (1) There is no prior knowledge on the expected rewards, except for the "rotting" assumption to be presented shortly, i.e., a non-parametric case (NPC). (2) There is prior knowledge that the expected rewards are comprised of an unknown constant part and a rotting part which is known to belong to a set of rotting models, i.e., a parametric case (PC).

Let $N_i(t)$ be the number of pulls of arm $i$ at time $t$, not including this round's choice ($N_i(1) = 0$), and let $\Pi$ be the set of all sequences $i(1), i(2), \ldots$, where $i(t) \in [K]$, $\forall t \in \mathbb{N}$; i.e., $\pi \in \Pi$ is an infinite sequence of actions (arms), also referred to as a policy. We denote the arm chosen by policy $\pi$ at time $t$ as $\pi(t)$. The objective of an agent is to maximize the expected total reward in time $T$, defined for policy $\pi \in \Pi$ by
$$J(T; \pi) = \mathbb{E}\Big[\sum_{t=1}^{T} \mu_{\pi(t)}\big(N_{\pi(t)}(t) + 1\big)\Big]. \quad (1)$$
We consider the equivalent objective of minimizing the regret in time $T$, defined by
$$R(T; \pi) = \max_{\pi^* \in \Pi}\{J(T; \pi^*)\} - J(T; \pi). \quad (2)$$

Assumption 2.1. (Rotting) $\forall i \in [K]$, $\mu_i(n)$ is positive and non-increasing in $n$.

(1) Our results hold for pull-number dependent variances $\sigma^2(n)$, by upper bounding them: $\sigma^2 \ge \sigma^2(n)$, $\forall n$. It is fairly straightforward to adapt the results to pull-number dependent variances, but we believe that the way presented conveys the setting in the clearest way.

2.1 Optimal Policy

Let $\pi^{\max}$ be a policy defined by
$$\pi^{\max}(t) \in \operatorname*{argmax}_{i \in [K]} \{\mu_i(N_i(t) + 1)\}, \quad (3)$$
where, in the case of a tie, it is broken randomly.

Lemma 2.1. $\pi^{\max}$ is an optimal policy for the RB problem.

Proof: See Appendix B of the supplementary material.
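To make the oracle benchmark of Lemma 2.1 concrete, below is a minimal simulation sketch of the RB model and the greedy policy $\pi^{\max}$ of Eq. (3). It assumes the expected-reward functions $\mu_i$ are known to the oracle; the function names and the two decay curves are illustrative choices, not taken from the paper.

# Minimal sketch of the RB model and the greedy oracle policy pi_max of
# Eq. (3) / Lemma 2.1. Assumes mu_i are known; names are illustrative only.
import numpy as np

def greedy_oracle(mu, T, sigma2=0.2, seed=0):
    rng = np.random.default_rng(seed)
    K = len(mu)
    N = [0] * K                  # N_i(t): number of pulls of arm i so far
    total_reward = 0.0
    for _ in range(T):
        i = max(range(K), key=lambda j: mu[j](N[j] + 1))   # Eq. (3)
        total_reward += mu[i](N[i] + 1) + np.sqrt(sigma2) * rng.standard_normal()
        N[i] += 1
    return N, total_reward

# Two rotting arms: one decaying with its pull count, one constant.
mu = [lambda n: 0.9 * n ** -0.3, lambda n: 0.5]
pulls, J = greedy_oracle(mu, T=1000)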
3 Non-Parametric Case

In the NPC setting for the RB problem, the only information we have is that the expected reward sequences are positive and non-increasing in the number of pulls. The Sliding-Window Average (SWA) approach is a heuristic for ensuring, with high probability, that at each time step the agent did not sample significantly sub-optimal arms too many times. We note that, potentially, the optimal arm changes throughout the trajectory, as Lemma 2.1 suggests. We start by assuming that we know the time horizon, and later account for the case we do not.

Known Horizon. The idea behind the SWA approach is that after we have pulled a significantly sub-optimal arm "enough" times, the empirical average of these "enough" pulls would be distinguishable from the optimal arm for that time step, and as such, at any given time step there is a bounded number of significantly sub-optimal pulls compared to the optimal policy. Pseudo-code for SWA is given by Algorithm 1.

Algorithm 1 SWA
  Input: $K, T, \alpha > 0$
  Initialize: $M \leftarrow \lceil \alpha\, 4^{2/3} \sigma^{2/3} K^{-2/3} T^{2/3} \ln^{1/3}(2T) \rceil$, and $N_i \leftarrow 0$ for all $i \in [K]$
  for $t = 1, 2, \ldots, KM$ do
    Ramp up: $i(t)$ by Round-Robin; receive $r_t$, and set $N_{i(t)} \leftarrow N_{i(t)} + 1$; $r^{i(t)}_{N_{i(t)}} \leftarrow r_t$
  end for
  for $t = KM + 1, \ldots, T$ do
    Balance: $i(t) \in \operatorname*{argmax}_{i \in [K]} \frac{1}{M} \sum_{n = N_i - M + 1}^{N_i} r^i_n$
    Update: receive $r_t$, and set $N_{i(t)} \leftarrow N_{i(t)} + 1$; $r^{i(t)}_{N_{i(t)}} \leftarrow r_t$
  end for

Theorem 3.1. Suppose Assumption 2.1 holds. The SWA algorithm achieves regret bounded by
$$R(T; \pi^{\mathrm{SWA}}) \le \big(\max_{i \in [K]} \mu_i(1) + \alpha^{-1/2}\big)\, \alpha\, 4^{2/3} \sigma^{2/3} K^{1/3} T^{2/3} \ln^{1/3}(2T) + 3K \max_{i \in [K]} \mu_i(1). \quad (4)$$

Proof: See Appendix C.1 of the supplementary material.

We note that the upper bound obtains its minimum for $\alpha = (2 \max_{i \in [K]} \mu_i(1))^{-2/3}$, which can serve as a way to choose $\alpha$ if $\max_{i \in [K]} \mu_i(1)$ is known, but $\alpha$ can also be given as an input to SWA to allow control of the averaging window size.

Unknown Horizon. In this case we use the doubling trick in order to achieve the same horizon-dependent rate for the regret. We apply the SWA algorithm with a series of increasing horizons (powers of two, i.e., 1, 2, 4, ...) until reaching the (unknown) horizon. We term this Algorithm wSWA (wrapper SWA).

Corollary 3.1.1. Suppose Assumption 2.1 holds. The wSWA algorithm achieves regret bounded by
$$R(T; \pi^{\mathrm{wSWA}}) \le \big(\max_{i \in [K]} \mu_i(1) + \alpha^{-1/2}\big)\, \alpha\, 8\, \sigma^{2/3} K^{1/3} T^{2/3} \ln^{1/3}(2T) + 3K \max_{i \in [K]} \mu_i(1)(\log_2 T + 1). \quad (5)$$

Proof: See Appendix C.2 of the supplementary material.

4 Parametric Case

In the PC setting for the RB problem, there is prior knowledge that the expected rewards are comprised of a sum of an unknown constant part and a rotting part known to belong to a set of models, $\Theta$; i.e., the expected reward of arm $i$ at its $n$-th pull is given by $\mu_i(n) = \mu^c_i + \mu(n; \theta^*_i)$, where $\theta^*_i \in \Theta$. We denote $\{\theta^*_i\}_{i=1}^{K}$ by $\theta^*$. We consider two cases: the first is the asymptotically vanishing case (AV), i.e., $\forall i: \mu^c_i = 0$. The second is the asymptotically non-vanishing case (ANV), i.e., $\forall i: \mu^c_i \in \mathbb{R}$.

We present a few definitions that will serve us in the following section.

Definition 4.1. For a function $f: \mathbb{N} \to \mathbb{R}$, we define the function $f^{\dashv}: \mathbb{R} \to \mathbb{N} \cup \{\infty\}$ by the following rule: given $\epsilon \in \mathbb{R}$, $f^{\dashv}(\epsilon)$ returns the smallest $N \in \mathbb{N}$ such that $\forall n \ge N: f(n) \le \epsilon$, or $\infty$ if such $N$ does not exist.

Definition 4.2. For any $\theta_1 \ne \theta_2 \in \Theta^2$, define $\mathrm{det}_{\theta_1,\theta_2}, \mathrm{Ddet}_{\theta_1,\theta_2}: \mathbb{N} \to \mathbb{R}$ as
$$\mathrm{det}_{\theta_1,\theta_2}(n) = \frac{n \sigma^2}{\big(\sum_{j=1}^{n} \mu(j; \theta_1) - \sum_{j=1}^{n} \mu(j; \theta_2)\big)^2},$$
$$\mathrm{Ddet}_{\theta_1,\theta_2}(n) = \frac{n \sigma^2}{\big(\sum_{j=1}^{\lfloor n/2 \rfloor} [\mu(j; \theta_1) - \mu(j; \theta_2)] - \sum_{j=\lfloor n/2 \rfloor + 1}^{n} [\mu(j; \theta_1) - \mu(j; \theta_2)]\big)^2}.$$

Definition 4.3. Let $\mathrm{bal}: \mathbb{N} \cup \{\infty\} \to \mathbb{N} \cup \{\infty\}$
be defined at each point $n \in \mathbb{N}$ as the solution of
$$\min \lambda \quad \text{s.t.} \quad \max_{\theta \in \Theta} \mu(\lambda; \theta) \le \min_{\theta \in \Theta} \mu(n; \theta).$$
We define $\mathrm{bal}(\infty) = \infty$.

Assumption 4.1. (Rotting Models) $\mu(n; \theta)$ is positive, non-increasing in $n$, and $\mu(n; \theta) \in o(1)$, $\forall \theta \in \Theta$, where $\Theta$ is a discrete known set.

We present an example for which, in Appendix E, we demonstrate how the different following assumptions hold. By this we intend to achieve two things: (i) show that the assumptions are not too harsh, keeping the problem relevant and non-trivial, and (ii) present a simple example of how to verify the assumptions.

Example 4.1. The reward of arm $i$ for its $n$-th pull is distributed as $\mathcal{N}(\mu^c_i + n^{-\theta^*_i}, \sigma^2)$, where $\theta^*_i \in \Theta = \{\theta_1, \theta_2, \ldots, \theta_M\}$, and $\forall \theta \in \Theta: 0.01 \le \theta \le 0.49$.

4.1 Closest To Origin (AV)

The Closest To Origin (CTO) approach for RB is a heuristic that simply states that we hypothesize that the true underlying model for an arm is the one that best fits the past rewards. The fitting criterion is proximity to the origin of the sum of expected rewards shifted by the observed rewards. Let $r^i_1, r^i_2, \ldots, r^i_{N_i(t)}$ be the sequence of rewards observed from arm $i$ up until time $t$. Define
$$Y(i, t; \theta) = \sum_{j=1}^{N_i(t)} r^i_j - \sum_{j=1}^{N_i(t)} \mu(j; \theta), \quad \theta \in \Theta. \quad (6)$$
The CTO approach dictates that at each decision point, we assume that the true underlying rotting model corresponds to the following proximity-to-origin rule (hence the name):
$$\hat{\theta}_i(t) = \operatorname*{argmin}_{\theta \in \Theta} |Y(i, t; \theta)|. \quad (7)$$

The CTOSIM version tackles the RB problem by simultaneously detecting the true rotting models and balancing between the expected rewards (following Lemma 2.1). In this approach, at every time step, each arm's rotting model is hypothesized according to the proximity rule (7). Then the algorithm simply follows an argmax rule, where the least number of pulls is used for tie breaking (randomly between an equal number of pulls). Pseudo-code for CTOSIM is given by Algorithm 2.

Assumption 4.2. (Simultaneous Balance and Detection ability)
$$\mathrm{bal}\Big(\max_{\theta_1 \ne \theta_2 \in \Theta^2} \mathrm{det}^{\dashv}_{\theta_1,\theta_2}\big(\tfrac{1}{16} \ln^{-1}(\zeta)\big)\Big) \in o(\zeta)$$

The above assumption ensures that, starting from some horizon $T$, the underlying models can be distinguished from the others, w.p. $1 - 1/T^2$, by their sums of expected rewards, and the arms can then be balanced, all within the horizon.

Theorem 4.1. Suppose Assumptions 4.1 and 4.2 hold. There exists a finite step $T^*_{\mathrm{SIM}}$ such that for all $T \ge T^*_{\mathrm{SIM}}$, CTOSIM achieves regret upper bounded by $o(1)$ (which is upper bounded by $\max_{\theta \in \Theta} \mu(1; \theta)$). Furthermore, $T^*_{\mathrm{SIM}}$ is upper bounded by the solution of the following program:
$$\min T \quad \text{s.t.} \quad T \in \mathbb{N},\ b \in \mathbb{N} \cup \{0\},\ t \in \mathbb{N}^K: \quad \|t\|_1 \le T + b,$$
$$t_i \ge \max_{\theta \in \Theta} m^{\dashv}\Big(\tfrac{1}{K(T+b)^2}; \theta\Big)\ \ \forall i, \qquad \mu(t_i + 1; \theta^*_i) \le \min_i \mu\Big(\max_{\theta \in \Theta} m^{\dashv}\Big(\tfrac{1}{K(T+b)^2}; \theta\Big); \theta^*_i\Big). \quad (8)$$

Proof: See Appendix D.1 of the supplementary material.

Regret upper bounded by $o(1)$ is achieved by proving that w.p. $1 - 1/T$ the regret vanishes, and in any case it is still bounded by a decaying term. The shown optimization bound stems from ensuring that the arms would be pulled enough times to be correctly detected, and then balanced (following the optimal policy, Lemma 2.1). Another upper bound for $T^*_{\mathrm{SIM}}$ can be found in Appendix D.1.

4.2 Differences Closest To Origin (ANV)

We tackle this problem by estimating both the rotting models and the constant terms of the arms. The Differences Closest To Origin (D-CTO) approach is composed of two stages: first, detecting the underlying rotting models, then estimating and controlling the pulls due to the constant terms.
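Before describing the second stage, here is a minimal sketch of the stage-one detection shared by the CTO variants, namely the closest-to-origin rule (7); rule (10) below is analogous, with $Z$ of Eq. (9) replacing $Y$. The helper names and the toy model set are assumptions for illustration only.

# Sketch of the closest-to-origin rule (7): pick the rotting model whose
# cumulative expected reward best matches the observed cumulative reward.
import numpy as np

def cto_detect(rewards_i, models):
    """rewards_i: observed rewards of arm i; models: the candidate set Theta,
    each entry a function mu(n) giving the expected reward at the n-th pull."""
    n = len(rewards_i)
    obs = float(np.sum(rewards_i))
    # Y(i, t; theta) = sum_j r_j^i - sum_j mu(j; theta), Eq. (6)
    Y = [obs - sum(mu(j) for j in range(1, n + 1)) for mu in models]
    return int(np.argmin(np.abs(Y)))          # argmin over Theta of |Y|, Eq. (7)

Theta = [lambda n: n ** -0.1, lambda n: n ** -0.4]    # a toy model set
theta_hat = Theta[cto_detect([0.95, 0.9, 0.88], Theta)]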
We denote $a^* = \operatorname*{argmax}_{i \in [K]} \{\mu^c_i\}$, and $\Delta_i = \mu^c_{a^*} - \mu^c_i$.

Assumption 4.3. (D-Detection ability)
$$\max_{\theta_1 \ne \theta_2 \in \Theta^2} \mathrm{Ddet}^{\dashv}_{\theta_1,\theta_2}(\epsilon) \le D(\epsilon) < \infty, \quad \forall \epsilon > 0$$

This assumption ensures that for any given probability, the models can be distinguished by the differences (in pulls) between the first and second halves of the models' sums of expected rewards.

Models Detection. In order to detect the underlying rotting models, we cancel the influence of the constant terms. Once we do this, we can detect the underlying models. Specifically, we define a criterion of proximity to the origin based on differences between the halves of the reward sequences, as follows:
$$Z(i, t; \theta) = \Big(\sum_{j=1}^{\lfloor N_i(t)/2 \rfloor} r^i_j - \sum_{j=\lfloor N_i(t)/2 \rfloor + 1}^{N_i(t)} r^i_j\Big) - \Big(\sum_{j=1}^{\lfloor N_i(t)/2 \rfloor} \mu(j; \theta) - \sum_{j=\lfloor N_i(t)/2 \rfloor + 1}^{N_i(t)} \mu(j; \theta)\Big). \quad (9)$$
The D-CTO approach is that at each decision point, we assume that the true underlying model corresponds to the following rule:
$$\hat{\theta}_i(t) = \operatorname*{argmin}_{\theta \in \Theta} |Z(i, t; \theta)|. \quad (10)$$
We define the following optimization problem, indicating the number of samples required for ensuring correct detection of the rotting models w.h.p. For some arm $i$ with (unknown) rotting model $\theta^*_i$:
$$\min m \quad \text{s.t.} \quad \mathbb{P}\big(\hat{\theta}_i(l) \ne \theta^*_i\big) \le p,\ \ \forall l \ge m, \ \text{while pulling only arm } i. \quad (11)$$
We denote the solution to the above problem, when we use proximity rule (10), by $m^*_{\mathrm{diff}}(p; \theta^*_i)$, and define $m^*_{\mathrm{diff}}(p) = \max_{\theta \in \Theta} \{m^*_{\mathrm{diff}}(p; \theta)\}$.

Algorithm 2 CTOSIM
  Input: $K, \Theta$
  Initialization: $N_i = 0$, $\forall i \in [K]$
  for $t = 1, 2, \ldots, K$ do
    Ramp up: $i(t) = t$, and update $N_{i(t)}$
  end for
  for $t = K + 1, \ldots$ do
    Detect: determine $\{\hat{\theta}_i\}$ by Eq. (7)
    Balance: $i(t) \in \operatorname*{argmax}_{i \in [K]} \mu(N_i + 1; \hat{\theta}_i)$
    Update: $N_{i(t)} \leftarrow N_{i(t)} + 1$
  end for

Algorithm 3 D-CTOUCB
  Input: $K, \Theta, \delta$
  Initialization: $N_i = 0$, $\forall i \in [K]$
  for $t = 1, 2, \ldots, K \cdot m^*_{\mathrm{diff}}(\delta/K)$ do
    Explore: $i(t)$ by Round-Robin, update $N_{i(t)}$
  end for
  Detect: determine $\{\hat{\theta}_i\}$ by Eq. (10)
  for $t = K \cdot m^*_{\mathrm{diff}}(\delta/K) + 1, \ldots$ do
    UCB: $i(t)$ according to Eq. (12)
    Update: $N_{i(t)} \leftarrow N_{i(t)} + 1$
  end for

D-CTOUCB. We next describe an approach with one decision point, and later on remark on the possibility of having a decision point at each time step. As explained above, after detecting the rotting models, we move to tackle the constant-term aspect of the expected rewards. This is done in a UCB1-like approach [Auer et al., 2002a]. Given a sequence of rewards from arm $i$, $\{r^i_k\}_{k=1}^{N_i(t)}$, we modify them using the estimated rotting model $\hat{\theta}_i$, then estimate the arm's constant term, and finally choose the arm with the highest estimated expected reward, plus an upper confidence term. I.e., at time $t$, we pull arm $i(t)$ according to the rule
$$i(t) \in \operatorname*{argmax}_{i \in [K]} \Big[\hat{\mu}^c_i(t) + \mu\big(N_i(t) + 1; \hat{\theta}_i(t)\big) + c_{t, N_i(t)}\Big], \quad (12)$$
where $\hat{\theta}_i(t)$ is the estimated rotting model (obtained in the first stage), and
$$\hat{\mu}^c_i(t) = \frac{\sum_{j=1}^{N_i(t)} \big(r^i_j - \mu(j; \hat{\theta}_i(t))\big)}{N_i(t)}, \qquad c_{t,s} = \sqrt{\frac{8 \sigma^2 \ln(t)}{s}}.$$
In case of a tie in the UCB step, it may be arbitrarily broken. Pseudo-code for D-CTOUCB is given by Algorithm 3, accompanied by the following theorem.

Theorem 4.2. Suppose Assumptions 4.1 and 4.3 hold. For $\delta \in (0, 1)$, with probability of at least $1 - \delta$, the D-CTOUCB algorithm achieves regret bounded at time $T$ by
$$\sum_{i \in [K],\, i \ne a^*} \max\Big\{ m^*_{\mathrm{diff}}(\delta/K),\ \mu^{\dashv}(\epsilon_i; \theta^*_i),\ \frac{32 \sigma^2 \ln T}{(\Delta_i - \epsilon_i)^2} \Big\} \big(\Delta_i + \mu(1; \theta^*_i)\big) + C(\theta^*, \{\mu^c_i\}) \quad (13)$$
for any sequence $\epsilon_i \in (0, \Delta_i)$, $\forall i \ne a^*$, where $\frac{32 \sigma^2 \ln T}{(\Delta_i - \epsilon_i)^2}$ is the only time-dependent factor.

Proof: See Appendix D.2 of the supplementary material.
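A minimal sketch of the UCB step (12) follows, assuming the rotting models $\hat{\theta}_i$ were already detected in stage one; the function and variable names are hypothetical, not the authors' code.

# Sketch of the D-CTOUCB pull rule, Eq. (12): subtract the detected rotting
# part, estimate the constant part, and add the confidence width c_{t,s}.
import math

def dctoucb_index(rewards_i, theta_hat_i, t, sigma2):
    s = len(rewards_i)
    # estimated constant part: mean of rewards with the rotting part removed
    mu_c_hat = sum(r - theta_hat_i(j) for j, r in enumerate(rewards_i, start=1)) / s
    c_ts = math.sqrt(8 * sigma2 * math.log(t) / s)
    return mu_c_hat + theta_hat_i(s + 1) + c_ts

# At time t, pull i(t) = argmax_i dctoucb_index(rewards[i], theta_hat[i], t, sigma2).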
A few notes on the result: instead of calculating $m^*_{\mathrm{diff}}(\delta/K)$, it is possible to use any upper bound (e.g., as shown in Appendix E, $\max_{\theta_1 \ne \theta_2 \in \Theta^2} \mathrm{Ddet}^{\dashv}_{\theta_1,\theta_2}\big((8 \ln \tfrac{2K}{\delta})^{-1}\big)$ rounded up to an even number). We cannot hope for a better rate than $\ln T$, as the stochastic MAB is a special case of the RB problem. Finally, we can convert the D-CTOUCB algorithm to have a decision point in each step: at each time step, determine the rotting models according to proximity rule (10), followed by pulling an arm according to Eq. (12). We term this version D-CTOSIM-UCB.

5 Simulations

We next compare the performance of the SWA and CTO approaches with benchmark algorithms.

Setups. For all the simulations we use Normal distributions with $\sigma^2 = 0.2$, and $T = 30{,}000$.

Non-Parametric: $K = 2$. As for the expected rewards: $\mu_1(n) = 0.5$, $\forall n$, and $\mu_2(n) = 1$ for its first 7,500 pulls and 0.4 afterwards. This setup is aimed to show the importance of not relying on the whole past rewards in the RB setting.

[Table 1: Number of "wins" and p-values between the different algorithms (UCB1, DUCB, SWUCB, wSWA, and (D-)CTO, over the NP, AV, and ANV cases). The individual entries did not survive extraction; the reading convention is explained in the Results paragraph below.]

[Figure 1: Average regret. Left: non-parametric. Middle: parametric AV. Right: parametric ANV.]

Parametric AV & ANV: $K = 10$. The rotting models are of the form $\mu(j; \theta) = \big(\mathrm{int}(\tfrac{j}{100}) + 1\big)^{-\theta}$, where $\mathrm{int}(\cdot)$ is the lower rounded integer, and $\Theta = \{0.1, 0.15, \ldots, 0.4\}$ (i.e., plateaus of length 100, with decay between plateaus according to $\theta$). $\{\theta^*_i\}_{i=1}^{K}$ were sampled with replacement from $\Theta$, independently across arms and trajectories. $\{\mu^c_i\}_{i=1}^{K}$ (ANV) were sampled randomly from $[0, 0.5]^K$.

Algorithms. We implemented standard benchmark algorithms for non-stationary MAB: UCB1 by Auer et al. [2002a], and Discounted UCB (DUCB) and Sliding-Window UCB (SWUCB) by Garivier and Moulines [2008]. We implemented CTOSIM, D-CTOSIM-UCB, and wSWA for the relevant setups. We note that adversarial benchmark algorithms are not relevant in this case, as the rewards are unbounded.

Grid Searches were performed to determine the algorithms' parameters. For DUCB, following Kocsis and Szepesvári [2006], the discount factor was chosen from $\gamma \in \{0.9, 0.99, \ldots, 0.999999\}$, the window size for SWUCB from $\tau \in \{1\mathrm{e}3, 2\mathrm{e}3, \ldots, 20\mathrm{e}3\}$, and $\alpha$ for wSWA from $\{0.2, 0.4, \ldots, 1\}$.

Performance. For each of the cases, we present a plot of the average regret over 100 trajectories, specify the number of "wins" of each algorithm over the others, and report the p-value of a paired T-test between the (end of trajectory) regrets of each pair of algorithms. For each trajectory and two algorithms, the "winner" is defined as the algorithm with the lesser regret at the end of the horizon.

Results. The parameters chosen by the grid search are as follows: $\gamma = 0.999$ for the non-parametric case, and 0.999999 for the parametric cases;
$\tau = 4\mathrm{e}3$, $8\mathrm{e}3$, and $16\mathrm{e}3$ for the non-parametric, AV, and ANV cases, respectively; and $\alpha = 0.2$ was chosen for all cases. The average regret for the different algorithms is given by Figure 1. Table 1 shows the number of "wins" and p-values. The table is to be read as follows: the entries under the diagonal are the number of times the algorithms from the left column "won" against the algorithms from the top row, and the entries above the diagonal are the p-values between the two. While there is no clear "winner" between the three benchmark algorithms across the different cases, wSWA, which does not require any prior knowledge, consistently and significantly outperformed them. In addition, when prior knowledge was available and CTOSIM or D-CTOSIM-UCB could be deployed, they outperformed all the others, including wSWA.

6 Related Work

We turn to reviewing related work while emphasizing the differences from our problem.

Stochastic MAB. In the stochastic MAB setting [Lai and Robbins, 1985], the underlying reward distributions are stationary over time. The notion of regret is the same as in our work, but the optimal policy in this setting is one that pulls a fixed arm throughout the trajectory. The two most common approaches for this problem are: constructing Upper Confidence Bounds, which stem from the seminal work by Gittins [1979], in which he proved that index policies that compute upper confidence bounds on the expected rewards of the arms are optimal in this case (e.g., see Auer et al. [2002a], Garivier and Cappé [2011], Maillard et al. [2011]); and Bayesian heuristics such as Thompson Sampling, which was first presented by Thompson [1933] in the context of drug treatments (e.g., see Kaufmann et al. [2012], Agrawal and Goyal [2013], Gopalan et al. [2014]).

Adversarial MAB. In the Adversarial MAB setting (also referred to as the Experts Problem; see the book of Cesa-Bianchi and Lugosi [2006] for a review), the sequences of rewards are selected by an adversary (i.e., can be arbitrary). In this setting the notion of adversarial regret is adopted [Auer et al., 2002b, Hazan and Kale, 2011], where the regret is measured against the best possible fixed action that could have been taken in hindsight. This is as opposed to the policy regret we adopt, where the regret is measured against the best sequence of actions in hindsight.

Hybrid models. Some past works consider settings between the Stochastic and the Adversarial settings. Garivier and Moulines [2008] consider the case where the reward distributions remain constant over epochs and change arbitrarily at unknown time instants, similarly to Yu and Mannor [2009], who consider the same setting, only with the availability of side observations. Chakrabarti et al. [2009] consider the case where arms can expire and be replaced with new arms with arbitrary expected reward, but as long as an arm does not expire its statistics remain the same.

Non-Stationary MAB. Most related to our problem is the so-called Non-Stationary MAB. Originally proposed by Jones and Gittins [1972], who considered a case where the reward distribution of a chosen arm can change, it gave rise to a sequence of works (e.g., Whittle et al. [1981], Tekin and Liu [2012]) which were termed Restless Bandits and Rested Bandits. In the Restless Bandits setting, termed by Whittle [1988], the reward distributions change in each step according to a known stochastic process. Komiyama and Qin [2014] consider the case where each arm decays according to a linear combination of decaying basis functions.
This is similar to our parametric case in that the reward distributions decay according to possible models, but differs fundamentally in that it belongs to the Restless Bandits setup (ours to the Rested Bandits). More examples in this line of work are Slivkins and Upfal [2008], who consider evolution of rewards according to Brownian motion, and Besbes et al. [2014], who consider bounded total variation of expected rewards. The latter is related to our setting by considering the case where the total variation is bounded by a constant, but differs significantly in that it considers the case where the (unknown) expected reward sequences are not affected by actions taken, and in addition requires bounded support, as it uses EXP3 as a sub-routine.

In the Rested Bandits setting, only the reward distribution of a chosen arm changes, which is the case we consider. An optimal control policy (reward processes are known, no learning required) for bandits with non-increasing rewards and a discount factor was previously presented (e.g., Mandelbaum [1987], and Kaspi and Mandelbaum [1998]). Heidari et al. [2016] consider the case where the reward decays (as we do), but with no statistical noise (deterministic rewards), which significantly simplifies the problem. Another somewhat closely related setting is suggested by Bouneffouf and Feraud [2016], in which statistical noise exists, but the expected reward shape is known up to a multiplicative factor.

7 Discussion

We introduced a novel variant of the Rested Bandits framework, which we termed Rotting Bandits. This setting deals with the case where the expected rewards generated by an arm decay (or generally do not increase) as a function of pulls of that arm. This is motivated by many real-world scenarios.

We first tackled the non-parametric case, where there is no prior knowledge on the nature of the decay. We introduced an easy-to-follow algorithm accompanied by theoretical guarantees. We then tackled the parametric case, and differentiated between two scenarios: expected rewards decay to zero (AV), and decay to different constants (ANV). For both scenarios we introduced suitable algorithms with stronger guarantees than for the non-parametric case: for the AV scenario we introduced an algorithm ensuring, in expectation, regret upper bounded by a term that decays to zero with the horizon; for the ANV scenario we introduced an algorithm ensuring, with high probability, regret upper bounded by a horizon-dependent rate which is optimal for the stationary case. We concluded with simulations that demonstrated our algorithms' superiority over benchmark algorithms for non-stationary MAB. We note that since the RB setting is novel, there are no suitable available benchmarks, and so this paper also serves as a benchmark.

For future work we see two main interesting directions: (i) show a lower bound on the regret for the non-parametric case, and (ii) extend the scope of the parametric case to continuous parameterization.

Acknowledgment

The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP/2007-2013) / ERC Grant Agreement n. 306638.

References

D. Agarwal, B.-C. Chen, and P. Elango. Spatio-temporal models for estimating click-through rate. In Proceedings of the 18th International Conference on World Wide Web, pages 21-30. ACM, 2009.
S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson sampling. In AISTATS, pages 99-107, 2013.
R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. arXiv preprint arXiv:1206.6400, 2012.
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002a.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002b.
B. Awerbuch and R. D. Kleinberg. Adaptive routing with end-to-end feedback: Distributed learning and geometric approaches. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, pages 45-53. ACM, 2004.
O. Besbes, Y. Gur, and A. Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. In Advances in Neural Information Processing Systems, pages 199-207, 2014.
D. Bouneffouf and R. Feraud. Multi-armed bandit problem with known trend. Neurocomputing, 205:16-21, 2016.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
D. Chakrabarti, R. Kumar, F. Radlinski, and E. Upfal. Mortal multi-armed bandits. In Advances in Neural Information Processing Systems, pages 273-280, 2009.
S. Du, M. Ibrahim, M. Shehata, and W. Badawy. Automatic license plate recognition (ALPR): A state-of-the-art review. IEEE Transactions on Circuits and Systems for Video Technology, 23(2):311-325, 2013.
A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In COLT, pages 359-376, 2011.
A. Garivier and E. Moulines. On upper-confidence bound policies for non-stationary bandit problems. arXiv preprint arXiv:0805.3415, 2008.
J. C. Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B (Methodological), pages 148-177, 1979.
A. Gopalan, S. Mannor, and Y. Mansour. Thompson sampling for complex online problems. In ICML, volume 14, pages 100-108, 2014.
E. Hazan and S. Kale. Better algorithms for benign bandits. Journal of Machine Learning Research, 12(Apr):1287-1311, 2011.
H. Heidari, M. Kearns, and A. Roth. Tight policy regret bounds for improving and decaying bandits.
D. M. Jones and J. C. Gittins. A dynamic allocation index for the sequential design of experiments. University of Cambridge, Department of Engineering, 1972.
H. Kaspi and A. Mandelbaum. Multi-armed bandits in discrete and continuous time. Annals of Applied Probability, pages 1270-1290, 1998.
E. Kaufmann, N. Korda, and R. Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In International Conference on Algorithmic Learning Theory, pages 199-213. Springer, 2012.
R. Kleinberg and T. Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, pages 594-605. IEEE, 2003.
L. Kocsis and C. Szepesvári. Discounted UCB. In 2nd PASCAL Challenges Workshop, pages 784-791, 2006.
J. Komiyama and T. Qin. Time-decaying bandits for non-stationary systems. In International Conference on Web and Internet Economics, pages 460-466. Springer, 2014.
T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
O.-A. Maillard, R. Munos, and G. Stoltz. A finite-time analysis of multi-armed bandits problems with Kullback-Leibler divergences. In COLT, pages 497-514, 2011.
A. Mandelbaum. Continuous multi-armed bandits and multiparameter processes. The Annals of Probability, pages 1527-1556, 1987.
S. Pandey, D. Agarwal, D. Chakrabarti, and V. Josifovski. Bandits for taxonomies: A model-based approach. In SDM, pages 216-227. SIAM, 2007.
H. Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected Papers, pages 169-177. Springer, 1985.
A. Slivkins and E. Upfal. Adapting to a changing environment: the Brownian restless bandits. In COLT, pages 343-354, 2008.
C. Tekin and M. Liu. Online learning of rested and restless bandits. IEEE Transactions on Information Theory, 58(8):5588-5611, 2012.
W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
L. Tran-Thanh, S. Stein, A. Rogers, and N. R. Jennings. Efficient crowdsourcing of unknown experts using multi-armed bandits. In European Conference on Artificial Intelligence, pages 768-773, 2012.
P. Whittle. Restless bandits: Activity allocation in a changing world. Journal of Applied Probability, pages 287-298, 1988.
P. Whittle et al. Arm-acquiring bandits. The Annals of Probability, 9(2):284-292, 1981.
J. Y. Yu and S. Mannor. Piecewise-stationary bandit problems with side observations. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1177-1184. ACM, 2009.
Unbiased estimates for linear regression via volume sampling

Michał Dereziński, Department of Computer Science, University of California Santa Cruz, [email protected]
Manfred K. Warmuth, Department of Computer Science, University of California Santa Cruz, [email protected]

Abstract

Given a full rank matrix $X$ with more columns than rows, consider the task of estimating the pseudo inverse $X^+$ based on the pseudo inverse of a sampled subset of columns (of size at least the number of rows). We show that this is possible if the subset of columns is chosen proportional to the squared volume spanned by the rows of the chosen submatrix (i.e., volume sampling). The resulting estimator is unbiased and, surprisingly, the covariance of the estimator also has a closed form: it equals a specific factor times $X^{+\top} X^+$. The pseudo inverse plays an important part in solving the linear least squares problem, where we try to predict a label for each column of $X$. We assume labels are expensive and we are only given the labels for the small subset of columns we sample from $X$. Using our methods we show that the weight vector of the solution for the sub-problem is an unbiased estimator of the optimal solution for the whole problem based on all column labels. We believe that these new formulas establish a fundamental connection between linear least squares and volume sampling. We use our methods to obtain an algorithm for volume sampling that is faster than state-of-the-art, and to obtain bounds for the total loss of the estimated least-squares solution on all labeled columns.

1 Introduction

Let $X$ be a wide full rank matrix with $d$ rows and $n$ columns, where $n \ge d$. Our goal is to estimate the pseudo inverse $X^+$ of $X$ based on the pseudo inverse of a subset of columns. More precisely, we sample a subset $S \subseteq \{1..n\}$ of $s$ column indices (where $s \ge d$). We let $X_S$ be the sub-matrix of the $s$ columns indexed by $S$ (see Figure 1). Consider a version of $X$ in which all but the columns of $S$ are zero. This matrix equals $X I_S$, where $I_S$ is an $n$-dimensional diagonal matrix with $(I_S)_{ii} = 1$ if $i \in S$ and 0 otherwise.

We assume that the set of $s$ column indices of $X$ is selected proportional to the squared volume spanned by the rows of the submatrix $X_S$, i.e. proportional to $\det(X_S X_S^\top)$, and prove a number of new surprising expectation formulas for this type of volume sampling, such as
$$\mathbb{E}[(X I_S)^+] = X^+ \quad\text{and}\quad \mathbb{E}\big[\underbrace{(X_S X_S^\top)^{-1}}_{(X I_S)^{+\top}(X I_S)^+}\big] = \frac{n-d+1}{s-d+1}\, X^{+\top} X^+.$$

[Figure 1: Set $S$ may not be consecutive.]

Note that $(X I_S)^+$ has the $n \times d$ shape of $X^+$, where the $s$ rows indexed by $S$ contain $(X_S)^+$ and the remaining $n - s$ rows are zero. The expectation of this matrix is $X^+$ even though $(X_S)^+$ is clearly not a sub-matrix of $X^+$.
What is the smallest number of labels s necessary, for which there is a sampling procedure on sets S of size s st the expected loss of w?(S) is at most a constant factor larger than the loss of w? that uses all n labels (where the constant is independent of n)? More precisely, using the short hand L(w) = ||X> w ? y||2 for the loss on all n labels, what is the smallest size s such that E[L(w?(S))] ? const L(w? ). This question is a version of the ?minimal coresets? open problem posed in [3]. The size has to be at least d and one can show that randomization is necessary in that any deterministic algorithm for choosing a set of d columns can suffer loss larger by a factor of n. Also any iid sampling of S (such as the commonly used leverage scores [8]) requires at least ?(d log d) examples to achieve a finite factor. In this paper however we show that with a size d volume sample, E[L(w?(S))] = (d + 1)L(w? ) if X is in general position. Note again that we have equality and not just an upper bound. Also we can show that the multiplicative factor d + 1 is optimal. We further improve this factor to 1 +  via repeated volume sampling. Moreover, our expectation formulas imply that when S is size s ? d volume sampled, then w?(S) is an unbiased estimator for w? , ie E[w?(S)] = w? . 2 Related work Volume sampling is an extension of a determinantal point process [15], which has been given a lot of attention in the literature with many applications to machine learning, including recommendation systems [10] and clustering [13]. Many exact and approximate methods for efficiently generating samples from this distribution have been proposed [6, 14], making it a useful tool in the design of randomized algorithms. Most of those methods focus on sampling s ? d elements. In this paper, we study volume sampling sets of size s ? d, which has been proposed in [1] and motivated with applications in graph theory, linear regression, matrix approximation and more. The only known polynomial time algorithm for size s > d volume sampling was recently proposed in [16] with time complexity O(n4 s). We offer a new algorithm with runtime O((n ? s + d)nd), which is faster by a factor of at least n2 . The problem of selecting a subset of input vectors for solving a linear regression task has been extensively studied in statistics literature under the terms optimal design [9] and pool-based active learning [19]. Various criteria for subset selection have been proposed, like A-optimality and D?1 optimality. For example, A-optimality seeks to minimize tr((XS X> ), which is combinatorially S) ?1 hard to optimize exactly. We show that for size s volume sampling (for s ? d), E[(XS X> ]= S) n?d+1 +> + X X which provides an approximate randomized solution for this task. s?d+1 A related task has been explored in the field of computational geometry, where efficient algorithms are sought for approximately solving linear regression and matrix approximation [17, 5, 3]. Here, multiplicative bounds on the loss of the approximate solution can be achieved via two approaches: Subsampling the vectors of the design matrix, and sketching the design matrix X and the label vector y by multiplying both by the same suitably chosen random matrix. 
Algorithms which use sketching to generate a smaller design matrix for a given linear regression problem are computationally efficient [18, 5], but unlike vector subsampling, they require all of the labels from the original problem to generate the sketch, so they do not apply directly to our setting of using as few labels as possible. The main competitor to volume sampling for linear regression is i.i.d. sampling using the statistical leverage scores [8]. However, we show in this paper that any i.i.d. sampling method requires sample size $\Omega(d \log d)$ to achieve multiplicative loss bounds. On the other hand, the input vectors obtained from volume sampling are selected jointly, and this makes the chosen subset more informative. We show that just $d$ volume sampled columns are sufficient to achieve a multiplicative bound. Volume sampling of size $s \ge d$ has also been used in this line of work by [7, 11] for matrix approximation.

3 Unbiased estimators

Let $n$ be an integer dimension. For each subset $S \subseteq \{1..n\}$ of size $s$ we are given a matrix formula $\mathbf{F}(S)$. Our goal is to sample a set $S$ of size $s$ using some sampling process and then develop concise expressions for $\mathbb{E}_{S:|S|=s}[\mathbf{F}(S)]$. Examples of formula classes $\mathbf{F}(S)$ will be given below.

We represent the sampling by a directed acyclic graph (dag), with a single root node corresponding to the full set $\{1..n\}$. Starting from the root, we proceed along the edges of the graph, iteratively removing elements from the set $S$. Concretely, consider a dag with levels $s = n, n-1, \ldots, d$. Level $s$ contains $\binom{n}{s}$ nodes for sets $S \subseteq \{1..n\}$ of size $s$. Every node $S$ at level $s > d$ has $s$ directed edges to the nodes $S_{-i}$ at the next lower level. These edges are labeled with a conditional probability vector $P(S_{-i}|S)$. The probability of a (directed) path is the product of the probabilities along its edges. The outflow of probability from each node on all but the bottom level is 1. We let the probability $P(S)$ of node $S$ be the probability of all paths from the top node $\{1..n\}$ to $S$, and set the probability $P(\{1..n\})$ of the top node to 1. We associate a formula $\mathbf{F}(S)$ with each set node $S$ in the dag. The following key equality lets us compute expectations.

Lemma 1. If for all $S \subseteq \{1..n\}$ of size greater than $d$ we have
$$\mathbf{F}(S) = \sum_{i \in S} P(S_{-i}|S)\, \mathbf{F}(S_{-i}),$$
then for any $s \in \{d..n\}$: $\ \mathbb{E}_{S:|S|=s}[\mathbf{F}(S)] = \sum_{S:|S|=s} P(S)\, \mathbf{F}(S) = \mathbf{F}(\{1..n\})$.

Proof. It suffices to show that expectations at successive layers are equal:
$$\sum_{S:|S|=s} P(S)\, \mathbf{F}(S) = \sum_{S:|S|=s} P(S) \sum_{i \in S} P(S_{-i}|S)\, \mathbf{F}(S_{-i}) = \sum_{T:|T|=s-1} \underbrace{\sum_{j \notin T} P(T_{+j})\, P(T|T_{+j})}_{P(T)}\, \mathbf{F}(T).$$

3.1 Volume sampling

Given a wide full-rank matrix $X \in \mathbb{R}^{d \times n}$ and a sample size $s \in \{d..n\}$, volume sampling chooses a subset $S \subseteq \{1..n\}$ of size $s$ with probability proportional to the volume spanned by the rows of the submatrix $X_S$, i.e. proportional to $\det(X_S X_S^\top)$. The following corollary uses the above dag setup to compute the normalization constant for this distribution. When $s = d$, the corollary provides a novel minimalist proof for the Cauchy-Binet formula: $\sum_{S:|S|=s} \det(X_S X_S^\top) = \det(X X^\top)$.

Corollary 2. Let $X \in \mathbb{R}^{d \times n}$ and $S \subseteq \{1..n\}$ of size $n \ge s > d$ such that $\det(X_S X_S^\top) > 0$. Then for any set $S$ of size larger than $d$ and $i \in S$, define the probability of the edge from $S$ to $S_{-i}$ as
$$P(S_{-i}|S) := \frac{\det(X_{S_{-i}} X_{S_{-i}}^\top)}{(s-d)\det(X_S X_S^\top)} = \frac{1 - x_i^\top (X_S X_S^\top)^{-1} x_i}{s-d}, \qquad \text{(reverse iterative volume sampling)}$$
where $x_i$ is the $i$-th column of $X$ and $X_S$ is the sub-matrix of columns indexed by $S$. Then $P(S_{-i}|S)$ is a proper probability distribution, and thus $\sum_{S:|S|=s} P(S) = 1$ for all $s \in \{d..n\}$.
Furthermore,
$$P(S) = \frac{\det(X_S X_S^\top)}{\binom{n-d}{s-d} \det(X X^\top)}. \qquad \text{(volume sampling)}$$

Proof. First, for any node $S$ with $s > d$ and $\det(X_S X_S^\top) > 0$, the probabilities out of $S$ sum to 1:
$$\sum_{i \in S} P(S_{-i}|S) = \sum_{i \in S} \frac{1 - \mathrm{tr}\big((X_S X_S^\top)^{-1} x_i x_i^\top\big)}{s-d} = \frac{s - \mathrm{tr}\big((X_S X_S^\top)^{-1} X_S X_S^\top\big)}{s-d} = \frac{s-d}{s-d} = 1.$$
It remains to show the formula for the probability $P(S)$ of all paths ending at node $S$. Consider any path from the root $\{1..n\}$ to $S$. There are $(n-s)!$ such paths. The fractions of determinants in the probabilities along each path telescope(1), and the additional factors accumulate to the same product. So the probability of all paths from the root to $S$ is the same, and the total probability into $S$ is
$$(n-s)!\, \frac{\det(X_S X_S^\top)}{(n-d)(n-d-1)\cdots(s-d+1)\det(X X^\top)} = \frac{1}{\binom{n-d}{s-d}}\, \frac{\det(X_S X_S^\top)}{\det(X X^\top)}.$$

3.2 Expectation formulas for volume sampling

All expectations in the remainder of the paper are with respect to volume sampling. We use the shorthand $\mathbb{E}[\mathbf{F}(S)]$ for expectation with volume sampling where the size of the sampled set is fixed to $s$. The expectation formulas for two choices of $\mathbf{F}(S)$ are proven in the next two theorems. By Lemma 1 it suffices to show $\mathbf{F}(S) = \sum_{i \in S} P(S_{-i}|S)\, \mathbf{F}(S_{-i})$ for volume sampling.

We introduce a bit more notation first. Recall that $X_S$ is the sub-matrix of columns indexed by $S \subseteq \{1..n\}$ (see Figure 1). Consider a version of $X$ in which all but the columns of $S$ are zero. This matrix equals $X I_S$, where $I_S$ is an $n$-dimensional diagonal matrix with $(I_S)_{ii} = 1$ if $i \in S$ and 0 otherwise.

Theorem 3. Let $X \in \mathbb{R}^{d \times n}$ be a wide full rank matrix (i.e. $n \ge d$). For $s \in \{d..n\}$, let $S \subseteq \{1..n\}$ be a size $s$ volume sampled set over $X$. Then
$$\mathbb{E}[(X I_S)^+] = X^+.$$

We believe that this fundamental formula lies at the core of why volume sampling is important in many applications. In this work, we focus on its application to linear regression. However, [1] discusses many problems where controlling the pseudo-inverse of a submatrix is essential. For those applications, it is important to establish variance bounds for the estimator offered by Theorem 3. In this case, volume sampling once again offers very concrete guarantees. We obtain them by showing the following formula, which can be viewed as a second moment for this estimator.

Theorem 4. Let $X \in \mathbb{R}^{d \times n}$ be a full-rank matrix and $s \in \{d..n\}$. If size $s$ volume sampling over $X$ has full support, then
$$\mathbb{E}\big[\underbrace{(X_S X_S^\top)^{-1}}_{(X I_S)^{+\top}(X I_S)^+}\big] = \frac{n-d+1}{s-d+1}\, \underbrace{(X X^\top)^{-1}}_{X^{+\top} X^+}.$$
If volume sampling does not have full support, then the matrix equality "=" is replaced by the positive-definite inequality "$\preceq$".

The condition that size $s$ volume sampling over $X$ has full support is equivalent to $\det(X_S X_S^\top) > 0$ for all $S \subseteq \{1..n\}$ of size $s$. Note that if size $s$ volume sampling has full support, then size $t > s$ also has full support. So full support for the smallest size $d$ (often phrased as $X$ being in general position) implies that volume sampling with respect to any size $s \ge d$ has full support.

Surprisingly, by combining Theorems 3 and 4, we can obtain a "covariance type formula" for the pseudo-inverse matrix estimator:
$$\mathbb{E}\big[\big((X I_S)^+ - \mathbb{E}[(X I_S)^+]\big)^\top \big((X I_S)^+ - \mathbb{E}[(X I_S)^+]\big)\big] = \mathbb{E}[(X I_S)^{+\top} (X I_S)^+] - \mathbb{E}[(X I_S)^+]^\top\, \mathbb{E}[(X I_S)^+]$$
$$= \frac{n-d+1}{s-d+1}\, X^{+\top} X^+ - X^{+\top} X^+ = \frac{n-s}{s-d+1}\, X^{+\top} X^+. \quad (1)$$

Theorem 4 can also be used to obtain an expectation formula for the Frobenius norm $\|(X I_S)^+\|_F$ of the estimator:
$$\mathbb{E}\,\|(X I_S)^+\|_F^2 = \mathbb{E}\big[\mathrm{tr}\big((X I_S)^{+\top} (X I_S)^+\big)\big] = \frac{n-d+1}{s-d+1}\, \|X^+\|_F^2. \quad (2)$$
This norm formula has been shown in [1], with numerous applications. Theorem 4 can be viewed as a much stronger pre-trace version of the norm formula.
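As a sanity check on these formulas, the following minimal Python sketch (an illustration under small random instances, not the paper's fast implementation of Section 5) implements the reverse iterative sampler of Corollary 2 and verifies $\mathbb{E}[(X I_S)^+] = X^+$ (Theorem 3) by Monte Carlo.

# (a) Reverse iterative volume sampling (Corollary 2): starting from S = {1..n},
# repeatedly drop i in S with prob. (1 - x_i^T (X_S X_S^T)^{-1} x_i) / (s - d).
import numpy as np

def volume_sample(X, s, rng):
    d, n = X.shape
    S = list(range(n))
    while len(S) > s:
        XS = X[:, S]
        Ainv = np.linalg.inv(XS @ XS.T)
        lev = np.einsum('ij,jk,ki->i', XS.T, Ainv, XS)    # x_i^T Ainv x_i
        p = np.clip(1.0 - lev, 0.0, None) / (len(S) - d)  # P(S_{-i} | S)
        S.pop(rng.choice(len(S), p=p / p.sum()))          # clip/renormalize
    return S                                              # guards float error

# (b) Monte Carlo check of Theorem 3: the average of (X I_S)^+ over volume
# sampled sets approaches X^+ = pinv(X).
rng = np.random.default_rng(1)
d, n, s, trials = 3, 8, 5, 20000
X = rng.standard_normal((d, n))
acc = np.zeros((n, d))
for _ in range(trials):
    S = volume_sample(X, s, rng)
    P = np.zeros((n, d))
    P[S, :] = np.linalg.pinv(X[:, S])   # (X I_S)^+: rows on S, zeros elsewhere
    acc += P
print(np.abs(acc / trials - np.linalg.pinv(X)).max())     # shrinks with trials

The same loop, accumulating $(X_S X_S^\top)^{-1}$ padded to its support instead of the pseudo inverse, checks Theorem 4 in the same way.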
Our proof techniques are also quite different and much simpler. Note that if size $s$ volume sampling for $X$ does not have full support, then (1) becomes a semi-definite inequality $\preceq$ between matrices and (2) an inequality between numbers.

(1) Note that 0/0 determinant ratios are avoided along the path, because paths with such ratios always lead to sets of probability 0, and in the corollary we only consider paths to nodes $S$ for which $\det(X_S X_S^\top) > 0$.

Proof of Theorem 3. We apply Lemma 1 with $\mathbf{F}(S) = (X I_S)^+$. It suffices to show $\mathbf{F}(S) = \sum_{i \in S} P(S_{-i}|S)\, \mathbf{F}(S_{-i})$ for $P(S_{-i}|S) := \frac{1 - x_i^\top (X_S X_S^\top)^{-1} x_i}{s-d}$, i.e.:
$$(X I_S)^+ = \sum_{i \in S} \frac{1 - x_i^\top (X_S X_S^\top)^{-1} x_i}{s-d}\, \underbrace{(X I_{S_{-i}})^+}_{(X I_{S_{-i}})^\top (X_{S_{-i}} X_{S_{-i}}^\top)^{-1}}.$$
This is proven by applying Sherman-Morrison to $(X_{S_{-i}} X_{S_{-i}}^\top)^{-1} = (X_S X_S^\top - x_i x_i^\top)^{-1}$ on the right-hand side:
$$\sum_{i \in S} \frac{1 - x_i^\top (X_S X_S^\top)^{-1} x_i}{s-d}\, \big((X I_S)^\top - e_i x_i^\top\big) \Big((X_S X_S^\top)^{-1} + \frac{(X_S X_S^\top)^{-1} x_i x_i^\top (X_S X_S^\top)^{-1}}{1 - x_i^\top (X_S X_S^\top)^{-1} x_i}\Big).$$
We now expand the last two factors into 4 terms. The sum over $i$ of the first, $(X I_S)^\top (X_S X_S^\top)^{-1}$, gives $(X I_S)^+$ (which is the left-hand side), and the remaining three terms times $s-d$ sum to 0:
$$\sum_{i \in S} \Big(-(1 - x_i^\top (X_S X_S^\top)^{-1} x_i)\, e_i x_i^\top (X_S X_S^\top)^{-1} + (X I_S)^\top (X_S X_S^\top)^{-1} x_i x_i^\top (X_S X_S^\top)^{-1} - e_i \big(x_i^\top (X_S X_S^\top)^{-1} x_i\big)\, x_i^\top (X_S X_S^\top)^{-1}\Big) = 0.$$

Proof of Theorem 4. Choose $\mathbf{F}(S) = \frac{s-d+1}{n-d+1}\,(X_S X_S^\top)^{-1}$. By Lemma 1 it suffices to show $\mathbf{F}(S) = \sum_{i \in S} P(S_{-i}|S)\, \mathbf{F}(S_{-i})$ for volume sampling, i.e.:
$$(s-d+1)\,(X_S X_S^\top)^{-1} = \sum_{i \in S} \big(1 - x_i^\top (X_S X_S^\top)^{-1} x_i\big)\,(X_{S_{-i}} X_{S_{-i}}^\top)^{-1}.$$
To show this we apply Sherman-Morrison to $(X_{S_{-i}} X_{S_{-i}}^\top)^{-1}$ on the right-hand side:
$$\sum_{i \in S} \big(1 - x_i^\top (X_S X_S^\top)^{-1} x_i\big) \Big((X_S X_S^\top)^{-1} + \frac{(X_S X_S^\top)^{-1} x_i x_i^\top (X_S X_S^\top)^{-1}}{1 - x_i^\top (X_S X_S^\top)^{-1} x_i}\Big)$$
$$= (s-d)\,(X_S X_S^\top)^{-1} + (X_S X_S^\top)^{-1} X_S X_S^\top (X_S X_S^\top)^{-1} = (s-d+1)\,(X_S X_S^\top)^{-1}.$$
If some denominators $1 - x_i^\top (X_S X_S^\top)^{-1} x_i$ are zero, then we only sum over the $i$ for which the denominators are positive. In this case the above matrix equality becomes a positive-definite inequality $\preceq$.

4 Linear regression with few labels

Our main motivation for studying volume sampling came from asking the following simple question. Suppose we want to solve a $d$-dimensional linear regression problem with a matrix $X \in \mathbb{R}^{d \times n}$ of input column vectors and a label vector $y \in \mathbb{R}^n$, i.e. find $w \in \mathbb{R}^d$ that minimizes the least squares loss $L(w) = \|X^\top w - y\|^2$:
$$w^* \stackrel{\mathrm{def}}{=} \operatorname*{argmin}_{w \in \mathbb{R}^d} L(w) = X^{+\top} y,$$
but the access to the label vector $y$ is restricted. We are allowed to pick a subset $S \subseteq \{1..n\}$ for which the labels $y_i$ (where $i \in S$) are revealed to us, and then solve the subproblem $(X_S, y_S)$, obtaining $w^*(S)$.

[Figure 2: The unbiased estimator $w^*(S)$ in expectation suffers loss $(d+1)\, L(w^*)$.]

What is the smallest number of labels such that for any $X$, we can find $w^*(S)$ for which $L(w^*(S))$ is only a multiplicative factor away from $L(w^*)$ (independent of the number of input vectors $n$)? This question was posed as an open problem by [3]. It is easy to show that we need at least $d$ labels (when $X$ is full-rank), so as to guarantee the uniqueness of the solution $w^*(S)$. We use volume sampling to show that $d$ labels are in fact sufficient (proof in Section 4.1).

Theorem 5. If the input matrix $X \in \mathbb{R}^{d \times n}$ is in general position, then for any label vector $y \in \mathbb{R}^n$,
the expected square loss (on all $n$ labeled vectors) of the optimal solution $w^*(S)$ for the subproblem $(X_S, y_S)$, with the $d$-element set $S$ obtained from volume sampling, is given by
$$\mathbb{E}[L(w^*(S))] = (d + 1)\, L(w^*).$$
If $X$ is not in general position, then the expected loss is upper-bounded by $(d+1)\, L(w^*)$.

The factor $d+1$ cannot be improved when selecting only $d$ labels (we omit the proof):

Proposition 6. For any $d$, there exists a least squares problem $(X, y)$ with $d+1$ vectors in $\mathbb{R}^d$ such that for every $d$-element index set $S \subseteq \{1, \ldots, d+1\}$, we have $L(w^*(S)) = (d+1)\, L(w^*)$.

Note that the multiplicative factor in Theorem 5 does not depend on $n$. It is easy to see that this cannot be achieved by any deterministic algorithm (without access to the labels). Namely, suppose that $d = 1$ and $X$ is a vector of all ones, whereas the label vector $y$ is a vector of all ones except for a single zero. No matter which column index we choose deterministically, if that index corresponds to the label 0, the solution to the subproblem will incur loss $L(w^*(S)) = n\, L(w^*)$.

The fact that volume sampling is a joint distribution also plays an essential role in proving Theorem 5. Consider a matrix $X$ with exactly $d$ unique linearly independent columns (and an arbitrary number of duplicates). Any i.i.d. column sampling distribution (like, for example, leverage score sampling) will require $\Omega(d \log d)$ samples to retrieve all $d$ unique columns (i.e. the coupon collector problem), which is necessary to get any multiplicative loss bound.

The exact expectation formula for the least squares loss under volume sampling suggests a deep connection between linear regression and this distribution. We can use Theorem 3 to further strengthen that connection. Note that the least squares estimator obtained through volume sampling can be written as $w^*(S) = (X I_S)^{+\top} y$. Applying the formula for the expectation of the pseudo-inverse, we conclude that $w^*(S)$ is an unbiased estimator of $w^*$.

Proposition 7. Let $X \in \mathbb{R}^{d \times n}$ be a full-rank matrix and $n \ge s \ge d$. Let $S \subseteq \{1..n\}$ be a size $s$ volume sampled set over $X$. Then, for an arbitrary label vector $y \in \mathbb{R}^n$, we have
$$\mathbb{E}[w^*(S)] = \mathbb{E}[(X I_S)^{+\top} y] = X^{+\top} y = w^*.$$

For size $s = d$ volume sampling, the fact that $\mathbb{E}[w^*(S)]$ equals $w^*$ can be found in an early paper [2]. They give a direct proof based on Cramer's rule. For us, the above proposition is a direct consequence of the matrix expectation formula given in Theorem 3, which holds for volume sampling of any size $s \ge d$. In contrast, the loss expectation formula of Theorem 5 is limited to sampling of size $s = d$. Bounding the loss expectation for $s > d$ remains an open problem. However, we consider a different strategy for extending volume sampling in linear regression. Combining Proposition 7 with Theorem 5, we can compute the variance of predictions generated by volume sampling, and obtain tighter multiplicative loss bounds by sampling multiple $d$-element subsets $S_1, \ldots, S_k$ independently.

Theorem 8. Let $(X, y)$ be as in Theorem 5. For $k$ independent size $d$ volume samples $S_1, \ldots, S_k$,
$$\mathbb{E}\Big[L\Big(\frac{1}{k}\sum_{j=1}^{k} w^*(S_j)\Big)\Big] = \Big(1 + \frac{d}{k}\Big)\, L(w^*).$$

Proof. Denote $\hat{y} \stackrel{\mathrm{def}}{=} X^\top w^*$ and $\hat{y}(S) \stackrel{\mathrm{def}}{=} X^\top w^*(S)$ as the predictions generated by $w^*$ and $w^*(S)$, respectively. We perform a bias-variance decomposition of the loss of $w^*(S)$ (for size $d$ volume sampling):
$$\mathbb{E}[L(w^*(S))] = \mathbb{E}[\|\hat{y}(S) - y\|^2] = \mathbb{E}[\|\hat{y}(S) - \hat{y}\|^2] + \mathbb{E}[2(\hat{y}(S) - \hat{y})^\top (\hat{y} - y)] + \|\hat{y} - y\|^2$$
$$\stackrel{(*)}{=} \sum_{i=1}^{n} \mathbb{E}\big[(\hat{y}(S)_i - \mathbb{E}[\hat{y}(S)_i])^2\big] + L(w^*) = \sum_{i=1}^{n} \mathrm{Var}[\hat{y}(S)_i] + L(w^*),$$
where $(*)$ follows from Theorem 3 (by unbiasedness, the cross term vanishes).
Theorem 8   Let (X, y) be as in Theorem 5. For k independent size-d volume samples S₁, ..., S_k,

  E[ L( (1/k) Σ_{j=1}^k w*(S_j) ) ] = (1 + d/k) L(w*).

Proof   Denote ŷ := X^⊤w* and ŷ(S) := X^⊤w*(S) as the predictions generated by w* and w*(S) respectively. We perform a bias-variance decomposition of the loss of w*(S) (for size-d volume sampling):

  E[L(w*(S))] = E[‖ŷ(S) − y‖²] = E[‖ŷ(S) − ŷ + ŷ − y‖²]
  = E[‖ŷ(S) − ŷ‖²] + E[2(ŷ(S) − ŷ)^⊤(ŷ − y)] + ‖ŷ − y‖²
  (*)= Σ_{i=1}^n E[(ŷ(S)_i − E[ŷ(S)_i])²] + L(w*) = Σ_{i=1}^n Var[ŷ(S)_i] + L(w*),

where (*) follows from Theorem 3 (by unbiasedness, E[ŷ(S)] = ŷ, so the cross term vanishes). Now, we use Theorem 5 to obtain the total variance of the predictions:

  Σ_{i=1}^n Var[ŷ(S)_i] = E[L(w*(S))] − L(w*) = d L(w*).

Now the expected loss of the average weight vector w.r.t. sampling k independent sets S₁, ..., S_k is:

  E[ L( (1/k) Σ_{j=1}^k w*(S_j) ) ] = Σ_{i=1}^n Var[ (1/k) Σ_{j=1}^k ŷ(S_j)_i ] + L(w*)
  = (1/k²) Σ_{j=1}^k d L(w*) + L(w*) = (1 + d/k) L(w*).

It is worth noting that the average weight vector used in Theorem 8 is not expected to perform better than taking the solution to the joint subproblem, w*(S_{1:k}), where S_{1:k} = S₁ ∪ ... ∪ S_k. However, theoretical guarantees for that case are not yet available.

4.1 Proof of Theorem 5

We use the following lemma regarding the leave-one-out loss for linear regression [4]:

Lemma 9   Let w*(−i) denote the least squares solution for problem (X_{−i}, y_{−i}). Then, we have

  L(w*) = L(w*(−i)) − x_i^⊤(XX^⊤)^{−1}x_i · ℓ_i(w*(−i)),   where ℓ_i(w) := (x_i^⊤w − y_i)².

When X has d + 1 columns and X_{−i} is a full-rank d × d matrix, then L(w*(−i)) = ℓ_i(w*(−i)) and Lemma 9 leads to the following:

  det(X̃X̃^⊤) (1)= det(XX^⊤) · ‖ŷ − y‖² = det(XX^⊤) · L(w*),   where X̃ := [X; y^⊤] is X with the row y^⊤ appended,
  (2)= det(XX^⊤)(1 − x_i^⊤(XX^⊤)^{−1}x_i) ℓ_i(w*(−i))
  (3)= det(X_{−i}X_{−i}^⊤) ℓ_i(w*(−i)),   (3)

where (1) is the "base × height" formula for volume, (2) follows from Lemma 9, and (3) follows from a standard determinant formula.

Returning to the proof, our goal is to find the expected loss E[L(w*(S))], where S is a size-d volume sampled set. First, we rewrite the expectation as follows:

  E[L(w*(S))] = Σ_{S,|S|=d} P(S) L(w*(S)) = Σ_{S,|S|=d} P(S) Σ_{j=1}^n ℓ_j(w*(S))
  = Σ_{S,|S|=d} Σ_{j∉S} P(S) ℓ_j(w*(S)) = Σ_{T,|T|=d+1} Σ_{j∈T} P(T_{−j}) ℓ_j(w*(T_{−j})),   (4)

where the terms with j ∈ S vanish because w*(S) fits the d points of S exactly when X is in general position. We now use (3) on the matrix X_T and test instance x_j (assuming rank(X_{T_{−j}}) = d):

  P(T_{−j}) ℓ_j(w*(T_{−j})) = [det(X_{T_{−j}}X_{T_{−j}}^⊤)/det(XX^⊤)] ℓ_j(w*(T_{−j})) = det(X̃_T X̃_T^⊤)/det(XX^⊤).   (5)

Since the summand does not depend on the index j ∈ T, the inner summation in (4) becomes a multiplication by d + 1. This lets us write the expected loss as:

  E[L(w*(S))] = [(d + 1)/det(XX^⊤)] Σ_{T,|T|=d+1} det(X̃_T X̃_T^⊤) (1)= (d + 1) det(X̃X̃^⊤)/det(XX^⊤) (2)= (d + 1) L(w*),   (6)

where (1) follows from the Cauchy–Binet formula and (2) is an application of the "base × height" formula. If X is not in general position, then for some summands in (5), rank(X_{T_{−j}}) < d and P(T_{−j}) = 0. Thus the left-hand side of (5) is 0, while the right-hand side is non-negative, so (6) becomes an inequality, completing the proof of Theorem 5.

5 Efficient algorithm for volume sampling

In this section we propose an algorithm for efficiently performing exact volume sampling for any s ≥ d. This addresses the question posed by [1], asking for a polynomial-time algorithm for the case when s > d. [6, 11] gave an algorithm for the case when s = d, which runs in time O(nd³). Recently, [16] offered an algorithm for arbitrary s, which has complexity O(n⁴s). We propose a new method, which uses our techniques to achieve the time complexity O((n − s + d)nd), a direct improvement over [16] by a factor of at least n². Our algorithm also offers an improvement for s = d in certain regimes. Namely, when n = o(d²), our algorithm runs in time O(n²d) = o(nd³), faster than the method proposed by [6]. Our algorithm implements reverse iterative sampling from Corollary 2: after removing q columns, we are left with an index set of size n − q that is distributed according to volume sampling for column set size n − q.
Theorem 10   The sampling algorithm runs in time O((n − s + d)nd), using O(d² + n) additional memory, and returns a set S which is distributed according to size-s volume sampling over X.

Proof   For correctness we show the following invariants that hold at the beginning of the while loop:

  p_i = 1 − x_i^⊤(X_S X_S^⊤)^{−1}x_i = (|S| − d) P(S_{−i}|S)   and   Z = (X_S X_S^⊤)^{−1}.

At the first iteration the invariants trivially hold. When updating the p_j we use Z and the p_i from the previous iteration, so we can rewrite the update as

  p_j − (x_j^⊤v)² = 1 − x_j^⊤(X_S X_S^⊤)^{−1}x_j − (x_j^⊤Zx_i)² / (1 − x_i^⊤(X_S X_S^⊤)^{−1}x_i)
  = 1 − x_j^⊤ [ (X_S X_S^⊤)^{−1} + (X_S X_S^⊤)^{−1}x_i x_i^⊤(X_S X_S^⊤)^{−1} / (1 − x_i^⊤(X_S X_S^⊤)^{−1}x_i) ] x_j
  (*)= 1 − x_j^⊤(X_{S_{−i}}X_{S_{−i}}^⊤)^{−1}x_j = (|S| − 1 − d) P(S_{−i,j}|S_{−i}),

where (*) follows from the Sherman–Morrison formula. The update of Z is also an application of Sherman–Morrison, and this concludes the proof of correctness.

  Reverse iterative volume sampling
  Input: X ∈ R^{d×n}, s ∈ {d..n}
  Z ← (XX^⊤)^{−1}
  ∀ i ∈ {1..n}: p_i ← 1 − x_i^⊤ Z x_i
  S ← {1, .., n}
  while |S| > s:
    Sample i ∝ p_i out of S
    S ← S − {i}
    v ← Z x_i / √p_i
    ∀ j ∈ S: p_j ← p_j − (x_j^⊤ v)²
    Z ← Z + v v^⊤
  end
  return S

Runtime: Computing the initial Z = (XX^⊤)^{−1} takes O(nd²), as does computing the initial values of the p_j's. Inside the while loop, updating the p_j's takes O(|S|d) = O(nd) and updating Z takes O(d²). The overall runtime becomes O(nd² + (n − s)nd) = O((n − s + d)nd). The space usage (in addition to the input data) is dominated by the p_i values and the matrix Z.
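The pseudocode translates into a short NumPy routine; below is a minimal rendering (ours) with the same Sherman–Morrison updates for Z and the p_j. The clipping of tiny negative p_i values is a numerical guard we add, not part of the algorithm.

    import numpy as np

    def reverse_iterative_volume_sampling(X, s, rng):
        """Return a size-s set S distributed as size-s volume sampling over the
        columns of X (d x n, full rank d), in total time O((n - s + d) n d)."""
        d, n = X.shape
        Z = np.linalg.inv(X @ X.T)                    # Z = (X_S X_S^T)^{-1}, S = {1..n}
        p = 1.0 - np.einsum('ji,jk,ki->i', X, Z, X)   # p_i = 1 - x_i^T Z x_i
        S = list(range(n))
        while len(S) > s:
            q = np.clip(p[S], 0.0, None)              # numerical guard
            i = S[rng.choice(len(S), p=q / q.sum())]  # sample i with probability ∝ p_i
            S.remove(i)
            v = Z @ X[:, i] / np.sqrt(p[i])           # v = Z x_i / sqrt(p_i)
            for j in S:
                p[j] -= float(X[:, j] @ v) ** 2       # p_j <- p_j - (x_j^T v)^2
            Z += np.outer(v, v)                       # Sherman-Morrison: Z <- Z + v v^T
        return sorted(S)

    rng = np.random.default_rng(1)
    X = rng.standard_normal((3, 10))
    S = reverse_iterative_volume_sampling(X, s=5, rng=rng)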
6 Conclusions

We developed exact formulas for E[(X I_S)^+] and E[((X I_S)^+)²] when the subset S of s column indices is sampled proportionally to the volume det(X_S X_S^⊤). The formulas hold for any fixed size s ∈ {d..n}. These new expectation formulas imply that the solution w*(S) for a volume sampled subproblem of a linear regression problem is unbiased. We also gave a formula relating the loss of the subproblem to the optimal loss (i.e., E[L(w*(S))] = (d + 1) L(w*)). However, this result only holds for sample size s = d. It is an open problem to obtain such an exact expectation formula for s > d.

A natural algorithm is to draw k samples S_i of size d and return w*(S_{1:k}), where S_{1:k} = ∪_i S_i. We were able to get exact expressions for the loss L((1/k) Σ_i w*(S_i)) of the average predictor, but it is an open problem to get nontrivial bounds for the loss of the best predictor w*(S_{1:k}).

We were able to show that for small sample sizes, volume sampling a set jointly has the advantage: it achieves a multiplicative bound for the smallest sample size d, whereas any independent sampling routine requires sample size at least Ω(d log d). We believe that our results demonstrate a fundamental connection between volume sampling and linear regression, which demands further exploration. Our loss expectation formula has already been applied by [12] to the task of linear regression without correspondence.

Acknowledgements

Thanks to Daniel Hsu and Wojciech Kotłowski for many valuable discussions. This research was supported by NSF grant IIS-1619271.

References

[1] Haim Avron and Christos Boutsidis. Faster subset selection for matrices and applications. SIAM Journal on Matrix Analysis and Applications, 34(4):1464–1499, 2013.
[2] Aharon Ben-Tal and Marc Teboulle. A geometric property of the least squares solution of linear equations. Linear Algebra and its Applications, 139:165–170, 1990.
[3] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Rich coresets for constrained linear regression. CoRR, abs/1202.3505, 2012.
[4] Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[5] Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC '13, pages 81–90, New York, NY, USA, 2013. ACM.
[6] Amit Deshpande and Luis Rademacher. Efficient volume sampling for row/column subset selection. In Proceedings of the 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, FOCS '10, pages 329–338, Washington, DC, USA, 2010. IEEE Computer Society.
[7] Amit Deshpande, Luis Rademacher, Santosh Vempala, and Grant Wang. Matrix approximation and projective clustering via volume sampling. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithm, SODA '06, pages 1117–1126, Philadelphia, PA, USA, 2006. Society for Industrial and Applied Mathematics.
[8] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. J. Mach. Learn. Res., 13(1):3475–3506, December 2012.
[9] Valeri Vadimovich Fedorov, W.J. Studden, and E.M. Klimko, editors. Theory of optimal experiments. Probability and mathematical statistics. Academic Press, New York, 1972.
[10] Mike Gartrell, Ulrich Paquet, and Noam Koenigstein. Bayesian low-rank determinantal point processes. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16, pages 349–356, New York, NY, USA, 2016. ACM.
[11] Venkatesan Guruswami and Ali Kemal Sinop. Optimal column-based low-rank matrix reconstruction. In Proceedings of the Twenty-third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '12, pages 1207–1214, Philadelphia, PA, USA, 2012. Society for Industrial and Applied Mathematics.
[12] Daniel Hsu, Kevin Shi, and Xiaorui Sun. Linear regression without correspondence. CoRR, abs/1705.07048, 2017.
[13] Byungkon Kang. Fast determinantal point process sampling with application to clustering. In Proceedings of the 26th International Conference on Neural Information Processing Systems, NIPS'13, pages 2319–2327, USA, 2013. Curran Associates Inc.
[14] Alex Kulesza and Ben Taskar. k-DPPs: Fixed-Size Determinantal Point Processes. In Proceedings of the 28th International Conference on Machine Learning, pages 1193–1200. Omnipress, 2011.
[15] Alex Kulesza and Ben Taskar. Determinantal Point Processes for Machine Learning. Now Publishers Inc., Hanover, MA, USA, 2012.
[16] C. Li, S. Jegelka, and S. Sra. Column Subset Selection via Polynomial Time Dual Volume Sampling. ArXiv e-prints, March 2017.
[17] Michael W. Mahoney. Randomized algorithms for matrices and data. Found. Trends Mach. Learn., 3(2):123–224, February 2011.
[18] Tamas Sarlos. Improved approximation algorithms for large matrices via random projections. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, FOCS '06, pages 143–152, Washington, DC, USA, 2006. IEEE Computer Society.
[19] Masashi Sugiyama and Shinichi Nakajima. Pool-based active learning in approximate linear regression. Mach. Learn., 75(3):249–274, June 2009.
6,525
6,902
Approximation Bounds for Hierarchical Clustering: Average Linkage, Bisecting K-means, and Local Search

Benjamin Moseley*
Carnegie Mellon University, Pittsburgh, PA 15213, USA
[email protected]

Joshua R. Wang†
Department of Computer Science, Stanford University, 353 Serra Mall, Stanford, CA 94305, USA
[email protected]

(* Benjamin Moseley was supported in part by a Google Research Award, a Yahoo Research Award and NSF Grants CCF-1617724, CCF-1733873 and CCF-1725661. This work was partially done while the author was working at Washington University in St. Louis. † Joshua R. Wang was supported in part by NSF Grant CCF-1524062.)

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Hierarchical clustering is a data analysis method that has been used for decades. Despite its widespread use, the method has an underdeveloped analytical foundation. Having a well understood foundation would both support the currently used methods and help guide future improvements. The goal of this paper is to give an analytic framework to better understand observations seen in practice. This paper considers the dual of a problem framework for hierarchical clustering introduced by Dasgupta [Das16]. The main result is that one of the most popular algorithms used in practice, average linkage agglomerative clustering, has a small constant approximation ratio for this objective. Furthermore, this paper establishes that using bisecting k-means divisive clustering has a very poor lower bound on its approximation ratio for the same objective. However, we show that there are divisive algorithms that perform well with respect to this objective by giving two constant approximation algorithms. This paper is some of the first work to establish guarantees on widely used hierarchical algorithms for a natural objective function. This objective and analysis give insight into what these popular algorithms are optimizing and when they will perform well.

1 Introduction

Hierarchical clustering is a widely used method to analyze data. See [MC12, KBXS12, HG05] for an overview and pointers to relevant work. In a typical hierarchical clustering problem, one is given a set of n data points and a notion of similarity between the points. The output is a hierarchy of clusters of the input. Specifically, a dendrogram (tree) is constructed where the leaves correspond to the n input data points and the root corresponds to a cluster containing all data points. Each internal node of the tree corresponds to a cluster of the data points in its subtree. The clusters (internal nodes) become more refined as the nodes are lower in the tree. The goal is to construct the tree so that the clusters deeper in the tree contain points that are relatively more similar. There are many reasons for the popularity of hierarchical clustering, including that the number of clusters is not predetermined and that the clusters produced induce taxonomies that give meaningful ways to interpret data.

Methods used to perform hierarchical clustering are divided into two classes: agglomerative and divisive. Agglomerative algorithms are a bottom-up approach and are more commonly used than divisive approaches [HTF09]. In an agglomerative method, each of the n input data points starts as a cluster. Then iteratively, pairs of similar clusters are merged according to some appropriate metric of similarity.
Perhaps the most popular metric to define similarity is average linkage, where the similarity between two clusters is defined as the average similarity between all pairs of data points in the two clusters. In average linkage agglomerative clustering the two clusters with the highest average similarity are merged at each step. Other metrics are also popular. Related examples include: single linkage, where the similarity between two clusters is the maximum similarity between any two single data points in each cluster, and complete linkage, where the similarity between two clusters is the minimum similarity between any two single data points in each cluster.

Divisive algorithms are a top-down approach where initially all data points belong to a single cluster. Splits are recursively performed, dividing a cluster into two clusters that will be further divided. The process continues until each cluster consists of a single data point. In each step of the algorithm, the data points are partitioned such that points in each cluster are more similar than points across clusters. There are several approaches to perform divisive clustering. One example is bisecting k-means, where k-means is used at each step with k = 2. For details on bisecting k-means, see [Jai10].

Motivation: Hierarchical clustering has been used and studied for decades. There has been some work on theoretically quantifying the quality of the solutions produced by algorithms, such as [ABBL12, AB16, ZB09, BA08, Das16]. Much of this work focuses on deriving the structure of solutions created by algorithms or analytically describing desirable properties of a clustering algorithm. Though the area has been well-studied, there is no widely accepted formal problem framework. Hierarchical clustering describes a class of algorithmic methods rather than a problem with an objective function. Studying a formal objective for the problem could lead to the ability to objectively compare different methods; there is a desire for the community to investigate potential objectives. This would further support the use of current methods and guide the development of improvements.

This paper is concerned with investigating objectives for hierarchical clustering. The main goal and result of this paper is giving a natural objective that results in a theoretical guarantee for the most commonly used hierarchical clustering algorithm, average linkage agglomerative clustering. This guarantee gives support for why the algorithm is popular in practice and the objective gives insight into what the algorithm optimizes. This paper also proves a bad lower bound on bisecting k-means with respect to the same natural objective. This objective can therefore be used as a litmus test for the applicability of particular algorithms. This paper further gives top-down approaches that do have strong theoretical guarantees for the objective.

Problem Formulation: Towards this paper's goal, first a formal problem framework for hierarchical clustering needs to be established. Recently, Dasgupta [Das16] introduced a new problem framework for hierarchical clustering. This work justified their objective by establishing that for several sample problem instances, the resulting solution corresponds to what one might expect out of a desirable solution. This work has spurred considerable interest and there have been several follow up papers [CC17, Das16, RP16].
In the problem introduced by Dasgupta [Das16] there is a set of n data points as input and for two points i and j there is a weight w_{i,j} denoting their similarity. The higher the weight, the larger the similarity. This is represented as a weighted complete graph G. In the problem the output is a (full) binary tree where the leaves of the tree correspond to the input data points. For each pair of points i and j, let T[i ∨ j] denote the subtree rooted at i and j's least common ancestor. Let leaves(T[i ∨ j]) denote the set of leaves in the tree T[i ∨ j]. The goal is to construct the tree such that the cost

  costG(T) := Σ_{i,j∈[n]} w_ij |leaves(T[i ∨ j])|

is minimized. Intuitively, this objective enforces that more similar points i and j should have a lower common ancestor in the tree, because the weight w_{i,j} is large and having a smaller least common ancestor ensures that |leaves(T[i ∨ j])| is smaller. In particular, more similar points should be separated at lower levels of the hierarchical clustering.

For this objective, several approximation algorithms have been given [CC17, Das16, RP16]. It is known that there is a divisive clustering algorithm with an approximation ratio of O(√log n) [CC17]. In particular, the algorithm gives an O(α_n)-approximation, where α_n is the approximation ratio of the sparsest cut subroutine, and this is the best possible for any algorithm [CC17]. That is, every algorithm is an Ω(α_n)-approximation. The current best known bound on α_n is O(√log n) [ARV09]. Unfortunately, this conclusion misses one of our key goals in trying to establish an objective function. While the algorithms and analysis are ingenious, none of the algorithms with theoretical guarantees are from the class of algorithms used in practice. Due to the complexity of the proposed algorithms, it will also be difficult to put them into practice. Hence the question still looms: are there strong theoretical guarantees for practical algorithms? Is the objective from [Das16] the ideal objective for our goals? Is there a natural objective that admits solutions that are provably close to optimal?

Results: In this paper, we consider an objective function motivated by the objective introduced by Dasgupta in [Das16]. For a given tree T let |non-leaves(T[i ∨ j])| be the total number of leaves that are not in the subtree rooted at the least common ancestor of i and j. The objective in [Das16] focuses on constructing a binary tree T to minimize the cost costG(T) := Σ_{i,j∈[n]} w_ij |leaves(T[i ∨ j])|. This paper considers the dual problem where T is constructed to maximize the revenue

  revG(T) := Σ_{i,j∈[n]} w_ij |non-leaves(T[i ∨ j])| = ( n Σ_{i,j∈[n]} w_{i,j} ) − costG(T).

It is important to observe that the optimal clustering is the same for both objectives. Due to this, all the examples given in [Das16] motivating their objective by showing desirable structural properties of the optimal solution also apply to the objective considered in this paper.
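For a concrete feel for the two objectives, the following small Python example (ours, not from the paper) computes costG(T) and revG(T) for a tree given as nested pairs of leaf indices, and checks the identity costG(T) + revG(T) = n Σ_{i,j} w_ij; here the sums run over unordered pairs.

    import itertools
    import numpy as np

    def leaves(T):
        """All leaf labels under node T (a leaf index or a pair of subtrees)."""
        return [T] if not isinstance(T, tuple) else leaves(T[0]) + leaves(T[1])

    def lca_leaves(T, i, j):
        """Number of leaves of the subtree rooted at the least common ancestor of i, j."""
        left, right = leaves(T[0]), leaves(T[1])
        if i in left and j in left:
            return lca_leaves(T[0], i, j)
        if i in right and j in right:
            return lca_leaves(T[1], i, j)
        return len(left) + len(right)

    def cost_rev(T, w):
        n = len(w)
        pairs = list(itertools.combinations(range(n), 2))
        cost = sum(w[i][j] * lca_leaves(T, i, j) for i, j in pairs)
        rev = sum(w[i][j] * (n - lca_leaves(T, i, j)) for i, j in pairs)
        return cost, rev

    w = np.array([[0, 3, 1, 0], [3, 0, 0, 1], [1, 0, 0, 2], [0, 1, 2, 0]], dtype=float)
    T = ((0, 1), (2, 3))                  # cluster {0,1} and {2,3} first
    cost, rev = cost_rev(T, w)
    total = sum(w[i][j] for i, j in itertools.combinations(range(4), 2))
    assert np.isclose(cost + rev, 4 * total)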
Our objective can be interpreted similarly to that in [Das16]. In particular, similar points i and j should be located lower in the tree so as to maximize |non-leaves(T[i ∨ j])|, the number of points they get separated from at high levels of the hierarchical clustering. This paper gives a thorough investigation of this new problem framework by analyzing several algorithms for the objective. The main result is establishing that average linkage clustering is a 1/3-approximation. This result gives theoretical justification for the use of average linkage clustering and, additionally, shows that the objective considered is tractable since it admits Θ(1)-approximations. This suggests that the objective captures a component of what average linkage is optimizing for.

This paper then seeks to understand what other algorithms are good for this objective. In particular, is there a divisive algorithm with strong theoretical guarantees? What can be said about practical divisive algorithms? We establish that bisecting k-means is no better than an O(1/√n) approximation. This establishes that this method is very poor for the objective considered. This suggests that bisecting k-means is optimizing for something different than what average linkage optimizes for. Given this negative result, we question whether there are divisive algorithms that optimize for our objective. We answer this question affirmatively by giving a local search strategy that obtains a 1/3-approximation as well as showing that randomly partitioning is a tight 1/3-approximation. The randomized algorithm can be found in the supplementary material.

Other Related Work: Very recently a contemporaneous paper [CKMM17], done independently, has been published on ArXiv. This paper considers another class of objectives motivated by the work of [Das16]. For their objective, they also derive positive results for average linkage clustering.

2 Preliminaries

In this section, we give preliminaries including a formal definition of the problem considered and basic building blocks for later algorithm analysis. In the Revenue Hierarchical Clustering Problem there are n input data points given as a set V. There is a weight w_{i,j} ≥ 0 between each pair of points i and j denoting their similarity, represented as a complete graph G. The output of the problem is a rooted tree T where the leaves correspond to the data points and the internal nodes of the tree correspond to clusters of the points in their subtrees. We will use the indices 1, 2, ..., n to denote the leaves of the tree. For two leaves i and j, let T[i ∨ j] denote the subtree rooted at the least common ancestor of i and j, and let non-leaves(T[i ∨ j]) denote the set of leaves in T that are not in T[i ∨ j]. The objective is to construct T to maximize the revenue

  revG(T) = Σ_{i∈[n]} Σ_{j≠i∈[n]} w_{i,j} |non-leaves(T[i ∨ j])|.

We make no assumptions on the structure of the optimal tree T; however, one optimal tree is a binary tree, so we may restrict the solution to binary trees without loss of generality. To see this, let leaves(T[i ∨ j]) be the set of leaves in T[i ∨ j] and costG(T) := Σ_{i,j} w_ij |leaves(T[i ∨ j])|. The objective considered in [Das16] focuses on minimizing costG(T). We note that costG(T) + revG(T) = n Σ_{i,j} w_{i,j}, so the optimal solution to minimizing costG(T) is the same as the optimal solution to maximizing revG(T). In [Das16] it was shown that the optimal solution for any input is a binary tree.

As mentioned, there are two common types of algorithms for hierarchical clustering: agglomerative (bottom-up) algorithms and divisive (top-down) algorithms. In an agglomerative algorithm, each vertex v ∈ V begins in a separate cluster, and each iteration of the algorithm chooses two clusters to merge into one. In a divisive algorithm, all vertices v ∈ V begin in a single cluster, and each iteration of the algorithm selects a cluster with more than one vertex and partitions it into two smaller clusters.
In this section, we present some basic techniques which we later use to analyze the effect each iteration has on the revenue. It will be convenient for us to think of the weight function as taking in two vertices instead of an edge, i.e. w : V × V → R≥0. This is without loss of generality, because we can always set the weight of any nonedge to zero (e.g. w_vv = 0 ∀v ∈ V).

To bound the performance of an algorithm it suffices to bound revG(T) and costG(T), since revG(T) + costG(T) = n Σ_{i,j} w_ij. Further, let T* denote the optimal hierarchical clustering. Then its revenue is at most revG(T*) ≤ (n − 2) Σ_{ij} w_ij. This is because any edge ij can have at most n − 2 non-leaves for its subtree T[i ∨ j]; i and j are always leaves.

2.1 Analyzing Agglomerative Algorithms

In this section, we discuss a method for bounding the performance of an agglomerative algorithm. When an agglomerative algorithm merges two clusters A, B, this determines the least common ancestor for any pair of nodes i, j where i ∈ A and j ∈ B. Given this, we define the revenue gain due to merging A and B as

  merge-revG(A, B) := (n − |A| − |B|) Σ_{a∈A, b∈B} w_ab.

Notice that the final revenue revG(T) is exactly the sum over iterations of the revenue gains, since each edge is counted exactly once: when its endpoints are merged into a single cluster. Hence, revG(T) = Σ_{merges A,B} merge-revG(A, B).

We next define the cost of merging A and B as the following. This is the potential revenue lost by merging A and B; revenue that can no longer be gained after A and B are merged, but was initially possible. Define

  merge-costG(A, B) := |B| Σ_{a∈A, c∈[n]\(A∪B)} w_ac + |A| Σ_{b∈B, c∈[n]\(A∪B)} w_bc.

The total cost of the tree T, costG(T), is exactly the sum over iterations of the cost increases, plus an additional 2 Σ_{ij} w_ij term that accounts for each edge being counted towards its own endpoints. We can see why this is true if we consider a pair of vertices i, j ∈ [n] in the final hierarchical clustering T. If at some point a cluster containing i is merged with a third cluster before it gets merged with the cluster containing j, then the number of leaves in T[i ∨ j] goes up by the size of the third cluster. This is exactly the quantity captured by our cost increase definition. Aggregated over all pairs i, j this is the following:

  costG(T) = Σ_{i,j∈[n]} w_ij |leaves(T[i ∨ j])| = 2 Σ_{i,j∈[n]} w_ij + Σ_{merges A,B} merge-costG(A, B).

2.2 Analyzing Divisive Algorithms

Similar reasoning can be used for divisive algorithms. The following are revenue gain and cost increase definitions for when a divisive algorithm partitions a cluster into two clusters A, B. Define

  split-revG(A, B) := |B| Σ_{a,a'∈A} w_aa' + |A| Σ_{b,b'∈B} w_bb'   and   split-costG(A, B) := (|A| + |B|) Σ_{a∈A, b∈B} w_ab.

Consider the revenue gain. For a, a' ∈ A we are now guaranteed that when the nodes in B are split from A, every node in B will not be a leaf in T[a ∨ a'] (and a symmetric term applies when both points are in B). On the cost side, the term counts the cost of any pairs a ∈ A and b ∈ B that are now separated, since we now know their subtree T[i ∨ j] has exactly the nodes in A ∪ B as leaves. These four bookkeeping quantities are easy to compute directly; see the sketch below.
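The following minimal Python sketch (ours) mirrors the four definitions and can be used to verify the merge/split accounting identities above on small instances.

    import itertools

    def merge_rev(A, B, w, n):
        """merge-rev_G(A,B) = (n - |A| - |B|) * sum of w over A x B."""
        return (n - len(A) - len(B)) * sum(w[a][b] for a in A for b in B)

    def merge_cost(A, B, w, n):
        """merge-cost_G(A,B): potential revenue lost by merging A and B."""
        rest = set(range(n)) - set(A) - set(B)
        return (len(B) * sum(w[a][c] for a in A for c in rest) +
                len(A) * sum(w[b][c] for b in B for c in rest))

    def split_rev(A, B, w):
        """split-rev_G(A,B) = |B| * W_A + |A| * W_B over intra-cluster pairs."""
        wA = sum(w[a][a2] for a, a2 in itertools.combinations(sorted(A), 2))
        wB = sum(w[b][b2] for b, b2 in itertools.combinations(sorted(B), 2))
        return len(B) * wA + len(A) * wB

    def split_cost(A, B, w):
        """split-cost_G(A,B) = (|A| + |B|) * sum of w over A x B."""
        return (len(A) + len(B)) * sum(w[a][b] for a in A for b in B)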
The formal definition of the algorithm 4 is given in the following pseudocode. The main idea is that initially all n input points are in their own cluster and the algorithm recursively merges clusters until there is one cluster. In each step, the algorithm mergers the clusters AP and B such that the pair maximizes the average distances of points 1 between the two clusters, |A||B| a?A,b?B wab . Data: Vertices V , weights w : E ? R?0 Initialize clusters C ? ?v?V {v}; while |C| ? 2 do P 1 Choose A, B ? C to maximize w(A, ? B) := |A||B| a?A,b?B wab ; Set C ? C ? {A ? B} \ {A, B}; end Algorithm 1: Average Linkage The following theorem establishes that this algorithm is only a small constant factor away from optimal. Theorem 3.1. Consider a graph G = (V, E) with nonnegative edge weights w : E ? R?0 . Let the hierarchical clustering T ? be a optimal solution maximizing of revG (?) and let T be the hierarchical clustering returned by Algorithm 1. Then, revG (T ) ? 13 revG (T ? ). Proof. Consider an iteration of Algorithm 1. Let the current clusters be in the set C, and the algorithm chooses to merge clusters A and B fromPC. When doing so, the algorithm attains a revenue gain 1 of the following. Let w(A, ? B) = |A||B| a?A,b?B wab be the average weight of an edge between points in A and B. X X X merge-revG (A, B) = (n ? |A| ? |B|) wab = |C| wab a?A,b?B X = C?C\{A,B} a?A,b?B |C||A||B|w(A, ? B) C?C\{A,B} while at the same time incurring a cost increase of: X merge-costG (A, B) = |B| wac + |A| a?A,c?[n]\(A?B) X X = |B| X X X X wbc C?C\{A,B} b?B,c?C |B||A||C|w(A, ? C) + C?C\{A,B} ? X wac + |A| C?C\{A,B} a?A,c?C = wbc b?B,c?[n]\(A?B) X |A||B||C|w(B, ? C) C?C\{A,B} |B||A||C|w(A, ? B) + C?C\{A,B} X |A||B||C|w(A, ? B) C?C\{A,B} = 2 ? merge-revG (A, B) Intuitively, every time this algorithm loses two units of potential it cements the gain of one unit of potential, which is why it is a 13 -approximation. Formally: X X X X costG (T ) = 2 wij + merge-costG (A, B) ? 2 wij + 2 ? merge-revG (A, B) i,j ?2 X i,j merges A, B merges A, B wij + 2 ? revG (T ) i,j Now the revenue can be bounded X as follows. X X revG (T ) ? n wij ? costG (T ) ? n wij ? 2 wij ? 2 ? revG (T ) ij revG (T ) ? ij n?2X 3 ij wij ? i,j 1 revG (T ? ) 3 where the last step follows from the fact that it is impossible to have more than n ? 2 non-leaves. 5 1+? u v ??? ??? n/2 nodes n/2 nodes Figure 1: Hard graph for Average Linkage (k = 2 case). In the following, we establish that the algorithm is at best a 1/2 approximation. The proof can be found in Section 1 of the supplementary material. Lemma 3.2. Let  > 0 be any fixed constant. There exists a graph G = (V, E) with nonnegative edge weights w : E ? R?0 , such that if the hierarchical clustering T ? is an optimal solution  of revG (?) and T is the hierarchical clustering returned by Average Linkage, revG (T ) ? 21 +  revG (T ? ). 4 A Lower Bound on Bisecting k-means In this section, we consider the divisive algorithm which uses the k-means objective (with k = 2) when choosing how to split clusters. Normally, the k-means objective concerns the distances between Pk P points and their cluster center: min i=1 x?Si ||x ? ?i ||2 . However, it is known that this can be Pk P rewritten as a sum over intra-cluster distances: min i=1 |S1i | x,y?Si ||x?y||2 [ABC+ 15]. In other P 1 words, when splitting a cluster into two sets A and B, the algorithm minimizes |A| a,a0 ?A ||a ? P 1 0 2 0 2 a || + B b,b0 ?B ||b ? b || . 
4 A Lower Bound on Bisecting k-means

In this section, we consider the divisive algorithm which uses the k-means objective (with k = 2) when choosing how to split clusters. Normally, the k-means objective concerns the distances between points and their cluster centers: min Σ_{i=1}^k Σ_{x∈S_i} ‖x − μ_i‖². However, it is known that this can be rewritten as a sum over intra-cluster distances: min Σ_{i=1}^k (1/|S_i|) Σ_{x,y∈S_i} ‖x − y‖² [ABC+15]. In other words, when splitting a cluster into two sets A and B, the algorithm minimizes (1/|A|) Σ_{a,a'∈A} ‖a − a'‖² + (1/|B|) Σ_{b,b'∈B} ‖b − b'‖².

At first glance, this appears to almost capture split-revG(A, B); the key difference is that the summation has been scaled down by a factor of |A||B|. Of course, it also involves minimization over squared distances instead of maximization over similarity weights. We show that the divisive algorithm which splits clusters by the natural k-means similarity objective, namely max (1/|A|) Σ_{a,a'∈A} w_aa' + (1/|B|) Σ_{b,b'∈B} w_bb', is not a good approximation to the optimal hierarchical clustering.

Lemma 4.1. There exists a graph G = (V, E) with nonnegative edge weights w : E → R≥0, such that if the hierarchical clustering T* is a maximizer of revG(·) and T is the hierarchical clustering returned by the divisive algorithm which splits clusters by the k-means similarity objective, then revG(T) ≤ (1/Θ(√n)) revG(T*).

Proof. The plan is to exploit the fact that k-means is optimizing an objective function which differs from the actual split revenue by a factor of |A||B|. We use almost the same graph as in the lower bound against Average Linkage, except that the weight of the edge between u and v is √n. There are still unit weight edges between u and n/2 − 1 other nodes and unit weight edges between v and the remaining n/2 − 1 nodes. See Figure 1 for the structure of this graph.

The key claim is that Divisive k-means will begin by separating u and v from all other nodes. It is easy to see that this split scores a value of (1/2)√n under our alternate k-means objective function. Why does no other split score better? Well, any other split can either keep u and v together or separate them. If the split keeps the two together along with k other nodes, then it scores at most (1/(k + 2))[√n + k] ≤ √n/(k + 2) + 1, which is less than (1/2)√n if √n > 6. If the split separates the two, then it scores at most 2, since at best each side can be a tree of weight-one edges and hence has fewer edges than nodes.

Now that we have established our key claim, it is easy to see that Divisive k-means is done scoring on this graph, since it must next cut the edge uv and the other, larger cluster has no edges in it. Hence Divisive k-means will score √n(n − 2) on this graph. As before, the optimal clustering may merge u with its other neighbors first and v with its other neighbors first, scoring a revenue gain of 2[(n − 2) + (n − 3) + ··· + (n/2)] = (3/4)n² − O(n). There is a Θ(√n) gap between these revenues, completing the proof.

5 Divisive Local-Search

In this section, we develop a simple local search algorithm and bound its approximation ratio. The local search algorithm takes as input a cluster C and divides it into two clusters A and B to optimize a local objective: the split revenue. In particular, initially A = B = ∅ and each node in C is added to A or B uniformly at random. Local search is then run by moving individual nodes between A and B. In a step, any point i ∈ A (resp. B) is moved to B (resp. A) if

  Σ_{j,l∈A; j,l≠i} w_jl + (|A| − 1) Σ_{j∈B} w_ij > Σ_{j,l∈B} w_jl + |B| Σ_{j∈A, j≠i} w_ij

(resp. Σ_{j,l∈B; j,l≠i} w_jl + (|B| − 1) Σ_{j∈A} w_ij > Σ_{j,l∈A} w_jl + |A| Σ_{j∈B, j≠i} w_ij). This states that a point is moved to the other set exactly when the objective increases. The algorithm performs these local moves until there is no node that can be moved to improve the objective.
  Data: Vertices V, weights w : E → R≥0
  Initialize clusters C ← {V};
  while some cluster C ∈ C has more than one vertex do
    Let A, B be a uniformly random 2-partition of C;
    Run local search on A, B to maximize |B| Σ_{a,a'∈A} w_aa' + |A| Σ_{b,b'∈B} w_bb', considering just moving a single node;
    Set C ← C ∪ {A, B} \ {C};
  end
  Algorithm 2: Divisive Local-Search
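A minimal Python rendering of Algorithm 2 (ours). Two caveats we add for the sketch, not stated in the pseudocode: the random 2-partition is re-drawn if one side is empty, and a node is never moved out of a singleton side, so that every split is proper; the analysis below only needs that no single-node move can increase the objective when the search stops.

    import itertools
    import random

    def local_search_split(C, w, rng):
        """One split of Algorithm 2: random 2-partition of C, then single-node
        moves while the objective |B| * W_A + |A| * W_B strictly increases."""
        C = list(C)
        while True:
            A = {v for v in C if rng.random() < 0.5}
            B = set(C) - A
            if A and B:
                break

        def obj():
            wA = sum(w[a][b] for a, b in itertools.combinations(sorted(A), 2))
            wB = sum(w[a][b] for a, b in itertools.combinations(sorted(B), 2))
            return len(B) * wA + len(A) * wB

        best, improved = obj(), True
        while improved:
            improved = False
            for v in list(A) + list(B):
                src, dst = (A, B) if v in A else (B, A)
                if len(src) == 1:
                    continue                     # keep the 2-partition proper
                src.remove(v); dst.add(v)        # tentative move
                cur = obj()
                if cur > best:
                    best, improved = cur, True
                else:
                    dst.remove(v); src.add(v)    # revert
        return A, B

    def divisive_local_search(V, w, rng=random.Random(0)):
        """Algorithm 2: recursively split until singletons; returns a nested tree."""
        if len(V) == 1:
            return next(iter(V))
        A, B = local_search_split(V, w, rng)
        return (divisive_local_search(A, w, rng), divisive_local_search(B, w, rng))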
We note that it is possible to improve the loss in terms of n to n?4 n?2 by instead considering the local P P search objective (|B| ? 1) a,a0 ?A waa0 + (|A| ? 1) b,b0 ?B wbb0 . 6 Conclusion One purpose of developing an analytic framework for problems is that it can help clarify and explain our observations from practice. In this case, we have shown that average linkage is a 13 -approximation to a particular objective function, and the analysis that does so helps explain what average linkage is optimizing. There is much more to explore in this direction. Are there other objective functions which characterize other hierarchical clustering algorithms? For example, what are bisecting k-means, single-linkage, and complete-linkage optimizing for? An analytic framework can also serve to guide development of new algorithms. How well can this dual objective be approximated? For example, we suspect that average linkage is actually a constant approximation strictly better than 13 . Could a smarter algorithm break the 12 threshold? Perhaps the 12 threshold is due to a family of graphs which we do not expect to see in practice. Is there a natural input restriction that would allow for better guarantees? References [AB16] Margareta Ackerman and Shai Ben-David. A characterization of linkage-based hierarchical clustering. Journal of Machine Learning Research, 17:232:1?232:17, 2016. [ABBL12] Margareta Ackerman, Shai Ben-David, Simina Br?nzei, and David Loker. Weighted clustering. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, July 22-26, 2012, Toronto, Ontario, Canada., 2012. [ABC+ 15] Pranjal Awasthi, Afonso S Bandeira, Moses Charikar, Ravishankar Krishnaswamy, Soledad Villar, and Rachel Ward. Relax, no need to round: Integrality of clustering formulations. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, pages 191?200. ACM, 2015. [ARV09] Sanjeev Arora, Satish Rao, and Umesh V. Vazirani. Expander flows, geometric embeddings and graph partitioning. J. ACM, 56(2):5:1?5:37, 2009. [BA08] Shai Ben-David and Margareta Ackerman. Measures of clustering quality: A working set of axioms for clustering. In Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 8-11, 2008, pages 121?128, 2008. 9 [CC17] Moses Charikar and Vaggos Chatziafratis. Approximate hierarchical clustering via sparsest cut and spreading metrics. In Proceedings of the Twenty-Eighth Annual ACMSIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 841?854, 2017. [CKMM17] Vincent Cohen-Addad, Varun Kanade, Frederik Mallmann-Trenn, and Claire Mathieu. Hierarchical clustering: Objective functions and algorithms. CoRR, abs/1704.02147, 2017. [Das16] Sanjoy Dasgupta. A cost function for similarity-based hierarchical clustering. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 118?127, 2016. [HG05] Katherine A. Heller and Zoubin Ghahramani. Bayesian hierarchical clustering. In Machine Learning, Proceedings of the Twenty-Second International Conference (ICML 2005), Bonn, Germany, August 7-11, 2005, pages 297?304, 2005. [HTF09] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. Unsupervised Learning, pages 485?585. Springer New York, New York, NY, 2009. [Jai10] Anil K. Jain. Data clustering: 50 years beyond k-means. 
Pattern Recognition Letters, 31(8):651–666, 2010.
[KBXS12] Akshay Krishnamurthy, Sivaraman Balakrishnan, Min Xu, and Aarti Singh. Efficient active algorithms for hierarchical clustering. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012, 2012.
[MC12] Fionn Murtagh and Pedro Contreras. Algorithms for hierarchical clustering: an overview. Wiley Interdisc. Rew.: Data Mining and Knowledge Discovery, 2(1):86–97, 2012.
[RP16] Aurko Roy and Sebastian Pokutta. Hierarchical clustering via spreading metrics. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2316–2324, 2016.
[ZB09] Reza Zadeh and Shai Ben-David. A uniqueness theorem for clustering. In UAI 2009, Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, Montreal, QC, Canada, June 18-21, 2009, pages 639–646, 2009.
6,526
6,903
Adaptive Accelerated Gradient Converging Method under Hölderian Error Bound Condition

Mingrui Liu, Tianbao Yang
Department of Computer Science, The University of Iowa, Iowa City, IA 52242
{mingrui-liu, tianbao-yang}@uiowa.edu

Abstract

Recent studies have shown that the proximal gradient (PG) method and the accelerated gradient method (APG) with restarting can enjoy a linear convergence under a weaker condition than strong convexity, namely a quadratic growth condition (QGC). However, the faster convergence of the restarting APG method relies on the potentially unknown constant in the QGC to appropriately restart APG, which restricts its applicability. We address this issue by developing a novel adaptive gradient converging method, i.e., leveraging the magnitude of the proximal gradient as a criterion for restart and termination. Our analysis extends to a much more general condition beyond the QGC, namely the Hölderian error bound (HEB) condition. The key technique for our development is a novel synthesis of adaptive regularization and a conditional restarting scheme, which extends previous work focusing on strongly convex problems to a much broader family of problems. Furthermore, we demonstrate that our results have important implications and applications in machine learning: (i) if the objective function is coercive and semi-algebraic, PG's convergence speed is essentially o(1/t), where t is the total number of iterations; (ii) if the objective function consists of an ℓ1, ℓ∞, ℓ1,∞, or Huber norm regularization and a convex smooth piecewise quadratic loss (e.g., square loss, squared hinge loss and Huber loss), the proposed algorithm is parameter-free and enjoys a faster linear convergence than PG without any other assumptions (e.g., restricted eigen-value conditions). It is notable that our linear convergence results for the aforementioned problems are global instead of local. To the best of our knowledge, these improved results are first shown in this work.

1 Introduction

We consider the following smooth composite optimization:

  min_{x∈R^d} F(x) := f(x) + g(x),   (1)

where g(x) is a proper lower semi-continuous convex function and f(x) is a continuously differentiable convex function whose gradient is L-Lipschitz continuous. The above problem has been studied extensively in the literature and many algorithms have been developed with convergence guarantees. In particular, by employing the proximal mapping associated with g(x), i.e.,

  P_{ηg}(u) = argmin_{x∈R^d} (1/2)‖x − u‖₂² + ηg(x),   (2)

proximal gradient (PG) and accelerated proximal gradient (APG) methods have been developed for solving (1) with O(1/ε) and O(1/√ε) iteration complexities for finding an ε-optimal solution (for the moment, we neglect the constant factor).

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
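For intuition, the proximal mapping (2) has a closed form for many regularizers of interest — e.g., for g(x) = λ‖x‖₁ it is entrywise soft-thresholding — and one PG iteration takes the form x ← P_{ηg}(x − η∇f(x)) with step size η = 1/L. A minimal sketch (ours, for the least-squares loss; not code from the paper):

    import numpy as np

    def prox_l1(u, t):
        """P_{t*g}(u) for g = ||.||_1, i.e. entrywise soft-thresholding at level t."""
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

    def proximal_gradient(A, b, lam, iters=2000):
        """PG on F(x) = 0.5 * ||Ax - b||^2 + lam * ||x||_1 with eta = 1/L."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        eta = 1.0 / L
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = prox_l1(x - eta * grad, lam * eta)
        return x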
When either f(x) or g(x) is strongly convex, both PG and APG enjoy a linear convergence, i.e., the iteration complexity is improved to O(log(1/ε)). Recently, a wave of studies has tried to generalize the linear convergence to problems without strong convexity but under certain structured conditions on the objective function or, more generally, a quadratic growth condition [8, 32, 21, 23, 7, 31, 3, 15, 9, 29, 4, 24, 26, 25]. Earlier work along this line dates back to [12, 13, 14]. An example of the structured condition is that f(x) = h(Ax), where h(·) is a strongly convex function with ∇h(x) Lipschitz continuous on any compact set, and g(x) is a polyhedral function. Under such a structured condition, a local error bound condition can be established [12, 13, 14], which renders an asymptotic (local) linear convergence of the proximal gradient method. A quadratic growth condition (QGC) prescribes that the objective function satisfies, for any x ∈ R^d (this can be relaxed to a fixed domain, as done in this work),

  (μ/2)‖x − x_*‖_2² ≤ F(x) − F(x_*),

where x_* denotes a closest point to x in the optimal set. Under such a quadratic growth condition, several recent studies have established the linear convergence of PG, APG and many other algorithms (e.g., coordinate descent methods) [3, 15, 4, 9, 29]. A notable result is that PG enjoys an iteration complexity of O((L/μ) log(1/ε)) without knowing the value of μ, while a restarting version of APG studied in [15] enjoys an improved iteration complexity of O(√(L/μ) log(1/ε)), hinging on the value of μ to appropriately restart APG periodically. Other equivalent or more restrictive conditions have also been considered in several studies to show the linear convergence of (proximal) gradient methods and other methods [9, 15, 29, 30].

In this paper, we extend this line of work to a more general error bound condition, namely the Hölderian error bound (HEB) condition on a compact sublevel set S_ξ = {x ∈ R^d : F(x) − F(x_*) ≤ ξ}: there exist θ ∈ (0, 1] and 0 < c < ∞ such that

  ‖x − x_*‖_2 ≤ c(F(x) − F(x_*))^θ,  ∀x ∈ S_ξ.   (3)

Note that when θ = 1/2 and c = √(1/μ), the HEB reduces to the QGC. In the sequel, we will refer to C = Lc² as the condition number of the problem. It is worth mentioning that Bolte et al. [3] considered the same condition, or an equivalent Kurdyka-Łojasiewicz inequality, but they only focused on descent methods that bear a sufficient-decrease condition for each update, consequently excluding APG. In addition, they do not provide an explicit iteration complexity under the general HEB condition.

As a warm-up and motivation, we first present a straightforward analysis showing that PG is automatically adaptive to the HEB, and that APG can be made adaptive by restarting. In particular, if F(x) satisfies a HEB condition on the initial sublevel set, PG has an iteration complexity of O(max(C/ε^{1−2θ}, C log(1/ε))), and restarting APG enjoys an iteration complexity of O(max(√C/ε^{1/2−θ}, √C log(1/ε))) for the convergence of the objective value, where C = Lc² is the condition number (when θ > 1/2, all algorithms converge in a finite number of steps). These two results resemble but generalize recent works that establish linear convergence of PG and restarting APG under the QGC, a special case of HEB. Although it enjoys faster convergence, restarting APG has a critical caveat: it requires knowledge of the constant c in the HEB to restart APG, which is usually difficult to compute or estimate. In this paper, we make nontrivial contributions to
obtain faster convergence of the proximal gradient's norm under the HEB condition by developing an adaptive accelerated gradient converging method. The main results of this paper are summarized in Table 1. The contributions of this paper are: (i) we extend the analysis of PG and restarting APG under the quadratic growth condition to the more general HEB condition, and establish the adaptive iteration complexities of both algorithms; (ii) to enjoy the faster convergence of restarting APG and to eliminate the algorithmic dependence on the unknown parameter c, we propose and analyze an adaptive accelerated gradient converging (adaAGC) method. The developed algorithms and theory have important implications and applications in machine learning. Firstly, if the considered objective function is also coercive and semi-algebraic (e.g., a norm-regularized problem in machine learning with a semi-algebraic loss function), then PG's convergence speed is essentially o(1/t) instead of O(1/t), where t is the total number of iterations. Secondly, for solving ℓ1-, ℓ∞- or ℓ1,∞-regularized smooth loss minimization problems, including least-squares loss, squared hinge loss and huber loss, the proposed adaAGC method enjoys a linear convergence and a square-root dependence on the "condition" number. In contrast to previous work, the proposed algorithm is parameter-free and does not rely on any restricted conditions (e.g., restricted eigen-value conditions).

2 Notations and Preliminaries

In this section, we present some notations and preliminaries. In the sequel, we let ‖·‖_p (p ≥ 1) denote the p-norm of a vector. A function g : R^d → (−∞, ∞] is a proper function if g(x) < +∞ for at least one x. g(x) is lower semi-continuous at a point x_0 if lim inf_{x→x_0} g(x) ≥ g(x_0). A function F(x) is coercive if and only if F(x) → ∞ as ‖x‖_2 → ∞. We will also refer to semi-algebraic sets and semi-algebraic functions several times in the paper, which are standard concepts in mathematics [2]. Due to the limit of space, we present the definitions in the supplement.

Denote by ℕ the set of all positive integers. A function h(x) is a real polynomial if there exists r ∈ ℕ such that h(x) = Σ_{0≤|α|≤r} λ_α x^α, where λ_α ∈ R and x^α = x_1^{α_1} ··· x_d^{α_d}, α_j ∈ ℕ ∪ {0}, |α| = Σ_{j=1}^d α_j, and r is referred to as the degree of h(x). A continuous function f(x) is said to be a piecewise convex polynomial if there exist finitely many polyhedra P_1, …, P_k with ∪_{j=1}^k P_j = R^n such that the restriction of f to each P_j is a convex polynomial. Let f_j be the restriction of f to P_j. The degree of a piecewise convex polynomial function f, denoted by deg(f), is the maximum of the degrees of the f_j. If deg(f) = 2, the function is referred to as a piecewise convex quadratic function. Note that a piecewise convex polynomial function is not necessarily a convex function [10].

A function f(x) is L-smooth w.r.t. ‖·‖_2 if it is differentiable and has a Lipschitz continuous gradient with Lipschitz constant L, i.e., ‖∇f(x) − ∇f(y)‖_2 ≤ L‖x − y‖_2, ∀x, y. Let ∂g(x) denote the subdifferential of g at x, and denote ‖∂g(x)‖_2 = min_{u∈∂g(x)} ‖u‖_2. A function g(x) is μ-strongly convex w.r.t. ‖·‖_2 if it satisfies, for any u ∈ ∂g(y), g(x) ≥ g(y) + u^T(x − y) + (μ/2)‖x − y‖_2², ∀x, y. Denote by η > 0 a positive scalar, and let P_{ηg} be the proximal mapping associated with ηg(·) defined in (2).
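To make the proximal mapping (2) concrete, the following is a minimal Python sketch (ours, not the paper's) for the common case g(x) = λ‖x‖_1, where P_{ηg} reduces to coordinate-wise soft-thresholding; the function names and the least-squares example are our own illustration.

```python
import numpy as np

def prox_l1(u, t):
    """Proximal mapping P_{t*g}(u) for g(x) = ||x||_1 scaled by t:
    soft-thresholding applied coordinate-wise."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def pg_step_lasso(x, A, b, lam, eta):
    """One proximal gradient step x+ = P_{eta*g}(x - eta*grad_f(x))
    for f(x) = 1/(2n) ||Ax - b||_2^2 and g(x) = lam * ||x||_1."""
    grad = A.T @ (A @ x - b) / A.shape[0]
    return prox_l1(x - eta * grad, eta * lam)
```

Analogous closed-form mappings exist for the ℓ∞ norm (via Moreau decomposition and projection onto an ℓ1 ball) and for the huber norm considered later in Section 5.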
Given an objective function F(x) = f(x) + g(x), where f(x) is L-smooth and convex and g(x) is a simple non-smooth function which is closed and convex, define the proximal gradient G_η(x) as:

  G_η(x) = (1/η)(x − x_η^+),  where x_η^+ = P_{ηg}(x − η∇f(x)).

When g(x) = 0, we have G_η(x) = ∇f(x), i.e., the proximal gradient is simply the gradient. It is known that x is an optimal solution iff G_η(x) = 0. If η = 1/L, for simplicity we denote G(x) = G_{1/L}(x) and x^+ = P_{g/L}(x − ∇f(x)/L). Let F_* denote the optimal objective value of min_{x∈R^d} F(x) and Ω_* the optimal set. Denote by S_ξ = {x : F(x) − F_* ≤ ξ} the ξ-sublevel set of F(x), and let D(x, Ω) = min_{y∈Ω} ‖x − y‖_2.

The proximal gradient (PG) method solves problem (1) by the update

  x_{t+1} = P_{ηg}(x_t − η∇f(x_t)),   (4)

with η ≤ 1/L, starting from some initial solution x_1 ∈ R^d. It can be shown that PG has an iteration complexity of O(L D(x_1, Ω_*)²/ε). Nevertheless, accelerated proximal gradient (APG) converges faster than PG. There are many variants of APG in the literature [22], including the well-known FISTA [1]. The simplest variant adopts the following update:

  y_t = x_t + β_t (x_t − x_{t−1}),  x_{t+1} = P_{ηg}(y_t − η∇f(y_t)),

where η ≤ 1/L and β_t is an appropriate sequence (e.g., β_t = (t−1)/(t+2)). APG enjoys an iteration complexity of O(√L D(x_1, Ω_*)/√ε) [22]. Furthermore, if f(x) is both L-smooth and μ-strongly convex, one can set β_t = (√L − √μ)/(√L + √μ) and deduce a linear convergence [16, 11] with a better dependence on the condition number than that of PG. If g(x) is μ-strongly convex and f(x) is L-smooth, Nesterov [17] proposed a different variant based on dual averaging, referred to as the accelerated dual gradient (ADG) method, which will be useful for our development. The key steps are presented in Algorithm 1.

Algorithm 1: ADG
  x_0 ∈ Ω, A_0 = 0, v_0 = x_0
  for t = 0, …, T do
    Find a_{t+1} from the quadratic equation a²/(A_t + a) = 2(1 + μA_t)/L
    Set A_{t+1} = A_t + a_{t+1}
    Set y_t = (A_t/A_{t+1}) x_t + (a_{t+1}/A_{t+1}) v_t
    Compute x_{t+1} = P_{g/L}(y_t − ∇f(y_t)/L)
    Compute v_{t+1} = arg min_x Σ_{τ=1}^{t+1} a_τ ∇f(x_τ)^T x + A_{t+1} g(x) + (1/2)‖x − x_0‖_2²

2.1 Hölderian error bound (HEB) condition

Definition 1 (Hölderian error bound (HEB)). A function F(x) is said to satisfy a HEB condition on the ξ-sublevel set if there exist θ ∈ (0, 1] and 0 < c < ∞ such that for any x ∈ S_ξ,

  dist(x, Ω_*) ≤ c(F(x) − F_*)^θ.   (5)

The HEB condition is closely related to the Łojasiewicz inequality, or more generally the Kurdyka-Łojasiewicz (KL) inequality, in real algebraic geometry. It has been shown that when functions are semi-algebraic and continuous, the above inequality holds on any compact set [3]. We refer the readers to [3] for more discussion of HEB and KL inequalities. In the remainder of this section, we review some previous results to demonstrate that HEB is a generic condition that holds for a broad family of problems of interest. The following proposition states that any proper, coercive, convex, lower-semicontinuous and semi-algebraic function satisfies the HEB condition.

Proposition 1. [3] Let F(x) be a proper, coercive, convex, lower semicontinuous and semi-algebraic function. Then there exist θ ∈ (0, 1] and 0 < c < ∞ such that F(x) satisfies the HEB on any ξ-sublevel set.

Example: Most optimization problems in machine learning with an objective that consists of an empirical loss that is semi-algebraic (e.g., hinge loss, squared hinge loss, absolute loss, square loss) and a norm regularization ‖·‖_p (p ≥
1 is a rational) or a norm constraint are proper, coercive, lower semicontinuous and semi-algebraic functions.

The next two propositions exhibit the value of θ for piecewise convex quadratic functions and piecewise convex polynomial functions.

Proposition 2. [10] Let F(x) be a piecewise convex quadratic function on R^d. Suppose F(x) is convex. Then for any ξ > 0, there exists 0 < c < ∞ such that D(x, Ω_*) ≤ c(F(x) − F_*)^{1/2}, ∀x ∈ S_ξ.

Many problems in machine learning are piecewise convex quadratic functions, which will be discussed further in Section 5.

Proposition 3. [10] Let F(x) be a piecewise convex polynomial function on R^d. Suppose F(x) is convex. Then for any ξ > 0, there exists c > 0 such that D(x, Ω_*) ≤ c(F(x) − F_*)^{1/((deg(F)−1)^d + 1)}, ∀x ∈ S_ξ.

Algorithm 2: restarting APG (rAPG)
  Input: the number of stages K and x^0 ∈ S_{ε_0}
  for k = 1, …, K do
    Set y_1^k = x^{k−1} and x_1^k = x^{k−1}
    for τ = 1, …, t_k do
      Update x_{τ+1}^k = P_{g/L}(y_τ^k − ∇f(y_τ^k)/L)
      Update y_{τ+1}^k = x_{τ+1}^k + (τ/(τ+3)) (x_{τ+1}^k − x_τ^k)
    Let x^k = x_{t_k+1}^k and update t_k
  Output: x^K

Indeed, for a polyhedrally constrained convex polynomial we have a tighter result, as shown below.

Proposition 4. [27] Let F(x) be a convex polynomial function on R^d with degree m. If P ⊆ R^d is a polyhedral set, then the problem min_{x∈P} F(x) admits a global error bound: for all x ∈ P there exists 0 < c < ∞ such that

  D(x, Ω_*) ≤ c[(F(x) − F_*) + (F(x) − F_*)^{1/m}].   (6)

From the global error bound (6), one can easily derive the HEB condition (3). As an example, the ℓ1-constrained ℓp norm regression below [19] satisfies the HEB condition (3) with θ = 1/p:

  min_{‖x‖_1 ≤ s} F(x) ≜ (1/n) Σ_{i=1}^n (a_i^T x − b_i)^p,  p ∈ 2ℕ.   (7)

Many previous papers have considered a family of structured smooth composite functions F(x) = h(Ax) + g(x), where g(x) is a polyhedral function and h(·) is a smooth function that is strongly convex on any compact set. Suppose the optimal set of the above problem is non-empty and compact (e.g., the function is coercive); then so is the sublevel set S_ξ, and it can be shown that such a function satisfies HEB with θ = 1/2 on any sublevel set S_ξ [15, Theorem 10]. Examples of h(u) include the logistic loss h(u) = Σ_i log(1 + exp(−u_i)) and the square loss h(u) = ‖u‖_2². Finally, we note that there exist problems that admit HEB with θ > 1/2. A trivial example is given by F(x) = (1/2)‖x‖_2² + ‖x‖_p^p with p ∈ [1, 2), which satisfies HEB with θ = 1/p ∈ (1/2, 1]. An interesting non-trivial family of problems has f(x) = 0 and g(x) a piecewise linear function, according to Proposition 3. PG or APG applied to such a family of problems is closely related to the proximal point algorithm [20]. Exploration of this algorithmic connection is not the focus of this paper.

3 PG and restarting APG under HEB

As a warm-up and motivation for the major contribution presented in the next section, we present convergence results for PG and a restarting APG under the HEB condition. The analysis is mostly straightforward and is included in the supplement. We first present a result for PG using the update (4).

Theorem 1. Suppose F(x_0) − F_* ≤ ε_0 and F(x) satisfies HEB on S_{ε_0}. The iteration complexity of PG with option I (which returns the last solution; see the supplementary material) for achieving F(x_t) − F_* ≤ ε is O(c²L ε_0^{2θ−1}) if θ > 1/2, and is O(max{c²L/ε^{1−2θ}, c²L log(ε_0/ε)}) if θ ≤ 1/2.
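For concreteness, the following is a small, self-contained sketch (our own code, not the authors') of the PG update (4) analyzed in Theorem 1. It also records the historically smallest proximal gradient norm ‖G(x_τ)‖_2, the quantity that the next results use as a restart and termination measure; `prox_g` is an assumed user-supplied implementation of P_{tg}.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, L, max_iter=10000, eps=1e-6):
    """PG update (4) with eta = 1/L. `prox_g(u, t)` must implement P_{t*g}(u).
    Returns the last iterate (option I) and the smallest proximal
    gradient norm ||G(x)||_2 observed so far."""
    x, eta = x0.copy(), 1.0 / L
    best_gnorm = np.inf
    for _ in range(max_iter):
        x_plus = prox_g(x - eta * grad_f(x), eta)
        gnorm = np.linalg.norm((x - x_plus) / eta)   # ||G(x)||_2
        best_gnorm = min(best_gnorm, gnorm)
        x = x_plus
        if best_gnorm <= eps:
            break
    return x, best_gnorm
```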
Next, we show that APG can be made adaptive to HEB by periodic restarting, given c and θ. This is similar to [15] under the QGC. The steps of restarting APG (rAPG) are presented in Algorithm 2, where we employ the simplest variant of APG.

Theorem 2. Suppose F(x_0) − F_* ≤ ε_0 and F(x) satisfies HEB on S_{ε_0}. By running Algorithm 2 with K = ⌈log_2(ε_0/ε)⌉ and t_k = ⌈2c√L ε_{k−1}^{θ−1/2}⌉, we have F(x^K) − F_* ≤ ε. The iteration complexity of rAPG is O(c√L ε_0^{θ−1/2}) if θ > 1/2, and if θ ≤ 1/2 it is O(max{c√L/ε^{1/2−θ}, c√L log(ε_0/ε)}).

From Algorithm 2, we can see that rAPG requires knowledge of c besides θ to restart APG. However, for many problems of interest the value of c is unknown, which makes rAPG impractical. To address this issue, we propose to use the magnitude of the proximal gradient as a measure for restart and termination. It is worth mentioning the difference between the development in this paper and previous studies. Previous work [16, 11] considered strongly convex optimization problems where the strong convexity parameter is unknown, and also used the magnitude of the proximal gradient as a measure for restart and termination. However, in order to achieve faster convergence under the HEB condition without strong convexity, we have to introduce a novel technique of adaptive regularization that adapts to the HEB. With a novel synthesis of the adaptive regularization and a conditional restarting scheme that searches for c, we are able to develop practical adaptive accelerated gradient methods. We also note a recent work [6] that proposed unconditionally restarted accelerated gradient methods under the QGC. Their restart of APG/FISTA does not involve evaluation of the gradient or the objective value, but rather depends on a restarting frequency parameter and a convex combination parameter for computing the restarting solution, which can be set based on a rough estimate of the strong convexity parameter. As a result, their linear convergence (established for the distance of the solutions to the optimal set) heavily depends on that rough estimate of the strong convexity parameter.

Before diving into the details of the proposed algorithm, we first present a variant of PG as a baseline for comparison, motivated by [18] for smooth problems, which enjoys a faster convergence than vanilla PG in terms of the proximal gradient's norm. The idea is to return the solution that achieves the minimum magnitude of the proximal gradient, i.e., min_{1≤τ≤t} ‖G(x_τ)‖_2. The convergence of min_{1≤τ≤t} ‖G(x_τ)‖_2 under HEB is presented in the following theorem.

Theorem 3. Suppose F(x_0) − F_* ≤ ε_0 and F(x) satisfies HEB on S_{ε_0}. The iteration complexity of PG (option II, which returns the solution with the historically minimal proximal gradient; see the supplementary material) for achieving min_{1≤τ≤t} ‖G(x_τ)‖_2 ≤ ε is O(c^{1/(1−θ)} L max{1/ε^{(1−2θ)/(1−θ)}, log(ε_0/ε)}) if θ ≤ 1/2, and is O(c²L ε_0^{2θ−1}) if θ > 1/2.

The final theorem in this section summarizes an o(1/t) convergence result of PG for minimizing a proper, coercive, convex, lower semicontinuous and semi-algebraic function, which could be of interest on its own.

Theorem 4. Let F(x) be a proper, coercive, convex, lower semicontinuous and semi-algebraic function. Then PG (with option I and option II) converges at a speed of o(1/t) for F(x) − F_* and G(x), respectively, where t is the total number of iterations.

Remark: This can be easily proved by combining Proposition 1 and Theorems 1 and 3.

4 Adaptive Accelerated Gradient Converging Methods

We first present a key lemma for our development that serves as the foundation of the adaptive regularization and conditional restarting.

Lemma 1.
Assume F(x) satisfies HEB for any x ∈ S_ξ with θ ∈ (0, 1]. If θ ∈ (0, 1/2], then for any x ∈ S_ξ we have D(x, Ω_*) ≤ (2/L)‖G(x)‖_2 + c^{1/(1−θ)} 2^{θ/(1−θ)} ‖G(x)‖_2^{θ/(1−θ)}. If θ ∈ (1/2, 1], then for any x ∈ S_ξ we have D(x, Ω_*) ≤ (2/L + 2c² ξ^{2θ−1}) ‖G(x)‖_2.

A building block of the proposed algorithm is to solve a problem of the following style by employing Algorithm 1 (i.e., Nesterov's ADG):

  F_λ(x) = F(x) + (λ/2)‖x − x_0‖_2² = f(x) + g(x) + (λ/2)‖x − x_0‖_2²,   (8)

which consists of an L-smooth function f(x) and a λ-strongly convex function g_λ(x) = g(x) + (λ/2)‖x − x_0‖_2². A key result for our development of conditional restarting is the following theorem for each call of Algorithm 1 on the above problem.

Theorem 5. By running Algorithm 1 for minimizing f(x) + g_λ(x) with an initial solution x_0, for t ≥ √(2L/λ) log(L/λ) we have

  ‖G(x_{t+1})‖_2 ≤ √(L(L + λ)) ‖x_0 − x_*‖_2 [1 + √(λ/(2L))]^{−t} + 2√(2λ) ‖x_0 − x_*‖_2,

where x_* is any optimal solution to the original problem.

Finally, we present the proposed adaptive accelerated gradient converging (adaAGC) method for solving the smooth composite optimization problem in Algorithm 3, and prove the main theorem of this section.

Algorithm 3: adaAGC for solving (1)
  Input: x_0 ∈ S_{ε_0}, c_0, and ω > 1
  Let c_e = c_0 and ε_0 = ‖G(x_0)‖_2
  for k = 1, …, K do
    for s = 1, … do
      Let λ_k be given in (9) and g_{λ_k}(x) = g(x) + (λ_k/2)‖x − x^{k−1}‖_2²
      A_0 = 0, v_0 = x^{k−1}, x_0^k = x^{k−1}
      for t = 0, … do
        Let a_{t+1} be the root of a²/(A_t + a) = 2(1 + λ_k A_t)/L
        Set A_{t+1} = A_t + a_{t+1}
        Set y_t = (A_t/A_{t+1}) x_t^k + (a_{t+1}/A_{t+1}) v_t
        Compute x_{t+1}^k = P_{g_{λ_k}/L}(y_t − ∇f(y_t)/L)
        Compute v_{t+1} = arg min_x (1/2)‖x − x^{k−1}‖_2² + Σ_{τ=1}^{t+1} a_τ ∇f(x_τ^k)^T x + A_{t+1} g_{λ_k}(x)
        if ‖G(x_{t+1}^k)‖_2 ≤ ε_{k−1}/2 then   // step S1
          let x^k = x_{t+1}^k and ε_k = ε_{k−1}/2; break the enclosing two for loops
        if t = ⌈√(2L/λ_k) log(√(L(L+λ_k))/(λ_k ε_k))⌉ then   // condition (*)
          let c_e = ω c_e and break the enclosing for loop   // step S2
  Output: x^K

The adaAGC method runs in multiple stages (k = 1, …, K). We start with an initial guess c_0 of the parameter c in the HEB. With the current guess c_e of c, at the k-th stage adaAGC employs ADG to solve a problem of the form (8) with an adaptive regularization parameter λ_k given by

  λ_k = min( L/32, ε_{k−1}^{(1−2θ)/(1−θ)} / (16 c_e^{1/(1−θ)} 2^{θ/(1−θ)}) )  if θ ∈ (0, 1/2],
  λ_k = min( L/32, 1/(32 c_e² ε_0^{2θ−1}) )  if θ ∈ (1/2, 1].   (9)

The condition (*) specifies the condition for restarting with an increased value of c_e. When the flow enters step S2 before step S1 for some s, it means that the current guess c_e is not sufficiently large according to Theorem 5 and Lemma 1; we then increase c_e and repeat the same process (the next iteration over s). We refer to this machinery as conditional restarting. We present the main result of this section in the following theorem.

Theorem 6. Suppose F(x_0) − F_* ≤ ε_0, F(x) satisfies HEB on S_{ε_0}, and c_0 ≤ c. Let ε_0 = ‖G(x_0)‖_2, K = ⌈log_2(ε_0/ε)⌉, and p = (1 − 2θ)/(1 − θ) for θ ∈ (0, 1/2]. The iteration complexity of Algorithm 3 for having ‖G(x^K)‖_2 ≤ ε is Õ(√(L c^{1/(1−θ)}) max(1/ε^{p/2}, log(ε_0/ε))) if θ ∈ (0, 1/2], and Õ(√(Lc) ε_0^{θ−1/2}) if θ ∈ (1/2, 1], where Õ(·) suppresses a log term depending on c, c_0, L, ω.

We sketch the idea of the proof here: for each k, we can bound the number of cycles (indexed by s in the algorithm) needed to enter step S1, denoted by s_k. We can bound s_k ≤ log_ω(c/c_0) + 1, and then the total number of iterations across all stages is bounded by Σ_{k=1}^K s_k t_k, where t_k = ⌈√(2L/λ_k) log(√(L(L+λ_k))/(λ_k ε_k))⌉.
Before ending this section, we remark that if the smoothness parameter L is unknown, one can also employ a backtracking technique, paired with each update, to search for L [17].

4.1 Convergence of Objective Gap

In this subsection, we show that the convergence of the proximal gradient also implies the convergence of the objective gap F(x) − F_* for certain subclasses of the general problems that we have considered. Our first result applies to the case when F(x) satisfies the HEB with θ ∈ (0, 1) and the nonsmooth part g(x) is absent, i.e., F(x) = f(x). In this case, we can establish the convergence of the objective gap, since the objective gap can be bounded by a function of the magnitude of the gradient, i.e., f(x) − f_* ≤ c^{1/(1−θ)} ‖∇f(x)‖_2^{1/(1−θ)} (c.f. the proof of Lemma 2 in the supplement). One can easily prove the following result.

Theorem 7. Assume F(x) = f(x) and the same conditions as in Theorem 6 hold. The iteration complexity of Algorithm 3 for having F(x^K) − F(x_*) ≤ ε is Õ(c√L max(1/ε^{1/2−θ}, log(ε_0/ε))) if θ ∈ (0, 1/2], and Õ(c√L ε_0^{θ−1/2}) if θ ∈ (1/2, 1), where Õ(·) suppresses a log term depending on c, c_0, L, ω.

Remark: Note that the above iteration complexity of adaAGC is the same as that of rAPG (shown in Table 1), where the latter is established under knowledge of c.

Our second result applies to a subclass of the general problems where either g(x) or f(x) is μ-strongly convex, or F(x) = f(x) + g(x) where f(x) = h(Ax) with h(·) a strongly convex function and g(x) the indicator function of a polyhedral set Ω = {x : Cx ≤ b}. Examples include square loss minimization under an ℓ1 or ℓ∞ constraint [15, Theorem 8]. It has been shown that in the last case, for any x ∈ dom(F), there exists μ > 0 such that

  f(x_*) ≥ f(x) + ∇f(x)^T(x_* − x) + (μ/2)‖x − x_*‖_2²,   (10)

where x_* is the closest optimal solution to x, and the HEB condition of F(x) with θ = 1/2 and c = √(2/μ) holds [15, Theorem 1]. In the three cases mentioned above, we can establish that F(x^+) − F_* ≤ O(1/μ)‖G(x)‖_2², where x^+ = P_{g/L}(x − ∇f(x)/L), and the following result.

Theorem 8. Assume f(x) or g(x) is μ-strongly convex, or f(x) = h(Ax) and g(x) is the indicator function of a polyhedral set such that (10) holds for some μ > 0, and the other conditions in Theorem 6 hold. The iteration complexity of Algorithm 3 for having F(x_K^+) − F(x_*) ≤ ε is Õ(√(L/μ) log(ε_0/(√μ ε))), where Õ(·) suppresses a log term depending on μ, c_0, L, ω.
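Before moving to applications, here is a heavily simplified Python sketch (ours) of the stage/guess control flow of Algorithm 3 for the case θ ∈ (0, 1/2]. It is a structural illustration only: to keep the code short we substitute plain proximal gradient for Nesterov's ADG in the inner solve, every function name (`grad_f`, `prox_g`, …) is our own, the λ formula is the first branch of (9), and the inner iteration budget mirrors condition (*).

```python
import numpy as np

def prox_grad_norm(grad_f, prox_g, x, L):
    """||G(x)||_2 with G(x) = L*(x - P_{g/L}(x - grad_f(x)/L))."""
    return L * np.linalg.norm(x - prox_g(x - grad_f(x) / L, 1.0 / L))

def ada_agc_sketch(grad_f, prox_g, x0, L, theta, c0=10.0, omega=2.0, eps=1e-6):
    x, c_e = x0.copy(), c0
    eps_k = prox_grad_norm(grad_f, prox_g, x, L)
    for _ in range(int(np.ceil(np.log2(max(eps_k / eps, 2.0))))):  # stages k
        while True:                                   # guesses c_e (index s)
            lam = min(L / 32.0,
                      eps_k ** ((1 - 2 * theta) / (1 - theta)) /
                      (16 * c_e ** (1 / (1 - theta)) * 2 ** (theta / (1 - theta))))
            budget = int(np.ceil(np.sqrt(2 * L / lam) *
                                 np.log(np.sqrt(L * (L + lam)) / (lam * eps_k))))
            y, L_reg, hit = x.copy(), L + lam, False
            for _ in range(max(budget, 1)):   # inner solve of F + lam/2 ||.-x||^2
                g = grad_f(y) + lam * (y - x)
                y = prox_g(y - g / L_reg, 1.0 / L_reg)
                if prox_grad_norm(grad_f, prox_g, y, L) <= eps_k / 2:
                    x, eps_k, hit = y, eps_k / 2, True    # step S1
                    break
            if hit:
                break
            c_e *= omega                       # step S2: guess was too small
    return x
```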
5 Applications and Experiments

In this section, we present some applications of our theorems and algorithms in machine learning. In particular, we consider regularized problems with a smooth loss:

  min_{x∈R^d} (1/n) Σ_{i=1}^n ℓ(x^T a_i, b_i) + λ R(x),   (11)

where (a_i, b_i), i = 1, …, n, denote a set of training examples, and R(x) could be the ℓ1 norm ‖x‖_1, the ℓ∞ norm ‖x‖_∞, a huber norm [28], or the ℓ1,p norm Σ_{k=1}^K ‖x_k‖_p, where x_k is the k-th component vector of x. Next, we present several results about the HEB condition that cover a broad family of loss functions enjoying the faster convergence of adaAGC.

Corollary 1. Assume the loss function ℓ(z, b) is nonnegative, convex, smooth and piecewise quadratic. Then the problems in (11) with ℓ1 norm, ℓ∞ norm, huber norm and ℓ1,∞ norm regularization satisfy the HEB condition with θ = 1/2 on any sublevel set S_ξ with ξ > 0. Hence adaAGC has a global linear convergence in terms of the proximal gradient's norm and a square-root dependence on the "condition" number.

Remark: The above corollary follows directly from Proposition 2 and Theorem 6. If the loss function is a logistic loss and the regularizer is a polyhedral function (e.g., the ℓ1, ℓ∞ and ℓ1,∞ norms), we can prove the same result. Examples of convex, smooth and piecewise convex quadratic loss functions include: the square loss ℓ(z, b) = (z − b)² for b ∈ R; the squared hinge loss ℓ(z, b) = max(0, 1 − bz)² for b ∈ {1, −1}; and the huber loss ℓ(z, b) = δ(|z − b| − δ/2) if |z − b| > δ and ℓ(z, b) = (z − b)²/2 if |z − b| ≤ δ, for b ∈ R.

Experimental Results. We conduct experiments to demonstrate the effectiveness of adaAGC for solving problems of type (1). Specifically, we compare adaAGC, PG with option II (which returns the solution with the historically minimal proximal gradient), FISTA, and unconditionally restarting FISTA (urFISTA) [6] for optimizing the squared hinge loss (classification), the square loss (regression) and the huber loss with δ = 1 (regression), each with ℓ1 and ℓ∞ regularization, which are cases of (11); we also consider the ℓ1-constrained ℓp norm regression (7) with varying p. We use three datasets from the LibSVM website [5]: splice (n = 1000, d = 60) for classification, and bodyfat (n = 252, d = 14) and cpusmall (n = 8192, d = 12) for regression. For problems covered by (11), we fix λ = 1/n, and the parameter s in (7) is set to s = 100. We use backtracking in PG, adaAGC and FISTA to search for the smoothness parameter. In adaAGC, we set c_0 = 2, ω = 2 for the ℓ1-constrained ℓp norm regression and c_0 = 10, ω = 2 for the remaining problems. For fairness, urFISTA and adaAGC use the same initial estimate of the unknown parameter (i.e., c). Each algorithm starts at the same initial point, which is set to zero, and we stop each algorithm when the norm of its proximal gradient is less than a prescribed threshold ε, reporting the total number of proximal mappings. The results are presented in Tables 2-5.

Table 2: squared hinge loss with ℓ1 norm (left) and ℓ∞ norm (right) regularization on splice data

ℓ1 regularization (FISTA > adaAGC > PG > urFISTA):
| Algorithm | ε = 1e-4 | ε = 1e-5 | ε = 1e-6 | ε = 1e-7 |
| PG        | 2040     | 2040     | 2040     | 2040     |
| FISTA     | 1289     | 1289     | 1289     | 1289     |
| urFISTA   | 1666     | 2371     | 2601     | 3480     |
| adaAGC    | 1410     | 1410     | 1410     | 1410     |

ℓ∞ regularization (adaAGC > urFISTA > PG > FISTA):
| Algorithm | ε = 1e-4 | ε = 1e-5 | ε = 1e-6 | ε = 1e-7 |
| PG        | 3514     | 3724     | 3724     | 3724     |
| FISTA     | 5526     | 5526     | 5526     | 5526     |
| urFISTA   | 1674     | 2379     | 2605     | 3488     |
| adaAGC    | 2382     | 2382     | 2382     | 2382     |

Table 3: square loss with ℓ1 norm (left) and ℓ∞ norm (right) regularization on cpusmall data

ℓ1 regularization (adaAGC > FISTA > urFISTA > PG):
| Algorithm | ε = 1e-4 | ε = 1e-5 | ε = 1e-6 | ε = 1e-7 |
| PG        | 109298   | 159908   | 170915   | 170915   |
| FISTA     | 6781     | 16387    | 23779    | 23779    |
| urFISTA   | 18278    | 26706    | 35173    | 43603    |
| adaAGC    | 9571     | 12623    | 13575    | 13575    |

ℓ∞ regularization (adaAGC > FISTA > urFISTA > PG):
| Algorithm | ε = 1e-4 | ε = 1e-5 | ε = 1e-6 | ε = 1e-7 |
| PG        | 139505   | 204120   | 210874   | 210874   |
| FISTA     | 6610     | 16418    | 20082    | 20082    |
| urFISTA   | 18276    | 26704    | 35169    | 43601    |
| adaAGC    | 9881     | 13033    | 13632    | 13632    |

Table 4: ℓ1-regularized huber loss (left) and ℓ1-constrained square loss (right) on bodyfat data

ℓ1-regularized huber loss (urFISTA > adaAGC > FISTA > PG):
| Algorithm | ε = 1e-4 | ε = 1e-5 | ε = 1e-6 | ε = 1e-7 |
| PG        | 258723   | 423181   | 602043   | 681488   |
| FISTA     | 6630     | 25020    | 74416    | 124261   |
| urFISTA   | 6855     | 12662    | 17994    | 23933    |
| adaAGC    | 16976    | 16980    | 23844    | 25697    |

ℓ1-constrained square loss (adaAGC > FISTA > urFISTA > PG):
| Algorithm | ε = 1e-4 | ε = 1e-5 | ε = 1e-6 | ε = 1e-7 |
| PG        | 1006880  | 1768482  | 2530085  | 2632578  |
| FISTA     | 15805    | 66319    | 180977   | 181176   |
| urFISTA   | 138359   | 235081   | 331203   | 426341   |
| adaAGC    | 23054    | 33818    | 44582    | 48127    |

Table 5: ℓ1-constrained ℓp norm regression on bodyfat data (ε = 1e-3)
| Algorithm | p = 2      | p = 4         | p = 6          | p = 8           |
| PG        | 250869 (1) | 979401 (3.90) | 1559753 (6.22) | 4015665 (16.00) |
| adaAGC    | 8710 (1)   | 17494 (2.0)   | 22481 (2.58)   | 33081 (3.80)    |
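The experimental setup above relies on backtracking to estimate the smoothness parameter. One standard backtracking rule (a common variant; the paper does not spell out its exact scheme, so treat this as an assumed implementation) grows the current estimate of L until the quadratic upper-bound inequality holds at the prox-gradient step:

```python
def pg_step_backtracking(f, grad_f, prox_g, x, L=1.0, rho=2.0):
    """One PG step with backtracking on L: grow L by the factor rho until
    f(x+) <= f(x) + <grad_f(x), x+ - x> + (L/2)||x+ - x||^2 holds."""
    g = grad_f(x)
    while True:
        x_plus = prox_g(x - g / L, 1.0 / L)
        d = x_plus - x
        if f(x_plus) <= f(x) + g @ d + 0.5 * L * (d @ d):
            return x_plus, L
        L *= rho
```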
Tables 2-5 indicate that adaAGC converges faster than PG and FISTA (except for solving the squared hinge loss with ℓ1 norm regularization) when ε is very small, which is consistent with the theoretical results. Note that urFISTA sometimes performs better than adaAGC, but is worse than adaAGC in most cases. It is notable that for some problems (see Table 2) the number of proximal mappings is the same for different precisions ε. This is because that value is the minimum number of proximal mappings at which the magnitude of the proximal gradient suddenly becomes zero. In Table 5, the numbers in parentheses indicate the increase factor in the number of proximal mappings compared to the base case p = 2; the increase factors of adaAGC are approximately the square root of those of PG and thus are consistent with our theory.

6 Conclusions

In this paper, we have considered smooth composite optimization problems under a general Hölderian error bound condition. We have established iteration complexities of the proximal gradient and accelerated proximal gradient methods that are adaptive to the Hölderian error bound condition. To eliminate the dependence on the unknown parameter in the error bound condition while enjoying the faster convergence of the accelerated proximal gradient method, we have developed a novel parameter-free adaptive accelerated gradient converging method that uses the magnitude of the (proximal) gradient as a measure for restart and termination. We have also considered a broad family of norm-regularized problems in machine learning and showed faster convergence of the proposed adaptive accelerated gradient converging method.

Acknowledgments

We thank the anonymous reviewers for their helpful comments. M. Liu and T. Yang are partially supported by the National Science Foundation (IIS-1463988, IIS-1545995).

References

[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Img. Sci., 2:183-202, 2009.
[2] E. Bierstone and P. D. Milman. Semianalytic and subanalytic sets. Publications Mathématiques de l'Institut des Hautes Études Scientifiques, 67(1):5-42, 1988.
[3] J. Bolte, T. P. Nguyen, J. Peypouquet, and B. Suter. From error bounds to the complexity of first-order descent methods for convex functions. CoRR, abs/1510.08234, 2015.
[4] D. Drusvyatskiy and A. S. Lewis. Error bounds, quadratic growth, and linear convergence of proximal methods. arXiv:1602.06661, 2016.
[5] R.-E. Fan and C.-J. Lin. LibSVM data: Classification, regression and multi-label. URL: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets, 2011.
[6] O. Fercoq and Z. Qu. Restarting accelerated gradient methods with a rough strong convexity estimate. arXiv preprint arXiv:1609.07358, 2016.
[7] P. Gong and J. Ye. Linear convergence of variance-reduced projected stochastic gradient without strong convexity. CoRR, abs/1406.1102, 2014.
[8] K. Hou, Z. Zhou, A. M. So, and Z. Luo. On the linear convergence of the proximal gradient method for trace norm regularization. In Advances in Neural Information Processing Systems (NIPS), pages 710-718, 2013.
[9] H. Karimi, J. Nutini, and M. W. Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Machine Learning and Knowledge Discovery in Databases - European Conference (ECML-PKDD), pages 795-811, 2016.
[10] G. Li. Global error bounds for piecewise convex polynomials. Math. Program., 137(1-2):37-64, 2013.
[11] Q. Lin and L. Xiao.
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization. In Proceedings of the International Conference on Machine Learning (ICML), pages 73-81, 2014.
[12] Z.-Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7-35, 1992.
[13] Z.-Q. Luo and P. Tseng. On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control and Optimization, 30(2):408-425, 1992.
[14] Z.-Q. Luo and P. Tseng. Error bounds and convergence analysis of feasible descent methods: a general approach. Annals of Operations Research, 46:157-178, 1993.
[15] I. Necoara, Y. Nesterov, and F. Glineur. Linear convergence of first order methods for non-strongly convex optimization. CoRR, abs/1504.06298, 2015.
[16] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization. Kluwer Academic Publishers, 2004.
[17] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE discussion papers, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2007.
[18] Y. Nesterov. How to make the gradients small. Optima 88, 2012.
[19] H. Nyquist. The optimal Lp norm estimator in linear regression models. Communications in Statistics - Theory and Methods, 12(21):2511-2524, 1983.
[20] R. T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM J. on Control and Optimization, 14, 1976.
[21] A. M. So. Non-asymptotic convergence analysis of inexact gradient methods for machine learning without strong convexity. CoRR, abs/1309.0113, 2013.
[22] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Submitted to SIAM Journal on Optimization, 2008.
[23] P. Wang and C. Lin. Iteration complexity of feasible descent methods for convex optimization. Journal of Machine Learning Research, 15(1):1523-1548, 2014.
[24] Y. Xu, Q. Lin, and T. Yang. Stochastic convex optimization: Faster local growth implies faster global convergence. In International Conference on Machine Learning, pages 3821-3830, 2017.
[25] Y. Xu, Y. Yan, Q. Lin, and T. Yang. Homotopy smoothing for non-smooth problems with lower complexity than O(1/ε). In Advances in Neural Information Processing Systems 29 (NIPS), pages 1208-1216, 2016.
[26] T. Yang and Q. Lin. RSG: Beating subgradient method without smoothness and strong convexity. CoRR, abs/1512.03107, 2016.
[27] W. H. Yang. Error bounds for convex polynomials. SIAM Journal on Optimization, 19(4):1633-1647, 2009.
[28] O. Zadorozhnyi, G. Benecke, S. Mandt, T. Scheffer, and M. Kloft. Huber-norm regularization for linear prediction models. In Machine Learning and Knowledge Discovery in Databases - European Conference (ECML-PKDD), pages 714-730, 2016.
[29] H. Zhang. New analysis of linear convergence of gradient-type methods via unifying error bound conditions. CoRR, abs/1606.00269, 2016.
[30] H. Zhang. The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth. Optimization Letters, pages 1-17, 2016.
[31] Z. Zhou and A. M. So. A unified approach to error bounds for structured convex optimization problems. CoRR, abs/1512.03518, 2015.
[32] Z. Zhou, Q. Zhang, and A. M. So. ℓ1,p-norm regularization: Error bounds and convergence rate analysis of first-order methods. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 1501-1510, 2015.
Stein Variational Gradient Descent as Gradient Flow

Qiang Liu
Department of Computer Science
Dartmouth College
Hanover, NH 03755
[email protected]

Abstract

Stein variational gradient descent (SVGD) is a deterministic sampling algorithm that iteratively transports a set of particles to approximate given distributions, based on a gradient-based update that is guaranteed to optimally decrease the KL divergence within a function space. This paper develops the first theoretical analysis of SVGD. We establish that the empirical measures of the SVGD samples weakly converge to the target distribution, and show that the asymptotic behavior of SVGD is characterized by a nonlinear Fokker-Planck equation known in physics as a Vlasov equation. We develop a geometric perspective that views SVGD as a gradient flow of the KL divergence functional under a new metric structure on the space of distributions induced by the Stein operator.

1 Introduction

Stein variational gradient descent (SVGD) [1] is a particle-based algorithm for approximating complex distributions. Unlike typical Monte Carlo algorithms that rely on randomness for approximation, SVGD constructs a set of points (or particles) by iteratively applying deterministic updates constructed to optimally decrease the KL divergence to the target distribution at each iteration. SVGD has a simple form that efficiently leverages the gradient information of the distribution, and can be readily applied to complex models with massive datasets for which typical gradient descent has been found efficient. A nice property of SVGD is that it strictly reduces to typical gradient ascent for maximum a posteriori (MAP) estimation when using only a single particle (n = 1), while turning into a full sampling method with more particles. Because MAP often provides reasonably good results in practice, SVGD is found to be more particle-efficient than typical Monte Carlo methods, which require much larger numbers of particles to achieve good results.

SVGD can be viewed as a variational inference algorithm [e.g., 2], but it is significantly different from typical parametric variational inference algorithms, which use parametric sets to approximate given distributions and have the disadvantage of introducing deterministic biases and (often) requiring non-convex optimization. The non-parametric nature of SVGD allows it to provide consistent estimation for generic distributions, as Monte Carlo does. There are also particle algorithms based on optimization, or variational principles, with theoretical guarantees [e.g., 3-5], but they often do not use the gradient information effectively and do not scale well in high dimensions.

However, SVGD is difficult to analyze theoretically because it involves a system of particles that interact with each other in a complex way. In this work, we take an initial step towards analyzing SVGD. We characterize the SVGD dynamics using an evolutionary process of the empirical measures of the particles that is known as a Vlasov process in physics, and establish that the empirical measures of the particles weakly converge to the given target distribution. We develop a geometric interpretation of SVGD that views SVGD as a gradient flow of the KL divergence, defined on a new Riemannian-like metric structure imposed on the space of density functions.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

2 Stein Variational Gradient Descent (SVGD)

We start with a brief overview of SVGD [1].
Let ν_p be a probability measure of interest with a positive, (weakly) differentiable density p(x) on an open set X ⊆ R^d. We want to approximate ν_p with a set of particles {x_i}_{i=1}^n whose empirical measure μ̂_n(dx) = Σ_{i=1}^n δ(x − x_i)/n dx weakly converges to ν_p as n → ∞ (denoted by μ̂_n ⇒ ν_p), in the sense that E_{μ̂_n}[h] → E_{ν_p}[h] as n → ∞ for all bounded, continuous test functions h. To achieve this, we initialize the particles with some simple distribution μ, and update them via the map T(x) = x + εφ(x), where ε is a small step size and φ(x) is a perturbation direction, or velocity field, which should be chosen to maximally decrease the KL divergence of the particle distribution from the target distribution; this is framed by [1] as solving the following functional optimization:

  max_{φ∈H} { −(d/dε) KL(Tμ ‖ ν_p)|_{ε=0}  s.t. ‖φ‖_H ≤ 1 },   (1)

where μ denotes the (empirical) measure of the current particles, Tμ is the measure of the updated particles x′ = T(x) with x ∼ μ, i.e., the pushforward measure of μ through the map T, and H is a normed function space chosen to optimize over.

A key observation is that the objective in (1) is a linear functional of φ that draws connections to ideas in Stein's method [6], used for proving limit theorems and probabilistic bounds in theoretical statistics. Liu and Wang [1] showed that

  −(d/dε) KL(Tμ ‖ ν_p)|_{ε=0} = E_μ[S_p φ],  with  S_p φ(x) := ∇log p(x)^T φ(x) + ∇·φ(x),   (2)

where ∇·φ := Σ_{k=1}^d ∂_{x_k} φ_k(x), and S_p is a linear operator that maps a vector-valued function φ to a scalar-valued function S_p φ; S_p is called the Stein operator, in connection with the so-called Stein's identity, which shows that the RHS of (2) equals zero if μ = ν_p:

  E_p[S_p φ] = E_p[∇log p^T φ + ∇·φ] = ∫ ∇·(pφ) dx = 0;   (3)

this is the result of integration by parts, assuming proper zero boundary conditions. Therefore, the optimization (1) reduces to

  D(μ ‖ ν_p) := max_{φ∈H} { E_μ[S_p φ]  s.t. ‖φ‖_H ≤ 1 },   (4)

where D(μ ‖ ν_p) is called the Stein discrepancy, which provides a discrepancy measure between μ and ν_p, since D(μ ‖ ν_p) = 0 if μ = ν_p and D(μ ‖ ν_p) > 0 if μ ≠ ν_p, given that H is sufficiently large.

Because (4) is an infinite-dimensional functional optimization, it is critical to select a space H that is both sufficiently rich and ensures computational tractability in practice. The kernelized Stein discrepancy (KSD) provides one way to achieve this by taking H to be a reproducing kernel Hilbert space (RKHS), for which the optimization yields a closed-form solution [7-10]. To be specific, let H_0 be an RKHS of scalar-valued functions with a positive definite kernel k(x, x′), and H = H_0 × ··· × H_0 the corresponding d × 1 vector-valued RKHS. Then it can be shown that the optimal solution of (4) is φ*_{μ,p}(·) ∝ E_{x∼μ}[S_p ⊗ k(x, ·)], with

  S_p ⊗ k(x, ·) := ∇log p(x) k(x, ·) + ∇_x k(x, ·),   (5)

where S_p ⊗ is an outer-product variant of the Stein operator which maps a scalar-valued function to a vector-valued one. Further, it has been shown [e.g., 7] that

  D(μ ‖ ν_p) = ‖φ*_{μ,p}‖_H = √(E_{x,x′∼μ}[κ_p(x, x′)]),  with  κ_p(x, x′) := S_p^x S_p^{x′} ⊗ k(x, x′),   (6)

where κ_p(x, x′) is a "Steinalized" positive definite kernel obtained by applying the Stein operator twice; S_p^x and S_p^{x′} are the Stein operators w.r.t. the variables x and x′, respectively. The key advantage of KSD is its computational tractability: it can be empirically evaluated with samples drawn from μ and the gradient ∇log p, which is independent of the normalization constant of p [see 7, 8].
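As an illustration of how (6) is evaluated in practice, below is a short Python sketch (our own code; an RBF kernel is assumed, which is one common choice rather than the only one) of the empirical estimate of E_{x,x′∼μ}[κ_p(x, x′)] over a sample X. The four terms of κ_p come from applying the Stein operator in x and in x′, and `score` stands for a user-supplied ∇log p.

```python
import numpy as np

def ksd_squared_rbf(X, score, h):
    """V-statistic estimate of D(mu || nu_p)^2 = E[kappa_p(x, x')] in (6)
    for the RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 h^2)).
    X is (n, d); score(X) returns the (n, d) matrix of grad log p(x_i)."""
    n, d = X.shape
    S = score(X)
    diff = X[:, None, :] - X[None, :, :]          # diff[i, j] = x_i - x_j
    sq = np.sum(diff ** 2, axis=2)
    K = np.exp(-sq / (2 * h ** 2))
    t1 = (S @ S.T) * K                                   # s(x)^T s(x') k
    t2 = np.einsum('id,ijd->ij', S, diff) * K / h ** 2   # s(x)^T grad_{x'} k
    t3 = -np.einsum('jd,ijd->ij', S, diff) * K / h ** 2  # s(x')^T grad_x k
    t4 = (d / h ** 2 - sq / h ** 4) * K                  # trace(grad_x grad_{x'} k)
    return float(np.mean(t1 + t2 + t3 + t4))
```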
An important theoretical issue related to KSD is to characterize when H is rich enough to ensure that D(μ ‖ ν_p) = 0 iff μ = ν_p; this has been studied by Liu et al. [7], Chwialkowski et al. [8] and Oates et al. [11]. More recently, Gorham and Mackey [10] (Theorem 8) established a stronger result that Stein discrepancy implies weak convergence on X = R^d: let {μ_ℓ}_{ℓ=1}^∞ be a sequence of probability measures; then

  D(μ_ℓ ‖ ν_p) → 0  ⟺  μ_ℓ ⇒ ν_p  as ℓ → ∞,   (7)

for ν_p that are distantly dissipative (Definition 4 of Gorham and Mackey [10]) and a class of inverse multi-quadric kernels. Since the focus of this work is on SVGD, we will assume (7) holds without further examination.

In the SVGD algorithm, we iteratively update a set of particles using the optimal transform just derived, starting from a certain initialization. Let {x_i^ℓ}_{i=1}^n be the particles at the ℓ-th iteration. In this case, the exact distributions of {x_i^ℓ}_{i=1}^n are unknown or difficult to keep track of, but can be best approximated by their empirical measure μ̂_ℓ^n(dx) = Σ_i δ(x − x_i^ℓ)dx/n. Therefore, it is natural to think that φ*_{μ̂_ℓ^n,p}, with μ in (5) replaced by μ̂_ℓ^n, provides the best update direction for moving the particles (and, equivalently, μ̂_ℓ^n) "closer to" ν_p. Implementing this update (8) iteratively, we get the main SVGD algorithm in Algorithm 1.

Algorithm 1: Stein Variational Gradient Descent [1]
  Input: the score function ∇_x log p(x).
  Goal: a set of particles {x_i}_{i=1}^n that approximates p(x).
  Initialize a set of particles {x_i^0}_{i=1}^n; pick a positive definite kernel k(x, x′) and step sizes {ε_ℓ}.
  For iteration ℓ do
    x_i^{ℓ+1} ← x_i^ℓ + ε_ℓ φ*_{μ̂_ℓ^n,p}(x_i^ℓ),  ∀i = 1, …, n,
    where φ*_{μ̂_ℓ^n,p}(x) = (1/n) Σ_{j=1}^n [ ∇log p(x_j^ℓ) k(x_j^ℓ, x) + ∇_{x_j^ℓ} k(x_j^ℓ, x) ].   (8)

Intuitively, the update in (8) pushes the particles towards the high-probability regions of the target distribution via the gradient term ∇log p, while maintaining a degree of diversity via the second term ∇k(x, x_i). In addition, (8) reduces to typical gradient descent for maximizing log p if we use only a single particle (n = 1) and the kernel satisfies ∇k(x, x′) = 0 for x = x′; this allows SVGD to provide a spectrum of approximations that smoothly interpolate between maximum a posteriori (MAP) optimization and a full sampling approximation by using different particle sizes, enabling an efficient trade-off between accuracy and computation cost.

Despite the similarity to gradient descent, we should point out that the SVGD update in (8) does not correspond to minimizing any objective function F({x_i^ℓ}) in terms of the particle locations {x_i^ℓ}, because one would find ∂_{x_i}∂_{x_j} F ≠ ∂_{x_j}∂_{x_i} F if this were true. Instead, it is best to view SVGD as a type of (particle-based) numerical approximation of an evolutionary partial differential equation (PDE) of densities or measures, which corresponds to a special type of gradient flow of the KL divergence functional whose equilibrium state equals the given target distribution ν_p, as we discuss in the sequel.
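For concreteness, the update (8) can be written in a few lines of vectorized Python. This is a minimal sketch under the assumption of an RBF kernel (the paper leaves the kernel choice open; a median-distance bandwidth is a common heuristic), and `score` again denotes a user-supplied ∇_x log p.

```python
import numpy as np

def svgd_step(X, score, h, eps):
    """One SVGD update (8): x_i <- x_i + eps * phi*(x_i), where
    phi*(x) = 1/n sum_j [ k(x_j, x) grad log p(x_j) + grad_{x_j} k(x_j, x) ]
    with the RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 h^2))."""
    n, _ = X.shape
    diff = X[:, None, :] - X[None, :, :]            # diff[j, i] = x_j - x_i
    K = np.exp(-np.sum(diff ** 2, axis=2) / (2 * h ** 2))
    drift = K @ score(X)                 # sum_j k(x_j, x_i) grad log p(x_j)
    repulsion = -np.einsum('ji,jid->id', K, diff) / h ** 2  # sum_j grad_{x_j} k
    return X + eps * (drift + repulsion) / n
```

Iterating `svgd_step` reproduces Algorithm 1; with n = 1 the repulsion term vanishes for this kernel (since ∇k(x, x′) = 0 at x = x′), and the update reduces to gradient ascent on log p, matching the MAP interpretation above.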
3 Density Evolution of SVGD Dynamics

This section collects our main results. We characterize the evolutionary process of the empirical measures μ̂_ℓ^n of the SVGD particles and their large sample limit as n → ∞ (Section 3.1) and large time limit as ℓ → ∞ (Section 3.2), which together establish the weak convergence of μ̂_ℓ^n to the target measure ν_p. Further, we show that the large sample limit of the SVGD dynamics is characterized by a Vlasov process, which monotonically decreases the KL divergence to the target distribution with a decreasing rate that equals the square of the Stein discrepancy (Sections 3.2-3.3). We also establish a geometric intuition that interprets SVGD as a gradient flow of the KL divergence under a new Riemannian metric structure induced by the Stein operator (Section 3.4). Section 3.5 provides a brief discussion of the connection to Langevin dynamics.

3.1 Large Sample Asymptotic of SVGD

Consider the optimal transform T_{μ,p}(x) = x + εφ*_{μ,p}(x) with φ*_{μ,p} defined in (5). We define its related map Φ_p : μ ↦ T_{μ,p}μ, where T_{μ,p}μ denotes the pushforward measure of μ through the transform T_{μ,p}. This map fully characterizes the SVGD dynamics in the sense that the empirical measure μ̂_ℓ^n can be obtained by recursively applying Φ_p starting from the initial measure μ̂_0^n:

  μ̂_{ℓ+1}^n = Φ_p(μ̂_ℓ^n),  ∀ℓ ∈ ℕ.   (9)

Note that Φ_p is a nonlinear map because the transform T_{μ,p} depends on the input measure μ. If μ has a density q and ε is small enough so that T_{μ,p} is invertible, the density q′ of μ′ = Φ_p(μ) is given by the change of variables formula:

  q′(z) = q(T_{μ,p}^{-1}(z)) · |det(∇T_{μ,p}^{-1}(z))|.   (10)

When μ is an empirical measure and q is a Dirac delta function, this equation still holds formally in the sense of distributions (generalized functions).

Critically, Φ_p also fully characterizes the large sample limit of SVGD. Assume the initial empirical measure μ̂_0^n at the 0-th iteration weakly converges to a measure μ_0^∞ as n → ∞, which can be achieved, for example, by drawing {x_i^0} i.i.d. from μ_0^∞, or by using MCMC or quasi Monte Carlo methods. Starting from the limit initial measure μ_0^∞ and applying Φ_p recursively, we get

  μ_{ℓ+1}^∞ = Φ_p(μ_ℓ^∞),  ∀ℓ ∈ ℕ.   (11)

Assuming μ̂_0^n ⇒ μ_0^∞ by initialization, we may expect that μ̂_ℓ^n ⇒ μ_ℓ^∞ for all finite iterations ℓ if Φ_p satisfies a certain Lipschitz condition. This is naturally captured by the bounded Lipschitz metric. For two measures μ and ν, their bounded Lipschitz (BL) metric is defined to be their difference of means over the set of bounded, Lipschitz test functions:

  BL(μ, ν) = sup_f { E_μ f − E_ν f  s.t. ‖f‖_BL ≤ 1 },  where ‖f‖_BL = max{‖f‖_∞, ‖f‖_Lip},

and where ‖f‖_∞ = sup_x |f(x)| and ‖f‖_Lip = sup_{x≠y} |f(x) − f(y)|/‖x − y‖_2. For a vector-valued bounded Lipschitz function f = [f_1, …, f_d]^T, we define its norm by ‖f‖_BL² = Σ_{i=1}^d ‖f_i‖_BL². It is known that the BL metric metricizes weak convergence, that is, BL(μ_n, μ) → 0 if and only if μ_n ⇒ μ.

Lemma 3.1. Assume g(x, y) := S_p^x ⊗ k(x, y) is bounded Lipschitz jointly in (x, y) with norm ‖g‖_BL < ∞. Then for any two probability measures μ and μ′, we have

  BL(Φ_p(μ), Φ_p(μ′)) ≤ (1 + 2ε‖g‖_BL) BL(μ, μ′).

Theorem 3.2. Let μ̂_ℓ^n be the empirical measure of {x_i^ℓ}_{i=1}^n at the ℓ-th iteration of SVGD. Assuming lim_{n→∞} BL(μ̂_0^n, μ_0^∞) → 0, then for μ_ℓ^∞ defined in (11), at any finite iteration ℓ, we have lim_{n→∞} BL(μ̂_ℓ^n, μ_ℓ^∞) → 0.

Proof. It is a direct result of Lemma 3.1.

Since BL(μ, ν) metricizes weak convergence, our result suggests that μ̂_ℓ^n ⇒ μ_ℓ^∞ for all ℓ if μ̂_0^n ⇒ μ_0^∞ by initialization. The bound of the BL metric in Lemma 3.1 increases by a factor of (1 + 2ε‖g‖_BL) at each iteration. We can prevent the explosion of the BL bound by decaying the step size sufficiently fast.
It may be possible to obtain tighter bounds; however, it is fundamentally impossible to get a factor smaller than one without further assumptions: suppose we could get BL(Φ_p(μ), Φ_p(μ′)) ≤ λ BL(μ, μ′) for some constant λ ∈ [0, 1); then, starting from any initial μ̂_0^n, with any fixed particle size n (e.g., n = 1), we would have BL(μ̂_ℓ^n, ν_p) = O(λ^ℓ) → 0 as ℓ → ∞, which is impossible because we cannot get an arbitrarily accurate approximation of ν_p with finite n. It turns out that we need to look at the KL divergence in order to establish convergence towards ν_p as ℓ → ∞, as we discuss in Sections 3.2-3.3.

Remark. Because g(x, y) = ∇_x log p(x) k(x, y) + ∇_x k(x, y), and ∇_x log p(x) is often unbounded when the domain X is unbounded, the condition that g(x, y) be bounded in Lemma 3.1 suggests that it can only be used when X is compact. It is an open question to establish results that work for more general domains X.

3.2 Large Time Asymptotic of SVGD

Theorem 3.2 ensures that we only need to consider the update (11) starting from the limit initial measure μ_0^∞, which we can assume to have a nice density function and finite KL divergence with the target ν_p. We show that the update (11) monotonically decreases the KL divergence between μ_ℓ^∞ and ν_p, and hence allows us to establish the convergence μ_ℓ^∞ ⇒ ν_p.

Theorem 3.3.
1. Assuming p is a density that satisfies Stein's identity (3) for ∀φ ∈ H, the measure ν_p of p is a fixed point of the map Φ_p in (11).
2. Assume R := sup_x { (1/2)‖∇log p‖_Lip k(x, x) + 2∇_{xx′}k(x, x) } < ∞, where ∇_{xx′}k(x, x) = Σ_i ∂_{x_i}∂_{x_i′} k(x, x′)|_{x′=x}, and the step size ε_ℓ at the ℓ-th iteration is no larger than ε_ℓ* := (2 sup_x ρ(∇φ*_{μ_ℓ,p} + ∇φ*_{μ_ℓ,p}^T))^{-1}, where ρ(A) denotes the spectral norm of a matrix A. If KL(μ_0^∞ ‖ ν_p) < ∞ by initialization, then

  KL(μ_{ℓ+1}^∞ ‖ ν_p) − KL(μ_ℓ^∞ ‖ ν_p) ≤ −ε_ℓ (1 − ε_ℓ R) D(μ_ℓ^∞ ‖ ν_p)²,   (12)

that is, the population SVGD dynamics always decreases the KL divergence when using sufficiently small step sizes, with a decreasing rate upper bounded by the squared Stein discrepancy. Further, if we set the step size ε_ℓ to be ε_ℓ ∝ D(μ_ℓ^∞ ‖ ν_p)^α for any α > 0, then (12) implies that D(μ_ℓ^∞ ‖ ν_p) → 0 as ℓ → ∞.

Remark. Assuming that D(μ_ℓ^∞ ‖ ν_p) → 0 implies μ_ℓ^∞ ⇒ ν_p (see (7)), Theorem 3.3(2) implies μ_ℓ^∞ ⇒ ν_p. Further, together with Theorem 3.2, we can establish the weak convergence of the empirical measures of the SVGD particles: μ̂_ℓ^n ⇒ ν_p, as ℓ → ∞, n → ∞.

Remark. Theorem 3.3 cannot be directly applied to the empirical measures μ̂_ℓ^n with finite sample size n, since it would give KL(μ̂_ℓ^n ‖ ν_p) = ∞ in the beginning. It is necessary to use the BL metric and the KL divergence to establish convergence w.r.t. the sample size n and the iteration ℓ, respectively.

Remark. The requirement that ε_ℓ ≤ ε_ℓ* is needed to guarantee that the transform T_{μ_ℓ,p}(x) = x + ε_ℓ φ*_{μ_ℓ,p}(x) has a non-singular Jacobian matrix everywhere. From the bound in Equation A.6 of the Appendix, we can derive an upper bound on the spectral radius:

  sup_x ρ(∇φ*_{μ_ℓ,p} + ∇φ*_{μ_ℓ,p}^T) ≤ 2 sup_x ‖∇φ*_{μ_ℓ,p}‖_F ≤ 2 sup_x √(∇_{xx′}k(x, x)) D(μ_ℓ ‖ ν_p).

This suggests that the step size should be upper bounded by the inverse of the Stein discrepancy, i.e., ε_ℓ ≲ D(μ_ℓ ‖ ν_p)^{-1} = ‖φ*_{μ_ℓ,p}‖_H^{-1}, where D(μ_ℓ ‖ ν_p) can be estimated using (6) (see [7]).

3.3 Continuous Time Limit and Vlasov Process

Many properties can be understood more easily as we take the continuous time limit (ε →
3.3 Continuous Time Limit and Vlasov Process

Many properties can be understood more easily by taking the continuous time limit (ε → 0), reducing our system to a partial differential equation (PDE) on the particle densities (or measures), under which we show that the negative time-derivative of the KL divergence exactly equals the squared Stein discrepancy (the limit of (12) as ε → 0). To be specific, we define a continuous time t = εℓ and take an infinitesimal step size ε → 0; the evolution of the density q in (10) then formally reduces to the following nonlinear Fokker-Planck equation (see Appendix A.3 for the derivation; a brief first-order sketch is also given at the end of this subsection):

∂/∂t q_t(x) = −∇ · (φ*_{q_t,p}(x) q_t(x)). (13)

This PDE is a type of deterministic Fokker-Planck equation that characterizes the movement of particles under deterministic forces, but it is nonlinear in that the velocity field φ*_{q_t,p}(x) depends on the current particle density q_t through the drift term φ*_{q_t,p}(x) = E_{x′∼q_t}[S_p^{x′} ⊗ k(x, x′)].

It is not surprising to establish the following continuous version of Theorem 3.3(2), which is of central importance to our gradient flow perspective in Section 3.4:

Theorem 3.4. Assuming {μ_t} are the probability measures whose densities {q_t} satisfy the PDE in (13), and KL(μ_0 ‖ ν_p) < ∞, then

d/dt KL(μ_t ‖ ν_p) = −D(μ_t ‖ ν_p)². (14)

Remark. This result suggests a path integration formula, KL(μ_0 ‖ ν_p) = ∫_0^∞ D(μ_t ‖ ν_p)² dt, which can potentially be useful for estimating the KL divergence or the normalization constant.

PDE (13) only works for differentiable densities q_t. Similar to the case of Φ_p as a map between (empirical) measures, one can extend (13) to a measure-valued PDE that incorporates empirical measures as weak solutions. Take a differentiable test function h and integrate both sides of (13):

∂/∂t ∫ h(x) q_t(x) dx = −∫ h(x) ∇ · (φ*_{q_t,p}(x) q_t(x)) dx.

Using integration by parts on the right side to "shift" the derivative operator from φ*_{q_t,p} q_t onto h, we get

d/dt E_{μ_t}[h] = E_{μ_t}[∇hᵀ φ*_{μ_t,p}], (15)

which depends on μ_t only through the expectation operator and hence works for empirical measures as well. A set of measures {μ_t} is called a weak solution of (13) if it satisfies (15). Using results on Fokker-Planck equations, the measure process (13)-(15) can be translated into an ordinary differential equation on random particles {x_t} whose distribution is μ_t:

dx_t = φ*_{μ_t,p}(x_t) dt,  μ_t is the distribution of the random variable x_t, (16)

initialized from a random variable x_0 with distribution μ_0. Here the nonlinearity is reflected in the fact that the velocity field depends on the distribution μ_t of the particle at the current time. In particular, if we initialize (15) with an empirical measure μ̂_0^n of a set of finite particles {x_i^0}_{i=1}^n, then (16) reduces to the following continuous time limit of the n-particle SVGD dynamics:

dx_t^i = φ*_{μ̂_t^n,p}(x_t^i) dt, ∀i = 1, …, n,  with μ̂_t^n(dx) = (1/n) Σ_{i=1}^n δ(x − x_t^i) dx, (17)

where {μ̂_t^n} can be shown to be a weak solution of (13)-(15), parallel to (9) in the discrete time case, and (16) can be viewed as the large sample limit (n → ∞) of (17). The process (13)-(17) is a type of Vlasov process [12, 13]: a (deterministic) interacting particle process in which the particles interact with each other through the dependency on their "mean field" μ_t (or μ̂_t^n). Such processes have found important applications in physics, biology and many other areas. There is a vast literature on the theory and applications of interacting particle systems in general, and we only refer to Spohn [14], Del Moral [15] and references therein as examples.
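For readers who skip the appendix, here is a one-step sketch, our paraphrase of the Appendix A.3 argument under the usual smoothness and invertibility assumptions, of how the change of variables (10) yields (13):

```latex
\begin{align*}
&\text{With } T(x)=x+\epsilon\,\phi(x),\ \phi:=\phi^{*}_{q,p}:\qquad
q'(T(x))\,\det\!\bigl(I+\epsilon\,\nabla\phi(x)\bigr)=q(x),\\
&\det\!\bigl(I+\epsilon\,\nabla\phi\bigr)=1+\epsilon\,\nabla\!\cdot\phi+O(\epsilon^{2})
\;\Longrightarrow\;
q'(x)=q(x)-\epsilon\,\nabla q(x)^{\top}\phi(x)-\epsilon\,q(x)\,\nabla\!\cdot\phi(x)+O(\epsilon^{2})\\
&\hphantom{\det\!\bigl(I+\epsilon\,\nabla\phi\bigr)=1+\epsilon\,\nabla\!\cdot\phi+O(\epsilon^{2})
\;\Longrightarrow\;}
=q(x)-\epsilon\,\nabla\!\cdot\bigl(\phi(x)\,q(x)\bigr)+O(\epsilon^{2}),\\
&\text{so, with } t=\epsilon\ell \text{ and } \epsilon\to 0:\qquad
\partial_{t}q_{t}=-\nabla\!\cdot\bigl(\phi^{*}_{q_{t},p}\,q_{t}\bigr).
\end{align*}
```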
Our particular form of Vlasov process, constructed based on the Stein operator in order to approximate arbitrarily given distributions, seems, to the best of our knowledge, to be new.

3.4 Gradient Flow, Optimal Transport, Geometry

We develop a geometric view of the Vlasov process in Section 3.3, interpreting it as a gradient flow for minimizing the KL divergence functional, defined with respect to a new type of optimal transport metric on the space of density functions induced by the Stein operator. We focus on the set of "nice" densities q paired with a well-defined Stein operator S_q, acting on a Hilbert space H. To develop the intuition, consider a density q and a nearby density q′ obtained by applying the transform T(x) = x + φ(x)dt on x ∼ q with infinitesimal dt and φ ∈ H; then we can show that (see Appendix A.3)

log q′(x) = log q(x) − S_qφ(x) dt,  q′(x) = q(x) − q(x) S_qφ(x) dt. (18)

Because one can show from (2) that S_qφ = ∇·(φq)/q, we define the operator qS_q by qS_qφ(x) = q(x)S_qφ(x) = ∇·(φ(x)q(x)). Eq. (18) suggests that the Stein operator S_q (resp. qS_q) serves to translate a φ-perturbation of the random variable x into the corresponding change of the log-density (resp. density). This fact plays a central role in our development. Denote by H_q (resp. qH_q) the space of functions of the form S_qφ (resp. qS_qφ) with φ ∈ H, that is,

H_q = {S_qφ : φ ∈ H},  qH_q = {qS_qφ : φ ∈ H}.

Equivalently, qH_q is the space of functions of the form qf with f ∈ H_q. This allows us to consider the inverse of the Stein operator for functions in H_q. For each f ∈ H_q, we can identify a unique function ψ_{q,f} ∈ H that has minimum ‖·‖_H norm in the set of φ that satisfy S_qφ = f, that is,

ψ_{q,f} = arg min_{φ∈H} { ‖φ‖_H  s.t. S_qφ = f },

where S_qφ = f is known as the Stein equation. This allows us to define inner products on H_q and qH_q using the inner product on H:

⟨f_1, f_2⟩_{H_q} := ⟨qf_1, qf_2⟩_{qH_q} := ⟨ψ_{q,f_1}, ψ_{q,f_2}⟩_H. (19)

Based on standard results on RKHSs [e.g., 16], one can show that if H is an RKHS with kernel k(x, x′), then H_q and qH_q are both RKHSs; the reproducing kernel of H_q is κ_q(x, x′), defined as in (6) with p replaced by q, and correspondingly the kernel of qH_q is q(x)κ_q(x, x′)q(x′).

Now consider q and a nearby q′ = q + qf dt, f ∈ H_q, obtained by an infinitesimal perturbation of the density function using functions in the space qH_q. Then ψ_{q,f} can be viewed as the "optimal" transform, in the sense of having minimum ‖·‖_H norm, that transports q to q′ via T(x) = x + ψ_{q,f}(x)dt. It is therefore natural to define a notion of distance between q and q′ = q + qf dt via

W_H(q, q′) := ‖ψ_{q,f}‖_H dt.

From (18) and (19), this is equivalent to W_H(q, q′) = ‖q − q′‖_{qH_q} = ‖log q′ − log q‖_{H_q}. Under this definition, we can see that the infinitesimal neighborhood {q′ : W_H(q, q′) ≤ dt} of q consists of densities (resp. log-densities) of the form

q′ = q + g dt, g ∈ qH_q, ‖g‖_{qH_q} ≤ 1;  log q′ = log q + f dt, f ∈ H_q, ‖f‖_{H_q} ≤ 1.

Geometrically, this means that qH_q (resp. H_q) can be viewed as the tangent space around the density q (resp. the log-density log q). Therefore, the related inner product ⟨·,·⟩_{qH_q} (resp. ⟨·,·⟩_{H_q}) forms a Riemannian metric structure that corresponds to W_H(q, q′). This also induces a geodesic distance that corresponds to a general, H-dependent form of optimal transport metric between distributions. Consider two densities p and q that can be transformed into one another with functions in H, in the sense that there exists a curve of velocity fields {φ_t : φ_t ∈ H, t ∈ [0, 1]} in H that transforms a random variable x_0 ∼
q to x_1 ∼ p via dx_t = φ_t(x_t)dt. This is equivalent to saying that there exists a curve of densities {ρ_t : t ∈ [0, 1]} such that ∂_t ρ_t = −∇·(φ_t ρ_t), with ρ_0 = q and ρ_1 = p. It is therefore natural to define a geodesic distance between q and p via

W_H(q, p) = inf_{{ρ_t, φ_t}} { ∫_0^1 ‖φ_t‖_H dt  s.t. ∂_t ρ_t = −∇·(φ_t ρ_t), ρ_0 = q, ρ_1 = p }. (20)

We call W_H(p, q) an H-Wasserstein (or optimal transport) distance between p and q, in connection with the typical 2-Wasserstein distance, which can be viewed as a special case of (20) by taking H to be the L²_{ρ_t} space equipped with norm ‖f‖²_{L²_{ρ_t}} = E_{ρ_t}[f²], replacing the cost ‖φ_t‖_H with ‖φ_t‖_{L²_{ρ_t}}; the 2-Wasserstein distance is widely known to relate to Langevin dynamics, as we discuss further in Section 3.5 [e.g., 17, 18].

Now, for a given functional F(q), this metric structure induces a notion of functional covariant gradient: the covariant gradient grad_H F(q) of F(q) is defined to be a functional that maps q to an element of the tangent space qH_q of q, and satisfies

F(q + f dt) = F(q) + ⟨grad_H F(q), f dt⟩_{qH_q}, (21)

for any f in the tangent space qH_q.

Theorem 3.5. Following (21), the gradient of the KL divergence functional F(q) := KL(q ‖ p) is

grad_H KL(q ‖ p) = ∇·(φ*_{q,p} q).

Therefore, the SVGD-Vlasov equation (13) is a gradient flow of the KL divergence under the metric W_H(·, ·):

∂q_t/∂t = −grad_H KL(q_t ‖ p).

In addition, ‖grad_H KL(q ‖ p)‖_{qH_q} = D(q ‖ p).

Remark. We can also define the functional gradient via

grad_H F(q) ∝ arg max_{f : ‖f‖_{qH_q} ≤ 1} lim_{ε→0⁺} (F(q + εf) − F(q)) / W_H(q + εf, q),

which specifies the steepest ascent direction of F(q) (with unit norm). The result in Theorem 3.5 is consistent with this definition.

3.5 Comparison with Langevin Dynamics

The theory of SVGD parallels that of Langevin dynamics in many respects, but with important differences. We give a brief discussion of their similarities and differences. Langevin dynamics works by iterative updates of the form

x_{ℓ+1} ← x_ℓ + ε ∇log p(x_ℓ) + √(2ε) · ξ_ℓ,  ξ_ℓ ∼ N(0, 1),

where a single particle {x_ℓ} moves along the gradient direction, perturbed with random Gaussian noise that plays the role of enforcing the diversity needed to match the variation in p (which is accounted for by the deterministic repulsive force in SVGD). Taking the continuous time limit (ε → 0), we obtain an Itô stochastic differential equation, dx_t = ∇log p(x_t)dt + √2 dW_t, where W_t is a standard Brownian motion and x_0 is a random variable with initial distribution q_0. Standard results show that the density q_t of the random variable x_t is governed by a linear Fokker-Planck equation, following which the KL divergence to p decreases at a rate that equals the Fisher divergence:

∂q_t/∂t = −∇·(q_t ∇log p) + Δq_t,  d/dt KL(q_t ‖ p) = −F(q_t, p), (22)

where F(q, p) = ‖∇log(q/p)‖²_{L²_q}. This result is parallel to Theorem 3.4, with the roles of the squared Stein discrepancy (and the RKHS H) taken over by the Fisher divergence (and the L²_q space). Further, parallel to Theorem 3.5, it is well known that (22) can also be treated as a gradient flow of the KL functional KL(q ‖ p), but under the 2-Wasserstein metric W_2(q, p) [17]. The main advantage of using an RKHS over L²_q is that it allows tractable computation of the optimal transport direction; this is not the case for L²_q, and as a result Langevin dynamics requires a random diffusion term in order to form a proper approximation.
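To make the contrast concrete, here is a sketch of one unadjusted Langevin step next to the single-particle SVGD limit; both take a user-supplied score function, and the step sizes are purely illustrative.

```python
import numpy as np

def langevin_step(X, score, eps=0.01, rng=np.random.default_rng(0)):
    # One (unadjusted) Langevin update: gradient drift plus Gaussian diffusion.
    # The injected noise plays the role that the deterministic repulsive
    # kernel term plays in SVGD.
    return X + eps * score(X) + np.sqrt(2 * eps) * rng.standard_normal(X.shape)

def svgd_single_particle_step(x, score, eps=0.01):
    # With n = 1 and an RBF kernel, the repulsive term grad_x k(x, x) vanishes
    # and k(x, x) = 1, so SVGD degenerates to plain gradient ascent on log p,
    # i.e., deterministic MAP optimization.
    return x + eps * score(x)
```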
Practically, SVGD has the advantage of being deterministic and of reducing to exact MAP optimization when using only a single particle, while Langevin dynamics has the advantage of being a standard MCMC method, inheriting its statistical properties, and does not require the O(n²) cost of calculating the n-body interactions that SVGD incurs. The connections between SVGD and Langevin dynamics may allow us to develop theories and algorithms that unify the two, or combine their advantages.

4 Conclusion and Open Questions

We developed a theoretical framework for analyzing the asymptotic properties of Stein variational gradient descent. Many components of the analysis provide new insights in both theoretical and practical aspects. For example, our new metric structure can be useful for solving other learning problems by leveraging its computational tractability. Many important problems remain open. For example, an important open problem is to establish explicit convergence rates for SVGD, for which the existing theoretical literature on Langevin dynamics and interacting particle systems may provide insights. Another problem is to develop finite sample bounds for SVGD that take into account the fact that it reduces to MAP optimization when n = 1. It is also an important direction to understand the bias and variance of SVGD particles, or to combine SVGD with traditional Monte Carlo, whose bias-variance analysis is clearer (see e.g., [19]).

Acknowledgement

This work is supported in part by NSF CRII 1565796. We thank Lester Mackey and the anonymous reviewers for their comments.

References
[1] Q. Liu and D. Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Advances in Neural Information Processing Systems, 2016.
[2] M. J. Wainwright, M. I. Jordan, et al. Graphical models, exponential families, and variational inference. Foundations and Trends® in Machine Learning, 1(1-2):1-305, 2008.
[3] Y. Chen, M. Welling, and A. Smola. Super-samples from kernel herding. In Conference on Uncertainty in Artificial Intelligence (UAI), 2010.
[4] J. Dick, F. Y. Kuo, and I. H. Sloan. High-dimensional integration: the quasi-Monte Carlo way. Acta Numerica, 22:133-288, 2013.
[5] B. Dai, N. He, H. Dai, and L. Song. Provable Bayesian inference via particle mirror descent. In The 19th International Conference on Artificial Intelligence and Statistics, 2016.
[6] C. Stein. Approximate computation of expectations. Lecture Notes-Monograph Series, 7:i-164, 1986.
[7] Q. Liu, J. D. Lee, and M. I. Jordan. A kernelized Stein discrepancy for goodness-of-fit tests and model evaluation. In International Conference on Machine Learning (ICML), 2016.
[8] K. Chwialkowski, H. Strathmann, and A. Gretton. A kernel test of goodness-of-fit. In International Conference on Machine Learning (ICML), 2016.
[9] C. J. Oates, M. Girolami, and N. Chopin. Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society, Series B, 2017.
[10] J. Gorham and L. Mackey. Measuring sample quality with kernels. In International Conference on Machine Learning (ICML), 2017.
[11] C. J. Oates, J. Cockayne, F.-X. Briol, and M. Girolami. Convergence rates for a class of estimators based on Stein's identity. arXiv preprint arXiv:1603.03220, 2016.
[12] W. Braun and K. Hepp. The Vlasov dynamics and its fluctuations in the 1/n limit of interacting classical particles. Communications in Mathematical Physics, 56(2):101-113, 1977.
[13] A. A. Vlasov. On vibration properties of electron gas. J. Exp. Theor.
Phys, 8(3):291, 1938.
[14] H. Spohn. Large scale dynamics of interacting particles. Springer Science & Business Media, 2012.
[15] P. Del Moral. Mean field simulation for Monte Carlo integration. CRC Press, 2013.
[16] A. Berlinet and C. Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media, 2011.
[17] F. Otto. The geometry of dissipative evolution equations: the porous medium equation. Communications in Partial Differential Equations, 26(1-2):101-174, 2001.
[18] C. Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
[19] J. Han and Q. Liu. Stein variational adaptive importance sampling. In Uncertainty in Artificial Intelligence, 2017.
Partial Hard Thresholding: Towards A Principled Analysis of Support Recovery

Jie Shen, Department of Computer Science, School of Arts and Sciences, Rutgers University, New Jersey, USA, js2007@rutgers.edu
Ping Li, Department of Statistics and Biostatistics, Department of Computer Science, Rutgers University, New Jersey, USA, pingli@stat.rutgers.edu

Abstract

In machine learning and compressed sensing, it is of central importance to understand when a tractable algorithm recovers the support of a sparse signal from its compressed measurements. In this paper, we present a principled analysis of the support recovery performance for a family of hard thresholding algorithms. To this end, we appeal to the partial hard thresholding (PHT) operator proposed recently by Jain et al. [IEEE Trans. Information Theory, 2017]. We show that under proper conditions, PHT recovers an arbitrary s-sparse signal within O(sκ log κ) iterations, where κ is an appropriate condition number. Specializing the PHT operator, we obtain the best known results for hard thresholding pursuit and orthogonal matching pursuit with replacement. Experiments on simulated data complement our theoretical findings and also illustrate the effectiveness of PHT.

1 Introduction

This paper is concerned with the problem of recovering an arbitrary sparse signal from a set of its (compressed) measurements. We say that a signal x̄ ∈ R^d is s-sparse if there are no more than s non-zeros in x̄. This problem, together with its many variants, has found a variety of successful applications in compressed sensing, machine learning and statistics. Of particular interest is the setting where x̄ is the true signal and only a small number of linear measurements are given, referred to as compressed sensing. This instance has been exhaustively studied in the last decade, along with a large body of elegant work devoted to efficient algorithms including ℓ1-based convex optimization and hard thresholding based greedy pursuits [7, 6, 15, 8, 3, 5, 11]. Another quintessential example is the sparsity-constrained minimization program recently considered in machine learning [30, 2, 14, 22], for which the goal is to efficiently learn the global sparse minimizer x̄ from a set of training data. Though in most cases the underlying signal can be categorized into either of these two classes, we note that it could also be another object, such as the parameter of a logistic regression model [19]. Hence, for a unified analysis, this paper copes with an arbitrary sparse signal, and the results to be established quickly apply to the special instances above.

It is also worth mentioning that while one can characterize the performance of an algorithm and evaluate the obtained estimate from various aspects, we are specifically interested in the quality of support recovery. Recall that for sparse recovery problems, there are two prominent metrics: the ℓ2 distance and support recovery. Theoretical results phrased in terms of the ℓ2 metric are also referred to as parameter estimation, which most of the previous papers emphasized. Under this metric, many popular algorithms, e.g., the Lasso [24, 27] and hard thresholding based algorithms [9, 3, 15, 8, 10, 22], are guaranteed to produce accurate approximations up to the energy of the noise. Support recovery is another important criterion for evaluating an algorithm, which is also known as feature
selection or variable selection. As one of the earliest works, [25] offered sufficient and necessary conditions under which orthogonal matching pursuit and basis pursuit identify the support. The theory was then developed by [35, 32, 27] for the Lasso estimator and by [29] for the garrotte estimator. Typically, recovering the support of a target signal is more challenging than parameter estimation. For instance, [18] showed that the restricted eigenvalue condition suffices for the Lasso to produce an accurate estimate, whereas in order to recover the sign pattern, a more stringent mutual incoherence condition has to be imposed [27]. However, as has been recognized, if the support is detected precisely by a method, then the solution admits the optimal statistical rate [27]. In this regard, research on support recovery continues to be a central theme in recent years [33, 34, 31, 4, 17].

Our work follows this line and studies the support recovery performance of hard thresholding based algorithms, which enjoy superior computational efficiency to convex programs when manipulating a huge volume of data [26]. We note that although [31, 4] have carried out a theoretical study of hard thresholding pursuit (HTP) [10], showing that HTP identifies the support of a signal within a few iterations, neither of them obtained the general results of this paper. In more detail, under the restricted isometry property (RIP) condition [6], our iteration bound holds for an arbitrary sparse signal of interest, while the results from [31, 4] hold either for the global sparse minimizer or for the true sparse signal. Using a relaxed sparsity condition, we obtain a clear iteration complexity O(sκ log κ), where κ is a proper condition number. In contrast, it is hard to quantify the bound of [31] (see Theorem 3 therein). From the algorithmic perspective, we consider a more general algorithm than HTP. In fact, we appeal to the recently proposed partial hard thresholding (PHT) operator [13] and demonstrate novel results, which in turn imply the best known iteration complexity for HTP and orthogonal matching pursuit with replacement (OMPR) [12]. Thereby, the results in this paper considerably extend our earlier work on HTP [23]. It is also worth mentioning that, although our analysis hinges on the PHT operator, the support recovery results to be established are stronger than the results in [13], since the latter only showed parameter estimation for PHT. Finally, we remark that while a couple of previous works considered signals that are not exactly sparse (e.g., [4]), in this paper we focus on the sparse case. Extensions to generic signals are left as interesting future directions.

Contribution. The contribution of this paper is summarized as follows. We study the iteration complexity of the PHT algorithm, and show that under the RIP condition or the relaxed sparsity condition (to be clarified), PHT recovers the support of an arbitrary s-sparse signal within O(sκ log κ) iterations. This strengthens the theoretical results of [13], where only parameter estimation of PHT was established. Thanks to the generality of the PHT operator, our results shed light on the support recovery performance of a family of prevalent iterative algorithms. As two extreme cases of PHT, the new results immediately apply to HTP and OMPR, and imply the best known bound.

Roadmap. The remainder of the paper is organized as follows.
We describe the problem setting, as well as the partial hard thresholding operator, in Section 2, followed by the main results regarding the iteration complexity. In Section 3, we sketch the proof of the main results and list some useful lemmas which may be of independent interest. Numerical results are illustrated in Section 4, and Section 5 concludes the paper and poses several interesting directions for future work. The detailed proof of our theoretical results is deferred to the appendix (see the supplementary file).

Notation. We collect the notation used in this paper. The upper-case letter C and its subscript variants (e.g., C1) denote absolute constants whose values may change from appearance to appearance. For a vector x ∈ R^d, its ℓ2 norm is denoted by ‖x‖. The support set of x is denoted by supp(x), which indexes the non-zeros in x. With a slight abuse, supp(x, k) is the set of indices of the k largest (in magnitude) elements; ties are broken lexicographically. We interchangeably write ‖x‖₀ or |supp(x)| to signify the cardinality of supp(x). We will also consider a vector restricted to a support set. That is, for a d-dimensional vector x and a support set T ⊆ {1, 2, …, d}, depending on the context, x_T can either be a |T|-dimensional vector obtained by extracting the elements belonging to T, or a d-dimensional vector obtained by setting the elements outside T to zero. The complement of a set T is denoted by T̄.

We reserve x̄ ∈ R^d for the target s-sparse signal, whose support is denoted by S. The quantity x̄_min > 0 is the minimum absolute element of x̄_S, where we recall that x̄_S ∈ R^s consists of the non-zeros of x̄. The PHT algorithm will depend on a carefully chosen function F(x). We write its gradient as ∇F(x), and we use ∇_k F(x) as a shorthand for (∇F(x))_{supp(∇F(x), k)}, i.e., the top k absolute components of ∇F(x).

2 Partial Hard Thresholding

To pursue a sparse solution, hard thresholding has been broadly invoked by many popular greedy algorithms. In the present work, we are interested in the partial hard thresholding operator, which sheds light on a unified design and analysis of iterative algorithms employing this operator and the hard thresholding operator [13]. Formally, given a support set T and a freedom parameter r > 0, the PHT operator, which is used to produce a k-sparse approximation to z, is defined as follows:

PHT_k(z; T, r) := arg min_{x ∈ R^d} ‖x − z‖,  s.t. ‖x‖₀ ≤ k, |T \ supp(x)| ≤ r. (1)

The first constraint simply enforces a k-sparse solution. To gain intuition on the second one, consider that T is the support set of the last iterate of an iterative algorithm, for which |T| ≤ k. Then the second constraint ensures that the new support set differs from the previous one in at most r positions. As a special case, one may have noticed that the PHT operator reduces to standard hard thresholding when the freedom parameter is chosen as r ≥ k. At the other end of the spectrum, if we look at the case r = 1, the PHT operator yields the interesting algorithm termed orthogonal matching pursuit with replacement [12], which in general replaces one element in each iteration. It has been shown in [13] that the PHT operator can be computed efficiently for a general support set T and freedom parameter r. In this paper, our major focus will be on the case |T| = k.¹ Then Lemma 1 of [13] indicates that PHT_k(z; T, r) is given as follows:

T̄^top = supp(z_{T̄}, r),  PHT_k(z; T, r) = HT_k(z_{T ∪ T̄^top}), (2)

where HT_k(·) is the standard hard thresholding operator that sets all but the k largest absolute components of a vector to zero.

¹ Our results actually hold for |T| ≤ k. But we observe that the size of T we will consider is usually equal to k. Hence, for ease of exposition, we take |T| = k. This is also the case considered in [12].
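A minimal NumPy rendering of (2) may be helpful (our function names; ties are broken by NumPy's sort order rather than lexicographically, which is immaterial for generic inputs):

```python
import numpy as np

def hard_threshold(z, k):
    # HT_k: keep the k largest-magnitude entries of z, zero out the rest.
    out = np.zeros_like(z)
    keep = np.argsort(-np.abs(z))[:k]
    out[keep] = z[keep]
    return out

def pht(z, T, r, k):
    """PHT_k(z; T, r) for |T| = k, per Eq. (2): keep z on T together with the
    top-r magnitudes outside T, then hard-threshold the result to k entries."""
    T = np.asarray(sorted(T))
    mask = np.zeros(z.size, dtype=bool)
    mask[T] = True
    outside = np.flatnonzero(~mask)
    top = outside[np.argsort(-np.abs(z[outside]))[:r]]   # supp(z_{T^c}, r)
    zz = np.zeros_like(z)
    zz[T] = z[T]
    zz[top] = z[top]
    return hard_threshold(zz, k)
```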
Equipped with the PHT operator, we are now in a position to describe a general iterative greedy algorithm, termed PHT(r), where r is the freedom parameter in (1). At the t-th iteration, the algorithm reveals the last iterate x^{t-1} as well as its support set S^{t-1}, and returns a new solution as follows:

z^t = x^{t-1} − η∇F(x^{t-1}),
y^t = PHT_k(z^t; S^{t-1}, r),  S^t = supp(y^t),
x^t = arg min_{x ∈ R^d} F(x),  s.t. supp(x) ⊆ S^t.

Above, we note that η > 0 is a step size and F(x) is a proxy function which should be carefully chosen (to be clarified later). Typically, the sparsity parameter k equals s, the sparsity of the target signal x̄. In this paper, we consider a more general choice of k which leads to novel results. For further clarity, several comments on F(x) are in order.

First, one may have observed that in the context of sparsity-constrained minimization, the proxy function F(x) used above is chosen as the objective function [30, 14]. In that scenario, the target signal is a global optimum and PHT(r) proceeds as projected gradient descent. Nevertheless, recall that our goal is to estimate an arbitrary signal x̄. It is not realistic to look for a function F(x) such that our target happens to be its global minimizer. The remedy we offer is characterizing a deterministic condition between x̄ and ∇F(x̄) that is analogous to the signal-to-noise ratio condition, so that any function F(x) fulfilling that condition suffices. In this light, we find that F(x) behaves more like a proxy that guides the algorithm to a given target. Remarkably, our analysis also encompasses the situation considered in [30, 14].

Second, though it is not made explicit, one should think of F(x) as containing the measurements or the training data. Consider, for example, recovering x̄ from y = Ax̄, where A is a design matrix and y is the response (both are known). A natural choice would be to run the PHT(r) algorithm with F(x) = ‖y − Ax‖². One may also think of the logistic regression model, where y is a binary vector (labels), A is a collection of training data (features), and F(x) is the logistic loss evaluated on the training samples.

With the above clarification, we are ready to make assumptions on F(x). It turns out that two properties of F(x) are vital for our analysis: restricted strong convexity and restricted smoothness. These two conditions were proposed by [16] and have become standard in the literature [34, 1, 14, 22].

Definition 1. We say that a differentiable function F(x) satisfies the property of restricted strong convexity (RSC) at sparsity level s with parameter ρ⁻_s > 0 if for all x, x′ ∈ R^d with ‖x − x′‖₀ ≤ s, it holds that

F(x) − F(x′) − ⟨∇F(x′), x − x′⟩ ≥ (ρ⁻_s / 2) ‖x − x′‖².

Likewise, we say that F(x) satisfies the property of restricted smoothness (RSS) at sparsity level s with parameter ρ⁺_s > 0 if for all x, x′ ∈ R^d with ‖x − x′‖₀ ≤ s, it holds that

F(x) − F(x′) − ⟨∇F(x′), x − x′⟩ ≤ (ρ⁺_s / 2) ‖x − x′‖².

We call κ_s = ρ⁺_s / ρ⁻_s the condition number of the problem, since it is essentially identical to the condition number of the Hessian matrix of F(x) restricted to s-sparse directions.
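Putting the operator into the iteration above, here is a sketch of PHT(r) for the least-squares proxy F(x) = ½‖y − Ax‖² (so the fully corrective step is an exact restricted least-squares solve); it reuses `pht` from the previous sketch, and the warm start and stopping rule are our own choices rather than prescribed by the analysis.

```python
import numpy as np

def pht_r(A, y, s, r, eta, max_iter=1000):
    """PHT(r) with k = s and the illustrative proxy F(x) = 0.5 * ||y - A x||^2."""
    d = A.shape[1]

    def debias(S):
        # Fully corrective step: x^t = argmin F(x) subject to supp(x) in S.
        x = np.zeros(d)
        idx = np.asarray(sorted(S))
        x[idx] = np.linalg.lstsq(A[:, idx], y, rcond=None)[0]
        return x

    S = frozenset(np.argsort(-np.abs(A.T @ y))[:s].tolist())  # heuristic warm start
    x = debias(S)
    for _ in range(max_iter):
        z = x - eta * (A.T @ (A @ x - y))                     # gradient step on F
        S_new = frozenset(np.flatnonzero(pht(z, S, r, s)).tolist())
        if S_new == S:                                        # support stabilized
            break
        S, x = S_new, debias(S_new)
    return x, S
```

Setting r = 1 recovers OMPR and r = s recovers HTP-style behavior, matching the two extremes discussed above.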
2.1 Deterministic Analysis

The following proposition shows that under very mild conditions, PHT(r) either terminates or recovers the support of an arbitrary s-sparse signal x̄ using at most O(sκ_{2s} log κ_{2s}) iterations.

Proposition 2. Consider the PHT(r) algorithm with k = s. Suppose that F(x) is ρ⁻_{2s}-RSC and ρ⁺_{2s}-RSS, and the step size satisfies η ∈ (0, 1/ρ⁺_{2s}). Let κ := ρ⁺_{2s}/ρ⁻_{2s}. Then PHT(r) either terminates or recovers the support of x̄ within O(sκ log κ) iterations, provided that

x̄_min ≥ ((4√2 + 2)/ρ⁻_{2s}) ‖∇_{2s}F(x̄)‖.

A few remarks are in order. First, we remind the reader that under the conditions stated above, it is not guaranteed that PHT(r) succeeds. We say that PHT(r) fails if it terminates at some time stamp t with S^t ≠ S. This indeed happens if, for example, we feed it a bad initial point and pick a very small step size. In particular, if x⁰_min > η‖∇F(x⁰)‖_∞, then the algorithm makes no progress. The crux to remedying this issue is imposing a lower bound on η or looking at more coordinates in each iteration, which is the theme below. However, the proposition is still useful because it asserts that, as long as we make sure that PHT(r) runs long enough (i.e., O(sκ log κ) iterations), it recovers the support of an arbitrary sparse signal. We also note that neither the RIP condition nor a relaxed sparsity is assumed in this proposition.

The x̄_min-condition above is natural, and can be viewed as a generalization of the well-known signal-to-noise ratio (SNR) condition. This follows by considering the noisy compressed sensing problem, where y = Ax̄ + e and F(x) = ‖y − Ax‖². Here, the vector e is some noise. Now the RSC and RSS properties imply, for any 2s-sparse x,

ρ⁻_{2s}‖x‖² ≤ ‖Ax‖² ≤ ρ⁺_{2s}‖x‖².

Hence ‖∇_{2s}F(x̄)‖ = ‖(Aᵀe)_{2s}‖ = Θ(‖e‖). In fact, the x̄_min-condition has been used many times to establish support recovery. See, for example, [31, 4, 23].

In the following, we strengthen Prop. 2 by considering the RIP condition, which requires a well-bounded condition number (i.e., κ = O(1)).

Theorem 3. Consider the PHT(r) algorithm with k = s. Suppose that F(x) is ρ⁻_{2s+r}-RSC and ρ⁺_{2s+r}-RSS. Let κ := ρ⁺_{2s+r}/ρ⁻_{2s+r} be the condition number, which is assumed smaller than 1 + 1/(√2 + ν), where ν = √(1 + s/r). Pick the step size η = ω/ρ⁺_{2s+r} such that κ − 1/(√2 + ν) < ω ≤ 1. Then PHT(r) recovers the support of x̄ within

t_max = ( log κ / log(1/θ) + log(√2/(1 − θ)) / log(1/θ) + 2 ) ‖x̄‖₀

iterations, provided that for some constant γ ∈ (0, 1),

x̄_min ≥ ((3√s + 6)/(γρ⁻_{2s+r})) ‖∇_{s+r}F(x̄)‖.

Above, θ = (√2 + ν)(κ − ω) ∈ (0, 1).
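Two concrete instantiations may help at this point; both use the least-squares proxy and the quantities ν and κ exactly as defined above, so the numbers below are illustrations of those formulas rather than additional claims. First, Definition 1 specializes to restricted eigenvalues; second, plugging the two extreme choices of r into the condition-number requirement of Theorem 3 shows why a larger freedom parameter tolerates worse conditioning:

```latex
\begin{align*}
&F(x)=\tfrac12\|y-Ax\|^2 \ \Rightarrow\
F(x)-F(x')-\langle\nabla F(x'),\,x-x'\rangle=\tfrac12\|A(x-x')\|^2,\\
&\rho_s^- = \min_{|S|\le s}\lambda_{\min}\!\bigl(A_S^\top A_S\bigr),\qquad
\rho_s^+ = \max_{|S|\le s}\lambda_{\max}\!\bigl(A_S^\top A_S\bigr),\qquad
\kappa_s=\rho_s^+/\rho_s^-;\\
&\nu=\sqrt{1+s/r}:\qquad
r=s \Rightarrow \nu=\sqrt2,\ \ \kappa_{2s+r}<1+\tfrac{1}{2\sqrt2}\approx 1.354;\qquad
r=1 \Rightarrow \nu=\sqrt{1+s},\ \ \kappa_{2s+r}<1+\tfrac{1}{\sqrt2+\sqrt{1+s}}.
\end{align*}
```

The r = 1 bound degrades toward 1 as s grows, which is precisely the regime where the RIP-free Theorem 4 below matters for OMPR.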
Indeed, in the best case, PHT(s) is able to recover the support in O(1) iterations while PHT(1) has ? min -condition, we find that we need a stronger to take O(s) steps. However, if we investigate the x SNR condition to afford a large freedom parameter. It is also interesting to contrast Theorem 3 to [31, 4], which independently built state-of-the-art support recovery results for HTP. As has been mentioned, [31] made use of the optimality of the target signal, which is a restricted setting compared to our result. Their iteration bound (see Theorem 1 therein), though provides an appealing insight, does not have a clear parameter dependence on the natural parameters of the problem (e.g., sparsity and condition number). [4] developed O(k? xk0 ) iteration complexity for compressed sensing. Again, they confined to a special signal whereas we carry out a generalization that allows us to analyze a family of algorithms. Though the RIP condition has been ubiquitous in the literature, many researchers point out that it is not realistic in practical applications [18, 20, 21]. This is true for large-scale machine learning problems, where the condition number may grow with the sample size (hence one cannot upper bound it with a constant). A clever solution was first (to our knowledge) suggested by [14], where they showed that using the sparsity parameter k = O(?2 s) guarantees convergence of projected gradient descent. The idea was recently employed by [22, 31] to show an RIP-free condition for sparse recovery, though in a technically different way. The following theorem borrows this elegant idea to prove RIP-free results for PHT(r). ? + Theorem 4. Consider the PHT(r) algorithm. Suppose that F (x)  is ?2k -RSCand ?2k -RSS. Let ? 4 ? := ?+ 2k /?2k be the condition number. Further pick k ? s + 1 + ? 2 (?? )2 min{s, r} where 2k ? is included in the iterate of PHT(r) within ? ? (0, 1/?+ 2k ). Then the support of x   3 log ? 2 log(2/(1 ? ?)) tmax = + + 2 k? xk 0 log(1/?) log(1/?) iterations, provided that for some constant ? ? (0, 1), ? ?+3 ? min ? x k?k+s F (? x)k . ??? 2k Above, we have ? = 1 ? + ??? 2k (1???2k ) . 2 We discuss the salient features of Theorem 4 compared to Prop. 2 and Theorem 3. First, note that we can pick ? = 1/(2?+ 2k ) in the above theorem, which results in ? = O(1 ? 1/?). So the iteration complexity is essentially given by O(s? log ?) that is similar to the one in Prop. 2. However, in Theorem 4, the sparsity parameter k is set to be O(s + ?2 min{s, r}) which guarantees support ? min -condition might be refined, in that it inclusion. We pose an ? open question of whether the x currently scales with ? which is stringent for ill-conditioned problems. Another important consequence implied by the theorem is that the sparsity parameter k actually depends on the minimum of s and r. Consider r = 1 which corresponds to the OMPR algorithm. Then k = O(s + ?2 ) suffices. In contrast, previous work of [14, 31, 22, 23] only obtained theoretical result for k = O(?2 s), owing to a restricted problem setting. We also note that even in the original OMPR paper [12] and its latest version [13], such an RIP-free condition was not established. 2.2 Statistical Results Until now, all of our theoretical results are phrased in terms of deterministic conditions (i.e., RSC, ? min ). It is known that these conditions can be satisfied by prevalent statistical models RSS and x 5 such as linear regression and logistic regression. 
Here, we give detailed statistical results for sparse linear regression, and we refer the reader to [1, 14, 22, 23] for other applications. Consider the sparse linear regression model

y_i = ⟨a_i, x̄⟩ + e_i,  1 ≤ i ≤ N,

where the a_i are drawn i.i.d. from a sub-Gaussian distribution with zero mean and covariance Σ ∈ R^{d×d}, and the e_i are drawn i.i.d. from N(0, σ²). We presume that the diagonal elements of Σ are properly scaled, i.e., Σ_jj ≤ 1 for 1 ≤ j ≤ d. Let A = (a₁ᵀ; …; a_Nᵀ) and y = (y₁; …; y_N). Our goal is to recover x̄ from the knowledge of A and y. To this end, we may choose F(x) = ½‖y − Ax‖². Let λ_min(Σ) and λ_max(Σ) be the smallest and largest singular values of Σ, respectively. Then it is known that, with high probability, F(x) satisfies the RSC and RSS properties at sparsity level K with parameters

ρ⁻_K = λ_min(Σ) − C₁ · (K log d)/N,  ρ⁺_K = λ_max(Σ) + C₂ · (K log d)/N, (3)

respectively. It is also known that, with high probability, the following holds:

‖∇_K F(x̄)‖ ≤ 2σ √((K log d)/N). (4)

See [1] for a detailed discussion. With these probabilistic arguments in hand, we investigate sufficient conditions under which the preceding deterministic results hold. For Prop. 2, recall that the sparsity level of RSC and RSS is 2s. Hence, if we pick the sample size N = q · 2C₁ s log d / λ_min(Σ) for some q > 1, then ρ⁻_{2s} ≥ λ_min(Σ)(1 − 1/q) and ρ⁺_{2s} ≤ λ_max(Σ)(1 + C₂/(qC₁)), and combining with (4),

((4√2 + 2)/ρ⁻_{2s}) ‖∇_{2s}F(x̄)‖ ≤ (8√2 + 4)σ / ((1 − 1/q) √(qC₁ λ_min(Σ))).

The right-hand side is monotonically decreasing in q, which indicates that as soon as we pick q large enough, it becomes smaller than x̄_min. To be more concrete, consider that the covariance matrix Σ is the identity, for which λ_min(Σ) = λ_max(Σ) = 1. Now suppose that q ≥ 2, which gives the upper bound

((4√2 + 2)/ρ⁻_{2s}) ‖∇_{2s}F(x̄)‖ ≤ 8σ(2√2 + 2 + C₂/C₁) / √(qC₁).

Thus, in order to fulfill the x̄_min-condition in Prop. 2, it suffices to pick

q = max{ 2, ( 8σ(2√2 + 2 + C₂/C₁) / (√C₁ · x̄_min) )² }.

For Theorem 3, it essentially asks for a well-conditioned design matrix at sparsity level 2s + r. Note that (3) implies κ_{2s+r} ≈ λ_max(Σ)/λ_min(Σ) for large N, which in turn requires a well-conditioned covariance matrix. Thus, to guarantee κ_{2s+r} ≤ 1 + δ for some δ > 0, it suffices to choose Σ such that λ_max(Σ)/λ_min(Σ) < 1 + δ and pick N = q · C₁(2s + r) log d / λ_min(Σ) with

q = ( 1 + δ + C₁⁻¹C₂ · λ_max(Σ)/λ_min(Σ) ) / ( 1 + δ − λ_max(Σ)/λ_min(Σ) ).

Finally, Theorem 4 asserts support inclusion by expanding the support size of the iterates. Suppose that η = 1/(2ρ⁺_{2k}), which results in k ≥ s + (16κ²_{2k} + 1) min{r, s}. Given that the condition number κ_{2k} is always greater than 1, we can pick k ≥ s + 20κ²_{2k} min{r, s}. At first sight this seems circular, in that k depends on the condition number κ_{2k}, which itself relies on the choice of k. In the following, we present concrete sample complexities showing that this condition can be met. We focus on two extreme cases: r = 1 and r = s.

For r = 1, we require k ≥ s + 20κ²_{2k}. Let us pick N = 4C₁ k log d / λ_min(Σ). In this way, we obtain ρ⁻_{2k} ≥ λ_min(Σ)/2 and ρ⁺_{2k} ≤ (1 + C₂/(2C₁)) λ_max(Σ). It then follows that the condition number satisfies κ_{2k} ≤ (2 + C₂/C₁) λ_max(Σ)/λ_min(Σ). Consequently, we can set the parameter

k = s + 20 ( (2 + C₂/C₁) λ_max(Σ)/λ_min(Σ) )².

Note that the above quantities depend only on the covariance matrix. Again, if Σ is the identity matrix, the sample complexity is O(s log d).
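To make the Prop. 2 bookkeeping from earlier in this subsection concrete, here is a hypothetical back-of-the-envelope helper for Σ = I (so λ_min = λ_max = 1). The constants C1 and C2 are the unspecified absolute constants from (3)-(4) and are set to 1 purely for illustration; the returned N follows the choice of q derived above.

```python
import numpy as np

def sample_size_prop2(s, d, sigma, x_min, C1=1.0, C2=1.0):
    """Sample size N = 2 * q * C1 * s * log(d) making the x_min-condition of
    Prop. 2 hold for Sigma = I, using the q chosen in the text above."""
    q = max(2.0,
            (8 * sigma * (2 * np.sqrt(2) + 2 + C2 / C1)
             / (np.sqrt(C1) * x_min)) ** 2)
    return int(np.ceil(2 * q * C1 * s * np.log(d)))
```

For instance, `sample_size_prop2(s=10, d=200, sigma=0.01, x_min=0.5)` yields a concrete N for a setting like the one simulated in Section 4.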
For r = s, likewise k ≥ 20κ²_{2k} s suffices, and following the deduction above we get

k = 20 ( (2 + C₂/C₁) λ_max(Σ)/λ_min(Σ) )² s.

3 Proof Sketch

We sketch the proof and list some useful lemmas which may be of independent interest. The high-level proof technique follows the recent work of [4], which performs an RIP analysis for compressed sensing. But for our purpose, we have to deal with the freedom parameter r as well as the RIP-free condition. We also need to generalize the arguments in [4] to show support recovery results for arbitrary sparse signals. Indeed, we prove the following lemma, which is crucial for our analysis. Below we assume, without loss of generality, that the elements of x̄ are in descending order of magnitude.

Lemma 5. Consider the PHT(r) algorithm. Assume that F(x) is ρ⁻_{2k}-RSC and ρ⁺_{2k}-RSS. Further assume that the sequence {x^t}_{t≥0} satisfies

‖x^t − x̄‖ ≤ α μ^t ‖x⁰ − x̄‖ + β₁, (5)
‖x^t − x̄‖ ≤ λ ‖x̄_{S̄^t}‖ + β₂, (6)

for positive α, β₁, λ, β₂ and 0 < μ < 1. Suppose that at the n-th iteration (n ≥ 0), S^n contains the indices of the top p (in magnitude) elements of x̄. Then, for any integer 1 ≤ q ≤ s − p, there exists an integer Δ ≥ 1, determined by

√2 |x̄_{p+q}| > α μ^{Δ−1} ‖x̄_{{p+1,…,s}}‖ + Φ,  where Φ = λβ₂ + β₁ + (1/ρ⁻_{2k}) ‖∇₂F(x̄)‖,

such that S^{n+Δ} contains the indices of the top p + q elements of x̄, provided that Φ ≤ √2 γ x̄_min for some constant γ ∈ (0, 1).

We isolate this lemma here since we find it inspiring and general. The lemma states that, under proper conditions, as long as one can show that the sequence satisfies (5) and (6), then after a few iterations PHT(r) captures more correct indices in the iterate. Note that condition (5) states that the sequence contracts at a geometric rate, and condition (6) follows immediately from the fully corrective step (i.e., minimizing F(x) over the new support set). The next theorem concludes that, under the conditions of Lemma 5, the total iteration complexity for support recovery is proportional to the sparsity of the underlying signal.

Theorem 6. Assume the same conditions as in Lemma 5. Then PHT(r) successfully identifies the support of x̄ using

2 ( log(αλ/(1 − μ)) / log(1/μ) + log 2 / log(1/μ) + 2 ) ‖x̄‖₀

iterations.

The detailed proofs of these two results are given in the appendix. Armed with them, it remains to show that PHT(r) satisfies condition (5) under the different settings.

Proof Sketch for Prop. 2. We start by comparing F(z^t_{S^t}) and F(x^{t−1}). For this purpose, we record several important properties. First, due to the fully corrective step, the support set of ∇F(x^{t−1}) is orthogonal to S^{t−1}. That means that for any subset Ω ⊆ S^{t−1}, z^t_Ω = x^{t−1}_Ω, and for any set Ω ⊆ S̄^{t−1}, z^t_Ω = −η∇_Ω F(x^{t−1}). We also note that, due to the PHT operator, any element of z^t_{S^t\S^{t−1}} is not smaller than those of z^t_{S^{t−1}\S^t}. These critical facts together with the RSS condition result in

F(x^t) − F(x^{t−1}) ≤ F(z^t_{S^t}) − F(x^{t−1}) ≤ −η(1 − ηρ⁺_{2s}) ‖∇_{S^t\S^{t−1}} F(x^{t−1})‖².

Since S^t \ S^{t−1} consists of the top elements of ∇F(x^{t−1}), we can show that

‖∇_{S^t\S^{t−1}} F(x^{t−1})‖² ≥ (2ρ⁻_{2s} |S^t \ S^{t−1}|) / (|S^t \ S^{t−1}| + |S \ S^{t−1}|) · (F(x^{t−1}) − F(x̄)).

Combining the two displays, we establish the linear convergence of the iterates, i.e., condition (5). The result then follows.

Proof Sketch for Theorem 3. To prove this theorem, we present a more careful analysis of the problem structure. In particular, let T = supp(∇F(x^{t−1}), r), J^t = S^{t−1} ∪ T, and consider the elements of ∇F(x^{t−1}).
Since T contains the largest elements, any element outside T is smaller than those of T. Then we may compare the elements of ∇F(x^{t−1}) on S \ T and T \ S. Though they have different numbers of components, we can show a relationship between the averaged energies:

(1/|T \ S|) ‖∇_{T\S} F(x^{t−1})‖² ≥ (1/|S \ T|) ‖∇_{S\T} F(x^{t−1})‖².

Using this inequality followed by some standard relaxation, we can bound ‖x̄_{J̄^t}‖ in terms of ‖x^{t−1} − x̄‖ as follows.

Lemma 7. Assume that F(x) satisfies the properties of RSC and RSS at sparsity level k + s + r. Let ρ⁻ := ρ⁻_{k+s+r} and ρ⁺ := ρ⁺_{k+s+r}. Consider the support set J^t = S^{t−1} ∪ supp(∇F(x^{t−1}), r). For any 0 < η ≤ 1/ρ⁺, we have

‖x̄_{J̄^t}‖ ≤ ν(1 − ηρ⁻) ‖x^{t−1} − x̄‖ + ην ‖∇_{s+r}F(x̄)‖,

where ν = √(1 + s/r). In particular, picking η = 1/ρ⁺ gives

‖x̄_{J̄^t}‖ ≤ ν(1 − 1/κ) ‖x^{t−1} − x̄‖ + (ν/ρ⁺) ‖∇_{s+r}F(x̄)‖.

Note that the lemma also applies to two-stage thresholding algorithms (e.g., CoSaMP [15]) whose first step expands the support set. On the other hand, we also know that

‖z^t_{J^t\S^t}‖ ≤ ‖z^t_{J^t\S}‖.

This is because J^t \ S^t contains the r smallest elements of z^t_{J^t}. It then follows that ‖x̄_{J^t\S^t}‖ can be upper bounded by ‖x^{t−1} − x̄‖. Finally, we note that S̄^t = (J^t \ S^t) ∪ J̄^t. Hence, (5) follows.

Proof Sketch for Theorem 4. The proof idea of Theorem 4 is inspired by [31], though we give a tighter and more general analysis. We first observe that S^t \ S^{t−1} contains larger elements than S^{t−1} \ S^t, due to PHT. This enables us to show that

F(x^t) − F(x^{t−1}) ≤ −((1 − ηρ⁺_{2k})/(2η)) ‖z^t_{S^t} − x^{t−1}‖² ≤ −(η(1 − ηρ⁺_{2k})/2) ‖∇_{S^t\S^{t−1}} F(x^{t−1})‖².

Then we prove the claim

‖∇_{S^t\S^{t−1}} F(x^{t−1})‖² ≥ ρ⁻_{2k} (F(x^{t−1}) − F(x̄)).

To this end, we consider whether r is larger than s. If r ≥ s, then it is possible that |S^t \ S^{t−1}| ≥ s. In this case, using the RSC condition and the PHT property, we can show that

‖∇_{S^t\S^{t−1}} F(x^{t−1})‖² ≥ ‖∇_{S\S^{t−1}} F(x^{t−1})‖² ≥ ρ⁻_{2k} (F(x^{t−1}) − F(x̄)).

If |S^t \ S^{t−1}| < s, then the above does not hold. But we may partition the set S \ S^{t−1} as a union of T₁ = S \ (S^t ∪ S^{t−1}) and T₂ = (S^t \ S^{t−1}) ∩ S, and show that the ℓ2-norm of ∇F(x^{t−1}) on T₁ is smaller than that on T₂ when k is chosen as in the theorem. In addition, the RSC condition gives

F(x̄) ≥ F(x^{t−1}) + (ρ⁻_{2k}/4) ‖x̄ − x^{t−1}‖² − (1/ρ⁻_{2k}) ‖∇_{T₁} F(x^{t−1})‖² − (1/ρ⁻_{2k}) ‖∇_{S^t\S^{t−1}} F(x^{t−1})‖².

Since T₂ ⊆ S^t \ S^{t−1}, this implies the desired bound after rearranging the terms. The case r < s follows in a reminiscent way. The proof is complete.

4 Simulation

We complement our theoretical results by performing numerical experiments in this section. In particular, we are interested in two aspects: first, the number of iterations required to identify the support of an s-sparse signal; second, the tradeoff between the iteration number and the percentage of success resulting from different choices of the freedom parameter r.

[Figure 1 appears here; its caption is given below.]

We consider the compressed sensing model y = Ax̄ + 0.01e, where the dimension d = 200 and the entries of A and e are i.i.d. normal variables. Given a sparsity level s, we first uniformly choose the support of x̄, and assign values to the non-zeros with i.i.d. normals. There are two configurations: the sparsity s and the sample size N.
Given s and N, we independently generate 100 signals and test PHT(r) on them. We say PHT(r) succeeds in a trial if it returns an iterate with the correct support within 10 thousand iterations. Otherwise we mark the trial as a failure. The iteration numbers reported are counted only over the successful trials. The step size η is fixed to one, though one can tune it using cross-validation for better performance.

Figure 1: Iteration number and success percentage against sparsity and sample size. The first panel shows that the iteration number grows linearly with the sparsity. The choice r = 5 suffices to guarantee a minimum iteration complexity. The second panel shows comparable statistical performance for different choices of r. The third one illustrates how the iteration number changes with the sample size, and the last panel depicts the phase transition.

To study how the iteration number scales with the sparsity in practice, we fix N = 200 and vary s from 1 to 100. We test different freedom parameters r on these signals. The results are shown in the leftmost panel of Figure 1. As our theory predicts, we observe that within O(s) iterations, PHT(r) precisely identifies the true support. In the second panel, we plot the percentage of success against the sparsity. The curves for the different choices of r lie on top of each other. This is possibly because we used a sufficiently large sample size.

Next, we fix s = 10 and vary N from 1 to 200. Surprisingly, in the rightmost panel, we do not observe performance degradation when using a large freedom parameter. So we conjecture that the x̄_min-condition we established can be refined. Figure 1 also illustrates an interesting phenomenon: beyond a particular threshold, say r = 5, increasing r does not significantly reduce the iteration number. This cannot be explained by our theorems in the paper. We leave it as a promising research direction.

5 Conclusion and Future Work

In this paper, we have presented a principled analysis of a family of hard thresholding algorithms. To facilitate our analysis, we appealed to the recently proposed partial hard thresholding operator. We have shown that under the RIP condition or the relaxed sparsity condition, the PHT(r) algorithm recovers the support of an arbitrary sparse signal x̄ within O(‖x̄‖₀ · κ log κ) iterations, provided that a generalized signal-to-noise ratio condition is satisfied. On account of our unified analysis, we have established the best known bounds for HTP and OMPR. We have also illustrated that the simulation results agree with our finding that the iteration number is proportional to the sparsity.

There are several interesting future directions. First, it would be interesting to examine whether the logarithmic factor log κ in the iteration bound can be removed. Second, it is also useful to study RIP-free conditions for two-stage PHT-style algorithms such as CoSaMP. Finally, we pose the open question of whether one can improve the √κ factor in the x̄_min-condition.

Acknowledgements. The work is supported in part by NSF-Bigdata-1419210 and NSF-III-1360971. We thank the anonymous reviewers for valuable comments.

References
[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. The Annals of Statistics, 40(5):2452-2482, 2012.
[2] S. Bahmani, B. Raj, and P. T. Boufounos. Greedy sparsity-constrained optimization.
Journal of Machine Learning Research, 14(1):807-841, 2013.
[3] T. Blumensath and M. E. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265-274, 2009.
[4] J.-L. Bouchot, S. Foucart, and P. Hitczenko. Hard thresholding pursuit algorithms: number of iterations. Applied and Computational Harmonic Analysis, 41(2):412-435, 2016.
[5] T. T. Cai and L. Wang. Orthogonal matching pursuit for sparse signal recovery with noise. IEEE Trans. Information Theory, 57(7):4680-4688, 2011.
[6] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Information Theory, 51(12):4203-4215, 2005.
[7] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33-61, 1998.
[8] W. Dai and O. Milenkovic. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Information Theory, 55(5):2230-2249, 2009.
[9] I. Daubechies, M. Defrise, and C. D. Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413-1457, 2004.
[10] S. Foucart. Hard thresholding pursuit: An algorithm for compressive sensing. SIAM Journal on Numerical Analysis, 49(6):2543-2563, 2011.
[11] S. Foucart and H. Rauhut. A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis. Birkhäuser, 2013.
[12] P. Jain, A. Tewari, and I. S. Dhillon. Orthogonal matching pursuit with replacement. In Proceedings of the 25th Annual Conference on Neural Information Processing Systems, pages 1215-1223, 2011.
[13] P. Jain, A. Tewari, and I. S. Dhillon. Partial hard thresholding. IEEE Trans. Information Theory, 63(5):3029-3038, 2017.
[14] P. Jain, A. Tewari, and P. Kar. On iterative hard thresholding methods for high-dimensional M-estimation. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems, pages 685-693, 2014.
[15] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301-321, 2009.
[16] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems, pages 1348-1356, 2009.
[17] S. Osher, F. Ruan, J. Xiong, Y. Yao, and W. Yin. Sparse recovery via differential inclusions. Applied and Computational Harmonic Analysis, 41(2):436-469, 2016.
[18] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of lasso and Dantzig selector. The Annals of Statistics, pages 1705-1732, 2009.
[19] Y. Plan and R. Vershynin. Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. IEEE Trans. Information Theory, 59(1):482-494, 2013.
[20] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241-2259, 2010.
[21] M. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. IEEE Trans. Information Theory, 59(6):3434-3447, 2013.
[22] J. Shen and P. Li. A tight bound of hard thresholding. CoRR, abs/1605.01656, 2016.
[23] J. Shen and P. Li. On the iteration complexity of support recovery via hard thresholding pursuit. In Proceedings of the 34th International Conference on Machine Learning, pages 3115-3124, 2017.
[24] R. Tibshirani.
Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B (Methodological), pages 267?288, 1996. [25] J. A. Tropp. Greed is good: algorithmic results for sparse approximation. IEEE Trans. Information Theory, 50(10):2231?2242, 2004. [26] J. A. Tropp and S. J. Wright. Computational methods for sparse solution of linear inverse problems. Proceedings of the IEEE, 98(6):948?958, 2010. [27] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ?1 -constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55(5):2183? 2202, 2009. [28] J. Wang, S. Kwon, P. Li, and B. Shim. Recovery of sparse signals via generalized orthogonal matching pursuit: A new analysis. IEEE Trans. Signal Processing, 64(4):1076?1089, 2016. [29] M. Yuan and Y. Lin. On the non-negative garrotte estimator. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(2):143?161, 2007. [30] X.-T. Yuan, P. Li, and T. Zhang. Gradient hard thresholding pursuit for sparsity-constrained optimization. In Proceedings of the 31st International Conference on Machine Learning, pages 127?135, 2014. [31] X.-T. Yuan, P. Li, and T. Zhang. Exact recovery of hard thresholding pursuit. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems, pages 3558?3566, 2016. [32] T. Zhang. On the consistency of feature selection using greedy least squares regression. Journal of Machine Learning Research, 10:555?568, 2009. [33] T. Zhang. Some sharp performance bounds for least squares regression with L1 regularization. The Annals of Statistics, 37(5A):2109?2144, 2009. [34] T. Zhang. Sparse recovery with orthogonal matching pursuit under RIP. IEEE Trans. Information Theory, 57(9):6215?6221, 2011. [35] P. Zhao and B. Yu. On model selection consistency of lasso. Journal of Machine Learning Research, 7:2541?2563, 2006. 11
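For readers who want to experiment with the simulations in Section 4, the following is a minimal NumPy sketch of one PHT(r)-style iteration. It is one plausible instantiation under stated assumptions, not the authors' reference implementation: we assume the partial hard thresholding operator admits at most r indices from outside the current support before keeping the k largest magnitudes, and that each iteration ends with a least-squares debias on the new support (as in HTP and OMPR); all names are hypothetical.

```python
import numpy as np

def pht_step(x, A, y, k, r, eta=1.0):
    """One PHT(r)-style iteration (sketch).

    Gradient step on 0.5*||Ax - y||^2, then a partial hard threshold that
    admits at most r indices from outside the current support, then a
    least-squares debias on the new support. With r = k this reduces to an
    HTP-style step; r = 1 behaves like OMPR.
    """
    n = len(x)
    support = np.flatnonzero(x)
    z = x - eta * A.T @ (A @ x - y)
    outside = np.setdiff1d(np.arange(n), support)
    # Admit the r largest entries (in magnitude) outside the current support...
    admitted = outside[np.argsort(-np.abs(z[outside]))[:r]]
    candidates = np.union1d(support, admitted)
    # ...then keep the k largest magnitudes among support + admitted indices,
    # which guarantees at most r support changes per iteration.
    new_support = candidates[np.argsort(-np.abs(z[candidates]))[:k]]
    # Debias: least squares restricted to the new support.
    coef, *_ = np.linalg.lstsq(A[:, new_support], y, rcond=None)
    x_new = np.zeros(n)
    x_new[new_support] = coef
    return x_new
```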
Shallow Updates for Deep Reinforcement Learning

Nir Levine* (Dept. of Electrical Engineering, The Technion - Israel Institute of Technology, Israel, Haifa 3200003, levin.nir1@gmail.com), Tom Zahavy* (Dept. of Electrical Engineering, The Technion - Israel Institute of Technology, Israel, Haifa 3200003, tomzahavy@campus.technion.ac.il), Daniel J. Mankowitz (Dept. of Electrical Engineering, The Technion - Israel Institute of Technology, Israel, Haifa 3200003, danielm@tx.technion.ac.il), Aviv Tamar (Dept. of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA 94720, avivt@berkeley.edu), Shie Mannor (Dept. of Electrical Engineering, The Technion - Israel Institute of Technology, Israel, Haifa 3200003, shie@ee.technion.ac.il)

* Joint first authors. Ordered alphabetically by first name.

Abstract

Deep reinforcement learning (DRL) methods such as the Deep Q-Network (DQN) have achieved state-of-the-art results in a variety of challenging, high-dimensional domains. This success is mainly attributed to the power of deep neural networks to learn rich domain representations for approximating the value function or policy. Batch reinforcement learning methods with linear representations, on the other hand, are more stable and require less hyper-parameter tuning. Yet, substantial feature engineering is necessary to achieve good results. In this work we propose a hybrid approach, the Least Squares Deep Q-Network (LS-DQN), which combines rich feature representations learned by a DRL algorithm with the stability of a linear least squares method. We do this by periodically re-training the last hidden layer of a DRL network with a batch least squares update. Key to our approach is a Bayesian regularization term for the least squares update, which prevents over-fitting to the more recent data. We tested LS-DQN on five Atari games and demonstrate significant improvement over vanilla DQN and Double-DQN. We also investigated the reasons for the superior performance of our method. Interestingly, we found that the performance improvement can be attributed to the large batch size used by the LS method when optimizing the last layer.

1 Introduction

Reinforcement learning (RL) is a field of research that uses dynamic programming (DP; Bertsekas 2008), among other approaches, to solve sequential decision making problems. The main challenge in applying DP to real world problems is an exponential growth of computational requirements as the problem size increases, known as the curse of dimensionality (Bertsekas, 2008).

RL tackles the curse of dimensionality by approximating terms in the DP calculation such as the value function or policy. Popular function approximators for this task include deep neural networks, henceforth termed deep RL (DRL), and linear architectures, henceforth termed shallow RL (SRL).

SRL methods have enjoyed wide popularity over the years (see, e.g., Tsitsiklis et al. 1997; Bertsekas 2008 for extensive reviews). In particular, batch algorithms based on a least squares (LS) approach, such as Least Squares Temporal Difference (LSTD, Lagoudakis & Parr 2003) and Fitted-Q Iteration (FQI, Ernst et al. 2005) are known to be stable and data efficient. However, the success of these algorithms crucially depends on the quality of the feature representation. Ideally, the representation encodes rich, expressive features that can accurately represent the value function.
However, in practice, finding such good features is difficult and often hampers the usage of linear function approximation methods. In DRL, on the other hand, the features are learned together with the value function in a deep architecture. Recent advancements in DRL using convolutional neural networks demonstrated learning of expressive features (Zahavy et al., 2016; Wang et al., 2016) and state-of-the-art performance in challenging tasks such as video games (Mnih et al. 2015; Tessler et al. 2017; Mnih et al. 2016), and Go (Silver et al., 2016). To date, the most impressive DRL results (e.g., the works of Mnih et al. 2015 and Mnih et al. 2016) were obtained using online RL algorithms, based on a stochastic gradient descent (SGD) procedure.

On the one hand, SRL is stable and data efficient. On the other hand, DRL learns powerful representations. This motivates us to ask: can we combine DRL with SRL to leverage the benefits of both?

In this work, we develop a hybrid approach that combines batch SRL algorithms with online DRL. Our main insight is that the last layer in a deep architecture can be seen as a linear representation, with the preceding layers encoding features. Therefore, the last layer can be learned using standard SRL algorithms. Following this insight, we propose a method that repeatedly re-trains the last hidden layer of a DRL network with a batch SRL algorithm, using data collected throughout the DRL run. We focus on value-based DRL algorithms (e.g., the popular DQN of Mnih et al. 2015) and on SRL based on LS methods¹, and propose the Least Squares DQN algorithm (LS-DQN). Key to our approach is a novel regularization term for the least squares method that uses the DRL solution as a prior in a Bayesian least squares formulation. Our experiments demonstrate that this hybrid approach significantly improves performance on the Atari benchmark for several combinations of DRL and SRL methods. To support our results, we performed an in-depth analysis to tease out the factors that make our hybrid approach outperform DRL. Interestingly, we found that the improved performance is mainly due to the large batch size of SRL methods compared to the small batch size that is typical for DRL.

¹ Our approach can be generalized to other DRL/SRL variants.

2 Background

In this section we describe our RL framework and several shallow and deep RL algorithms that will be used throughout the paper.

RL Framework: We consider a standard RL formulation (Sutton & Barto, 1998) based on a Markov Decision Process (MDP). An MDP is a tuple ⟨S, A, R, P, γ⟩, where S is a finite set of states, A is a finite set of actions, and γ ∈ [0, 1] is the discount factor. A transition probability function P : S × A → Δ_S maps states and actions to a probability distribution over next states. Finally, R : S × A → [R_min, R_max] denotes the reward. The goal in RL is to learn a policy π : S → Δ_A that solves the MDP by maximizing the expected discounted return E[∑_{t=0}^∞ γ^t r_t | π]. Value-based RL methods make use of the action value function Q^π(s, a) = E[∑_{t=0}^∞ γ^t r_t | s_t = s, a_t = a, π], which represents the expected discounted return of executing action a ∈ A from state s ∈ S and following the policy π thereafter. The optimal action value function Q*(s, a) obeys a fundamental recursion known as the Bellman equation: Q*(s, a) = E[r_t + γ max_{a′} Q*(s_{t+1}, a′) | s_t = s, a_t = a].
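To make the Bellman recursion concrete, here is a minimal NumPy sketch of Q-value iteration on a small MDP. The transition and reward tables below are made up for illustration only:

```python
import numpy as np

gamma = 0.9
# Hypothetical 3-state, 2-action MDP; P[a, s, s'] are transition probabilities.
P = np.array([[[0.8, 0.2, 0.0],
               [0.1, 0.9, 0.0],
               [0.0, 0.1, 0.9]],
              [[0.0, 0.5, 0.5],
               [0.0, 0.0, 1.0],
               [0.5, 0.0, 0.5]]])
R = np.array([[0.0, 1.0],   # R[s, a]
              [0.0, 0.0],
              [1.0, 0.0]])

Q = np.zeros((3, 2))
for _ in range(500):
    V = Q.max(axis=1)            # V(s') = max_a' Q(s', a')
    # Bellman backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
    Q = R + gamma * (P @ V).T
print(np.round(Q, 3))
```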
2.1 SRL algorithms

Least Squares Temporal Difference Q-Learning (LSTD-Q): LSTD (Barto & Crites, 1996) and LSTD-Q (Lagoudakis & Parr, 2003) are batch SRL algorithms. LSTD-Q learns a control policy π from a batch of samples by estimating a linear approximation Q̂^π = Φw^π ∈ R^{|S||A|} of the action value function, where w^π ∈ R^k are a set of weights and Φ ∈ R^{|S||A|×k} is a feature matrix. Each row of Φ represents a feature vector for a state-action pair ⟨s, a⟩. The weights w^π are learned by enforcing Q̂^π to satisfy a fixed-point equation w.r.t. the projected Bellman operator, resulting in a system of linear equations Aw^π = b, where A = Φᵀ(Φ − γPΠ_π Φ) and b = ΦᵀR. Here, R ∈ R^{|S||A|} is the reward vector, P ∈ R^{|S||A|×|S|} is the transition matrix, and Π_π ∈ R^{|S|×|S||A|} is a matrix describing the policy. Given a set of N_SRL samples D = {s_i, a_i, r_i, s_{i+1}}_{i=1}^{N_SRL}, we can approximate A and b with the following empirical averages:

Â = (1/N_SRL) ∑_{i=1}^{N_SRL} φ(s_i, a_i)ᵀ [φ(s_i, a_i) − γ φ(s_{i+1}, π(s_{i+1}))],   b̂ = (1/N_SRL) ∑_{i=1}^{N_SRL} φ(s_i, a_i)ᵀ r_i.   (1)

The weights w^π can be calculated using a least squares minimization, ŵ^π = arg min_w ‖Âw − b̂‖₂², or by calculating the pseudo-inverse: ŵ^π = Â†b̂. LSTD-Q is an off-policy algorithm: the same set of samples D can be used to train any policy π so long as π(s_{i+1}) is defined for every s_{i+1} in the set.

Fitted Q Iteration (FQI): The FQI algorithm (Ernst et al., 2005) is a batch SRL algorithm that computes iterative approximations of the Q-function using regression. At iteration N of the algorithm, the set D defined above and the approximation from the previous iteration Q_{N−1} are used to generate supervised learning targets: y_i = r_i + γ max_{a′} Q_{N−1}(s_{i+1}, a′), ∀i ∈ [N_SRL]. These targets are then used by a supervised learning (regression) method to compute the next function in the sequence Q_N, by minimizing the MSE loss Q_N = argmin_Q ∑_{i=1}^{N_SRL} (Q(s_i, a_i) − (r_i + γ max_{a′} Q_{N−1}(s_{i+1}, a′)))². For a linear function approximation Q_n(s, a) = φᵀ(s, a) w_n, LS can be used to give the FQI solution w_n = arg min_w ‖Âw − b̂‖₂², where Â, b̂ are given by:

Â = (1/N_SRL) ∑_{i=1}^{N_SRL} φ(s_i, a_i)ᵀ φ(s_i, a_i),   b̂ = (1/N_SRL) ∑_{i=1}^{N_SRL} φ(s_i, a_i)ᵀ y_i.   (2)

The FQI algorithm can also be used with non-linear function approximations such as trees (Ernst et al., 2005) and neural networks (Riedmiller, 2005). The DQN algorithm (Mnih et al., 2015) can be viewed as an online form of FQI.

2.2 DRL algorithms

Deep Q-Network (DQN): The DQN algorithm (Mnih et al., 2015) learns the Q function by minimizing the mean squared error of the Bellman equation, defined as E_{s_t,a_t,r_t,s_{t+1}} ‖Q_θ(s_t, a_t) − y_t‖₂², where y_t = r_t + γ max_{a′} Q_{θ_target}(s_{t+1}, a′). The DQN maintains two separate networks, namely the current network with weights θ and the target network with weights θ_target. Fixing the target network makes the DQN algorithm equivalent to FQI (see the FQI MSE loss defined above), where the regression algorithm is chosen to be SGD (RMSPROP, Hinton et al. 2012). The DQN is an off-policy learning algorithm. Therefore, the tuples ⟨s_t, a_t, r_t, s_{t+1}⟩ that are used to optimize the network weights are first collected from the agent's experience and are stored in an Experience Replay (ER) buffer (Lin, 1993), providing improved stability and performance.
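As a concrete illustration of the batch solvers above, the following is a minimal NumPy sketch of the empirical LSTD-Q system in Equation 1. The small ridge term is our addition (an assumption, motivated by the ill-conditioning discussed later in Section 4.1), and all names are hypothetical:

```python
import numpy as np

def lstdq_weights(phi, phi_next, rewards, gamma, reg=1e-6):
    """Solve the empirical LSTD-Q system of Equation 1 (sketch).

    phi      : (N, k) features phi(s_i, a_i)
    phi_next : (N, k) features phi(s_{i+1}, pi(s_{i+1})) under the evaluated policy
    rewards  : (N,)   rewards r_i
    """
    n, k = phi.shape
    A_hat = phi.T @ (phi - gamma * phi_next) / n
    b_hat = phi.T @ rewards / n
    # Small ridge term guards against an ill-conditioned A_hat.
    return np.linalg.solve(A_hat + reg * np.eye(k), b_hat)
```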
Double DQN (DDQN): DDQN (Van Hasselt et al., 2016) is a modification of the DQN algorithm that addresses overly optimistic estimates of the value function. This is achieved by performing action selection with the current network θ and evaluating the action with the target network θ_target, yielding the DDQN target update: y_t = r_t if s_{t+1} is terminal, otherwise y_t = r_t + γ Q_{θ_target}(s_{t+1}, argmax_a Q_θ(s_{t+1}, a)).

3 The LS-DQN Algorithm

We now present a hybrid approach for DRL with SRL updates². Our algorithm, the LS-DQN algorithm, periodically switches between training a DRL network and re-training its last hidden layer using an SRL method.³ We assume that the DRL algorithm uses a deep network for representing the Q function⁴, where the last layer is linear and fully connected. Such networks have been used extensively in deep RL recently (e.g., Mnih et al. 2015; Van Hasselt et al. 2016; Mnih et al. 2016). In such a representation, the last layer, which approximates the Q function, can be seen as a linear combination of features (the output of the penultimate layer), and we propose to learn more accurate weights for it using SRL.

Explicitly, the LS-DQN algorithm begins by training the weights of a DRL network, w_k, using a value-based DRL algorithm for N_DRL steps (Line 2). LS-DQN then updates the last hidden layer weights, w_k^last, by executing LS-UPDATE: re-training the weights using an SRL algorithm with N_SRL samples (Line 3). The LS-UPDATE consists of the following steps. First, data trajectories D for the batch update are gathered using the current network weights w_k (Line 7). In practice, the current experience replay can be used and no additional samples need to be collected. The algorithm next generates new features φ(s, a) from the data trajectories using the current DRL network with weights w_k. This step guarantees that we do not use samples with inconsistent features, as the ER contains features from 'old' network weights. Computationally, this step requires running a forward pass of the deep network for every sample in D, and can be performed quickly using parallelization. Once the new features are generated, LS-DQN uses an SRL algorithm to re-calculate the weights of the last hidden layer w_k^last (Line 9).

While the LS-DQN algorithm is conceptually straightforward, we found that naively running it with off-the-shelf SRL algorithms such as FQI or LSTD resulted in instability and a degradation of the DRL performance. The reason is that the 'slow' SGD computation in DRL essentially retains information from older training epochs, while the batch SRL method 'forgets' all data but the most recent batch. In the following, we propose a novel regularization method for addressing this issue.

Algorithm 1 LS-DQN Algorithm
Require: w_0
1: for k = 1 ... SRL_iters do
2:   w_k ← trainDRLNetwork(w_{k−1})        ▷ Train the DRL network for N_DRL steps
3:   w_k^last ← LS-UPDATE(w_k)             ▷ Update the last layer weights with the SRL solution
4: end for
5:
6: function LS-UPDATE(w)
7:   D ← gatherData(w)
8:   φ(s, a) ← generateFeatures(D, w)
9:   w^last ← SRL-Algorithm(D, φ(s, a))
10:  return w^last
11: end function

Regularization: Our goal is to improve the performance of a value-based DRL agent using a batch SRL algorithm. Batch SRL algorithms, however, do not leverage the knowledge that the agent has gained before the most recent batch.⁵ We observed that this issue prevents the use of off-the-shelf implementations of SRL methods in our hybrid LS-DQN algorithm.

To enjoy the benefits of both worlds, that is, a batch algorithm that can use the accumulated knowledge gained by the DRL network, we introduce a novel Bayesian regularization method for LSTD-Q and FQI that uses the last hidden layer weights of the DRL network, w_k^last, as a Bayesian prior for the SRL algorithm.⁶

SRL Bayesian Prior Formulation: We are interested in learning the weights of the last hidden layer (w^last) using a least squares SRL algorithm. We pursue a Bayesian approach, where the prior weight distribution at iteration k of LS-DQN is given by w_prior ∼ N(w_k^last, λ⁻²I), and we recall that w_k^last are the last hidden layer weights of the DRL network at iteration SRL_iter = k. The Bayesian solution for the regression problem in the FQI algorithm is given by (Box & Tiao, 2011)

w^last = (Â + λI)⁻¹ (b̂ + λ w_k^last),

where Â and b̂ are given in Equation 2. A similar regularization can be added to LSTD-Q, based on a regularized fixed-point equation (Kolter & Ng, 2009). Full details are in Appendix A.

² Code is available online at https://github.com/Shallow-Updates-for-Deep-RL
³ We refer the reader to Appendix B for a diagram of the algorithm.
⁴ The features in the last DQN layer are not action dependent.
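To make the LS-UPDATE concrete, here is a minimal NumPy sketch of an FQI-style last-layer update with the Bayesian prior above. The terminal-state masking and the use of the current last-layer weights to compute the targets are assumptions of this sketch, and all names are hypothetical:

```python
import numpy as np

def ls_update_fqi(phi, phi_next_all, rewards, terminals, w_prior, gamma=0.99, lam=1.0):
    """FQI last-layer update with the Bayesian prior of Section 3 (sketch).

    phi          : (N, k)      features phi(s_i, a_i) from the current network
    phi_next_all : (N, |A|, k) features phi(s_{i+1}, a') for every action a'
    rewards, terminals : (N,)  rewards r_i and terminal indicators
    w_prior      : (k,)        last-layer weights of the DRL network (prior mean)
    """
    n, k = phi.shape
    # FQI regression targets y_i = r_i + gamma * max_a' Q(s_{i+1}, a').
    q_next = phi_next_all @ w_prior                       # (N, |A|)
    y = rewards + gamma * (1.0 - terminals) * q_next.max(axis=1)
    A_hat = phi.T @ phi / n
    b_hat = phi.T @ y / n
    # Posterior-mean solution: w = (A_hat + lam*I)^(-1) (b_hat + lam * w_prior).
    return np.linalg.solve(A_hat + lam * np.eye(k), b_hat + lam * w_prior)
```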
4 Experiments

In this section, we present experiments showcasing the improved performance attained by our LS-DQN algorithm compared to state-of-the-art DRL methods. Our experiments are divided into three sections. In Section 4.1, we start by investigating the behavior of SRL algorithms in high dimensional environments. We then show results for LS-DQN on five Atari domains, in Section 4.2, and compare the resulting performance to regular DQN and DDQN agents. Finally, in Section 4.3, we present an ablative analysis of the LS-DQN algorithm, which clarifies the reasons behind our algorithm's success.

4.1 SRL Algorithms with High Dimensional Observations

In the first set of experiments, we explore how least squares SRL algorithms perform in domains with high dimensional observations. This is an important step before applying an SRL method within the LS-DQN algorithm. In particular, we focused on answering the following questions: (1) What regularization method to use? (2) How to generate data for the LS algorithm? (3) How many policy improvement iterations to perform?

To answer these questions, we performed the following procedure: We trained DQN agents on two games from the Arcade Learning Environment (ALE, Bellemare et al.), namely Breakout and Qbert, using the vanilla DQN implementation (Mnih et al., 2015). For each DQN run, we (1) periodically⁷ save the current DQN network weights and ER; (2) use an SRL algorithm (LSTD-Q or FQI) to re-learn the weights of the last layer; and (3) evaluate the resulting DQN network by temporarily replacing the DQN weights with the SRL solution weights. After the evaluation, we replace back the original DQN weights and continue training. Each evaluation entails 20 roll-outs⁸ with an ε-greedy policy (similar to Mnih et al., ε = 0.05). This periodic evaluation setup allowed us to effectively experiment with the SRL algorithms and obtain clear comparisons with DQN, without waiting for full DQN runs to complete.

(1) Regularization: Experiments with standard SRL methods without any regularization yielded poor results.
We found the main reason to be that the matrices used in the SRL solutions (Equations 1 and 2) are ill-conditioned, resulting in instability. One possible explanation stems from the sparseness of the features. The DQN uses ReLU activations (Jarrett et al., 2009), which causes the network to learn sparse feature representations. For example, once the DQN completed training on Breakout, 96% of the features were zero.

Once we added a regularization term, we found that the performance of the SRL algorithms improved. We experimented with the ℓ2 and Bayesian Prior (BP) regularizers (λ ∈ (0, 10²]). While the ℓ2 regularizer showed competitive performance in Breakout, we found that the BP performed better across domains (Figure 1, best regularizers chosen, shows the average score of each configuration following the evaluation procedure explained above, for the different epochs). Moreover, the BP regularizer was not sensitive to the scale of the regularization coefficient: regularizers in the range (10⁻¹, 10¹) performed well across all domains. A table of average scores for different coefficients can be found in Appendix C.1. Note that we do not expect much improvement here, as we replace back the original DQN weights after each evaluation.

(2) Data Gathering: We experimented with two mechanisms for generating data: (1) generating new data from the current policy, and (2) using the ER. We found that the data generation mechanism had a significant impact on the performance of the algorithms. When the data is generated only from the current DQN policy (without the ER), the SRL solution resulted in poor performance compared to a solution using the ER (as was observed by Mnih et al. 2015). We believe that the main reason the ER works well is that the ER contains data sampled from multiple (past) policies, and therefore exhibits more exploration of the state space.

(3) Policy Improvement: LSTD-Q and FQI are off-policy algorithms and can be applied iteratively on the same dataset (e.g., LSPI, Lagoudakis & Parr 2003). However, in practice, we found that performing multiple iterations did not improve the results. A possible explanation is that by improving the policy, the policy reaches new areas in the state space that are not represented well in the current ER, and therefore are not approximated well by the SRL solution and the current DRL network.

Figure 1: Periodic evaluation for DQN (green), LS-DQN_LSTD-Q with Bayesian prior regularization (red; solid λ = 10, dashed λ = 1), and ℓ2 regularization (blue; solid λ = 0.001, dashed λ = 0.0001).

⁵ While conceptually, the data batch can include all the data seen so far, due to computational limitations, this is not a practical solution in the domains we consider.
⁶ The reader is referred to Ghavamzadeh et al. (2015) for an overview on using Bayesian methods in RL.
⁷ Every three million DQN steps, referred to as one epoch (out of a total of 50 million steps).
⁸ Each roll-out starts from a new (random) game and follows a policy until the agent loses all of its lives.

4.2 Atari Experiments

We next ran the full LS-DQN algorithm (Alg. 1) on five Atari domains: Asterix, Space Invaders, Breakout, Q-Bert and Bowling. We ran LS-DQN using both DQN and DDQN as the DRL algorithm, and using both LSTD-Q and FQI as the SRL algorithm. We chose to run an LS-update every N_DRL = 500k steps, for a total of 50M steps (SRL_iters = 100). We used the current ER buffer as the "generated" data in the LS-UPDATE function (Line 7 in Alg. 1, N_SRL = 1M), and a regularization coefficient λ = 1 for the Bayesian prior solution (both for FQI and LSTD-Q). We emphasize that we did not use any additional samples beyond the samples already obtained by the DRL algorithm.
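For completeness, here is a minimal sketch of the ε-greedy evaluation roll-outs used in the protocol of Section 4.1. The env interface (reset/step) is a hypothetical stand-in for an ALE wrapper, and the function names are ours:

```python
import numpy as np

def epsilon_greedy_action(q_values, epsilon, rng):
    # With probability epsilon act uniformly at random, otherwise greedily.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def evaluate(env, q_fn, episodes=20, epsilon=0.05, seed=0):
    # Average undiscounted return over `episodes` roll-outs.
    rng = np.random.default_rng(seed)
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            a = epsilon_greedy_action(q_fn(obs), epsilon, rng)
            obs, r, done = env.step(a)
            total += r
        returns.append(total)
    return float(np.mean(returns))
```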
Figure 2 presents the learning curves of the DQN network, LS-DQN with LSTD-Q, and LS-DQN with FQI (referred to as DQN, LS-DQN_LSTD-Q, and LS-DQN_FQI, respectively) on three domains: Asterix, Space Invaders and Breakout. Note that we use the same evaluation process as described in Mnih et al. (2015). We were also interested in a test that measures differences between entire learning curves, and not only their maximal scores. Hence we chose to perform a Wilcoxon signed-rank test on the average scores of the three DQN variants. This non-parametric statistical test measures whether related samples differ in their means (Wilcoxon, 1945). We found that the learning curves for both LS-DQN_LSTD-Q and LS-DQN_FQI were statistically significantly better than those of DQN, with p-values smaller than 1e-15 for all three domains.

Table 1 presents the maximal average scores along the learning curves of the five domains, when the SRL algorithms were incorporated into both DQN agents and DDQN agents (the notation is similar, e.g., LS-DDQN_FQI).⁹ Our algorithm, LS-DQN, attained better performance compared to the vanilla DQN agents, as seen by the higher scores in Table 1 and Figure 2.

Figure 2: Learning curves of DQN (green), LS-DQN_LSTD-Q (red), and LS-DQN_FQI (blue).

We observe an interesting phenomenon for the game Asterix: in Figure 2, the DQN's score "crashes" to zero (as was observed by Van Hasselt et al. 2016). LS-DQN_LSTD-Q did not manage to resolve this issue, even though it achieved a significantly higher score than the DQN. LS-DQN_FQI, however, maintained steady performance and did not "crash" to zero. We found that, in general, incorporating FQI as the SRL algorithm into the DRL agents resulted in improved performance.

Table 1: Maximal average scores across five different Atari domains for each of the DQN variants.

Algorithm      | Breakout | Space Invaders | Asterix  | Qbert    | Bowling
DQN⁹           | 401.20   | 1975.50        | 6011.67  | 10595.83 | 42.40
LS-DQN_LSTD-Q  | 420.00   | 3207.44        | 13704.23 | 10767.47 | 61.21
LS-DQN_FQI     | 438.55   | 3360.81        | 13636.81 | 12981.42 | 75.38
DDQN⁹          | 375.00   | 3154.60        | 15150.00 | 14875.00 | 70.50
LS-DDQN_FQI    | 397.94   | 4400.83        | 16270.45 | 12727.94 | 80.75

⁹ Scores for DQN and DDQN were taken from Van Hasselt et al. (2016).
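The significance test used above is available in SciPy; a minimal sketch with made-up paired per-epoch scores (the arrays are synthetic placeholders, not the paper's data):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired per-epoch average scores for two agents on one domain.
rng = np.random.default_rng(0)
scores_dqn = rng.normal(300.0, 30.0, size=100)
scores_ls_dqn = scores_dqn + rng.normal(10.0, 5.0, size=100)

# Tests whether the paired differences are symmetric about zero.
stat, p_value = wilcoxon(scores_ls_dqn, scores_dqn)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p_value:.2e}")
```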
4.3 Ablative Analysis

In the previous section, we saw that the LS-DQN algorithm improves performance, compared to the DQN agents, across a number of domains. The goal of this section is to understand the reasons behind LS-DQN's improved performance by conducting an ablative analysis of our algorithm. For this analysis, we used a DQN agent that was trained on the game of Breakout, in the same manner as described in Section 4.1. We focus on analyzing the LS-DQN_FQI algorithm, which has the same optimization objective as DQN (cf. Section 2), and postulate the following conjectures for its improved performance: (i) the SRL algorithms use a Bayesian regularization term, which is not included in the DQN objective; (ii) the SRL algorithms have fewer hyper-parameters to tune and generate an explicit solution compared to SGD-based DRL solutions; (iii) large-batch methods perform better than small-batch methods when combining DRL with SRL; (iv) SRL algorithms focus on training the last layer and are easier to optimize.

The Experiments: We started by analyzing the learning method of the last layer (i.e., the 'shallow' part of the learning process). We did this by optimizing the last layer, at each LS-UPDATE epoch, using (1) FQI with a Bayesian prior and an LS solution, and (2) an ADAM (Kingma & Ba, 2014) optimizer with and without an additional Bayesian prior regularization term in the loss function. We compared these approaches for mini-batch sizes of 32, 512, and 4096 data points, and used λ = 1 for all experiments. Relating to conjecture (ii), note that the FQI algorithm has only one hyper-parameter to tune and produces an explicit solution using the whole dataset simultaneously; ADAM, on the other hand, has more hyper-parameters to tune and works on different mini-batch sizes.

The Experimental Setup: The experiments were done in a periodic fashion similar to Section 4.1, i.e., testing behavior at different epochs over a vanilla DQN run. For both ADAM and FQI, we first collected 80k data samples from the ER at each epoch. For ADAM, we performed 20 iterations over the data, where each iteration consisted of randomly permuting the data, dividing it into mini-batches, and optimizing using ADAM over the mini-batches.¹⁰ We then simulate the agent and report average scores across 20 trajectories.

¹⁰ The selected hyper-parameters used for these experiments can be found in Appendix D, along with results for one iteration of ADAM.

The Results: Figure 3 depicts the difference between the average scores of (1) and (2) and the DQN baseline scores; a sketch of the compared optimizers follows at the end of this section. We see that larger mini-batches result in improved performance. Moreover, the LS solution (FQI) outperforms the ADAM solutions for mini-batch sizes of 32 and 512 on most epochs, and even slightly outperforms the best of them (mini-batch size of 4096 with a Bayesian prior). In addition, a solution with a prior performs better than a solution without a prior.

Summary: Our ablative analysis experiments strongly support conjectures (iii) and (iv) above for explaining LS-DQN's improved performance. That is, large-batch methods perform better than small-batch methods when combining DRL with SRL; and SRL algorithms that focus on training only the last layer are easier to optimize, as we see that optimizing the last layer improved the score across epochs.

Figure 3: Differences of the average scores, for different learning methods, compared to vanilla DQN. Positive values represent improvement over vanilla DQN. For example, for a mini-batch of 32 (left figure), in epoch 3, FQI (blue) outperformed vanilla DQN by 21, while ADAM with prior (red) and ADAM without prior (green) under-performed by -46 and -96, respectively. Note that: (1) as the mini-batch size increases, the improvement of ADAM over DQN approaches the improvement of FQI over DQN, and (2) optimizing the last layer improves performance.

We finish this section with an interesting observation. While the LS solution improves the performance of the DRL agents, we found that the LS solution weights are very close to the baseline DQN solution (see Appendix D for the full results). Moreover, the distance was inversely proportional to the performance of the solution: the FQI solution that performed best was the closest (in ℓ2 norm) to the DQN solution, and vice versa. There were orders-of-magnitude differences between the norms of solutions that performed well and those that did not. Similar results, i.e., that large-batch solutions find solutions that are close to the baseline, have been reported by Keskar et al. (2016). We further compare our results with the findings of Keskar et al. in the section to follow.
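As referenced above, here is a minimal PyTorch sketch of the mini-batch ADAM alternative compared against the closed-form FQI solve in Section 4.3. The exact loss scaling is an assumption: it mirrors the Bayesian prior term so that its minimizer matches the closed-form solution, and all names are hypothetical:

```python
import torch

def adam_last_layer(phi, y, w_prior, lam=1.0, batch_size=4096, epochs=20, lr=1e-3):
    """Mini-batch ADAM for the regularized last-layer regression (sketch).

    Minimizes (1/N)||phi @ w - y||^2 + lam * ||w - w_prior||^2, whose exact
    minimizer is the closed-form (A_hat + lam*I)^(-1)(b_hat + lam*w_prior).
    Shapes: phi (N, k), y (N,), w_prior (k,).
    """
    phi_t = torch.as_tensor(phi, dtype=torch.float32)
    y_t = torch.as_tensor(y, dtype=torch.float32)
    prior = torch.as_tensor(w_prior, dtype=torch.float32)
    w = prior.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    n = phi_t.shape[0]
    for _ in range(epochs):
        perm = torch.randperm(n)  # random permutation, as in the setup above
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            pred = phi_t[idx] @ w
            loss = ((pred - y_t[idx]) ** 2).mean() + lam * ((w - prior) ** 2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return w.detach().numpy()
```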
5 Related work

We now review recent works that are related to this paper.

Regularization: The general idea of applying regularization for feature selection and to avoid overfitting is a common theme in machine learning. However, applying it to RL algorithms is challenging due to the fact that these algorithms are based on finding a fixed point rather than optimizing a loss function (Kolter & Ng, 2009). Value-based DRL approaches do not use regularization layers (e.g., pooling, dropout and batch normalization), which are popular in other deep learning methods. The DQN, for example, has a relatively shallow architecture (three convolutional layers, followed by two fully connected layers) without any regularization layers. Recently, regularization was introduced in problems that combine value-based RL with other learning objectives. For example, Hester et al. (2017) combine RL with supervised learning from expert demonstration, and introduce regularization to avoid over-fitting the expert data; and Kirkpatrick et al. (2017) introduce regularization to avoid catastrophic forgetting in transfer learning. SRL methods, on the other hand, perform well with regularization (Kolter & Ng, 2009) and have been shown to converge (Farahmand et al., 2009).

Batch size: Our results suggest that a large-batch LS solution for the last layer of a value-based DRL network can significantly improve its performance. This result is somewhat surprising, as it has been observed by practitioners that using larger batches in deep learning degrades the quality of the model, as measured by its ability to generalize (Keskar et al., 2016). However, our method differs from the experiments performed by Keskar et al. (2016) and therefore does not contradict them, for the following reasons: (1) the LS-DQN algorithm uses the large-batch solution only for the last layer; the lower layers of the network are not affected by the large-batch solution and therefore do not converge to a sharp minimum. (2) The experiments of Keskar et al. (2016) were performed on classification tasks, whereas our algorithm minimizes an MSE loss. (3) Keskar et al. showed that large-batch solutions work well when piggy-backing (warm-starting) on a small-batch solution; similarly, our algorithm mixes small- and large-batch solutions as it switches between them periodically. Moreover, it was recently observed that flat minima in practical deep learning model classes can be turned into sharp minima via re-parameterization without changing the generalization gap, and hence this requires further investigation (Dinh et al., 2017). In addition, Hoffer et al. (2017) showed that large-batch training can generalize as well as small-batch training by adapting the number of iterations. Thus, we strongly believe that our findings on combining large and small batches in DRL are in agreement with recent results of other deep learning research groups.

Deep and Shallow RL: Using the last hidden layer of a DNN as a feature extractor and learning the last layer with a different algorithm has been addressed before in the literature, e.g., in the context of transfer learning (Donahue et al., 2013). In RL, there have been competitive attempts to use SRL with unsupervised features to play Atari (Liang et al., 2016; Blundell et al., 2016), but to the best of our knowledge, this is the first attempt that successfully combines DRL with SRL algorithms.
6 Conclusion

In this work we presented LS-DQN, a hybrid approach that combines least-squares RL updates within online deep RL. LS-DQN obtains the best of both worlds: rich representations from deep RL networks, as well as the stability and data efficiency of least squares methods. Experiments with two deep RL methods and two least squares methods revealed that the hybrid approach consistently improves over vanilla deep RL in the Atari domain. Our ablative analysis indicates that the success of the LS-DQN algorithm is due to the large batch updates made possible by using least squares.

This work focused on value-based RL. However, our hybrid linear/deep approach can be extended to other RL methods, such as actor-critic (Mnih et al., 2016). More broadly, decades of research on linear RL methods have provided methods with strong guarantees, such as approximate linear programming (Desai et al., 2012) and modified policy iteration (Scherrer et al., 2015). Our approach shows that with the correct modifications, such as our Bayesian regularization term, linear methods can be combined with deep RL. This opens the door to future combinations of well-understood linear RL with deep representation learning.

Acknowledgement

This research was supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL). A. Tamar is supported in part by Siemens and the Viterbi Scholarship, Technion.

References

Barto, AG and Crites, RH. Improving elevator performance using reinforcement learning. Advances in Neural Information Processing Systems, 8:1017–1023, 1996.
Bellemare, Marc G, Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
Bertsekas, Dimitri P. Approximate Dynamic Programming. 2008.
Blundell, Charles, Uria, Benigno, Pritzel, Alexander, Li, Yazhe, Ruderman, Avraham, Leibo, Joel Z, Rae, Jack, Wierstra, Daan, and Hassabis, Demis. Model-free episodic control. stat, 1050:14, 2016.
Box, George EP and Tiao, George C. Bayesian Inference in Statistical Analysis. John Wiley & Sons, 2011.
Desai, Vijay V, Farias, Vivek F, and Moallemi, Ciamac C. Approximate dynamic programming via a smoothed linear program. Operations Research, 60(3):655–674, 2012.
Dinh, Laurent, Pascanu, Razvan, Bengio, Samy, and Bengio, Yoshua. Sharp minima can generalize for deep nets. arXiv preprint arXiv:1703.04933, 2017.
Donahue, Jeff, Jia, Yangqing, Vinyals, Oriol, Hoffman, Judy, Zhang, Ning, Tzeng, Eric, and Darrell, Trevor. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 647–655, 2013.
Ernst, Damien, Geurts, Pierre, and Wehenkel, Louis. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6(Apr):503–556, 2005.
Farahmand, Amir M, Ghavamzadeh, Mohammad, Mannor, Shie, and Szepesvári, Csaba. Regularized policy iteration. In Advances in Neural Information Processing Systems, pp. 441–448, 2009.
Ghavamzadeh, Mohammad, Mannor, Shie, Pineau, Joelle, Tamar, Aviv, et al. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359–483, 2015.
Hester, Todd, Vecerik, Matej, Pietquin, Olivier, Lanctot, Marc, Schaul, Tom, Piot, Bilal, Sendonaris, Andrew, Dulac-Arnold, Gabriel, Osband, Ian, Agapiou, John, et al. Learning from demonstrations for real world reinforcement learning.
arXiv preprint arXiv:1704.03732, 2017.
Hinton, Geoffrey, Srivastava, Nitish, and Swersky, Kevin. Neural networks for machine learning, lecture 6a: Overview of mini-batch gradient descent. 2012.
Hoffer, Elad, Hubara, Itay, and Soudry, Daniel. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741, 2017.
Jarrett, Kevin, Kavukcuoglu, Koray, LeCun, Yann, et al. What is the best multi-stage architecture for object recognition? In Computer Vision, 2009 IEEE 12th International Conference on, pp. 2146–2153. IEEE, 2009.
Keskar, Nitish Shirish, Mudigere, Dheevatsa, Nocedal, Jorge, Smelyanskiy, Mikhail, and Tang, Ping Tak Peter. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, pp. 201611835, 2017.
Kolter, J Zico and Ng, Andrew Y. Regularization and feature selection in least-squares temporal difference learning. In Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009.
Lagoudakis, Michail G and Parr, Ronald. Least-squares policy iteration. Journal of Machine Learning Research, 4(Dec):1107–1149, 2003.
Liang, Yitao, Machado, Marlos C, Talvitie, Erik, and Bowling, Michael. State of the art control of Atari games using shallow reinforcement learning. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 2016.
Lin, Long-Ji. Reinforcement learning for robots using neural networks. 1993.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
Mnih, Volodymyr, Badia, Adria Puigdomenech, Mirza, Mehdi, Graves, Alex, Lillicrap, Timothy P, Harley, Tim, Silver, David, and Kavukcuoglu, Koray. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937, 2016.
Riedmiller, Martin. Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317–328. Springer, 2005.
Scherrer, Bruno, Ghavamzadeh, Mohammad, Gabillon, Victor, Lesner, Boris, and Geist, Matthieu. Approximate modified policy iteration and its application to the game of Tetris. Journal of Machine Learning Research, 16:1629–1676, 2015.
Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
Sutton, Richard and Barto, Andrew. Reinforcement Learning: An Introduction. MIT Press, 1998.
Tessler, Chen, Givony, Shahar, Zahavy, Tom, Mankowitz, Daniel J, and Mannor, Shie. A deep hierarchical approach to lifelong learning in Minecraft. Proceedings of the National Conference on Artificial Intelligence (AAAI), 2017.
Tsitsiklis, John N, Van Roy, Benjamin, et al. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997.
Van Hasselt, Hado, Guez, Arthur, and Silver, David. Deep reinforcement learning with double Q-learning. Proceedings of the National Conference on Artificial Intelligence (AAAI), 2016.
Wang, Ziyu, Schaul, Tom, Hessel, Matteo, van Hasselt, Hado, Lanctot, Marc, and de Freitas, Nando. Dueling network architectures for deep reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1995–2003, 2016.
Wilcoxon, Frank. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80–83, 1945.
Zahavy, Tom, Ben-Zrihem, Nir, and Mannor, Shie. Graying the black box: Understanding DQNs. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1899–1908, 2016.
LightGBM: A Highly Efficient Gradient Boosting Decision Tree

Guolin Ke (1), Qi Meng (2), Thomas Finley (3), Taifeng Wang (1), Wei Chen (1), Weidong Ma (1), Qiwei Ye (1), Tie-Yan Liu (1)
(1) Microsoft Research  (2) Peking University  (3) Microsoft Redmond
(1) {guolin.ke, taifengw, wche, weima, qiwye, tie-yan.liu}@microsoft.com; (2) [email protected]; (3) [email protected]

Abstract

Gradient Boosting Decision Tree (GBDT) is a popular machine learning algorithm, and has quite a few effective implementations such as XGBoost and pGBRT. Although many engineering optimizations have been adopted in these implementations, the efficiency and scalability are still unsatisfactory when the feature dimension is high and the data size is large. A major reason is that for each feature, they need to scan all the data instances to estimate the information gain of all possible split points, which is very time-consuming. To tackle this problem, we propose two novel techniques: Gradient-based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB). With GOSS, we exclude a significant proportion of data instances with small gradients, and only use the rest to estimate the information gain. We prove that, since the data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain quite an accurate estimation of the information gain with a much smaller data size. With EFB, we bundle mutually exclusive features (i.e., features that rarely take nonzero values simultaneously) to reduce the number of features. We prove that finding the optimal bundling of exclusive features is NP-hard, but a greedy algorithm can achieve quite a good approximation ratio (and thus can effectively reduce the number of features without hurting the accuracy of split point determination by much). We call our new GBDT implementation with GOSS and EFB LightGBM. Our experiments on multiple public datasets show that LightGBM speeds up the training process of conventional GBDT by up to over 20 times while achieving almost the same accuracy.

1 Introduction

Gradient boosting decision tree (GBDT) [1] is a widely-used machine learning algorithm, due to its efficiency, accuracy, and interpretability. GBDT achieves state-of-the-art performance in many machine learning tasks, such as multi-class classification [2], click prediction [3], and learning to rank [4]. In recent years, with the emergence of big data (in terms of both the number of features and the number of instances), GBDT is facing new challenges, especially in the tradeoff between accuracy and efficiency. Conventional implementations of GBDT need to, for every feature, scan all the data instances to estimate the information gain of all the possible split points. Therefore, their computational complexity is proportional to both the number of features and the number of instances. This makes these implementations very time-consuming when handling big data.

To tackle this challenge, a straightforward idea is to reduce the number of data instances and the number of features. However, this turns out to be highly non-trivial. For example, it is unclear how to perform data sampling for GBDT. While there are some works that sample data according to their weights to speed up the training process of boosting [5, 6, 7], they cannot be directly applied to GBDT since there is no sample weight in GBDT at all.
In this paper, we propose two novel techniques towards this goal, as elaborated below.

Gradient-based One-Side Sampling (GOSS). While there is no native weight for data instances in GBDT, we notice that data instances with different gradients play different roles in the computation of information gain. In particular, according to the definition of information gain, the instances with larger gradients (throughout this paper, "larger" or "smaller" gradients refer to their absolute values), i.e., under-trained instances, contribute more to the information gain. Therefore, when down-sampling the data instances, in order to retain the accuracy of the information gain estimation, we should preferentially keep the instances with large gradients (e.g., larger than a pre-defined threshold, or among the top percentiles), and only randomly drop the instances with small gradients. We prove that such a treatment can lead to a more accurate gain estimation than uniformly random sampling with the same target sampling rate, especially when the value of the information gain has a large range.

Exclusive Feature Bundling (EFB). Usually in real applications, although there are a large number of features, the feature space is quite sparse, which provides the possibility of designing a nearly lossless approach to reduce the number of effective features. Specifically, in a sparse feature space, many features are (almost) exclusive, i.e., they rarely take nonzero values simultaneously. Examples include one-hot features (e.g., one-hot word representations in text mining). We can safely bundle such exclusive features. To this end, we design an efficient algorithm by reducing the optimal bundling problem to a graph coloring problem (taking features as vertices and adding an edge between every two features that are not mutually exclusive), and solving it by a greedy algorithm with a constant approximation ratio.

We call the new GBDT algorithm with GOSS and EFB LightGBM (the code is available at https://github.com/Microsoft/LightGBM). Our experiments on multiple public datasets show that LightGBM can accelerate the training process by up to over 20 times while achieving almost the same accuracy.

The remainder of this paper is organized as follows. First, we review GBDT algorithms and related work in Sec. 2. Then, we introduce the details of GOSS in Sec. 3 and EFB in Sec. 4. Our experiments with LightGBM on public datasets are presented in Sec. 5. Finally, we conclude the paper in Sec. 6.

2 Preliminaries

2.1 GBDT and Its Complexity Analysis

GBDT is an ensemble model of decision trees, which are trained in sequence [1]. In each iteration, GBDT learns the decision trees by fitting the negative gradients (also known as the residual errors). The main cost in GBDT lies in learning the decision trees, and the most time-consuming part in learning a decision tree is finding the best split points. One of the most popular algorithms for finding split points is the pre-sorted algorithm [8, 9], which enumerates all possible split points on the pre-sorted feature values. This algorithm is simple and can find the optimal split points; however, it is inefficient in both training speed and memory consumption. Another popular algorithm is the histogram-based algorithm [10, 11, 12], as shown in Alg. 1 (due to space restrictions, high-level pseudo code is used; the details can be found in our open-source code). Instead of finding the split points on the sorted feature values, the histogram-based algorithm buckets continuous feature values into discrete bins and uses these bins to construct feature histograms during training. Since the histogram-based algorithm is more efficient in both memory consumption and training speed, we will develop our work on its basis.
As shown in Alg. 1, the histogram-based algorithm finds the best split points based on the feature histograms. It costs O(#data × #feature) for histogram building and O(#bin × #feature) for split point finding. Since #bin is usually much smaller than #data, histogram building dominates the computational complexity. If we can reduce #data or #feature, we will be able to substantially speed up the training of GBDT.

2.2 Related Work

There have been quite a few implementations of GBDT in the literature, including XGBoost [13], pGBRT [14], scikit-learn [15], and gbm in R [16]. (Some other works speed up GBDT training via GPU [17, 18] or parallel training [19], but they are out of the scope of this paper.) Scikit-learn and gbm in R implement the pre-sorted algorithm, and pGBRT implements the histogram-based algorithm. XGBoost supports both the pre-sorted algorithm and the histogram-based algorithm. As shown in [13], XGBoost outperforms the other tools, so we use XGBoost as our baseline in the experiment section.

To reduce the size of the training data, a common approach is to down-sample the data instances. For example, in [5], data instances are filtered out if their weights are smaller than a fixed threshold. SGB [20] uses a random subset to train the weak learners in every iteration. In [6], the sampling ratio is dynamically adjusted during the training process. However, all these works except SGB [20] are based on AdaBoost [21], and cannot be directly applied to GBDT since there are no native weights for data instances in GBDT. Though SGB can be applied to GBDT, it usually hurts accuracy and thus is not a desirable choice.

Similarly, to reduce the number of features, it is natural to filter weak features [22, 23, 7, 24]. This is usually done by principal component analysis or projection pursuit. However, these approaches rely heavily on the assumption that features contain significant redundancy, which might not always be true in practice (features are usually designed with their unique contributions, and removing any of them may affect the training accuracy to some degree).

The large-scale datasets used in real applications are usually quite sparse. GBDT with the pre-sorted algorithm can reduce the training cost by ignoring the features with zero values [13]. However, GBDT with the histogram-based algorithm does not have an efficient sparse optimization solution. The reason is that the histogram-based algorithm needs to retrieve feature bin values (refer to Alg. 1) for each data instance no matter whether the feature value is zero or not. It would be highly preferable if GBDT with the histogram-based algorithm could effectively leverage such sparsity.

To address the limitations of previous works, we propose two novel techniques called Gradient-based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB). More details will be introduced in the next sections.

Algorithm 1: Histogram-based Algorithm
    Input: I: training data, d: max depth
    Input: m: feature dimension
    nodeSet ← {0}                      // tree nodes in current level
    rowSet ← {{0, 1, 2, ...}}          // data indices in tree nodes
    for i = 1 to d do
        for node in nodeSet do
            usedRows ← rowSet[node]
            for k = 1 to m do
                H ← new Histogram()    // build histogram
                for j in usedRows do
                    bin ← I.f[k][j].bin
                    H[bin].y ← H[bin].y + I.y[j]
                    H[bin].n ← H[bin].n + 1
                Find the best split on histogram H.
        Update rowSet and nodeSet according to the best split points.

Algorithm 2: Gradient-based One-Side Sampling
    Input: I: training data, d: iterations
    Input: a: sampling ratio of large gradient data
    Input: b: sampling ratio of small gradient data
    Input: loss: loss function, L: weak learner
    models ← {}, fact ← (1 − a)/b
    topN ← a × len(I), randN ← b × len(I)
    for i = 1 to d do
        preds ← models.predict(I)
        g ← loss(I, preds), w ← {1, 1, ...}
        sorted ← GetSortedIndices(abs(g))
        topSet ← sorted[1:topN]
        randSet ← RandomPick(sorted[topN:len(I)], randN)
        usedSet ← topSet + randSet
        w[randSet] ×= fact             // assign weight fact to the small gradient data
        newModel ← L(I[usedSet], −g[usedSet], w[usedSet])
        models.append(newModel)
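To make Alg. 1 concrete, below is a minimal NumPy sketch of histogram building and split finding for one pre-binned feature. It is an illustrative simplification rather than LightGBM's actual implementation: hessians and regularization are ignored, the gain is the sum-of-squared-gradient-sums-over-counts form that also appears in Definition 3.1 below (up to constants), and all names are ours.

    import numpy as np

    def build_histogram(bin_idx, grad, n_bins):
        # One O(#data) pass: accumulate gradient sum and count per bin.
        # (np.bincount would do the same without the Python loop.)
        grad_sum = np.zeros(n_bins)
        count = np.zeros(n_bins, dtype=np.int64)
        for b, g in zip(bin_idx, grad):
            grad_sum[b] += g
            count[b] += 1
        return grad_sum, count

    def best_split(grad_sum, count):
        # One O(#bin) scan over the candidate split points.
        total_g, total_n = grad_sum.sum(), count.sum()
        best_gain, best_bin = -np.inf, None
        left_g, left_n = 0.0, 0
        for b in range(len(grad_sum) - 1):
            left_g += grad_sum[b]
            left_n += count[b]
            right_g, right_n = total_g - left_g, total_n - left_n
            if left_n == 0 or right_n == 0:
                continue
            gain = left_g**2 / left_n + right_g**2 / right_n
            if gain > best_gain:
                best_gain, best_bin = gain, b
        return best_bin, best_gain

Here bin_idx would come from a one-time pre-binning of the raw feature values, which is exactly why the per-iteration cost is dominated by the O(#data × #feature) histogram pass.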
3 Gradient-based One-Side Sampling

In this section, we propose a novel sampling method for GBDT that achieves a good balance between reducing the number of data instances and keeping the accuracy of the learned decision trees.

3.1 Algorithm Description

In AdaBoost, the sample weight serves as a good indicator of the importance of a data instance. However, in GBDT, there are no native sample weights, and thus the sampling methods proposed for AdaBoost cannot be directly applied. Fortunately, the gradient of each data instance in GBDT provides useful information for data sampling: if an instance is associated with a small gradient, the training error for this instance is small and it is already well-trained. A straightforward idea is to discard the data instances with small gradients. However, the data distribution would be changed by doing so, which would hurt the accuracy of the learned model. To avoid this problem, we propose a new method called Gradient-based One-Side Sampling (GOSS).

GOSS keeps all the instances with large gradients and performs random sampling on the instances with small gradients. To compensate for the influence on the data distribution, when computing the information gain, GOSS introduces a constant multiplier for the data instances with small gradients (see Alg. 2). Specifically, GOSS first sorts the data instances according to the absolute values of their gradients and selects the top a × 100% instances. Then it randomly samples b × 100% instances from the rest of the data. After that, GOSS amplifies the sampled data with small gradients by the constant (1 − a)/b when calculating the information gain. By doing so, we put more focus on the under-trained instances without changing the original data distribution by much.

3.2 Theoretical Analysis

GBDT uses decision trees to learn a function from the input space X^s to the gradient space G [1]. Suppose that we have a training set with n i.i.d. instances {x_1, ..., x_n}, where each x_i is a vector of dimension s in the space X^s. In each iteration of gradient boosting, the negative gradients of the loss function with respect to the outputs of the model are denoted as {g_1, ..., g_n}. The decision tree model splits each node at the most informative feature (the one with the largest information gain). For GBDT, the information gain is usually measured by the variance after splitting, which is defined below.

Definition 3.1 Let O be the training dataset on a fixed node of the decision tree. The variance gain of splitting feature j at point d for this node is defined as

    V_{j|O}(d) = (1/n_O) [ ( Σ_{x_i ∈ O: x_ij ≤ d} g_i )^2 / n^j_{l|O}(d) + ( Σ_{x_i ∈ O: x_ij > d} g_i )^2 / n^j_{r|O}(d) ],

where n_O = Σ I[x_i ∈ O], n^j_{l|O}(d) = Σ I[x_i ∈ O: x_ij ≤ d] and n^j_{r|O}(d) = Σ I[x_i ∈ O: x_ij > d].

For feature j, the decision tree algorithm selects d*_j = argmax_d V_j(d) and calculates the largest gain V_j(d*_j). (Our analysis holds for an arbitrary node; for simplicity and without confusion, we omit the sub-index O in the following notations.) The data are then split according to feature j* at point d_{j*} into the left and right child nodes.

In our proposed GOSS method, first, we rank the training instances according to the absolute values of their gradients in descending order; second, we keep the top a × 100% instances with the larger gradients to obtain an instance subset A; then, for the remaining set A^c consisting of the (1 − a) × 100% instances with smaller gradients, we randomly sample a subset B of size b × |A^c|; finally, we split the instances according to the estimated variance gain over the subset A ∪ B, i.e.,

    Ṽ_j(d) = (1/n) [ ( Σ_{x_i ∈ A_l} g_i + ((1 − a)/b) Σ_{x_i ∈ B_l} g_i )^2 / n^j_l(d)
                   + ( Σ_{x_i ∈ A_r} g_i + ((1 − a)/b) Σ_{x_i ∈ B_r} g_i )^2 / n^j_r(d) ],    (1)

where A_l = {x_i ∈ A: x_ij ≤ d}, A_r = {x_i ∈ A: x_ij > d}, B_l = {x_i ∈ B: x_ij ≤ d}, B_r = {x_i ∈ B: x_ij > d}, and the coefficient (1 − a)/b normalizes the sum of the gradients over B back to the size of A^c.

Thus, in GOSS, we use the estimated Ṽ_j(d) over a smaller instance subset, instead of the accurate V_j(d) over all the instances, to determine the split point, so the computation cost can be largely reduced. More importantly, the following theorem indicates that GOSS will not lose much training accuracy and will outperform random sampling. Due to space restrictions, we leave the proof of the theorem to the supplementary materials.

Theorem 3.2 We denote the approximation error in GOSS as E(d) = |Ṽ_j(d) − V_j(d)|, and let ḡ^j_l(d) = Σ_{x_i ∈ (A∪A^c)_l} |g_i| / n^j_l(d) and ḡ^j_r(d) = Σ_{x_i ∈ (A∪A^c)_r} |g_i| / n^j_r(d). With probability at least 1 − δ, we have

    E(d) ≤ C_{a,b}^2 ln(1/δ) · max{ 1/n^j_l(d), 1/n^j_r(d) } + 2 D C_{a,b} √( ln(1/δ) / n ),    (2)

where C_{a,b} = ((1 − a)/√b) · max_{x_i ∈ A^c} |g_i|, and D = max(ḡ^j_l(d), ḡ^j_r(d)).

According to the theorem, we have the following discussion: (1) The asymptotic approximation ratio of GOSS is O(1/n^j_l(d) + 1/n^j_r(d) + 1/√n). If the split is not too unbalanced (i.e., n^j_l(d) ≥ O(√n) and n^j_r(d) ≥ O(√n)), the approximation error is dominated by the second term of Ineq. (2), which decreases to 0 at rate O(√n) as n → ∞. This means that when the number of data instances is large, the approximation is quite accurate. (2) Random sampling is a special case of GOSS with a = 0. In many cases, GOSS can outperform random sampling, under the condition C_{0,β} > C_{a,β−a}, which is equivalent to α_a/√β > (1 − a)/√(β − a) with α_a = max_{x_i ∈ A∪A^c} |g_i| / max_{x_i ∈ A^c} |g_i|.

Next, we analyze the generalization performance of GOSS. We consider the generalization error E^GOSS_gen(d) = |Ṽ_j(d) − V_*(d)|, which is the gap between the variance gain calculated from the sampled training instances in GOSS and the true variance gain for the underlying distribution. We have E^GOSS_gen(d) ≤ |Ṽ_j(d) − V_j(d)| + |V_j(d) − V_*(d)| = E_GOSS(d) + E_gen(d). Thus, the generalization error of GOSS will be close to that obtained using the full data if the GOSS approximation is accurate. On the other hand, sampling increases the diversity of the base learners, which potentially helps to improve the generalization performance [24].
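The selection-and-reweighting step of GOSS (Alg. 2) is compact enough to sketch directly. The NumPy fragment below is a simplified illustration under our own naming; it covers only the sampling, and the returned indices and weights would then be handed to an ordinary weak learner whose weighted gradient sums enter the estimated gain of Eq. (1).

    import numpy as np

    def goss_sample(grad, a=0.2, b=0.1, rng=np.random.default_rng(0)):
        # Keep the top a-fraction by |gradient|; sample a b-fraction of the rest;
        # up-weight the sampled small-gradient instances by (1 - a) / b.
        n = len(grad)
        top_n, rand_n = int(a * n), int(b * n)
        order = np.argsort(-np.abs(grad))          # descending by |g|
        top_idx = order[:top_n]
        rand_idx = rng.choice(order[top_n:], size=rand_n, replace=False)
        used = np.concatenate([top_idx, rand_idx])
        weights = np.ones(len(used))
        weights[top_n:] = (1.0 - a) / b            # compensate the distribution shift
        return used, weights

Random sampling corresponds to a = 0, which is exactly the special case discussed after Theorem 3.2.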
4 Exclusive Feature Bundling

In this section, we propose a novel method to effectively reduce the number of features.

High-dimensional data are usually very sparse. The sparsity of the feature space provides the possibility of designing a nearly lossless approach to reduce the number of features. Specifically, in a sparse feature space, many features are mutually exclusive, i.e., they never take nonzero values simultaneously. We can safely bundle exclusive features into a single feature (which we call an exclusive feature bundle). With a carefully designed feature scanning algorithm, we can build the same feature histograms from the feature bundles as from the individual features. In this way, the complexity of histogram building changes from O(#data × #feature) to O(#data × #bundle), with #bundle << #feature, so we can significantly speed up the training of GBDT without hurting the accuracy. In the following, we show how to achieve this in detail.

There are two issues to be addressed. The first is to determine which features should be bundled together. The second is how to construct the bundle.

Theorem 4.1 The problem of partitioning features into the smallest number of exclusive bundles is NP-hard.

Proof: We reduce the graph coloring problem [25] to our problem; since graph coloring is NP-hard, the conclusion follows. Given any instance G = (V, E) of the graph coloring problem, we construct an instance of our problem as follows: take each row of the incidence matrix of G as a feature, obtaining an instance of our problem with |V| features. It is easy to see that an exclusive bundle of features in our problem corresponds to a set of vertices with the same color, and vice versa.

For the first issue, we prove in Theorem 4.1 that it is NP-hard to find the optimal bundling strategy, which indicates that it is impossible to find an exact solution within polynomial time. To find a good approximation algorithm, we first reduce the optimal bundling problem to the graph coloring problem by taking features as vertices and adding an edge between every two features that are not mutually exclusive; we then use a greedy algorithm, which produces reasonably good results (with a constant approximation ratio) for graph coloring, to produce the bundles. Furthermore, we notice that there are usually quite a few features which, although not 100% mutually exclusive, also rarely take nonzero values simultaneously. If our algorithm can allow a small fraction of conflicts, we can obtain an even smaller number of feature bundles and further improve the computational efficiency. By a simple calculation, randomly polluting a small fraction of feature values affects the training accuracy by at most O([(1 − γ)n]^(−2/3)) (see Proposition 2.1 in the supplementary materials), where γ is the maximal conflict rate in each bundle. So, if we choose a relatively small γ, we can achieve a good balance between accuracy and efficiency.

Based on the above discussion, we design an algorithm for exclusive feature bundling as shown in Alg. 3. First, we construct a graph with weighted edges, whose weights correspond to the total conflicts between features. Second, we sort the features by their degrees in the graph in descending order. Finally, we check each feature in the ordered list, and either assign it to an existing bundle with a small conflict (controlled by γ), or create a new bundle. The time complexity of Alg. 3 is O(#feature^2), and it is run only once before training. This complexity is acceptable when the number of features is not very large, but may still suffer if there are millions of features. To further improve the efficiency, we propose a more efficient ordering strategy that avoids building the graph: ordering by the count of nonzero values, which is similar to ordering by degrees, since more nonzero values usually lead to a higher probability of conflicts. Since this only changes the ordering in Alg. 3, the details of the new algorithm are omitted to avoid duplication.

Algorithm 3: Greedy Bundling
    Input: F: features, K: max conflict count
    Construct graph G
    searchOrder ← G.sortByDegree()
    bundles ← {}, bundlesConflict ← {}
    for i in searchOrder do
        needNew ← True
        for j = 1 to len(bundles) do
            cnt ← ConflictCnt(bundles[j], F[i])
            if cnt + bundlesConflict[i] ≤ K then
                bundles[j].add(F[i]), needNew ← False
                break
        if needNew then
            Add F[i] as a new bundle to bundles
    Output: bundles

For the second issue, we need a good way of merging the features in the same bundle so as to reduce the corresponding training complexity. The key is to ensure that the values of the original features can be identified from the feature bundles. Since the histogram-based algorithm stores discrete bins instead of continuous feature values, we can construct a feature bundle by letting exclusive features reside in different bins, which can be done by adding offsets to the original values of the features. For example, suppose we have two features in a feature bundle. Originally, feature A takes values in [0, 10) and feature B takes values in [0, 20). We then add an offset of 10 to the values of feature B, so that the shifted feature takes values in [10, 30). After that, it is safe to merge features A and B, and use a feature bundle with range [0, 30) to replace the original features A and B. The detailed algorithm is shown in Alg. 4, and a Python sketch of both procedures is given at the end of this section.

Algorithm 4: Merge Exclusive Features
    Input: numData: number of data
    Input: F: one bundle of exclusive features
    binRanges ← {0}, totalBin ← 0
    for f in F do
        totalBin += f.numBin
        binRanges.append(totalBin)
    newBin ← new Bin(numData)
    for i = 1 to numData do
        newBin[i] ← 0
        for j = 1 to len(F) do
            if F[j].bin[i] ≠ 0 then
                newBin[i] ← F[j].bin[i] + binRanges[j]
    Output: newBin, binRanges

The EFB algorithm can bundle many exclusive features into far fewer dense features, which effectively avoids unnecessary computation for zero feature values. Actually, we can also optimize the basic histogram-based algorithm to ignore zero feature values by using a table for each feature to record the data with nonzero values. By scanning the data in this table, the cost of histogram building for a feature changes from O(#data) to O(#non_zero_data). However, this method needs additional memory and computation to maintain these per-feature tables throughout the tree growth process. We implement this optimization in LightGBM as a basic function. Note that this optimization does not conflict with EFB, since we can still use it when the bundles are sparse.
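As a concrete reference for Algs. 3 and 4, here is a small Python sketch combining the cheaper ordering variant (sorting by nonzero counts rather than building the conflict graph) with the offset-based merge. It assumes features arrive as pre-binned integer columns where 0 means a zero value; all identifiers are illustrative, not LightGBM internals.

    import numpy as np

    def greedy_bundle(features, max_conflict):
        # features: list of 1-D int arrays of bin indices (0 = zero value).
        # Returns a list of bundles, each a list of feature indices.
        order = sorted(range(len(features)),
                       key=lambda i: np.count_nonzero(features[i]), reverse=True)
        bundles, masks, conflicts = [], [], []
        for i in order:
            nz = features[i] != 0
            placed = False
            for b in range(len(bundles)):
                cnt = int(np.count_nonzero(nz & masks[b]))  # co-nonzero rows
                if conflicts[b] + cnt <= max_conflict:
                    bundles[b].append(i)
                    masks[b] |= nz
                    conflicts[b] += cnt
                    placed = True
                    break
            if not placed:
                bundles.append([i])
                masks.append(nz.copy())
                conflicts.append(0)
        return bundles

    def merge_bundle(features, bundle, n_bins):
        # Merge one bundle into a single column by adding per-feature bin
        # offsets (Alg. 4); n_bins[i] is the bin count of feature i.
        merged = np.zeros_like(features[bundle[0]])
        offset = 0
        for i in bundle:
            nz = features[i] != 0
            merged[nz] = features[i][nz] + offset  # a conflicting row keeps the last write
            offset += n_bins[i]
        return merged

With max_conflict = 0 the bundles are exactly exclusive and the merge is lossless; allowing a small positive budget trades the bounded accuracy loss quantified above for fewer bundles.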
5 Experiments

In this section, we report the experimental results for our proposed LightGBM algorithm. We use five different datasets, all publicly available; their details are listed in Table 1. Among them, the Microsoft Learning to Rank (LETOR) [26] dataset contains 30K web search queries.
The features used in this dataset are mostly dense numerical features. The Allstate Insurance Claim [27] and the Flight Delay [28] datasets both contain a lot of one-hot coded features. The last two datasets are from KDD CUP 2010 and KDD CUP 2012; for these we directly use the features used by the winning solution from NTU [29, 30, 31], which contain both dense and sparse features, and these two datasets are very large. Altogether, the datasets are large, include both sparse and dense features, and cover many real-world tasks, so we can use them to test our algorithm thoroughly.

Our experimental environment is a Linux server with two E5-2670 v3 CPUs (24 cores in total) and 256GB of memory. All experiments run with multi-threading, and the number of threads is fixed to 16.

5.1 Overall Comparison

We present the overall comparisons in this subsection. XGBoost [13] and LightGBM without GOSS and EFB (called lgb_baseline) are used as baselines. For XGBoost, we used two versions: xgb_exa (the pre-sorted algorithm) and xgb_his (the histogram-based algorithm). For xgb_his, lgb_baseline, and LightGBM, we used the leaf-wise tree growth strategy [32]. For xgb_exa, since it only supports the layer-wise growth strategy, we tuned its parameters to let it grow trees similar to those of the other methods. We also tuned the parameters on all datasets towards a better balance between speed and accuracy. We set a = 0.05, b = 0.05 for Allstate, KDD10 and KDD12, and a = 0.1, b = 0.1 for Flight Delay and LETOR. We set γ = 0 in EFB. All algorithms are run for a fixed number of iterations, and we take the accuracy results from the iteration with the best score (due to space restrictions, the details of the parameter settings are left to the supplementary material).

Table 1: Datasets used in the experiments.

    Name          #data   #feature   Description   Task                     Metric
    Allstate      12M     4,228      Sparse        Binary classification    AUC
    Flight Delay  10M     700        Sparse        Binary classification    AUC
    LETOR         2M      136        Dense         Ranking                  NDCG [4]
    KDD10         19M     29M        Sparse        Binary classification    AUC
    KDD12         119M    54M        Sparse        Binary classification    AUC

Table 2: Overall training time cost comparison. LightGBM is lgb_baseline with GOSS and EFB; EFB_only is lgb_baseline with EFB. The values are the average time cost (seconds) for training one iteration.

                  Allstate   Flight Delay   LETOR   KDD10    KDD12
    xgb_exa       10.85      5.94           5.55    108.27   191.99
    xgb_his       2.63       1.05           0.63    OOM      OOM
    lgb_baseline  6.07       1.39           0.49    39.85    168.26
    EFB_only      0.71       0.27           0.46    6.33     20.23
    LightGBM      0.28       0.22           0.31    2.85     12.67

Table 3: Overall accuracy comparison on the test datasets (AUC for the classification tasks, NDCG@10 for the ranking task). SGB is lgb_baseline with Stochastic Gradient Boosting, using the same sampling ratio as LightGBM.

                  Allstate         Flight Delay     LETOR           KDD10            KDD12
    xgb_exa       0.6070           0.7601           0.4977          0.7796           0.7029
    xgb_his       0.6089           0.7840           0.4982          OOM              OOM
    lgb_baseline  0.6093           0.7847           0.5277          0.78735          0.7049
    SGB           0.6064 ± 7e-4    0.7780 ± 8e-4    0.5239 ± 6e-4   0.7759 ± 3e-4    0.6989 ± 8e-4
    LightGBM      0.6093 ± 9e-5    0.7846 ± 4e-5    0.5275 ± 5e-4   0.78732 ± 1e-4   0.7051 ± 5e-5

[Figure 1: Time-AUC curve on Flight Delay.]  [Figure 2: Time-NDCG curve on LETOR.]

The training time and test accuracy are summarized in Table 2 and Table 3, respectively. From these results, we can see that LightGBM is the fastest while maintaining almost the same accuracy as the baselines.
xgb_exa is based on the pre-sorted algorithm, which is quite slow compared with the histogram-based algorithms. By comparison with lgb_baseline, LightGBM achieves speed-ups of 21x, 6x, 1.6x, 14x and 13x on the Allstate, Flight Delay, LETOR, KDD10 and KDD12 datasets, respectively. Since xgb_his is quite memory-consuming, it cannot run successfully on the KDD10 and KDD12 datasets due to out-of-memory errors. On the remaining datasets LightGBM is faster in all cases, with a speed-up of up to 9x achieved on the Allstate dataset. The speed-up is calculated based on training time per iteration, since all algorithms converge after a similar number of iterations. To demonstrate the overall training process, we also show the training curves based on wall clock time on Flight Delay and LETOR in Fig. 1 and Fig. 2, respectively; to save space, we put the remaining training curves of the other datasets in the supplementary material.

On all datasets, LightGBM achieves almost the same test accuracy as the baselines. This indicates that both GOSS and EFB bring significant speed-ups without hurting accuracy, which is consistent with our theoretical analysis in Sec. 3.2 and Sec. 4.

LightGBM achieves quite different speed-up ratios on these datasets. The overall speed-up comes from the combination of GOSS and EFB; we break down their contributions and discuss the effectiveness of GOSS and EFB separately in the next sections.

5.2 Analysis on GOSS

First, we study the speed-up ability of GOSS. From the comparison of LightGBM and EFB_only (LightGBM without GOSS) in Table 2, we can see that GOSS brings a nearly 2x speed-up on its own while using only 10%-20% of the data. GOSS learns the trees using only the sampled data, but it retains some computations on the full dataset, such as conducting the predictions and computing the gradients. Thus, the overall speed-up is not linearly correlated with the percentage of sampled data. Still, the speed-up brought by GOSS is very significant, and the technique is universally applicable to different datasets.

Second, we evaluate the accuracy of GOSS by comparing it with Stochastic Gradient Boosting (SGB) [20]. Without loss of generality, we use the LETOR dataset for this test. We tune the sampling ratio by choosing different a and b in GOSS, and use the same overall sampling ratio for SGB. We run these settings until convergence using early stopping. The results are shown in Table 4. We can see that the accuracy of GOSS is always better than that of SGB under the same sampling ratio. These results are consistent with our discussion in Sec. 3.2. All the experiments demonstrate that GOSS is a more effective sampling method than stochastic sampling.

Table 4: Accuracy comparison on the LETOR dataset for GOSS and SGB under different sampling ratios. We ensure all experiments reach their convergence points by using large iteration counts with early stopping. The standard deviations across settings are small. The settings of a and b for GOSS can be found in the supplementary materials.

    Sampling ratio   SGB      GOSS
    0.1              0.5182   0.5224
    0.15             0.5216   0.5256
    0.2              0.5239   0.5275
    0.25             0.5249   0.5284
    0.3              0.5252   0.5289
    0.35             0.5263   0.5293
    0.4              0.5267   0.5296

5.3 Analysis on EFB

We check the contribution of EFB to the speed-up by comparing lgb_baseline with EFB_only. The results are shown in Table 2.
Here we do not allow conflicts in the bundle-finding process (i.e., γ = 0; our detailed study on γ tuning is in the supplementary materials). We find that EFB helps achieve significant speed-ups on the large-scale datasets. Note that lgb_baseline has already been optimized for sparse features, yet EFB can still speed up the training by a large factor. This is because EFB merges many sparse features (both the one-hot coded features and the implicitly exclusive features) into far fewer features. The basic sparse feature optimization is included in the bundling process, but EFB incurs no additional cost for maintaining a nonzero-data table for each feature during the tree learning process. What is more, since many previously isolated features are bundled together, EFB can increase spatial locality and improve the cache hit rate significantly. Therefore, the overall efficiency improvement is dramatic. From the above analysis, EFB is a very effective algorithm for leveraging sparsity in the histogram-based algorithm, and it brings a significant speed-up for GBDT training.

6 Conclusion

In this paper, we have proposed a novel GBDT algorithm called LightGBM, which contains two novel techniques: Gradient-based One-Side Sampling and Exclusive Feature Bundling, to deal with a large number of data instances and a large number of features, respectively. We have performed both theoretical analysis and experimental studies on these two techniques. The experimental results are consistent with the theory and show that, with the help of GOSS and EFB, LightGBM can significantly outperform XGBoost and SGB in terms of computational speed and memory consumption. For future work, we will study the optimal selection of a and b in Gradient-based One-Side Sampling, and continue improving the performance of Exclusive Feature Bundling to deal with large numbers of features, whether they are sparse or not.

References

[1] Jerome H Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pages 1189–1232, 2001.
[2] Ping Li. Robust logitboost and adaptive base class (abc) logitboost. arXiv preprint arXiv:1203.3491, 2012.
[3] Matthew Richardson, Ewa Dominowska, and Robert Ragno. Predicting clicks: estimating the click-through rate for new ads. In Proceedings of the 16th International Conference on World Wide Web, pages 521–530. ACM, 2007.
[4] Christopher JC Burges. From ranknet to lambdarank to lambdamart: An overview. Learning, 11(23-581):81, 2010.
[5] Jerome Friedman, Trevor Hastie, Robert Tibshirani, et al. Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). The Annals of Statistics, 28(2):337–407, 2000.
[6] Charles Dubout and François Fleuret. Boosting with maximum adaptive sampling. In Advances in Neural Information Processing Systems, pages 1332–1340, 2011.
[7] Ron Appel, Thomas J Fuchs, Piotr Dollár, and Pietro Perona. Quickly boosting decision trees: pruning underachieving features early. In ICML (3), pages 594–602, 2013.
[8] Manish Mehta, Rakesh Agrawal, and Jorma Rissanen. Sliq: A fast scalable classifier for data mining. In International Conference on Extending Database Technology, pages 18–32. Springer, 1996.
[9] John Shafer, Rakesh Agrawal, and Manish Mehta. Sprint: A scalable parallel classifier for data mining. In Proc. 1996 Int. Conf. Very Large Data Bases, pages 544–555. Citeseer, 1996.
[10] Sanjay Ranka and V Singh. Clouds: A decision tree classifier for large datasets. In Proceedings of the 4th Knowledge Discovery and Data Mining Conference, pages 2–8, 1998.
[11] Ruoming Jin and Gagan Agrawal. Communication and memory efficient parallel decision tree construction. In Proceedings of the 2003 SIAM International Conference on Data Mining, pages 119–129. SIAM, 2003.
[12] Ping Li, Christopher JC Burges, Qiang Wu, JC Platt, D Koller, Y Singer, and S Roweis. Mcrank: Learning to rank using multiple classification and gradient boosting. In NIPS, volume 7, pages 845–852, 2007.
[13] Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794. ACM, 2016.
[14] Stephen Tyree, Kilian Q Weinberger, Kunal Agrawal, and Jennifer Paykin. Parallel boosted regression trees for web search ranking. In Proceedings of the 20th International Conference on World Wide Web, pages 387–396. ACM, 2011.
[15] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825–2830, 2011.
[16] Greg Ridgeway. Generalized boosted models: A guide to the gbm package. Update, 1(1):2007, 2007.
[17] Huan Zhang, Si Si, and Cho-Jui Hsieh. GPU-acceleration for large-scale tree boosting. arXiv preprint arXiv:1706.08359, 2017.
[18] Rory Mitchell and Eibe Frank. Accelerating the xgboost algorithm using gpu computing. PeerJ Preprints, 5:e2911v1, 2017.
[19] Qi Meng, Guolin Ke, Taifeng Wang, Wei Chen, Qiwei Ye, Zhi-Ming Ma, and Tie-Yan Liu. A communication-efficient parallel algorithm for decision tree. In Advances in Neural Information Processing Systems, pages 1271–1279, 2016.
[20] Jerome H Friedman. Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4):367–378, 2002.
[21] Michael Collins, Robert E Schapire, and Yoram Singer. Logistic regression, adaboost and bregman distances. Machine Learning, 48(1-3):253–285, 2002.
[22] Ian Jolliffe. Principal component analysis. Wiley Online Library, 2002.
[23] Luis O Jimenez and David A Landgrebe. Hyperspectral data analysis and supervised feature reduction via projection pursuit. IEEE Transactions on Geoscience and Remote Sensing, 37(6):2653–2667, 1999.
[24] Zhi-Hua Zhou. Ensemble methods: foundations and algorithms. CRC Press, 2012.
[25] Tommy R Jensen and Bjarne Toft. Graph coloring problems, volume 39. John Wiley & Sons, 2011.
[26] Tao Qin and Tie-Yan Liu. Introducing LETOR 4.0 datasets. CoRR, abs/1306.2597, 2013.
[27] Allstate claim data, https://www.kaggle.com/c/ClaimPredictionChallenge.
[28] Flight delay data, https://github.com/szilard/benchm-ml#data.
[29] Hsiang-Fu Yu, Hung-Yi Lo, Hsun-Ping Hsieh, Jing-Kai Lou, Todd G McKenzie, Jung-Wei Chou, Po-Han Chung, Chia-Hua Ho, Chun-Fu Chang, Yin-Hsuan Wei, et al. Feature engineering and classifier ensemble for KDD Cup 2010. In KDD Cup, 2010.
[30] Kuan-Wei Wu, Chun-Sung Ferng, Chia-Hua Ho, An-Chun Liang, Chun-Heng Huang, Wei-Yuan Shen, Jyun-Yu Jiang, Ming-Hao Yang, Ting-Wei Lin, Ching-Pei Lee, et al. A two-stage ensemble of diverse models for advertisement ranking in KDD Cup 2012. In KDDCup, 2012.
[31] LIBSVM binary classification data, https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html.
[32] Haijian Shi. Best-first decision tree learning. PhD thesis, The University of Waikato, 2007.
Adversarial Ranking for Language Generation

Kevin Lin* (University of Washington, [email protected]); Xiaodong He (Microsoft Research, [email protected]); Dianqi Li* (University of Washington, [email protected]); Zhengyou Zhang (Microsoft Research, [email protected]); Ming-Ting Sun (University of Washington, [email protected])
* The authors contributed equally to this work.

Abstract

Generative adversarial networks (GANs) have had great success at synthesizing data. However, existing GANs restrict the discriminator to be a binary classifier, which limits their learning capacity for tasks that need to synthesize output with rich structure, such as natural language descriptions. In this paper, we propose a novel generative adversarial network, RankGAN, for generating high-quality language descriptions. Rather than training the discriminator to learn and assign an absolute binary predicate to an individual data sample, the proposed RankGAN analyzes and ranks a collection of human-written and machine-written sentences relative to a reference group. By viewing a set of data samples collectively and evaluating their quality through relative ranking scores, the discriminator is able to make better assessments, which in turn helps to learn a better generator. The proposed RankGAN is optimized through the policy gradient technique. Experimental results on multiple public datasets clearly demonstrate the effectiveness of the proposed approach.

1 Introduction

Language generation plays an important role in natural language processing and is essential to many applications such as machine translation [1], image captioning [6], and dialogue systems [26]. Recent studies [10, 11, 29, 33] show that recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) can achieve impressive performance on the task of language generation. Evaluation metrics such as BLEU [22], METEOR [2], and CIDEr [32] are reported in the literature.

Generative adversarial networks (GANs) have drawn great attention since Goodfellow et al. [8] introduced the framework for generating synthetic data that is similar to the real data. The main idea behind GANs is to have two neural network models, the discriminator and the generator, compete against each other during learning. The discriminator aims to distinguish the synthetic data from the real data, while the generator is trained to confuse the discriminator by generating high-quality synthetic data. During learning, the gradient of the training loss from the discriminator is used as guidance for updating the parameters of the generator. GANs have since achieved great performance in computer vision tasks such as image synthesis [5, 14, 17, 24, 27]. These successes are mainly attributed to training the discriminator to estimate the statistical properties of continuous real-valued data (e.g., pixel values).

The adversarial learning framework provides a possible way to synthesize language descriptions of high quality. However, GANs have made limited progress in natural language processing. Primarily, GANs have difficulties dealing with discrete data (e.g., text sequences [3]). In natural language processing, text sequences are evaluated as discrete tokens whose values are non-differentiable. Therefore, the optimization of GANs is challenging.
Secondly, most existing GANs assume the output of the discriminator to be a binary predicate indicating whether a given sentence was written by a human or by a machine [4, 16, 18, 34, 35]. For the large variety of natural language expressions, this binary prediction is too restrictive, since the diversity and richness inside the sentences are constrained by the degenerate distribution induced by binary classification.

In this paper, we propose a novel adversarial learning framework, RankGAN, for generating high-quality language descriptions. RankGAN learns the model from the relative ranking information between machine-written and human-written sentences in an adversarial framework. In the proposed RankGAN, we relax the training of the discriminator to a learning-to-rank optimization problem. Specifically, the proposed adversarial network consists of two neural network models, a generator and a ranker. As opposed to performing a binary classification task, we propose to train the ranker to rank machine-written sentences lower than human-written sentences with respect to a reference sentence, which is human-written. Accordingly, we train the generator to synthesize sentences that confuse the ranker, so that machine-written sentences are ranked higher than human-written sentences with regard to the reference. During learning, we adopt the policy gradient technique [31] to overcome the non-differentiability problem. Consequently, by viewing a set of data samples collectively and evaluating their quality through relative ranking, the discriminator can make a better assessment of the quality of the samples, which in turn helps the generator learn better. Our method is well suited to language learning in comparison to conventional GANs. Experimental results clearly demonstrate that our proposed method outperforms the state-of-the-art methods.

2 Related works

GANs: Recently, GANs [8] have been widely explored due to their unsupervised nature. Though GANs have achieved great successes in computer vision applications [5, 14, 17, 24, 27], there has been only limited progress in natural language processing, because discrete sequences are not differentiable. To tackle the non-differentiability problem, SeqGAN [35] addresses this issue via the policy gradient, inspired by reinforcement learning [31]. The approach considers each word selection in the sentence as an action, and computes the reward of the sequence with Monte Carlo (MC) search. Their method back-propagates the reward from the discriminator and encourages the generator to create human-like language sentences. Li et al. [18] apply GANs with the policy gradient method to dialogue generation. They train a Seq2Seq model as the generator, and build the discriminator using a hierarchical encoder followed by a 2-way softmax function. Dai et al. [4] show that it is possible to enhance the diversity of generated image captions with conditional GANs. Yang et al. [34] further show that training a convolutional neural network (CNN) as the discriminator yields better performance than a recurrent neural network (RNN) for the task of machine translation (MT). Among the works mentioned above, SeqGAN [35] is the study most relevant to our proposed method. The major difference between SeqGAN [35] and our proposed model is that we replace the regression-based discriminator with a novel ranker, and we formulate a new learning objective function in the adversarial learning framework.
Under this formulation, the rewards for training our model are not limited to binary regression, but are encoded with relative ranking information.

Learning to rank: Learning to rank plays an essential role in Information Retrieval (IR) [21]. The ranking technique has proven effective for searching documents [12] and images [23]. Given a reference, the desired information (such as click-through logs [15]) is incorporated into the ranking function, which aims to return the relevant documents as early as possible. While the goal of these previous works is to retrieve relevant documents, our proposed model takes the ranking scores as rewards to learn the language generator. Our proposed RankGAN is one of the first generative adversarial networks that learns from relative ranking information.

Figure 1: An illustration of the proposed RankGAN. H denotes a sentence sampled from the human-written sentences. G is the sentence generated by the generator G_θ. The inputs of the ranker R_φ consist of one synthetic sequence and multiple human-written sentences. Given the reference sentence U, which is human-written, we rank the input sentences according to the relative scores. The figure illustrates the generator trying to fool the ranker so that the synthetic sentence is ranked at the top with respect to the reference sentence.

3 Method

3.1 Overall architecture

In conventional GANs [8], the discriminator, using multilayer perceptrons, outputs a binary probability distribution suggesting whether an unknown sequence comes from the real data rather than from the data synthesized by the generator. In contrast to conventional GANs, RankGAN consists of a sequence generator G and a ranker R, where R can endow the sequences with relative ranks when given a reference. As illustrated in Figure 1, the learning objective of G is to produce a synthetic sentence that receives a higher ranking score than those drawn from the real data, while the goal of R is to rank the synthetic sentence lower than the human-written sentences. Thus, this can be treated as G and R playing a minimax game with the objective function L:

    min_θ max_φ L(G_θ, R_φ) = E_{s∼P_h}[ log R_φ(s|U, C-) ] + E_{s∼G_θ}[ log(1 − R_φ(s|U, C+)) ]    (1)

where θ and φ are the variable parameters of G and R, respectively. E is the expectation operator, and P_h is the real data distribution of human-written sentences. s ∼ P_h and s ∼ G_θ denote that s is drawn from the human-written sentences and from the synthesized sentences, respectively. U is the reference set used for estimating relative ranks, and C+ and C- are the comparison sets for the different input sentences s: when the input sentence s is real data, C- is pre-sampled from the generated data; if the input sentence s is synthetic data, C+ is pre-sampled from the human-written data.

The forms of G_θ and R_φ can be realized in many ways. In this paper, we design the generative model with long short-term memory networks (LSTMs) [11]. An LSTM iteratively takes the embedded features of the current token w_t, plus the information in the hidden state h_{t−1} and the cell state c_{t−1} from the previous step, and updates the current states h_t and c_t. Additionally, the subsequent word w_{t+1} is conditionally sampled subject to the probability distribution p(w_{t+1}|h_t), which is determined by the value of the current hidden state h_t. Benefiting from the capacity of LSTMs, our generative model can preserve long-term gradient information and produce more delicate word sequences s = (w_0, w_1, ..., w_T), where T is the sequence length.

Recent studies show that convolutional neural networks can achieve high performance in machine translation [7, 34] and text classification [36]. The proposed ranker R, which shares a similar convolutional architecture, first maps the concatenated sequence matrices into embedded feature vectors y_s = F(s) through a series of nonlinear functions F. The ranking score is then calculated for the sequence feature y_s given the reference feature y_u, which is extracted by R in advance.
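To make the sampling procedure above concrete, here is a minimal PyTorch sketch of an LSTM generator that draws each token from p(w_{t+1}|h_t). The layer sizes and module layout are our own assumptions for illustration; the paper does not prescribe them, and a real implementation would also handle batching and an end-of-sentence symbol.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
            super().__init__()
            self.hid_dim = hid_dim
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.cell = nn.LSTMCell(emb_dim, hid_dim)
            self.out = nn.Linear(hid_dim, vocab_size)

        @torch.no_grad()
        def sample(self, start_token, max_len=20):
            # Roll out one sentence token by token: w_{t+1} ~ p(. | h_t).
            h = torch.zeros(1, self.hid_dim)
            c = torch.zeros(1, self.hid_dim)
            w = torch.tensor([start_token])
            tokens = []
            for _ in range(max_len):
                h, c = self.cell(self.embed(w), (h, c))
                probs = torch.softmax(self.out(h), dim=-1)   # p(w_{t+1} | h_t)
                w = torch.multinomial(probs, num_samples=1).squeeze(1)
                tokens.append(int(w.item()))
            return tokens

During adversarial training one would keep the per-step log-probabilities rather than wrapping sampling in no_grad, so that the policy-gradient update of Eq. (6) below can be applied.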
Benefiting from the capacity of LSTMs, our generative model can preserve long-term gradient information and produce more delicate word sequences $s = (w_0, w_1, w_2, \dots, w_T)$, where $T$ is the sequence length. Recent studies show that convolutional neural networks can achieve high performance in machine translation [7, 34] and text classification [36]. The proposed ranker $R$, which shares a similar convolutional architecture, first maps the concatenated sequence matrices into embedded feature vectors $y_s = \mathcal{F}(s)$ through a series of nonlinear functions $\mathcal{F}$. The ranking score is then calculated for the sequence features $y_s$ against the reference feature $y_u$, which is extracted by $R$ in advance.

3.2 Rank score

Disparities between sentences are easier to observe through contrast. Inspired by this, and unlike conventional GANs, our architecture possesses a comparison system that evaluates relative ranking scores among sentences. Following the ranking steps commonly used in Web search [12], the relevance score of the input sequence $s$ given a reference $u$ is measured as

$$\alpha(s \mid u) = \operatorname{cosine}(y_s, y_u) = \frac{y_s \cdot y_u}{\lVert y_s \rVert \, \lVert y_u \rVert} \qquad (2)$$

where $y_u$ and $y_s$ are the embedded feature vectors of the reference and the input sequence, respectively, and $\lVert\cdot\rVert$ denotes the norm operator. A softmax-like formula is then used to compute the ranking score of a sequence $s$ given a comparison set $C$:

$$P(s \mid u, C) = \frac{\exp\big(\gamma\,\alpha(s \mid u)\big)}{\sum_{s' \in C'} \exp\big(\gamma\,\alpha(s' \mid u)\big)} \qquad (3)$$

The parameter $\gamma$, whose value is set empirically during experiments, plays a role similar to the temperature in the Boltzmann exploration method [30] of reinforcement learning: a lower $\gamma$ makes all sentences nearly equiprobable, while a higher $\gamma$ biases the distribution toward the sentences with greater scores. The set $C' = C \cup \{s\}$ denotes the set of input sentences to be ranked.

The collective ranking score for an input sentence is the expectation of its scores given different references sampled across the reference space. During learning, we randomly sample a set of references from the human-written sentences to construct the reference set $U$. Meanwhile, the comparison set $C$ is constructed according to the type of the input sentence $s$: $C$ is sampled from the human-written set if $s$ is a synthetic sentence produced by $G$, and vice versa. With the above setting, the expected log ranking score of the input sentence $s$ is

$$\log R_\phi(s \mid U, C) = \mathbb{E}_{u \in U} \log\big[P(s \mid u, C)\big] \qquad (4)$$

Here, $s$ is the input sentence, either human-written or produced by $G_\theta$; accordingly, the comparison set $C$ is $C^+$ if $s$ is generated by the machine, and vice versa. Given the reference set and the comparison set, we can compute the rank scores indicating the relative ranks of complete sentences. These ranking scores enter the objective functions of the generator $G_\theta$ and the ranker $R_\phi$; the sketch below illustrates the computation of Eqs. (2)-(4).
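To make Eqs. (2)-(4) concrete, the following minimal numpy sketch computes the relevance, ranking, and expected log ranking scores; the random feature vectors stand in for the CNN embeddings $y = \mathcal{F}(s)$, and the value of $\gamma$ is a placeholder.

```python
# Minimal sketch of the rank score of Eqs. (2)-(4).
import numpy as np

rng = np.random.default_rng(0)
gamma = 2.0                        # temperature gamma, set empirically

def relevance(y_s, y_u):           # Eq. (2): cosine(y_s, y_u)
    return y_s @ y_u / (np.linalg.norm(y_s) * np.linalg.norm(y_u))

def rank_score(y_s, y_u, Y_comp):  # Eq. (3): softmax over C' = C + {s}
    scores = np.array([relevance(y, y_u) for y in Y_comp + [y_s]])
    weights = np.exp(gamma * scores)
    return weights[-1] / weights.sum()

def log_rank_score(y_s, Y_refs, Y_comp):  # Eq. (4): mean over u in U
    return np.mean([np.log(rank_score(y_s, y_u, Y_comp)) for y_u in Y_refs])

y_s = rng.normal(size=16)                          # embedded input sentence
Y_refs = [rng.normal(size=16) for _ in range(4)]   # reference set U
Y_comp = [rng.normal(size=16) for _ in range(8)]   # comparison set C
print(log_rank_score(y_s, Y_refs, Y_comp))
```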
3.3 Training

In conventional settings, GANs are designed for generating real-valued image data, so the generator $G_\theta$ consists of a series of differentiable functions with continuous parameters, guided by the objective function of the discriminator $D_\phi$ [8]. Unfortunately, the synthetic data in the text generation task consists of discrete symbols, which are hard to update through common back-propagation. To solve this issue, we adopt the policy gradient method [31], which has been widely used in reinforcement learning.

Suppose the vocabulary set is $V$. At time step $t$, the previous tokens generated in the sequence are $(w_0, w_1, \dots, w_{t-1})$, where all tokens $w_i \in V$. In the language of reinforcement learning, the existing sequence $s_{1:t-1} = (w_0, w_1, \dots, w_{t-1})$ is the current state, and the next token $w_t$ selected at the next step is an action sampled from the policy $\pi_\theta(w_t \mid s_{1:t-1})$. Since we use $G$ to generate the next token, the policy $\pi_\theta$ equals the distribution $p(w_t \mid s_{1:t-1})$ mentioned previously, and $\theta$ is the parameter set of the generator $G$. Once the generator reaches the end of a sequence (i.e., $s = s_{1:T}$), it receives a ranking reward $R(s \mid U, C)$ according to the comparison set $C$ and its related reference set $U$.

Note that in reinforcement learning the return at the current step generally combines immediate rewards with rewards from intermediate and future states. In text generation, however, the generator $G_\theta$ obtains a reward if and only if a sequence has been completely generated: no intermediate reward is gained before the sequence hits the end symbol. To relieve this problem, we utilize Monte Carlo rollout methods [4, 35] to simulate intermediate rewards while a sequence is incomplete. The expected future reward $V$ for a partial sequence is then

$$V_{\theta,\phi}(s_{1:t-1}, U) = \mathbb{E}_{s_r \sim G_\theta}\big[\log R_\phi(s_r \mid U, C^+, s_{1:t-1})\big] \qquad (5)$$

Here, $s_r$ represents a complete sentence sampled by the rollout method from the given starter sequence $s_{1:t-1}$. More specifically, the beginning tokens $(w_0, w_1, \dots, w_{t-1})$ are fixed and the remaining tokens are consecutively sampled by $G_\theta$ until the last token $w_T$ is generated; we call this a 'path' generated by the current policy. We sample $n$ different paths with their corresponding ranking scores, and the average ranking score is used to approximate the expected future reward of the current partial sequence.

With these feasible intermediate rewards, we can finalize the objective function for complete sentences. Following the derivation in [31], the gradient of the objective function for the generator $G$ can be formulated as

$$\nabla_\theta \mathcal{L}_\theta(s_0) = \mathbb{E}_{s_{1:T} \sim G_\theta}\Big[\sum_{t=1}^{T} \sum_{w_t \in V} \nabla_\theta\, \pi_\theta(w_t \mid s_{1:t-1})\, V_{\theta,\phi}(s_{1:t}, U)\Big] \qquad (6)$$

where $\nabla_\theta$ denotes the gradient with respect to $\theta$. The start state $s_0$ is the first generated token $w_0$, and $\mathbb{E}_{s_{1:T} \sim G_\theta}$ is the mean over all complete sentences sampled with the generator's current parameter $\theta$ within one minibatch. Note that we only compute the partial derivatives with respect to $\theta$, as $R_\phi$ is fixed during the training of the generator. Importantly, unlike the policy gradient methods in other works [4, 20, 35], our method replaces the simple binary outputs with a ranking system based on multiple sentences, which better reflects the quality of the generated sentences and facilitates effective training of the generator $G$.

To train the ranker's parameter set $\phi$, we can fix the parameters $\theta$ and maximize the objective in Eq. (1). In practice, however, we found that the network learns better by minimizing $\log R_\phi(s \mid U, C^+)$ instead of maximizing $\log(1 - R_\phi(s \mid U, C^+))$, where $s \sim G_\theta$; this is similar to the finding in [25]. Hence, during the training of $R_\phi$, we maximize the following ranking objective function:

$$\mathcal{L}_\phi = \mathbb{E}_{s \sim P_h}\big[\log R_\phi(s \mid U, C^-)\big] - \mathbb{E}_{s \sim G_\theta}\big[\log R_\phi(s \mid U, C^+)\big] \qquad (7)$$

It is worth noting that when the evaluated data comes from the human-written sentences, the comparison set $C^-$ is sampled from the sentences generated by $G_\theta$; in contrast, if the evaluated data belongs to the synthetic sentences, $C^+$ consists of human-written sentences. We found empirically that this gives more stable training. A sketch of the rollout-based reward estimation and the resulting policy gradient update follows.
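The sketch below illustrates the rollout estimate of Eq. (5) and a REINFORCE-style surrogate for Eq. (6) on a toy stateless policy; the `ranker_reward` function is an invented stand-in for $\log R_\phi$, not the actual ranker.

```python
# Minimal sketch of Monte Carlo rollouts (Eq. 5) and policy gradient (Eq. 6).
import torch

vocab_size, T, n_rollouts = 8, 6, 16
logits = torch.zeros(vocab_size, requires_grad=True)  # toy stateless policy

def ranker_reward(seq):
    # stand-in for log R_phi(s | U, C+): here it simply favors token 3
    return float((torch.tensor(seq) == 3).float().mean())

def rollout_value(prefix):
    """Eq. (5): average ranker reward over n complete rollouts of a prefix."""
    with torch.no_grad():
        total = 0.0
        for _ in range(n_rollouts):
            seq = list(prefix)
            while len(seq) < T:
                probs = torch.softmax(logits, -1)
                seq.append(int(torch.multinomial(probs, 1)))
            total += ranker_reward(seq)
    return total / n_rollouts

opt = torch.optim.SGD([logits], lr=0.5)
for step in range(50):
    seq, loss = [], 0.0
    for t in range(T):                        # build one sampled sentence
        probs = torch.softmax(logits, -1)
        w = int(torch.multinomial(probs, 1))
        seq.append(w)
        v = rollout_value(seq)                # intermediate reward via rollouts
        loss = loss - torch.log(probs[w]) * v # REINFORCE surrogate of Eq. (6)
    opt.zero_grad(); loss.backward(); opt.step()
print(torch.softmax(logits, -1))              # mass concentrates on token 3
```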
3.4 Discussion

Note that the proposed RankGAN has a Nash equilibrium when the generator $G_\theta$ reproduces the distribution $P_h$ of human-written sentences and the ranker $R_\phi$ can no longer correctly rank the synthetic sentences against the human-written ones. However, as also discussed in the literature [8, 9], it remains an open problem how a non-Bernoulli GAN converges to such an equilibrium. In a sense, replacing the absolute binary predicates with ranking scores based on multiple sentences can relieve the gradient vanishing problem and benefit the training process. In the experiment section below, we observe that training converges on four different datasets and leads to better performance than previous state-of-the-art methods.

4 Experimental results

Following the evaluation protocol in [35], we first carry out experiments on the data and simulator proposed in [35]. Then, we compare the performance of RankGAN with other state-of-the-art methods on multiple public language datasets, including Chinese poems [37], COCO captions [19], and Shakespeare's plays [28].

Table 1: The performance comparison of different methods on the synthetic data [35] in terms of the negative log-likelihood (NLL) scores.

    Method   MLE     PG-BLEU   SeqGAN   RankGAN
    NLL      9.038   8.946     8.736    8.247

Figure 2: Learning curves (NLL loss vs. training epochs, over 250 epochs) of the different methods on the synthetic-data simulation. Note that the vertical dashed line indicates the end of the pre-training of PG-BLEU, SeqGAN and RankGAN.

4.1 Simulation on synthetic data

We first conduct the test on the dataset proposed in [35]. The synthetic data² is a set of sequential tokens that can be seen as simulated data standing in for real-world language data. We conduct this simulation to validate that the proposed method is able to capture the dependencies among sequential tokens. In the simulation, we first collect 10,000 sequences generated by the oracle model (the 'true' model) as the training set. The oracle model we use is a randomly initialized LSTM which is publicly available². During learning, we randomly select one training sentence and one sentence generated by RankGAN to form the input set $C'$. Then, given a reference sample that is also randomly selected from the training set, we compute the ranking score and optimize the proposed objective function. The sentence length of the training data is fixed to 20 for simplicity.

Following the evaluation protocol in [35], we evaluate the machine-written sentences by simulating a Turing test. In this experiment, the oracle model, playing the role of the human, generates the 'human-written' sentences following its intrinsic data distribution $P_o$. We assume these sentences are the ground-truth sentences used for training, so each model should learn and imitate sentences from $P_o$. At test time, the sentences generated by each model are naturally evaluated by the original oracle model: we feed the sentences generated by RankGAN to the oracle model and estimate the average negative log-likelihood (NLL) [13]. The lower the NLL score, the higher the probability that the generated sentences would pass this simulated Turing test. A minimal sketch of this metric is given below.
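A minimal sketch of the metric, with `oracle_prob` as an invented placeholder for querying the released oracle LSTM:

```python
# Sketch of the average per-token NLL under the oracle model.
import math

def oracle_prob(prefix, w):
    # stand-in for p_oracle(w | prefix); a real run would query the oracle LSTM
    return 1.0 / 5000

def nll(sentences):
    total, count = 0.0, 0
    for s in sentences:
        for t, w in enumerate(s):
            total -= math.log(oracle_prob(s[:t], w))
            count += 1
    return total / count   # lower is better, i.e., closer to P_o

print(nll([[1, 2, 3], [4, 5]]))
```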
We compare our approach with the state-of-the-art methods, including maximum likelihood estimation (MLE), policy gradient with BLEU (PG-BLEU), and SeqGAN [35]. PG-BLEU computes the BLEU score to measure the similarity between the generated sentences and the human-written sentences, and then takes the BLEU score as the reward to update the generator with policy gradient. Because PG-BLEU also learns similarity information during training, it serves as a natural baseline for our approach. It is noteworthy that while PG-BLEU grasps similarities through token-level n-gram matching between sentences, RankGAN explores ranking connections inside the embedded features of sentences; the two methods are fundamentally different.

² The synthetic data and the oracle model (LSTM model) are publicly available at https://github.com/LantaoYu/SeqGAN

Table 2: The performance comparison of different methods on the Chinese poem generation in terms of the BLEU scores and human evaluation scores.

    Method    BLEU-2        Method          Human score
    MLE       0.667         SeqGAN          3.58
    SeqGAN    0.738         RankGAN         4.52
    RankGAN   0.812         Human-written   6.69

Table 1 shows the performance comparison of RankGAN and the other methods: the proposed RankGAN performs favourably against the compared methods. Figure 2 shows the learning curves of the different approaches over the training epochs. While MLE, PG-BLEU and SeqGAN tend to converge after 200 training epochs, the proposed RankGAN consistently improves the language generator and achieves a lower NLL score. The results suggest that the proposed ranking objective, which relaxes the binary restriction on the discriminator, is able to learn an effective language generator. It is worth noting that RankGAN achieves better performance than PG-BLEU, which indicates that employing ranking information as the reward is more informative than using a BLEU score based on token-level similarities.

In our experiments, we noticed that the results are not sensitive to the sizes of the comparison set and the reference set: the learning curves converge to similar results with different reference and comparison sizes. However, learning with a large reference set and comparison set could potentially increase the computational cost.

Conventional GANs employ a binary classifier to distinguish human-written from machine-created sentences. Though effective, this is very restrictive for tasks like natural language generation, where rich structures and varied language expressions need to be considered; for such tasks a relative quality assessment is usually more suitable. The proposed RankGAN performs quality assessment in a relative space: rather than training the discriminator to assign an absolute 0 or 1 binary predicate to a synthesized or real data sample, we expect the discriminator to rank the synthetic data against the real data in a relative assessment space where better quality judgments of different data samples can be obtained. Given rewards that carry relative ranking information, the proposed RankGAN is able to learn a better language generator than the compared state-of-the-art methods.

4.2 Results on Chinese poem composition

To evaluate the performance of our language generator, we compare our method with other approaches, including MLE and SeqGAN [35], on real-world language data.
We conduct experiments on the Chinese poem dataset [37], which contains 13,123 five-word quatrain poems. Each poem has 4 sentences, and each sentence contains 5 words, for a total of 20 words. After the standard pre-processing, which replaces words that appear fewer than 5 times with the special character UNK, we train our model on the dataset and generate poems. To keep the proposed method general, our model does not take advantage of any prior knowledge such as phonology during learning.

Following the evaluation protocol in [35, 37], we compute the BLEU-2 score to estimate the similarity between the human-written poems and the machine-created ones. Table 2 summarizes the BLEU-2 scores of the different methods. The proposed RankGAN performs favourably compared to the state-of-the-art methods in terms of BLEU-2, which indicates that the proposed objective is able to learn an effective language generator on real-world data.

We further conduct a human study to evaluate the quality of the generated poems from a human perspective. Specifically, we invite 57 participants who are native Mandarin Chinese speakers to score the poems. During the evaluation, we randomly sample and show 15 poems written by the different methods, including RankGAN, SeqGAN, and humans, and ask the subjects to grade each poem from 1 to 10 points. As shown in Table 2, human-written poems receive the highest score, and RankGAN outperforms the compared method in terms of the human evaluation score. The results suggest that the ranking score is informative for the generator to create human-like sentences.

4.3 Results on COCO image captions

We further evaluate our method on a large-scale dataset to test the stability of our model. We use the image captions provided by the COCO dataset [19]. The captions are narrative sentences written by humans; each sentence is at least 8 words and at most 20 words long. We randomly select 80,000 captions as the training set and 5,000 captions to form the validation set, and we replace words that appear fewer than 5 times with the UNK character. As in the poem experiments, generated sentences are scored by n-gram BLEU against human references; a minimal sketch of this evaluation is given below.
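A minimal sketch of the BLEU evaluation, assuming the standard nltk implementation and toy token lists in place of real poems and captions:

```python
# Sketch of BLEU-n scoring of a generated sentence against human references.
from nltk.translate.bleu_score import sentence_bleu

references = [["spring", "wind", "over", "the", "river"],
              ["autumn", "moon", "above", "the", "lake"]]
generated = ["spring", "moon", "over", "the", "lake"]

# BLEU-2 weighs unigram and bigram matches equally; BLEU-3 and BLEU-4 use
# weights (1/3,)*3 and (1/4,)*4 respectively.
bleu2 = sentence_bleu(references, generated, weights=(0.5, 0.5))
print(bleu2)
```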
Since the proposed RankGAN focuses on unconditional GANs that do not consider any prior knowledge as input, we train our model on the captions of the training set without conditioning on specific images. In the experiment, we evaluate the performance of the language generator by averaging BLEU scores measuring the similarity between the generated sentences and the human-written sentences in the validation set.

Table 3: The performance comparison of different methods on the COCO captions in terms of the BLEU scores and human evaluation scores.

    Method    BLEU-2   BLEU-3   BLEU-4        Method          Human score
    MLE       0.781    0.624    0.589         SeqGAN          3.44
    SeqGAN    0.815    0.636    0.587         RankGAN         4.61
    RankGAN   0.845    0.668    0.614         Human-written   6.42

Table 3 shows the performance comparison of the different methods: RankGAN achieves better performance than the other methods across the BLEU scores. Some samples written by humans and synthesized by SeqGAN and the proposed RankGAN are shown in Table 4. These examples show that our model is able to generate fluent, novel sentences that do not exist in the training set. The results show that RankGAN is able to learn an effective language generator on a large corpus.

Table 4: Examples of the generated descriptions with different methods. Note that the language models are trained on the COCO caption dataset without the images.

    Human-written:
      Two men happily working on a plastic computer.
      The toilet in the bathroom is filled with a bunch of ice.
      A bottle of wine near stacks of dishes and food.
      A large airplane is taking off from a runway.
      Little girl wearing blue clothing carrying purple bag sitting outside cafe.
    SeqGAN (Baseline):
      A baked mother cake sits on a street with a rear of it.
      A tennis player who is in the ocean.
      A highly many fried scissors sits next to the older.
      A person that is sitting next to a desk.
      Child jumped next to each other.
    RankGAN (Ours):
      Three people standing in front of some kind of boats.
      A bedroom has silver photograph desk.
      The bears standing in front of a palm state park.
      This bathroom has brown bench.
      Three bus in a road in front of a ramp.

We also conduct a human study to evaluate the quality of the generated sentences. We invite 28 participants who are native or proficient English speakers to grade the sentences. Similar to the setting of the previous section, we randomly sample and show 15 sentences written by the different methods and ask the subjects to grade them from 1 to 10 points. Table 3 shows the human evaluation scores. As can be seen, the human-written sentences get the highest score, and among the GAN approaches, RankGAN receives a better score than SeqGAN, consistent with the finding in the Chinese poem composition. The results demonstrate that the proposed learning objective is capable of increasing the diversity of the wording, making it more realistic and human-like.

4.4 Results on Shakespeare's plays

Finally, we investigate the possibility of learning Shakespeare's lexical dependencies and making use of his rare phrases. In this experiment, we train our model on the play Romeo and Juliet [28] to further validate the proposed method. The script is split into 2,500 training sentences and 565 test sentences. To learn the rare words in the script, we lower the UNK threshold from 5 to 2.

Table 5: The performance comparison of different methods on Shakespeare's play Romeo and Juliet in terms of the BLEU scores.

    Method    BLEU-2   BLEU-3   BLEU-4
    MLE       0.796    0.695    0.635
    SeqGAN    0.887    0.842    0.815
    RankGAN   0.914    0.878    0.856

Table 5 shows the performance comparison of the proposed RankGAN and the other methods, including MLE and SeqGAN. The proposed method achieves consistently higher BLEU scores than the other methods across the different n-gram criteria. The results indicate that the proposed RankGAN is able to capture the transition patterns among words, even when the training sentences are novel, delicate and complicated.

5 Conclusion

We presented a new generative adversarial network, RankGAN, for generating high-quality natural language descriptions. Instead of training the discriminator to assign an absolute binary predicate to real or synthesized data samples, we propose using a ranker that ranks the human-written sentences higher than the machine-written sentences. We then train the generator to synthesize natural language sentences that can be ranked higher than the human-written ones. By relaxing the binary-classification restriction and conceiving a relative, information-rich space for the discriminator in the adversarial learning framework, the proposed learning objective is well suited to synthesizing natural language sentences of high quality.
Experimental results on multiple public datasets demonstrate that our method achieves significantly better performance than previous state-of-the-art language generators. In the future, we plan to explore and extend our model to many other tasks, such as image synthesis and conditional GANs for image captioning.

Acknowledgement

We would like to thank the reviewers for their constructive comments. We thank NVIDIA Corporation for the donation of the GPU used for this research. We also thank Tianyi Zhou and Pengchuan Zhang for their helpful discussions.

References

[1] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[2] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In Proc. ACL workshops, volume 29, pages 65-72, 2005.
[3] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. Proc. CoNLL, page 10, 2016.
[4] Bo Dai, Dahua Lin, Raquel Urtasun, and Sanja Fidler. Towards diverse and natural image descriptions via a conditional GAN. arXiv preprint arXiv:1703.06029, 2017.
[5] Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Proc. NIPS, pages 1486-1494, 2015.
[6] Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. From captions to visual concepts and back. In Proc. CVPR, pages 1473-1482, 2015.
[7] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017.
[8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proc. NIPS, pages 2672-2680, 2014.
[9] Ian J Goodfellow. On distinguishability criteria for estimating generative models. arXiv preprint arXiv:1412.6515, 2014.
[10] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[11] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[12] Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. In Proc. CIKM, pages 2333-2338, 2013.
[13] Ferenc Huszár. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? arXiv preprint arXiv:1511.05101, 2015.
[14] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proc. CVPR, 2017.
[15] Thorsten Joachims. Optimizing search engines using clickthrough data. In Proc. SIGKDD, pages 133-142, 2002.
[16] Matt J Kusner and José Miguel Hernández-Lobato. GANs for sequences of discrete elements with the Gumbel-softmax distribution. arXiv preprint arXiv:1611.04051, 2016.
[17] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
[18] Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky.
Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547, 2017.
[19] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proc. ECCV, pages 740-755, 2014.
[20] Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. Improved image captioning via policy gradient optimization of SPIDEr.
[21] Tie-Yan Liu et al. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 3(3):225-331, 2009.
[22] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proc. ACL, pages 311-318, 2002.
[23] Devi Parikh and Kristen Grauman. Relative attributes. In Proc. ICCV, pages 503-510, 2011.
[24] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[25] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In Proc. NIPS, 2016.
[26] Kevin Reschke, Adam Vogel, and Dan Jurafsky. Generating recommendation dialogs by extracting information from user reviews. In ACL, 2013.
[27] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.
[28] William Shakespeare. The complete works of William Shakespeare. Race Point Publishing, 2014.
[29] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Proc. NIPS, pages 3104-3112, 2014.
[30] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.
[31] Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057-1063, 1999.
[32] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. CIDEr: Consensus-based image description evaluation. In Proc. CVPR, pages 4566-4575, 2015.
[33] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[34] Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. Improving neural machine translation with conditional sequence generative adversarial nets. arXiv preprint arXiv:1703.04887, 2017.
[35] Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: sequence generative adversarial nets with policy gradient. In Proc. AAAI, 2017.
[36] Xiang Zhang and Yann LeCun. Text understanding from scratch. arXiv preprint arXiv:1502.01710, 2015.
[37] Xingxing Zhang and Mirella Lapata. Chinese poetry generation with recurrent neural networks. In Proc. EMNLP, 2014.
Regret Minimization in MDPs with Options without Prior Knowledge

Ronan Fruit, Sequel Team - Inria Lille, [email protected]
Matteo Pirotta, Sequel Team - Inria Lille, [email protected]
Alessandro Lazaric, Sequel Team - Inria Lille, [email protected]
Emma Brunskill, Stanford University, [email protected]

Abstract

The option framework integrates temporal abstraction into the reinforcement learning model through the introduction of macro-actions (i.e., options). Recent works leveraged the mapping of Markov decision processes (MDPs) with options to semi-MDPs (SMDPs) and introduced SMDP versions of exploration-exploitation algorithms (e.g., Rmax-SMDP and UCRL-SMDP) to analyze the impact of options on the learning performance. Nonetheless, the PAC-SMDP sample complexity of Rmax-SMDP can hardly be translated into equivalent PAC-MDP theoretical guarantees, while the regret analysis of UCRL-SMDP requires prior knowledge of the distributions of the cumulative reward and duration of each option, which are hardly available in practice. In this paper, we remove this limitation by combining the SMDP view with the inner Markov structure of options into a novel algorithm whose regret performance matches UCRL-SMDP's up to an additive regret term. We show scenarios where this term is negligible and the advantage of temporal abstraction is preserved. We also report preliminary empirical results supporting the theoretical findings.

1 Introduction

Tractable learning of how to make good decisions in complex domains over many time steps almost certainly requires some form of hierarchical reasoning. One powerful and popular framework for incorporating temporally extended actions in the context of reinforcement learning is the options framework [1]. Creating and leveraging options has been the subject of many papers over the last two decades (see e.g., [2, 3, 4, 5, 6, 7, 8]), and it has recently been of particular interest in combination with deep reinforcement learning, with a number of impressive empirical successes (see e.g., [9] for an application to Minecraft). Intuitively (and empirically), temporal abstraction can help speed up learning (reduce the amount of experience needed to learn a good policy) by shaping the selected actions toward more promising sequences of actions [10], and it can reduce planning computation by reducing the need to evaluate all possible actions (see e.g., Mann and Mannor [11]). However, incorporating options does not always improve learning efficiency, as shown by Jong et al. [12]. Intuitively, limiting action selection to temporally extended options may hamper the exploration of the environment by restricting the policy space. Therefore, we argue that in addition to the exciting work being done in heuristic and algorithmic approaches that leverage and/or dynamically discover options, it is important to build a formal understanding of how and when options may help or hurt reinforcement learning performance; such insights may also inform empirically motivated options-RL research.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

There has been fairly limited work on formal performance bounds of RL with options. Brunskill and Li [13] derived sample complexity bounds for an Rmax-like exploration-exploitation algorithm for semi-Markov decision processes (SMDPs).
While MDPs with options can be mapped to SMDPs, their analysis cannot be immediately translated into the PAC-MDP sample complexity of learning with options, which makes it harder to evaluate their potential benefit. Fruit and Lazaric [14] analyzed an SMDP variant of UCRL [15], showing how its regret can be mapped to the regret of learning in the original MDP with options. The resulting analysis explicitly showed how options can be beneficial whenever the navigability among the states in the original MDP is not compromised (i.e., the MDP diameter is not significantly increased), the level of temporal abstraction is high (i.e., options have long durations, thus reducing the number of decision steps), and the optimal policy with options performs as well as the optimal policy using primitive actions. While this result makes the impact of options on the learning performance explicit, the proposed algorithm (UCRL-SMDP, or SUCRL for short) needs prior knowledge of the parameters of the distributions of the cumulative rewards and durations of each option to construct confidence intervals and compute optimistic solutions. In practice this is often a strong requirement, and any incorrect parametrization (e.g., loose upper bounds on the true parameters) directly translates into worse regret performance. Furthermore, even if a hand-designed set of options may come with accurate estimates of its parameters, this would not be possible for automatically generated options, which are of increasing interest to the deep RL community. Finally, this prior work views each option as a distinct and atomic macro-action, thus losing the potential benefit of considering the inner structure of options and the interaction between them, which could be used to significantly improve sample efficiency.

In this paper we remove the limitations of prior theoretical analyses. In particular, we combine the semi-Markov decision process view on options and the intrinsic MDP structure underlying their execution to achieve temporal abstraction without relying on parameters that are typically unknown. We introduce a transformation mapping each option to an associated irreducible Markov chain, and we show that optimistic policies can be computed using only the stationary distributions of the irreducible chains and the SMDP dynamics (i.e., state-to-state transition probabilities through options). This approach does not need to explicitly estimate the cumulative rewards and durations of options or their confidence intervals. We propose two alternative implementations of a general algorithm (Free-SUCRL, or FSUCRL for short) that differ in whether the stationary distributions of the options' irreducible Markov chains and their confidence intervals are computed explicitly or implicitly through an ad-hoc extended value iteration algorithm. We derive regret bounds for FSUCRL that match the regret of SUCRL up to an additional term accounting for the complexity of estimating the stationary distribution of an irreducible Markov chain starting from its transition matrix. This additional regret is the (possibly unavoidable) cost to pay for not having prior knowledge of the options. We complement the theoretical findings with a series of simple grid-world experiments where we compare FSUCRL to SUCRL and UCRL (i.e., learning without options).

2 Preliminaries

Learning in MDPs with options.
A finite MDP is a tuple $M = \langle S, A, p, r \rangle$ where $S$ is the set of states, $A$ is the set of actions, $p(s'|s,a)$ is the probability of transitioning from state $s$ to state $s'$ through action $a$, and $r(s,a)$ is the random reward associated with $(s,a)$, with expectation $\bar r(s,a)$. A deterministic policy $\pi : S \to A$ maps states to actions. We define an option as a tuple $o = (s_o, \pi_o, \beta_o)$, where $s_o \in S$ is the state where the option can be initiated¹, $\pi_o : S \to A$ is the associated stationary Markov policy, and $\beta_o : S \to [0,1]$ is the probability of termination. As proved by Sutton et al. [1], when primitive actions are replaced by a set of options $O$, the resulting decision process is a semi-Markov decision process (SMDP) $M_O = \langle S_O, O_s, p_O, R_O, \tau_O \rangle$, where $S_O \subseteq S$ is the set of states where options can start and end, $O_s$ is the set of options available at state $s$, $p_O(s'|s,o)$ is the probability of terminating in $s'$ when starting $o$ from $s$, $R_O(s,o)$ is the (random) cumulative reward obtained by executing option $o$ from state $s$ until interruption at $s'$, with expectation $\bar R_O(s,o)$, and $\tau_O(s,o)$ is the duration (i.e., the number of actions executed to go from $s$ to $s'$ by following $\pi_o$), with expectation $\bar\tau(s,o)$.² Throughout the rest of the paper, we assume that options are well defined.

¹ Restricting the standard initial set to one state $s_o$ is without loss of generality (see App. A).
² Notice that $R_O(s,o)$ (similarly for $\tau_O$) is well defined only when $s = s_o$, that is, when $o \in O_s$.

Assumption 1. The set of options $O$ is admissible, that is, 1) all options terminate in finite time with probability 1; 2) in all possible terminal states there exists at least one option that can start, i.e., $\bigcup_{o\in O}\{s : \beta_o(s) > 0\} \subseteq \bigcup_{o\in O}\{s_o\}$; 3) the resulting SMDP $M_O$ is communicating.

Lem. 3 in [14] shows that under Asm. 1 the family of SMDPs induced by using options in MDPs is such that, for any option $o$, the distributions of the cumulative reward and the duration are sub-exponential with bounded parameters $(\sigma_r(o), b_r(o))$ and $(\sigma_\tau(o), b_\tau(o))$, respectively. The maximal expected duration is denoted by $\tau_{\max} = \max_{s,o}\{\bar\tau_O(s,o)\}$. Let $t$ denote primitive action steps and let $i$ index decision steps at the option level. The number of decision steps up to (primitive) step $t$ is $N(t) = \max\{n : T_n \le t\}$, where $T_n = \sum_{i=1}^{n}\tau_i$ is the number of primitive steps executed over $n$ decision steps and $\tau_i$ is the (random) number of steps before the termination of the option chosen at step $i$. Under Asm. 1 there exists a policy $\pi^* : S \to O$ over options that achieves the largest gain (per-step reward)

$$\rho^*_O = \max_\pi \rho^\pi_O = \max_\pi \lim_{t\to+\infty} \mathbb{E}\left[\frac{\sum_{i=1}^{N(t)} R_i}{t}\right], \qquad (1)$$

where $R_i$ is the reward cumulated by the option executed at step $i$. The optimal gain also satisfies the optimality equation of an equivalent MDP obtained by data transformation (Lem. 2 in [16]), i.e.,

$$\forall s \in S \quad \rho^*_O = \max_{o\in O_s}\left\{\frac{\bar R_O(s,o)}{\bar\tau_O(s,o)} + \frac{1}{\bar\tau_O(s,o)}\Big(\sum_{s'\in S} p_O(s'|s,o)\,u^*_O(s') - u^*_O(s)\Big)\right\}, \qquad (2)$$

where $u^*_O$ is the optimal bias and $O_s$ is the set of options that can be started in $s$ (i.e., $o \in O_s \Leftrightarrow s_o = s$). In the following sections, we drop the dependency on the option set $O$ from all previous terms whenever clear from the context. Given the optimal average reward $\rho^*_O$, we evaluate the performance of a learning algorithm $\mathcal{A}$ by its cumulative (SMDP) regret over $n$ decision steps, $\Delta(\mathcal{A}, n) = \big(\sum_{i=1}^{n}\tau_i\big)\rho^*_O - \sum_{i=1}^{n} R_i$. A small numerical sketch of solving Eq. 2 when the SMDP quantities are known is given below.
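The data-transformation equation (2) can be solved by relative value iteration on the equivalent MDP. The following is a minimal numpy sketch under the assumption that the SMDP quantities $\bar R$, $\bar\tau$, and $p_O$ are known exactly (the learning algorithms discussed below only have estimates and confidence sets for them); all numbers are toy placeholders.

```python
# Relative value iteration for the data-transformation equation (Eq. 2),
# on a toy SMDP with 2 states and 2 options per state.
import numpy as np

R   = np.array([[1.0, 0.2], [0.5, 0.8]])   # R[s, o]: expected cumulative reward
tau = np.array([[2.0, 1.0], [1.5, 3.0]])   # tau[s, o]: expected duration (>= 1)
p   = np.array([[[0.2, 0.8], [0.9, 0.1]],  # p[s, o, s']: SMDP transitions
                [[0.6, 0.4], [0.3, 0.7]]])

u = np.zeros(2)
for _ in range(2000):
    q = R / tau + (p @ u - u[:, None]) / tau   # bracket of Eq. 2, per (s, o)
    u_next = q.max(axis=1) + u
    u = u_next - u_next[0]                     # relative VI: pin u(s_0) = 0
rho = (R / tau + (p @ u - u[:, None]) / tau).max(axis=1)
print("optimal gain rho* ~=", rho)             # constant across states at optimum
```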
In [14] it is shown that $\Delta(\mathcal{A}, n)$ is equal to the MDP regret up to a linear 'approximation' regret accounting for the difference between the optimal gains of $M$ with primitive actions and of the associated SMDP $M_O$.

3 Parameter-free SUCRL for Learning with Options

Optimism in SUCRL. At each episode, SUCRL runs a variant of extended value iteration (EVI) [17] to solve the 'optimistic' version of the data-transformation optimality equation in Eq. 2, i.e.,

$$\tilde\rho^* = \max_{o\in O_s}\ \max_{\tilde R,\tilde\tau}\left\{\frac{\tilde R(s,o)}{\tilde\tau(s,o)} + \frac{1}{\tilde\tau(s,o)}\Big(\max_{\tilde p}\Big\{\sum_{s'\in S}\tilde p(s'|s,o)\,\tilde u^*(s')\Big\} - \tilde u^*(s)\Big)\right\}, \qquad (3)$$

where $\tilde R$ and $\tilde\tau$ are the vectors of cumulative rewards and durations for all state-option pairs, and they belong to confidence intervals constructed using the parameters $(\sigma_r(o), b_r(o))$ and $(\sigma_\tau(o), b_\tau(o))$ (see Sect. 3 in [14] for the exact expression). Similarly, confidence intervals need to be computed for $\tilde p$, but this does not require any prior knowledge of the SMDP, since the transition probabilities naturally belong to the simplex over states. As a result, without any prior knowledge, the confidence intervals for $\tilde R$ and $\tilde\tau$ cannot be directly constructed and SUCRL cannot be run. In the following, we see how constructing an irreducible Markov chain (MC) associated with each option avoids this problem.

3.1 Irreducible Markov Chains Associated to Options

Options as absorbing Markov chains. A natural way to address SUCRL's limitations is to avoid considering options as atomic operations (as in SMDPs) and instead take their inner (MDP) structure into consideration. Since options terminate in finite time (Asm. 1), they can be seen as an absorbing Markov reward process whose state space contains all states that are reachable by the option and where option terminal states are absorbing states of the MC (see Fig. 1). More formally, for any option $o$ the set of inner states $S_o$ includes the initial state $s_o$ and all states $s$ with $\beta_o(s) < 1$ that are reachable by executing $\pi_o$ from $s_o$ (e.g., $S_o = \{s_0, s_1\}$ in Fig. 1), while the set of absorbing states $S_o^{\mathrm{abs}}$ includes all states with $\beta_o(s) > 0$ (e.g., $S_o^{\mathrm{abs}} = \{s_0, s_1, s_2\}$ in Fig. 1).
This suggests that, given an estimate of Po , we could directly derive the corresponding estimates of R(s, o) and ? (s, o). Following this idea, we could ?propagate? confidence intervals on the entries of Po to obtain confidence intervals on rewards and duration estimates without any prior knowledge on their parameters and thus solve Eq. 3 without any prior knowledge. Nonetheless, intervals on Po do not necessarily translate into compact bounds for R and ? . For example, if the value Veo = 0 belongs to the confidence interval of e o) and ?e(s, o) are Peo (no state in Soabs can be reached), the corresponding optimistic estimates R(s, unbounded and Eq. 3 is ill-defined. Options as irreducible Markov chains. We first notice from Eq. 2 that computing the optimal policy only requires computing the ratio R(s, o)/? (s, o) and the inverse 1/? (s, o). Starting from Po , we can construct an irreducible MC whose stationary distribution is directly related to these terms. We proceed as illustrated in Fig. 1: all terminal states are ?merged? together and their transitions are ?redirected? to the initial state so . More formally, let 1 be the all-one vector of dimension |Soabs |, then vo = Vo 1 ? R|So | contains the cumulative probability to transition from an inner state to any terminal state. Then the chain Po can be transformed into a MC with transition matrix Po0 = [vo Q0o ] ? RSo ?So , where Q0o contains all but the first column of Qo . Po0 is now an irreducible MC as any state can be reached starting from any other state and thus it admits a unique stationary distribution ?o . In order to relate ?o to the optimality equation in Eq. 2, we need an additional assumption on the options. Assumption 2. For any option o ? O, the starting state so is also a terminal state (i.e., ?o (so ) = 1) and any state s0 ? S with ?o (s0 ) < 1 is an inner state (i.e., s0 ? So ). 3 In the following we only focus on the dynamics of the process; similar definitions apply for the rewards. 4 Input: Confidence ? ?]0, 1[, rmax , S, A, O For episodes k = 1, 2, ... do 1. Set ik := i, t = tk and episode counters ?k (s, a) = 0, ?k (s, o) = 0 0 2. Compute estimates pbk (s0 |s, o), Pbo,k , rbk (s, a) and their confidence intervals in Eq. 6 3. Compute an k -approximation of the optimal optimistic policy ? ek of Eq. 5 4. While ?l ? [t + 1, t + ?i ], ?k (sl , al ) < Nk (sl , al ) do (a) Execute option oi = ? ek (si ), obtain primitive rewards ri1 , ..., ri?i and visited states s1i , ..., s?i i = si+1 (b) Set ?k (si , oi ) += 1, i += 1, t += ?i and ?k (s, ?oi (s)) += 1 for all s ? {s1i , ..., s?i i } 5. Set Nk (s, o) += ?k (s, o) and Nk (s, a) += ?k (s, a) Figure 2: The general structure of FSUCRL. While the first part has a very minor impact on the definition of O, the second part of the assumption guarantees that options are ?well designed? as it requires the termination condition to be coherent with the true inner states of the option, so that if ?o (s0 ) < 1 then s0 should be indeed reachable by the option. Further discussion about Asm. 2 is reported in App. A. We then obtain the following property. Lemma 1. Under Asm. 2, let ?o ? [0, 1]So be the unique stationary distribution of the irreducible MC Po0 associated to option o, then 4 ?s ? S, ?o ? Os , 1 = ?o (s) ? (s, o) and X R(s, o) = r(s0 , ?o (s0 ))?o (s0 ). ? (s, o) 0 (4) s ?So This lemma illustrates the relationship between the stationary distribution of Po0 and the key terms in Eq. 2.5 As a result, we can apply Lem. 1 to Eq. 
3 and obtain the optimistic optimality equation ( )  X    e? ? u ?s ? S ?e? = max max reo (s0 ) ? eo (s0 ) + ? eo (s) max e b| u e? (s) , (5) o?Os e o ,e ? ro e bo s0 ?So o where reo (s0 ) = re (s0 , ?o (s0 )) and e bo = (e p(s0 |s, o))s0 ?S . Unlike in the absorbing MC case, where e and ?e, in this forcompact confidence sets for Po may lead to unbounded optimistic estimates for R mulation ?o (s) can be equal to 0 (i.e., infinite duration and cumulative reward) without compromising the solution of Eq. 5. Furthermore, estimating ?o implicitly leverages over the correlation between cumulative reward and duration, which is ignored when estimating R(s, o) and ? (s, o) separately. Finally, we prove the following result. Lemma 2. Let reo ? R, e bo ? P, and ? eo ? M, with R, P, M compact sets containing the true parameters ro , bo and ?o , then the optimality equation in Eq. 5 always admits a unique solution ?e? and ?e? ? ?? (i.e., the solution of Eq. 5 is an optimistic gain). Now, we need to provide an explicit algorithm to compute the optimistic optimal gain ?e? of Eq. 5 and its associated optimistic policy. In the next section, we introduce two alternative algorithms that are guaranteed to compute an -optimistic policy. 3.2 SUCRL with Irreducible Markov Chains The structure of the UCRL-like algorithm for learning with options but with no prior knowledge on distribution parameters (called F REE-SUCRL, or FSUCRL) is reported in Fig. 2. Unlike SUCRL we do not directly estimate the expected cumulative reward and duration of options but we estimate the SMDP transition probabilities p(s0 |s, o), the irreducible MC Po0 associated to each option, and the state-action reward r(s, a). For all these terms we can compute confidence intervals (Hoeffding and empirical Bernstein) without any prior knowledge as 4 5 Notice that since option o is defined in s, then s = so . Furthermore r is the MDP expected reward. Lem. 4 in App. D extends this result by giving an interpretation of ?o (s0 ), ?s0 ? So . 5 s log(SAtk /?) , Nk (s, a) s 0 pk (s0 |s, o) 1 ? pbk (s0 |s, o))ctk ,? 7ctk ,? p(s |s, o) ? pbk (s0 |s, o) ? ? p (s, o, s0 ) ? 2b + , k Nk (s, o) 3Nk (s, o) s 0 0 0 2Pbo,k (s, s0 ) 1 ? Pbo,k (s, s0 ))ctk ,? 7ctk ,? 0 Po (s, s0 ) ? Pbo,k (s, s0 ) ? ?kP (s, o, s0 ) ? + , Nk (s, ?o (s)) 3Nk (s, ?o (s)) r(s, a) ? rbk (s, a) ? ?kr (s, a) ? rmax (6a) (6b) (6c) where Nk (s, a) (resp. Nk (s, o)) is the number of samples collected at state-action s, a (resp. stateoption s, o) up to episode k, Eq. 6a coincides with the one used in UCRL, in Eq. 6b s = so and s0 ? S, and in Eq. 6c s, s0 ? So . Finally, ctk ,? = O (log (|So | log(tk )/?)) [18, Eq. 31]. To obtain an actual implementation of the algorithm reported on Fig. 2 we need to define a procedure to compute an approximation of Eq. 5 (step 3). Similar to UCRL and SUCRL, we define an EVI algorithm starting from a function u0 (s) = 0 and computing at each iteration j ( (  ) ) n o X | 0 0 uj+1 (s) = max max reo (s ) ? eo (s ) + ? eo (s) max e bo uj ? uj (s) +uj (s), (7) o?Os eo ? e bo s0 ?So where reo (s0 ) is the optimistic reward (i.e., estimate plus the confidence bound of Eq. 6a) and the optimistic transition probability vector e bo is computed using the algorithm introduced in [19, App. A] for Bernstein bound as in Eqs. 6b, 6c or in [15, Fig. 2] for Hoeffding bound (see App. B). Depending on whether confidence intervals for ?o are computed explicitly or implicitly we can define two alternative implementations that we present below. b |o = ? 
b |o Pbo0 under bo be the solution of ? Explicit confidence intervals. Given the estimate Pbo0 , let ? b |o e = e. Such a ? b o always exists and is unique since Pbo0 is computed after terminating constraint ? the option at least once and is thus irreducible. The perturbation analysis in [20] can be applied to derive the confidence interval b o k1 ? ?k? (o) := ? k?o ? ? bo,min kPo0 ? Pbo0 k?,1 , (8) where k?k?,1 is the maximum of the `1 -norm of the rows of the transition matrix, ? bo,min is the |So | 6 smallest condition number for the `1 -norm of ?o . Let ?o ? R be such that ?o (so ) = reo (so ) +  | e o in Eq. 7 has the same maxebo e bo uj ? uj (so ) and ?o (s) = reo (s), then the maximum over ? form as the innermost maximum over bo (with Hoeffding bound) and thus we can directly apply b o , ?k? (o), and states So ordered descendingly according to ?o . The Alg. [15, Fig. 2] with parameters ? resulting value is then directly plugged into Eq. 7 and uj+1 is computed. We refer to this algorithm as FSUCRLV 1. Nested extended value iteration. An alternative approach builds on the observation that the maximum over ?o in Eq. 7 can be seen as the optimization of the average reward (gain) ( ) X ? 0 0 ?eo (uj ) = max ?o (s )e ?o (s ) , (9) eo ? s0 ?So where ?o is defined as above. Eq. 9 is indeed the optimal gain of a bounded-parameter MDP with state space So , an action space composed of the option action (i.e., ?o (s)), and transitions Peo0 in the confidence intervals 7 of Eq. 6c, and thus we can write its optimality equation ( ) X ? 0 0 ? 0 ?eo (uj ) = max ?o (s) + Peo (s, s )w eo (s ) ? w eo? (s), (10) e0 P o s0 6 The provably smallest condition number (refer to [21, Th. 2.3]) is the one provided by Seneta [22]: bo ) = maxi,j 1 kZ bo (i, :) ? Z bo (j, :)k1 where Z bo (i, :) is the i-th row of Z bo = (I ? Pbo0 + 1| ? ? bo,min = ?1 (Z bo )?1 . 2 0 7 e The confidence intervals on Po can never exclude a non-zero transition between any two states of So . Therefore, the corresponding bounded-parameter MDP is always communicating and ??o (uj ) is state-independent. 6 where w eo? is an optimal bias. For any input function v we can compute ??o (v) by using EVI on the bounded-parameter MDP, thus avoiding to explicitly construct the confidence intervals of ? eo . As a result, we obtain two nested EVI algorithms where, starting from an initial bias function v0 (s) = 0, 8 o at any iteration j we set the bias function of the inner EVI to wj,0 (s) = 0 and we compute (see App. C.3 for the general EVI for bounded-parameter MDPs and its guarantees) n o o o wj,l+1 (s0 ) = max ?o (s) + Peo (?|s0 )| wj,l , (11) eo P o o until the stopping condition ljo = inf{l ? 0 : sp{wj,l+1 ?wj,l } ? ?j } is met, where (?j )j?0 is a o o ? vanishing sequence. As wj,l+1 ? wj,l converges to ?o (vj ) with l, the outer EVI becomes n o o o o o vj+1 (s) = max g wj,l ? w + vj (s), (12) j,lj j +1 o?Os where g : v 7? 12 (max{v} + min{v}). In App. C.4 we show that this nested scheme, that we call FSUCRLV 2, converges to the solution of Eq. 5. Furthermore, if the algorithm is stopped when sp {vj+1 ? vj } + ?j ? ? then |e ?? ? g(vj+1 ? vj )| ? ?/2. One of the interesting features of this algorithm is its hierarchical structure. Nested EVI is operating on two different time scales by iteratively considering every option as an independent optimistic planning sub-problem (EVI of Eq. 11) and gathering all the results into a higher level planning problem (EVI of Eq. 12). 
This idea is at the core of the hierarchical approach in RL, but it is not always present in the algorithmic structure, while nested EVI naturally arises from decomposing Eq. 7 in two value iteration algorithms. It is also worth to underline that the confidence intervals implicitly generated for ? eo are never worse than those in Eq. 8 and they are often much tighter. In practice the bound of Eq. 8 may be actually worse because of the worst-case scenario considered in the computation of the condition numbers (see Sec. 5 and App. F). 4 Theoretical Analysis Before stating the guarantees for FSUCRL, we recall the definition of diameter of M and MO :     D = max min E ?? (s, s0 ) , DO = max min E ?? (s, s0 ) , 0 0 s,s ?S ?:S?A s,s ?SO ?:S?O where ?? (s, s0 ) is the (random) number of primitive actions to move from s to s0 following policy ?. We also define a pseudo-diameter characterizing the ?complexity? of the inner dynamics of options: ? 1 ? ?? ? e O = r ?? + ? max D ?? where we define:    ? ? r? = max {sp(ro )} , ?1? = max ?1o , ?? = max {? } , and ? = min min ? (s) o ? o o?O o?O o?O o?O s?So with ?1o and ?? o the condition numbers of the irreducible MC associated to options o (for the `1 and `? -norm respectively [20]) and sp(ro ) the span of the reward of the option. In App. D we prove the following regret bound. Theorem 1. Let M be a communicating MDP with reward bounded between 0 and rmax = 1 and let O be a set of options satisfying Asm. 1 and 2 such that ?r (s, o) ? ?r , ?? (s, o) ? ?? , and ? (s, o) ? ?max . We also define BO = maxs,o supp(p(?|s, o)) (resp. B = maxs,a supp(p(?|s, a)) as the largest support of the SMDP (resp. MDP) dynamics. Let Tn be the number of primitive steps executed when running FSUCRLV 2 over n decision steps, then its regret is bounded as   ? ? ? ? e DO SBO On + (?r + ?? ) n + SATn + D e O SBOTn ?(FSUCRL, n) = O | {z } | {z } {z } | ?p ?R,? 8 (13) ?? We use vj instead of uj since the error in the inner EVI directly affects the value of the function at the outer EVI, which thus generates a sequence of functions different from (uj ). 7 Comparison to SUCRL. Using the confidence intervals of Eq. 6b and a slightly tighter analysis than the one by Fruit and Lazaric [14] (Bernstein bounds and higher accuracy for EVI) leads to a regret bound for SUCRL as   ? e ?p + ?R,? + ?r+ + ??+ SAn , (14) ?(SUCRL, n) = O {z } | ?0R,? where ?r+ and ??+ are upper-bounds on ?r and ?? that are used in defining the confidence intervals for ? and R that are actually used in SUCRL. The term ?p is the regret induced by errors in estimating the SMDP dynamics p(s0 |s, o), while ?R,? summarizes ? the randomness in the cumulative reward and duration of options. Both these terms scale as n, thus taking advantage of the temporal abstraction (i.e., the ratio between the number of primitive steps Tn and the decision steps n). The main difference between the two bounds is then in the last term, which accounts for the regret due to the optimistic estimation of the behavior of the options. In SUCRL this regret is linked to the upper bounds on the parameters of R and ? . As shown in Thm.2 in [14], when ?r+ = ?r and ??+ = ?? , the bound of SUCRL is nearly-optimal as it almost matches the lower-bound, thus showing that ?0R,? is unavoidable. In FSUCRL however, the additional regret ?? comes from the estimation errors of the per-time-step rewards ro and the dynamic Po0 . Similar to ?p , these errors are amplified by the e O . While ?? 
may actually be the unavoidable cost to pay for removing the prior pseudo-diameter D e O changes with the structure of the options knowledge about options, it is interesting to analyze how D (see App. E for a concrete example). The probability ?o (s) decreases as the probability of visiting an inner state s ? So using the option policy. In this case, the probability of collecting samples on the inner transitions is low and this leads to large estimation errors for Po0 . These errors are then propagated to the stationary distribution ?o through the condition numbers ? (e.g., ?1o directly follows from an non-empirical version of Eq. 8). Furthermore, we notice that 1/?o (s) ? ?o (s) ? |So |, suggesting that ?long? or ?big? options are indeed more difficult to estimate. On the other hand, ?? becomes smaller whenever the transition probabilities under policy ?o are supported over a few states (B small) and the rewards are similar within the option (sp(ro ) small). While in the worst case ?? may actually be much bigger than ?0R,? when the parameters of R and ? are accurately known (i.e., ??+ ? ?? and ?r+ ? ?r ), in Sect. 5 we show scenarios in which the actual performance of FSUCRL is close or better than SUCRL and the advantage of learning with options is preserved. To explain why FSUCRL can perform better than SUCRL we point out that FSUCRL?s bound is somewhat worst-case w.r.t. the correlation between options. In fact, in Eq. 6c the error in estimating Po0 in a state s does not scale with the number of samples obtained while executing option o but those collected by taking the primitive action prescribed by ?o . This means that even if o has a low probability of reaching s starting from so (i.e., ?o (s) is very small), the true error may still be small as soon as another option o0 executes the same action (i.e., ?o (s) = ?o0 (s)). In this case the regret bound is loose and the actual performance of FSUCRL is much better. Therefore, although it is not apparent in the regret analysis, not only is FSUCRL leveraging on the correlation between the cumulative reward and duration of a single option, but it is also leveraging on the correlation between different options that share inner state-action pairs. ? Comparison to UCRL. We recall that the regret of UCRL is bounded as O(D SBATn ), where Tn is to the total number of steps. As discussed by [14], the major advantage of options is in terms of temporal abstraction (i.e., Tn  n) and reduction of the state-action space (i.e., SO < S and O < A). Eq.(13) also reveals that options can also improve the learning speed by reducing the size of the support BO of the dynamics of the environment w.r.t. primitive actions. This can lead to a huge improvement e.g., when options are designed so as to reach a specific goal. This potential advantage is new compared to [14] and matches the intuition on ?good? options often presented in the literature (see e.g., the concept of ?funnel? actions introduced by Dietterich [23]). Bound for FSUCRLV 1. Bounding the regret of FSUCRLV 1 requires bounding the empirical ? b in Eq. (8) with the true condition number ?. Since ? b tends to ? as the number of samples of the option increases, the overall regret would only be increased by a lower order term. In practice however, FSUCRL V 2 is preferable to FSUCRLV 1 . The latter will suffer from the true condition numbers  ?1o o?O since they are used to compute the confidence bounds on the stationary distributions (?o )o?O , while for FSUCRLV 2 they appear only in the analysis. 
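Both the duration estimate of Lemma 1 and the condition numbers that drive FSUCRLv1 are easy to compute explicitly. The sketch below is a minimal NumPy illustration, assuming a hypothetical 3-state estimated chain P̂'_o and placeholder per-row ℓ1 radii: it computes the stationary distribution (whose entry at the initial state inverts to the option duration, per Lemma 1), Seneta's condition number κ₁ from footnote 6, and the ℓ1 confidence width on μ_o in the spirit of Eq. 8. It is not the exact routine used inside FSUCRLv1.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible row-stochastic matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # mu P = mu and mu 1 = 1
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def seneta_condition_number(P_hat):
    """kappa_1 of footnote 6: half the largest l1 distance between rows of
    Z = (I - P_hat + 1 mu_hat^T)^{-1}."""
    n = P_hat.shape[0]
    mu = stationary(P_hat)
    Z = np.linalg.inv(np.eye(n) - P_hat + np.outer(np.ones(n), mu))
    return 0.5 * max(np.abs(Z[i] - Z[j]).sum()
                     for i in range(n) for j in range(n))

# Hypothetical estimated irreducible chain P'_o (state 0 plays the role of s_o).
P_hat = np.array([[0.2, 0.6, 0.2],
                  [0.5, 0.0, 0.5],
                  [0.9, 0.1, 0.0]])
mu_hat = stationary(P_hat)
print("estimated duration 1/mu(s_o):", 1.0 / mu_hat[0])   # Lemma 1

# Eq. 8: ||mu - mu_hat||_1 <= kappa * ||P' - P_hat'||_{inf,1}; the row-wise
# l1 radii would come from confidence intervals such as Eq. 6c.
row_l1_radius = np.array([0.05, 0.08, 0.04])               # placeholder radii
beta_mu = seneta_condition_number(P_hat) * row_l1_radius.max()
print("l1 confidence width on the stationary distribution:", beta_mu)
```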
Much like the dependency on the diameter in the analysis of UCRL, the condition numbers may also be loose in practice, although tight from a theoretical perspective. See App. D.6 and experiments for further insights.

Figure 3: (Left) Regret after 1.2 × 10^8 steps normalized w.r.t. UCRL for different maximal option durations T_max in a 20×20 grid-world. (Right) Evolution of the regret as T_n increases for a 14×14 four-rooms maze. (The curves compare UCRL, FSUCRLv1, FSUCRLv2, SUCRLv2 and SUCRLv3; only the caption and legend are recoverable from the original plots.)

5 Numerical Simulations
In this section we compare the regret of FSUCRL to SUCRL and UCRL to empirically verify the impact of removing prior knowledge about options and estimating their structure through the irreducible MC transformation. We consider the toy domain presented in [14], which was specifically designed to show the advantage of temporal abstraction, and the classical 4-rooms maze [1]. To be able to reproduce the results of [14], we run our algorithm with Hoeffding confidence bounds for the ℓ1-deviation of the empirical distribution (implying that B_O has no impact). We consider settings where Δ_{R,τ} is the dominating term of the regret (refer to App. F for details). When comparing the two versions of FSUCRL to UCRL on the grid domain (see Fig. 3, left), we empirically observe that the advantage of temporal abstraction is indeed preserved when removing the knowledge of the parameters of the options. This shows that the benefit of temporal abstraction is not just a mere artifact of prior knowledge on the options. Although the theoretical bound in Thm. 1 is always worse than its SMDP counterpart (14), we see that FSUCRL performs much better than SUCRL in our examples. This can be explained by the fact that the options we use greatly overlap. Even if our regret bound does not make explicit the fact that FSUCRL exploits the correlation between options, this can actually significantly impact the result in practice. The two versions of SUCRL differ in the amount of prior knowledge given to the algorithm to construct the parameters σ_r^+ and σ_τ^+ that are used in building the confidence intervals. In v3 we provide a tight upper bound r_max on the rewards and distinct option-dependent parameters for the duration (τ_o and σ_τ(o)); in v2 we only provide global (option-independent) upper bounds on these duration parameters. Unlike FSUCRL, which is "parameter-free", SUCRL is highly sensitive to the prior knowledge about options and can perform even worse than UCRL. A similar behaviour is observed in Fig. 3 (right), where both versions of SUCRL fail to beat UCRL but FSUCRLv2 has nearly half the regret of UCRL. On the contrary, FSUCRLv1 suffers a linear regret due to a loose dependency on the condition numbers (see App. F.2). This shows that the condition numbers appearing in the bound of FSUCRLv2 are actually loose. In both experiments, UCRL and FSUCRL had similar running times, meaning that the improvement in cumulative regret does not come at the expense of computational complexity.

6 Conclusions
We introduced FSUCRL, a parameter-free algorithm to learn in MDPs with options by combining the SMDP view to estimate the transition probabilities at the level of options (p(s'|s, o)) and the MDP structure of options to estimate the stationary distribution of an associated irreducible MC, which allows to compute the optimistic policy at each episode.
The resulting regret matches SUCRL bound up to an additive term. While in general, this additional regret may be large, we show both theoretically and empirically that FSUCRL is actually competitive with SUCRL and it retains the advantage of temporal abstraction w.r.t. learning without options. Since FSUCRL does not require strong prior knowledge about options and its regret bound is partially computable, we believe the results of this paper could be used as a basis to construct more principled option discovery algorithms that explicitly optimize the exploration-exploitation performance of the learning algorithm. 9 Acknowledgments This research was supported in part by French Ministry of Higher Education and Research, Nord-Pasde-Calais Regional Council and French National Research Agency (ANR) under project ExTra-Learn (n.ANR-14-CE24-0010-01). References [1] Richard S. Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1): 181 ? 211, 1999. [2] Amy McGovern and Andrew G. Barto. Automatic discovery of subgoals in reinforcement learning using diverse density. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 361?368, 2001. [3] Ishai Menache, Shie Mannor, and Nahum Shimkin. Q-cut?dynamic discovery of sub-goals in reinforcement learning. In Proceedings of the 13th European Conference on Machine Learning, Helsinki, Finland, August 19?23, 2002, pages 295?306. Springer Berlin Heidelberg, 2002. [4] ?zg?r Sim? ? sek and Andrew G. Barto. Using relative novelty to identify useful temporal abstractions in reinforcement learning. In Proceedings of the Twenty-first International Conference on Machine Learning, ICML ?04, 2004. [5] Pablo Samuel Castro and Doina Precup. Automatic construction of temporally extended actions for mdps using bisimulation metrics. In Proceedings of the 9th European Conference on Recent Advances in Reinforcement Learning, EWRL?11, pages 140?152, Berlin, Heidelberg, 2012. Springer-Verlag. [6] Kfir Y. Levy and Nahum Shimkin. Unified inter and intra options learning using policy gradient methods. In EWRL, volume 7188 of Lecture Notes in Computer Science, pages 153?164. Springer, 2011. [7] Munu Sairamesh and Balaraman Ravindran. Options with exceptions. In Proceedings of the 9th European Conference on Recent Advances in Reinforcement Learning, EWRL?11, pages 165?176, Berlin, Heidelberg, 2012. Springer-Verlag. [8] Timothy Arthur Mann, Daniel J. Mankowitz, and Shie Mannor. Time-regularized interrupting options (TRIO). In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, volume 32 of JMLR Workshop and Conference Proceedings, pages 1350?1358. JMLR.org, 2014. [9] Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 1553?1561. AAAI Press, 2017. [10] Martin Stolle and Doina Precup. Learning options in reinforcement learning. In SARA, volume 2371 of Lecture Notes in Computer Science, pages 212?223. Springer, 2002. [11] Timothy A. Mann and Shie Mannor. Scaling up approximate value iteration with options: Better policies with fewer iterations. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, volume 32 of JMLR Workshop and Conference Proceedings, pages 127?135. 
JMLR.org, 2014. [12] Nicholas K. Jong, Todd Hester, and Peter Stone. The utility of temporal abstraction in reinforcement learning. In The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems, May 2008. [13] Emma Brunskill and Lihong Li. PAC-inspired Option Discovery in Lifelong Reinforcement Learning. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, volume 32 of JMLR Proceedings, pages 316?324. JMLR.org, 2014. [14] Ronan Fruit and Alessandro Lazaric. Exploration?exploitation in mdps with options. In Proceedings of Machine Learning Research, volume 54: Artificial Intelligence and Statistics, 20-22 April 2017, Fort Lauderdale, FL, USA, pages 576?584, 2017. [15] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563?1600, 2010. 10 [16] A. Federgruen, P.J. Schweitzer, and H.C. Tijms. Denumerable undiscounted semi-markov decision processes with unbounded rewards. Mathematics of Operations Research, 8(2):298? 313, 1983. [17] Alexander L. Strehl and Michael L. Littman. An analysis of model-based interval estimation for markov decision processes. Journal of Computer and System Sciences, 74(8):1309?1331, December 2008. [18] Daniel J. Hsu, Aryeh Kontorovich, and Csaba Szepesv?ri. Mixing time estimation in reversible markov chains from a single sample path. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS 15, pages 1459?1467. MIT Press, 2015. [19] Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS 15, pages 2818?2826. MIT Press, 2015. [20] Grace E. Cho and Carl D. Meyer. Comparison of perturbation bounds for the stationary distribution of a markov chain. Linear Algebra and its Applications, 335(1):137 ? 150, 2001. [21] Stephen J. Kirkland, Michael Neumann, and Nung-Sing Sze. On optimal condition numbers for markov chains. Numerische Mathematik, 110(4):521?537, Oct 2008. [22] E. Seneta. Sensitivity of finite markov chains under perturbation. Statistics & Probability Letters, 17(2):163?168, May 1993. [23] Thomas G. Dietterich. Hierarchical reinforcement learning with the maxq value function decomposition. Journal of Artificial Intelligence Research, 13:227?303, 2000. [24] Ronald Ortner. Optimism in the face of uncertainty should be refutable. Minds and Machines, 18(4):521?526, 2008. [25] Pierre Bremaud. Applied Probability Models with Optimization Applications, chapter 3: Recurrence and Ergodicity. Springer-Verlag Inc, Berlin; New York, 1999. [26] Pierre Bremaud. Applied Probability Models with Optimization Applications, chapter 2: Discrete-Time Markov Models. Springer-Verlag Inc, Berlin; New York, 1999. [27] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1994. [28] Peter L. Bartlett and Ambuj Tewari. Regal: A regularization based algorithm for reinforcement learning in weakly communicating mdps. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI ?09, pages 35?42. AUAI Press, 2009. [29] Daniel Paulin. Concentration inequalities for markov chains by marton couplings and spectral methods. Electronic Journal of Probability, 20, 2015. [30] Martin Wainwright. 
Course on Mathematical Statistics, chapter 2: Basic tail and concentration bounds. University of California at Berkeley, Department of Statistics, 2015.
Intersecting regions: The key to combinatorial structure in hidden unit space Janet Wiles Depts of Psychology and Computer Science, University of Queensland QLD 4072 Australia. [email protected] Mark Ollila, Vision Lab, CITRI Dept of Computer Science, University of Melbourne, Vic 3052 Australia [email protected] Abstract Hidden units in multi-layer networks form a representation space in which each region can be identified with a class of equivalent outputs (Elman, 1989) or a logical state in a finite state machine (Cleeremans, Servan-Schreiber & McClelland, 1989; Giles, Sun, Chen, Lee, & Chen, 1990). We extend the analysis of the spatial structure of hidden unit space to a combinatorial task, based on binding features together in a visual scene. The logical structure requires a combinatorial number of states to represent all valid scenes. On analysing our networks, we find that the high dimensionality of hidden unit space is exploited by using the intersection of neighboring regions to represent conjunctions of features. These results show how combinatorial structure can be based on the spatial nature of networks, and not just on their emulation of logical structure. 1 TECHNIQUES FOR ANALYSING THE SPATIAL AND LOGICAL STRUCTURE OF HIDDEN UNIT SPACE In multi-layer networks, regions of hidden unit space can be identified with classes of equivalent outputs. For example, Elman (1989) showed that the hidden unit patterns for words in simple grammatical sentences cluster into regions, with similar patterns representing similar grammatical entities. For example, different tokens of the same word are clustered tightly, indicating that they are represented within a small region. These regions can be grouped into larger regions, reflecting a hierarchical structure. The largest 27 28 Wiles and Ollila groups represent the abstract categories, nouns and verbs. Elman used cluster analysis to demonstrate this hierarchical grouping, and principal component analysis (PCA) to show dimensions of variation in the representation in hidden unit space. An alternative approach to Elman's hierarchical clustering is to identify each region with a functional state. By tracing the trajectories of sequences through the different regions, an equivalent finite state machine (FSM) can be constructed This approach has been described using Reber grammars with simple recurrent networks (Cleeremans, ServanSchreiber & McClelland, 1989) and higher-order networks (Giles, Sun, Chen, Lee, & Chen, 1990). Giles et a1. showed that the logical structure of the grammars is embedded in hidden unit space by identifying each regions with a state, extracting the equivalent finite state machine from the set of states, and then reducing it to the minimal FSM. Oustering and FSM extraction demonstrate different aspects of representations in hidden unit space. Elman showed that regions can be grouped hierarchically and that dimensions of variation can be identified using PCA, emphasizing how the functionality is reflected in the spatial structure. Giles et al. extracted the logical structure of the finite state machine in a way that represented the logical states independently of their spatial embedding. There is an inherent trade off between the spatial and logical analyses: In one sense, the FSM is the idealized version of a grammar, and indeed for the Reber grammars, Giles et a1. found improved performance on the extracted FSMs over the trained networks. However, the states of the FSM increase combinatorially with the size of the input. 
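As a concrete sketch of the cluster-then-extract idea of Giles et al., the snippet below labels hidden unit vectors with k-means clusters and tabulates cluster-to-cluster transitions per input symbol; the argmax of the counts is a candidate deterministic FSM. The data here are random placeholders standing in for recorded activations, and scikit-learn's KMeans stands in for whatever clustering one prefers; this is an illustration of the idea, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# H: hidden-unit activations along one long input sequence (T x d);
# sym: the input symbol at each step. Both are hypothetical placeholders.
rng = np.random.default_rng(0)
H = rng.normal(size=(500, 20))
sym = rng.integers(0, 2, size=500)

k = 6                                  # number of candidate FSM states
state = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(H)

# Count (state, symbol) -> next-state transitions; the argmax table is a
# deterministic FSM candidate in the spirit of Giles et al. (1990).
counts = np.zeros((k, 2, k))
for t in range(len(H) - 1):
    counts[state[t], sym[t], state[t + 1]] += 1
fsm = counts.argmax(axis=2)            # k x 2 next-state table
print(fsm)
```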
If there is information encoded in the hierarchical grouping of regions or the relative spatial arrangement of clusters, the extracted FSM cannot exploit it. The basis of the logical equivalence of an FSM and the hidden unit representations is that disjoint regions of hidden unit space represent separate logical states. In previous work, we reversed the process of identifying clusters with states of an FSM, by using prior knowledge of the minimal FSM to label hidden unit patterns from a network trained on sequences from three temporal functions (Wiles & Bloesch, 1992). Canonical discriminant analysis (CDA, Cliff, 1987) was then used to view the hidden unit patterns clustered into regions that corresponded to the six states of the minimal FSM. In this paper we explore an alternative interpretation of regions. Instead of considering disjoint regions, we view each region as a sub-component lying at the intersection of two or more larger regions. For example, in the three-function simulations, the six clusters can be interpreted in terms of three large regions that identify the three possible temporal functions, overlapping with two large regions that identify the output of the network (see Figure 1). The six states can then be seen as combinations of the three function and two output classes (i.e., 5 large overlapping regions instead of 6 smaller disjoint ones). While the three-function simulation does provide a clear demonstration of the intersecting structure of regions, nonetheless, only six states are required to represent the minimal FSM and harder tasks are needed to demonstrate combinatorial representations.

Figure 1. Intersecting regions in hidden unit space. Hidden unit patterns from the three-function task of Wiles and Bloesch (1992) are shown projected onto the first and third canonical components. Each temporal function, XOR, AND and OR, is represented by a vertical region, separated along the first canonical component. The possible outputs, 0 and 1, are represented by horizontal regions, separated down the third canonical component. The states of the finite state machine are represented by the regions in the intersections of the vertical and horizontal regions. (Adapted from Wiles & Bloesch, 1992, Figure 1b. Only the caption and axis labels survive from the original plot; the horizontal axis is the first canonical component.)

2 SIMULATIONS OF THE CONJUNCTION OF COLOR, SHAPE AND LOCATION
The representation of combinatorial structure is an important aspect of any computational task because of the drastic implications of combinatorial explosion for scaling. The intersection of regions is a concise way to represent all possible combinations of different items. We demonstrate this idea applied to the analysis of a hidden unit space representation of conjunctions of colors, shapes and locations. In our task, a scene consists of zero or more objects, each object identified by its color, shape and location. The number of scenes, C, is given by C = (sf + 1)^l, where s, f and l are the numbers of shapes, features and locations respectively.
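The scene count is easy to verify by direct enumeration. In the sketch below the 12-dimensional one-hot layout (per location: three shape bits, then three color bits) is an assumption made for illustration; the paper does not spell out the exact input coding.

```python
import itertools
import numpy as np

shapes, colors, locations = 3, 3, 2
# An "object" at a location is a (shape, color) pair; None means empty.
object_choices = [None] + list(itertools.product(range(shapes), range(colors)))
scenes = list(itertools.product(object_choices, repeat=locations))
assert len(scenes) == (shapes * colors + 1) ** locations   # C = (sf + 1)^l = 100

def encode(scene):
    """Assumed 12-dim input: per location, one-hot shape (3) then one-hot color (3)."""
    x = np.zeros(locations * (shapes + colors))
    for loc, obj in enumerate(scene):
        if obj is not None:
            s, c = obj
            base = loc * (shapes + colors)
            x[base + s] = 1.0
            x[base + shapes + c] = 1.0
    return x

X = np.stack([encode(sc) for sc in scenes])   # 100 x 12 training matrix
print(X.shape)
```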
This problem illustrates several important components: there is no unique representation of an object in the input or output; each object is represented only by the presence of a shape and color at a given location. The task of the network is to create hidden unit representations for all possible scenes, each containing the features themselves, and the binding of features to position. The simulations involved two locations, three possible shapes and three colors (100 legitimate scenes). A 12-20-12 encoder network was trained on the entire set of scenes and the hidden unit patterns for each scene were recorded. Analysis using CDA with 10 groups designating all possible combinations of zero, one or two colors showed that the hidden unit space was partitioned into intersecting regions corresponding to the three colors or no color (see Figure 2a). CDA was repeated using groups designating all combinations of shapes, which showed an alternative partitioning into four intersecting regions related to the component shapes (see Figure 2b). Figures 2a and 2b show alternate two-dimensional projections of the 20-dimensional space. The analyses showed that each hidden unit pattern was contained in many different groupings, such as all objects that are red, all triangles, or all red triangles. In linguistic terms, each hidden unit pattern corresponds to a token of a feature, and the region containing all tokens of a given group corresponds to its abstract type. The interesting aspect of this representation is that the network had learnt not only how to separate the groups, but also to use overlapping regions. Thus, given a region that represents a circle and one representing a triangle, the intersection of the two regions implies a scene that has both a circle and a triangle. Given suitable groups, the perspectives provided by CDA show many different abstract types within the hidden unit space. For example, scenes can be grouped according to the number of objects in a scene, or the number of squares in a scene. We were initially surprised that contiguous regions exist for representing scenes with zero, one and two objects, since the output units only require representations of individual features, such as square or circle, and not the abstraction to "any shape", or even more abstract, "any object". It seems plausible that the separation of these regions is due to the high dimensionality provided by 12-20-12 mappings. The excess degrees of freedom in hidden unit space can encode variation in the inputs that is not necessarily required to complete the task. With fewer hidden units, we would expect that variation in the input patterns that is not required for completing the task would be compressed or lost under the competing requirement of maximally separating functionally useful groups in the hidden unit space. This explanation found support in a second simulation, using a 12-8-12 encoder network. Whereas analysis of the 12-20-12 network showed separation of patterns into disjoint regions by number of objects, the smaller 12-8-12 network did not. Overall, our analyses showed that as the number of dimensions increases, additional aspects of scenes may be represented, even if those aspects are not required for the task that the network is learning.
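Canonical discriminant analysis projects labeled points onto the directions that best separate the groups; linear discriminant analysis in scikit-learn computes essentially the same projection and serves here as a stand-in for the CDA program of Cliff (1987). The data below are random placeholders for the recorded hidden unit patterns; refitting with shape labels instead of color labels yields the alternative view, as in Figures 2a and 2b.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
hidden = rng.normal(size=(100, 20))           # placeholder 20-d hidden patterns
color_group = rng.integers(0, 10, size=100)   # e.g. the 10 color-combination groups

lda = LinearDiscriminantAnalysis(n_components=2)
proj = lda.fit_transform(hidden, color_group)  # 2-d view analogous to Fig. 2a
# Refitting with shape-based labels gives the alternative projection of Fig. 2b.
print(proj.shape)
```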
Figure 2. CDA plots showing the representations of features in a scene. A scene consists of zero, one or two objects, represented in terms of color, shape and location. 2a. Patterns labelled by color: Hidden unit patterns form ten distinct clusters, which have been grouped into four intersecting regions, 1-4. For example, the hidden unit patterns within region 1 all contain at least one red object, those in region 2 contain at least one blue one, and those in the intersection of regions 1 and 2 contain one red and one blue object. 2b. Patterns labelled by shape: Again the hidden unit patterns form ten distinct clusters, which have been grouped into four intersecting regions; however, these regions represent scenes with the same shape. 2a and 2b show alternate groupings of the same hidden unit space, projected onto different canonical components. The two projections can be combined in the mind's eye (albeit with some difficulty) to form a four-dimensional representation of the spatial structure of intersecting regions of both color and shape. (Legend, 2a: regions 1-4 mark any scene with red, blue, green, or 0-1 colors; clusters A-J mark scenes with red & green, red & blue, one red, two red, green & blue, one green, two green, one blue, two blue, and no objects. Legend, 2b: the analogous groupings by triangle, circle and square. Both panels are plotted against the first canonical component; only the caption and legends survive from the original plots.)

3 THE SPATIAL STRUCTURE OF HIDDEN UNIT SPACE IS ISOMORPHIC TO THE COMBINATORIAL STRUCTURE OF THE VISUAL MAPPING TASK
In conclusion, the simulations demonstrate how combinatorial structure can be embedded in the spatial nature of networks in a way that is isomorphic to the combinatorial structure of the task, rather than by emulation of logical structure. In our approach, the representation of intersecting regions is the key to providing combinatorial representations. If the visual mapping task were extended by a feature specifying the color of the background scene (e.g., blue or green), the number of possible scenes would double, as would the number of states in an FSM. By contrast, in the hidden unit representation, the additional feature would involve adding two more overlapping regions to those currently supported by the spatial structure. This could be implemented by dividing hidden unit space along an unused dimension, orthogonal to the current groups. The task presented in this case study is extremely simplified, in order to expose the intrinsic combinatorial structure required in binding. Despite the simplifications, it does contain elements of tasks that face real cognitive systems.
In the simulations above, individual objects can be clustered by their shape or color, or whole scenes by other properties, such as the number of squares in the scene. These representations provide a concise and easily accessible structure that solves the combinatorial problem of binding several features to one object, in such a way as to represent the individual object, and yet also allow efficient access to its component features. The flexibility of such access processes is one of the main motivations for tensor models of human memory (Humphreys, Bain & Pike, 1989) and analogical reasoning (Halford et al., in press). Our analysis of spatial structure in terms of intersecting regions has a straightforward interpretation in terms of tensors, and provides a basis for future work on network implementations of the tensor memory and analogical reasoning models.

Acknowledgements
We thank Simon Dennis and Steven Phillips for their canonical discriminant program. This work was supported by grants from the Australian Research Council.

References
Cleeremans, A., Servan-Schreiber, D., and McClelland, J.L. (1989). Finite state automata and simple recurrent networks. Neural Computation, 1, 372-381.
Cliff, N. (1987). Analyzing Multivariate Data. Harcourt Brace Jovanovich, Orlando, Florida.
Elman, J. (1989). Representation and structure in connectionist models. CRL Technical Report 8903, Center for Research in Language, University of California, San Diego, 26pp.
Giles, C. L., Sun, G. Z., Chen, H. H., Lee, Y. C., and Chen, D. (1990). Higher order recurrent networks. In D.S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan-Kaufmann, San Mateo, Ca., 380-387.
Halford, G.S., Wilson, W.H., Guo, J., Wiles, J. and Stewart, J.E.M. Connectionist implications for processing capacity limitations in analogies. To appear in K.J. Holyoak & J. Barnden (Eds.), Advances in Connectionist and Neural Computation Theory, Vol 2: Analogical Connections. Norwood, NJ: Ablex, in press.
Humphreys, M.S., Bain, J.D., and Pike, R. (1989). Different ways to cue a coherent memory system: A theory of episodic, semantic and procedural tasks. Psychological Review, 96(2), 208-233.
Wiles, J. and Bloesch, A. (1992). Operators and curried functions: Training and analysis of simple recurrent networks. In J. E. Moody, S. J. Hanson, and R. P. Lippmann (Eds.), Advances in Neural Information Processing Systems 4, Morgan-Kaufmann, San Mateo, Ca.
Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee Alireza Aghasi? Institute for Insight Georgia State University IBM TJ Watson [email protected] Afshin Abdi Department of ECE Georgia Tech [email protected] Nam Nguyen IBM TJ Watson [email protected] Justin Romberg Department of ECE Georgia Tech [email protected] Abstract We introduce and analyze a new technique for model reduction for deep neural networks. While large networks are theoretically capable of learning arbitrarily complex models, overfitting and model redundancy negatively affects the prediction accuracy and model variance. Our Net-Trim algorithm prunes (sparsifies) a trained network layer-wise, removing connections at each layer by solving a convex optimization program. This program seeks a sparse set of weights at each layer that keeps the layer inputs and outputs consistent with the originally trained model. The algorithms and associated analysis are applicable to neural networks operating with the rectified linear unit (ReLU) as the nonlinear activation. We present both parallel and cascade versions of the algorithm. While the latter can achieve slightly simpler models with the same generalization performance, the former can be computed in a distributed manner. In both cases, Net-Trim significantly reduces the number of connections in the network, while also providing enough regularization to slightly reduce the generalization error. We also provide a mathematical analysis of the consistency between the initial network and the retrained model. To analyze the model sample complexity, we derive the general sufficient conditions for the recovery of a sparse transform matrix. For a single layer taking independent Gaussian random vectors as inputs, we show that if the network response can be described using a maximum number of s non-zero weights per node, these weights can be learned from O(s log N ) samples. 1 Introduction With enough layers, neurons in each layer, and a sufficiently large set of training data, neural networks can learn structure of arbitrary complexity [1]. This model flexibility has made the deep neural network a pioneer machine learning tool over the past decade (see [2] for a comprehensive overview). In practice, multi-layer networks often have more parameters than can be reliably estimated from the amount of data available. This gives the training procedure a certain ambiguity ? many different sets of parameter values can model the data equally well, and we risk instabilities due to overfitting. In this paper, we introduce a framework for sparisfying networks that have already been trained using standard techniques. This reduction in the number of parameters needed to specify the network makes it more robust and more computationally efficient to implement without sacrificing performance. ? Corresponding Author 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. In recent years there has been increasing interest in the mathematical understanding of deep networks. These efforts are mainly in the context of characterizing the minimizers of the underlying cost function [3, 4] and the geometry of the loss function [5]. Recently, the analysis of deep neural networks using compressed sensing tools has been considered in [6], where the distance preservability of feedforward networks at each layer is studied. 
There are also works on formulating the training of feedforward networks as an optimization problem [7, 8, 9], where the majority of the works approach their understanding of neural networks by sequentially studying individual layers. Various methods have been proposed to reduce overfitting via regularizing techniques and pruning strategies. These include explicit regularization using `1 and `2 penalties during training [10, 11], and techniques that randomly remove active connections in the training phase (e.g. Dropout [12] and DropConnect [13]) making them more likely to produce sparse networks. There has also been recent works on explicit network compression (e.g., [14, 15, 16]) to remove the inherent redundancies. In what is perhaps the most closely related work to what is presented below, [14] proposes a pruning scheme that simply truncates small weights of an already trained network, and then re-adjusts the remaining active weights using another round of training. These aforementioned techniques are based on heuristics, and lack general performance guarantees that help understand when and how well they work. We present a framework, called Net-Trim, for pruning the network layer-by-layer that is based on convex optimization. Each layer of the net consists of a linear map followed by a nonlinearity; the algorithms and theory presented below use a rectified linear unit (ReLU) applied point-wise to each output of the linear map. Net-trim works by taking a trained network, and then finding the sparsest set of weights for each layer that keeps the output responses consistent with the initial training. More concisely, if Y (`?1) is the input (across the training examples) to layer `, and Y (`) is the output following the ReLU operator, Net-Trim searches for a sparse W such that Y (`) ? ReLU(W ? Y (`?1) ). Using the standard `1 relaxation for sparsity and the fact that the ReLU function is piecewise linear allows us to perform this search by solving a convex program. In contrast to techniques based on thresholding (such as [14]), Net-Trim does not require multiple other time-consuming training steps after the initial pruning. Along with making the computations tractable, Net-Trim?s convex formulation also allows us to derive theoretical guarantees on how far the retrained model is from the initial model, and establish sample complexity arguments about the number of random samples required to retrain a presumably sparse layer. To the best of our knowledge, Net-Trim is the first pruning scheme with such performance guarantees. In addition, it is easy to modify and adapt to other structural constraints on the weights by adding additional penalty terms or introducing additional convex constraints. An illustrative example is shown in Figure 1. Here, 200 points in the 2D plane are used to train a binary classifier. The regions corresponding to each class are nested spirals. We fit a classifier using a simple neural network with two hidden layers with fully connected weights, each consisting 200 neurons. Figure 1(b) shows the weighted adjacency matrix between the layers after training, and then again after Net-Trim is applied. With only a negligible change to the overall network response (panel (a) vs panel (d)), Net-Trim is able to prune more than 93% of the links among the neurons, representing a significant model reduction. 
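For contrast with the convex approach, the truncation step of [14] takes only a few lines. The sketch below zeroes all but the largest-magnitude weights of a randomly generated placeholder layer (the keep fraction mirrors the 93% pruning figure above); the extra round of training that [14] applies afterwards, and that Net-Trim avoids, is not shown.

```python
import numpy as np

def magnitude_prune(W, keep_fraction):
    """Zero all but the largest-magnitude fraction of weights, as in the
    truncate-then-retrain scheme of [14]; retraining is not shown here."""
    flat = np.abs(W).ravel()
    k = max(1, int(keep_fraction * flat.size))
    thresh = np.partition(flat, -k)[-k]        # k-th largest magnitude
    return W * (np.abs(W) >= thresh)

W = np.random.default_rng(2).normal(size=(200, 200))
W_pruned = magnitude_prune(W, keep_fraction=0.07)   # ~93% of links removed
print((W_pruned != 0).mean())
```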
Even when the neural network is trained using sparsifying weight regularizers (here, Dropout [12] and an $\ell_1$ penalty), Net-Trim produces a model which is over 7 times sparser than the initial one, as presented in panel (c). The numerical experiments in Section 6 show that these kinds of results are not limited to toy examples; Net-Trim achieves significant compression ratios on large networks trained on real data sets.
The remainder of the paper is structured as follows. In Section 2, we formally present the network model used in the paper. The proposed pruning schemes, both the parallel and cascade Net-Trim, are presented and discussed in Section 3. Section 4 is devoted to the convex analysis of the proposed framework and its sample complexity. The implementation details of the proposed convex scheme are presented in Section 5. Finally, in Section 6, we report some retraining experiments using Net-Trim and conclude the paper by presenting some general remarks. Along with some extended discussions, the proofs of all of the theoretical statements in the paper are presented as a supplementary note (specifically, §4 of the notes is devoted to the technical proofs).
Figure 1: Net-Trim pruning performance; (a) initial trained model; (b) the weighted adjacency matrix relating the two hidden layers before (left) and after (right) the application of Net-Trim; (c) left: the adjacency matrix after training the network with Dropout and $\ell_1$ regularization; right: after retraining via Net-Trim; (d) the retrained classifier
We very briefly summarize the notation used below. For a matrix $A$, we use $A_{\Omega_1,\cdot}$ to denote the submatrix formed by restricting the rows of $A$ to the index set $\Omega_1$. Similarly, $A_{\cdot,\Omega_2}$ is the submatrix of columns indexed by $\Omega_2$, and $A_{\Omega_1,\Omega_2}$ is formed by extracting both rows and columns. For an $M \times N$ matrix $X$ with entries $x_{m,n}$, we use² $\|X\|_1 \triangleq \sum_{m=1}^{M}\sum_{n=1}^{N}|x_{m,n}|$ and $\|X\|_F$ as the Frobenius norm. For a vector $x$, $\|x\|_0$ is the cardinality of $x$, $\mathrm{supp}\,x$ is the set of indexes with non-zero entries, and $\mathrm{supp}^c x$ is the complement set. We will use the notation $x^+$ as shorthand for $\max(x, 0)$, where $\max(\cdot, 0)$ is applied to vectors and matrices component-wise. Finally, the vertical concatenation of two vectors $a$ and $b$ is denoted by $[a; b]$.

2 Feedforward Network Model
In this section, we introduce some notational conventions related to a feedforward network model. We assume that we have $P$ training samples $x_p$, $p = 1, \dots, P$, where $x_p \in \mathbb{R}^N$ is an input to the network. We stack up these samples into a matrix $X \in \mathbb{R}^{N \times P}$, structured as $X = [x_1, \dots, x_P]$. Considering $L$ layers for the network, the output of the network at the final layer is denoted by $Y^{(L)} \in \mathbb{R}^{N_L \times P}$, where each column in $Y^{(L)}$ is a response to the corresponding training column in $X$. The network activations are taken to be rectified linear units. The output of the $\ell$-th layer is $Y^{(\ell)} \in \mathbb{R}^{N_\ell \times P}$, generated by applying the adjoint of the weight matrix $W_\ell \in \mathbb{R}^{N_{\ell-1} \times N_\ell}$ to the output of the previous layer $Y^{(\ell-1)}$ and then applying a component-wise $\max(\cdot, 0)$ operation:
$$Y^{(\ell)} = \max\big(W_\ell^T Y^{(\ell-1)}, 0\big), \quad \ell = 1, \dots, L, \tag{1}$$
where $Y^{(0)} = X$ and $N_0 = N$. A trained neural network as outlined in (1) is represented by $\mathcal{TN}(\{W_\ell\}_{\ell=1}^{L}, X)$. For the sake of theoretical analysis, all the results presented in this paper are stated for link-normalized networks, where $\|W_\ell\|_1 = 1$ for every layer $\ell = 1, \dots, L$.
² The notation $\|X\|_1$ should not be confused with the matrix induced $\ell_1$ norm.
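As a concrete illustration, a minimal NumPy sketch of the forward model (1) and of link normalization follows; the function names and array shapes are our own conventions, not part of the paper:

```python
import numpy as np

def forward(X, Ws):
    """Layer outcomes of Eq. (1): Y^(l) = max(W_l^T Y^(l-1), 0), with Y^(0) = X."""
    Ys = [X]
    for W in Ws:
        Ys.append(np.maximum(W.T @ Ys[-1], 0.0))
    return Ys

def link_normalize(Ws):
    """Rescale every layer so that ||W_l||_1 (sum of absolute entries) equals 1."""
    return [W / np.abs(W).sum() for W in Ws]
```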
Such presentation is with no loss of generality, as any network in the form of (1) can be converted to its link-normalized version by replacing $W_\ell$ with $W_\ell/\|W_\ell\|_1$, and $Y^{(\ell+1)}$ with $Y^{(\ell+1)}/\prod_{j=0}^{\ell}\|W_j\|_1$. Since $\max(\alpha x, 0) = \alpha \max(x, 0)$ for $\alpha > 0$, any weight processing on a network of the form (1) can be applied to the link-normalized version and later transferred to the original domain via a suitable scaling.

3 Convex Pruning of the Network
Our pruning strategy relies on redesigning the network so that, for the same training data, the layer outcomes stay more or less close to the initial trained model, while the weights associated with each layer are replaced with sparser versions to reduce the model complexity. Figure 2 presents the main idea, where the complex paths between the layer outcomes are replaced with simple paths. In a sense, if we consider each layer response to the transmitted data as a checkpoint, Net-Trim assures the checkpoints remain roughly the same, while a simpler path between the checkpoints is discovered.
Figure 2: The main retraining idea: keeping the layer outcomes close to the initial trained model while finding a simpler path relating each layer input to the output
Consider the first layer, where $X = [x_1, \dots, x_P]$ is the layer input, $W = [w_1, \dots, w_M]$ the layer coefficient matrix, and $Y = [y_{m,p}]$ the layer outcome. We require the new coefficient matrix $\hat W$ to be sparse and the new response to be close to $Y$. Using the sum of absolute entries as a proxy to promote sparsity, a natural strategy to retrain the layer is addressing the nonlinear program
$$\hat W = \arg\min_U \|U\|_1 \quad \text{s.t.} \quad \big\|\max(U^T X, 0) - Y\big\|_F \le \epsilon. \tag{2}$$
Despite the convex objective, the constraint set in (2) is non-convex. However, we may approximate it with a convex set by imposing $Y$ and $\hat Y = \max(\hat W^T X, 0)$ to have similar activation patterns. More specifically, knowing that $y_{m,p}$ is either zero or positive, we enforce the $\max(\cdot, 0)$ argument to be negative when $y_{m,p} = 0$, and close to $y_{m,p}$ elsewhere. To present the convex formulation, for $V = [v_{m,p}]$, throughout the paper we use the notation $U \in C_\epsilon(X, Y, V)$ to present the constraint set
$$C_\epsilon(X, Y, V) = \Big\{ U : \sum_{m,p:\, y_{m,p} > 0} \big(u_m^T x_p - y_{m,p}\big)^2 \le \epsilon^2; \;\; u_m^T x_p \le v_{m,p} \;\text{ for } m, p:\, y_{m,p} = 0 \Big\}. \tag{3}$$
Based on this definition, a convex proxy to (2) is
$$\hat W = \arg\min_U \|U\|_1 \quad \text{s.t.} \quad U \in C_\epsilon(X, Y, 0). \tag{4}$$
Basically, depending on the value of $y_{m,p}$, a different constraint is imposed on $u_m^T x_p$ to emulate the ReLU operation. As a first observation towards establishing a retraining framework, we show that the solution of (4) is consistent with the desired constraint in (2), as follows.
Proposition 1. Let $\hat W$ be the solution to (4). For $\hat Y = \max(\hat W^T X, 0)$ being the retrained layer response, $\|\hat Y - Y\|_F \le \epsilon$.

3.1 Parallel and Cascade Net-Trim
Based on the above exploration, we propose two schemes to retrain a neural network; one exploits a computationally distributable structure and the other proposes a cascading scheme to retrain the layers sequentially. The general idea, which originates from the relaxation in (4), is referred to as Net-Trim, specified by the parallel or cascade nature. The parallel Net-Trim is a straightforward application of the convex program (4) to each layer in the network.
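For illustration only, a minimal CVXPY sketch of a TRIM step solving (4) with the masked form of $C_\epsilon(X, Y, V)$ above; this is our own rendering, not the ADMM solver the authors actually use (Section 5):

```python
import numpy as np
import cvxpy as cp

def trim(X, Y, V, eps):
    """Sketch of TRIM(X, Y, V, eps): min ||U||_1 s.t. U in C_eps(X, Y, V)."""
    U = cp.Variable((X.shape[0], Y.shape[0]))
    R = U.T @ X                               # candidate pre-activations, M x P
    on = (Y > 0).astype(float)                # active ReLU pattern of the trained layer
    constraints = [
        cp.sum_squares(cp.multiply(on, R - Y)) <= eps ** 2,  # fit the active outputs
        cp.multiply(1.0 - on, R - V) <= 0.0,                 # keep inactive outputs off
    ]
    cp.Problem(cp.Minimize(cp.sum(cp.abs(U))), constraints).solve()
    return U.value
```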
Basically, each layer is processed independently based on the initial model input and output, without taking into account the retraining result from the previous layer. Specifically, denoting $Y^{(\ell-1)}$ and $Y^{(\ell)}$ as the input and output of the $\ell$-th layer of the initially trained neural network (see equation (1)), we propose to relearn the coefficient matrix $W_\ell$ via the convex program
$$\hat W_\ell = \arg\min_U \|U\|_1 \quad \text{s.t.} \quad U \in C_\epsilon\big(Y^{(\ell-1)}, Y^{(\ell)}, 0\big). \tag{5}$$
The optimization in (5) can be independently applied to every layer in the network and is hence computationally distributable. Algorithm 1 presents the pseudocode for the parallel Net-Trim. In this pseudocode, we use TRIM$(X, Y, V, \epsilon)$ as a function which returns the solution to a program like (4) with the constraint $U \in C_\epsilon(X, Y, V)$. With reference to the constraint in (5), if we only retrain the $\ell$-th layer, the output of the retrained layer is in the $\epsilon$-neighborhood of that before retraining. However, when all the layers are retrained through (5), an immediate question would be whether the retrained network produces an output which is controllably close to the initially trained model. In the following theorem, we show that the retraining error does not blow up across the layers and remains a multiple of $\epsilon$.
Theorem 1. Let $\mathcal{TN}(\{W_\ell\}_{\ell=1}^{L}, X)$ be a link-normalized trained network with layer outcomes $Y^{(\ell)}$ described by (1). Form the retrained network $\mathcal{TN}(\{\hat W_\ell\}_{\ell=1}^{L}, X)$ by solving the convex programs (5), with $\epsilon = \epsilon_\ell$ at each layer. Then the retrained layer outcomes $\hat Y^{(\ell)} = \max(\hat W_\ell^T \hat Y^{(\ell-1)}, 0)$ obey $\|\hat Y^{(\ell)} - Y^{(\ell)}\|_F \le \sum_{j=1}^{\ell} \epsilon_j$.
When all the layers are retrained with a fixed parameter $\epsilon$ (as in Algorithm 1), a corollary of the theorem above would bound the overall discrepancy as $\|\hat Y^{(L)} - Y^{(L)}\|_F \le L\epsilon$.
In a cascade Net-Trim, unlike the parallel scheme where each layer is retrained independently, the outcome of a retrained layer is probed into the program retraining the next layer. More specifically, having the first layer processed via (4), one would ideally seek to address (5) with the modified constraint $U \in C_\epsilon(\hat Y^{(\ell-1)}, Y^{(\ell)}, 0)$ to retrain the subsequent layers. However, as detailed in §1 of the supplementary note, such a program is not necessarily feasible and needs to be sufficiently slacked to warrant feasibility. In this regard, for every subsequent layer, $\ell = 2, \dots, L$, the retrained weighting matrix, $\hat W_\ell$, is obtained via
$$\min_U \|U\|_1 \quad \text{s.t.} \quad U \in C_{\epsilon_\ell}\big(\hat Y^{(\ell-1)}, Y^{(\ell)}, W_\ell^T \hat Y^{(\ell-1)}\big), \tag{6}$$
where, for $W_\ell = [w_{\ell,1}, \dots, w_{\ell,N_\ell}]$ and $\gamma_\ell \ge 1$,
$$\epsilon_\ell^2 = \gamma_\ell \sum_{m,p:\, y^{(\ell)}_{m,p} > 0} \big(w_{\ell,m}^T \hat y_p^{(\ell-1)} - y^{(\ell)}_{m,p}\big)^2. \tag{7}$$
The constants $\gamma_\ell \ge 1$ (referred to as the inflation rates) are free parameters, which control the sparsity of the resulting matrices. In the following theorem, we prove that the outcome of the retrained network produced by Algorithm 2 is close to that of the network before retraining.
Theorem 2. Let $\mathcal{TN}(\{W_\ell\}_{\ell=1}^{L}, X)$ be a link-normalized trained network with layer outcomes $Y^{(\ell)}$. Form the retrained network $\mathcal{TN}(\{\hat W_\ell\}_{\ell=1}^{L}, X)$ by solving (5) for the first layer and (6) for the subsequent layers with $\epsilon_\ell$ as in (7), $\hat Y^{(\ell)} = \max(\hat W_\ell^T \hat Y^{(\ell-1)}, 0)$, $\hat Y^{(1)} = \max(\hat W_1^T X, 0)$ and $\gamma_\ell \ge 1$. Then the outputs $\hat Y^{(\ell)}$ of the retrained network will obey $\|\hat Y^{(\ell)} - Y^{(\ell)}\|_F \le \epsilon_1 \big(\prod_{j=2}^{\ell} \gamma_j\big)^{1/2}$.
Algorithm 2 presents the pseudo-code to implement the cascade Net-Trim for a link-normalized network with $\epsilon_1 = \epsilon$ and a constant inflation rate, $\gamma$, across all the layers. In such a case, a corollary of Theorem 2 bounds the network overall discrepancy as $\|\hat Y^{(L)} - Y^{(L)}\|_F \le \epsilon\, \gamma^{(L-1)/2}$.
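Algorithms 1 and 2 below give the formal pseudocode; as a rough Python rendering of the two schemes (assuming the hypothetical trim() sketch from above; function names are ours), the only differences are where each layer's input and slack come from:

```python
import numpy as np

def parallel_net_trim(X, Ws, eps):
    """Algorithm 1 sketch: every layer retrained independently via Eq. (5)."""
    Ys = [X]
    for W in Ws:                                    # initial layer outcomes, Eq. (1)
        Ys.append(np.maximum(W.T @ Ys[-1], 0.0))
    return [trim(Ys[l], Ys[l + 1], 0.0, eps) for l in range(len(Ws))]

def cascade_net_trim(X, Ws, eps, gamma):
    """Algorithm 2 sketch: retrained outputs are fed forward, Eqs. (6)-(7)."""
    Y = np.maximum(Ws[0].T @ X, 0.0)                # original first-layer outcome
    W_hat = [trim(X, Y, 0.0, eps)]
    Y_hat = np.maximum(W_hat[0].T @ X, 0.0)
    for W in Ws[1:]:
        Y = np.maximum(W.T @ Y, 0.0)                # target from the original trajectory
        on = Y > 0
        eps_l = np.sqrt(gamma * np.sum(((W.T @ Y_hat) - Y)[on] ** 2))  # Eq. (7)
        W_hat.append(trim(Y_hat, Y, W.T @ Y_hat, eps_l))               # slacked TRIM
        Y_hat = np.maximum(W_hat[-1].T @ Y_hat, 0.0)
    return W_hat
```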
We would like to note that focusing on a link-normalized network is only for the sake of presenting the theoretical results in a more compact form. In practice, such conversion is not necessary: to retrain layer $\ell$ in the parallel Net-Trim we can take $\epsilon = \epsilon_r \|Y^{(\ell)}\|_F$, and use $\epsilon = \epsilon_r \|Y^{(1)}\|_F$ for the cascade case, where $\epsilon_r$ plays a similar role as $\epsilon$ for a link-normalized network. Moreover, as detailed in §2 of the supplementary note, Theorems 1 and 2 identically apply to the practical networks that follow (1) for the first $L - 1$ layers and skip an activation at the last layer.

Algorithm 1 Parallel Net-Trim
1: Input: $X$, $\epsilon > 0$, and normalized $W_1, \dots, W_L$
2: $Y^{(0)} \leftarrow X$
3: for $\ell = 1, \dots, L$ do  % generating initial layer outcomes
4:   $Y^{(\ell)} \leftarrow \max(W_\ell^T Y^{(\ell-1)}, 0)$
5: end for
6: for all $\ell = 1, \dots, L$ do  % retraining
7:   $\hat W_\ell \leftarrow$ TRIM$(Y^{(\ell-1)}, Y^{(\ell)}, 0, \epsilon)$
8: end for
9: Output: $\hat W_1, \dots, \hat W_L$

Algorithm 2 Cascade Net-Trim
1: Input: $X$, $\epsilon > 0$, $\gamma > 1$ and normalized $W_1, \dots, W_L$
2: $Y \leftarrow \max(W_1^T X, 0)$
3: $\hat W_1 \leftarrow$ TRIM$(X, Y, 0, \epsilon)$
4: $\hat Y \leftarrow \max(\hat W_1^T X, 0)$
5: for $\ell = 2, \dots, L$ do
6:   $Y \leftarrow \max(W_\ell^T Y, 0)$
7:   $\epsilon \leftarrow \big(\gamma \sum_{m,p:\, y_{m,p} > 0} (w_{\ell,m}^T \hat y_p - y_{m,p})^2\big)^{1/2}$  % $w_{\ell,m}$ is the $m$-th column of $W_\ell$
8:   $\hat W_\ell \leftarrow$ TRIM$(\hat Y, Y, W_\ell^T \hat Y, \epsilon)$
9:   $\hat Y \leftarrow \max(\hat W_\ell^T \hat Y, 0)$
10: end for
11: Output: $\hat W_1, \dots, \hat W_L$

4 Convex Analysis and Sample Complexity
In this section, we derive a sampling theorem for a single-layer, redundant network. Here, there are many sets of weights that can induce the observed outputs given the input vectors. This scenario might arise when the number of training samples used to train a (large) network is small (smaller than the network degrees of freedom). We will show that when the inputs into the layer are independent Gaussian random vectors, if there is a sparse set of weights that can generate the output, then with high probability, the Net-Trim program in (4) will find them.
As noted above, in the case of a redundant layer, for a given input $X$ and output $Y$, the relation $Y = \max(W^T X, 0)$ can be established via more than one $W$. In this case, we hope to find a sparse $W$ by setting $\epsilon = 0$ in (4). For this value of $\epsilon$, our central convex program decouples into $M$ convex programs, each searching for the $m$-th column in $\hat W$:
$$\hat w_m = \arg\min_w \|w\|_1 \quad \text{s.t.} \quad \begin{cases} w^T x_p = y_{m,p} & p:\, y_{m,p} > 0 \\ w^T x_p \le 0 & p:\, y_{m,p} = 0 \end{cases}. \tag{8}$$
By dropping the $m$ index and introducing the slack variable $s$, program (8) can be cast as
$$\min_{w,s} \|w\|_1 \quad \text{s.t.} \quad \tilde X^T \begin{bmatrix} w \\ s \end{bmatrix} = \tilde y, \quad s \succeq 0, \tag{9}$$
where
$$\tilde X = \begin{bmatrix} X_{\cdot,\Omega} & X_{\cdot,\Omega^c} \\ 0 & -I \end{bmatrix}, \quad \tilde y = \begin{bmatrix} y_\Omega \\ 0 \end{bmatrix},$$
and $\Omega = \{p : y_p > 0\}$. For a general $\tilde X$, not necessarily structured as above, the following result states the sufficient conditions under which a sparse pair $(w^\star, s^\star)$ is the unique minimizer to (9).
Proposition 2. Consider a pair $(w^\star, s^\star) \in (\mathbb{R}^{n_1}, \mathbb{R}^{n_2})$, which is feasible for the convex program (9). If there exists a vector $\nu = [\nu_\ell] \in \mathbb{R}^{n_1+n_2}$ in the range of $\tilde X$ with entries satisfying
$$\begin{cases} \nu_\ell = \mathrm{sign}(w^\star_\ell) & \ell \in \mathrm{supp}\, w^\star \\ -1 < \nu_\ell < 1 & \ell \in \mathrm{supp}^c\, w^\star \end{cases}, \qquad \begin{cases} \nu_{n_1+\ell} = 0 & \ell \in \mathrm{supp}\, s^\star \\ \nu_{n_1+\ell} > 0 & \ell \in \mathrm{supp}^c\, s^\star \end{cases} \tag{10}$$
and for $\tilde\Omega = \mathrm{supp}\, w^\star \cup \{n_1 + \mathrm{supp}\, s^\star\}$ the restricted matrix $\tilde X_{\tilde\Omega,\cdot}$ is full column rank, then the pair $(w^\star, s^\star)$ is the unique solution to (9).
The proposed optimality result can be related to the unique identification of a sparse $w^\star$ from rectified observations of the form $y = \max(X^T w^\star, 0)$. Clearly, the structure of the feature matrix $X$ plays the key role here, and the construction of the dual certificate stated in Proposition 2 entirely relies on this.
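Before specializing to the Gaussian case, a quick numerical companion to program (8) can be helpful: the sketch below draws a Gaussian feature matrix, generates rectified observations from an s-sparse ground truth, and solves (8) with CVXPY. The sizes and the CVXPY rendering are illustrative choices of ours, not taken from the paper:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, P, s = 60, 300, 5                         # illustrative sizes
w_star = np.zeros(N)
w_star[:s] = rng.standard_normal(s)          # s-sparse ground truth
X = rng.standard_normal((N, P))              # Gaussian feature matrix
y = np.maximum(X.T @ w_star, 0.0)            # rectified observations

w = cp.Variable(N)
on = y > 0
prob = cp.Problem(cp.Minimize(cp.norm1(w)),
                  [X[:, on].T @ w == y[on],   # equality on active samples
                   X[:, ~on].T @ w <= 0.0])   # nonpositive on the rest
prob.solve()
print(np.linalg.norm(w.value - w_star))      # near zero when P is large enough
```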
As an insightful case, we show that when $X$ is a Gaussian matrix (that is, the elements of $X$ are i.i.d. values drawn from a standard normal distribution), for a sufficiently large number of samples the dual certificate can be constructed. As a result, we can warrant that learning $w^\star$ can be performed with much fewer samples than the layer degrees of freedom.
Theorem 3. Let $w^\star \in \mathbb{R}^N$ be an arbitrary $s$-sparse vector, $X \in \mathbb{R}^{N \times P}$ a Gaussian matrix representing the samples, and $\mu > 1$ a fixed value. Given $P = (15s + 6)\mu \log N$ observations of the type $y = \max(X^T w^\star, 0)$, with probability exceeding $1 - N^{1-\mu}$ the vector $w^\star$ can be learned exactly through (8).
The standard Gaussian assumption for the feature matrix $X$ allows us to relate the number of training samples to the number of active links in a layer. Such feature structure could be a realistic assumption for the first layer of the neural network. As reflected in the proof of Theorem 3, because of the dependence of the set $\Omega$ on the entries of $X$, we need to take a nontrivial analysis path different from the standard concentration of measure arguments for the sum of independent random matrices. In fact, the proof requires establishing concentration bounds for the sum of dependent random matrices.
While we focused on each column of $W^\star$ individually, for the observations $Y = \max(W^{\star T} X, 0)$, using the union bound, an exact identification of $W^\star$ can be warranted as a corollary of Theorem 3.
Corollary 1. Consider an arbitrary matrix $W^\star = [w^\star_1, \dots, w^\star_M] \in \mathbb{R}^{N \times M}$, where $s_m = \|w^\star_m\|_0$ and $0 < s_m \le s_{\max}$ for $m = 1, \dots, M$. For $X \in \mathbb{R}^{N \times P}$ being a Gaussian matrix, set $Y = \max(W^{\star T} X, 0)$. If $\mu > (1 + \log_N M)$ and $P = (15 s_{\max} + 6)\mu \log N$, for $\epsilon = 0$, $W^\star$ can be accurately learned through (4) with probability exceeding $1 - \sum_{m=1}^{M} N^{(1-\mu)\frac{15 s_m + 6}{15 s_{\max} + 6}}$.
It can be shown that for the network model in (1), probing the network with an i.i.d. sample matrix $X$ would generate subgaussian random matrices with independent columns as the subsequent layer outcomes. Under certain well-conditioning of the input covariance matrix of each layer, results similar to Theorem 3 are extendable to the subsequent layers. While such results are left for a more extended presentation of the work, Theorem 3 is brought here as a good reference for the general performance of the proposed retraining scheme and the associated analysis theme.

5 Implementing the Convex Program
If the quadratic constraint in (3) is brought to the objective via a regularization parameter $\lambda$, the resulting convex program decouples into $M$ smaller programs of the form
$$\hat w_m = \arg\min_u \|u\|_1 + \lambda \sum_{p:\, y_{m,p} > 0} \big(u^T x_p - y_{m,p}\big)^2 \quad \text{s.t.} \quad u^T x_p \le v_{m,p}, \;\text{ for } p:\, y_{m,p} = 0, \tag{11}$$
each recovering a column of $\hat W$. Such decoupling of the regularized form is computationally attractive, since it makes the trimming task extremely distributable among parallel processing units by recovering each column of $\hat W$ on a separate unit. Addressing the original constrained form (4) in a fast and scalable way requires using more complicated techniques, which is left to a more extended presentation of the work. We can formulate the program in a standard form by introducing the index sets
$$\Omega_m = \{p : y_{m,p} > 0\}, \quad \Omega_m^c = \{p : y_{m,p} = 0\}.$$
Denoting the $m$-th row of $Y$ by $\bar y_m$ and the $m$-th row of $V$ by $\bar v_m$, one can equivalently rewrite (11) in terms of $u$ as
$$\min_u \|u\|_1 + u^T Q_m u + 2 q_m^T u \quad \text{s.t.} \quad P_m u \le c_m, \tag{12}$$
where
$$Q_m = \lambda X_{\cdot,\Omega_m} X_{\cdot,\Omega_m}^T, \quad q_m = -\lambda X_{\cdot,\Omega_m}\, \bar y_{m,\Omega_m} = -\lambda X \bar y_m, \quad P_m = X_{\cdot,\Omega_m^c}^T, \quad c_m = \bar v_{m,\Omega_m^c}. \tag{13}$$
The $\ell_1$ term in the objective of (12) can be converted into a linear term by defining a new vector $\tilde u = [u^+; -u^-]$, where $u^- = \min(u, 0)$. This variable change naturally yields
$$\|u\|_1 = \mathbf{1}^T \tilde u, \quad u = [I, -I]\,\tilde u.$$
The convex program (12) is now cast as the standard quadratic program
$$\min_{\tilde u} \; \tilde u^T \tilde Q_m \tilde u + (\mathbf{1} + 2\tilde q_m)^T \tilde u \quad \text{s.t.} \quad \begin{bmatrix} \tilde P_m \\ -I \end{bmatrix} \tilde u \le \begin{bmatrix} c_m \\ 0 \end{bmatrix}, \tag{14}$$
where
$$\tilde Q_m = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \otimes Q_m, \quad \tilde q_m = \begin{bmatrix} q_m \\ -q_m \end{bmatrix}, \quad \tilde P_m = [P_m \;\; -P_m].$$
Once $\tilde u^\star_m$, the solution to (14), is found, the solution to (11) can be recovered via $\hat w_m = [I, -I]\,\tilde u^\star_m$. Aside from the variety of convex solvers that can be used to address (14), we are specifically interested in using the alternating direction method of multipliers (ADMM). In fact, the main motivation to translate (11) into (14) is the availability of ADMM implementations for problems in the form of (14) that are reasonably fast and scalable (e.g., see [17]). The authors have made the implementation publicly available online.³

6 Experiments and Discussions
Aside from the major technical contribution of the paper in providing a theoretical understanding of the Net-Trim pruning process, in this section we present some experiments to highlight its performance against the state-of-the-art techniques.
³ The code for the regularized Net-Trim implementation using the ADMM scheme can be accessed online at: https://github.com/DNNToolBox/Net-Trim-v1
The first set of experiments, associated with the example presented in the introduction (classification of 2D points on nested spirals), compares the Net-Trim pruning power against the standard pruning strategies of $\ell_1$ regularization and Dropout. The experiments demonstrate how Net-Trim can significantly improve the pruning level of a given network and produce simpler and more understandable networks. We also compare the cascade Net-Trim against the parallel scheme. As could be expected, for a fixed level of discrepancy between the initial and retrained models, the cascade scheme is capable of producing sparser networks. However, the computational distributability of the parallel scheme makes it a more favorable approach for large-scale and big-data problems. Due to the space limitation, these experiments are moved to §3 of the supplementary note.
We next apply Net-Trim to the problem of classifying hand-written digits of the mixed national institute of standards and technology (MNIST) dataset. The set contains 60,000 training samples and 10,000 test instances. To examine different settings, we consider 6 networks: NN2-10K, which is a 784×300×300×10 network (two hidden layers of 300 nodes) trained with 10,000 samples; NN3-30K, a 784×300×500×300×10 network trained with 30,000 samples; and NN3-60K, a 784×300×1000×300×10 network trained with 60,000 samples. We also consider CNN-10K, CNN-30K and CNN-60K, which are topologically identical convolutional networks trained with 10,000, 30,000 and 60,000 samples, respectively. The convolutional networks contain two convolutional layers composed of 32 filters of size 5×5×1 for the first layer and 5×5×32 for the second layer, both followed by max pooling and a fully connected layer of 512 neurons. While the linearity of the convolution allows using Net-Trim for the associated layers, here we merely consider retraining the fully connected layers. To address the Net-Trim convex program, we use the regularized form outlined in Section 5, which is fully capable of parallel processing.
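As an illustration of the decoupled regularized program (11), a CVXPY sketch of one per-column solve follows (the ADMM route on (14), or the authors' released code, is what one would use at scale; the function name is our own, and y, v are the m-th rows of Y and V):

```python
import numpy as np
import cvxpy as cp

def trim_column(X, y, v, lam):
    """Sketch of the regularized per-column program, Eq. (11)."""
    u = cp.Variable(X.shape[0])
    on = y > 0
    objective = cp.norm1(u) + lam * cp.sum_squares(X[:, on].T @ u - y[on])
    constraints = [X[:, ~on].T @ u <= v[~on]]   # inactive outputs stay below v
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return u.value
```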
For our largest problem (associated with the fully connected layer in CNN-60K), retraining each column takes less than 20 seconds, and distributing the independent jobs among a cluster of processing units (in our case 64) or using a GPU reduces the overall retraining of a layer to a few minutes.
Table 1 summarizes the retraining experiments. Panel (a) corresponds to Net-Trim operating in a low discrepancy mode (smaller $\epsilon$), while in panel (b) we explore more sparsity by allowing larger discrepancies. Each neural network is trained three times with different initialization seeds and average quantities are reported. In these tables, the first row corresponds to the test accuracy of the initial models. The second row reports the overall pruning rate and the third row reports the overall discrepancy between the initial and Net-Trim retrained models. We also compare the results with the work by Han, Pool, Tran and Dally (HPTD) [14]. The basic idea in [14] is to truncate the small weights across the network and perform another round of training on the active weights. The fourth row reports the test accuracy after applying Net-Trim. To make a fair comparison in applying the HPTD, we impose the same number of weights to be truncated in the HPTD technique. The accuracy of the model after this truncation is presented on the fifth row. Rows six and seven present the test accuracy of Net-Trim and HPTD after a fine training process (optional for Net-Trim).
An immediate observation is the close test error of Net-Trim compared to the initial trained models (row four vs. row one). We can observe from the second and third rows of the two tables that allowing more discrepancy (larger $\epsilon$) increases the pruning level. We can also observe that the basic Net-Trim process (row four) in many scenarios beats the HPTD (row seven), and if we allow a fine training step after the Net-Trim (row six), in all the scenarios a better test accuracy is achieved. A serious problem with the HPTD is the early minima trapping (EMT). When we simply truncate the layer transfer matrices, ignoring their actual contribution to the network, the error introduced can be very large (row five), and using this biased pattern as an initialization for the fine training can produce poor local minima solutions with large errors. The EMT blocks in the table correspond to the scenarios where all three random seeds failed to generate acceptable results for this approach. In the experiments where Net-Trim was followed by an additional fine training step, this was never an issue, since the Net-Trim outcome is already a good model solution.
In Figure 3(a), we visualize $\hat W_1$ after the Net-Trim process. We observe 28 bands (MNIST images are 28×28), where the zero columns represent the boundary pixels with the least image information. It is noteworthy that such an interpretable result is achieved using Net-Trim with no post- or pre-processing. A similar outcome of HPTD is depicted in panel (b). As a matter of fact, the authors present a similar visualization as panel (a) in [14], which is the result of applying the HPTD process iteratively and going through the retraining step many times. Such a path certainly produces a lot of processing load and lacks any type of confidence on being a convergent procedure.
Table 1: The test accuracy of different models before and after Net-Trim (NT) and HPTD [14]. Without a fine training (FT) step, Net-Trim produces pruned networks that are in the majority of cases more accurate than HPTD and carry no risk of poor local minima. Adding an additional FT step makes Net-Trim consistently prominent.

(a)                      NN2-10K  NN3-30K  NN3-60K  CNN-10K  CNN-30K  CNN-60K
Init. Mod. Acc. (%)        95.59    97.58    98.18    98.37    99.11    99.25
Total Pruning (%)          40.86    30.69    29.38    43.91    39.11    45.74
NT Overall Disc. (%)        1.98     1.31     1.77     1.22     0.75     0.55
NT No FT Acc. (%)          95.47    97.55    98.10    98.31    99.15    99.25
HPTD No FT Acc. (%)         9.30    10.34     8.92    19.17    55.92    30.17
NT + FT Acc. (%)           95.85    97.67    98.12    98.35    99.21    99.33
HPTD + FT Acc. (%)         93.56    97.32     EMT     98.16     EMT      EMT

(b)                      NN2-10K  NN3-30K  NN3-60K  CNN-10K  CNN-30K  CNN-60K
Init. Mod. Acc. (%)        95.59    97.58    98.18    98.37    99.11    99.25
Total Pruning (%)          75.87    75.82    77.40    76.18    77.63    81.62
NT Overall Disc. (%)        4.95    11.01    11.47     3.65     5.32     8.93
NT No FT Acc. (%)          94.92    95.97    97.35    97.91    99.08    98.96
HPTD No FT Acc. (%)         8.97    10.10     8.92    31.18    73.36    46.84
NT + FT Acc. (%)           95.89    97.69    98.19    98.40    99.17    99.26
HPTD + FT Acc. (%)         95.61     EMT     97.96     EMT     99.01    99.06

Figure 3: Visualization of $\hat W_1$ in NN3-60K; (a) Net-Trim output; (b) standard HPTD
Figure 4: Noise robustness of initial and retrained networks; (a) NN2-10K; (b) NN3-30K
Also, for a deeper understanding of the robustness Net-Trim adds to the models, in Figure 4 we have plotted the classification accuracy of the initial and retrained models against the level of added noise to the test data (ranging from 0 to 160%). The Net-Trim improvement in accuracy becomes more noticeable as the noise level in the data increases. Basically, as expected, reducing the model complexity makes the network more robust to outliers and noisy samples. It is also interesting to note that the NN3-30K initial model in panel (b), which is trained with more data, presents robustness to a larger level of noise compared to NN2-10K in panel (a). However, the retrained models behave rather similarly (blue curves), indicating the saving that can be achieved in the number of training samples via Net-Trim. In fact, Net-Trim can be particularly useful when the number of training samples is limited. While overfitting is likely to occur in such scenarios, Net-Trim reduces the complexity of the model by setting a significant portion of the weights at each layer to zero, yet maintaining the model consistency. This capability can also be viewed from a different perspective: Net-Trim simplifies the process of determining the network size. In other words, if the network used at the training phase is oversized, Net-Trim can reduce its size to an order matching the data. Finally, aside from the theoretical and practical contribution that Net-Trim brings to the understanding of deep neural networks, the idea can be easily generalized to retraining schemes with other regularizers (e.g., the use of ridge or elastic net type regularizers) or other structural constraints on the network.

References
[1] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.
[2] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
[3] S. Arora, A. Bhaskara, R. Ge, and T. Ma.
Provable bounds for learning some deep representations. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[4] K. Kawaguchi. Deep learning without poor local minima. Preprint, 2016.
[5] A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.
[6] R. Giryes, G. Sapiro, and A. M. Bronstein. Deep neural networks with random gaussian weights: A universal classification strategy? IEEE Transactions on Signal Processing, 64(13):3444-3457, 2016.
[7] Y. Bengio, N. Le Roux, P. Vincent, O. Delalleau, and P. Marcotte. Convex neural networks. In Proceedings of the 18th International Conference on Neural Information Processing Systems, pages 123-130, 2005.
[8] F. Bach. Breaking the curse of dimensionality with convex neural networks. Technical report, 2014.
[9] O. Aslan, X. Zhang, and D. Schuurmans. Convex deep learning via normalized kernels. In Proceedings of the 27th International Conference on Neural Information Processing Systems, pages 3275-3283, 2014.
[10] S. Nowlan and G. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4):473-493, 1992.
[11] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219-269, 1995.
[12] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.
[13] L. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. Regularization of neural networks using dropconnect. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[14] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135-1143, 2015.
[15] W. Chen, J. Wilson, S. Tyree, K. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In International Conference on Machine Learning, pages 2285-2294, 2015.
[16] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[17] E. Ghadimi, A. Teixeira, I. Shames, and M. Johansson. Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems. IEEE Transactions on Automatic Control, 60(3):644-658, 2015.
Graph Matching via Multiplicative Update Algorithm
Bo Jiang (School of Computer Science and Technology, Anhui University, China) [email protected]
Jin Tang (School of Computer Science and Technology, Anhui University, China) [email protected]
Yihong Gong (School of Electronic and Information Engineering, Xi'an Jiaotong University, China) [email protected]
Chris Ding (CSE Department, University of Texas at Arlington, Arlington, USA) [email protected]
Bin Luo (School of Computer Science and Technology, Anhui University, China) [email protected]

Abstract
As a fundamental problem in computer vision, the graph matching problem can usually be formulated as a Quadratic Programming (QP) problem with doubly stochastic and discrete (integer) constraints. Since it is NP-hard, approximate algorithms are required. In this paper, we present a new algorithm, called Multiplicative Update Graph Matching (MPGM), that develops a multiplicative update technique to solve the QP matching problem. MPGM has three main benefits: (1) theoretically, MPGM solves the general QP problem with doubly stochastic constraint naturally, and its convergence and KKT optimality are guaranteed; (2) empirically, MPGM generally returns a sparse solution and thus can also incorporate the discrete constraint approximately; (3) it is efficient and simple to implement. Experimental results show the benefits of the MPGM algorithm.

1 Introduction
In the computer vision and machine learning areas, many problems of interest can be formulated as graph matching problems. Previous approaches [3-5, 15, 16] have formulated graph matching as a Quadratic Programming (QP) problem with both doubly stochastic and discrete constraints. Since it is known to be NP-hard, many approximate algorithms have been developed to find approximate solutions for this problem [8, 16, 21, 24, 20, 13]. One kind of approximate method generally first derives a continuous problem by relaxing the discrete constraint, and aims to find the optimal solution of this continuous problem. After that, the final discrete solution is obtained by using a discretization step such as the Hungarian or greedy algorithm [3, 15, 16]. Obviously, the discretization step of these methods is generally independent of the matching objective optimization process, which may lead to weak local optima for the problem. Another kind of method aims to obtain a discrete solution of the QP matching problem directly [16, 1, 24]. For example, Leordeanu et al. [16] proposed an iterative matching method (IPFP) which optimizes the QP matching problem in a discrete domain. Zhou et al. [24, 25] proposed an effective graph matching method (FGM) which optimizes the QP matching problem approximately using a convex-concave relaxation technique [21] and thus returns a discrete solution for the problem. From an optimization perspective, the core optimization algorithm used in both IPFP [16] and FGM [24] is related to the Frank-Wolfe algorithm [9], and FGM [24, 25] further uses a path-following procedure to alleviate the local-optimum problem more carefully. The core of the Frank-Wolfe algorithm [9] is to optimize the quadratic problem by sequentially optimizing linear approximations of the QP problem.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In addition to optimization-based methods, probabilistic methods can also be used for solving graph matching problems [3, 19, 23].
In this paper, we propose a new algorithm, called Multiplicative Update Graph Matching (MPGM), that develops a multiplicative update technique for the general QP problem with doubly stochastic constraint. Generally, MPGM has the following three main aspects. First, MPGM solves the general QP problem with doubly stochastic constraint directly and naturally. In the MPGM algorithm, each update step has a closed-form solution and the convergence of the algorithm is also guaranteed. Moreover, the converged solution is guaranteed to satisfy Karush-Kuhn-Tucker (KKT) optimality. Second, empirically, MPGM can generate a sparse solution and thus incorporates the discrete constraint naturally in optimization. Therefore, MPGM can obtain a locally optimal discrete solution for the QP matching problem. Third, it is efficient and simple to implement. Experimental results on both synthetic and real-world matching tasks demonstrate the effectiveness and benefits of the proposed MPGM algorithm.

2 Problem Formulation and Related Works
Problem Formulation. Assume $G = (V, E)$ and $G' = (V', E')$ are two attributed graphs to be matched, where each node $v_i \in V$ or edge $e_{ik} \in E$ has an attribute vector $a_i$ or $r_{ik}$. The aim of the graph matching problem is to establish the correct correspondences between $V$ and $V'$. For each correspondence $(v_i, v'_j)$, there is an affinity $S_a(a_i, a'_j)$ that measures how well node $v_i \in V$ matches node $v'_j \in V'$. Also, for each correspondence pair $(v_i, v'_j)$ and $(v_k, v'_l)$, there is an affinity $S_r(r_{ik}, r'_{jl})$ that measures the compatibility between node pair $(v_i, v_k)$ and $(v'_j, v'_l)$. One can define an affinity matrix $W$ whose diagonal term $W_{ij,ij}$ represents $S_a(a_i, a'_j)$, and whose non-diagonal element $W_{ij,kl}$ contains $S_r(r_{ik}, r'_{jl})$. The one-to-one correspondences can be represented by a permutation matrix $X \in \{0,1\}^{n \times n}$, where $n = |V| = |V'|$.¹ Here, $X_{ij} = 1$ implies that node $v_i$ in $G$ corresponds to node $v'_j$ in $G'$, and $X_{ij} = 0$ otherwise. In this paper, we denote $x = (X_{11} \dots X_{n1}, \dots, X_{1n} \dots X_{nn})^T$ as a column-wise vectorized replica of $X$. The graph matching problem is generally formulated as a Quadratic Programming (QP) problem with doubly stochastic and discrete constraints [16, 3, 10], i.e.,
$$x^\star = \arg\max_x \big(x^T W x\big) \quad \text{s.t.} \quad x \in \mathcal{P}, \tag{1}$$
where $\mathcal{P}$ is defined as
$$\mathcal{P} = \Big\{x \;\Big|\; \forall i \; \sum_{j=1}^n x_{ij} = 1, \;\; \forall j \; \sum_{i=1}^n x_{ij} = 1, \;\; x_{ij} \in \{0, 1\}\Big\}. \tag{2}$$
The above QP problem is NP-hard and thus approximate relaxations are usually required. One popular way is to relax the permutation domain $\mathcal{P}$ to the doubly stochastic domain $\mathcal{D}$,
$$\mathcal{D} = \Big\{x \;\Big|\; \forall i \; \sum_{j=1}^n x_{ij} = 1, \;\; \forall j \; \sum_{i=1}^n x_{ij} = 1, \;\; x_{ij} \ge 0\Big\}. \tag{3}$$
That is, one solves the following relaxed matching problem [21, 20, 10]:
$$x^\star = \arg\max_x \big(x^T W x\big) \quad \text{s.t.} \quad x \in \mathcal{D}. \tag{4}$$
Since $W$ is not necessarily positive (or negative) semi-definite, this problem is generally neither concave nor convex.
Related Works. Many algorithms have been proposed to find a locally optimal solution for the above QP matching problem (Eq. (4)). One kind of popular method uses constraint relaxation and projection, such as GA [10] and RRWM [3]. Generally, they iteratively conduct the following two steps: (a) searching for a solution by ignoring the doubly stochastic constraint temporarily; (b) projecting the current solution onto the desired doubly stochastic domain to obtain a feasible solution. Note that the projection step (b) is generally independent of the optimization step (a) and thus may lead to weak local optima.
¹ Here, we focus on the equal-size graph matching problem. For graphs with different sizes, one can add dummy isolated nodes to the smaller graph and transform them to the equal-size case [21, 10].
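To make formulation (1)-(2) concrete, a small NumPy sketch builds a pairwise affinity matrix for two point sets (using the Gaussian distance form that mirrors the synthetic experiments of Section 6.1; the bandwidth and helper names are our own) and scores permutations by brute force, which is feasible only for tiny n:

```python
import numpy as np
from itertools import permutations

def build_affinity(pts1, pts2, sigma2=0.0015):
    """W[ij,kl] = exp(-(r_ik - r'_jl)^2 / sigma2), with column-major pair index."""
    n = len(pts1)
    d1 = np.linalg.norm(pts1[:, None] - pts1[None, :], axis=-1)  # r_ik in G
    d2 = np.linalg.norm(pts2[:, None] - pts2[None, :], axis=-1)  # r'_jl in G'
    W = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    W[i + j * n, k + l * n] = np.exp(-(d1[i, k] - d2[j, l]) ** 2 / sigma2)
    return W

def brute_force_match(W, n):
    """Exact maximizer of x^T W x over the permutation domain P (tiny n only)."""
    best, best_x = -np.inf, None
    for perm in permutations(range(n)):
        X = np.zeros((n, n))
        X[list(perm), range(n)] = 1.0        # node perm[j] of G matched to j of G'
        x = X.reshape(-1, order='F')         # column-wise vectorization
        if (score := x @ W @ x) > best:
            best, best_x = score, x
    return best_x, best
```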
Another kind of important method uses objective function approximation and thus solves the problem approximately, such as the Frank-Wolfe algorithm [9]. Frank-Wolfe aims to optimize the above quadratic problem by sequentially solving approximate linear problems. This algorithm has been widely adopted in many recent matching methods [16, 24, 21], such as IPFP [16] and FGM [24].

3 Algorithm
Our aim in this paper is to develop a new algorithm to solve the general QP matching problem Eq. (4). We call it Multiplicative Update Graph Matching (MPGM). Formally, starting with an initial solution vector $x^{(0)}$, MPGM solves the problem Eq. (4) by iteratively updating a current solution vector $x^{(t)}$, $t = 0, 1, \dots$ as follows:
$$x_{kl}^{(t+1)} = x_{kl}^{(t)} \left[ \frac{2\big(Wx^{(t)}\big)_{kl} + \alpha_k^- + \beta_l^-}{\alpha_k^+ + \beta_l^+} \right]^{1/2}, \tag{5}$$
where $\alpha_k^+ = (|\alpha_k| + \alpha_k)/2$, $\alpha_k^- = (|\alpha_k| - \alpha_k)/2$, $\beta_k^+ = (|\beta_k| + \beta_k)/2$, $\beta_k^- = (|\beta_k| - \beta_k)/2$, and the Lagrangian multipliers $(\alpha, \beta)$ are computed as
$$\beta = 2\big(I - X^{(t)T} X^{(t)}\big)^{-1}\Big[\mathrm{diag}\big(K^{(t)T} X^{(t)}\big) - X^{(t)T}\,\mathrm{diag}\big(K^{(t)} X^{(t)T}\big)\Big], \quad \alpha = 2\,\mathrm{diag}\big(K^{(t)} X^{(t)T}\big) - X^{(t)}\beta, \tag{6}$$
where $K^{(t)}$, $X^{(t)}$ are the matrix forms of the vector $(Wx^{(t)})$ and $x^{(t)}$, respectively, i.e., $K^{(t)}, X^{(t)} \in \mathbb{R}^{n \times n}$ with $K^{(t)}_{kl} = (Wx^{(t)})_{kl}$ and $X^{(t)}_{kl} = x^{(t)}_{kl}$, and $\alpha = (\alpha_1, \cdots, \alpha_n)^T \in \mathbb{R}^{n \times 1}$, $\beta = (\beta_1, \cdots, \beta_n)^T \in \mathbb{R}^{n \times 1}$. The iteration starts with an initial $x^{(0)}$ and is repeated until convergence.
Complexity. The main complexity in each iteration is in computing $Wx^{(t)}$. Thus, the total computational complexity of MPGM is less than $O(MN^2)$, where $N = n^2$ is the length of the vector $x^{(t)}$ and $M$ is the maximum number of iterations. Our experience is that the algorithm converges quickly and the average maximum iteration count $M$ is generally less than 200. Theoretically, the complexity of MPGM is the same as that of RRWM [3] and IPFP [16], but clearly lower than that of GA [10] and FGM [24].
Comparison with Related Works. Multiplicative update algorithms have been studied for solving matching problems [6, 13, 11, 12]. Our work is significantly different from previous works in the following aspects. Previous works [6, 13, 11] generally first develop a kind of approximation (or relaxation) of the QP matching problem by ignoring the doubly stochastic constraint, and then aim to find the optimum of the relaxed problem by developing an algorithm. In contrast, our work focuses on the general and challenging QP problem with doubly stochastic constraint (Eq. (4)), and derives a simple multiplicative algorithm to solve the problem Eq. (4) directly. Note that the proposed algorithm is not limited to solving the QP matching problem only. It can also be used in some other QP (or general continuous objective function) problems with doubly stochastic constraint (e.g., MAP inference, clustering) in the machine learning area. In this paper, we focus on the graph matching problem.
Starting Point. To alleviate local optima and provide a feasible starting point for the MPGM algorithm, given an initial vector $x^{(0)}$, we first apply the simple projection $x^{(0)} = P(Wx^{(0)})$ several times to obtain a feasible start point for the MPGM algorithm. Here $P$ denotes the projection [22] or normalization [20] that makes $x^{(0)}$ satisfy the doubly stochastic constraint.

4 Theoretical Analysis
Theorem 1. Under update Eq. (5), the Lagrangian function $L(x)$ is monotonically increasing, with
$$L(x) = x^T W x - \sum_{i=1}^n \alpha_i\Big(\sum_{j=1}^n x_{ij} - 1\Big) - \sum_{j=1}^n \beta_j\Big(\sum_{i=1}^n x_{ij} - 1\Big), \tag{7}$$
where $\alpha, \beta$ are Lagrangian multipliers.
Proof. To prove it, we use the auxiliary function approach [7, 14]. An auxiliary function $Z(x, \tilde x)$ of the Lagrangian function $L(x)$ satisfies
$$Z(x, x) = L(x), \quad Z(x, \tilde x) \le L(x). \tag{8}$$
Using the auxiliary function $Z(x, \tilde x)$, we define
$$x^{(t+1)} = \arg\max_x Z\big(x, x^{(t)}\big). \tag{9}$$
Then, by construction of $Z(x, \tilde x)$, we have
$$L\big(x^{(t)}\big) = Z\big(x^{(t)}, x^{(t)}\big) \le Z\big(x^{(t+1)}, x^{(t)}\big) \le L\big(x^{(t+1)}\big). \tag{10}$$
This proves that $L(x^{(t)})$ is monotonically increasing.
The main step in the remainder of the proof is to provide an appropriate auxiliary function and find the global maximum of that auxiliary function. We rewrite Eq. (7) as
$$L(x) = \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n W_{ij,kl}\, x_{ij} x_{kl} - \sum_{i=1}^n \alpha_i\Big(\sum_{j=1}^n x_{ij} - 1\Big) - \sum_{j=1}^n \beta_j\Big(\sum_{i=1}^n x_{ij} - 1\Big). \tag{11}$$
We show that one auxiliary function $Z(x, \tilde x)$ of $L(x)$ is
$$\begin{aligned} Z(x, \tilde x) = &\sum_{i,j,k,l} W_{ij,kl}\, \tilde x_{ij} \tilde x_{kl} \Big(1 + \log \frac{x_{ij} x_{kl}}{\tilde x_{ij} \tilde x_{kl}}\Big) \\ &- \sum_{i=1}^n \alpha_i^+ \Big[\sum_{j=1}^n \frac{1}{2}\Big(\frac{x_{ij}^2}{\tilde x_{ij}} + \tilde x_{ij}\Big) - 1\Big] + \sum_{i=1}^n \alpha_i^- \Big[\sum_{j=1}^n \tilde x_{ij}\Big(1 + \log \frac{x_{ij}}{\tilde x_{ij}}\Big) - 1\Big] \\ &- \sum_{j=1}^n \beta_j^+ \Big[\sum_{i=1}^n \frac{1}{2}\Big(\frac{x_{ij}^2}{\tilde x_{ij}} + \tilde x_{ij}\Big) - 1\Big] + \sum_{j=1}^n \beta_j^- \Big[\sum_{i=1}^n \tilde x_{ij}\Big(1 + \log \frac{x_{ij}}{\tilde x_{ij}}\Big) - 1\Big]. \end{aligned} \tag{12}$$
Using the inequalities $z \ge 1 + \log z$ and $a \le \frac{1}{2}\big(\frac{a^2}{b} + b\big)$ (which follows from $ab \le \frac{1}{2}(a^2 + b^2)$), one can prove that Eq. (12) is a lower bound of Eq. (11). Thus, $Z(x, \tilde x)$ is an auxiliary function of $L(x)$. According to Eq. (9), we need to find the global maximum of $Z(x, \tilde x)$ with respect to $x$. The gradient is
$$\frac{\partial Z(x, \tilde x)}{\partial x_{kl}} = 2(W\tilde x)_{kl}\,\frac{\tilde x_{kl}}{x_{kl}} - \alpha_k^+\,\frac{x_{kl}}{\tilde x_{kl}} + \alpha_k^-\,\frac{\tilde x_{kl}}{x_{kl}} - \beta_l^+\,\frac{x_{kl}}{\tilde x_{kl}} + \beta_l^-\,\frac{\tilde x_{kl}}{x_{kl}}.$$
Note that for the graph matching problem we have $W^T = W$. Thus, the second derivative is
$$\frac{\partial^2 Z(x, \tilde x)}{\partial x_{kl}\,\partial x_{ij}} = -\Big[\big(2(W\tilde x)_{kl} + \alpha_k^- + \beta_l^-\big)\frac{\tilde x_{kl}}{x_{kl}^2} + \big(\alpha_k^+ + \beta_l^+\big)\frac{1}{\tilde x_{kl}}\Big]\,\delta_{ki}\,\delta_{lj} \le 0. \tag{13}$$
Therefore, $Z(x, \tilde x)$ is a concave function in $x$ and has a unique global maximum. It can be obtained by setting the first derivative to zero ($\frac{\partial Z(x,\tilde x)}{\partial x_{kl}} = 0$), which gives
$$x_{kl} = \tilde x_{kl}\left[\frac{2(W\tilde x)_{kl} + \alpha_k^- + \beta_l^-}{\alpha_k^+ + \beta_l^+}\right]^{1/2}. \tag{14}$$
Therefore, we obtain the update rule in Eq. (5) by setting $x^{(t+1)} = x$ and $x^{(t)} = \tilde x$. $\square$
Theorem 2. Under update Eq. (5), the converged solution $x^\star$ is Karush-Kuhn-Tucker (KKT) optimal.
Proof. The standard Lagrangian function is
$$L(x) = x^T W x - \sum_{i=1}^n \alpha_i\Big(\sum_{j=1}^n x_{ij} - 1\Big) - \sum_{j=1}^n \beta_j\Big(\sum_{i=1}^n x_{ij} - 1\Big) - \sum_{i=1}^n \sum_{j=1}^n \mu_{ij}\, x_{ij}. \tag{15}$$
Here, we use the Lagrangian function to derive the KKT optimality conditions. Using Eq. (15), we have
$$\frac{\partial L(x)}{\partial x_{kl}} = 2(Wx)_{kl} - \alpha_k - \beta_l - \mu_{kl}. \tag{16}$$
The corresponding KKT conditions are
$$\frac{\partial L(x)}{\partial x_{kl}} = 2(Wx)_{kl} - \alpha_k - \beta_l - \mu_{kl} = 0, \tag{17}$$
$$\frac{\partial L(x)}{\partial \alpha_k} = -\Big(\sum_l x_{kl} - 1\Big) = 0, \tag{18}$$
$$\frac{\partial L(x)}{\partial \beta_l} = -\Big(\sum_k x_{kl} - 1\Big) = 0, \tag{19}$$
$$\mu_{kl}\, x_{kl} = 0. \tag{20}$$
This leads to the following KKT complementary slackness condition:
$$\big[2(Wx)_{kl} - \alpha_k - \beta_l\big]\, x_{kl} = 0. \tag{21}$$
Because $\sum_l x_{kl} = 1$ and $\sum_k x_{kl} = 1$, summing over the indexes $l$ and $k$ respectively, we obtain the following two groups of equations:
$$2\sum_{l=1}^n x_{kl}\,(Wx)_{kl} - \sum_{l=1}^n \beta_l\, x_{kl} - \alpha_k = 0, \tag{22}$$
$$2\sum_{k=1}^n x_{kl}\,(Wx)_{kl} - \sum_{k=1}^n \alpha_k\, x_{kl} - \beta_l = 0. \tag{23}$$
Eqs. (22, 23) can be equivalently reformulated in the following matrix forms,
$$2\,\mathrm{diag}\big(K X^T\big) - \alpha - X\beta = 0, \tag{24}$$
$$2\,\mathrm{diag}\big(K^T X\big) - \beta - X^T\alpha = 0, \tag{25}$$
where $k = 1, 2, \cdots, n$, $l = 1, 2, \cdots, n$, and $K$, $X$ are the matrix forms of the vector $(Wx)$ and $x$, respectively, i.e., $K, X \in \mathbb{R}^{n \times n}$ with $K_{kl} = (Wx)_{kl}$ and $X_{kl} = x_{kl}$. Thus, we can obtain the values of $\alpha$ and $\beta$ as
$$\beta = 2\big(I - X^T X\big)^{-1}\Big(\mathrm{diag}\big(K^T X\big) - X^T\,\mathrm{diag}\big(K X^T\big)\Big), \tag{26}$$
$$\alpha = 2\,\mathrm{diag}\big(K X^T\big) - X\beta. \tag{27}$$
On the other hand, from update Eq. (5), at convergence,
$$x^\star_{kl} = x^\star_{kl}\left[\frac{2(Wx^\star)_{kl} + \alpha_k^- + \beta_l^-}{\alpha_k^+ + \beta_l^+}\right]^{1/2}. \tag{28}$$
Thus, we have $\big(2(Wx^\star)_{kl} - \alpha_k - \beta_l\big)\, x^{\star 2}_{kl} = 0$, which is identical to the following KKT condition:
$$\big[2(Wx^\star)_{kl} - \alpha_k - \beta_l\big]\, x^\star_{kl} = 0. \tag{29}$$
Substituting the values of $\alpha_k, \beta_l$ in Eq. (28) from Eqs. (26, 27), we obtain update rule Eq. (5). $\square$
Remark. Similar to the above analysis, we can also derive another similar update:
$$x_{kl}^{(t+1)} = x_{kl}^{(t)}\;\frac{2\big(Wx^{(t)}\big)_{kl} + \alpha_k^- + \beta_l^-}{\alpha_k^+ + \beta_l^+}. \tag{30}$$
The optimality and convergence of this update are also guaranteed. We omit the further discussion due to lack of space. In real applications, one can use both of these two update algorithms (Eq. (5), Eq. (30)) to obtain better results.

5 Sparsity and Discrete Solution
One property of the proposed MPGM is that it can result in a sparse optimal solution, although the discrete binary constraint has been dropped in the MPGM optimization process. This suggests that MPGM can search for an optimal solution nearly on the permutation domain $\mathcal{P}$, i.e., on the boundary of the doubly stochastic domain $\mathcal{D}$. Unfortunately, we cannot yet provide a theoretical proof of the sparsity of the MPGM solution, but we demonstrate it experimentally. Figure 1(a) shows the solution $x^{(t)}$ across different iterations. Note that, regardless of initialization, as the iteration count increases, the solution vector $x^{(t)}$ of MPGM becomes more and more sparse and converges to a discrete binary solution. Note that, in the MPGM update Eq. (5), when $x^{(t)}_{kl}$ is close to zero, it stays close to zero in the following update process because of the particular multiplicative operation. Therefore, as the iteration count increases, the solution vector $x^{(t+1)}$ is guaranteed to be at least as sparse as the solution vector $x^{(t)}$. Figure 1(b) shows the objective and sparsity² of the solution vector $x^{(t)}$. We can observe that (1) the objective of $x^{(t)}$ increases and converges after some iterations, demonstrating the convergence of the MPGM algorithm; and (2) the sparsity of the solution $x^{(t)}$ increases and converges to the baseline, which demonstrates the ability of the MPGM algorithm to maintain the discrete constraint in the converged solution.
² Sparsity measures the percentage of zero (close-to-zero) elements in $Z$. First, set the threshold $\theta = 0.001 \cdot \mathrm{mean}(Z)$, then set $Z_{ij} = 0$ if $Z_{ij} \le \theta$. Finally, the sparsity is defined as the percentage of zero elements in the thresholded $Z$.
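Putting the update (5) and the multipliers (26)-(27) together, a NumPy sketch of MPGM might look as follows. We use a pseudo-inverse since $I - X^TX$ is singular whenever $X$ is exactly doubly stochastic (the multipliers carry a gauge freedom $\alpha + c$, $\beta - c$), and the small floor in the denominator is our own numerical safeguard, not part of the paper:

```python
import numpy as np

def mpgm(W, n, iters=200):
    """Sketch of the MPGM update, Eqs. (5)-(6); W is the n^2 x n^2 affinity."""
    x = np.full(n * n, 1.0 / n)                   # uniform doubly stochastic start
    for _ in range(iters):
        K = (W @ x).reshape(n, n, order='F')      # matrix form of Wx, column-major
        X = x.reshape(n, n, order='F')
        # Lagrangian multipliers, Eqs. (26)-(27).
        rhs = np.diag(K.T @ X) - X.T @ np.diag(K @ X.T)
        beta = 2.0 * np.linalg.pinv(np.eye(n) - X.T @ X) @ rhs
        alpha = 2.0 * np.diag(K @ X.T) - X @ beta
        a_p, a_m = (np.abs(alpha) + alpha) / 2, (np.abs(alpha) - alpha) / 2
        b_p, b_m = (np.abs(beta) + beta) / 2, (np.abs(beta) - beta) / 2
        # Multiplicative update, Eq. (5); rows carry alpha_k, columns beta_l.
        num = 2.0 * K + a_m[:, None] + b_m[None, :]
        den = a_p[:, None] + b_p[None, :]
        X = X * np.sqrt(np.maximum(num, 0.0) / np.maximum(den, 1e-12))
        x = X.reshape(-1, order='F')
    return x
```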
r?jl ?2F /0.0015), where rik is the Euclidean distance between two nodes in G and similarly for r?jl . Figure 2 summarizes the comparison results. We can note that: (1) similar to IPFP [16] and FGM [24] which return discrete matching solutions, MPGM always generates sparse solutions on doubly stochastic domain. (2) MPGM returns higher objective score and accuracy than IPFP [16] and FGM [24] methods, which demonstrate that MPGM can find the sparse solution more optimal than these methods. (3) MPGM generally performs better than the continuous domain methods including SM [15], SMAC [5] and RRWM [3]. Comparing with these methods, MPGM incorporates the doubly stochastic constraint more naturally and thus finds the solution more optimal than RRWM method. (4) MPGM generally has similar time cost with RRWM [3]. We have not shown the time cost of FGM [24] method in Fig.2, because FGM uses a hybrid optimization method and has obviously higher time cost than other methods. 6.2 Image Sequence Data In this section, we perform feature matching on CMU and YORK house sequences [3, 2, 18]. For CMU "hotel" sequence, we have matched all images spaced by 5, 10 ? ? ? 75 and 80 frames and computed the average performances per separation gap. For YORK house sequence, we have matched all images spaced by 1, 2 ? ? ? 8 and 9 frames and computed the average performances per separation gap. The affinity matrix has been computed by Wij,kl = exp(??rik ? r?jl ?2F /1000), where rik is the Euclidean distance between two points. Figure 3 summarizes the performance results. It is noted that MPGM outperforms the other methods in both objective score and matching accuracy, indicating the effectiveness of MPGM method. Also, 6 1 0.035 0.4 0.3 0.02 inliers nin = 20 outliers n 0.04 out 0.8 0.75 0.7 =0 0.06 0.85 0.08 Deformation noise ? 0.65 0.1 0.02 in 0.04 0.06 0.08 0.1 Deformation noise ? 0.02 0.5 0.4 2 inliers nin = 15 6 8 0.7 FGM RRWM SM IPFP?U IPFP?S SMAC MPGM 0.6 0.5 0.4 0.3 deformation noise ? = 0.04 4 0.2 10 2 1 1 0.9 Objective score 0.6 0.5 0.4 0.3 FGM RRWM SM IPFP?U IPFP?S SMAC MPGM inliers nin = 15 outliers n out 0.2 0.02 0.04 0.06 0.08 0.1 0.08 0.6 0.4 0.2 deformation noise ? = 0.04 0.02 0.04 0.06 0.08 0.1 8 10 0.08 0.1 Deformation noise ? RRWM SM IPFP?U IPFP?S SMAC MPGM 0.12 0.1 0.08 0.06 0.04 deformation noise ? = 0.04 inliers nin = 15 deformation noise ? = 0.04 0.02 0 6 8 10 2 4 6 8 0 10 2 # of outliers nout 4 6 # of outliers nout 1 0.7 FGM RRWM SM IPFP?U IPFP?S SMAC MPGM 0.6 0.5 0.4 0.02 inliers nin = 15 outliers nout = 5 0.6 0.4 0.2 RRWM SM IPFP?U IPFP?S SMAC MPGM 0.05 FGM RRWM SM IPFP?U IPFP?S SMAC MPGM 0.8 0.2 0 0 0.1 inliers nin = 15 0.8 0.3 =5 Deformation noise ? 0.06 FGM RRWM SM IPFP?U IPFP?S SMAC MPGM # of outliers nout 0.9 0.7 inliers nin = 15 4 # of outliers nout utliers n 0.8 0.04 Deformation noise ? 0.8 Sparsity FGM RRWM SM IPFP?U IPFP?S SMAC MPGM 0.6 0.005 0.8 Sparsity Objective score 0.7 0.01 outliers nout = 0 inliers nin = 20 outliers nout = 0 1 0.9 0.8 0.02 0.015 inliers nin = 20 0 1 0.9 Accuracy 0.4 0.2 inliers n = 20 outliers nout = 0 0.025 FGM RRWM SM IPFP?U IPFP?S SMAC MPGM 0.6 Time FGM RRWM SM IPFP?U IPFP?S SMAC MPGM 0.5 FGM RRWM SM IPFP?U IPFP?S SMAC MPGM Time 0.6 RRWM SM IPFP?U IPFP?S SMAC MPGM 0.03 0.9 0.04 Time Accuracy 0.7 0.8 Sparsity Objective score 0.95 0.8 Accuracy 0.04 1 1 0.9 0.03 0.02 inliers n = 15 in outliers nout = 5 0.01 inliers n = 15 in outliers nout = 5 0 0.04 0.06 0.08 0.02 0.1 Deformation noise ? 0.04 0.06 0.08 0 0.1 Deformation noise ? 0.02 0.04 0.06 Deformation noise ? 
[Figure 2 panels plot objective score, accuracy, sparsity and time for FGM, RRWM, SM, IPFP-U, IPFP-S, SMAC and MPGM, as functions of the deformation noise σ (with n_in = 20, n_out = 0 and with n_in = 15, n_out = 5) and of the number of outliers n_out (with n_in = 15, σ = 0.04).]
Figure 2: Comparison results of different methods on synthetic point-set matching.

These observations are generally consistent with the results of the synthetic-data experiments and further demonstrate the benefits of the MPGM algorithm.

[Figure 3 panels plot objective score, accuracy and sparsity versus the separation gap for FGM, RRWM, SM, IPFP-U, IPFP-S, SMAC and MPGM.]
Figure 3: Comparison results of different methods on the CMU and YORK image sequences. Top: CMU images; bottom: YORK images.

6.3 Real-world Image Data

In this section, we test our method on real-world image datasets. We evaluate MPGM on the dataset of [17], whose images are selected from Pascal 2007³. The dataset contains 30 pairs of car images and 20 pairs of motorbike images. For each image pair, feature points and ground-truth matches were manually marked, and each pair contains 30–60 ground-truth correspondences. The affinity between two nodes is computed as W_{ij,ij} = exp(−|p_i − p̃_j| / 0.05), where p_i is the orientation of the normal vector at the sampled point (node) i to the contour, and similarly for p̃_j. The affinity between two correspondences is computed as W_{ij,kl} = exp(−|d_ik − d̃_jl| / 0.15), where d_ik denotes the Euclidean distance between feature points i and k, and similarly for d̃_jl. Some matching examples are shown in Figure 4. To test the performance against outlier noise, we randomly add 0–20 outlier features to each image pair. The overall matching-accuracy results across different numbers of outlier features are summarized in Figure 5. From Figure 5, we note that MPGM outperforms the other competing methods, including RRWM [3] and FGM [24], which further demonstrates the effectiveness and practicality of MPGM on real-world image matching tasks.

³ http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html

Figure 4: Some examples of image matching on the Pascal 2007 dataset (left: original image pair; middle: FGM result; right: MPGM result. Incorrect matches are marked by red lines).

Figure 5: Comparison results of different graph matching methods on the Pascal 2007 dataset.

7 Conclusions and Future Work

This paper presents an effective algorithm, Multiplicative Update Graph Matching (MPGM), which develops a multiplicative update technique to solve the QP matching problem with a doubly stochastic mapping constraint. The KKT optimality and convergence properties of the MPGM algorithm are theoretically guaranteed. We show experimentally that the MPGM solution is sparse, and thus that MPGM approximately incorporates the discrete constraint naturally during optimization. In future work, the theoretical analysis of the sparsity of MPGM needs to be studied further. We will also incorporate MPGM into a path-following strategy to find a more optimal solution to the matching problem, and adapt the proposed algorithm to other optimization problems with doubly stochastic constraints in machine learning and computer vision.
Acknowledgment

This work is supported by the NBRPC 973 Program (2015CB351705); National Natural Science Foundation of China (61602001, 61671018, 61572030); Natural Science Foundation of Anhui Province (1708085QF139); Natural Science Foundation of Anhui Higher Education Institutions of China (KJ2016A020); Co-Innovation Center for Information Supply & Assurance Technology, Anhui University; and the Open Projects Program of the National Laboratory of Pattern Recognition.

References

[1] K. Adamczewski, Y. Suh, and K. M. Lee. Discrete tabu search for graph matching. In ICCV, pages 109–117, 2015.
[2] T. S. Caetano, J. J. McAuley, L. Cheng, Q. V. Le, and A. J. Smola. Learning graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(6):1048–1058, 2009.
[3] M. Cho, J. Lee, and K. M. Lee. Reweighted random walks for graph matching. In European Conference on Computer Vision, pages 492–505, 2010.
[4] D. Conte, P. Foggia, C. Sansone, and M. Vento. Thirty years of graph matching in pattern recognition. International Journal of Pattern Recognition and Artificial Intelligence, pages 265–298, 2004.
[5] M. Cour, P. Srinivasan, and J. Shi. Balanced graph matching. In Neural Information Processing Systems, pages 313–320, 2006.
[6] C. Ding, T. Li, and M. I. Jordan. Nonnegative matrix factorization for combinatorial optimization: Spectral clustering, graph matching and clique finding. In IEEE International Conference on Data Mining, pages 183–192, 2008.
[7] C. Ding, T. Li, and M. I. Jordan. Convex and semi-nonnegative matrix factorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1):45–55, 2010.
[8] O. Enqvist, K. Josephson, and F. Kahl. Optimal correspondences from pairwise constraints. In IEEE International Conference on Computer Vision, pages 1295–1302, 2009.
[9] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95–110, 1956.
[10] S. Gold and A. Rangarajan. A graduated assignment algorithm for graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(4):377–388, 1996.
[11] B. Jiang, J. Tang, C. Ding, and B. Luo. A local sparse model for matching problem. In AAAI, pages 3790–3796, 2015.
[12] B. Jiang, J. Tang, C. Ding, and B. Luo. Nonnegative orthogonal graph matching. In AAAI, 2017.
[13] B. Jiang, H. F. Zhao, J. Tang, and B. Luo. A sparse nonnegative matrix factorization technique for graph matching problems. Pattern Recognition, 47(1):736–747, 2014.
[14] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In Neural Information Processing Systems, pages 556–562, 2001.
[15] M. Leordeanu and M. Hebert. A spectral technique for correspondence problems using pairwise constraints. In IEEE International Conference on Computer Vision, pages 1482–1489, 2005.
[16] M. Leordeanu, M. Hebert, and R. Sukthankar. An integer projected fixed point method for graph matching and MAP inference. In Neural Information Processing Systems, pages 1114–1122, 2009.
[17] M. Leordeanu, R. Sukthankar, and M. Hebert. Unsupervised learning for graph matching. International Journal of Computer Vision, 95(1):1–18, 2011.
[18] B. Luo, R. C. Wilson, and E. R. Hancock. Spectral embedding of graphs. Pattern Recognition, 36(10):2213–2230, 2003.
[19] J. J. McAuley and T. S. Caetano. Fast matching of large point sets under occlusions. Pattern Recognition, 45(1):563–569, 2012.
[20] B. J. van Wyk and M. A. van Wyk. A POCS-based graph matching algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(11):1526–1530, 2004.
[21] M. Zaslavskiy, F. Bach, and J. P. Vert. A path following algorithm for the graph matching problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12):2227–2242, 2009.
[22] R. Zass and A. Shashua. Doubly stochastic normalization for spectral clustering. In Proceedings of the conference on Neural Information Processing Systems (NIPS), pages 1569–1576, 2006.
[23] Z. Zhang, Q. Shi, J. McAuley, W. Wei, Y. Zhang, and A. V. D. Hengel. Pairwise matching through max-weight bipartite belief propagation. In CVPR, pages 1202–1210, 2016.
[24] F. Zhou and F. D. la Torre. Factorized graph matching. In IEEE Conference on Computer Vision and Pattern Recognition, pages 127–134, 2012.
[25] F. Zhou and F. D. la Torre. Deformable graph matching. In IEEE Conference on Computer Vision and Pattern Recognition, pages 127–134, 2013.
Dynamic Importance Sampling for Anytime Bounds of the Partition Function

Qi Lou, Computer Science, Univ. of California, Irvine, Irvine, CA 92697, USA, qlou@ics.uci.edu
Rina Dechter, Computer Science, Univ. of California, Irvine, Irvine, CA 92697, USA, dechter@ics.uci.edu
Alexander Ihler, Computer Science, Univ. of California, Irvine, Irvine, CA 92697, USA, ihler@ics.uci.edu

Abstract

Computing the partition function is a key inference task in many graphical models. In this paper, we propose a dynamic importance sampling scheme that provides anytime finite-sample bounds for the partition function. Our algorithm balances the advantages of the three major inference strategies, heuristic search, variational bounds, and Monte Carlo methods, blending sampling with search to refine a variationally defined proposal. Our algorithm combines and generalizes recent work on anytime search [16] and probabilistic bounds [15] of the partition function. By using an intelligently chosen weighted average over the samples, we construct an unbiased estimator of the partition function with strong finite-sample confidence intervals that inherit both the rapid early improvement rate of sampling and the long-term benefits of an improved proposal from search. This gives significantly improved anytime behavior, and more flexible trade-offs between memory, time, and solution quality. We demonstrate the effectiveness of our approach empirically on real-world problem instances taken from recent UAI competitions.

1 Introduction

Probabilistic graphical models, including Bayesian networks and Markov random fields, provide a framework for representing and reasoning with probabilistic and deterministic information [5, 6, 8]. Reasoning in a graphical model often requires computing the partition function, or normalizing constant, of the underlying distribution. Exact computation of the partition function is known to be #P-hard [19] in general, leading to the development of many approximate schemes. Two important properties for a good approximation are that (1) it provides bounds or confidence guarantees on the result, so that the degree of approximation can be measured; and that (2) it can be improved in an anytime manner, so that the approximation becomes better as more computation is available. In general, there are three major paradigms for approximate inference: variational bounds, heuristic search, and Monte Carlo sampling. Each method has advantages and disadvantages. Variational bounds [21], and closely related approximate elimination methods [7, 14], provide deterministic guarantees on the partition function. However, these bounds are not anytime; their quality often depends on the amount of memory available, and they do not improve without additional memory. Search algorithms [12, 20, 16] explicitly enumerate the space of configurations and eventually provide an exact answer; however, while some problems are well-suited to search, on others the quality improves only very slowly with more computation. Importance sampling [e.g., 4, 15] gives probabilistic bounds that improve with more samples at a predictable rate; in practice this means bounds that improve rapidly at first, but are slow to become very tight.
Several algorithms combine two strategies: approximate hash-based counting combines sampling (of hash functions) with CSP-based search [e.g., 3, 2] or other MAP queries [e.g., 9, 10], although these are not typically formulated to provide anytime behavior. Most closely related to this work are [16] and [15], which perform search and sampling, respectively, guided by variational bounds.

In this work, we propose a dynamic importance sampling algorithm that provides anytime probabilistic bounds (i.e., bounds that hold with probability 1 − δ for some confidence parameter δ). Our algorithm interleaves importance sampling with best-first search [16], which is used to refine the proposal distribution of successive samples. In practice, our algorithm enjoys the rapid bound improvement characteristic of importance sampling [15], while also benefiting significantly from search on problems where search is relatively effective, or when given enough computational resources, even when these points are not known in advance. Since our samples are drawn from a sequence of different, improving proposals, we devise a weighted average estimator that upweights higher-quality samples, giving excellent anytime behavior.

Motivating example. We illustrate the focus and contributions of our work on an example problem instance (Fig. 1). Search [16] provides strict bounds (gray) but may not improve rapidly, particularly once memory is exhausted; on the other hand, importance sampling [15] provides probabilistic bounds (green) that improve at a predictable rate, but require more and more samples to become tight. We first describe a "two-stage" sampling process that uses a search tree to improve the baseline bound from which importance sampling starts (blue), greatly improving its long-term performance, then present our dynamic importance sampling (DIS) algorithm, which interleaves the search and sampling processes (sampling from a sequence of proposal distributions) to give bounds that are strong in an anytime sense.

[Figure 1 plots upper bounds on logZ (y-axis, roughly −78 to −64) against time in seconds (log scale) for search [16], sampling [15], two-stage, and DIS.]
Figure 1: Example: bounds on logZ for protein instance 1bgc.

2 Background

Let X = (X1, ..., XM) be a vector of random variables, where each Xi takes values in a discrete domain 𝒳i; we use lower-case letters, e.g. xi ∈ 𝒳i, to indicate a value of Xi, and x to indicate an assignment of X. A graphical model over X consists of a set of factors F = {f_α(X_α) | α ∈ I}, where each factor f_α is defined on a subset X_α = {Xi | i ∈ α} of X, called its scope. We associate an undirected graph G = (V, E) with F, where each node i ∈ V corresponds to a variable Xi and we connect two nodes, (i, j) ∈ E, iff {i, j} ⊆ α for some α. The set I then corresponds to cliques of G. We can interpret F as an unnormalized probability measure, so that

f(x) = ∏_{α∈I} f_α(x_α),    Z = ∑_x ∏_{α∈I} f_α(x_α).

Z is called the partition function, and normalizes f(x). Computing Z is often a key task in evaluating the probability of observed data, model selection, or computing predictive probabilities.

2.1 AND/OR search trees

We first require some notation from search. AND/OR search trees are able to exploit the conditional independence properties of the model, as expressed by a pseudo tree:

Definition 1 (pseudo tree). A pseudo tree of an undirected graph G = (V, E) is a directed tree T = (V, E′) sharing the same set of nodes as G.
The tree edges E′ form a subset of E, and we require that each edge (i, j) ∈ E \ E′ be a "back edge", i.e., the path from the root of T to j passes through i (denoted i ⇝ j). G is called the primal graph of T. Fig. 2(a)-(b) show an example primal graph and pseudo tree.

Guided by the pseudo tree, we can construct an AND/OR search tree T consisting of alternating levels of OR and AND nodes. Each OR node s is associated with a variable, which we slightly abuse notation to denote Xs; the children of s, ch(s), are AND nodes corresponding to the possible values of Xs. The root ∅ of the AND/OR search tree corresponds to the root of the pseudo tree. Let pa(c) = s indicate the parent of c, and an(c) = {n | n ⇝ c} be the ancestors of c (including itself) in the tree.
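As a concrete picture of this bookkeeping, a node of the explicit AND/OR tree needs only a few fields; the following minimal Python sketch is our own rendering (field and function names are ours) of the quantities w_c, g_n and u_n used here and in the next subsection:

import math

class Node:
    def __init__(self, kind, var=None, val=None, parent=None, weight=1.0):
        self.kind = kind        # 'OR' or 'AND'
        self.var, self.val = var, val
        self.parent = parent
        self.children = []
        self.w = weight         # w_c: factors newly instantiated at c (w_s = 1 at OR nodes)
        self.u = math.inf       # u_n: upper bound on v_n (placeholder until h+_n is assigned)
        self.solved = False     # True once the subtree below n is exact (u_n = v_n)

def path_cost(n):
    # g_n: product of the weights on the path from the root down to n.
    g = 1.0
    while n is not None:
        g *= n.w
        n = n.parent
    return g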
We use g(x>n |x?n ) (termed the conditional cost) to denote the quotient g([x?n , x>n ])/g(x?n ), where x>n is any assignment of X>n , the variables below n in the search tree. We give a ?value? vn to each node n equal to the total conditional cost of all configurations below n: X vn = g(x>n |x?n ). (1) x>n The value of the root is simply the partition function, v? = Z. Equivalently, vn can be defined recursively: if n is an AND node corresponding to a leaf of the pseudo tree, let vn = 1; otherwise, (Q vc , if AND node n vn = Pc?ch(n) (2) c?ch(n) wc vc , if OR node n 2.2 AND/OR best-first search for bounding the partition function AND/OR best-first search (AOBFS) can be used to bound the partition function in an anytime fashion by expanding and updating bounds defined on the search tree [16]. Beginning with only the root 3 ?, AOBFS expands the search tree in a best-first manner. More precisely, it maintains an explicit AND/OR search tree of visited nodes, denoted S. For each node n in the AND/OR search tree, AOBFS maintains un , an upper bound on vn , initialized via a pre-compiled heuristic vn ? h+ n , and subsequently updated during search using information propagated from the frontier: (Q uc , if AND node n un = Pc?ch(n) (3) w u , if OR node n c?ch(n) c c Thus, the upper bound at the root, u? , is an anytime deterministic upper bound of Z. Note that this upper bound depends on the current search tree S, so we write U S = u? . If all nodes below n have been visited, then un = vn ; we call n solved and can remove the subtree below n from memory. Hence we can partition the frontier nodes into two sets: solved frontier nodes, SOLVED(S), and unsolved ones, OPEN(S). AOBFS assigns a priority to each node and expands a top-priority (unsolved) frontier node at each iteration. We use the ?upper priority? from [16], Y Un = gn un us (4) s?branch(n) where branch(n) are the OR nodes that are siblings of some node ? n. Un quantifies n?s contribution to the global bound U S , so this priority attempts to reduce the upper bound on Z as quickly as possible. We can also interpret our bound U S as a sum of bounds on each of the partial configurations covered by S. Concretely, let TS be the set of projections of full solution trees on S (in other words, TS are partial solution trees whose leaves are frontier nodes of S); then, X Y US = UT where UT = g(T ) us (5) T ?TS s?leaf (T ) and leaf (T ) are the leaf nodes of the partial solution tree T . 2.3 Weighted mini-bucket for heuristics and sampling To construct a heuristic function for search, we can use a class of variational bounds called weighted mini-bucket (WMB, [14]). WMB corresponds to a relaxed variable elimination procedure, respecting the search pseudo tree order, that can be tightened using reparameterization (or ?cost-shifting?) operations. Importantly for this work, this same relaxation can also be used to define a proposal distribution for importance sampling that yields finite-sample bounds [15]. We describe both properties here. Let n be any node in the search tree; then, one can show that WMB yields the following reparametrization of the conditional cost below n [13]: YY g(x>n |x?n ) = h+ bkj (xk |xanj (k) )?kj , Xk ? X>n (6) n k j where Xanj (k) are the ancestors of Xk in the pseudo tree that are included in the j-th mini-bucket of Xk . The size of Xanj (k) is controlled by a user-specified parameter calledP the ibound. The bkj (xk |xanj (k) ) are conditional beliefs, and the non-negative weights ?kj satisfy j ?kj = 1. 
Suppose that we define a conditional distribution q(x>n |x?n ) by replacing the geometric mean over the bkj in (6) with their arithmetic mean: YX q(x>n |x?n ) = ?kj bkj (xk |xanj (k) ) (7) k j Applying the arithmetic-geometric mean inequality, we see that g(x>n |x?n )/h+ n ? q(x>n |x?n ). Summing over x>n shows that h+ is a valid upper bound heuristic for v : n n X vn = g(x>n |x?n ) ? h+ n x>n The mixture distribution q can be also used as a proposal for importance sampling, by drawing samples from q and averaging the importance weights, g/q. For any node n, we have that h i g(x>n |x?n )/q(x>n |x?n ) ? h+ (8) E g(x>n |x?n )/q(x>n |x?n ) = vn n, 4 i.e., the importance weight g(x>n |x?n )/q(x>n |x?n ) is an unbiased and bounded estimator of vn . In [15], this property was used to give finite-sample bounds on Z which depended on the WMB bound, h+ ? . To be more specific, note that g(x>n |x?n ) = f (x) when n is the root ?, and thus f (x)/q(x) ? h+ ? ; the boundedness of f (x)/q(x) results in the following finite-sample upper bound on Z that holds with probability at least 1 ? ?: s N i X d 7 ln(2/?)h+ 1 f (x ) 2Var({f (xi )/q(xi )}N ? i=1 ) ln(2/?) Z? + + (9) N i=1 q(xi ) N 3(N ? 1) i i N d where {xi }N i=1 are i.i.d. samples drawn from q(x), and Var({f (x )/q(x )}i=1 ) is the unbiased empirical variance. This probabilistic upper bound usually becomes tighter than h+ ? very quickly. A corresponding finite-sample lower bound on Z exists as well [15]. 3 Two-step sampling The finite-sample bound (9) suggests that improvements to the upper bound on Z may be translatable into improvements in the probabilistic, sampling bound. In particular, if we define a proposal that uses the search tree S and its bound U S , we can improve our sample-based bound as well. This motivates us to design a two-step sampling scheme that exploits the refined upper bound from search; it is a top-down procedure starting from the root: Step 1 For an internal node n: if it is an AND node, all its children are selected; if n is an OR node, one child c ? ch(n) is randomly selected with probability wc uc /un . Step 2 When a frontier node n is reached, if it is unsolved, draw a sample of X>n from q(x>n |x?n ); if it is solved, quit. The behavior of Step 1 can be understood by the following proposition: Proposition 1. Step 1 returns a partial solution tree T ? TS with probability UT /U S (see (5)). Any frontier node of S will be reached with probability proportional to its upper priority defined in (4). Note that at Step 2, although the sampling process terminates when a solved node n is reached, we associate every configuration x>n of X>n with probability g(x>n |x?n )/vn which is appropriate in lieu of (1). Thus, we can show that this two-step sampling scheme induces a proposal distribution, denoted q S (x), which can be expressed as: Y Y Y q S (x) = wn un /upa(n) q(x>n0 |x?n0 ) g(x>n00 |x?n00 )/vn00 n?AND(Tx ?S) n0 ?OPEN(S)?Tx n00 ?SOLVED(S)?Tx where AND(Tx ? S) is the set of all AND nodes of the partial solution tree Tx ? S. By applying (3), and noticing that the upper bound is the initial heuristic for any node in OPEN(S) and is exact at any solved node, we re-write q S (x) as Y Y g(Tx ? S) + q S (x) = h g(x>n00 |x?n00 ) (10) 0 q(x>n0 |x?n0 ) n US 0 00 n ?OPEN(S)?Tx n ?SOLVED(S)?Tx S q (x) actually provides bounded importance weights that can use the refined upper bound U S : Proposition 2. Importance weights from q S (x) are bounded by the upper bound of S, and are unbiased estimators of Z, i.e., h i S f (x)/q S (x) ? 
U S , (11) E f (x)/q (x) = Z Proof. Note that f (x) can be written as Y f (x) = g(Tx ? S) g(x>n0 |x?n0 ) n0 ?OPEN(S)?Tx Y g(x>n00 |x?n00 ) (12) n00 ?SOLVED(S)?Tx Noticing that for any n0 ? OPEN(S), g(x>n0 |x?n0 ) ? h+ n0 q(x>n0 |x?n0 ) by (8), and comparing S S with (10), we see f (x)/q (x) is bounded by U . Its unbiasedness is trivial. 5 Algorithm 1 Dynamic importance sampling (DIS) Require: Control parameters Nd , Nl ; memory budget, time budget. d Zbi /Ui }N ), Z, b ?. Ensure: N , HM(U ), Var({ i=1 1: Initialize S ? {?} with the root ?. 2: while within the time budget 3: if within the memory budget // update S and its associated upper bound U S 4: Expand Nd nodes via AOBFS (Alg. 1 of [16]) with the upper priority defined in (4). 5: end if 6: Draw Nl samples via T WO S TEP S AMPLING(S). 7: After drawing each sample: d Zbi /Ui }N ). 8: Update N , HM(U ), Var({ i=1 b ? via (13), (14). 9: Update Z, 10: end while 11: function T WO S TEP S AMPLING(S) 12: Start from the root of the search tree S: 13: For an internal node n: select all its children if it is an AND node; select exactly 14: one child c ? ch(n) with probability wc uc /un if it is an OR node. 15: At any unsolved frontier node n, draw one sample from q(x>n |x?n ) in (7). 16: end function Thus, importance weights resulting from our two-step sampling can enjoy the same type of bounds described in (9). Moreover, note that at any solved node, our sampling procedure incorporates the ?exact? value of that node into the importance weights, which serves as Rao-Blackwellisation and can potentially reduce variance. We can see that if S = ? (before search), q S (x) is the proposal distribution of [15]; as search proceeds, the quality of the proposal distribution improves (gradually approaching the underlying distribution f (x)/Z as S approaches the complete search tree). If we perform search first, up to some memory limit, and then sample, which we refer to as two-stage sampling, our probabilistic bounds will proceed from an improved baseline, giving better bounds at moderate to long computation times. However, doing so sacrifices the quick improvement early on given by basic importance sampling. In the next section, we describe our dynamic importance sampling procedure, which balances these two properties. 4 Dynamic importance sampling To provide good anytime behavior, we would like to do both sampling and search, so that early samples can improve the bound quickly, while later samples obtain the benefits of the search tree?s improved proposal. To do so, we define a dynamic importance sampling (DIS) scheme, presented in Alg. 1, which interleaves drawing samples and expanding the search tree. One complication of such an approach is that each sample comes from a different proposal distribution, and thus has a different bound value entering into the concentration inequality. Moreover, each sample is of a different quality ? later samples should have lower variance, since they come from an improved proposal. To this end, we construct an estimator of Z that upweights higher-quality samples. Let i Si i N b {xi }N i=1 be a series of samples drawn via Alg. 1, with {Zi = f (x )/q (x )}i=1 the corresponding importance weights, and {Ui = U Si }N the corresponding upper bounds on the importance weights i=1 b respectively. We introduce an estimator Z of Z: N bi HM(U ) X Z Zb = , N U i=1 i HM(U ) = N h1 X 1 i?1 N i=1 Ui (13) where HM(U ) is the harmonic mean of the upper bounds Ui . 
Zb is an unbiased estimator of Z (since it is a weighted average of independent, unbiased estimators). Additionally, since Z/ HM(U ), b HM(U ), and Z bi /Ui are all within the interval [0, 1], we can apply an empirical Bernstein Z/ bound [17] to derive finite-sample bounds: 6 Theorem 1. Define the deviation term s  2Var({ d Z bi /Ui }N ) ln(2/?) 7 ln(2/?)  i=1 + ? = HM(U ) N 3(N ? 1) (14) bi /Ui }N . Then Z d Zbi /Ui }N ) is the unbiased empirical variance of {Z b + ? and Z b?? where Var({ i=1 i=1 b are upper and lower bounds of Z with probability at least 1 ? ?, respectively, i.e., Pr[Z ? Z + ?] ? 1 ? ? and Pr[Z ? Zb ? ?] ? 1 ? ?. b ? ? < 0 at first; if so, we may replace Zb ? ? with any non-trivial lower bound It is possible that Z b a (1 ? ?) probabilistic bound by the Markov inequality [11]. of Z. In the experiments, we use Z?, b + ? with the current deterministic upper bound if the latter is tighter. We can also replace Z Intuitively, our DIS algorithm is similar to Monte Carlo tree search (MCTS) [1], which also grows an explicit search tree while sampling. However, in MCTS, the sampling procedure is used to grow the tree, while DIS uses a classic search priority. This ensures that the DIS samples are independent, since samples do not influence the proposal distribution of later samples. This also distinguishes DIS from methods such as adaptive importance sampling (AIS) [18]. 5 Empirical evaluation We evaluate our approach (DIS) against AOBFS (search, [16]) and WMB-IS (sampling, [15]) on several benchmarks of real-world problem instances from recent UAI competitions. Our benchmarks include pedigree, 22 genetic linkage instances from the UAI?08 inference challenge1 ; protein, 50 randomly selected instances made from the ?small? protein side-chains of [22]; and BN, 50 randomly selected Bayesian networks from the UAI?06 competition2 . These three sets are selected to illustrate different problem characteristics; for example protein instances are relatively small (M = 100 variables on average, and average induced width 11.2) but high cardinality (average max |Xi | = 77.9), while pedigree and BN have more variables and higher induced width (average M 917.1 and 838.6, average width 25.5 and 32.8), but lower cardinality (average max |Xi | 5.6 and 12.4). We alloted 1GB memory to all methods, first computing the largest ibound that fits the memory budget, and using the remaining memory for search. All the algorithms used the same upper bound heuristics, which also means DIS and AOBFS had the same amount of memory available for search. For AOBFS, we use the memory-limited version (Alg. 2 of [16]) with ?upper? priority, which continues improving its bounds past the memory limit. Additionally, we let AOBFS access a lower bound heuristic for no cost, to facilitate comparison between DIS and AOBFS. We show DIS for two settings, (Nl =1, Nd =1) and (Nl =1, Nd =10), balancing the effort between search and sampling. Note that WMB-IS can be viewed as DIS with (Nl =Inf, Nd =0), i.e., it runs pure sampling without any search, and two-stage sampling viewed as DIS with (Nl =1, Nd =Inf), i.e., it searches to the memory limit then samples. We set ? = 0.025 and ran each algorithm for 1 hour. All implementations are in C/C++. Anytime bounds for individual instances. Fig. 3 shows the anytime behavior of all methods on two instances from each benchmark. We observe that compared to WMB-IS, DIS provides better upper and lower bounds on all instances. 
In 3(d)?(f), WMB-IS is not able to produce tight bounds within 1 hour, but DIS quickly closes the gap. Compared to AOBFS, in 3(a)?(c),(e), DIS improves much faster, and in (d),(f) it remains nearly as fast as search. Note that four of these examples are sufficiently hard to be unsolved by a variable elimination-based exact solver, even with several orders of magnitude more computational resources (200GB memory, 24 hour time limit). Thus, DIS provides excellent anytime behavior; in particular, (Nl =1, Nd =10) seems to work well, perhaps because expanding the search tree is slightly faster than drawing a sample (since the tree depth is less than the number of variables). On the other hand, two-stage sampling gives weaker early bounds, but is often excellent at longer time settings. Aggregated results across the benchmarks. To quantify anytime performance of the methods in each benchmark, we introduce a measure based on the area between the upper and lower bound of 1 2 http://graphmod.ics.uci.edu/uai08/Evaluation/Report/Benchmarks/ http://melodi.ee.washington.edu/~bilmes/uai06InferenceEvaluation/ 7 ?85 ?125 AOBFS WMB-IS DIS (Nl =1, Nd =1) DIS (Nl =1, Nd =10) two-stage ?130 1 10 2 10 3 ?90 ?95 ?105 0 10 10 (a) pedigree/pedigree33 2 10 time (sec) ?270 AOBFS WMB-IS DIS (Nl =1, Nd =1) DIS (Nl =1, Nd =10) two-stage 1 10 2 10 3 2 10 time (sec) ?70 ?75 ?80 ?85 AOBFS WMB-IS DIS (Nl =1, Nd =1) DIS (Nl =1, Nd =10) two-stage ?95 10 10 (d) pedigree/pedigree37 10 4 ?120 ?90 4 10 time (sec) AOBFS WMB-IS DIS (Nl =1, Nd =1) DIS (Nl =1, Nd =10) two-stage (c) BN/BN_30 logZ ( unknown ) logZ ( unknown ) ?265 ?280 ?30 ?35 0 10 4 10 ?65 ?275 ?25 (b) protein/1co6 ?260 logZ ( ?268.435 ) AOBFS WMB-IS DIS (Nl =1, Nd =1) DIS (Nl =1, Nd =10) two-stage ?100 4 10 time (sec) ?20 logZ ( unknown ) logZ ( unknown ) logZ ( ?124.979 ) ?120 0 2 10 time (sec) (e) protein/1bgc ?130 ?140 ?150 AOBFS WMB-IS DIS (Nl =1, Nd =1) DIS (Nl =1, Nd =10) two-stage ?160 4 10 1 2 10 10 3 10 time (sec) 10 4 (f) BN/BN_129 Figure 3: Anytime bounds on logZ for two instances per benchmark. Dotted line sections on some curves indicate Markov lower bounds. In examples where search is very effective (d,f), or where sampling is very effective (a), DIS is equal or nearly so, while in (b,c,e) DIS is better than either. Table 1: Mean area between upper and lower bounds of logZ, normalized by WMB-IS, for each benchmark. Smaller numbers indicate better anytime bounds. The best for each benchmark is bolded. pedigree protein BN AOBFS WMB-IS DIS (Nl =1, Nd =1) DIS (Nl =1, Nd =10) two-stage 16.638 1.576 0.233 1 1 1 0.711 0.110 0.340 0.585 0.095 0.162 1.321 2.511 0.865 logZ. For each instance and method, we compute the area of the interval between the upper and lower bound of logZ for that instance and method. To avoid vacuous lower bounds, we provide each algorithm with an initial lower bound on logZ from WMB. To facilitate comparison, we normalize the area of each method by that of WMB-IS on each instance, then report the geometric mean of the normalized areas across each benchmark in Table 1. This shows the average relative quality compared to WMB-IS; smaller values indicate tighter anytime bounds. We see that on average, search is more effective than sampling on the BN instances, but much less effective on pedigree. Across all three benchmarks, DIS (Nl =1, Nd =10) produces the best result by a significant margin, while DIS (Nl =1, Nd =1) is also very competitive, and two-stage sampling does somewhat less well. 
6 Conclusion We propose a dynamic importance sampling algorithm that embraces the merits of best-first search and importance sampling to provide anytime finite-sample bounds for the partition function. The AOBFS search process improves the proposal distribution over time, while our particular weighted average of importance weights gives the resulting estimator quickly decaying finite-sample bounds, as illustrated on several UAI problem benchmarks. Our work also opens up several avenues for future research, including investigating different weighting schemes for the samples, more flexible balances between search and sampling (for example, changing over time), and more closely integrating the variational optimization process into the anytime behavior. 8 Acknowledgements We thank William Lam, Wei Ping, and all the reviewers for their helpful feedback. This work is sponsored in part by NSF grants IIS-1526842, IIS-1254071, and by the United States Air Force under Contract No. FA8750-14-C-0011 and FA9453-16-C-0508 under the DARPA PPAML program. References [1] C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in games, 4(1):1?43, 2012. [2] S. Chakraborty, K. S. Meel, and M. Y. Vardi. Algorithmic improvements in approximate counting for probabilistic inference: From linear to logarithmic SAT calls. IJCAI?16. [3] S. Chakraborty, D. J. Fremont, K. S. Meel, S. A. Seshia, and M. Y. Vardi. Distribution-aware sampling and weighted model counting for SAT. AAAI?14, pages 1722?1730. AAAI Press, 2014. [4] P. Dagum and M. Luby. An optimal approximation algorithm for Bayesian inference. Artificial Intelligence, 93(1-2):1?27, 1997. [5] A. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009. [6] R. Dechter. Reasoning with probabilistic and deterministic graphical models: Exact algorithms. Synthesis Lectures on Artificial Intelligence and Machine Learning, 7(3):1?191, 2013. [7] R. Dechter and I. Rish. Mini-buckets: A general scheme of approximating inference. Journal of ACM, 50 (2):107?153, 2003. [8] R. Dechter, H. Geffner, and J. Y. Halpern. Heuristics, Probability and Causality. A Tribute to Judea Pearl. College Publications, 2010. [9] S. Ermon, C. Gomes, A. Sabharwal, and B. Selman. Taming the curse of dimensionality: Discrete integration by hashing and optimization. In International Conference on Machine Learning, pages 334?342, 2013. [10] S. Ermon, C. Gomes, A. Sabharwal, and B. Selman. Low-density parity constraints for hashing-based discrete integration. In International Conference on Machine Learning, pages 271?279, 2014. [11] V. Gogate and R. Dechter. Sampling-based lower bounds for counting queries. Intelligenza Artificiale, 5 (2):171?188, 2011. [12] M. Henrion. Search-based methods to bound diagnostic probabilities in very large belief nets. In Proceedings of the 7th conference on Uncertainty in Artificial Intelligence, pages 142?150, 1991. [13] Q. Liu. Reasoning and Decisions in Probabilistic Graphical Models?A Unified Framework. PhD thesis, University of California, Irvine, 2014. [14] Q. Liu and A. Ihler. Bounding the partition function using H?lder?s inequality. In Proceedings of the 28th International Conference on Machine Learning (ICML), New York, NY, USA, 2011. [15] Q. Liu, J. W. Fisher, III, and A. T. Ihler. Probabilistic variational bounds for graphical models. 
In Advances in Neural Information Processing Systems, pages 1432–1440, 2015.
[16] Q. Lou, R. Dechter, and A. Ihler. Anytime anyspace AND/OR search for bounding the partition function. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, 2017.
[17] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample variance penalization. In COLT, 2009.
[18] M.-S. Oh and J. O. Berger. Adaptive importance sampling in Monte Carlo integration. Journal of Statistical Computation and Simulation, 41(3-4):143–168, 1992.
[19] L. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8(2):189–201, 1979.
[20] C. Viricel, D. Simoncini, S. Barbe, and T. Schiex. Guaranteed weighted counting for affinity computation: Beyond determinism and structure. In International Conference on Principles and Practice of Constraint Programming, pages 733–750. Springer, 2016.
[21] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[22] C. Yanover and Y. Weiss. Approximate inference and protein-folding. In Advances in Neural Information Processing Systems, pages 1457–1464, 2002.
Is the Bellman residual a bad proxy?

Matthieu Geist¹, Bilal Piot²,³ and Olivier Pietquin²,³
¹ Université de Lorraine & CNRS, LIEC, UMR 7360, Metz, F-57070, France
² Univ. Lille, CNRS, Centrale Lille, Inria, UMR 9189 - CRIStAL, F-59000 Lille, France
³ Now with Google DeepMind, London, United Kingdom
matthieu.geist@univ-lorraine.fr, bilal.piot@univ-lille1.fr, olivier.pietquin@univ-lille1.fr

Abstract

This paper aims at theoretically and empirically comparing two standard optimization criteria for Reinforcement Learning: i) maximization of the mean value and ii) minimization of the Bellman residual. For that purpose, we place ourselves in the framework of policy search algorithms, which are usually designed to maximize the mean value, and derive a method that minimizes the residual ‖T* v_π − v_π‖_{1,µ} over policies. A theoretical analysis shows how good this proxy is for policy optimization, and notably that it is better than its value-based counterpart. We also propose experiments on randomly generated generic Markov decision processes, specifically designed for studying the influence of the involved concentrability coefficient. They show that the Bellman residual is generally a bad proxy for policy optimization and that directly maximizing the mean value is much better, despite the current lack of deep theoretical analysis. This might seem obvious, as directly addressing the problem of interest is usually better, but given the prevalence of (projected) Bellman residual minimization in value-based reinforcement learning, we believe that this question is worth considering.

1 Introduction

Reinforcement Learning (RL) aims at estimating a policy π close to the optimal one, in the sense that its value, v_π (the expected discounted return), is close to maximal, i.e. ‖v_* − v_π‖ is small (v_* being the optimal value), for some norm. Controlling the residual ‖T* v_θ − v_θ‖ (where T* is the optimal Bellman operator and v_θ a value function parameterized by θ) over a class of parameterized value functions is a classical approach in value-based RL, and especially in Approximate Dynamic Programming (ADP). Indeed, controlling this residual allows controlling the distance to the optimal value function: generally speaking, we have that

‖v_* − v_{π_{v_θ}}‖ ≤ ( C / (1 − γ) ) ‖T* v_θ − v_θ‖,    (1)

with the policy π_{v_θ} being greedy with respect to v_θ [17, 19]. Some classical ADP approaches actually minimize a projected Bellman residual, ‖Π(T* v_θ − v_θ)‖, where Π is the operator projecting onto the hypothesis space to which v_θ belongs: Approximate Value Iteration (AVI) [11, 9] tries to minimize this using a fixed-point approach, v_{θ_{k+1}} = Π T* v_{θ_k}, and it has been shown recently [18] that Least-Squares Policy Iteration (LSPI) [13] tries to minimize it using a Newton approach¹. Notice that in this case (projected residual), there is no general performance bound² for controlling ‖v_* − v_{π_{v_θ}}‖.

¹ (Exact) policy iteration actually minimizes ‖T* v − v‖ using a Newton descent [10].
² With a single action, this approach reduces to LSTD (Least-Squares Temporal Differences) [5], which can be arbitrarily bad in an off-policy setting [20].

Despite the fact that (unprojected) residual approaches come easily with performance guarantees, they are not extensively studied in the (value-based) literature (one can mention [3], which considers a subgradient descent, or [19], which frames the norm of the residual as a delta-convex function).
A reason for this is that they lead to biased estimates when the Markovian transition kernel is stochastic and unknown [1], which is a rather standard case. Projected Bellman residual approaches are more common, even if not introduced as such originally (notable exceptions are [16, 18]). An alternative approach consists in directly maximizing the mean value E_ν[v_π(S)] for a user-defined state distribution ν, this being equivalent to directly minimizing ‖v_* − v_π‖_{1,ν}, see Sec. 2. This suggests defining a class of parameterized policies and optimizing over them, which is the predominant approach in policy search³ [7].

This paper aims at theoretically and experimentally studying these two approaches: maximizing the mean value (related algorithms operate on policies) and minimizing the residual (related algorithms operate on value functions). For that purpose, we place ourselves in the context of policy search algorithms. We adopt this position because we could derive a method that minimizes the residual ‖T* v_π − v_π‖_{1,µ} over policies and compare it to other methods that usually maximize the mean value. On the other hand, adapting ADP methods so that they maximize the mean value is much harder⁴. This new approach is presented in Sec. 3, and we show theoretically how good this proxy is. In Sec. 4, we conduct experiments on randomly generated generic Markov decision processes to compare both approaches empirically. The experiments are specifically designed to study the influence of the involved concentrability coefficient. Despite the good theoretical properties of the Bellman residual approach, it turns out that it only works well if there is a good match between the sampling distribution and the discounted state occupancy distribution induced by the optimal policy, which is a very limiting requirement. In comparison, maximizing the mean value is rather insensitive to this issue and works well whatever the sampling distribution is, contrary to what the sole related theoretical bound suggests. This study thus suggests that maximizing the mean value, although it does not provide easy theoretical analysis, is a better approach to building efficient and robust RL algorithms.

2 Background

2.1 Notations

Let Δ_X be the set of probability distributions over a finite set X and Y^X the set of applications from X to the set Y. By convention, all vectors are column vectors, except distributions (for left multiplication). A Markov Decision Process (MDP) is a tuple {S, A, P, R, γ}, where S is the finite state space⁵, A is the finite action space, P ∈ (Δ_S)^{S×A} is the Markovian transition kernel (P(s′|s, a) denotes the probability of transiting to s′ when action a is applied in state s), R ∈ R^{S×A} is the bounded reward function (R(s, a) represents the local benefit of doing action a in state s) and γ ∈ (0, 1) is the discount factor. For v ∈ R^S, we write ‖v‖_{1,µ} = ∑_{s∈S} µ(s)|v(s)| for the µ-weighted ℓ1-norm of v. Notice that when the function v ∈ R^S is componentwise positive, that is v ≥ 0, the µ-weighted ℓ1-norm of v is actually its expectation with respect to µ: if v ≥ 0, then ‖v‖_{1,µ} = E_µ[v(S)] = µv. We will make intensive use of this basic property in the following.

A stochastic policy π ∈ (Δ_A)^S associates a distribution over actions to each state. The policy-induced reward and transition kernels, R_π ∈ R^S and P_π ∈ (Δ_S)^S, are defined as R_π(s) = E_{π(·|s)}[R(s, A)] and P_π(s′|s) = E_{π(·|s)}[P(s′|s, A)]. The quality of a policy is quantified by the associated value function v_π ∈ R^S:

v_π(s) = E[ ∑_{t≥0} γ^t R_π(S_t) | S_0 = s, S_{t+1} ∼ P_π(·|S_t) ].
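All of these quantities can be computed exactly for a small tabular MDP; the sketch below is our own illustration (shape conventions and helper names are assumptions: P of shape (S, A, S), R of shape (S, A), a stochastic policy pi of shape (S, A)):

import numpy as np

def evaluate(P, R, pi, gamma):
    # v_pi = (I - gamma * P_pi)^{-1} R_pi
    P_pi = np.einsum('sa,sat->st', pi, P)    # P_pi(s'|s) = E_{pi(.|s)}[P(s'|s,A)]
    R_pi = np.sum(pi * R, axis=1)            # R_pi(s)    = E_{pi(.|s)}[R(s,A)]
    return np.linalg.solve(np.eye(R.shape[0]) - gamma * P_pi, R_pi)

def weighted_l1(v, mu):
    return float(np.sum(mu * np.abs(v)))     # ||v||_{1,mu}; equals mu @ v when v >= 0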
³ A remarkable aspect of policy search is that it does not necessarily rely on the Markovian assumption, but this is out of the scope of this paper (residual approaches rely on it, through the Bellman equation). Some recent and effective approaches build on policy search, such as deep deterministic policy gradient [15] or trust region policy optimization [23]. Here, we focus on the canonical mean value maximization approach.
⁴ Approximate linear programming could be considered as such, but is often computationally intractable [8, 6].
⁵ This choice is made for ease and clarity of exposition; the following results could be extended to continuous state and action spaces.

The value $v_\pi$ is the unique fixed point of the Bellman operator $T_\pi$, defined as $T_\pi v = R_\pi + \gamma P_\pi v$ for any $v \in \mathbb{R}^S$. Let us define the second Bellman operator $T_*$ as, for any $v \in \mathbb{R}^S$, $T_* v = \max_{\pi\in(\Delta_A)^S} T_\pi v$. A policy $\pi$ is greedy with respect to $v \in \mathbb{R}^S$, denoted $\pi \in \mathcal{G}(v)$, if $T_\pi v = T_* v$. There exists an optimal policy $\pi_*$ that satisfies componentwise $v_{\pi_*} \ge v_\pi$, for all $\pi \in (\Delta_A)^S$. Moreover, we have that $\pi_* \in \mathcal{G}(v_*)$, with $v_*$ being the unique fixed point of $T_*$. Finally, for any distribution $\mu \in \Delta_S$, the $\mu$-weighted occupancy measure induced by the policy $\pi$ when the initial state is sampled from $\mu$ is defined as
$$d_{\mu,\pi} = (1-\gamma)\,\mu \sum_{t\ge 0} \gamma^t P_\pi^t = (1-\gamma)\,\mu\,(I - \gamma P_\pi)^{-1} \in \Delta_S.$$
For two distributions $\mu$ and $\nu$, we write $\|\frac{\mu}{\nu}\|_\infty$ for the smallest constant $C$ satisfying, for all $s \in S$, $\mu(s) \le C\,\nu(s)$. This quantity measures the mismatch between the two distributions.

2.2 Maximizing the mean value

Let $\mathcal{P}$ be a space of parameterized stochastic policies and let $\mu$ be a distribution of interest. The optimal policy has a higher value than any other policy, for any state. If the MDP is too large, satisfying this condition is not reasonable. Therefore, a natural idea consists in searching for a policy such that the associated value function is as close as possible to the optimal one, in expectation, according to a distribution of interest $\mu$. More formally, this means minimizing
$$\|v_* - v_\pi\|_{1,\mu} = E_\mu[v_*(S) - v_\pi(S)] \ge 0.$$
The optimal value function being unknown, one cannot address this problem directly, but it is equivalent to maximizing $E_\mu[v_\pi(S)]$. This is the basic principle of many policy search approaches:
$$\max_{\pi\in\mathcal{P}} J_\nu(\pi) \quad \text{with} \quad J_\nu(\pi) = E_\nu[v_\pi(S)] = \nu v_\pi.$$
Notice that we used a sampling distribution $\nu$ here, possibly different from the distribution of interest $\mu$. Related algorithms differ notably by the considered criterion (e.g., it can be the mean reward rather than the $\gamma$-discounted cumulative reward considered here) and by how the corresponding optimization problem is solved. We refer to [7] for a survey on that topic. Contrary to ADP, the theoretical efficiency of this family of approaches has not been studied much. Indeed, as far as we know, there is a sole performance bound for maximizing the mean value.

Theorem 1 (Scherrer and Geist [22]). Assume that the policy space $\mathcal{P}$ is stable by stochastic mixture, that is, $\forall \pi, \pi' \in \mathcal{P}, \forall \alpha \in (0,1), (1-\alpha)\pi + \alpha\pi' \in \mathcal{P}$. Define the $\nu$-greedy-complexity of the policy space $\mathcal{P}$ as
$$\mathcal{E}_\nu(\mathcal{P}) = \max_{\pi\in\mathcal{P}}\ \min_{\pi'\in\mathcal{P}}\ \nu\big(T_* v_\pi - T_{\pi'} v_\pi\big).$$
Then, any policy $\pi$ that is an $\epsilon$-local optimum of $J_\nu$, in the sense that
$$\forall \pi' \in \mathcal{P}, \quad \lim_{\alpha\to 0} \frac{\nu v_{(1-\alpha)\pi + \alpha\pi'} - \nu v_\pi}{\alpha} \le \epsilon,$$
enjoys the following global performance guarantee:
$$\mu(v_* - v_\pi) \le \frac{1}{(1-\gamma)^2} \left\|\frac{d_{\mu,\pi_*}}{\nu}\right\|_\infty \big(\mathcal{E}_\nu(\mathcal{P}) + \epsilon\big).$$

This bound (as all bounds of this kind) has three terms: a horizon term, a concentrability term and an error term.
The term $\frac{1}{1-\gamma}$ is the average optimization horizon. The concentrability coefficient ($\|d_{\mu,\pi_*}/\nu\|_\infty$) measures the mismatch between the used distribution $\nu$ and the $\mu$-weighted occupancy measure induced by the optimal policy $\pi_*$ when the initial state is sampled from the distribution of interest $\mu$. This tells that if $\mu$ is the distribution of interest, one should optimize $J_{d_{\mu,\pi_*}}$, which is not feasible, $\pi_*$ being unknown (in this case, the coefficient is equal to 1, its lower bound). This coefficient can be arbitrarily large: consider the case where $\mu$ concentrates on a single starting state (that is, $\mu(s_0) = 1$ for a given state $s_0$) and such that the optimal policy leads to other states (that is, $d_{\mu,\pi_*}(s_0) < 1$); the coefficient is then infinite. However, it is also the best concentrability coefficient according to [21], which provides a theoretical and empirical comparison of Approximate Policy Iteration (API) schemes. The error term is $\mathcal{E}_\nu(\mathcal{P}) + \epsilon$, where $\mathcal{E}_\nu(\mathcal{P})$ measures the capacity of the policy space to represent the policies being greedy with respect to the value of any policy in $\mathcal{P}$, and $\epsilon$ tells how close the computed policy $\pi$ is to a local optimum of $J_\nu$.

There exist other policy search approaches, based on ADP rather than on maximizing the mean value, such as Conservative Policy Iteration (CPI) [12] or Direct Policy Iteration (DPI) [14]. The bound of Thm. 1 matches the bounds of DPI or CPI. Actually, CPI can be shown to be a boosting approach maximizing the mean value. See the discussion in [22] for more details. However, this bound is also based on a very strong assumption (stability by stochastic mixture of the policy space) which is not satisfied by all commonly used policy parameterizations.

3 Minimizing the Bellman residual

Direct maximization of the mean value operates on policies, while residual approaches operate on value functions. To study these two optimization criteria together, we introduce a policy search method that minimizes a residual. As noted before, we do so because it is much simpler than introducing a value-based approach that maximizes the mean value. We also show how good this proxy is to policy optimization. Although this algorithm is new, it is not claimed to be a core contribution of the paper. Yet it is clearly a mandatory step to support the comparison between optimization criteria.

3.1 Optimization problem

We propose to search a policy in $\mathcal{P}$ that minimizes the following Bellman residual:
$$\min_{\pi\in\mathcal{P}} \tilde J_\nu(\pi) \quad \text{with} \quad \tilde J_\nu(\pi) = \|T_* v_\pi - v_\pi\|_{1,\nu}.$$
Notice that, as for the maximization of the mean value, we used a sampling distribution $\nu$, possibly different from the distribution of interest $\mu$. From the basic properties of the Bellman operator, for any policy $\pi$ we have $T_* v_\pi \ge v_\pi$. Consequently, the $\nu$-weighted $\ell_1$-norm of the residual is indeed the expected Bellman residual:
$$\tilde J_\nu(\pi) = E_\nu\big[[T_* v_\pi](S) - v_\pi(S)\big] = \nu\,(T_* v_\pi - v_\pi).$$
Therefore, there is naturally no bias problem for minimizing a residual here, contrary to other residual approaches [1]. This is an interesting result on its own, as removing the bias in value-based residual approaches is far from straightforward. This results from the optimization being done over policies and not over values, and thus from $v_\pi$ being an actual value (the one of the current policy) obeying the Bellman equation⁶. Any optimization method can be envisioned to minimize $\tilde J_\nu$. Here, we simply propose to apply a subgradient descent (despite the lack of convexity).
Theorem 2 (Subgradient of $\tilde J_\nu$). Recall that, given the considered notations, the distribution $\nu P_{\mathcal{G}(v_\pi)}$ is the state distribution obtained by sampling the initial state according to $\nu$, applying the action being greedy with respect to $v_\pi$ and following the dynamics to the next state. This being said, the subgradient of $\tilde J_\nu$ is given by
$$-\partial \tilde J_\nu(\theta) = \frac{1}{1-\gamma} \sum_{s,a} \Big(d_{\nu,\pi}(s) - \gamma\, d_{\nu P_{\mathcal{G}(v_\pi)},\pi}(s)\Big)\, \pi(a|s)\, \nabla \ln \pi(a|s)\, q_\pi(s,a),$$
with $q_\pi(s,a) = R(s,a) + \gamma \sum_{s'\in S} P(s'|s,a)\, v_\pi(s')$ the state-action value function.

Proof. The proof relies on basic (sub)gradient calculus; it is given in the appendix.

There are two terms in the negative subgradient $-\partial \tilde J_\nu$: the first one corresponds to the gradient of $J_\nu$, the second one (up to the multiplication by $-\gamma$) is the gradient of $J_{\nu P_{\mathcal{G}(v_\pi)}}$ and acts as a kind of correction. This subgradient can be estimated using Monte Carlo rollouts, but doing so is harder than for classic policy search (as it requires additionally sampling from $\nu P_{\mathcal{G}(v_\pi)}$, which requires estimating the state-action value function). Also, this gradient involves computing the maximum over actions (as it requires sampling from $\nu P_{\mathcal{G}(v_\pi)}$, which comes from explicitly considering the Bellman optimality operator), which prevents extending this approach easily to continuous actions, contrary to classic policy search. Thus, from an algorithmic point of view, this approach has drawbacks. Yet, we do not discuss further how to efficiently estimate this subgradient, since we introduced this approach for the sake of comparison to standard policy search methods only. For this reason, we will consider an ideal algorithm in the experimental section where an analytical computation of the subgradient is possible, see Sec. 4. This will place us in an unrealistically good setting, which will help focusing on the main conclusions. Before this, we study how good this proxy is to policy optimization.

⁶ The property $T_* v \ge v$ does not hold if $v$ is not the value function of a given policy, as in value-based approaches.

3.2 Analysis

Theorem 3 (Proxy bound for residual policy search). We have that
$$\|v_* - v_\pi\|_{1,\mu} \ \le\ \frac{1}{1-\gamma} \left\|\frac{d_{\mu,\pi_*}}{\nu}\right\|_\infty \tilde J_\nu(\pi) \ =\ \frac{1}{1-\gamma} \left\|\frac{d_{\mu,\pi_*}}{\nu}\right\|_\infty \|T_* v_\pi - v_\pi\|_{1,\nu}.$$

Proof. The proof can be easily derived from the analyses of [12], [17] or [22]. We detail it for completeness in the appendix.

This bound shows how controlling the residual helps in controlling the error. It has a linear dependency on the horizon, and the concentrability coefficient is the best one can expect (according to [21]). It has the same form as the bounds for value-based residual minimization [17, 19] (see also Eq. (1)). It is even better, due to the involved concentrability coefficient (the ones for value-based bounds are worse, see [21] for a comparison).

Unfortunately, this bound is hardly comparable to the one of Th. 1, due to the error terms. In Th. 3, the error term (the residual) is a global error (how good the residual is as a proxy), whereas in Th. 1 the error term is mainly a local error (how small the gradient is after maximizing the mean value). Notice also that Th. 3 is roughly an intermediate step for proving Th. 1, and that it applies to any policy (suggesting that searching for a policy that minimizes the residual makes sense). One could argue that a similar bound for mean value maximization would be something like: if $J_\nu(\pi) \ge \delta$, then $\|v_* - v_\pi\|_{1,\nu} \le \nu v_* - \delta$. However, this is an oracle bound, as it depends on the unknown solution $v_*$. It is thus hardly exploitable. The aim of this paper is to compare these two optimization approaches to RL.
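Since the experiments of Sec. 4 rely on an analytical computation of this subgradient, here is a sketch (our own reading of Thm. 2, not code from the paper) of how it can be evaluated exactly when the model is known. It reuses `value` and `policy_kernels` from the earlier sketch; `grad_log_pi` is an assumed (S, A, P)-shaped array holding $\nabla_\theta \ln\pi(a|s)$ for a P-dimensional parameterization.

```python
import numpy as np

def occupancy(mu, P_pi, gamma):
    """d_{mu,pi} = (1 - gamma) * mu (I - gamma P_pi)^{-1}, as a row vector."""
    S = len(mu)
    return (1.0 - gamma) * np.linalg.solve((np.eye(S) - gamma * P_pi).T, mu)

def neg_residual_subgradient(P, R, pi, grad_log_pi, nu, gamma):
    """-dJ~_nu(theta), following the formula of Thm. 2, computed from the model."""
    v = value(P, R, pi, gamma)
    q = R + gamma * np.einsum('sat,t->sa', P, v)           # state-action values
    greedy = np.zeros_like(pi)                             # a policy in G(v_pi)
    greedy[np.arange(pi.shape[0]), q.argmax(axis=1)] = 1.0
    _, P_pi = policy_kernels(P, R, pi)
    _, P_g = policy_kernels(P, R, greedy)
    # weighting term d_{nu,pi}(s) - gamma * d_{nu P_G(v_pi), pi}(s)
    w = occupancy(nu, P_pi, gamma) - gamma * occupancy(nu @ P_g, P_pi, gamma)
    return np.einsum('s,sa,sap,sa->p', w, pi, grad_log_pi, q) / (1.0 - gamma)
```

A normalized step along this direction gives the RPS update used in the experiments below.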
At first sight, maximizing directly the mean value should be better (as a more direct approach). If the bounds of Th. 1 and 3 are hardly comparable, we can still discuss the involved terms. The horizon term is better (linear instead of quadratic) for the residual approach. Yet, a horizon term can possibly be hidden in the residual itself. Both bounds involve the same concentrability coefficient, the best one can expect. This is a very important term in RL bounds, often underestimated: as these coefficients can easily explode, minimizing an error makes sense only if it is not multiplied by infinity. This coefficient suggests that one should use $d_{\mu,\pi_*}$ as the sampling distribution. This is rarely reasonable, while using instead directly the distribution of interest is more natural. Therefore, the experiments we propose in the next section focus on the influence of this concentrability coefficient.

4 Experiments

We consider Garnet problems [2, 4]. They are a class of randomly built MDPs meant to be totally abstract while remaining representative of the problems that might be encountered in practice. Here, a Garnet $G(|S|, |A|, b)$ is specified by the number of states, the number of actions and the branching factor. For each $(s,a)$ couple, $b$ different next states are chosen randomly and the associated probabilities are set by randomly partitioning the unit interval. The reward is null, except for 10% of the states, where it is set to a random value, uniform in $(1,2)$. We set $\gamma = 0.99$.

For the policy space, we consider a Gibbs parameterization: $\mathcal{P} = \{\pi_w : \pi_w(a|s) \propto e^{w^\top \phi(s,a)}\}$. The features are also randomly generated, $F(d, l)$. First, we generate binary state-features $\varphi(s)$ of dimension $d$, such that $l$ components are set to 1 (the others are thus 0). The positions of the 1's are selected randomly such that no two states have the same feature. Then, the state-action features, of dimension $d|A|$, are classically defined as $\phi(s,a) = (0 \dots 0\ \varphi(s)^\top\ 0 \dots 0)^\top$, the position of the zeros depending on the action. Notice that in general this policy space is not stable by stochastic mixture, so the bound for policy search does not formally apply.

We compare classic policy search (denoted PS($\nu$)), which maximizes the mean value, and residual policy search (denoted RPS($\nu$)), which minimizes the mean residual. We optimize the respective objective functions with a normalized gradient ascent (resp. normalized subgradient descent) with a constant learning rate of 0.1. The gradients are computed analytically (as we have access to the model), so the following results represent an ideal case, where one can do an infinite number of rollouts. Unless said otherwise, the distribution $\mu \in \Delta_S$ of interest is the uniform distribution.

4.1 Using the distribution of interest

First, we consider $\nu = \mu$. We generate randomly 100 Garnets $G(30, 4, 2)$ and 100 features $F(8, 3)$. For each Garnet-feature couple, we run both algorithms for $T = 1000$ iterations. For each algorithm, we measure two quantities: the (normalized) error $\frac{\|v_* - v_\pi\|_{1,\mu}}{\|v_*\|_{1,\mu}}$ (notice that, as rewards are positive, we have $\|v_*\|_{1,\mu} = \mu v_*$) and the Bellman residual $\|T_* v_\pi - v_\pi\|_{1,\mu}$, where $\pi$ depends on the algorithm and on the iteration.
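Before looking at the results, here is a plausible NumPy sketch of the Garnet construction and the Gibbs policy just described (our interpretation of the text above and of [2, 4]; the authors' exact generator is not given in the paper).

```python
import numpy as np

def garnet(S, A, b, rng):
    """Random Garnet G(S, A, b): b random successors per (s, a), probabilities
    obtained by randomly partitioning [0, 1]; ~10% of states carry a reward in (1, 2)."""
    P = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            succ = rng.choice(S, size=b, replace=False)
            cuts = np.sort(rng.uniform(size=b - 1))
            P[s, a, succ] = np.diff(np.concatenate(([0.0], cuts, [1.0])))
    R = np.zeros((S, A))
    rewarded = rng.choice(S, size=max(1, S // 10), replace=False)
    R[rewarded, :] = rng.uniform(1.0, 2.0, size=(len(rewarded), 1))
    return P, R

def gibbs_policy(w, phi):
    """pi_w(a|s) proportional to exp(w_a^T phi(s)); w: (A, d), phi: (S, d)."""
    logits = phi @ w.T
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)
```

For this parameterization, $\nabla_w \ln\pi_w(a|s) = \phi(s,a) - \sum_b \pi_w(b|s)\phi(s,b)$, which plugs directly into the subgradient sketch of Sec. 3.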
We show the results (mean ± standard deviation) in Fig. 1.

[Figure 1: Results on the Garnet problems, when $\nu = \mu$. Four panels, as functions of the number of iterations (0 to 1000): a. Error for PS($\mu$); b. Error for RPS($\mu$); c. Residual for PS($\mu$); d. Residual for RPS($\mu$).]

Fig. 1.a shows that PS($\mu$) succeeds in decreasing the error. This was to be expected, as it is the criterion it optimizes. Fig. 1.c shows how the residual of the policies computed by PS($\mu$) evolves. By comparing this to Fig. 1.a, it can be observed that the residual and the error are not necessarily correlated: the error can decrease while the residual increases, and a low error does not necessarily imply a low residual. Fig. 1.d shows that RPS($\mu$) succeeds in decreasing the residual. Again, this is not surprising, as it is the optimized criterion. Fig. 1.b shows how the error of the policies computed by RPS($\mu$) evolves. Comparing this to Fig. 1.d, it can be observed that decreasing the residual lowers the error: this is consistent with the bound of Thm. 3. Comparing Figs. 1.a and 1.b, it appears clearly that RPS($\mu$) is less efficient than PS($\mu$) at decreasing the error. This might seem obvious, as PS($\mu$) directly optimizes the criterion of interest. However, when comparing the errors and the residuals for each method, it can be observed that they are not necessarily correlated. Decreasing the residual lowers the error, but one can have a low error with a high residual and vice versa. As explained in Sec. 1, (projected) residual-based methods are prevalent in many reinforcement learning approaches. We consider a policy-based residual rather than a value-based one to ease the comparison, but it is worth studying the reason for such a different behavior.

4.2 Using the ideal distribution

The lower the concentrability coefficient $\|\frac{d_{\mu,\pi_*}}{\nu}\|_\infty$ is, the better the bounds in Thm. 1 and 3 are. This coefficient is minimized for $\nu = d_{\mu,\pi_*}$. This is an unrealistic case ($\pi_*$ is unknown), but since we work with known MDPs we can compute this quantity (the model being known), for the sake of a complete empirical analysis. Therefore, PS($d_{\mu,\pi_*}$) and RPS($d_{\mu,\pi_*}$) are compared in Fig. 2. We highlight the fact that the errors and the residuals shown in this figure are measured with respect to the distribution of interest $\mu$, and not the distribution $d_{\mu,\pi_*}$ used for the optimization.

[Figure 2: Results on the Garnet problems, when $\nu = d_{\mu,\pi_*}$. Same four panels as Fig. 1: a. Error for PS($d_{\mu,\pi_*}$); b. Error for RPS($d_{\mu,\pi_*}$); c. Residual for PS($d_{\mu,\pi_*}$); d. Residual for RPS($d_{\mu,\pi_*}$).]

Fig. 2.a shows that PS($d_{\mu,\pi_*}$) succeeds in decreasing the error $\|v_* - v_\pi\|_{1,\mu}$. However, comparing Fig. 2.a to Fig. 1.a, there is no significant gain in using $\nu = d_{\mu,\pi_*}$ instead of $\nu = \mu$. This suggests that the dependency of the bound in Thm. 1 on the concentrability coefficient is not tight. Fig. 2.c shows how the corresponding residual evolves. Again, there is no strong correlation between the residual and the error. Fig. 2.d shows how the residual $\|T_* v_\pi - v_\pi\|_{1,\mu}$ evolves for RPS($d_{\mu,\pi_*}$). It is not decreasing, but it is not what is optimized (the residual $\|T_* v_\pi - v_\pi\|_{1,d_{\mu,\pi_*}}$, not shown, does decrease, in a similar fashion to Fig. 1.d).
Fig. 2.b shows how the related error evolves. Compared to Fig. 2.a, there is no significant difference. The behavior of the residual is similar for both methods (Figs. 2.c and 2.d). Overall, this suggests that controlling the residual (RPS) allows controlling the error, but that this requires a wise choice for the distribution $\nu$. On the other hand, controlling directly the error (PS) is much less sensitive to this. In other words, this suggests a stronger dependency of the residual approach on the mismatch between the sampling distribution and the discounted state occupancy measure induced by the optimal policy.

4.3 Varying the sampling distribution

This experiment is designed to study the effect of the mismatch between the distributions. We sample 100 Garnets $G(30, 4, 2)$, as well as associated feature sets $F(8, 3)$. The distribution of interest is no longer the uniform distribution, but a measure that concentrates on a single starting state of interest $s_0$: $\mu(s_0) = 1$. This is an adversarial case, as it implies that $\|\frac{d_{\mu,\pi_*}}{\mu}\|_\infty = \infty$: the branching factor being equal to 2, the optimal policy $\pi_*$ cannot concentrate on $s_0$. The sampling distribution is defined as a mixture between the distribution of interest and the ideal distribution. For $\alpha \in [0,1]$, $\nu_\alpha$ is defined as $\nu_\alpha = (1-\alpha)\mu + \alpha\, d_{\mu,\pi_*}$. It is straightforward to show that in this case the concentrability coefficient is indeed $\frac{1}{\alpha}$ (with the convention that $\frac{1}{0} = \infty$):
$$\left\|\frac{d_{\mu,\pi_*}}{\nu_\alpha}\right\|_\infty = \max\left(\frac{d_{\mu,\pi_*}(s_0)}{(1-\alpha) + \alpha\, d_{\mu,\pi_*}(s_0)}\ ;\ \frac{1}{\alpha}\right) = \frac{1}{\alpha}.$$
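This identity is easy to check numerically; a small sketch (ours) computing $\|d_{\mu,\pi_*}/\nu_\alpha\|_\infty$ with the convention $x/0 = \infty$, reusing the `occupancy` helper from Sec. 3:

```python
import numpy as np

def concentrability(d, nu):
    """Smallest C with d(s) <= C * nu(s) for all s (np.inf if none exists)."""
    c = 0.0
    for ds, ns in zip(d, nu):
        if ds > 0:
            c = np.inf if ns == 0 else max(c, ds / ns)
    return c

# With mu a Dirac at s0 and d_star = d_{mu, pi*} (with d_star(s0) < 1, as here):
#   nu_alpha = (1 - alpha) * mu + alpha * d_star
#   concentrability(d_star, nu_alpha)  ->  1 / alpha
```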
For each MDP, the learning (for PS($\nu_\alpha$) and RPS($\nu_\alpha$)) is repeated, from the same initial policy, by setting $\alpha = \frac{1}{k}$, for $k \in \{1, \dots, 25\}$. Let $\pi_{t,x}$ be the policy learnt by algorithm $x$ (PS or RPS) at iteration $t$; the integrated error (resp. integrated residual) is defined as
$$\frac{1}{T}\sum_{t=1}^T \frac{\|v_* - v_{\pi_{t,x}}\|_{1,\mu}}{\|v_*\|_{1,\mu}} \qquad \left(\text{resp. } \frac{1}{T}\sum_{t=1}^T \|T_* v_{\pi_{t,x}} - v_{\pi_{t,x}}\|_{1,\mu}\right).$$
Notice that here again, the integrated error and residual are defined with respect to $\mu$, the distribution of interest, and not $\nu_\alpha$, the sampling distribution used for optimization. We get an integrated error (resp. residual) for each value of $\alpha = \frac{1}{k}$, and represent it as a function of $k = \|\frac{d_{\mu,\pi_*}}{\nu_\alpha}\|_\infty$, the concentrability coefficient. Results are presented in Fig. 3, which shows these functions averaged across the 100 randomly generated MDPs (mean ± standard deviation as before; minimum and maximum values are shown in dashed lines).

[Figure 3: Results for the sampling distribution $\nu_\alpha$, as functions of the concentrability coefficient (1 to 25): a. Integrated error for PS($\nu_\alpha$); b. Integrated error for RPS($\nu_\alpha$); c. Integrated residual for PS($\nu_\alpha$); d. Integrated residual for RPS($\nu_\alpha$).]

Fig. 3.a shows the integrated error for PS($\nu_\alpha$). It can be observed that the mismatch between measures has no influence on the efficiency of the algorithm. Fig. 3.b shows the same thing for RPS($\nu_\alpha$). The integrated error increases greatly as the mismatch between the sampling measure and the ideal one increases (the value at which the error saturates corresponds to no improvement over the initial policy). Comparing both figures, it can be observed that RPS performs as well as PS only when the ideal distribution is used (this corresponds to a concentrability coefficient of 1). Figs. 3.c and 3.d show the integrated residual for each algorithm. It can be observed that RPS consistently achieves a lower residual than PS.

Overall, this suggests that using the Bellman residual as a proxy is efficient only if the sampling distribution is close to the ideal one, which is difficult to achieve in general (the ideal distribution $d_{\mu,\pi_*}$ being unknown). On the other hand, the more direct approach consisting in maximizing the mean value is much more robust to this issue (and can, as a consequence, be used directly with the distribution of interest). One could argue that the way we optimize the considered objective functions is rather naive (for example, considering a constant learning rate). But this does not change the conclusions of this experimental study, which deals with how the error and the Bellman residual are related and with how the concentrability influences each optimization approach. This point is developed in the appendix.

5 Conclusion

The aim of this article was to compare two optimization approaches to reinforcement learning: minimizing a Bellman residual and maximizing the mean value. As said in Sec. 1, Bellman residuals are prevalent in ADP. Notably, value iteration minimizes such a residual using a fixed-point approach and policy iteration minimizes it with a Newton descent. On the other hand, maximizing the mean value (Sec. 2) is prevalent in policy search approaches. As Bellman residual minimization methods are naturally value-based and mean value maximization approaches policy-based, we introduced a policy-based residual minimization algorithm in order to study both optimization problems together. For the introduced residual method, we proved a proxy bound, better than for value-based residual minimization. The different nature of the bounds of Th. 1 and 3 made the comparison difficult, but both involve the same concentrability coefficient, a term often underestimated in RL bounds. Therefore, we compared both approaches empirically on a set of randomly generated Garnets, the study being designed to quantify the influence of this concentrability coefficient. From these experiments, it appears that the Bellman residual is a good proxy for the error (the distance to the optimal value function) only if, luckily, the concentrability coefficient is small for the considered MDP and the distribution of interest, or if one can afford a change of measure for the optimization problem, such that the sampling distribution is close to the ideal one. Regarding this second point, one can change to a measure different from the ideal one, $d_{\mu,\pi_*}$ (for example, using for $\nu$ a uniform distribution when the distribution of interest concentrates on a single state would help), but this is difficult in general (one should know roughly where the optimal policy will lead). Conversely, maximizing the mean value appears to be insensitive to this problem. This suggests that the Bellman residual is generally a bad proxy to policy optimization, and that maximizing the mean value is more likely to result in efficient and robust reinforcement learning algorithms, despite the current lack of deep theoretical analysis. This conclusion might seem obvious, as maximizing the mean value is a more direct approach, but this discussion has never been addressed in the literature, as far as we know, and we think it important, given the prevalence of (projected) residual minimization in value-based RL.

References

[1] András Antos, Csaba Szepesvári, and Rémi Munos.
Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89–129, 2008.
[2] T. W. Archibald, K. I. M. McKinnon, and L. C. Thomas. On the generation of Markov decision processes. Journal of the Operational Research Society, pages 354–361, 1995.
[3] Leemon C. Baird. Residual Algorithms: Reinforcement Learning with Function Approximation. In International Conference on Machine Learning (ICML), pages 30–37, 1995.
[4] Shalabh Bhatnagar, Richard S. Sutton, Mohammad Ghavamzadeh, and Mark Lee. Natural actor-critic algorithms. Automatica, 45(11):2471–2482, 2009.
[5] Steven J. Bradtke and Andrew G. Barto. Linear Least-Squares algorithms for temporal difference learning. Machine Learning, 22(1-3):33–57, 1996.
[6] Daniela Pucci de Farias and Benjamin Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51(6):850–865, 2003.
[7] Marc Peter Deisenroth, Gerhard Neumann, Jan Peters, et al. A Survey on Policy Search for Robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013.
[8] Vijay V. Desai, Vivek F. Farias, and Ciamac C. Moallemi. Approximate dynamic programming via a smoothed linear program. Operations Research, 60(3):655–674, May 2012.
[9] Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-Based Batch Mode Reinforcement Learning. Journal of Machine Learning Research, 6:503–556, 2005.
[10] Jerzy A. Filar and Boleslaw Tolwinski. On the Algorithm of Pollatschek and Avi-Itzhak. Stochastic Games And Related Topics, pages 59–70, 1991.
[11] Geoffrey Gordon. Stable Function Approximation in Dynamic Programming. In International Conference on Machine Learning (ICML), 1995.
[12] Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In International Conference on Machine Learning (ICML), 2002.
[13] Michail G. Lagoudakis and Ronald Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[14] Alessandro Lazaric, Mohammad Ghavamzadeh, and Rémi Munos. Analysis of a classification-based policy iteration algorithm. In International Conference on Machine Learning (ICML), 2010.
[15] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In International Conference on Learning Representations (ICLR), 2016.
[16] Hamid R. Maei, Csaba Szepesvári, Shalabh Bhatnagar, and Richard S. Sutton. Toward off-policy learning control with function approximation. In International Conference on Machine Learning (ICML), 2010.
[17] Rémi Munos. Performance bounds in $\ell_p$-norm for approximate value iteration. SIAM Journal on Control and Optimization, 46(2):541–561, 2007.
[18] Julien Pérolat, Bilal Piot, Matthieu Geist, Bruno Scherrer, and Olivier Pietquin. Softened Approximate Policy Iteration for Markov Games. In International Conference on Machine Learning (ICML), 2016.
[19] Bilal Piot, Matthieu Geist, and Olivier Pietquin. Difference of Convex Functions Programming for Reinforcement Learning. In Advances in Neural Information Processing Systems (NIPS), 2014.
[20] Bruno Scherrer. Should one compute the Temporal Difference fix point or minimize the Bellman Residual? The unified oblique projection view. In International Conference on Machine Learning (ICML), 2010.
[21] Bruno Scherrer. Approximate Policy Iteration Schemes: A Comparison. In International Conference on Machine Learning (ICML), pages 1314–1322, 2014.
[22] Bruno Scherrer and Matthieu Geist. Local Policy Search in a Convex Space and Conservative Policy Iteration as Boosted Policy Search. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD), 2014.
[23] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning (ICML), 2015.
Generalization Properties of Learning with Random Features

Alessandro Rudi*
INRIA - Sierra Project-team, École Normale Supérieure, Paris, 75012 Paris, France
[email protected]

Lorenzo Rosasco
University of Genova, Istituto Italiano di Tecnologia, Massachusetts Institute of Technology.
[email protected]

* This work was done when A.R. was working at the Laboratory of Computational and Statistical Learning (Istituto Italiano di Tecnologia).

Abstract

We study the generalization properties of ridge regression with random features in the statistical learning framework. We show, for the first time, that $O(1/\sqrt{n})$ learning bounds can be achieved with only $O(\sqrt{n}\log n)$ random features rather than $O(n)$ as suggested by previous results. Further, we prove faster learning rates and show that they might require more random features, unless they are sampled according to a possibly problem dependent distribution. Our results shed light on the statistical computational trade-offs in large scale kernelized learning, showing the potential effectiveness of random features in reducing the computational complexity while keeping optimal generalization properties.

1 Introduction

Supervised learning is a basic machine learning problem where the goal is estimating a function from random noisy samples [1, 2]. The function to be learned is fixed, but unknown, and flexible non-parametric models are needed for good results. A general class of models is based on functions of the form
$$f(x) = \sum_{i=1}^M \alpha_i\, q(x, \omega_i), \qquad (1)$$
where $q$ is a non-linear function, $\omega_1, \dots, \omega_M \in \mathbb{R}^d$ are often called centers, $\alpha_1, \dots, \alpha_M \in \mathbb{R}$ are coefficients, and $M = M_n$ could/should grow with the number of data points $n$. Algorithmically, the problem reduces to computing from data the parameters $\omega_1, \dots, \omega_M$, $\alpha_1, \dots, \alpha_M$ and $M$. Among others, one-hidden-layer networks [3] and RBF networks [4] are examples of classical approaches considering these models. Here, parameters are computed by considering a non-convex optimization problem, typically hard to solve and analyze [5]. Kernel methods are another notable example of an approach [6] using functions of the form (1). In this case, $q$ is assumed to be a positive definite function [7] and it is shown that choosing the centers to be the input points, hence $M = n$, suffices for optimal statistical results [8, 9, 10]. As a by-product, kernel methods require only finding the coefficients $(\alpha_i)_i$, typically by convex optimization. While theoretically sound and remarkably effective in small and medium size problems, memory requirements make kernel methods unfeasible for large scale problems.

Most popular approaches to tackle these limitations are randomized and include sampling the centers at random, either in a data-dependent or in a data-independent way. Notable examples include Nyström [11, 12] and random features [13] approaches. Given random centers, computations still reduce to convex optimization with potential big memory gains, provided that the centers are fewer than the data points. In practice, the choice of the number of centers is based on heuristics or memory constraints, and the question arises of characterizing theoretically which choices provide optimal learning bounds. Answering this question allows one to understand the statistical and computational trade-offs in using these randomized approximations.
For Nyström methods, partial results in this direction were derived for example in [14] and improved in [15], but only for a simplified setting where the input points are fixed. Results in the statistical learning setting were given in [16] for ridge regression, showing in particular that $O(\sqrt{n}\log n)$ random centers uniformly sampled from the $n$ training points suffice to yield $O(1/\sqrt{n})$ learning bounds, the same as full kernel ridge regression. A question motivating our study is whether similar results hold for random features approaches. While several papers consider the properties of random features for approximating the kernel function, see [17] and references therein, fewer results consider their generalization properties. An exception is one of the original random features papers, which provides learning bounds for a general class of loss functions [18]. These results show that $O(n)$ random features are needed for $O(1/\sqrt{n})$ learning bounds, and choosing fewer random features leads to worse bounds. In other words, these results suggest that computational gains come at the expense of learning accuracy. Later results, see e.g. [19, 20, 21], essentially confirm these considerations, albeit the analysis in [21] suggests that fewer random features could suffice if sampled in a problem dependent way.

In this paper, we focus on the least squares loss, considering random features within a ridge regression approach. Our main result shows, under standard assumptions, that the estimator obtained with a number of random features proportional to $O(\sqrt{n}\log n)$ achieves $O(1/\sqrt{n})$ learning error, that is, the same prediction accuracy as the exact kernel ridge regression estimator. In other words, there are problems for which random features can drastically reduce computational costs without any loss of prediction accuracy. To the best of our knowledge this is the first result showing that such an effect is possible. Our study improves on previous results by taking advantage of analytic and probabilistic results developed to provide sharp analyses of kernel ridge regression. We further present a second set of more refined results deriving fast convergence rates. We show that fast rates are indeed possible, but, depending on the problem at hand, a larger number of features might be needed. We then discuss how the requirement on the number of random features can be weakened at the expense of typically more complex sampling schemes. Indeed, in this latter case either some knowledge of the data-generating distribution or some potentially data-driven sampling scheme is needed. For this latter case, we borrow and extend ideas from [21, 16], inspired by the theory of statistical leverage scores [22]. Theoretical findings are complemented by numerical simulations validating the bounds.

The rest of the paper is organized as follows. In Section 2, we review relevant results on learning with kernels, least squares and learning with random features. In Section 3, we present and discuss our main results, while proofs are deferred to the appendix. Finally, numerical experiments are presented in Section 4.

2 Learning with random features and ridge regression

We begin by recalling basic ideas in kernel methods and their approximation via random features.

Kernel ridge regression. Consider the supervised problem of learning a function given a training set of $n$ examples $(x_i, y_i)_{i=1}^n$, where $x_i \in X$, $X = \mathbb{R}^D$ and $y_i \in \mathbb{R}$. Kernel methods are nonparametric approaches defined by a kernel $K : X \times X \to \mathbb{R}$, that is, a symmetric and positive definite (PD) function².
A particular instance is kernel ridge regression, given by
$$\hat f_\lambda(x) = \sum_{i=1}^n \alpha_i K(x_i, x), \qquad \alpha = (K + \lambda n I)^{-1} y. \qquad (2)$$
Here $\lambda > 0$, $y = (y_1, \dots, y_n)$, $\alpha \in \mathbb{R}^n$, and $K$ is the $n$ by $n$ matrix with entries $K_{ij} = K(x_i, x_j)$. The above method is standard and can be derived from an empirical risk minimization perspective [6], and is related to Gaussian processes [3]. While KRR has optimal statistical properties (see later), its applicability to large scale datasets is limited since it requires $O(n^2)$ in space, to store $K$, and roughly $O(n^3)$ in time, to solve the linear system in (2). Similar requirements are shared by other kernel methods [6].

² A kernel $K$ is PD if for all $x_1, \dots, x_N$ the $N$ by $N$ matrix with entries $K(x_i, x_j)$ is positive semidefinite.

To explain the basic ideas behind using random features with ridge regression, it is useful to recall the computations needed to solve KRR when the kernel is linear, $K(x, x') = x^\top x'$. In this case, Eq. (2) reduces to standard ridge regression and can be equivalently computed considering
$$\hat f_\lambda(x) = x^\top \hat w_\lambda, \qquad \hat w_\lambda = (\hat X^\top \hat X + \lambda n I)^{-1} \hat X^\top y, \qquad (3)$$
where $\hat X$ is the $n$ by $D$ data matrix. In this case, the complexity becomes $O(nD)$ in space, and $O(nD^2 + D^3)$ in time. Beyond the linear case, the above reasoning extends to inner product kernels
$$K(x, x') = \phi_M(x)^\top \phi_M(x'), \qquad (4)$$
where $\phi_M : X \to \mathbb{R}^M$ is a finite dimensional (feature) map. In this case, KRR can be computed considering (3) with the data matrix $\hat X$ replaced by the $n$ by $M$ matrix $\hat S_M$, where $\hat S_M^\top = (\phi_M(x_1), \dots, \phi_M(x_n))$. The complexity is then $O(nM)$ in space, and $O(nM^2 + M^3)$ in time, hence much better than $O(n^2)$ and $O(n^3)$, as soon as $M \ll n$. Considering only kernels of the form (4) can be restrictive. Indeed, classic examples of kernels, e.g. the Gaussian kernel $e^{-\|x - x'\|^2}$, do not satisfy (4) with finite $M$. It is then natural to ask if the above reasoning can still be useful to reduce the computational burden for more complex kernels such as the Gaussian kernel. Random features, that we recall next, show that this is indeed the case.

Random features with ridge regression. The basic idea of random features [13] is to relax Eq. (4), assuming it holds only approximately,
$$K(x, x') \approx \phi_M(x)^\top \phi_M(x'). \qquad (5)$$
Clearly, if one such approximation exists, the approach described in the previous section can still be used. A first question is then for which kernels an approximation of the form (5) can be derived. A simple manipulation of the Gaussian kernel provides one basic example.

Example 1 (Random Fourier features [13]). If we write the Gaussian kernel as $K(x, x') = G(x - x')$, with $G(z) = e^{-\frac{1}{2\sigma^2}\|z\|^2}$, for a $\sigma > 0$, then, since the inverse Fourier transform of $G$ is a Gaussian, and using a basic symmetry argument, it is easy to show that
$$G(x - x') = \int \int_0^{2\pi} \sqrt{2}\cos(w^\top x + b)\ \sqrt{2}\cos(w^\top x' + b)\ \frac{1}{2\pi Z}\, e^{-\frac{\sigma^2}{2}\|w\|^2}\, db\, dw,$$
where $Z$ is a normalizing factor. Then, the Gaussian kernel has an approximation of the form (5) with $\phi_M(x) = M^{-1/2}\big(\sqrt{2}\cos(w_1^\top x + b_1), \dots, \sqrt{2}\cos(w_M^\top x + b_M)\big)$, and $w_1, \dots, w_M$ and $b_1, \dots, b_M$ sampled independently from $\frac{1}{Z} e^{-\sigma^2\|w\|^2/2}$ and uniformly in $[0, 2\pi]$, respectively.

The above example can be abstracted to a general strategy. Assume the kernel $K$ to have an integral representation,
$$K(x, x') = \int_\Omega \psi(x, \omega)\,\psi(x', \omega)\, d\pi(\omega), \qquad \forall x, x' \in X, \qquad (6)$$
where $(\Omega, \pi)$ is a probability space and $\psi : X \times \Omega \to \mathbb{R}$. The random features approach provides an approximation of the form (5) where $\phi_M(x) = M^{-1/2}\big(\psi(x, \omega_1), \dots, \psi(x, \omega_M)\big)$, with $\omega_1, \dots, \omega_M$ sampled independently with respect to $\pi$.
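A short self-contained sketch (ours, not the authors' code) of the two ingredients just described: a random Fourier feature map for the Gaussian kernel of Example 1, and the corresponding ridge regression estimator (cf. Eq. (3) with the feature matrix, and Eq. (7) below). Here `sigma` is the kernel bandwidth and `lam` the regularization parameter.

```python
import numpy as np

def fit_rf_ridge(X, y, M, lam, sigma, rng):
    """Ridge regression on M random Fourier features for the Gaussian kernel
    exp(-||x - x'||^2 / (2 sigma^2)); returns a prediction function.
    Cost: O(nM) memory, O(nM^2 + M^3) time."""
    n, D = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(M, D))   # w ~ density prop. to exp(-sigma^2 ||w||^2 / 2)
    b = rng.uniform(0.0, 2.0 * np.pi, size=M)
    feat = lambda Z: np.sqrt(2.0 / M) * np.cos(Z @ W.T + b)   # rows are phi_M(x)^T
    S = feat(X) / np.sqrt(n)                                   # \hat S_M
    w_hat = np.linalg.solve(S.T @ S + lam * np.eye(M), S.T @ (y / np.sqrt(n)))
    return lambda Z: feat(Z) @ w_hat                           # x -> phi_M(x)^T w_hat
```

With $\lambda_n = n^{-1/2}$ and $M_n \propto \sqrt{n}\log n$, this is precisely the estimator analyzed in Sec. 3.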
Key to the success of random features is that kernels to which the above idea applies abound; see Appendix E for a survey with some details. Back to supervised learning, combining random features with ridge regression leads to
$$\hat f_{\lambda,M}(x) := \phi_M(x)^\top \hat w_{\lambda,M}, \qquad \text{with} \quad \hat w_{\lambda,M} := (\hat S_M^\top \hat S_M + \lambda I)^{-1} \hat S_M^\top \hat y, \qquad (7)$$
for $\lambda > 0$, $\hat S_M^\top := n^{-1/2}(\phi_M(x_1), \dots, \phi_M(x_n))$ and $\hat y := n^{-1/2}(y_1, \dots, y_n)$. Then, random features can be used to reduce the computational costs of full kernel ridge regression as soon as $M \ll n$ (see Sec. 2). However, since random features rely on an approximation (5), the question is whether there is a loss of prediction accuracy. This is the question we analyze in the rest of the paper.

3 Main Results

In this section, we present our main results characterizing the generalization properties of random features with ridge regression. We begin considering a basic setting and then discuss fast learning rates and the possible benefits of problem dependent sampling schemes.

3.1 $O(\sqrt{n}\log n)$ random features lead to $O(1/\sqrt{n})$ learning error

We consider a standard statistical learning setting. The data $(x_i, y_i)_{i=1}^n$ are sampled identically and independently with respect to a probability $\rho$ on $X \times \mathbb{R}$, with $X$ a separable space (e.g. $X = \mathbb{R}^D$, $D \in \mathbb{N}$). The goal is to minimize the expected risk
$$\mathcal{E}(f) = \int (f(x) - y)^2\, d\rho(x, y),$$
since this implies that $f$ will generalize/predict well on new data. Since we consider estimators of the form (2), (7), we are potentially restricting the space of possible solutions. Indeed, estimators of this form belong to the so-called reproducing kernel Hilbert space (RKHS) corresponding to the PD kernel $K$. Recall that the latter is the function space $H$ defined as the completion of the linear span of $\{K(x, \cdot) : x \in X\}$ with respect to the inner product $\langle K(x, \cdot), K(x', \cdot)\rangle := K(x, x')$ [7]. In this view, the best possible solution is $f_H$ solving
$$\min_{f \in H} \mathcal{E}(f). \qquad (8)$$
We will assume throughout that $f_H$ exists. We add one technical remark useful in the following.

Remark 1. Existence of $f_H$ is not ensured, since we consider a potentially infinite dimensional RKHS $H$, possibly universal [23]. The situation is different if $H$ is replaced by $H_R = \{f \in H : \|f\| \le R\}$, with $R$ fixed a priori. In this case a minimizer of the risk $\mathcal{E}$ always exists, but $R$ needs to be fixed a priori and $H_R$ can't be universal. Clearly, assuming $f_H$ to exist implies that it belongs to a ball of radius $R_{\rho,H}$. However, our results do not require prior knowledge of $R_{\rho,H}$ and hold uniformly over all finite radii.

The following is our first result on the learning properties of random features with ridge regression.

Theorem 1. Assume that $K$ is a kernel with an integral representation (6). Assume $\psi$ continuous, such that $|\psi(x, \omega)| \le \kappa$ almost surely, with $\kappa \in [1, \infty)$, and $|y| \le b$ almost surely, with $b > 0$. Let $\delta \in (0, 1]$. If $n \ge n_0$ and $\lambda_n = n^{-1/2}$, then a number of random features $M_n$ equal to
$$M_n = c_0\, \sqrt{n}\, \log\frac{108\,\kappa^2\sqrt{n}}{\delta}$$
is enough to guarantee, with probability at least $1 - \delta$, that
$$\mathcal{E}(\hat f_{\lambda_n, M_n}) - \mathcal{E}(f_H) \ \le\ c_1\, \frac{\log^2\frac{18}{\delta}}{\sqrt{n}}.$$
In particular, the constants $c_0, c_1$ do not depend on $n, \lambda, \delta$, and $n_0$ does not depend on $n, \lambda, f_H, \rho$.

The above result is presented with some simplifications (e.g. the assumption of bounded output) for the sake of presentation, while it is proved and presented in full generality in the Appendix. In particular, the values of all the constants are given explicitly. Here, we make a few comments.
The learning bound is the same as that achieved by the exact kernel ridge regression estimator (2) choosing $\lambda = n^{-1/2}$, see e.g. [10]. The theorem derives a bound in a worst case situation, where no assumption is made besides the existence of $f_H$, and is optimal in a minimax sense [10]. This means that, in this setting, as soon as the number of features is of order $\sqrt{n}\log n$, the corresponding ridge regression estimator has optimal generalization properties. This is remarkable considering the corresponding gain from a computational perspective: from roughly $O(n^3)$ and $O(n^2)$ in time and space for kernel ridge regression to $O(n^2)$ and $O(n\sqrt{n})$ for ridge regression with random features (see Section 2). Note that taking $\delta \propto 1/n^2$ changes only the constants and allows to derive bounds in expectation and almost sure convergence (see Cor. 1 in the appendix for the result in expectation).

The above result shows that there is a whole set of problems where computational gains are achieved without having to trade off statistical accuracy. In the next sections we consider what happens under more benign assumptions, which are standard, but also somewhat more technical. We first compare with previous work, since the above setting is the one most closely related.

Comparison with [18]. This is one of the original random features papers and considers the question of generalization properties. In particular, they study the estimator
$$\hat f_R(x) = \phi_M(x)^\top \hat\beta_R, \qquad \hat\beta_R = \operatorname*{argmin}_{\|\beta\|_\infty \le R}\ \frac{1}{n}\sum_{i=1}^n \ell\big(\phi_M(x_i)^\top \beta,\ y_i\big),$$
for a fixed $R$, a Lipschitz loss function $\ell$, and where $\|w\|_\infty = \max\{|w_1|, \dots, |w_M|\}$. The largest space considered in [18] is
$$\mathcal{G}_R = \left\{ f = \int \psi(\cdot, \omega)\,\beta(\omega)\, d\pi(\omega)\ \Big|\ |\beta(\omega)| \le R\ \text{a.e.} \right\}, \qquad (9)$$
rather than an RKHS, where $R$ is fixed a priori. The best possible solution is $f_{\mathcal{G}_R}$ solving $\min_{f\in\mathcal{G}_R}\mathcal{E}(f)$, and the main result in [18] provides the bound
$$\mathcal{E}(\hat f_R) - \mathcal{E}(f_{\mathcal{G}_R}) \ \lesssim\ \frac{R}{\sqrt{n}} + \frac{R}{\sqrt{M}}. \qquad (10)$$
This is the first, and still one of the main, results providing a statistical analysis for an estimator based on random features for a wide class of loss functions. There are a few elements of comparison with the result in this paper, but the main one is that, to get $O(1/\sqrt{n})$ learning bounds, the above result requires $O(n)$ random features, while a smaller number leads to worse bounds. This shows the main novelty of our analysis. Indeed, we prove that, considering the square loss, fewer random features are sufficient, hence allowing computational gains without loss of accuracy. We add a few more technical comments explaining: 1) how the setting we consider covers a wider range of problems, and 2) why the bounds we obtain are sharper. First, note that the functional setting in our paper is more general in the following sense. It is easy to see that considering the RKHS $H$ is equivalent to considering
$$\mathcal{H}_2 = \left\{ f = \int \psi(\cdot, \omega)\,\beta(\omega)\, d\pi(\omega)\ \Big|\ \int |\beta(\omega)|^2\, d\pi(\omega) < \infty \right\},$$
and the following inclusions hold: $\mathcal{G}_R \subseteq \mathcal{G}_\infty \subseteq \mathcal{H}_2$. Clearly, assuming a minimizer of the expected risk to exist in $\mathcal{H}_2$ does not imply it belongs to $\mathcal{G}_\infty$ or $\mathcal{G}_R$, while the converse is true. In this view, our results cover a wider range of problems. Second, note that this gap is not easy to bridge. Indeed, even if we were to consider $\mathcal{G}_\infty$ in place of $\mathcal{G}_R$, the results in [18] could be used to derive the bound
$$\mathbb{E}\,\mathcal{E}(\hat f_R) - \mathcal{E}(f_{\mathcal{G}_\infty}) \ \lesssim\ \frac{R}{\sqrt{n}} + \frac{R}{\sqrt{M}} + A(R), \qquad (11)$$
where $A(R) := \mathcal{E}(f_{\mathcal{G}_R}) - \mathcal{E}(f_{\mathcal{G}_\infty})$ and $f_{\mathcal{G}_\infty}$ is a minimizer of the expected risk on $\mathcal{G}_\infty$. In this case we would have to balance the various terms in (11), which would lead to a worse bound.
For example, we could consider $R := \log n$, obtaining a bound $n^{-1/2}\log n$ with an extra logarithmic term, but the result would hold only for $n$ larger than a number of examples $n_0$ at least exponential in the norm of $f_*$. Moreover, to derive results uniform with respect to $f_*$, we would have to keep into account the decay rate of $A(R)$, and this would give bounds slower than $n^{-1/2}$.

Comparison with other results. Several other papers study the generalization properties of random features, see [21] and references therein. For example, generalization bounds are derived in [19] from very general arguments. However, the corresponding generalization bound requires a number of random features much larger than the number of training examples to give $O(1/\sqrt{n})$ bounds. The basic results in [21] are analogous to those in [18] with the set $\mathcal{G}_R$ replaced by $H_R$. These results are closer, albeit more restrictive than ours (see Remark 8), and, especially, like the bounds in [18], they suggest that $O(n)$ random features are needed for $O(1/\sqrt{n})$ learning bounds. A novelty in [21] is the introduction of more complex problem dependent sampling that can reduce the number of random features. In Section 3.3, we show that using possibly data-dependent random features can lead to rates much faster than $n^{-1/2}$, using much less than $\sqrt{n}$ features.

3.2 Refined Results: Fast Learning Rates

Faster rates can be achieved under favorable conditions. Such conditions for kernel ridge regression are standard, but somewhat technical. Roughly speaking, they characterize the "size" of the considered RKHS and the regularity of $f_H$. The key quantity needed to make this precise is the integral operator defined by the kernel $K$ and the marginal distribution $\rho_X$ of $\rho$ on $X$, that is,
$$(Lg)(x) = \int_X K(x, z)\, g(z)\, d\rho_X(z), \qquad \forall g \in L^2(X, \rho_X),$$
seen as a map from $L^2(X, \rho_X) = \{f : X \to \mathbb{R} \mid \|f\|_\rho^2 = \int |f(x)|^2\, d\rho_X < \infty\}$ to itself. Under the assumptions of Thm. 1, the integral operator is positive, self-adjoint and trace-class (hence compact) [24]. We next define the conditions that will lead to fast rates, and then comment on their interpretation.

[Figure 1: How many random features are needed for fast rates, $M = n^c$: level plots of the exponent $c$ as a function of $r$ and $\gamma$ (see Thm. 2, and Thm. 3 with $\alpha = \gamma$).]

Assumption 1 (Prior assumptions). For $\lambda > 0$, let the effective dimension be defined as $\mathcal{N}(\lambda) := \mathrm{Tr}\big((L + \lambda I)^{-1} L\big)$, and assume there exist $Q > 0$ and $\gamma \in [0, 1]$ such that
$$\mathcal{N}(\lambda) \le Q^2 \lambda^{-\gamma}. \qquad (12)$$
Moreover, assume there exist $r \ge 1/2$ and $g \in L^2(X, \rho_X)$ such that
$$f_H(x) = (L^r g)(x) \quad \text{a.s.} \qquad (13)$$

We provide some intuition on the meaning of the above assumptions, and defer the interested reader to [10] for more details. The effective dimension can be seen as a "measure of the size" of the RKHS $H$. Condition (12) allows to control the variance of the estimator and is equivalent to conditions on covering numbers and related capacity measures [23]. In particular, it holds if the eigenvalues $\sigma_i$ of $L$ decay as $i^{-1/\gamma}$. Intuitively, a fast decay corresponds to a smaller RKHS, whereas a slow decay corresponds to a larger RKHS. The case $\gamma = 0$ is the more benign situation, whereas $\gamma = 1$ is the worst case, corresponding to the basic setting. A classic example, when $X = \mathbb{R}^D$, corresponds to considering kernels of smoothness $s$, in which case $\gamma = D/(2s)$ and condition (12) is equivalent to assuming $H$ to be a Sobolev space [23].
Condition (13) allows to control the bias of the estimator and is common in approximation theory [25]. It is a regularity condition that can be seen as a form of weak sparsity of $f_H$. Roughly speaking, it requires the expansion of $f_H$, on the basis given by the eigenfunctions of $L$, to have coefficients that decay faster than $\sigma_i^r$. A large value of $r$ means that the coefficients decay fast and hence many are close to zero. The case $r = 1/2$ is the worst case, and can be shown to be equivalent to assuming that $f_H$ exists. This latter situation corresponds to the setting considered in the previous section. We next show how these assumptions allow to derive fast rates.

Theorem 2. Let $\delta \in (0, 1]$. Under Asm. 1 and the same assumptions as Thm. 1, if $n \ge n_0$ and $\lambda_n = n^{-\frac{1}{2r+\gamma}}$, then a number of random features $M_n$ equal to
$$M_n = c_0\, n^{\frac{1 + \gamma(2r-1)}{2r+\gamma}}\, \log\frac{108\,\kappa^2 n}{\delta}$$
is enough to guarantee, with probability at least $1 - \delta$, that
$$\mathcal{E}(\hat f_{\lambda_n, M_n}) - \mathcal{E}(f_H) \ \le\ c_1\, \log^2\frac{18}{\delta}\ n^{-\frac{2r}{2r+\gamma}},$$
for $1/2 \le r \le 1$, and where $c_0, c_1$ do not depend on $n, \delta$, while $n_0$ does not depend on $n, f_H, \rho$.

The above bound is the same as the one obtained by the full kernel ridge regression estimator and is optimal in a minimax sense [10]. For large $r$ and small $\gamma$ it approaches an $O(1/n)$ bound. When $\gamma = 1$ and $r = 1/2$, the worst case bound of the previous section is recovered. Interestingly, the number of random features in the different regimes is typically smaller than $n$, but can be larger than $O(\sqrt{n})$. Figure 1 provides a pictorial representation of the number of random features needed for optimal rates in different regimes.
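These regimes are easy to tabulate; the following sketch (ours) computes the feature-count exponent of Thm. 2, together with a common empirical proxy for the effective dimension $\mathcal{N}(\lambda)$ based on the eigenvalues of the normalized kernel matrix $K/n$ (a heuristic of this sketch, not a quantity used in the proofs).

```python
import numpy as np

def features_exponent(r, gamma):
    """Exponent c such that Thm. 2 prescribes M_n of order n^c (up to logs)."""
    return (1.0 + gamma * (2.0 * r - 1.0)) / (2.0 * r + gamma)

def effective_dimension(K, lam):
    """Empirical proxy for N(lam): sum_i s_i / (s_i + lam), s_i eigenvalues of K/n."""
    ev = np.clip(np.linalg.eigvalsh(K / K.shape[0]), 0.0, None)
    return float(np.sum(ev / (ev + lam)))

# e.g. features_exponent(0.5, 1.0) == 0.5  (worst case: M ~ sqrt(n), as in Thm. 1)
#      features_exponent(1.0, 0.0) == 0.5  (M ~ sqrt(n) suffices for a 1/n rate)
#      features_exponent(0.5, 0.0) == 1.0  (M ~ n features needed)
```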
3.3 Refined Results: Beyond uniform sampling

We show next that fast learning rates can be achieved with fewer random features if they are somewhat compatible with the data distribution. This is made precise by the following condition.

Assumption 2 (Compatibility condition). Define the maximum random features dimension as

    F_∞(λ) = sup_{ω∈Ω} ‖(L + λI)^{-1/2} ψ(·, ω)‖²_{ρ_X},   λ > 0.   (14)

Assume there exist α ∈ [0, 1] and F > 0 such that F_∞(λ) ≤ F λ^{-α}, ∀ λ > 0.

The above assumption is abstract, and we comment on it before showing how it affects the results. The maximum random features dimension (14) relates the random features to the data-generating distribution through the operator L. It is always satisfied for α = 1 and F = κ², e.g. for any random features satisfying (6). The favorable situation corresponds to random features such that α = γ. The following theoretical construction, borrowed from [21], gives an example.

Example 2 (Problem-dependent RF). Assume K is a kernel with an integral representation (6). For s(ω) = ‖(L + λI)^{-1/2} ψ(·, ω)‖^{-2}_{ρ_X} and C_s := ∫ s(ω)^{-1} dπ(ω), consider the random features ψ_s(x, ω) = ψ(x, ω) √(C_s s(ω)), with distribution π_s(ω) := π(ω)/(C_s s(ω)). We show in the Appendix that these random features provide an integral representation of K and satisfy Asm. 2 with α = γ.

We next show how random features satisfying Asm. 2 can lead to better results.

Theorem 3. Let δ ∈ (0, 1]. Under Asm. 2 and the same assumptions of Thm. 1 and 2, if n ≥ n₀ and λ_n = n^{-1/(2r+γ)}, then a number of random features M_n equal to

    M_n = c₀ n^{(α+(1+γ-α)(2r-1))/(2r+γ)} log(108 κ² n / δ)

is enough to guarantee, with probability at least 1 − δ, that

    E(f̂_{λ_n, M_n}) − E(f_H) ≤ c₁ log²(18/δ) n^{-2r/(2r+γ)},

where c₀, c₁ do not depend on n, δ, while n₀ does not depend on n, f_H, ρ.

The above learning bound is the same as in Thm. 2, but the number of random features is given by a more complex expression depending on α. In particular, in the slow O(1/√n) rate scenario, that is r = 1/2, γ = 1, we see that only O(n^{α/2}) features, rather than O(√n), are needed. For a small RKHS, that is γ = 0, and random features with α = γ, a constant (!) number of features is sufficient. A similar trend is seen for fast rates. For γ > 0 and r > 1/2, if α < 1 then the number of random features is always smaller, and potentially much smaller, than the number of random features sampled in a problem-independent way, that is α = 1. For γ = 0 and r = 1/2, the number of features is O(n^α) and can again be just constant if α = γ. Figure 1 depicts the number of random features required if α = γ.

The above result shows the potentially dramatic effect of problem-dependent random features. However, the construction in Ex. 2 is theoretical. We comment on this in the next remark.

Remark 2 (Random features leverage scores). The construction in Ex. 2 is theoretical; however, the empirical random features leverage scores

    ŝ(ω) = v̂(ω)ᵀ (K + λnI)^{-1} v̂(ω),   with v̂(ω) ∈ ℝⁿ, (v̂(ω))_i = ψ(x_i, ω),

can be considered, as in the sketch below. Statistically, this requires an extra estimation step. It seems our proof can be extended to account for this, and we will pursue this in future work. Computationally, it requires devising approximate numerical strategies, as for standard leverage scores [22].
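A direct dense computation of these empirical scores could look as follows. This is our own illustrative sketch, not the paper's code: the Gaussian random Fourier feature map is an arbitrary choice, and the exact kernel matrix K is assumed to be available.

import numpy as np

def rf_leverage_scores(Psi, K, lam):
    # s_hat(omega_j) = v_j^T (K + lam*n*I)^{-1} v_j, with v_j = Psi[:, j]
    # and (Psi)_{ij} = psi(x_i, omega_j); see Remark 2.
    n = K.shape[0]
    A = np.linalg.solve(K + lam * n * np.eye(n), Psi)
    return np.einsum('ij,ij->j', Psi, A)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
W = rng.standard_normal((3, 50))                  # frequencies omega_j
b = rng.uniform(0.0, 2 * np.pi, 50)
Psi = np.sqrt(2.0) * np.cos(X @ W + b)            # random Fourier features
K = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # Gaussian kernel
scores = rf_leverage_scores(Psi, K, lam=1e-3)

Features could then be kept or resampled proportionally to ŝ(ω), mimicking the reweighted distribution π_s of Ex. 2.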
We next compare random features and Nyström methods.

Comparison with Nyström. This question was recently considered in [20], and our results offer new insights. In particular, recalling the results in [16], we see that in the slow rate setting there is essentially no difference between random features and Nyström approaches, neither from a statistical nor from a computational point of view. In the case of fast rates, Nyström methods with uniform sampling require O(n^{1/(2r+γ)}) random centers, which, compared to Thm. 2, suggests that Nyström methods can be advantageous in this regime. While problem-dependent random features provide a further improvement, they should be compared with the number of centers needed by Nyström with leverage scores, which is O(n^{γ/(2r+γ)}) and hence again better; see Thm. 3. In summary, both random features and Nyström methods achieve optimal statistical guarantees while reducing computations. They are essentially the same in the worst case, while Nyström can be better for benign problems.

Finally, we add a few words about the main steps of the proof.

Steps of the proof. The proofs are quite technical and long and are collected in the appendices. They use a battery of tools developed to analyze KRR and related methods. The key challenges in the analysis include analyzing the bias of the estimator, the effect of noise in the outputs, the effect of random sampling of the data, the approximation due to random features, and a notion of orthogonality between the function space corresponding to random features and the full RKHS. The last two points are the main elements of novelty in the proof. In particular, compared to other studies, we identify and study the quantity needed to assess the effect of the random feature approximation when the goal is prediction rather than the kernel approximation itself.

4 Numerical results

While the learning bounds we present are optimal, there are no lower bounds on the number of random features, hence we present numerical experiments validating our bounds. Consider a spline kernel of order q (see [26], Eq. 2.1.7, when q is an integer), defined as

    Λ_q(x, x′) = Σ_{k=-∞}^{∞} e^{2πikx} e^{-2πikx′} |k|^{-q},

almost everywhere on [0, 1], with q ∈ ℝ, for which we have ∫₀¹ Λ_q(x, z) Λ_{q′}(x′, z) dz = Λ_{q+q′}(x, x′) for any q, q′ ∈ ℝ. Let X = [0, 1] and let ρ_X be the uniform distribution. For γ ∈ (0, 1) and r ∈ [1/2, 1], let K(x, x′) = Λ_{1/γ}(x, x′), ψ(ω, x) = Λ_{1/(2γ)}(ω, x), and f*(x) = Λ_{r/γ + 1/2 + ε}(x, x₀), with ε > 0 and x₀ ∈ X. Let ρ(y|x) be a Gaussian density with variance σ² and mean f*(x). Then Asm. 1 and 2 are satisfied, with α = γ. We compute the KRR estimator for n ∈ {10³, …, 10⁴} and select λ minimizing the excess risk, computed analytically. Then we compute the RF-KRR estimator and select the number of features M needed to obtain an excess risk within 5% of that of KRR; a schematic version of this procedure is sketched below. In Figure 2, the theoretical and estimated behavior of the excess risk, λ, and M with respect to n are reported, together with their standard deviations over 100 repetitions. The experiment shows that the predictions of Thm. 3 are accurate, since the theoretical predictions are within one standard deviation of the values measured in the simulations.

[Figure 2: Comparison of theoretical and simulated rates for the excess risk E(f̂_{λ,M}) − inf_{f∈H} E(f), λ, and M, with respect to n (100 repetitions). Parameters r = 11/16, γ = 1/8 (top) and r = 7/8, γ = 1/4 (bottom); each panel shows measured and predicted curves with ± one standard deviation on log scales.]
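A schematic version of the RF-KRR selection loop is sketched below. This is our own simplified illustration under stated assumptions: a toy regression task and generic Gaussian random Fourier features stand in for the spline construction, and a held-out estimate replaces the analytic excess risk; all names are ours.

import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 3
X = rng.uniform(-1, 1, (n, d))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)   # toy regression task
Xtr, ytr, Xva, yva = X[:800], y[:800], X[800:], y[800:]

def rff(X, W, b):
    # Random Fourier features for the Gaussian kernel (illustrative choice).
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def rf_krr_fit(Z, y, lam):
    # beta = (Z^T Z + lam*n*I)^{-1} Z^T y: ridge regression on the features.
    n, M = Z.shape
    return np.linalg.solve(Z.T @ Z + lam * n * np.eye(M), Z.T @ y)

lam = 1e-4
for M in [4, 8, 16, 32, 64, 128, 256]:
    W = rng.standard_normal((d, M))
    b = rng.uniform(0, 2 * np.pi, M)
    beta = rf_krr_fit(rff(Xtr, W, b), ytr, lam)
    risk = np.mean((rff(Xva, W, b) @ beta - yva) ** 2)
    print(M, risk)   # pick the smallest M whose risk is within 5% of KRR's

In the actual experiments the reference risk is the one of full KRR at the analytically selected λ; here any saturation point of the printed risks plays the same role.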
5 Conclusion

In this paper, we provide a thorough analysis of the generalization properties of random features with ridge regression. We consider a statistical learning theory setting where data are noisy and sampled at random. Our main results show that there are large classes of learning problems where random features allow one to reduce computations while preserving the optimal statistical accuracy of exact kernel ridge regression. This is in contrast with previous state-of-the-art results, which suggested that computational gains need to be traded off against statistical accuracy. Our results open several venues for both theoretical and empirical work. As mentioned in the paper, it would be interesting to analyze random features with empirical leverage scores. This is immediate if the input points are fixed, but our approach should allow us to also consider the statistical learning setting. Beyond KRR, it would be interesting to analyze random features together with other approaches, in particular accelerated and stochastic gradient methods, or distributed techniques. It should be possible to extend the results in the paper to these cases. A more substantial generalization would be to consider loss functions other than the quadratic loss, since this requires different techniques from empirical process theory.

Acknowledgments

The authors gratefully acknowledge the contribution of Raffaello Camoriano, who was involved in the initial phase of this project. These preliminary results appeared in the 2016 NIPS workshop "Adaptive and Scalable Nonparametric Methods in ML". This work is funded by the Air Force project FA9550-17-1-0390 (European Office of Aerospace Research and Development) and by the FIRB project RBFR12M3AC (Italian Ministry of Education, University and Research).

References

[1] V. Vapnik. Statistical Learning Theory, volume 1. Wiley, New York, 1998.
[2] F. Cucker and S. Smale. On the mathematical foundations of learning. Bulletin of the AMS, 39:1-49, 2002.
[3] C. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[4] T. Poggio and F. Girosi. Networks for approximation and learning. Proceedings of the IEEE, 1990.
[5] A. Pinkus. Approximation theory of the MLP model in neural networks. Acta Numerica, 8:143-195, 1999.
[6] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[7] N. Aronszajn. Theory of reproducing kernels. Transactions of the AMS, 68(3):337-404, 1950.
[8] G. S. Kimeldorf and G. Wahba. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. The Annals of Mathematical Statistics, 41(2):495-502, 1970.
[9] B. Schölkopf, R. Herbrich, and A. J. Smola. A generalized representer theorem. In Computational Learning Theory, pages 416-426. Springer, 2001.
[10] A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithm. FoCM, 2007.
[11] A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In ICML, 2000.
[12] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, 2000.
[13] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[14] F. Bach. Sharp analysis of low-rank kernel matrix approximations. In COLT, 2013.
[15] A. Alaoui and M. Mahoney. Fast randomized kernel ridge regression with statistical guarantees. In NIPS, 2015.
[16] A. Rudi, R. Camoriano, and L. Rosasco. Less is more: Nyström computational regularization. In NIPS, 2015.
[17] B. K. Sriperumbudur and Z. Szabo. Optimal rates for random Fourier features. ArXiv e-prints, June 2015.
[18] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In NIPS, 2009.
[19] C. Cortes, M. Mohri, and A. Talwalkar. On the impact of kernel approximation on learning accuracy. In AISTATS, 2010.
[20] T. Yang, Y. Li, M. Mahdavi, R. Jin, and Z. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In NIPS, pages 485-493, 2012.
[21] F. Bach. On the equivalence between quadrature rules and random features. ArXiv e-prints, February 2015.
[22] P. Drineas, M. Magdon-Ismail, M. W. Mahoney, and D. P. Woodruff. Fast approximation of matrix coherence and statistical leverage. JMLR, 13:3475-3506, 2012.
[23] I. Steinwart and A. Christmann. Support Vector Machines. Springer, New York, 2008.
[24] S. Smale and D. Zhou. Learning theory estimates via integral operators and their approximations. Constructive Approximation, 26(2):153-172, 2007.
[25] S. Smale and D. Zhou. Estimating the approximation error in learning theory. Analysis and Applications, 1(01):17-41, 2003.
[26] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990.
[27] E. De Vito, L. Rosasco, A. Caponnetto, U. D. Giovannini, and F. Odone. Learning from examples as an inverse problem. In JMLR, pages 883-904, 2005.
[28] S. Boucheron, G. Lugosi, and O. Bousquet. Concentration inequalities. In Advanced Lectures on Machine Learning, 2004.
[29] V. V. Yurinsky. Sums and Gaussian Vectors. 1995.
[30] J. A. Tropp. User-friendly tools for random matrices: An introduction. 2012.
[31] S. Minsker. On some extensions of Bernstein's inequality for self-adjoint operators. arXiv, 2011.
[32] J. Fujii, M. Fujii, T. Furuta, and R. Nakamoto. Norm inequalities equivalent to Heinz inequality. Proceedings of the American Mathematical Society, 118(3), 1993.
[33] A. Caponnetto and Y. Yao. Adaptation for regularization operators in learning theory. Technical report, DTIC Document, 2006.
[34] R. Bhatia. Matrix Analysis, volume 169. Springer Science & Business Media, 2013.
[35] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In NIPS, 2009.
[36] Y. Cho and L. K. Saul. Kernel methods for deep learning. In NIPS, pages 342-350, 2009.
[37] P. Kar and H. Karnick. Random feature maps for dot product kernels. In AISTATS, 2012.
[38] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 239-247. ACM, 2013.
[39] Q. Le, T. Sarlós, and A. Smola. Fastfood - computing Hilbert space expansions in loglinear time. In ICML, 2013.
[40] J. Yang, V. Sindhwani, Q. Fan, H. Avron, and M. Mahoney. Random Laplace feature maps for semigroup kernels on histograms. In CVPR, pages 971-978. IEEE, 2014.
[41] R. Hamid, Y. Xiao, A. Gittens, and D. Decoste. Compact random feature maps. In ICML, pages 19-27, 2014.
[42] J. Yang, V. Sindhwani, H. Avron, and M. W. Mahoney. Quasi-Monte Carlo feature maps for shift-invariant kernels. In ICML, volume 32 of JMLR Proceedings, pages 485-493. JMLR.org, 2014.
[43] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):480-492, 2012.
6,539
6,915
Differentially private Bayesian learning on distributed data

Mikko Heikkilä^1, [email protected]
Samuel Kaski^3, [email protected]
Eemil Lagerspetz^2, [email protected]
Kana Shimizu^4, [email protected]
Sasu Tarkoma^2, [email protected]
Antti Honkela^{1,5}, [email protected]

1 Helsinki Institute for Information Technology HIIT, Department of Mathematics and Statistics, University of Helsinki
2 Helsinki Institute for Information Technology HIIT, Department of Computer Science, University of Helsinki
3 Helsinki Institute for Information Technology HIIT, Department of Computer Science, Aalto University
4 Department of Computer Science and Engineering, Waseda University
5 Department of Public Health, University of Helsinki

Abstract

Many applications of machine learning, for example in health care, would benefit from methods that can guarantee the privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results. The standard DP algorithms require a single trusted party to have access to the entire data, which is a clear weakness, or they add prohibitive amounts of noise. We consider DP Bayesian learning in a distributed setting, where each party holds only a single sample or a few samples of the data. We propose a learning strategy based on a secure multi-party sum function for aggregating summaries from the data holders, and the Gaussian mechanism for DP. Our method builds on an asymptotically optimal and practically efficient DP Bayesian inference with rapidly diminishing extra cost.

1 Introduction

Differential privacy (DP) [9, 11] has recently gained popularity as the theoretically best-founded means of protecting the privacy of data subjects in machine learning. It provides rigorous guarantees against breaches of individual privacy that are robust even against attackers with access to additional side information. DP learning methods have been proposed e.g. for maximum likelihood estimation [24], empirical risk minimisation [5] and Bayesian inference [e.g. 8, 13, 16, 17, 19, 25, 29]. There are DP versions of most popular machine learning methods, including linear regression [16, 28], logistic regression [4], support vector machines [5], and deep learning [1].

Almost all existing DP machine learning methods assume that some trusted party has unrestricted access to all the data in order to add the necessary amount of noise needed for the privacy guarantees. This is a highly restrictive assumption for many applications, e.g. for learning with data on mobile devices, and it creates huge privacy risks through a potential single point of failure. In this paper we introduce a general strategy for DP Bayesian learning in the distributed setting with minimal overhead. Our method builds on the asymptotically optimal sufficient statistic perturbation mechanism [13, 16] and shares its asymptotic optimality. The method is based on a DP secure multi-party communication (SMC) algorithm, called the Distributed Compute Algorithm (DCA), for achieving DP in the distributed setting. We demonstrate good performance of the method on DP Bayesian inference using linear regression as an example.

1.1 Our contribution

We propose a general approach for privacy-sensitive learning in the distributed setting. Our approach combines SMC with DP Bayesian learning methods, originally introduced for the non-distributed setting with a trusted party, to achieve DP Bayesian learning in the distributed setting.
To demonstrate our framework in practice, we combine the Gaussian mechanism for (ε, δ)-DP with efficient DP Bayesian inference using sufficient statistics perturbation (SSP) and an efficient SMC approach for secure distributed computation of the required sums of sufficient statistics. We prove that the Gaussian SSP is an efficient (ε, δ)-DP Bayesian inference method and that the distributed version approaches it quickly as the number of parties increases. We also address the subtle challenge of normalising the data privately in a distributed manner, which is required for the proof of DP in distributed DP learning.

2 Background

2.1 Differential privacy

Differential privacy (DP) [11] gives strict, mathematically rigorous guarantees against intrusions on individual privacy. A randomised algorithm is differentially private if its results on adjacent data sets are likely to be similar. Here adjacency means that the data sets differ by a single element, i.e., the two data sets have the same number of samples, but they differ on a single one. In this work we utilise a relaxed version of DP called (ε, δ)-DP [9, Definition 2.4].

Definition 2.1. A randomised algorithm A is (ε, δ)-DP if, for all S ⊆ Range(A) and all adjacent data sets D, D′,

    P(A(D) ∈ S) ≤ exp(ε) P(A(D′) ∈ S) + δ.

The parameters ε and δ in Definition 2.1 control the privacy guarantee: ε tunes the amount of privacy (a smaller ε means stricter privacy), while δ can be interpreted as the proportion of probability space where the privacy guarantee may break down.

There are several established mechanisms for ensuring DP. We use the Gaussian mechanism [9, Theorem 3.22]. The theorem says that given a numeric query f with ℓ2-sensitivity Δ₂(f), adding noise distributed as N(0, σ²) to each output component guarantees (ε, δ)-DP when

    σ² > 2 ln(1.25/δ) (Δ₂(f)/ε)².   (1)

Here, the ℓ2-sensitivity of a function f is defined as

    Δ₂(f) = sup_{D,D′} ‖f(D) − f(D′)‖₂,   (2)

where the supremum is over all adjacent data sets D, D′.

2.2 Differentially private Bayesian learning

Bayesian learning provides a natural complement to DP because it inherently can handle uncertainty, including uncertainty introduced to ensure DP [26], and it provides a flexible framework for data modelling. Three distinct types of mechanisms for DP Bayesian inference have been proposed:

1. drawing a small number of samples from the posterior or an annealed posterior [7, 25];
2. sufficient statistics perturbation (SSP) of an exponential family model [13, 16, 19]; and
3. perturbing the gradients in gradient-based MCMC [25] or the optimisation in variational inference [17].

For models where it applies, the SSP approach is asymptotically efficient [13, 16], unlike the posterior sampling mechanisms. The efficiency proof of [16] can be generalised to (ε, δ)-DP and Gaussian SSP, as shown in the Supplementary Material. The SSP (#2) and gradient perturbation (#3) mechanisms are of similar form in that the DP mechanism ultimately computes a perturbed sum

    z = Σ_{i=1}^{N} z_i + η   (3)

over quantities z_i computed for the different samples i = 1, …, N, where η denotes the noise injected to ensure the DP guarantee. For SSP [13, 16, 19], the z_i are the sufficient statistics of a particular sample, whereas for gradient perturbation [17, 25], the z_i are the clipped per-sample gradient contributions. When a single party holds the entire data set, the sum z in Eq. (3) can be computed easily, but the case of distributed data makes things more difficult.

3 Secure and private learning with distributed data

Let us assume there are N data holders (called clients in the following), who each hold a single data sample. We would like to use the aggregate data for learning, but the clients do not want to reveal their data as such to anybody else. The main problem with the distributed setting is that if each client uses a trusted aggregator (TA) DP technique separately, the noise η in Eq. (3) is added by each client, increasing the total noise variance by a factor of N compared to the non-distributed single-TA setting, effectively reducing to naive input perturbation. To reduce the noise level without compromising on privacy, the individual data samples need to be combined without directly revealing them to anyone.

Our solution to this problem uses an SMC approach based on a form of secret sharing: each client sends their term of the sum, split into separate messages, to M servers such that together the messages sum up to the desired value, but individually they are just random noise. This can be implemented efficiently using a fixed-point representation of real numbers, which allows exact cancelling of the noise in the addition. Like any secret sharing approach, this algorithm is secure as long as not all M servers collude. Cryptography is only required to secure the communication between the client and the server. Since this does not need to be homomorphic, as in many other protocols, faster symmetric cryptography can be used for the bulk of the data. We call this the Distributed Compute Algorithm (DCA), which we introduce next in detail.

3.1 Distributed compute algorithm (DCA)

In order to add the correct amount of noise while avoiding revealing the unperturbed data to any single party, we combine an encryption scheme with the Gaussian mechanism for DP, as illustrated in Fig. 1(a). Each individual client adds a small amount of Gaussian noise to his data, so that the aggregated noise is another Gaussian with large enough variance. The details of the noise scaling are presented in Section 3.1.2. The scheme relies on several independent aggregators, called Compute nodes (Algorithm 1). At a general level, the clients divide their data and some blinding noise into shares that are each sent to one Compute. After receiving shares from all clients, each Compute decrypts the values, sums them and broadcasts the results. The final results can be obtained by summing up the values from all Computes, which cancels the blinding noise.

3.1.1 Threat model

We assume there are at most T clients who may collude to break the privacy, either by revealing the noise they add to their data samples or by abstaining from adding the noise in the first place. The rest are honest-but-curious (HbC), i.e., they will take a peek at other people's data if given the chance, but they will follow the protocol.

[Figure 1. 1(a): Schematic diagram of the Distributed Compute Algorithm (DCA): N clients send individually encrypted messages (shares of z_i + η_i) through a message router to M Compute nodes; the sums of the N decrypted messages at the nodes combine into the DP result. Red refers to encrypted values, blue to unencrypted (but blinded or DP) values. 1(b): Extra scaling factor needed for the noise in the distributed setting with T = 0, 5, 10 colluding clients among up to 100 clients, as compared to the trusted aggregator setting.]
Algorithm 1 Distributed Compute Algorithm for distributed summation with independent Compute nodes

Input: d-dimensional vectors z_i held by clients i ∈ {1, …, N}; distributed Gaussian mechanism noise variances σ_j², j = 1, …, d (public); number of parties N (public); number of Compute nodes M (public)
Output: differentially private sum Σ_{i=1}^{N} (z_i + η_i), where η_i ∼ N(0, diag(σ_j²))
1: Each client i simulates η_i ∼ N(0, diag(σ_j²)) and M − 1 vectors r_{i,k} of uniformly random fixed-point data, with r_{i,M} = −Σ_{k=1}^{M−1} r_{i,k}, to ensure that Σ_{k=1}^{M} r_{i,k} = 0_d (a vector of zeros).
2: Each client i computes the messages m_{i,1} = z_i + η_i + r_{i,1} and m_{i,k} = r_{i,k}, k = 2, …, M, and sends them securely to the corresponding Compute k.
3: After receiving messages from all of the clients, Compute k decrypts the values and broadcasts the noisy aggregate sum q_k = Σ_{i=1}^{N} m_{i,k}. A final aggregator then adds these to obtain Σ_{k=1}^{M} q_k = Σ_{i=1}^{N} (z_i + η_i).

To break the privacy of individual clients, all Compute nodes need to collude. We therefore assume that at least one Compute node follows the protocol. We further assume that all parties have an interest in the results and hence will not attempt to pollute the results with invalid values.

3.1.2 Privacy of the mechanism

In order to guarantee that the sum-query results returned by Algorithm 1 are DP, we need to show that the variance of the aggregated Gaussian noise is large enough.

Theorem 1 (Distributed Gaussian mechanism). If at most T clients collude or drop out of the protocol, the sum-query result returned by Algorithm 1 is (ε, δ)-DP when the variance of the added noise σ_j² fulfils

    σ_j² ≥ σ_{j,std}² / (N − T − 1),

where N is the number of clients and σ_{j,std}² is the variance of the noise in the standard (ε, δ)-DP Gaussian mechanism given in Eq. (1).

Proof. See Supplement.

In the case of all HbC clients, T = 0. The extra scaling factor increases the variance of the DP noise, but the factor quickly approaches 1 as the number of clients increases, as can be seen from Figure 1(b).

3.1.3 Fault tolerance

The Compute nodes need to know which clients' contributions they can safely aggregate. This feature is simple to implement, e.g. with pairwise communications between all Compute nodes. In order to avoid having to start from scratch due to insufficient noise for DP, the same strategy used to protect against colluding clients can be utilised: when T > 0, at most T clients in total can drop or collude and the scheme will still remain private.

3.1.4 Computational scalability

Most of the operations needed in Algorithm 1 are extremely fast: encryption and decryption can use fast symmetric algorithms such as AES (using slower public-key cryptography just for the key of the symmetric system), and the rest is just integer additions for the fixed-point arithmetic. The likely first bottlenecks in the implementation would be caused by synchronisation when gathering the messages, as well as by the generation of cryptographically secure random vectors r_{i,k}.

3.2 Differentially private Bayesian learning on distributed data

In order to perform DP Bayesian learning securely in the distributed setting, we use DCA (Algorithm 1) to compute the required data summaries that correspond to Eq. (3). In this Section we consider how to combine this scheme with concrete DP learning methods introduced for the trusted aggregator setting, so as to provide a wide range of possibilities for performing DP Bayesian learning securely with distributed data.

The aggregation algorithm is most straightforward to apply to the SSP method [13, 16] for exact and approximate posterior inference on exponential family models. [13] and [16] use Laplacian noise to guarantee ε-DP, which is a stricter form of privacy than the (ε, δ)-DP used in DCA [9]. We consider here only the (ε, δ)-DP version of the methods, and discuss the possible Laplace noise mechanism further in Section 7. The model training in this case is done in a single iteration, so a single application of Algorithm 1 is enough for learning. We consider a more detailed example in Section 3.2.1.

We can also apply DCA to DP variational inference [17, 19]. These methods rely on possibly clipped gradients or expected sufficient statistics calculated from the data. Typically, each training iteration would use only a mini-batch instead of the full data. To use variational inference in the distributed setting, an arbitrary party keeps track of the current (public) model parameters and the privacy budget, and asks for updates from the clients. At each iteration, the model trainer selects a random mini-batch of fixed public size from the available clients and sends them the current model parameters. The selected clients then calculate the clipped gradients or expected sufficient statistics using their data, add noise to the values, scaled to reflect the batch size, and pass them on using DCA. The model trainer receives the decrypted DP sums from the output and updates the model parameters.

3.2.1 Distributed Bayesian linear regression with data projection

As an empirical example, we consider Bayesian linear regression (BLR) with data projection in the distributed setting. The standard BLR model depends on the data only through sufficient statistics, and the approach discussed in Section 3.2 can be used in a straightforward manner to fit the model by running a single round of DCA. The more efficient BLR with projection (Algorithm 2) [16] reduces the data range, and hence the sensitivity, by non-linearly projecting all data points inside stricter bounds, which translates into less added noise. We can select the bounds to trade off bias against DP noise variance. In the distributed setting, we need to run an additional round of DCA and use some of the privacy budget to estimate the data standard deviations (stds). However, as shown by the test results (Figures 2 and 3), this can still achieve significantly better utility for a given privacy level.

The assumed bounds in Step 1 of Algorithm 2 would typically be available from general knowledge of the data. The initial projection in Step 1 ensures the privacy of the scheme even if the bounds are invalid for some samples. We determine the optimal final projection thresholds p_j in Step 3 using the same general approach as [16]: we create an auxiliary data set of equal size as the original, with data generated as

    x_i ∼ N(0, I_d),   (4)
    β ∼ N(0, λ₀ I),   (5)
    y_i | x_i ∼ N(x_iᵀ β, λ).   (6)

We then perform grid search on the auxiliary data with varying thresholds to find the one providing optimal prediction performance.

Algorithm 2 Distributed linear regression with projection

Input: data and target values (x_{ij}, y_i), j = 1, …, d, held by clients i ∈ {1, …, N}; number of clients N (public); assumed data and target bounds (−c_j, c_j), j = 1, …, d + 1 (public); privacy budget (ε, δ) (public)
Output: DP BLR model sufficient statistics Σ_{i=1}^{N} x̂_i x̂_iᵀ + η^{(1)} and Σ_{i=1}^{N} x̂_i ŷ_i + η^{(2)}, calculated using projection to the estimated optimal bounds
1: Each client projects his data to the assumed bounds (−c_j, c_j) ∀j.
2: Calculate marginal std estimates σ^{(1)}, …, σ^{(d+1)} by running Algorithm 1, using the assumed bounds for the sensitivity and a chosen share of the privacy budget.
3: Estimate the optimal projection thresholds p_j, j = 1, …, d + 1, as fractions of the stds on auxiliary data. Each client then projects his data to the estimated optimal bounds (−p_j σ^{(j)}, p_j σ^{(j)}), j = 1, …, d + 1.
4: Aggregate the unique terms of the DP sufficient statistics by running Algorithm 1, using the estimated optimal bounds for the sensitivity and the remaining privacy budget, and combine the DP result vectors into the symmetric d × d matrix and the d-dimensional vector of DP sufficient statistics.

The source code for our implementation is available on GitHub (https://github.com/DPBayes/dca-nips2017), and a more detailed description can be found in the Supplement.
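A compact, single-process simulation of this pipeline is sketched below. It is our own illustration under stated assumptions, not the released implementation: it uses floating-point shares instead of the fixed-point representation (so the blinding noise cancels only up to rounding), omits the encryption layer and the std-estimation round, shares the full outer product rather than only its unique terms, and does not split the privacy budget across the two queries.

import numpy as np

def gaussian_sigma(sensitivity, eps, delta):
    # Gaussian mechanism scale from Eq. (1): sigma^2 > 2 ln(1.25/delta) (sens/eps)^2.
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

def dca_sum(Z, M, sigma, rng):
    # Simulated Algorithm 1: each client perturbs its vector with Gaussian noise
    # and splits it into M additive shares whose blinding parts sum to zero;
    # node k only sees the k-th shares, and the node totals q_k add up to the DP sum.
    N, d = Z.shape
    node_sums = np.zeros((M, d))
    for i in range(N):
        shares = rng.uniform(-100.0, 100.0, (M, d))    # blinding noise r_{i,k}
        shares[-1] = -shares[:-1].sum(axis=0)          # sum_k r_{i,k} = 0
        shares[0] += Z[i] + rng.normal(0.0, sigma, d)  # m_{i,1} = z_i + eta_i + r_{i,1}
        node_sums += shares
    return node_sums.sum(axis=0)

rng = np.random.default_rng(0)
N, d, M, T, c = 1000, 5, 3, 0, 2.5
eps, delta = 1.0, 1e-4
X = np.clip(rng.standard_normal((N, d)), -c, c)        # data projected to [-c, c]
y = np.clip(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N), -c, c)

scale = 1.0 / np.sqrt(N - T - 1)                       # per-client noise scaling, Thm. 1
sens_xx = d * c**2                                     # ||x_i x_i^T||_F <= d c^2 from the bounds
sens_xy = np.sqrt(d) * c * c                           # ||x_i y_i||_2 <= sqrt(d) c^2
XX = dca_sum(np.einsum('ni,nj->nij', X, X).reshape(N, -1), M,
             gaussian_sigma(sens_xx, eps, delta) * scale, rng).reshape(d, d)
Xy = dca_sum(X * y[:, None], M, gaussian_sigma(sens_xy, eps, delta) * scale, rng)

# Conjugate BLR posterior for beta ~ N(0, lam0*I), y_i | x_i ~ N(x_i^T beta, lam):
lam0, lam = 1.0, 1.0
XX = 0.5 * (XX + XX.T)                                 # re-symmetrise after noise addition
Sigma = np.linalg.inv(np.eye(d) / lam0 + XX / lam)
mu = Sigma @ (Xy / lam)

The posterior mean mu and covariance Sigma then follow from the perturbed sufficient statistics alone, which is what makes the SSP approach a natural fit for the DCA sum primitive.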
4 Experimental setup

We demonstrate the secure DP Bayesian learning scheme in practice by testing the performance of the BLR with data projection, whose implementation was discussed in Section 3.2.1, along with the DCA (Algorithm 1), in the all-HbC-clients distributed setting (T = 0). With the DCA, our primary interest is scalability. In the case of the BLR implementation, we are mostly interested in comparing the distributed algorithm to the trusted aggregator version, as well as comparing the performance of the straightforward BLR to the variant using data projection, since it is not clear a priori whether the extra privacy cost necessitated by the projection in the distributed setting is offset by the reduced noise level.

We use simulated data for the DCA scalability testing, and real data for the BLR tests. As real data, we use the Wine Quality [6] (split into white and red wines) and Abalone data sets from the UCI repository [18], as well as the Genomics of Drug Sensitivity in Cancer (GDSC) project data (http://www.cancerrxgene.org/, release 6.1, March 2017). The measured task in the GDSC data is to predict the drug sensitivity of cancer cell lines from gene expression data (see the Supplement for a more detailed description).

The datasets are assumed to be zero-centred. This assumption is not crucial, but is made here for simplicity; non-zero data means can be estimated like the marginal stds, at the cost of some added noise (see Section 3.2.1). For estimating the marginal std, we also need to assume bounds for the data. For unbounded data, we can enforce arbitrary bounds simply by projecting all data inside the chosen bounds, although a very poor choice of bounds will lead to poor performance. With real distributed data, the assumed bounds could differ from the actual data range. In the UCI tests we simulate this effect by scaling each data dimension to have a range of length 10, and then assuming bounds of [−7.5, 7.5], i.e., the assumed bounds clearly overestimate the length of the true range, thus adding more noise to the results. The actual scaling chosen here is arbitrary. With the GDSC data, the true ranges are mostly known due to the nature of the data (see Supplement).

The optimal projection thresholds are searched for using 10 (GDSC) or 20 (UCI) repeats on a grid with 20 points between 0.1 and 2.1 times the std of the auxiliary data set. In the search, we use one common threshold for all data dimensions and a separate one for the target. As the accuracy measure, we use prediction accuracy on a separate test data set. The size of the test set for UCI in Figure 2 is 500 for red wine, 1000 for white wine, and 1000 for abalone data; the test set size for GDSC in Figure 3 is 100. For UCI, we compare the median performance measured by mean absolute error over 25 cross-validation (CV) runs, while for GDSC we measure the mean accuracy of predicting sensitive vs. insensitive by Spearman's rank correlation over 25 CV runs. In both cases, we use input perturbation [11] and the trusted aggregator setting as baselines.

5 Results

Table 1 shows the runtimes of a distributed Spark implementation of the DCA algorithm. The timing excludes encryption, but running AES for the data of the largest example would take less than 20 s on a single thread on a modern CPU. The runtime increases only modestly as N or d is increased, which suggests that the prototype is reasonably scalable. Spark overhead sets a lower-bound runtime of approximately 1 s for small problems. For large N and d, sequential communication at the 10 Compute threads is the main bottleneck. Larger N could be handled by introducing more Compute nodes, with clients only communicating with a subset of them.

Table 1: DCA experiment average runtimes in seconds over 5 repeats, using M = 10 Compute nodes, N clients and vector length d.

             N = 10^2   N = 10^3   N = 10^4   N = 10^5
  d = 10        1.72       1.89       2.99       8.58
  d = 10^2      2.03       2.86      12.36      65.64
  d = 10^3      3.43      10.56     101.20     610.55
  d = 10^4     15.30      84.95     994.96    1592.29

Comparing the results on predictive error with and without projection (Fig. 2 and Fig. 3), it is clear that, despite incurring an extra privacy cost for having to estimate the marginal standard deviations, using the projection can markedly improve the results for a given privacy budget. The results also demonstrate that, compared to the trusted aggregator setting, the extra noise added due to the distributed setting with HbC clients is insignificant in practice, as the results of the distributed and trusted aggregator algorithms are effectively indistinguishable.

[Figure 2: Median predictive accuracy (mean absolute error, MAE; lower is better) as a function of ε on the UCI data sets, with error bars denoting the interquartile range: (a) red wine (d = 11, sample size 1000), (b) abalone (d = 8, sample size 3000), (c) white wine (d = 11, sample size 3000); 25 repeats, δ = 10⁻⁴. The performance of the distributed methods (DDP, proj DDP) is indistinguishable from the corresponding undistributed algorithms (TA, proj TA), and the projection (proj TA, proj DDP) can clearly be beneficial for prediction performance. NP refers to the non-private version, TA to the trusted aggregator setting, DDP to the distributed scheme.]

[Figure 3: Mean drug sensitivity prediction accuracy on the GDSC dataset as a function of ε (d = 10, sample size 840, δ = 10⁻⁴), with error bars denoting the standard deviation over 25 CV runs (higher is better): (a) drug sensitivity prediction, all methods; (b) selected methods. Distributed results (DDP, proj DDP) do not differ markedly from the corresponding trusted aggregator (TA, proj TA) results, and the projection (proj TA, proj DDP) is clearly beneficial for performance. The actual sample size varies between drugs. NP refers to the non-private version, TA to the trusted aggregator setting, DDP to the distributed scheme.]

6 Related work

The idea of distributed private computation through the addition of noise generated in a distributed manner was first proposed by Dwork et al. [10]. However, to the best of our knowledge, there is no prior work on secure DP Bayesian statistical inference in the distributed setting. In machine learning, [20] presented the first method for aggregating classifiers in a DP manner, but their approach is sensitive to the number of parties and the sizes of the data sets held by each party, and cannot be applied in a completely distributed setting. [21] improved upon this with an algorithm for distributed DP stochastic gradient descent that works for any number of parties. The privacy of the algorithm is based on the perturbation of gradients, which cannot be directly applied to the efficient SSP mechanism. The idea of aggregating classifiers was further refined in [15] through a method that uses an auxiliary public data set to improve the performance.

The first practical method for implementing DP queries in a distributed manner was the distributed Laplace mechanism presented in [22]. The distributed Laplace mechanism could be used instead of the Gaussian mechanism if pure ε-DP is required, but the method, like those in [20, 21], needs homomorphic encryption, which is computationally more demanding, especially for high-dimensional data.

There is a wealth of literature on secure distributed computation of DP sum queries, as reviewed in [14]. The methods of [23, 2, 3, 14] also include different forms of noise scaling to provide collusion resistance and/or fault tolerance, where the latter requires a separate recovery round after data holder failures, which is not needed by DCA. [12] discusses low-level details of an efficient implementation of the distributed Laplace mechanism. Finally, [27] presents several proofs related to the SMC setting and introduces a protocol for generating approximately Gaussian noise in a distributed manner. Compared to their protocol, our method of noise addition is considerably simpler and faster, and produces exactly instead of approximately Gaussian noise, with a negligible increase in noise level.

7 Discussion

We have presented a general framework for performing DP Bayesian learning securely in a distributed setting. Our method combines a practical SMC method for calculating secure sum queries with efficient Bayesian DP learning techniques adapted to the distributed setting.

DP methods are based on adding sufficient noise to effectively mask the contribution of any single sample.
The extra loss in accuracy due to DP tends to diminish as the number of samples increases, and efficient DP estimation methods converge to their non-private counterparts as the number of samples increases [13, 16]. A distributed DP learning method can significantly help in increasing the number of samples, because data held by several parties can be combined, thus making DP learning significantly more effective.

Considering the DP and the SMC components separately, although both are necessary for efficient privacy-aware learning, it is clear that the choice of method for each sub-problem can be made largely independently. Assessing these separately, we can therefore easily change the privacy mechanism from the Gaussian used in Algorithm 1 to the Laplace mechanism, e.g. by utilising one of the distributed Laplace noise addition methods presented in [14], to obtain a pure ε-DP method. If need be, the secure sum algorithm in our method can also be easily replaced with one that better suits the security requirements at hand.

While the noise introduced for DP will not improve the performance of an otherwise good learning algorithm, a DP solution to a learning problem can yield better results if the DP guarantees allow access to more data than is available without privacy. Our distributed method can further help make this more efficient by securely and privately combining data from multiple parties.

Acknowledgements

This work was funded by the Academy of Finland [Centre of Excellence COIN and projects 259440, 278300, 292334, 294238, 297741, 303815, 303816], the Japan Agency for Medical Research and Development (AMED), and JST CREST [JPMJCR1688].

References

[1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In Proc. CCS 2016, 2016.
[2] G. Ács and C. Castelluccia. I have a DREAM! (DiffeRentially privatE smArt Metering). In Proc. 13th International Conference on Information Hiding (IH 2011), pages 118-132, 2011.
[3] T. H. H. Chan, E. Shi, and D. Song. Privacy-preserving stream aggregation with fault tolerance. In Proc. 16th Int. Conf. on Financial Cryptography and Data Security (FC 2012), pages 200-214, 2012.
[4] K. Chaudhuri and C. Monteleoni. Privacy-preserving logistic regression. In Advances in Neural Information Processing Systems 21, pages 289-296, 2009.
[5] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. J. Mach. Learn. Res., 12:1069-1109, 2011.
[6] P. Cortez, A. Cerdeira, F. Almeida, T. Matos, and J. Reis. Modeling wine preferences by data mining from physicochemical properties. Decision Support Systems, 47(4):547-553, 2009.
[7] C. Dimitrakakis, B. Nelson, A. Mitrokotsa, and B. I. P. Rubinstein. Robust and private Bayesian inference. In Proc. ALT 2014, pages 291-305, 2014.
[8] C. Dimitrakakis, B. Nelson, Z. Zhang, A. Mitrokotsa, and B. I. P. Rubinstein. Differential privacy for Bayesian inference through posterior sampling. Journal of Machine Learning Research, 18(11):1-39, 2017.
[9] C. Dwork and A. Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014.
[10] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology (EUROCRYPT 2006), pages 486-503, 2006.
[11] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Proc. 3rd Theory of Cryptography Conference (TCC 2006), pages 265-284, 2006.
[12] F. Eigner, A. Kate, M. Maffei, F. Pampaloni, and I. Pryvalov. Differentially private data aggregation with optimal utility. In Proceedings of the 30th Annual Computer Security Applications Conference, pages 316-325. ACM, 2014.
[13] J. Foulds, J. Geumlek, M. Welling, and K. Chaudhuri. On the theory and practice of privacy-preserving Bayesian data analysis. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, UAI'16, pages 192-201, 2016.
[14] S. Goryczka and L. Xiong. A comprehensive comparison of multiparty secure additions with differential privacy. IEEE Transactions on Dependable and Secure Computing, 2015.
[15] J. Hamm, P. Cao, and M. Belkin. Learning privately from multiparty data. In ICML, 2016.
[16] A. Honkela, M. Das, A. Nieminen, O. Dikmen, and S. Kaski. Efficient differentially private learning improves drug sensitivity prediction. 2016. arXiv:1606.02109 [stat.ML].
[17] J. Jälkö, O. Dikmen, and A. Honkela. Differentially private variational inference for non-conjugate models. In Proc. 33rd Conference on Uncertainty in Artificial Intelligence (UAI 2017), 2017.
[18] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[19] M. Park, J. Foulds, K. Chaudhuri, and M. Welling. Variational Bayes in private settings (VIPS). 2016. arXiv:1611.00340.
[20] M. Pathak, S. Rane, and B. Raj. Multiparty differential privacy via aggregation of locally trained classifiers. In Advances in Neural Information Processing Systems 23, pages 1876-1884, 2010.
[21] A. Rajkumar and S. Agarwal. A differentially private stochastic gradient descent algorithm for multiparty classification. In Proc. AISTATS 2012, pages 933-941, 2012.
[22] V. Rastogi and S. Nath. Differentially private aggregation of distributed time-series with transformation and encryption. In Proc. 2010 ACM SIGMOD International Conference on Management of Data (SIGMOD 2010), pages 735-746. ACM, 2010.
[23] E. Shi, T. Chan, E. Rieffel, R. Chow, and D. Song. Privacy-preserving aggregation of time-series data. In Proc. NDSS, 2011.
[24] A. Smith. Efficient, differentially private point estimators. 2008. arXiv:0809.4794 [cs.CR].
[25] Y. Wang, S. E. Fienberg, and A. J. Smola. Privacy for free: Posterior sampling and stochastic gradient Monte Carlo. In Proc. ICML 2015, pages 2493-2502, 2015.
[26] O. Williams and F. McSherry. Probabilistic inference and differential privacy. In Adv. Neural Inf. Process. Syst. 23, 2010.
[27] G. Wu, Y. He, J. Wu, and X. Xia. Inherit differential privacy in distributed setting: Multiparty randomized function computation. In 2016 IEEE Trustcom/BigDataSE/ISPA, pages 921-928, 2016.
[28] J. Zhang, Z. Zhang, X. Xiao, Y. Yang, and M. Winslett. Functional mechanism: Regression analysis under differential privacy. PVLDB, 5(11):1364-1375, 2012.
[29] Z. Zhang, B. Rubinstein, and C. Dimitrakakis. On the differential privacy of Bayesian inference. In Proc. AAAI 2016, 2016.
Learning to Compose Domain-Specific Transformations for Data Augmentation Alexander J. Ratner?, Henry R. Ehrenberg?, Zeshan Hussain, Jared Dunnmon, Christopher R? Stanford University {ajratner,henryre,zeshanmh,jdunnmon,chrismre}@cs.stanford.edu Abstract Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels. While it is often easy for domain experts to specify individual transformations, constructing and tuning the more sophisticated compositions typically needed to achieve state-of-the-art results is a time-consuming manual task in practice. We propose a method for automating this process by learning a generative sequence model over user-specified transformation functions using a generative adversarial approach. Our method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data. The learned transformation model can then be used to perform data augmentation for any end discriminative model. In our experiments, we show the efficacy of our approach on both image and text datasets, achieving improvements of 4.0 accuracy points on CIFAR-10, 1.4 F1 points on the ACE relation extraction task, and 3.4 accuracy points when using domain-specific transformation operations on a medical imaging dataset as compared to standard heuristic augmentation approaches. 1 Introduction Modern machine learning models, such as deep neural networks, may have billions of free parameters and accordingly require massive labeled data sets for training. In most settings, labeled data is not available in sufficient quantities to avoid overfitting to the training set. The technique of artificially expanding labeled training sets by transforming data points in ways which preserve class labels ? known as data augmentation ? has quickly become a critical and effective tool for combatting this labeled data scarcity problem. Data augmentation can be seen as a form of weak supervision, providing a way for practitioners to leverage their knowledge of invariances in a task or domain. And indeed, data augmentation is cited as essential to nearly every state-of-the-art result in image classification [3, 7, 11, 24] (see Supplemental Materials), and is becoming increasingly common in other modalities as well [20]. Even on well studied benchmark tasks, however, the choice of data augmentation strategy is known to cause large variances in end performance and be difficult to select [11, 7], with papers often reporting their heuristically found parameter ranges [3]. In practice, it is often simple to formulate a large set of primitive transformation operations, but time-consuming and difficult to find the parameterizations and compositions of them needed for state-of-the-art results. In particular, many transformation operations will have vastly different effects based on parameterization, the set of other transformations they are applied with, and even their particular order of composition. For example, brightness and saturation enhancements might be destructive when applied together, but produce realistic images when paired with geometric transformations. ? Authors contributed equally 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. programs Rotate Rotate ZoomOut ShiftHue Flip Flip ShiftHue Brighten P (w20 | w1 , w0 ) Rachel writes code for WebCo. 
E1 NN E2 Figure 1: Three examples of transformation functions (TFs) in different domains: Two example sequences of incremental image TFs applied to CIFAR-10 images (left); a conditional word-swap TF using an externally trained language model and specifically targeting nouns (NN) between entity mentions (E1,E2) for a relation extraction task (middle); and an unsupervised segementation-based translation TF applied to mass-containing mammography images (right). Given the difficulty of searching over this configuration space, the de facto norm in practice consists of applying one or more transformations in random order and with random parameterizations selected from hand-tuned ranges. Recent lines of work attempt to automate data augmentation entirely, but either rely on large quantities of labeled data [1, 21], restricted sets of simple transformations [8, 13], or consider only local perturbations that are not informed by domain knowledge [1, 22] (see Section 4). In contrast, our aim is to directly and flexibly leverage domain experts? knowledge of invariances as a valuable form of weak supervision in real-world settings where labeled training data is limited. In this paper, we present a new method for data augmentation that directly leverages user domain knowledge in the form of transformation operations, and automates the difficult process of composing and parameterizing them. We formulate the problem as one of learning a generative sequence model over black-box transformation functions (TFs): user-specified operators representing incremental transformations to data points that need not be differentiable nor deterministic. For example, TFs could rotate an image by a small degree, swap a word in a sentence, or translate a segmented structure in an image (Fig. 1). We then design a generative adversarial objective [9] which allows us to train the sequence model to produce transformed data points which are still within the data distribution of interest, using unlabeled data. Because the TFs can be stochastic or non-differentiable, we present a reinforcement learning-based training strategy for this model. The learned model can then be used to perform data augmentation on labeled training data for any end discriminative model. Given the flexibility of our representation of the data augmentation process, we can apply our approach in many different domains, and on different modalities including both text and images. On a real-world mammography image task, we achieve a 3.4 accuracy point boost above randomly composed augmentation by learning to appropriately combine standard image TFs with domainspecific TFs derived in collaboration with radiology experts. Using novel language model-based TFs, we see a 1.4 F1 boost over heuristic augmentation on a text relation extraction task from the ACE corpus. And on a 10%-subsample of the CIFAR-10 dataset, we achieve a 4.0 accuracy point gain over a standard heuristic augmentation approach and are competitive with comparable semi-supervised approaches. Additionally, we show empirical results suggesting that the proposed approach is robust to misspecified TFs. Our hope is that the proposed method will be of practical value to practitioners and of interest to researchers, so we have open-sourced the code at https: //github.com/HazyResearch/tanda. 2 Modeling Setup and Motivation In the standard data augmentation setting, our aim is to expand a labeled training set by leveraging knowledge of class-preserving transformations. 
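To ground this setting, here is a toy sketch of what user-specified transformation functions can look like in code. The two image operations below are our own illustrative placeholders built on scipy.ndimage (an assumed choice, not operators from the paper's experiments); the method treats such functions purely as black boxes, so nothing about them needs to be differentiable or deterministic.

```python
# A toy sketch of user-specified transformation functions (TFs); the two
# image operations below are illustrative placeholders built on
# scipy.ndimage (our choice, not from the paper's experiments).
import numpy as np
from scipy.ndimage import rotate, shift

def rotate_5deg(img):
    # incremental TF: a 15-degree rotation is three applications of this
    return rotate(img, angle=5.0, reshape=False, mode="nearest")

def shift_right_1px(img):
    return shift(img, shift=(0, 1), mode="nearest")

TFS = [rotate_5deg, shift_right_1px]      # user-provided h_1, ..., h_K

def apply_sequence(x, tau):
    """Apply the composition h_{tau_L} o ... o h_{tau_1} to a point x."""
    for i in tau:                         # tau is a list of TF indices
        x = TFS[i](x)
    return x

x_aug = apply_sequence(np.random.rand(32, 32), [0, 0, 0, 1])
```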
For a practitioner with domain expertise, providing individual transformations is straightforward. However, high-performance augmentation techniques use compositions of finely tuned transformations to achieve state-of-the-art results [7, 3, 11], and heuristically searching over this space of all possible compositions and parameterizations for a new task is often infeasible. Our goal is to automate this task by learning to compose and parameterize a set of user-specified transformation operators in ways that are diverse but still preserve class labels. In our method, transformations are modeled as sequences of incremental user-specified operations, called transformation functions (TFs) (Fig. 1). Rather than making the strong assumption that all the provided TFs preserve class labels, as existing approaches do, we assume a weaker form of class invariance which enables us to use unlabeled data to learn a generative model over transformation sequences. We then propose two representative model classes to handle modeling both commutative and non-commutative transformations.

Figure 2: A high-level diagram of our method. Users input a set of transformation functions $h_1, \ldots, h_K$ and unlabeled data. A generative adversarial approach is then used to train a null class discriminator, $D^\phi$, and a generator, $G$, which produces TF sequences $h_{\tau_1}, \ldots, h_{\tau_L}$. Finally, the trained generator is used to perform data augmentation for an end discriminative model $D_f$.

2.1 Augmentation as Sequence Modeling

In our approach, we represent transformations as sequences of incremental operations. In this setting, the user provides a set of $K$ TFs, $h_i : \mathcal{X} \mapsto \mathcal{X}$, $i \in [1, K]$. Each TF performs an incremental transformation: for example, $h_i$ could rotate an image by five degrees, swap a word in a sentence, or move a segmented tumor mass around a background mammography image (see Fig. 1). In order to accommodate a wide range of such user-defined TFs, we treat them as black-box functions which need not be deterministic nor differentiable. This formulation gives us a tractable way to tune both the parameterization and composition of the TFs in a discretized but fine-grained manner. Our representation can be thought of as an implicit binning strategy for tuning parameterizations, e.g., a 15-degree rotation might be represented as three applications of a five-degree rotation TF. It also provides a direct way to represent compositions of multiple transformation operations. This is critical, as a multitude of state-of-the-art results in the literature show the importance of using compositions of more than one transformation per image [7, 3, 11], which we also confirm experimentally in Section 5.

2.2 Weakening the Class-Invariance Assumption

Any data augmentation technique fundamentally relies on some assumption about the transformation operations' relation to the class labels. Previous approaches make the unrealistic assumption that all provided transformation operations preserve class labels for all data points. That is,
$$y(h_{\tau_L} \circ \cdots \circ h_{\tau_1}(x)) = y(x) \qquad (1)$$
for label mapping function $y$, any sequence of TF indices $\tau_1, \ldots, \tau_L$, and all data points $x$. This assumption puts a large burden of precise specification on the user and, based on our observations, is violated by many real-world data augmentation strategies. Instead, we consider a weaker modeling assumption. We assume that transformation operations will not map between classes, but might destructively map data points out of the distribution of interest entirely:
$$y(h_{\tau_L} \circ \cdots \circ h_{\tau_1}(x)) \in \{y(x), y_\emptyset\} \qquad (2)$$
where $y_\emptyset$ represents an out-of-distribution null class. Intuitively, this weaker assumption is motivated by the categorical image classification setting, where we observe that transformation operations provided by the user will almost never turn, for example, a plane into a car, but may often turn a plane into an indistinguishable "garbage" image (Fig. 3). We are the first to consider this weaker invariance assumption, which we believe more closely matches various practical data augmentation settings of interest. In Section 5, we also provide empirical evidence that this weaker assumption is useful in binary classification settings and over modalities other than image data. Critically, it also enables us to learn a model of TF sequences using unlabeled data alone.

Figure 3: Our modeling assumption is that transformations may map out of the natural distribution of interest, but will rarely map between classes. As a demonstration, we take images from CIFAR-10 (each row) and randomly search for a transformation sequence that best maps them to a different class (each column), according to a trained discriminative model. The matches rarely resemble the target class but often no longer look like "normal" images at all. Note that we consider a fixed set of user-provided TFs, not adversarially selected ones.

Figure 4: Some example transformed images generated using an augmentation generative model trained using our approach. Note that this is not meant as a comparison to Fig. 3.

2.3 Minimizing Null Class Mappings Using Unlabeled Data

Given assumption (2), our objective is to learn a model $G_\theta$ which generates sequences of TF indices $\tau \in \{1, \ldots, K\}^L$ with fixed length $L$, such that the resulting TF sequences $h_{\tau_1}, \ldots, h_{\tau_L}$ are not likely to map data points into $y_\emptyset$. Crucially, this does not involve using the class labels of any data points, and so we can use unlabeled data. Our goal is then to minimize the probability of a generated sequence mapping unlabeled data points into the null class, with respect to $\theta$:
$$J_\theta = \mathbb{E}_{\tau \sim G_\theta} \mathbb{E}_{x \sim \mathcal{U}} \left[ P(y(h_{\tau_L} \circ \cdots \circ h_{\tau_1}(x)) = y_\emptyset) \right] \qquad (3)$$
where $\mathcal{U}$ is some distribution of unlabeled data.

Generative Adversarial Objective. In order to approximate $P(y(h_{\tau_1} \circ \cdots \circ h_{\tau_L}(x)) = y_\emptyset)$, we jointly train the generator $G_\theta$ and a discriminative model $D^\phi$ using a generative adversarial network (GAN) objective [9], now minimizing with respect to $\theta$ and maximizing with respect to $\phi$:
$$\tilde{J}_\theta^\phi = \mathbb{E}_{\tau \sim G_\theta} \mathbb{E}_{x \sim \mathcal{U}} \left[ \log(1 - D^\phi(h_{\tau_L} \circ \cdots \circ h_{\tau_1}(x))) \right] + \mathbb{E}_{x' \sim \mathcal{U}} \left[ \log(D^\phi(x')) \right] \qquad (4)$$
As in the standard GAN setup, the training procedure can be viewed as a minimax game in which the discriminator's goal is to assign low values to transformed, out-of-distribution data points and high values to real in-distribution data points, while simultaneously, the generator's goal is to generate transformation sequences which produce data points that are indistinguishable from real data points according to the discriminator. For $D^\phi$, we use an all-convolutional CNN as in [23]. For further details, see Supplemental Materials.

Diversity Objective. An additional concern is that the model will learn a variety of null transformation sequences (e.g., rotating first left and then right repeatedly). Given the potentially large state space of actions and the black-box nature of the user-specified TFs, it seems infeasible to hard-code sets of inverse operations to avoid. To mitigate this, we instead consider a second objective term:
$$J_d = \mathbb{E}_{\tau \sim G_\theta} \mathbb{E}_{x \sim \mathcal{U}} \left[ d(h_{\tau_L} \circ \cdots \circ h_{\tau_1}(x), x) \right] \qquad (5)$$
where $d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is some distance function. For $d$, we evaluated using both distance in the raw input space and distance in the feature space learned by the final pre-softmax layer of the discriminator $D^\phi$. Combining eqns. 4 and 5, our final objective is then $J = \tilde{J}_\theta^\phi + \alpha J_d^{-1}$, where $\alpha > 0$ is a hyperparameter. We minimize $J$ with respect to $\theta$ and maximize with respect to $\phi$.

2.4 Modeling Transformation Sequences

We now consider two model classes for $G_\theta$:

Independent Model. We first consider a mean field model in which each sequential TF is chosen independently. This reduces our task to one of learning $K$ parameters, which we can think of as representing the task-specific "accuracies" or "frequencies" of each TF. For example, we might want to learn that elastic deformations or swirls should only rarely be applied to images in CIFAR-10, but that small rotations can be applied frequently. In particular, a mean field model also provides a simple way of effectively learning stochastic, discretized parameterizations of the TFs. For example, if we have a TF representing five-degree rotations, Rotate5Deg, a marginal value of $P_{G_\theta}(\text{Rotate5Deg}) = 0.1$ could be thought of as roughly equivalent to learning to rotate $0.5L$ degrees on average.

State-Based Model. There are important cases, however, where the independent representation learned by the mean field model could be overly limited. In many settings, certain TFs may have very different effects depending on which other TFs are applied with them. As an example, certain similar pairs of image transformations might be overly lossy when applied together, such as a blur and a zoom operation, or a brighten and a saturate operation. A mean field model could not represent disjunctions such as these. Another scenario where an independent model fails is where the TFs are non-commutative, such as with lossy operators (e.g., image transformations which use aliasing). In both of these cases, modeling the sequences of transformations could be important. Therefore we consider a long short-term memory (LSTM) network as a representative sequence model. The output from each cell of the network is a distribution over the TFs. The next TF in the sequence is then sampled from this distribution and is fed as a one-hot vector to the next cell in the network.

3 Learning a Transformation Sequence Model

The core challenge that we now face in learning $G_\theta$ is that it generates sequences over TFs which are not necessarily differentiable or deterministic. This constraint is a critical facet of our approach from the usability perspective, as it allows users to easily write TFs as black-box scripts in the language of their choosing, leveraging arbitrary subfunctions, libraries, and methods. In order to work around this constraint, we now describe our model in the syntax of reinforcement learning (RL), which provides a convenient framework and set of approaches for handling computation graphs with non-differentiable or stochastic nodes [27].

Reinforcement Learning Formulation. Let $\tau_i$ be the index of the $i$th TF applied, and $\tilde{x}_i$ be the resulting incrementally transformed data point. Then we consider $s_t = (x, \tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_t, \tau_1, \ldots, \tau_t)$ as the state after having applied $t$ of the incremental TFs. Note that we include the incrementally transformed data points $\tilde{x}_1, \ldots, \tilde{x}_t$ in $s_t$ since the TFs may be stochastic. Each of the model classes considered for $G_\theta$ then uses a different state representation $\tilde{s}$. For the mean field model, the state representation used is $\tilde{s}^{\mathrm{MF}}_t = \tau_t$. For the LSTM model, we use $\tilde{s}^{\mathrm{LSTM}}_t = \mathrm{LSTM}(\tau_t, \tilde{s}_{t-1})$, the state update operation performed by a standard LSTM cell parameterized by $\theta$.

Policy Gradient with Incremental Rewards. Let $\ell_t(x, \tau) = \log(1 - D^\phi(\tilde{x}_t))$ be the cumulative loss for a data point $x$ at step $t$, with $\ell_0(x) = \ell_0(x, \tau) \equiv \log(1 - D^\phi(x))$. Let $R(s_t) = \ell_t(x, \tau) - \ell_{t-1}(x, \tau)$ be the incremental reward, representing the difference in discriminator loss at incremental transformation step $t$. We can now recast the first term of our objective $\tilde{J}_\theta^\phi$ as an expected sum of incremental rewards:
$$U(\theta) \equiv \mathbb{E}_{\tau \sim G_\theta} \mathbb{E}_{x \sim \mathcal{U}} \left[ \log(1 - D^\phi(h_{\tau_1} \circ \cdots \circ h_{\tau_L}(x))) \right] = \mathbb{E}_{\tau \sim G_\theta} \mathbb{E}_{x \sim \mathcal{U}} \left[ \ell_0(x) + \sum_{t=1}^{L} R(s_t) \right] \qquad (6)$$
We omit $\ell_0$ in practice, equivalent to using the loss of $x$ as a baseline term. Next, let $\pi_\theta$ be the stochastic transition policy implicitly defined by $G_\theta$. We compute the recurrent policy gradient [32] of the objective $U(\theta)$ as:
$$\nabla_\theta U(\theta) = \mathbb{E}_{\tau \sim G_\theta} \mathbb{E}_{x \sim \mathcal{U}} \left[ \sum_{t=1}^{L} R(s_t) \nabla_\theta \log \pi_\theta(\tau_t \mid \tilde{s}_{t-1}) \right] \qquad (7)$$
Following standard practice, we approximate this quantity by sampling batches of $n$ data points and $m$ sampled action sequences per data point. We also use the standard techniques of discounting with factor $\gamma \in [0, 1]$ and considering only future rewards [12]. See Supplemental Materials for details.

4 Related Work

We now review related work, both to motivate comparisons in the experiments section and to present complementary lines of work.

Heuristic Data Augmentation. Most state-of-the-art image classification pipelines use some limited form of data augmentation [11, 7]. This generally consists of applying crops, flips, or small affine transformations, in fixed order or at random, and with parameters drawn randomly from hand-tuned ranges. In addition, various studies have applied heuristic data augmentation techniques to modalities such as audio [31] and text [20]. As reported in the literature, the selection of these augmentation strategies can have large performance impacts, and thus can require extensive selection and tuning by hand [3, 7] (see Supplemental Materials as well).

Interpolation-Based Techniques. Some techniques have explored generating augmented training sets by interpolating between labeled data points. For example, the well-known SMOTE algorithm applies this basic technique for oversampling in class-imbalanced settings [2], and recent work explores using a similar interpolation approach in a learned feature space [5]. [13] proposes learning a class-conditional model of diffeomorphisms interpolating between nearest-neighbor labeled data points as a way to perform augmentation. We view these approaches as complementary but orthogonal, as our goal is to directly exploit user domain knowledge of class-invariant transformation operations.

Adversarial Data Augmentation. Several lines of recent work have explored techniques which can be viewed as forms of data augmentation that are adversarial with respect to the end classification model. In one set of approaches, transformation operations are selected adaptively from a given set in order to maximize the loss of the end classification model being trained [30, 8]. These procedures make the strong assumption that all of the provided transformations will preserve class labels, or use bespoke models over restricted sets of operations [28]. Another line of recent work has shown that augmentation via small adversarial linear perturbations can act as a regularizer [10, 22].
While complimentary, this work does not consider taking advantage of non-local transformations derived from user knowledge of task or domain invariances. Finally, generative adversarial networks (GANs) [9] have recently made great progress in learning complete data generation models from unlabeled data. These can be used to augment labeled training sets as well. Class-conditional GANs [1, 21] generate artificial data points but require large sets of labeled training data to learn from. Standard unsupervised GANs can be used to generate additional out-of-class data points that can then augment labeled training sets [25, 29]. We compare our proposed approach with these methods empirically in Section 5. 5 Experiments We experimentally validate the proposed framework by learning augmentation models for several benchmark and real-world data sets, exploring both image recognition and natural language understanding tasks. Our focus is on the performance of end classification models trained on labeled datasets augmented with our approach and others used in practice. We also examine robustness to user misspecification of TFs, and sensitivity to core hyperparameters. 5.1 Datasets and Transformation Functions Benchmark Image Datasets We ran experiments on the MNIST [18] and CIFAR-10 [17] datasets, using only a subset of the class labels to train the end classification models and treating the rest 6 as unlabeled data. We used a generic set of TFs for both MNIST and CIFAR-10: small rotations, shears, central swirls, and elastic deformations. We also used morphologic operations for MNIST, and adjustments to hue, saturation, contrast, and brightness for CIFAR-10. Benchmark Text Dataset We applied our approach to the Employment relation extraction subtask from the NIST Automatic Content Extraction (ACE) corpus [6], where the goal is to identify mentions of employer-employee relations in news articles. Given the standard class imbalance in information extraction tasks like this, we used data augmentation to oversample the minority positive class. The flexibility of our TF representation allowed us to take a straightforward but novel approach to data augmentation in this setting. We constructed a trigram language model using the ACE corpus and Reuters Corpus Volume I [19] from which we can sample a word conditioned on the preceding words. We then used this model as the basis for a set of TFs that select words to swap based on the part-of-speech tag and location relative to entities of interest (see Supplemental Materials for details). Mammography Tumor-Classification Dataset To demonstrate the effectiveness of our approach on real-world applications, we also considered the task of classifying benign versus malignant tumors from images in the Digital Database for Screening Mammography (DDSM) dataset [15, 4, 26], which is a class-balanced dataset consisting of 1506 labeled mammograms. In collaboration with domain experts in radiology, we constructed two basic TF sets. The first set consisted of standard image transformation operations subselected so as not to break class-invariance in the mammography setting. For example, brightness operations were excluded for this reason. The second set consisted of both the first set as well as several novel segmentation-based transplantation TFs. 
Each of these TFs utilized the output of an unsupervised segmentation algorithm to isolate the tumor mass, perform a transformation operation such as rotation or shifting, and then stitch it into a randomly-sampled benign tissue image. See Fig. 1 (right panel) for an illustrative example, and Supplemental Materials for further details.

5.2 End Classifier Performance

We evaluated our approach by using it to augment labeled training sets for the tasks mentioned above, and show that we achieve strong gains over heuristic baselines. In particular, for a given set of TFs, we evaluate the performance of mean field (MF) and LSTM generators trained using our approach against two standard data augmentation techniques used in practice. The first (Basic) consists of applying random crops to images, or performing simple minority-class duplication for the ACE relation extraction task. The second (Heur.) is the standard heuristic approach of applying random compositions of the given set of transformation operations, the most common technique used in practice [3, 11, 14]. For both our approaches (MF and LSTM) and Heur., we additionally use the same random cropping technique as in the Basic approach. We present these results in Table 1, where we report test set accuracy (or F1 score for ACE) and use a random subsample of the available labeled training data. Additionally, we include an extra row for the DDSM task highlighting the impact on performance of adding the domain-specific (DS) TFs, i.e., the segmentation-based operations described above. In Table 2 we additionally compare to two related generative-adversarial methods, the Categorical GAN (CatGAN) [29] and the semi-supervised GAN (SS-GAN) from [25]. Both of these methods use GAN-based architectures trained on unlabeled data to generate new out-of-class data points with which to augment a labeled training set. Following their protocol for CIFAR-10, we train our generator on the full set of unlabeled data, and our end discriminator on ten disjoint random folds of the labeled training set not including the validation set (i.e., n = 4000 each), averaging the results.

In all settings, we train our TF sequence generator on the full set of unlabeled data. We select a fixed sequence length for each task via an initial calibration experiment (Fig. 5b). We use L = 5 for ACE, L = 7 for DDSM + DS, and L = 10 for all other tasks. We note that our findings here mirrored those in the literature, namely that compositions of multiple TFs lead to higher end model accuracies. We selected hyperparameters of the generator via performance on a validation set. We then used the trained generator to transform the entire training set at each epoch of end classification model training. For MNIST and DDSM we use a four-layer all-convolutional CNN, for CIFAR-10 we use a 56-layer ResNet [14], and for ACE we use a bi-directional LSTM. Additionally, we incorporate a basic transformation regularization term as in [24] (see Supplemental Materials), and train for the last ten epochs without applying any transformations as in [11]. In all cases, we use hyperparameters as reported in the literature. For further details of generator and end model training see the Supplemental Materials.

Table 1: Test set performance of end models trained on subsamples of the labeled training data (%), not including validation splits, using various data augmentation approaches. None indicates performance with no augmentation. All tasks are measured in accuracy, except ACE, which is measured by F1 score.

Task        %     None   Basic   Heur.   MF     LSTM
MNIST       1     90.2   95.3    95.9    96.5   96.7
MNIST       10    97.3   98.7    99.0    99.2   99.1
CIFAR-10    10    66.0   73.1    77.5    79.8   81.5
CIFAR-10    100   87.8   91.9    92.3    94.4   94.0
ACE (F1)    100   62.7   59.9    62.8    62.9   64.2
DDSM        10    57.6   58.8    59.3    58.2   61.0
DDSM + DS   10    --     --      53.7    59.9   62.7

Table 2: Reported end model accuracies, averaged across 10% subsample folds, on CIFAR-10 for comparable GAN methods.

Model    Acc. (%)
CatGAN   80.42 +/- 0.58
SS-GAN   81.37 +/- 2.32
LSTM     81.47 +/- 0.46

Figure 5: (a) Learned TF frequency parameters for misspecified and normal TFs on MNIST. The mean field model correctly learns to avoid the misspecified TFs. (b) Larger sequence lengths lead to higher end model accuracy on CIFAR-10, while random performs best with shorter sequences, according to a sequence length calibration experiment.

We see that across the applications studied, our approach outperforms the heuristic data augmentation approach most commonly used in practice. Furthermore, the LSTM generator outperforms the simple mean field one in most settings, indicating the value of modeling sequential structure in data augmentation. In particular, we realize significant gains over standard heuristic data augmentation on CIFAR-10, where we are competitive with comparable semi-supervised GAN approaches, but with significantly smaller variance. We also train the same CIFAR-10 end model using the full labeled training dataset, and again see strong relative gains (2.1 pts. in accuracy over heuristic), coming within 2.1 points of the current state of the art [16] using our much simpler end model. On the ACE and DDSM tasks, we also achieve strong performance gains, showing the ability of our method to productively incorporate more complex transformation operations from domain expert users. In particular, in DDSM we observe that the addition of the segmentation-based TFs causes the heuristic augmentation approach to perform significantly worse, due to a large number of new failure modes resulting from combinations of the segmentation-based TFs, which use gradient-based blending, and the standard TFs such as zoom and rotate. In contrast, our LSTM model learns to avoid these destructive subsequences and achieves the highest score, resulting in a 9.0-point boost over the comparable heuristic approach.

Robustness to TF Misspecification. One of the high-level goals of our approach is to enable an easier interface for users by not requiring that the TFs they specify be completely class-preserving. The lack of any assumption of well-specified transformation operations in our approach, and the strong empirical performance realized, is evidence of this robustness. To additionally illustrate the robustness of our approach to misspecified TFs, we train a mean field generator on MNIST using the standard TF set, but with two TFs (shear operations) parameterized so as to map almost all images to the null class. We see in Fig. 5a that the generator learns to avoid applying the misspecified TFs (red lines) almost entirely.
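Before concluding, the training procedure of Section 3 can be made concrete with a minimal toy sketch, which is our own illustration under simplifying assumptions rather than the released TANDA code: theta holds K logits for the mean field generator, a length-L TF sequence is sampled from softmax(theta), the incremental rewards are differences in a stand-in discriminator loss, and theta takes a policy-gradient step as in eq. (7). The scaling TFs and the quadratic loss below are hypothetical placeholders.

```python
# Toy stand-ins: scaling TFs and a quadratic "discriminator loss"; any
# black-box TFs and a trained CNN discriminator would take their place.
import numpy as np

rng = np.random.default_rng(0)
K, L, lr = 5, 10, 0.1
theta = np.zeros(K)                                   # mean field logits
TFS = [lambda x, a=a: a * x for a in (0.8, 0.9, 1.0, 1.1, 1.2)]

def disc_loss(x):
    # stand-in for log(1 - D(x)) on a transformed point
    return float(-np.sum(x ** 2))

def reinforce_step(x, theta):
    pi = np.exp(theta - theta.max())                  # softmax policy
    pi /= pi.sum()
    tau = rng.choice(K, size=L, p=pi)                 # sample TF sequence
    losses = [disc_loss(x)]
    for i in tau:                                     # apply h_{tau_t} in turn
        x = TFS[i](x)
        losses.append(disc_loss(x))
    rewards = np.diff(losses)                         # R(s_t) = l_t - l_{t-1}
    grad = np.zeros(K)
    for t, i in enumerate(tau):
        g = -pi.copy()
        g[i] += 1.0                                   # grad of log pi(tau_t)
        grad += rewards[t:].sum() * g                 # future rewards, gamma = 1
    return theta - lr * grad                          # descend U(theta)

theta = reinforce_step(np.ones(8), theta)
```

With real image TFs and a trained CNN discriminator in place of the stand-ins, the same loop corresponds to the minibatch estimator described in Section 3.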
6 Conclusion and Future Work We presented a method for learning how to parameterize and compose user-provided black-box transformation operations used for data augmentation. Our approach is able to model arbitrary TFs, allowing practitioners to leverage domain knowledge in a flexible and simple manner. By training a generative sequence model over the specified transformation functions using reinforcement learning in a GAN-like framework, we are able to generate realistic transformed data points which are useful for data augmentation. We demonstrated that our method yields strong gains over standard heuristic approaches to data augmentation for a range of applications, modalities, and complex domain-specific transformation functions. There are many possible future directions of research for learning data augmentation strategies in the proposed model, such as conditioning the generator?s stochastic policy on a featurized version of the data point being transformed, and generating TF sequences of dynamic length. More broadly, we are excited about further formalizing data augmentation as a novel form of weak supervision, allowing users to directly encode domain knowledge about invariants into machine learning models. Acknowledgements We would like to thank Daniel Selsam, Ioannis Mitliagkas, Christopher De Sa, William Hamilton, and Daniel Rubin for valuable feedback and conversations. We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) SIMPLEX program under No. N66001-15-C-4043, the DARPA D3M program under No. FA8750-17-20095, DARPA programs No. FA8750-12-2-0335 and FA8750-13-2-0039, DOE 108845, National Institute of Health (NIH) U54EB020405, the Office of Naval Research (ONR) under awards No. N000141210041 and No. N000141310129, the Moore Foundation, the Okawa Research Grant, American Family Insurance, Accenture, Toshiba, and Intel. This research was also supported in part by affiliate members and other supporters of the Stanford DAWN project: Intel, Microsoft, Teradata, and VMware. This material is based on research sponsored by DARPA under agreement number FA8750-17-2-0095. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, AFRL, NSF, NIH, ONR, or the U.S. Government. References [1] S. Baluja and I. Fischer. Adversarial transformation networks: Learning to generate adversarial examples. arXiv preprint arXiv:1703.09387, 2017. [2] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer. Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research, 16:321?357, 2002. [3] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Deep big simple neural nets excel on handwritten digit recognition, 2010. Cited on, 80. [4] K. Clark, B. Vendt, K. Smith, J. Freymann, J. Kirby, P. Koppel, S. Moore, S. Phillips, D. Maffitt, M. Pringle, L. Tarbox, and F. Prior. The cancer imaging archive (TCIA): Maintaining and operating a public information repository. Journal of Digital Imaging, 26(6):1045?1057, 2013. [5] T. DeVries and G. W. Taylor. arXiv:1702.05538, 2017. Dataset augmentation in feature space. 9 arXiv preprint [6] G. R. Doddington, A. Mitchell, M. A. Przybocki, L. A. Ramshaw, S. Strassel, and R. M. 
Weischedel. The automatic content extraction (ace) program-tasks, data, and evaluation. In LREC, volume 2, page 1, 2004. [7] A. Dosovitskiy, P. Fischer, J. Springenberg, M. Riedmiller, and T. Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks, arxiv preprint. arXiv preprint arXiv:1506.02753, 2015. [8] A. Fawzi, H. Samulowitz, D. Turaga, and P. Frossard. Adaptive data augmentation for image classification. In Image Processing (ICIP), 2016 IEEE International Conference on, pages 3688?3692. IEEE, 2016. [9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672?2680, 2014. [10] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. [11] B. Graham. Fractional max-pooling. arXiv preprint arXiv:1412.6071, 2014. [12] E. Greensmith, P. L. Bartlett, and J. Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471?1530, 2004. [13] S. Hauberg, O. Freifeld, A. B. L. Larsen, J. Fisher, and L. Hansen. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. In Artificial Intelligence and Statistics, pages 342?350, 2016. [14] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770?778, 2016. [15] M. Heath, K. Bowyer, D. Kopans, R. Moore, and W. P. Kegelmeyer. The digital database for screening mammography. In Proceedings of the 5th international workshop on digital mammography, pages 212?218. Medical Physics Publishing, 2000. [16] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016. [17] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009. [18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278?2324, 1998. [19] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361?397, 2004. [20] X. Lu, B. Zheng, A. Velivelli, and C. Zhai. Enhancing text categorization with semantic-enriched representation and training data augmentation. Journal of the American Medical Informatics Association, 13(5):526?535, 2006. [21] M. Mirza and S. Osindero. arXiv:1411.1784, 2014. Conditional generative adversarial nets. arXiv preprint [22] T. Miyato, S.-i. Maeda, M. Koyama, K. Nakae, and S. Ishii. Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677, 2015. [23] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. [24] M. Sajjadi, M. Javanmardi, and T. Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. CoRR, abs/1606.04586, 2016. 10 [25] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2226?2234, 2016. [26] R. Sawyer Lee, F. Gimenez, A. Hoogi, and D. Rubin. 
Curated breast imaging subset of DDSM. In The Cancer Imaging Archive, 2016. [27] J. Schulman, N. Heess, T. Weber, and P. Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pages 3528?3536, 2015. [28] L. Sixt, B. Wild, and T. Landgraf. Rendergan: Generating realistic labeled data. arXiv preprint arXiv:1611.01331, 2016. [29] J. T. Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015. [30] C. H. Teo, A. Globerson, S. T. Roweis, and A. J. Smola. Convex learning with invariances. In Advances in neural information processing systems, pages 1489?1496, 2008. [31] S. Uhlich, M. Porcu, F. Giron, M. Enenkl, T. Kemp, N. Takahashi, and Y. Mitsufuji. Improving music source separation based on deep neural networks through data augmentation and network blending. Submitted to ICASSP, 2017. [32] D. Wierstra, A. F?rster, J. Peters, and J. Schmidhuber. Recurrent policy gradients. Logic Journal of IGPL, 18(5):620?634, 2010. 11
Wasserstein Learning of Deep Generative Point Process Models Shuai Xiao? ? , Mehrdad Farajtabar? Xiaojing Ye? , Junchi Yan? Xiaokang Yang? , Le Song , Hongyuan Zha ? Shanghai Jiao Tong University  College of Computing, Georgia Institute of Technology ? School of Mathematics, Georgia State University {benjaminforever,yanjunchi,xkyang}@sjtu.edu.cn {mehrdad}@gatech.edu, [email protected] {lsong,zha}@cc.gatech.edu Abstract Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena. Currently, they are often characterized via intensity function which limits model?s expressiveness due to unrealistic assumptions on its parametric form used in practice. Furthermore, they are learned via maximum likelihood approach which is prone to failure in multi-modal distributions of sequences. In this paper, we propose an intensity-free approach for point processes modeling that transforms nuisance processes to a target one. Furthermore, we train the model using a likelihood-free leveraging Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones. 1 Introduction Event sequences are ubiquitous in areas such as e-commerce, social networks, and health informatics. For example, events in e-commerce are the times a customer purchases a product from an online vendor such as Amazon. In social networks, event sequences are the times a user signs on or generates posts, clicks, and likes. In health informatics, events can be the times when a patient exhibits symptoms or receives treatments. Bidding and asking orders also comprise events in the stock market. In all of these applications, understanding and predicting user behaviors exhibited by the event dynamics are of great practical, economic, and societal interest. Temporal point processes [1] is an effective mathematical tool for modeling events data. It has been applied to sequences arising from social networks [2, 3, 4], electronic health records [5], ecommerce [6], and finance [7]. A temporal point process is a random process whose realization consists of a list of discrete events localized in (continuous) time. The point process representation of sequence data is fundamentally different from the discrete time representation typically used in time series analysis. It directly models the time period between events as random variables, and allows temporal events to be modeled accurately, without requiring the choice of a time window to aggregate events, which may cause discretization errors. Moreover, it has a remarkably extensive theoretical foundation [8]. However, conventional point process models often make strong unrealistic assumptions about the generative processes of the event sequences. In fact, a point process is characterized by its conditional ? Authors contributed equally. Work completed at Georgia Tech. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. intensity function ? a stochastic model for the time of the next event given all the times of previous events. The functional form of the intensity is often designed to capture the phenomena of interests [9]. Some examples are homogeneous and non-homogeneous Poisson processes [10], self-exciting point processes [11], self-correcting point process models [12], and survival processes [8]. 
Unfortunately, these models make various parametric assumptions about the latent dynamics governing the generation of the observed point patterns. As a consequence, model misspecification can significantly degrade the performance of point process models, as our experimental results also show later. To address this problem, the authors in [13] propose to learn a general representation of the underlying dynamics from the event history without assuming a fixed parametric form in advance. The intensity function of the temporal point process is viewed as a nonlinear function of the history of the process and is parameterized using a recurrent neural network. However, this line of work still relies on explicit modeling of the intensity function. In many tasks, such as data generation or event prediction, knowledge of the whole intensity function is unnecessary. On the other hand, sampling sequences from intensity-based models is usually performed via a thinning algorithm [14], which is computationally expensive: many candidate events may be rejected in the rejection step, especially when the intensity exhibits high variation. More importantly, most intensity-based methods are trained by maximizing the log-likelihood or a lower bound on it. They are asymptotically equivalent to minimizing the Kullback-Leibler (KL) divergence between the data and model distributions, which suffers from serious issues such as mode dropping [15, 16]. Alternatively, Generative Adversarial Networks (GANs) [17] have proven to be a promising alternative to traditional maximum likelihood approaches [18, 19].

In this paper, for the first time, we bypass intensity-based modeling and likelihood-based estimation of temporal point processes and propose a neural network-based model with a generative adversarial learning scheme for point processes. In GANs, two models are used to solve a minimax game: a generator, which samples synthetic data from the model, and a discriminator, which classifies the data as real or synthetic. Theoretically speaking, these models are capable of modeling an arbitrarily complex probability distribution, including distributions over discrete events. They achieve state-of-the-art results on a variety of generative tasks such as image generation, image super-resolution, 3D object generation, and video prediction [20, 21].

The original GAN in [17] minimizes the Jensen-Shannon (JS) divergence and is regarded as highly unstable and prone to missing modes. Recently, Wasserstein GAN (WGAN) [22] was proposed, which uses the Earth Mover's distance (EM) as an objective for training GANs. Furthermore, it is shown that the EM objective, as a metric between probability distributions [23], has many advantages: the loss correlates with the quality of the generated samples and reduces mode dropping [24]. Moreover, it leverages the geometry of the space of event sequences in terms of their distance, which is not the case for an MLE-based approach. In this paper we extend the notion of WGAN to temporal point processes and adopt a Recurrent Neural Network (RNN) for training. Importantly, we demonstrate that Wasserstein-distance training of RNN point process models outperforms the same architecture trained using MLE.
In a nutshell, the contributions of the paper are: i) we propose the first intensity-free generative model for point processes and introduce the first (to our best knowledge) likelihood-free learning method for them; ii) we extend WGAN to point processes with a recurrent neural network architecture for sequence generation; iii) in contrast to the usual subjective measures for evaluating GANs, we use statistical and quantitative measures to compare the performance of the model to conventional ones; iv) extensive experiments involving various types of point processes on both synthetic and real datasets show the promising performance of our approach.

2 Proposed Framework

In this section, we define point processes in a way that is suitable to be combined with WGANs.

2.1 Point Processes

Let $S$ be a compact space equipped with a Borel $\sigma$-algebra $\mathcal{B}$. Take $\Omega$ to be the set of counting measures on $S$, with $\mathcal{C}$ the smallest $\sigma$-algebra on it. Let $(\Omega_0, \mathcal{F}, \mathbb{P})$ be a probability space. A point process on $S$ is a measurable map $\xi : \Omega_0 \to \Omega$ from the probability space $(\Omega_0, \mathcal{F}, \mathbb{P})$ to the measurable space $(\Omega, \mathcal{C})$. Figure 1-a illustrates this mapping.

Every realization of a point process $\xi$ can be written as $\xi = \sum_{i=1}^{n} \delta_{X_i}$, where $\delta$ is the Dirac measure, $n$ is an integer-valued random variable, and the $X_i$'s are random elements of $S$, or events. A point process can be equivalently represented by a counting process $N(B) := \int_B \xi(x)\,dx$, which is the number of events in each Borel subset $B \in \mathcal{B}$ of $S$. The mean measure $M$ of a point process $\xi$ is a measure on $S$ that assigns to every $B \in \mathcal{B}$ the expected number of events of $\xi$ in $B$, i.e., $M(B) := \mathbb{E}[N(B)]$ for all $B \in \mathcal{B}$.

For an inhomogeneous Poisson process, $M(B) = \int_B \lambda(x)\,dx$, where the intensity $\lambda(x)$ is a positive measurable function on $S$. Intuitively speaking, $\lambda(x)\,dx$ is the expected number of events in the infinitesimal region $dx$. For the most common type of point process, the homogeneous Poisson process, $\lambda(x) = \lambda$ and $M(B) = \lambda |B|$, where $|\cdot|$ is the Lebesgue measure on $(S, \mathcal{B})$. More generally, in Cox point processes, $\lambda(x)$ can be a random density, possibly depending on the history of the process. For any point process, given $\lambda(\cdot)$, $N(B) \sim \mathrm{Poisson}\big(\int_B \lambda(x)\,dx\big)$. In addition, if $B_1, \ldots, B_k \in \mathcal{B}$ are disjoint, then $N(B_1), \ldots, N(B_k)$ are independent conditioned on $\lambda(\cdot)$. For ease of exposition, we present the framework for the case where events occur on the real half-line of time, but the framework easily extends to the general space.

2.2 Temporal Point Processes

A particularly interesting case arises when $S$ is the time interval $[0, T)$; we call this a temporal point process. Here, a realization is simply a set of time points: $\xi = \sum_{i=1}^{n} \delta_{t_i}$. With a slight abuse of notation we write $\xi = \{t_1, \ldots, t_n\}$, where each $t_i$ is a random time before $T$. Using a conditional intensity (rate) function is the usual way to characterize point processes.

For an inhomogeneous Poisson process (IP), the intensity $\lambda(t)$ is a fixed non-negative function supported on $[0, T)$. For example, it can be a multi-modal function comprised of $k$ Gaussian kernels: $\lambda(t) = \sum_{i=1}^{k} \alpha_i (2\pi\sigma_i^2)^{-1/2} \exp\big(-(t - c_i)^2/\sigma_i^2\big)$ for $t \in [0, T)$, where $c_i$ and $\sigma_i$ are fixed centers and standard deviations, respectively, and $\alpha_i$ is the weight (or importance) of kernel $i$.

A self-exciting (Hawkes) process (SE) is a Cox process where the intensity is determined by previous (random) events in a special parametric form: $\lambda(t) = \mu + \beta \sum_{t_i < t} g(t - t_i)$,
where $g$ is a nonnegative kernel function, e.g., $g(t) = \exp(-\omega t)$ for some $\omega > 0$. This process has the implication that the occurrence of an event increases the probability of near-future events, and its influence (usually) decreases over time, as captured by the (usually) decaying fixed kernel $g$. Here $\mu$ is the exogenous rate of firing events and $\beta$ is the coefficient of the endogenous rate.

In contrast, in a self-correcting process (SC), an event decreases the probability of future events: $\lambda(t) = \exp\big(\eta t - \sum_{t_i < t} \gamma\big)$. The exponential ensures that the intensity is positive, while $\eta$ and $\gamma$ are the exogenous and endogenous rates. We can also utilize more flexible ways to model the intensity, e.g., a Recurrent Neural Network (RNN): $\lambda(t) = g_w(t, h_{t_i})$, where $h_{t_i}$ is the feedback loop capturing the influence of previous events (last updated at the latest event) and is updated by $h_{t_i} = h_v(t_i, h_{t_{i-1}})$. Here $w, v$ are network weights.

2.3 Wasserstein Distance for Temporal Point Processes

Given samples from a point process, one way to estimate the process is to find a model $(\Omega_g, \mathcal{F}_g, \mathbb{P}_g) \to (\Omega, \mathcal{C})$ that is close enough to the real data $(\Omega_r, \mathcal{F}_r, \mathbb{P}_r) \to (\Omega, \mathcal{C})$. As mentioned in the introduction, the Wasserstein distance [22] is our choice of proximity measure. The Wasserstein distance between the distributions of two point processes is:
$$W(\mathbb{P}_r, \mathbb{P}_g) = \inf_{\psi \in \Psi(\mathbb{P}_r, \mathbb{P}_g)} \mathbb{E}_{(\xi, \rho) \sim \psi}\big[\|\xi - \rho\|_\star\big], \quad (1)$$
where $\Psi(\mathbb{P}_r, \mathbb{P}_g)$ denotes the set of all joint distributions $\psi(\xi, \rho)$ whose marginals are $\mathbb{P}_r$ and $\mathbb{P}_g$. The distance between two sequences, $\|\xi - \rho\|_\star$, is tricky and needs further attention. Take $\xi = \{x_1, x_2, \ldots, x_n\}$ and $\rho = \{y_1, \ldots, y_m\}$, where for simplicity we first consider the case $m = n$. The two sequences can be thought of as discrete distributions $\mu_\xi = \sum_{i=1}^{n} \frac{1}{n}\delta_{x_i}$ and $\mu_\rho = \sum_{i=1}^{n} \frac{1}{n}\delta_{y_i}$. Then, the distance between these two is an optimal transport problem $\arg\min_{\pi \in \Sigma} \langle \pi, C \rangle$, where $\Sigma$ is the set of doubly stochastic matrices (rows and columns sum up to one), $\langle \cdot, \cdot \rangle$ is the Frobenius dot product, and $C$ is the cost matrix. $C_{ij}$ captures the energy needed to move a probability mass from $x_i$ to $y_j$. We take $C_{ij} = \|x_i - y_j\|_\circ$, where $\|\cdot\|_\circ$ is the norm in $S$. It can be seen that the optimal solution is attained at an extreme point and, by Birkhoff's theorem, the extreme points of the set of doubly stochastic matrices are permutation matrices [25]. In other words, the mass is transferred from a unique source event to a unique target event. Therefore, we have $\|\xi - \rho\|_\star = \min_\sigma \sum_{i=1}^{n} \|x_i - y_{\sigma(i)}\|_\circ$, where the minimum is taken over all $n!$ permutations of $1 \ldots n$.

[Figure 1: a) The outcome of the random experiment $\omega$ is mapped to a point in the space of counting measures $\Omega$; b) the $\|\cdot\|_\star$ distance between two sequences $\xi = \{t_1, t_2, \ldots\}$ and $\rho = \{\tau_1, \tau_2, \ldots\}$.]

For the case $m \neq n$, without loss of generality we assume $n \leq m$ and define the distance as follows:
$$\|\xi - \rho\|_\star = \min_\sigma \sum_{i=1}^{n} \|x_i - y_{\sigma(i)}\|_\circ + \sum_{i=n+1}^{m} \|s - y_{\sigma(i)}\|, \quad (2)$$
where $s$ is a fixed limiting point on the border of the compact space $S$ and the minimum is over all permutations of $1 \ldots m$. The second term penalizes unmatched points in a very special way, which will be clarified later. Appendix B proves that this is indeed a valid distance measure. Interestingly, in the case of a temporal point process on $[0, T)$, the distance between $\xi = \{t_1, \ldots, t_n\}$ and $\rho = \{\tau_1, \ldots, \tau_m\}$ reduces to
$$\|\xi - \rho\|_\star = \sum_{i=1}^{n} |t_i - \tau_i| + (m - n)\cdot T - \sum_{i=n+1}^{m} \tau_i, \quad (3)$$
where the time points are ordered increasingly, $s = T$ is chosen as the anchor point, and $|\cdot|$ is the Lebesgue measure on the real line. A proof is given in Appendix C.
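As a concrete reading of Eq. (3), the following is a minimal sketch of the sequence distance. It is our own illustration (the function name and interface are hypothetical), not code from the authors' released implementation:

```python
import numpy as np

def seq_distance(xi, rho, T):
    """Distance between two event sequences on [0, T) per Eq. (3).

    For increasingly ordered sequences with len(xi) <= len(rho), Eq. (3)
    matches events in order and charges each unmatched event tau_i of rho
    the cost T - tau_i (anchor point s = T).
    """
    xi, rho = np.sort(np.asarray(xi)), np.sort(np.asarray(rho))
    if len(xi) > len(rho):
        xi, rho = rho, xi
    n, m = len(xi), len(rho)
    matched = np.abs(xi - rho[:n]).sum()       # sum_i |t_i - tau_i|
    unmatched = (m - n) * T - rho[n:].sum()    # (m - n) * T - sum_{i > n} tau_i
    return float(matched + unmatched)
```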
This choice of distance is significant in two senses. First, it is computationally efficient and involves no excessive computation. Second, in terms of point processes, it can be interpreted as the volume by which the two counting measures differ. Figure 1-b illustrates this intuition and justifies our choice of metric on $\Omega$; Appendix D contains the proof. The distance used in the current work is the simplest effective distance that exhibits high interpretability and efficient computability. More robust distances, such as the local alignment distance and dynamic time warping [26], are great venues for future work.

Equation (1) is computationally highly intractable, so its dual form is usually utilized [22]:
$$W(\mathbb{P}_r, \mathbb{P}_g) = \sup_{\|f\|_L \leq 1} \mathbb{E}_{\xi \sim \mathbb{P}_r}[f(\xi)] - \mathbb{E}_{\rho \sim \mathbb{P}_g}[f(\rho)], \quad (4)$$
where the supremum is taken over all Lipschitz functions $f : \Omega \to \mathbb{R}$, i.e., functions that assign a value to a sequence of events (points) and satisfy $|f(\xi) - f(\rho)| \leq \|\xi - \rho\|_\star$ for all $\xi$ and $\rho$. However, solving the dual form is still highly nontrivial: enumerating all Lipschitz functions over point process realizations is impossible. Instead, we choose a parametric family of functions $f_w$ to approximate the search space and consider solving the problem
$$\max_{w \in \mathcal{W},\, \|f_w\|_L \leq 1} \mathbb{E}_{\xi \sim \mathbb{P}_r}[f_w(\xi)] - \mathbb{E}_{\rho \sim \mathbb{P}_g}[f_w(\rho)], \quad (5)$$
where $w \in \mathcal{W}$ is the parameter. The more flexible $f_w$, the more accurate the approximation. It is notable that the W-distance leverages the geometry of the space of event sequences in terms of their distance, which is not the case for an MLE-based approach. It in turn requires functions of event sequences $f(x_1, x_2, \ldots)$, rather than functions of the individual time stamps $f(x_i)$. Furthermore, Stein's method for approximating Poisson processes [27, 28] is also relevant, as it defines distances between a Poisson process and an arbitrary point process.

2.4 WGAN for Temporal Point Processes

Equipped with a way to approximately compute the Wasserstein distance, we look for a model $\mathbb{P}_g$ that is close to the distribution $\mathbb{P}_r$ of real sequences. Again, we choose a sufficiently flexible parametric family of models, $g_\theta$, parameterized by $\theta$. Inspired by GAN [17], this generator takes a noise sample and turns it into a sample that mimics the real samples. In a conventional GAN or WGAN, a Gaussian or uniform distribution is chosen as the noise source. For point processes, a homogeneous Poisson process plays the role of a non-informative, uniform-like distribution: the probability of events in every region is independent of the rest and is proportional to its volume. Define the noise process as $(\Omega_z, \mathcal{F}_z, \mathbb{P}_z) \to (\Omega, \mathcal{C})$; then $\zeta \sim \mathbb{P}_z$ is a sample from a Poisson process on $S = [0, T)$ with constant rate $\lambda_z > 0$. Therefore, $g_\theta : \Omega \to \Omega$ is a transformation in the space of counting measures. Note that $\lambda_z$ is part of the prior knowledge and belief about the problem domain. The objective of learning the generative model can therefore be written as $\min_\theta W(\mathbb{P}_r, \mathbb{P}_g)$, or equivalently:
$$\min_\theta \max_{w \in \mathcal{W},\, \|f_w\|_L \leq 1} \mathbb{E}_{\xi \sim \mathbb{P}_r}[f_w(\xi)] - \mathbb{E}_{\zeta \sim \mathbb{P}_z}[f_w(g_\theta(\zeta))]. \quad (6)$$
In GAN terminology, $f_w$ is called the discriminator and $g_\theta$ is known as the generator. We estimate the generative model by enforcing that the sample sequences from the model have the same distribution as the training sequences.
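A minimal sketch of sampling from the Poisson noise process $\mathbb{P}_z$ (our own illustration, using the standard fact that, conditioned on the count, homogeneous Poisson event times are i.i.d. uniform):

```python
import numpy as np

def sample_noise_sequence(lam_z, T, rng=None):
    """One realization of the homogeneous Poisson noise process zeta ~ P_z
    on [0, T): the event count is Poisson(lam_z * T) and, given the count,
    the event times are i.i.d. uniform on [0, T)."""
    if rng is None:
        rng = np.random.default_rng()
    n = rng.poisson(lam_z * T)
    return np.sort(rng.uniform(0.0, T, size=n))
```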
Given $L$ sample sequences from the real data, $\mathcal{D}_r = \{\xi_1, \ldots, \xi_L\}$, and from the noise, $\mathcal{D}_z = \{\zeta_1, \ldots, \zeta_L\}$, the two expectations are estimated empirically: $\mathbb{E}_{\xi \sim \mathbb{P}_r}[f_w(\xi)] \approx \frac{1}{L}\sum_{l=1}^{L} f_w(\xi_l)$ and $\mathbb{E}_{\zeta \sim \mathbb{P}_z}[f_w(g_\theta(\zeta))] \approx \frac{1}{L}\sum_{l=1}^{L} f_w(g_\theta(\zeta_l))$.

2.5 Ingredients of WGANTPP

To proceed with our point process based WGAN, we need the generator function $g_\theta : \Omega \to \Omega$, the discriminator function $f_w : \Omega \to \mathbb{R}$, and a way to enforce the Lipschitz constraint on $f_w$. Figure 4 in Appendix A illustrates the data flow for WGANTPP. The generator transforms a given sequence into another sequence. Similar to [29, 30], we use Recurrent Neural Networks (RNNs) to model the generator. For clarity, we use a vanilla RNN to illustrate the computational process below; an LSTM is used in our experiments for its capacity to capture long-range dependencies. If the input and output sequences are $\zeta = \{z_1, \ldots, z_n\}$ and $\xi = \{t_1, \ldots, t_n\}$, then the generator $g_\theta(\zeta) = \xi$ works according to
$$h_i = \phi_g^h\big(A_g^h z_i + B_g^h h_{i-1} + b_g^h\big), \qquad t_i = \phi_g^x\big(B_g^x h_i + b_g^x\big). \quad (7)$$
Here $h_i$ is the $k$-dimensional history embedding vector and $\phi_g^h$, $\phi_g^x$ are activation functions. The parameter set of the generator is $\theta = \big\{(A_g^h)_{k \times 1}, (B_g^h)_{k \times k}, (b_g^h)_{k \times 1}, (B_g^x)_{1 \times k}, (b_g^x)_{1 \times 1}\big\}$.

Similarly, we define the discriminator function, which assigns a scalar value $f_w(\xi) = \sum_{i=1}^{n} a_i$ to the sequence $\xi = \{t_1, \ldots, t_n\}$ according to
$$h_i = \phi_d^h\big(A_d^h t_i + B_d^h h_{i-1} + b_d^h\big), \qquad a_i = \phi_d^a\big(B_d^a h_i + b_d^a\big), \quad (8)$$
where the parameter set comprises $w = \big\{(A_d^h)_{k \times 1}, (B_d^h)_{k \times k}, (b_d^h)_{k \times 1}, (B_d^a)_{1 \times k}, (b_d^a)_{1 \times 1}\big\}$. Note that both the generator and discriminator RNNs are causal networks: each event is only influenced by the previous events.

To enforce the Lipschitz constraint, the original WGAN paper [15] adopts weight clipping. However, our initial experiments showed inferior performance with weight clipping, which was also reported by the same authors in their follow-up paper [24]. The poor performance of weight clipping for enforcing 1-Lipschitzness can be seen theoretically as well: consider a simple neural network with one input, one neuron, and one output, $f(x) = \sigma(wx + b)$, with weight clipping $|w| < c$. Then
$$|f'(x)| \leq 1 \iff |w\,\sigma'(wx + b)| \leq 1 \iff |w| \leq 1/|\sigma'(wx + b)|. \quad (9)$$
It is clear that when $1/|\sigma'(wx + b)| < c$, which is quite likely to happen, the Lipschitz constraint is not necessarily satisfied. In our work, we use a novel approach for enforcing the Lipschitz constraint that avoids computing the gradient, which can be costly and difficult for point processes. We add the Lipschitz constraint as a regularization term to the empirical loss of the RNN:
$$\min_\theta \max_{w \in \mathcal{W}} \frac{1}{L}\sum_{l=1}^{L} f_w(\xi_l) - \frac{1}{L}\sum_{l=1}^{L} f_w(g_\theta(\zeta_l)) - \nu \sum_{l,m=1}^{L} \left| \frac{|f_w(\xi_l) - f_w(g_\theta(\zeta_m))|}{\|\xi_l - g_\theta(\zeta_m)\|_\star} - 1 \right|. \quad (10)$$
We could take every pair of real and generated sequences and regularize based on them; however, we have found that only a small portion of pairs ($O(L)$), randomly selected, is sufficient. The procedure of WGANTPP learning is given in Algorithm 1.

Remark. The significance of the Lipschitz constraint and regularization (or, more generally, any capacity control) is more apparent when we consider the connection between the W-distance and the optimal transport problem [25]. Basically, minimizing the W-distance between the empirical distribution and the model distribution is equivalent to a semi-discrete optimal transport [25]. Without capacity control on the generator and discriminator, the optimal solution simply maps a partition of the sample space to the set of data points, in effect memorizing the data points.
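Before turning to Algorithm 1, here is a minimal NumPy sketch of Eqs. (7), (8), and the regularized critic objective (10). It is our own illustration under assumptions: tanh hidden activations, an identity output activation for the critic, the elu+1 inter-arrival parameterization of Sec. 3.2 for the generator, and a paired $O(L)$ subsample for the penalty. `seq_dist` is any callable implementing Eq. (3), e.g., a closure over the earlier sketch with $T$ fixed:

```python
import numpy as np

def elu_plus_one(x):
    # elu(x) + 1: keeps generated inter-arrival times positive (Sec. 3.2).
    return np.where(x > 0.0, x + 1.0, np.exp(x))

def generator_forward(zs, th):
    """Eq. (7): map noise inter-arrival times zs to generated event times.
    th = (A, B, b, Bx, bx) with shapes (k,), (k, k), (k,), (k,), ()."""
    A, B, b, Bx, bx = th
    h, gaps = np.zeros(B.shape[0]), []
    for z in zs:
        h = np.tanh(A * z + B @ h + b)                 # h_i = phi(A z_i + B h_{i-1} + b)
        gaps.append(float(elu_plus_one(Bx @ h + bx)))  # positive inter-arrival time
    return np.cumsum(gaps)                             # accumulate gaps into times t_i

def critic_value(ts, w):
    """Eq. (8): f_w(xi) = sum_i a_i over the event times of one sequence."""
    A, B, b, Ba, ba = w
    h, total = np.zeros(B.shape[0]), 0.0
    for t in ts:
        h = np.tanh(A * t + B @ h + b)
        total += float(Ba @ h + ba)                    # a_i, identity output activation
    return total

def critic_objective(w, th, real_seqs, noise_seqs, seq_dist, nu):
    """Eq. (10) with an O(L) paired subsample for the Lipschitz penalty."""
    fake_seqs = [generator_forward(z, th) for z in noise_seqs]
    L = len(real_seqs)
    gap = (sum(critic_value(x, w) for x in real_seqs)
           - sum(critic_value(x, w) for x in fake_seqs)) / L
    pen = sum(abs(abs(critic_value(x, w) - critic_value(y, w))
                  / max(seq_dist(x, y), 1e-8) - 1.0)
              for x, y in zip(real_seqs, fake_seqs))
    return gap - nu * pen                              # the critic maximizes this
```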
Algorithm 1: WGANTPP for temporal point processes. Default values: $\alpha = 10^{-4}$, $\beta_1 = 0.5$, $\beta_2 = 0.9$, $m = 256$, $n_{\mathrm{critic}} = 5$.
Require: $\nu$, the regularization coefficient for the direct Lipschitz constraint; $m$, the batch size; $n_{\mathrm{critic}}$, the number of critic iterations per generator iteration; Adam hyper-parameters $\alpha$, $\beta_1$, $\beta_2$.
Require: $w_0$, initial critic parameters; $\theta_0$, initial generator parameters.
1: Set the prior rate $\lambda_z$ to the expected event rate of the real data.
2: while $\theta$ has not converged do
3:   for $t = 0, \ldots, n_{\mathrm{critic}}$ do
4:     Sample point process realizations $\{\xi^{(i)}\}_{i=1}^{m} \sim \mathbb{P}_r$ from the real data.
5:     Sample $\{\zeta^{(i)}\}_{i=1}^{m} \sim \mathbb{P}_z$ from a Poisson process with rate $\lambda_z$.
6:     $L \leftarrow \frac{1}{m}\sum_{i=1}^{m} f_w(g_\theta(\zeta^{(i)})) - \frac{1}{m}\sum_{i=1}^{m} f_w(\xi^{(i)}) + \nu \sum_{i,j=1}^{m} \left| \frac{|f_w(\xi^{(i)}) - f_w(g_\theta(\zeta^{(j)}))|}{\|\xi^{(i)} - g_\theta(\zeta^{(j)})\|_\star} - 1 \right|$
7:     $w \leftarrow \mathrm{Adam}(\nabla_w L, w, \alpha, \beta_1, \beta_2)$
8:   end for
9:   Sample $\{\zeta^{(i)}\}_{i=1}^{m} \sim \mathbb{P}_z$ from a Poisson process with rate $\lambda_z$.
10:  $\theta \leftarrow \mathrm{Adam}\big({-\nabla_\theta} \frac{1}{m}\sum_{i=1}^{m} f_w(g_\theta(\zeta^{(i)})), \theta, \alpha, \beta_1, \beta_2\big)$
11: end while

[Figure 2: Performance of different methods on various synthetic data. Top row: QQ plot slope deviation; middle row: intensity deviation for basic conventional models; bottom row: intensity deviation for mixtures of conventional processes.]

3 Experiments

The current work explores the feasibility of modeling point processes without prior knowledge of the underlying generating mechanism. To this end, the most widely-used parametrized point processes, i.e., the self-exciting, self-correcting, and inhomogeneous Poisson processes, and one flexible neural network model, the neural point process, are compared. We use the most general forms for simple and clear exposition, and propose the very first model for adversarial training of point processes, in contrast to likelihood-based models.

3.1 Datasets and Protocol

Synthetic datasets. We simulate 20,000 sequences over the time window $[0, T)$, where $T = 15$, for the inhomogeneous Poisson process (IP), the self-exciting process (SE), the self-correcting process (SC), and the recurrent neural point process (NN). We also create another $4$ ($= \binom{4}{3}$) datasets from the above four synthetic datasets by a uniform mixture over the triplets.
Table 1: Deviation of QQ plot slope and of empirical intensity between ground truth and learned models; entries are mean (std) over 10 runs, columns are estimators.

Metric    | Data     | MLE-IP         | MLE-SE          | MLE-SC         | MLE-NN         | WGAN
QQP. Dev. | IP       | 0.035 (8.0e-4) | 0.284 (7.0e-5)  | 0.159 (3.8e-5) | 0.216 (3.3e-2) | 0.033 (3.3e-3)
QQP. Dev. | SE       | 0.055 (6.5e-5) | 0.001 (1.3e-6)  | 0.086 (1.1e-6) | 0.104 (6.7e-3) | 0.051 (1.8e-3)
QQP. Dev. | SC       | 3.510 (4.9e-5) | 2.778 (7.4e-5)  | 0.002 (8.8e-6) | 4.523 (2.6e-3) | 0.070 (6.4e-3)
QQP. Dev. | NN       | 0.182 (1.6e-5) | 0.687 (5.0e-6)  | 1.004 (2.5e-6) | 0.065 (1.2e-2) | 0.012 (4.7e-3)
Int. Dev. | IP       | 0.110 (1.9e-4) | 0.241 (1.0e-4)  | 0.289 (2.8e-5) | 0.511 (1.8e-1) | 0.136 (8.7e-3)
Int. Dev. | SE       | 1.950 (4.8e-4) | 0.019 (1.84e-5) | 1.112 (3.1e-6) | 0.414 (1.6e-1) | 0.860 (6.2e-2)
Int. Dev. | SC       | 2.208 (7.0e-5) | 0.653 (1.2e-4)  | 0.006 (9.9e-5) | 1.384 (1.7e-1) | 0.302 (2.2e-3)
Int. Dev. | NN       | 1.044 (2.4e-4) | 0.889 (1.2e-5)  | 1.101 (1.3e-4) | 0.341 (3.4e-1) | 0.144 (4.28e-2)
Int. Dev. | IP+SE+SC | 1.505 (3.3e-4) | 0.410 (1.8e-5)  | 0.823 (3.1e-6) | 0.929 (1.6e-1) | 0.305 (6.1e-2)
Int. Dev. | IP+SC+NN | 1.178 (7.0e-5) | 0.588 (1.3e-4)  | 0.795 (9.9e-5) | 0.713 (1.7e-1) | 0.525 (2.2e-3)
Int. Dev. | IP+SE+NN | 1.052 (2.4e-4) | 0.453 (1.2e-4)  | 0.583 (1.0e-4) | 0.678 (3.4e-1) | 0.419 (4.2e-2)
Int. Dev. | SE+SC+NN | 1.825 (2.8e-4) | 0.324 (1.1e-4)  | 1.269 (1.1e-4) | 0.286 (3.6e-1) | 0.200 (3.8e-2)

The new datasets IP+SE+SC, IP+SE+NN, IP+SC+NN, and SE+SC+NN are created to probe the mode dropping problem in learning a generative model. The parameter settings are as follows:
i) Inhomogeneous process. The intensity function is independent of the history and given in Sec. 2.2, with $k = 3$, $\alpha = [3, 7, 11]$, $c = [1, 1, 1]$, $\sigma = [2, 3, 2]$.
ii) Self-exciting process. Past events increase the rate of future events. The conditional intensity function is given in Sec. 2.2, with $\mu = 1.0$, $\beta = 0.8$, and the decaying kernel $g(t - t_i) = e^{-(t - t_i)}$.
iii) Self-correcting process. The conditional intensity function is defined in Sec. 2.2. It increases with time and decreases with each event occurrence. We set $\eta = 1.0$, $\gamma = 0.2$.
iv) Recurrent neural network process. The conditional intensity is given in Sec. 2.2, where the neural network's parameters are set randomly and then fixed. We first feed in a random variable drawn uniformly from $[0, 1]$, then iteratively sample events from the intensity and feed the output back into the RNN to get the new intensity for the next step.

Real datasets. We collect sequences from four public datasets: health care (MIMIC-III), public media (MemeTracker), stock exchanges (NYSE), and publication citations (MAS). The time scales of all real data are normalized to $[0, 15]$. The details are as follows:
i) MIMIC. MIMIC-III (Medical Information Mart for Intensive Care III) is a large, publicly available dataset containing de-identified health-related data from 2001 to 2012 for more than 40,000 patients. We work with patients who appear at least 3 times, which yields 2,246 patients. Their visit timestamps are collected as the sequences.
ii) Meme. MemeTracker tracks meme diffusion over public media and contains more than 172 million news articles and blog posts. The memes are sentences, such as ideas and proverbs, and the time is recorded when a meme spreads to a certain website. We randomly sample 22,000 cascades.
iii) MAS. Microsoft Academic Search provides access to its data, including publication venues, times, and citations. We collect the citation records of 50,000 papers.
iv) NYSE. We use 0.7 million high-frequency transaction records from the NYSE for one stock over one day. The transactions are evenly divided into 3,200 sequences of equal duration.
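As a concrete version of the self-exciting protocol ii) above, here is a minimal sketch of Ogata-style thinning [14]; the function name, structure, and defaults are our own illustration:

```python
import numpy as np

def simulate_hawkes(mu=1.0, beta=0.8, T=15.0, rng=None):
    """Ogata-style thinning [14] for protocol ii):
    lambda(t) = mu + beta * sum_{t_i < t} exp(-(t - t_i)).
    Valid because this intensity is non-increasing between events, so its
    value just after the current time upper-bounds it until the next event."""
    if rng is None:
        rng = np.random.default_rng()
    events, t = [], 0.0
    while True:
        hist = np.asarray(events)
        lam_bar = mu + beta * np.exp(-(t - hist)).sum()  # bound at current time
        t += rng.exponential(1.0 / lam_bar)              # candidate next event
        if t >= T:
            return np.asarray(events)
        lam_t = mu + beta * np.exp(-(t - hist)).sum()    # true intensity at t
        if rng.uniform() * lam_bar <= lam_t:             # accept w.p. lam_t / lam_bar
            events.append(t)
```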
3.2 Experimental Setup

Details. We feed the temporal sequences to the generator and discriminator directly. In practice, all temporal sequences are transformed into durations between consecutive events, i.e., the sequence $\xi = \{t_1, \ldots, t_n\}$ is transformed into $\{d_1, \ldots, d_{n-1}\}$, where $d_i = t_{i+1} - t_i$. This makes the model train easily and perform robustly. The transformed sequences are statistically identical to the original sequences and can be used as the inputs to our neural network. To make sure that the times are increasing, we use the elu + 1 activation function to produce positive inter-arrival times for the generator and accumulate the intervals to obtain the sequence. The Adam optimization method with learning rate $10^{-4}$, $\beta_1 = 0.5$, $\beta_2 = 0.9$ is applied. The code is available at https://github.com/xiaoshuai09/Wasserstein-Learning-For-Point-Process.

Baselines. We compare the proposed method of learning point processes (i.e., minimizing the sample distance) with maximum likelihood based methods. To use MLE inference for a point process, we have to specify its parametric model. The parametric models used are the inhomogeneous Poisson process (mixture of Gaussians), the self-exciting process, the self-correcting process, and an RNN. For each dataset, we use all of the above solvers to learn a model and generate new sequences, and then compare the generated sequences with the real ones.

Evaluation metrics. Although our model is an intensity-free approach, we evaluate the performance by metrics that are computed via the intensity; for all models, we work with the empirical intensity. Note that our objective measures are in sharp contrast with the usual practice in GANs, where performance is typically evaluated subjectively, e.g., by visual quality assessment. We evaluate how well the different methods learn the underlying processes via two measures:

1) The first is the well-known QQ plot of sequences generated from the learned model. The quantile-quantile (q-q) plot is the graphical representation of the quantiles of one data set against the quantiles of another. By the time-change property of point processes [10], if a sequence comes from the point process $\lambda(t)$, then the integrals between consecutive events, $\Lambda_i = \int_{t_i}^{t_{i+1}} \lambda(s)\,ds$, should be exponentially distributed with parameter 1. Therefore, the QQ plot of $\Lambda$ against the exponential distribution with rate 1 should fall approximately along a 45-degree reference line. The evaluation procedure is as follows: i) the ground-truth data is generated from a model, say IP; ii) all 5 methods are used to learn the unknown process from the ground-truth data; iii) the learned model is used to generate a sequence; iv) the sequence is checked against the theoretical quantiles of the model to see whether it could have come from the ground-truth generator; v) the deviation of the QQ plot from slope 1 is visualized or reported as a performance measure.

2) The second metric is the deviation between the empirical intensity of the learned model and the ground-truth intensity. We can estimate the empirical intensity $\hat{\lambda}(t) = \mathbb{E}[N(t + \delta t) - N(t)]/\delta t$ from a sufficient number of realizations of the point process by counting the average number of events in $[t, t + \delta t]$, where $N(t)$ is the counting process for $\lambda(t)$. The $L_1$ distance between the ground-truth empirical intensity and the learned empirical intensity is reported as a performance measure.
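The time-change diagnostic in 1) can be sketched as follows (our own illustration; it assumes a vectorized, history-independent intensity such as the IP baseline, since history-dependent models need their compensator in closed form instead):

```python
import numpy as np
from scipy import stats

def time_change_residuals(events, intensity, T, grid=10_000):
    """Inter-event compensator integrals Lambda_i = int_{t_i}^{t_{i+1}} lambda(s) ds,
    computed by trapezoidal quadrature on a grid. Under the true model these are
    i.i.d. Exp(1) by the time-change property [10]."""
    ts = np.concatenate(([0.0], np.sort(np.asarray(events))))
    s = np.linspace(0.0, T, grid)
    lam = intensity(s)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (lam[1:] + lam[:-1]) * np.diff(s))))
    return np.diff(np.interp(ts, s, cum))  # compensator increments between events

# QQ slope against Exp(1); a slope near 1 indicates a good fit:
# res = time_change_residuals(seq, lambda s: np.full_like(s, 2.0), T=15.0)
# (osm, osr), (slope, intercept, r) = stats.probplot(res, dist=stats.expon)
```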
3.3 Results and Discussion

Synthetic data. Figure 2 presents the learning ability of WGANTPP when the ground-truth data is generated by different types of point processes. We first compare the QQ plots in the top row, taking the micro perspective: the QQ plot describes the dependency between events. Red dots labeled Real show the optimal QQ distribution, obtained when the intensity function that generated the sequences is known. We observe that even though WGANTPP has no prior information about the ground-truth point process, it estimates the model better than every estimator except the one that knows the true parametric form of the data. This is expected: when we know the parametric form of the generating model, fitting it directly works best. However, whenever the model is misspecified (i.e., we do not know the parametric form a priori), WGANTPP outperforms the other parametric forms and the RNN approach.

The middle row of Figure 2 compares the empirical intensities; the Real line is the optimal empirical intensity estimated from the real data. An estimator recovers the empirical intensity well when it knows the parametric form the data comes from; otherwise, the estimated intensity degrades considerably under misspecification. WGANTPP reproduces the empirical intensity better and performs robustly across the different types of point process data. For MLE-IP, different numbers of kernels were tested; the empirical intensity improves but the QQ plot degrades as the number of kernels increases, so only the result with 3 kernels is shown, mainly for clarity of presentation. The fact that the empirical intensities estimated by the MLE-IP method are good while its QQ plots are very bad indicates that the inhomogeneous Poisson process can capture the average intensity (macro dynamics) accurately but is incapable of capturing the dependency between events (micro dynamics).

To test whether WGANTPP can cope with mode dropping, we generate mixtures of data from three different point processes and use this data to train the different models. Models of a specified form can handle limited types of data and fail to learn from diverse data sources. The bottom row of Figure 2 shows the intensities learned from the mixture data. WGANTPP produces better empirical intensities than the alternatives, which fail to capture the heterogeneity in the data.

To verify the robustness of WGANTPP, we randomly initialize the generator parameters and run 10 rounds to obtain the means and standard deviations of the deviations of both the empirical intensity and the QQ plot from the ground truth. For the empirical intensity, we compute the integral of the difference between the learned intensity and the ground-truth intensity; for the QQ plot, we obtain each estimator's slope from the regression line of its QQ plot. Table 1 reports the means and standard deviations of both deviations. Compared to the MLE estimators, WGANTPP consistently performs better, even without prior knowledge of the parametric form of the true underlying generative point process. Note that for the mixture models the QQ plot is not feasible.

Real-world data. We evaluate WGANTPP on diverse real-world processes from health care, public media, scientific activity, and stock exchange data.

[Figure 3: Performance of different methods on various real-world datasets; empirical intensities for MIMIC, Meme, MAS, and NYSE.]
Table 2: Deviation of empirical intensity for real-world data; columns are estimators.

Data  | MLE-IP | MLE-SE | MLE-SC | MLE-NN | WGAN
MIMIC | 0.150  | 0.160  | 0.339  | 0.686  | 0.122
Meme  | 0.839  | 1.008  | 0.701  | 0.920  | 0.351
MAS   | 1.089  | 1.693  | 1.592  | 2.712  | 0.849
NYSE  | 0.799  | 0.426  | 0.361  | 0.347  | 0.303

For these real-world datasets, the underlying generative process is unknown; previous works usually assume they are certain types of point processes based on domain knowledge. Figure 3 shows the intensities learned by the different models, where Real is estimated from the real-world data itself, and Table 2 reports the intensity deviations. When no model has prior knowledge about the true generative process, WGANTPP recovers the intensity better than all the other models across the data sets.

Analysis. We have observed that when the generating model is misspecified, WGANTPP outperforms the other methods without any a priori knowledge of the parametric form. When the exact parametric form of the data is known and is used to learn the parameters, MLE with this full knowledge performs better; however, this is generally a strong assumption. As we observed in the real-world experiments, WGANTPP is superior in terms of performance. Somewhat surprising is the observation that WGANTPP tends to outperform the MLE-NN approach, which uses the same RNN architecture but is trained using MLE. The superior performance of our approach compared to MLE-NN is another witness of the benefits of using the W-distance to find a generator that fits the observed sequences well. Even though the expressive power of the estimators is the same for WGANTPP and MLE-NN, MLE-NN may suffer from mode dropping or get stuck in an inferior local minimum, since maximizing likelihood is asymptotically equivalent to minimizing the Kullback-Leibler (KL) divergence between the data and model distributions. The inherent weakness of the KL divergence [22] makes MLE-NN unstable, and the large variances of its deviations empirically demonstrate this point.

4 Conclusion and Future Work

We have presented a novel approach for Wasserstein learning of deep generative point processes which requires no prior knowledge about the underlying true process and can estimate it accurately across a wide scope of theoretical and real-world processes. For future work, we would like to explore the connection of the WGAN with the optimal transport problem. We will also explore other possible distance metrics over the realizations of point processes, as well as more sophisticated transforms of point processes, particularly those that are causal. Extending the current work to marked point processes and processes over structured spaces is another interesting venue for future work.

Acknowledgements. This project was supported in part by NKRDP 2016YFB1001003, NSF (IIS-1639792, IIS-1218749, IIS-1717916, CMMI-1745382), NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, ONR N00014-15-1-2340, and NSFC 61602176.

References
[1] D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes. 2003.
[2] Scott W. Linderman and Ryan P. Adams. Discovering latent network structure in point process data. In ICML, pages 1413–1421, 2014.
[3] Mehrdad Farajtabar, Nan Du, Manuel Gomez-Rodriguez, Isabel Valera, Hongyuan Zha, and Le Song. Shaping social activity by incentivizing users. In NIPS, 2014.
[4] Mehrdad Farajtabar, Xiaojing Ye, Sahar Harati, Hongyuan Zha, and Le Song. Multistage campaigning in social networks. In NIPS, 2016.
[5] Wenzhao Lian, Ricardo Henao, Vinayak Rao, Joseph E. Lucas, and Lawrence Carin. A multitask point process predictive model. In ICML, pages 2030–2038, 2015.
[6] Lizhen Xu, Jason A. Duan, and Andrew Whinston. Path to purchase: A mutually exciting point process model for online advertising and conversion. Management Science, 60(6):1392–1412, 2014.
[7] Emmanuel Bacry, Iacopo Mastromatteo, and Jean-François Muzy. Hawkes processes in finance. Market Microstructure and Liquidity, 1(01):1550005, 2015.
[8] Odd Aalen, Ornulf Borgan, and Hakon Gjessing. Survival and event history analysis: a process point of view. Springer Science & Business Media, 2008.
[9] Mehrdad Farajtabar, Yichen Wang, Manuel Gomez-Rodriguez, Shuang Li, Hongyuan Zha, and Le Song. Coevolve: A joint point process model for information diffusion and network co-evolution. In NIPS, 2015.
[10] John Frank Charles Kingman. Poisson processes. Wiley Online Library, 1993.
[11] Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 1971.
[12] Valerie Isham and Mark Westcott. A self-correcting point process. Stochastic Processes and Their Applications, 8(3):335–347, 1979.
[13] Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In KDD, 2016.
[14] Yosihiko Ogata. On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1):23–31, 1981.
[15] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In NIPS 2016 Workshop on Adversarial Training, 2017.
[16] Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
[17] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[18] Ferenc Huszár. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? arXiv preprint arXiv:1511.05101, 2015.
[19] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
[20] Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.
[21] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[22] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[23] Ding Zhou, Jia Li, and Hongyuan Zha. A new Mallows distance based metric for comparing clusterings. In ICML, pages 1028–1035, 2005.
[24] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.
[25] Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
[26] Marco Cuturi and Mathieu Blondel. Soft-DTW: a differentiable loss function for time-series. In ICML, pages 894–903, 2017.
[27] Dominic Schuhmacher and Aihua Xia. A new metric between distributions of point processes. Advances in Applied Probability, 40(3):651–672, 2008.
[28] Laurent Decreusefond, Matthias Schulte, Christoph Thäle, et al. Functional Poisson approximation in Kantorovich–Rubinstein distance with applications to U-statistics and stochastic geometry. The Annals of Probability, 44(3):2147–2197, 2016.
[29] Olof Mogren. C-RNN-GAN: Continuous recurrent neural networks with adversarial training. arXiv preprint arXiv:1611.09904, 2016.
[30] Arnab Ghosh, Viveka Kulharia, Amitabha Mukerjee, Vinay Namboodiri, and Mohit Bansal. Contextual RNN-GANs for abstract reasoning diagram generation. arXiv preprint arXiv:1609.09444, 2016.
Ensemble Sampling

Xiuyuan Lu, Stanford University, [email protected]
Benjamin Van Roy, Stanford University, [email protected]

Abstract

Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands the range of applications for which Thompson sampling is viable. We establish a theoretical basis that supports the approach and present computational results that offer further insight.

1 Introduction

Thompson sampling [8] has emerged as an effective heuristic for trading off between exploration and exploitation in a broad range of online decision problems. To select an action, the algorithm samples a model of the system from the prevailing posterior distribution and then determines which action maximizes expected immediate reward according to the sampled model. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases.

With complex models such as neural networks, exact computation of posterior distributions becomes intractable. One can resort to the Laplace approximation, as discussed, for example, in [2, 5], but this approach is suitable only when posterior distributions are unimodal, and computation becomes an obstacle with complex models like neural networks because compute time requirements grow quadratically with the number of parameters. An alternative is to leverage Markov chain Monte Carlo methods, but those are computationally onerous, especially when the model is complex.

A practical approximation to Thompson sampling that can address complex models and problems requiring frequent decisions should facilitate fast incremental updating. That is, the time required per period to learn from new data and generate a new sample model should be small and should not grow with time. Such a fast incremental method that builds on the Laplace approximation concept is presented in [5]. In this paper, we study a fast incremental method that applies more broadly, without relying on unimodality. As a sanity check we offer theoretical assurances that apply to the special case of linear bandits. We also present computational results involving simple bandit problems as well as complex neural network models that demonstrate the efficacy of the approach.

Our approach is inspired by [6], which applies a similar concept in the more complex context of deep reinforcement learning, but without any theoretical analysis. The essential idea is to maintain and incrementally update an ensemble of statistically plausible models, and to sample uniformly from this set in each time period as an approximation to sampling from the posterior distribution. Each model is initially sampled from the prior, and then updated in a manner that incorporates data and random perturbations that diversify the models. The intention is for the ensemble to approximate the posterior distribution and for the variance among models to diminish as the posterior concentrates.
We refine this methodology and bound the incremental regret relative to exact Thompson sampling for a broad class of online decision problems. Our bound indicates that it suffices to maintain a number of models that grows only logarithmically with the horizon of the decision problem, ensuring computational tractability of the approach.

2 Problem formulation

We consider a broad class of online decision problems to which Thompson sampling could, in principle, be applied, though that would typically be hindered by intractable computational requirements. We define random variables with respect to a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ endowed with a filtration $(\mathcal{F}_t : t = 0, \ldots, T)$. As a convention, random variables we index by $t$ are $\mathcal{F}_t$-measurable, and we use $\mathbb{P}_t$ and $\mathbb{E}_t$ to denote probabilities and expectations conditioned on $\mathcal{F}_t$. The decision-maker chooses actions $A_0, \ldots, A_{T-1} \in \mathcal{A}$ and observes outcomes $Y_1, \ldots, Y_T \in \mathcal{Y}$. There is a random variable $\theta$, which represents a model index. Conditioned on $(\theta, A_{t-1})$, $Y_t$ is independent of $\mathcal{F}_{t-1}$. Further, $\mathbb{P}(Y_t = y \mid \theta, A_{t-1})$ does not depend on $t$. This can be thought of as a Bayesian formulation, where randomness in $\theta$ reflects prior uncertainty about which model corresponds to the true nature of the system.

We assume that $\mathcal{A}$ is finite and that each action $A_t$ is chosen by a randomized policy $\pi = (\pi_0, \ldots, \pi_{T-1})$. Each $\pi_t$ is $\mathcal{F}_t$-measurable, and each realization is a probability mass function over the actions $\mathcal{A}$; $A_t$ is sampled independently from $\pi_t$. The agent associates a reward $R(y)$ with each outcome $y \in \mathcal{Y}$, where the reward function $R$ is fixed and known. Let $R_t = R(Y_t)$ denote the reward realized at time $t$, and let $R_\theta(a) = \mathbb{E}[R(Y_t) \mid \theta, A_{t-1} = a]$. Uncertainty about $\theta$ induces uncertainty about the true optimal action, which we denote by $A^* \in \arg\max_{a \in \mathcal{A}} R_\theta(a)$. Let $R^* = R_\theta(A^*)$. The $T$-period conditional regret when the actions $(A_0, \ldots, A_{T-1})$ are chosen according to $\pi$ is defined by
$$\mathrm{Regret}(T, \pi, \theta) = \mathbb{E}\left[\sum_{t=1}^{T} (R^* - R_t) \,\middle|\, \theta\right], \quad (1)$$
where the expectation is taken over the randomness in the actions $A_t$ and outcomes $Y_t$, conditioned on $\theta$.

We illustrate with a couple of examples that fit our formulation.

Example 1. (linear bandit) Let $\theta$ be drawn from $\mathbb{R}^N$ and distributed according to a $N(\mu_0, \Sigma_0)$ prior. There is a set of $K$ actions $\mathcal{A} \subseteq \mathbb{R}^N$. At each time $t = 0, 1, \ldots, T-1$, an action $A_t \in \mathcal{A}$ is selected, after which a reward $R_{t+1} = Y_{t+1} = \theta^\top A_t + W_{t+1}$ is observed, where $W_{t+1} \sim N(0, \sigma_w^2)$.

Example 2. (neural network) Let $g_\theta : \mathbb{R}^N \mapsto \mathbb{R}^K$ denote the mapping induced by a neural network with weights $\theta$. Suppose there are $K$ actions $\mathcal{A} \subseteq \mathbb{R}^N$, which serve as inputs to the neural network, and the goal is to select inputs that yield desirable outputs. At each time $t = 0, 1, \ldots, T-1$, an action $A_t \in \mathcal{A}$ is selected, after which $Y_{t+1} = g_\theta(A_t) + W_{t+1}$ is observed, where $W_{t+1} \sim N(0, \sigma_w^2 I)$. A reward $R_{t+1} = R(Y_{t+1})$ is associated with each observation. Let $\theta$ be distributed according to a $N(\mu_0, \Sigma_0)$ prior. The idea here is that data pairs $(A_t, Y_{t+1})$ can be used to fit a neural network model, while actions are selected to trade off between generating data pairs that reduce uncertainty in the neural network weights and those that offer desirable immediate outcomes.

3 Algorithms

Thompson sampling offers a heuristic policy for selecting actions. In each time period, the algorithm samples an action from the posterior distribution $p_t(a) = \mathbb{P}_t(A^* = a)$ of the optimal action.
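For the linear bandit of Example 1, exact Thompson sampling is tractable via conjugate Gaussian updates. The following is a minimal sketch; the function name and the `env` reward callback are our own illustration of the interface, not notation from the paper:

```python
import numpy as np

def thompson_sampling_linear(actions, mu0, Sigma0, sigma_w, T, env, rng=None):
    """Exact Thompson sampling for Example 1. `actions` is a (K, N) array and
    env(a) returns the noisy reward theta^T a + w for the chosen action a."""
    if rng is None:
        rng = np.random.default_rng()
    Sigma_inv = np.linalg.inv(Sigma0)
    b = Sigma_inv @ mu0                        # precision-weighted mean
    for _ in range(T):
        Sigma = np.linalg.inv(Sigma_inv)
        theta_hat = rng.multivariate_normal(Sigma @ b, Sigma)  # posterior sample
        a = actions[np.argmax(actions @ theta_hat)]            # greedy on sample
        r = env(a)
        Sigma_inv += np.outer(a, a) / sigma_w**2               # conjugate update
        b += a * r / sigma_w**2
    return np.linalg.inv(Sigma_inv) @ b        # posterior mean after T steps
```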
In other words, Thompson sampling uses the policy $\pi_t = p_t$. It is easy to see that this is equivalent to sampling a model index $\hat{\theta}_t$ from the posterior distribution over models and then selecting the action $A_t = \arg\max_{a \in \mathcal{A}} R_{\hat{\theta}_t}(a)$ that optimizes the sampled model.

Thompson sampling is computationally tractable for some problem classes, like the linear bandit problem, where the posterior distribution is Gaussian with parameters $(\mu_t, \Sigma_t)$ that can be updated incrementally and efficiently via Kalman filtering as outcomes are observed. However, when dealing with complex models, like neural networks, computing the posterior distribution becomes intractable. Ensemble sampling serves as an approximation to Thompson sampling in such contexts.

Algorithm 1 EnsembleSampling
1: Sample: $\hat{\theta}_{0,1}, \ldots, \hat{\theta}_{0,M} \sim p_0$
2: for $t = 0, \ldots, T-1$ do
3:   Sample: $m \sim \mathrm{unif}(\{1, \ldots, M\})$
4:   Act: $A_t = \arg\max_{a \in \mathcal{A}} R_{\hat{\theta}_{t,m}}(a)$
5:   Observe: $Y_{t+1}$
6:   Update: $\hat{\theta}_{t+1,1}, \ldots, \hat{\theta}_{t+1,M}$
7: end for

The posterior can be interpreted as a distribution over "statistically plausible" models, by which we mean models that are sufficiently consistent with prior beliefs and the history of observations. With this interpretation in mind, Thompson sampling can be thought of as randomly drawing from the range of statistically plausible models. Ensemble sampling aims to maintain, incrementally update, and sample from a finite set of such models. In the spirit of particle filtering, this set of models approximates the posterior distribution. The workings of ensemble sampling are in some ways more intricate than conventional uses of particle filtering, however, because interactions between the ensemble of models and the selected actions can skew the distribution.

While elements of ensemble sampling require customization, a general template is presented as Algorithm 1. The algorithm begins by sampling $M$ models from the prior distribution. Then, in each time period, a model is sampled uniformly from the ensemble, an action is selected to maximize expected reward under the sampled model, the resulting outcome is observed, and each of the $M$ models is updated. To produce an explicit algorithm, we must specify a model class, a prior distribution, and algorithms for sampling from the prior and updating the models.

For a concrete illustration, let us consider the linear bandit (Example 1). Though ensemble sampling is unwarranted in this case, since Thompson sampling is efficient, the linear bandit serves as a useful context for understanding the approach. Standard algorithms can be used to sample models from the $N(\mu_0, \Sigma_0)$ prior. One possible procedure for updating the models maintains a covariance matrix, updated according to
$$\Sigma_{t+1} = \left(\Sigma_t^{-1} + A_t A_t^\top / \sigma_w^2\right)^{-1},$$
and generates model parameters incrementally according to
$$\hat{\theta}_{t+1,m} = \Sigma_{t+1}\left(\Sigma_t^{-1} \hat{\theta}_{t,m} + A_t \big(R_{t+1} + \tilde{W}_{t+1,m}\big)/\sigma_w^2\right),$$
for $m = 1, \ldots, M$, where $(\tilde{W}_{t,m} : t = 1, \ldots, T,\ m = 1, \ldots, M)$ are independent $N(0, \sigma_w^2)$ random samples drawn by the updating algorithm. It is easy to show that the resulting parameter vectors satisfy
$$\hat{\theta}_{t,m} = \arg\min_{\nu} \left( \frac{1}{\sigma_w^2} \sum_{\tau=0}^{t-1} \big(R_{\tau+1} + \tilde{W}_{\tau+1,m} - A_\tau^\top \nu\big)^2 + (\nu - \hat{\theta}_{0,m})^\top \Sigma_0^{-1} (\nu - \hat{\theta}_{0,m}) \right),$$
which admits an intuitive interpretation: each $\hat{\theta}_{t,m}$ is a model fit to a randomly perturbed prior and randomly perturbed observations. As we establish in the appendix, for any deterministic sequence $A_0, \ldots, A_{t-1}$, conditioned on $\mathcal{F}_t$, the models $\hat{\theta}_{t,1}, \ldots, \hat{\theta}_{t,M}$ are independent and identically distributed according to the posterior distribution of $\theta$. In this sense, the ensemble approximates the posterior.
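The incremental update above is straightforward to implement. A minimal sketch for the linear bandit (our own illustration; the function name and the hypothetical `env` reward oracle mirror the Thompson sampling sketch earlier):

```python
import numpy as np

def ensemble_sampling_linear(actions, mu0, Sigma0, sigma_w, T, env, M=30, rng=None):
    """Algorithm 1 specialized to the linear bandit: M models sampled from the
    prior, one drawn uniformly per period to act, and all M updated with the
    perturbed incremental rule above."""
    if rng is None:
        rng = np.random.default_rng()
    thetas = rng.multivariate_normal(mu0, Sigma0, size=M)  # hat{theta}_{0,m} ~ prior
    Sigma_inv = np.linalg.inv(Sigma0)
    for _ in range(T):
        m = rng.integers(M)                                # sample a model uniformly
        a = actions[np.argmax(actions @ thetas[m])]        # act greedily on it
        r = env(a)
        Sigma_inv_next = Sigma_inv + np.outer(a, a) / sigma_w**2
        Sigma_next = np.linalg.inv(Sigma_inv_next)
        for j in range(M):                                 # update every model
            w_tilde = rng.normal(0.0, sigma_w)             # fresh perturbation
            thetas[j] = Sigma_next @ (Sigma_inv @ thetas[j]
                                      + a * (r + w_tilde) / sigma_w**2)
        Sigma_inv = Sigma_inv_next
    return thetas
```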
In this sense, the ensemble approximates the posterior. It is not a new observation that, for deterministic action sequences, such a scheme generates exact samples of the posterior distribution (see, e.g., [7]). However, for stochastic action sequences selected by Algorithm 1, it is not immediately clear how well the ensemble approximates the posterior distribution. We will provide a bound in the next section which establishes that, as the number of models M increases, the regret of ensemble sampling quickly approaches that of Thompson sampling.

The ensemble sampling algorithm we have described for the linear bandit problem motivates an analogous approach for the neural network model of Example 2. This approach would again begin with M models, with connection weights θ̃_{0,1}, ..., θ̃_{0,M} sampled from a N(μ_0, Σ_0) prior. It could be natural here to let μ_0 = 0 and Σ_0 = σ_0² I for some variance σ_0² chosen so that the range of probable models spans plausible outcomes. To incrementally update parameters, at each time t, each model m applies some number of stochastic gradient descent iterations to reduce a loss function of the form

    L_t(θ) = (1/σ_w²) Σ_{τ=0}^{t−1} (Y_{τ+1} + W̃_{τ+1,m} − g_θ(A_τ))² + (θ − θ̃_{0,m})ᵀ Σ_0^{−1} (θ − θ̃_{0,m}).

We present computational results in Section 5.2 that demonstrate the viability of this approach.

4 Analysis of ensemble sampling for the linear bandit

Past analyses of Thompson sampling have relied on independence between models sampled over time periods. Ensemble sampling introduces dependencies that may adversely impact performance. It is not immediately clear whether the degree of degradation should be tolerable and how that depends on the number of models in the ensemble. In this section, we establish a bound for the linear bandit context. Our result serves as a sanity check for ensemble sampling and offers insight that should extend to broader model classes, though we leave formal analysis beyond the linear bandit for future work.

Consider the linear bandit problem described in Example 1. Let π^TS and π^ES denote the Thompson and ensemble sampling policies for this problem, with the latter based on an ensemble of M models, generated and updated according to the procedure described in Section 3. Let R_min = min_{a∈A} θᵀa denote the worst mean reward, and let Δ(θ) = R* − R_min denote the gap between maximal and minimal mean rewards. The following result bounds the difference in regret as a function of the gap, ensemble size, and number of actions.

Theorem 3. For all ε > 0, if

    M ≥ (4|A| / ε²) log(4|A|T / ε³),

then

    Regret(T, π^ES, θ) ≤ Regret(T, π^TS, θ) + ε Δ(θ) T.

This inequality bounds the regret realized by ensemble sampling by a sum of the regret realized by Thompson sampling and an error term ε Δ(θ) T. Since we are talking about cumulative regret, the error term bounds the per-period degradation relative to Thompson sampling by ε Δ(θ). The value of ε can be made arbitrarily small by increasing M. Hence, with a sufficiently large ensemble, the per-period loss will be small. This supports the viability of ensemble sampling.

An important implication of this result is that it suffices for the ensemble size to grow logarithmically in the horizon T. Since Thompson sampling requires independence between models sampled over time, in a sense it relies on T models, one per time period.
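To see the logarithmic growth in T concretely, the following evaluates the ensemble-size condition of Theorem 3; it is a direct transcription of the bound as reconstructed above, and the helper name is ours.

```python
import math

def ensemble_size(num_actions, T, eps):
    """Smallest integer M with M >= (4|A| / eps^2) * log(4 |A| T / eps^3)."""
    A = num_actions
    return math.ceil(4 * A / eps**2 * math.log(4 * A * T / eps**3))

# The required M grows only logarithmically in the horizon T:
for T in (10**2, 10**4, 10**6):
    print(T, ensemble_size(num_actions=50, T=T, eps=0.5))
```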
So to be useful, ensemble sampling should operate effectively with a much smaller number, and the logarithmic dependence is suitable. The bound also grows with |A| log |A|, which is manageable when there is a modest number of actions. We conjecture that a similar bound holds that depends instead on a multiple of N log N, where N is the linear dimension, which would offer a stronger guarantee when the number of actions becomes large or infinite, though we leave proof of this alternative bound for future work.

The bound of Theorem 3 is on a notion of regret conditioned on the realization of θ. A Bayesian regret bound that removes dependence on this realization can be obtained by taking an expectation, integrating over θ:

    E[Regret(T, π^ES, θ)] ≤ E[Regret(T, π^TS, θ)] + ε E[Δ(θ)] T.

We provide a complete proof of Theorem 3 in the appendix. Due to space constraints, we only offer a sketch here.

Sketch of Proof. Let A denote an F_t-adapted action process (A_0, ..., A_{T−1}). Our procedure for generating and updating models with ensemble sampling is designed so that, for any deterministic A, conditioned on the history of rewards (R_1, ..., R_t), the models θ̃_{t,1}, ..., θ̃_{t,M} that comprise the ensemble are independent and identically distributed according to the posterior distribution of θ. This can be verified via some algebra, as is done in the appendix.

Recall that p_t(a) denotes the posterior probability P_t(A* = a) = P(A* = a | A_0, R_1, ..., A_{t−1}, R_t). To explicitly indicate dependence on the action process, we will use a superscript: p_t(a) = p_t^A(a). Let p̂_t^A denote an approximation to p_t^A, given by

    p̂_t^A(a) = (1/M) Σ_{m=1}^{M} 1[ a = argmax_{a′∈A} θ̃_{t,m}ᵀ a′ ].

Note that, given an action process A, at time t Thompson sampling would sample the next action from p_t^A, while ensemble sampling would sample the next action from p̂_t^A. If A is deterministic then, since θ̃_{t,1}, ..., θ̃_{t,M}, conditioned on the history of rewards, are i.i.d. and distributed as θ, p̂_t^A represents an empirical distribution of samples drawn from p_t^A. It follows from this and Sanov's theorem that, for any deterministic A,

    P( d_KL(p̂_t^A ‖ p_t^A) ≥ ε | θ ) ≤ (M + 1)^{|A|} e^{−Mε}.

A naive application of the union bound over all deterministic action sequences would establish that, for any A (deterministic or stochastic),

    P( d_KL(p̂_t^A ‖ p_t^A) ≥ ε | θ ) ≤ P( max_{a∈A^t} d_KL(p̂_t^a ‖ p_t^a) ≥ ε | θ ) ≤ |A|^t (M + 1)^{|A|} e^{−Mε}.

However, our proof takes advantage of the fact that, for any deterministic A, p_t^A and p̂_t^A do not depend on the ordering of past actions and observations. To make this precise, we encode the sequence of actions in terms of action counts c_0, ..., c_{T−1}. In particular, let c_{t,a} = |{τ ≤ t : A_τ = a}| be the number of times that action a has been selected by time t. We apply a coupling argument that introduces dependencies between the noise terms W_t and the action counts, without changing the distributions of any observable variables. We let (Z_{n,a} : n ∈ ℕ, a ∈ A) be i.i.d. N(0, 1) random variables, and let W_{t+1} = Z_{c_{t,A_t}, A_t}. Similarly, we let (Z̃_{n,a,m} : n ∈ ℕ, a ∈ A, m = 1, ..., M) be i.i.d. N(0, 1) random variables, and let W̃_{t+1,m} = Z̃_{c_{t,A_t}, A_t, m}. To make the dependence on A explicit, we write c_t^A for the action counts at time t when the action process is given by A. It is not hard to verify, as is done in the appendix, that if a, a′ ∈ A^T are two deterministic action sequences such that c_{t−1}^a = c_{t−1}^{a′}, then p_t^a = p_t^{a′} and p̂_t^a = p̂_t^{a′}.
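For intuition on the Sanov-style concentration step above, the following sketch (our own construction, not from the paper) measures d_KL(p̂_t ‖ p_t) given M sampled models; the exact posterior probabilities p are assumed to be available here purely for illustration.

```python
import numpy as np

def empirical_kl(Theta, actions, p):
    """d_KL(p_hat || p), where p_hat is the empirical distribution of greedy
    actions over the M sampled models (Theta: (M, N), actions: (K, N))."""
    greedy = np.argmax(Theta @ actions.T, axis=1)  # argmax_a theta_m^T a per model
    p_hat = np.bincount(greedy, minlength=len(actions)) / len(Theta)
    mask = p_hat > 0
    return float(np.sum(p_hat[mask] * np.log(p_hat[mask] / p[mask])))
```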
This allows us to apply the union bound over action counts, instead of action sequences, and we get that, for any A (deterministic or stochastic),

    P( d_KL(p̂_t^A ‖ p_t^A) ≥ ε | θ ) ≤ P( max_{c_{t−1}^a : a∈A^t} d_KL(p̂_t^a ‖ p_t^a) ≥ ε | θ ) ≤ (t + 1)^{|A|} (M + 1)^{|A|} e^{−Mε}.

Now, we specialize the action process A to the action sequence A_t = A_t^ES selected by ensemble sampling, and we omit the superscripts in p_t^A and p̂_t^A. We can decompose the per-period regret of ensemble sampling as

    E[R* − θᵀA_t | θ] = E[(R* − θᵀA_t) 1(d_KL(p̂_t ‖ p_t) ≥ ε) | θ]
                      + E[(R* − θᵀA_t) 1(d_KL(p̂_t ‖ p_t) < ε) | θ].    (2)

The first term can be bounded by

    E[(R* − θᵀA_t) 1(d_KL(p̂_t ‖ p_t) ≥ ε) | θ] ≤ Δ(θ) P(d_KL(p̂_t ‖ p_t) ≥ ε | θ)
                                               ≤ Δ(θ) (t + 1)^{|A|} (M + 1)^{|A|} e^{−Mε}.

To bound the second term, we use another coupling argument that couples the actions that would be selected by ensemble sampling with those that would be selected by Thompson sampling. Let A_t^TS denote the action that Thompson sampling would select at time t. On {d_KL(p̂_t ‖ p_t) < ε}, we have ‖p̂_t − p_t‖_TV ≤ √(2ε) by Pinsker's inequality. Conditioning on p̂_t and p_t, if d_KL(p̂_t ‖ p_t) < ε, we can construct random variables Ã_t^ES and Ã_t^TS such that they have the same distributions as A_t^ES and A_t^TS, respectively. Using maximal coupling, we can make Ã_t^ES = Ã_t^TS with probability at least 1 − ½‖p̂_t − p_t‖_TV ≥ 1 − √(ε/2). Then, the second term of the sum in (2) can be decomposed into

    E[(R* − θᵀA_t) 1(d_KL(p̂_t ‖ p_t) < ε) | θ]
      = E[ E[(R* − θᵀÃ_t^ES) 1(d_KL(p̂_t ‖ p_t) < ε, Ã_t^ES = Ã_t^TS) | p̂_t, p_t, θ] | θ ]
      + E[ E[(R* − θᵀÃ_t^ES) 1(d_KL(p̂_t ‖ p_t) < ε, Ã_t^ES ≠ Ã_t^TS) | p̂_t, p_t, θ] | θ ],

which, after some algebraic manipulations, leads to

    E[(R* − θᵀA_t) 1(d_KL(p̂_t ‖ p_t) < ε) | θ] ≤ E[R* − θᵀA_t^TS | θ] + √(ε/2) Δ(θ).

The result then follows from some straightforward algebra.

5 Computational results

In this section, we present computational results that demonstrate the viability of ensemble sampling. We start with the simple case of independent Gaussian bandits in Section 5.1 and move on to more complex neural network models in Section 5.2. Section 5.1 serves as a sanity check for the empirical performance of ensemble sampling, since Thompson sampling can be applied efficiently in this case and we can compare the performance of the two algorithms directly. In addition, we provide simulation results that demonstrate how the ensemble size grows with the number of actions. Section 5.2 goes beyond our theoretical analysis in Section 4 and gives computational evidence of the efficacy of ensemble sampling when applied to more complex models such as neural networks. We show that ensemble sampling, even with a few models, achieves efficient learning and outperforms ε-greedy and dropout on the example neural networks.

5.1 Gaussian bandits with independent arms

We consider a Gaussian bandit with K actions, where action k has mean reward θ_k. Each θ_k is drawn i.i.d. from N(0, 1). During each time step t = 0, ..., T−1, we select an action k ∈ {1, ..., K} and observe reward R_{t+1} = θ_k + W_{t+1}, where W_{t+1} ∼ N(0, 1). Note that this is a special case of Example 1. Since the posterior distribution of θ can be computed explicitly in this case, we use it as a sanity check for the performance of ensemble sampling.

Figure 1a shows the per-period regret of Thompson sampling and ensemble sampling applied to a Gaussian bandit with 50 independent arms. We see that as the number of models increases, ensemble sampling better approximates Thompson sampling.
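For the independent-arm setting, exact Thompson sampling has a closed-form conjugate posterior per arm. The following minimal sketch (our own, with unit noise variance as in the experiment) is the baseline against which the ensemble is compared.

```python
import numpy as np

def thompson_gaussian(K=50, T=700, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(K)             # true means, theta_k ~ N(0, 1)
    mu, prec = np.zeros(K), np.ones(K)         # N(0, 1) prior for each arm
    regret = np.empty(T)
    for t in range(T):
        # sample from each arm's posterior N(mu_k, 1/prec_k), act greedily
        k = int(np.argmax(mu + rng.standard_normal(K) / np.sqrt(prec)))
        r = theta[k] + rng.standard_normal()   # R_{t+1} = theta_k + W_{t+1}
        prec[k] += 1.0                         # conjugate Gaussian update
        mu[k] += (r - mu[k]) / prec[k]
        regret[t] = theta.max() - theta[k]
    return regret
```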
The results were averaged over 2,000 realizations. Figure 1b shows the minimum number of models required so that the expected per-period regret of ensemble sampling is no more than ε plus the expected per-period regret of Thompson sampling at some large time horizon T, across different numbers of actions. All results are averaged over 10,000 realizations. We chose T = 2000 and ε = 0.03. The plot shows that the number of models needed seems to grow sublinearly with the number of actions, which is stronger than the bound proved in Section 4.

Figure 1: (a) Ensemble sampling compared with Thompson sampling on a Gaussian bandit with 50 independent arms. (b) Minimum number of models required so that the expected per-period regret of ensemble sampling is no more than ε = 0.03 plus the expected per-period regret of Thompson sampling at T = 2000 for Gaussian bandits across different numbers of arms.

5.2 Neural networks

In this section, we follow Example 2 and show computational results of ensemble sampling applied to neural networks. Figure 2 shows ε-greedy and ensemble sampling applied to a bandit problem where the mapping from actions to expected rewards is represented by a single neuron. More specifically, we have a set of K actions A ⊆ ℝ^N. The mean reward of selecting an action a ∈ A is given by g_θ(a) = max(0, θᵀa), where the weights θ ∈ ℝ^N are drawn from N(0, λI). During each time period, we select an action A_t ∈ A and observe reward R_{t+1} = g_θ(A_t) + Z_{t+1}, where Z_{t+1} ∼ N(0, σ_z²). We set the input dimension N = 100, the number of actions K = 100, the prior variance λ = 10, and the noise variance σ_z² = 100. Each dimension of each action was sampled uniformly from [−1, 1], except for the last dimension, which was set to 1.

In Figure 3, we consider a bandit problem where the mapping from actions to expected rewards is represented by a two-layer neural network with weights θ = (W_1, W_2), where W_1 ∈ ℝ^{D×N} and W_2 ∈ ℝ^D. Each entry of the weight matrices is drawn independently from N(0, λ). There is a set of K actions A ⊆ ℝ^N. The mean reward of choosing an action a ∈ A is g_θ(a) = W_2ᵀ max(0, W_1 a). During each time period, we select an action A_t ∈ A and observe reward R_{t+1} = g_θ(A_t) + Z_{t+1}, where Z_{t+1} ∼ N(0, σ_z²). We used N = 100 for the input dimension, D = 50 for the dimension of the hidden layer, K = 100 actions, prior variance λ = 1, and noise variance σ_z² = 100. Each dimension of each action was sampled uniformly from [−1, 1], except for the last dimension, which was set to 1.

Ensemble sampling with M models starts by sampling θ̃_m from the prior distribution independently for each model m. At each time step, we pick a model m uniformly at random and apply the greedy action with respect to that model. We update the ensemble incrementally: during each time period, we apply a few steps of stochastic gradient descent for each model m with respect to the loss function

    L_t(θ) = (1/σ_z²) Σ_{τ=0}^{t−1} (R_{τ+1} + Z̃_{τ+1,m} − g_θ(A_τ))² + (1/λ) ‖θ − θ̃_m‖₂²,

where the perturbations (Z̃_{t,m} : t = 1, ..., T, m = 1, ..., M) are drawn i.i.d. from N(0, σ_z²).
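Below is a minimal sketch of one perturbed-loss gradient step for a single model of the ensemble, assuming the single-neuron reward g_θ(a) = max(0, θᵀa) so the gradient can be written by hand; it takes a full-batch step rather than the minibatch SGD used in the experiments, and all names are ours.

```python
import numpy as np

def sgd_step(theta, theta_anchor, A, R, Z_tilde, sigma_z2, lam, lr=0.1):
    """One gradient step on L_t(theta) for model m.

    A: (t, N) past actions;  R, Z_tilde: (t,) rewards and fixed perturbations
    theta_anchor: the prior sample theta_tilde_m this model is anchored to
    """
    pre = A @ theta                         # theta^T a for each past action
    resid = R + Z_tilde - np.maximum(0.0, pre)
    grad = (-2.0 / sigma_z2) * (A.T @ (resid * (pre > 0)))  # ReLU chain rule
    grad += (2.0 / lam) * (theta - theta_anchor)            # prior-anchor term
    return theta - lr * grad
```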
Besides ensemble sampling, there are other heuristics for sampling from an approximate posterior distribution over neural networks, which may be used to develop approximate versions of Thompson sampling. Gal and Ghahramani proposed an approach based on dropout [4] to approximately sample from a posterior over neural networks. In Figure 3, we include results from using dropout to approximate Thompson sampling on the two-layer neural network bandit.

To facilitate gradient flow, we used leaky ReLUs of the form max(0.01x, x) internally in all agents, while the target neural nets still use regular ReLUs as described above. We took 3 stochastic gradient steps with a minibatch size of 64 for each model update. We used a learning rate of 1e-1 for ε-greedy and ensemble sampling, and learning rates of 1e-2, 1e-2, 2e-2, and 5e-2 for dropout with dropping probabilities 0.25, 0.5, 0.75, and 0.9, respectively. All results were averaged over around 1,000 realizations.

Figure 2 plots the per-period regret of ε-greedy and ensemble sampling on the single neuron bandit. We see that ensemble sampling, even with 10 models, performs better than ε-greedy with the best tuned parameters. Increasing the size of the ensemble further improves the performance, and an ensemble of size 50 achieves orders of magnitude lower regret than ε-greedy.

Figures 3a and 3b show different versions of ε-greedy applied to the two-layer neural network model; ε-greedy with an annealing schedule tends to perform better than a fixed ε. Figure 3c plots the per-period regret of the dropout approach with different dropping probabilities, which seems to perform worse than ε-greedy. Figure 3d plots the per-period regret of ensemble sampling on the neural net bandit. Again, we see that ensemble sampling with a moderate number of models outperforms the other approaches by a significant amount.

Figure 2: (a) ε-greedy and (b) ensemble sampling applied to a single neuron bandit.

Figure 3: (a) Fixed ε-greedy, (b) annealing ε-greedy, (c) dropout, and (d) ensemble sampling applied to a two-layer neural network bandit.

6 Conclusion

Ensemble sampling offers a potentially efficient means to approximate Thompson sampling when using complex models such as neural networks. We have provided an analysis that offers theoretical assurances for the case of linear bandit models, and computational results that demonstrate efficacy with complex neural network models.

We are motivated largely by the need for effective exploration methods that can be applied efficiently in conjunction with complex models such as neural networks. Ensemble sampling offers one approach to representing uncertainty in neural network models, and there are others that might also be brought to bear in developing approximate versions of Thompson sampling [1, 4]. The analysis of various other forms of approximate Thompson sampling remains open.
Ensemble sampling loosely relates to ensemble learning methods [3], though an important difference in motivation lies in the fact that the latter learns multiple models for the purpose of generating a more accurate model through their combination, while the former learns multiple models to reflect uncertainty in the posterior distribution over models. That said, combining the two related approaches may be fruitful. In particular, there may be practical benefit to learning many forms of models (neural networks, tree-based models, etc.) and viewing the ensemble as representing uncertainty from which one can sample.

Acknowledgments

This work was generously supported by a research grant from Boeing and a Marketing Research Award from Adobe.

References
[1] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In Proceedings of the 32nd International Conference on Machine Learning, ICML'15, pages 1613-1622. JMLR.org, 2015.
[2] Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems 24, pages 2249-2257. Curran Associates, Inc., 2011.
[3] Thomas G. Dietterich. Ensemble learning. The Handbook of Brain Theory and Neural Networks, 2:110-125, 2002.
[4] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1050-1059. PMLR, 2016.
[5] Carlos Gómez-Uribe. Online algorithms for parameter mean and variance estimation in dynamic regression. arXiv preprint arXiv:1605.05697v1, 2016.
[6] Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems 29, pages 4026-4034. Curran Associates, Inc., 2016.
[7] George Papandreou and Alan L. Yuille. Gaussian sampling by local perturbations. In Advances in Neural Information Processing Systems 23, pages 1858-1866. Curran Associates, Inc., 2010.
[8] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
Character-Level Language Modeling with Recurrent Highway Hypernetworks

Joseph Suarez
Stanford University
[email protected]

Abstract

We present extensive experimental and theoretical support for the efficacy of recurrent highway networks (RHNs) and recurrent hypernetworks, complementary to the original works. Where the original RHN work primarily provides theoretical treatment of the subject, we demonstrate experimentally that RHNs benefit from far better gradient flow than LSTMs in addition to their improved task accuracy. The original hypernetworks work presents detailed experimental results but leaves several theoretical issues unresolved; we consider these in depth and frame several feasible solutions that we believe will yield further gains in the future. We demonstrate that these approaches are complementary: by combining RHNs and hypernetworks, we make a significant improvement over current state-of-the-art character-level language modeling performance on Penn Treebank while relying on much simpler regularization. Finally, we argue for RHNs as a drop-in replacement for LSTMs (analogous to LSTMs for vanilla RNNs) and for hypernetworks as a de-facto augmentation (analogous to attention) for recurrent architectures.

1 Introduction and related works

Recurrent architectures have seen much improvement since their inception in the 1990s, but they still suffer significantly from the problem of vanishing gradients [1]. Though many consider LSTMs [2] the de-facto solution to vanishing gradients, in practice the problem is far from solved (see Discussion). Several LSTM variants have been developed, most notably GRUs [3], which are simpler than LSTM cells but benefit from only marginally better gradient flow. Greff et al. and Britz et al. conducted exhaustive (for all practical purposes) architecture searches over simple LSTM variants and demonstrated that none achieved significant improvement [4] [5]; in particular, the latter work discovered that LSTMs consistently outperform comparable GRUs on machine translation, and no proposed cell architecture to date has been proven significantly better than the LSTM. This result necessitated novel approaches to the problem.

One approach is to upscale by simply stacking recurrent cells and increasing the number of hidden units. While there is certainly some optimal trade-off between depth and cell size, with enough data, simply upscaling both has yielded remarkable results in neural machine translation (NMT) [6].1 However, massive upscaling is impractical in all but the least hardware-constrained settings and fails to remedy fundamental architectural issues, such as the poor gradient flow inherent in recurrent cells [8]. We later demonstrate that gradient issues persist in LSTMs (see Results) and that the grid-like architecture of stacked LSTMs is suboptimal.

1 For fair comparison, Google's NMT system does far more than upscaling and includes an explicit attentional mechanism [7]. We do not experiment with attention and/or residual schemes, but we expect the gains made by such techniques to stack with our work.

The problem of gradient flow can be somewhat mitigated by the adaptation of Batch Normalization [9] to the recurrent case [10] [11]. While effective, this does not solve the problem entirely, and it also imposes significant overhead in memory, and thus in performance, given the efficiency of parallelization over minibatches.
This overhead is often offset by a reduction in the total number of epochs over the data required, but recurrent architectures with better gradient flow could ideally provide comparable or better convergence without reliance upon explicit normalization.

Zilly et al. recently proposed recurrent highway networks (RHNs) and offered copious theoretical support for the architecture's improved gradient flow [12]. However, while the authors provided mathematical rigor, we believe that experimental confirmation of the authors' claims could further demonstrate the model's simplicity and widespread applicability. Furthermore, we find that the discussion of gradient flow is more nuanced than presented in the original work (see Discussion).

Ha et al. recently questioned the weight-sharing paradigm common among recurrent architectures, proposing hypernetworks as a mechanism for allowing weight drift between timesteps [13]. This consideration is highly desirable given the successes of recent convolutional architectures on language modeling tasks [14] [15], which were previously dominated by recurrent architectures.

Both RHNs and hypernetworks achieved state-of-the-art (SOTA) on multiple natural language processing (NLP) tasks at the time. As these approaches address unrelated architectural issues, it should not be surprising that combining them yields SOTA on Penn Treebank [16] (PTB), improving significantly over either model evaluated individually. We consider both RHNs and hypernetworks to be largely overlooked in recent literature on account of apparent rather than actual complexity. Furthermore, the original RHN work lacks sufficient experimental demonstration of improved gradient flow, and the original hypernetworks work lacks a theoretical generalization of its weight-drift scheme. We present experimental results for RHNs complementary to the original work's theoretical results, and theoretical results for hypernetworks complementary to the original work's experimental results. Founded on these results, our most important contribution is a strong argument for the utility of RHNs and hypernetworks, both individually and jointly, in constructing improved recurrent architectures.

2 Model architecture

2.1 Recurrent highway networks

We make a few notational simplifications to the original RHN equations that will later facilitate extensibility. We find it clearest and most succinct to be programmatic in our notation. First, consider the GRU:

    [h, t] = x_i U + s_{i−1} W
    h, t = σ(h), σ(t)
    r = tanh(x_i Ũ + (s_{i−1} ⊙ h) W̃)    (1)
    s_i = (1 − t) ⊙ r + t ⊙ s_{i−1}

where x ∈ ℝ^d and h, t, r, s_i ∈ ℝ^n, and U, Ũ ∈ ℝ^{d×2n} and W, W̃ ∈ ℝ^{n×2n} are weight matrices, where d, n are the input and hidden dimensions. σ is the sigmoid nonlinearity, and ⊙ is the Hadamard (elementwise) product. A one-layer RHN cell is a simplified GRU variant:

    [h, t] = x_i U + s_{i−1} W
    h, t = tanh(h), σ(t)    (2)
    s_i = (1 − t) ⊙ s_{i−1} + t ⊙ h

where the definitions from above hold. The RHN is extended to arbitrary depth by simply stacking this cell with new hidden weight matrices, with the caveat that x_i U is omitted except at the input layer:

    RHNCell(x_i, s_{i−1}, l):
    [h, t] = 1[l = 0] x_i U + s_{i−1} W
    h, t = tanh(h), σ(t)    (3)
    c, t = 1 − t, dropout(t)
    s_i = c ⊙ s_{i−1} + t ⊙ h

where l is the layer index, which is used as an indicator. We can introduce recurrent dropout [17] on t across all layers with a single hyperparameter. We later demonstrate strong results without the need for more complex regularization or layer normalization. Finally, unlike stacked LSTMs, RHNs are structurally linear: a depth-L RHN applied to a sequence of length M can be unrolled to a simple depth-ML network. We restate this fact from the original work only because it is important to our analysis, which we defer to Results and Discussion.
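The cell in equation (3) translates almost line for line into code. Below is a minimal NumPy sketch of one timestep of a depth-L RHN, with dropout omitted; the weight containers and function name are our own choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rhn_step(x, s, U, Ws):
    """One timestep of a depth-L RHN, following equation (3).

    x: (d,) input;  s: (n,) carried state
    U: (d, 2n) input projection, applied only at layer l = 0
    Ws: list of L recurrent matrices, each (n, 2n)
    """
    n = s.shape[0]
    for l, W in enumerate(Ws):
        pre = s @ W + (x @ U if l == 0 else 0.0)  # 1[l = 0] x U + s W
        h, t = np.tanh(pre[:n]), sigmoid(pre[n:])
        s = (1.0 - t) * s + t * h                 # highway carry and transform
    return s
```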
2.2 Hypernetworks

We slightly alter the original notation of recurrent hypernetworks for ease of combination with RHNs. We define a hypervector z as a linear upscaling projection applied to the outputs of a small recurrent network:

    z(a) = W_p a    (4)

where a ∈ ℝ^h is the activation vector output by an arbitrary recurrent architecture, W_p ∈ ℝ^{n×h} is an upscaling projection from dimension h to n, and h ≪ n. The hypervector is then used to scale the weights of the main recurrent network by:

    W̃(z(a)) = z(a) ⊙ W    (5)

where we overload ⊙ as the elementwise product across columns. That is, each element of z scales one column (or row, depending on notation) of W. As this constitutes a direct modification of the weights, hypernetworks have the interpretation of relaxing the weight-sharing constraint implicit in RNNs.

2.3 Recurrent highway hypernetworks

We adapt hypernetworks to RHNs by directly modifying the RHN cell using (5):

    RHNCellHyper(x_i, s_{i−1}, l, z):
    [h, t] = 1[l = 0] x_i Ũ(z) + s_{i−1} W̃(z)
    h, t = tanh(h), σ(t)    (6)
    c, t = 1 − t, dropout(t)
    s_i = c ⊙ s_{i−1} + t ⊙ h

If RHNCell and RHNCellHyper had the same state sizes, we could simply stack them. However, as the hypernetwork is much smaller than the main network by design, we must instead upscale between the networks. Our final architecture at each timestep for layer l can thus be written:

    s^h = RHNCell(x_i, s^h, l)
    z = [M_p^l s^h, M_p^l s^h]    (7)
    s^n = RHNCellHyper(x_i, s^n, l, z)

where M_p^l ∈ ℝ^{n×h} is the upscale projection matrix for layer l and z is the concatenation of M_p^l s^h with itself. Notice the simplicity of this extension: it is at least as straightforward to extend RHNs as GRUs and LSTMs. Again, we use only simple recurrent dropout for regularization.

A few notes, for clarity and ease of reproduction. As the internal weight matrices of the main network have different dimensionality (U_l ∈ ℝ^{d×2n}, W_l ∈ ℝ^{n×2n}), we require the concatenation operation to form z in (7); we find this works much better than simply using larger projection matrices. Also, s^h and s^n in (7) are the hypernetwork and main network states, respectively. This may seem backwards from the notation above, but note that the hypernetwork is a standard, unmodified RHNCell; its outputs are then used in the main network, which is the modified RHNCellHyper.
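To illustrate the column-wise scaling of equation (5), here is a small NumPy check with illustrative dimensions and names of our own choosing; the final line verifies the identity, noted again in Results (theoretical), that scaling the columns of W before a matrix-vector multiply is equivalent to an elementwise product after it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 8, 2                        # main and hyper dimensions; h << n in practice
W  = rng.standard_normal((n, 2 * n))
Wp = rng.standard_normal((2 * n, h))
a  = rng.standard_normal(h)        # activation of the small hypernetwork
s  = rng.standard_normal(n)        # main-network state

z = Wp @ a                         # hypervector, equation (4)
scaled = s @ (W * z)               # each z_j scales column j of W, equation (5)
trick  = (s @ W) * z               # same result without materializing W * z
assert np.allclose(scaled, trick)
```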
While most models can be trained on 1-4 GPUs within a few weeks, this statement is misleading, as significantly more hardware is required for efficient development and hyperparameter search. We therefore emphasize the importance of small datasets for standardized comparison among models. Hutter is a medium sized task (approximately 20 times larger than PTB) that should be feasible in most settings (e.g. the original RHN and Hypernetwork works). We are only reasonably able to 3 Table 1: Comparison of bits per character (BPC) test errors on PTB. We achieve SOTA without layer normalization, improving over vanilla hypernetworks, which require layer normalization Model Test Val Params (M) LSTM 2-Layer LSTM 2-Layer LSTM (1125 hidden, ours) HyperLSTM Layer Norm LSTM Layer Norm HyperLSTM Layer Norm HyperLSTM (large embed) 2-Layer Norm HyperLSTM, 1000 units Recurrent Highway Network (ours) HyperRHN (ours) 1.31 1.28 ? 1.26 1.27 1.25 1.23 1.22 1.19 1.35 1.31 1.29 1.30 1.30 1.28 1.26 1.24 1.24 1.21 4.3 12.2 15.6 4.9 4.3 4.9 5.1 14.4 14.0 15.5 evaluate on PTB due to a strict hardware limitation of two personally owned GPUs. We therefore take additional precautions to ensure fair comparison: First, we address the critiques of Jozefoqicz et al. by avoiding complex regularization. We use only simple recurrent dropout with a uniform probability across layers. Second, we minimally tune hyperparameters as discussed below. Finally, we are careful with the validation data and run the test set only once on our best model. We believe these precautions prevent overfitting the domain and corroborate the integrity of our result. Furthermore, SOTA performance with suboptimal hyperparameters demonstrates the robustness of our model. 3.2 Architecture and training details In addition to our HyperRHN, we consider our implementations of a 2-Layer LSTM and a plain RHN below. All models, including hypernetworks and their strong baselines, are compared in Table 1. Other known published results are included in the original hypernetworks work, but have test bpc ? 1.27. We train all models using Adam [20] with the default learning rate 0.001 and sequence length 100, batch size 256 (the largest that fits in memory for our main model) on a single GTX 1080 Ti until overfitting becomes obvious. We evaluate test performance only once and only on our main model, using the validation set for early stopping. Our data batcher loads the dataset into main memory as a single contiguous block and reshapes it to column size 100. We do not zero pad for efficiency and no distinction is made between sentences for simplicity. Data is embedded into a 27 dimensional vector. We do not cross validate any hyperparameters except for dropout. We first consider our implementation of a 2-Layer LSTM with hidden dimension 1125, which yields approximately as many learnable parameters as our main model. We train for 350 epochs with recurrent dropout probability 0.9. As expected, our model performs slightly better than the slightly smaller baseline in the original hypernetworks work. We use this model in gradient flow comparisons (see Discussion) As the original RHN work presents only word-level results for PTB, we trained a RHN baseline by simply disabling the Hypernetwork augmentation. Convergence was achieved in 618 epochs. Our model consists of a recurrent highway hypernetwork with 7 layers per cell. The main network has 1000 neurons per layer and the hypernetwork has 128 neurons per layer, for a total of approximately 15.2M parameters. 
Both subnetworks use a recurrent dropout keep probability of 0.65 and no other regularizer/normalizer. We attribute our model?s ability to perform without layer normalization to the improved gradient flow of RHNs (see Discussion). The model converges in 660 epochs, obtaining test perplexity 2.29 (where cross entropy corresponds to loge of perplexity) and 1.19 bits per character (BPC, log2 of perplexity), 74.6 percent accuracy. By epoch count, our model is comparable to a plain RHN but performs better. Training takes 2-3 days (fairly long for PTB) compared to 1-2 days for a plain RHN and a few hours for an LSTM. However, this comparison is unfair: all models require a similar number of floating point operations and differ 4 primarily in backend implementation optimization. We consider possible modifications to our model that take advantage of existing optimization in Results (theoretical), below. Finally, we note that reporting of accuracy is nonstandard. Accuracy is a standard metric in vision; we encourage its adoption in language modeling, as BPC is effectively a change of base applied to standard cross entropy and is exponential in scale. This downplays the significance of gains where the error ceiling is likely small. Accuracy is more immediately comparable to maximum task performance, which we estimate to be well below 80 percent given the recent trend of diminishing returns coupled with genuine ambiguity in the task. Human performance is roughly 55 percent, as measured by our own performance on the task. 4 Results (theoretical) Our final model is a direct adaptation of the original hypervector scaling factor to RHNs. However, we did attempt a generalization of hypernetworks and encountered extreme memory considerations that have important implications for future work. Notice that the original hypernetwork scaling factor is equivalent to element-wise multiplication by a rank-1 matrix (e.g. the outer product of z with a ones vector, which does not include all rank-1 matrices). Ideally, we should be able to scale by any matrix at all. As mentioned by the authors, naively generating different scaling vectors for each column of the weight matrix is prohibitively expensive in both memory and computation time. We propose a low rank-d update inspired by the thin singular value decomposition as follows: f=W ? W d X ui vi> (8) i=1 Compared to the original scaling update, our variation has memory and performance cost linear in the rank of the update. As with the SVD, we would expect most of the information relevant to the weight drift scale to be contained in a relatively low-rank update. However, we were unable to verify this hypothesis due to a current framework limitation. All deep learning platforms currently assemble computation graphs, and this low rank approximation is added as a node in the graph. This requires memory equal to the dimensionality of the scaling matrix per training example! The original hypernetworks update is only feasible because of a small mathematical trick: row-wise scaling of the weight matrix is equal to elementwise multiplication after the matrix-vector multiply. Note that this is a practical rather than theoretical limitation. As variations in the weights of the hypernetwork arise only as a function of variations in ui , vi , W , it is possible to define a custom gradient operation that does not need to store the low rank scaling matrices at each time step for backpropagation. Lastly, we note that hypernetworks are a new and largely unexplored area of research. 
Even without the above addition, hypernetworks have yielded large improvements on a diverse array of tasks while introducing a minimal number of additional parameters. The only reason we cannot currently recommend hypernetworks as a drop-in network augmentation for most tasks (compare to e.g. attention) is another framework limitation. Despite requiring far fewer floating point operations than the larger main network, adding a hypernetwork still incurs nearly a factor of two in performance. This is due to the extreme efficiency of parallelization over large matrix multiplies; the overhead is largely time spent copying data. We propose rolling the hypernetwork into the main network. This could be accomplished by simply increasing the hidden dimension by the desired hypernetwork dimension h. The first h elements of the activation can then be treated as the hypervector. Note that this may require experimentation with matrix blocking and/or weight masking schemes in order to avoid linear interactions between the hypernetwork and main network during matrix multiplication.

The issues and solutions above are left as thought experiments; we prioritize our limited computational resources towards experimental efforts on recurrent highway networks. The theoretical results above are included to simultaneously raise and assuage concerns surrounding the generalization and efficiency of hypernetworks. We see additional development of hypernetworks as crucial to the continued success of our recurrent model, in the same manner that attention is a necessary, de-facto network augmentation in machine translation (and we further expect the gains to stack). Our model's strong language modeling result using a single graphics card was facilitated by the small size of PTB, which allowed us to afford the 2x computational cost of recurrent hypernetworks. We present methods for optimizing the representational power and computational cost of hypernetworks; additional engineering will still be required in order to enable efficient training on large datasets.

Figure 1: Visualization of hyper recurrent highway network training convergence.

5 Discussion (experimental)

5.1 Training time

We visualize training progress in Fig. 1. Notice that validation perplexity seems to remain below training perplexity for nearly 500 epochs. While the validation and test sets in PTB appear slightly easier than the training set, the cause of this artifact is that the validation loss is masked by a minimum 50-character context, whereas the training loss is not (we further increase the minimum context to 95 after training and observe a small performance gain); the training loss therefore suffers from the first few impossible predictions at the start of each example. The validation data is properly overlapped such that performance is evaluated over the entire set.

It may also seem surprising that the model takes over 600 epochs to converge, and that training progress appears incredibly slow towards the end. We make three observations. First, we did not experiment with different optimizers, annealing the learning rate, or even the fixed learning rate itself. Second, as the maximum task accuracy is unknown, it is likely that gains small on an absolute scale are large on a relative scale. We base this conjecture on the diminishing gains of recent work on an absolute scale: we find that the difference between 1.31 (1-layer LSTM) and 1.19 BPC (our model) is approximately 71.1 versus 74.6 percent accuracy. For reference, our improvement over the original hypernetworks work is approximately 1.0 percent (this figure is obtained from interpolation on the BPC scale). Third and finally, regardless of whether our second observation is true, our architecture exhibits similar convergence to a RHN and begins outperforming the 2-layer LSTM baseline before the latter converges.

5.2 Overview of visualizations

Our motivation in the visualizations that follow is to compare desirable and undesirable properties of our RHN-based model and standard recurrent models, namely stacked LSTMs. There are two natural gradient visualizations: within-cell gradients, which are averaged over time but not over all of the weight layers within the recurrent cell, and outside-cell gradients, which are averaged over internal weight layers but not over time. Time-averaged gradients are less useful to our discussion than the norms of the raw weight layers; we therefore present the latter along with outside-cell gradient visualizations.

5.3 Cell visualizations

We visualize the 2-norms of the learned weight layers of our RHN-based model in Fig. 2 and of an LSTM baseline (2 layers, 1125 hidden units, recurrent dropout keep p = 0.90, 15.6M parameters) in Fig. 3. Notice that in the middle six layers (the first/last layers have different dimensionality and are incomparable) of the RHN block (Fig. 2), weight magnitude decreases with increasing layer depth. We view this as evidence for the iterative-refinement view of deep learning, as smaller updates are applied in deeper layers. This is the first evidence of this paradigm that we are aware of in the recurrent case, as similar statistics in stacked LSTMs are less conclusive because of horizontal grid connections. This also explains why performance gains diminish as RHN depth increases, as was noted in the original work.

Figure 2: L2 norms of learned weights in our recurrent highway hypernetwork model. Increasing depth is shown from left to right in each block of layers. As dimensionality differs between blocks, the middle layers of each block are incomparable to the first/last layers, hence the disparity in norm.

Figure 3: L2 norms of learned weights in our 2-layer LSTM baseline, with layer 1 left of layer 2.
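The cell-visualization metric is simply the 2-norm of each learned matrix, taken in depth order; a framework-agnostic sketch with names of our own follows.

```python
import numpy as np

def layer_weight_norms(weights):
    """L2 (Frobenius) norm of each learned weight matrix, ordered by depth.

    weights: the per-layer recurrent matrices of a trained cell, e.g. the
    (n, 2n) W of each RHN layer; plotted as one bar per layer in Figs. 2-3.
    """
    return [float(np.linalg.norm(W)) for W in weights]
```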
For reference, our improvement over the original hypernetworks work is approximately 1.0 percent (this figure is obtained from interpolation on the BPC scale). Third and finally, regardless of whether our second observation is true, our architecture exhibits similar convergence to a RHN and begins outperforming the 2-layer LSTM baseline before the latter converges. 5.2 Overview of visualizations Our motivation in the visualizations that follow is to compare desirable and undesirable properties of our RHN-based model and standard recurrent models, namely stacked LSTMs. There are two natural gradient visualizations: within-cell gradients, which are averaged over time but not over all of the weight layers within the recurrent cell, and outside-cell gradients, which are averaged over internal weight layers but not over time. Time-averaged gradients are less useful to our discussion than the norms of raw weight layers; we therefore present these along with outside-cell gradient visualizations. 5.3 Cell visualizations We visualize the 2-norms of the learned weight layers of our RHN-based model in Fig. 2 and of an LSTM baseline (2 layers, 1150 hidden units, recurrent dropout keep p=0.90, 15.6M parameters) in Fig. 3. Notice that in the middle six layers (the first/last layers have different dimensionality and are incomparable) of the RHN block (Fig. 2), weight magnitude decreases with increasing layer depth. We view this as evidence for the iterative-refinement view of deep learning, as smaller updates are 6 Figure 2: L2 norms of learned weights in our recurrent highway hypernetwork model. Increasing depth is shown from left to right in each block of layers. As dimensionality differs between blocks, the middle layers of each block are incomparable to the first/last layers, hence the disparity in norm. Figure 3: L2 norms of learned weights in our 2-layer LSTM baseline, with layer 1 left of layer 2. applied in deeper layers. This is first evidence of this paradigm that we are aware of in the recurrent case, as similar statistics in stacked LSTMs are less conclusive because of horizontal grid connections. This also explains why performance gains diminish as RHN depth increases, as was noted in the original work. 5.4 Gradient visualizations over time We consider the mean L2-norms of the gradients of the activations with respect to the loss at the final timestep. But first, an important digression: when should we visualize gradient flow: at initialization, during training, or after convergence? To our knowledge, this matter has not yet received direct treatment. Fig. 4 is computed at initialization and seems to suggest that RHNs are far inferior to LSTMs in the multilayer case, as the network cannot possibly learn in the presence of extreme vanishing gradients. This line of reasoning lacks the required nuance, which we discuss below. 6 Discussion (theoretical) We address the seemingly inconsistent experimental results surrounding gradient flow in RHN. First, we note that the LSTM/RHN comparison is unfair: multilayer LSTM/GRU cells are laid out in a grid. The length of the gradient path is equal to the sum of the sequence length and the number of layers (minus one); in an RHN, it is equal to the product. In the fair one layer case, we found that the RHN actually possesses far greater initial gradient flow. Second, these intuitions regarding vanishing gradients at initialization are incorrect. As shown in Fig. 5, gradient flow improves dramatically after training for just one epoch. 
By convergence, as shown in Fig. 6, results shift in favor of RHNs, confirming experimentally the theoretical gradient flow benefits of RHNs over LSTMs.

Third, we address a potential objection. One might argue that while the gradient curves of our RHN-based model and the LSTM baseline are similar in shape, the magnitude difference is misleading. For example, if LSTMs naturally had a million times smaller weights, then the factor-of-a-hundred magnitude difference in Fig. 6 would actually demonstrate superiority of the LSTM. This is the reason for our consideration of weight norms in Figs. 2-3, which show that LSTMs have only 100 times smaller weights. Thus the gradient curves in Fig. 6 are effectively comparable in magnitude. However, RHNs maintain gradient flow equal to that of stacked LSTMs while having far greater gradient path length, so the initial comparison is unfair. We believe that this is the basis for the RHN's performance increase over the LSTM: RHNs allow much greater effective network depth without incurring additional gradient vanishing.

Figure 4: Layer-averaged gradient comparison between our model and an LSTM baseline. Gradients are computed at initialization at the input layer of each timestep with respect to the final timestep's loss. Weights are initialized orthogonally.

Figure 5: Identical to Fig. 4, but gradients are computed from models trained for one epoch.

Figure 6: Identical to Fig. 4, but gradients are computed after convergence.

Fourth, we experimented with adding the corresponding horizontal grid connections to our RHN, obtaining significantly better gradient flow. With the same parameter budget as our HyperRHN model, this variant obtains 1.40 BPC, far inferior to our HyperRHN, though it could likely be optimized somewhat. It appears that long gradient paths are precisely the advantage in RHNs. We therefore suggest that gradient flow specifically along the deepest gradient path is an important consideration in architecture design: it provides an upper limit on effective network depth. It appears that greater effective depth is precisely the advantage in modeling potential of the RHN.

7 Conclusion

We present a cohesive set of contributions to recurrent architectures. First, we provide strong experimental evidence for RHNs as a simple drop-in replacement for stacked LSTMs, and a detailed discussion of several engineering optimizations that could further improve performance. Second, we visualize and discuss the problem of vanishing gradients in recurrent architectures, revealing that gradient flow shifts significantly during training, which can lead to misleading comparisons among models. This demonstrates that gradient flow should be evaluated at or near convergence; using this metric, we confirm that RHNs benefit from far greater effective depth than stacked LSTMs while maintaining equal gradient flow. Third, we suggest multiple expansions upon hypernetworks for future work that have the potential to significantly improve efficiency and generalize the weight-drift paradigm. This could lead to further improvement upon our architecture and, we hope, facilitate general adoption of hypernetworks as a network augmentation. Finally, we demonstrate effectiveness by presenting and open sourcing our code (github.com/jsuarez5341/Recurrent-Highway-Hypernetworks-NIPS), a combined architecture that obtains SOTA on PTB with minimal regularization and tuning, which normally compromise results on small datasets.
Acknowledgments

Special thanks to Ziang Xie, Jeremy Irvin, Dillon Laird, and Hao Sheng for helpful commentary and suggestions during the revision process.

References
[1] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
[2] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[3] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[4] Klaus Greff, Rupesh K. Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, 2016.
[5] Denny Britz, Anna Goldie, Thang Luong, and Quoc Le. Massive exploration of neural machine translation architectures. arXiv preprint arXiv:1703.03906, 2017.
[6] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[7] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[8] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
[9] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[10] Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
[11] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[12] Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
[13] David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
[14] Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. arXiv preprint arXiv:1612.08083, 2016.
[15] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016.
[16] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.
[17] Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.
[18] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
[19] Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444, 2017.
[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
The Power of Approximating: a Comparison of Activation Functions

Bhaskar DasGupta, Department of Computer Science, University of Minnesota, Minneapolis, MN 55455-0159. Email: dasgupta@cs.umn.edu
Georg Schnitger, Department of Computer Science, The Pennsylvania State University, University Park, PA 16802. Email: georg@cs.psu.edu

Abstract

We compare activation functions in terms of the approximation power of their feedforward nets. We consider the case of analog as well as boolean input.

1 Introduction

We consider efficient approximations of a given multivariate function f : [-1,1]^m → R by feedforward neural networks. We first introduce the notion of a feedforward net.

Let Γ be a class of real-valued functions, where each function is defined on some subset of R. A Γ-net C is an unbounded fan-in circuit whose edges and vertices are labeled by real numbers. The real number assigned to an edge (resp. vertex) is called its weight (resp. its threshold). Moreover, to each vertex v an activation function γ_v ∈ Γ is assigned. Finally, we assume that C has a single sink w.

The net C computes a function f_C : [-1,1]^m → R as follows. The components of the input vector x = (x_1, ..., x_m) ∈ [-1,1]^m are assigned to the sources of C. Let v_1, ..., v_n be the immediate predecessors of a vertex v. The input for v is then s_v(x) = Σ_{i=1}^n w_i y_i − t_v, where w_i is the weight of the edge (v_i, v), t_v is the threshold of v, and y_i is the value assigned to v_i. If v is not the sink, then we assign the value γ_v(s_v(x)) to v; otherwise we assign s_v(x) to v. Then f_C = s_w is the function computed by C, where w is the unique sink of C.

A great deal of work has been done showing that nets of two layers can approximate (in various norms) large function classes (including continuous functions) arbitrarily well (Arai, 1989; Carrol and Dickinson, 1989; Cybenko, 1989; Funahashi, 1989; Gallant and White, 1988; Hornik et al., 1989; Irie and Miyake, 1988; Lapades and Farber, 1987; Nielson, 1989; Poggio and Girosi, 1989; Wei et al., 1991). Various activation functions have been used, among others the cosine squasher, the standard sigmoid, radial basis functions, generalized radial basis functions, polynomials, trigonometric polynomials and binary thresholds. Still, as we will see, these functions differ greatly in terms of their approximation power when we only consider efficient nets, i.e. nets with few layers and few vertices.

Our goal is to compare activation functions in terms of efficiency and quality of approximation. We measure efficiency by the size of the net (i.e. the number of vertices, not counting input units) and by its number of layers. Another resource of interest is the Lipschitz-bound of the net, which is a measure of its numerical stability. We say that net C has Lipschitz-bound L if all weights and thresholds of C are bounded in absolute value by L and, for each vertex v of C and all inputs x, y ∈ [-1,1]^m,

  |γ_v(s_v(x)) − γ_v(s_v(y))| ≤ L · |s_v(x) − s_v(y)|.

(Thus we do not demand that the activation function γ_v has Lipschitz-bound L, but only that γ_v has Lipschitz-bound L for the inputs it receives.) We measure the quality of an approximation of function f by function f_C by the Chebyshev norm, i.e. by the maximum distance between f and f_C over the input domain [-1,1]^m.

Let Γ be a class of activation functions. We are particularly interested in the following two questions:
• Given a function f : [-1,1]^m → R, how well can we approximate f by a Γ-net with d layers, size s, and Lipschitz-bound L? Thus, we are particularly interested in the behavior of the approximation error e(s,d) as a function of size and number of layers. This set-up allows us to investigate how much the approximation error decreases with increased size and/or number of layers.

• Given two classes of activation functions Γ_1 and Γ_2, when do Γ_1-nets and Γ_2-nets have essentially the same "approximation power" with respect to some error function e(s,d)?

We first formalize the notion of "essentially the same approximation power".

Definition 1.1 Let e : N^2 → R_+ be a function, and let Γ_1 and Γ_2 be classes of activation functions.
(a) We say that Γ_1 simulates Γ_2 with respect to e if and only if there is a constant k such that, for all functions f : [-1,1]^m → R with Lipschitz-bound 1/e(s,d): if f can be approximated by a Γ_2-net with d layers, size s, Lipschitz-bound 2^s and approximation error e(s,d), then f can also be approximated with error e(s,d) by a Γ_1-net with k(d+1) layers, size (s+1)^k and Lipschitz-bound 2^{s^k}.
(b) We say that Γ_1 and Γ_2 are equivalent with respect to e if and only if Γ_2 simulates Γ_1 with respect to e and Γ_1 simulates Γ_2 with respect to e.

In other words, when comparing the approximation power of activation functions, we allow size to increase polynomially and the number of layers to increase by a constant factor, but we insist on at least the same approximation error. Observe that we have linked the approximation error e(s,d) and the Lipschitz-bound of the function to be approximated. The reason is that approximations of functions with high Lipschitz-bound "tend" to have an inversely proportional approximation error. Moreover, observe that the Lipschitz-bounds of the involved nets are allowed to be exponential in the size of the net. We will see in Section 3 that for some activation functions far smaller Lipschitz-bounds suffice.

Below we discuss our results. In Section 2 we consider the case of tight approximations, i.e. e(s,d) = 2^{-s}. Then in Section 3 the more relaxed error model e(s,d) = s^{-d} is discussed. In Section 4 we consider the computation of boolean functions and show that sigmoidal nets can be far more efficient than threshold-nets.

2 Equivalence of Activation Functions for Error e(s,d) = 2^{-s}

We obtain the following result.

Theorem 2.1 The following activation functions are equivalent with respect to error e(s,d) = 2^{-s}:
• the standard sigmoid σ(x) = 1/(1 + exp(-x)),
• any rational function which is not a polynomial,
• any root x^α, provided α is not a natural number,
• the logarithm (for any base b > 1),
• the gaussian e^{-x^2},
• the radial basis functions (1 + x^2)^α, α < 1, α ≠ 0.

Notable exceptions from the list of functions equivalent to the standard sigmoid are polynomials, trigonometric polynomials and splines. We do obtain an equivalence to the standard sigmoid by allowing splines of degree s as activation functions for nets of size s. (We will always assume that splines are continuous with a single knot only.)

Theorem 2.2 Assume that e(s,d) = 2^{-s}. Then splines (of degree s for nets of size s) and the standard sigmoid are equivalent with respect to e(s,d).
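To make the net model from the introduction concrete before proceeding, here is a small self-contained sketch (our own illustration, not from the paper) of the Γ-net computation f_C: each vertex applies its activation γ_v to the weighted sum s_v(x) = Σ_i w_i y_i − t_v, and the sink outputs s_w(x) directly. For brevity the sketch restricts the unbounded fan-in circuit to layered connectivity.

```python
import math

def eval_gamma_net(layers, x):
    """Evaluate a layered Γ-net on input x (a list of reals in [-1, 1]).

    Each vertex is a tuple (weights, threshold, activation), where `weights`
    has one entry per value of the previous layer and `activation` is None
    at the sink (the sink outputs s_w(x) directly).
    """
    values = list(x)
    for layer in layers:
        nxt = []
        for weights, threshold, activation in layer:
            s = sum(w * y for w, y in zip(weights, values)) - threshold
            nxt.append(s if activation is None else activation(s))
        values = nxt
    assert len(values) == 1  # single sink
    return values[0]

# A 2-layer {sigmoid}-net with one hidden vertex and a linear sink:
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
net = [[([0.5, -1.0], 0.1, sigmoid)], [([2.0], 0.0, None)]]
print(eval_gamma_net(net, [0.3, -0.7]))
```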
Remark 2.1 (a) Of course, the equivalence of spline-nets and {σ}-nets also holds for binary input. Since threshold-nets can add and multiply m m-bit numbers with constantly many layers and size polynomial in m (Reif, 1987), threshold-nets can efficiently approximate polynomials and splines. Thus, we obtain that {σ}-nets with d layers, size s and Lipschitz-bound L can be simulated by nets of binary thresholds. The number of layers of the simulating threshold-net will increase by a constant factor and its size will increase by a polynomial in (s + n) log(L), where n is the number of input bits. (The inclusion of n accounts for the additional increase in size when approximately computing a weighted sum by a threshold-net.)
(b) If we allow size to increase by a polynomial in s + n, then threshold-nets and {σ}-nets are actually equivalent with respect to error bound 2^{-s}. This follows since a threshold function can easily be implemented by a sigmoidal gate (Maass et al., 1991). Thus, if we allow size to increase polynomially (in s + n) and the number of layers to increase by a constant factor, then {σ}-nets with weights that are at most exponential (in s + n) can be simulated by {σ}-nets with weights of size polynomial in s.

{σ}-nets and threshold-nets (respectively nets of linear splines) are not equivalent for analog input. The same applies to polynomials, even if we allow polynomials of degree s as activation function for nets of size s:

Theorem 2.3 (a) Let sq(x) = x^2. If a net of linear splines (with d layers and size s) approximates sq(x) over the interval [-1,1], then its approximation error will be at least s^{-O(d)}.
(b) Let abs(x) = |x|. If a polynomial net with d layers and size s approximates abs(x) over the interval [-1,1], then the approximation error will be at least s^{-O(d)}.

We will see in Theorem 2.5 that the standard sigmoid (and hence any activation function listed in Theorem 2.1) is capable of approximating sq(x) and abs(x) with error at most 2^{-s} by constant-layer nets of size polynomial in s. Hence the standard sigmoid is properly stronger than linear splines and polynomials. Finally, we show that sine and the standard sigmoid are inequivalent with respect to error 2^{-s}.

Theorem 2.4 The function sine(λx) can be approximated by a {σ}-net C_λ with d layers, size s = λ^{O(1/d)} and error at most s^{-O(d)}. On the other hand, every {σ}-net with d layers which approximates sine(λx) with error at most 1/2 has to have size at least λ^{O(1/d)}.

Below we sketch the proof of Theorem 2.1. The proof itself will actually be more instructive than the statement of Theorem 2.1. In particular, we will obtain a general criterion that allows us to decide whether a given activation function (or class of activation functions) has at least the approximation power of splines.

2.1 Activation Functions with the Approximation Power of Splines

Obviously, any activation function which can efficiently approximate polynomials and the binary threshold will be able to efficiently approximate splines. This follows since a spline can be approximated by the sum p + t·q with polynomials p and q and a binary threshold t. (Observe that we can approximate a product once we can approximately square: (x+y)^2/2 − x^2/2 − y^2/2 = x·y.) Firstly, we will see that any sufficiently smooth activation function is capable of approximating polynomials.
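As a quick sanity check of the squaring trick above (our own two-line illustration, not from the paper), the polarization identity recovers products exactly, so approximate squaring yields approximate multiplication with only a constant-factor blow-up:

```python
# (x + y)^2/2 - x^2/2 - y^2/2 == x*y
sq = lambda z: z * z
x, y = 0.37, -1.4
assert abs(sq(x + y) / 2 - sq(x) / 2 - sq(y) / 2 - x * y) < 1e-12
```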
Definition 2.1 Let γ : R → R be a function. We call γ suitable if and only if there exist real numbers a, β (a > 0) and an integer k such that
(a) γ can be represented by the power series Σ_{i=0}^∞ a_i (x − β)^i for all x ∈ [-a, a]. The coefficients are rationals of the form a_i = p_i/q_i with |p_i|, |q_i| ≤ 2^{i^k} (for i > 1).
(b) For each i ≥ 2 there exists j with i ≤ j ≤ i^k and a_j ≠ 0.

Proposition 2.1 Assume that γ is suitable with parameter k. Then, over the domain [-D, D], any degree-n polynomial p can be approximated with error ε by a {γ}-net C_p. C_p has 2 layers and size O(n^{2k}); its weights are rational numbers whose numerator and denominator are bounded in absolute value by P_max (2 + D)^{poly(n)} ‖γ^{(N+1)}‖_{[-a,a]}. Here we have assumed that the coefficients of p are rational numbers with numerator and denominator bounded in absolute value by P_max.

Thus, in order to have at least the approximation power of splines, a suitable activation function has to be able to approximate the binary threshold. This is achieved by the following function class.

Definition 2.2 Let Γ be a class of activation functions and let g : [1, ∞] → R be a function.
(a) We say that g is fast converging if and only if |g(x) − g(x + ε)| = O(ε/x^2) for x ≥ 1, ε ≥ 0, with 0 < ∫_1^∞ g(u^2) du < ∞ and |∫_{2N}^∞ g(u^2) du| = O(1/N) for all N ≥ 1.
(b) We say that Γ is powerful if and only if at least one function in Γ is suitable and there is a fast converging function g which can be approximated for all s ≥ 1 (over the domain [-2^s, 2^s]) with error 2^{-s} by a Γ-net with a constant number of layers, size polynomial in s and Lipschitz-bound 2^s.

Fast convergence can be checked easily for differentiable functions by applying the mean value theorem. Examples are x^{-α} for α ≥ 1, exp(-x) and σ(-x). Moreover, it is not difficult to show that each function mentioned in Theorem 2.1 is powerful. Hence Theorem 2.1 is a corollary of

Theorem 2.5 Assume that Γ is powerful.
(a) Γ simulates splines with respect to error e(s,d) = 2^{-s}.
(b) Assume that each activation function in Γ can be approximated (over the domain [-2^s, 2^s]) with error 2^{-s} by a spline-net N_s of size s and with constantly many layers. Then Γ is equivalent to splines.

Remark 2.2 Obviously, 1/x is powerful. Therefore Theorem 2.5 implies that constant-layer {1/x}-nets of size s approximate abs(x) = |x| with error 2^{-s}. The degree of the resulting rational function will be polynomial in s. Thus Theorem 2.5 generalizes Newman's approximation of the absolute value by rational functions (Newman, 1964).

3 Equivalence of Activation Functions for Error s^{-d}

The lower bounds in the previous section suggest that the relaxed error bound e(s,d) = s^{-d} is of importance. Indeed, it will turn out that many non-trivial smooth activation functions lead to nets that simulate {σ}-nets, provided the number of input units is counted when determining the size of the net. (We will see in Section 4 that linear splines and the standard sigmoid are not equivalent if the number of inputs is not counted.) The concept of the threshold-property will be crucial for us.
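To illustrate Remark 2.2 above numerically, here is our own quick check of Newman's classical construction: with ξ = exp(−1/√n) and p(x) = Π_{k=0}^{n−1}(x + ξ^k), the rational function r(x) = x·(p(x) − p(−x))/(p(x) + p(−x)) approximates |x| on [−1, 1] with error decaying roughly like exp(−√n).

```python
import numpy as np

def newman_abs(x, n):
    """Newman's degree-n rational approximation of |x| on [-1, 1]."""
    xi = np.exp(-1.0 / np.sqrt(n))
    nodes = xi ** np.arange(n)
    p = lambda t: np.prod([t + node for node in nodes], axis=0)
    return x * (p(x) - p(-x)) / (p(x) + p(-x))

xs = np.linspace(-1, 1, 100001)
for n in (9, 25, 64):
    err = np.max(np.abs(np.abs(xs) - newman_abs(xs, n)))
    print(n, err)  # the error shrinks roughly like exp(-sqrt(n))
```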
Definition 3.1 Let Γ be a collection of activation functions. We say that Γ has the threshold-property if there is a constant c such that the following two properties are satisfied for all m ≥ 1.
(a) For each γ ∈ Γ there is a threshold-net T_{γ,m} with c layers and size (s + m)^c which computes the binary representation of γ'(x), where |γ(x) − γ'(x)| ≤ 2^{-m}. The input x of T_{γ,m} is given in binary and consists of 2m + 1 bits; m bits describe the integral part of x, m bits describe its fractional part, and one bit indicates the sign. s + m specifies the required number of output bits, i.e. s = ⌈log_2(sup{γ(x) : -2^{m+1} ≤ x ≤ 2^{m+1}})⌉.
(b) There is a Γ-net with c layers, size m^c and Lipschitz-bound 2^{m^c} which approximates the binary threshold over D = [-1,1] − [-1/m, 1/m] with error 1/m.

We can now state the main result of this section.

Theorem 3.1 Assume that e(s,d) = s^{-d}.
(a) Let Γ be a class of activation functions and assume that Γ has the threshold-property. Then σ and Γ are equivalent with respect to e. Moreover, {σ}-nets only require weights and thresholds of absolute value at most s. (Observe that Γ-nets are allowed to have weights as large as 2^s!)
(b) If Γ and σ are equivalent with respect to error 2^{-s}, then Γ and σ are equivalent with respect to error s^{-d}.
(c) Additionally, the following classes are equivalent to {σ}-nets with respect to e. (We assume throughout that all coefficients, weights and thresholds are bounded by 2^s for nets of size s.)
• polynomial nets (i.e. polynomials of degree s appear as activation function for nets of size s),
• {γ}-nets, where γ is a suitable function satisfying part (a) of Definition 3.1 (this includes the sine-function),
• nets of linear splines.

The equivalence proof involves a first phase of extracting O(d log s) bits from the analog input. In a second phase, a binary computation is mimicked. The extraction process can be carried out with error s^{-1} (over the domain [-1,1] − [-1/s, 1/s]) once the binary threshold is approximated.

4 Computing boolean functions

As we have seen in Remark 2.1, the binary threshold (respectively linear splines) gains considerable power when computing boolean functions as compared to approximating analog functions. But sigmoidal nets will be far more powerful when only the number of neurons is counted and the number of input units is disregarded. For instance, sigmoidal nets are far more efficient for "squaring", i.e. when computing

  M_n = {(x, y) : x ∈ {0,1}^n, y ∈ {0,1}^{n^2} and [x]^2 ≥ [y]}, where [z] = Σ_i z_i.

Theorem 4.1 A threshold-net computing M_n must have size at least Ω(log n). But M_n can be computed by a σ-net with constantly many gates.

The previously best known separation of threshold-nets and sigmoidal nets is due to Maass, Schnitger and Sontag (Maass et al., 1991). But their result only applies to threshold-nets with at most two layers; our result holds without any restriction on the number of layers. Theorem 4.1 can be generalized to separate threshold-nets and 3-times differentiable activation functions, but this smoothness requirement is more severe than the one assumed in (Maass et al., 1991).

5 Conclusions

Our results show that good approximation performance (for error 2^{-s}) hinges on two properties, namely efficient approximation of polynomials and efficient approximation of the binary threshold. These two properties are shared by a quite large class of activation functions, i.e. powerful functions. Since (non-polynomial) rational functions are powerful, we were able to generalize Newman's approximation of |x| by rational functions. On the other hand, for a good approximation performance relative to the relaxed error bound s^{-d} it is already sufficient to efficiently approximate the binary threshold.
Consequently, the class of equivalent activation functions grows considerably (but only if the number of input units is counted). The standard sigmoid is distinguished in that its approximation performance scales with the error bound: if larger error is allowed, then smaller weights suffice. Moreover, the standard sigmoid is actually more powerful than the binary threshold even when computing boolean functions. In particular, the standard sigmoid is able to take advantage of its (non-trivial) smoothness to allow for more efficient nets.

Acknowledgements. We wish to thank R. Paturi, K. Y. Siu and V. P. Roychowdhury for helpful discussions. Special thanks go to W. Maass for suggesting this research, to E. Sontag for continued encouragement and very valuable advice, and to J. Lambert for his never-ending patience. The second author gratefully acknowledges partial support by NSF-CCR-9114545.

References
Arai, W. (1989), Mapping abilities of three-layer networks, in "Proc. of the International Joint Conference on Neural Networks", pp. 419-423.
Carrol, S. M., and Dickinson, B. W. (1989), Construction of neural nets using the Radon transform, in "Proc. of the International Joint Conference on Neural Networks", pp. 607-611.
Cybenko, G. (1989), Approximation by superposition of a sigmoidal function, Mathematics of Control, Signals, and Systems, 2, pp. 303-314.
Funahashi, K. (1989), On the approximate realization of continuous mappings by neural networks, Neural Networks, 2, pp. 183-192.
Gallant, A. R., and White, H. (1988), There exists a neural network that does not make avoidable mistakes, in "Proc. of the International Joint Conference on Neural Networks", pp. 657-664.
Hornik, K., Stinchcombe, M., and White, H. (1989), Multilayer feedforward networks are universal approximators, Neural Networks, 2, pp. 359-366.
Irie, B., and Miyake, S. (1988), Capabilities of the three-layered perceptrons, in "Proc. of the International Joint Conference on Neural Networks", pp. 641-648.
Lapades, A., and Farbar, R. (1987), How neural nets work, in "Advances in Neural Information Processing Systems", pp. 442-456.
Maass, W., Schnitger, G., and Sontag, E. (1991), On the computational power of sigmoid versus boolean threshold circuits, in "Proc. of the 32nd Annual Symp. on Foundations of Computer Science", pp. 767-776.
Newman, D. J. (1964), Rational approximation to |x|, Michigan Math. Journal, 11, pp. 11-14.
Hecht-Nielson, R. (1989), Theory of backpropagation neural networks, in "Proc. of the International Joint Conference on Neural Networks", pp. 593-611.
Poggio, T., and Girosi, F. (1989), A theory of networks for approximation and learning, Artificial Intelligence Memorandum, no. 1140.
Reif, J. H. (1987), On threshold circuits and polynomial computation, in "Proceedings of the 2nd Annual Structure in Complexity Theory", pp. 118-123.
Wei, Z., Yinglin, Y., and Qing, J. (1991), Approximation property of multi-layer neural networks (MLNN) and its application in nonlinear simulation, in "Proc. of the International Joint Conference on Neural Networks", pp. 171-176.
Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter

Yi Xu†, Qihang Lin‡, Tianbao Yang†
† Department of Computer Science, The University of Iowa, Iowa City, IA 52242, USA
‡ Department of Management Sciences, The University of Iowa, Iowa City, IA 52242, USA
{yi-xu, qihang-lin, tianbao-yang}@uiowa.edu

Abstract

Error bound, an inherent property of an optimization problem, has recently been revived in the development of algorithms with improved global convergence without strong convexity. The most studied error bound is the quadratic error bound, which generalizes strong convexity and is satisfied by a large family of machine learning problems. The quadratic error bound has been leveraged to achieve linear convergence in many first-order methods, including the stochastic variance reduced gradient (SVRG) method, which is one of the most important stochastic optimization methods in machine learning. However, the studies along this direction face the critical issue that the algorithms must depend on an unknown growth parameter (a generalization of the strong convexity modulus) in the error bound. This parameter is difficult to estimate exactly, and algorithms choosing this parameter heuristically do not have a theoretical convergence guarantee. To address this issue, we propose novel SVRG methods that automatically search for this unknown parameter on the fly of optimization while still obtaining almost the same convergence rate as when this parameter is known. We also analyze the convergence property of SVRG methods under the Hölderian error bound, which generalizes the quadratic error bound.

1 Introduction

Finite-sum optimization problems have broad applications in machine learning, including regression by minimizing the (regularized) empirical square losses and classification by minimizing the (regularized) empirical logistic losses. In this paper, we consider the following finite-sum problem:

  min_{x∈Ω} F(x) := (1/n) Σ_{i=1}^n f_i(x) + ψ(x),   (1)

where each f_i(x) is a continuously differentiable convex function whose gradient is Lipschitz continuous and ψ(x) is a proper, lower-semicontinuous convex function [24]. Traditional proximal gradient (PG) methods or accelerated proximal gradient (APG) methods for solving (1) become prohibitive when the number of components n is very large, which has spurred many studies on developing stochastic optimization algorithms with fast convergence [4, 8, 25, 1]. An important milestone among several others is the stochastic variance reduced gradient (SVRG) method [8] and its proximal variant [26]. Under strong convexity of the objective function F(x), linear convergence of SVRG and its proximal variant has been established. Many variations of SVRG have also been proposed [2, 1]. However, the key assumption of strong convexity limits the power of SVRG for many interesting problems in machine learning without strong convexity. For example, in regression with high-dimensional data one is usually interested in solving the least-squares regression with an ℓ1 norm regularization or constraint (known as the LASSO-type problem). A common practice for solving non-strongly convex finite-sum problems by SVRG is to add a small strongly convex regularizer (e.g., (λ/2)‖x‖₂²) [26]. Recently, a variant of SVRG (named SVRG++ [2]) was
However, these approaches only have sublinear convergence (e.g., requiring a O(1/) iteration complexity to achieve an -optimal solution). Promisingly, recent studies on optimization showed that leveraging the quadratic error bound (QEB) condition can open a new door to the linear convergence without strong convexity [9, 20, 6, 30, 5, 3]. The problem (1) obeys the QEB condition if the following holds: kx ? x? k2 ? c(F (x) ? F (x? ))1/2 , ?x ? ?, (2) where x? denotes the closest optimal solution to x and ? is usually a compact set. Indeed, the aforementioned LASSO-type problems satisfy the QEB condition. It is worth mentioning that the above condition (or similar conditions) has been explored extensively and has different names in the literature, e.g., the second-order growth condition, the weak strong convexity [20], essential strong convexity [13], restricted strong convexity [31], optimal strong convexity [13], semi-strong convexity [6]. Interestingly, [6, 9] have showed that SVRG can enjoy a linear convergence under the QEB condition. However, the issue is that SVRG requires to know the parameter c (analogous to the strong convexity parameter) in the QEB for setting the number of iterations of inner loops, which is usually unknown and difficult to estimate. A naive trick for setting the number of iterations of inner loops to a certain multiplicative factor (e.g., 2) of the number of components n is usually sub-optimal and worrisome because it may not be large enough for bad conditioned problems or it could be too large for good conditioned problems. In the former case, the algorithm may not converge as the theory indicates and in the latter case, too many iterations may be wasted for inner loops. To address this issue, we develop a new variant of SVRG that embeds an efficient automatic search step for c into the optimization. The challenge for developing such an adaptive variant of SVRG is that one needs to develop an appropriate machinery to check whether the current value of c is large enough. One might be reminded of some restarting procedure for searching the unknown strong convexity parameter in APG methods [21, 11]. However, there are several differences that make the development of such a search scheme much more daunting for SVRG than for APG. The first difference is that, although SVRG has a lower per-iteration cost than APG, it also makes smaller progress towards the optimality after each iteration, which provides less information on the correctness of the current c. The second difference lies at that the SVRG is inherently stochastic, making the analysis for bounding the number of search steps much more difficult. To tackle this challenge, we propose to perform the proximal gradient updates occasionally at the reference points in SVRG where the full gradient is naturally computed. The normal of the proximal gradient provides a probabilistic ?certificate" for checking whether the value of c is large enough. We then provide a novel analysis to bound the expected number of search steps with a consideration that the probabilistic ?certificate" might fail with some probability. The final result shows that the new variant of SVRG enjoys a linear convergence under the QEB condition with unknown c and the corresponding complexity is only worse by a logarithmic factor than that in the setting where the parameter c is assumed to be known. 
Besides the QEB condition, we also consider more general error bound conditions (a.k.a. the Hölderian error bound (HEB) conditions [3]), whose definition is given below, and develop adaptive variants of SVRG under the HEB condition with θ ∈ (0, 1/2) that enjoy intermediate faster convergence rates than SVRG under only the smoothness assumption (e.g., SVRG++ [2]). It turns out that the adaptive variants of SVRG under HEB with θ < 1/2 are simpler than that under the QEB.

Definition 1 (Hölderian error bound (HEB)). Problem (1) is said to satisfy a Hölderian error bound condition on a compact set Ω if there exist θ ∈ (0, 1/2] and c > 0 such that for any x ∈ Ω

  ‖x − x*‖₂ ≤ c (F(x) − F*)^θ,   (3)

where x* denotes the closest optimal solution to x.

It is notable that the above inequality always holds for θ = 0 on a compact set Ω. Therefore the discussion in the paper regarding the HEB condition also applies to the case θ = 0. In addition, if a HEB condition with θ ∈ (1/2, 1] holds, we can always reduce it to the QEB condition provided that F(x) − F* is bounded over Ω. However, we are not aware of any interesting examples of (1) for such cases. We defer several machine learning examples satisfying the HEB conditions with explicit θ ∈ (0, 1/2] to Section 5, and refer the reader to [29, 28, 27, 14] for more examples.

2 Related work

The use of error bound conditions in optimization for deriving fast convergence dates back to [15, 16, 17], where the (local) error bound condition bounds the distance of a point in the local neighborhood of the optimal solution to the optimal set by a multiple of the norm of the proximal gradient at that point. Based on their local error bound condition, they derived local linear convergence for descent methods (e.g., proximal gradient methods). Several recent works have established the same local error bound conditions for several interesting problems in machine learning [7, 32, 33]. Hölderian error bound (HEB) conditions have been studied extensively in variational analysis [10] and recently revived in optimization for developing fast convergence of optimization algorithms. Many studies have leveraged the QEB condition in place of the strong convexity assumption to develop fast convergence (e.g., linear convergence) of many optimization algorithms (e.g., the gradient method [3], the proximal gradient method [5], the accelerated gradient method [20], coordinate descent methods [30], randomized coordinate descent methods [9, 18], subgradient methods [29, 27], primal-dual style methods [28], etc.). This work is closely related to several recent studies showing that SVRG methods can also enjoy linear convergence for finite-sum (composite) smooth optimization problems under the QEB condition [6, 9, 12]. However, these approaches all require knowing the growth parameter in the QEB condition, which is unknown in many practical problems. It is worth mentioning that several recent studies have also noticed the similar issue in SVRG-type methods that the strong convexity constant is unknown, and suggested practical heuristics for either stopping the inner iterations early or restarting the algorithm [2, 22, 19]. Nonetheless, no theoretical convergence guarantee is provided for the suggested heuristics. Our work is also related to studies that focus on searching for the unknown strong convexity parameter in accelerated proximal gradient (APG) methods [21, 11], but with striking differences as mentioned before. Recently, Liu & Yang [14] considered the HEB for composite smooth optimization problems and developed an adaptive restarting accelerated gradient method without knowing the constant c in the HEB. As we argued before, their analysis cannot be trivially extended to SVRG.
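As a toy illustration of Definition 1 (our own example, not from the paper): for the one-dimensional function F(x) = |x|^p with even p, we have x* = 0 and |x − x*| = (F(x) − F*)^{1/p}, i.e., HEB holds with θ = 1/p and c = 1. The check below verifies this numerically.

```python
import numpy as np

p = 4                      # F(x) = |x|^p, minimized at x* = 0 with F* = 0
xs = np.linspace(-1, 1, 2001)
F = np.abs(xs) ** p
theta, c = 1.0 / p, 1.0
# HEB (Definition 1): |x - x*| <= c * (F(x) - F*)^theta on the compact set [-1, 1]
assert np.all(np.abs(xs) <= c * F ** theta + 1e-12)
```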
3 SVRG under the HEB condition in the oracle setting

In this section, we present SVRG methods under the HEB condition in the oracle setting, assuming that the parameter c is given. We first give some notation. Denote by L_i the smoothness constant of f_i, i.e., for all x, y ∈ Ω,

  f_i(x) ≤ f_i(y) + ⟨∇f_i(y), x − y⟩ + (L_i/2)‖x − y‖₂².

It implies that f(x) := (1/n) Σ_{i=1}^n f_i(x) is also a continuously differentiable convex function whose gradient is L_f-Lipschitz continuous, where L_f ≤ (1/n) Σ_{i=1}^n L_i. For simplicity, we can take L_f = (1/n) Σ_{i=1}^n L_i. In the sequel, we let L := max_i L_i and assume that it is given or can be estimated for the problem. Denote by Ω* the optimal set of problem (1), and let F* = min_{x∈Ω} F(x). The detailed steps of SVRG under the HEB condition are presented in Algorithm 1. The formal guarantee of SVRG-HEB is given in the following theorem.

Theorem 2. Suppose problem (1) satisfies the HEB condition with θ ∈ (0, 1/2] and F(x₀) − F* ≤ ε₀, where x₀ is an initial solution. Let η = 1/(36L) and T₁ ≥ 81Lc² (1/ε₀)^{1−2θ}. Algorithm 1 ensures

  E[F(x̄^(R)) − F*] ≤ (1/2)^R ε₀.   (4)

In particular, by running Algorithm 1 with R = ⌈log₂(ε₀/ε)⌉, we have E[F(x̄^(R)) − F*] ≤ ε, and the computational complexity for achieving an ε-optimal solution in expectation is O(n log(ε₀/ε) + Lc² max{1/ε^{1−2θ}, log(ε₀/ε)}).

Remark: We make several remarks about Algorithm 1 and the results in Theorem 2. First, the constant factors in η and T₁ should not be taken literally, because we have made no effort to optimize them. Second, when θ = 1/2 (i.e., the QEB condition holds), Algorithm 1 reduces to the standard SVRG method under strong convexity, and the iteration complexity becomes O((n + Lc²) log(ε₀/ε)), which is the same as that of the standard SVRG with Lc² mimicking the condition number of the problem. Third, when θ = 0 (i.e., with only the smoothness assumption), Algorithm 1 reduces to SVRG++ [2] with one difference: in SVRG-HEB the initial point and the reference point for each outer loop are the same, whereas they are different in SVRG++; the iteration complexity of SVRG-HEB becomes O(n log(ε₀/ε) + Lc²/ε), which is similar to that of SVRG++. Fourth, for intermediate θ ∈ (0, 1/2) we can obtain faster convergence than SVRG++. Lastly, note that the number of iterations for each outer loop depends on the parameter c in the HEB condition.
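For a concrete intermediate instance (our own arithmetic, not a claim made in the paper): plugging θ = 1/4 into Theorem 2 gives 1 − 2θ = 1/2, so the complexity bound specializes to

  O( n log(ε₀/ε) + Lc² max{ 1/ε^{1/2}, log(ε₀/ε) } ),

which sits strictly between the O(Lc²/ε) dependence of the θ = 0 (SVRG++) case and the purely logarithmic dependence of the θ = 1/2 (QEB) case.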
The proof the Theorem 2 is simply built on previous analysis of SVRG and is deferred to the supplement. 4 Adaptive SVRG under the HEB condition in the dark setting In this section, we will present adaptive variants of SVRGHEB that can be run in the dark setting, i.e, without assuming c is known. We first present the variant for ? < 1/2, which is simple and can help us understand the difficulty for ? = 1/2. 4.1 Adaptive SVRG for ? ? (0, 1/2) An issue of SVRGHEB is that when c is unknown the initial number of iterations T1 in Algorithm 1 is difficult to estimate . A small value of T1 may not guarantee SVRGHEB converges as Theorem 2 indicates. To address this issue, we can use the restarting trick, i.e, restarting SVRGHEB with an increasing sequences of T1 . The steps are shown in Algorithm 2. We can start with a small value of c0 , which is not necessarily larger than c. If c0 is larger than c, the first call of SVRGHEB will yield an -optimal solution as Theorem 2 indicates. Below, we assume that c0 ? c. Theorem 3. Suppose problem (1) satisfies the HEB with ? ? (0, 1/2) and F (x0 ) ? F? ? 0 , where (1) 1?2? 0 x0 is an initial solution. Let c0 ? c, dlog2 0 e and T1 = 81Lc20 (1/0 ) . Then l? 2,R  =m with at most a total number of S = 1 1 2 ?? log2 c c0 + 1 calls of SVRGHEB in Algorithm 2, we find a solution x(S) such that E[F (x(S) ) ? F?] ? . The computaional complexity of SVRGHEB-RS for  obtaining such an -optimal solution is O n log(0 /) log(c/c0 ) + Lc2 1?2? . Remark: The proof is in the supplement. We can see that Algorithm 2 cannot be applied to ? = 1/2, (s) which gives a constant sequence of T1 and therefore cannot provide any convergence guarantee for a small value of c0 < c. We have to develop a different variant for tackling ? = 1/2. A minor point of worth mentioning is that if necessary we can stop Algorithm 2 appropriately by performing a proximal gradient update at x(s) (whose full gradient will be computed for the next stage) and checking if the proximal gradient?s Euclidean norm square is less than a predefined level (c.f. (7)). 4 Algorithm 3 SVRG method under QEB with Restarting and Search: SVRGQEB-RS 1: Input: x ?(0) ? ?, an initial value c0 > 0,  > 0, ? = 1/ log(1/) and ? ? (0, 1). 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 4.2 x ?(0) = arg minx?? h?f (? x0 ), x ? x ?0 i + L2 kx ? x ?0 k22 + ?(x), s = 0 (s) (s) 2 while k? x ?x ? k2 >  do Set Rs and Ts = d81Lc2s e as in Lemma 2 x ?(s+1) =SVRGHEB (? x(s) , Ts , Rs , 0.5) (s+1) x ? = arg minx?? h?f (? x(s+1) ), x ? x ?(s+1) i + L2 kx ? x ?(s+1) k22 + ?(x) cs+1 = cs if k? x(s+1) ? ?x ?(s+1) k2 ? ?k? x(s) ? x ?(s) k2 then cs+1 = 2cs , x ?(s+1) = x ?(s) , x ?(s+1) = x ?(s) end if s=s+1 end while Output: x ?(s) Adaptive SVRG for ? = 1/2 In light of the value of T1 in Theorem 2 for ? = 1/2, i.e., T1 = d81Lc2 e, one might consider to start with a small value for c and then increase its value by a constant factor at certain points in order to increase the value of T1 . But the challenge is to decide when we should increase the value of c. If one follows a similar procedure as in Algorithm 2, we may end up with a worse iteration complexity. To tackle this challenge, we need to develop an appropriate machinery to check whether the value of c is already large enough for SVRG to decrease the objective value. However, we cannot afford the cost for computing the objective value due to large n. To this end, we develop a ?certificate? that can be easily verified and can act as signal for a sufficient decrease in the objective value. 
The developed certificate is motivated by a property of proximal gradient update under the QEB as shown in (5). Lemma 1. Let x ? = arg minx?? h?f (? x), x? x ?i+ L2 kx? x ?k22 +?(x). Then under the QEB condition of the problem (1), we have F (? x) ? F? ? (L + Lf )2 c2 k? x?x ?k22 . (5) The above lemma indicates that we can perform a proximal gradient update at a point x ? and use k? x?x ?k2 as a gauge for monitoring the decrease in the objective value. However, the proximal gradient update is too expensive to compute due to the computation of full gradient ?f (? x). Luckily, SVRG allows to compute the full gradient at a small number of reference points. We propose to leverage these full gradients to conduct the proximal gradient updates and develop the certificate for searching the value of c. The detailed steps of the proposed algorithm are presented in Algorithm 3 to which we refer as SVRGQEB-RS . Similar to SVRGHEB-RS , SVRGQEB-RS also calls SVRGHEB for multiple stages. We conduct the proximal gradient update at the returned solution of each SVRGHEB , which also serves as the initial solution and the initial reference point for the next stage of SVRGHEB when our check in Step 7 fails. At each stage, at most Rs + 1 full gradients are computed, where Rs is a logarithmic number as revealed later. Step 7 - Step 11 in Algorithm 3 are considered as our search step for searching the value of c. We will show that, if cs is larger than c, the condition in Step 7 is true with small probability. This can be seen from the following lemma. Lemma 2. Suppose problem (1) satisfies the QEB condition. Let G0 ? G1 . . . ? Gs . . . be a filtration with the sigma algebra Gs generated byl all random before line 4 of stage s of Algorithm 3.  2 events m 2cs (L+Lf )2 1 Let ? = 36L , Ts = d81Lc2s e, Rs = log2 . Then for any ? ? (0, 1), we have ?2 ?L   Pr k? x(s+1) ? x ?(s+1) k2 ? ?k? x(s) ? x ?(s) k2 Gs , cs ? c ? ?. 2 Proof. By Lemma 1, we have F (? x(s) ) ? F? ? (L + Lf ) c2 k? x(s) ? x ?(s) k22 for all s. Below we consider stages such that cs ? c. Following Theorem 2 and the above inequality, when Ts = d81Lc2s e ? d81Lc2 e, we have 2 E[F (? x(s+1) ) ? F? |Gs ] ? 0.5Rs (F (? x(s) ) ? F? ) ? 0.5Rs (L + Lf ) c2 k? x(s) ? x ?(s) k22 . (s+1) (6) Moreover, the smoothness of f (x) and the definition of x ? imply (see Lemma 4 in the supplemnt). L F (? x(s+1) ) ? F? ? k? x(s+1) ? x ?(s+1) k22 . (7) 2 5 By combining (7) and (6) and using Markov inequality, we have   2 L (s+1) 0.5Rs (L + Lf ) c2 k? x(s) ? x ?(s) k22 (s+1) 2 Pr k? x . ?x ? k2 ? |Gs ? 2  2 (s) If we choose  = ? Lk?x conclusion follows. ?? x(s) k2 2 in the inequality above and let Rs defined as in the assumption, the Theorem 4. Under the same conditions as in Lemma 2 with ? = 1/ log(1/), the expected computational complexity of SVRGQEB-RS for finding an -optimal solution is at most   2  (0)       c (L + Lf )2 1 k? x ?x ?(0) k22 c 2 O (Lc + n) log2 log log1/?2 + log2 . ?2 L   c0 x(s+1) ? x ?(s+1) k2 < ?k? x(s) ? x ?(s) k2 ; Proof. We call stage s with s = 0, 1, . . . a successful stage if k? (s) (s) 2 otherwise, the stages is called anunsuccessful stage. The condition k? x ?x ? k2 ?  will hold k? x(0) ?? x(0) k2 2 after S1 := log1/?2 successful stages and then Algorithm 3 will stop. Let S denote  the total number of stages when the algorithm stops. Although stage s = S ? 1 is the last stage, for the convenience in the proof, we still define stage s = S as a post-termination stage where no computation is performed. In stage s with 0 ? s ? S ? 
1, the computational complexity is proportional to the number of stochastic gradient computations (#SGC), which is Ts Rs + n(Rs + 1) ? (Ts + 2n)Rs . If stage s is successful, then Rs+1 = Rs and Ts+1 = Ts . If stage s is unsuccessful, then Rs+1 = Rs + 1 ? 2Rs and Ts+1 = 2Ts so that Rs+1 Ts+1 ? 4Rs Ts . In either case, Rs and Ts are non-decreasing. Note that, after S2 := d2 log2 (c/c0 )e unsuccessful stages, we will have cs ? c. We will consider two scenarios: (I) the algorithm stops with cS < c and (I) the algorithm stops with cS ? c. In the first scenario, we have S1 successful stages and at most S2 unsuccessfully stages so that S? all stages by (S1+ S2 )(TS?1 + 2n)RS?1 ? hS1 + S2 and cS < c.The #SGC ofi   can be bounded O log2 ( cc0 ) + log1/?2 k? x(0) ?? x(0) k22  log2 2c2 (L+Lf )2 ?2 ?L (Lc2 + n) . Then, we consider the second scenario. Let s? be the first stage with cs ? c, i.e., s? := min{s|cs ? ? c}. It is easy to see that cs? < 2c and there are S2 unsuccessful and less than S1 successful stages  #SGC in any stage before s? is bounded by (Ts? + 2n)Rs? ?  before stages?. 2 Since 2the 8c (L+Lf ) 2 , the total #SGC in stages 0, 1, . . . , s??1 is at most (S1 +S2 )(Ts? + O (Lc + n) log2 ?2 ?L  (0) (0) 2 i h  2   2c (L+Lf )2 k? x ?? x k2 2 2n)Rs? ? O log2 ( cc0 ) + log1/?2 log (Lc + n) . 2 2  ? ?L Next, we bound the total #SGC in stages s?, s? + 1, . . . , S. In the rest of the proof, we consider stage s with s? ? s ? S. We define C(? x, x ?, i, j, s) as the expected #SGC in stages s, s+1, . . . , S, conditioning on that the initial state of stage s are x ?(s) = x ? and x ?(s) = x ? and the numbers of successful and unsuccessful stages before stage s are i and j, respectively. Note that s = i + j. Because stage s depends on the historical path only through the state variables (? x, x ?, i, j, s), C(? x, x ?, i, j, s) is well defined and (? x, x ?, i, j, s) transits in a Markov chain with the next state being (? x, x ?, i, j + 1, s + 1) if stage s does not succeed and being (? x+ , x ?+ , i+1, j, s+1) if stage s succeeds, where x ?+ =SVRGHEB (? x, L 2 Ts , Rs , 0.5) and x ?+ = arg minx?? h?f (? x+ ), x ? x ?+ i + 2 kx ? x ?+ k2 + ?(x). In the next, we will use backward induction to derive an upper bound for C(? x, x ?, i, j, s) that only depends on i and j but not on s, x ? and x ?. In particular, we want to show that 4j?S2 (Ts? + 2n)Rs? Ai , for i ? 0, j ? 0, i + j = s, s ? s?, 1 ? 4? PS1 ?i?1  1?? r if 0 ? i ? S1 ? 1 and Ai := 0 if i = S1 . where Ai := r=0 1?4? C(? x, x ?, i, j, s) ? (8) We start with the base case where i = S1 . By definitions, the only stage with i = S1 is the posttermination stage, namely, stage s = S. In this case, C(? x, x ?, i, j, s) = 0 since stage S performs no computation. Then, (8) holds trivially with Ai = 0. 6 Suppose i < S1 and (8) holds for i + 1, i + 2, . . . , S1 . We want to prove it also holds i. We define X = X(? x, x ?, i, j, s) as the random variable that equals the number of unsuccessful stages from stage s (including stage s) to the first successful stage among stages s, s + 1, s + 2, . . . , S ? 1, conditioning on s ? s? and the state variables at the beginning of stage s are (? x, x ?, i, j, s). Note that X = 0 means stage s is successful. For simplicity of notation, we use Pr(?) to represent the conditional probability Pr(?|s ? s?, (? x, x ?, i, j, s)). Since cs ? cs? ? c for s ? s?, we can show by Lemma 2 that 1 hQ i r?1 Pr(X = r) = t=0 Pr(X ? t + 1|X ? t) Pr(X = r|X ? r), Pr(X ? r + 1|X ? r) = Pr(s + r fails |stages s, s + 1, . . . , s + r ? 
$$\Pr(X = r) = \Big[\prod_{t=0}^{r-1}\Pr(X \ge t+1 \,|\, X \ge t)\Big]\Pr(X = r \,|\, X \ge r),$$
$$\Pr(X \ge r+1 \,|\, X \ge r) = \Pr(\text{stage } s+r \text{ fails} \,|\, \text{stages } s, s+1, \dots, s+r-1 \text{ fail}) \le \delta, \qquad (9)$$
$$\Pr(X = r \,|\, X \ge r) = \Pr(\text{stage } s+r \text{ succeeds} \,|\, \text{stages } s, s+1, \dots, s+r-1 \text{ fail}) = 1 - \Pr(X \ge r+1 \,|\, X \ge r) \ge 1 - \delta.$$
¹We follow the convention that $\prod_{t=i}^{j} = 1$ if $j < i$.

When $X = r$, the #SGC from stage $s$ to the end of the algorithm is $\sum_{t=0}^{r}(T_{s+t}+2n)R_{s+t} + \mathbb{E}\,C(\tilde{x}^+, \bar{x}^+, i+1, j+r, s+r+1)$, where $\mathbb{E}$ denotes the expectation over $\tilde{x}^+$ and $\bar{x}^+$ conditioning on $(\tilde{x}, \bar{x})$, with $\tilde{x}^+ = \text{SVRG}^{\text{HEB}}(\tilde{x}, T_{s+r}, R_{s+r}, 0.5)$ and $\bar{x}^+ = \arg\min_{x\in\Omega}\langle\nabla f(\tilde{x}^+), x-\tilde{x}^+\rangle + \frac{L}{2}\|x-\tilde{x}^+\|_2^2 + \psi(x)$. Since stages $s, s+1, \dots, s+r-1$ are unsuccessful, we have $(T_{s+t}+2n)R_{s+t} \le 4^t (T_s+2n)R_s \le 4^{j+t-S_2}(T_{\bar{s}}+2n)R_{\bar{s}}$ for $t = 0, 1, \dots, r$. Because (8) holds for $i+1$ and for any $\tilde{x}^+$ and $\bar{x}^+$, we have
$$C(\tilde{x}^+, \bar{x}^+, i+1, j+r, s+r+1) \le \frac{4^{j+r-S_2}(T_{\bar{s}}+2n)R_{\bar{s}}\,A_{i+1}}{1-4\delta}. \qquad (10)$$
Based on the above inequality and the connection between $C(\tilde{x}, \bar{x}, i, j, s)$ and $C(\tilde{x}^+, \bar{x}^+, i+1, j+r, s+r+1)$, we can prove that (8) holds for $i, j, s$:
$$C(\tilde{x}, \bar{x}, i, j, s) = \sum_{r=0}^{\infty}\Pr(X=r)\Big( \sum_{t=0}^{r}(T_{s+t}+2n)R_{s+t} + \mathbb{E}\,C(\tilde{x}^+, \bar{x}^+, i+1, j+r, s+r+1) \Big)$$
$$\le \sum_{r=0}^{\infty}\Pr(X=r)\Big( \sum_{t=0}^{r}4^{j+t-S_2}(T_{\bar{s}}+2n)R_{\bar{s}} + \frac{4^{j+r-S_2}(T_{\bar{s}}+2n)R_{\bar{s}}\,A_{i+1}}{1-4\delta} \Big)$$
$$= 4^{j-S_2}(T_{\bar{s}}+2n)R_{\bar{s}} \sum_{r=0}^{\infty}\Pr(X=r)\Big( \sum_{t=0}^{r}4^{t} + \frac{4^{r}A_{i+1}}{1-4\delta} \Big)$$
$$= 4^{j-S_2}(T_{\bar{s}}+2n)R_{\bar{s}} \sum_{r=0}^{\infty}\Big[\prod_{t=0}^{r-1}\Pr(X\ge t+1\,|\,X\ge t)\Big]\Pr(X=r\,|\,X\ge r)\Big( \frac{4^{r+1}-1}{3} + \frac{4^{r}A_{i+1}}{1-4\delta} \Big).$$

Since $1-\delta \ge \frac{1}{4}$, for any $a \ge 0$ and any $b \ge a+1$ we have
$$\frac{4^{a+1}-1}{3} + \frac{4^{a}A_{i+1}}{1-4\delta} \le (1-\delta)\Big( \frac{4^{a+2}-1}{3} + \frac{4^{a+1}A_{i+1}}{1-4\delta} \Big) \le \Pr(X=a+1\,|\,X\ge a+1)\Big( \frac{4^{a+2}-1}{3} + \frac{4^{a+1}A_{i+1}}{1-4\delta} \Big)$$
$$\le \sum_{r=a+1}^{b}\Big[\prod_{t=a+1}^{r-1}\Pr(X\ge t+1\,|\,X\ge t)\Big]\Pr(X=r\,|\,X\ge r)\Big( \frac{4^{r+1}-1}{3} + \frac{4^{r}A_{i+1}}{1-4\delta} \Big) =: D_a^b,$$
which implies
$$D_{a-1}^{b} := \sum_{r=a}^{b}\Big[\prod_{t=a}^{r-1}\Pr(X\ge t+1\,|\,X\ge t)\Big]\Pr(X=r\,|\,X\ge r)\Big( \frac{4^{r+1}-1}{3} + \frac{4^{r}A_{i+1}}{1-4\delta} \Big)$$
$$= \Pr(X=a\,|\,X\ge a)\Big( \frac{4^{a+1}-1}{3} + \frac{4^{a}A_{i+1}}{1-4\delta} \Big) + \Pr(X\ge a+1\,|\,X\ge a)\,D_a^b \le (1-\delta)\Big( \frac{4^{a+1}-1}{3} + \frac{4^{a}A_{i+1}}{1-4\delta} \Big) + \delta D_a^b.$$
Applying this inequality for $a = 0, 1, \dots, b-1$ and the fact $D_{b-1}^{b} \le \frac{4^{b+1}-1}{3} + \frac{4^{b}A_{i+1}}{1-4\delta}$ gives
$$D_{-1}^{b} \le (1-\delta)\sum_{r=0}^{b-1}\delta^{r}\Big( \frac{4^{r+1}-1}{3} + \frac{4^{r}A_{i+1}}{1-4\delta} \Big) + \delta^{b}\Big( \frac{4^{b+1}-1}{3} + \frac{4^{b}A_{i+1}}{1-4\delta} \Big).$$
Since $4\delta < 1$, letting $b$ in the inequality above increase to infinity gives
$$C(\tilde{x}, \bar{x}, i, j, s) \le 4^{j-S_2}(T_{\bar{s}}+2n)R_{\bar{s}}\,(1-\delta)\sum_{r=0}^{\infty}\delta^{r}\Big( \frac{4^{r+1}-1}{3} + \frac{4^{r}A_{i+1}}{1-4\delta} \Big)$$
$$= \frac{4^{j-S_2}(T_{\bar{s}}+2n)R_{\bar{s}}}{1-4\delta}\Big( 1 + \frac{(1-\delta)A_{i+1}}{1-4\delta} \Big) = \frac{4^{j-S_2}(T_{\bar{s}}+2n)R_{\bar{s}}\,A_{i}}{1-4\delta},$$
which is (8). Then, by induction, (8) holds for any state $(\tilde{x}, \bar{x}, i, j, s)$ with $s \ge \bar{s}$.

At the moment when the algorithm enters stage $\bar{s}$, we must have $j = S_2$ and $i = \bar{s}-S_2$. By (8) and the facts that $\bar{s} \ge S_2$ and that $A_i = \sum_{r=0}^{S_1-i-1}\big(\frac{1-\delta}{1-4\delta}\big)^r \le (S_1+S_2-\bar{s})\big(\frac{1-\delta}{1-4\delta}\big)^{S_1+S_2-\bar{s}}$, the expected #SGC from stage $\bar{s}$ to the end of the algorithm is
$$C(\tilde{x}, \bar{x}, \bar{s}-S_2, S_2, \bar{s}) \le \frac{(T_{\bar{s}}+2n)R_{\bar{s}}}{1-4\delta}(S_1+S_2-\bar{s})\Big(\frac{1-\delta}{1-4\delta}\Big)^{S_1+S_2-\bar{s}} \le O\left( (Lc^2+n)\log_2\frac{8c^2(L+L_f)^2}{\theta^2\delta L}\,\Big(\frac{1-\delta}{1-4\delta}\Big)^{S_1} S_1 \right).$$
In light of the value of $\delta$, i.e., $\delta = 1/\log(1/\epsilon)$, we have
$$\Big(\frac{1-\delta}{1-4\delta}\Big)^{S_1} = \Big(1 + \frac{3\delta}{1-4\delta}\Big)^{S_1} = O\left( \Big(\frac{\|\tilde{x}^{(0)}-\bar{x}^{(0)}\|_2^2}{\epsilon}\Big)^{\frac{3\delta}{(1-4\delta)\log(1/\theta^2)}} \right) = O(1).$$
Therefore, by adding the #SGC before and after stage $\bar{s}$ in the second scenario, the expected total #SGC is
$$O\left( \Big[\log_2\frac{c}{c_0} + \log_{1/\theta^2}\frac{\|\tilde{x}^{(0)}-\bar{x}^{(0)}\|_2^2}{\epsilon}\Big]\,\log_2\frac{c^2(L+L_f)^2}{\theta^2\delta L}\,(Lc^2+n) \right). \qquad \square$$

5 Applications and Experiments

In this section, we consider some applications in machine learning and present experimental results.
We consider finite-sum problems in machine learning where $f_i(x) = \ell(x^\top a_i, b_i)$ denotes a loss function on an observed training feature and label pair $(a_i, b_i)$, and $\psi(x)$ denotes a regularization on the model $x$. Let us first consider some examples of loss functions and regularizers that satisfy the QEB condition. More examples can be found in [29, 28, 27, 14].

Piecewise convex quadratic (PCQ) problems. According to the global error bound for piecewise convex polynomials by Li [10], PCQ problems satisfy the QEB condition. Examples of such problems include empirical square loss, squared hinge loss or Huber loss minimization with $\ell_1$ norm, $\ell_\infty$ norm or $\ell_{1,\infty}$ norm regularization or constraint.

A family of structured smooth composite functions. This family includes functions of the form $F(x) = h(Ax) + \psi(x)$, where $\psi(x)$ is a polyhedral function or an indicator function of a polyhedral set and $h(\cdot)$ is a smooth and strongly convex function on any compact set. According to the studies in [6, 20], the QEB holds on any compact set or on the involved polyhedral set. Examples of interesting loss functions include the aforementioned square loss and the logistic loss as well.

For examples satisfying the HEB condition with intermediate values of $\theta \in (0, 1/2)$, we can consider $\ell_1$ constrained $\ell_p$ norm regression, where the objective is $f(x) = \frac{1}{n}\sum_{i=1}^{n}(x^\top a_i - b_i)^p$ with $p \in 2\mathbb{N}_+$ [23]. According to the reasoning in [14], the HEB condition holds with $\theta = 1/p$.

Before presenting the experimental results, we remark that many regularized machine learning formulations include no constraint restricting $x$ to a compact domain $x \in \Omega$. Nevertheless, we can explicitly add a constraint $\psi(x) \le B$ to the problem to ensure that the intermediate solutions generated by the proposed algorithms always stay in a compact set, where $B$ can be set to a large value without affecting the optimal solutions. The proximal mapping of $\psi(x)$ with such an explicit constraint can be handled efficiently by combining the proximal mapping with a binary search for the Lagrangian multiplier; see the sketch below. In practice, as long as $B$ is sufficiently large, the constraint remains inactive and the computational cost remains the same.
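As an illustration of this device, the sketch below computes the proximal mapping of $\psi(x) = \lambda\|x\|_1$ augmented with the explicit constraint $\|x\|_1 \le B$, bisecting on the Lagrangian multiplier $\mu \ge 0$ of the constraint. The concrete choice $\psi = \lambda\|\cdot\|_1$ and the tolerance are ours, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l1_with_bound(v, lam, B, tol=1e-10):
    """Prox of lam*||x||_1 + indicator{||x||_1 <= B} at the point v.
    For a fixed multiplier mu >= 0 the minimizer is soft_threshold(v, lam + mu),
    and ||x(mu)||_1 is non-increasing in mu, so a binary search applies."""
    x = soft_threshold(v, lam)
    if np.abs(x).sum() <= B:          # constraint inactive: the plain l1 prox suffices
        return x
    lo, hi = 0.0, np.abs(v).max()     # at mu = max|v_i| the solution is 0, hence feasible
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.abs(soft_threshold(v, lam + mu)).sum() > B:
            lo = mu
        else:
            hi = mu
    return soft_threshold(v, lam + hi)
```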
[Figure 1: Comparison of different algorithms for solving different problems on different datasets. Panels: squared hinge + $\ell_1$ norm (Adult); logistic + $\ell_1$ norm (Adult); square + $\ell_1$ norm (Million Songs); Huber loss + $\ell_1$ norm (Million Songs); $\ell_p$ regression with $p = 4$ (E2006). Each panel plots objective minus optimum (log scale) against #grad/$n$ for SAGA, SVRG++, SVRG-heuristics and the proposed SVRG$^{\text{HEB}}$, SVRG$^{\text{QEB-RS}}$ and SVRG$^{\text{HEB-RS}}$ variants.]

Next, we conduct experiments to demonstrate the effectiveness of the proposed algorithms on several tasks, including $\ell_1$ regularized squared hinge loss minimization and $\ell_1$ regularized logistic loss minimization for linear classification, and $\ell_1$ constrained $\ell_p$ norm regression, $\ell_1$ regularized square loss minimization and $\ell_1$ regularized Huber loss minimization for linear regression. We use three datasets from the libsvm website: Adult ($n = 32561$, $d = 123$), E2006-tfidf ($n = 16087$, $d = 150360$), and YearPredictionMSD ($n = 51630$, $d = 90$). Note that we use the testing set of the YearPredictionMSD data for our experiment, because some baselines need a long time to converge on the large training set. We set the regularization parameter of the $\ell_1$ norm and the upper bound of the $\ell_1$ constraint to $10^{-4}$ and $100$, respectively. In each plot, the difference between the objective value and the optimum is presented in log scale.

Our first experiment justifies the proposed SVRG$^{\text{QEB-RS}}$ algorithm by comparing it with SVRG$^{\text{HEB}}$ under different estimates of $c$ (corresponding to different initial values of $T_1$). We try four different values of $T_1 \in \{1000, 2000, 8000, 2n\}$. The result is plotted in the top left of Figure 1. We can see that SVRG$^{\text{HEB}}$ with underestimated values of $T_1$ (e.g., 1000 or 2000) converges very slowly. In contrast, the performance of SVRG$^{\text{QEB-RS}}$ is not affected much by the initial value of $T_1$, which is consistent with our theory showing only a logarithmic dependence on the initial value of $c$. Moreover, SVRG$^{\text{QEB-RS}}$ with each value of $T_1$ always performs better than its SVRG$^{\text{HEB}}$ counterpart.

Then we compare SVRG$^{\text{QEB-RS}}$ and SVRG$^{\text{HEB-RS}}$ to other baselines for solving different problems on different datasets. We choose SAGA and SVRG++ as the baselines. We also note that a heuristic variant of SVRG++ was suggested in [2], where the epoch length is automatically determined based on the change in the variance of the gradient estimators between two consecutive epochs. However, according to our experiments, this heuristic strategy cannot always terminate an epoch because the suggested criterion may never be met; this was also confirmed by our communication with the authors of SVRG++. To make it work, we manually add an upper bound on each epoch length equal to $2n$, following the suggestion in [8]. The resulting baseline is denoted by SVRG-heuristics. For all algorithms, the step size is best tuned. The initial epoch length of SVRG++ is set to $n/4$ following the suggestion in [2], and the same initial epoch length is also used in our algorithms. The comparison with these baselines is reported in the remaining plots of Figure 1. We can see that SVRG$^{\text{QEB-RS}}$ (resp. SVRG$^{\text{HEB-RS}}$) always has superior performance, while SVRG-heuristics sometimes performs well and sometimes badly.

Acknowledgements

We thank the anonymous reviewers for their helpful comments. Y. Xu and T. Yang are partially supported by the National Science Foundation (IIS-1463988, IIS-1545995).

References

[1] Z. Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. In Proceedings of the 49th Annual ACM Symposium on Theory of Computing, STOC '17, 2017.
[2] Z. Allen-Zhu and Y. Yuan. Improved SVRG for non-strongly-convex or sum-of-non-convex objectives. In Proceedings of The 33rd International Conference on Machine Learning, pages 1080–1089, 2016.
[3] J. Bolte, T. P. Nguyen, J. Peypouquet, and B. Suter. From error bounds to the complexity of first-order descent methods for convex functions. CoRR, abs/1510.08234, 2015.
[4] A. Defazio, F. R. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems (NIPS), pages 1646–1654, 2014.
[5] D. Drusvyatskiy and A. S. Lewis. Error bounds, quadratic growth, and linear convergence of proximal methods. arXiv:1602.06661, 2016.
[6] P. Gong and J. Ye. Linear convergence of variance-reduced projected stochastic gradient without strong convexity. CoRR, abs/1406.1102, 2014.
[7] K. Hou, Z. Zhou, A. M. So, and Z. Luo. On the linear convergence of the proximal gradient method for trace norm regularization. In Advances in Neural Information Processing Systems (NIPS), pages 710–718, 2013.
[8] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, pages 315–323, 2013.
[9] H. Karimi, J. Nutini, and M. W. Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Machine Learning and Knowledge Discovery in Databases - European Conference (ECML-PKDD), pages 795–811, 2016.
[10] G. Li. Global error bounds for piecewise convex polynomials. Math. Program., 137(1-2):37–64, 2013.
[11] Q. Lin and L. Xiao. An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization. In Proceedings of the International Conference on Machine Learning (ICML), pages 73–81, 2014.
[12] J. Liu and M. Takáč. Projected semi-stochastic gradient descent method with mini-batch scheme under weak strong convexity assumption. CoRR, abs/1612.05356, 2016.
[13] J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM Journal on Optimization, 25(1):351–376, 2015.
[14] M. Liu and T. Yang. Adaptive accelerated gradient converging methods under Hölderian error bound condition. CoRR, abs/1611.07609, 2017.
[15] Z.-Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7–35, 1992.
[16] Z.-Q. Luo and P. Tseng. On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control and Optimization, 30(2):408–425, 1992.
[17] Z.-Q. Luo and P. Tseng. Error bounds and convergence analysis of feasible descent methods: a general approach. Annals of Operations Research, 46:157–178, 1993.
[18] C. Ma, R. Tappenden, and M. Takáč. Linear convergence of the randomized feasible descent method under the weak strong convexity assumption. CoRR, abs/1506.02530, 2015.
[19] T. Murata and T. Suzuki. Doubly accelerated stochastic variance reduced dual averaging method for regularized empirical risk minimization. CoRR, abs/1703.00439, 2017.
[20] I. Necoara, Y. Nesterov, and F. Glineur. Linear convergence of first order methods for non-strongly convex optimization. CoRR, abs/1504.06298, 2015.
[21] Y. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.
[22] L. Nguyen, J. Liu, K. Scheinberg, and M. Takáč. SARAH: A novel method for machine learning problems using stochastic recursive gradient. CoRR, 2017.
[23] H. Nyquist. The optimal $\ell_p$ norm estimator in linear regression models. Communications in Statistics - Theory and Methods, 12(21):2511–2524, 1983.
[24] R. Rockafellar. Convex Analysis. Princeton Mathematical Series. Princeton University Press, 1970.
[25] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. In Proceedings of the International Conference on Machine Learning (ICML), pages 567–599, 2013.
[26] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[27] Y. Xu, Q. Lin, and T. Yang. Stochastic convex optimization: Faster local growth implies faster global convergence. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 3821–3830, 2017.
[28] Y. Xu, Y. Yan, Q. Lin, and T. Yang. Homotopy smoothing for non-smooth problems with lower complexity than $O(1/\epsilon)$. In Advances in Neural Information Processing Systems 29 (NIPS), pages 1208–1216, 2016.
[29] T. Yang and Q. Lin. RSG: Beating SGD without smoothness and/or strong convexity. CoRR, abs/1512.03107, 2016.
[30] H. Zhang. New analysis of linear convergence of gradient-type methods via unifying error bound conditions. CoRR, abs/1606.00269, 2016.
[31] H. Zhang and W. Yin. Gradient methods for convex minimization: better rates under weaker conditions. arXiv preprint arXiv:1303.4645, 2013.
[32] Z. Zhou and A. M.-C. So. A unified approach to error bounds for structured convex optimization problems. arXiv:1512.03518, 2015.
[33] Z. Zhou, Q. Zhang, and A. M. So. $\ell_{1,p}$-norm regularization: Error bounds and convergence rate analysis of first-order methods. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 1501–1510, 2015.
Bayesian Compression for Deep Learning

Christos Louizos, University of Amsterdam, TNO Intelligent Imaging ([email protected]); Karen Ullrich, University of Amsterdam ([email protected]); Max Welling, University of Amsterdam, CIFAR* ([email protected])

Abstract

Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by adopting a Bayesian point of view, where through sparsity inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency.

1 Introduction

While deep neural networks have become extremely successful in a wide range of applications, often exceeding human performance, they remain difficult to apply in many real world scenarios. For instance, making billions of predictions per day comes with substantial energy costs, given the energy consumption of common Graphical Processing Units (GPUs). Also, real-time predictions are often about a factor 100 away in terms of speed from what deep NNs can deliver, and sending NNs with millions of parameters through band-limited channels is still impractical. As a result, running them on hardware-limited devices such as smart phones, robots or cars requires substantial improvements on all of these issues. For all those reasons, compression and efficiency have become a topic of interest in the deep learning community.

While all of these issues are certainly related, compression and performance optimizing procedures might not always be aligned. As an illustration, consider the convolutional layers of AlexNet, which account for only 4% of the parameters but 91% of the computation [65]. Compressing these layers will not contribute much to the overall memory footprint.

There is a variety of approaches to address these problem settings. However, most methods share the common strategy of reducing both the neural network structure and the effective fixed point precision for each weight. A justification for the former is the finding that NNs suffer from significant parameter redundancy [14]. Methods in this line of thought are network pruning, where unnecessary connections are removed [38, 24, 21], or student-teacher learning, where a large network is used to train a significantly smaller network [5, 26].

From a Bayesian perspective, network pruning and reducing the bit precision of the weights are aligned with achieving high accuracy, because Bayesian methods search for the optimal model structure (which leads to pruning with sparsity inducing priors) and reward uncertain posteriors over parameters through the bits-back argument [27] (which leads to removing insignificant bits). This relation is made explicit in the MDL principle [20], which is known to be related to Bayesian inference.

*Canadian Institute For Advanced Research.

In this paper we will use the variational Bayesian approximation for Bayesian inference, which has also been explicitly interpreted in terms of model compression [27].
By employing sparsity inducing priors for hidden units (and not individual weights) we can prune neurons, including all their ingoing and outgoing weights. This avoids the more complicated and inefficient coding schemes needed for pruning or vector quantizing individual weights. As an additional Bayesian bonus we can use the variational posterior uncertainty to assess which bits are significant and remove the ones which fluctuate too much under approximate posterior sampling. From this we derive the optimal fixed point precision per layer, which is still practical on chip.

2 Variational Bayes and Minimum Description Length

A fundamental theorem in information theory is the minimum description length (MDL) principle [20]. It relates to compression directly in that it defines the best hypothesis to be the one that communicates the sum of the model (complexity cost $\mathcal{L}^C$) and the data misfit (error cost $\mathcal{L}^E$) with the minimum number of bits [57, 58]. It is well understood that variational inference can be reinterpreted from an MDL point of view [54, 69, 27, 29, 19]. More specifically, assume that we are presented with a dataset $\mathcal{D}$ that consists of $N$ input-output pairs $\{(x_1, y_1), \dots, (x_N, y_N)\}$. Let $p(\mathcal{D}|w) = \prod_{i=1}^{N} p(y_i|x_i, w)$ be a parametric model, e.g. a deep neural network, that maps inputs $x$ to their corresponding outputs $y$ using parameters $w$ governed by a prior distribution $p(w)$. In this scenario, we wish to approximate the intractable posterior distribution $p(w|\mathcal{D}) = p(\mathcal{D}|w)p(w)/p(\mathcal{D})$ with a fixed-form approximate posterior $q_\phi(w)$ by optimizing the variational parameters $\phi$ according to
$$\mathcal{L}(\phi) = \underbrace{\mathbb{E}_{q_\phi(w)}[\log p(\mathcal{D}|w)]}_{\mathcal{L}^E} + \underbrace{\mathbb{E}_{q_\phi(w)}[\log p(w)] + \mathcal{H}(q_\phi(w))}_{\mathcal{L}^C}, \qquad (1)$$
where $\mathcal{H}(\cdot)$ denotes the entropy and $\mathcal{L}(\phi)$ is known as the evidence lower bound (ELBO) or negative variational free energy. As indicated in eq. 1, $\mathcal{L}(\phi)$ naturally decomposes into a minimum cost for communicating the targets $\{y_n\}_{n=1}^{N}$ under the assumption that the sender and receiver agreed on a prior $p(w)$ and that the receiver knows the inputs $\{x_n\}_{n=1}^{N}$ and the form of the parametric model.

By using sparsity inducing priors for groups of weights that feed into a neuron, the Bayesian mechanism will start pruning hidden units that are not strictly necessary for prediction, thus achieving compression. But there is also a second mechanism by which Bayes can help us compress. By explicitly entertaining noisy weight encodings through $q_\phi(w)$ we can benefit from the bits-back argument [27, 29] due to the entropy term; this is in contrast to infinitely precise weights, which lead to $\mathcal{H}(\delta(w)) = -\infty$ (in practice this term is a large constant determined by the weight precision). Nevertheless, in practice the data misfit term $\mathcal{L}^E$ is intractable for neural network models under a noisy weight encoding, so as a solution Monte Carlo integration is usually employed. Continuous $q_\phi(w)$ allow for the reparametrization trick [34, 56]. Here, we replace sampling from $q_\phi(w)$ by a deterministic function of the variational parameters $\phi$ and random samples from a noise variable $\epsilon$:
$$\mathcal{L}(\phi) = \mathbb{E}_{p(\epsilon)}[\log p(\mathcal{D}|f(\phi, \epsilon))] + \mathbb{E}_{q_\phi(w)}[\log p(w)] + \mathcal{H}(q_\phi(w)), \qquad (2)$$
where $w = f(\phi, \epsilon)$. By applying this trick, we obtain unbiased stochastic gradients of the ELBO with respect to the variational parameters $\phi$, thus resulting in a standard optimization problem that is fit for stochastic gradient ascent. The efficiency of the gradient estimator resulting from eq. 2 can be further improved for neural networks by utilizing local reparametrizations [35] (which we will use in our experiments); they provide variance reduction in an efficient way by locally marginalizing the weights at each layer and instead sampling the distribution of the pre-activations.
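As a minimal illustration of eq. 2, the sketch below forms a single-sample estimate of the ELBO for a fully factorized Gaussian $q_\phi(w)$; `log_lik` and `log_prior` are placeholders for $\log p(\mathcal{D}|w)$ and $\log p(w)$, and the example is ours, not taken from the paper's codebase.

```python
import math
import torch

mu = torch.zeros(10, requires_grad=True)         # variational means
log_sigma = torch.zeros(10, requires_grad=True)  # variational log-scales

def elbo_sample(log_lik, log_prior):
    sigma = log_sigma.exp()
    eps = torch.randn_like(mu)     # noise variable epsilon ~ N(0, I)
    w = mu + sigma * eps           # w = f(phi, eps): differentiable in (mu, sigma)
    entropy = (0.5 * (1.0 + math.log(2.0 * math.pi)) + log_sigma).sum()  # H(q_phi)
    return log_lik(w) + log_prior(w) + entropy

# Calling elbo_sample(...).backward() yields unbiased stochastic gradients of the
# ELBO w.r.t. mu and log_sigma, ready for stochastic gradient ascent.
```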
3 Related Work

One of the earliest ideas and most direct approaches to tackling efficiency is pruning. Originally introduced by [38], pruning has recently been demonstrated to be applicable to modern architectures [25, 21]. It has been demonstrated that an overwhelming amount of up to 99.5% of parameters can be pruned in common architectures. There have been quite a few encouraging results obtained by (empirical) Bayesian approaches that employ weight pruning [19, 7, 50, 67, 49].
This family of distributions is known as scale-mixtures of normals [6, 2] and it is quite general, as a lot of well known sparsity inducing distributions are special cases. One example of the aforementioned framework is the spike-and-slab distribution [48], the golden standard for sparse Bayesian inference. Under the spike-and-slab, the mixing density of the scales is a Bernoulli distribution, thus the marginal p(w) has a delta ?spike? at zero and a continuous ?slab? over the real line. Unfortunately, this prior leads to a computationally expensive inference since we have to explore a space of 2M models, where M is the number of the model parameters. Dropout [28, 64], one of the most popular regularization techniques for neural networks, can be interpreted as positing a spike and slab distribution over the weights where the variance of the ?slab? is zero [17, 43]. Another example is the Laplace distribution which arises by considering p(z 2 ) = Exp(?). The mode of the posterior distribution under a Laplace prior is known as the Lasso [66] estimator and has been previously used for sparsifying neural networks at [70, 59]. While computationally simple, the Lasso estimator is prone to ?shrinking" large signals [8] and only provides point estimates about the parameters. As a result it does not provide uncertainty estimates, it can potentially overfit and, according to the bits-back argument, is inefficient for compression. For these reasons, in this paper we will tackle the problem of compression and efficiency in neural networks by adopting a Bayesian treatment and inferring an approximate posterior distribution over the parameters under a scale mixture prior. We will consider two choices for the prior over the scales p(z); the hyperparameter free log-uniform prior [16, 35] and the half-Cauchy prior, which results into a horseshoe [8] distribution. Both of these distributions correspond to a continuous relaxation of the spike-and-slab prior and we provide a brief discussion on their shrinkage properties at Appendix C. 3 4.1 Reparametrizing variational dropout for group sparsity One potential choice for p(z) is the improper log-uniform prior [35]: p(z) ? |z|?1 . It turns out that we can recover the log-uniform prior over the weights w if we marginalize over the scales z: Z 1 1 p(w) ? N (w|0, z 2 )dz = . (4) |z| |w| This alternative parametrization of the log uniform prior is known in the statistics literature as the normal-Jeffreys prior and has been introduced by [16]. This formulation allows to ?couple" the scales of weights that belong to the same group (e.g. neuron or feature map), by simply sharing the corresponding scale variable z in the joint prior3 : A,B A Y 1 Y p(W, z) ? N (wij |0, zi2 ), |z | i i ij (5) where W is the weight matrix of a fully connected neural network layer with A being the dimensionality of the input and B the dimensionality of the output. Now consider performing variational inference with a joint approximate posterior parametrized as follows: q? (W, z) = A Y N (zi |?zi , ?2zi ?i ) i=1 A,B Y 2 N (wij |zi ?ij , zi2 ?ij ), (6) i,j where ?i is the dropout rate [64, 35, 49] of the given group. As explained at [35, 49], the multiplicative parametrization of the approximate posterior over z suffers from high variance gradients; therefore we will follow [49] and re-parametrize it in terms of ?z2i = ?2zi ?i , hence optimize w.r.t. ?z2i . The lower bound under this prior and approximate posterior becomes: L(?) = Eq? (z)q? (W|z) [log p(D|W)] ? Eq? (z) [KL(q? (W|z)||p(W|z))] ? 
KL(q? (z)||p(z)). (7) Under this particular variational posterior parametrization the negative KL-divergence from the conditional prior p(W|z) to the approximate posterior q? (W|z) is independent of z:  A,B  2 z z i2 ?ij i2 ?2ij 1X z i2 KL(q? (W|z)||p(W|z)) = log 2 2 + 2 + ?1 . (8) 2 i,j z z z i ?ij i i2 This independence can be better understood if we consider a non-centered parametrization of the w prior [53]. More specifically, consider reparametrizing the weights as w ?ij = ziji ; this will then result Q ? ? = ? Now if into p(W|z)p(z) = p(W)p(z), where p(W) ?ij |0, 1) and W = diag(z)W. i,j N (w ? we perform variational inference under the p(W)p(z) prior with an approximate posterior that has 2 ? z) = q? (W)q ? ? (z), with q? (W) ? = Q N (w the form of q? (W, ?ij |?ij , ?ij ), then we see that we i,j arrive at the same expressions for the negative KL-divergence from the prior to the approximate posterior. Finally, the negative KL-divergence from the normal-Jeffreys scale prior p(z) to the Gaussian variational posterior q? (z) depends only on the ?implied? dropout rate, ?i = ?z2i /?2zi , and takes the following form [49]: ? KL(q? (z)||p(z)) ? A X  k1 ?(k2 + k3 log ?i ) ? 0.5m(? log ?i ) ? k1 , (9) i where ?(?), m(?) are the sigmoid and softplus functions respectively4 and k1 = 0.63576, k2 = 1.87320, k3 = 1.48695. We can now prune entire groups of parameters by simply specifying a threshold for the variational dropout rate of the corresponding group, e.g. log ?i = (log ?z2i ? log ?2zi ) ? t. It should be mentioned that this prior parametrization readily allows for a more R flexible marginal posterior over the weights as we now have a compound distribution, q? (W) = q? (W|z)q? (z)dz; this is in contrast to the original parametrization and the Gaussian approximations employed by [35, 49]. 3 Stricly speaking the result of eq. 4 only holds when each weight has its own scale and not when that scale is shared across multiple weights. Nevertheless, in practice we obtain a prior that behaves in a similar way, i.e. it biases the variational posterior to be sparse. 4 ?(x) = (1 + exp(?x))?1 , m(x) = log(1 + exp(x)) 4 Furthermore, this approach generalizes the low variance additive parametrization of variational dropout proposed for weight sparsity at [49] to group sparsity (which was left as an open question at [49]) in a principled way. At test time, in order to have a single feedforward pass we replace the distribution over W at each layer with a single weight matrix, the masked variational posterior mean:  ? = diag(m) E ? W (10) ? [diag(z)W] = diag m ?z MW , q(z)q(W) where m is a binary mask determined according to the group variational dropout rate and MW are ? We further use the variational posterior marginal variances5 for this particular the means of q? (W). posterior approximation:  2 2 2 V(wij )N J = ?z2i ?ij + ?2ij + ?ij ?zi , (11) to asess the bit precision of each weight in the weight matrix. More specifically, we employed the ? to compute the unit round off necessary to represent the mean variance across the weight matrix W weights. This method will give us the amount significant bits, and by adding 3 exponent and 1 sign ? 6 . We provide more details at bits we arrive at the final bit precision for the entire weight matrix W Appendix B. 
4.2 Group horseshoe with half-Cauchy scale priors Another choice for p(z) is a proper half-Cauchy distribution: C + (0, s) = 2(s?(1 + (z/s)2 ))?1 ; it induces a horseshoe prior [8] distribution over the weights, which is a well known sparsity inducing prior in the statistics literature. More formally, the prior hierarchy over the weights is expressed as (in a non-centered parametrization): s ? C + (0, ?0 ); z?i ? C + (0, 1); w ?ij ? N (0, 1); wij = w ?ij z?i s, (12) where ?0 is the free parameter that can be tuned for specific desiderata. The idea behind the horseshoe is that of the ?global-local" shrinkage; the global scale variable s pulls all of the variables towards zero whereas the heavy tailed local variables zi can compensate and allow for some weights to escape. Instead of directly working with the half-Cauchy priors we will employ a decomposition of the half-Cauchy that relies upon (inverse) gamma distributions [52] as this will allow us to compute the negative KL-divergence from the scale prior p(z) to an approximate log-normal scale posterior q? (z) in closed form (the derivation is given in Appendix D). More specifically, we have that the half-Cauchy prior can be expressed in a non-centered parametrization as: ? = IG(0.5, 1); ? p(?) p(? ?) = G(0.5, k 2 ); z2 = ? ? ?, (13) where IG(?, ?), G(?, ?) correspond to the inverse Gamma and Gamma distributions in the scale parametrization, and z follows a half-Cauchy distribution with scale k. Therefore we will re-express the whole hierarchy as: sb ? IG(0.5, 1); sa ? G(0.5, ?02 ); ??i ? IG(0.5, 1); ? ? i ? G(0.5, 1); w ?ij ? N (0, 1); q wij = w ?ij sa sb ? ? i ??i . (14) It should be mentioned that the improper log-uniform prior is the limiting case of the horseshoe prior when the shapes of the (inverse) Gamma hyperpriors on ? ? i , ??i go to zero [8]. In fact, several well known shrinkage priors can be expressed in this form by altering the shapes of the (inverse) Gamma hyperpriors [3]. For the variational posterior we will employ the following mean field approximation: ? = LN (sb |?s , ? 2 )LN (sa |?s , ? 2 ) q? (sb , sa , ?) sb sa a b A Y LN (??i |???i , ??2? ) i (15) i ? = ? W) q? (?, A Y LN (? ?i |??? i , ??2? i ) i A,B Y 2 N (w ?ij |?w?ij , ?w ?i j ), (16) i,j  V(wij ) = V(zi w ?ij ) = V(zi ) E[w ?ij ]2 + V(w ?ij ) + V(w ?ij ) E[zi ]2 . 6 Notice that the fact that we are using mean-field variational approximations (which we chose for simplicity) can potentially underestimate the variance, thus lead to higher bit precisions for the weights. We leave the exploration of more involved posteriors for future work. 5 5 where LN (?, ?) is a log-normal distribution. It should be mentioned that a similar form of noncentered variational inference for the horseshoe has been also successfully employed for undirected models at [32]. Notice that we can also apply local reparametrizations [35] when we are sampling q ? ? ? i ??i and sa sb by exploiting properties of the log-normal distribution7 and thus forming the implied: q ? s = sa sb ? LN (?s , ?s2 ) z?i = ? ? i ??i ? LN (?z?i , ?z2?i ); (17) 1 1 1 1 ?z?i = (??? i + ???i ); ?z2?i = (??2? i + ??2? ); ?s = (?sa + ?sb ); ?s2 = (?s2a + ?s2b ). (18) i 2 4 2 4 As a threshold rule for group pruning we will use the negative log-mode8 of the local log-normal r.v. zi = s? zi , i.e. prune when (?z2i ? ?zi ) ? t, with ?zi = ?z?i + ?s and ?z2i = ?z2?i + ?s2 .This ignores dependencies among the zi elements induced by the common scale s, but nonetheless we found that it works well in practice. 
Similarly with the group normal-Jeffreys prior, we will replace the distribution over W at each layer with the masked variational posterior mean during test time: 1 2  ? = diag(m) E ? W ? [diag(z)W] = diag m exp(?z + ? z ) MW , q(z)q(W) 2 (19) where m is a binary mask determined according to the aforementioned threshold, MW are the means ? and ?z , ? 2 are the means and variances of the local log-normals over zi . Furthermore, of q(W) z similarly to the group normal-Jeffreys approach, we will use the variational posterior marginal variances:  2 2 V(wij )HS = (exp(?z2i ) ? 1) exp(2?zi + ?z2i ) ?ij + ?2ij + ?ij exp(2?zi + ?z2i ), (20) ? to compute the final bit precision for the entire weight matrix W. 5 Experiments We validated the compression and speed-up capabilities of our models on the well-known architectures of LeNet-300-100 [39], LeNet-5-Caffe9 on MNIST [40] and, similarly with [49], VGG [61]10 on CIFAR 10 [36]. The groups of parameters were constructed by coupling the scale variables for each filter for the convolutional layers and for each input neuron for the fully connected layers. We provide the algorithms that describe the forward pass using local reparametrizations for fully connected and convolutional layers with each of the employed approximate posteriors at appendix F. For the horseshoe prior we set the scale ?0 of the global half-Cauchy prior to a reasonably small value, e.g. ?0 = 1e ? 5. This further increases the prior mass at zero, which is essential for sparse estimation and compression. We also found that constraining the standard deviations as described at [44] and ?warm-up" [62] helps in avoiding bad local optima of the variational objective. Further details about the experimental setup can be found at Appendix A. Determining the threshold for pruning can be easily done with manual inspection as usually there are two well separated clusters (signal and noise). We provide a sample visualization at Appendix E. 5.1 Architecture learning & bit precisions We will first demonstrate the group sparsity capabilities of our methods by illustrating the learned architectures at Table 1, along with the inferred bit precision per layer. As we can observe, our methods infer significantly smaller architectures for the LeNet-300-100 and LeNet-5-Caffe, compared to Sparse Variational Dropout, Generalized Dropout and Group Lasso. Interestingly, we observe that for the VGG network almost all of big 512 feature map layers are drastically reduced to around 10 feature maps whereas the initial layers are mostly kept intact. Furthermore, all of the Bayesian methods considered require far fewer than the standard 32 bits per-layer to represent the weights, sometimes even allowing for 5 bit precisions. 7 The product of log-normal r.v.s is another log-normal and a power of a log-normal r.v. is another log-normal. Empirically, it slightly better separates the scales compared to the negative log-mean ?(?zi + 0.5?z2i ). 9 https://github.com/BVLC/caffe/tree/master/examples/mnist 10 The adapted CIFAR 10 version described at http://torch.ch/blog/2015/07/30/cifar.html. 8 6 Table 1: Learned architectures with Sparse VD [49], Generalized Dropout (GD) [63] and Group Lasso (GL) [70]. Bayesian Compression (BC) with group normal-Jeffreys (BC-GNJ) and group horseshoe (BC-GHS) priors correspond to the proposed models. We show the amount of neurons left after pruning along with the average bit precisions for the weights at each layer. 
Network & size Method Pruned architecture Bit-precision LeNet-300-100 Sparse VD 512-114-72 8-11-14 BC-GNJ BC-GHS 278-98-13 311-86-14 8-9-14 13-11-10 14-19-242-131 7-13-208-16 3-12-192-500 13-10-8-12 - BC-GNJ BC-GHS 8-13-88-13 5-10-76-16 18-10-7-9 10-10?14-13 VGG BC-GNJ (2? 64)-(2? 128)-(3?256)-(8? 512) BC-GHS 63-64-128-128-245-155-63-26-24-20-14-12-11-11-15 51-62-125-128-228-129-38-13-9-6-5-6-6-6-20 10-10-10-10-8-8-8-5-5-5-5-5-6-7-11 11-12-9-14-10-8-5-5-6-6-6-8-11-17-10 784-300-100 LeNet-5-Caffe 20-50-800-500 5.2 Sparse VD GD GL Compression Rates For the actual compression task we compare our method to current work in three different scenarios: (i) compression achieved only by pruning, here, for non-group methods we use the CSC format to store parameters; (ii) compression based on the former but with reduced bit precision per layer (only for the weights); and (iii) the maximum compression rate as proposed by [25]. We believe Table 2: Compression results for our methods. ?DC? corresponds to Deep Compression method introduced at [25], ?DNS? to the method of [21] and ?SWS? to the Soft-Weight Sharing of [67]. Numbers marked with * are best case guesses. Compression Rates (Error %) Model Original Error % LeNet-300-100 1.6 LeNet-5-Caffe 0.9 VGG 8.4 Method |w6=0| |w| % Pruning Fast Prediction Maximum Compression DC DNS SWS Sparse VD 8.0 1.8 4.3 2.2 6 (1.6) 28* (2.0) 12* (1.9) 21(1.8) 84(1.8) 40 (1.6) 64(1.9) 113 (1.8) BC-GNJ BC-GHS 10.8 10.6 9(1.8) 9(1.8) 36(1.8) 23(1.9) 58(1.8) 59(2.0) DC DNS SWS Sparse VD 8.0 0.9 0.5 0.7 6*(0.7) 55*(0.9) 100*(1.0) 63(1.0) 228(1.0) 39(0.7) 108(0.9) 162(1.0) 365(1.0) BC-GNJ BC-GHS 0.9 0.6 108(1.0) 156(1.0) 361(1.0) 419(1.0) 573(1.0) 771(1.0) BC-GNJ BC-GHS 6.7 5.5 14(8.6) 18(9.0) 56(8.8) 59(9.0) 95(8.6) 116(9.2) these to be relevant scenarios because (i) can be applied with already existing frameworks such as Tensorflow [1], (ii) is a practical scheme given upcoming GPUs and frameworks will be designed to work with low and mixed precision arithmetics [41, 23]. For (iii), we perform k-means clustering on the weights with k=32 and consequently store a weight index that points to a codebook of available 7 weights. Note that the latter achieves highest compression rate but it is however fairly unpractical at test time since the original matrix needs to be restored for each layer. As we can observe at Table 2, our methods are competitive with the state-of-the art for LeNet-300-100 while offering significantly better compression rates on the LeNet-5-Caffe architecture, without any loss in accuracy. Do note that group sparsity and weight sparsity can be combined so as to further prune some weights when a particular group is not removed, thus we can potentially further boost compression performance at e.g. LeNet-300-100. For the VGG network we observe that training from a random initialization yielded consistently less accuracy (around 1%-2% less) compared to initializing the means of the approximate posterior from a pretrained network, similarly with [49], thus we only report the latter results11 . After initialization we trained the VGG network regularly for 200 epochs using Adam with the default hyperparameters. We observe a small drop in accuracy for the final models when using the deterministic version of the network for prediction, but nevertheless averaging across multiple samples restores the original accuracy. Note, that in general we can maintain the original accuracy on VGG without sampling by simply finetuning with a small learning rate, as done at [49]. 
This will still induce (less) sparsity but unfortunately it does not lead to good compression as the bit precision remains very high due to not appropriately increasing the marginal variances of the weights. 5.3 Speed and energy consumption We demonstrate that our method is competitive with [70], denoted as GL, a method that explicitly prunes convolutional kernels to reduce compute time. We measure the time and energy consumption of one forward pass of a mini-batch with batch size 8192 through LeNet-5-Caffe. We average over 104 forward passes and all experiments were run with Tensorflow 1.0.1, cuda 8.0 and respective cuDNN. We apply 16 CPUs run in parallel (CPU) or a Titan X (GPU). Note that we only use the pruned architecture as lower bit precision would further increase the speed-up but is not implementable in any common framework. Further, all methods we compare to in the latter experiments would barely show an improvement at all since they do not learn to prune groups but only parameters. In figure 1 we present our results. As to be expected the largest effect on the speed up is caused by GPU usage. However, both our models and best competing models reach a speed up factor of around 8?. We can further save about 3 ? energy costs by applying our architecture instead of the original one on a GPU. For larger networks the speed-up is even higher: for the VGG experiments with batch size 256 we have a speed-up factor of 51?. Figure 1: Left: Avg. Time a batch of 8192 samples takes to pass through LeNet-5-Caffe. Numbers on top of the bars represent speed-up factor relative to the CPU implementation of the original network. Right: Energy consumption of the GPU of the same process (when run on GPU). 6 Conclusion We introduced Bayesian compression, a way to tackle efficiency and compression in deep neural networks in a unified and principled way. Our proposed methods allow for theoretically principled compression of neural networks, improved energy efficiency with reduced computation while naturally learning the bit precisions for each weight. This serves as a strong argument in favor of Bayesian methods for neural networks, when we are concerned with compression and speed up. 11 We also tried to finetune the same network with Sparse VD, but unfortunately it increased the error considerably (around 3% extra error), therefore we do not report those results. 8 Acknowledgments We would like to thank Dmitry Molchanov, Dmitry Vetrov, Klamer Schutte and Dennis Koelma for valuable discussions and feedback. This research was supported by TNO, NWO and Google. References [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. [2] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society. Series B (Methodological), pages 99?102, 1974. [3] A. Armagan, M. Clyde, and D. B. Dunson. Generalized beta mixtures of gaussians. In Advances in neural information processing systems, pages 523?531, 2011. [4] E. Azarkhish, D. Rossi, I. Loi, and L. Benini. Neurostream: Scalable and energy efficient deep learning with smart memory cubes. arXiv preprint arXiv:1701.06420, 2017. [5] J. Ba and R. Caruana. Do deep nets really need to be deep? In Advances in neural information processing systems, pages 2654?2662, 2014. [6] E. Beale, C. Mallows, et al. 
[7] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), Lille, France, 2015.
[8] C. M. Carvalho, N. G. Polson, and J. G. Scott. The horseshoe estimator for sparse signals. Biometrika, 97(2):465–480, 2010.
[9] S. Chai, A. Raghavan, D. Zhang, M. Amer, and T. Shields. Low precision neural networks using subband decomposition. arXiv preprint arXiv:1703.08595, 2017.
[10] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing convolutional neural networks. arXiv preprint arXiv:1506.04449, 2015.
[11] M. Courbariaux and Y. Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or −1. arXiv preprint arXiv:1602.02830, 2016.
[12] M. Courbariaux, J.-P. David, and Y. Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014.
[13] M. Courbariaux, Y. Bengio, and J.-P. David. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pages 3105–3113, 2015.
[14] M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
[15] X. Dong, J. Huang, Y. Yang, and S. Yan. More is less: A more complicated network with less inference complexity. arXiv preprint arXiv:1703.08651, 2017.
[16] M. A. Figueiredo. Adaptive sparseness using Jeffreys' prior. Advances in Neural Information Processing Systems, 1:697–704, 2002.
[17] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. ICML, 2016.
[18] Y. Gong, L. Liu, M. Yang, and L. Bourdev. Compressing deep convolutional networks using vector quantization. ICLR, 2015.
[19] A. Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pages 2348–2356, 2011.
[20] P. D. Grünwald. The minimum description length principle. MIT Press, 2007.
[21] Y. Guo, A. Yao, and Y. Chen. Dynamic network surgery for efficient DNNs. In Advances in Neural Information Processing Systems, pages 1379–1387, 2016.
[22] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. CoRR, abs/1502.02551, 2015.
[23] P. Gysel. Ristretto: Hardware-oriented approximation of convolutional neural networks. Master's thesis, University of California, 2016.
[24] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015.
[25] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. ICLR, 2016.
[26] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
[27] G. E. Hinton and D. Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 5–13. ACM, 1993.
[28] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[29] A. Honkela and H. Valpola. Variational learning and bits-back coding: An information-theoretic view to Bayesian learning. IEEE Transactions on Neural Networks, 15(4):800–810, 2004.
[30] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[31] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. ICLR, 2017.
[32] J. B. Ingraham and D. S. Marks. Bayesian sparsity for intractable distributions. arXiv preprint arXiv:1602.03807, 2016.
[33] T. Karaletsos and G. Rätsch. Automatic relevance determination for deep generative models. arXiv preprint arXiv:1505.07765, 2015.
[34] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. International Conference on Learning Representations (ICLR), 2014.
[35] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. Advances in Neural Information Processing Systems, 2015.
[36] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images, 2009.
[37] N. D. Lawrence. Note relevance determination. In Neural Nets WIRN Vietri-01, pages 128–133. Springer, 2002.
[38] Y. LeCun, J. S. Denker, S. A. Solla, R. E. Howard, and L. D. Jackel. Optimal brain damage. In NIPS, volume 2, pages 598–605, 1989.
[39] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[40] Y. LeCun, C. Cortes, and C. J. Burges. The MNIST database of handwritten digits, 1998.
[41] D. D. Lin and S. S. Talathi. Overcoming challenges in fixed point training of deep convolutional networks. ICML Workshop, 2016.
[42] D. D. Lin, S. S. Talathi, and V. S. Annapureddy. Fixed point quantization of deep convolutional networks. arXiv preprint arXiv:1511.06393, 2015.
[43] C. Louizos. Smart regularization of deep architectures. Master's thesis, University of Amsterdam, 2015.
[44] C. Louizos and M. Welling. Multiplicative normalizing flows for variational Bayesian neural networks. arXiv e-prints, March 2017.
[45] D. J. MacKay. Probable networks and plausible predictions: a review of practical Bayesian methods for supervised neural networks. Network: Computation in Neural Systems, 6(3):469–505, 1995.
[46] N. Mellempudi, A. Kundu, D. Mudigere, D. Das, B. Kaul, and P. Dubey. Ternary neural networks with fine-grained quantization. arXiv preprint arXiv:1705.01462, 2017.
[47] P. Merolla, R. Appuswamy, J. Arthur, S. K. Esser, and D. Modha. Deep neural networks are robust to weight binarization and other non-linear distortions. arXiv preprint arXiv:1606.01981, 2016.
[48] T. J. Mitchell and J. J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[49] D. Molchanov, A. Ashukha, and D. Vetrov. Variational dropout sparsifies deep neural networks. arXiv preprint arXiv:1701.05369, 2017.
[50] E. Nalisnick, A. Anandkumar, and P. Smyth. A scale mixture perspective of multiplicative noise in neural networks. arXiv preprint arXiv:1506.03208, 2015.
[51] R. M. Neal. Bayesian learning for neural networks. PhD thesis, Citeseer, 1995.
[52] S. E. Neville, J. T. Ormerod, M. Wand, et al.
Mean field variational Bayes for continuous sparse signal shrinkage: Pitfalls and remedies. Electronic Journal of Statistics, 8(1):1113–1151, 2014.
[53] O. Papaspiliopoulos, G. O. Roberts, and M. Sköld. A general framework for the parametrization of hierarchical models. Statistical Science, pages 59–73, 2007.
[54] C. Peterson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995–1019, 1987.
[55] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016.
[56] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21–26 June 2014, pages 1278–1286, 2014.
[57] J. Rissanen. Modeling by shortest data description. Automatica, 14(5):465–471, 1978.
[58] J. Rissanen. Stochastic complexity and modeling. The Annals of Statistics, pages 1080–1100, 1986.
[59] S. Scardapane, D. Comminiello, A. Hussain, and A. Uncini. Group sparse regularization for deep neural networks. arXiv preprint arXiv:1607.00485, 2016.
[60] S. Shi and X. Chu. Speeding up convolutional neural networks by exploiting the sparsity of rectifier units. arXiv preprint arXiv:1704.07724, 2017.
[61] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.
[62] C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther. Ladder variational autoencoders. arXiv preprint arXiv:1602.02282, 2016.
[63] S. Srinivas and R. V. Babu. Generalized dropout. arXiv preprint arXiv:1611.06791, 2016.
[64] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[65] V. Sze, Y.-H. Chen, T.-J. Yang, and J. Emer. Efficient processing of deep neural networks: A tutorial and survey. arXiv preprint arXiv:1703.09039, 2017.
[66] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267–288, 1996.
[67] K. Ullrich, E. Meeds, and M. Welling. Soft weight-sharing for neural network compression. ICLR, 2017.
[68] G. Venkatesh, E. Nurvitadhi, and D. Marr. Accelerating deep convolutional networks using low-precision and sparsity. arXiv preprint arXiv:1610.00324, 2016.
[69] C. S. Wallace. Classification by minimum-message-length inference. In International Conference on Computing and Information, pages 72–81. Springer, 1990.
[70] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074–2082, 2016.
[71] T.-J. Yang, Y.-H. Chen, and V. Sze. Designing energy-efficient convolutional neural networks using energy-aware pruning. CVPR, 2017.
[72] C. Zhu, S. Han, H. Mao, and W. J. Dally. Trained ternary quantization. ICLR, 2017.
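As promised in Section 5.3, here is a minimal timing sketch for the forward-pass benchmark. It is a hypothetical illustration, not the harness used for the paper's numbers: the model below is a stand-in for the pruned LeNet-5-Caffe architecture, it uses the modern Keras API rather than the TensorFlow 1.0 graph API reported above, and the repetition count is a placeholder.

    import time
    import numpy as np
    import tensorflow as tf

    # Hypothetical stand-in for the pruned LeNet-5 architecture; any Keras model works here.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 5, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])

    batch = np.random.rand(8192, 28, 28, 1).astype(np.float32)

    # Warm-up passes so that one-time graph and kernel setup is not counted.
    for _ in range(10):
        model.predict(batch, batch_size=8192, verbose=0)

    n_reps = 100  # the paper averages over far more passes
    start = time.perf_counter()
    for _ in range(n_reps):
        model.predict(batch, batch_size=8192, verbose=0)
    print("avg forward pass: %.4f s" % ((time.perf_counter() - start) / n_reps))

Energy measurement requires a hardware-specific tool (for example the GPU vendor's power monitoring utility) and is not reproduced here.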
Streaming Sparse Gaussian Process Approximations

Thang D. Bui*  Cuong V. Nguyen*  Richard E. Turner
Department of Engineering, University of Cambridge, UK
{tdb40,vcn22,ret26}@cam.ac.uk

Abstract

Sparse pseudo-point approximations for Gaussian process (GP) models provide a suite of methods that support deployment of GPs in the large data regime and enable analytic intractabilities to be sidestepped. However, the field lacks a principled method to handle streaming data in which both the posterior distribution over function values and the hyperparameter estimates are updated in an online fashion. The small number of existing approaches either use suboptimal hand-crafted heuristics for hyperparameter learning, or suffer from catastrophic forgetting or slow updating when new data arrive. This paper develops a new principled framework for deploying Gaussian process probabilistic models in the streaming setting, providing methods for learning hyperparameters and optimising pseudo-input locations. The proposed framework is assessed using synthetic and real-world datasets.

1 Introduction

Probabilistic models employing Gaussian processes have become a standard approach to solving many machine learning tasks, thanks largely to the modelling flexibility, robustness to overfitting, and well-calibrated uncertainty estimates afforded by the approach [1]. One of the pillars of the modern Gaussian process probabilistic modelling approach is a set of sparse approximation schemes that allow the prohibitive computational cost of GP methods, typically O(N^3) for training and O(N^2) for prediction where N is the number of training points, to be substantially reduced whilst still retaining accuracy. Arguably the most important and influential approximations of this sort are pseudo-point approximation schemes that employ a set of M << N pseudo-points to summarise the observational data, thereby reducing computational costs to O(NM^2) and O(M^2) for training and prediction, respectively [2, 3]. Stochastic optimisation methods that employ mini-batches of training data can be used to further reduce computational costs [4, 5, 6, 7], allowing GPs to be scaled to datasets comprising millions of data points.

The focus of this paper is to provide a comprehensive framework for deploying the Gaussian process probabilistic modelling approach to streaming data. That is, data that arrive sequentially in an online fashion, possibly in small batches, and whose number is not known a priori (and indeed may be infinite). The vast majority of previous work has focussed exclusively on the batch setting, and there is not a satisfactory framework that supports learning and approximation in the streaming setting. A naïve approach might simply incorporate each new datum as it arrives into an ever-growing dataset and retrain the GP model from scratch each time. With infinite computational resources, this approach is optimal, but in the majority of practical settings it is intractable. A feasible alternative would train on just the most recent K training data points, but this completely ignores potentially large amounts of informative training data, and it does not provide a method for incorporating the old model into the new one, which would save computation (except perhaps through initialisation of the hyperparameters).
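To make the cubic cost of this naïve strategy concrete, the following sketch (a minimal, hypothetical numpy illustration, not code from the paper) implements exact GP regression; the Cholesky factorisation of the N x N kernel matrix is the O(N^3) step that would have to be repeated from scratch every time new data arrive.

    import numpy as np

    def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel k(x, x') = s^2 exp(-|x - x'|^2 / (2 l^2)).
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    def exact_gp_posterior(X, y, Xs, noise_var=0.1):
        N = X.shape[0]
        K = rbf_kernel(X, X) + noise_var * np.eye(N)
        L = np.linalg.cholesky(K)           # O(N^3): the bottleneck re-paid on every retrain
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        Ks = rbf_kernel(X, Xs)
        mean = Ks.T @ alpha                 # predictive mean at test inputs Xs
        v = np.linalg.solve(L, Ks)
        var = rbf_kernel(Xs, Xs) - v.T @ v  # predictive covariance of the latent function
        return mean, var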
Existing, sparse approximation schemes could be applied in the same manner, but they merely allow K to be increased, rather than allowing all previous data to be leveraged, and again do not utilise intermediate approximate fits.

(* These authors contributed equally to this work.)

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

What is needed is a method for performing learning and sparse approximation that incrementally updates the previously fit model using the new data. Such an approach would utilise all the previous training data (as they will have been incorporated into the previously fit model) and leverage as much of the previous computation as possible at each stage (since the algorithm only requires access to the data at the current time point). Existing stochastic sparse approximation methods could potentially be used by collecting the streamed data into mini-batches. However, the assumptions underpinning these methods are ill-suited to the streaming setting and they perform poorly (see sections 2 and 4).

This paper provides a new principled framework for deploying Gaussian process probabilistic models in the streaming setting. The framework subsumes Csató and Opper's two seminal approaches to online regression [8, 9] that were based upon the variational free energy (VFE) and expectation propagation (EP) approaches to approximate inference respectively. In the new framework, these algorithms are recovered as special cases. We also provide principled methods for learning hyperparameters (learning was not treated in the original work and the extension is non-trivial) and optimising pseudo-input locations (previously handled via hand-crafted heuristics). The approach also relates to the streaming variational Bayes framework [10]. We review background material in the next section and detail the technical contribution in section 3, followed by several experiments on synthetic and real-world data in section 4.

2 Background

Regression models that employ Gaussian processes are state of the art for many datasets [11]. In this paper we focus on the simplest GP regression model as a test case of the streaming framework for inference and learning. Given N input and real-valued output pairs {x_n, y_n}_{n=1}^N, a standard GP regression model assumes y_n = f(x_n) + ε_n, where f is an unknown function that is corrupted by Gaussian observation noise ε_n ~ N(0, σ_y²). Typically, f is assumed to be drawn from a zero-mean GP prior f ~ GP(0, k(·, ·|θ)) whose covariance function depends on hyperparameters θ. In this simple model, the posterior over f, p(f|y, θ), and the marginal likelihood p(y|θ) can be computed analytically (here we have collected the observations into a vector y = {y_n}_{n=1}^N; the dependence on the inputs {x_n}_{n=1}^N of the posterior, marginal likelihood, and other quantities is suppressed throughout to lighten the notation). However, these quantities present a computational challenge, resulting in an O(N^3) complexity for maximum likelihood training and O(N^2) per test point for prediction.

This prohibitive complexity of exact learning and inference in GP models has driven the development of many sparse approximation frameworks [12, 13]. In this paper, we focus on the variational free energy approximation scheme [3, 14], which lower bounds the marginal likelihood of the data using a variational distribution q(f) over the latent function:

\[ \log p(y|\theta) = \log \int \mathrm{d}f\, p(y, f|\theta) \;\ge\; \int \mathrm{d}f\, q(f) \log \frac{p(y, f|\theta)}{q(f)} = \mathcal{F}_{\text{vfe}}(q, \theta). \tag{1} \]
Since \( \mathcal{F}_{\text{vfe}}(q, \theta) = \log p(y|\theta) - \mathrm{KL}[q(f)\,\|\,p(f|y, \theta)] \), where KL[·||·] denotes the Kullback–Leibler divergence, maximising this lower bound with respect to q(f) guarantees that the approximate posterior gets closer to the exact posterior p(f|y, θ). Moreover, the variational bound F_vfe(q, θ) approximates the marginal likelihood and can be used for learning the hyperparameters θ.

In order to arrive at a computationally tractable method, the approximate posterior is parameterized via a set of M pseudo-points u that are a subset of the function values f = {f_{≠u}, u} and which will summarise the data. Specifically, the approximate posterior is assumed to be q(f) = p(f_{≠u}|u, θ) q(u), where q(u) is a variational distribution over u and p(f_{≠u}|u, θ) is the prior distribution of the remaining latent function values. This assumption allows the following critical cancellation of the p(f_{≠u}|u, θ) terms, which results in a computationally tractable lower bound:

\[ \mathcal{F}_{\text{vfe}}(q(u), \theta) = \int \mathrm{d}f\, q(f) \log \frac{p(y|f, \theta)\, p(u|\theta)\, p(f_{\neq u}|u, \theta)}{p(f_{\neq u}|u, \theta)\, q(u)} = -\mathrm{KL}[q(u)\,\|\,p(u|\theta)] + \sum_n \int \mathrm{d}u\, q(u)\, p(f_n|u, \theta) \log p(y_n|f_n, \theta), \]

where f_n = f(x_n) is the latent function value at x_n. For the simple GP regression model considered here, closed-form expressions for the optimal variational approximation q_vfe(f) and the optimal variational bound F_vfe(θ) = max_{q(u)} F_vfe(q(u), θ) (also called the "collapsed" bound) are available:

\[ p(f|y, \theta) \approx q_{\text{vfe}}(f) \propto p(f_{\neq u}|u, \theta)\, p(u|\theta)\, \mathcal{N}(y;\, K_{fu} K_{uu}^{-1} u,\, \sigma_y^2 I), \]
\[ \log p(y|\theta) \approx \mathcal{F}_{\text{vfe}}(\theta) = \log \mathcal{N}(y;\, 0,\, K_{fu} K_{uu}^{-1} K_{uf} + \sigma_y^2 I) - \frac{1}{2\sigma_y^2} \sum_n \left( k_{nn} - K_{nu} K_{uu}^{-1} K_{un} \right), \]

where f denotes the latent function values at the training points, and K_{f1 f2} is the covariance matrix between the latent function values f1 and f2. Critically, the approach leads to O(NM^2) complexity for approximate maximum likelihood learning and O(M^2) per test point for prediction. In order for this method to perform well, it is necessary to adapt the pseudo-point input locations, e.g. by optimising the variational free energy, so that the pseudo-data distribute themselves over the training data.
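For concreteness, the collapsed bound above can be transcribed almost directly into code. The sketch below is a hypothetical numpy illustration (the kernel argument can be, for instance, the rbf_kernel helper defined in the earlier sketch); a practical implementation would use the Woodbury identity to avoid ever forming the N x N matrix, which this naive version does not.

    import numpy as np
    from scipy.stats import multivariate_normal

    def collapsed_vfe_bound(X, y, Z, noise_var, kernel):
        # F_vfe(theta) = log N(y; 0, Q_ff + sigma^2 I) - sum_n (k_nn - q_nn) / (2 sigma^2),
        # where Q_ff = K_fu K_uu^{-1} K_uf is the Nystrom approximation to K_ff.
        Kuu = kernel(Z, Z) + 1e-8 * np.eye(len(Z))   # jitter for numerical stability
        Kuf = kernel(Z, X)
        Qff = Kuf.T @ np.linalg.solve(Kuu, Kuf)
        fit = multivariate_normal.logpdf(
            y, mean=np.zeros(len(X)), cov=Qff + noise_var * np.eye(len(X)))
        trace_term = np.sum(np.diag(kernel(X, X)) - np.diag(Qff)) / (2.0 * noise_var)
        return fit - trace_term

Maximising this quantity with respect to the hyperparameters and the pseudo-inputs Z (for example by gradient ascent) recovers the batch sparse GP fit.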
Alternatively, stochastic optimisation may be applied directly to the original, uncollapsed version of the bound [4, 15]. In particular, an unbiased estimate of the variational lower bound can be obtained using a small number of training points randomly drawn from the training set:

\[ \mathcal{F}_{\text{vfe}}(q(u), \theta) \approx -\mathrm{KL}[q(u)\,\|\,p(u|\theta)] + \frac{N}{|B|} \sum_{y_n \in B} \int \mathrm{d}u\, q(u)\, p(f_n|u, \theta) \log p(y_n|f_n, \theta). \]

Since the optimal approximation is Gaussian as shown above, q(u) is often posited as a Gaussian distribution and its parameters are updated by following the (noisy) gradients of the stochastic estimate of the variational lower bound. By passing through the training set a sufficient number of times, the variational distribution converges to the optimal solution above, given appropriately decaying learning rates [4].

In principle, the stochastic uncollapsed approach is applicable to the streaming setting, as it refines an approximate posterior based on mini-batches of data that can be considered to arrive sequentially (here N would be the number of data points seen so far). However, it is unsuited to this task, since stochastic optimisation assumes that the data subsampling process is uniformly random, that the training set is revisited multiple times, and it typically makes a single gradient update per mini-batch. These assumptions are incompatible with the streaming setting: continuously arriving data are not typically drawn iid from the input distribution (consider an evolving time-series, for example); the data can only be touched once by the algorithm and not revisited due to computational constraints; and each mini-batch needs to be processed intensively as it will not be revisited (multiple gradient steps would normally be required, for example, and this runs the risk of forgetting old data without delicately tuning the learning rates). In the following sections, we shall discuss how to tackle these challenges through a novel online inference and learning procedure, and demonstrate the efficacy of this method over the uncollapsed approach and naïve online versions of the collapsed approach.

3 Streaming sparse GP (SSGP) approximation using variational inference

The general situation assumed in this paper is that data arrive sequentially, so that at each step new data points y_new are added to the old dataset y_old. The goal is to approximate the marginal likelihood and the posterior of the latent process at each step, which can be used for anytime prediction. The hyperparameters will also be adjusted online. Importantly, we assume that we can only access the current data points y_new directly, for computational reasons (it might be too expensive to hold y_old and x_{1:N_old} in memory, for example, or approximations made at the previous step must be reused to reduce computational overhead). So the effect of the old data on the current posterior must be propagated through the previous posterior. We will now develop a new sparse variational free energy approximation for this purpose, one that compactly summarises the old data via pseudo-points. The pseudo-inputs will also be adjusted online, since this is critical as new parts of the input space will be revealed over time. The framework is easily extensible to more complex non-linear models.

3.1 Online variational free energy inference and learning

Consider an approximation to the true posterior at the previous step, q_old(f), which must be updated to form the new approximation q_new(f):

\[ q_{\text{old}}(f) \approx p(f|y_{\text{old}}) = \frac{1}{Z_1(\theta_{\text{old}})}\, p(f|\theta_{\text{old}})\, p(y_{\text{old}}|f), \tag{2} \]
\[ q_{\text{new}}(f) \approx p(f|y_{\text{old}}, y_{\text{new}}) = \frac{1}{Z_2(\theta_{\text{new}})}\, p(f|\theta_{\text{new}})\, p(y_{\text{old}}|f)\, p(y_{\text{new}}|f). \tag{3} \]

Whilst the updated exact posterior p(f|y_old, y_new) balances the contribution of old and new data through their likelihoods, the new approximation cannot access p(y_old|f) directly. Instead, we can find an approximation of p(y_old|f) by inverting eq. (2), that is, p(y_old|f) ≈ Z_1(θ_old) q_old(f) / p(f|θ_old). Substituting this into eq. (3) yields

\[ \hat{p}(f|y_{\text{old}}, y_{\text{new}}) = \frac{Z_1(\theta_{\text{old}})}{Z_2(\theta_{\text{new}})}\, p(f|\theta_{\text{new}})\, p(y_{\text{new}}|f)\, \frac{q_{\text{old}}(f)}{p(f|\theta_{\text{old}})}. \tag{4} \]

Although it is tempting to use this as the new posterior, q_new(f) = p̂(f|y_old, y_new), this recovers exact GP regression with fixed hyperparameters (see section 3.3), and it is intractable. So, instead, we consider a variational update that projects the distribution back to a tractable form using pseudo-data. At this stage we allow the pseudo-data input locations in the new approximation to differ from those in the old one. This is required if new regions of input space are gradually revealed, as for example in typical time-series applications. Let a = f(z_old) and b = f(z_new) be the function values at the pseudo-inputs before and after seeing new data. Note that the numbers of pseudo-points, M_a = |a| and M_b = |b|, are not necessarily restricted to be the same.
The form of the approximate posterior mirrors that in the batch case: the previous approximate posterior is q_old(f) = p(f_{≠a}|a, θ_old) q_old(a), where we assume q_old(a) = N(a; m_a, S_a). The new posterior approximation takes the same form, but with the new pseudo-points and new hyperparameters: q_new(f) = p(f_{≠b}|b, θ_new) q_new(b). Similar to the batch case, this approximate inference problem can be turned into an optimisation problem using variational inference. Specifically, consider

\[ \mathrm{KL}[q_{\text{new}}(f)\,\|\,\hat{p}(f|y_{\text{old}}, y_{\text{new}})] = \int \mathrm{d}f\, q_{\text{new}}(f) \log \frac{p(f_{\neq b}|b, \theta_{\text{new}})\, q_{\text{new}}(b)}{\frac{Z_1(\theta_{\text{old}})}{Z_2(\theta_{\text{new}})}\, q_{\text{old}}(f)\, \frac{p(f|\theta_{\text{new}})\, p(y_{\text{new}}|f)}{p(f|\theta_{\text{old}})}} = \log \frac{Z_2(\theta_{\text{new}})}{Z_1(\theta_{\text{old}})} + \int \mathrm{d}f\, q_{\text{new}}(f) \log \frac{p(a|\theta_{\text{old}})\, q_{\text{new}}(b)}{p(b|\theta_{\text{new}})\, q_{\text{old}}(a)\, p(y_{\text{new}}|f)}. \tag{5} \]

Since the KL divergence is non-negative, the second term in the expression above is the negative approximate lower bound of the online log marginal likelihood (as Z_2/Z_1 ≈ p(y_new|y_old)), or the variational free energy F(q_new(f), θ_new). By setting the derivative of F with respect to q(b) equal to 0, the optimal approximate posterior can be obtained for the regression case (note that we have dropped θ_new from p(b|θ_new), p(a|b, θ_new) and p(f|b, θ_new) to lighten the notation):

\[ q_{\text{vfe}}(b) \propto p(b) \exp\left( \int \mathrm{d}a\, p(a|b) \log \frac{q_{\text{old}}(a)}{p(a|\theta_{\text{old}})} + \int \mathrm{d}f\, p(f|b) \log p(y_{\text{new}}|f) \right) \tag{6} \]
\[ \phantom{q_{\text{vfe}}(b)} \propto p(b)\, \mathcal{N}(\hat{y};\, K_{\hat{f}b} K_{bb}^{-1} b,\, \Sigma_{\hat{y},\text{vfe}}), \tag{7} \]

where f is the latent function values at the new training points, and

\[ \hat{y} = \begin{bmatrix} y_{\text{new}} \\ D_a S_a^{-1} m_a \end{bmatrix}, \quad D_a = \left( S_a^{-1} - K'^{-1}_{aa} \right)^{-1}, \quad K_{\hat{f}b} = \begin{bmatrix} K_{fb} \\ K_{ab} \end{bmatrix}, \quad \Sigma_{\hat{y},\text{vfe}} = \begin{bmatrix} \sigma_y^2 I & 0 \\ 0 & D_a \end{bmatrix}. \]

The negative variational free energy is also analytically available:

\[ \mathcal{F}(\theta) = \log \mathcal{N}(\hat{y};\, 0,\, K_{\hat{f}b} K_{bb}^{-1} K_{b\hat{f}} + \Sigma_{\hat{y},\text{vfe}}) - \frac{1}{2\sigma_y^2} \mathrm{tr}(K_{ff} - K_{fb} K_{bb}^{-1} K_{bf}) + \Delta_a, \tag{8} \]

where \( 2\Delta_a = -\log|S_a| + \log|K'_{aa}| + \log|D_a| + m_a^\top (S_a^{-1} D_a S_a^{-1} - S_a^{-1}) m_a - \mathrm{tr}[D_a Q_a] + \text{const} \).

Equations (7) and (8) provide the complete recipe for online posterior updates and hyperparameter learning in the streaming setting. The computational complexity and memory overhead of the new method are of the same order as those of the uncollapsed stochastic variational inference approach. The procedure is demonstrated on a toy regression example in fig. 1 [Left].
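Because eq. (7) is simply a Gaussian conditioning problem, the update can be written in a few lines. The sketch below is a hypothetical numpy transcription under the stated assumptions (fixed hyperparameters and a generic kernel function supplied by the caller; function and variable names are placeholders, not the paper's code): it builds the augmented observations ŷ of eq. (7) and returns the mean and covariance of q_new(b).

    import numpy as np

    def online_vfe_update(m_a, S_a, Z_old, Z_new, X_new, y_new, noise_var, kernel):
        # Augmented pseudo-observations of eq. (7): real targets y_new plus a
        # "message" summarising the old posterior at the old pseudo-inputs a.
        Kaa = kernel(Z_old, Z_old)
        Da = np.linalg.inv(np.linalg.inv(S_a) - np.linalg.inv(Kaa))  # D_a = (S_a^{-1} - K'_aa^{-1})^{-1}
        y_hat = np.concatenate([y_new, Da @ np.linalg.solve(S_a, m_a)])
        Kbb = kernel(Z_new, Z_new) + 1e-8 * np.eye(len(Z_new))       # jitter for stability
        Khb = np.vstack([kernel(X_new, Z_new), kernel(Z_old, Z_new)])  # K_{f-hat, b}
        Sigma = np.block([
            [noise_var * np.eye(len(X_new)), np.zeros((len(X_new), len(Z_old)))],
            [np.zeros((len(Z_old), len(X_new))), Da],
        ])                                                           # Sigma_{y-hat, vfe}
        # Gaussian conditioning for q_new(b) = N(b; m_b, S_b):
        #   S_b^{-1} = K_bb^{-1} + A^T Sigma^{-1} A,   m_b = S_b A^T Sigma^{-1} y_hat,
        # with A = K_{f-hat, b} K_bb^{-1}.
        A = np.linalg.solve(Kbb, Khb.T).T
        S_b = np.linalg.inv(np.linalg.inv(Kbb) + A.T @ np.linalg.solve(Sigma, A))
        m_b = S_b @ (A.T @ np.linalg.solve(Sigma, y_hat))
        return m_b, S_b

In a full implementation this update would be interleaved with gradient steps on eq. (8) for the hyperparameters and the new pseudo-inputs Z_new.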
3.2 Online α-divergence inference and learning

One obvious extension of the online approach discussed above replaces the KL divergence in eq. (5) with a more general α-divergence [16]. This does not affect tractability: the optimal form of the approximate posterior can be obtained analytically for the regression case,

\[ q_{\text{pep}}(b) \propto p(b)\, \mathcal{N}(\hat{y};\, K_{\hat{f}b} K_{bb}^{-1} b,\, \Sigma_{\hat{y},\text{pep}}), \quad \text{where} \quad \Sigma_{\hat{y},\text{pep}} = \begin{bmatrix} \sigma_y^2 I + \alpha\, \mathrm{diag}(K_{ff} - K_{fb} K_{bb}^{-1} K_{bf}) & 0 \\ 0 & D_a + \alpha (K_{aa} - K_{ab} K_{bb}^{-1} K_{ba}) \end{bmatrix}. \tag{9} \]

This reduces back to the variational case as α → 0 (compare to eq. (7)), since the α-divergence is then equivalent to the KL divergence. The approximate online log marginal likelihood is also analytically tractable and recovers the variational case when α → 0. Full details are provided in the appendix.

Figure 1: [Left] SSGP inference and learning on a toy time-series using the VFE approach. The black crosses are data points (past points are greyed out), the red circles are pseudo-points, and blue lines and shaded areas are the marginal predictive means and confidence intervals at test points. [Right] Log-likelihood of test data as training data arrive for different α values, for the pseudo periodic dataset (see section 4.2). We observed that α = 0.01 is virtually identical to VFE. Dark lines are means over 4 splits and shaded lines are results for each split. Best viewed in colour.

3.3 Connections to previous work and special cases

This section briefly highlights connections between the new framework and existing approaches, including Power Expectation Propagation (Power-EP), Expectation Propagation (EP), Assumed Density Filtering (ADF), and streaming variational Bayes. Recent work has unified a range of batch sparse GP approximations as special cases of the Power-EP algorithm [13]. The online α-divergence approach to inference and learning described in the last section is equivalent to running a forward filtering pass of Power-EP. In other words, the current work generalizes the unifying framework to the streaming setting. When the hyperparameters and the pseudo-inputs are fixed, α-divergence inference for sparse GP regression recovers the batch solutions provided by Power-EP; in other words, only a single pass through the data is necessary for Power-EP to converge in sparse GP regression. For the case α = 1, which is called Expectation Propagation, we recover the seminal work by Csató and Opper [8]. For the variational free energy case (equivalently, where α → 0), we recover the seminal work by Csató [9]. The new framework can be seen to extend these methods to allow principled learning and pseudo-input optimisation. Interestingly, in the setting where hyperparameters and the pseudo-inputs are fixed, if pseudo-points are added at each stage at the new data input locations, the method returns the true posterior and marginal likelihood (see appendix).

For fixed hyperparameters and pseudo-points, the new VFE framework is equivalent to the application of streaming variational Bayes (VB) or online variational inference [10, 17, 18] to the GP setting, in which the previous posterior plays the role of an effective prior for the new data. Similarly, the equivalent algorithm when α = 1 is called Assumed Density Filtering [19]. When the hyperparameters are updated, the new method proposed here is different from streaming VB and standard application of ADF, as the new method propagates approximations to just the old likelihood terms and not the prior. Importantly, we found that vanilla application of the streaming VB framework performed catastrophically for hyperparameter learning, so the modification is critical.

4 Experiments

In this section, the SSGP method is evaluated in terms of speed, memory usage, and accuracy (log-likelihood and error). The method was implemented on GPflow [20] and compared against GPflow's versions of the following baselines: exact GP (GP), sparse GP using the collapsed bound (SGP), and stochastic variational inference using the uncollapsed bound (SVI). In all the experiments, the RBF kernel with ARD lengthscales is used, but this is not a limitation required by the new methods. An implementation of the proposed method can be found at http://github.com/thangbui/streaming_sparse_gp. Full experimental results and additional discussion points are included in the appendix.

4.1 Synthetic data

Comparing α-divergences. We first consider the general online α-divergence inference and learning framework and compare the performance of different α values on a toy online regression dataset in fig. 1 [Right].
Whilst the variational approach performs well, adapting pseudo-inputs to cover new regions of input space as they are revealed, algorithms using higher α values perform more poorly. Interestingly, this appears to be related to the tendency for EP, in batch settings, to clump pseudo-inputs on top of one another [21]. Here the effect is much more extreme, as the clumps accumulate over time, leading to a shortage of pseudo-points if the input range of the data increases. Although heuristics could be introduced to break up the clumps, this result suggests that using small α values for online inference and learning might be more appropriate (this recommendation differs from the batch setting, where intermediate settings of α around 0.5 are best [13]). Due to these findings, for the rest of the paper we focus on the variational case.

Hyperparameter learning. We generated multiple time-series from GPs with known hyperparameters and observation noises, and tracked the hyperparameters learnt by the proposed online variational free energy method and by exact GP regression. Overall, SSGP can track and learn good hyperparameters and, if there are sufficient pseudo-points, it performs comparably to the full GP on the entire dataset. Interestingly, all models, including full GP regression, tend to learn bigger noise variances, as any discrepancy between the true and learned function values is absorbed into this parameter.

4.2 Speed versus accuracy

In this experiment, we compare SSGP to the baselines (GP, SGP, and SVI) in terms of a speed-accuracy trade-off, where the mean marginal log-likelihood (MLL) and the root mean squared error (RMSE) are plotted against the accumulated running time of each method after each iteration. The comparison is performed on two time-series datasets and a spatial dataset.

Time-series data. We first consider modelling a segment of the pseudo periodic synthetic dataset [22], previously used for testing indexing schemes in time-series databases. The segment contains 24,000 time-steps. Training and testing sets are chosen interleaved so that their sizes are both 12,000. The second dataset is an audio signal prediction dataset, produced from the TIMIT database [23] and previously used to evaluate GP approximations [24]. The signal was shifted down to the baseband, and a segment of length 18,000 was used to produce interleaved training and testing sets containing 9,000 time-steps each. For both datasets, we linearly scale the input time-steps to the range [0, 10]. All algorithms are assessed in the mini-batch streaming setting with data y_new arriving in batches of size 300 and 500, taken in order from the time-series. The first 1,000 examples are used as an initial training set to obtain a reasonable starting model for each algorithm. In this experiment, we use memory-limited versions of GP and SGP that store the last 3,000 examples. This number was chosen so that the running times of these algorithms match those of SSGP or are slightly higher. For all sparse methods (SSGP, SGP, and SVI), we run the experiments with 100 and 200 pseudo-points. For SVI, we allow the algorithm to make 100 stochastic gradient updates during each iteration and run preliminary experiments to compare three learning rates, r = 0.001, 0.01, and 0.1. The preliminary results showed that the performance of SVI was not significantly altered, and so we only present the results for r = 0.1.
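The interleaved-split and in-order mini-batch protocol described above is easy to reproduce; the following sketch (a hypothetical illustration with placeholder names, not the paper's experiment code) builds interleaved train/test sets from a time-series and yields streaming batches in temporal order.

    import numpy as np

    def interleaved_split(x, y):
        # Alternate points between train and test so both cover the whole time range.
        train_idx = np.arange(0, len(x), 2)
        test_idx = np.arange(1, len(x), 2)
        return (x[train_idx], y[train_idx]), (x[test_idx], y[test_idx])

    def stream_batches(x, y, init_size=1000, batch_size=300):
        # First `init_size` points form the initial training set; the rest arrive
        # sequentially, in temporal order, as the streaming mini-batches.
        yield x[:init_size], y[:init_size]
        for start in range(init_size, len(x), batch_size):
            yield x[start:start + batch_size], y[start:start + batch_size]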
Figure 2 shows plots of the accumulated running time (total training and testing time up until the current iteration) against the MLL and RMSE for the considered algorithms. It is clear that SSGP significantly outperforms the other methods in terms of both the MLL and the RMSE, once sufficient training data have arrived. The performance of SSGP improves when the number of pseudo-points increases, but the algorithm runs more slowly. In contrast, the performance of GP and SGP, even after seeing more data or using more pseudo-points, does not increase significantly, since they can only model a limited amount of data (the last 3,000 examples).

Spatial data. The second set of experiments considers the OS Terrain 50 dataset, which contains spot heights of landscapes in Great Britain computed on a grid. (Footnote 4: The dataset is available at https://data.gov.uk/dataset/os-terrain-50-dtm.) A block of 200 x 200 points was split into 10,000 training examples and 30,000 interleaved testing examples. Mini-batches of data of size 750 and 1,000 arrive in spatial order. The first 1,000 examples were used as an initial training set. For this dataset, we allow GP and SGP to remember the last 7,500 examples and use 400 and 600 pseudo-points for the sparse models. Figure 3 shows the results for this dataset. SSGP performs better than the other baselines in terms of the RMSE, although it is worse than GP and SGP in terms of the MLL.

[Figure 2 panels: mean log-likelihood and RMSE against accumulated running time (s) for GP, SGP, SSGP and SVI (r = 0.1), on the pseudo periodic data (batch sizes 300 and 500) and the audio data (batch sizes 300 and 500).]

Figure 2: Results for time-series datasets with batch sizes 300 and 500. Pluses and circles indicate the results for M = 100, 200 pseudo-points respectively. For each algorithm (except for GP), the solid and dashed lines are the efficient frontier curves for M = 100, 200 respectively.

4.3 Memory usage versus accuracy

Besides running time, memory usage is another important factor that should be considered. In this experiment, we compare the memory usage of SSGP against GP and SGP on the Terrain dataset above, with batch size 750 and M = 600 pseudo-points. We allow GP and SGP to use the last 2,000 and 6,000 examples for training, respectively. These numbers were chosen so that the memory usage of the two baselines roughly matches that of SSGP. Figure 4 plots the maximum memory usage of the three methods against the MLL and RMSE. From the figure, SSGP requires little memory while achieving comparable or better MLL and RMSE than GP and SGP.

4.4 Binary classification

We show a preliminary result for GP models with non-Gaussian likelihoods, in particular a binary classification model on the benchmark banana dataset.
As the optimal form for the approximate posterior is not analytically tractable, the uncollapsed variational free energy is optimised numerically. The predictions made by SSGP in a non-iid streaming setting are shown in fig. 5. SSGP performs well and achieves the performance of the batch sparse variational method [5].

[Figure 3 panels: mean log-likelihood and RMSE against accumulated running time (s) on the terrain data, batch sizes 750 and 1000.]

Figure 3: Results for spatial data (see fig. 2 for the legend). Pluses/solid lines and circles/dashed lines indicate the results for M = 400, 600 pseudo-points respectively.

[Figure 4 panels: mean log-likelihood and RMSE against maximum memory usage (MB).]

Figure 4: Memory usage of SSGP (blue), GP (magenta) and SGP (red) against MLL and RMSE.

[Figure 5 panels, left to right: error = 0.28, 0.15, 0.10, 0.10; axes x1, x2.]

Figure 5: SSGP inference and learning on a binary classification task in a non-iid streaming setting. The right-most plot shows the prediction made by using sparse variational inference on the full training data [5] for comparison. Past observations are greyed out. The pseudo-points are shown as black dots and the black curves show the decision boundary.

5 Summary

We have introduced a novel online inference and learning framework for Gaussian process models. The framework unifies disparate methods in the literature and greatly extends them, allowing sequential updates of the approximate posterior and online hyperparameter optimisation in a principled manner. The proposed approach outperforms existing approaches on a wide range of regression datasets and shows promising results on a binary classification dataset. A more thorough investigation of models with non-Gaussian likelihoods is left as future work. We believe that this framework will be particularly useful for efficient deployment of GPs in sequential decision-making problems such as active learning, Bayesian optimisation, and reinforcement learning.

Acknowledgements

The authors would like to thank Mark Rowland, John Bradshaw, and Yingzhen Li for insightful comments and discussion. Thang D. Bui is supported by the Google European Doctoral Fellowship. Cuong V. Nguyen is supported by EPSRC grant EP/M0269571. Richard E. Turner is supported by Google as well as EPSRC grants EP/M0269571 and EP/L000776/1.

References

[1] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. The MIT Press, 2006.
[2] E. Snelson and Z. Ghahramani, "Sparse Gaussian processes using pseudo-inputs," in Advances in Neural Information Processing Systems (NIPS), 2006.
[3] M. K. Titsias, "Variational learning of inducing variables in sparse Gaussian processes," in International Conference on Artificial Intelligence and Statistics (AISTATS), 2009.
[4] J. Hensman, N. Fusi, and N. D. Lawrence, "Gaussian processes for big data," in Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
[5] J. Hensman, A. G. D. G. Matthews, and Z. Ghahramani, "Scalable variational Gaussian process classification," in International Conference on Artificial Intelligence and Statistics (AISTATS), 2015.
[6] A. Dezfouli and E.
V. Bonilla, "Scalable inference for Gaussian process models with black-box likelihoods," in Advances in Neural Information Processing Systems (NIPS), 2015.
[7] D. Hernández-Lobato and J. M. Hernández-Lobato, "Scalable Gaussian process classification via expectation propagation," in International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
[8] L. Csató and M. Opper, "Sparse online Gaussian processes," Neural Computation, 2002.
[9] L. Csató, Gaussian Processes – Iterative Sparse Approximations. PhD thesis, Aston University, 2002.
[10] T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson, and M. I. Jordan, "Streaming variational Bayes," in Advances in Neural Information Processing Systems (NIPS), 2013.
[11] T. D. Bui, D. Hernández-Lobato, J. M. Hernández-Lobato, Y. Li, and R. E. Turner, "Deep Gaussian processes for regression using approximate expectation propagation," in International Conference on Machine Learning (ICML), 2016.
[12] J. Quiñonero-Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," The Journal of Machine Learning Research, 2005.
[13] T. D. Bui, J. Yan, and R. E. Turner, "A unifying framework for Gaussian process pseudo-point approximations using power expectation propagation," Journal of Machine Learning Research, 2017.
[14] A. G. D. G. Matthews, J. Hensman, R. E. Turner, and Z. Ghahramani, "On sparse variational methods and the Kullback-Leibler divergence between stochastic processes," in International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
[15] C.-A. Cheng and B. Boots, "Incremental variational sparse Gaussian process regression," in Advances in Neural Information Processing Systems (NIPS), 2016.
[16] T. Minka, "Power EP," tech. rep., Microsoft Research, Cambridge, 2004.
[17] Z. Ghahramani and H. Attias, "Online variational Bayesian learning," in NIPS Workshop on Online Learning, 2000.
[18] M.-A. Sato, "Online model selection based on the variational Bayes," Neural Computation, 2001.
[19] M. Opper, "A Bayesian approach to online learning," in On-Line Learning in Neural Networks, 1999.
[20] A. G. D. G. Matthews, M. van der Wilk, T. Nickson, K. Fujii, A. Boukouvalas, P. León-Villagrá, Z. Ghahramani, and J. Hensman, "GPflow: A Gaussian process library using TensorFlow," Journal of Machine Learning Research, 2017.
[21] M. Bauer, M. van der Wilk, and C. E. Rasmussen, "Understanding probabilistic sparse Gaussian process approximations," in Advances in Neural Information Processing Systems (NIPS), 2016.
[22] E. J. Keogh and M. J. Pazzani, "An indexing scheme for fast similarity search in large time series databases," in International Conference on Scientific and Statistical Database Management, 1999.
[23] J. Garofolo, L. Lamel, W. Fisher, J. Fiscus, D. Pallett, N. Dahlgren, and V. Zue, "TIMIT acoustic-phonetic continuous speech corpus LDC93S1," Philadelphia: Linguistic Data Consortium, 1993.
[24] T. D. Bui and R. E. Turner, "Tree-structured Gaussian process approximations," in Advances in Neural Information Processing Systems (NIPS), 2014.
6922 |@word briefly:1 version:4 pillar:1 reused:1 covariance:2 delicately:1 thereby:1 tr:2 solid:2 catastrophically:1 initial:2 ndez:4 series:10 exclusively:1 efficacy:1 contains:2 initialisation:1 kuf:1 interestingly:3 past:2 existing:5 outperforms:2 current:5 recovered:1 z2:5 com:1 comparing:1 must:3 john:1 fn:5 refines:1 ldc93s1:1 informative:1 analytic:1 plot:3 update:6 intelligence:5 prohibitive:2 kff:2 provides:1 revisited:3 location:5 height:1 fujii:1 become:1 overhead:2 manner:2 forgetting:2 villagr:1 indeed:1 roughly:1 themselves:1 xz:1 growing:1 gov:1 project:1 xx:3 moreover:1 notation:2 provided:2 what:1 substantially:1 whilst:3 unified:1 finding:1 suite:1 guarantee:1 pseudo:39 thorough:1 remember:1 collecting:1 dahlgren:1 tackle:1 scaled:1 uk:3 normally:1 grant:2 yn:6 arguably:1 before:1 engineering:1 dropped:1 optimised:1 might:3 black:4 plus:2 doctoral:1 garofolo:1 suggests:1 shaded:2 deployment:2 limited:2 range:4 clump:3 practical:1 testing:5 block:1 differs:1 svi:6 spot:1 procedure:2 area:1 evolving:1 yan:1 adapting:1 significantly:3 boyd:1 confidence:1 word:2 seeing:2 consortium:1 get:1 cannot:1 selection:1 collapsed:3 risk:1 seminal:3 equivalent:4 demonstrated:1 britain:1 lobato:4 williams:1 starting:1 importantly:2 handle:1 fx:1 updated:5 play:1 exact:6 gps:4 expensive:1 particularly:1 updating:1 database:4 ep:12 observed:1 role:1 epsrc:2 region:2 trade:1 fiscus:1 principled:6 complexity:4 broderick:1 cam:1 solving:1 segment:3 predictive:1 titsias:1 upon:1 f2:2 completely:1 compactly:1 easily:1 unsuited:1 train:1 fast:1 effective:1 artificial:5 lengthscales:1 whose:2 heuristic:3 valued:1 loglikelihood:1 statistic:4 knn:1 gp:34 noisy:1 online:26 mb:3 turned:1 flexibility:1 poorly:2 achieve:1 inducing:1 recipe:1 speedaccuracy:1 produce:1 uncollapsed:6 incremental:1 converges:1 develop:1 ac:1 ard:1 sa:4 eq:4 implemented:1 vfe:7 uu:3 indicate:2 differ:1 stochastic:10 kb:1 enable:1 observational:1 material:1 qold:10 f1:1 preliminary:3 investigation:1 keogh:1 adjusted:2 extension:2 frontier:1 underpinning:1 hold:1 around:1 considered:4 exp:1 great:1 lawrence:1 matthew:3 substituting:1 achieves:1 purpose:1 applicable:1 sidestepped:1 mit:1 gaussian:30 rather:1 ret26:1 wilson:1 linguistic:1 focus:4 modelling:4 likelihood:25 greatly:1 contrast:1 tech:1 baseline:4 kfu:2 inference:21 streaming:22 accumulated:15 typically:4 entire:1 baseband:1 comprising:1 overall:1 classification:6 ill:1 priori:1 retaining:1 development:1 art:1 special:3 spatial:4 marginal:10 field:1 equal:1 once:2 beach:1 thang:2 optimising:3 identical:1 icml:1 discrepancy:1 future:1 summarise:2 develops:1 richard:2 employ:3 lighten:2 modern:1 randomly:1 ve:2 comprehensive:1 divergence:12 mll:7 microsoft:1 arrives:1 extreme:1 dezfouli:1 closer:1 necessary:2 tree:1 ynew:15 old:20 circle:3 plotted:1 increased:1 cover:1 extensible:1 cost:3 tractability:1 subset:1 nickson:1 too:1 corrupted:1 periodic:4 learnt:1 synthetic:4 calibrated:1 thanks:1 st:1 density:2 international:6 probabilistic:6 off:1 continuously:1 na:2 again:1 squared:1 thesis:1 management:1 containing:1 leveraged:1 possibly:1 slowly:1 worse:1 derivative:1 leading:1 return:1 toy:3 li:2 f6:8 distribute:1 subsumes:1 bonilla:1 depends:1 performed:2 break:1 root:1 closed:1 candela:1 view:1 red:2 bayes:5 sort:1 decaying:1 recover:2 rmse:15 timit:2 contribution:2 accuracy:4 variance:1 largely:1 yield:1 landscape:1 bayesian:3 unifies:1 critically:1 iid:3 produced:1 onero:1 nold:1 deploying:3 against:6 energy:10 minka:1 obvious:1 recovers:3 propagated:1 
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
Akash Srivastava, School of Informatics, University of Edinburgh, [email protected]
Chris Russell, The Alan Turing Institute, London, [email protected]
Lazar Valkov, School of Informatics, University of Edinburgh, [email protected]
Michael U. Gutmann, School of Informatics, University of Edinburgh, [email protected]
Charles Sutton, School of Informatics & The Alan Turing Institute, University of Edinburgh, [email protected]

Abstract
Deep generative models provide powerful tools for modeling distributions over complicated manifolds, such as those of natural images. But many of these methods, including generative adversarial networks (GANs), can be difficult to train, in part because they are prone to mode collapse, which means that they characterize only a few modes of the true distribution. To address this, we introduce VEEGAN, which features a reconstructor network, reversing the action of the generator by mapping from data to noise. Our training objective retains the original asymptotic consistency guarantee of GANs, and can be interpreted as a novel autoencoder loss over the noise. In sharp contrast to a traditional autoencoder over data points, VEEGAN does not require specifying a loss function over the data, but rather only over the representations, which are standard normal by assumption. On an extensive set of synthetic and real-world image datasets, VEEGAN indeed resists mode collapse to a far greater extent than other recent GAN variants, and produces more realistic samples.

1 Introduction
Deep generative models are a topic of enormous recent interest, providing a powerful class of tools for the unsupervised learning of probability distributions over difficult manifolds such as natural images [7, 11, 18]. Deep generative models are usually implicit statistical models [3], also called implicit probability distributions, meaning that they do not induce a density function that can be tractably computed, but rather provide a simulation procedure to generate new data points. Generative adversarial networks (GANs) [7] are an attractive such method, which have seen promising recent successes [17, 20, 23]. GANs train two deep networks in concert: a generator network that maps random noise, usually drawn from a multivariate Gaussian, to data items; and a discriminator network that estimates the likelihood ratio of the generator network to the data distribution, and is trained using an adversarial principle. Despite an enormous amount of recent work, GANs are notoriously fickle to train, and it has been observed [1, 19] that they often suffer from mode collapse, in which the generator network learns how to generate samples from a few modes of the data distribution but misses many other modes, even though samples from the missing modes occur throughout the training data. To address this problem, we introduce VEEGAN,¹ a variational principle for estimating implicit probability distributions that avoids mode collapse. While the generator network maps Gaussian random noise to data items, VEEGAN introduces an additional reconstructor network that maps the true data distribution to Gaussian random noise.
We train the generator and reconstructor networks jointly by introducing an implicit variational principle, which encourages the reconstructor network not only to map the data distribution to a Gaussian, but also to approximately reverse the action of the generator. Intuitively, if the reconstructor learns both to map all of the true data to the noise distribution and to be an approximate inverse of the generator network, this will encourage the generator network to map from the noise distribution to the entirety of the true data distribution, thus resolving mode collapse. Unlike other adversarial methods that train reconstructor networks [4, 5, 22], the noise autoencoder dramatically reduces mode collapse. Unlike recent adversarial methods that also make use of a data autoencoder [1, 13, 14], VEEGAN autoencodes noise vectors rather than data items. This is a significant difference, because choosing an autoencoder loss for images is problematic, but for Gaussian noise vectors, an ℓ2 loss is entirely natural. Experimentally, on both synthetic and real-world image data sets, we find that VEEGAN is dramatically less susceptible to mode collapse, and produces higher-quality samples, than other state-of-the-art methods.

2 Background
Implicit probability distributions are specified by a sampling procedure, but do not have a tractable density [3]. Although a natural choice in many settings, implicit distributions have historically been seen as difficult to estimate. However, recent progress in formulating density estimation as a problem of supervised learning has allowed methods from the classification literature to enable implicit model estimation, both in the general case [6, 10] and for deep generative adversarial networks (GANs) in particular [7]. Let {x_i}_{i=1}^N denote the training data, where each x_i ∈ R^D is drawn from an unknown distribution p(x). A GAN is a neural network G_γ that maps representation vectors z ∈ R^K, typically drawn from a standard normal distribution, to data items x ∈ R^D. Because this mapping defines an implicit probability distribution, training is accomplished by introducing a second neural network D_ω, called a discriminator, whose goal is to distinguish generator samples from true data samples. The parameters of these networks are estimated by solving the minimax problem

  max_ω min_γ O_GAN(ω, γ) := E_z[log σ(D_ω(G_γ(z)))] + E_x[log(1 − σ(D_ω(x)))],

where E_z indicates an expectation over the standard normal z, E_x indicates an expectation over the data distribution p(x), and σ denotes the sigmoid function. At the optimum, in the limit of infinite data and arbitrarily powerful networks, we will have D_ω = log q_γ(x)/p(x), where q_γ is the density that is induced by running the network G_γ on normally distributed input, and hence that q_γ = p [7]. Unfortunately, GANs can be difficult and unstable to train [19]. One common pathology that arises in GAN training is mode collapse, which is when samples from q_γ(x) capture only a few of the modes of p(x). An intuition behind why mode collapse occurs is that the only information that the objective function provides about γ is mediated by the discriminator network D_ω. For example, if D_ω is a constant, then O_GAN is constant with respect to γ, and so learning the generator is impossible. When this situation occurs in a localized region of input space, for example, when there is a specific type of image that the generator cannot replicate, this can cause mode collapse.
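To make the minimax objective above concrete, the following is a minimal PyTorch sketch of evaluating O_GAN; the toy network sizes, dimensions, and random data are illustrative placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
K, D = 8, 2  # latent and data dimensions (illustrative)
G = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, D))      # generator G_gamma
D_net = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 1))  # discriminator D_omega

def gan_objective(x_real):
    # O_GAN = E_z[log sigma(D(G(z)))] + E_x[log(1 - sigma(D(x)))];
    # sigma(D(.)) plays the role of the probability that a point is generated,
    # matching D_omega -> log q_gamma(x)/p(x) at the optimum.
    z = torch.randn(x_real.shape[0], K)
    term_gen = torch.log(torch.sigmoid(D_net(G(z))) + 1e-8).mean()
    term_data = torch.log(1 - torch.sigmoid(D_net(x_real)) + 1e-8).mean()
    return term_gen + term_data  # D_omega ascends this value; G_gamma descends it

x = torch.randn(128, D)  # stand-in for samples from p(x)
print(gan_objective(x).item())
```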
¹ VEEGAN is a Variational Encoder Enhancement to Generative Adversarial Networks. https://akashgit.github.io/VEEGAN/

Figure 1: Illustration of how a reconstructor network F_θ can help to detect mode collapse in a deep generative network G_γ. The data distribution is p(x) and the Gaussian is p₀(z). (a) Suppose F_θ is trained to approximately invert G_γ. Then applying F_θ to true data is likely to produce a non-Gaussian distribution, allowing us to detect mode collapse. (b) When F_θ is trained to map the data to a Gaussian distribution, then treating F_θ ∘ G_γ as an autoencoder provides learning signal to correct G_γ. See text for details.

3 Method
The main idea of VEEGAN is to introduce a second network F_θ, which we call the reconstructor network, that is learned both to map the true data distribution p(x) to a Gaussian and to approximately invert the generator network. To understand why this might prevent mode collapse, consider the example in Figure 1. In both columns of the figure, the middle vertical panel represents the data space, where in this example the true distribution p(x) is a mixture of two Gaussians. The bottom panel depicts the input to the generator, which is drawn from a standard normal distribution p₀ = N(0, I), and the top panel depicts the result of applying the reconstructor network to the generated and the true data. The arrows labeled G_γ show the action of the generator. The purple arrows labelled F_θ show the action of the reconstructor on the true data, whereas the green arrows show the action of the reconstructor on data from the generator. In this example, the generator has captured only one of the two modes of p(x). The difference between Figure 1a and 1b is that the reconstructor networks are different. First, let us suppose (Figure 1a) that we have successfully trained F_θ so that it is approximately the inverse of G_γ. As we have assumed mode collapse, however, the training data for the reconstructor network F_θ does not include data items from the "forgotten" mode of p(x); therefore the action of F_θ on data from that mode is ill-specified. This means that F_θ(X), X ∼ p(x), is unlikely to be Gaussian, and we can use this mismatch as an indicator of mode collapse. Conversely, let us suppose (Figure 1b) that F_θ is successful at mapping the true data distribution to a Gaussian. In that case, if G_γ mode collapses, then F_θ will not map all G_γ(z) back to the original z, and the resulting penalty provides us with a strong learning signal for both γ and θ. Therefore, the learning principle for VEEGAN will be to train F_θ to achieve both of these objectives simultaneously. Another way of stating this intuition is that if the same reconstructor network maps both the true data and the generated data to a Gaussian distribution, then the generated data is likely to coincide with the true data. To measure whether F_θ approximately inverts G_γ, we use an autoencoder loss. More precisely, we minimize a loss function, such as the ℓ2 loss, between z ∼ p₀ and F_θ(G_γ(z)). To quantify whether F_θ maps the true data distribution to a Gaussian, we use the cross entropy H(Z, F_θ(X)) between Z and F_θ(X). This boils down to learning γ and θ by minimising the sum of these two objectives, namely

  O_entropy(γ, θ) = E[ ||z − F_θ(G_γ(z))||²₂ ] + H(Z, F_θ(X)).   (1)

While this objective captures the main idea of our paper, it cannot be easily computed and minimised. We next transform it into a computable version and derive theoretical guarantees.
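As a hedged illustration of the first term in Eq. (1), the sketch below evaluates the ℓ2 noise-autoencoder loss with toy MLPs standing in for G_γ and F_θ; the cross-entropy term H(Z, F_θ(X)) is intractable as written and is bounded variationally in Section 3.1.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
K, D = 8, 2
G = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, D))  # generator G_gamma
F = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, K))  # reconstructor F_theta

z = torch.randn(256, K)                         # z ~ p0 = N(0, I)
recon = ((z - F(G(z))) ** 2).sum(dim=1).mean()  # E ||z - F(G(z))||_2^2
print(recon.item())
```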
3.1 Objective Function
Let us denote the distribution of the outputs of the reconstructor network when applied to a fixed data item x by p_θ(z|x), and when applied to all X ∼ p(x) by p_θ(z) = ∫ p_θ(z|x) p(x) dx. The conditional distribution p_θ(z|x) is Gaussian with unit variance and, with a slight abuse of notation, (deterministic) mean function F_θ(x). The entropy term H(Z, F_θ(X)) can thus be written as

  H(Z, F_θ(X)) = −∫ p₀(z) log p_θ(z) dz = −∫ p₀(z) log [ ∫ p(x) p_θ(z|x) dx ] dz.   (2)

This cross entropy is minimized with respect to θ when p_θ(z) = p₀(z) [2]. Unfortunately, the integral on the right-hand side of (2) cannot usually be computed in closed form. We thus introduce a variational distribution q_γ(x|z) and, by Jensen's inequality, we have

  −log p_θ(z) = −log ∫ p_θ(z|x) p(x) (q_γ(x|z)/q_γ(x|z)) dx ≤ −∫ q_γ(x|z) log [ p_θ(z|x) p(x) / q_γ(x|z) ] dx,   (3)

which we use to bound the cross-entropy in (2). In variational inference, strong parametric assumptions are typically made on q_γ. Importantly, we here relax that assumption, instead representing q_γ implicitly as a deep generative model, enabling us to learn very complex distributions. The variational distribution q_γ(x|z) plays exactly the same role as the generator in a GAN, and for that reason we will parameterize q_γ(x|z) as the output of a stochastic neural network G_γ(z). In practice, minimizing this bound is difficult if q_γ is specified implicitly. For instance, it is challenging to train a discriminator network that accurately estimates the unknown likelihood ratio log p(x)/q_γ(x|z), because q_γ(x|z), as a conditional distribution, is much more peaked than the joint distribution p(x), making it too easy for a discriminator to tell the two distributions apart. Intuitively, the discriminator in a GAN works well when it is presented a difficult pair of distributions to distinguish. To circumvent this problem, we write (see supplementary material)

  −∫ p₀(z) log p_θ(z) dz ≤ KL[ q_γ(x|z) p₀(z) || p_θ(z|x) p(x) ] − E[log p₀(z)].   (4)

Here all expectations are taken with respect to the joint distribution p₀(z) q_γ(x|z). Now, moving to the second term in (1), we define the reconstruction penalty as an expectation of the cost of autoencoding noise vectors, that is, E[d(z, F_θ(G_γ(z)))]. The function d denotes a loss function in representation space R^K, such as the ℓ2 loss, and therefore the term is an autoencoder in representation space. To make this link explicit, we expand the expectation, assuming that we choose d to be the ℓ2 loss. This yields E[d(z, F_θ(x))] = ∫ p₀(z) ∫ q_γ(x|z) ||z − F_θ(x)||² dx dz. Unlike a standard autoencoder, however, rather than taking a data item as input and attempting to reconstruct it, we autoencode a representation vector. This makes a substantial difference in the interpretation and performance of the method, as we discuss in Section 4. For example, notice that we do not include a regularization weight on the autoencoder term in (5), because Proposition 1 below says that this is not needed to recover the data distribution. Combining these two ideas, we obtain the final objective function

  O(γ, θ) = KL[ q_γ(x|z) p₀(z) || p_θ(z|x) p(x) ] − E[log p₀(z)] + E[d(z, F_θ(x))].   (5)

Rather than minimizing the intractable O_entropy(γ, θ), our goal in VEEGAN is to minimize the upper bound O with respect to θ and γ. Indeed, if the networks F_θ and G_γ are sufficiently powerful, then if we succeed in globally minimizing O, we can guarantee that q_γ recovers the true data distribution.
This statement is formalized in the following proposition.

Proposition 1. Suppose that there exist parameters θ*, γ* such that O(γ*, θ*) = H[p₀], where H denotes Shannon entropy. Then (γ*, θ*) minimizes O, and further

  p_θ*(z) := ∫ p_θ*(z|x) p(x) dx = p₀(z),  and  q_γ*(x) := ∫ q_γ*(x|z) p₀(z) dz = p(x).

Because neural networks are universal approximators, the conditions in the proposition can be achieved when the networks G and F are sufficiently powerful.

3.2 Learning with Implicit Probability Distributions
This subsection describes how to approximate O when we have implicit representations for q_γ and p_θ rather than explicit densities. In this case, we cannot optimize O directly, because the KL divergence depends on a density ratio which is unknown, both because q_γ is implicit and also because p(x) is unknown. Following [4, 5], we estimate this ratio using a discriminator network D_ω(z, x) which we will train to encourage

  D_ω(z, x) = log [ q_γ(x|z) p₀(z) / ( p_θ(z|x) p(x) ) ].   (6)

This will allow us to estimate O as

  Ô(ω, γ, θ) = (1/N) Σ_{i=1}^N D_ω(z^i, x_g^i) + (1/N) Σ_{i=1}^N d(z^i, x_g^i),   (7)

where (z^i, x_g^i) ∼ p₀(z) q_γ(x|z). In this equation, note that x_g^i is a function of γ; although we suppress this in the notation, we do take this dependency into account in the algorithm. We use an auxiliary objective function to estimate ω. As mentioned earlier, we omit the entropy term −E[log p₀(z)] from Ô as it is constant with respect to all parameters. In principle, any method for density ratio estimation could be used here, for example, see [9, 21]. In this work, we will use the logistic regression loss, much as in other methods for deep adversarial training, such as GANs [7], or for noise contrastive estimation [8]. We will train D_ω to distinguish samples of the joint distribution q_γ(x|z) p₀(z) from samples of p_θ(z|x) p(x). The objective function for this is

  O_LR(ω, γ, θ) = −E_γ[log(σ(D_ω(z, x)))] − E_θ[log(1 − σ(D_ω(z, x)))],   (8)

where E_γ denotes expectation with respect to the joint distribution q_γ(x|z) p₀(z) and E_θ with respect to p_θ(z|x) p(x). We write Ô_LR to indicate the Monte Carlo estimate of O_LR. Our learning algorithm optimizes this pair of equations with respect to θ, γ, ω using stochastic gradient descent. In particular, the algorithm aims to find a simultaneous solution to min_ω Ô_LR(ω, γ, θ) and min_{θ,γ} Ô(ω, θ, γ). This training procedure is described in Algorithm 1.

Algorithm 1 VEEGAN training
1: while not converged do
2:   for i ∈ {1, …, N} do
3:     Sample z^i ∼ p₀(z)
4:     Sample x_g^i ∼ q_γ(x | z^i)
5:     Sample x^i ∼ p(x)
6:     Sample z_g^i ∼ p_θ(z_g | x^i)
7:   g_ω ← −∇_ω (1/N) Σ_i [ log σ(D_ω(z^i, x_g^i)) + log(1 − σ(D_ω(z_g^i, x^i))) ]   ▷ Compute ∇_ω Ô_LR
8:   g_θ ← ∇_θ (1/N) Σ_i d(z^i, x_g^i)   ▷ Compute ∇_θ Ô
9:   g_γ ← ∇_γ [ (1/N) Σ_i D_ω(z^i, x_g^i) + (1/N) Σ_i d(z^i, x_g^i) ]   ▷ Compute ∇_γ Ô
10:  ω ← ω − η g_ω   ▷ Perform SGD updates for ω, θ and γ
11:  θ ← θ − η g_θ
12:  γ ← γ − η g_γ

When this procedure converges, we will have that ω* = arg min_ω O_LR(ω, γ*, θ*), which means that D_ω* has converged to the likelihood ratio (6). Therefore (γ*, θ*) have also converged to a minimum of O. We also found that pre-training the reconstructor network on samples from p(x) helps in some cases.
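The following PyTorch sketch mirrors the update pattern of Algorithm 1 under illustrative assumptions: toy MLPs, ℓ2 loss for d, plain SGD, and a unit-variance Gaussian reconstructor whose mean is F_θ(x). It is a sketch of the structure of the updates, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
K, D, N, LR = 8, 2, 128, 1e-3
G = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, D))          # q_gamma(x|z)
F = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, K))          # mean of p_theta(z|x)
D_net = nn.Sequential(nn.Linear(K + D, 64), nn.ReLU(), nn.Linear(64, 1))  # D_omega(z, x)
opt_d = torch.optim.SGD(D_net.parameters(), lr=LR)
opt_gf = torch.optim.SGD(list(G.parameters()) + list(F.parameters()), lr=LR)

def veegan_step(x_real):
    # --- discriminator update (step 7): logistic-regression loss of Eq. (8) ---
    z = torch.randn(N, K)                            # z^i ~ p0
    x_g = G(z).detach()                              # x_g^i ~ q_gamma(x|z^i)
    z_g = (F(x_real) + torch.randn(N, K)).detach()   # z_g^i ~ p_theta(z|x^i)
    d_fake = D_net(torch.cat([z, x_g], dim=1))
    d_real = D_net(torch.cat([z_g, x_real], dim=1))
    loss_d = -(torch.log(torch.sigmoid(d_fake) + 1e-8).mean()
               + torch.log(1 - torch.sigmoid(d_real) + 1e-8).mean())
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator/reconstructor update (steps 8-9): O-hat of Eq. (7) ---
    z = torch.randn(N, K)
    x_g = G(z)
    loss_gf = (D_net(torch.cat([z, x_g], dim=1)).mean()
               + ((z - F(x_g)) ** 2).sum(dim=1).mean())
    opt_gf.zero_grad(); loss_gf.backward(); opt_gf.step()

veegan_step(torch.randn(N, D))
```

Combining the D_ω term and the reconstruction term in one loss reproduces the split in steps 8 and 9: the D_ω term does not depend on θ, so ∇_θ sees only d(z, x_g), while ∇_γ sees both.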
4 Relationships to Other Methods
An enormous amount of attention has been devoted recently to improved methods for GAN training, and we compare ourselves to the most closely related work in detail.

BiGAN/Adversarially Learned Inference. BiGAN [4] and Adversarially Learned Inference (ALI) [5] are two essentially identical recent adversarial methods for learning both a deep generative network G_γ and a reconstructor network F_θ. Likelihood-free variational inference (LFVI) [22] extends this idea to a hierarchical Bayesian setting. Like VEEGAN, all of these methods also use a discriminator D_ω(z, x) on the joint (z, x) space. However, the VEEGAN objective function O(γ, θ) provides significant benefits over the logistic regression loss over θ and γ that is used in ALI/BiGAN, or the KL divergence used in LFVI. In all of these methods, just as in vanilla GANs, the objective function depends on γ and θ only via the output D_ω(z, x) of the discriminator; therefore, if there is a mode of data space in which D_ω is insensitive to changes in γ and θ, there will be mode collapse. In VEEGAN, by contrast, the reconstruction term does not depend on the discriminator, and so can provide learning signal to γ or θ even when the discriminator is constant. We will show in Section 5 that indeed VEEGAN is dramatically less prone to mode collapse than ALI.

InfoGAN. While motivated differently, by the goal of obtaining a disentangled representation of the data, InfoGAN also uses a latent-code reconstruction penalty in its cost function. But unlike in VEEGAN, only a part of the latent code is reconstructed in InfoGAN. Thus, InfoGAN is similar to VEEGAN in that it also includes an autoencoder over the latent codes, but the key difference is that InfoGAN does not also train the reconstructor network on the true data distribution. We suggest that this may be the reason that InfoGAN was observed to require some of the same stabilization tricks as vanilla GANs, which are not required for VEEGAN.

Adversarial Methods for Autoencoders. A number of other recent methods have been proposed that combine adversarial methods and autoencoders, whether by explicitly regularizing the GAN loss with an autoencoder loss [1, 13], or by alternating optimization between the two losses [14]. In all of these methods, the autoencoder is over images, i.e., they incorporate a loss function of the form λ d(x, G_γ(F_θ(x))), where d is a loss function over images, such as a pixel-wise ℓ2 loss, and λ is a regularization constant. Similarly, variational autoencoders [12, 18] also autoencode images rather than noise vectors. Finally, adversarial variational Bayes (AVB) [15] is an adaptation of VAEs to the case where the posterior distribution p_θ(z|x) is implicit, but the data distribution q_γ(x|z) must be explicit, unlike in our work. Because these methods autoencode data points, they share a crucial disadvantage. Choosing a good loss function d over natural images can be problematic. For example, it has been commonly observed that minimizing an ℓ2 reconstruction loss on images can lead to blurry images. Indeed, if choosing a loss function over images were easy, we could simply train an autoencoder and dispense with adversarial learning entirely. By contrast, in VEEGAN we autoencode the noise vectors z, and choosing a good loss function for a noise autoencoder is easy. The noise vectors z are drawn from a standard normal distribution; using an ℓ2 loss on z is entirely natural, and does not, as we will show in Section 5, result in blurry images compared to purely adversarial methods.

5 Experiments
Quantitative evaluation of GANs is problematic because implicit distributions do not have a tractable likelihood term to quantify generative accuracy.
Quantifying mode collapse is also not straightforward, except in the case of synthetic data with known modes. For this reason, several indirect metrics have recently been proposed to evaluate GANs specifically for their mode collapsing behavior [1, 16]. However, none of these metrics are reliable on their own, and therefore we need to compare across a number of different methods. Therefore in this section we evaluate VEEGAN on several synthetic and real datasets and compare its performance against vanilla GANs [7], Unrolled GAN [16] and ALI [5] on five different metrics. Our results strongly suggest that VEEGAN does indeed resolve mode collapse in GANs to a large extent. Generally, we found that VEEGAN performed well with default hyperparameter values, so we did not tune these. Full details are provided in the supplementary material.

5.1 Synthetic Dataset
Mode collapse can be accurately measured on synthetic datasets, since the true distribution and its modes are known. In this section we compare all four competing methods on three synthetic datasets of increasing difficulty: a mixture of eight 2D Gaussian distributions arranged in a ring, a mixture of twenty-five 2D Gaussian distributions arranged in a grid,² and a mixture of ten 700-dimensional Gaussian distributions embedded in a 1200-dimensional space. This mixture arrangement was chosen to mimic the higher dimensional manifolds of natural images. All of the mixture components were isotropic Gaussians.

² This experiment follows [5]. Please note that for certain settings of parameters, vanilla GAN can also recover all 25 modes, as was pointed out to us by Paulina Grnarova.

Table 1: Sample quality and degree of mode collapse on mixtures of Gaussians. VEEGAN consistently captures the highest number of modes and produces better samples.

                2D Ring                          2D Grid                           1200D Synthetic
                Modes (Max 8)  % High Quality    Modes (Max 25)  % High Quality    Modes (Max 10)  % High Quality
GAN             1              99.3              3.3             0.5               1.6             2.0
ALI             2.8            0.13              15.8            1.6               3               5.4
Unrolled GAN    7.6            35.6              23.6            16                0               0.0
VEEGAN          8              52.9              24.6            40                5.5             28.29

For a fair comparison of the different learning methods for GANs, we use the same network architectures for the reconstructors and the generators for all methods, namely, fully-connected MLPs with two hidden layers. For the discriminator we use a two-layer MLP without dropout or normalization layers. The VEEGAN method works for both deterministic and stochastic generator networks. To allow for the generator to be a stochastic map, we add an extra dimension of noise to the generator input that is not reconstructed. To quantify the mode collapsing behavior we report two metrics: we sample points from the generator network, and count a sample as high quality if it is within three standard deviations of the nearest mode, for the 2D datasets, or within 10 standard deviations of the nearest mode, for the 1200D dataset. Then, we report the number of modes captured as the number of mixture components whose mean is nearest to at least one high quality sample. We also report the percentage of high quality samples as a measure of sample quality. We generate 2500 samples from each trained model and average the numbers over five runs. For the unrolled GAN, we set the number of unrolling steps to five, as suggested in the authors' reference implementation. As shown in Table 1, VEEGAN captures the greatest number of modes on all the synthetic datasets, while consistently generating higher quality samples.
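A minimal NumPy sketch of the two metrics just described (modes captured and percentage of high-quality samples) follows; the ring radius and noise level are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def mode_metrics(samples, means, sigma, n_std=3.0):
    # distance of each sample to every mixture mean
    d = np.linalg.norm(samples[:, None, :] - means[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    high_quality = d[np.arange(len(samples)), nearest] <= n_std * sigma
    modes_captured = np.unique(nearest[high_quality]).size
    return modes_captured, 100.0 * high_quality.mean()

# eight Gaussians arranged on a unit ring (illustrative parameters)
rng = np.random.default_rng(0)
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
means = np.stack([np.cos(angles), np.sin(angles)], axis=1)
samples = means[rng.integers(8, size=2500)] + 0.05 * rng.standard_normal((2500, 2))
print(mode_metrics(samples, means, sigma=0.05))  # ideally (8, ~99.7)
```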
This is visually apparent in Figure 2, which plots the generator distributions for each method; the generators learned by VEEGAN are sharper and closer to the true distribution. This figure also shows why it is important to measure sample quality and mode collapse simultaneously, as either alone can be misleading. For instance, the GAN on the 2D ring has 99.3% sample quality, but this is simply because the GAN collapses all of its samples onto one mode (Figure 2b). At the other extreme, the unrolled GAN on the 2D grid captures almost all the modes in the true distribution, but this is simply because it generates highly dispersed samples (Figure 2i) that do not accurately represent the true distribution, hence the low sample quality. All methods had approximately the same running time, except for unrolled GAN, which is a few orders of magnitude slower due to the unrolling overhead.

5.2 Stacked MNIST
Following [16], we evaluate our methods on the stacked MNIST dataset, a variant of the MNIST data specifically designed to increase the number of discrete modes. The data is synthesized by stacking three randomly sampled MNIST digits along the color channel, resulting in a 28x28x3 image. We now expect 1000 modes in this data set, corresponding to the number of possible triples of digits. Again, to focus the evaluation on the difference in the learning algorithms, we use the same generator architecture for all methods. In particular, the generator architecture is an off-the-shelf standard implementation³ of DCGAN [17]. For Unrolled GAN, we used a standard implementation of the DCGAN discriminator network. For ALI and VEEGAN, the discriminator architecture is described in the supplementary material. For the reconstructor in ALI and VEEGAN, we use a simple two-layer MLP without any regularization layers.

³ https://github.com/carpedm20/DCGAN-tensorflow

Table 2: Degree of mode collapse, measured by modes captured and the inference-via-optimization measure (IvOM), and sample quality (as measured by KL) on Stacked-MNIST and CIFAR. VEEGAN captures the most modes and also achieves the highest quality.

              Stacked-MNIST                 CIFAR-10
              Modes (Max 1000)  KL          IvOM
DCGAN         99                3.4         0.00844 ± 0.002
ALI           16                5.4         0.0067 ± 0.004
Unrolled GAN  48.7              4.32        0.013 ± 0.0009
VEEGAN        150               2.95        0.0068 ± 0.0001

Finally, for VEEGAN we pretrain the reconstructor by taking a few stochastic gradient steps with respect to θ before running Algorithm 1. For all methods other than VEEGAN, we use the enhanced generator loss function suggested in [7], since we were not able to get sufficient learning signals for the generator without it. VEEGAN did not require this adjustment for successful training. As the true locations of the modes in this data are unknown, the number of modes is estimated using a trained classifier, as described originally in [1]. We used a total of 26000 samples for all the models, and the results are averaged over five runs. As a measure of quality, following [16] again, we also report the KL divergence between the generator distribution and the data distribution. As reported in Table 2, VEEGAN not only captures the most modes, it also consistently matches the data distribution more closely than any other method. Generated samples from each of the models are shown in the supplementary material.
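The Stacked-MNIST construction itself is simple; a hedged sketch is below, with random arrays standing in for the real MNIST images.

```python
import numpy as np

def stack_mnist(images, n_samples, rng=np.random.default_rng(0)):
    # images: (N, 28, 28); returns (n_samples, 28, 28, 3) where three
    # independently sampled digits are stacked along the color channel,
    # giving up to 10^3 = 1000 discrete modes
    idx = rng.integers(len(images), size=(n_samples, 3))
    return np.stack([images[idx[:, c]] for c in range(3)], axis=-1)

fake_mnist = np.random.rand(1000, 28, 28)  # stand-in for real MNIST images
print(stack_mnist(fake_mnist, 5).shape)    # (5, 28, 28, 3)
```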
5.3 CIFAR
Finally, we evaluate the learning methods on the CIFAR-10 dataset, a well-studied and diverse dataset of natural images. We use the same discriminator, generator, and reconstructor architectures as in the previous section. However, the previous mode-collapse metric is inappropriate here, owing to CIFAR's greater diversity. Even within one of the 10 classes of CIFAR, the intra-group diversity is very high compared to any of the 10 classes of MNIST. Therefore, for CIFAR it is inappropriate to assume, as the metrics of the previous subsection do, that each labelled class corresponds to a single mode of the data distribution. Instead, we use a metric introduced by [16], which we will call the inference-via-optimization metric (IvOM). The idea behind this metric is to compare real images from the test set to the nearest generated image; if the generator suffers from mode collapse, then there will be some images for which this distance is large. To quantify this, we sample a real image x from the test set and find the closest image that the GAN is capable of generating, i.e., by optimizing the ℓ2 loss between x and the generated image G_γ(z) with respect to z. If a method consistently attains low MSE, then it can be assumed to be capturing more modes than the ones which attain a higher MSE. As before, this metric can still be fooled by highly dispersed generator distributions, and the ℓ2 metric may also favour generators that produce blurry images. Therefore we will also evaluate sample quality visually. All numerical results have been averaged over five runs. Finally, to evaluate whether the noise autoencoder in VEEGAN is indeed superior to a more traditional data autoencoder, we compare to a variant, which we call VEEGAN+DAE, that uses a data autoencoder instead, by simply replacing d(z, F_θ(x)) in O with a data loss ||x − G_γ(F_θ(x))||²₂.

As shown in Table 2, ALI and VEEGAN achieve the best IvOM. Qualitatively, however, generated samples from VEEGAN look better than those from the other methods. In particular, the samples from VEEGAN+DAE are meaningless. Generated samples from VEEGAN are shown in Figure 3b; samples from the other methods are shown in the supplementary material. As another illustration of this, Figure 3 illustrates the IvOM metric by showing the nearest neighbors to real images that each of the GANs were able to generate; in general, the nearest neighbors will be more semantically meaningful than randomly generated images. We omit VEEGAN+DAE from this comparison because it did not produce plausible images. Across the methods, we see in Figure 3 that VEEGAN captures small details, such as the face of the poodle, that other methods miss.

Figure 2: Density plots of the true data and generator distributions from different GAN methods trained on mixtures of Gaussians arranged in a ring (top: (a) True Data, (b) GAN, (c) ALI, (d) Unrolled, (e) VEEGAN) or a grid (bottom: (f) True Data, (g) GAN, (h) ALI, (i) Unrolled, (j) VEEGAN).

Figure 3: Sample images from GANs trained on CIFAR-10. Best viewed magnified on screen. (a) Generated samples nearest to real images from CIFAR-10; in each of the two panels, the first column shows real images, followed by the nearest images from DCGAN, ALI, Unrolled GAN, and VEEGAN, respectively. (b) Random samples from the generator of VEEGAN trained on CIFAR-10.
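A minimal sketch of the IvOM computation for a single test image follows, assuming a toy generator and Adam on the latent z; the step count and learning rate are illustrative choices.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
K, D = 8, 2
G = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, D))  # stands in for a trained generator

def ivom(x, steps=200, lr=0.05):
    # minimize ||x - G(z)||^2 over the latent z by gradient descent on z only
    z = torch.randn(1, K, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ((G(z) - x) ** 2).sum().item()  # final reconstruction error

print(ivom(torch.randn(1, D)))
```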
6 Conclusion
We have presented VEEGAN, a new training principle for GANs that combines a KL divergence in the joint space of representation and data points with an autoencoder over the representation space, motivated by a variational argument. Experimental results on synthetic data and real images show that our approach is much more effective than several state-of-the-art GAN methods at avoiding mode collapse while still generating good quality samples.

Acknowledgement
We thank Martin Arjovsky, Nicolas Collignon, Luke Metz, Casper Kaae Sønderby, Lucas Theis, Soumith Chintala, Stanisław Jastrzębski, Harrison Edwards, Amos Storkey and Paulina Grnarova for their helpful comments. We would like to specially thank Ferenc Huszár for insightful discussions and feedback.

References
[1] Che, Tong, Li, Yanran, Jacob, Athul Paul, Bengio, Yoshua, and Li, Wenjie. Mode regularized generative adversarial networks. In International Conference on Learning Representations (ICLR), volume abs/1612.02136, 2017.
[2] Cover, Thomas M. and Thomas, Joy A. Elements of information theory. John Wiley & Sons, 2012.
[3] Diggle, Peter J. and Gratton, Richard J. Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society, Series B (Methodological), 46(2):193–227, 1984. ISSN 00359246. URL http://www.jstor.org/stable/2345504.
[4] Donahue, Jeff, Krähenbühl, Philipp, and Darrell, Trevor. Adversarial feature learning. In International Conference on Learning Representations (ICLR), 2017.
[5] Dumoulin, Vincent, Belghazi, Ishmael, Poole, Ben, Mastropietro, Olivier, Lamb, Alex, Arjovsky, Martin, and Courville, Aaron. Adversarially learned inference. In International Conference on Learning Representations (ICLR), 2017.
[6] Dutta, Ritabrata, Corander, Jukka, Kaski, Samuel, and Gutmann, Michael U. Likelihood-free inference by ratio estimation. 2016.
[7] Goodfellow, Ian J., Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron C., and Bengio, Yoshua. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
[8] Gutmann, Michael U. and Hyvärinen, Aapo. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13:307–361, 2012.
[9] Gutmann, M.U. and Hirayama, J. Bregman divergence as general framework to estimate unnormalized statistical models. In Proc. Conf. on Uncertainty in Artificial Intelligence (UAI), pp. 283–290, Corvallis, Oregon, 2011. AUAI Press.
[10] Gutmann, M.U., Dutta, R., Kaski, S., and Corander, J. Likelihood-free inference via classification. arXiv:1407.4981, 2014.
[11] Kingma, Diederik P. and Welling, Max. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[12] Kingma, D.P. and Welling, M. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
[13] Larsen, Anders Boesen Lindbo, Sønderby, Søren Kaae, Larochelle, Hugo, and Winther, Ole. Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning (ICML), 2016.
[14] Makhzani, Alireza, Shlens, Jonathon, Jaitly, Navdeep, and Goodfellow, Ian J. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015. URL http://arxiv.org/abs/1511.05644.
[15] Mescheder, Lars M., Nowozin, Sebastian, and Geiger, Andreas. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. arXiv, abs/1701.04722, 2017. URL http://arxiv.org/abs/1701.04722.
[16] Metz, Luke, Poole, Ben, Pfau, David, and Sohl-Dickstein, Jascha. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016.
[17] Radford, Alec, Metz, Luke, and Chintala, Soumith.
Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[18] Rezende, Danilo Jimenez, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278–1286, 2014.
[19] Salimans, Tim, Goodfellow, Ian J., Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi. Improved techniques for training GANs. CoRR, abs/1606.03498, 2016. URL http://arxiv.org/abs/1606.03498.
[20] Sønderby, Casper Kaae, Caballero, Jose, Theis, Lucas, Shi, Wenzhe, and Huszár, Ferenc. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
[21] Sugiyama, M., Suzuki, T., and Kanamori, T. Density ratio estimation in machine learning. Cambridge University Press, 2012.
[22] Tran, D., Ranganath, R., and Blei, D. M. Deep and hierarchical implicit models. arXiv e-prints, 2017.
[23] Zhu, Jun-Yan, Park, Taesung, Isola, Phillip, and Efros, Alexei A. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
Sparse Embedded k-Means Clustering
Weiwei Liu*, Xiaobo Shen*, Ivor W. Tsang
School of Computer Science and Engineering, The University of New South Wales
School of Computer Science and Engineering, Nanyang Technological University
Centre for Artificial Intelligence, University of Technology Sydney
{liuweiwei863,njust.shenxiaobo}@gmail.com, [email protected]
* The first two authors make equal contributions.

Abstract
The k-means clustering algorithm is a ubiquitous tool in data mining and machine learning that shows promising performance. However, its high computational cost has hindered its applications in broad domains. Researchers have successfully addressed these obstacles with dimensionality reduction methods. Recently, [1] develop a state-of-the-art random projection (RP) method for faster k-means clustering. Their method delivers many improvements over other dimensionality reduction methods. For example, compared to the advanced singular value decomposition based feature extraction approach, [1] reduce the running time by a factor of min{n, d}ε² log(d)/k for a data matrix X ∈ R^{n×d} with n data points and d features, while losing only a factor of one in approximation accuracy. Unfortunately, they still require O(ndk/(ε² log(d))) for matrix multiplication, and this cost will be prohibitive for large values of n and d. To break this bottleneck, we carefully build a sparse embedded k-means clustering algorithm which requires O(nnz(X)) (nnz(X) denotes the number of non-zeros in X) for fast matrix multiplication. Moreover, our proposed algorithm improves on [1]'s results for approximation accuracy by a factor of one. Our empirical studies corroborate our theoretical findings, and demonstrate that our approach is able to significantly accelerate k-means clustering, while achieving satisfactory clustering performance.

1 Introduction
Due to its simplicity and flexibility, the k-means clustering algorithm [2, 3, 4] has been identified as one of the top 10 data mining algorithms. It has shown promising results in various real-world applications, such as bioinformatics, image processing, text mining and image analysis. Recently, the dimensionality and scale of data continues to grow in many applications, such as biological, finance, computer vision and web applications [5, 6, 7, 8, 9], which poses a serious challenge in designing efficient and accurate algorithmic solutions for k-means clustering. To address these obstacles, one prevalent technique is dimensionality reduction, which embeds the original features into a low dimensional space before performing k-means clustering. Dimensionality reduction encompasses two kinds of approaches: 1) feature selection (FS), which embeds the data into a low dimensional space by selecting the actual dimensions of the data; and 2) feature extraction (FE), which finds an embedding by constructing new artificial features that are, for example, linear combinations of the original features. Laplacian scores [10] and Fisher scores [11] are two basic feature selection methods. However, they lack a provable guarantee. [12] first propose a provable singular value decomposition (SVD) feature selection method. It uses the SVD to find O(k log(k/ε)/ε²) actual features such that with constant probability the clustering structure
is preserved within a factor of 2 + ε.

Table 1: Dimension reduction methods for k-means clustering.² The third column corresponds to the number of selected or extracted features. The fourth column corresponds to the time complexity of each dimension reduction method. The last column corresponds to the approximation accuracy. N/A denotes not available. nnz(X) denotes the number of non-zeros in X. ε and δ represent the gap to optimality and the confidence level, respectively. Sparse embedding is abbreviated to SE.

Method      Description  Dimensions                            Time                        Accuracy
[13]        SVD-FE       k                                     O(nd min{n, d})             2
Folklore    RP-FE        O(log(n)/ε²)                          O(nd log(n)/(ε² log(d)))    1+ε
[12]        SVD-FS       O(k log(k/ε)/ε²)                      O(nd min{n, d})             2+ε
[14]        SVD-FE       O(k/ε²)                               O(nd min{n, d})             1+ε
[1]         RP-FE        O(k/ε²)                               O(ndk/(ε² log(d)))          2+ε
[15]        RP-FE        O(log(n)/ε²)                          O(nd log(d) + d log(n))     N/A
This paper  SE-FE        O(max{(k + log(1/δ))/ε², 6/(ε²δ)})    O(nnz(X))                   1+ε

² Refer to Section 2.1 for the notations.

[13] propose a popular feature extraction approach, where k artificial features are constructed using the SVD such that the clustering structure is preserved within a factor of two. Recently, corollary 4.5 in [14]'s study improves [13]'s results, by claiming that O(k/ε²) dimensions are sufficient for preserving 1 + ε accuracy. Because SVD is computationally expensive, we focus on another important feature extraction method that randomly projects the data into a low dimensional space. [1] develop a state-of-the-art random projection (RP) method, which is based on random sign matrices. Compared to SVD-based feature extraction approaches [14], [1] reduce the running time by a factor of min{n, d}ε² log(d)/k, while losing only a factor of one in approximation accuracy. They also improve the results of the folklore RP method by a factor of log(n)/k in terms of the number of embedded dimensions and the running time, while losing a factor of one in approximation accuracy. Compared to SVD-based feature selection methods, [1] reduce the embedded dimension by a factor of log(k/ε) and the running time by a factor of min{n, d}ε² log(d)/k, respectively. Unfortunately, they still require O(ndk/(ε² log(d))) for matrix multiplication, and this cost will be prohibitive for large values of n and d. This paper carefully constructs a sparse matrix for the RP method that only requires O(nnz(X)) for fast matrix multiplication. Our algorithm is significantly faster than other dimensionality reduction methods, especially when nnz(X) << nd. Theoretically, we show a provable guarantee for our algorithm. Given d̃ = O(max{(k + log(1/δ))/ε², 6/(ε²δ)}), with probability at least 1 − O(δ), our algorithm preserves the clustering structure within a factor of 1 + ε, improving on the results of [12] and [1] by a factor of one for approximation accuracy. These results are summarized in Table 1. Experiments on three real-world data sets show that our algorithm outperforms other dimension reduction methods. The results verify our theoretical analysis. We organize this paper as follows. Section 2 introduces the concept of ε-approximation k-means clustering and our proposed sparse embedded k-means clustering algorithm. Section 3 analyzes the provable guarantee for our algorithm, and experimental results are presented in Section 4. The last section provides our conclusions.

2 Sparse Embedded k-Means Clustering
2.1 ε-Approximation k-Means Clustering
Given X ∈ R^{n×d} with n data points and d features, we denote the transpose of a vector/matrix by the superscript ′ and logarithms to base 2 by log. Let r = rank(X). By using the singular value decomposition (SVD), we have X = UΣV′, where Σ ∈ R^{r×r} is a positive diagonal matrix containing the singular values of X in decreasing order (σ₁ ≥ σ₂ ≥ … ≥ σ_r), and
U ∈ R^{n×r} and V ∈ R^{d×r} contain the orthogonal left and right singular vectors of X. Let U_k and V_k represent U and V with all but their first k columns zeroed out, respectively, and let Σ_k be Σ with all but its largest k singular values zeroed out. Assuming k ≤ r, [16] have already shown that X_k = U_kΣ_kV_k′ is the optimal rank-k approximation to X for any unitarily invariant norm, including the Frobenius and spectral norms. The pseudoinverse of X is given by X⁺ = VΣ⁻¹U′. Let X_{r|k} = X − X_k. I_n denotes the n × n identity matrix. Let ||X||_F be the Frobenius norm of matrix X. For concision, ||A||₂ represents the spectral norm of A if A is a matrix and the Euclidean norm of A if A is a vector. Let nnz(X) denote the number of non-zeros in X.

The task of k-means clustering is to partition n data points in d dimensions into k clusters. Let μ_i be the centroid of the vectors in cluster i and c(x_i) be the cluster that x_i is assigned to. Let D ∈ R^{n×k} be the indicator matrix which has exactly one non-zero element per row, denoting cluster membership: the i-th data point belongs to the j-th cluster if and only if D_{ij} = 1/√(z_j), where z_j denotes the number of data points in cluster j. Note that D′D = I_k and the i-th row of DD′X is the centroid of x_i's assigned cluster. Thus, we have Σ_{i=1}^n ||x_i − μ_{c(x_i)}||²₂ = ||X − DD′X||²_F. We formally define the k-means clustering task as follows, which is also studied in [12] and [1].

Definition 1 (k-Means Clustering). Given X ∈ R^{n×d} and a positive integer k denoting the number of clusters, let D be the set of all n × k indicator matrices D. The task of k-means clustering is to solve

  min_{D ∈ D} ||X − DD′X||²_F.   (1)

To accelerate the optimization of problem (1), we aim to find an ε-approximate solution for problem (1) by optimizing D (either exactly or approximately) over an embedded matrix X̃ ∈ R^{n×d̃} with d̃ < d. To measure the quality of approximation, we first define the ε-approximation embedded matrix.

Definition 2 (ε-Approximation Embedded Matrix). Given 0 ≤ ε < 1 and a non-negative constant c̃, X̃ ∈ R^{n×d̃} with d̃ < d is an ε-approximation embedded matrix for X if, for every D ∈ D,

  (1 − ε)||X − DD′X||²_F ≤ ||X̃ − DD′X̃||²_F + c̃ ≤ (1 + ε)||X − DD′X||²_F.   (2)

We show that an ε-approximation embedded matrix is sufficient for approximately optimizing problem (1):

Lemma 1 (ε-Approximation k-Means Clustering). Given X ∈ R^{n×d} and the set D of all n × k indicator matrices D, let D* = arg min_{D ∈ D} ||X − DD′X||²_F. Given X̃ ∈ R^{n×d̃} with d̃ < d, let D̃* = arg min_{D ∈ D} ||X̃ − DD′X̃||²_F. If X̃ is an ε′-approximation embedded matrix for X and ε = 2ε′/(1 − ε′), then for any γ ≥ 1, if ||X̃ − D̃D̃′X̃||²_F ≤ γ||X̃ − D̃*D̃*′X̃||²_F, we have

  ||X − D̃D̃′X||²_F ≤ (1 + ε)γ||X − D*D*′X||²_F.

Proof. By definition, we have ||X̃ − D̃*D̃*′X̃||²_F ≤ ||X̃ − D*D*′X̃||²_F, and thus

  ||X̃ − D̃D̃′X̃||²_F ≤ γ||X̃ − D*D*′X̃||²_F.   (3)

Since X̃ is an ε′-approximation embedded matrix for X, we have

  ||X̃ − D*D*′X̃||²_F + c̃ ≤ (1 + ε′)||X − D*D*′X||²_F  and  ||X̃ − D̃D̃′X̃||²_F + c̃ ≥ (1 − ε′)||X − D̃D̃′X||²_F.   (4)

Combining Eq.(3) and Eq.(4), and using γ ≥ 1 together with c̃ ≥ 0, we obtain

  (1 − ε′)||X − D̃D̃′X||²_F − c̃ ≤ ||X̃ − D̃D̃′X̃||²_F ≤ γ||X̃ − D*D*′X̃||²_F ≤ (1 + ε′)γ||X − D*D*′X||²_F − c̃.   (5)

Eq.(5) implies that

  ||X − D̃D̃′X||²_F ≤ ((1 + ε′)/(1 − ε′))γ||X − D*D*′X||²_F = (1 + ε)γ||X − D*D*′X||²_F.   (6)

Remark. Lemma 1 implies that if D̃ is an optimal solution for X̃, then it also preserves the ε-approximation for X. The parameter γ allows D̃ to be approximately globally optimal for X̃.
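As a quick illustration of Definition 1's matrix form, the sketch below builds the scaled indicator matrix D from a label vector and evaluates ||X − DD′X||²_F, which equals the usual sum of squared distances to the cluster centroids; the data and labels here are synthetic placeholders.

```python
import numpy as np

def kmeans_cost(X, labels, k):
    # scaled indicator matrix: D_ij = 1/sqrt(z_j) iff point i is in cluster j
    n = X.shape[0]
    counts = np.bincount(labels, minlength=k)
    D = np.zeros((n, k))
    D[np.arange(n), labels] = 1.0 / np.sqrt(counts[labels])
    # DD'X replaces each row by its cluster centroid; norm is Frobenius
    return np.linalg.norm(X - D @ (D.T @ X)) ** 2

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
labels = rng.integers(3, size=100)
print(kmeans_cost(X, labels, 3))
```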
2.2 Sparse Embedding
[1] construct a random embedded matrix for fast k-means clustering by post-multiplying X with a d × d̃ random matrix whose entries are +1/√(d̃) or −1/√(d̃) with equal probability. However, this method requires O(ndk/(ε² log(d))) for matrix multiplication, and this cost will be prohibitive for large values of n and d. To break this bottleneck, Algorithm 1 demonstrates our sparse embedded k-means clustering, which requires O(nnz(X)) for fast matrix multiplication. Our algorithm is significantly faster than other dimensionality reduction methods, especially when nnz(X) << nd. For an index i taking values in the set {1, …, n}, we write i ∈ [n].

Algorithm 1 Sparse Embedded k-Means Clustering
Input: X ∈ R^{n×d}; number of clusters k.
Output: ε-approximate solution for problem (1).
1: Set d̃ = O(max{(k + log(1/δ))/ε², 6/(ε²δ)}).
2: Build a random map h so that for any i ∈ [d], h(i) = j for j ∈ [d̃] with probability 1/d̃.
3: Construct the matrix Φ ∈ {0, 1}^{d×d̃} with Φ_{i,h(i)} = 1 and all remaining entries 0.
4: Construct the matrix Q ∈ R^{d×d}, a random diagonal matrix whose entries are i.i.d. Rademacher variables.
5: Compute the product X̃ = XQΦ and run exact or approximate k-means algorithms on X̃.

Next, we state our main theorem, which shows that XQΦ is an ε-approximation embedded matrix for X.

Theorem 1. Let Φ and Q be constructed as in Algorithm 1 and R = (QΦ)′ ∈ R^{d̃×d}. Given d̃ = O(max{(k + log(1/δ))/ε², 6/(ε²δ)}), for any X ∈ R^{n×d}, with probability at least 1 − O(δ), XR′ is an ε-approximation embedded matrix for X.
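A minimal SciPy sketch of the embedding X̃ = XQΦ from Algorithm 1 is given below. Building QΦ as a d × d̃ sparse matrix with exactly one ±1 entry per row makes the product touch each nonzero of X once, for O(nnz(X)) time overall; setting d̃ directly, rather than via the theorem's constants, is an illustrative simplification.

```python
import numpy as np
import scipy.sparse as sp

def sparse_embed(X, d_tilde, rng=np.random.default_rng(0)):
    d = X.shape[1]
    h = rng.integers(d_tilde, size=d)     # random map h: [d] -> [d~]
    q = rng.choice([-1.0, 1.0], size=d)   # i.i.d. Rademacher diagonal Q
    # Q*Phi has one nonzero per row: entry q[i] at column h(i)
    QPhi = sp.csr_matrix((q, (np.arange(d), h)), shape=(d, d_tilde))
    return X @ QPhi                       # X~ = X Q Phi

X = sp.random(1000, 5000, density=0.001, format="csr", random_state=0)
print(sparse_embed(X, d_tilde=50).shape)  # (1000, 50)
```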
3 Proofs

Let $Z = I_n - DD'$ and let $\mathrm{tr}$ denote the trace. Eq. (2) can then be formulated as $(1-\epsilon)\,\mathrm{tr}(ZXX'Z) \le \mathrm{tr}(Z\tilde{X}\tilde{X}'Z) + c \le (1+\epsilon)\,\mathrm{tr}(ZXX'Z)$. Accordingly, we try to approximate $XX'$ with $\tilde{X}\tilde{X}'$. To prove our main theorem, we write $\tilde{X} = XR'$, and our goal is to show that $\mathrm{tr}(ZXX'Z)$ can be approximated by $\mathrm{tr}(ZXR'RX'Z)$. Lemma 2 provides conditions on the error matrix $E = \tilde{X}\tilde{X}' - XX'$ that are sufficient to guarantee that $\tilde{X}$ is an $\epsilon$-approximation embedded matrix for $X$. For any two symmetric matrices $A, B \in \mathbb{R}^{n\times n}$, $A \preceq B$ indicates that $B - A$ is positive semidefinite. Let $\lambda_i(A)$ denote the $i$-th largest eigenvalue of $A$ in absolute value, $\langle\cdot,\cdot\rangle$ the inner product, and $0_{n\times d}$ an $n \times d$ zero matrix with all entries equal to zero.

Lemma 2. Let $C = XX'$ and $\tilde{C} = \tilde{X}\tilde{X}'$. Suppose we can write $\tilde{C} = C + E_1 + E_2 + E_3 + E_4$, where:
(i) $E_1$ is symmetric and $-\epsilon_1 C \preceq E_1 \preceq \epsilon_1 C$;
(ii) $E_2$ is symmetric, $\sum_{i=1}^k |\lambda_i(E_2)| \le \epsilon_2 \|X_{r\backslash k}\|_F^2$, and $\mathrm{tr}(E_2) \le \tilde{\epsilon}_2 \|X_{r\backslash k}\|_F^2$;
(iii) the columns of $E_3$ fall in the column span of $C$ and $\mathrm{tr}(E_3' C^+ E_3) \le \epsilon_3^2 \|X_{r\backslash k}\|_F^2$;
(iv) the rows of $E_4$ fall in the row span of $C$ and $\mathrm{tr}(E_4 C^+ E_4') \le \epsilon_4^2 \|X_{r\backslash k}\|_F^2$;
and $\epsilon_1 + \epsilon_2 + \tilde{\epsilon}_2 + \epsilon_3 + \epsilon_4 = \epsilon$. Then $\tilde{X}$ is an $\epsilon$-approximation embedded matrix for $X$. Specifically, we have $(1-\epsilon)\,\mathrm{tr}(ZCZ) \le \mathrm{tr}(Z\tilde{C}Z) - \min\{0, \mathrm{tr}(E_2)\} \le (1+\epsilon)\,\mathrm{tr}(ZCZ)$.

The proof can be found in [17]. Next, we show that $XR'$ is an $\epsilon$-approximation embedded matrix for $X$. We first present the following theorem:

Theorem 2. Assume $r > 2k$ and let $V_{2k} \in \mathbb{R}^{d\times r}$ represent $V$ with all but its first $2k$ columns zeroed out. We define $M_1 = V_{2k}'$, $M_2 = (\sqrt{k}/\|X_{r\backslash k}\|_F)(X - XV_{2k}V_{2k}')$, and $M \in \mathbb{R}^{(n+r)\times d}$ as containing $M_1$ as its first $r$ rows and $M_2$ as its lower $n$ rows. We construct $R = (Q\Phi)' \in \mathbb{R}^{\tilde{d}\times d}$ as shown in Algorithm 1. Given $\tilde{d} = O(\max\{(k+\log(1/\delta))/\epsilon^2,\; 6/(\epsilon^2\delta)\})$, then for any $X \in \mathbb{R}^{n\times d}$, with probability at least $1 - O(\delta)$, we have
(i) $\|(RM')'(RM') - MM'\|_2 < \epsilon$;
(ii) $\big|\,\|RM_2'\|_F^2 - \|M_2'\|_F^2\,\big| \le \epsilon k$.

Proof. To prove the first result, one can easily check that $M_1 M_2' = 0_{r\times n}$; thus $MM'$ is a block diagonal matrix with upper left block equal to $M_1 M_1'$ and lower right block equal to $M_2 M_2'$. The spectral norm of $M_1 M_1'$ is 1, and $\|M_2 M_2'\|_2 = \|M_2\|_2^2 = k\|X - XV_{2k}V_{2k}'\|_2^2 / \|X_{r\backslash k}\|_F^2 = k\|X_{r\backslash 2k}\|_2^2 / \|X_{r\backslash k}\|_F^2$. As $\|X_{r\backslash k}\|_F^2 \ge k\|X_{r\backslash 2k}\|_2^2$, we derive $\|M_2 M_2'\|_2 \le 1$. Since $MM'$ is a block diagonal matrix, we have $\|M\|_2^2 = \|MM'\|_2 = \max\{\|M_1 M_1'\|_2, \|M_2 M_2'\|_2\} = 1$. Moreover, $\mathrm{tr}(M_1 M_1') = 2k$ and $\mathrm{tr}(M_2 M_2') = k\|X_{r\backslash 2k}\|_F^2 / \|X_{r\backslash k}\|_F^2$; as $\|X_{r\backslash k}\|_F^2 \ge \|X_{r\backslash 2k}\|_F^2$, we derive $\mathrm{tr}(M_2 M_2') \le k$. Then we have $\|M\|_F^2 = \mathrm{tr}(MM') = \mathrm{tr}(M_1 M_1') + \mathrm{tr}(M_2 M_2') \le 3k$. Applying Theorem 6 from [18], we obtain that, given $\tilde{d} = O((k+\log(1/\delta))/\epsilon^2)$, with probability at least $1-\delta$, $\|(RM')'(RM') - MM'\|_2 < \epsilon$.

The proof of the second result can be found in the Supplementary Materials.
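The block structure exploited in this proof is easy to check numerically. The following snippet is a sanity check of our own on small random data, not part of the paper: it builds $M$ from the SVD exactly as in Theorem 2 and verifies that $M_1 M_2' = 0$, that $\|M\|_2 = 1$, and that $\|M\|_F^2 \le 3k$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 30, 3
X = rng.standard_normal((n, d))
U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) Vt
r = len(s)

V2k = np.hstack([Vt[:2 * k].T, np.zeros((d, r - 2 * k))])  # d x r, first 2k cols kept
Xk_resid = X - (X @ Vt[:k].T) @ Vt[:k]                     # X_{r\k} = X - X_k
M1 = V2k.T                                                  # r x d
M2 = np.sqrt(k) / np.linalg.norm(Xk_resid) * (X - X @ V2k @ V2k.T)
M = np.vstack([M1, M2])                                     # (r + n) x d

# Block-diagonal structure used in the proof of Theorem 2.
assert np.allclose(M1 @ M2.T, 0)
# ||M||_2^2 = ||MM'||_2 = 1 and ||M||_F^2 <= 3k.
assert np.isclose(np.linalg.norm(M, 2), 1.0)
assert np.linalg.norm(M) ** 2 <= 3 * k + 1e-8
```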
Based on Theorem 2, we show that $\tilde{X} = XR'$ satisfies the conditions of Lemma 2.

Lemma 3. Assume $r > 2k$ and construct $M$ and $R$ as in Theorem 2. Given $\tilde{d} = O(\max\{(k+\log(1/\delta))/\epsilon^2,\; 6/(\epsilon^2\delta)\})$, then for any $X \in \mathbb{R}^{n\times d}$, with probability at least $1 - O(\delta)$, $\tilde{X} = XR'$ satisfies the conditions of Lemma 2.

Proof. We construct $H_1 \in \mathbb{R}^{n\times(n+r)}$ as $H_1 = [XV_{2k}, 0_{n\times n}]$, so that $H_1 M = XV_{2k}V_{2k}'$. We set $H_2 \in \mathbb{R}^{n\times(n+r)}$ as $H_2 = [0_{n\times r}, (\|X_{r\backslash k}\|_F/\sqrt{k})\, I_n]$, so that $H_2 M = (\|X_{r\backslash k}\|_F/\sqrt{k})\, M_2 = X - XV_{2k}V_{2k}' = X_{r\backslash 2k}$ and $X = H_1 M + H_2 M$. We obtain the following:
$$E = \tilde{X}\tilde{X}' - XX' = XR'RX' - XX' = E_1 + E_2 + E_3 + E_4 \tag{7}$$
where $E_1 = H_1 M R'R M' H_1' - H_1 M M' H_1'$, $E_2 = H_2 M R'R M' H_2' - H_2 M M' H_2'$, $E_3 = H_1 M R'R M' H_2' - H_1 M M' H_2'$, and $E_4 = H_2 M R'R M' H_1' - H_2 M M' H_1'$. We bound $E_1$, $E_2$, $E_3$ and $E_4$ separately, showing that each corresponds to one of the error terms described in Lemma 2.

Bounding $E_1$.
$$E_1 = H_1 M R'R M' H_1' - H_1 M M' H_1' = XV_{2k}V_{2k}' R'R V_{2k}V_{2k}' X' - XV_{2k}V_{2k}'V_{2k}V_{2k}' X' \tag{8}$$
$E_1$ is symmetric. By Theorem 2, we know that with probability at least $1-\delta$, $\|(RM')'(RM') - MM'\|_2 < \epsilon$ holds. Then we get $-\epsilon I_{n+r} \preceq (RM')'(RM') - MM' \preceq \epsilon I_{n+r}$, and we derive the following:
$$-\epsilon H_1 H_1' \preceq E_1 \preceq \epsilon H_1 H_1' \tag{9}$$
For any vector $v$, $v' XV_{2k}V_{2k}'V_{2k}V_{2k}' X' v = \|V_{2k}V_{2k}' X' v\|_2^2 \le \|V_{2k}V_{2k}'\|_2^2 \|X'v\|_2^2 = \|X'v\|_2^2 = v' XX' v$, so $H_1 M M' H_1' = XV_{2k}V_{2k}'V_{2k}V_{2k}' X' \preceq XX'$. Since $H_1 M M' H_1' = XV_{2k}V_{2k}'V_{2k}V_{2k}' X' = XV_{2k}V_{2k}' X' = H_1 H_1'$, we have
$$H_1 H_1' = H_1 M M' H_1' \preceq XX' = C \tag{10}$$
Combining Eqs. (9) and (10), we arrive at, with probability at least $1-\delta$,
$$-\epsilon C \preceq E_1 \preceq \epsilon C \tag{11}$$
satisfying the first condition of Lemma 2.

Bounding $E_2$.
$$E_2 = H_2 M R'R M' H_2' - H_2 M M' H_2' = (X - XV_{2k}V_{2k}') R'R (X - XV_{2k}V_{2k}')' - (X - XV_{2k}V_{2k}')(X - XV_{2k}V_{2k}')' \tag{12}$$
$E_2$ is symmetric. Note that $H_2$ just selects $M_2$ from $M$ and scales it by $\|X_{r\backslash k}\|_F/\sqrt{k}$. Using Theorem 2, we know that with probability at least $1-\delta$,
$$\mathrm{tr}(E_2) = \frac{\|X_{r\backslash k}\|_F^2}{k}\,\mathrm{tr}(M_2 R'R M_2' - M_2 M_2') \le \epsilon \|X_{r\backslash k}\|_F^2 \tag{13}$$
Applying Theorem 6.2 from [19] and rescaling $\epsilon$, we obtain that with probability at least $1-\delta$,
$$\|E_2\|_F = \|X_{r\backslash 2k} R'R X_{r\backslash 2k}' - X_{r\backslash 2k} X_{r\backslash 2k}'\|_F \le \frac{\epsilon}{\sqrt{k}}\, \|X_{r\backslash 2k}\|_F^2 \tag{14}$$
Combining Eq. (14), the Cauchy-Schwarz inequality, and $\|X_{r\backslash 2k}\|_F^2 \le \|X_{r\backslash k}\|_F^2$, we get that with probability at least $1-\delta$,
$$\sum_{i=1}^k |\lambda_i(E_2)| \le \sqrt{k}\,\|E_2\|_F \le \epsilon \|X_{r\backslash k}\|_F^2 \tag{15}$$
Eqs. (13) and (15) satisfy the second condition of Lemma 2.

Bounding $E_3$.
$$E_3 = H_1 M R'R M' H_2' - H_1 M M' H_2' = XV_{2k}V_{2k}' R'R (X - XV_{2k}V_{2k}')' - XV_{2k}V_{2k}' (X - XV_{2k}V_{2k}')' \tag{16}$$
The columns of $E_3$ are in the column span of $H_1 M = XV_{2k}V_{2k}'$, and so in the column span of $C$. Note that $\|V_{2k}\|_F^2 = \mathrm{tr}(V_{2k}V_{2k}') = 2k$. As $V_{2k}'V = V_{2k}'V_{2k}$, we have $V_{2k}' X_{r\backslash 2k}' = V_{2k}'(V\Sigma U' - V_{2k}\Sigma_{2k}U_{2k}') = \Sigma_{2k} U_{2k}' - \Sigma_{2k} U_{2k}' = 0_{r\times n}$. Applying Theorem 6.2 from [19] again and rescaling $\epsilon$, we obtain that with probability at least $1-\delta$,
$$\mathrm{tr}(E_3' C^+ E_3) = \|\Sigma^{-1} U' (H_1 M R'R M' H_2' - H_1 M M' H_2')\|_F^2 = \|V_{2k}' R'R X_{r\backslash 2k}' - 0_{r\times n}\|_F^2 \le \epsilon^2 \|X_{r\backslash k}\|_F^2 \tag{17}$$
Thus, Eq. (17) satisfies the third condition of Lemma 2.

Bounding $E_4$.
$$E_4 = H_2 M R'R M' H_1' - H_2 M M' H_1' = (X - XV_{2k}V_{2k}') R'R V_{2k}V_{2k}' X' - (X - XV_{2k}V_{2k}') V_{2k}V_{2k}' X' \tag{18}$$
$E_4 = E_3'$, and thus we immediately have that with probability at least $1-\delta$,
$$\mathrm{tr}(E_4 C^+ E_4') \le \epsilon^2 \|X_{r\backslash k}\|_F^2 \tag{19}$$
Lastly, Eqs. (11), (13), (15), (17) and (19) ensure that, for any $X \in \mathbb{R}^{n\times d}$, $\tilde{X} = XR'$ satisfies the conditions of Lemma 2 and is an $\epsilon$-approximation embedded matrix for $X$ with probability at least $1 - O(\delta)$.

4 Experiment

4.1 Data Sets and Baselines

We denote our proposed sparse embedded k-means clustering algorithm as SE for short. This section evaluates the performance of the proposed method on four real-world data sets: COIL20, SECTOR, RCV1 and ILSVRC2012. The COIL20 [20] and ILSVRC2012 [21] data sets are collected from their websites³⁴, and the other data sets are collected from the LIBSVM website⁵. The statistics of these data sets are presented in the Supplementary Materials. We compare SE with several other dimensionality reduction techniques:

³ http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php
⁴ http://www.image-net.org/challenges/LSVRC/2012/
⁵ https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/

Figure 1: Clustering accuracy of various methods on COIL20, SECTOR, RCV1 and ILSVRC2012 data sets (clustering accuracy in % vs. number of embedded dimensions; methods: k-means, SVD, LLE, LS, RP, PD, SE).
Figure 2: Dimension reduction time of various methods on COIL20, SECTOR, RCV1 and ILSVRC2012 data sets (preprocessing time in seconds, log scale, vs. number of embedded dimensions).
Figure 3: Clustering time of various methods on COIL20, SECTOR, RCV1 and ILSVRC2012 data sets (clustering time in seconds, log scale, vs. number of embedded dimensions).

• SVD: the singular value decomposition (principal component analysis) dimensionality reduction approach.
• LLE: the locally linear embedding (LLE) algorithm proposed by [22]. We use the code from the authors' website⁶ with default parameters.
• LS: the Laplacian score (LS) feature selection method developed by [10]. We use the code from the authors' website⁷ with default parameters.
• PD: an advanced compression scheme for accelerating k-means clustering proposed by [15]. We use the code from the authors' website⁸ with default parameters.
• RP: the state-of-the-art random projection method proposed by [1].

⁶ http://www.cs.nyu.edu/~roweis/lle/
⁷ http://www.cad.zju.edu.cn/home/dengcai/Data/data.html
⁸ https://github.com/stephenbeckr/SparsifiedKMeans
⁹ http://www.cad.zju.edu.cn/home/dengcai/Data/data.html

After dimensionality reduction, we run all methods with a standard k-means clustering package⁹ with default parameters. We also compare all these methods against the standard k-means algorithm on the full-dimensional data sets. To measure the quality of all methods, we report clustering accuracy based on the labelled information of the input data. Finally, we report the running times (in seconds) of both the dimensionality reduction procedure and the k-means clustering for all baselines.

4.2 Results

The experimental results of the various methods on all data sets are shown in Figures 1, 2 and 3. The Y axes of Figures 2 and 3 represent dimension reduction and clustering time in log scale. We could not obtain the results of SVD, LLE and LS within three days on the RCV1 and ILSVRC2012 data sets, so these results are not reported. From Figures 1, 2 and 3, we can see that:

• As the number of embedded dimensions increases, the clustering accuracy and running times of all dimensionality reduction methods increase, which is consistent with the empirical results in [1].
• Our proposed dimensionality reduction method has superior performance compared to the RP method and the other baselines in terms of accuracy, which verifies our theoretical results. LLE and LS generally underperform on the COIL20 and SECTOR data sets.
• SVD and LLE are the two slowest baselines in terms of dimensionality reduction time. The dimension reduction time of the RP method increases significantly with the number of dimensions, while our method obtains a stable and low dimensionality reduction time: it is several hundred times faster than the RP method and the other baselines. These results also support our complexity analysis.
• All dimensionality reduction methods are significantly faster than the standard k-means algorithm with full dimensions.

Finally, we conclude that our proposed method is able to significantly accelerate k-means clustering while achieving satisfactory clustering performance.

5 Conclusion

The k-means clustering algorithm is a ubiquitous tool in data mining and machine learning with numerous applications. The increasing dimensionality and scale of data poses a considerable challenge in designing efficient and accurate k-means clustering algorithms. Researchers have addressed these obstacles with dimensionality reduction methods, which embed the original features into a low-dimensional space and then perform k-means clustering on the embedded dimensions. SVD is one of the most popular dimensionality reduction methods, but it is computationally expensive. Recently, [1] developed a state-of-the-art RP method for faster k-means clustering. Their method delivers many improvements over other dimensionality reduction methods.
For example, compared to an advanced SVD-based feature extraction approach [14], [1] reduce the running time by a factor of $\min\{n, d\}\epsilon^2\log(d)/k$, while only losing a factor of one in approximation accuracy. They also improve on the folklore RP method by a factor of $\log(n)/k$ in terms of the number of embedded dimensions and the running time, while likewise losing a factor of one in approximation accuracy. Unfortunately, their method still requires $O(ndk/(\epsilon^2\log d))$ time for the matrix multiplication, and this cost is prohibitive for large values of $n$ and $d$. To break this bottleneck, we carefully construct a sparse matrix for the RP method that requires only $O(\mathrm{nnz}(X))$ time for the fast matrix multiplication. Our algorithm is significantly faster than other dimensionality reduction methods, especially when $\mathrm{nnz}(X) \ll nd$. Furthermore, we improve the results of [12] and [1] by a factor of one in approximation accuracy. Our empirical studies demonstrate that our proposed algorithm outperforms other dimension reduction methods, which corroborates our theoretical findings.

Acknowledgments

We would like to thank the area chairs and reviewers for their valuable comments and constructive suggestions on our paper. This project is supported by the ARC Future Fellowship FT130100746, ARC grant LP150100671, DP170101628, DP150102728, DP150103071, NSFC 61232006 and NSFC 61672235.

References

[1] Christos Boutsidis, Anastasios Zouzias, Michael W. Mahoney, and Petros Drineas. Randomized dimensionality reduction for k-means clustering. IEEE Trans. Information Theory, 61(2):1045–1062, 2015.
[2] J. A. Hartigan and M. A. Wong. Algorithm AS 136: A k-means clustering algorithm. Applied Statistics, 28(1):100–108, 1979.
[3] Xiao-Bo Shen, Weiwei Liu, Ivor W. Tsang, Fumin Shen, and Quan-Sen Sun. Compressed k-means for large-scale clustering. In AAAI, pages 2527–2533, 2017.
[4] Xinwang Liu, Miaomiao Li, Lei Wang, Yong Dou, Jianping Yin, and En Zhu. Multiple kernel k-means with incomplete kernels. In AAAI, pages 2259–2265, 2017.
[5] Tom M. Mitchell, Rebecca A. Hutchinson, Radu Stefan Niculescu, Francisco Pereira, Xuerui Wang, Marcel Adam Just, and Sharlene D. Newman. Learning to decode cognitive states from brain images. Machine Learning, 57(1-2):145–175, 2004.
[6] Jianqing Fan, Richard Samworth, and Yichao Wu. Ultrahigh dimensional feature selection: Beyond the linear model. JMLR, 10:2013–2038, 2009.
[7] Jorge Sánchez, Florent Perronnin, Thomas Mensink, and Jakob J. Verbeek. Image classification with the Fisher vector: Theory and practice. International Journal of Computer Vision, 105(3):222–245, 2013.
[8] Yiteng Zhai, Yew-Soon Ong, and Ivor W. Tsang. The emerging "big dimensionality". IEEE Computational Intelligence Magazine, 9(3):14–26, 2014.
[9] Weiwei Liu and Ivor W. Tsang. Making decision trees feasible in ultrahigh feature and label dimensions. Journal of Machine Learning Research, 18(81):1–36, 2017.
[10] Xiaofei He, Deng Cai, and Partha Niyogi. Laplacian score for feature selection. In NIPS, pages 507–514, 2005.
[11] Donald H. Foley and John W. Sammon Jr. An optimal set of discriminant vectors. IEEE Trans. Computers, 24(3):281–289, 1975.
[12] Christos Boutsidis, Michael W. Mahoney, and Petros Drineas. Unsupervised feature selection for the k-means clustering problem. In NIPS, pages 153–161, 2009.
[13] Petros Drineas, Alan M. Frieze, Ravi Kannan, Santosh Vempala, and V. Vinay. Clustering in large graphs and matrices. In Proceedings of the Tenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 291–299, 1999.
[14] Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1434–1453, 2013.
[15] Farhad Pourkamali Anaraki and Stephen Becker. Preconditioned data sparsification for big data with applications to PCA and k-means. IEEE Trans. Information Theory, 63(5):2954–2974, 2017.
[16] Leon Mirsky. Symmetric gauge functions and unitarily invariant norms. The Quarterly Journal of Mathematics, 11:50–59, 1960.
[17] Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina Persu. Dimensionality reduction for k-means clustering and low rank approximation. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 163–172, 2015.
[18] Michael B. Cohen, Jelani Nelson, and David P. Woodruff. Optimal approximate matrix product in terms of stable rank. In 43rd International Colloquium on Automata, Languages, and Programming, pages 11:1–11:14, 2016.
[19] Daniel M. Kane and Jelani Nelson. Sparser Johnson-Lindenstrauss transforms. Journal of the ACM, 61(1):4:1–4:23, 2014.
[20] Rong Wang, Feiping Nie, Xiaojun Yang, Feifei Gao, and Minli Yao. Robust 2DPCA with non-greedy l1-norm maximization for image analysis. IEEE Trans. Cybernetics, 45(5):1108–1112, 2015.
[21] Weiwei Liu, Ivor W. Tsang, and Klaus-Robert Müller. An easy-to-hard learning paradigm for multiple classes and multiple labels. Journal of Machine Learning Research, 18(94):1–38, 2017.
[22] Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
Dynamic-Depth Context Tree Weighting

João V. Messias*
Morpheus Labs
Oxford, UK
[email protected]

Shimon Whiteson
University of Oxford
Oxford, UK
[email protected]

* During the development of this work, the main author was employed by the University of Amsterdam.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Reinforcement learning (RL) in partially observable settings is challenging because the agent's observations are not Markov. Recently proposed methods can learn variable-order Markov models of the underlying process but have steep memory requirements and are sensitive to aliasing between observation histories due to sensor noise. This paper proposes dynamic-depth context tree weighting (D2-CTW), a model-learning method that addresses these limitations. D2-CTW dynamically expands a suffix tree while ensuring that the size of the model, but not its depth, remains bounded. We show that D2-CTW approximately matches the performance of state-of-the-art alternatives at stochastic time-series prediction while using at least an order of magnitude less memory. We also apply D2-CTW to model-based RL, showing that, on tasks that require memory of past observations, D2-CTW can learn without prior knowledge of a good state representation, or even the length of history upon which such a representation should depend.

1 Introduction

Agents must often act given an incomplete or noisy view of their environments. While decision-theoretic planning and reinforcement learning (RL) methods can discover control policies for agents whose actions can have uncertain outcomes, partial observability greatly increases the problem difficulty, since each observation does not provide sufficient information to disambiguate the true state of the environment and accurately gauge the utility of the agent's available actions. Moreover, when stochastic models of the system are not available a priori, probabilistic inference over latent state variables is not feasible. In such cases, agents must learn to memorize past observations and actions [21, 9], or one must learn history-dependent models of the system [15, 8].

Variable-order Markov models (VMMs), which have long excelled in stochastic time-series prediction and universal coding [23, 14, 2], have recently also found application in RL under partial observability [13, 7, 24, 19]. VMMs build a context-dependent predictive model of future observations and/or rewards, where a context is a variable-length subsequence of recent observations. Since the number of possible contexts grows exponentially with both the context length and the number of possible observations, VMMs' memory requirements may grow accordingly. Conversely, the frequency of each particular context in the data decreases as its length increases, so it may be difficult to accurately model long-term dependencies without requiring prohibitive amounts of data.

Existing VMMs address these problems by allowing models to differentiate between contexts at non-consecutive past timesteps, ignoring intermediate observations [13, 22, 10, 24, 4]. However, they typically assume that either the amount of input data is naturally limited or there is a known bound on the length of the contexts to be considered. In most settings in which an agent interacts continuously with its environment, neither assumption is well justified. The lack of a defined time limit means the approaches that make the former assumption, e.g., [13, 24], may eventually and indiscriminately
use all the agent's physical memory, while those that assume a bound on the context length, e.g., [19], may perform poorly if observations older than this bound are relevant.

This paper proposes dynamic-depth context tree weighting (D2-CTW), a VMM designed for general continual learning tasks. D2-CTW extends context tree weighting (CTW) [23] by allowing it to dynamically grow a suffix tree that discriminates between observations at different depths only insofar as that improves its ability to predict future inputs. This allows it to bound the number of contexts represented in the model, without sacrificing the ability to model long-term dependencies. Our empirical results show that, when used for general stochastic time-series prediction, D2-CTW produces models that are much more compact than those of CTW while providing better results in the presence of noise. We also apply D2-CTW as part of a model-based RL architecture and show that it outperforms multiple baselines on the problem of RL under partial observability, particularly when an effective bound on the length of its contexts is not known a priori.

2 Background

2.1 Stochastic Time-Series Prediction

Let an alphabet $\Sigma = \{\sigma^1, \sigma^2, \ldots, \sigma^{|\Sigma|}\}$ be a discrete set of symbols, and let $\Delta(\Sigma)$ represent the space of probability distributions over $\Sigma$ (the $(|\Sigma|-1)$-simplex). Consider a discrete-time stochastic process that, at each time $t \ge 0$, samples a symbol $\sigma_t$ from a probability distribution $p_t \in \Delta(\Sigma)$. We assume that this stochastic process is stationary and ergodic, and that $p_t$ is a conditional probability distribution, which for some (unknown) constant integer $D$ with $0 < D \le t$ has the form:
$$p_t(\sigma) = P(\sigma_t = \sigma \mid \sigma_{t-1}, \sigma_{t-2}, \ldots, \sigma_{t-D}). \tag{1}$$

Let $\sigma_{t-D:t-1} = (\sigma_{t-D}, \sigma_{t-D+1}, \ldots, \sigma_{t-1})$ be a string of symbols from time $t-D$ to $t-1$. Since $\sigma_{t-D:t-1} \in \Sigma^D$ and $\Sigma$ is finite, there is a finite number of length-$D$ strings on which the evolution of our stochastic process can be conditioned. Thus, the stochastic process can also be represented by a time-invariant function $F: \Sigma^D \to \Delta(\Sigma)$ such that $p_t =: F(\sigma_{t-D:t-1})$ at any time $t \ge D$.

Let $s$ be a string of symbols from alphabet $\Sigma$ with length $|s|$ and elements $[s]_{i \in \{1,\ldots,|s|\}}$. Furthermore, a string $q$ with $|q| < |s|$ is said to be a prefix of $s$ iff $q_{1:|q|} = s_{1:|q|}$, and a suffix of $s$ iff $q_{1:|q|} = s_{|s|-|q|:|s|}$. We write $sq$ or $\sigma s$ for the concatenation of strings $s$ and $q$, or of $s$ and symbol $\sigma \in \Sigma$. A complete and proper suffix set is a set of strings $S$ such that any string not in $S$ has exactly one suffix in $S$, but no string in $S$ has a suffix in $S$.

Although $D$ is an upper bound on the age of the oldest symbol on which the process $F$ depends, at any time $t$ it may depend only on some suffix of $\sigma_{t-D:t-1}$ of length less than $D$. Given the variable-length nature of its conditional arguments, $F$ can be tractably encoded as a $D$-bounded tree source [2] that arranges a complete and proper suffix set into a tree-like graphical structure. Each node at depth $d \le D$ corresponds to a length-$d$ string, all internal nodes correspond to suffixes of the strings associated with their children, and each leaf encodes a distribution over $\Sigma$ representing the value of $F$ for that string.

Given a single, uninterrupted sequence $\sigma_{0:t}$ generated by $F$, we wish to learn the $\hat{F}: \Sigma^D \to \Delta(\Sigma)$ that minimises the average log-loss of the observed data $\sigma_{0:t}$. Letting $P_{\hat{F}}(\sigma \mid \sigma_{i-D}, \ldots, \sigma_{i-1}) := \hat{F}(\sigma_{i-D:i-1})$:
$$l(\sigma_{0:t} \mid \hat{F}) = -\frac{1}{t} \sum_{i=D}^{t} \log P_{\hat{F}}(\sigma_i \mid \sigma_{i-D}, \ldots, \sigma_{i-1}). \tag{2}$$
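To make the tree-source notion concrete, here is a small toy example of our own: a 2-bounded tree source over $\Sigma = \{0, 1\}$, stored as a complete and proper suffix set that maps each suffix to a distribution over the next symbol. Generation at each step looks up the unique suffix of the recent history.

```python
import numpy as np

# A 2-bounded tree source over the alphabet {0, 1}: the suffix set
# {'0', '01', '11'} is complete and proper -- every history of length
# >= 2 ends in exactly one of these suffixes, and no suffix in the set
# is a suffix of another.
F = {
    '0':  [0.9, 0.1],   # last symbol was 0
    '01': [0.2, 0.8],   # last two symbols were 0 then 1
    '11': [0.5, 0.5],   # last two symbols were 1 then 1
}

def next_distribution(history):
    """Return F(suffix) for the unique suffix of `history` in the set."""
    for suffix, dist in F.items():
        if history.endswith(suffix):
            return dist
    raise ValueError('no matching suffix')

rng = np.random.default_rng(0)
history = '00'
for _ in range(20):
    p = next_distribution(history)
    history += str(rng.choice(2, p=p))
```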
(2) i=D Context Tree Weighting The depth-K context tree on alphabet ? is a graphical structure obtained by arranging all possible strings in ?K into a full tree. A context tree has a fixed depth at all leaves and potentially encodes all strings in ?K , not just those required by F . More specifically, given a sequence of symbols ? 0:t?1 , the respective length-K context ? t?K:t?1 induces a context path along the context tree by following at each level d ? K the edge corresponding to ?t?d . The root of the context tree represents an empty string ?, a suffix to all strings. Furthermore, each node keeps track of the input symbols that have immediately followed its respective context. Let sub(? 0:t?1 , s) represent the 2 string obtained by concatenating all symbols ?i in ? 0:t?1 such that its preceding symbols verify ?i?k = sk for k = 1, . . . , |s|. Then, each node s in the context tree maintains its own estimate of the probability of observing the string sub(? 0:t?1 , s). Context tree weighting (CTW) [23] learns a mixture of the estimates of P (sub(? 0:t?1 , s)) at all contexts s of length |s| ? K and uses it to estimate the probability of the entire observed sequence. Let Pes (? 0:t?1 ) represent the estimate of P (sub(? 0:t?1 , s)) at the node corresponding to s, and let Pws (? 0:t?1 ) be a weighted representation of the same measure, defined recursively as:  1 s Q 1 ?s ??? Pw (? 0:t?1 ) if |s| < K, 2 Pe (? 0:t?1 ) + 2 (3) Pws (? 0:t?1 ) := Pes (? 0:t?1 ) if |s| = K. Since sub(? 0:t?1 , ?) = ? 0:t?1 by definition of the empty context, Pw? (? 0:t?1 ) is an estimate of P (? 0:t?1 ). The conditional probability of symbol ?t is approximated as PF? (?t |? 0:t?1 ) = Pw? (? 0:t )/Pw? (? 0:t?1 ). The (unweighted) estimate Pes (? 0:t?1 ) at each context is often computed by keeping |?| incrementally updated counters [cs,t ]i=1,...,|?| ? N0 , where for each ? i ? ?, [cs,t ]i represents the total number of instances where the substring s? i can be found within ? 0:t?1 . The vector of counters cs,t can be modelled as the output of a Dirichlet-multinomial distribution with concentration parameter vector ? = [?i ]i=1,...,|?| . An estimate of the probability of observing symbol ? k at time t + 1 can then be taken as follows: if s is on the context path at time t and ?t = ? k is the next observed symbol, then [cs,t+1 ]k = [cs,t ]k + 1, and [cs,t+1 ]i = [cs,t ]i for all i 6= k. Then: Pes (? k | ? 0:t ) := [cs,t ]k + [?]k PDirM (cs,t+1 | ?) = , + PDirM (cs,t | ?) c+ s,t + ? (4) P|?| P|?| where ?+ = i=1 [?]i , c+ s,t = i=1 [cs,t ]i , and PDirM is the Dirichlet-multimomial mass function. Qt s The estimate of the probability of the full sequence is then Pes (? 0:t ) = ? =0 Pe (?? |? 0:? ?1 ). This can be updated in constant time as each new symbol is received. The choice of ? affects the overall quality of the estimator. We use the sparse adaptive Dirichlet (SAD) estimator [11], which is especially suited to large alphabets. In principle, a depth-K context tree has |?|K+1 ? 1 nodes, each with at most |?| integer counters. In practice, there may be fewer nodes since one need only to allocate space for contexts found in the data at least once, but their total number may still grow linearly with the length of the input string. Thus, for problems such as partially observable RL, in which the amount of input data is unbounded, or for large |?| and K, the memory used by CTW can quickly become unreasonable. 
Previous extensions to CTW and other VMM algorithms have been made that do not explicitly bound the depth of the model [6, 22]. However, these still take up memory that is worst-case linear in the length of the input sequence. Therefore, they are not applicable to reinforcement learning. To overcome this problem, most existing approaches artificially limit K to a low value, which limits the agent?s ability to address long-term dependencies. To our knowledge, the only existing principled approach to reducing the amount of memory required by CTW was proposed in [5], through the use of a modified (Budget) SAD estimator which can be used to limit the branching factor in the context tree to B < |?|. This approach still requires K to be set a priori, and is best-suited to prediction problems with large alphabets but few high frequency symbols (e.g. word prediction), which is not generally the case in decision-making problems. 2.3 Model-Based RL with VMMs In RL with partial observability, an agent performs at each time t an action at ? A, and receives an observation ot ? O and a reward rt ? R with probabilities P (ot |o0:t?1 , r0:t?1 , a0:t?1 ) and P (rt |o0:t?1 , r0:t?1 , a0:t?1 ) respectively. This representation results from marginalising out the latent state variables and assuming that theP agent observes rewards. The agent?s goal is to maximise the ? expected cumulative future rewards E{ ? =t+1 rt + ?? ?t r? } for some discount factor ? ? [0, 1). Letting R = {rt : P (rt |o0:t?1 , r0:t?1 , a0:t?1 ) > 0 ?o0:t?1 , r0:t?1 , a0:t?1 } represent the set of possible rewards and zt ? {1, . . . , |R|} the unique index of rt ? R, then a percept (ot , zt ) is received at each time with probability P (ot , zt |o0:t?1 , z0:t?1 , a0:t?1 ). VMMs such as CTW can 3 then learn a model of this process, using the alphabet ? = O ? {1, . . . , |R|}. This predictive model must condition on past actions, but its output should only estimate the probability of the next percept (not the next action). This is solved by interleaving actions and percepts in the input context, but only updating its estimators based on the value of the next percept [19]. The resulting action-conditional model can be used as a simulator by sample-based planning methods such as UCT [12]. 2.4 Utile Suffix Memory Utile suffix memory (USM) [13] is an RL algorithm similar to VMMs for stochastic time-series prediction. USM learns a suffix tree that is conceptually similar to a context tree with the following differences. First, each node in the suffix tree directly maintains an estimate of expected cumulative future reward for each action. To compute this estimate, USM still predicts (immediate) future observations and rewards at each context, analogously to VMM methods. This prediction is done in a purely frequentist manner, which often yields inferior prediction performance compared to other VMMs, especially given noisy data. Second, USM?s suffix tree does not have a fixed depth; instead, its tree is grown incrementally, by testing potential expansions for statistically significant differences between their respective predictions of cumulative future reward. USM maintains a fixed-depth subtree of fringe nodes below the proper leaf nodes of the suffix tree. Fringe nodes do not contribute to the model?s output, but they also maintain count vectors. At regular intervals, USM compares the distributions over cumulative future reward of each fringe node against its leaf ancestor, through a Kolmogorov-Smirnov (K-S) test. 
3 Dynamic-Depth Context Tree Weighting

We now propose dynamic-depth context tree weighting (D2-CTW). Rather than fixing the depth a priori, like CTW, or using unbounded memory, like USM, D2-CTW learns $\hat{F}$ with dynamic depth, subject to the constraint $|\hat{F}_t| \le L$ at any time $t$, where $L$ is a fixed memory bound.

3.1 Dynamic Expansion in CTW

To use memory efficiently and avoid requiring a fixed depth, we could simply replicate USM's fringe expansion in CTW, by performing K-S tests on distributions over symbols ($P_e^s$) instead of distributions over expected reward. However, doing so would introduce bias. The weighted estimates $P_w^s(\sigma_{0:t})$ for each context $s$ depend on the ratio of the probability of the observed data at $s$ itself, $P_e^s(\sigma_{0:t})$, and that of the data observed at its children, $P_w^{s'}(\sigma_{0:t})$ at $s' = \sigma s$, $\sigma \in \Sigma$. These estimates depend on the number of times each symbol followed a context, implying that $c_{s,t} = \sum_{\sigma \in \Sigma} c_{\sigma s, t}$. Thus, the weighting in (3) assumes that each symbol that was observed to follow the non-leaf context $s$ was also observed to follow exactly one of its children $s'$. If this were not so and, e.g., $s$ was created at time 0 but its children only at $\tau > 0$, then, since $P_w^{s'}(\sigma_{\tau:t}) \ge P_w^{s'}(\sigma_{0:t})$, the weighting would be biased towards the children, which would have been exposed to less data.

Fortunately, an alternative CTW recursion, originally proposed for numerical stability [20], overcomes this issue. In CTW, for a context tree of fixed depth $K$, let $\eta_t^s$ be the likelihood ratio between the weighted estimate below $s$ and the local estimate at $s$ itself:
$$\eta_t^s := \begin{cases} \dfrac{\prod_{\sigma \in \Sigma} P_w^{\sigma s}(\sigma_{0:t})}{P_e^s(\sigma_{0:t})} & \text{if } |s| < K, \\ 1 & \text{if } |s| = K. \end{cases} \tag{5}$$
Then, the weighted estimate of the conditional probability of an observed symbol $\sigma_t$ at node $s$ is:
$$P_w^s(\sigma_t \mid \sigma_{0:t-1}) := \frac{P_w^s(\sigma_{0:t})}{P_w^s(\sigma_{0:t-1})} = \frac{\frac{1}{2} P_e^s(\sigma_{0:t})\,(1 + \eta_t^s)}{\frac{1}{2} P_e^s(\sigma_{0:t-1})\,(1 + \eta_{t-1}^s)} =: P_e^s(\sigma_t \mid \sigma_{0:t-1})\, \frac{1 + \eta_t^s}{1 + \eta_{t-1}^s}. \tag{6}$$
Disregard the fixed depth limit K and consider instead a suffix tree where all leaf nodes have a depth greater than the fringe depth H > 0. For any leaf node at depth d, its ancestor at depth d ? H is its frontier node. The descendants of any frontier node are fringe nodes. Let ft represent the frontier node on the context path at time t. At every timestep t, we traverse down the tree by following the context path as in CTW. At every node on the context path and above ft , we apply (6) and (7) while treating ft as a leaf node. For ft and the fringe nodes on the context path below it, we apply the same updates while treating fringe nodes normally. Thus, the recursion in (6) does not carry over to fringe nodes, but otherwise all nodes update their values of ? in the same manner. Once the fringe expansion criterion is met (see Section 3.2), the fringe nodes below ft simply stop being labeled as such, while the values of ? for the nodes above ft must be updated to reflect the change in the model. Let P?wft (? 0:t ) represent the weighted (unconditional) output at ft after the fringe expansion step. We have therefore P?wft (? 0:t ) := 12 Peft (? 0:t )(1 + ?tft ), but prior to the expansion, Pwft (? 0:t ) = Peft (? 0:t ). The net change in the likelihood of ? 0:t , according to ft , is: ft := ?exp P?wft (? 0:t ) Pwft (? 0:t ) = 1 + ?tft . 2 (8) This induces a change in the likelihood of the data according to all of the ancestors of ft . We need ? =: P?w? (? 0:t )/Pw? (? 0:t ), which quantifies the effect of the fringe expansion on to determine ?exp the global output of the weighted model. Proposition 1. Let f be a string corresponding to a frontier node, and let pd be the length-d suffix Q|f |?1 ?tpd 1+?tf f of f (with p0 = ?). Also let ?f := d=0 1+? pd , and ?exp := 2 . Then: t ? ?exp := P?w? (? 0:t ) Pw? (? 0:t )  f = 1 + ?f ?exp ?1 . The proof can be found in the supplementary material of this paper (Appendix A.1). This formulation is useful since, for any node s in the suffix tree with ancestors (p0 , p1 , . . . , p|s|?1 ) we can p Q|s|?1 ?tpd p|s|?1 ?t |s|?1| associate a value ?st = d=0 1+? pd = ?t p|s?1| that measures the sensitivity of the whole t 1+?t model to changes below s, and not necessarily just fringe expansions. Thus, a node with ?s ' 0 is a good candidate for pruning (see Section 3.3). Furthermore, this value can be computed while traversing the tree along the context path. Although the computation of ?s for a particular node still requires O(|s|) operations, the values of ? for all ancestors of s are also computed along the way. 3.2 Fringe Expansion Criterion f As a likelihood ratio, ?exp provides a statistical measure of the difference between the predictive ? model at each frontier node f and that formed by its fringe children. Analogously, ?exp can be seen as the likelihood ratio between two models that differ only on the subtree below f . Therefore, we can ? test the hypothesis that the subtree below f should be added to the model by checking if ?exp >? ? for some ? > 1. Since the form of Pw (?) is unknown, we cannot establish proper confidence levels for ?; however, the following result shows that the value of ? is not especially important, since if the subtree below f improves the model, this test will eventually be true given enough data. 5 Theorem 1. Let S and Sexp be two proper suffix sets such that Sexp = (S \ f ) ? F where f is suffix to all f 0 ? F. Furthermore, let M and Mexp be the CTW models using the suffix trees induced by S and Sexp respectively, and Pw? (? 
3.2 Fringe Expansion Criterion

As a likelihood ratio, $\Lambda_{\mathrm{exp}}^f$ provides a statistical measure of the difference between the predictive model at each frontier node $f$ and that formed by its fringe children. Analogously, $\Lambda_{\mathrm{exp}}^\lambda$ can be seen as the likelihood ratio between two models that differ only in the subtree below $f$. Therefore, we can test the hypothesis that the subtree below $f$ should be added to the model by checking whether $\Lambda_{\mathrm{exp}}^\lambda > \rho$ for some $\rho > 1$. Since the form of $P_w(\cdot)$ is unknown, we cannot establish proper confidence levels for $\rho$; however, the following result shows that the value of $\rho$ is not especially important, since if the subtree below $f$ improves the model, this test will eventually be true given enough data.

Theorem 1. Let $S$ and $S_{\mathrm{exp}}$ be two proper suffix sets such that $S_{\mathrm{exp}} = (S \setminus f) \cup \mathcal{F}$, where $f$ is a suffix of all $f' \in \mathcal{F}$. Furthermore, let $M$ and $M_{\mathrm{exp}}$ be the CTW models using the suffix trees induced by $S$ and $S_{\mathrm{exp}}$ respectively, and $P_w^\lambda(\sigma_{0:t}; M)$, $P_w^\lambda(\sigma_{0:t}; M_{\mathrm{exp}})$ their estimates of the likelihood of $\sigma_{0:t}$. If there is a $T \in \mathbb{N}$ such that, for any $\tau > 0$:
$$\prod_{t=\tau}^{T+\tau} P_e^f(\sigma_t \mid \sigma_{0:t-1}; M) < \prod_{t=\tau}^{T+\tau} \prod_{\sigma \in \Sigma} P_w^{\sigma f}(\sigma_t \mid \sigma_{0:t-1}; M_{\mathrm{exp}}),$$
then for any $\rho \in [1, \infty)$, there is $T' > 0$ such that $P_w^\lambda(\sigma_{0:T'}; M_{\mathrm{exp}}) / P_w^\lambda(\sigma_{0:T'}; M) > \rho$.

The proof can be found in the supplementary material (Appendix A.2). Using $\Lambda_{\mathrm{exp}}^\lambda > \rho$ as a statistical test instead of K-S tests yields great computational savings, since the procedure described in Proposition 1 allows us to evaluate this test in $O(|f_t|)$ time, typically much lower than the $O(|\Sigma|^{H+1})$ complexity of K-S testing all fringe children. Theorem 1 also ensures that, if sufficient memory is available, D2-CTW will eventually perform as well as CTW with optimal depth bound $K = D$. This follows from the fact that, for every node $s$ at depth $d_s \le D$ in a CTW suffix tree, if $\eta_t^s \ge 1$ for all $t > \tau$, then the D2-CTW suffix tree will be at least as deep as $d_s$ at context $s$ after some time $t' \ge \tau$. That is, at some point, the D2-CTW model will contain the "useful" sub-tree of the optimal-depth context tree.

Corollary 1. Let $l(\sigma \mid \hat{F}_{\mathrm{CTW}}, D)$ represent the average log-loss of CTW using fixed depth $K = D$ when modeling a $D$-bounded tree source, and $l(\sigma \mid \hat{F}_{\mathrm{D2\text{-}CTW}}, \rho, H, L)$ the same metric when using D2-CTW. For any values of $\rho > 1$ and $H > 1$, and for sufficiently high $L > 0$, there exists a time $T' > 0$ such that, for any $t > T'$, $l(\sigma_{T':t} \mid \hat{F}_{\mathrm{D2\text{-}CTW}}, \rho, H, L) \le l(\sigma_{T':t} \mid \hat{F}_{\mathrm{CTW}}, D)$.

3.3 Ensuring the Memory Bound

In order to ensure that the memory bound $|\hat{F}_t| \le L$ is respected, we must first consider whether a potential fringe expansion requires more memory than is available. Thus, if the subtree below frontier node $f$ has size $L_f$, we must test whether $|\hat{F}_t| + L_f \le L$. This means that fringe nodes are not taken into account when computing $|\hat{F}_t|$: as they do not contribute to the output of $\hat{F}_t$, they are considered memory overhead, and are discarded after training.

Once $|\hat{F}_t|$ is such that no fringe expansions are possible without violating the memory bound, it may still be possible to improve the model by pruning low-quality subtrees to create enough space for more valuable fringe expansions. Pruning operations also have a quantifiable effect on the likelihood of the observed data according to $\hat{F}_t$. Let $\bar{P}_w^s(\sigma_{0:t})$ represent the weighted estimate at internal node $s$ after pruning its subtree. Analogously to (8), we can define $\Lambda_{\mathrm{prune}}^s := \bar{P}_w^s(\sigma_{0:t}) / P_w^s(\sigma_{0:t}) = 2/(1 + \eta_t^s)$. We can also compute $\Lambda_{\mathrm{prune}}^\lambda$, the global effect on the likelihood, using the procedure in Proposition 1. Since $\Lambda_{\mathrm{prune}}^\lambda = 1 + \Gamma^s(\Lambda_{\mathrm{prune}}^s - 1)$, typically with $\Lambda_{\mathrm{prune}}^\lambda < 1$, if a fringe expansion at $f$ increases $P_w^\lambda(\sigma_{0:t})$ by a factor of $\Lambda_{\mathrm{exp}}^\lambda$ but requires space $L_f$ such that $|\hat{F}_t| + L_f > L$, we should prune the subtree below $s \ne f$ that frees $L_s$ space and reduces $P_w^\lambda(\sigma_{0:t})$ by $\Lambda_{\mathrm{prune}}^\lambda$ if: 1) $\Lambda_{\mathrm{exp}}^\lambda \cdot \Lambda_{\mathrm{prune}}^\lambda > 1$; 2) $|\hat{F}_t| + L_f - L_s \le L$; and 3) $s$ is not an ancestor of $f$. The latter condition requires $O(|f| - |s|)$ time to validate, while the former can be checked in constant time if $\Gamma^s$ is available.

In general, some combination of subtrees could be pruned to free enough space for some combination of fringe expansions, but determining the best possible combination of operations at each time is too computationally expensive. As a tractable approximation, we compare only the best single expansion and prune at nodes $f^*$ and $s^*$ respectively, quantified with two heuristics $H_{\mathrm{exp}}^f := \log \Lambda_{\mathrm{exp}}^\lambda$ and $H_{\mathrm{prune}}^s := -\log \Lambda_{\mathrm{prune}}^\lambda$, such that $f^* = \arg\max_f H_{\mathrm{exp}}^f$ and $s^* = \arg\min_s H_{\mathrm{prune}}^s$.

As $L$ is decreased, the performance of D2-CTW may naturally degrade. Although Corollary 1 may no longer be applicable in that case, a weaker bound on the performance of memory-constrained D2-CTW can be obtained as follows, regardless of $L$: let $d_{\min}^t$ denote the minimum depth of any frontier node at time $t$; then the D2-CTW suffix tree covers the set of $d_{\min}^t$-bounded models [23]. The redundancy of D2-CTW, measured as the Kullback-Leibler divergence $D_{\mathrm{KL}}(F \,\|\, \hat{F}_t)$, is then at least as low as the redundancy of a multi-alphabet CTW implementation with $K = d_{\min}^t$ [17].
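Putting Sections 3.2 and 3.3 together, the decision step reduces to a few comparisons once the $\Gamma$ values and likelihood ratios are available. The following schematic is our own condensation; every method and attribute name on `model`, `f`, and `s` is hypothetical, standing in for the corresponding bookkeeping in an actual implementation.

```python
def consider_expansion(model, f, s, L):
    """Schematic D2-CTW decision step for the best candidate frontier
    node f and best prune candidate s (Section 3.3).  `model.size` is
    the current proper-node count |F_t|; the global_ratio_* methods
    stand for the Lambda ratios at the root, via Proposition 1."""
    lam_exp = model.global_ratio_expand(f)
    if lam_exp <= model.rho:                # expansion test of Section 3.2
        return
    if model.size + f.subtree_size <= L:
        model.expand(f)                     # fits within the memory budget
    elif (s is not None and not s.is_ancestor_of(f)
          and model.size + f.subtree_size - s.subtree_size <= L):
        lam_prune = model.global_ratio_prune(s)
        if lam_exp * lam_prune > 1.0:       # net gain in global likelihood
            model.prune(s)
            model.expand(f)
```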
log ?prune s , such that f = arg maxf Hexp and s = arg minf Hprune . As L is decreased, the performance of D2-CTW may naturally degrade. Although Corollary 1 may no longer be applicable in that case, a weaker bound on the performance of memory-constrained D2-CTW can be obtained as follows, regardless of L: let dtmin denote the minimum depth of any frontier node at time t; then the D2-CTW suffix tree covers the set of dtmin -bounded models [23]. The redundancy of D2-CTW, measured as the Kullback-Leibler divergence DKL (F ||F?t ), is then at least as low as the redundancy of a multi-alphabet CTW implementation with K = dtmin [17]. 6 bib book1 book2 geo news obj1 obj2 paper1 paper2 paper3 paper4 paper5 paper6 pic progc progl progp trans Model depth (optimal params.) File name Avg. Log-loss(b) (L=1000, 5% noise) (c) Avg. Log-loss vs. ? 4.35 bits/byte 4.30 4.25 4.20 4.15 4.10 bib book1 book2 geo news obj1 obj2 paper1 paper2 paper3 paper4 paper5 paper6 pic progc progl progp trans bib book1 book2 geo news obj1 obj2 paper1 paper2 paper3 paper4 paper5 paper6 pic progc progl progp trans 14 12 10 8 6 4 2 0 File name (a) (L=1000) Avg. log-loss File name Model depth 7 6 5 4 3 2 1 0 File name Nr. nodes (optimal params.) bib book1 book2 geo news obj1 obj2 paper1 paper2 paper3 paper4 paper5 paper6 pic progc progl progp trans Number of nodes 107 106 105 104 103 102 101 100 bits/byte bits/byte bits/byte 7 6 5 4 3 2 1 0 Avg. log-loss (optimal params.) ctw d2-ctw bib book1 book2 geo news obj1 obj2 paper1 paper2 paper3 paper4 paper5 paper6 pic progc progl progp trans 7 6 5 4 3 2 1 0 (d) File name 4.05 1 5 9 13 17 Likelihood ratio threshold, ? (f) (e) Figure 1: Calgary Corpus performance with CTW (red) and D2-CTW (blue). For average log-loss, lower is better: (a)-(c) using optimal parameters; (d) with a bound on the number of nodes; (e) with size bound and uniform noise; (f) log-loss vs. ? on ?book2?, with 10% noise (over 30 runs). 3.4 Complete Algorithm and Complexity The complete D2-CTW algorithm operates as follows (please refer to Appendix A.3 for the respective pseudo-code): a suffix tree is first initialized containing only a root node; at every timestep, the suffix tree is updated using the observed symbol ?t , and the preceding context (if it exists) from time t ? dtmax ? H where dtmax is the current maximum depth of the tree and H is the fringe depth. This update returns the weighted conditional probability of ?t , and it also keeps track of the best known fringe expansion and pruning operations. Then, a post-processing step expands and possibly prunes the tree as necessary, ensuring the memory bound is respected. This step also corrects the values of ? for any nodes affected by these topological operations. D2-CTW trains on each new symbol in O(dtmax + H) time, the same as CTW with depth bound K = dtmax + H. A worstcase O((dtmax + H)|?|) operations are necessary to sample a symbol from the learned model, also equivalent to CTW. Post-processing requires O(max{|f ? |, |s? |}) time. 4 Experiments We now present empirical results on byte-prediction tasks and partially-observable RL. Our code and instructions for its use is publicly available at: https://bitbucket.org/jmessias/vmm_py. Byte Prediction We compare the performance of D2-CTW against CTW on the 18-file variant of the Calgary Corpus [3], a benchmark of text and binary data files. For each file, we ask the algorithms to predict the next byte given the preceding data, such that |?| = 256 across all files. 
We first compare performance when using (approximately) optimal hyperparameters. For CTW, we performed a grid search taking K ? {1, . . . , 10} for each file. For D2-CTW, we investigated the effect of ? on the prediction log-loss across different files, and found no significant effect of this parameter for sufficiently large values (an example is shown in Fig. 1f), in accordance with Theorem 1. Consequently, we set ? = 10 for all our D2-CTW runs. We also set L = ? and H = 2. The corpus results, shown in Figs. 1a?1c, show that D2-CTW achieves comparable performance to CTW: on average D2-CTW?s loss is 2% higher, which is expected since D2-CTW grows dynamically from a single node, while CTW starts with a fully grown model of optimal height. By contrast, D2-CTW uses many fewer nodes than CTW, by at least one order of magnitude (average factor ? 28). D2-CTW automatically discovers optimal depths that are similar to the optimal values for CTW. We then ran a similar test but with a bound on the number of nodes L = 1000. For CTW, we enforced this bound by simply stopping the suffix tree from growing beyond this point2 . The results 2 For simplicity, we did not use CTW with Budget SAD as a baseline. Budget SAD could also be used to extend D2-CTW, so a fair comparison would necessitate the optional integration of Budget SAD into both CTW and D2-CTW. This is an interesting possibility for future work. 7 are shown in Fig. 1d. In this case, the log-loss of CTW is on average 11.4% and up to 32.3% higher than that of D2-CTW, showing that D2-CTW makes a significantly better use of memory. Finally, we repeated this test but randomly replaced 5% of symbols with uniform noise. This makes the advantage of D2-CTW is even more evident, with CTW scoring on average 20.0% worse (Fig. 1e). While the presence of noise still impacts performance, the results show that D2-CTW, unlike CTW, is resilient to noise: spurious contexts are not deemed significant, avoiding memory waste. Model-Based RL For our empirical study on online partially observable RL tasks, we take as a baseline MC-AIXI, a combination of fixed-depth CTW modelling with ?UCT planning [19], and investigate the effect of replacing CTW with D2-CTW and limiting the available memory. We also compare against PPM-C, a frequentist VMM that is competitive with CTW [2]. Our experimental domains are further described in the supplementary material. Our first domain is the T-maze [1], in which an agent must remember its initial observation in order to act optimally at the end of the maze. We consider a maze of length 4. We set K = 3 for CTW and PPM-C, which is the guaranteed minimum depth to produce the optimal policy. For D2-CTW we set ? = 1, H = 2, and do not enforce a memory bound. As in [19], we use an -greedy exploration strategy. Fig. 2a shows that D2-CTW discovers the length of the T-Maze automatically. Furthermore, CTW and PPM-C fail to learn to retain the required observations, as during the initial stages of learning the agent may need more than 3 steps to reach the goal (D2-CTW learns a model of depth 4). Our second scenario is the cheese maze [13], a navigation task with aliased observations. Under optimal parameters, D2-CTW and CTW both achieve near-optimal performance for this task. We investigated the effect of setting a bound on the number of nodes L = 1000, roughly 1/5 of the amount used by CTW with optimal hyperparameters. In Fig. 2b we show that the quality of D2CTW degrades less than both CTW and PPM-C, still achieving a near optimal policy. 
As this is a small-sized problem with D = 2, CTW and PPM-C still produce reasonable results in this case, albeit with lower quality models than D2-CTW. Finally, we tested a partially observable version of mountain car [16], in which the position of the car is observed but not its velocity. We coarsely discretised the position of the car into 10 states. In this task, we have no strong prior knowledge about the required context length, but found K = 4 to be sufficient for optimal PPM-C and CTW performance. For D2-CTW, we used ρ = 10 and H = 2. We also set L = 1000 for all methods. Fig. 2c shows the markedly superior performance of D2-CTW when subject to this memory constraint.

5 Conclusions and Future Work

We introduced D2-CTW, a variable-order modelling algorithm that extends CTW by using a fringe expansion mechanism that tests contexts for statistical significance, and by allowing the dynamic adaptation of its suffix tree subject to a memory bound. We showed both theoretically and empirically that D2-CTW requires little configuration across domains and provides better performance than CTW under memory constraints and/or in the presence of noise. In future work, we will investigate the use of the Budget SAD estimator with a dynamic budget as an alternative mechanism for informed pruning. We also aim to apply a similar approach to context tree switching (CTS) [18], an algorithm that is closely related to CTW but enables mixtures in a larger model class.

Figure 2: Performance measured as (running) average rewards in (a) the T-maze (average reward per episode, over 200 episodes, K = 3); (b) the cheese maze (average reward per step, over 200 steps, L = 1000); (c) partially observable mountain car (average reward per episode, over 100 episodes, L = 1000). Results show the mean over 10 runs, with the first to third quartile shaded.

Acknowledgments
Hausknecht and P. Stone. Deep recurrent Q-learning for partially observable MDPs. In AAAI Fall Symposium Series, 2015.
[10] M. P. Holmes and C. L. Isbell Jr. Looping suffix tree-based inference of partially observable hidden state. In Proceedings of the 23rd International Conference on Machine Learning, pages 409–416. ACM, 2006.
[11] M. Hutter et al. Sparse adaptive Dirichlet-multinomial-like processes. In Conference on Learning Theory: JMLR Workshop and Conference Proceedings, volume 30. Journal of Machine Learning Research, 2013.
[12] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In European Conference on Machine Learning, pages 282–293. Springer, 2006.
[13] A. K. McCallum. Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, University of Rochester, 1995.
[14] D. Ron, Y. Singer, and N. Tishby. The power of amnesia: Learning probabilistic automata with variable memory length. Machine Learning, 25(2-3):117–149, 1996.
[15] S. Singh, M. R. James, and M. R. Rudary. Predictive state representations: A new theory for modeling dynamical systems. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 512–519. AUAI Press, 2004.
[16] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998.
[17] T. J. Tjalkens, Y. M. Shtarkov, and F. M. J. Willems. Context tree weighting: Multi-alphabet sources. In 14th Symposium on Information Theory in the Benelux, pages 128–135, 1993.
[18] J. Veness, K. S. Ng, M. Hutter, and M. Bowling. Context tree switching. In Data Compression Conference (DCC), 2012, pages 327–336. IEEE, 2012.
[19] J. Veness, K. S. Ng, M. Hutter, W. Uther, and D. Silver. A Monte-Carlo AIXI approximation. Journal of Artificial Intelligence Research, 40(1):95–142, 2011.
[20] P. A. J. Volf. Weighting Techniques in Data Compression: Theory and Algorithms. Technische Universiteit Eindhoven, 2002.
[21] D. Wierstra, A. Foerster, J. Peters, and J. Schmidhuber. Solving deep memory POMDPs with recurrent policy gradients. In International Conference on Artificial Neural Networks, pages 697–706. Springer, 2007.
[22] F. M. Willems. The context-tree weighting method: Extensions. IEEE Transactions on Information Theory, 44(2):792–798, 1998.
[23] F. M. Willems, Y. M. Shtarkov, and T. J. Tjalkens. The context-tree weighting method: Basic properties. IEEE Transactions on Information Theory, 41(3):653–664, 1995.
[24] F. Wood, J. Gasthaus, C. Archambeau, L. James, and Y. W. Teh. The sequence memoizer. Communications of the ACM, 54(2):91–98, 2011.
A Regularized Framework for Sparse and Structured Neural Attention

Vlad Niculae* Cornell University, Ithaca, NY. [email protected]
Mathieu Blondel NTT Communication Science Laboratories, Kyoto, Japan. [email protected]

Abstract

Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. We propose in this paper a new framework for sparse and structured attention, building upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing ones, we evaluate our attention mechanisms on three large-scale tasks: textual entailment, machine translation, and sentence summarization. Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.

1 Introduction

Modern neural network architectures are commonly augmented with an attention mechanism, which tells the network where to look within the input in order to make the next prediction. Attention-augmented architectures have been successfully applied to machine translation [2, 29], speech recognition [10], image caption generation [44], textual entailment [38, 31], and sentence summarization [39], to name but a few examples. At the heart of attention mechanisms is a mapping function that converts real values to probabilities, encoding the relative importance of elements in the input. For the case of sequence-to-sequence prediction, at each time step of generating the output sequence, attention probabilities are produced, conditioned on the current state of a decoder network. They are then used to aggregate an input representation (a variable-length list of vectors) into a single vector, which is relevant for the current time step. That vector is finally fed into the decoder network to produce the next element in the output sequence. This process is repeated until the end-of-sequence symbol is generated. Importantly, such architectures can be trained end-to-end using backpropagation.

Alongside empirical successes, neural attention (while not necessarily correlated with human attention) is increasingly crucial in bringing more interpretability to neural networks by helping explain how individual input elements contribute to the model's decisions. However, the most commonly used attention mechanism, softmax, yields dense attention weights: all elements in the input always make at least a small contribution to the decision. To overcome this limitation, sparsemax was recently proposed [31], using the Euclidean projection onto the simplex as a sparse alternative to softmax.

* Work performed during an internship at NTT Communication Science Laboratories, Kyoto, Japan.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 here in the original: three attention-weight heatmaps (fusedmax, softmax, sparsemax) over the rotated words of the input sentence (garbled in extraction), with the generated summary "russian defense minister calls for joint front against terrorism <EOS>" on the vertical axis.]

Figure 1: Attention weights produced by the proposed fusedmax, compared to softmax and sparsemax, on sentence summarization. The input sentence to be summarized (taken from [39]) is along the x-axis. From top to bottom, each row shows where the attention is distributed when producing each word in the summary. All rows sum to 1, the grey background corresponds to exactly 0 (never achieved by softmax), and adjacent positions with exactly equal weight are not separated by borders. Fusedmax pays attention to contiguous segments of text with equal weight; such segments never occur with softmax and sparsemax. In addition to enhancing interpretability, we show in §4.3 that fusedmax outperforms both softmax and sparsemax on this task in terms of ROUGE scores.

Compared to softmax, sparsemax outputs more interpretable attention weights, as illustrated in [31] on the task of textual entailment. The principle of parsimony, which states that simple explanations should be preferred over complex ones, is not, however, limited to sparsity: it remains open whether new attention mechanisms can be designed to benefit from more structural prior knowledge.

Our contributions. The success of sparsemax motivates us to explore new attention mechanisms that can both output sparse weights and take advantage of structural properties of the input through the use of modern sparsity-inducing penalties. To do so, we make the following contributions:

1) We propose a new general framework that builds upon a max operator, regularized with a strongly convex function. We show that this operator is differentiable, and that its gradient defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes as special cases both softmax and a slight generalization of sparsemax. (§2)

2) We show how to incorporate the fused lasso [42] in this framework, to derive a new attention mechanism, named fusedmax, which encourages the network to pay attention to contiguous segments of text when making a decision. This idea is illustrated in Figure 1 on sentence summarization. For cases when the contiguity assumption is too strict, we show how to incorporate an OSCAR penalty [7] to derive a new attention mechanism, named oscarmax, that encourages the network to pay equal attention to possibly non-contiguous groups of words. (§3)

3) In order to use attention mechanisms defined under our framework in an autodiff toolkit, two problems must be addressed: evaluating the attention itself and computing its Jacobian. However, our attention mechanisms require solving a convex optimization problem and do not generally enjoy a simple analytical expression, unlike softmax. Computing the Jacobian of the solution of an optimization problem is called argmin/argmax differentiation and is currently an area of active research (cf. [1] and references therein). One of our key algorithmic contributions is to show how to compute this Jacobian under our general framework, as well as for fused lasso and OSCAR.
(§3) 4) To showcase the potential of our new attention mechanisms as a drop-in replacement for existing ones, we show empirically that our new attention mechanisms enhance interpretability while achieving comparable or better accuracy on three diverse and challenging tasks: textual entailment, machine translation, and sentence summarization. (§4)

Notation. We denote the set {1, . . . , d} by [d]. We denote the (d−1)-dimensional probability simplex by $\Delta^d := \{x \in \mathbb{R}^d : \|x\|_1 = 1, x \ge 0\}$ and the Euclidean projection onto it by $P_{\Delta^d}(x) := \operatorname*{argmin}_{y \in \Delta^d} \|y - x\|_2$. Given a function $f : \mathbb{R}^d \to \mathbb{R} \cup \{\infty\}$, its convex conjugate is defined by $f^*(x) := \sup_{y \in \operatorname{dom} f} y^\top x - f(y)$. Given a norm $\|\cdot\|$, its dual is defined by $\|x\|_* := \sup_{\|y\| \le 1} y^\top x$. We denote the subdifferential of a function $f$ at $y$ by $\partial f(y)$. Elements of the subdifferential are called subgradients and when $f$ is differentiable, $\partial f(y)$ contains a single element, the gradient of $f$ at $y$, denoted by $\nabla f(y)$. We denote the Jacobian of a function $g : \mathbb{R}^d \to \mathbb{R}^d$ at $y$ by $J_g(y) \in \mathbb{R}^{d \times d}$ and the Hessian of a function $f : \mathbb{R}^d \to \mathbb{R}$ at $y$ by $H_f(y) \in \mathbb{R}^{d \times d}$.

[Figure 2 here in the original: two panels plotting $\max_\Omega([t, 0])$ (up to a constant, left) and $\Pi_\Omega([t, 0])_1$ (right) as functions of $t$, for the unregularized max, softmax, sparsemax, sq-pnorm-max, and fusedmax.]

Figure 2: The proposed $\max_\Omega(x)$ operator up to a constant (left) and the proposed $\Pi_\Omega(x)$ mapping (right), illustrated with $x = [t, 0]$ and $\gamma = 1$. In this case, $\max_\Omega(x)$ is a ReLU-like function and $\Pi_\Omega(x)$ is a sigmoid-like function. Our framework recovers softmax (negative entropy) and sparsemax (squared 2-norm) as special cases. We also introduce three new attention mechanisms: sq-pnorm-max (squared p-norm, here illustrated with p = 1.5), fusedmax (squared 2-norm + fused lasso), and oscarmax (squared 2-norm + OSCAR; not pictured since it is equivalent to fusedmax in 2-d). Except for softmax, which never exactly reaches 0, all mappings shown on the right encourage sparse outputs.

2 Proposed regularized attention framework

2.1 The max operator and its subgradient mapping

To motivate our proposal, we first show in this section that the subgradients of the maximum operator define a mapping from $\mathbb{R}^d$ to $\Delta^d$, but that this mapping is highly unsuitable as an attention mechanism. The maximum operator is a function from $\mathbb{R}^d$ to $\mathbb{R}$ and can be defined by
$$\max(x) := \max_{i \in [d]} x_i = \sup_{y \in \Delta^d} y^\top x.$$
The equality on the r.h.s. comes from the fact that the supremum of a linear form over the simplex is always achieved at one of the vertices, i.e., one of the standard basis vectors $\{e_i\}_{i=1}^d$. Moreover, it is not hard to check that any solution $y^\star$ of that supremum is precisely a subgradient of $\max(x)$: $\partial\max(x) = \{e_{i^\star} : i^\star \in \operatorname{argmax}_{i \in [d]} x_i\}$. We can see these subgradients as a mapping $\Pi : \mathbb{R}^d \to \Delta^d$ that puts all the probability mass onto a single element: $\Pi(x) = e_i$ for any $e_i \in \partial\max(x)$. However, this behavior is undesirable, as the resulting mapping is a discontinuous function (a Heaviside step function when $x = [t, 0]$), which is not amenable to optimization by gradient descent.

2.2 A regularized max operator and its gradient mapping

These shortcomings encourage us to consider a regularization of the maximum operator. Inspired by the seminal work of Nesterov [35], we apply a smoothing technique. The conjugate of $\max(x)$ is
$$\max{}^*(y) = \begin{cases} 0, & \text{if } y \in \Delta^d \\ \infty, & \text{o.w.} \end{cases}$$
For a proof, see for instance [33, Appendix B]. We now add regularization to the conjugate:
$$\max_\Omega^*(y) := \begin{cases} \gamma\,\Omega(y), & \text{if } y \in \Delta^d \\ \infty, & \text{o.w.,} \end{cases}$$
where we assume that $\Omega : \mathbb{R}^d \to \mathbb{R}$ is $\beta$-strongly convex w.r.t. some norm $\|\cdot\|$ and $\gamma$
> 0 controls the regularization strength. To define a smoothed max operator, we take the conjugate once again:
$$\max\nolimits_\Omega(x) = \max\nolimits_\Omega^{**}(x) = \sup_{y \in \mathbb{R}^d} y^\top x - \max\nolimits_\Omega^*(y) = \sup_{y \in \Delta^d} y^\top x - \gamma\Omega(y). \qquad (1)$$
Our main proposal is a mapping $\Pi_\Omega : \mathbb{R}^d \to \Delta^d$, defined as the argument that achieves this supremum:
$$\Pi_\Omega(x) := \operatorname*{argmax}_{y \in \Delta^d}\, y^\top x - \gamma\Omega(y) = \nabla\max\nolimits_\Omega(x).$$
The r.h.s. holds by combining that i) $\max_\Omega(x) = (y^\star)^\top x - \max_\Omega^*(y^\star) \Leftrightarrow y^\star \in \partial\max_\Omega(x)$ and ii) $\partial\max_\Omega(x) = \{\nabla\max_\Omega(x)\}$, since (1) has a unique solution. Therefore, $\Pi_\Omega$ is a gradient mapping. We illustrate $\max_\Omega$ and $\Pi_\Omega$ for various choices of $\Omega$ in Figure 2 (2-d) and in Appendix C.1 (3-d).

Importance of strong convexity. Our $\beta$-strong convexity assumption on $\Omega$ plays a crucial role and should not be underestimated. Recall that a function $f : \mathbb{R}^d \to \mathbb{R}$ is $\beta$-strongly convex w.r.t. a norm $\|\cdot\|$ if and only if its conjugate $f^*$ is $\frac{1}{\beta}$-smooth w.r.t. the dual norm $\|\cdot\|_*$ [46, Corollary 3.5.11] [22, Theorem 3]. This is sufficient to ensure that $\max_\Omega$ is $\frac{1}{\gamma\beta}$-smooth, or, in other words, that it is differentiable everywhere and its gradient, $\Pi_\Omega$, is $\frac{1}{\gamma\beta}$-Lipschitz continuous w.r.t. $\|\cdot\|_*$.

Training by backpropagation. In order to use $\Pi_\Omega$ in a neural network trained by backpropagation, two problems must be addressed for any regularizer $\Omega$. The first is the forward computation: how to evaluate $\Pi_\Omega(x)$, i.e., how to solve the optimization problem in (1). The second is the backward computation: how to evaluate the Jacobian of $\Pi_\Omega(x)$, or, equivalently, the Hessian of $\max_\Omega(x)$. One of our key contributions, presented in §3, is to show how to solve these two problems for general differentiable $\Omega$, as well as for two structured regularizers: fused lasso and OSCAR.

2.3 Recovering softmax and sparsemax as special cases

Before deriving new attention mechanisms using our framework, we now show how we can recover softmax and sparsemax, using a specific regularizer $\Omega$.

Softmax. We choose $\Omega(y) = \sum_{i=1}^d y_i \log y_i$, the negative entropy. The conjugate of the negative entropy restricted to the simplex is the log-sum-exp [9, Example 3.25]. Moreover, if $f(x) = \gamma g(x)$ for $\gamma > 0$, then $f^*(y) = \gamma g^*(y/\gamma)$. We therefore get a closed-form expression: $\max_\Omega(x) = \gamma \operatorname{logsumexp}(x/\gamma) := \gamma \log \sum_{i=1}^d e^{x_i/\gamma}$. Since the negative entropy is 1-strongly convex w.r.t. $\|\cdot\|_1$ over $\Delta^d$, we get that $\max_\Omega$ is $\frac{1}{\gamma}$-smooth w.r.t. $\|\cdot\|_\infty$. We obtain the classical softmax, with temperature parameter $\gamma$, by taking the gradient of $\max_\Omega(x)$:
$$\Pi_\Omega(x) = \frac{e^{x/\gamma}}{\sum_{i=1}^d e^{x_i/\gamma}}, \qquad \text{(softmax)}$$
where $e^{x/\gamma}$ is evaluated element-wise. Note that some authors also call $\max_\Omega$ a "soft max." Although $\Pi_\Omega$ is really a soft argmax, we opt to follow the more popular terminology. When $x = [t, 0]$, it can be checked that $\max_\Omega(x)$ reduces to the softplus [16] and $\Pi_\Omega(x)_1$ to a sigmoid.

Sparsemax. We choose $\Omega(y) = \frac{1}{2}\|y\|_2^2$, also known as Moreau–Yosida regularization in proximal operator theory [35, 36]. Since $\frac{1}{2}\|y\|_2^2$ is 1-strongly convex w.r.t. $\|\cdot\|_2$, we get that $\max_\Omega$ is $\frac{1}{\gamma}$-smooth w.r.t. $\|\cdot\|_2$. In addition, it is easy to verify that
$$\Pi_\Omega(x) = P_{\Delta^d}(x/\gamma) = \operatorname*{argmin}_{y \in \Delta^d} \|y - x/\gamma\|_2. \qquad \text{(sparsemax)}$$
This mapping was introduced as is in [31] with $\gamma = 1$ and was named sparsemax, due to the fact that it is a sparse alternative to softmax. Our derivation thus gives us a slight generalization, where $\gamma$ controls the sparsity (the smaller, the sparser) and could be tuned; in our experiments, however, we follow the literature and set $\gamma = 1$.
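To make the two special cases concrete, here is a minimal NumPy sketch of both mappings; the sort-based simplex projection follows the well-known algorithms of [34, 15]. This is our own illustrative reimplementation, not the authors' released code.

```python
import numpy as np

def softmax_map(x, gamma=1.0):
    """Pi_Omega for the negative-entropy regularizer (classical softmax)."""
    z = np.exp((x - x.max()) / gamma)   # shift by the max for numerical stability
    return z / z.sum()

def project_simplex(x):
    """Euclidean projection of x onto the probability simplex (sort-based)."""
    d = x.shape[0]
    u = np.sort(x)[::-1]                 # coordinates sorted in decreasing order
    css = np.cumsum(u) - 1.0             # cumulative sums minus the total mass 1
    rho = np.nonzero(u * np.arange(1, d + 1) > css)[0][-1]
    tau = css[rho] / (rho + 1.0)         # soft threshold
    return np.maximum(x - tau, 0.0)

def sparsemax_map(x, gamma=1.0):
    """Pi_Omega for the squared 2-norm regularizer (sparsemax)."""
    return project_simplex(x / gamma)
```

For instance, `sparsemax_map(np.array([2.0, 0.0]))` returns `[1., 0.]`, whereas softmax never produces exact zeros.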
The Euclidean projection onto the simplex, $P_{\Delta^d}$, can be computed exactly [34, 15] (we discuss the complexity in Appendix B). Following [31], the Jacobian of $\Pi_\Omega$ is
$$J_{\Pi_\Omega}(x) = \frac{1}{\gamma} J_{P_{\Delta^d}}(x/\gamma) = \frac{1}{\gamma}\left(\operatorname{diag}(s) - ss^\top/\|s\|_1\right),$$
where $s \in \{0, 1\}^d$ indicates the nonzero elements of $\Pi_\Omega(x)$. Since $\Pi_\Omega$ is Lipschitz continuous, Rademacher's theorem implies that $\Pi_\Omega$ is differentiable almost everywhere. For points where $\Pi_\Omega$ is not differentiable (where $\max_\Omega$ is not twice differentiable), we can take an arbitrary matrix in the set of Clarke's generalized Jacobians [11], the convex hull of Jacobians of the form $\lim_{x_t \to x} J_{\Pi_\Omega}(x_t)$ [31].

3 Deriving new sparse and structured attention mechanisms

3.1 Differentiable regularizer $\Omega$

Before tackling more structured regularizers, we address in this section the case of a general differentiable regularizer $\Omega$. Because $\Pi_\Omega(x)$ involves maximizing (1), a concave function over the simplex, it can be computed globally using any off-the-shelf projected gradient solver. Therefore, the main challenge is how to compute the Jacobian of $\Pi_\Omega$. This is what we address in the next proposition.

Proposition 1 (Jacobian of $\Pi_\Omega$ for any differentiable $\Omega$; backward computation)
Assume that $\Omega$ is differentiable over $\Delta^d$ and that $\Pi_\Omega(x) = \operatorname*{argmax}_{y \in \Delta^d} y^\top x - \gamma\Omega(y) = y^\star$ has been computed. Then the Jacobian of $\Pi_\Omega$ at $x$, denoted $J_{\Pi_\Omega}$, can be obtained by solving the system
$$\left(I + A(B - I)\right) J_{\Pi_\Omega} = A,$$
where we defined the shorthands $A := J_{P_{\Delta^d}}(y^\star - \gamma\nabla\Omega(y^\star) + x)$ and $B := \gamma H_\Omega(y^\star)$.

The proof is given in Appendix A.1. Unlike recent work tackling argmin differentiation through matrix differential calculus on the Karush–Kuhn–Tucker (KKT) conditions [1], our proof technique relies on differentiating the fixed point iteration $y^\star = P_{\Delta^d}(y^\star - \nabla f(y^\star))$. To compute $J_{\Pi_\Omega} v$ for an arbitrary $v \in \mathbb{R}^d$, as required by backpropagation, we may directly solve $(I + A(B - I))(J_{\Pi_\Omega} v) = Av$. We show in Appendix B how this system can be solved efficiently thanks to the structure of $A$; a small numerical sketch is also given below.

Squared p-norms. As a useful example of a differentiable function over the simplex, we consider squared p-norms: $\Omega(y) = \frac{1}{2}\|y\|_p^2 = \frac{1}{2}\left(\sum_{i=1}^d y_i^p\right)^{2/p}$, where $y \in \Delta^d$ and $p \in (1, 2]$. For this choice of $p$, it is known that the squared p-norm is strongly convex w.r.t. $\|\cdot\|_p$ [3]. This implies that $\max_\Omega$ is $\frac{1}{\gamma(p-1)}$-smooth w.r.t. $\|\cdot\|_q$, where $\frac{1}{p} + \frac{1}{q} = 1$. We call the induced mapping function sq-pnorm-max:
$$\Pi_\Omega(x) = \operatorname*{argmin}_{y \in \Delta^d} \frac{\gamma}{2}\|y\|_p^2 - y^\top x. \qquad \text{(sq-pnorm-max)}$$
The gradient and Hessian needed for Proposition 1 can be computed by $\nabla\Omega(y) = \frac{y^{p-1}}{\|y\|_p^{p-2}}$ and
$$H_\Omega(y) = \operatorname{diag}(d) + uu^\top, \quad \text{where} \quad d = \frac{(p-1)\,y^{p-2}}{\|y\|_p^{p-2}} \quad \text{and} \quad u = \frac{\sqrt{2-p}\;y^{p-1}}{\|y\|_p^{p-1}},$$
with the exponentiation performed element-wise. sq-pnorm-max recovers sparsemax with p = 2 and, like sparsemax, encourages sparse outputs. However, as can be seen in the zoomed box in Figure 2 (right), the transition between $y^\star = [0, 1]$ and $y^\star = [1, 0]$ can be smoother when $1 < p < 2$. Throughout our experiments, we use p = 1.5.

3.2 Structured regularizers: fused lasso and OSCAR

Fusedmax. For cases when the input is sequential and the order is meaningful, as is the case for many natural languages, we propose fusedmax, an attention mechanism based on fused lasso [42], also known as 1-d total variation (TV). Fusedmax encourages paying attention to contiguous segments, with equal weights within each one. It is expressed under our framework by choosing $\Omega(y) = \frac{1}{2}\|y\|_2^2 + \lambda\sum_{i=1}^{d-1}|y_{i+1} - y_i|$, i.e., the sum of a strongly convex term and of a 1-d TV penalty.
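Before deriving the structured mappings, here is the promised concrete instance of Proposition 1: a sketch computing the Jacobian-vector product $J_{\Pi_\Omega} v$ for the squared p-norm regularizer, assuming the forward solution $y^\star$ has already been obtained (e.g., by projected gradient) and, for simplicity, that $y^\star$ is strictly positive. It forms the dense $d \times d$ system for readability, whereas the structured solver of Appendix B avoids this; `project_simplex` is the routine sketched earlier, and the code is our own illustration rather than the authors' released implementation.

```python
import numpy as np

def jacobian_P_simplex(z):
    """Jacobian of the Euclidean projection onto the simplex at z,
    equal to diag(s) - s s^T / |s|, where s indicates the support
    of project_simplex(z)."""
    s = (project_simplex(z) > 0.0).astype(float)
    return np.diag(s) - np.outer(s, s) / s.sum()

def sq_pnorm_jvp(x, y_star, v, gamma=1.0, p=1.5):
    """Jacobian-vector product J_{Pi_Omega} v for Omega(y) = 0.5 ||y||_p^2,
    via the linear system of Proposition 1.

    Assumes y_star (the forward solution) is strictly positive; zero
    entries would require restricting the system to the support."""
    d = x.shape[0]
    norm_p = np.sum(y_star ** p) ** (1.0 / p)
    grad = y_star ** (p - 1) / norm_p ** (p - 2)              # nabla Omega(y*)
    diag_h = (p - 1) * y_star ** (p - 2) / norm_p ** (p - 2)  # diagonal of H_Omega
    u = np.sqrt(2.0 - p) * y_star ** (p - 1) / norm_p ** (p - 1)
    B = gamma * (np.diag(diag_h) + np.outer(u, u))            # gamma * H_Omega(y*)
    A = jacobian_P_simplex(y_star - gamma * grad + x)
    # Proposition 1: (I + A (B - I)) (J v) = A v
    return np.linalg.solve(np.eye(d) + A @ (B - np.eye(d)), A @ v)
```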
It is easy to verify that, for this fused-lasso regularizer, the framework yields the mapping
$$\Pi_\Omega(x) = \operatorname*{argmin}_{y \in \Delta^d} \frac{1}{2}\|y - x/\gamma\|^2 + \lambda\sum_{i=1}^{d-1}|y_{i+1} - y_i|. \qquad \text{(fusedmax)}$$

Oscarmax. For situations where the contiguity assumption may be too strict, we propose oscarmax, based on the OSCAR penalty [7], to encourage attention weights to merge into clusters with the same value, regardless of position in the sequence. This is accomplished by replacing the 1-d TV penalty in fusedmax with an $\infty$-norm penalty on each pair of attention weights, i.e., $\Omega(y) = \frac{1}{2}\|y\|_2^2 + \lambda\sum_{i<j}\max(|y_i|, |y_j|)$. This results in the mapping
$$\Pi_\Omega(x) = \operatorname*{argmin}_{y \in \Delta^d} \frac{1}{2}\|y - x/\gamma\|^2 + \lambda\sum_{i<j}\max(|y_i|, |y_j|). \qquad \text{(oscarmax)}$$

Forward computation. Due to the $y \in \Delta^d$ constraint, computing fusedmax/oscarmax does not seem trivial at first sight. The next proposition shows how to do so, without any iterative method.

Proposition 2 (Computing fusedmax and oscarmax; forward computation)
$$\text{fusedmax:} \quad \Pi_\Omega(x) = P_{\Delta^d}\left(P_{\mathrm{TV}}(x/\gamma)\right), \quad P_{\mathrm{TV}}(x) := \operatorname*{argmin}_{y \in \mathbb{R}^d} \frac{1}{2}\|y - x\|^2 + \lambda\sum_{i=1}^{d-1}|y_{i+1} - y_i|.$$
$$\text{oscarmax:} \quad \Pi_\Omega(x) = P_{\Delta^d}\left(P_{\mathrm{OSC}}(x/\gamma)\right), \quad P_{\mathrm{OSC}}(x) := \operatorname*{argmin}_{y \in \mathbb{R}^d} \frac{1}{2}\|y - x\|^2 + \lambda\sum_{i<j}\max(|y_i|, |y_j|).$$

Here, $P_{\mathrm{TV}}$ and $P_{\mathrm{OSC}}$ indicate the proximal operators of 1-d TV and OSCAR, and can be computed exactly by [13] and [47], respectively. To remind the reader, $P_{\Delta^d}$ denotes the Euclidean projection onto the simplex and can be computed exactly using [34, 15]. Proposition 2 shows that we can compute fusedmax and oscarmax using the composition of two functions, for which exact non-iterative algorithms exist. This is a surprising result, since the proximal operator of the sum of two functions is not, in general, the composition of the proximal operators of each function. The proof follows by showing that the indicator function of $\Delta^d$ satisfies the conditions of [45, Corollaries 4, 5].

Groups induced by $P_{\mathrm{TV}}$ and $P_{\mathrm{OSC}}$. Let $z^\star$ be the optimal solution of $P_{\mathrm{TV}}(x)$ or $P_{\mathrm{OSC}}(x)$. For $P_{\mathrm{TV}}$, we denote the group of adjacent elements with the same value as $z_i^\star$ by $G_i^\star$, $\forall i \in [d]$. Formally, $G_i^\star = [a, b] \cap \mathbb{N}$ with $a \le i \le b$, where $a$ and $b$ are the minimal and maximal indices such that $z_i^\star = z_j^\star$ for all $j \in G_i^\star$. For $P_{\mathrm{OSC}}$, we define $G_i^\star$ as the indices of elements with the same absolute value as $z_i^\star$, more formally $G_i^\star = \{j \in [d] : |z_i^\star| = |z_j^\star|\}$. Because $P_{\Delta^d}(z^\star) = \max(z^\star - \tau, 0)$ for some $\tau \in \mathbb{R}$, fusedmax/oscarmax either shift a group's common value or set all its elements to zero. $\lambda$ controls the trade-off between no fusion (sparsemax) and all elements fused into a single trivial group. While tuning $\lambda$ may improve performance, we observe that $\lambda = 0.1$ (fusedmax) and $\lambda = 0.01$ (oscarmax) are sensible defaults that work well across all tasks, and we report all our results using them.

Backward computation. We already know that the Jacobian of $P_{\Delta^d}$ is the same as that of sparsemax with $\gamma = 1$. Then, by Proposition 2, if we know how to compute the Jacobians of $P_{\mathrm{TV}}$ and $P_{\mathrm{OSC}}$, we can obtain the Jacobians of fusedmax and oscarmax by straightforward application of the chain rule. However, although $P_{\mathrm{TV}}$ and $P_{\mathrm{OSC}}$ can be computed exactly, they lack analytical expressions. We next show that we can nonetheless compute their Jacobians efficiently, without needing to solve a system.

Proposition 3 (Jacobians of $P_{\mathrm{TV}}$ and $P_{\mathrm{OSC}}$; backward computation)
Assume $z^\star = P_{\mathrm{TV}}(x)$ or $P_{\mathrm{OSC}}(x)$ has been computed. Define the groups derived from $z^\star$ as above. Then,
$$[J_{P_{\mathrm{TV}}}(x)]_{i,j} = \begin{cases} \frac{1}{|G_i^\star|} & \text{if } j \in G_i^\star, \\ 0 & \text{o.w.} \end{cases} \qquad \text{and} \qquad [J_{P_{\mathrm{OSC}}}(x)]_{i,j} = \begin{cases} \frac{\operatorname{sign}(z_i^\star z_j^\star)}{|G_i^\star|} & \text{if } j \in G_i^\star \text{ and } z_i^\star \neq 0, \\ 0 & \text{o.w.} \end{cases}$$

The proof is given in Appendix A.2.
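Propositions 2 and 3 translate directly into code, as the sketch below illustrates for fusedmax. Here `prox_tv1d` stands for any exact 1-d total-variation proximal solver, e.g., Condat's direct algorithm [13] (available in several open-source packages); we treat it as given rather than implementing it inline, and everything else follows the group structure described above. `project_simplex` is the routine sketched earlier.

```python
import numpy as np

def fusedmax_map(x, prox_tv1d, gamma=1.0, lam=0.1):
    """Forward pass of fusedmax (Proposition 2):
    project the 1-d TV prox of x/gamma onto the simplex."""
    z = prox_tv1d(x / gamma, lam)   # P_TV(x / gamma), exact solver assumed given
    return project_simplex(z)

def tv_groups(z, tol=1e-9):
    """Contiguous runs of equal values in z* (the fused groups)."""
    boundaries = np.flatnonzero(np.abs(np.diff(z)) > tol) + 1
    return np.split(np.arange(len(z)), boundaries)

def jvp_P_TV(z, v):
    """[J_{P_TV} v]_i = mean of v over the group containing i (Proposition 3)."""
    out = np.empty_like(v)
    for g in tv_groups(z):
        out[g] = v[g].mean()
    return out

def fusedmax_jvp(z, y, v, gamma=1.0):
    """Jacobian-vector product of fusedmax by the chain rule:
    J = J_{P_simplex}(z) @ J_{P_TV}(x/gamma) / gamma,
    where z = P_TV(x/gamma) and y = fusedmax(x)."""
    s = (y > 0.0).astype(float)              # support of the simplex projection
    w = jvp_P_TV(z, v) / gamma
    return s * w - s * (s @ w) / s.sum()     # apply (diag(s) - s s^T / |s|)
```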
The structure of these Jacobians clearly permits efficient Jacobian-vector products; we discuss the computational complexity and implementation details in Appendix B. Note that $P_{\mathrm{TV}}$ and $P_{\mathrm{OSC}}$ are differentiable everywhere except at points where groups change. For these points, the same remark as for sparsemax applies, and we can use Clarke's Jacobian.

4 Experimental results

We showcase the performance of our attention mechanisms on three challenging natural language tasks: textual entailment, machine translation, and sentence summarization. We rely on available, well-established neural architectures, so as to demonstrate simple drop-in replacement of softmax with structured sparse attention; quite likely, newer task-specific models could lead to further improvement.

4.1 Textual entailment (a.k.a. natural language inference) experiments

Textual entailment is the task of deciding, given a text T and an hypothesis H, whether a human reading T is likely to infer that H is true [14]. We use the Stanford Natural Language Inference (SNLI) dataset [8], a collection of 570,000 English sentence pairs. Each pair consists of a sentence and an hypothesis, manually labeled with one of the labels ENTAILMENT, CONTRADICTION, or NEUTRAL.

We use a variant of the neural attention-based classifier proposed for this dataset by [38] and follow the same methodology as [31] in terms of implementation, hyperparameters, and grid search. We employ the CPU implementation provided in [31] and simply replace sparsemax attention with fusedmax/oscarmax; we observe that training time per epoch is essentially the same for each of the four attention mechanisms (timings and more experimental details in Appendix C.2).

Table 1: Textual entailment test accuracy on SNLI [8].

attention      accuracy
softmax        81.66
sparsemax      82.39
fusedmax       82.41
oscarmax       81.76

Table 1 shows that, for this task, fusedmax reaches the highest accuracy, and oscarmax slightly outperforms softmax. Furthermore, fusedmax results in the most interpretable feature groupings: Figure 3 shows the weights of the neural network's attention to the text, when considering the hypothesis "No one is dancing." In this case, all four models correctly predicted that the text "A band is playing on stage at a concert and the attendants are dancing to the music," denoted along the x-axis, contradicts the hypothesis, although the attention weights differ. Notably, fusedmax identifies the meaningful segment "band is playing".

[Figure 3 here in the original: four bar plots of attention weights (softmax, sparsemax, fusedmax, oscarmax) over the words of the sentence "A band is playing on stage at a concert and the attendants are dancing to the music."]

Figure 3: Attention weights when considering the contradicted hypothesis "No one is dancing."

4.2 Machine translation experiments

Sequence-to-sequence neural machine translation (NMT) has recently become a strong contender in machine translation [2, 29]. In NMT, attention weights can be seen as an alignment between source and translated words. To demonstrate the potential of our new attention mechanisms for NMT, we ran experiments on 10 language pairs. We build on OpenNMT-py [24], based on PyTorch [37], with all default hyperparameters (detailed in Appendix C.3), simply replacing softmax with the proposed $\Pi_\Omega$.
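To illustrate how little the surrounding architecture needs to change, here is a schematic dot-product attention step in plain NumPy; this is our own illustration of the drop-in replacement, not OpenNMT-py code, and `softmax_map`/`fusedmax_map` refer to the sketches given earlier.

```python
import numpy as np

def attend(query, keys, values, mapping):
    """One generic attention step: score, normalize with `mapping`, aggregate.

    `mapping` is any function from scores to the simplex; swapping it
    changes nothing else in the layer."""
    scores = keys @ query      # (length,) alignment scores
    weights = mapping(scores)  # attention probabilities on the simplex
    return weights @ values    # weighted average of the value vectors

# Switching attention mechanisms is a one-argument change, e.g.:
# context = attend(q, K, V, softmax_map)
# context = attend(q, K, V, lambda s: fusedmax_map(s, prox_tv1d))
```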
OpenNMT-py with softmax attention is optimized for the GPU. Since sparsemax, fusedmax, and oscarmax rely on sorting operations, we implement their computations on the CPU for simplicity, keeping the rest of the pipeline on the GPU. However, we observe that, even with this context switching, the number of tokens processed per second was within 3/4 of the softmax pipeline. For sq-pnorm-max, we observe that the projected gradient solver used in the forward pass, unlike the linear system solver used in the backward pass, could become a computational bottleneck. To mitigate this effect, we set the tolerance of the solver's stopping criterion to $10^{-2}$.

Quantitatively, we find that all compared attention mechanisms are always within 1 BLEU score point of the best mechanism (for detailed results, cf. Appendix C.3). This suggests that structured sparsity does not restrict accuracy. However, as illustrated in Figure 4, fusedmax and oscarmax often lead to more interpretable attention alignments, as well as to qualitatively different translations.

[Figure 4 here in the original: three attention alignment heatmaps for a French-to-English example, with the French source words along one axis (garbled in extraction) and the system outputs "the coalition for international aid should read it carefully . <EOS>" (softmax and fusedmax) and "the international aid coalition should read it carefully . <EOS>" (oscarmax) along the other.]

Figure 4: Attention weights for French to English translation, using the conventions of Figure 1. Within a row, weights grouped by oscarmax under the same cluster are denoted by a shared marker. Here, oscarmax finds a slightly more natural English translation. More visualizations are given in Appendix C.3.

4.3 Sentence summarization experiments

Attention mechanisms were recently explored for sentence summarization in [39]. To generate sentence-summary pairs at low cost, the authors proposed to use the title of a news article as a noisy summary of the article's leading sentence. They collected 4 million such pairs from the Gigaword dataset and showed that this seemingly simplistic approach leads to models that generalize surprisingly well. We follow their experimental setup and are able to reproduce comparable results to theirs with OpenNMT when using softmax attention. The models we use are the same as in §4.2.

Our evaluation follows [39]: we use the standard DUC 2004 dataset (500 news articles each paired with 4 different human-generated summaries) and a randomly held-out subset of Gigaword, released by [39]. We report results on ROUGE-1, ROUGE-2, and ROUGE-L.

Table 2: Sentence summarization results, following the same experimental setting as in [39].

                     DUC 2004                          Gigaword
attention        ROUGE-1  ROUGE-2  ROUGE-L       ROUGE-1  ROUGE-2  ROUGE-L
softmax          27.16    9.48     24.47         35.13    17.15    32.92
sparsemax        27.69    9.55     24.96         36.04    17.78    33.64
fusedmax         28.42    9.96     25.55         36.09    17.62    33.69
oscarmax         27.84    9.46     25.14         35.36    17.23    33.03
sq-pnorm-max     27.94    9.28     25.08         35.94    17.75    33.66

Our results, in Table 2, indicate that fusedmax is the best under nearly all metrics, always outperforming softmax. In addition to Figure 1, we exemplify our enhanced interpretability and provide more detailed results in Appendix C.4.

5 Related work

Smoothed max operators. Replacing the max operator by a differentiable approximation based on the log-sum-exp has been exploited in numerous works.
Regularizing the max operator with a squared 2-norm is less frequent, but has been used to obtain a smoothed multiclass hinge loss [41] or smoothed linear programming relaxations for maximum a-posteriori inference [33]. Our work differs from these in mainly two aspects. First, we are less interested in the max operator itself than in its gradient, which we use as a mapping from $\mathbb{R}^d$ to $\Delta^d$. Second, since we use this mapping in neural networks trained with backpropagation, we study and compute the mapping's Jacobian (the Hessian of a regularized max operator), in contrast with previous works.

Interpretability, structure and sparsity in neural networks. Providing interpretations alongside predictions is important for accountability, error analysis and exploratory analysis, among other reasons. Toward this goal, several recent works have been relying on visualizing hidden layer activations [20, 27], and the potential for interpretability provided by attention mechanisms has been noted in multiple works [2, 38, 39]. Our work aims to fulfill this potential by providing a unified framework upon which new interpretable attention mechanisms can be designed, using well-studied tools from the field of structured sparse regularization.

Selecting contiguous text segments for model interpretations is explored in [26], where an explanation generator network is proposed for justifying predictions using a fused lasso penalty. However, this network is not an attention mechanism and has its own parameters to be learned. Furthermore, [26] sidesteps the need to backpropagate through the fused lasso, unlike our work, by using a stochastic training approach. In contrast, our attention mechanisms are deterministic and drop-in replacements for existing ones. As a consequence, our mechanisms can be coupled with recent research that builds on top of softmax attention, for example in order to incorporate soft prior knowledge about NMT alignment into attention through penalties on the attention weights [12].

A different approach to incorporating structure into attention uses the posterior marginal probabilities from a conditional random field as attention weights [23]. While this approach takes into account structural correlations, the marginal probabilities are generally dense and different from each other. Our proposed mechanisms produce sparse and clustered attention weights, a visible benefit in interpretability. The idea of deriving constrained alternatives to softmax has been independently explored for differentiable easy-first decoding [32]. Finally, sparsity-inducing penalties have been used to obtain convex relaxations of neural networks [5] or to compress models [28, 43, 40]. These works differ from ours, in that sparsity is enforced on the network parameters, while our approach can produce sparse and structured outputs from neural attention layers.

6 Conclusion and future directions

We proposed in this paper a unified regularized framework upon which new attention mechanisms can be designed. To enable such mechanisms to be used in a neural network trained by backpropagation, we demonstrated how to carry out forward and backward computations for general differentiable regularizers. We further developed two new structured attention mechanisms, fusedmax and oscarmax, and demonstrated that they enhance interpretability while achieving comparable or better accuracy on three diverse and challenging tasks: textual entailment, machine translation, and summarization.
The usefulness of a differentiable mapping from real values to the simplex or to [0, 1] with sparse or structured outputs goes beyond attention mechanisms. We expect that our framework will be useful to sample from categorical distributions using the Gumbel trick [21, 30], as well as for conditional computation [6] or differentiable neural computers [18, 19]. We plan to explore these in future work.

Acknowledgements

We are grateful to André Martins, Takuma Otsuka, Fabian Pedregosa, Antoine Rolet, Jun Suzuki, and Justine Zhang for helpful discussions. We thank the anonymous reviewers for the valuable feedback.

References

[1] B. Amos and J. Z. Kolter. OptNet: Differentiable optimization as a layer in neural networks. In Proc. of ICML, 2017.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR, 2015.
[3] K. Ball, E. A. Carlen, and E. H. Lieb. Sharp uniform convexity and smoothness inequalities for trace norms. Inventiones Mathematicae, 115(1):463–482, 1994.
[4] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[5] Y. Bengio, N. Le Roux, P. Vincent, O. Delalleau, and P. Marcotte. Convex neural networks. In Proc. of NIPS, 2005.
[6] Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. In Proc. of NIPS, 2013.
[7] H. D. Bondell and B. J. Reich. Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Biometrics, 64(1):115–123, 2008.
[8] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. A large annotated corpus for learning natural language inference. In Proc. of EMNLP, 2015.
[9] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[10] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech recognition. In Proc. of NIPS, 2015.
[11] F. H. Clarke. Optimization and Nonsmooth Analysis. SIAM, 1990.
[12] T. Cohn, C. D. V. Hoang, E. Vymolova, K. Yao, C. Dyer, and G. Haffari. Incorporating structural alignment biases into an attentional neural translation model. In Proc. of NAACL-HLT, 2016.
[13] L. Condat. A direct algorithm for 1-d total variation denoising. IEEE Signal Processing Letters, 20(11):1054–1057, 2013.
[14] I. Dagan, B. Dolan, B. Magnini, and D. Roth. Recognizing textual entailment: Rational, evaluation and approaches. Natural Language Engineering, 15(4):i–xvii, 2009.
[15] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proc. of ICML, 2008.
[16] C. Dugas, Y. Bengio, F. Bélisle, C. Nadeau, and R. Garcia. Incorporating second-order functional knowledge for better option pricing. Proc. of NIPS, 2001.
[17] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302–332, 2007.
[18] A. Graves, G. Wayne, and I. Danihelka. Neural Turing Machines. In Proc. of NIPS, 2014.
[19] A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwińska, S. G. Colmenarejo, E. Grefenstette, T. Ramalho, J. Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
[20] O. Irsoy. Deep Sequential and Structural Neural Models of Compositionality. PhD thesis, Cornell University, 2017.
[21] E. Jang, S.
Gu, and B. Poole. Categorical reparameterization with Gumbel-Softmax. In Proc. of ICLR, 2017.
[22] S. M. Kakade, S. Shalev-Shwartz, and A. Tewari. Regularization techniques for learning with matrices. Journal of Machine Learning Research, 13:1865–1890, 2012.
[23] Y. Kim, C. Denton, L. Hoang, and A. M. Rush. Structured attention networks. In Proc. of ICLR, 2017.
[24] G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. OpenNMT: Open-source toolkit for neural machine translation. arXiv e-prints, 2017.
[25] P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL, 2007.
[26] T. Lei, R. Barzilay, and T. Jaakkola. Rationalizing neural predictions. In Proc. of EMNLP, 2016.
[27] J. Li, X. Chen, E. Hovy, and D. Jurafsky. Visualizing and understanding neural models in NLP. In Proc. of NAACL-HLT, 2016.
[28] B. Liu, M. Wang, H. Foroosh, M. Tappen, and M. Pensky. Sparse convolutional neural networks. In Proc. of ICCVPR, 2015.
[29] M.-T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In Proc. of EMNLP, 2015.
[30] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. In Proc. of ICLR, 2017.
[31] A. F. Martins and R. F. Astudillo. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proc. of ICML, 2016.
[32] A. F. Martins and J. Kreutzer. Learning what's easy: Fully differentiable neural easy-first taggers. In Proc. of EMNLP, 2017.
[33] O. Meshi, M. Mahdavi, and A. G. Schwing. Smooth and strong: MAP inference with linear convergence. In Proc. of NIPS, 2015.
[34] C. Michelot. A finite algorithm for finding the projection of a point onto the canonical simplex of R^n. Journal of Optimization Theory and Applications, 50(1):195–200, 1986.
[35] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[36] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):127–239, 2014.
[37] PyTorch. http://pytorch.org, 2017.
[38] T. Rocktäschel, E. Grefenstette, K. M. Hermann, T. Kocisky, and P. Blunsom. Reasoning about entailment with neural attention. In Proc. of ICLR, 2016.
[39] A. M. Rush, S. Chopra, and J. Weston. A neural attention model for abstractive sentence summarization. In Proc. of EMNLP, 2015.
[40] S. Scardapane, D. Comminiello, A. Hussain, and A. Uncini. Group sparse regularization for deep neural networks. Neurocomputing, 241:81–89, 2017.
[41] S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 155(1):105–145, 2016.
[42] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91–108, 2005.
[43] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Proc. of NIPS, 2016.
[44] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proc. of ICML, 2015.
[45] Y. Yu. On decomposing the proximal map. In Proc. of NIPS, 2013.
[46] C. Zalinescu. Convex Analysis in General Vector Spaces.
World Scientific, 2002.
[47] X. Zeng and M. A. Figueiredo. Solving OSCAR regularization problems by fast approximate proximal splitting algorithms. Digital Signal Processing, 31:124–135, 2014.
[48] X. Zeng and M. A. Figueiredo. The ordered weighted ℓ1 norm: Atomic formulation, dual norm, and projections. arXiv e-prints, 2014.
[49] L. W. Zhong and J. T. Kwok. Efficient sparse modeling with automatic feature grouping. IEEE Transactions on Neural Networks and Learning Systems, 23(9):1436–1447, 2012.
Multi-output Polynomial Networks and Factorization Machines Mathieu Blondel NTT Communication Science Laboratories Kyoto, Japan [email protected] Takuma Otsuka NTT Communication Science Laboratories Kyoto, Japan [email protected] Vlad Niculae? Cornell University Ithaca, NY [email protected] Naonori Ueda NTT Communication Science Laboratories RIKEN Kyoto, Japan [email protected] Abstract Factorization machines and polynomial networks are supervised polynomial models based on an efficient low-rank decomposition. We extend these models to the multi-output setting, i.e., for learning vector-valued functions, with application to multi-class or multi-task problems. We cast this as the problem of learning a 3-way tensor whose slices share a common basis and propose a convex formulation of that problem. We then develop an efficient conditional gradient algorithm and prove its global convergence, despite the fact that it involves a non-convex basis selection step. On classification tasks, we show that our algorithm achieves excellent accuracy with much sparser models than existing methods. On recommendation system tasks, we show how to combine our algorithm with a reduction from ordinal regression to multi-output classification and show that the resulting algorithm outperforms simple baselines in terms of ranking accuracy. 1 Introduction Interactions between features play an important role in many classification and regression tasks. Classically, such interactions have been leveraged either explicitly, by mapping features to their products (as in polynomial regression), or implicitly, through the use of the kernel trick. While fast linear model solvers have been engineered for the explicit approach [9, 28], they are typically limited to small numbers of features or low-order feature interactions, due to the fact that the number of parameters that they need to learn scales as O(dt ), where d is the number of features and t is the order of interactions considered. Models kernelized with the polynomial kernel do not suffer from this problem; however, the cost of storing and evaluating these models grows linearly with the number of training instances, a problem sometimes referred to as the curse of kernelization [30]. Factorization machines (FMs) [25] are a more recent approach that can use pairwise feature interactions efficiently even in very high-dimensional data. The key idea of FMs is to model the weights of feature interactions using a low-rank matrix. Not only this idea offers clear benefits in terms of model compression compared to the aforementioned approaches, it has also proved instrumental in modeling interactions between categorical variables, converted to binary features via a one-hot encoding. Such binary features are usually so sparse that many interactions are never observed in the ? Work performed during an internship at NTT Commmunication Science Laboratories, Kyoto. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. training set, preventing classical approaches from capturing their relative importance. By imposing a low rank on the feature interaction weight matrix, FMs encourage shared parameters between interactions, allowing to estimate their weights even if they never occurred in the training set. This property has been used in recommender systems to model interactions between user variables and item variables, and is the basis of several industrial successes of FMs [32, 17]. 
Originally motivated as neural networks with a polynomial activation (instead of the classical sigmoidal or rectifier activations), polynomial networks (PNs) [20] have been shown to be intimately related to FMs and to only subtly differ in the non-linearity they use [5]. PNs achieve better performance than rectifier networks on pedestrian detection [20] and on dependency parsing [10], and outperform kernel approximations such as the Nyström method [5]. However, existing PN and FM works have been limited to single-output models, i.e., they are designed to learn scalar-valued functions, which restricts them to regression or binary classification problems.

Our contributions. In this paper, we generalize FMs and PNs to multi-output models, i.e., for learning vector-valued functions, with application to multi-class or multi-task problems. 1) We cast learning multi-output FMs and PNs as learning a 3-way tensor whose slices share a common basis (each slice corresponds to one output). To obtain a convex formulation of that problem, we propose to cast it as learning an infinite-dimensional but row-wise sparse matrix. This can be achieved by using group-sparsity inducing penalties. (Section 3) 2) To solve the obtained optimization problem, we develop a variant of the conditional gradient (a.k.a. Frank-Wolfe) algorithm [11, 15], which repeats the following two steps: i) select a new basis vector to add to the model and ii) refit the model over the current basis vectors. (Section 4) We prove the global convergence of this algorithm (Theorem 1), despite the fact that the basis selection step is non-convex and more challenging in the shared basis setting. (Section 5) 3) On multi-class classification tasks, we show that our algorithm achieves comparable accuracy to kernel SVMs but with much more compressed models than the Nyström method. On recommender system tasks, where kernelized models cannot be used (since they do not generalize to unseen user-item pairs), we demonstrate how our algorithm can be combined with a reduction from ordinal regression to multi-output classification and show that the resulting algorithm outperforms single-output PNs and FMs both in terms of root mean squared error (RMSE) and ranking accuracy, as measured by nDCG (normalized discounted cumulative gain) scores. (Section 6)

2 Background and related work

Notation. We denote the set {1, . . . , m} by [m]. Given a vector v ∈ R^k, we denote its elements by v_r ∈ R for all r ∈ [k]. Given a matrix V ∈ R^{k×m}, we denote its rows by v_r ∈ R^m for all r ∈ [k] and its columns by v_{:,c} for all c ∈ [m]. We denote the l_p norm of V by ||V||_p := ||vec(V)||_p and its l_p/l_q norm by ||V||_{p,q} := (Σ_{r=1}^k ||v_r||_q^p)^{1/p}. The number of non-zero rows of V is denoted by ||V||_{0,∞}.

Factorization machines (FMs). Given an input vector x ∈ R^d, FMs predict a scalar output by

    ŷ_FM := w^T x + Σ_{i<j} w_{i,j} x_i x_j,

where w ∈ R^d contains feature weights and W ∈ R^{d×d} is a low-rank matrix that contains pairwise feature interaction weights. To obtain a low-rank W, [25] originally proposed to use a change of variable W = H^T H, where H ∈ R^{k×d} (with k ∈ N_+ a rank parameter) and to learn H instead. Noting that this quadratic model results in a non-convex problem in H, [4, 31] proposed to convexify the problem by learning W directly but to encourage low rank using a nuclear norm on W. For learning, [4] proposed a conditional-gradient-like approach with global convergence guarantees.
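As a concrete illustration (ours, not code from the paper), the sketch below evaluates an FM prediction with W = H^T H in O(kd) time, using the standard identity Σ_{i<j} (H^T H)_{ij} x_i x_j = ½ Σ_r ((h_r^T x)² − Σ_i h_{r,i}² x_i²); all variable names are ours.

```python
import numpy as np

def fm_predict(w, H, x):
    """Factorization machine prediction with W = H^T H (a sketch).

    w : (d,) linear weights, H : (k, d) low-rank factor, x : (d,) input.
    The pairwise term is evaluated in O(k*d) time without ever forming
    the d x d matrix W explicitly.
    """
    linear = w @ x
    Hx = H @ x  # (k,) inner products h_r^T x
    # sum_{i<j} (H^T H)_{ij} x_i x_j = 0.5 * sum_r ((h_r^T x)^2 - sum_i h_ri^2 x_i^2)
    pairwise = 0.5 * np.sum(Hx ** 2 - (H ** 2) @ (x ** 2))
    return linear + pairwise

# Toy usage with arbitrary sizes (d = 6 features, rank k = 3).
rng = np.random.default_rng(0)
d, k = 6, 3
w, H, x = rng.normal(size=d), rng.normal(size=(k, d)), rng.normal(size=d)
print(fm_predict(w, H, x))
```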
Polynomial networks (PNs). PNs are a recently-proposed form of neural network where the usual activation function is replaced with a squared activation. Formally, PNs predict a scalar output by

    ŷ_PN := w^T x + v^T σ(Hx) = w^T x + Σ_{r=1}^k v_r σ(h_r^T x),

where σ(a) = a² (evaluated element-wise) is the squared activation, v ∈ R^k is the output layer vector, H ∈ R^{k×d} is the hidden layer matrix and k is the number of hidden units. Because the r.h.s. term can be rewritten as x^T W x = Σ_{i,j=1}^d w_{i,j} x_i x_j if we set W = H^T diag(v) H, we see that PNs are clearly a slight variation of FMs and that learning (v, H) can be recast as learning a low-rank matrix W. Based on this observation, [20] proposed to use GECO [26], a greedy algorithm for convex optimization with a low-rank constraint, similar to the conditional gradient algorithm. [13] proposed a learning algorithm for PNs with global optimality guarantees but their theory imposes non-negativity on the network parameters and they need one distinct hyper-parameter per hidden unit to avoid trivial models. Other low-rank polynomial models were recently introduced in [29, 23] but using a tensor network (a.k.a. tensor train) instead of the canonical polyadic (CP) decomposition.

[Figure 1: Our multi-output PNs / FMs learn a tensor whose slices share a common basis {h_r}_{r=1}^k, i.e., W_c = v_{1,c} h_1 h_1^T + · · · + v_{k,c} h_k h_k^T for each output c.]

3 A convex formulation of multi-output PNs and FMs

In this section, we generalize PNs and FMs to multi-output problems. For the sake of concreteness, we focus on PNs for multi-class classification. The extension to FMs is straightforward and simply requires to replace σ(h^T x) = (h^T x)² by φ_ANOVA(h, x) := Σ_{i<j} x_i h_i x_j h_j, as noted in [5]. The predictions of multi-class PNs can be naturally defined as ŷ_MPN := argmax_{c∈[m]} w_c^T x + x^T W_c x, where m is the number of classes, w_c ∈ R^d and W_c ∈ R^{d×d} is low-rank. Following [5], we can model the linear term directly in the quadratic term if we augment all data points with an extra feature of value 1, i.e., x^T ← [1, x^T]. We will therefore simply assume ŷ_MPN = argmax_{c∈[m]} x^T W_c x henceforth. Our main proposal in this paper is to decompose W_1, . . . , W_m using a shared basis:

    W_c = H^T diag(v_{:,c}) H = Σ_{r=1}^k v_{r,c} h_r h_r^T    for all c ∈ [m],    (1)

where, in neural network terminology, H ∈ R^{k×d} can be interpreted as a hidden layer matrix and V ∈ R^{k×m} as an output layer matrix. Compared to the naive approach of decomposing each W_c as W_c = H_c^T diag(v_{:,c}) H_c, this reduces the number of parameters from m(dk + k) to dk + mk. While a nuclear norm could be used to promote a low rank on each W_c, similarly as in [4, 31], this is clearly not sufficient to impose a shared basis. A naive approach would be to use non-orthogonal joint diagonalization as a post-processing. However, because this is a non-convex problem for which no globally convergent algorithm is known [24], this would result in a loss of accuracy. Our key idea is to cast the problem of learning a multi-output PN as that of learning an infinite but row-wise sparse matrix. Without loss of generality, we assume that basis vectors (hidden units) lie in the unit ball. We therefore denote the set of basis vectors by H := {h ∈ R^d : ||h||_2 ≤ 1}. Let us denote this infinite matrix by U ∈ R^{|H|×m} (we use a discrete notation for simplicity). We can then write

    o(x; U) := Σ_{h∈H} σ(h^T x) u_h ∈ R^m    and    ŷ_MPN = argmax_{c∈[m]} o(x; U)_c,

where u_h ∈ R^m denotes the weights of basis h across all classes (outputs).
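To make the finite-support form of this model concrete, here is a minimal prediction sketch (ours, not the authors' code): given k selected basis vectors H and their output weights V, the class scores are o(x) = Σ_r σ(h_r^T x) v_r with the squared activation.

```python
import numpy as np

def mpn_predict(H, V, x):
    """Multi-output polynomial network prediction (sketch).

    H : (k, d) shared basis vectors (hidden units),
    V : (k, m) output-layer weights, x : (d,) input.
    Returns argmax_c sum_r sigma(h_r^T x) * V[r, c], with sigma(a) = a**2.
    For a multi-output FM one would swap the squared activation for the
    ANOVA kernel phi_ANOVA(h, x).
    """
    activations = (H @ x) ** 2  # (k,) sigma(h_r^T x)
    scores = activations @ V    # (m,) one score per class
    return int(np.argmax(scores))
```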
In this formulation, we have W_c = Σ_{h∈H} u_{h,c} h h^T, and sharing a common basis (hidden units) amounts to encouraging the rows of U, u_h, to be either dense or entirely sparse. This can be naturally achieved using group-sparsity inducing penalties. Intuitively, V in (1) can be thought of as U restricted to its row support. Define the training set by X ∈ R^{n×d} and y ∈ [m]^n. We then propose to solve the convex problem

    min_{Ω(U)≤τ} F(U) := Σ_{i=1}^n ℓ(y_i, o(x_i; U)),    (2)

where ℓ is a smooth and convex multi-class loss function (cf. Appendix A for three common examples), Ω is a sparsity-inducing penalty and τ > 0 is a hyper-parameter. In this paper, we focus on the l1 (lasso), l1/l2 (group lasso) and l1/l∞ penalties for Ω, cf. Table 1. However, as we shall see, solving (2) is more challenging with the l1/l2 and l1/l∞ penalties than with the l1 penalty.

Table 1: Sparsity-inducing penalties considered in this paper. With some abuse of notation, we denote by e_h and e_c standard basis vectors of dimension |H| and m, respectively. Selecting an optimal basis vector h* to add is a non-convex optimization problem. The constant ε ∈ (0, 1) is the tolerance parameter used for the power method and ρ is the multiplicative approximation we guarantee.

                        Ω(U)          Ω*(G)        Θ* ∈ ∂Ω*(G)                        Subproblem                                    ρ
  l1 (lasso)            ||U||_1       ||G||_∞      sign(g_{h*,c*}) e_{h*} e_{c*}^T    (h*, c*) ∈ argmax_{h∈H, c∈[m]} |g_{h,c}|      1 − ε
  l1/l2 (group lasso)   ||U||_{1,2}   ||G||_{∞,2}  e_{h*} g_{h*}^T / ||g_{h*}||_2     h* ∈ argmax_{h∈H} ||g_h||_2                   (1 − ε)/√m
  l1/l∞                 ||U||_{1,∞}   ||G||_{∞,1}  e_{h*} sign(g_{h*})^T              h* ∈ argmax_{h∈H} ||g_h||_1                   (1 − ε)/m

Although our formulation is based on an infinite view, we next show that U* has finite row support.

Proposition 1 (Finite row support of U* for multi-output PNs and FMs). Let U* be an optimal solution of (2), where Ω is one of the penalties in Table 1. Then, ||U*||_{0,∞} ≤ nm + 1. If Ω(·) = ||·||_1, we can tighten this bound to ||U*||_{0,∞} ≤ min(nm + 1, dm).

Proof is in Appendix B.1. It is open whether we can tighten this result when Ω = ||·||_{1,2} or ||·||_{1,∞}.

4 A conditional gradient algorithm with approximate basis vector selection

At first glance, learning with an infinite number of basis vectors seems impossible. In this section, we show how the well-known conditional gradient algorithm [11, 15] combined with group-sparsity inducing penalties naturally leads to a greedy algorithm that selects and adds basis vectors that are useful across all outputs. On every iteration, the conditional gradient algorithm performs updates of the form U^(t+1) = (1 − γ) U^(t) + γ Θ*, where γ ∈ [0, 1] is a step size and Θ* is obtained by solving a linear approximation of the objective around the current iterate U^(t):

    Θ* ∈ argmin_{Ω(Θ)≤τ} ⟨Θ, ∇F(U^(t))⟩ = τ · argmax_{Ω(Θ)≤1} ⟨Θ, −∇F(U^(t))⟩.    (3)

Let us denote the negative gradient −∇F(U) by G ∈ R^{|H|×m} for short. Its elements are defined by

    g_{h,c} = −Σ_{i=1}^n σ(h^T x_i) ∇ℓ(y_i, o(x_i; U))_c,

where ∇ℓ(y, o) ∈ R^m is the gradient of ℓ w.r.t. o (cf. Appendix A). For ReLU activations, solving (3) is known to be NP-hard [1]. Here, we focus on quadratic activations, for which we will be able to provide approximation guarantees. Plugging in the expression of σ, we get g_{h,c} = −h^T Σ_c h, where

    Σ_c := X^T D_c X  (PN)    or    Σ_c := (1/2) (X^T D_c X − Σ_{i=1}^n (D_c)_{i,i} diag(x_i)²)  (FM),

and D_c ∈ R^{n×n} is a diagonal matrix such that (D_c)_{i,i} := ∇ℓ(y_i, o(x_i; U))_c. Let us recall the definition of the dual norm of Ω: Ω*(G) := max_{Ω(Θ)≤1} ⟨Θ, G⟩. By comparing this equation to (3), we see that Θ* is the argument that achieves the maximum in the dual norm Ω*(G), up to a constant factor τ.
It is easy to verify that any element in the subdifferential of Ω*(G), which we denote by ∂Ω*(G) ⊆ R^{|H|×m}, achieves that maximum, i.e., Θ* ∈ τ · ∂Ω*(G).

Basis selection. As shown in Table 1, elements of ∂Ω*(G) (subgradients) are |H| × m matrices with a single non-zero row indexed by h*, where h* is an optimal basis (hidden unit) selected by

    h* ∈ argmax_{h∈H} ||g_h||_p,    (4)

and where p = ∞ when Ω = ||·||_1, p = 2 when Ω = ||·||_{1,2} and p = 1 when Ω = ||·||_{1,∞}. We call (4) a basis vector selection criterion. Although this selection criterion was derived from the linearization of the objective, it is fairly natural: it chooses the basis vector with largest "violation", as measured by the l_p norm of the negative gradient row g_h.

Multiplicative approximations. The key challenge in solving (3), or equivalently (4), arises from the fact that G has infinitely many rows g_h. We therefore cast basis vector selection as a continuous optimization problem w.r.t. h. Surprisingly, although the entire objective (2) is convex, (4) is not. Instead of the exact maximum, we will therefore only require to find a Θ̂ ∈ R^{|H|×m} that satisfies Ω(Θ̂) ≤ τ and ⟨Θ̂, G⟩ ≥ ρ ⟨Θ*, G⟩, where ρ ∈ (0, 1] is a multiplicative approximation (higher is better). It is easy to verify that this is equivalent to replacing the optimal h* by an approximate ĥ ∈ H that satisfies ||g_ĥ||_p ≥ ρ ||g_{h*}||_p.

Sparse case. When Ω(·) = ||·||_1, we need to solve

    max_{h∈H} ||g_h||_∞ = max_{h∈H} max_{c∈[m]} |h^T Σ_c h| = max_{c∈[m]} max_{h∈H} |h^T Σ_c h|.

It is well known that the optimal solution of max_{h∈H} |h^T Σ_c h| is the dominant eigenvector of Σ_c. Therefore, we simply need to find the dominant eigenvector h_c of each Σ_c and select ĥ as the h_c with largest singular value |h_c^T Σ_c h_c|. Using the power method, we can find an h_c that satisfies

    |h_c^T Σ_c h_c| ≥ (1 − ε) max_{h∈H} |h^T Σ_c h|,    (5)

for some tolerance parameter ε ∈ (0, 1). The procedure takes O(N_c log(d)/ε) time, where N_c is the number of non-zero elements in Σ_c [26]. Taking the maximum w.r.t. c ∈ [m] on both sides of (5) leads to ||g_ĥ||_∞ ≥ ρ ||g_{h*}||_∞, where ρ = 1 − ε. However, using Ω = ||·||_1 does not encourage selecting an ĥ that is useful for all outputs. In fact, when Ω = ||·||_1, our approach is equivalent to imposing independent nuclear norms on W_1, . . . , W_m.

Group-sparse cases. When Ω(·) = ||·||_{1,2} or Ω(·) = ||·||_{1,∞}, we need to solve

    max_{h∈H} ||g_h||_2² = max_{h∈H} f_2(h) := Σ_{c=1}^m (h^T Σ_c h)²    or    max_{h∈H} ||g_h||_1 = max_{h∈H} f_1(h) := Σ_{c=1}^m |h^T Σ_c h|,

respectively. Unlike the l1-constrained case, we are clearly selecting a basis vector with largest violation across all outputs. However, we are now faced with a more difficult non-convex optimization problem. Our strategy is to first choose an initialization h^(0) which guarantees a certain multiplicative approximation ρ, then refine the solution using a monotonically non-increasing iterative procedure.

Initialization. We simply choose h^(0) as the approximate solution of the Ω = ||·||_1 case, i.e., we have

    ||g_{h^(0)}||_∞ ≥ (1 − ε) max_{h∈H} ||g_h||_∞.

Now, using ||x||_∞ ≤ ||x||_2 ≤ √m ||x||_∞ and ||x||_∞ ≤ ||x||_1 ≤ m ||x||_∞, this immediately implies

    ||g_{h^(0)}||_p ≥ ρ max_{h∈H} ||g_h||_p,

with ρ = (1 − ε)/√m if p = 2 and ρ = (1 − ε)/m if p = 1.

Refining the solution. We now apply another instance of the conditional gradient algorithm to solve the subproblem max_{||h||_2≤1} f_p(h) itself, leading to the following iterates:

    h^(t+1) = (1 − η_t) h^(t) + η_t ∇f_p(h^(t)) / ||∇f_p(h^(t))||_2,    (6)

where η_t ∈ [0, 1].
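The following sketch (our illustration; the iteration counts, the fixed random seed, and the use of η_t = 1 throughout are simplifying assumptions) combines the two stages for the l1/l2 case: power-method initialization on each Σ_c, followed by the normalized-gradient refinement (6) applied to f_2(h) = Σ_c (h^T Σ_c h)², whose gradient is Σ_c 4 (h^T Σ_c h) Σ_c h.

```python
import numpy as np

def power_method(S, iters=100):
    """Approximate dominant eigenvector of a symmetric matrix S (sketch)."""
    h = np.random.default_rng(0).normal(size=S.shape[0])
    for _ in range(iters):
        h = S @ h
        h /= np.linalg.norm(h)
    return h

def select_basis(Sigmas, refine_steps=50):
    """Approximate basis selection for the l1/l2 penalty (sketch).

    Sigmas : list of m symmetric (d, d) matrices Sigma_c.
    Step 1 (l1 init): dominant eigenvector of each Sigma_c; keep the
    candidate maximizing the infinity norm of its gradient row.
    Step 2 (refine): update (6) with eta_t = 1, i.e.
    h <- grad f_2(h) / ||grad f_2(h)||_2.
    """
    candidates = [power_method(S) for S in Sigmas]
    h = max(candidates, key=lambda v: max(abs(v @ S @ v) for S in Sigmas))
    for _ in range(refine_steps):
        grad = sum(4.0 * (h @ S @ h) * (S @ h) for S in Sigmas)
        h = grad / np.linalg.norm(grad)
    return h
```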
Following [3, Section 2.2.2], if we use the Armijo rule to select η_t, every limit point of the sequence {h^(t)} is a stationary point of f_p. In practice, we observe that η_t = 1 is almost always selected. Note that when η_t = 1 and m = 1 (i.e., the single-output case), our refining algorithm recovers the power method. Generalized power methods were also studied for structured matrix factorization [16, 21], but with different objectives and constraints. Since the conditional gradient algorithm assumes a differentiable function, in the case p = 1, we replace the absolute function with the Huber function: ½x² if |x| ≤ 1, and |x| − ½ otherwise.

Algorithm 1 Multi-output PN/FM training
  Input: X, y, k, τ
  H ← [ ], V ← [ ]
  for t := 1, . . . , k do
      Compute o_i := Σ_{r=1}^{t−1} σ(h_r^T x_i) v_r for all i ∈ [n]
      Let g_h := [−h^T Σ_1 h, . . . , −h^T Σ_m h]^T
      Find ĥ ∈ argmax_{h∈H} ||g_h||_p
      Append ĥ to H and 0 to V
      V ← argmin_{Ω(V)≤τ} F_t(V, H)
      Optional: V, H ← argmin_{Ω(V)≤τ, h_r∈H ∀r∈[t]} F_t(V, H)
  end for
  Output: V, H (equivalent to U = Σ_{t=1}^k e_{h_t} v_t^T)

Corrective refitting step. After t iterations, U^(t) contains at most t non-zero rows. We can therefore always store U^(t) as V^(t) ∈ R^{t×m} (the output layer matrix) and H^(t) ∈ R^{t×d} (the basis vectors / hidden units added so far). In order to improve accuracy, on iteration t, we can then refit the objective F_t(V, H) := Σ_{i=1}^n ℓ(y_i, Σ_{r=1}^t σ(h_r^T x_i) v_r). We consider two kinds of corrective steps, a convex one that minimizes F_t(V, H^(t)) w.r.t. V ∈ R^{t×m} and an optional non-convex one that minimizes F_t(V, H) w.r.t. both V ∈ R^{t×m} and H ∈ R^{t×d}. Refitting allows to remove previously-added bad basis vectors, thanks to the use of sparsity-inducing penalties. Similar refitting procedures are commonly used in matching pursuit [22]. The entire procedure is summarized in Algorithm 1 and implementation details are given in Appendix D.

5 Analysis of Algorithm 1

The main difficulty in analyzing the convergence of Algorithm 1 stems from the fact that we cannot solve the basis vector selection subproblem globally when Ω = ||·||_{1,2} or ||·||_{1,∞}. Therefore, we need to develop an analysis that can cope with the multiplicative approximation ρ. Multiplicative approximations were also considered in [18] but the condition they require is too stringent (cf. Appendix B.2 for a detailed discussion). The next theorem guarantees the number of iterations needed to output a multi-output network that achieves as small an objective value as an optimal solution of (2).

Theorem 1 (Convergence of Algorithm 1). Assume F is smooth with constant β. Let U^(t) be the output after t iterations of Algorithm 1 run with constraint parameter τ/ρ. Then, F(U^(t)) − min_{Ω(U)≤τ} F(U) ≤ 8βτ² / (ρ²(t + 2)).

In [20], single-output PNs were trained using GECO [26], a greedy algorithm with similar O(βτ²/ρ²) guarantees. However, GECO is limited to learning infinite vectors (not matrices) and it does not constrain its iterates like we do. Hence GECO cannot remove bad basis vectors. The proof of Theorem 1 and a detailed comparison with GECO are given in Appendix B.2. Finally, we note that the infinite dimensional view is also key to convex neural networks [2, 1]. However, to our knowledge, we are the first to give an explicit multiplicative approximation guarantee for a non-linear multi-output network.
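Structurally, the greedy outer loop of Algorithm 1 then looks as follows. This sketch is ours: it uses a squared loss and an unconstrained least-squares refit of V purely to keep the example short and runnable, whereas the paper's version uses a multi-class logistic loss and a norm-constrained fit; `select_basis` refers to the sketch given earlier.

```python
import numpy as np

def train_mpn(X, Y, k, refine_steps=50):
    """Greedy structural sketch of Algorithm 1 (simplified assumptions).

    X : (n, d) inputs, Y : (n, m) one-hot targets.
    Each round: form Sigma_c = X^T D_c X from the current loss gradients,
    select a new hidden unit, then refit the output layer V.
    """
    n, d = X.shape
    m = Y.shape[1]
    H = np.zeros((0, d))
    for t in range(k):
        if t == 0:
            O = np.zeros((n, m))
        else:
            O = ((X @ H.T) ** 2) @ V          # current scores o_i
        D = O - Y                              # dloss/dO for the squared loss
        Sigmas = [X.T @ (D[:, c, None] * X) for c in range(m)]  # X^T D_c X
        h = select_basis(Sigmas, refine_steps) # new hidden unit (sketch above)
        H = np.vstack([H, h])
        Phi = (X @ H.T) ** 2                   # (n, t+1) activations
        V, *_ = np.linalg.lstsq(Phi, Y, rcond=None)  # convex refit of V
    return H, V
```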
6 Experimental results

6.1 Experimental setup

Datasets. For our multi-class experiments, we use four publicly-available datasets: segment (7 classes), vowel (11 classes), satimage (6 classes) and letter (26 classes) [12]. Quadratic models substantially improve over linear models on these datasets. For our recommendation system experiments, we use the MovieLens 100k and 1M datasets [14]. See Appendix E for complete details.

Model validation. The greedy nature of Algorithm 1 allows us to easily interleave training with model validation. Concretely, we use an outer loop (embarrassingly parallel) for iterating over the range of possible regularization parameters, and an inner loop (Algorithm 1, sequential) for increasing the number of basis vectors. Throughout our experiments, we use 50% of the data for training, 25% for validation, and 25% for evaluation. Unless otherwise specified, we use a multi-class logistic loss.

6.2 Method comparison for the basis vector (hidden unit) selection subproblem

As we mentioned previously, the linearized subproblem (basis vector selection) for the l1/l2 and l1/l∞ constrained cases involves a significantly more challenging non-convex optimization problem. In this section, we compare different methods for obtaining an approximate solution ĥ to (4). We focus on the l1/l∞ case, since we have a method for computing the true global solution h*, albeit with exponential complexity in m (cf. Appendix C). This allows us to report the empirically observed multiplicative approximation factor ρ̂ := f_1(ĥ)/f_1(h*).

[Figure 2: Empirically observed multiplicative approximation factor ρ̂ = f_1(ĥ)/f_1(h*) on the satimage and vowel datasets, for l1 init + refine (proposed), random init + refine, l1 init, random init, and best data.]

Compared methods. We compare l1 init + refine (proposed), random init + refine, l1 init (without refine), random init and best data: ĥ = x_{i*}/||x_{i*}||_2 where i* = argmax_{i∈[n]} f_1(x_i/||x_i||_2).

Results. We report ρ̂ in Figure 2. l1 init + refine achieves nearly the global maximum on both datasets and outperforms random init + refine, showing the effectiveness of the proposed initialization and that the iterative update (6) can get stuck in a bad local minimum if initialized badly. On the other hand, l1 init + refine outperforms l1 init alone, showing the importance of iteratively refining the solution. Best data, a heuristic similar to that of approximate kernel SVMs [7], is not competitive.

6.3 Sparsity-inducing penalty comparison

In this section, we compare the l1, l1/l2 and l1/l∞ penalties for the choice of Ω, when varying the maximum number of basis vectors (hidden units). Figure 3 indicates test set accuracy when using output layer refitting. We also include linear logistic regression, kernel SVMs and the Nyström method as baselines. For the latter two, we use the quadratic kernel (x_i^T x_j + 1)². Hyper-parameters are chosen so as to maximize validation set accuracy.

Results. On the vowel (11 classes) and letter (26 classes) datasets, l1/l2 and l1/l∞ penalties outperform the l1 norm starting from 20 and 75 hidden units, respectively. On satimage (6 classes) and segment (7 classes), we observed that the three penalties are mostly similar (not shown). We hypothesize that l1/l2 and l1/l∞ penalties make a bigger difference when the number of classes is large. Multi-output PNs substantially outperform the Nyström method with comparable number of basis vectors (hidden units).
Multi-output PNs reach the same test accuracy as kernel SVMs with very few basis vectors on vowel and satimage but appear to require at least 100 basis vectors to reach good performance on letter. This is not surprising, since kernel SVMs require 3,208 support vectors on letter, as indicated in Table 2 below.

[Figure 3: Penalty comparison. Test multi-class accuracy as a function of the maximum number of hidden units (shown for the letter dataset).]

6.4 Multi-class benchmark comparison

Compared methods. We compare the proposed conditional gradient algorithm with output layer refitting only and with both output and hidden layer refitting; projected gradient descent (FISTA) with random initialization; linear and kernelized models; one-vs-rest PNs (i.e., fit one PN per class). We focus on PNs rather than FMs since they are known to work better on classification tasks [5]. Results are included in Table 2.

Table 2: Multi-class test accuracy and number of basis vectors / support vectors.

                                          segment        vowel          satimage       letter
  Conditional gradient (full refitting, proposed)
    l1                                    96.71 (41)     87.83 (12)     89.80 (25)     92.29 (150)
    l1/l2                                 96.71 (40)     89.57 (15)     89.08 (18)     91.81 (106)
    l1/l∞                                 96.71 (24)     86.96 (15)     88.99 (20)     92.35 (149)
  Conditional gradient (output-layer refitting, proposed)
    l1                                    97.05 (20)     80.00 (21)     89.71 (40)     91.01 (139)
    l1/l2                                 96.36 (21)     85.22 (15)     89.71 (50)     92.24 (150)
    l1/l∞                                 96.19 (16)     86.96 (41)     89.35 (41)     91.68 (128)
  Projected gradient descent (random init)
    l1                                    96.88 (50)     79.13 (50)     89.53 (50)     88.45 (150)
    l1/l2                                 96.88 (50)     80.00 (48)     89.80 (50)     88.45 (150)
    l1/l∞                                 96.71 (50)     83.48 (50)     89.08 (50)     88.45 (150)
    l2²                                   96.88 (50)     81.74 (50)     89.98 (50)     88.45 (150)
  Baselines
    Linear                                92.55          60.00          83.03          71.17
    Kernelized                            96.71 (238)    85.22 (189)    89.53 (688)    93.73 (3208)
    OvR PN                                94.63          73.91          89.44          75.36

From these results, we can make the following observations and conclusions. When using output-layer refitting on vowel and letter (two datasets with more than 10 classes), group-sparsity inducing penalties lead to better test accuracy. This is to be expected, since these penalties select basis vectors that are useful across all classes. When using full hidden layer and output layer refitting, l1 catches up with l1/l2 and l1/l∞ on the vowel and letter datasets. Intuitively, the basis vector selection becomes less important if we make more effort at every iteration by refitting the basis vectors themselves. However, on vowel, l1/l2 is still substantially better than l1 (89.57 vs. 87.83). Compared to projected gradient descent with random initialization, our algorithm (for both output and full refitting) is better on 3/4 (l1), 2/4 (l1/l2) and 3/4 (l1/l∞) of the datasets. In addition, with our algorithm, the best model (chosen against the validation set) is substantially sparser. Multi-output PNs substantially outperform OvR PNs. This is to be expected, since multi-output PNs learn to share basis vectors across different classes.

6.5 Recommender system experiments using ordinal regression

A straightforward way to implement recommender systems consists in training a single-output model to regress ratings from one-hot encoded user and item indices [25]. Instead of a single-output PN or FM, we propose to use ordinal McRank, a reduction from ordinal regression to multi-output binary classification, which is known to achieve good nDCG (normalized discounted cumulative gain) scores [19].
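To illustrate the prediction side of this reduction (the details follow below), here is a minimal sketch (ours) that recovers the expected relevance from the cumulative probabilities produced by the m binary classifiers; the probability values in the example are made up for illustration.

```python
import numpy as np

def expected_relevance(cum_probs):
    """Ordinal McRank prediction from cumulative probabilities (sketch).

    cum_probs : (m,) array with cum_probs[c-1] = p(y <= c | x),
    using the convention p(y <= 0 | x) = 0.
    Returns yhat = sum_c c * (p(y <= c | x) - p(y <= c-1 | x)).
    """
    m = len(cum_probs)
    prev = np.concatenate(([0.0], cum_probs[:-1]))  # p(y <= c-1 | x)
    point_probs = cum_probs - prev                  # p(y = c | x)
    return float(np.sum(np.arange(1, m + 1) * point_probs))

# Example with m = 5 relevance levels (illustrative numbers only).
print(expected_relevance(np.array([0.1, 0.3, 0.6, 0.85, 1.0])))  # 3.15
```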
This reduction involves training a probabilistic binary classifier for each of the m relevance levels (for instance, m = 5 in the MovieLens datasets). The expected relevance of x (e.g., the concatenation of the one-hot encoded user and item indices) is then computed by

    ŷ = Σ_{c=1}^m c · p(y = c | x) = Σ_{c=1}^m c · [p(y ≤ c | x) − p(y ≤ c − 1 | x)],

where we use the convention p(y ≤ 0 | x) = 0. Thus, all we need to do to use ordinal McRank is to train a probabilistic binary classifier p(y ≤ c | x) for all c ∈ [m]. Our key proposal is to use a multi-output model to learn all m classifiers simultaneously, i.e., in a multi-task fashion. Let x_i be a vector representing a user-item pair with corresponding rating y_i, for i ∈ [n]. We form an n × m matrix Y such that y_{i,c} = +1 if y_i ≤ c and −1 otherwise, and solve

    min_{Ω(U)≤τ} Σ_{i=1}^n Σ_{c=1}^m ℓ(y_{i,c}, Σ_{h∈H} φ_ANOVA(h, x_i) u_{h,c}),

where ℓ is set to the binary logistic loss, in order to be able to produce probabilities. After running Algorithm 1 on that objective for k iterations, we obtain H ∈ R^{k×d} and V ∈ R^{k×m}. Because H is shared across all outputs, the only small overhead of using the ordinal McRank reduction, compared to a single-output regression model, therefore comes from learning V ∈ R^{k×m} instead of v ∈ R^k.

In this experiment, we focus on multi-output factorization machines (FMs), since FMs usually work better than PNs for one-hot encoded data [5]. We show in Figure 4 the RMSE and nDCG (truncated at 1 and 5) achieved when varying k (the maximum number of basis vectors / hidden units).

[Figure 4: Recommender system experiment on MovieLens 100k and 1M: RMSE (lower is better) and nDCG@1 / nDCG@5 (higher is better) as a function of the maximum number of hidden units, for single-output PN, single-output FM, ordinal McRank FM l1/l2 and ordinal McRank FM l1/l∞.]

Results. When combined with the ordinal McRank reduction, we found that l1/l2 and l1/l∞ constrained multi-output FMs substantially outperform single-output FMs and PNs on both RMSE and nDCG measures. For instance, on MovieLens 100k and 1M, l1/l∞-constrained multi-output FMs achieve an nDCG@1 of 0.75 and 0.76, respectively, while single-output FMs only achieve 0.71 and 0.75. Similar trends are observed with nDCG@5. We believe that this reduction is more robust to ranking performance measures such as nDCG thanks to its modelling of the expected relevance.

7 Conclusion and future directions

We defined the problem of learning multi-output PNs and FMs as that of learning a 3-way tensor whose slices share a common basis. To obtain a convex optimization objective, we reformulated that problem as that of learning an infinite but row-wise sparse matrix. To learn that matrix, we developed a conditional gradient algorithm with corrective refitting, and were able to provide convergence guarantees, despite the non-convexity of the basis vector (hidden unit) selection step. Although not considered in this paper, our algorithm and its analysis can be modified to make use of stochastic gradients. An open question remains whether a conditional gradient algorithm with provable guarantees can be developed for training deep polynomial networks or factorization machines. Such deep models could potentially represent high-degree polynomials with few basis vectors.
However, this would require the introduction of a new functional analysis framework. 9 References [1] F. Bach. Breaking the curse of dimensionality with convex neural networks. JMLR, 2017. [2] Y. Bengio, N. Le Roux, P. Vincent, O. Delalleau, and P. Marcotte. Convex neural networks. In NIPS, 2005. [3] D. P. Bertsekas. Nonlinear programming. Athena Scientific Belmont, 1999. [4] M. Blondel, A. Fujino, and N. Ueda. Convex factorization machines. In ECML/PKDD, 2015. [5] M. Blondel, M. Ishihata, A. Fujino, and N. Ueda. Polynomial networks and factorization machines: New insights and efficient training algorithms. In ICML, 2016. [6] M. Blondel, K. Seki, and K. Uehara. Block coordinate descent algorithms for large-scale sparse multiclass classification. Machine Learning, 93(1):31?52, 2013. [7] A. Bordes, S. Ertekin, J. Weston, and L. Bottou. Fast kernel classifiers with online and active learning. JMLR, 6(Sep):1579?1619, 2005. [8] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805?849, 2012. [9] Y.-W. Chang, C.-J. Hsieh, K.-W. Chang, M. Ringgaard, and C.-J. Lin. Training and testing low-degree polynomial data mappings via linear svm. Journal of Machine Learning Research, 11:1471?1490, 2010. [10] D. Chen and C. D. Manning. A fast and accurate dependency parser using neural networks. In EMNLP, 2014. [11] J. C. Dunn and S. A. Harshbarger. Conditional gradient algorithms with open loop step size rules. Journal of Mathematical Analysis and Applications, 62(2):432?444, 1978. [12] R.-E. Fan and C.-J. Lin. datasets/, 2011. http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/ [13] A. Gautier, Q. N. Nguyen, and M. Hein. Globally optimal training of generalized polynomial neural networks with nonlinear spectral methods. In NIPS, 2016. [14] GroupLens. http://grouplens.org/datasets/movielens/, 1998. [15] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, 2013. [16] M. Journ?e, Y. Nesterov, P. Richt?rik, and R. Sepulchre. Generalized power method for sparse principal component analysis. Journal of Machine Learning Research, 11:517?553, 2010. [17] Y. Juan, Y. Zhuang, W.-S. Chin, and C.-J. Lin. Field-aware factorization machines for CTR prediction. In ACM Recsys, 2016. [18] S. Lacoste-Julien, M. Jaggi, M. Schmidt, and P. Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In ICML, 2012. [19] P. Li, C. J. Burges, and Q. Wu. McRank: Learning to rank using multiple classification and gradient boosting. In NIPS, 2007. [20] R. Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural networks. In NIPS, 2014. [21] R. Luss and M. Teboulle. Conditional gradient algorithms for rank-one matrix approximations with a sparsity constraint. SIAM Review, 55(1):65?98, 2013. [22] S. G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397?3415, 1993. [23] A. Novikov, M. Trofimov, and I. Oseledets. arXiv:1605.03795, 2016. 10 Exponential machines. arXiv preprint [24] A. Podosinnikova, F. Bach, and S. Lacoste-Julien. Beyond CCA: Moment matching for multiview models. In ICML, 2016. [25] S. Rendle. Factorization machines. In ICDM, 2010. [26] S. Shalev-Shwartz, A. Gonen, and O. Shamir. Large-scale convex minimization with a low-rank constraint. In ICML, 2011. [27] S. Shalev-Shwartz, Y. Wexler, and A. Shashua. ShareBoost: Efficient multiclass learning with feature sharing. 
In NIPS, 2011. [28] S. Sonnenburg and V. Franc. Coffin: A computational framework for linear SVMs. In ICML, 2010. [29] E. Stoudenmire and D. J. Schwab. Supervised learning with tensor networks. In NIPS, 2016. [30] Z. Wang, K. Crammer, and S. Vucetic. Multi-class Pegasos on a budget. In ICML, 2010. [31] M. Yamada, W. Lian, A. Goyal, J. Chen, K. Wimalawarne, S. A. Khan, S. Kaski, H. M. Mamitsuka, and Y. Chang. Convex factorization machine for toxicogenomics prediction. In KDD, 2017. [32] E. Zhong, Y. Shi, N. Liu, and S. Rajan. Scaling factorization machines with parameter server. In CIKM, 2016. 11
6,553
6,928
Clustering Billions of Reads for DNA Data Storage Cyrus Rashtchiana,b Konstantin Makarycheva,c Mikl?s R?cza,d Siena Dumas Anga Djordje Jevdjica Sergey Yekhanina Luis Cezea,b Karin Straussa a Microsoft Research, b CSE at University of Washington, c EECS at Northwestern University, d ORFE at Princeton University Abstract Storing data in synthetic DNA offers the possibility of improving information density and durability by several orders of magnitude compared to current storage technologies. However, DNA data storage requires a computationally intensive process to retrieve the data. In particular, a crucial step in the data retrieval pipeline involves clustering billions of strings with respect to edit distance. Datasets in this domain have many notable properties, such as containing a very large number of small clusters that are well-separated in the edit distance metric space. In this regime, existing algorithms are unsuitable because of either their long running time or low accuracy. To address this issue, we present a novel distributed algorithm for approximately computing the underlying clusters. Our algorithm converges efficiently on any dataset that satisfies certain separability properties, such as those coming from DNA data storage systems. We also prove that, under these assumptions, our algorithm is robust to outliers and high levels of noise. We provide empirical justification of the accuracy, scalability, and convergence of our algorithm on real and synthetic data. Compared to the state-of-the-art algorithm for clustering DNA sequences, our algorithm simultaneously achieves higher accuracy and a 1000x speedup on three real datasets. 1 Introduction Existing storage technologies cannot keep up with the modern data explosion. Thus, researchers have turned to fundamentally different physical media for alternatives. Synthetic DNA has emerged as a promising option, with theoretical information density of multiple orders of magnitude more than magnetic tapes [12, 24, 26, 52]. However, significant biochemical and computational improvements are necessary to scale DNA storage systems to read/write exabytes of data within hours or even days. Encoding a file in DNA requires several preprocessing steps, such as randomizing it using a pseudo-random sequence, partitioning it into hundred-character substrings, adding address and error correction information to these substrings, and finally encoding everything to the {A, C, G, T} alphabet. The resulting collection of short strings is synthesized into DNA and stored until needed. To retrieve the data, the DNA is accessed using next-generation sequencing, which results in several noisy copies, called reads, of each originally synthesized short string, called a reference. With current technologies, these references and reads contain hundreds of characters, and in the near future, they will likely contain thousands [52]. After sequencing, the goal is to recover the unknown references from the observed reads. The first step, which is the focus of this paper, is to cluster the reads into groups, each of which is the set of noisy copies of a single reference. Figure 1: DNA storage datasets have many small clusters that are well-separated in edit distance. The output of clustering is fed into a consensus-finding algorithm, which predicts the most likely reference to have produced each cluster of reads. As Figure 1 shows, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
datasets typically contain only a handful of reads for each reference, and each of these reads differs from the reference by insertions, deletions, and/or substitutions. The challenge of clustering is to achieve high precision and recall of many small underlying clusters, in the presence of such errors. Datasets arising from DNA storage have two striking properties. First, the number of clusters grows linearly with the input size. Each cluster typically consists of five to fifteen noisy copies of the same reference. Second, the clusters are separated in edit distance, by design (via randomization). We investigate approximate clustering algorithms for large collections of reads with these properties. Suitable algorithms must satisfy several criteria. First, they must be distributed, to handle the billions of reads coming from modern sequencing machines. Second, their running time must scale favorably with the number of clusters. In DNA storage datasets, the size of the clusters is fixed and determined by the number of reads needed to recover the data. Thus, the number of clusters k grows linearly with the input size n (i.e., k = ?(n)). Any methods requiring ?(k ? n) = ?(n2 ) time or communication would be too slow for billion-scale datasets. Finally, algorithms must be robust to noise and outliers, and they must find clusters with relatively large diameters (e.g., linear in the dimensionality). These criteria rule out many clustering methods. Algorithms for k-medians and related objectives are unsuitable because they have running time or communication scaling with k ? n [19, 29, 33, 42]. Graph clustering methods, such as correlation clustering [4, 9, 18, 47], require a similarity graph.1 Constructing this graph is costly, and it is essentially equivalent to our clustering problem, since in DNA storage datasets, the similarity graph has connected components that are precisely the clusters of noisy reads. Linkage-based methods are inherently sequential, and iteratively merging the closest pair of clusters takes quadratic time. Agglomerative methods that are robust to outliers do not extend to versions that are distributed and efficient in terms of time, space, and communication [2, 8]. Turning to approximation algorithms, tools such as metric embeddings [43] and locality sensitive hashing (LSH) [31] trade a small loss in accuracy for a large reduction in running time. However, such tools are not well understood for edit distance [16, 17, 30, 38, 46], even though many methods have been proposed [15, 27, 39, 48, 54]. In particular, no published system has demonstrated the potential to handle billions of reads, and no efficient algorithms have experimental or theoretical results supporting that they would achieve high enough accuracy on DNA storage datasets. This is in stark contrast to set similarity and Hamming distance, which have many positive results [13, 36, 40, 49, 55]. Given the challenges associated with existing solutions, we ask two questions: (1) Can we design a distributed algorithm that converges in sub-quadratic time for DNA storage datasets? (2) Is it possible to adapt techniques from metric embeddings and LSH to cluster billions of strings in under an hour? Our Contributions We present a distributed algorithm that clusters billions of reads arising from DNA storage systems. Our agglomerative algorithm utilizes a series of filters to avoid unnecessary distance computations. At a high level, our algorithm iteratively merges clusters based on random representatives. 
Using a hashing scheme for edit distance, we only compare a small subset of representatives. We also use a light-weight check based on a binary embedding to further filter pairs. If a pair of representatives passes these two tests, edit distance determines whether the clusters are merged. Theoretically and experimentally, our algorithm satisfies four desirable properties.

Scalability: Our algorithm scales well in time and space, in shared-memory and shared-nothing environments. For n input reads, each of P processors needs to hold only O(n/P) reads in memory.

Accuracy: We measure accuracy as the fraction of clusters with a majority of found members and no false positives. Theoretically, we show that the separation of the underlying clusters implies our algorithm converges quickly to a correct clustering. Experimentally, a small number of communication rounds achieve 98% accuracy on multiple real datasets, which suffices to retrieve the stored data.

Robustness: For separated clusters, our algorithm is optimally robust to adversarial outliers.

Performance: Our algorithm outperforms the state-of-the-art clustering method for sequencing data, Starcode [57], achieving higher accuracy with a 1000x speedup. Our algorithm quickly recovers clusters with large diameter (e.g., 25), whereas known string similarity search methods perform poorly with distance threshold larger than four [35, 53]. Our algorithm is simple to implement in any distributed framework, and it clusters 5B reads with 99% accuracy in 46 minutes on 24 processors.

[Footnote 1: The similarity graph connects all pairs of elements with distance below a given threshold.]

1.1 Outline

The rest of the paper is organized as follows. We begin, in Section 2, by defining the problem statement, including clustering accuracy and our data model. Then, in Section 3, we describe our algorithm, hash function, and binary signatures. In Section 4, we provide an overview of the theoretical analysis, with most details in the appendix. In Section 5, we empirically evaluate our algorithm. We discuss related work in Section 6 and conclude in Section 7.

2 DNA Data Storage Model and Problem Statement

For an alphabet Σ, the edit distance between two strings x, y ∈ Σ* is denoted d_E(x, y) and equals the minimum number of insertions, deletions, or substitutions needed to transform x to y. It is well known that d_E defines a metric. We fix Σ = {A, C, G, T}, representing the four DNA nucleotides. We define the distance between two nonempty sets C_1, C_2 ⊆ Σ* as d_E(C_1, C_2) = min_{x∈C_1, y∈C_2} d_E(x, y). A clustering C of a finite set S ⊆ Σ* is any partition of S into nonempty subsets. We work with the following definition of accuracy, motivated by DNA storage data retrieval.

Definition 2.1 (Accuracy). Let C, C̃ be clusterings. For 1/2 < γ ≤ 1, the accuracy of C̃ with respect to C is

    A_γ(C, C̃) = max_θ (1/|C|) Σ_{i=1}^{|C|} 1{C̃_θ(i) ⊆ C_i and |C̃_θ(i) ∩ C_i| ≥ γ|C_i|},

where the max is over all injective maps θ : {1, 2, . . . , |C|} → {1, 2, . . . , max(|C|, |C̃|)}.

We think of C as the underlying clustering and C̃ as the output of an algorithm. The accuracy A_γ(C, C̃) measures the number of clusters in C̃ that overlap with some cluster in C in at least a γ-fraction of elements while containing no false positives. This is a stricter notion than the standard classification error [8, 44]. Notice that our accuracy definition does not require that the clusterings be of the same set. We will use this to compare clusterings of S and S ∪ O for a set of outliers O ⊂ Σ*.

[Footnote 2: The requirement γ ∈ (1/2, 1] implies A_γ(C, C̃) ∈ [0, 1].]
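On small instances, Definition 2.1 can be evaluated directly: since the true clusters are disjoint, a found cluster can be a subset of at most one true cluster, so the maximum over injective maps reduces to a simple per-cluster check. A minimal sketch (ours), with clusters represented as Python sets:

```python
def accuracy(true_clusters, found_clusters, gamma=0.9):
    """Accuracy A_gamma of Definition 2.1 (sketch for small inputs).

    Both arguments are lists of sets of reads. A true cluster C is counted
    as recovered if some found cluster F is a subset of C (no false
    positives) and covers at least a gamma fraction of C. Because true
    clusters are disjoint, each found cluster qualifies for at most one
    true cluster, so no assignment conflicts can arise.
    """
    recovered = 0
    for C in true_clusters:
        if any(F <= C and len(F) >= gamma * len(C) for F in found_clusters):
            recovered += 1
    return recovered / len(true_clusters)

# Toy example: one cluster recovered, one split too small, one contaminated.
true_cs = [{"r1", "r2", "r3"}, {"r4", "r5"}, {"r6", "r7"}]
found_cs = [{"r1", "r2", "r3"}, {"r4"}, {"r5", "r6", "r7"}]
print(accuracy(true_cs, found_cs, gamma=0.9))  # 1/3
```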
For DNA storage datasets, the underlying clusters have a natural interpretation. During data retrieval, several molecular copies of each original DNA strand (reference) are sent to a DNA sequencer. The output of sequencing is a small number of noisy reads of each reference. Thus, the reads that correspond to the same reference form a cluster. This interpretation justifies the need for high accuracy: each underlying cluster represents one stored unit of information.

Data Model  To aid in the design and analysis of clustering algorithms for DNA data storage, we introduce the following natural generative model. First, pick many random centers (representing original references), then perturb each center by insertions, deletions, and substitutions to acquire the elements of the cluster (representing the noisy reads). We model the original references as random strings because during the encoding process, the original file has been randomized using a fixed pseudo-random sequence [45]. We make this model precise, starting with the perturbation.

Definition 2.2 (p-noisy copy). For p ∈ [0, 1] and z ∈ Σ*, define a p-noisy copy of z by the following process. For each character in z, independently, do one of the following four operations: (i) keep the character unchanged with probability (1 − p), (ii) delete it with probability p/3, (iii) with probability p/3, replace it with a character chosen uniformly at random from Σ, or (iv) with probability p/3, keep the character and insert an additional one after it, chosen uniformly at random from Σ.

We remark that our model and analysis can be generalized to incorporate separate deletion, insertion, and substitution probabilities p = p_D + p_I + p_S, but we use balanced probabilities p/3 to simplify the exposition. Now, we define a noisy cluster. For simplicity, we assume uniform cluster sizes.

Definition 2.3 (Noisy cluster of size s). We define the distribution D_{s,p,m} with cluster size s, noise rate p ∈ [0, 1], and dimension m. Sample a cluster C ∼ D_{s,p,m} as follows: pick a center z ∈ Σ^m uniformly at random; then, each of the s elements of C will be an independent p-noisy copy of z.

With our definition of accuracy and our data model in hand, we define the main clustering problem.

² The requirement γ ∈ (1/2, 1] implies A_γ(C, C̃) ∈ [0, 1].

Problem Statement  Fix p, m, s, n. Let C = {C₁, ..., C_k} be a set of k = n/s independent clusters C_i ∼ D_{s,p,m}. Given an accuracy parameter γ ∈ (1/2, 1] and an error tolerance δ ∈ [0, 1], on input set S = ∪_{i=1}^{k} C_i, the goal is to quickly find a clustering C̃ of S with A_γ(C, C̃) ≥ 1 − δ.
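The generative model of Definitions 2.2 and 2.3 is straightforward to simulate. The following minimal sketch (our illustration, not the paper's data generator) samples one cluster C ∼ D_{s,p,m}:

import random

SIGMA = "ACGT"

def p_noisy_copy(z, p, rng):
    """Apply operations (i)-(iv) of Definition 2.2 independently per character."""
    out = []
    for ch in z:
        r = rng.random()
        if r < 1 - p:                    # (i) keep unchanged, prob. 1 - p
            out.append(ch)
        elif r < 1 - p + p / 3:          # (ii) delete, prob. p/3
            pass
        elif r < 1 - p + 2 * p / 3:      # (iii) substitute uniformly, prob. p/3
            out.append(rng.choice(SIGMA))
        else:                            # (iv) keep, then insert uniformly, prob. p/3
            out.append(ch)
            out.append(rng.choice(SIGMA))
    return "".join(out)

def sample_cluster(s=10, p=0.04, m=110, seed=0):
    """Draw C ~ D_{s,p,m}: a uniform random center and s independent p-noisy copies."""
    rng = random.Random(seed)
    center = "".join(rng.choice(SIGMA) for _ in range(m))
    return [p_noisy_copy(center, p, rng) for _ in range(s)]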
3 Approximately Clustering DNA Storage Datasets

Our distributed clustering method iteratively merges clusters with similar representatives, alternating between local clustering and global reshuffling. At the core of our algorithm is a hash family that determines (i) which pairs of representatives to compare, and (ii) how to repartition the data among the processors. On top of this simple framework, we use a cheap pre-check, based on the Hamming distance between binary signatures, to avoid many edit distance comparisons. Our algorithm achieves high accuracy by leveraging the fact that DNA storage datasets contain clusters that are well-separated in edit distance. In this section, we will define separated clusterings, explain the hash function and the binary signature, and describe the overall algorithm.

3.1 Separated Clusters

The most important consequence of our data model D_{s,p,m} is that the clusters will be well-separated in the edit distance metric space. Moreover, this reflects the actual separation of clusters in real datasets. To make this precise, we introduce the following definition.

Definition 3.1. A clustering {C₁, ..., C_k} is (r₁, r₂)-separated if C_i has diameter³ at most r₁ for every i ∈ {1, 2, ..., k}, while any two different clusters C_i and C_j satisfy d_E(C_i, C_j) > r₂.

DNA storage datasets will be separated with r₂ ≫ r₁. Thus, recovering the clusters corresponds to finding pairs of strings with distance at most r₁. Whenever r₂ > 2 · r₁, our algorithm will be robust to outliers. In Section 4, we provide more details about separability under our DNA storage data model. We remark that our clustering separability definition differs slightly from known notions [2, 3, 8] in that we explicitly bound both the diameter of clusters and the distance between clusters.

³ A cluster C has diameter at most r if d_E(x, y) ≤ r for all pairs x, y ∈ C.

3.2 Hashing for Edit Distance

Algorithms for string similarity search revolve around the simple fact that when two strings x, y ∈ Σ^m have edit distance at most r, then they share a substring of length at least m/(r + 1). However, insertions and deletions imply that the matching substrings may appear in different locations. Exact algorithms build inverted indices to find matching substrings, and many optimizations have been proposed to exactly find all close pairs [34, 51, 57]. Since we need only an approximate solution, we design a hash family based on finding matching substrings quickly, without being exhaustive.

Informally, for parameters w, ℓ, our hash picks a random "anchor" a of length w, and the hash value for x is the substring of length w + ℓ starting at the first occurrence of a in x. We formally define the family of hash functions H_{w,ℓ} = {h_{π,ℓ} : Σ* → Σ^{w+ℓ}} parametrized by w, ℓ, where π is a permutation of Σ^w. For x = x₁x₂⋯x_m, the value of h_{π,ℓ}(x) is defined as follows. Find the earliest, with respect to π, occurring w-gram a in x, and let i be the index of the first occurrence of a in x. Then, h_{π,ℓ}(x) = x_i ⋯ x_{m′}, where m′ = min(m, i + w + ℓ). To sample h_{π,ℓ} from H_{w,ℓ}, simply pick a uniformly random permutation π : Σ^w → Σ^w.

Note that H_{w,ℓ} resembles MinHash [13, 14] with the natural mapping from strings to sets of substrings of length w + ℓ. Our hash family has the benefit of finding long substrings (such as w + ℓ = 16), while only having the overhead of finding anchors of length w. This reduces computation time, while still leading to effective hashes. We now describe the signatures.
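A minimal sketch of h_{π,ℓ}, assuming a keyed hash as a stand-in for the uniformly random permutation π of Σ^w (the paper does not prescribe how π is represented, so this representation is our assumption):

import hashlib

def anchor_hash(x, w=4, ell=12, seed=0):
    """h_{pi,ell}(x): the substring of length w + ell starting at the first
    occurrence of the w-gram of x that is earliest under the seeded order."""
    assert len(x) >= w
    key = seed.to_bytes(8, "big")
    def rank(a):  # pseudo-random total order on w-grams, standing in for pi
        return hashlib.blake2b(a.encode(), key=key, digest_size=8).digest()
    grams = {x[i:i + w] for i in range(len(x) - w + 1)}
    a = min(grams, key=rank)       # earliest w-gram with respect to pi
    i = x.index(a)                 # index of its first occurrence in x
    return x[i:min(len(x), i + w + ell)]

Two reads sharing a long common substring around the same anchor then receive equal hash values with good probability, which is what the bucketing in the algorithm below exploits.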
3.3 Binary Signature Distance

The q-gram distance is an approximation for edit distance [50]. By now, it is a standard tool in bioinformatics and string similarity search [27, 28, 48, 54]. A q-gram is simply a substring of length q, and the q-gram distance measures the number of different q-grams between two strings. For a string x ∈ Σ^m, let the binary signature ϕ_q(x) ∈ {0, 1}^{4^q} be the indicator vector for the set of q-grams in x. Then, the q-gram distance between x and y equals the Hamming distance d_H(ϕ_q(x), ϕ_q(y)). The utility of the q-gram distance is that the Hamming distance d_H(ϕ_q(x), ϕ_q(y)) approximates the edit distance d_E(x, y), yet it is much faster to check d_H(ϕ_q(x), ϕ_q(y)) ≤ θ than to verify d_E(x, y) ≤ r. The only drawback of the q-gram distance is that it may not faithfully preserve the separation of clusters, in the worst case. This implies that the q-gram distance by itself is not sufficient for clustering. Therefore, we use binary signatures as a coarse filtering step, but reserve edit distance for ambiguous merging decisions. We provide theoretical bounds on the q-gram distance in Section 4.1 and Appendix B. We now explain our algorithm.

Algorithm 1 Clustering DNA Strands
 1: function CLUSTER(S, r, q, w, ℓ, θ_low, θ_high, comm_steps, local_steps)
 2:   C̃ = S.
 3:   For i = 1, 2, ..., comm_steps:
 4:     Sample h_{π,ℓ} ∈ H_{w,ℓ} and hash-partition clusters, applying h_{π,ℓ} to representatives.
 5:     For j = 1, 2, ..., local_steps:
 6:       Sample h_{π,ℓ} ∈ H_{w,ℓ}.
 7:       For C ∈ C̃, sample a representative x_C ∈ C, and then compute the hash h_{π,ℓ}(x_C).
 8:       For each pair x, y with h_{π,ℓ}(x) = h_{π,ℓ}(y):
 9:         If (d_H(ϕ(x), ϕ(y)) ≤ θ_low) or (d_H(ϕ(x), ϕ(y)) ≤ θ_high and d_E(x, y) ≤ r):
10:           Update C̃ = (C̃ \ {C_x, C_y}) ∪ {C_x ∪ C_y}.
11:   return C̃.
12: end function

3.4 Algorithm Description

We describe our distributed, agglomerative clustering algorithm (displayed in Algorithm 1). The algorithm ingests the input set S ⊆ Σ* in parallel, so each core begins with roughly the same number of reads. Signatures ϕ_q(x) are pre-computed and stored for each x ∈ S. The clustering C̃ is initialized as singletons. It will be convenient to use the notation x_C for an element x ∈ C, and the notation C_x for the cluster that x belongs to. We abuse notation and use C̃ to denote the current global clustering. The algorithm alternates between global communication and local computation.

Communication  One representative x_C is sampled uniformly from each cluster C in the current clustering C̃, in parallel. Then, using shared randomness among all cores, a hash function h_{π,ℓ} is sampled from H_{w,ℓ}. Using this same hash function for each core, a hash value is computed for each representative x_C for cluster C in the current clustering C̃. The communication round ends by redistributing the clusters randomly using these hash values. In particular, the value h_{π,ℓ}(x_C) determines which core receives C. The current clustering C̃ is thus repartitioned among cores.

Local Computation  The local computation proceeds independently on each core. One local round revolves around one hash function h_{π,ℓ} ∈ H_{w,ℓ}. Let C̃_j be the set of clusters that have been distributed to the jth core. During each local clustering step, one uniform representative x_C is sampled for each cluster C ∈ C̃_j. The representatives are bucketed based on h_{π,ℓ}(x_C). Now, the local clustering requires three parameters, r, θ_low, θ_high, set ahead of time and known to all the cores. For each pair y, z in a bucket, first the algorithm checks whether d_H(ϕ_q(y), ϕ_q(z)) ≤ θ_low. If so, the clusters C_y and C_z are merged. Otherwise, the algorithm checks if both d_H(ϕ_q(y), ϕ_q(z)) ≤ θ_high and d_E(y, z) ≤ r, and merges the clusters C_y and C_z if these two conditions hold. Immediately after a merge, C̃_j is updated, and C_x corresponds to the present cluster containing x. Note that distributing the clusters among cores during communication implies that no coordination is needed after merges. The local clustering repeats for local_steps rounds before moving to the next communication round.

Termination  After the local computation finishes, after the last of comm_steps communication rounds, the algorithm outputs the current clustering C̃ = ∪_j C̃_j and terminates.
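The following minimal sketch combines the q-gram signature of Section 3.3 with the two-threshold test of lines 8-10 of Algorithm 1; the quadratic dynamic-programming edit distance below is a stand-in for the banded O(rm) check that the analysis assumes.

def qgram_signature(x, q=3):
    """Set of q-grams of x; the symmetric difference of two such sets equals
    the Hamming distance between the indicator vectors phi_q."""
    return {x[i:i + q] for i in range(len(x) - q + 1)}

def edit_distance(x, y):
    """Standard Levenshtein dynamic program, O(|x||y|) time."""
    prev = list(range(len(y) + 1))
    for i, cx in enumerate(x, 1):
        cur = [i]
        for j, cy in enumerate(y, 1):
            cur.append(min(prev[j] + 1,                 # delete cx
                           cur[-1] + 1,                 # insert cy
                           prev[j - 1] + (cx != cy)))   # substitute or keep
        prev = cur
    return prev[-1]

def should_merge(x, y, r=25, theta_low=40, theta_high=60):
    d = len(qgram_signature(x) ^ qgram_signature(y))
    if d <= theta_low:
        return True                    # cheap accept: clearly the same cluster
    if d > theta_high:
        return False                   # cheap reject: clearly different clusters
    return edit_distance(x, y) <= r    # ambiguous zone: pay for edit distance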
4 Theoretical Algorithm Analysis

4.1 Cluster Separation and Binary Signatures

When storing data in DNA, the encoding process leads to clusters with nearly-random centers. Recall that we need the clusters to be far apart for our algorithm to perform well. Fortunately, random cluster centers will have edit distance Ω(m) with high probability. Indeed, two independent random strings have expected edit distance c_ind · m, for a constant c_ind > 0. Surprisingly, the exact value of c_ind remains unknown. Simulations suggest that c_ind ≈ 0.51, and it is known that c_ind > 0.338 [25]. When recovering the data, DNA storage systems receive clusters that consist of p-noisy copies of the centers. In particular, two reads inside of a cluster will have edit distance O(pm), since they are p-noisy copies of the same center. Therefore, any two reads in different clusters will be far apart in edit distance whenever p ≪ c_ind is a small enough constant. We formalize these bounds and provide more details, such as high-probability results, in Appendix A.

Another feature of our algorithm is the use of binary signatures. To avoid incorrectly merging distinct clusters, we need the clusters to be separated according to q-gram distance. We show that random cluster centers will have q-gram distance Ω(m) when q = 2 log₄ m. Additionally, for any two reads x, y, we show that d_H(ϕ_q(x), ϕ_q(y)) ≤ 2q · d_E(x, y), implying that if x and y are in the same cluster, then their q-gram distance will be at most O(qpm). Therefore, whenever p ≪ 1/q ≈ 1/log m, signatures will already separate clusters. For larger p, we use the pair of thresholds θ_low < θ_high to mitigate false merges. We provide more details in Appendix B. In Section 5, we mention an optimization for the binary signatures, based on blocking, which empirically improves the approximation quality, while reducing memory and computational overhead.

4.2 Convergence and Hash Analysis

The running time of our algorithm depends primarily on the number of iterations and the total number of comparisons performed. The two types of comparisons are edit distance computations, which take time O(rm) to check distance at most r, and q-gram distance computations, which take time linear in the signature length. To avoid unnecessary comparisons, we partition cluster representatives using our hash function and only compare reads with the same hash value. Therefore, we bound the total number of comparisons by bounding the total number of hash collisions. In particular, we prove the following convergence theorem (details appear in Appendix C).

Theorem 4.1 (Informal). For sufficiently large n and m and small p, there exist parameters for our algorithm such that it outputs a clustering with accuracy (1 − δ) and the expected number of comparisons is

\[ O\left( \max\left\{ n^{1+O(p)},\ \frac{n^2}{m^{\Theta(1/p)}} \right\} \cdot \left( 1 + \frac{\log(s/\delta)}{s} \right) \right). \]

Note that n^{1+O(p)} ≥ n²/m^{Θ(1/p)} in the expression above whenever the reads are long enough, that is, when m ≥ n^{cp} (where c is some small constant). Thus, for a large range of n, m, p, and δ, our algorithm converges in time proportional to n^{1+O(p)}, which is sub-quadratic in n, the number of input reads. Since we expect the number of clusters k to be k = Θ(n), our algorithm outperforms any methods that require time Ω(kn) = Ω(n²) in this regime. The running time analysis of our algorithm revolves around estimating both the collision probability of our hash function and the overall convergence time to identify the underlying clusters.
The main overhead comes from unnecessarily comparing reads that belong to different clusters. Indeed, for pairs of reads inside the same cluster, the total number of comparisons is O(n), since after a comparison, the reads will merge into the same cluster. For reads in different clusters, we show that they collide with probability that is exponentially small in the hash length (since they are nearly-random strings). For the convergence analysis, we prove that reads in the same cluster will collide with significant probability, implying that after roughly

\[ O\left( \max\left\{ n^{O(p)},\ \frac{n}{m^{\Theta(1/p)}} \right\} \cdot \left( 1 + \frac{\log(s/\delta)}{s} \right) \right) \]

iterations, the found clustering will be (1 − δ) accurate. In Section 5, we experimentally validate our algorithm's running time, convergence, and correctness properties on real and synthetic data.

4.3 Outlier Robustness

Our final theoretical result involves bounding the number of incorrect merges caused by potential outliers in the dataset. In real datasets, we expect some number of highly-noisy reads, due to experimental error. Fortunately, such outliers lead to only a minor loss in accuracy for our algorithm, when the clusters are separated. We prove the following theorem in Appendix D.

Theorem 4.2. Let C = {C₁, ..., C_k} be an (r, 2r)-separated clustering of S. Let O be any set of size δ′k. Fixing the randomness and parameters in the algorithm with distance threshold r, let C̃ be the output on S and C̃′ be the output on S ∪ O. Then, A_γ(C, C̃′) ≥ A_γ(C, C̃) − δ′.

Notice that this is optimal, since δ′k outliers can clearly modify δ′k clusters. For DNA storage data recovery, if we desire 1 − δ accuracy overall, and we expect at most δ′k outliers, then we simply need to aim for a clustering with accuracy at least 1 − δ + δ′.

5 Experiments

We experimentally evaluate our algorithm on real and synthetic data, measuring accuracy and wall clock time. Table 1 describes our datasets. We evaluate accuracy on the real data by comparing the found clusterings to a gold standard clustering. We construct the gold standard by using the original reference strands, and we group the reads by their most likely reference using an established alignment tool (see Appendix E for full details). The synthetically generated data resembles real data distributions and properties [45]. We implement our algorithm in C++ using MPI. We run tests on Microsoft Azure virtual machines (size H16mr: 16 cores, 224 GB RAM, RDMA network).

Table 1: Datasets. Real data from Organick et al. [45]. Synthetic data from Defn. 2.3. Appendix E has details.

Dataset         # Reads           Avg. Length   Description
3.1M real       3,103,511         150           Movie file stored in DNA
13.2M real      13,256,431        150           Music file stored in DNA
58M real        58,292,299        150           Collection of files (40MB stored in DNA; includes above)
12M real        11,973,538        110           Text file stored in DNA
5.3B synthetic  5,368,709,120     110           Noise p = 4%; cluster size s = 10

5.1 Implementation and Parameter Details

For the edit distance threshold, we desire r to be just larger than the cluster diameter. With p noise, we expect the diameter to be at most 4pm with high probability. We conservatively estimate p ≈ 4% for real data, and thus we set r = 25, since 4pm = 24 for p = 0.04 and m = 150.

For the binary signatures, we observe that choosing larger q separates clusters better, but it also increases overhead, since ϕ_q(x) ∈ {0, 1}^{4^q} is very high-dimensional. To remedy this, we used a blocking approach: we partitioned x into blocks of 22 characters and computed ϕ₃ of each block, concatenating these 64-bit strings for the final signature.
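A minimal sketch of this blocked signature; the packing of each block's ϕ₃ into one 64-bit integer (4³ = 64 possible 3-grams) is our reading of the description above, not code from the paper.

IDX = {c: i for i, c in enumerate("ACGT")}

def blocked_signature(x, block=22, q=3):
    """One 64-bit word per 22-character block: bit g is set iff the block
    contains the 3-gram whose base-4 code is g."""
    words = []
    for s in range(0, len(x), block):
        chunk, bits = x[s:s + block], 0
        for i in range(len(chunk) - q + 1):
            code = 0
            for ch in chunk[i:i + q]:
                code = 4 * code + IDX[ch]
            bits |= 1 << code
        words.append(bits)
    return words

def signature_distance(ws, vs):
    """Hamming distance between blocked signatures via XOR and popcount."""
    return sum(bin(a ^ b).count("1") for a, b in zip(ws, vs))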
On synthetic data, we found that setting θ_low = 40 and θ_high = 60 leads to a much reduced running time while sacrificing negligible accuracy.

For the hashing, we set w, ℓ to encourage collisions of close pairs and discourage collisions of far pairs. Following Theorem C.1, we set w = ⌈log₄(m)⌉ = 4 and ℓ = 12, so that w + ℓ = 16 = log₄ n with n = 2³². Since our clusters are very small, we find that we can further filter far pairs by concatenating two independent hashes to define a bucket based on this 64-bit value. Moreover, since we expect very few reads to have the same hash, instead of comparing all pairs in a hash bucket, we sort the reads based on hash value and only compare adjacent elements. For communication, we use only the first 20 bits of the hash value, and we uniformly distribute clusters based on this. Finally, we conservatively set the number of iterations to 780 total (26 communication rounds, each with 30 local iterations) because this led to 99.9% accuracy on synthetic data (even with γ = 1.0).

Figure 2: Comparison to Starcode. (a) Time Comparison (log scale); (b) Accuracy Comparison. Figure 2a plots running times on three real datasets of our algorithm versus four Starcode executions using four distance thresholds d ∈ {2, 4, 6, 8}. For the first dataset, with 3.1M real reads, Figure 2b plots A_γ for varying γ ∈ {0.6, 0.7, 0.8, 0.9, 1.0} of our algorithm versus Starcode. We stopped Starcode if it did not finish within 28 hours. We ran tests on one processor, 16 threads.

Figure 3: Empirical results for our algorithm. (a) Distributed Convergence; (b) Binary Signature Improvement; (c) Strong Scaling. Figure 3a plots accuracy A_{0.9} of intermediate clusterings (5.3B synthetic reads, 24 processors). Figure 3b shows single-threaded running times for four variants of our algorithm, depending on whether it uses signatures for merging and/or filtering (3.1M real reads; single thread). Figure 3c plots times as the number of processors varies from 1 to 8, with 16 cores per processor (58M real reads).

Starcode Parameters  Starcode [57] takes a distance threshold d ∈ {1, 2, ..., 8} as an input parameter and finds all clusters with radius not exceeding this threshold. We run Starcode for various settings of d, with the intention of understanding how Starcode's accuracy and running time change with this parameter. We use Starcode's sphere clustering "-s" option, since this has performed most accurately on sample data, and we use the "-t" parameter to run Starcode with 16 threads.

5.2 Discussion

Figure 2 shows that our algorithm outperforms Starcode, the state-of-the-art clustering algorithm for DNA sequences [57], in both accuracy and time. As explained above, we have set our algorithm's parameters based on theoretical estimates. On the other hand, we vary Starcode's distance threshold parameter d ∈ {2, 4, 6, 8}. We demonstrate in Figures 2a and 2b that increasing this distance parameter significantly improves accuracy on real data, but it also greatly increases Starcode's running time. Both algorithms achieve high accuracy for γ = 0.6, and the gap between the algorithms widens as γ increases. In Figure 2a, we show that our algorithm achieves more than a 1000x speedup over the most accurate setting of Starcode on three real datasets of varying sizes and read lengths.
For d ∈ {2, 4, 6}, our algorithm has a smaller speedup and a larger improvement in accuracy.

Figure 3a shows how our algorithm's clustering accuracy increases with the number of communication rounds, where we evaluate A_γ with γ = 0.9. Clearly, using 26 rounds is quite conservative. Nonetheless, our algorithm took only 46 minutes of wall clock time to cluster 5.3B synthetic reads on 24 processors (384 cores). We remark that distributed MapReduce-based algorithms for string similarity joins have been reported to need tens of minutes for only tens of millions of reads [21, 51].

Figure 3b demonstrates the effect of binary signatures on runtime. Recall that our algorithm uses signatures in two places: merging clusters when d_H(ϕ(x), ϕ(y)) ≤ θ_low, and filtering pairs when d_H(ϕ(x), ϕ(y)) > θ_high. This leads to four natural variants: (i) omitting signatures, (ii) using them for merging, (iii) using them for filtering, or (iv) both. The biggest improvement (20x speedup) comes from using signatures for filtering (comparing (i) vs. (iii)). This occurs because the cheap Hamming distance filter avoids a large number of expensive edit distance computations. Using signatures for merging provides a modest 30% improvement (comparing (iii) vs. (iv)); this gain does not appear between (i) and (ii) because of the time it takes to compute the signatures. Overall, the effectiveness of signatures justifies their incorporation into an algorithm that already filters based on hashing.

Figure 3c evaluates the scalability of our algorithm on 58M real reads as the number of processors varies from 1 to 8. At first, more processors lead to almost optimal speedups. Then, the communication overhead outweighs the parallelization gain. Achieving perfect scalability requires greater understanding and control of the underlying hardware and is left as future work.

6 Related Work

Recent work identifies the difficulty of clustering datasets containing large numbers of small clusters. Betancourt et al. [11] call this "microclustering" and propose a Bayesian non-parametric model for entity resolution datasets. Kobren et al. [37] call this "extreme clustering" and study hierarchical clustering methods. DNA data storage provides a new domain for micro/extreme clustering, with interesting datasets and important consequences [12, 24, 26, 45, 52]. Large-scale, extreme datasets (with billions of elements and hundreds of millions of clusters) are an obstacle for many clustering techniques [19, 29, 33, 42]. We demonstrate that DNA datasets are well-separated, which implies that our algorithm converges quickly to a highly-accurate solution. It would be interesting to determine the minimum requirements for robustness in extreme clustering.

One challenge of clustering for DNA storage comes from the fact that reads are strings with edit errors and a four-character alphabet. Edit distance is regarded as a difficult metric, with known lower bounds in various models [1, 5, 7]. Similarity search algorithms based on MinHash [13, 14] originally aimed to find duplicate webpages or search results, which have much larger natural language alphabets. However, known MinHash optimizations [40, 41] may improve our clustering algorithm. Chakraborty, Goldenberg, and Koucký explore the question of preserving small edit distances with a binary embedding [16]. This embedding was adapted by Zhang and Zhang [56] for approximate string similarity joins.
We leave a thorough comparison to these papers as future work, along with obtaining better theoretical bounds for hashing or embeddings [17, 46] under our data distribution.

7 Conclusion

We highlighted a clustering task motivated by DNA data storage. We proposed a new distributed algorithm and hashing scheme for edit distance. Experimentally and theoretically, we demonstrated our algorithm's effectiveness in terms of accuracy, performance, scalability, and robustness. We plan to release one of our real datasets. We hope our dataset and data model will lead to further research on clustering and similarity search for computational biology or other domains with strings. For future work, our techniques may also apply to other metrics and to other applications with large numbers of small, well-separated clusters, such as entity resolution or deduplication [20, 23, 32]. Finally, our work motivates a variety of new theoretical questions, such as studying the distortion of embeddings for random strings under our generative model (we elaborate on this in Appendix B).

8 Acknowledgments

We thank Yair Bartal, Phil Bernstein, Nova Fandina, Abe Friesen, Sariel Har-Peled, Christian König, Paris Koutris, Marina Meila, and Mark Yatskar for useful discussions. We also thank Alyshia Olsen for help designing the graphs. Finally, we thank Jacob Nelson for sharing his MPI wisdom and Taylor Newill and Christian Smith from the Microsoft Azure HPC Team for help using MPI on Azure.

References

[1] A. Abboud, T. D. Hansen, V. V. Williams, and R. Williams. Simulating Branching Programs with Edit Distance and Friends: Or: A Polylog Shaved is a Lower Bound Made. In STOC, 2016.
[2] M. Ackerman, S. Ben-David, D. Loker, and S. Sabato. Clustering Oligarchies. In AISTATS, 2013.
[3] M. Ackerman and S. Dasgupta. Incremental Clustering: The Case for Extra Clusters. In Advances in Neural Information Processing Systems, pages 307-315, 2014.
[4] N. Ailon, M. Charikar, and A. Newman. Aggregating Inconsistent Information: Ranking and Clustering. Journal of the ACM (JACM), 55(5):23, 2008.
[5] A. Andoni and R. Krauthgamer. The Computational Hardness of Estimating Edit Distance. SIAM J. Comput., 39(6).
[6] A. Andoni and R. Krauthgamer. The Smoothed Complexity of Edit Distance. ACM Transactions on Algorithms (TALG), 8(4):44, 2012.
[7] A. Backurs and P. Indyk. Edit Distance Cannot be Computed in Strongly Subquadratic Time (unless SETH is false). In STOC, 2015.
[8] M.-F. Balcan, Y. Liang, and P. Gupta. Robust Hierarchical Clustering. Journal of Machine Learning Research, 15(1):3831-3871, 2014.
[9] N. Bansal, A. Blum, and S. Chawla. Correlation Clustering. Machine Learning, 56(1-3):89-113, 2004.
[10] T. Batu, F. Ergün, J. Kilian, A. Magen, S. Raskhodnikova, R. Rubinfeld, and R. Sami. A Sublinear Algorithm for Weakly Approximating Edit Distance. In STOC, 2003.
[11] B. Betancourt, G. Zanella, J. W. Miller, H. Wallach, A. Zaidi, and B. Steorts. Flexible Models for Microclustering with Application to Entity Resolution. In NIPS, 2016.
[12] J. Bornholt, R. Lopez, D. M. Carmean, L. Ceze, G. Seelig, and K. Strauss. A DNA-based Archival Storage System. In ASPLOS, 2016.
[13] A. Z. Broder. On the Resemblance and Containment of Documents. In Compression and Complexity of Sequences, pages 21-29. IEEE, 1997.
[14] A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig. Syntactic Clustering of the Web. Computer Networks and ISDN Systems, 29(8-13):1157-1166, 1997.
[15] J. Buhler. Efficient Large-scale Sequence Comparison by Locality-sensitive Hashing. Bioinformatics, 17(5):419-428, 2001.
[16] D. Chakraborty, E. Goldenberg, and M. Koucký. Streaming Algorithms for Embedding and Computing Edit Distance in the Low Distance Regime. In STOC, 2016.
[17] M. Charikar and R. Krauthgamer. Embedding the Ulam Metric into L1. Theory of Computing, 2(11):207-224, 2006.
[18] S. Chawla, K. Makarychev, T. Schramm, and G. Yaroslavtsev. Near Optimal LP Rounding Algorithm for Correlation Clustering on Complete and Complete k-partite Graphs. In STOC, 2015.
[19] J. Chen, H. Sun, D. Woodruff, and Q. Zhang. Communication-Optimal Distributed Clustering. In Advances in Neural Information Processing Systems, pages 3720-3728, 2016.
[20] P. Christen. Data Matching: Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection. Springer Science & Business Media, 2012.
[21] D. Deng, G. Li, S. Hao, J. Wang, and J. Feng. MassJoin: A MapReduce-based Method for Scalable String Similarity Joins. In ICDE, pages 340-351. IEEE, 2014.
[22] D. P. Dubhashi and A. Panconesi. Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, 2009.
[23] A. K. Elmagarmid, P. G. Ipeirotis, and V. S. Verykios. Duplicate Record Detection: A Survey. IEEE Transactions on Knowledge and Data Engineering, 19(1):1-16, 2007.
[24] Y. Erlich and D. Zielinski. DNA Fountain Enables a Robust and Efficient Storage Architecture. Science, 355(6328):950-954, 2017.
[25] S. Ganguly, E. Mossel, and M. Z. Rácz. Sequence Assembly from Corrupted Shotgun Reads. In ISIT, pages 265-269, 2016. http://arxiv.org/abs/1601.07086.
[26] N. Goldman, P. Bertone, S. Chen, C. Dessimoz, E. M. LeProust, B. Sipos, and E. Birney. Towards Practical, High-capacity, Low-maintenance Information Storage in Synthesized DNA. Nature, 494(7435), 2013.
[27] S. Gollapudi and R. Panigrahy. A Dictionary for Approximate String Search and Longest Prefix Search. In CIKM, 2006.
[28] L. Gravano, P. G. Ipeirotis, H. V. Jagadish, N. Koudas, S. Muthukrishnan, D. Srivastava, et al. Approximate String Joins in a Database (Almost) for Free. In VLDB, volume 1, pages 491-500, 2001.
[29] S. Guha, Y. Li, and Q. Zhang. Distributed Partial Clustering. arXiv preprint arXiv:1703.01539, 2017.
[30] H. Hanada, M. Kudo, and A. Nakamura. On Practical Accuracy of Edit Distance Approximation Algorithms. arXiv preprint arXiv:1701.06134, 2017.
[31] S. Har-Peled, P. Indyk, and R. Motwani. Approximate Nearest Neighbor: Towards Removing the Curse of Dimensionality. Theory of Computing, 8(1):321-350, 2012.
[32] O. Hassanzadeh, F. Chiang, H. C. Lee, and R. J. Miller. Framework for Evaluating Clustering Algorithms in Duplicate Detection. PVLDB, 2(1):1282-1293, 2009.
[33] C. Hennig, M. Meila, F. Murtagh, and R. Rocci. Handbook of Cluster Analysis. CRC Press, 2015.
[34] Y. Jiang, D. Deng, J. Wang, G. Li, and J. Feng. Efficient Parallel Partition-based Algorithms for Similarity Search and Join with Edit Distance Constraints. In Joint EDBT/ICDT Workshops, 2013.
[35] Y. Jiang, G. Li, J. Feng, and W.-S. Li. String Similarity Joins: An Experimental Evaluation. PVLDB, 7(8):625-636, 2014.
[36] J. Johnson, M. Douze, and H. Jégou. Billion-scale Similarity Search with GPUs. arXiv preprint arXiv:1702.08734, 2017.
[37] A. Kobren, N. Monath, A. Krishnamurthy, and A. McCallum. A Hierarchical Algorithm for Extreme Clustering. In KDD, 2017.
[38] R. Krauthgamer and Y. Rabani. Improved Lower Bounds for Embeddings into L1. SIAM J. on Computing, 38(6):2487-2498, 2009.
[39] H. Li and R. Durbin. Fast and Accurate Short Read Alignment with Burrows-Wheeler Transform. Bioinformatics, 25(14):1754-1760, 2009.
[40] P. Li and C. König. b-Bit Minwise Hashing. In WWW, pages 671-680. ACM, 2010.
[41] P. Li, A. Owen, and C.-H. Zhang. One Permutation Hashing. In NIPS, 2012.
[42] G. Malkomes, M. J. Kusner, W. Chen, K. Q. Weinberger, and B. Moseley. Fast Distributed k-center Clustering with Outliers on Massive Data. In NIPS, 2015.
[43] J. Matoušek. Lectures on Discrete Geometry, volume 212. Springer New York, 2002.
[44] M. Meilă and D. Heckerman. An Experimental Comparison of Model-based Clustering Methods. Machine Learning, 42(1-2):9-29, 2001.
[45] L. Organick, S. D. Ang, Y.-J. Chen, R. Lopez, S. Yekhanin, K. Makarychev, M. Z. Racz, G. Kamath, P. Gopalan, B. Nguyen, C. Takahashi, S. Newman, H.-Y. Parker, C. Rashtchian, K. Stewart, G. Gupta, R. Carlson, J. Mulligan, D. Carmean, G. Seelig, L. Ceze, and K. Strauss. Scaling Up DNA Data Storage and Random Access Retrieval. bioRxiv, 2017.
[46] R. Ostrovsky and Y. Rabani. Low Distortion Embeddings for Edit Distance. J. ACM, 2007.
[47] X. Pan, D. Papailiopoulos, S. Oymak, B. Recht, K. Ramchandran, and M. I. Jordan. Parallel Correlation Clustering on Big Graphs. In Advances in Neural Information Processing Systems, pages 82-90, 2015.
[48] Z. Rasheed, H. Rangwala, and D. Barbara. Efficient Clustering of Metagenomic Sequences using Locality Sensitive Hashing. In Proceedings of the 2012 SIAM International Conference on Data Mining, pages 1023-1034. SIAM, 2012.
[49] N. Sundaram, A. Turmukhametova, N. Satish, T. Mostak, P. Indyk, S. Madden, and P. Dubey. Streaming Similarity Search Over One Billion Tweets Using Parallel Locality-Sensitive Hashing. PVLDB, 6(14):1930-1941, 2013.
[50] E. Ukkonen. Approximate String-matching with q-grams and Maximal Matches. Theoretical Computer Science, 92(1):191-211, 1992.
[51] C. Yan, X. Zhao, Q. Zhang, and Y. Huang. Efficient String Similarity Join in Multi-core and Distributed Systems. PLOS ONE, 12(3):e0172526, 2017.
[52] S. H. T. Yazdi, R. Gabrys, and O. Milenkovic. Portable and Error-Free DNA-Based Data Storage. bioRxiv, page 079442, 2016.
[53] M. Yu, G. Li, D. Deng, and J. Feng. String Similarity Search and Join: A Survey. Frontiers of Computer Science, 10(3):399-417, 2016.
[54] P. Yuan, C. Sha, and Y. Sun. Hash^{ed}-Join: Approximate String Similarity Join with Hashing. In International Conference on Database Systems for Advanced Applications, pages 217-229. Springer, 2014.
[55] R. B. Zadeh and A. Goel. Dimension Independent Similarity Computation. The Journal of Machine Learning Research, 14(1):1605-1626, 2013.
[56] H. Zhang and Q. Zhang. EmbedJoin: Efficient Edit Similarity Joins via Embeddings. In KDD, 2017.
[57] E. V. Zorita, P. Cuscó, and G. Filion. Starcode: Sequence Clustering Based on All-pairs Search. Bioinformatics, 2015.
Multi-Objective Non-parametric Sequential Prediction

Guy Uziel
Computer Science Department
Technion - Israel Institute of Technology
[email protected]

Ran El-Yaniv
Computer Science Department
Technion - Israel Institute of Technology
[email protected]

Abstract

Online-learning research has mainly been focusing on minimizing one objective function. In many real-world applications, however, several objective functions have to be considered simultaneously. Recently, an algorithm for dealing with several objective functions in the i.i.d. case has been presented. In this paper, we extend the multi-objective framework to the case of stationary and ergodic processes, thus allowing dependencies among observations. We first identify an asymptotic lower bound for any prediction strategy and then present an algorithm whose predictions achieve the optimal solution while fulfilling any continuous and convex constraining criterion.

1 Introduction

In the traditional online learning setting, and in particular in sequential prediction under uncertainty, the learner is evaluated by a single loss function that is not completely known at each iteration [7]. When dealing with multiple objectives, since it is impossible to simultaneously minimize all of the objectives, one objective is chosen as the main function to minimize, leaving the others to be bound by pre-defined thresholds. Methods for dealing with one objective function can be transformed to deal with several objective functions by giving each objective a pre-defined weight. The difficulty, however, lies in assigning an appropriate weight to each objective in order to keep the objectives below a given threshold. This approach is very problematic in real-world applications, where the player is required to satisfy certain constraints. For example, in online portfolio selection [17, 5], the player may want to maximize wealth while keeping the risk (i.e., variance) contained below a certain threshold. Another example is the Neyman-Pearson (NP) classification paradigm (see, e.g., [22]), which extends the objective in classical binary classification: the goal is to learn a classifier achieving low type II error whose type I error is kept below a given threshold.

In the adversarial setting, it is known that attaining multiple objectives is generally impossible when the constraints are unknown a priori [21]. In the stochastic setting, Mahdavi et al. [20] proposed a framework for dealing with multiple objectives in the i.i.d. case. They proved that if there exists a solution that minimizes the main objective function while keeping the other objectives below given thresholds, then their algorithm will converge to the optimal solution.

In this work, we study online prediction with multiple objectives but now consider the challenging general case where the unknown underlying process is stationary and ergodic, thus allowing observations to depend on each other arbitrarily. The (single-objective) sequential prediction under stationary and ergodic sources has been considered in many papers and in various application domains. For example, in online portfolio selection, [14, 11, 12, 18] proposed non-parametric online strategies that guarantee, under mild conditions, the best possible outcome. Another interesting example in this regard is the work on time-series prediction by [3, 9, 4].
A common theme to all these results is that the asymptotically optimal strategies are constructed by combining the predictions of many simple experts. The above strategies use a countably infinite set of experts, and the guarantees provided for these strategies are always asymptotic. This is no coincidence, as it is well known that finite sample guarantees for these methods cannot be achieved without additional strong assumptions on the source distribution [8, 19]. Approximate implementations of non-parametric strategies (which apply only a finite set of experts), however, turn out to work exceptionally well and, despite the inevitable approximation, are reported [13, 12, 11, 16, 17] to significantly outperform strategies designed to work in an adversarial, no-regret setting, in various domains. The algorithm presented in this paper utilizes as a sub-routine the Weak Aggregating Algorithm (WAA) of [24] and [15] to handle multiple objectives. While we discuss here the case of only two objective functions, our theorems can be extended easily to any fixed number of functions.

2 Problem Formulation

We consider the following prediction game. Let X ≜ [−D, D]^d ⊂ R^d be a compact observation space, where D > 0. At each round n = 1, 2, ..., the player is required to make a prediction y_n ∈ Y, where Y ⊂ R^m is a compact and convex set, based on past observations X_1^{n−1} ≜ (x_1, ..., x_{n−1}), where x_i ∈ X (X_1^0 is the empty observation). After making the prediction y_n, the observation x_n is revealed and the player suffers two losses, u(y_n, x_n) and c(y_n, x_n), where u and c are real-valued continuous functions, convex w.r.t. their first argument. We view the player's prediction strategy as a sequence S ≜ {S_n}_{n=1}^∞ of forecasting functions S_n : X^{(n−1)} → Y; that is, the player's prediction at round n is given by S_n(X_1^{n−1}) (for brevity, we denote S(X_1^{n−1})). Throughout the paper we assume that x_1, x_2, ... are realizations of random variables X_1, X_2, ... such that the stochastic process (X_n)_{−∞}^∞ is jointly stationary and ergodic and P(X_i ∈ X) = 1. The player's goal is to play the game with a strategy that minimizes the average u-loss, (1/N) Σ_{i=1}^N u(S(X_1^{i−1}), x_i), while keeping the average c-loss, (1/N) Σ_{i=1}^N c(S(X_1^{i−1}), x_i), bounded below a prescribed threshold γ. Formally, we define the following:

Definition 1 (γ-bounded strategy). A prediction strategy S will be called γ-bounded if

\[ \limsup_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} c(S(X_1^{i-1}), X_i) \le \gamma \]

almost surely. The set of all γ-bounded strategies will be denoted S_γ.

The well-known result of [1] states that for the single-objective case the best possible outcome is E[max_{y∈Y} E_{P_∞}[u(y, X_0)]], where P_∞ is the regular conditional probability distribution of X_0 given F_∞ (the σ-algebra generated by the infinite past X_{−1}, X_{−2}, ...). This motivates us to define the following:

Definition 2 (γ-feasible process). We say that the stationary and ergodic process {X_i}_{−∞}^∞ is γ-feasible w.r.t. the functions u and c if, for a threshold γ > 0, there exists some y′ ∈ Y such that E_{P_∞}[c(y′, X_0)] < γ.

If γ-feasibility holds, then we will denote by y*_∞ (y*_∞ is not necessarily unique) the solution to the following minimization problem:

\[ \begin{aligned} \underset{y \in Y}{\text{minimize}} \quad & E_{P_\infty}[u(y, X_0)] \\ \text{subject to} \quad & E_{P_\infty}[c(y, X_0)] \le \gamma, \end{aligned} \tag{1} \]

and we define the γ-feasible optimal value as

\[ V^* = E\left[ E_{P_\infty}[u(y^*_\infty, X_0)] \right]. \]
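As a concrete illustration of this protocol (ours, not from the paper), the following sketch runs a forecasting strategy on a realized path and reports the two empirical averages that Definition 1 and the goal above refer to:

def play(strategy, xs, u, c, gamma):
    """Run S on the path xs; S sees only X_1^{n-1} when predicting y_n.
    Returns the average u-loss, the average c-loss, and whether the
    empirical c-loss respects the threshold gamma on this path."""
    u_sum = c_sum = 0.0
    past = []
    for x in xs:
        y = strategy(tuple(past))   # the prediction depends only on the past
        u_sum += u(y, x)
        c_sum += c(y, x)
        past.append(x)
    n = len(xs)
    return u_sum / n, c_sum / n, c_sum / n <= gamma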
Note that problem (1) is a convex minimization problem over Y, which in turn is a compact and convex subset of R^m. Therefore, the problem is equivalent to finding the saddle point of the Lagrangian function [2], namely,

\[ \min_{y \in Y} \max_{\lambda \in \mathbb{R}^+} L(y, \lambda), \]

where the Lagrangian is

\[ L(y, \lambda) \triangleq E_{P_\infty}[u(y, X_0)] + \lambda \left( E_{P_\infty}[c(y, X_0)] - \gamma \right). \]

We denote the optimal dual by λ*_∞ and assume that λ*_∞ is unique. Moreover, we set a constant λ_max such that λ_max > λ*_∞,¹ and set Λ ≜ [0, λ_max]. We also define the instantaneous Lagrangian function as

\[ l(y, \lambda, x) \triangleq u(y, x) + \lambda \left( c(y, x) - \gamma \right). \tag{2} \]

In brief, we are seeking a strategy S ∈ S_γ that is as good as any other γ-bounded strategy, in terms of the average u-loss, when the underlying process is γ-feasible. Such a strategy will be called γ-universal.

¹ This can be done, for example, by imposing some regularity conditions on the objectives (see, e.g., [20]).
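Since l(·, λ, x) is convex in y and linear in λ, the saddle point of an empirical version of L can be approximated by projected gradient descent-ascent. The following is a minimal sketch under simplifying assumptions (one-dimensional Y, numerical gradients, a diminishing step schedule); the experts introduced in Section 4 compute exactly such empirical saddle points on their histogram cells, and the paper itself only requires convexity, concavity, and compactness.

import numpy as np

def empirical_saddle(xs, u, c, gamma, lam_max, y_box=(-1.0, 1.0),
                     steps=2000, eta=0.05):
    """Approximate the min-max point of (1/J) sum_j l(y, lam, x_j)."""
    xs = np.asarray(xs, dtype=float)
    L = lambda y, lam: float(np.mean(u(y, xs) + lam * (c(y, xs) - gamma)))
    y, lam, eps = 0.0, 0.0, 1e-6
    for t in range(1, steps + 1):
        step = eta / t ** 0.5
        gy = (L(y + eps, lam) - L(y - eps, lam)) / (2 * eps)  # d/dy
        gl = float(np.mean(c(y, xs) - gamma))                 # d/dlam
        y = min(max(y - step * gy, y_box[0]), y_box[1])       # descend in y
        lam = min(max(lam + step * gl, 0.0), lam_max)         # ascend in lam
    return y, lam

# Example with u(y, x) = (y - x)^2 and c(y, x) = |y| (both convex in y):
# y_hat, lam_hat = empirical_saddle([0.3, 0.5, 0.7],
#                                   u=lambda y, x: (y - x) ** 2,
#                                   c=lambda y, x: abs(y) + 0 * x,
#                                   gamma=0.2, lam_max=10.0)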
3 Optimality of V*

In this section, we show that the average u-loss of any γ-bounded prediction strategy cannot be smaller than V*, the γ-feasible optimal value. This result is a generalization of the well-known result of [1] regarding the best possible outcome under a single objective. Before stating and proving this optimality result, we state three lemmas that will be used repeatedly in this paper. The first lemma is known as Breiman's generalized ergodic theorem. The second and the third lemmas concern the continuity of the saddle point w.r.t. the probability distribution; their proofs appear in the supplementary material.

Lemma 1 (Ergodicity, [6]). Let X = {X_i}_{−∞}^∞ be a stationary and ergodic process. For each positive integer i, let T^i denote the operator that shifts any sequence by i places to the left. Let f_1, f_2, ... be a sequence of real-valued functions such that lim_{n→∞} f_n(X) = f(X) almost surely, for some function f. Assume that E[sup_n |f_n(X)|] < ∞. Then,

\[ \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} f_i(T^i X) = E[f(X)] \]

almost surely.

Lemma 2 (Continuity and Minimax). Let Y, Λ, X be compact real spaces, and let l : Y × Λ × X → R be a continuous function. Denote by P(X) the space of all probability measures on X (equipped with the topology of weak convergence). Then the following function L* : P(X) → R is continuous:

\[ L^*(Q) = \inf_{y \in Y} \sup_{\lambda \in \Lambda} E_Q[l(y, \lambda, x)]. \tag{3} \]

Moreover, for any Q ∈ P(X),

\[ \inf_{y \in Y} \sup_{\lambda \in \Lambda} E_Q[l(y, \lambda, x)] = \sup_{\lambda \in \Lambda} \inf_{y \in Y} E_Q[l(y, \lambda, x)]. \]

Lemma 3 (Continuity of the optimal selection). Let Y, Λ, X be compact real spaces. Then, there exist two measurable selection functions h_y, h_λ such that

\[ h_y(Q) \in \arg\min_{y \in Y} \max_{\lambda \in \Lambda} E_Q[l(y, \lambda, x)], \qquad h_\lambda(Q) \in \arg\max_{\lambda \in \Lambda} \min_{y \in Y} E_Q[l(y, \lambda, x)] \]

for any Q ∈ P(X). Moreover, let L* be as defined in Equation (3). Then, the set

\[ \mathrm{Gr}(L^*) \triangleq \{ (u^*, v^*, Q) \mid u^* \in h_y(Q),\ v^* \in h_\lambda(Q),\ Q \in P(X) \} \]

is closed in Y × Λ × P(X).

The importance of Lemma 3 stems from the fact that it proves the continuity properties of the multi-valued correspondences Q → h_y(Q) and Q → h_λ(Q). This leads to the knowledge that if, for the limiting distribution Q∞, the optimal set is a singleton, then Q → h_y(Q) and Q → h_λ(Q) are continuous in Q∞. We are now ready to prove the optimality of V*.

Theorem 1 (Optimality of V*). Let {X_i}_{−∞}^∞ be a γ-feasible process. Then, for any strategy S ∈ S_γ, the following holds a.s.:

\[ \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} u(S(X_1^{i-1}), X_i) \ge V^*. \]

Proof. For any given strategy S ∈ S_γ, we will look at the following sequence:

\[ \frac{1}{N} \sum_{i=1}^{N} l(S(X_1^{i-1}), \lambda^*_{\infty,i}, X_i), \tag{4} \]

where λ*_{∞,i} ∈ h_λ(P_{X_i | X_1^{i−1}}). Observe that

\[ (4) = \frac{1}{N} \sum_{i=1}^{N} \Big[ l(S(X_1^{i-1}), \lambda^*_{\infty,i}, X_i) - E\big[ l(S(X_1^{i-1}), \lambda^*_{\infty,i}, X_i) \mid X_1^{i-1} \big] \Big] + \frac{1}{N} \sum_{i=1}^{N} E\big[ l(S(X_1^{i-1}), \lambda^*_{\infty,i}, X_i) \mid X_1^{i-1} \big]. \]

Since A_i = l(S(X_1^{i−1}), λ*_{∞,i}, X_i) − E[l(S(X_1^{i−1}), λ*_{∞,i}, X_i) | X_1^{i−1}] is a martingale difference sequence, the first summand converges to 0 a.s. by the strong law of large numbers (see, e.g., [23]). Therefore,

\[ \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} l(S(X_1^{i-1}), \lambda^*_{\infty,i}, X_i) = \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} E\big[ l(S(X_1^{i-1}), \lambda^*_{\infty,i}, X_i) \mid X_1^{i-1} \big] \ge \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \min_{y} E\big[ l(y, \lambda^*_{\infty,i}, X_i) \mid X_1^{i-1} \big], \tag{5} \]

where the minimum is taken w.r.t. all the σ(X_1^{i−1})-measurable functions. Because the process is stationary, we get, for λ*_{∞,i} ∈ h_λ(P_{X_0 | X_{1−i}^{−1}}),

\[ (5) = \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \min_{y} E\big[ l(y, \lambda^*_{\infty,i}, X_0) \mid X_{1-i}^{-1} \big] = \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} L^*(P_{X_0 | X_{1-i}^{-1}}). \tag{6} \]

Using Levy's zero-one law, P_{X_0 | X_{1−i}^{−1}} → P_∞ weakly as i approaches ∞, and from Lemma 2 we know that L* is continuous. Therefore, we can apply Lemma 1 and get that a.s.

\[ (6) = E[L^*(P_\infty)] = E\big[ E_{P_\infty}[l(y^*_\infty, \lambda^*_\infty, X_0)] \big]. \tag{7} \]

Note also that, due to the complementary slackness condition of the optimal solution, i.e., λ*_∞ (E_{P_∞}[c(y*_∞, X_0)] − γ) = 0, we get

\[ (7) = E\big[ E_{P_\infty}[u(y^*_\infty, X_0)] \big] = V^*. \]

From the uniqueness of λ*_∞, and using Lemma 3, λ*_{∞,i} → λ*_∞ as i approaches ∞. Moreover, since l is continuous on a compact set, l is also uniformly continuous. Therefore, for any given ε > 0, there exists δ > 0 such that if |λ′ − λ| < δ, then |l(y, λ′, x) − l(y, λ, x)| < ε for any y ∈ Y and x ∈ X. Therefore, there exists i_0 such that if i > i_0, then |l(y, λ*_{∞,i}, x) − l(y, λ*_∞, x)| < ε for any y ∈ Y and x ∈ X. Thus,

\[ \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} l(S(X_1^{i-1}), \lambda^*_\infty, X_i) - \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} l(S(X_1^{i-1}), \lambda^*_{\infty,i}, X_i) \le \limsup_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \big[ l(S(X_1^{i-1}), \lambda^*_\infty, X_i) - l(S(X_1^{i-1}), \lambda^*_{\infty,i}, X_i) \big] \le \varepsilon \quad a.s., \]

and since ε is arbitrary,

\[ \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} l(S(X_1^{i-1}), \lambda^*_\infty, X_i) \ge \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} l(S(X_1^{i-1}), \lambda^*_{\infty,i}, X_i). \]

Therefore, we can conclude that

\[ \liminf_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} l(S(X_1^{i-1}), \lambda^*_\infty, X_i) \ge V^* \quad a.s. \]

We finish the proof by noticing that since S ∈ S_γ, by definition

\[ \limsup_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} c(S(X_1^{i-1}), X_i) \le \gamma \quad a.s., \]

and since λ*_∞ is non-negative, we get the desired result.

The above theorem also provides the motivation to find the saddle point of the Lagrangian L. Therefore, for the remainder of the paper we will use the loss function l as defined in Equation (2).
the following holds: N 1 X u(S(X1i?1 ), Xi ) = V ? a.s. N ?? N i=1 lim 5 Such a strategy will be called ?-universal. We do so by maintaining a countable set of experts {Hk,h } k, h = 1, 2, . . ., which are constructed in a similar manner to the experts used in [12]. Each expert is defined using a histogram which gets finer as h grows, allowing us to construct an i empirical measure on X . An expert Hk,h therefore outputs a pair (yk,h , ?ik,h ) ? Y ? ? at round i. This pair is the minimax w.r.t. its empirical measure. We show that those emprical measures converge weakly to P? , thus, the experts? prediction will converge to V ? . Our algorithm outputs at round i a pair (yi , ?i ) ? Y ? ? where the sequence of predictions y1 , y2 , . . . tries to minimize the PN average loss N1 i=1 l(y, ?i , xi ) and the sequence of predictions ?1 , ?2 , . . . tries to maximize the PN i average loss N1 i=1 l(yi , ?, xi ). Each of yi and ?i is the aggregation of predictions yk,h and ?ik,h , k, h = 1, 2, . . . , respectively. In order to ensure that the performance of MHA will be as good as any other expert for both the y and the ? predictions, we apply the Weak Aggregating Algorithm of [24], and [15] twice alternately. Theorem 2 states that the selection of points made by the experts above converges to the optimal solution, the proof of Theorem 2 and the explicit construction of the experts appears in the supplementary material. Then, in Theorem 3 we prove that MHA applied on the experts defined in Theorem 2 generates a sequence of predictions that is ?-bounded and as good as any other strategy w.r.t. any ?-feasible process. Theorem 2. Assume that {Xi }? ?? is a ?-feasible process. Then, it is possible to construct a countable set of experts {Hk,h } for which N 1 X i l(yk,h , ?ik,h , Xi ) = V ? a.s., k?? h?? n?? N i=1 lim lim lim i where (yk,h , ?ik,h ) are the predictions made by expert Hk,h at round i. Before stating the main theorem regarding MHA, we state the following lemma (the proof appears in the supplementary material), which is used in the proof of the main result regarding MHA. Lemma 4. Let {Hk,h } be a countable set of experts as defined in the proof of Theorem 2. Then, the following relation holds a.s.: inf lim sup k,h n?? N N   1 X 1 X i l yk,h , ?i , Xi ? V ? ? sup lim inf l yi , ?ik,h , Xi , N i=1 k,h n?? N i=1 where (yi , ?i ) are the predictions of MHA when applied on {Hk,h }. We are now ready to state and prove the optimality of MHA. Theorem 3 (Optimality of MHA). Let (yi , ?i ) be the predictions generated by MHA when applied on {Hk,h } as defined in the proof of Theorem 2. Then, for any ?-feasible process {Xi }? ?? : MHA is a ?-bounded and ?-universal strategy. Proof. We first show that N 1 X l(yi , ?i , Xi ) = V ? a.s. N ?? N i=1 lim (8) Applying Lemma 5 in [15], we know that the x updates guarantee that for every expert Hk,h , N N 1 X 1 X i Ck,h l(yi , ?i , xi ) ? l(y , ?i , xi ) + ? N i=1 N i=1 k,h N N N 0 Ck,h 1 X 1 X l(yi , ?i , xi ) ? l(yi , ?ik,h , xi ) ? ? , N i=1 N i=1 N 0 where Ck,h , Ck,h > 0 are some constants independent of N . In particular, using Equation (9), ! N N 1 X 1 X i Ck,h l(yi , ?i , xi ) ? inf l(y , ?i , xi ) + ? . k,h N i=1 N i=1 k,h N 6 (9) (10) Therefore, we get ! N N 1 X i 1 X Ck,h lim sup l(yi , ?i , xi ) ? lim sup inf l(y , ?i , xi ) + ? N i=1 k,h N N ?? N i=1 N ?? k,h ! ! N N 1 X i Ck,h 1 X i ? inf lim sup l(y , ?i , xi ) + ? ? inf lim sup l(y , ?i , xi ) , k,h N ?? k,h N ?? 
N i=1 k,h N i=1 k,h N (11) where in the last inequality we used the fact that lim sup is sub-additive. Using Lemma (4), we get that (11) ? V ? ? sup lim inf k,h n?? N  1 X l yi , ?ik,h , Xi . N i=1 (12) Using similar arguments and using Equation (10) we can show that (12) ? lim inf N ?? N 1 X l(yi , ?i , xi ). N i=1 Summarizing, we have N N 1 X 1 X l(yi , ?i , xi ) ? V ? ? lim inf l(yi , ?i , xi ). N ?? N N ?? N i=1 i=1 PN Therefore, we can conclude that a.s. limN ?? N1 i=1 l(yi , ?i , Xi ) = V ? . lim sup To show that MHA is indeed a ?-bounded strategy, we use two special experts H0,0 , H?1,?1 whose predictions are ?n0,0 = ?max and ?n?1,?1 = 0 for every n and to shorten the notation, we denote g(y, ?, x) , ?(c(y, x) ? ?). First, from Equation (10) applied on the expert H0,0 , we get that: lim sup N ?? N N 1 X 1 X g(yi , ?max , x) ? lim sup g(yi , ?i , x). N i=1 N ?? N i=1 (13) Moreover, since l is uniformly continuous, for any given  > 0, there exists ? > 0, such that if |?0 ? ?| < ?, then |l(y, ?0 , x) ? l(y, ?, x)| <  for any y ? Y and x ? X . We also know from the proof of Theorem 2 that limk?? limh?? limi?? ?ik,h = ??? . Therefore, there exist k0 , h0 , i0 such that |?ik0 ,h0 ? ??? | < ? for any i > i0 . Therefore, ! N N 1 X 1 X ? lim sup l(yi , ?? , xi ) ? l(yi , ?i , xi ) ? N i=1 N i=1 N ?? ! N N X 1 X 1 lim sup l(yi , ??? , xi ) ? l(yi , ?ik0 ,h0 , xi ) + N i=1 N i=1 N ?? ! N N 1 X 1 X i lim sup l(yi , ?k0 ,h0 , xi ) ? l(yi , ?i , xi ) N i=1 N i=1 N ?? (14) From the uniform continuity we also learn that the first summand is bounded above by , and from Equation (10), we get that the last summand is bounded above by 0. Thus, (14) ? , and since  is arbitrary, we get that lim sup N ?? ! N N 1 X 1 X ? l(yi , ?? , xi ) ? l(yi , ?i , xi ) ? 0. N i=1 N i=1 7 PN Thus, lim supN ?? N1 i=1 l(yi , ??? , Xi ) ? V ? , and from Theorem 1 we can conclude that PN limN ?? N1 i=1 l(yi , ??? , Xi ) = V ? . Therefore, we can deduce that N N 1 X 1 X g(yi , ?i , xi ) ? lim sup g(yi , ??? , xi ) = N i=1 N ?? N i=1 lim sup N ?? lim sup N ?? N N 1 X 1 X g(yi , ?i , xi ) + lim inf ?g(yi , ??? , xi ) N ?? N N i=1 i=1 ? lim sup N ?? = lim sup N ?? N N 1 X 1 X g(yi , ?i , xi ) ? g(yi , ??? , xi ) N i=1 N i=1 N N 1 X 1 X l(yi , ?i , xi ) ? l(yi , ??? , xi ) = 0, N i=1 N i=1 which results in lim sup N ?? N N 1 X 1 X g(yi , ?i , xi ) ? lim sup g(yi , ??? , xi ). N i=1 N N ?? i=1 Combining the above with Equation (13), we get that lim sup N ?? Since 0 ? ??? N N 1 X 1 X g(yi , ?max , xi ) ? lim sup g(yi , ??? , xi ). N i=1 N ?? N i=1 < ?max , we get that MHA is ?-bounded. This also implies that N 1 X lim sup ?i (c(yi , xi ) ? ?) ? 0. N ?? N i=1 Now, if we apply Equation (10) on the expert H?1,?1 , we get that lim inf N ?? N 1 X ?i (c(yi , xi ) ? ?) ? 0. N i=1 Thus, N 1 X lim ?i (c(yi , xi ) ? ?) = 0, N ?? N i=1 and using Equation (8), we get that MHA is also ?-universal. 5 Concluding Remarks In this paper, we introduced the Minimax Histogram Aggregation (MHA) algorithm for multipleobjective sequential prediction. We considered the general setting where the unknown underlying process is stationary and ergodic., and given that the underlying process is ?-feasible, we extended the well-known result of [1] regarding the asymptotic lower bound of prediction with a single objective, to the case of multi-objectives. We proved that MHA is a ?-bounded strategy whose predictions also converge to the optimal solution in hindsight. 
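For readers who want to experiment with the scheme, the following is a minimal Python sketch of one round of the bookkeeping in Algorithm 1, restricted to a finite list of experts (as noted below, a finite expert set is what one uses in practice). Everything here is a placeholder supplied by the caller: the experts' predictions (scalars, for simplicity), the prior weights alpha_{k,h}, and the loss l(y, lambda, x). The two exponential reweightings are the Weak Aggregating Algorithm updates of [24, 15] with learning rate 1/sqrt(n), applied once for the minimizing y-side and once, with the sign flipped, for the maximizing lambda-side.

import numpy as np

def mha_round(n, x_n, y_n, lam_n, experts_y, experts_lam, alpha,
              cum_ly, cum_llam, loss):
    """One round of Algorithm 1 (MHA) over a finite list of K experts.

    x_n, y_n, lam_n  : the point revealed by nature and the pair just played
    experts_y/lam    : length-K arrays of the experts' current predictions
    alpha            : length-K array of initial weights alpha_{k,h}
    cum_ly, cum_llam : running sums of l(y_expert, lam_play, x_i) and
                       l(y_play, lam_expert, x_i); updated in place
    loss             : callable l(y, lam, x)
    Returns the aggregated plays (y_{n+1}, lam_{n+1}).
    """
    # update the experts' cumulative losses with this round's outcome
    cum_ly += np.array([loss(ye, lam_n, x_n) for ye in experts_y])
    cum_llam += np.array([loss(y_n, le, x_n) for le in experts_lam])
    eta = 1.0 / np.sqrt(n + 1)
    # y-side: exponential weights shrinking with large cumulative loss;
    # subtracting the min/max inside exp only rescales and cancels below
    wy = alpha * np.exp(-eta * (cum_ly - cum_ly.min()))
    # lambda-side: sign flipped, since lambda plays the maximizing role
    wl = alpha * np.exp(eta * (cum_llam - cum_llam.max()))
    y_next = float(np.dot(wy / wy.sum(), experts_y))
    lam_next = float(np.dot(wl / wl.sum(), experts_lam))
    return y_next, lam_next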
In the proofs of the theorems and lemmas above, we used the fact that the initial weights of the experts, α_{k,h}, are strictly positive, thus implying a countably infinite expert set. In practice, however, one cannot maintain an infinite set of experts. Therefore, it is customary to apply such algorithms with a finite number of experts (see [14, 11, 12, 18]). Despite the fact that in the proof we assumed that the observation set X is known a priori, the algorithm can also be applied in the case that X is unknown by applying the doubling trick. For a further discussion on this point, see [9]. In our proofs, we relied on the compactness of the set X. It will be interesting to see whether the universality of MHA can be sustained under unbounded processes as well. A very interesting open question would be to identify conditions allowing for finite-sample bounds when predicting with multiple objectives.

Acknowledgments

This research was supported by The Israel Science Foundation (grant No. 1890/14).

References

[1] P.H. Algoet. The strong law of large numbers for sequential decisions under uncertainty. IEEE Transactions on Information Theory, 40(3):609–633, 1994.
[2] A. Ben-Tal and A. Nemirovsky. Optimization III. Lecture Notes, 2012.
[3] G. Biau, K. Bleakley, L. Györfi, and G. Ottucsák. Nonparametric sequential prediction of time series. Journal of Nonparametric Statistics, 22(3):297–317, 2010.
[4] G. Biau and B. Patra. Sequential quantile prediction of time series. IEEE Transactions on Information Theory, 57(3):1664–1674, 2011.
[5] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 2005.
[6] L. Breiman. The individual ergodic theorem of information theory. The Annals of Mathematical Statistics, 28(3):809–811, 1957.
[7] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[8] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31. Springer Science & Business Media, 2013.
[9] L. Györfi and G. Lugosi. Strategies for sequential prediction of stationary time series. In Modeling Uncertainty, pages 225–248. Springer, 2005.
[10] L. Györfi, G. Lugosi, and G. Morvai. A simple randomized algorithm for sequential prediction of ergodic time series. IEEE Transactions on Information Theory, 45(7):2642–2650, 1999.
[11] L. Györfi, G. Lugosi, and F. Udina. Nonparametric kernel-based sequential investment strategies. Mathematical Finance, 16(2):337–357, 2006.
[12] L. Györfi and D. Schäfer. Nonparametric prediction. Advances in Learning Theory: Methods, Models and Applications, 339:354, 2003.
[13] L. Györfi, F. Udina, and H. Walk. Nonparametric nearest neighbor based empirical portfolio selection strategies. Statistics & Decisions, International Mathematical Journal for Stochastic Methods and Models, 26(2):145–157, 2008.
[14] L. Györfi, A. Urbán, and I. Vajda. Kernel-based semi-log-optimal empirical portfolio selection strategies. International Journal of Theoretical and Applied Finance, 10(03):505–516, 2007.
[15] Y. Kalnishkan and M. Vyugin. The weak aggregating algorithm and weak mixability. In International Conference on Computational Learning Theory, pages 188–203. Springer, 2005.
[16] B. Li and S.C.H. Hoi. On-line portfolio selection with moving average reversion. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 273–280, 2012.
[17] B. Li and S.C.H. Hoi. Online portfolio selection: A survey. ACM Computing Surveys (CSUR), 46(3):35, 2014.
[18] B. Li, S.C.H. Hoi, and V. Gopalkrishnan. CORN: Correlation-driven nonparametric learning approach for portfolio selection. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):21, 2011.
[19] U. von Luxburg and B. Schölkopf. Statistical learning theory: Models, concepts, and results. arXiv preprint arXiv:0810.4752, 2008.
[20] M. Mahdavi, T. Yang, and R. Jin. Stochastic convex optimization with multiple objectives. In Advances in Neural Information Processing Systems, pages 1115–1123, 2013.
[21] S. Mannor, J. Tsitsiklis, and J.Y. Yu. Online learning with sample path constraints. Journal of Machine Learning Research, 10(Mar):569–590, 2009.
[22] P. Rigollet and X. Tong. Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12(Oct):2831–2855, 2011.
[23] W. Stout. Almost Sure Convergence, vol. 24 of Probability and Mathematical Statistics, 1974.
[24] V. Vovk. Competing with stationary prediction strategies. In International Conference on Computational Learning Theory, pages 439–453. Springer, 2007.
Integration of Visual and Somatosensory Information for Preshaping Hand in Grasping Movements

Yoji Uno
ATR Human Information Processing Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan

Naohiro Fukumura*
Faculty of Engineering, University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113, Japan

Ryoji Suzuki
Faculty of Engineering, University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113, Japan

Mitsuo Kawato
ATR Human Information Processing Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan

Abstract

The primate brain must solve two important problems in grasping movements. The first problem concerns the recognition of grasped objects: specifically, how does the brain integrate visual and motor information on a grasped object? The second problem concerns hand shape planning: specifically, how does the brain design the hand configuration suited to the shape of the object and the manipulation task? A neural network model that solves these problems has been developed. The operations of the network are divided into a learning phase and an optimization phase. In the learning phase, internal representations, which depend on the grasped objects and the task, are acquired by integrating visual and somatosensory information. In the optimization phase, the most suitable hand shape for grasping an object is determined by using a relaxation computation of the network.

* Present Address: Parallel Distributed Processing Research Dept., Sony Corporation, 6-7-35 Kitashinagawa, Shinagawa-ku, Tokyo 141, Japan

1 INTRODUCTION

It has previously been established that, while reaching out to grasp an object, the human hand preshapes according to the shape of the object and the planned manipulation (Jeannerod, 1984; Arbib et al., 1985). The preshaping of the human hand suggests that prior to grasping an object the 3-dimensional form of the object is recognized and the most suitable hand configuration is preset depending on the manipulation task. It is supposed that the human recognizes objects using not only visual information but also somatosensory information when the hand grasps them. Visual information is made from the 2-dimensional image in the visual system of the brain. Somatosensory information is closely related to motor information, because it depends on the prehensile hand shape (i.e., finger configuration). We hypothesize that an internal representation of a grasped object is formed in the brain by integrating visual and somatosensory information. Some physiological studies support our hypothesis. For example, Taira et al. (1990) found that the activity of hand-movement-related neurons in the posterior parietal association cortex was highly selective to the shape and/or the orientation of manipulated switches. How can the neural network integrate different kinds of information? Merely uniting the visual image with somatosensory information does not lead to any interesting representation. Our basic idea is that information compression can be applied to integrate different kinds of information. It is useful to extract the essential information by compressing the visual and somatosensory information. Irie & Kawato (1991) pointed out that multi-layered perceptrons have the ability to extract features from input signals by compressing the information they carry. Katayama & Kawato (1990) proposed a learning schema in which an internal representation of the grasped object was acquired using information compression.
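To fix ideas, here is a minimal NumPy sketch of a five-layer, forward-connected bottleneck network of the kind shown in Figure 1, with the visual image x and the hand configuration y concatenated at the input. The layer widths are illustrative assumptions (a 32-pixel image and 16 glove channels with a 6-unit third layer, giving 48 -> 24 -> 6 -> 24 -> 48); only the narrow third layer is essential to the compression idea.

import numpy as np

def forward(x, y, W, b):
    """Forward pass of a five-layer bottleneck network (cf. Figure 1).

    x, y : visual-image and hand-configuration vectors (concatenated input)
    W, b : lists of 4 weight matrices / bias vectors, one per connection
    Returns the 3rd-layer code z (the internal representation) and the
    reconstruction produced by the 5th layer.
    """
    a = np.concatenate([x, y])
    z = None
    for i, (Wi, bi) in enumerate(zip(W, b)):
        a = np.tanh(Wi @ a + bi)
        if i == 1:          # activity of the 3rd layer = internal code
            z = a.copy()
    return z, a

# illustrative layer widths: 48 -> 24 -> 6 -> 24 -> 48
sizes = [48, 24, 6, 24, 48]
rng = np.random.default_rng(1)
W = [rng.normal(scale=0.3, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(m) for m in sizes[1:]]
z, recon = forward(rng.uniform(size=32), rng.uniform(size=16), W, b)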
Developing the schema of Katayama et al., we have devised a neural network model for recognizing objects and planning hand shapes (e.g., Fukumura et al. 1991). This neural network consists of five layers of neurons with only forward connections as shown in Figure 1. The input layer (1 st layer) and the output layer (5th layer) of the network have the same structure. There are fewer neurons in the 3rd layer than in the 1st and 5th layers. The operations of the network are divided into the learning phase, which is discussed in section 2 and the optimization phase, which is discussed in section 3. 2 INTEGRATION OF VISUAL AND SOMATOSENSORY INFORMATION USING NETWORK LEARNING In the learning phase, the neural network learns the relation between the visual information (Le.,visual image) and the somatosensory information which, in this paper, is regarded as information on the prehensile hand configuration (Le., finger configuration). Both vector x representing the visual image of an object and vector y representing the prehensile hand configuration to grasp it are fed into the 1st layer (the input layer). The synaptic weights of the network are repeatedly adjusted so that the 5th layer outputs the same vectors x and y as are fed into the 1st layer. In other words, the network comes to realize the identity map between the 1st layer and the 5th layer through a learning process. The most important point of the neural network model is that the number of neurons in the 3rd layer is smaller than the number of neurons in the 1st layer (which is equal to the number of neurons in the 5th layer). Therefore, the information from x and y is compressed between Visual & Somatosensory Information for Preshaping Hand in Grasping Movements y (hand) Figure 1: A neural network model for integrating visual image x and prehensile hand configuration y. The internal representation z of a grasped object is acquired in the third layer. the 1st layer and the 3rd layer, and restored between the 3rd layer and the 5th layer. Once the network learning process is complete, visual image x and prehensile hand configuration yare integrated in the network. Consequently, the internal representation z of the grasped object, which should include enough information to reproduce x and y, is formed in the 3rd layer. Prehensile hand configuration in grasping movements were measured and the learning of the network was simulated by a computer. In behavioral experiments, three kinds of wooden objects were prepared: five circular cylinders whose diameters were 3 cm, 4 cm, 5 cm, 6 cm and 7 cm; four quadrangular prisms whose side lengths were 3 cm, 4 cm, 5 cm and 6 cm; and three spheres whose diameters were 3 cm, 4 cm and 5 cm. Data input to the network was comprised of visual image x and prehensile hand configuration y. Visual images of objects are formed through complicated processes in the visual system of the brain. For simplicity, however, projections of objects onto a side plane and/or a bottom plane were used instead of real visual images. The area of each pixel of the ,e-rojected image was fed into the network as an element of visual image x. A DataGloveT (V P L) was used to measure finger configurations in grasping movements. We attached sixteen optical fibers, whose outputs were roughly inversely proportional to finger joint-angles, to the DataGlove. The subject was instructed to grasp the objects on the table tightly with the palm and all the fingers. 
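The learning phase itself can be reproduced with any off-the-shelf back-propagation routine. Below is a sketch using scikit-learn's MLPRegressor as a stand-in for the paper's own back-propagation implementation; the training matrix is a random placeholder standing for the measured (image, glove) patterns, and every width except the 6-unit third layer is an assumption.

import numpy as np
from sklearn.neural_network import MLPRegressor

# placeholder for the measured patterns, each a concatenated
# (visual image, 16 DataGlove outputs) vector of illustrative width 48
rng = np.random.default_rng(0)
Z = rng.uniform(-1.0, 1.0, size=(360, 48))

# hidden layers 24 -> 6 -> 24, so the 6-unit layer is the bottleneck
net = MLPRegressor(hidden_layer_sizes=(24, 6, 24), activation='tanh',
                   solver='adam', max_iter=5000, random_state=0)
net.fit(Z, Z)   # learn the identity map, as in the learning phase
print('reconstruction MSE:', np.mean((net.predict(Z) - Z) ** 2))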
The subject grasped twelve objects thirty times each, which produced 360 prehensile patterns for use as training data for network learning. In the computer simulation, six neurons were set in the 3rd layer. The baCk-propagation learning method was applied in order to modify the synaptic weights in the network. Figure 2 shows the activity of neurons in the 3rd layer after the learning had sufficiently been performed. Some interesting features of the internal representations were found in Figure 2. The first is that the level of neuron activity in the 3rd layer increased monotonically as the size of the object increased. The second is that, except for the magnitude, the neuron activation patterns for the same kinds of objects were almost the same. Furthermore, the activation patterns were similar for circular cylinders and quadrangular prisms, but were quite different for spheres. In other words, similar representations were acquired for similarly shaped objects. We concluded that the internal representations were formed in the 3rd 313 314 Uno, Fukumura, Suzuki, and Kawato Neuron activity ~\ ,../ 0.00 / d ,( 1 P \' YI Diameter \, \6 Q, b -Q-Scm P, V\' \ -{)-6cm o. , \ ' -I.OO-l--~1~2~3----;=4-5::"":;:"'6~ Neuron index of the 3rd layer a) Circular cylinder \ I tJ 1.00 \ \ \ -3cm Diameter ..... -4cm -Q'-4cm -i:rScm -D-Scm -/:r6cm Side Length -D-6cm 0.00 ~\~~ ' - O '7cm j\"-o ~ J 10 \ 1 '\ 0 '0/ b/\ \ o.J /\ \0 --3cm -o--4cm I ?? 1.00 0 \ 0--, Q' ~ Neuron activity Neuron activity p, 1.00 -1.00 1~ 123456 Neuron index of the 3rd layer -1.00-l--~1-2......-13~4~5~6~ b) Quadrangular prism Neuron index of the 3rd layer c) Sphere Figure 2: Internal representations of grasped objects. Graph a) shows the neuron activation patterns for five circular cylinders whose diameters were 3 cm, 4 cm, 5 cm, 6 cm and 7 cm. Graph b) shows the neuron activation patterns for four quadrangular prisms whose side lengths were 3 cm, 4 cm, 5 cm and 6 cm. Finally, Graph c) shows the neuron activation patterns for three spheres whose diameters were 3 cm, 4 cm and 5 cm. The abscissa represents the index of the six neurons in the 3rd layer, while the ordinate represents their activity. These values were normalized from -1 to +1. layer and changed topologically according to the shapes and sizes of the grasped objects. 3 DESIGN OF PREHENSILE HAND SHAPES The neural network that has completed the learning can design hand shapes to grasp any objects in the optimization phase. Determining prehensile hand shape (i.e., finger configuration) is an ill-posed problem, because there are many ways to grasp any given object. In other words, prehensile hand configuration cannot be determined uniquely for anyone object. In order to solve this indeterminacy, a criterion, a measure of performance for any possible prehensile configuration is introduced. The criterion should normally be defined based on the dynamics of the human hand and the manipulation task. However, for simplicity, the criterion is defined based only on the static configuration of the fingers, which is represented by vector y. We assumed that the central nervous system adopts a stable hand configuration to grasp an object, which corresponds to flexing the fingers as much as possible. The output of the DataGlove sensor decreases as finger flexion increases. Therefore, the criterion C l (y) is defined as follows: (1) where Yi represents the ith output of the sixteen DataGlove sensors. Minimizing the criterion C l (y) requires as much finger flexing as possible. 
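Since the criterion rewards flexing all fingers, and the DataGlove outputs decrease with flexion, a natural reading of (1), consistent with the λ/2 Σ y_j² term of the energy function defined in the next section, is the sum of squared sensor outputs. Treating that reading as an assumption:

import numpy as np

def c1(y):
    """C1(y) = sum_i y_i^2 over the 16 DataGlove outputs: since the
    outputs shrink as the fingers flex, minimizing C1 flexes the hand."""
    return float(np.sum(np.asarray(y) ** 2))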
Finding values of Yi (i = 1,2, ... , 16) so as to minimize C l (y) is an optimization problem Visual & Somatosensory Information for Preshaping Hand in Grasping Movements with constraints. In the optimization phase, the neural network can solve this optimization problem using a relaxation computation as follows. When an object is specified, the visual image x* of the object is input to the 1st layer as an input signal and given to the 5th layer as a reference signal. We call neurons in the 1st and the 5th layers which represent visual image x image neurons, and call neurons in the 1st and the 5th layers which represent finger configuration y hand neurons. Let us define the following energy function of the network. E(y) * = 21 ~ ~(Xi - I Xi) 2 1~ I 2 + 2 ~(Yj - Yj) + A' 2:1 ~_2 ~ Yj' j i (2) j Here, xi is the ith element of the image x* which is fed into the ith image neuron in the 1st layer, and x~ is the output of the ith image neuron in the 5th layer. Yj is the activity of the jth hand neuron in the 1st layer, and yj is the output of the jth hand neuron in the 5th layer. A is a positive regularization parameter which decreases gradually during the relaxation computation. The first term and the second term of equation (2) require that the network realizes the identity map between the input layer and the output layer as well as in the learning phase. This requirement guarantees that a hand whose configuration is specified by vector y can grasp an object whose visual image is x*. The third term of equation (2) represents the criterion C 1(y). In the optimization phase, the values of the synaptic weights are fixed. Instead. the hand neuron changes its state autonomously while obeying the following differential equation: dYk c ds aE = - aYk' k = 1,2, ... ,16. (3) Here. s is the relaxation time required for the state change of the hand neuron, and c is a positive time constant. The right-hand side of equation (3) can be transformed as follows: _ aE = ~(x; _ 8Yk ~ x~) ax~ + ~(yj ~ J 8Yk I _ y',) ay} + (Yk _ yk) (ay~ J 8Yk 8Yk 1) - AYk. (4) It is straightforward to show that the first three terms of equation (4) are the error signals at the kth hand neuron, which can be calculated backward from the output layer to the input layer. The fourth term of equation (4) is a suppressive signal which is given to the hand neuron by itself. When the state of the hand neuron obeys the differential equation (3), the time change E can be expressed as : dE ds =L dYk aE k ds aYk = -c L(d y k )2 < O. k ds (5) - Therefore, the energy function E always decreases and the network comes to the equilibrium state that is the (local) minimum energy state. The outputs of the hand neurons in the equilibrium state represent the solution of the optimization problem which corresponds to the most suitable finger configuration. The relaxation computation of the neural network was simulated. For example, when given the image of a circular cylinder whose diameter was 5 cm, the prehensile finger configuration was computed. After a hundred-thousand iterations for the relaxation computation, we had the results shown in Figure 3. The left sied shows the hand shape that had the minimum value of the criterion of all the training data recorded when the subject grasped a circular cylinder whose diameter was 5 cm. The right side shows the hand shape produced by relaxation computation. These two hand shapes were very similar, which indicated that the network reproduced hand shape by using relaxation computation. 
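In code, the optimization phase amounts to gradient descent on the hand-neuron states with all synaptic weights frozen, as in Eq. (3). The sketch below uses a finite-difference gradient for brevity, where the paper instead back-propagates the error signals of Eq. (4) through the fixed network; net_forward is any callable returning the 5th-layer image and hand outputs, and the step size, iteration count, and annealing schedule for the regularization parameter are assumptions.

import numpy as np

def energy(y, x_star, net_forward, lam):
    """E(y) of Eq. (2): two reconstruction terms plus the flexion criterion
    (applied here to the 1st-layer hand states for simplicity)."""
    x_out, y_out = net_forward(x_star, y)     # outputs of the 5th layer
    return (0.5 * np.sum((x_star - x_out) ** 2)
            + 0.5 * np.sum((y - y_out) ** 2)
            + 0.5 * lam * np.sum(y ** 2))

def relax(y0, x_star, net_forward, lam=1.0, lr=0.05, steps=1000, eps=1e-5):
    """Relaxation computation (Eq. (3)): descend E over the hand states."""
    y = y0.copy()
    for _ in range(steps):
        g = np.zeros_like(y)
        for k in range(y.size):               # finite-difference gradient
            e = np.zeros_like(y); e[k] = eps
            g[k] = (energy(y + e, x_star, net_forward, lam)
                    - energy(y - e, x_star, net_forward, lam)) / (2 * eps)
        y -= lr * g
        lam *= 0.999                          # gradually decrease lambda
    return y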
315 316 Uno, Fukumura, Suzuki, and Kawato Results of relaxation Trainnig Data Figure 3: Prehesile hand shapes for grasping a circular cylinder whose diameter was 5cm. 4 VARIOUS TYPES OF PREHENSIONS In the sections above, the subject was instructed to grasp objects using only one type of prehension. It is, however, thought that a human chooses different types of prehensions depending on the manipulation tasks. In order to investigate the dependence of the internal representation on the type of prehension, the second behavioral experiment was conducted. In this experiment, five circular cylinders and three spheres which were the same size as those in the first experiment were prepared. The subject was first instructed to grasp the objects tightly with his palm and all of his fingers, and then to grasp the same objects with only his fingertips. Iberall et al.(1988) referred to the first prehension and the second prehension as palm opposition and pad opposition, respectively. The subject grasped eight objects in two different types of prehensions twenty times each, which produced 320 prehensile patterns. Four neurons were set in the 3rd layer of the network and the network learning was simulated using these prehensile patterns as training data. Figure 4 shows the neuronal activation patterns formed in the 3rd layer after the network learning. Even if the grasped objects were the same, the neuron activation pattern for palm opposition was quite different from that for palm opposition. The neural network can reproduce different prehension, by introducing different criteria. Cl (y) is definded corresponding to palm opposition. Furthermore, we defind another criterion C2(Y), corresponding to pad opposition. i{MP,CM C(y) = 2: 2 idP yf + 2:(1.0 - Yj)2. (6) Minimizing the criterion C2 (y) demands that the MP joints (metacarpophalangeal joints) of the four fingers and the eM joint (carpometacarpal joint) of the thumb be flexed as much as possible and that the IP joints (interphalangeal joints) of all five fingers be stretched as much as possible. The relaxation computation of the neural network was simulated, when given the image of a sphere whose diameter was 5 cm. The results of the relaxation computation are shown in Figure 5. Adopting the different criteria, the neural network reproduced different prehensile hand configurations which corresponded to a) palm opposition and b) pad opposition. Visual & Somatosensory Information for Preshaping Hand in Grasping Movements Neuron activity 1.00 ?1 ~\. \. q Neuron activity 1.00 'Q\ , ,', 0 ~ . \ ,/ V 000 Neuron activity 100 9 ~ \ \\~/f\~ 0,00 e.-fl-, Neuron activity 100 Diameter ~ Diameter laD {> -0- - 4cm -0- Scm --0-- 6cm -0' 7cm - - 4cm I I I 0.00 A I 'A I I I I ~\ ~ ,100 ,1234 Neuron index of the 3rt! layer '1.oo_~..--~~ 1234 Neuron index of the 3rd layer a) Palm Opposition b) Pad Oppositon Circular cylinder ,1.00+--_ _ _..--. 1234 Neuron index of the 3rd layer 1234 Neuron index of the 3rd layer C) Palm Opposition d) Pad Oppositon Sphere Figure 4: Internal representations of grasped objects formed in the 3rd layer of the network_ Graphs a), b), c) and d) show the activation patterns of neurons for palm oppositions when grasping 5 circular cylinders, for pad oppositions when grasping 5 circular cylinders, for palm oppositions when grasping 3 spheres and for pad oppositions when grasping 3 spheres, respectively. See Figure 2 legend for description. 
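The pad-opposition criterion (6) is equally short to code. The index sets below, mapping the sixteen DataGlove channels to the MP/CM and IP joints, are illustrative placeholders rather than the actual channel assignment:

import numpy as np

MP_CM = list(range(0, 5))    # MP joints of the four fingers + thumb CM (placeholder)
IP = list(range(5, 16))      # remaining IP-joint channels (placeholder)

def c2(y):
    """Pad-opposition criterion C2(y) of Eq. (6): flex the MP/CM joints
    (small sensor outputs) while stretching the IP joints (outputs near 1)."""
    y = np.asarray(y)
    return float(np.sum(y[MP_CM] ** 2) + np.sum((1.0 - y[IP]) ** 2))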
5 DISCUSSION In view of the function of neurons in the posterior parietal association cortex, we have devised a neural network model for integrating visual and motor information. The proposed neural network model is an active sensing model, as it learns only when an object is successfully grasped. In this paper, tactile information is not treated, as the materials of the grasped objects are ,not considered for simplicity. We know that tactile information plays an important role in the recognition of grasped objects. The neural network model shown in Figure 1 can easily be developed so as to integrate visual, motor and tactile information. However, it is not clear how the internal representations of grasped objects is changed by adding tactile information. The critical problem in our neural network model is how many neurons should be set in the 3rd layer to represent the shapes of grasped objects. If there are too few neurons in the 3rd layer, the 3rd layer cannot represent enough information to restor x and y between the 3rd layer and the 5th layer; that is, the network cannot learn to realize the identity map between the input layer and the output layer. If there are too many neurons in the 3rd layer, the network cannot obtain useful representations of the grasped objects in the 3rd layer and the relaxation computation sometimes fails. In the present stage, we have no method to decide an adequate number of neurons for the 3rd layer. This is an important task for the future. Acknowledgements The main part ofthis study was done while the first author (Y.U.) was working at University of Tokyo. Y. Uno, N. Fukumura and R. Suzuki was supported by Japanese Ministry of 317 318 Uno, Fukumura, Suzuki, and Kawato Trainnig Data Result of relaxation a) Plam Opposition Training Data Result of relaxation b) Pad opposition Figure 5: Prehensile hand configuration a) for palm opposition and prehensile hand configuration b) for pad opposition when grasping a sphere whose diameter was 5 cm. The left sides show the hand shapes with the minimum values of the criterions for all training data recorded when the subject grasped a sphere whose diameter was 5 cm. The right sides show the hand shapes made by the relaxation computation. Education, Science and Culture Grants, NO.03251102 and No.03650338. M. Kawato was supported by Human Frontier Science Project Grant. References M. Jeannerod. (1984) The timing of natural prehension movements, J. Motor Behavior, 16: 235-254. M.A. Arbib, T. Iberall and D. Lyons. (1985) Coordinated control programs for movements of the hand. Hand Function and the Neocortex. Experimental Brain Research, suppl.l0, 111-129. N. Fukumura, Y. Uno, R. Suzuki andK. Kawato (1991) A neural network model which recognizes shape of a grasped object and decides hand configuration. Japan IEICE Technical Report, NC90-104: 213-218 (in Japanese). Katayama and M. Kawato (1990) Neural network model integrating visual and somatic information. J. Robotics Society of Japan, 8: 117-125 (in Japanese). T. Iberall (1998) A neural network for planning hand shapes in human prehension. proc. Automation and controls Con/.: 2288-2293. B. Irie and Kawato (1991) "Acquisition of Internal Representation by Multilayered Perceptrons." Electronics and Communications in Japan, Part 3, 74: 112-118. M. Taira, S. Mine, A.P. Georgopoulos, A. Murata and S. Sakata. (1990) Parietal cortex neurons of the monkey related to the visual guidance of hand movement. Exp. Brain Res., 83: 29-36.
A Universal Analysis of Large-Scale Regularized Least Squares Solutions

Ashkan Panahi
Department of Electrical and Computer Engineering
North Carolina State University
Raleigh, NC 27606
apanahi@ncsu.edu

Babak Hassibi
Department of Electrical Engineering
California Institute of Technology
Pasadena, CA 91125
hassibi@caltech.edu

Abstract

A problem that has been of recent interest in statistical inference, machine learning and signal processing is that of understanding the asymptotic behavior of regularized least squares solutions under random measurement matrices (or dictionaries). The Least Absolute Shrinkage and Selection Operator (LASSO, or least squares with ℓ1 regularization) is perhaps one of the most interesting examples. Precise expressions for the asymptotic performance of LASSO have been obtained for a number of different cases, in particular when the elements of the dictionary matrix are sampled independently from a Gaussian distribution. It has also been empirically observed that the resulting expressions remain valid when the entries of the dictionary matrix are independently sampled from certain non-Gaussian distributions. In this paper, we confirm these observations theoretically when the distribution is sub-Gaussian. We further generalize the previous expressions for a broader family of regularization functions and under milder conditions on the underlying random, possibly non-Gaussian, dictionary matrix. In particular, we establish the universality of the asymptotic statistics (e.g., the average quadratic risk) of LASSO with non-Gaussian dictionaries.

1 Introduction

During the last few decades, retrieving structured data from an incomplete set of linear observations has received enormous attention in a wide range of applications. This problem is especially interesting when the ambient dimension of the data is very large, so that it cannot be directly observed and manipulated. One of the main approaches is to solve regularized least-squares optimization problems that are tied to the underlying data model. This can be generally expressed as:

min_x (1/2) ‖y − Ax‖₂² + f(x),  (1)

where x ∈ R^n and y ∈ R^m are the desired data and observation vectors, respectively. The matrix A ∈ R^{m×n} is the sensing matrix, representing the observation process, and the regularization function f : R^n → R imposes the desired structure on the observed data. When f is convex, the optimization in (1) can be solved reliably with a reasonable amount of calculations. In particular, the case where f is the ℓ1 norm is known as the LASSO, which has been extremely successful in retrieving sparse data vectors.

During the past years, random sensing matrices A have been widely used and studied in the context of convex regularized least squares problems. From the perspective of data retrieval, this choice is supported by a number of studies in the so-called Compressed Sensing (CS) literature, which show that under reasonable assumptions, random matrices may lead to good performance [1, 2, 3]. An interesting topic, addressed in the recent compressed sensing literature (and also considered here), is to understand the behavior of the regularized least squares solution in the asymptotic case, where m and n grow to infinity with a constant ratio δ = m/n. For this purpose, a scenario is widely considered, where y is generated by the following linear model:

y = Ax₀ + ξ,  (2)

where x₀ is the true structured vector and ξ
is the noise vector, here assumed to consist of independent centered Gaussian entries, with equal variances ? 2 . Then, it is desired to characterize the statistical ? of (1), also called the estimate, and the error w = x ? ? x0 . More behavior of the optimal solution x ? and the error specifically, we are interested in the asymptotic empirical distribution1 of the estimate x w, when the sensing matrix is also randomly generated with independent and identically distributed entries. Familiar examples of such matrices are Gaussian and Bernoulli matrices. 1.1 Previous Work Analyzing linear least squares problems with random matrices has a long history. The behavior of the unregularized solution, or that of ridge regression (i.e., `2 ?regularization) is characterized by the singular values of A which is well-understood in random matrix theory [4, 5]. However, a general study of regularized solutions became prominent with the advent of compressed sensing, where considerable effort has been directed toward an analysis in the sense explained above. In compressed sensing, early works focused on the LASSO (`1 regularization), sparse vectors x0 and the case, where ? = 0 [6]. These works aimed at providing conditions to guarantee perfect recovery, meaning w = 0, and established the Restricted Isometry Property (RIP) as a deterministic perfect recovery condition. This condition is generally difficult to verify [7, 8]. It was immediately observed that under mild conditions, random matrices satisfy the RIP condition with high probability, when the dimensions grow to infinity with a proper ratio ? [9, 10]. Soon after, it was discovered that the RIP condition was unnecessary to undertake the analysis for random matrices. In [11], an "RIP-less" theory of perfect recovery was introduced. Despite some earlier attempts [12, 13], a successful error analysis of the LASSO for Gaussian matrices was not obtained until the important paper [14], where it was shown by the analysis of so-called approximate message passing (AMP) that for any pseudo Lipschitz function ?; R2 ? R, ? and x0 , respectively, the sample risk and defining x ?i , xi0 as the ith elements of x n 1X ?(? xi , xi0 ) n k=1 converges to a value that can be precisely computed. As a special case, the asymptotic value of the scaled `2 norm of the error w is calculated by taking ?(? xi , xi0 ) = (? xi ? xi0 )2 . In [15], similar results are obtained for M-estimators using AMP. Fundamental bounds for linear estimation with Gaussian matrices are also recently provided in [16]. Another remarkable direction of progress was made in a series of papers, revolving around an approach, first developed by Gordon in [17], and introduced to the compressed sensing literature by Stojnic in [18]. Employing Gordon?s approach, [19] provides the analysis of a broad range of convex regularized least squares problems for Gaussian sensing matrices. Exact expressions are provided in this work only for asymptotically small noise. In [20] this result is utilized to provide the exact analysis of the LASSO for general noise variance, confirming the earlier results in [14]. Some further investigations are recently provided in [21] and [22]. When there is no measurement noise, universal (non-Gaussian) results on the phase transition for the number of measurements that allows perfect recovery of the signal have been recently obtained in [23]. Another special case of ridge (`2 ) regression is studied in [24]. The technical approach in [23] is different from ours. 
Furthermore, the current paper considers measurement noise and is concerned with the performance of the algorithm and not on the phase transitions for perfect recovery. In [25], the so-called Lindeberg approach is proposed to study the asymptotic behavior of the LASSO. This is similar in spirit to our approach. However, the study in [25] is more limited than ours, in the sense that it only establishes universality of the expected value of the optimal cost when the LASSO is restricted to an arbitrary rectangular ball. Some stronger bounds on the error risk of LASSO are 1 Empirical distribution of a vector x is a measure ? where ?(A) is the fraction of entries in x valued in A. 2 established in [23, 26], which are sharp for asymptotically small noise or large m. However, to the best of our knowledge, there have not been any exact universal results of the generality as ours in the literature. It is also noteworthy that our results can be predicted by the replica symmetry (RS) method as partially developed in [27]. Another recent area where the connection of RS and performance of estimators has been rigorously established is low rank matrix estimation [28, 29] 2 Main Results Our contributions are twofold: First, we generalize the expressions in [21] and [20] for a more general case of arbitrary separable regularization functions f (x) where with an abuse of notation n X f (x) = f (xi ) (3) i=1 and the function f on the right hand side is a real function f (x) : R ? R. Second, we show that our result is universal, which precisely means that our expressions are independent of the distribution (law) of the i.i.d sensing matrix. In general, we tie the asymptotic behavior of the optimization in (1) to the following two-dimensional optimization, which we refer to as the essential optimization:     ?? 2 p?(? ? 1) ?? 2 ? ? + ? + E Sf , p? + X , (4) Cf (?, ?) = max min ??0 p>0 2 2p 2 p where X and ? are two independent random variables, distributed by an arbitrary distribution ? and standard Gaussian p.d.f, respectively. Further, Sf (. , . ) denotes the proximal function of f , which is defined by q (5) Sf (q, y) = min (x ? y)2 + f (x). x 2   ? ?) of (4) is unique, with the minimum located at x ?f (q, y). If the solution p? = p?(?, ?), ?? = ?(?, then we define the random variables ! ?? ? ? ? ?X X = Xf,?,?,? = x ?f , p?? + X , W = X p? Our result can be expressed by the following theorem: Theorem 1 Suppose that the entries of A are first generated independently by a proper distribution2 ? ? and next scaled by 1/ m. Moreover, assume that the true vector x0 is randomly generated and has i.i.d. entries with some distribution ?. Then, ? The optimal cost in (1), scaled by n1 , converges in probability to Cf (?, ?), ? and the error w weakly converge to the ? The empirical distributions of the solution x ? and W , respectively, distribution of X if one of the following holds: 1. The real function f is strongly convex. 2. The real function f equals ?|x| for some ? > 0, ? is further ?s2 -sub-Gaussian3 and the ? 6= 0) is smaller than a constant depending on ?, ?, ?, ?. "effective sparsity" M0 = Pr(X For example, M0 ? ?/2 works where   9 1 8?s2 ? log 9 + H(?) ? min 1, 2 + log 8?s 2 9 and H(?) = ?? log ? ? (1 ? ?) log(1 ? ?) is the binary entropy function4 . 2 Here, a proper distribution is the one with vanishing first, third and fifth moments, unit variance and finite fourth and sixth moments. A centered random variable Z is ?s2 -sub-Gaussian if E(erZ ) ? e 4 In this paper, all logarithms are to natural base (e). 
3 3 2 r2 ?s 2 holds for every r ? R. We include more detailed and general results, as well as the proofs in the supplementary material. In the rest of this paper, we discuss the consequences of Theorem 1, especially for the case of the LASSO, and give a sketch of the proof of our results. 3 Remarks and Numerical Results In this section, we discuss few issues arising from our analysis. 3.1 Evaluation of Asymptotic Values A crucial question related to Theorem 1 and the essential optimization is how to calculate the optimal parameters in (4). Here, our purpose is to provide a simple instruction for solving the optimization in (4). Notice that (4) is a min-max optimization over the pair (p, ?) of real positive numbers. We observe that there exists an appealing structure in this optimization, which substantially simplifies its numerical solution: Theorem 2 For any fixed ? > 0, the objective function in (4) is convex over p. For any fixed ?, denote the optimal value of the inner optimization (over p) of (4) by ?(?). Then, ? is a concave function of ?. Using Theorem 2, we may reduce the problem of solving (4) into a sequence of single dimensional convex optimization problems (line searches). We assume a derivative-free5 algorithm alg(?), such as dichotomous search (See the supplement for more details), which receives as an input (an oracle of) a convex function ? and returns its optimal value and its optimal point over [0 ?). Denote the cost function of (4) by ?(p, ?). This means that ?(?) = min ?(p, ?). If ?(p, ?) is easy to calculate, p we observe that alg(?(p, ?)) for a fixed ? is an oracle of ?(?). Since ?(?) is now easy to calculate we may execute alg(?(?)) to obtain the optimal parameters. 3.1.1 Derivation for LASSO To apply the above technique, we require a fast method to evaluate the objective function in (4). Here, we provide the expressions for the case of LASSO with f (x) = ?|x|, which is originally formulated in [30]. For this case, we assume that the entries of the true vector x0 are non-zero and standard Gaussian with probability 0 ? ? ? 1. In other words, ? = ?N + (1 ? ?)?0 , where N and ?0 are standard Gaussian and the Dirac measures on R, respectively. Then, we have that    p ? ?p E Sf , p? + X = ? 1 + p2 F ( 1 + p2 ) + (1 ? ?)pF (?) p p where 2   ?? ?2 q ?e 2q2 q ? 1 + 2 Q( ) + F (q) = ? ? 2 q q 4 2 2? The function Q(. ) is the Gaussian tail Q-function. We may replace the above expression in the ? W in Section 2. Now, definition of essential optimization to obtain p?, ?? and the random variables X, 2 let us calculate kwk2 /n by taking expectation over empirical distribution of w. Using Theorem 1, we obtain the following term for the asymptotic value of kwk22 /n:     ?p ?p , p, 1 + (1 ? ?)J , p, 0 E(W 2 ) = ?J ? ? where 2 2 2 J(, p, ?) = ? + 2 p +  ? ? 2  Q  p ? 2 + p2 ! r ? 2   ? 2 + p2 2 exp ? 2? 2(?2 + p2 ) Figure 1a depicts the average value kwk22 /n over 50 independent realizations of the LASSO, including independent Gaussian sensing matrices with ? = 0.5, sparse true vectors with ? = 0.2 and Gaussian 5 It is also possible to use the derivative-based algorithms, but it requires to calculate the derivatives of Sf and ?. We do not study this case. 4 -3 0.22 3 x 10 Variance of Squared Error 0.2 0.18 MSE 0.16 0.14 0.12 0.1 Theoretical Empirical n=200 Empirical n=500 0.08 0.06 0 1 2 3 ? 4 2.5 2 1.5 1 0.5 n=200 n=500 0 0 5 1 2 (a) ? 3 4 5 (b) Figure 1: a)The sample mean of the quadratic risk for different values of ?, compared to its theoretical value. 
The average is taken over 50 trials. b) The sample variance of the quadratic risk for different values of ?. The average is taken over 1000 trials. 0.25 Error L2 norm Effective Sparsity Error L1 norm MSE and Sparsity 0.2 0.15 0.1 0.05 0 0 0.5 1 1.5 2 2.5 3 3.5 Figure 2: Asymptotic error `1 and squared `2 norms, as well as the solution sparsity. Their corresponding optimal ? values are depicted by vertical lines. noise realizations with ? 2 = 0.1. We consider two different problem sizes n = 200, 500. As seen, the sample mean, which approximates the statistical mean E(kwk22 /n), agrees with the theoretical results above. Figure 1b examines the convergence of the error 2-norm by depicting the sample variance of kwk22 /n for the two cases above with n = 200, 500. Each data point is obtained by 1000 independent realizations. As seen, the case n = 500 has a smaller variance, which indicates that as dimensions grow the variance of the quadratic risk vanishes and it converges in probability to its mean. Another interesting phenomenon in Figure 1b is that the larger values of ? are associated with larger uncertainty (variance), especially for smaller problem sizes. The asymptotic analysis allows us to decide an optimal value of the regularization parameter ?. Figure 2 shows few possibilities. It depicts the theoretical values for the error squared `2 and `1 norms as well as the sparsity of the solution. The (effective6 ) sparsity can be calculated as !   ? ?p ? 6= 0) = 2(1 ? ?)Q p M0 = Pr(X + 2?Q ? ? 1 + p2 The expression for the `1 norm can be calculated similar to the `2 norm, but does not have closed form and is calculated by a Monte Carlo method. We observe that at the minimal error, both in `2 and `1 senses, the solution is sparser than the true vector (? = 0.2). On the contrary, adjusting the sparsity to the true one slightly increases the error `2 norm. As expected, the sparsity of the solution decreases monotonically with increasing ?. 6 Since we establish weak convergence of the empirical distribution, this number does not necessarily reflect ? , but rather the "infinitesimally" small ones. the number of exactly zero elements in x 5 0.2 10-3 Variance of Squared Error Average L2 Error 3 0.18 0.16 Theoretical t-distribution =3, n=200 t-distribution =3, n=500 Bernoulli Asymmetric Bernoulli 0.14 0.12 0 1 2 3 4 2.5 2 1.5 1 0 5 Bernoulli, n=200 t-distribution, n=500 t-distribution, n=200 Asymmetric Bernoulli 0.5 0 (a) 1 2 3 4 5 (b) Figure 3: (a) The average LASSO error `2 norm (b) the sample variance of LASSO error `2 norm for different matrices. 3.2 Universality and Heavy-tailed Distributions In the previous section, we demonstrated numerical results, which were generated by Gaussian matrices. Here, we focus on universality. In Section 2, our results are under some regularity assumptions for the sensing matrix. For the LASSO, we require sub-Gaussian entries and low sparsity, which is equivalent to a large regularization value. Here, we examine these conditions in three cases: First, a centered Bernoulli matrix where each entry ?1 or 1 with probability 1/2. Second, a matrix distributed by Student?s t-distribution with 3 degrees of freedom ? = 3 and scaled to possess unit variance. Third, an asymmetric Bernoulli matrix where each entry is either 3 or ?1/3 with probabilities 0.1 and 0.9, respectively. Figure 3 shows the error `2 norm and its variance for the LASSO case. As seen, all cases follow the predicted asymptotic result. 
Figure 3: (a) The average LASSO error ℓ2 norm and (b) the sample variance of the LASSO error ℓ2 norm for different sensing matrices.

3.2 Universality and Heavy-tailed Distributions

In the previous section, we demonstrated numerical results, which were generated by Gaussian matrices. Here, we focus on universality. In Section 2, our results are stated under some regularity assumptions on the sensing matrix. For the LASSO, we require sub-Gaussian entries and low sparsity, which is equivalent to a large regularization value. Here, we examine these conditions in three cases: First, a centered Bernoulli matrix where each entry is −1 or 1 with probability 1/2. Second, a matrix distributed by Student's t-distribution with 3 degrees of freedom (ν = 3) and scaled to possess unit variance. Third, an asymmetric Bernoulli matrix where each entry is either 3 or −1/3, with probabilities 0.1 and 0.9, respectively. Figure 3 shows the error ℓ2 norm and its variance for the LASSO case. As seen, all cases follow the predicted asymptotic result. However, the results for the t-matrix and the asymmetric distribution are beyond our analysis, since the t-distribution does not possess finite statistical moments of an order larger than 2, and the asymmetric case possesses a non-vanishing third moment. This indicates that our universality results hold beyond the limits assumed in this paper. However, we are not able to prove this with our current technique.

3.3 Remarks on More General Universality Results

As we explain in the supplementary document, Theorem 1 is specialized from a more general result. Here, we briefly discuss the main aspects of our general result. When the regularization is not separable (f is non-separable), our analysis may still guarantee universality of its behavior. However, we are not able to evaluate the asymptotic values anymore. Instead, we relate the behavior of a general sensing matrix to a reference choice, e.g. a Gaussian matrix. For example, we are able to show that if the optimal objective value in (1) converges to a particular value for the reference matrix, then it converges to exactly the same value for other suitable matrices in Theorem 1. The asymptotic optimal value may remain unknown to us. The universality of the optimal value holds for a much broader family of regularizations than the separable ones in (3). For example, "weakly separable" functions of the form

f(x) = (1/n) Σ_{i,j} f(x_i, x_j),

or the generalized fused LASSO [31, 32], are simply seen to possess universal optimal values. One important property of our generalized result is that if we are able to establish optimal-value universality for a particular regularization function f(x), then we automatically have a similar result for f(Γx), where Γ is a fixed matrix satisfying certain regularity conditions (more precisely, we require Γ to have a strictly positive smallest singular value and a bounded third operator norm). This connects our analysis to the analysis of the generalized LASSO [33, 34]. Moreover, substituting f(Γx) in (1) and changing the optimization variable to x′ = Γx, we obtain (1) where A is replaced by AΓ⁻¹. Hence, our approach enables us to obtain further results on sensing matrices of the form AΓ⁻¹, where A is i.i.d. and Γ is deterministic. We postpone a more careful analysis to future papers.

It is worth mentioning that we obtain Theorem 1 about the separable functions in light of the same principle: we simply connect the behavior of the error for an arbitrary matrix to the Gaussian one. In this particular case, we are able to carry out the calculations over Gaussian matrices with well-known techniques, developed for example in [18] and briefly explained below.
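The change of variables above is easy to verify numerically. The following sketch (with illustrative stand-in matrices) checks that the generalized objective with f(Γx) equals the standard objective after substituting x′ = Γx and A → AΓ⁻¹.

```python
# Numeric check of the change of variables: with x' = Gamma @ x, the problem
#   min 0.5||y - A x||^2 + lam*||Gamma x||_1
# becomes the standard problem
#   min 0.5||y - (A @ inv(Gamma)) x'||^2 + lam*||x'||_1.
import numpy as np

rng = np.random.default_rng(1)
m, n, lam = 30, 20, 0.3
A = rng.standard_normal((m, n)) / np.sqrt(n)
Gamma = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # assumed invertible
y = rng.standard_normal(m)

def obj_generalized(x):
    return 0.5 * np.sum((y - A @ x) ** 2) + lam * np.sum(np.abs(Gamma @ x))

def obj_standard(xp):                                    # xp plays the role of x' = Gamma x
    B = A @ np.linalg.inv(Gamma)
    return 0.5 * np.sum((y - B @ xp) ** 2) + lam * np.sum(np.abs(xp))

x = rng.standard_normal(n)
assert np.isclose(obj_generalized(x), obj_standard(Gamma @ x))
```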
4 Technical Discussion

4.1 An Overview of Approach

In this section, we present a crude sketch of our mathematical analysis. Our aim is to show the main ideas without getting involved in mathematical subtleties. There are four main elements in our analysis, which we address in the following.

4.1.1 From Optimal Cost to the Characteristics of the Optimal Point

In essence, we study the optimal values of optimizations such as the one in (1). Studying the optimal solution directly is much more difficult. Hence, we employ an indirect method, where we connect an arbitrary real-valued characteristic (function) g of the optimal point to the optimal values of a set of related optimizations. This is possible through the following simple observation:

Lemma 1 Suppose we are to minimize a convex function φ(x) on a convex domain D, and suppose that x* is a minimal solution. Further, let g(x) be such that the function φ + εg remains convex when ε is in a symmetric interval [−e, e]. Define Φ(ε) as the minimal value of φ + εg on D. Then, Φ(ε) is concave on [−e, e] and g(x*) is a subgradient of Φ at ε = 0.

As a result of Lemma 1, the increments (Φ(ε) − Φ(0))/ε and (Φ(0) − Φ(−ε))/ε for positive values of ε provide lower and upper bounds for g(x*), respectively. We use these bounds to prove convergence in probability. However, Lemma 1 requires φ + εg to remain convex for both positive and negative values of ε. It is now simple to see that choosing a strongly convex function for f and a convex function with a bounded second derivative for g ensures this requirement.
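The following toy sketch illustrates Lemma 1 numerically, under assumed synthetic data: we perturb a LASSO objective by εg with g(x) = ||x||², solve the three perturbed problems with a generic proximal-gradient (ISTA) routine, and read off lower and upper bounds on g(x*) from the increments of Φ. The solver is a standard sketch, not the paper's procedure.

```python
# Lemma 1 in action: Phi(eps) = min_x 0.5||y-Ax||^2 + lam*||x||_1 + eps*||x||^2,
# whose one-sided increments at eps = 0 bound g(x*) = ||x*||^2.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 80, 40, 0.2
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ (rng.standard_normal(n) * (rng.random(n) < 0.2)) + 0.1 * rng.standard_normal(m)

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def solve(eps, iters=5000):
    """ISTA for the eps-perturbed LASSO; returns (optimal value, minimizer)."""
    L = np.linalg.norm(A, 2) ** 2 + 2 * abs(eps)      # Lipschitz bound for the smooth part
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + 2 * eps * x
        x = soft(x - grad / L, lam / L)
    val = 0.5 * np.sum((y - A @ x) ** 2) + lam * np.sum(np.abs(x)) + eps * np.sum(x ** 2)
    return val, x

eps = 1e-3
phi_plus, _ = solve(+eps)
phi_zero, x_star = solve(0.0)
phi_minus, _ = solve(-eps)
g_star = np.sum(x_star ** 2)
lower, upper = (phi_plus - phi_zero) / eps, (phi_zero - phi_minus) / eps
print(lower, "<=", g_star, "<=", upper)   # concavity of Phi gives the sandwich
```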
4.1.2 Lindeberg's Approach

With the above approach, we only need to focus on the universality of the optimal values. To obtain universal results, we adopt the method that Lindeberg famously developed to obtain a strong version of the central limit theorem [35]. Lindeberg's approach requires a reference case, where asymptotic properties are simple to deduce. Then, similar results are proved for an arbitrary case by considering a finite chain of intermediate problems, starting from the reference case and ending at the desired case. In each step, we are able to analyze the change in the optimal value and show that the sum of these changes cannot be substantial for asymptotically large cases. In our study, we take the optimization in (1) with a Gaussian matrix A as the reference. In each step of the chain, we replace one Gaussian row of A with another one with the target distribution. After m steps, we arrive at the desired case. At each step, we analyze the change in the optimal value by a Taylor expansion, which shows that the change is of second order and is o(1/m) (in fact O(1/m^{5/4})) with high probability, such that the total change is bounded by o(1). For this, we require strong convexity and bounded third derivatives. This shows universality of the optimal value.

4.1.3 Asymptotic Results for Gaussian Matrices

Since we take Gaussian matrices as the reference in Lindeberg's approach, we require a different machinery to analyze the Gaussian case. The analysis of (1) for Gaussian matrices is considered in [19]. Here, we briefly review this approach and specialize it to some particular cases. Let us start by defining the following so-called Key optimization, associated with (1):

φ_n(g, x_0) = max_{β>0} min_{v∈R^n} [ β √(m/n) √(σ² + ||v||²₂/n) − β gᵀv/n − mβ²/(2n) + f_n(v + x_0)/n ],   (6)

where g is an n-dimensional standard Normal random vector, independent of the other variables. Then, [19] shows that if A is generated by a standard Gaussian random variable and φ_n(g, x_0) converges in probability to a value C, then the optimal value in (1) also converges to C. The consequences of this observation are thoroughly discussed in [20]. Here, we focus on the case where f(x) is separable as in (3). For this case, the Key optimization in (6) can be simplified and stated as in the following theorem (see [30]).

Theorem 3 Suppose that A is generated by a Gaussian distribution, x_0 is i.i.d. with distribution π, and f(x) is separable as in (3). Furthermore, m/n → δ ∈ R_{≥0}. Then, the optimal value of the optimization in (1) converges in probability to C_f(δ, π), defined in Section 2.

Now, we may put the above steps together to obtain the desired result for strongly convex functions: Lindeberg's approach shows that the optimal cost is universal. On the other hand, the optimal cost for Gaussian matrices is given by C_f(δ, π). We conclude that C_f(δ, π) is the universal limit of the optimal cost. Now, we may use the argument in Lemma 1 to obtain a characteristic g of the optimal point. For this, we may take regularizations of the form f + εg, which by the previous discussion converge to C_{f+εg}. Then, g(x̂) becomes equal to dC_{f+εg}/dε at ε = 0, which by further calculations leads to the result in Theorem 1. (The expression for g(w) is found in a similar way, but requires some mathematical preparations, which we present later.)

4.1.4 Final Step: The LASSO

The above argument fails for the LASSO with f(x) = λ|x| because it lacks strong convexity. Our remedy is to start from an "augmented approximation" of the LASSO with f(x) = λ|x| + εx²/2 and to show that the solution of the approximation is stable, in the sense that removing the term εx²/2 does not substantially change the optimal point. We employ a slightly modified argument from [12], which requires two assumptions: a) the solution is sparse; b) the matrix A is sufficiently restricted-isometric. The condition on restricted isometry is satisfied by assuming sub-Gaussian distributions [36], while the sparsity of the solution is given by M_0. The assumption that M_0 is sufficiently small allows the argument in [12] to hold in our case, which ensures that the LASSO solution remains close to the solution of the augmented LASSO, and the claims of Theorem 1 can be established for the LASSO. However, we are able to show that the optimal value of the LASSO is close to that of the augmented LASSO without any requirement of sparsity. This can be found in the supplementary material.

5 Conclusion

The main purpose of this study was to extend the existing results about convex regularized least squares problems in two different directions, namely more general regularization functions and non-Gaussian sensing matrices. In the first direction, we tied the asymptotic properties for general separable convex regularization functions to a two-dimensional optimization that we called the essential optimization. We also provided a simple way to calculate asymptotic characteristics of the solution from the essential optimization. In the second direction, we showed that the asymptotic behavior of regularization functions with certain regularity conditions is independent of the distribution (law) of the sensing matrix. We presented a few numerical experiments which validated our results. However, these experiments suggest that the universality of the asymptotic behavior holds beyond our assumptions.

5.1 Future Research

After establishing the convergence results, a natural further question is the rate of convergence. The properties of regularized least squares solutions of finite size are not well-studied even for Gaussian matrices. Another interesting subject for future research is to consider random sensing matrices which are not necessarily identically distributed. We believe that our technique can be generalized to a case with independent rows or columns instead of elements. A similar generalization can be obtained by considering true vectors with a different structure. Moreover, we introduced a number of cases, such as the generalized LASSO [34] and the generalized fused LASSO [32], where our analysis shows universality but the asymptotic performance cannot be calculated. Calculating the asymptotic values of these problems for a reference choice, such as Gaussian matrices, is an interesting subject of future study.

References

[1] E. J. Candes and T.
Tao, ?Near-optimal signal recovery from random projections: Universal encoding strategies?,? Information Theory, IEEE Transactions on, vol. 52, no. 12, pp. 5406? 5425, 2006. [2] D. L. Donoho, ?For most large underdetermined systems of linear equations the minimal `1 snorm solution is also the sparsest solution,? Communications on pure and applied mathematics, vol. 59, no. 6, pp. 797?829, 2006. [3] Y. C. Eldar and G. Kutyniok, Compressed sensing: theory and applications. Cambridge University Press, 2012. [4] Z. Bai, ?Methodologies in spectral analysis of large dimensional random matrices, a review,? Statistica Sinica, pp. 611?662, 1999. [5] Z. Bai and J. W. Silverstein, Spectral analysis of large dimensional random matrices, vol. 20. Springer, 2010. [6] D. L. Donoho, ?Compressed sensing,? Information Theory, IEEE Transactions on, vol. 52, no. 4, pp. 1289?1306, 2006. [7] S. S. Chen, D. L. Donoho, and M. A. Saunders, ?Atomic decomposition by basis pursuit,? SIAM journal on scientific computing, vol. 20, no. 1, pp. 33?61, 1998. [8] E. J. Candes and T. Tao, ?Decoding by linear programming,? Information Theory, IEEE Transactions on, vol. 51, no. 12, pp. 4203?4215, 2005. [9] E. J. Cand?s, ?The restricted isometry property and its implications for compressed sensing,? Comptes Rendus Mathematique, vol. 346, no. 9, pp. 589?592, 2008. [10] R. G. Baraniuk, ?Compressive sensing,? IEEE signal processing magazine, vol. 24, no. 4, 2007. [11] E. J. Candes and Y. Plan, ?A probabilistic and ripless theory of compressed sensing,? Information Theory, IEEE Transactions on, vol. 57, no. 11, pp. 7235?7254, 2011. [12] E. J. Candes, J. K. Romberg, and T. Tao, ?Stable signal recovery from incomplete and inaccurate measurements,? Communications on pure and applied mathematics, vol. 59, no. 8, pp. 1207? 1223, 2006. [13] D. L. Donoho, M. Elad, and V. N. Temlyakov, ?Stable recovery of sparse overcomplete representations in the presence of noise,? Information Theory, IEEE Transactions on, vol. 52, no. 1, pp. 6?18, 2006. [14] M. Bayati and A. Montanari, ?The lasso risk for gaussian matrices,? Information Theory, IEEE Transactions on, vol. 58, no. 4, pp. 1997?2017, 2012. [15] D. Donoho and A. Montanari, ?High dimensional robust m-estimation: Asymptotic variance via approximate message passing,? Probability Theory and Related Fields, vol. 166, no. 3-4, pp. 935?969, 2016. [16] J. Barbier, M. Dia, N. Macris, and F. Krzakala, ?The mutual information in random linear estimation,? in Communication, Control, and Computing (Allerton), 2016 54th Annual Allerton Conference on, pp. 625?632, IEEE, 2016. [17] Y. Gordon, On Milman?s inequality and random subspaces which escape through a mesh in R n. Springer, 1988. [18] M. Stojnic, ?A framework to characterize performance of lasso algorithms,? arXiv preprint arXiv:1303.7291, 2013. [19] S. Oymak, C. Thrampoulidis, and B. Hassibi, ?The squared-error of generalized lasso: A precise analysis,? in Communication, Control, and Computing (Allerton), 2013 51st Annual Allerton Conference on, pp. 1002?1009, IEEE, 2013. [20] C. Thrampoulidis, A. Panahi, D. Guo, and B. Hassibi, ?Precise error analysis of the \`2 -lasso,? arXiv preprint arXiv:1502.04977, 2015. [21] C. Thrampoulidis, E. Abbasi, and B. Hassibi, ?The lasso with non-linear measurements is equivalent to one with linear measurements,? Advances in Neural Information Processing Systems, 2015. 9 [22] C. Thrampoulidis, E. Abbasi, and B. Hassibi, ?Precise error analysis of regularized m-estimators in high-dimension,? 
arXiv preprint arXiv:1601.06233, 2016. [23] S. Oymak and J. A. Tropp, ?Universality laws for randomized dimension reduction, with applications,? arXiv preprint arXiv:1511.09433, 2015. [24] N. E. Karoui, ?Asymptotic behavior of unregularized and ridge-regularized high-dimensional robust regression estimators: rigorous results,? arXiv preprint arXiv:1311.2445, 2013. [25] A. Montanari and S. B. Korada, ?Applications of lindeberg principle in communications and statistical learning,? tech. rep., 2010. [26] N. Zerbib, Y.-H. Li, Y.-P. Hsieh, and V. Cevher, ?Estimation error of the lasso,? tech. rep., 2016. [27] Y. Kabashima, T. Wadayama, and T. Tanaka, ?A typical reconstruction limit for compressed sensing based on lp-norm minimization,? Journal of Statistical Mechanics: Theory and Experiment, vol. 2009, no. 09, p. L09003, 2009. [28] J. Barbier, M. Dia, N. Macris, F. Krzakala, T. Lesieur, and L. Zdeborov?, ?Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula,? in Advances in Neural Information Processing Systems, pp. 424?432, 2016. [29] M. Lelarge and L. Miolane, ?Fundamental limits of symmetric low-rank matrix estimation,? arXiv preprint arXiv:1611.03888, 2016. [30] C. Thrampoulidis, A. Panahi, and B. Hassibi, ?Asymptotically exact error analysis for the generalized equation-lasso,? in 2015 IEEE International Symposium on Information Theory (ISIT), pp. 2021?2025, IEEE, 2015. [31] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight, ?Sparsity and smoothness via the fused lasso,? Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 1, pp. 91?108, 2005. [32] B. Xin, Y. Kawahara, Y. Wang, and W. Gao, ?Efficient generalized fused lasso and its application to the diagnosis of alzheimer?s disease.,? in AAAI, pp. 2163?2169, Citeseer, 2014. [33] J. Liu, L. Yuan, and J. Ye, ?Guaranteed sparse recovery under linear transformation.,? in ICML (3), pp. 91?99, 2013. [34] R. J. Tibshirani, J. E. Taylor, E. J. Candes, and T. Hastie, The solution path of the generalized lasso. Stanford University, 2011. [35] J. W. Lindeberg, ?Eine neue herleitung des exponentialgesetzes in der wahrscheinlichkeitsrechnung,? Mathematische Zeitschrift, vol. 15, no. 1, pp. 211?225, 1922. [36] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, ?A simple proof of the restricted isometry property for random matrices,? Constructive Approximation, vol. 28, no. 3, pp. 253?263, 2008. 10
Deep Sets

Manzil Zaheer¹,², Satwik Kottur¹, Siamak Ravanbakhsh¹, Barnabás Póczos¹, Ruslan Salakhutdinov¹, Alexander J Smola¹,²
¹ Carnegie Mellon University  ² Amazon Web Services
{manzilz,skottur,mravanba,bapoczos,rsalakhu,smola}@cs.cmu.edu

Abstract

We study the problem of designing models for machine learning tasks defined on sets. In contrast to the traditional approach of operating on fixed dimensional vectors, we consider objective functions defined on sets that are invariant to permutations. Such problems are widespread, ranging from the estimation of population statistics [1], to anomaly detection in piezometer data of embankment dams [2], to cosmology [3, 4]. Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong. This family of functions has a special structure which enables us to design a deep network architecture that can operate on sets and which can be deployed on a variety of scenarios including both unsupervised and supervised learning tasks. We also derive the necessary and sufficient conditions for permutation equivariance in deep models. We demonstrate the applicability of our method on population statistic estimation, point cloud classification, set expansion, and outlier detection.

1 Introduction

A typical machine learning algorithm, like regression or classification, is designed for fixed dimensional data instances. Extensions to handle the case when the inputs or outputs are permutation invariant sets rather than fixed dimensional vectors are not trivial, and researchers have only recently started to investigate them [5-8]. In this paper, we present a generic framework to deal with the setting where input and possibly output instances in a machine learning task are sets.

Similar to fixed dimensional data instances, we can characterize two learning paradigms in the case of sets. In supervised learning, we have an output label for a set that is invariant or equivariant to the permutation of set elements. Examples include tasks like estimation of population statistics [1], where applications range from giga-scale cosmology [3, 4] to nano-scale quantum chemistry [9]. Next, there can be the unsupervised setting, where the "set" structure needs to be learned, e.g. by leveraging the homophily/heterophily tendencies within sets. An example is the task of set expansion (a.k.a. audience expansion), where given a set of objects that are similar to each other (e.g. the set of words {lion, tiger, leopard}), our goal is to find new objects from a large pool of candidates such that the selected new objects are similar to the query set (e.g. find words like jaguar or cheetah among all English words). This is a standard problem in similarity search and metric learning, and a typical application is to find new image tags given a small set of possible tags. Likewise, in the field of computational advertisement, given a set of high-value customers, the goal would be to find similar people. This is an important problem in many scientific applications, e.g. given a small set of interesting celestial objects, astrophysicists might want to find similar ones in large sky surveys.

Main contributions. In this paper, (i) we propose a fundamental architecture, DeepSets, to deal with sets as inputs and show that the properties of this architecture are both necessary and sufficient (Sec. 2).

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
(ii) We extend this architecture to allow for conditioning on arbitrary objects, and (iii) based on this architecture we develop a deep network that can operate on sets with possibly different sizes (Sec. 3). We show that a simple parameter-sharing scheme enables a general treatment of sets within supervised and semi-supervised settings. (iv) Finally, we demonstrate the wide applicability of our framework through experiments on diverse problems (Sec. 4).

2 Permutation Invariance and Equivariance

2.1 Problem Definition

A function f transforms its domain X into its range Y. Usually, the input domain is a vector space R^d and the output response range is either a discrete space, e.g. {0, 1} in case of classification, or a continuous space R in case of regression. Now, if the input is a set X = {x_1, ..., x_M}, x_m ∈ X, i.e., the input domain is the power set X = 2^X, then we would like the response of the function to be "indifferent" to the ordering of the elements. In other words,

Property 1 A function f: 2^X → Y acting on sets must be permutation invariant to the order of objects in the set, i.e. for any permutation π: f({x_1, ..., x_M}) = f({x_{π(1)}, ..., x_{π(M)}}).

In the supervised setting, given N examples of X^(1), ..., X^(N) as well as their labels y^(1), ..., y^(N), the task would be to classify/regress (with a variable number of predictors) while being permutation invariant w.r.t. the predictors. Under the unsupervised setting, the task would be to assign high scores to valid sets and low scores to improbable sets. These scores can then be used for set expansion tasks, such as image tagging or audience expansion in the field of computational advertisement. In the transductive setting, each instance x_m^(n) has an associated label y_m^(n). Then, the objective would instead be to learn a permutation equivariant function f: X^M → Y^M that upon permutation of the input instances permutes the output labels, i.e. for any permutation π:

f([x_{π(1)}, ..., x_{π(M)}]) = [f_{π(1)}(x), ..., f_{π(M)}(x)]   (1)

2.2 Structure

We want to study the structure of functions on sets. Their study in total generality is extremely difficult, so we analyze case-by-case. We begin by analyzing the invariant case when X is a countable set and Y = R, where the next theorem characterizes its structure.

Theorem 2 A function f(X) operating on a set X having elements from a countable universe is a valid set function, i.e., invariant to the permutation of instances in X, iff it can be decomposed in the form ρ(Σ_{x∈X} φ(x)), for suitable transformations φ and ρ.

For the extension to the case when X is uncountable, like X = R, we could only prove that ρ(Σ_{x∈X} φ(x)) is a universal approximator. The proofs, and the difficulties in handling the uncountable case, are discussed in Appendix A. However, we still conjecture that exact equality holds.
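As a concrete illustration of Theorem 2, the following numpy sketch instantiates f(X) = ρ(Σ_{x∈X} φ(x)) with small untrained MLPs standing in for φ and ρ (the weights are arbitrary), and checks permutation invariance.

```python
# Minimal sketch of the invariant decomposition of Theorem 2.
import numpy as np

rng = np.random.default_rng(0)
d, h, out = 3, 16, 1
W1, W2 = rng.standard_normal((d, h)), rng.standard_normal((h, h))
W3, W4 = rng.standard_normal((h, h)), rng.standard_normal((h, out))

def phi(x):                       # per-element embedding, x: (M, d) -> (M, h)
    return np.tanh(np.tanh(x @ W1) @ W2)

def f(X):                         # f(X) = rho(sum_x phi(x))
    pooled = phi(X).sum(axis=0)
    return np.tanh(pooled @ W3) @ W4

X = rng.standard_normal((5, d))   # a set of M = 5 elements
perm = rng.permutation(5)
assert np.allclose(f(X), f(X[perm]))   # permutation invariance holds by construction
```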
Next, we analyze the equivariant case when X = Y = R and f is restricted to be a neural network layer. The standard neural network layer is represented as f_Θ(x) = σ(Θx), where Θ ∈ R^{M×M} is the weight matrix and σ: R → R is a nonlinearity such as the sigmoid function. The following lemma states the necessary and sufficient conditions for permutation-equivariance in this type of function.

Lemma 3 The function f_Θ: R^M → R^M defined above is permutation equivariant iff all the off-diagonal elements of Θ are tied together and all the diagonal elements are equal as well. That is,

Θ = λI + γ(11ᵀ),   λ, γ ∈ R,

where 1 = [1, ..., 1]ᵀ ∈ R^M and I ∈ R^{M×M} is the identity matrix.

This result can be easily extended to higher dimensions, i.e., X = R^d, when λ, γ can be matrices.

2.3 Related Results

The general form of Theorem 2 is closely related to important results in different domains. Here, we quickly review some of these connections.

de Finetti theorem. A related concept is that of an exchangeable model in Bayesian statistics. It is backed by de Finetti's theorem, which states that any exchangeable model can be factored as

p(X|α, M_0) = ∫ dθ [ Π_{m=1}^{M} p(x_m|θ) ] p(θ|α, M_0),   (2)

where θ is some latent feature and α, M_0 are the hyper-parameters of the prior. To see that this fits into our result, let us consider exponential families with conjugate priors, where we can analytically calculate the integral in (2). In this special case, p(x|θ) = exp(⟨φ(x), θ⟩ − g(θ)) and p(θ|α, M_0) = exp(⟨θ, α⟩ − M_0 g(θ) − h(α, M_0)). Now if we marginalize out θ, we get a form which looks exactly like the one in Theorem 2:

p(X|α, M_0) = exp( h(α + Σ_m φ(x_m), M_0 + M) − h(α, M_0) ).   (3)
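The sufficient-statistic structure in (2)-(3) can be checked directly for the Beta-Bernoulli conjugate pair, a standard exponential family: the marginal of a binary set depends on the set only through the sum of its elements, hence it is exchangeable. The hyperparameters below are illustrative.

```python
# Exchangeability check for the Beta-Bernoulli marginal
#   p(x_1..x_M) = B(a + k, b + M - k) / B(a, b),  k = sum_m x_m,
# which depends on the set only through the sum of its sufficient statistics.
import numpy as np
from scipy.special import betaln

def log_marginal(x, a=1.0, b=2.0):
    x = np.asarray(x)
    k, M = x.sum(), x.size
    return betaln(a + k, b + M - k) - betaln(a, b)

x = np.array([1, 0, 0, 1, 1, 0])
assert np.isclose(log_marginal(x), log_marginal(x[::-1]))   # permutation invariant
```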
Representer theorem and kernel machines. Support distribution machines use f(p) = Σ_i α_i y_i K(p_i, p) + b as the prediction function [8, 10], where p_i, p are distributions and α_i, b ∈ R. In practice, the p_i, p distributions are never given to us explicitly; usually only i.i.d. sample sets are available from these distributions, and therefore we need to estimate the kernel K(p, q) using these samples. A popular approach is to use K̂(p, q) = (1/(MM′)) Σ_{i,j} k(x_i, y_j), where k is another kernel operating on the samples {x_i}_{i=1}^{M} ~ p and {y_j}_{j=1}^{M′} ~ q. Now, these prediction functions can be seen to fit into the structure of our theorem.

Spectral methods. A consequence of the polynomial decomposition is that spectral methods [11] can be viewed as a special case of the mapping ρ ∘ φ(X): in that case one can compute polynomials, usually only up to a relatively low degree (such as k = 3), to perform inference about statistical properties of the distribution. The statistics are exchangeable in the data, hence they could be represented by the above map.

3 Deep Sets

3.1 Architecture

Invariant model. The structure of permutation invariant functions in Theorem 2 hints at a general strategy for inference over sets of objects, which we call DeepSets. Replacing φ and ρ by universal approximators leaves matters unchanged, since, in particular, φ and ρ can be used to approximate arbitrary polynomials. Then, it remains to learn these approximators, yielding the following model:

• Each instance x_m is transformed (possibly by several layers) into some representation φ(x_m).
• The representations φ(x_m) are added up and the output is processed using the ρ network in the same manner as in any deep network (e.g. fully connected layers, nonlinearities, etc.).
• Optionally: If we have additional meta-information z, then the above-mentioned networks could be conditioned to obtain the conditioning mapping φ(x_m|z).

In other words, the key is to add up all representations and then apply nonlinear transformations.

Equivariant model. Our goal is to design neural network layers that are equivariant to the permutations of elements in the input x. Based on Lemma 3, a neural network layer f_Θ(x) is permutation equivariant if and only if all the off-diagonal elements of Θ are tied together and all the diagonal elements are equal, i.e., Θ = λI + γ(11ᵀ) for λ, γ ∈ R. This function is simply a non-linearity applied to a weighted combination of (i) its input Ix and (ii) the sum of the input values (11ᵀ)x. Since summation does not depend on the permutation, the layer is permutation-equivariant. We can further manipulate the operations and parameters in this layer to get other variations, e.g.:

f(x) = σ(λIx + γ maxpool(x)1),   (4)

where the max-pooling operation over the elements of the set (like the sum) is commutative. In practice, this variation performs better in some applications. This may be due to the fact that for γ = −λ, the input to the non-linearity is max-normalized. Since a composition of permutation equivariant functions is also permutation equivariant, we can build DeepSets by stacking such layers.

3.2 Other Related Works

Several recent works study equivariance and invariance in deep networks w.r.t. general groups of transformations [12-14]. For example, [15] construct deep permutation invariant features by pairwise coupling of features at the previous layer, where f_{i,j}([x_i, x_j]) = [|x_i − x_j|, x_i + x_j] is invariant to transposition of i and j. Pairwise interactions within sets have also been studied in [16, 17]. [18] approach unordered instances by finding "good" orderings. The idea of pooling a function across set-members is not new. In [19], pooling was used in a binary classification task for causality on a set of samples. [20] use pooling across a panoramic projection of a 3D object for classification, while [21] perform pooling across multiple views. [22] observe the invariance of the payoff matrix in normal form games to the permutation of its rows and columns (i.e. player actions) and leverage pooling to predict the player action. The need for permutation equivariance also arises in deep learning over sensor networks and multi-agent settings, where a special case of Lemma 3 has been used as the architecture [23]. In light of these related works, we would like to emphasize our novel contributions: (i) the universality result of Theorem 2 for permutation invariance, which also relates DeepSets to other machine learning techniques, see Sec. 3; (ii) the permutation equivariant layer of (4), which, according to Lemma 3, identifies the necessary and sufficient form of parameter-sharing in a standard neural layer; and (iii) the novel application settings that we study next.

Figure 1: Population statistic estimation: (a) entropy estimation for rotated 2d Gaussians; (b) mutual information estimation by varying correlation strength; (c) mutual information estimation by varying rank-1 strength; (d) mutual information on 32d random covariance matrices. The top set of figures shows predictions of DeepSets vs SDM for the N = 2^10 case. The bottom set of figures depicts the mean squared error behavior as the number of sets is increased. SDM has lower error for small N, and DeepSets requires more data to reach similar accuracy, but for high-dimensional problems DeepSets easily scales to a large number of examples and produces much lower estimation error. Note that the N × N matrix inversion in SDM makes it prohibitively expensive for N > 2^14 = 16384.
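Returning to the equivariant layer in (4), a minimal numpy sketch (with arbitrary illustrative parameters) is given below, together with a check that permuting the input rows permutes the output rows in the same way.

```python
# Minimal sketch of the permutation-equivariant layer of equation (4).
import numpy as np

rng = np.random.default_rng(0)

def equivariant_layer(x, lam=1.0, gam=-1.0):
    # x: (M, d) set of M elements; max-pooling is over the set dimension
    return np.tanh(lam * x + gam * x.max(axis=0, keepdims=True))

x = rng.standard_normal((6, 4))
perm = rng.permutation(6)
assert np.allclose(equivariant_layer(x)[perm], equivariant_layer(x[perm]))
```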
4 Applications and Empirical Results

We present a diverse set of applications for DeepSets. For the supervised setting, we apply DeepSets to estimation of population statistics, sum of digits, classification of point-clouds, and regression with clustering side-information. The permutation-equivariant variation of DeepSets is applied to the task of outlier detection. Finally, we investigate the application of DeepSets to unsupervised set-expansion, in particular, concept-set retrieval and image tagging. In most cases we compare our approach with the state-of-the-art and report competitive results.

4.1 Set Input, Scalar Response

4.1.1 Supervised Learning: Learning to Estimate Population Statistics

In the first experiment, we learn the entropy and mutual information of Gaussian distributions, without providing any information about Gaussianity to DeepSets. The Gaussians are generated as follows:

• Rotation: We randomly chose a 2 × 2 covariance matrix Σ, and then generated N sample sets from N(0, R(α)ΣR(α)ᵀ) of size M ∈ [300, 500] for N random values of α ∈ [0, π]. Our goal was to learn the entropy of the marginal distribution of the first dimension. R(α) is the rotation matrix.
• Correlation: We randomly chose a d × d covariance matrix Σ for d = 16, and then generated N sample sets from N(0, [Σ, αΣ; αΣ, Σ]) of size M ∈ [300, 500] for N random values of α ∈ (−1, 1). The goal was to learn the mutual information between the first d and last d dimensions.
• Rank 1: We randomly chose v ∈ R^32 and then generated N sample sets from N(0, I + λvvᵀ) of size M ∈ [300, 500] for N random values of λ ∈ (0, 1). The goal was to learn the mutual information.
• Random: We chose N random d × d covariance matrices Σ for d = 32 and, using each, generated a sample set from N(0, Σ) of size M ∈ [300, 500]. The goal was to learn the mutual information.

We train using an L2 loss with a DeepSets architecture having 3 fully connected layers with ReLU activation for both transformations φ and ρ. We compare against Support Distribution Machines (SDM) using an RBF kernel [10], and analyze the results in Fig. 1.

4.1.2 Sum of Digits

Next, we compare to what happens if our set data is treated as a sequence. We consider the task of finding the sum of a given set of digits. We consider two variants of this experiment:

Text. We randomly sample a subset of maximum M = 10 digits from this dataset to build 100k "sets" of training examples, where the set-label is the sum of digits in that set. We test against sums of M digits, for M starting from 5 all the way up to 100, over another 100k examples.

Figure 2: Accuracy of digit summation with text (left) and image (right) inputs. All approaches are trained on tasks of length 10 at most, and tested on examples of length up to 100. We see that DeepSets generalizes better.
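Before turning to the image variant, a toy sketch of why sum-pooling generalizes across set sizes in the digit-sum task is given below; it is an illustration, not the trained model from the experiment. With a one-hot φ and a linear ρ fit on sets of at most 10 digits, predictions remain exact on sets ten times longer.

```python
# Toy length-generalization demo: fit rho on short sets, evaluate on long ones.
import numpy as np

rng = np.random.default_rng(0)

def phi(digits):                       # one-hot encode each digit: (M,) -> (M, 10)
    return np.eye(10)[digits]

def pooled(sets):                      # sum-pool the per-element embeddings
    return np.stack([phi(s).sum(axis=0) for s in sets])

train = [rng.integers(0, 10, size=rng.integers(1, 11)) for _ in range(1000)]
y = np.array([s.sum() for s in train])
w, *_ = np.linalg.lstsq(pooled(train), y, rcond=None)    # learn rho(z) = w @ z

test = [rng.integers(0, 10, size=100) for _ in range(5)] # sets 10x longer than training
pred = pooled(test) @ w
print(np.round(pred), [s.sum() for s in test])           # matches despite the length shift
```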
4.1.3 Point Cloud Classification A point-cloud is a set of low-dimensional vectors. This type of data is frequently encountered in various applications like robotics, vision, and cosmology. In these applications, existing methods often convert the point-cloud data to voxel or mesh representation as a preprocessing step, e.g. [26, 29, 30]. Since the output of many range sensors, such as LiDAR, is in the form of pointcloud, direct application of deep learning methods to point-cloud is highly desirable. Moreover, it is easy and cheaper to apply transformations, such as rotation and translation, when working with point-clouds than voxelized 3D objects. Model Instance Size 3DShapeNets [25] 303 VoxNet [26] 323 MVCNN [21] 164?164? 12 VRN Ensemble [27] 323 3D GAN [28] 643 DeepSets 5000 ? 3 Representation voxels (using convolutional deep belief net) voxels (voxels from point-cloud + 3D CNN) multi-vew images (2D CNN + viewpooling) voxels (3D CNN, variational autoencoder) voxels (3D CNN, generative adversarial training) point-cloud point-cloud Accuracy 77% 83.10% 90.1% 95.54% 83.3% 90 ? .3% DeepSets 100 ? 3 82 ? 2% As point-cloud data is just a set of points, we can use DeepSets to classify point-cloud representation of a subset of ShapeNet objects [31], Table 1: Classification accuracy and the representationcalled ModelNet40 [25]. This subset consists of size used by different methods on the ModelNet40. 3D representation of 9,843 training and 2,468 test instances belonging to 40 classes of objects. We produce point-clouds with 100, 1000 and 5000 particles each (x, y, z-coordinates) from the mesh representation of objects using the point-cloudlibrary?s sampling routine [32]. Each set is normalized by the initial layer of the deep network to have zero mean (along individual axes) and unit (global) variance. Tab. 1 compares our method using three permutation equivariant layers against the competition; see Appendix H for details. 4.1.4 Improved Red-shift Estimation Using Clustering Information An important regression problem in cosmology is to estimate the red-shift of galaxies, corresponding to their age as well as their distance from us [33] based on photometric observations. One way to estimate the red-shift from photometric observations is using a regression model [34] on the galaxy clusters. The prediction for each galaxy does not change by permuting the members of the galaxy cluster. Therefore, we can treat each galaxy cluster as a ?set? and use DeepSets to estimate the individual galaxy red-shifts. See Appendix G for more details. For each galaxy, we have 17 photometric features from the redMaPPer Method Scatter galaxy cluster catalog [35] that contains photometric readings for MLP 0.026 26,111 red galaxy clusters. Each galaxy-cluster in this catalog has redMaPPer 0.025 between ? 20 ? 300 galaxies ? i.e. x ? RN (c)?17 , where N (c) is the DeepSets 0.023 cluster-size. The catalog also provides accurate spectroscopic red-shift estimates for a subset of these galaxies. Table 2: Red-shift experiment. We randomly split the data into 90% training and 10% test clusters, and Lower scatter is better. minimize the squared loss of the prediction for available spectroscopic |zspec ?z| red-shifts. As it is customary in cosmology literature, we report the average scatter 1+z , where spec zspec is the accurate spectroscopic measurement and z is a photometric estimate in Tab. 2. 5 Method LDA-1k (Vocab = 17k) LDA-3k (Vocab = 38k) LDA-5k (Vocab = 61k) Recall (%) Recall (%) Recall (%) MRR Med. MRR Med. MRR Med. 
Table 3: Results on Text Concept Set Retrieval on LDA-1k, LDA-3k, and LDA-5k. Our DeepSets model outperforms other methods on LDA-3k and LDA-5k. However, all neural network based methods have inferior performance to the w2v-Near baseline on LDA-1k, possibly due to the small data size. Higher is better for recall@k and mean reciprocal rank (MRR). Lower is better for median rank (Med.).

             | LDA-1k (Vocab = 17k)            | LDA-3k (Vocab = 38k)            | LDA-5k (Vocab = 61k)
Method       | R@10  R@100  R@1k  MRR    Med.  | R@10  R@100  R@1k  MRR    Med.  | R@10  R@100  R@1k  MRR    Med.
Random       | 0.06  0.6    5.9   0.001  8520  | 0.02  0.2    2.6   0.000  28635 | 0.01  0.2    1.6   0.000  30600
Bayes Set    | 1.69  11.9   37.2  0.007  2848  | 2.01  14.5   36.5  0.008  3234  | 1.75  12.5   34.5  0.007  3590
w2v Near     | 6.00  28.1   54.7  0.021  641   | 4.80  21.2   43.2  0.016  2054  | 4.03  16.7   35.2  0.013  6900
NN-max       | 4.78  22.5   53.1  0.023  779   | 5.30  24.9   54.8  0.025  672   | 4.72  21.4   47.0  0.022  1320
NN-sum-con   | 4.58  19.8   48.5  0.021  1110  | 5.81  27.2   60.0  0.027  453   | 4.87  23.5   53.9  0.022  731
NN-max-con   | 3.36  16.9   46.6  0.018  1250  | 5.61  25.7   57.5  0.026  570   | 4.72  22.0   51.8  0.022  877
DeepSets     | 5.53  24.2   54.3  0.025  696   | 6.04  28.5   60.7  0.027  426   | 5.54  26.1   55.5  0.026  616

4.2 Set Expansion

In the set expansion task, we are given a set of objects that are similar to each other, and our goal is to find new objects from a large pool of candidates such that the selected new objects are similar to the query set. To achieve this, one needs to reason out the concept connecting the given set and then retrieve words based on their relevance to the inferred concept. It is an important task due to a wide range of potential applications, including personalized information retrieval, computational advertisement, and tagging large amounts of unlabeled or weakly labeled datasets.

Going back to the de Finetti theorem in Sec. 3.2, where we consider the marginal probability of a set of observations, the marginal probability allows for a very simple metric for scoring additional elements to be added to X. In other words, this allows one to perform set expansion via the following score:

s(x|X) = log p(X ∪ {x} | α) − log [ p(X|α) p({x}|α) ]   (5)

Note that s(x|X) is the point-wise mutual information between x and X. Moreover, due to exchangeability, it follows that regardless of the order of the elements we have

S(X) = Σ_{m=1}^{M} s(x_m | {x_{m−1}, ..., x_1}) = log p(X|α) − Σ_m log p({x_m}|α)   (6)

When inferring sets, our goal is to find set completions {x_{m+1}, ..., x_M} for an initial set of query terms {x_1, ..., x_m} such that the aggregate set is coherent. This is the key idea of the Bayesian Set algorithm [36] (details in Appendix D). Using DeepSets, we can solve this problem in more generality, as we can drop the assumption of the data belonging to a certain exponential family.

For learning the score s(x|X), we take recourse to large-margin classification with structured loss functions [37] to obtain the relative loss objective l(x, x′|X) = max(0, s(x′|X) − s(x|X) + ∆(x, x′)). In other words, we want to ensure that s(x|X) ≥ s(x′|X) + ∆(x, x′) whenever x should be added and x′ should not be added to X.
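For intuition, the score (5) can be evaluated exactly for the Beta-Bernoulli model underlying the Bayesian Set algorithm [36]; the sketch below (with illustrative hyperparameters and toy binary features) shows that a candidate matching the query set's pattern receives a higher score than an incoherent one.

```python
# Set-expansion score (5) under a Beta-Bernoulli model with independent features.
import numpy as np
from scipy.special import betaln

def log_p(X, a=1.0, b=1.0):
    """log p(X) for a set X of binary rows, features independent a priori."""
    X = np.atleast_2d(X)
    M, k = X.shape[0], X.sum(axis=0)
    return np.sum(betaln(a + k, b + M - k) - betaln(a, b))

def score(x, X):
    """s(x|X) = log p(X + {x}) - log p(X) - log p({x}), as in eq. (5)."""
    return log_p(np.vstack([X, x])) - log_p(X) - log_p(x)

X = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 0, 1]])   # query set
print(score(np.array([1, 1, 0, 0]), X))   # coherent candidate: higher score
print(score(np.array([0, 0, 1, 1]), X))   # incoherent candidate: lower score
```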
Conditioning. Often machine learning problems do not exist in isolation. For example, a task like tag completion from a given set of tags is usually related to an object z, for example an image, that needs to be tagged. Such meta-data are usually abundant, e.g. author information in case of text, contextual data such as the user click history, or extra information collected with a LiDAR point cloud. Conditioning graphical models on meta-data is often complicated. For instance, in the Beta-Binomial model we need to ensure that the counts are always nonnegative, regardless of z. Fortunately, DeepSets does not suffer from such complications, and the fusion of multiple sources of data can be done in a relatively straightforward manner. Any of the existing methods in deep learning, including feature concatenation, averaging, or max-pooling, can be employed. Incorporating this metadata often leads to significantly improved performance, as will be shown in the experiments; see Sec. 4.2.2.

4.2.1 Text Concept Set Retrieval

In text concept set retrieval, the objective is to retrieve words belonging to a "concept" or "cluster", given a few words from that particular concept. For example, given the set of words {tiger, lion, cheetah}, we would need to retrieve other related words like jaguar or puma, which belong to the same concept of big cats. This task of concept set retrieval can be seen as a set completion task conditioned on the latent semantic concept, and therefore our DeepSets form a desirable approach.

Dataset. We construct a large dataset containing sets of N_T = 50 related words by extracting topics from latent Dirichlet allocation [38, 39], taken out-of-the-box (github.com/dmlc/experimental-lda). To compare across scales, we consider three values of k = {1k, 3k, 5k}, giving us three datasets, LDA-1k, LDA-3k, and LDA-5k, with corresponding vocabulary sizes of 17k, 38k, and 61k.

Methods. We learn this using a margin loss with a DeepSets architecture having 3 fully connected layers with ReLU activation for both transformations φ and ρ. Details of the architecture and training are in Appendix E. We compare to several baselines: (a) Random picks a word from the vocabulary uniformly at random. (b) Bayes Set [36]. (c) w2v-Near computes the nearest neighbors in the word2vec [40] space. Note that both Bayes Set and w2v-Near are strong baselines. The former runs Bayesian inference using a Beta-Binomial conjugate pair, while the latter uses the powerful 300-dimensional word2vec trained on the billion-word GoogleNews corpus (code.google.com/archive/p/word2vec/). (d) NN-max uses a similar architecture as our DeepSets but uses max-pooling to compute the set feature, as opposed to sum-pooling. (e) NN-max-con uses max-pooling on set elements but concatenates this pooled representation with that of the query for a final set feature. (f) NN-sum-con is similar to NN-max-con but uses sum-pooling followed by concatenation with the query representation.

Evaluation. We consider the standard retrieval metrics recall@K, median rank, and mean reciprocal rank for evaluation. To elaborate, recall@K measures the number of true labels that were recovered in the top K retrieved words. We use three values of K = {10, 100, 1k}. The other two metrics are the median of the true label ranks and the mean of their reciprocals, respectively. Each dataset is split into TRAIN (80%), VAL (10%) and TEST (10%). We learn models using TRAIN and evaluate on TEST, while VAL is used for hyperparameter selection and early stopping.

Results and Observations. As seen in Tab. 3: (a) Our DeepSets model outperforms all other approaches on LDA-3k and LDA-5k by any metric, highlighting the significance of the permutation invariance property. (b) On LDA-1k, our model does not perform well when compared to w2v-Near. We hypothesize that this is due to the small size of the dataset, which is insufficient to train a high-capacity neural network, while w2v-Near has been trained on a billion-word corpus. Nevertheless, our approach comes the closest to w2v-Near amongst the other approaches, and is only 0.5% lower by Recall@10.
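These retrieval metrics are simple to compute from the rank of each true label; a short sketch follows (the input ranks are illustrative).

```python
# recall@K, median rank and mean reciprocal rank from 1-indexed true-label ranks.
import numpy as np

def retrieval_metrics(ranks, ks=(10, 100, 1000)):
    ranks = np.asarray(ranks, dtype=float)
    out = {f"recall@{k}": np.mean(ranks <= k) for k in ks}
    out["median_rank"] = np.median(ranks)
    out["MRR"] = np.mean(1.0 / ranks)
    return out

print(retrieval_metrics([3, 1, 250, 47, 1200, 8]))
```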
4.2.2 Image Tagging

We next experiment with image tagging, where the task is to retrieve all relevant tags corresponding to an image. Images usually have only a subset of relevant tags, therefore predicting other tags can help enrich information that can further be leveraged in a downstream supervised task. In our setup, we learn to predict tags by conditioning DeepSets on the image, i.e., we train to predict a partial set of tags from the image and the remaining tags. At test time, we predict tags from the image alone.

Table 4: Results of image tagging on ESPgame and IAPRTC-12.5. Performance of our DeepSets approach is roughly similar to the best competing approaches, except for precision. Refer to the text for more details. Higher is better for all metrics: precision (P), recall (R), F1 score (F1), and number of non-zero recall tags (N+).

             | ESP game            | IAPRTC-12.5
Method       | P    R    F1   N+   | P    R    F1   N+
Least Sq.    | 35   19   25   215  | 40   19   26   198
MBRM         | 18   19   18   209  | 24   23   23   223
JEC          | 24   19   21   222  | 29   19   23   211
FastTag      | 46   22   30   247  | 47   26   34   280
Least Sq.(D) | 44   32   37   232  | 46   30   36   218
FastTag(D)   | 44   32   37   229  | 46   33   38   254
DeepSets     | 39   34   36   246  | 42   31   36   247

Datasets. We report results on the following three datasets: ESPGame, IAPRTC-12.5, and our in-house dataset, COCO-Tag. We refer the reader to Appendix F for more details about the datasets.

Methods. The setup for DeepSets to tag images is similar to that described in Sec. 4.2.1. The only difference is the conditioning on the image features, which are concatenated with the set feature obtained from pooling individual element representations.

Baselines. We perform comparisons against several baselines previously reported in [41]. Specifically, we have Least Sq., a ridge regression model, MBRM [42], JEC [43] and FastTag [41]. Note that these methods do not use deep features for images, which could lead to an unfair comparison. As there is no publicly available code for MBRM and JEC, we cannot get the performance of these models with Resnet-extracted features. However, we report results with deep features for FastTag and Least Sq., using code made available by the authors (http://www.cse.wustl.edu/~mchen/).

Evaluation. For ESPgame and IAPRTC-12.5, we follow the evaluation metrics of [44]: precision (P), recall (R), F1 score (F1), and number of tags with non-zero recall (N+). These metrics are evaluated for each tag and the mean is reported (see [44] for further details). For COCO-Tag, however, we use recall@K for three values of K = {10, 100, 1000}, along with median rank and mean reciprocal rank (see the evaluation in Sec. 4.2.1 for metric details).

Figure 3: Each row shows a set, constructed from the CelebA dataset, such that all set members except for an outlier share at least two attributes (on the right). The outlier is identified with a red frame. The model is trained by observing examples of sets and their anomalous members, without access to the attributes. The probability assigned to each member by the outlier detection network is visualized using a red bar at the bottom of each image. The probabilities in each row sum to one.

Results and Observations. Tab. 4 shows the results of image tagging on ESPgame and IAPRTC-12.5, and Tab. 5 on COCO-Tag. Here are the key observations from Tab. 4:
(a) the performance of our DeepSets model is comparable to the best approaches on all metrics but precision; (b) our recall beats the best approach by 2% on ESPgame. On further investigation, we found that the DeepSets model retrieves more relevant tags, which are not present in the list of ground truth tags due to the limited 5-tag annotation. Thus, this takes a toll on precision while gaining on recall, yet yields an improvement on F1. On the larger and richer COCO-Tag, we see that the DeepSets approach outperforms the other methods comprehensively, as expected. Qualitative examples are in Appendix F.

Table 5: Results on the COCO-Tag dataset. Clearly, DeepSets outperforms the other baselines significantly. Higher is better for recall@K and mean reciprocal rank (MRR). Lower is better for median rank (Med.).

Method           | R@10  R@100  R@1k  MRR    Med.
w2v NN (blind)   | 5.6   20.0   54.2  0.021  823
DeepSets (blind) | 9.0   39.2   71.3  0.044  310
DeepSets         | 31.4  73.4   95.3  0.131  28

4.3 Set Anomaly Detection

The objective here is to find the anomalous face in each set, simply by observing examples and without any access to the attribute values. The CelebA dataset [45] contains 202,599 face images, each annotated with 40 boolean attributes. We build N = 18,000 sets of 64 × 64 stamps using these attributes, each containing M = 16 images (on the training set), as follows: randomly select 2 attributes, draw 15 images having those attributes, and a single target image where both attributes are absent. Using a similar procedure, we build sets on the test images. No individual person's face appears in both train and test sets. Our deep neural network consists of 9 2D-convolution and max-pooling layers followed by 3 permutation-equivariant layers, and finally a softmax layer that assigns a probability value to each set member (note that one could identify an arbitrary number of outliers using a sigmoid activation at the output). Our trained model successfully finds the anomalous face in 75% of test sets. Visually inspecting these instances suggests that the task is non-trivial even for humans; see Fig. 3.

As a baseline, we repeat the same experiment using a set-pooling layer after the convolution layers, and replacing the permutation-equivariant layers with fully connected layers of the same size, where the final layer is a 16-way softmax. The resulting network shares the convolution filters for all instances within all sets; however, the input to the softmax is not equivariant to the permutation of the input images. Permutation equivariance seems to be crucial here, as the baseline model achieves a training and test accuracy of ~6.3%: the same as random selection. See Appendix I for more details.

5 Summary

In this paper, we develop DeepSets, a model based on powerful permutation invariance and equivariance properties, along with the theory to support its performance. We demonstrate the generalization ability of DeepSets across several domains by extensive experiments, and show both qualitative and quantitative results. In particular, we explicitly show that DeepSets outperforms other intuitive deep networks, which are not backed by theory (Sec. 4.2.1, Sec. 4.1.2). Last but not least, it is worth noting that the state-of-the-art we compare to is a specialized technique for each task, whereas our one model, i.e., DeepSets, is competitive across the board.

References

[1] B. Poczos, A. Rinaldo, A. Singh, and L. Wasserman. Distribution-free distribution regression. In International Conference on AI and Statistics (AISTATS), JMLR Workshop and Conference Proceedings, 2013. pages 1

[2] I. Jung, M.
Berges, J. Garrett, and B. Poczos. Exploration and evaluation of ar, mpca and kl anomaly detection techniques to embankment dam piezometer data. Advanced Engineering Informatics, 2015. pages 1 [3] M. Ntampaka, H. Trac, D. Sutherland, S. Fromenteau, B. Poczos, and J. Schneider. Dynamical mass measurements of contaminated galaxy clusters using machine learning. The Astrophysical Journal, 2016. URL http://arxiv.org/abs/1509.05409. pages 1 [4] M. Ravanbakhsh, J. Oliva, S. Fromenteau, L. Price, S. Ho, J. Schneider, and B. Poczos. Estimating cosmological parameters from the dark matter distribution. In International Conference on Machine Learning (ICML), 2016. pages 1 [5] J. Oliva, B. Poczos, and J. Schneider. Distribution to distribution regression. In International Conference on Machine Learning (ICML), 2013. pages 1 [6] Z. Szabo, B. Sriperumbudur, B. Poczos, and A. Gretton. Learning theory for distribution regression. Journal of Machine Learning Research, 2016. pages [7] K. Muandet, D. Balduzzi, and B. Schoelkopf. Domain generalization via invariant feature representation. In In Proceeding of the 30th International Conference on Machine Learning (ICML 2013), 2013. pages [8] K. Muandet, K. Fukumizu, F. Dinuzzo, and B. Schoelkopf. Learning from distributions via support measure machines. In In Proceeding of the 26th Annual Conference on Neural Information Processing Systems (NIPS 2012), 2012. pages 1, 3 [9] Felix A. Faber, Alexander Lindmaa, O. Anatole von Lilienfeld, and Rickard Armiento. Machine learning energies of 2 million elpasolite (abC2 D6 ) crystals. Phys. Rev. Lett., 117:135502, Sep 2016. doi: 10.1103/PhysRevLett.117.135502. URL http://link.aps.org/doi/10.1103/ PhysRevLett.117.135502. pages 1 [10] B. Poczos, L. Xiong, D. Sutherland, and J. Schneider. Support distribution machines, 2012. URL http://arxiv.org/abs/1202.0302. pages 3, 4 [11] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. arXiv preprint arXiv:1210.7559, 2012. pages 3 [12] Robert Gens and Pedro M Domingos. Deep symmetry networks. In Advances in neural information processing systems, pages 2537?2545, 2014. pages 3 [13] Taco S Cohen and Max Welling. Group equivariant convolutional networks. arXiv preprint arXiv:1602.07576, 2016. pages [14] Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos. Equivariance through parametersharing. arXiv preprint arXiv:1702.08389, 2017. pages 3 [15] Xu Chen, Xiuyuan Cheng, and St?phane Mallat. Unsupervised deep haar scattering on graphs. In Advances in Neural Information Processing Systems, pages 1709?1717, 2014. pages 3 [16] Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. arXiv preprint arXiv:1612.00341, 2016. pages 3 [17] Nicholas Guttenberg, Nathaniel Virgo, Olaf Witkowski, Hidetoshi Aoki, and Ryota Kanai. Permutation-equivariant neural networks applied to dynamics prediction. arXiv preprint arXiv:1612.04530, 2016. pages 3 [18] Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015. pages 3 [19] David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Sch?lkopf, and L?on Bottou. Discovering causal signals in images. arXiv preprint arXiv:1605.08179, 2016. pages 3 9 [20] Baoguang Shi, Song Bai, Zhichao Zhou, and Xiang Bai. Deeppano: Deep panoramic representation for 3-d shape recognition. 
IEEE Signal Processing Letters, 22(12):2339?2343, 2015. pages 3, 23, 24 [21] Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view convolutional neural networks for 3d shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 945?953, 2015. pages 3, 5, 23, 24 [22] Jason S Hartford, James R Wright, and Kevin Leyton-Brown. Deep learning for predicting human strategic behavior. In Advances in Neural Information Processing Systems, pages 2424?2432, 2016. pages 3 [23] Sainbayar Sukhbaatar, Rob Fergus, et al. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems, pages 2244?2252, 2016. pages 3 [24] Ga?lle Loosli, St?phane Canu, and L?on Bottou. Training invariant support vector machines using selective sampling. In L?on Bottou, Olivier Chapelle, Dennis DeCoste, and Jason Weston, editors, Large Scale Kernel Machines, pages 301?320. MIT Press, Cambridge, MA., 2007. pages 5 [25] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912?1920, 2015. pages 5, 23 [26] Daniel Maturana and Sebastian Scherer. Voxnet: A 3d convolutional neural network for realtime object recognition. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pages 922?928. IEEE, 2015. pages 5, 23 [27] Andrew Brock, Theodore Lim, JM Ritchie, and Nick Weston. Generative and discriminative voxel modeling with convolutional neural networks. arXiv preprint arXiv:1608.04236, 2016. pages 5, 23 [28] Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. arXiv preprint arXiv:1610.07584, 2016. pages 5, 23 [29] Siamak Ravanbakhsh, Junier Oliva, Sebastien Fromenteau, Layne C Price, Shirley Ho, Jeff Schneider, and Barnab?s P?czos. Estimating cosmological parameters from the dark matter distribution. In Proceedings of The 33rd International Conference on Machine Learning, 2016. pages 5 [30] Hong-Wei Lin, Chiew-Lan Tai, and Guo-Jin Wang. A mesh reconstruction algorithm driven by an intrinsic property of a point cloud. Computer-Aided Design, 36(1):1?9, 2004. pages 5 [31] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. pages 5 [32] Radu Bogdan Rusu and Steve Cousins. 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13 2011. pages 5 [33] James Binney and Michael Merrifield. Galactic astronomy. Princeton University Press, 1998. pages 5, 21 [34] AJ Connolly, I Csabai, AS Szalay, DC Koo, RG Kron, and JA Munn. Slicing through multicolor space: Galaxy redshifts from broadband photometry. arXiv preprint astro-ph/9508100, 1995. pages 5, 21 [35] Eduardo Rozo and Eli S Rykoff. redmapper ii: X-ray and sz performance benchmarks for the sdss catalog. The Astrophysical Journal, 783(2):80, 2014. pages 5, 21 [36] Zoubin Ghahramani and Katherine A Heller. Bayesian sets. In NIPS, volume 2, pages 22?23, 2005. pages 6, 7, 17, 18, 19 10 [37] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In S. Thrun, L. Saul, and B. 
Sch?lkopf, editors, Advances in Neural Information Processing Systems 16, pages 25?32, Cambridge, MA, 2004. MIT Press. pages 6 [38] Jonathan K. Pritchard, Matthew Stephens, and Peter Donnelly. Inference of population structure using multilocus genotype data. Genetics, 155(2):945?959, 2000. ISSN 0016-6731. URL http://www.genetics.org/content/155/2/945. pages 6, 19 [39] David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. Latent dirichlet allocation. Journal of Machine Learning Research, 3:2003, 2003. pages 6, 19 [40] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111?3119, 2013. pages 7, 19 [41] Minmin Chen, Alice Zheng, and Kilian Weinberger. Fast image tagging. In Proceedings of The 30th International Conference on Machine Learning, pages 1274?1282, 2013. pages 7, 20 [42] S. L. Feng, R. Manmatha, and V. Lavrenko. Multiple bernoulli relevance models for image and video annotation. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR?04, pages 1002?1009, Washington, DC, USA, 2004. IEEE Computer Society. URL http://dl.acm.org/citation.cfm?id=1896300. 1896446. pages 7, 20 [43] Ameesh Makadia, Vladimir Pavlovic, and Sanjiv Kumar. A new baseline for image annotation. In Proceedings of the 10th European Conference on Computer Vision: Part III, ECCV ?08, pages 316?329, Berlin, Heidelberg, 2008. Springer-Verlag. ISBN 978-3-540-88689-1. doi: 10.1007/ 978-3-540-88690-7_24. URL http://dx.doi.org/10.1007/978-3-540-88690-7_24. pages 7, 20 [44] Matthieu Guillaumin, Thomas Mensink, Jakob Verbeek, and Cordelia Schmid. Tagprop: Discriminative metric learning in nearest neighbor models for image auto-annotation. In Computer Vision, 2009 IEEE 12th International Conference on, pages 309?316. IEEE, 2009. pages 7, 20, 21 [45] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015. pages 8 [46] Jerrold E Marsden and Michael J Hoffman. Elementary classical analysis. Macmillan, 1993. pages 12 [47] Nicolas Bourbaki. El?ments de math?matiques: th?orie des ensembles, chapitres 1 ? 4, volume 1. Masson, 1990. pages 12 [48] Boris A Khesin and Serge L Tabachnikov. Arnold: Swimming Against the Tide, volume 86. American Mathematical Society, 2014. pages 12 [49] C. A. Micchelli. Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11?22, 1986. pages 15 [50] Luis Von Ahn and Laura Dabbish. Labeling images with a computer game. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 319?326. ACM, 2004. pages 20 [51] Michael Grubinger. Analysis and evaluation of visual information systems performance, 2007. URL http://eprints.vu.edu.au/1435. Thesis (Ph. D.)?Victoria University (Melbourne, Vic.), 2007. pages 20 [52] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll?r, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740?755. Springer, 2014. pages 20 [53] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. pages 21, 23, 24, 25 [54] Djork-Arn? Clevert, Thomas Unterthiner, and Sepp Hochreiter. 
Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015. pages 25
ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events

Evan Racah1,2, Christopher Beckham1,3, Tegan Maharaj1,3, Samira Ebrahimi Kahou4, Prabhat2, Christopher Pal1,3
1 MILA, Université de Montréal, [email protected]. 2 Lawrence Berkeley National Lab, Berkeley, CA, [email protected]. 3 École Polytechnique de Montréal, [email protected]. 4 Microsoft Maluuba, [email protected].

Abstract

The detection and identification of extreme weather events in large-scale climate simulations is an important problem for risk management, informing governmental policy decisions and advancing our basic understanding of the climate system. Recent work has shown that fully supervised convolutional neural networks (CNNs) can yield acceptable accuracy for classifying well-known types of extreme weather events when large amounts of labeled data are available. However, many different types of spatially localized climate patterns are of interest, including hurricanes, extra-tropical cyclones, weather fronts, and blocking events among others. Existing labeled data for these patterns can be incomplete in various ways, such as covering only certain years or geographic areas and having false negatives. This type of climate data therefore poses a number of interesting machine learning challenges. We present a multichannel spatiotemporal CNN architecture for semi-supervised bounding box prediction and exploratory data analysis. We demonstrate that our approach is able to leverage temporal information and unlabeled data to improve the localization of extreme weather events. Further, we explore the representations learned by our model in order to better understand this important data. We present a dataset, ExtremeWeather, to encourage machine learning research in this area and to help facilitate further work in understanding and mitigating the effects of climate change. The dataset is available at extremeweatherdataset.github.io and the code is available at https://github.com/eracah/hur-detect.

1 Introduction

Climate change is one of the most important challenges facing humanity in the 21st century, and climate simulations are one of the only viable mechanisms for understanding the future impact of various carbon emission scenarios and intervention strategies. Large climate simulations produce massive datasets: a simulation of 27 years from a 25 square km, 3 hour resolution model produces on the order of 10TB of multi-variate data. This scale of data makes post-processing and quantitative assessment challenging, and as a result, climate analysts and policy makers typically take global and annual averages of temperature or sea-level rise. While these coarse measurements are useful for public and media consumption, they ignore spatially (and temporally) resolved extreme weather events such as extra-tropical cyclones and tropical cyclones (hurricanes). Because the general public and policy makers are concerned about the local impacts of climate change, it is critical that we be able to examine how localized weather patterns (such as tropical cyclones), which can have dramatic impacts on populations and economies, will change in frequency and intensity under global warming.
Deep neural networks, especially deep convolutional neural networks, have enjoyed breakthrough success in recent years, achieving state-of-the-art results on many benchmark datasets (Krizhevsky et al., 2012; He et al., 2015; Szegedy et al., 2015) and also compelling results on many practical tasks such as disease diagnosis (Hosseini-Asl et al., 2016), facial recognition (Parkhi et al., 2015), autonomous driving (Chen et al., 2015), and many others. Furthermore, deep neural networks have also been very effective in the context of unsupervised and semi-supervised learning; some recent examples include variational autoencoders (Kingma & Welling, 2013), adversarial networks (Goodfellow et al., 2014; Makhzani et al., 2015; Salimans et al., 2016; Springenberg, 2015), ladder networks (Rasmus et al., 2015) and stacked what-where autoencoders (Zhao et al., 2015).

There is a recent trend towards video datasets aimed at better understanding spatiotemporal relations and multimodal inputs (Kay et al., 2017; Gu et al., 2017; Goyal et al., 2017). The task of finding extreme weather events in climate data is similar to the task of detecting objects and activities in video - a popular application for deep learning techniques. An important difference is that in the case of climate data, the "video" has 16 or more "channels" of information (such as water vapour, pressure and temperature), while conventional video only has 3 (RGB). In addition, climate simulations do not share the same statistics as natural images. As a result, unlike many popular techniques for video, we hypothesize that we cannot build off successes from the computer vision community such as using pretrained weights from CNNs (Simonyan & Zisserman, 2014; Krizhevsky et al., 2012) pretrained on ImageNet (Russakovsky et al., 2015).

Climate data thus poses a number of interesting machine learning problems: multi-class classification with unbalanced classes; partial annotation; anomaly detection; distributional shift and bias correction; spatial, temporal, and spatiotemporal relationships at widely varying scales; relationships between variables that are not fully understood; issues of data and computational efficiency; opportunities for semi-supervised and generative models; and more. Here, we address multi-class detection and localization of four extreme weather phenomena: tropical cyclones, extra-tropical cyclones, tropical depressions, and atmospheric rivers. We implement a 3D (height, width, time) convolutional encoder-decoder, with a novel single-pass bounding-box regression loss applied at the bottleneck. To our knowledge, this is the first use of a deep autoencoding architecture for bounding-box regression. This architectural choice allows us to do semi-supervised learning in a very natural way (simply training the autoencoder with reconstruction for unlabelled data), while providing relatively interpretable features at the bottleneck. This is appealing for use in the climate community, as current engineered heuristics do not perform as well as human experts for identifying extreme weather events. Our main contributions are (1) a baseline bounding-box loss formulation; (2) our architecture, a first step away from engineered heuristics for extreme weather events, towards semi-supervised learned features; (3) the ExtremeWeather dataset, which we make available in three benchmarking splits: one small, for model exploration, one medium, and one comprising the full 27 years of climate simulation output.
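To make the encoder half of this architecture concrete, here is a minimal PyTorch-style sketch. The released code uses Theano and Lasagne, and the exact filter sizes, strides and padding are given in the paper's supplementary materials, so the channel progression below is an assumption chosen only to reproduce the shapes stated later in the text (a 16-channel 768x1152 input with 8 time steps, and a bottleneck of 640 feature maps on a 12x18 spatial grid):

```python
# Illustrative 3D convolutional encoder sketch; layer widths are assumptions,
# not the paper's exact configuration. Six spatially strided layers halve the
# 768x1152 frame down to 12x18 while leaving the 8-step time axis intact.
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [16, 64, 128, 256, 384, 512, 640]  # assumed channel progression
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            # 3x3x3 filters over (time, height, width); stride 2 in space only
            layers += [nn.Conv3d(c_in, c_out, kernel_size=3,
                                 stride=(1, 2, 2), padding=1),
                       nn.LeakyReLU(0.1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, 16 variables, 8 time steps, 768, 1152)
        return self.net(x)

x = torch.randn(1, 16, 8, 768, 1152)
print(Encoder3D()(x).shape)  # torch.Size([1, 640, 8, 12, 18])
```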
2 Related work

2.1 Deep learning for climate and weather data

Climate scientists do use basic machine learning techniques, for example PCA analysis for dimensionality reduction (Monahan et al., 2009), and k-means analysis for clustering (Steinhaeuser et al., 2011). However, the climate science community primarily relies on expert engineered systems and ad-hoc rules for characterizing climate and weather patterns. Of particular relevance is TECA (Toolkit for Extreme Climate Analysis) (Prabhat et al., 2012, 2015), an application of large scale pattern detection on climate data using heuristic methods. A more detailed explanation of how TECA works is given in Section 3. Using the output of TECA analysis (centers of storms and bounding boxes around these centers) as ground truth, Liu et al. (2016) demonstrated for the first time that convolutional architectures could be successfully applied to predict the class label for two extreme weather event types. Their work considered the binary classification task on centered, cropped patches from 2D (single-timestep) multi-channel images. Like Liu et al. (2016) we use TECA's output (centers and bounding boxes) as ground truth, but we build on their work by: 1) using uncropped images, 2) considering the temporal axis of the data, 3) doing multi-class bounding box detection, and 4) taking a semi-supervised approach with a hybrid predictive and reconstructive model.

Some recent work has applied deep learning methods to weather forecasting. Xingjian et al. (2015) have explored a convolutional LSTM architecture (described in Section 2.2) for predicting future precipitation on a local scale (i.e. the size of a city) using radar echo data. In contrast, we focus on extreme event detection on planetary-scale data. Our aim is to capture patterns which are very local in time (e.g. a hurricane may be present in half a dozen sequential frames), compared to the scale of our underlying climate data, consisting of global simulations over many years. As such, 3D CNNs seemed to make more sense for our detection application, compared to LSTMs whose strength is in capturing long-term dependencies.

2.2 Related methods and models

Following the dramatic success of CNNs in static 2D images, a wide variety of CNN architectures have been explored for video, e.g. (Karpathy et al., 2014; Yao et al., 2015; Tran et al., 2014). The details of how CNNs are extended to capture the temporal dimension are important. Karpathy et al. (2014) explore different strategies for fusing information from 2D CNN subcomponents; in contrast, Yao et al. (2015) create 3D volumes of statistics from low level image features. Convolutional networks have also been combined with RNNs (recurrent neural networks) for modeling video and other sequence data, and we briefly review some relevant video models here. The most common and straightforward approach to modeling sequential images is to feed single-frame representations from a CNN at each timestep to an RNN. This approach has been examined for a number of different types of video (Donahue et al., 2015; Ebrahimi Kahou et al., 2015), while Srivastava et al. (2015) have explored an LSTM architecture for the unsupervised learning of video representations using a pretrained CNN representation as input. These architectures separate learning of spatial and temporal features, something which is not desirable for climate patterns.
Another popular model, also used on 1D data, is a convolutional RNN, wherein the hidden-to-hidden transition layer is 1D convolutional (i.e. the state is convolved over time). Ballas et al. (2016) combine these ideas, applying a convolutional RNN to frames processed by a (2D) CNN. The 3D CNNs we use here are based on 3-dimensional convolutional filters, taking the height, width, and time axes into account for each feature map, as opposed to aggregated 2D CNNs. This approach was studied in detail in (Tran et al., 2014). 3D convolutional neural networks have been used for various tasks ranging from human activity recognition (Ji et al., 2013), to large-scale YouTube video classification (Karpathy et al., 2014), and video description (Yao et al., 2015). Hosseini-Asl et al. (2016) use a 3D convolutional autoencoder for diagnosing Alzheimer's disease through MRI - in their case, the 3 dimensions are height, width, and depth. Whitney et al. (2016) use 3D (height, width, depth) filters to predict consecutive frames of a video game for continuation learning. Recent work has also examined ways to use CNNs to generate animated textures and sounds (Xie et al., 2016). This work is similar to our approach in using a 3D convolutional encoder, but where their approach is stochastic and used for generation, ours is deterministic, used for multi-class detection and localization, and also comprises a 3D convolutional decoder for unsupervised learning.

Stepping back, our approach is related conceptually to (Misra et al., 2015), who use semi-supervised learning for bounding-box detection, but their approach uses iterative heuristics with a support vector machine (SVM) classifier, an approach which would not allow learning of spatiotemporal features. Our setup is also similar to recent work from Zhang et al. (2016) (and others) in using a hybrid prediction and autoencoder loss. This strategy has not, to our knowledge, been applied either to multidimensional data or bounding-box prediction, as we do here. Our bounding-box prediction loss is inspired by (Redmon et al., 2015), an approach extended in (Ren et al., 2015), as well as the single shot multiBox detector formulation used in (Liu et al., 2015) and the seminal bounding-box work in OverFeat (Sermanet et al., 2013). Details of this loss are described in Section 4.

3 The ExtremeWeather dataset

3.1 The Data

The climate science community uses three flavors of global datasets: observational products (satellite, gridded weather station); reanalysis products (obtained by assimilating disparate observational products into a climate model); and simulation products. In this study, we analyze output from the third category because we are interested in climate change projection studies. We would like to better understand how Earth's climate will change by the year 2100, and it is only possible to conduct such an analysis on simulation output. Although this dataset contains the past, the performance of deep learning methods on this dataset can still inform the effectiveness of these approaches on future simulations. We consider the CAM5 (Community Atmospheric Model v5) simulation, which is a standardized three-dimensional, physical model of the atmosphere used by the climate community to simulate the global climate (Conley et al., 2012).
When it is configured at 25-km spatial resolution (Wehner et al., 2015), each snapshot of the global atmospheric state in the CAM5 model output is a 768x1152 image, having 16 "channels", each corresponding to a different simulated variable (like surface temperature, surface pressure, precipitation, zonal wind, meridional wind, humidity, cloud fraction, water vapor, etc.). The global climate is simulated at a temporal resolution of 3 hours, giving 8 snapshots (images) per day. The data we provide is from a simulation of 27 years from 1979 to 2005. In total, this gives 78,840 16-channel 768x1152 images.

3.2 The Labels

Ground-truth labels are created for four extreme weather events: Tropical Depressions (TD), Tropical Cyclones (TC), Extra-Tropical Cyclones (ETC) and Atmospheric Rivers (AR), using TECA (Prabhat et al., 2012). TECA generally works by suggesting candidate coordinates for storm centers, selecting only points that follow a certain combination of criteria, which usually involves requiring that the values of various variables (such as pressure, temperature and wind speed) lie between certain thresholds. These candidates are then refined by breaking ties and matching the "same" storms across time (Prabhat et al., 2012). These storm centers are then used as the center coordinates for bounding boxes. The size of the boxes is determined using prior domain knowledge as to how big these storms usually are, as described in (Liu et al., 2016). Every other image (i.e. 4 per day) is labeled due to certain design decisions made during the production run of the TECA code. This gives us 39,420 labeled images.

3.2.1 Issues with the Labels

TECA, the ground truth labeling framework, implements heuristics to assign "ground truth" labels for the four types of extreme weather events. However, it is entirely possible there are errors in the labeling: for instance, there is little agreement in the climate community on a standard heuristic for capturing Extra-Tropical Cyclones (Neu et al., 2013); Atmospheric Rivers have been extensively studied in the northern hemisphere (Lavers et al., 2012; Dettinger et al., 2011), but not in the southern hemisphere; and the spatial extents of such events are not universally agreed upon. In addition, this labeling only includes ARs in the US and not in Europe. As such, there is potential for many false negatives, resulting in partially annotated images. Lastly, it is worth mentioning that because the ground truth generation is a simple automated method, a deep, supervised method can only do as well as emulating this class of simple functions. This, in addition to lower representation for some classes (AR and TD), is part of our motivation in exploring semi-supervised methods to better understand the features underlying extreme weather events rather than trying to "beat" existing techniques.

3.3 Suggested Train/Test Splits

We provide suggested train/test splits for the varying sizes of datasets on which we run experiments. Table 1 shows the years used for train and test for each dataset size. We show "small" (2 years train, 1 year test), "medium" (8 years train, 2 years test) and "large" (22 years train, 5 years test) datasets. For reference, Table 2 shows the breakdown of the dataset splits for each class for "small" in order to illustrate the class imbalance present in the dataset. Our model was trained on "small", where we split the train set 50:50 for train and validation. Links for downloading train and test data, as well as further information on the different dataset sizes and splits, can be found here: extremeweatherdataset.github.io.

Table 1: Three benchmarking levels for the ExtremeWeather dataset

Level    Train                                  Test
Small    1979, 1981                             1984
Medium   1979-1983, 1989-1991                   1984-1985
Large    1979-1983, 1989-1993, 1994-2005        1984-1988
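The raw dimensions quoted above and the suggested "small" split can be sanity-checked with a few lines of Python (illustrative only; no particular file layout or naming scheme is assumed):

```python
# Back-of-the-envelope check of the dataset dimensions stated in Section 3:
# 27 years of 3-hourly snapshots, labels on every other image.
years = 27
snapshots_per_day = 8          # 3-hour temporal resolution
images = years * 365 * snapshots_per_day
print(images)                  # 78840 16-channel 768x1152 images
print(images // 2)             # 39420 labeled images (4 labeled per day)

image_shape = (16, 768, 1152)  # (simulated variables, latitude, longitude)
small_split = {"train": [1979, 1981], "test": [1984]}
```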
Links for downloading train and test data, as well as further information the different dataset sizes and splits can be found here: extremeweatherdataset.github.io. Table 1: Three benchmarking levels for the ExtremeWeather dataset Level Train Test Small Medium Large 1979, 1981 1979-1983,1989-1991 1979-1983, 1994-2005, 1989-1993 1984 1984-1985 1984-1988 4 Table 2: Number of examples in ExtremeWeather benchmark splits, with class breakdown statistics for Tropical Cyclones (TC), Extra-Tropical Cyclones (ETC), Tropical Depressions (TD), and United States Atmospheric Rivers (US-AR) 4 Benchmark Split TC (%) ETC (%) TD (%) US-AR (%) Total Small Train Test 3190 (42.32) 2882 (39.04) 3510 (46.57) 3430 (46.47) 433 (5.74) 697 (9.44) 404 (5.36) 372 (5.04) 7537 7381 The model We use a 3D convolutional encoder-decoder architecture, meaning that the filters of the convolutional encoder and decoder are 3 dimensional (height, width, time). The architecture is shown in Figure 1; the encoder uses convolution at each layer while the decoder is the equivalent structure in reverse, using tied weights and deconvolutional layers, with leaky ReLUs (Andrew L. Maas & Ng., 2013) (0.1) after each layer. As we take a semi-supervised approach, the code (bottleneck) layer of the autoencoder is used as the input to the loss layers, which make predictions for (1) bounding box location and size, (2) class associated with the bounding box, and (3) the confidence (sometimes called ?objectness?) of the bounding box. Further details (filter size, stride length, padding, output sizes, etc.) can be found in the supplementary materials. Figure 1: Diagram of the 3D semi-supervised architecture. Parentheses denote subset of total dimension shown (for ease of visualization, only two feature maps per layer are shown for the encoder-decoder. All feature maps are shown for bounding-box regression layers). The total loss for the network, L, is a weighted combination of supervised bounding-box regression loss, Lsup , and unsupervised reconstruction error, Lrec : L = Lsup + ?Lrec , (1) where Lrec is the mean squared squared difference between input X and reconstruction X ? : Lrec = 1 ||X ? X ? ||22 , M (2) where M is the total number of pixels in an image. In order to regress bounding boxes, we split the original 768x1152 image into a 12x18 grid of 64x64 anchor boxes. We then predict a box at each grid point by transforming the representation to 12x18=216 scores (one per anchor box). Each score encodes three pieces of information: (1) how much the predicted box differs in size and location from the anchor box, (2) the confidence that an object of interest is in the predicted box (?objectness?), and (3) the class probability distribution for that object. Each component of the score is computed by several 3x3 convolutions applied to the 640 12x18 feature maps of the last encoder layer. Because each set of pixels in each feature map at a given x, y coordinate can be thought of as a learned representation of the climate data in a 64x64 patch of the input image, we can think of the 3x3 convolutions as having a local receptive field size of 192x192, so they use a representation of a 192x192 neighborhood from the input image as context to determine the box and object centered in the given 64x64 patch. Our approach is similar to (Liu et al., 2015) and (Sermanet et al., 2013), which use convolutions from small local receptive field filters to 5 regress boxes. 
This choice is motivated by the fact that extreme weather events occur in relatively small spatiotemporal volumes, with the ?background? context being highly consistent across event types and between events and non-events. This is in contrast to Redmon et al. (2015), which uses a fully connected layer to consider the whole image as context, appropriate for the task of object identification in natural images, where there is often a strong relationship between background and object. The bounding box regression loss, Lsup , is determined as follows: Lsup = 1 (Lbox + Lconf + Lcls ), N where N is the number of time steps in the minibatch, and Lbox is defined as: X obj X obj Lbox = ? 1i R(ui ? u?i ) + ? 1i R(vi ? vi? ), i (3) (4) i where i ? [0, 216) is the index of the anchor box for the ith grid point, and where 1obj = 1 if an i object is present at the ith grid point, 0 if not; R(z) is the smooth L1 loss as used in (Ren et al., 2015), ui = (tx , ty )i and u?i = (t?x , t?y )i , vi = (tw , th )i and vi? = (t?w , t?h )i and t is the parametrization defined in (Ren et al., 2015) such that: tx = (x ? xa )/wa , ty = (y ? ya )/ha , tw = log(w/wa ), th = log(h/ha ) t?x = (x? ? xa )/wa , t?y = (y ? ? ya )/ha , t?w = log(w? /wa ), t?h = log(h? /ha ), where (xa , ya , wa , ha ) is the center coordinates and height and width of the closest anchor box, (x, y, w, h) are the predicted coordinates and (x? , y ? , w? , h? ) are the ground truth coordinates. Lconf is the weighted cross-entropy of the log-probability of an object being present in a grid cell: X obj X noobj Lconf = 1i [? log(p(obj)i )] + ? ? 1i [? log(p(obj i ))] (5) i i Finally Lcls is the cross-entropy between the one-hot encoded class distribution and the softmax predicted class distribution, evaluated only for predicted boxes at the grid points containing a ground truth box: X obj X Lcls = 1i ?p? (c) log(p(c)) (6) i c?classes The formulation of Lsup is similar in spirit to YOLO (Redmon et al., 2015), with a few important differences. Firstly, the object confidence and class probability terms in YOLO are squared-differences between ground truth and prediction, while we use cross-entropy, as used in the region proposal network from Faster R-CNN (Ren et al., 2015) and the network from (Liu et al., 2015), for the object probability term and the class probability term respectively. Secondly, we use a different parametrization for the coordinates and the size of the bounding box. In YOLO, the parametrizations for x and y are equivalent to Faster R-CNN?s tx and ty , for an anchor box the same size as the patch it represents (64x64). However w and h in YOLO are equivalent to Faster-RCNN?s th and tw for a 64x64 anchor box only if (a) the anchor box had a height and width equal to the size of the whole image and (b) there were no log transform in the faster-RCNN?s parametrization. We find both these differences to be important in practice. Without the log term, and using ReLU nonlinearities initialized (as is standard) centered around 0, most outputs (more than half) will give initial boxes that are in 0 height and width. This makes learning very slow, as the network must learn to resize essentially empty boxes. Adding the log term alone in effect makes the "default" box (an output of 0) equal to the height and width of the entire image - this equally slows down learning, because the network must now learn to drastically shrink boxes. Making ha and wa equal to 64x64 is a pragmatic ?Goldilocks? value. 
5 Experiments and Discussion

5.1 Framewise Reconstruction

As a simple experiment, we first train a 2D convolutional autoencoder on the data, treating each timestep as an individual training example (everything else about the model is as described in Section 4), in order to visually assess reconstructions and ensure reasonable accuracy of detection. Figure 2 shows the original and reconstructed feature maps for the 16 climate variables of one image in the training set. Reconstruction loss on the validation set was similar to the training set. As the reconstruction visualizations suggest, the convolutional autoencoder architecture does a good job of encoding spatial information from climate images.

Figure 2: Feature maps for the 16 channels in an "image" from the training set (left) and their reconstructions from the 2D convolutional autoencoder (right).

5.2 Detection and localization

All experiments are on ExtremeWeather-small, as described in Section 3, where 1979 is train and 1981 is validation. The model is trained with Adam (Kingma & Ba, 2014), with a learning rate of 0.0001 and a weight decay coefficient of 0.0005. For comparison, and to evaluate how useful the time axis is for recognizing extreme weather events, we run experiments with both 2D (width, height) and 3D (width, height, time) versions of the architecture described in Section 4. Values for α, β and γ (the hyperparameters described in loss Equations 4 and 5) were selected with experimentation and some inspiration from (Redmon et al., 2015) to be 5, 7 and 0.5 respectively. A lower value for γ pushes up the confidence of true positive examples, allowing the model more examples to learn from, and is thus a way to deal with ground-truth false negatives. Although some of the selection of these parameters is a bit ad hoc, we assert that our results still provide a good first-pass baseline approach for this dataset. The code is available at https://github.com/eracah/hur-detect.

During training, we input one day's simulation at a time (8 time steps; 16 variables). The semi-supervised experiments reconstruct all 8 time steps, predicting bounding boxes for the 4 labelled timesteps, while the supervised experiments reconstruct and predict bounding boxes only for the 4 labelled timesteps. Table 3 shows Mean Average Precision (mAP) for each experiment. Average Precision (AP) is calculated for each class in the manner of ImageNet (Russakovsky et al., 2015), integrating the precision-recall curve, and mAP is averaged over classes. Results are shown for various settings of λ (see Equation 1) and for two modes of evaluation: at IOU (intersection over union of the bounding-box and ground-truth box) thresholds of 0.1 and 0.5. Because the 3D model has inherently higher capacity (in terms of number of parameters) than the 2D model, we also experiment with higher capacity 2D models by doubling the number of filters in each layer. Figure 3 shows bounding box predictions for 2 consecutive (6 hours in between) simulation frames, comparing the 3D supervised vs 3D semi-supervised model predictions.
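For concreteness, the IOU matching and AP integration used in this evaluation can be sketched as follows. This is a deliberately simplified single-class version; the full matching protocol follows the ImageNet-style evaluation cited above:

```python
# IOU between two boxes and AP as the integral of the precision-recall curve.
# Condensed illustration of the evaluation described in Section 5.2; not the
# exact evaluation code.
import numpy as np

def iou(a, b):
    """Boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def average_precision(is_tp, n_gt):
    """is_tp: 1/0 flags of detections sorted by descending confidence,
    where a detection counts as a true positive if its IOU with an unmatched
    ground-truth box exceeds the chosen threshold (0.1 or 0.5 here)."""
    tp = np.cumsum(is_tp)
    precision = tp / np.arange(1, len(is_tp) + 1)
    recall = tp / n_gt
    return float(np.trapz(precision, recall))  # integrate the P-R curve

print(average_precision(np.array([1, 1, 0, 1, 0]), n_gt=4))
```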
It is interesting to note that 3D models perform significantly better than their 2D counterparts for the ETC and TC (hurricane) classes. This implies that the time evolution of these weather events is an important criterion for discriminating them. In addition, the semi-supervised model significantly improves the ETC and TC performance, which suggests unsupervised shaping of the spatio-temporal representation is important for these events. Similarly, semi-supervised data improves performance of the 3D model (for IOU=0.1), while this effect is not observed for 2D models, suggesting that 3D representations benefit more from unsupervised data. Note that hyperparameters were tuned in the supervised setting, and a more thorough hyperparameter search for λ and other parameters may yield better semi-supervised results.

Figure 3 shows qualitatively what the quantitative results in Table 3 confirm - semi-supervised approaches help with rough localization of weather events, but the model struggles to achieve accurate boxes. As mentioned in Section 4, the network has a hard time adjusting the size of the boxes. As such, in this figure we see mostly boxes of size 64x64. For example, for TDs (usually much smaller than 64x64) and for ARs (always much bigger than 64x64), a 64x64 box roughly centered on the event is sufficient to count as a true positive at IOU=0.1, but not at the more stringent IOU=0.5. This leads to a large dropoff in performance for ARs and TDs, and a sizable dropoff in the (variably-sized) TCs. Longer training time could potentially help address these issues.

Table 3: 2D and 3D supervised and semi-supervised results, showing Mean Average Precision (mAP) and Average Precision (AP) for each class, at IOU=0.1; IOU=0.5. M is model; P is millions of parameters; and λ weights the amount that reconstruction contributes to the overall loss.

M    Mode   P       λ    ETC (46.47%)    TC (39.04%)     TD (9.44%)      AR (5.04%)      mAP
                         AP (%)          AP (%)          AP (%)          AP (%)
2D   Sup    66.53   0    21.92; 14.42    52.26; 9.23     95.91; 10.76    35.61; 33.51    51.42; 16.98
2D   Semi   66.53   1    18.05; 5.00     52.37; 5.26     97.69; 14.60    36.33; 0.00     51.11; 6.21
2D   Semi   66.53   10   15.57; 5.87     44.22; 2.53     98.99; 28.56    36.61; 0.00     48.85; 9.24
2D   Sup    16.68   0    13.90; 5.25     49.74; 15.33    97.58; 7.56     35.63; 33.84    49.21; 15.49
2D   Semi   16.68   1    15.80; 9.62     39.49; 4.84     99.50; 3.26     21.26; 13.12    44.01; 7.71
3D   Sup    50.02   0    22.65; 15.53    50.01; 9.12     97.31; 3.81     34.05; 17.94    51.00; 11.60
3D   Semi   50.02   1    24.74; 14.46    56.40; 9.00     96.57; 5.80     33.95; 0.00     52.92; 7.31

Figure 3: Bounding box predictions shown on 2 consecutive (6 hours in between) simulation frames, for the integrated water vapor column channel. Green = ground truth, Red = high confidence predictions (confidence above 0.8). 3D supervised model (left), and semi-supervised (right).

5.3 Feature exploration

In order to explore learned representations, we use t-SNE (van der Maaten & Hinton, 2008) to visualize the autoencoder bottleneck (last encoder layer). Figure 4 shows the projected feature maps for the first 7 days in the training set for both the 3D supervised (top) and semi-supervised (bottom) experiments. Comparing the two, it appears that more TCs (hurricanes) are clustered by the semi-supervised model, which would fit with the result that semi-supervised information is particularly valuable for this class. Viewing the feature maps, we can see that both models have learned spiral patterns for TCs and ETCs.
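This feature exploration step is straightforward to reproduce in outline: flatten the bottleneck into 216 vectors of length 640 per frame and project them to two dimensions. The sketch below uses scikit-learn's TSNE on random stand-in data (we assume 4 labelled time steps per day, as in Section 3.2; real features would come from the trained encoder):

```python
# Project bottleneck feature vectors (one 640-d vector per 12x18 grid cell,
# i.e. per 64x64 input patch) to 2-D with t-SNE, as in Figure 4. The code
# tensor here is random stand-in data, not real encoder features.
import numpy as np
from sklearn.manifold import TSNE

frames = 28                                   # 7 days x 4 labelled steps per day (assumed)
code = np.random.randn(frames, 640, 12, 18)   # stand-in for the encoder bottleneck

vectors = code.transpose(0, 2, 3, 1).reshape(-1, 640)   # (frames*216, 640)
embedding = TSNE(n_components=2).fit_transform(vectors)
print(embedding.shape)                        # (frames*216, 2)
```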
Figure 4: t-SNE visualisation of the first 7 days in the training set for the 3D supervised (top) and semi-supervised (bottom) experiments. Each frame (time step) in the 7 days has 12x18 = 216 vectors of length 640 (the number of feature maps in the code layer), where each pixel in the 12x18 patch corresponds to a 64x64 patch in the original frame. These vectors are projected by t-SNE to two dimensions. For both supervised and semi-supervised, we have zoomed into two dense clusters and sampled 64x64 patches to show what that feature map has learned. Grey = unlabelled, Yellow = tropical depression (not shown), Green = TC (hurricane), Blue = ETC, Red = AR.

6 Conclusions and Future Work

We introduce to the community the ExtremeWeather dataset in hopes of encouraging new research into unique, difficult, and socially and scientifically important datasets. We also present a baseline method for comparison on this new dataset. The baseline explores semi-supervised methods for object detection and bounding box prediction using 3D autoencoding CNNs. These architectures and approaches are motivated by finding extreme weather patterns, a meaningful and important problem for society. Thus far, the climate science community has used hand-engineered criteria to characterize patterns. Our results indicate that there is much promise in considering deep learning based approaches. Future work will investigate ways to improve bounding-box accuracy, although even rough localizations can be very useful as a data exploration tool, or an initial step in a larger decision-making system. Further interpretation and visualization of learned features could lead to better heuristics, and to understanding of the way different variables contribute to extreme weather events. Insights in this paper come from only a fraction of the available data, and we have not explored such challenging topics as anomaly detection, partial annotation detection and transfer learning (e.g. to satellite imagery). Moreover, learning to generate future frames using GANs (Goodfellow et al., 2014) or other deep generative models, while using performance on a detection model to measure the quality of the generated frames, could be another very interesting future direction. We make the ExtremeWeather dataset available in hopes of enabling and encouraging the machine learning community to pursue these directions. The retirement of ImageNet this year (Russakovsky et al., 2017) marks the end of an era in deep learning and computer vision. We believe the era to come should be defined by data of social importance, pushing the boundaries of what we know how to model.

Acknowledgments

This research used resources of the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Code relies on the open-source deep learning frameworks Theano (Bergstra et al.; Team et al., 2016) and Lasagne (Team, 2016), whose developers we gratefully acknowledge. We thank Samsung and Google for support that helped make this research possible. We would also like to thank Yunjie Liu and Michael Wehner for providing access to the climate datasets; and Alex Lamb and Thorsten Kurth for helpful discussions.

References

Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. ICML Workshop on Deep Learning for Audio, Speech, and Language Processing, 2013.
Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville. Delving deeper into convolutional networks for learning video representations. In Proceedings of ICLR. arXiv preprint arXiv:1511.06432, 2016.

Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2722–2730, 2015.

Andrew J Conley, Rolando Garcia, Doug Kinnison, Jean-Francois Lamarque, Dan Marsh, Mike Mills, Anne K Smith, Simone Tilmes, Francis Vitt, Hugh Morrison, et al. Description of the NCAR Community Atmosphere Model (CAM 5.0). 2012.

Michael D. Dettinger, Fred Martin Ralph, Tapash Das, Paul J. Neiman, and Daniel R. Cayan. Atmospheric rivers, floods and the water resources of California. Water, 3(2):445, 2011. ISSN 2073-4441. URL http://www.mdpi.com/2073-4441/3/2/445.

Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.

Samira Ebrahimi Kahou, Vincent Michalski, Kishore Konda, Roland Memisevic, and Christopher Pal. Recurrent neural networks for emotion recognition in video. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 467–474. ACM, 2015.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.

Raghav Goyal, Samira Kahou, Vincent Michalski, Joanna Materzyńska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The "something something" video database for learning and evaluating visual common sense. arXiv preprint arXiv:1706.04261, 2017.

Chunhui Gu, Chen Sun, Sudheendra Vijayanarasimhan, Caroline Pantofaru, David A Ross, George Toderici, Yeqing Li, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, et al. Ava: A video dataset of spatio-temporally localized atomic visual actions. arXiv preprint arXiv:1705.08421, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015.

Ehsan Hosseini-Asl, Georgy Gimel'farb, and Ayman El-Baz. Alzheimer's disease diagnostics by a deeply supervised adaptable 3d convolutional network. 2016.

Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3d convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221–231, 2013.

Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732, 2014.

Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

David A. Lavers, Gabriele Villarini, Richard P. Allan, Eric F. Wood, and Andrew J. Wade. The detection of atmospheric rivers in atmospheric reanalyses and their links to British winter floods and the large-scale climatic circulation. Journal of Geophysical Research: Atmospheres, 117(D20):n/a–n/a, 2012. ISSN 2156-2202. doi: 10.1029/2012JD018027. URL http://dx.doi.org/10.1029/2012JD018027. D20106.

Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, and Scott Reed. Ssd: Single shot multibox detector. arXiv preprint arXiv:1512.02325, 2015.

Yunjie Liu, Evan Racah, Prabhat, Joaquin Correa, Amir Khosrowshahi, David Lavers, Kenneth Kunkel, Michael Wehner, and William Collins. Application of deep convolutional neural networks for detecting extreme weather in climate datasets. 2016.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow. Adversarial autoencoders. CoRR, abs/1511.05644, 2015. URL http://arxiv.org/abs/1511.05644.

Ishan Misra, Abhinav Shrivastava, and Martial Hebert. Watch and learn: Semi-supervised learning of object detectors from videos. CoRR, abs/1505.05769, 2015. URL http://arxiv.org/abs/1505.05769.

Adam H Monahan, John C Fyfe, Maarten HP Ambaum, David B Stephenson, and Gerald R North. Empirical orthogonal functions: The medium is the message. Journal of Climate, 22(24):6501–6514, 2009.

Urs Neu, Mirseid G. Akperov, Nina Bellenbaum, Rasmus Benestad, Richard Blender, Rodrigo Caballero, Angela Cocozza, Helen F. Dacre, Yang Feng, Klaus Fraedrich, Jens Grieger, Sergey Gulev, John Hanley, Tim Hewson, Masaru Inatsu, Kevin Keay, Sarah F. Kew, Ina Kindem, Gregor C. Leckebusch, Margarida L. R. Liberato, Piero Lionello, Igor I. Mokhov, Joaquim G. Pinto, Christoph C. Raible, Marco Reale, Irina Rudeva, Mareike Schuster, Ian Simmonds, Mark Sinclair, Michael Sprenger, Natalia D. Tilinina, Isabel F. Trigo, Sven Ulbrich, Uwe Ulbrich, Xiaolan L. Wang, and Heini Wernli. Imilast: A community effort to intercompare extratropical cyclone detection and tracking algorithms. Bulletin of the American Meteorological Society, 94(4):529–547, 2013. doi: 10.1175/BAMS-D-11-00154.1.

Omkar M Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep face recognition. In British Machine Vision Conference, volume 1, pp. 6, 2015.

Prabhat, Oliver Rubel, Surendra Byna, Kesheng Wu, Fuyu Li, Michael Wehner, and Wes Bethel. Teca: A parallel toolkit for extreme climate analysis. ICCS, 2012.

Prabhat, Surendra Byna, Venkatram Vishwanath, Eli Dart, Michael Wehner, and William D. Collins. Teca: Petascale pattern recognition for climate science. CAIP, 2015.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546–3554, 2015.

Joseph Redmon, Santosh Kumar Divvala, Ross B. Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. CoRR, abs/1506.02640, 2015. URL http://arxiv.org/abs/1506.02640.

Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. 2015.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al.
Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Olga Russakovsky, Eunbyung Park, Wei Liu, Jia Deng, Fei-Fei Li, and Alex Berg. Beyond imagenet large scale visual recognition challenge, 2017. URL http://image-net.org/challenges/beyond_ilsvrc.php.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. 2016.

Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using lstms. CoRR, abs/1502.04681, 2, 2015.

Karsten Steinhaeuser, Nitesh Chawla, and Auroop Ganguly. Comparing predictive power in climate data: Clustering matters. Advances in Spatial and Temporal Databases, pp. 39–55, 2011.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.

Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. 2014.

L.J.P van der Maaten and G.E. Hinton. Visualizing high-dimensional data using t-sne. Journal of Machine Learning Research, 9:2579–2605, Nov 2008.

Michael Wehner, Prabhat, Kevin A. Reed, Dáithí Stone, William D. Collins, and Julio Bacmeister. Resolution dependence of future tropical cyclone projections of CAM5.1 in the U.S. CLIVAR hurricane working group idealized configurations. Journal of Climate, 28(10):3905–3925, 2015. doi: 10.1175/JCLI-D-14-00311.1.

William F. Whitney, Michael Chang, Tejas Kulkarni, and Joshua B. Tenenbaum. Understanding visual concepts with continuation learning. 2016.

Jianwen Xie, Song-Chun Zhu, and Ying Nian Wu. Synthesizing dynamic textures and sounds by spatial-temporal generative convnet. 2016.

Shi Xingjian, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems, pp. 802–810, 2015.

Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal structure. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4507–4515, 2015.

Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. arXiv preprint arXiv:1606.06582v1, 2016.

Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann Lecun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.
Process-constrained batch Bayesian Optimisation

Pratibha Vellanki1, Santu Rana1, Sunil Gupta1, David Rubin2, Alessandra Sutti2, Thomas Dorin2, Murray Height2, Paul Sandars3, Svetha Venkatesh1
1 Centre for Pattern Recognition and Data Analytics, Deakin University, Geelong, Australia [pratibha.vellanki, santu.rana, sunil.gupta, [email protected]]
2 Institute for Frontier Materials, GTP Research, Deakin University, Geelong, Australia [d.rubindecelisleal, alessandra.sutti, thomas.dorin, [email protected]]
3 Materials Science and Engineering, Michigan Technological University, USA [[email protected]]

Abstract

Prevailing batch Bayesian optimisation methods allow all control variables to be freely altered at each iteration. Real-world experiments, however, often have physical limitations making it time-consuming to alter all settings for each recommendation in a batch. This gives rise to a unique problem in BO: in a recommended batch, a set of variables that are expensive to experimentally change need to be fixed, while the remaining control variables can be varied. We formulate this as a process-constrained batch Bayesian optimisation problem. We propose two algorithms, pc-BO(basic) and pc-BO(nested). pc-BO(basic) is simpler but lacks a convergence guarantee. In contrast, pc-BO(nested) is slightly more complex but admits convergence analysis. We show that the regret of pc-BO(nested) is sublinear. We demonstrate the performance of both pc-BO(basic) and pc-BO(nested) by optimising benchmark test functions, tuning hyper-parameters of the SVM classifier, optimising the heat-treatment process for an Al-Sc alloy to achieve target hardness, and optimising the short polymer fibre production process.

1 Introduction

Experimental optimisation is used to design almost all products and processes, scientific and industrial, around us. Experimental optimisation involves optimising input control variables in order to achieve a target output. Design of experiments (DOE) [16] is the conventional laboratory and industrial standard methodology used to efficiently plan experiments. The method is rigid: it does not adapt based on the experiments completed so far. This is where Bayesian optimisation offers an effective alternative.

Bayesian optimisation [13, 17] is a powerful probabilistic framework for efficient, global optimisation of expensive, black-box functions. The field is undergoing a recent resurgence, spurred by new theory and problems, and is impacting computer science broadly: tuning complex algorithms [3, 22, 18, 21], combinatorial optimisation [24, 12], reinforcement learning [4]. Usually, a prior belief in the form of a Gaussian process is maintained over the possible set of objective functions, and the posterior is the refined belief after updating the model with experimental data. The updated model is used to seek the most promising location of the function extrema by using a variety of criteria, e.g. expected improvement (EI) and upper confidence bound (UCB). The maximiser of such a criterion function is then recommended for function evaluation. Iteratively the model is updated and recommendations are made until the target outcome is achieved. When concurrent function evaluations are possible, Bayesian optimisation returns multiple suggestions, and this is termed the batch setting.
Figure 1: Examples of real-world applications requiring process constraints. (a) Heat treatment for an Al-Sc alloy: temperature-time profile. (b) Experimental setup for short polymer fibre production, with parameters including channel width, constriction angle, device position, polymer flow and coagulant flow.

Bayesian optimisation with a batch setting has been investigated by [10, 5, 6, 9, 1], wherein different strategies are used to recommend multiple settings at each iteration. In all these methods, all the control variables are free to be altered at each iteration. However, in some situations needing to change all the variables for a single batch may not be efficient, and this leads to the motivation of our process-constrained Bayesian optimisation.

This work has been directly influenced by the way experiments are conducted in many real-world scenarios with a typical limitation on resources. For example, in our work with metallurgists, we were given the task of finding the optimal heat-treatment schedule of an alloy which maximises its strength. Heat treatment involves taking the alloy through a series of exposures to different temperatures for a variable amount of time, as shown in Figure 1a. Typically, a heat-treatment schedule can last for multiple days, so doing one experiment at a time is not efficient. Fortunately, a furnace is big enough to hold multiple samples at the same time. If we have to perform multiple experiments in one batch yet using only one furnace, then we must design our Bayesian optimisation recommendations in such a way that the temperatures across a batch remain the same, whilst still allowing the durations to vary. Samples would be put in the same oven, but would be taken out after a different elapsed time for each step of the heat treatment.

Similar examples abound in other domains of process and product design. For short polymer fibre production, a polymer is injected axially within another flow of a solvent in a particular geometric manifold [20]. A representation of the experimental setup marked with the parameters involved is shown in Figure 1b. When optimising for the yield, it is generally easier to change the flow parameters (pump speed setting) than the device geometry (opening up the enclosure and modifying the physical configuration). Hence in this case as well, it is beneficial to recommend a batch of suggested experiments at a fixed geometry while allowing the flow parameters to vary. Many such examples where the batch recommendations are constrained by the processes involved have been encountered by the authors in realising the potential of Bayesian optimisation for real-world applications.

To construct a more familiar application, we use the hyper-parameter tuning problem for Support Vector Machines (SVM). When we use parallel tuning with batch Bayesian optimisation, it may be useful if all the parallel training runs finish at the same time. This would require fixing the cost parameter, while allowing the other hyper-parameters to vary. Whilst this may or may not be a real concern depending on the use case, we use it here as a case study.

We formulate this unique problem as process-constrained batch Bayesian optimisation. The recommendation schedule needs to constrain a set of variables corresponding to control variables that are experimentally expensive (time, cost, difficulty) to change (constrained set), and varies all the remaining control variables (unconstrained set).
Our approach involves incorporating constraints on the stipulated control parameters and allowing the others to change in an unconstrained manner. The mathematical formulation of our optimisation problem is as follows. We seek $x^* = \arg\max_{x \in \mathcal{X}} f(x)$, and we want a batch Bayesian optimisation sequence $\{\{x_{t,0}, x_{t,1}, \ldots, x_{t,K-1}\}\}_{t=1}^{T}$ such that, for all $t$,
$$x_{t,k} = [x^{uc}_{t,k}\; x^{c}_{t,k}], \qquad x^{c}_{t,k} = x^{c}_{t,k'} \quad \forall k, k' \in \{0, \ldots, K-1\},$$
where $x^{c}_{t,k}$ is the $k$-th constrained variable in the $t$-th batch and similarly $x^{uc}_{t,k}$ is the $k$-th unconstrained variable in the $t$-th batch. $T$ is the total number of iterations and $K$ is the batch size.

We propose two approaches to solve this problem: basic process-constrained Bayesian optimisation (pc-BO(basic)) and nested process-constrained batch Bayesian optimisation (pc-BO(nested)). pc-BO(basic) is an intuitive modification motivated by the work of [5], and pc-BO(nested) is based on a nested Bayesian optimisation method we will describe in Section 3. We formulate the algorithms pc-BO(basic) and pc-BO(nested), and for pc-BO(nested) we present a theoretical analysis to show that the average regret vanishes superlinearly with iterations. We demonstrate the performance of pc-BO(basic) and pc-BO(nested) on both benchmark test functions and real-world problems that involve hyper-parameter tuning for SVM classification for two datasets (breast cancer and biodegradable waste), the industrial problem of the heat-treatment process for an Aluminium-Scandium (Al-Sc) alloy, and another industrial problem of the short polymer fibre production process.

2 Related background

2.1 Bayesian optimisation

Bayesian optimisation is a sequential method for global optimisation of an expensive and unknown black-box function $f$ whose domain is $\mathcal{X}$, to find its maximum $x^* = \arg\max_{x \in \mathcal{X}} f(x)$ (or minimum). It is especially powerful when the function is expensive to evaluate and does not have a closed-form expression, but it is possible to generate noisy observations from experiments. The Gaussian process (GP) is commonly used as a flexible way to place a prior over the unknown function [14]. It is completely described by the mean function $m(x)$ and the covariance function $k(x, x')$, which imply our belief and uncertainty about the objective function. Noisy observations from the experiments are sequentially appended into the model, which in turn updates our belief about the objective function. The acquisition function is a surrogate utility function that takes a known, tractable closed form and allows us to choose the next query point. It is maximised in place of the unknown objective function and is constructed so as to balance exploring regions of high value (mean) against exploiting regions of high uncertainty (variance) across the objective function. The Gaussian process based Upper Confidence Bound (GP-UCB) proposed by [19] is one acquisition function that is shown to achieve sublinear growth in cumulative regret. At the $t$-th iteration it is defined as
$$\alpha_t^{GP\text{-}UCB}(x) = \mu_{t-1}(x) + \sqrt{v \beta_t}\, \sigma_{t-1}(x) \tag{1}$$
where $v = 1$ and $\beta_t = 2\log(t^{d/2+2} \pi^2 / 3\delta)$ is the confidence parameter, $t$ denotes the iteration number, $d$ represents the dimensionality of the data, and $\delta \in (0, 1)$. We are motivated by GP-UCB based methods. Although our approach can be intuitively extended to other acquisition functions, we do not explore this in the current work.
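As a concrete illustration of Equation 1, a minimal sketch of the GP-UCB step over a discrete candidate set is given below. It uses scikit-learn's GaussianProcessRegressor; the function name, kernel choice, and toy data are illustrative assumptions, not the authors' MATLAB implementation.

```python
# Minimal sketch of the GP-UCB acquisition in Equation 1 (illustrative, not the paper's code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ucb_argmax(X_obs, y_obs, candidates, t, d, delta=0.1, v=1.0):
    """Fit a GP posterior on observed data and return the candidate maximising
    mu(x) + sqrt(v * beta_t) * sigma(x), with beta_t = 2*log(t^(d/2+2) * pi^2 / (3*delta))."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)
    beta_t = 2.0 * np.log(t ** (d / 2.0 + 2) * np.pi ** 2 / (3.0 * delta))
    ucb = mu + np.sqrt(v * beta_t) * sigma
    return candidates[np.argmax(ucb)]

# Example: 1-D toy problem with 5 observations and 100 grid candidates.
rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, size=(5, 1))
y_obs = np.sin(6 * X_obs[:, 0]) + 0.1 * rng.standard_normal(5)
grid = np.linspace(0, 1, 100).reshape(-1, 1)
print(gp_ucb_argmax(X_obs, y_obs, grid, t=6, d=1))
```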
2.2 Batch Bayesian optimisation methods

The GP exhibits an interesting characteristic: its predictive variance depends only on the input attributes, while updating its mean requires knowledge of the outcome of the experiment. This leads to a family of strategies for making multiple recommendations. There are several batch Bayesian optimisation algorithms for the unconstrained case. GP-BUCB [6] recommends multiple batch points using the UCB strategy and the aforementioned characteristic. To fill up a batch, it updates the variances with the available attribute information and appends the outcomes temporarily by substituting them with the most recently computed posterior mean. A similar strategy is used in GP-UCB-PE [5], which optimises the unknown function by incorporating some batch elements where uncertainty is high. GP-UCB-PE computes the first batch element using the UCB strategy and recommends the rest of the points by relying only on the predictive variance, not the mean. It has been shown that for these GP-UCB based algorithms the regret can be bounded more tightly than for single-recommendation methods. To the best of our knowledge, these existing batch Bayesian optimisation techniques do not address the process-constrained problem presented in this work. The algorithms proposed in this paper are inspired by the previous approaches but address them in the context of a process-constrained setting.

2.3 Constrained-batch vs. constrained-space optimisation

We refer to the parameters that are not allowed to change (e.g. temperatures for heat treatment, or device geometry for fibre production) as the constrained set and the other parameters (heat-treatment durations or flow parameters) as the unconstrained set. We emphasise that our usage of "constraint" differs from the problem settings presented in the literature, for example in [2, 11, 7, 8], where the parameter values are constrained or the function evaluations are constrained by inequalities. In the problem setting that we present, all the parameters exist in unconstrained space; within each individual batch, however, the constrained variables must share the same value.

3 Proposed method

We recall the maximisation problem from Section 1 as $x^* = \arg\max_{x \in \mathcal{X}} f(x)$. In our case $\mathcal{X} = \mathcal{X}^{uc} \times \mathcal{X}^{c}$, where $\mathcal{X}^{c}$ is the constrained subspace and $\mathcal{X}^{uc}$ is the unconstrained subspace.

Algorithm 1 pc-BO(basic): Basic process-constrained pure exploration batch Bayesian optimisation algorithm.
  while t < MaxIter:
    $x_{t,0} = [x^{uc}_{t,0}\; x^{c}_{t,0}] = \arg\max_{x \in \mathcal{X}} \alpha^{GP\text{-}UCB}(x \mid \mathcal{D})$
    for k = 1, ..., K-1:
      $x^{uc}_{t,k} = \arg\max_{x^{uc} \in \mathcal{X}^{uc}} \sigma\big(x^{uc} \mid \mathcal{D}, x_{t,0}, x^{uc}_{t,k' < k}\big)$
    $\mathcal{D} = \mathcal{D} \cup \big\{\big([x^{uc}_{t,k}\; x^{c}_{t,0}],\, f([x^{uc}_{t,k}\; x^{c}_{t,0}])\big)\big\}_{k=0}^{K-1}$

Algorithm 2 pc-BO(nested): Nested process-constrained batch Bayesian optimisation algorithm.
  while t < MaxIter:
    $x^{c}_{t} = \arg\max_{x^{c} \in \mathcal{X}^{c}} \alpha_{c}^{GP\text{-}UCB}(x^{c} \mid \mathcal{D}_O)$
    $x^{uc}_{t,0} = \arg\max_{x^{uc} \in \mathcal{X}^{uc}} \alpha_{uc}^{GP\text{-}UCB}(x^{uc} \mid \mathcal{D}_I, x^{c}_{t})$
    for k = 1, ..., K-1:
      $x^{uc}_{t,k} = \arg\max_{x^{uc} \in \mathcal{X}^{uc}} \sigma_{uc}\big(x^{uc} \mid \mathcal{D}_I, x^{c}_{t}, x^{uc}_{t,k' < k}\big)$
    $\mathcal{D}_O = \mathcal{D}_O \cup \big\{\big(x^{c}_{t},\, f([(x^{uc}_{t})^{+}\; x^{c}_{t}])\big)\big\}$
    $\mathcal{D}_I = \mathcal{D}_I \cup \big\{\big([x^{uc}_{t,k}\; x^{c}_{t}],\, f([x^{uc}_{t,k}\; x^{c}_{t}])\big)\big\}_{k=0}^{K-1}$

A naive approach to solving the process-constrained problem is to employ any standard batch Bayesian optimisation algorithm in which the first member is generated freely and the subsequent members are filled in by setting the constrained variables to those of the first member; a hypothetical code sketch of this construction follows.
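The sketch below illustrates Algorithm 1's batch-construction step under assumed helpers: `posterior_std` re-fits the GP with the pending (hallucinated) batch inputs to obtain the updated predictive standard deviation, and `gp_ucb_argmax` is as in the earlier sketch. All names, the candidate-grid representation, and the [unconstrained | constrained] column ordering are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of one pc-BO(basic) iteration (Algorithm 1), not the authors' code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def posterior_std(X_obs, X_pending, candidates):
    """Predictive std after conditioning on observed AND pending inputs.
    GP variance depends on inputs only, so pending points get dummy targets."""
    X = np.vstack([X_obs, X_pending]) if len(X_pending) else X_obs
    gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-4, optimizer=None)
    gp.fit(X, np.zeros(len(X)))          # targets are irrelevant for the variance
    _, sigma = gp.predict(candidates, return_std=True)
    return sigma

def pc_bo_basic_batch(X_obs, y_obs, cand_uc, cand_c, K, t, d):
    """Return K points sharing one constrained setting x_c (Algorithm 1 sketch)."""
    full = np.array([[*u, *c] for u in cand_uc for c in cand_c])
    first = gp_ucb_argmax(X_obs, y_obs, full, t, d)   # UCB pick, defined earlier
    x_c = first[len(cand_uc[0]):]                     # constrained part is frozen
    batch = [first]
    for _ in range(K - 1):                            # pure-exploration fill
        fixed = np.array([[*u, *x_c] for u in cand_uc])
        sigma = posterior_std(X_obs, np.array(batch), fixed)
        batch.append(fixed[np.argmax(sigma)])
    return np.array(batch)
```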
We describe this approach as the basic process-constrained pure exploration batch Bayesian optimisation (pc-BO(basic)) algorithm, as detailed in Algorithm 1, where $\alpha^{GP\text{-}UCB}(x \mid \mathcal{D})$ is the acquisition function defined in Equation 1. We note that pc-BO(basic) is an improvisation over the work of [5]. During each iteration, the first batch element is recommended using the UCB strategy. The remaining batch elements, as in GP-UCB-PE, are generated by updating the posterior variance of the GP after the constrained attributes are fixed to those of the first batch element.

We provide an alternate formulation via a nested optimisation problem, called nested process-constrained batch Bayesian optimisation (pc-BO(nested)), with two stages. For each batch, the outer stage optimises the constrained variables, and the inner stage optimises the unconstrained variables. The algorithm is detailed in Algorithm 2, where $\alpha_{c}^{GP\text{-}UCB}(x \mid \mathcal{D})$ is the acquisition function for the outer stage, $\alpha_{uc}^{GP\text{-}UCB}(x \mid \mathcal{D})$ is the acquisition function for the inner stage (both as defined in Equation 1), and $(x^{uc}_t)^{+} = \arg\max_{x^{uc} \in \{x^{uc}_{t,k}\}_{k=0}^{K-1}} f([x^{uc}\; x^{c}_t])$ is the unconstrained batch parameter that yields the best target value for the given constrained parameter $x^c$. We are able to analyse the convergence of pc-BO(nested). It can be expected that in some cases the performance of pc-BO(basic) and pc-BO(nested) are close. The pc-BO(basic) method may be considered simpler, but it lacks guaranteed convergence.

3.1 Convergence analysis for pc-BO(nested)

We now present the analysis of the convergence of pc-BO(nested) as described in Algorithm 2. The outer-stage optimisation problem for $x^c$ with observations $\mathcal{D}_O$ is expressed as
$$(x^c)^* = \arg\max_{x^c \in \mathcal{X}^c} g(x^c), \qquad g(x^c) \triangleq \max_{x^{uc} \in \mathcal{X}^{uc}} f([x^{uc}\; x^c]),$$
where $\max_{x^{uc} \in \mathcal{X}^{uc}} f([x^{uc}\; x^c]) = f([(x^{uc})^{+}\; x^c])$ and
$$\mathcal{D}_O \triangleq \big\{\big(x^c_t,\; f([(x^{uc}_t)^{+}\; x^c_t])\big)\big\}_{t=1}^{T}, \qquad \text{such that } x^c_{t,k} = x^c.$$
The inner-stage optimisation problem for $x^{uc}$ with observations $\mathcal{D}_I$ is expressed as
$$(x^{uc})^* = \arg\max_{x^{uc} \in \mathcal{X}^{uc}} h(x^{uc}), \qquad h(x^{uc}) \triangleq f([x^{uc}\; x^c]), \qquad \mathcal{D}_I \triangleq \Big\{\big\{\big([x^{uc}_{t,k}\; x^c_t],\; f([x^{uc}_{t,k}\; x^c_t])\big)\big\}_{k=0}^{K-1}\Big\}_{t=1}^{T}.$$
This is solved using a Bayesian optimisation routine. Here $(x^{uc})^{+}$ is the unconstrained batch parameter that yields the best target value for the given constrained parameter $x^c$. Unfortunately, as $g(x^c)$ is not directly measurable, we use $f([(x^{uc})^{+}\; x^c])$ as an approximation to it. To address this we use a provable batch Bayesian optimisation algorithm, GP-UCB-PE [5], in the inner stage. The two loops are performed together: in each iteration $t$, the outer loop first recommends a single $x^c_t$, and then the inner loop suggests a batch $\{x^{uc}_{t,k}\}_{k=1}^{K}$. Combining them, we get the process-constrained set of recommendations. We show that together these two Bayesian optimisation loops converge to the optimal solution.

Let us denote $(x^{uc}_t)^{+} = \arg\max_{x^{uc} \in \{x^{uc}_{t,k}\}_{k=1}^{K}} f([x^{uc}\; x^c_t])$. Following that, we can write $g(x^c)$ as
$$g(x^c) = f([(x^{uc}_t)^{*}\; x^c_t]) = f([(x^{uc}_t)^{+}\; x^c_t]) + \Big(f([(x^{uc}_t)^{*}\; x^c_t]) - f([(x^{uc}_t)^{+}\; x^c_t])\Big) = f([(x^{uc}_t)^{+}\; x^c_t]) + r^{uc}_t \tag{2}$$
where $r^{uc}_t$ is the regret of the inner loop. The observational model is given as
$$y^c = g(x^c) + \epsilon = f([(x^{uc}_t)^{+}\; x^c_t]) + r^{uc}_t + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, \sigma^2). \tag{3}$$

Lemma 1. For the regret of the inner loop, $\sum_{t=1}^{T} (r^{K}_t)^2 \le \beta^{uc}_1 C^{uc}_1 \gamma^{uc}_T + \frac{\pi^2}{6}$.

Proof.
As we use GP-UCB-PE for the unconstrained parameter optimisation, we can say that the regret satisfies $r^{K}_t = \min_k r^{k}_t$ for $k = 0, \ldots, K-1$ (Lemma 1, [5]). Hence $r^{K}_t \le r^{0}_t \le 2\sqrt{\beta_1}\, \sigma^{0}_t$. Now, even though every batch recommendation for $x^c$ is only run for one iteration, $\sigma^{0}_t(x_t)$ is computed from the updated GP. Hence the sum of $(\sigma^{0}_t)^2$ can be upper bounded via $\gamma_T$. Thus,
$$\sum_{t=1}^{T} (r^{K}_t)^2 \le \beta^{uc}_1 C^{uc}_1 \gamma^{uc}_T + \frac{\pi^2}{6}. \tag{4}$$
Here $\beta_1 = 2\log(1^{d/2+2}\pi^2/3\delta)$ is the confidence parameter; $C_1 = 8/\log(1 + \sigma^{-2})$; and $\gamma_T = \max_{A \subset \mathcal{X}, |A| = T} I(y_A; f_A)$, assuming $y = f + \epsilon$ with $\epsilon \sim \mathcal{N}(0, \sigma^2/2)$, is the maximum information gain after $T$ rounds. (Please see the supplementary material for the derivation.)

Lemma 2. The variance of $r^{uc}_t$ has order $\sigma^2_{r_t} \le O\big(\frac{1}{t} C^{uc}_1 \beta^{uc}_1 \gamma^{uc}_t + C^{uc}_2\big)$.

Proof. We use the PE algorithm [5] to compute the $K$ recommendations, hence the variance of the regret $r^{uc}_t$ can be bounded above by
$$\sigma^2_{r^{uc}_t} \le E\big((r^{uc}_t)^2\big) \le E\Big(\frac{1}{t} \sum_{t'=0}^{t} (r^{uc}_{t'})^2\Big) = E\Big(\frac{1}{t} \sum_{t'=0}^{t} \min_{k<K} (r^{uc}_{t',k})^2\Big).$$
The second inequality holds since, on average, the gap $r^{uc}_t = g(x^c) - f([(x^{uc}_t)^{+}\; x^c])$ decreases with iteration $t$, for all $x^c \in \mathcal{X}^c$. From Equations 3 and 4, and using Lemmas 4 and 5 of [5], we can write
$$E\Big(\frac{1}{t} \sum_{t'=0}^{t} \min_{k<K} (r^{uc}_{t',k})^2\Big) \le O\Big(\frac{1}{t} C^{uc}_1 \beta^{uc}_1 \gamma^{uc}_t + C^{uc}_2\Big) \tag{5}$$
for some $C^{uc}_1, C^{uc}_2 \in \mathbb{R}$. Here $\gamma_t$ is the maximum information gain over $t$ samples. This concludes the proof.

The following lemma guarantees the existence of a finite $T_0$ after which the noise variance coming from the inner optimisation loop becomes smaller than the noise in the observation model.

Lemma 3. There exists $T_0 < \infty$ for which $\sigma^2_{r^{uc}_{T_0}} \le \sigma^2$.

Proof. In Lemma 1, $C^{uc}_1$, $C^{uc}_2$ and $\beta^{uc}_1$ are fixed constants and $\gamma^{uc}_{tK}$ is sublinear in $t$. Therefore, any quantity of the form $M_1 \cdot \frac{1}{t} C^{uc}_1 \beta^{uc}_1 \gamma^{uc}_t + C^{uc}_2$ also decreases sublinearly with $t$, for any $M_1 \in \mathbb{R}$. Hence the lemma is proved.

Let us denote the instantaneous regret for the outer Bayesian optimisation loop as $r^c_t = g((x^c)^*) - g(x^c_t)$. We can write the average regret after $T$ iterations as
$$\bar{R}_T = \frac{1}{T} \sum_{t} r^c_t \le \frac{1}{T} \sum_{t} \Big(2\sqrt{\beta^c_t}\, \sigma^c_{t-1}(x^c_t) + \frac{2}{t^2}\Big) \le \frac{\sqrt{\gamma^c_T \sum_t (\sigma^c_{t-1}(x^c_t))^2}}{T} + \frac{1}{T} \sum_t \frac{2}{t^2} \tag{6}$$
using Lemma 5.8 of [19] and the Cauchy-Schwartz inequality.

Lemma 4. For the outer Bayesian optimisation, $\lim_{T \to \infty} \bar{R}_T \to 0$.

Proof. From Equation 6,
$$\bar{R}_T \le \sqrt{\frac{2\beta^c_T \sum_{t=1}^{T} (\sigma^c_{t-1}(x^c_t))^2}{T}} + \frac{1}{T}\sum_{t=1}^{T} \frac{2}{t^2} \le 2\sqrt{\frac{\beta^c_T}{T}\Big(\sum_{t=1}^{T_0} (\sigma^c_{t-1}(x^c_t))^2 + \sum_{t=T_0+1}^{T} (\sigma^c_{t-1}(x^c_t))^2\Big)} + \frac{1}{T}\sum_{t=1}^{T} \frac{2}{t^2} \le 2\sqrt{\frac{\beta^c_T}{T}(A_{T_0} + B_T)} + \frac{1}{T}\sum_{t=1}^{T} \frac{2}{t^2}. \tag{7}$$
We then show that $A_{T_0}$ is upper bounded by a constant irrespective of $T$ as long as $T \ge T_0$, and that $B_T$ is sublinear in $T$. Further, $\beta^c_T$ is sublinear in $T$ and $\lim_{T \to \infty} \sum_{t=1}^{T} \frac{1}{t^2} = \frac{\pi^2}{6}$. Hence the right-hand side vanishes as $T \to \infty$. The details of the proof are presented in the supplementary material.

However, in practice using the regret as the upper bound on $r^{uc}_t$ is not necessary, as a tighter upper bound may exist when we know the maximum value of the function (e.g. for hyper-parameter tuning we know that the maximum value of accuracy is 1), and we can safely alter the upper bound to
$$r^{uc}_t \le \min\Big(f^{max} - f([(x^{uc}_t)^{+}\; x^c]),\; 2\sqrt{\beta^{uc}_1}\, \sigma_{t-1}(x^{uc}_0)\Big). \tag{8}$$
The above result holds since Lemma 2 still holds.
Figure 2: Synthetic test function optimisation using pc-BO(nested), pc-BO(basic) and s-BO on the Branin, Ackley, Goldstein-Price and Egg-holder functions (normalised best value so far vs. number of iterations). The zoomed area on the respective scale is shown for Branin and Goldstein-Price.

4 Experiments

We conducted a set of experiments using both synthetic data and real data to demonstrate the performance of pc-BO(basic) and pc-BO(nested). To the best of our knowledge, there are no other methods that can selectively constrain parameters in each batch during Bayesian optimisation. Further, we also show results for the test function optimisation using sequential BO (s-BO) with GP-UCB. The code is implemented in MATLAB and all the experiments are run on an Intel CPU E5-2640 v3 @2.60GHz machine. We use the squared exponential distance kernel. To show the performance, we plot the results as the best outcome so far against the number of iterations performed. The uncertainty bars in the figures pertain to 10 runs of the BO algorithms with different initialisations for a batch of 3 recommendations. The error bars show the standard error, and the graphs show the mean best outcome until the respective iteration.

4.1 Benchmark test function optimisation

In this section, we use benchmark test functions to demonstrate the performance of pc-BO(basic) and pc-BO(nested). We apply the test functions by constraining the second parameter and finding the best configuration across the first (unconstrained) parameter. The Branin, Ackley, Goldstein-Price and Egg-holder functions were optimised using pc-BO(basic) and pc-BO(nested), and the results are shown in Figure 2. From the results, we note that pc-BO(nested) is marginally better than or similar in performance to pc-BO(basic). The results also show that batch Bayesian optimisation is more efficient, in terms of the number of iterations, than a purely sequential approach for the problem at hand.

4.2 Hyper-parameter tuning for SVM

Support vector machines with an RBF kernel require hyper-parameter tuning for cost (C) and gamma (γ). Of these parameters, the cost is a critical parameter that trades off error for generalisation. Consider tuning SVMs in parallel. The cost parameter strongly affects the time required for training an SVM, and it would be inconvenient if one training process took much longer than another. Thus, constraining the cost parameter within a single batch may be a good idea. We use our algorithms to tune both hyper-parameters C and γ, at each batch varying only γ, not C. This is demonstrated on SVM classification problems using two datasets downloaded from the UCI machine learning repository: the Breast Cancer dataset (BCW) and the Bio-degradation dataset (QSAR). BCW has 683 instances with 9 attributes each, where the instances are labelled as benign or malignant tumour as per the diagnosis. The QSAR dataset categorises 1055 chemicals, with 42 attributes, as ready or not-ready biodegradable waste.
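To ground the SVM use case, here is a hypothetical sketch of evaluating one process-constrained batch: C is held fixed across the batch (so parallel trainings take comparable time) while γ varies. The dataset loader and the batch values are illustrative assumptions, not the paper's MATLAB code.

```python
# Hypothetical sketch of evaluating one process-constrained batch for SVM tuning
# (C fixed across the batch, gamma varying).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in for the BCW data

def evaluate_batch(C_fixed, gammas):
    """Cross-validated accuracy for each (C_fixed, gamma) pair in the batch.
    These outcomes would be appended to the GP model before the next recommendation."""
    return [cross_val_score(SVC(C=C_fixed, gamma=g), X, y, cv=5).mean()
            for g in gammas]

# One batch of K=3 recommendations sharing the constrained variable C:
print(evaluate_batch(C_fixed=10.0, gammas=[1e-3, 1e-2, 1e-1]))
```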
The results are plotted as the best accuracy obtained against the number of iterations. We observe from the results in Figure 3 that pc-BO(nested) again performs marginally better than pc-BO(basic) for the BCW dataset. For the QSAR dataset, pc-BO(nested) reaches higher accuracy in fewer iterations than pc-BO(basic) requires.

4.3 Heat treatment for an Al-Sc alloy

Alloy casting involves a heat-treatment process, exposing the cast to different temperatures for select durations, that ensures the target hardness of the alloy. This process is repeated in steps. The underlying physics of heat treatment of an alloy is based on nucleation and growth. During the nucleation process, "new phases" or precipitates are formed when clusters of atoms self-organise. This is a difficult stochastic process that happens at lower temperatures. These precipitates then diffuse together to achieve the requisite target alloy characteristics in the growth step. KWN [15, 23] is the industry-standard precipitation model for the kinetics of the nucleation and growth steps. As a preliminary study, we use this simulator to demonstrate the strength of our algorithm. As explained in the introduction, it is cost efficient to test heat treatment in the real world by varying the times while keeping the temperature constrained in each batch; this allows us to test multiple samples at one go in a single oven. We use the same constraints for our simulator-driven study. We consider a two-stage heat-treatment process. The input to the first stage is the alloy composition, the temperature and the time. The nucleation output of this stage is input to the second stage, along with the temperature and time for the second stage. The final output is the hardness of the material (strength in kPa). To optimise this two-stage heat-treatment process, our inputs are [T1, T2, t1, t2], where [T1, T2] represent temperatures in Celsius and [t1, t2] represent the times in minutes for each stage. Figure 4 shows the results of the heat-treatment process optimisation.

4.4 Short polymer fibre production

Short polymer fibre production is a set of experiments we conducted in collaboration with material scientists at Deakin University. For the production of short polymer fibres, a polymer-rich fluid is injected coaxially into the flow of another solvent in a particular geometric manifold. The parameters included in this experiment are the device position in mm, constriction angle in degrees, channel width in mm, polymer flow in ml/hr, and coagulant speed in cm/s. The final output, the combined utility, is the distance of the length and diameter of the produced polymer from the target polymer. The goal is to optimise the input parameters to obtain a polymer fibre of a desired length and diameter. As explained in the introduction, it is more efficient to test multiple combinations of polymer flow and coagulant speed at a fixed geometric setup in a single batch; a hypothetical encoding of such a batch is sketched below.
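The following minimal sketch shows how a process-constrained batch can be represented as a design matrix for the fibre experiment: the constrained columns (position, angle, width) are tiled across the batch while the unconstrained columns (polymer flow, coagulant speed) vary. The specific candidate values and the column ordering are assumptions for illustration.

```python
# Hypothetical construction of a process-constrained batch design matrix
# for the short-fibre experiment; candidate values are illustrative only.
import numpy as np

def make_batch(constrained, unconstrained_rows):
    """Tile one constrained setting across K rows of unconstrained settings.
    Column order: [polymer_flow, coagulant_speed | position, angle, width]."""
    uc = np.asarray(unconstrained_rows, dtype=float)                  # shape (K, 2)
    c = np.tile(np.asarray(constrained, dtype=float), (len(uc), 1))   # shape (K, 3)
    return np.hstack([uc, c])                                         # shape (K, 5)

batch = make_batch(constrained=[2.0, 30.0, 1.0],            # position, angle, width
                   unconstrained_rows=[[5.0, 10.0],         # flow (ml/hr), speed (cm/s)
                                       [7.5, 15.0],
                                       [10.0, 20.0]])
print(batch)   # 3 experiments sharing the same geometric setup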
Figure 3: Hyper-parameter tuning for SVM-based classification on the Breast Cancer data (BCW) and bio-degradable waste data (QSAR) using pc-BO(nested) and pc-BO(basic) (accuracy vs. number of iterations).

Figure 4: Results for the heat-treatment and short polymer fibre production processes. (a) Experimental result for the Al-Sc heat-treatment profile for a two-stage heat-treatment process using pc-BO(nested) and pc-BO(basic) (alloy hardness vs. number of iterations). (b) Optimisation for short polymer fibre production with position, constriction angle and channel width constrained for each batch; polymer flow and coagulant speed are unconstrained (best combined utility vs. number of iterations). The optimisation is shown for the pc-BO(nested) and pc-BO(basic) algorithms.

The parameters in this experiment are discrete: every parameter takes 3 discrete values, except the constriction angle, which takes 2 discrete values. Coagulant speed and polymer flow are the unconstrained parameters; channel width, constriction angle and position are the constrained parameters. We conducted the experiment in batches of 3. Figure 4 shows the optimisation results for this experiment over 53 iterations.

5 Conclusion

We have identified a new problem in batch Bayesian optimisation, motivated by physical limitations encountered when conducting batch experiments in the real world. It is not always feasible or resource-friendly to change all available settings in scientific and industrial experiments for a batch. We propose process-constrained batch Bayesian optimisation for such applications, where it is preferable to fix the values of some variables in a batch. We propose two approaches to solve the problem of process-constrained batches: pc-BO(basic) and pc-BO(nested). We present an analytical proof of convergence for pc-BO(nested). Synthetic functions and real-world experiments (hyper-parameter tuning for SVM, the alloy heat-treatment process, and the short polymer fibre production process) were optimised using the proposed algorithms. We found that in each of these scenarios pc-BO(nested) is either more efficient than or performs equally well as pc-BO(basic).

Acknowledgements

This research was partially funded by the Australian Government through the Australian Research Council (ARC) and the Telstra-Deakin Centre of Excellence in Big Data and Machine Learning. Prof. Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).

References

[1] J. Azimi, A. Fern, and X. Z. Fern. Batch bayesian optimization via simulation matching. In Advances in Neural Information Processing Systems, pages 109–117, 2010.

[2] J. Azimi, X. Fern, and A. Fern. Budgeted optimization with constrained experiments. Journal of Artificial Intelligence Research, 56:119–152, 2016.

[3] J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546–2554, 2011.

[4] E. Brochu, V. M. Cora, and N. de Freitas.
A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599 (UBC TR-2009-023), 2010.

[5] E. Contal, D. Buffoni, A. Robicquet, and N. Vayatis. Parallel gaussian process optimization with upper confidence bound and pure exploration. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 225–240. Springer, 2013.

[6] T. Desautels, A. Krause, and J. W. Burdick. Parallelizing exploration-exploitation tradeoffs in gaussian process bandit optimization. Journal of Machine Learning Research, 15(1):3873–3923, 2014.

[7] J. R. Gardner, M. J. Kusner, Z. E. Xu, K. Q. Weinberger, and J. P. Cunningham. Bayesian optimization with inequality constraints. In International Conference on Machine Learning, pages 937–945, 2014.

[8] M. A. Gelbart, J. Snoek, and R. P. Adams. Bayesian optimization with unknown constraints. In Uncertainty in Artificial Intelligence, pages 250–259, 2014.

[9] D. Ginsbourger, R. Le Riche, and L. Carraro. A multi-points criterion for deterministic parallel global optimization based on gaussian processes. Technical report, 2008.

[10] J. González, Z. Dai, P. Hennig, and N. D. Lawrence. Batch bayesian optimization via local penalization. In Artificial Intelligence and Statistics, pages 648–657, 2015.

[11] J. M. Hernández-Lobato, M. A. Gelbart, R. P. Adams, M. W. Hoffman, and Z. Ghahramani. A general framework for constrained bayesian optimization using information-based search. Journal of Machine Learning Research, 17(160):1–53, 2016.

[12] F. Hutter, H. H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization, pages 507–523, 2011.

[13] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.

[14] C. E. Rasmussen. Gaussian processes for machine learning. 2006.

[15] J. Robson, M. Jones, and P. Prangnell. Extension of the n-model to predict competing homogeneous and heterogeneous precipitation in al-sc alloys. Acta Materialia, 51(5):1453–1468, 2003.

[16] J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn. Design and analysis of computer experiments. Statistical Science, pages 409–423, 1989.

[17] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016.

[18] J. Snoek, H. Larochelle, and R. P. Adams. Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2960–2968, 2012.

[19] N. Srinivas, A. Krause, S. Kakade, and M. W. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 1015–1022, 2010.

[20] A. Sutti, T. Lin, and X. Wang. Shear-enhanced solution precipitation: a simple process to produce short polymeric nanofibers. Journal of Nanoscience and Nanotechnology, 11(10):8947–8952, 2011.

[21] K. Swersky, J. Snoek, and R. P. Adams. Multi-task bayesian optimization. In Advances in Neural Information Processing Systems, pages 2004–2012, 2013.

[22] C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown. Auto-weka: combined selection and hyperparameter optimization of classification algorithms.
In International Conference on Knowledge Discovery and Data Mining, pages 847?855, 2013. [23] R. Wagner, R. Kampmann, and P. W. Voorhees. Homogeneous Second-Phase Precipitation. Wiley Online Library, 1991. [24] Z. Wang, M. Zoghi, F. Hutter, D. Matheson, and N. de Freitas. Bayesian optimization in high dimensions via random embeddings. In International Joint Conference on Artificial Intelligence, pages 1778?1784, 2013. 10
Bayesian Inference of Individualized Treatment Effects using Multi-task Gaussian Processes

Ahmed M. Alaa
Electrical Engineering Department
University of California, Los Angeles
[email protected]

Mihaela van der Schaar
Department of Engineering Science
University of Oxford
[email protected]

Abstract

Predicated on the increasing abundance of electronic health records, we investigate the problem of inferring individualized treatment effects using observational data. Stemming from the potential outcomes model, we propose a novel multitask learning framework in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS. The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals, which are crucial for realizing the full potential of precision medicine. The impact of selection bias is alleviated via a risk-based empirical Bayes method for adapting the multi-task GP prior, which jointly minimizes the empirical error in factual outcomes and the uncertainty in (unobserved) counterfactual outcomes. We conduct experiments on observational datasets for an interventional social program applied to premature infants, and a left ventricular assist device applied to cardiac patients wait-listed for a heart transplant. In both experiments, we show that our method significantly outperforms the state-of-the-art.

1 Introduction

Clinical trials entail enormous costs: the average costs of multi-phase trials in vital therapeutic areas such as the respiratory system, anesthesia and oncology are $115.3 million, $105.4 million, and $78.6 million, respectively [1]. Moreover, due to the difficulty of patient recruitment, randomized controlled trials often exhibit small sample sizes, which hinders the discovery of heterogeneous therapeutic effects across different patient subgroups [2]. Observational studies are cheaper and quicker alternatives to clinical trials [3, 4]. With the advent of electronic health records (EHRs), currently deployed in more than 75% of hospitals in the U.S. according to the latest ONC data brief (https://www.healthit.gov/sites/default/files/briefs/), there is a growing interest in using machine learning to infer heterogeneous treatment effects from readily available observational data in EHRs. This interest is reflected in recent initiatives such as STRATOS [3], which focuses on guiding observational medical research, in addition to various recent works on causal inference from observational data developed by the machine learning community [4-11].

Motivated by the plethora of EHR data and the potentiality of precision medicine, we address the problem of estimating individualized treatment effects (i.e. causal inference) using observational data. The problem differs from standard supervised learning in that for every subject in an observational cohort, we only observe the "factual" outcome for a specific treatment assignment, but never observe the corresponding "counterfactual" outcome (some works refer to this setting as the "logged bandits with feedback" [12, 13]), without which we can never know the true treatment effect [4-9].
Selection bias creates a discrepancy in the feature distributions for the treated and control patient groups, which makes the problem even harder. Much of the classical work has focused on the simpler problem of estimating average treatment effects via unbiased estimators based on propensity score weighting (see [14] and the references therein). More recent works learn individualized treatment effects via regression models that view the subjects' treatment assignments as input features [4-13]. We provide a thorough review of these works in Section 3.

Contribution. At the heart of this paper lies a novel conception of causal inference as a multi-task learning problem. That is, we view a subject's potential outcomes as the outputs of a vector-valued function in a reproducing kernel Hilbert space (vvRKHS) [15]. We propose a Bayesian approach for learning the treatment effects through a multi-task Gaussian process (GP) prior over the population's potential outcomes. The Bayesian perspective on the multi-task learning problem allows reasoning about the unobserved counterfactual outcomes, giving rise to a loss function that quantifies the Bayesian risk of the estimated treatment effects while taking into account the uncertainty in counterfactual outcomes without explicit propensity modeling. Furthermore, we show that optimizing the multi-task GP hyper-parameters via risk-based empirical Bayes [16] is equivalent to minimizing the empirical error in the factual outcomes, with a regularizer that is proportional to the posterior uncertainty (variance) in counterfactual outcomes. We provide a feature space interpretation of our method showing its relation to previous works on domain adaptation [6, 8], empirical risk minimization [13], and tree-based learning [4, 5, 7, 9].

The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals. With the exception of [5] and [9], all previous works do not associate their estimates with confidence measures, which hinders their applicability in formal medical research. While Bayesian credible sets do not guarantee frequentist coverage, recent results on the "honesty" (i.e. frequentist coverage) of adaptive credible sets in nonparametric regression may extend to our setting [16]. In particular, [Theorem 1, 16] shows that, under some extrapolation conditions, adapting a GP prior via risk-based empirical Bayes guarantees honest credible sets: investigating the validity of these results in our setting is an interesting topic for future research.

2 Problem Setup

We consider the setting in which a specific treatment is applied to a population of subjects, where each subject i possesses a d-dimensional feature X_i ∈ 𝒳, and two (random) potential outcomes Y_i^(1), Y_i^(0) ∈ ℝ that are drawn from a distribution (Y_i^(1), Y_i^(0)) | X_i = x ∼ P(· | X_i = x), and correspond to the subject's response with and without the treatment, respectively. The realized causal effect of the treatment on subject i manifests through the random variable (Y_i^(1) − Y_i^(0)) | X_i = x. Hence, we define the individualized treatment effect (ITE) for subjects with a feature X_i = x as

T(x) = E[ Y_i^(1) − Y_i^(0) | X_i = x ].   (1)

Our goal is to conduct the causal inference task of estimating the function T(x) from an observational dataset D, which typically comprises n independent samples of the random tuple {X_i, W_i, Y_i^(W_i)}, where W_i ∈
{0, 1} is a treatment assignment indicator that indicates whether or not subject i has received the treatment under consideration. The outcomes Y_i^(W_i) and Y_i^(1−W_i) are known as the factual and the counterfactual outcomes, respectively [6, 9]. Treatment assignments are generally dependent on features, i.e. W_i is not independent of X_i. The conditional distribution P(W_i = 1 | X_i = x), also known as the propensity score of subject i [13, 14], reflects the underlying policy for assigning the treatment to subjects. Throughout this paper, we respect the standard assumptions of unconfoundedness (or ignorability) and overlap: this setting is known in the literature as the "potential outcomes model with unconfoundedness" [4-11].

Individual-based causal inference using observational data is challenging. Since we only observe one of the potential outcomes for every subject i, we never observe the treatment effect Y_i^(1) − Y_i^(0) for any of the subjects, and hence we cannot resort to standard supervised learning to estimate T(x). Moreover, the dataset D exhibits selection bias, which may render the estimates of T(x) inaccurate if the treatment assignment for individuals with X_i = x is strongly biased (i.e. P(W_i = 1 | X_i = x) is close to 0 or 1). Since our primary motivation for addressing this problem comes from its application potential in precision medicine, it is important to associate our estimate of T(·) with a pointwise measure of confidence in order to properly guide therapeutic decisions for individual patients.

3 Multi-task Learning for Causal Inference

Vector-valued Potential Outcomes Function. We adopt the following signal-in-white-noise model for the potential outcomes:

Y_i^(w) = f_w(X_i) + ε_{i,w},  w ∈ {0, 1},   (2)

where ε_{i,w} ∼ N(0, σ_w²) is a Gaussian noise variable. It follows from (2) that E[Y_i^(w) | X_i = x] = f_w(x), and hence the ITE can be estimated as T̂(x) = f̂_1(x) − f̂_0(x). Most previous works that estimate T(x) via direct modeling learn a single-output regression model that treats the treatment assignment as an input feature, i.e. f_w(x) = f(x, w) with f(·,·) : 𝒳 × {0, 1} → ℝ, and estimate the ITE as T̂(x) = f̂(x, 1) − f̂(x, 0) [5-9]. We take a different perspective by introducing a new multi-output regression model comprising a potential outcomes (PO) function f(·) : 𝒳 → ℝ², with d inputs (features) and 2 outputs (potential outcomes); the ITE estimate is the projection of the estimated PO function onto the vector e = [−1, 1]^T, i.e. T̂(x) = f̂^T(x) e.

Consistent pointwise estimation of the ITE function T(x) requires restricting the PO function f(x) to a smooth function class [9]. To this end, we model the PO function f(x) as belonging to a vector-valued reproducing kernel Hilbert space (vvRKHS) H_K equipped with an inner product ⟨·, ·⟩_{H_K}, and with a reproducing kernel K : 𝒳 × 𝒳 → ℝ^{2×2}, where K is a (symmetric) positive semi-definite matrix-valued function [15]. Our choice of the vvRKHS is motivated by its algorithmic advantages; by virtue of the representer theorem, we know that learning the PO function entails estimating a finite number of coefficients evaluated at the input points {X_i}_{i=1}^n [17].

Multi-task Learning. The vector-valued model for the PO function conceptualizes causal inference as a multi-task learning problem. That is, D = {X_i, W_i, Y_i^(W_i)}_{i=1}^n can be thought of as comprising training data for two learning tasks with target functions f_0(·) and f_1(·), with W_i acting as the "task index" for the ith training point [15].
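As a toy illustration of this setup, the snippet below (with invented response surfaces and an invented propensity model) generates an observational dataset in which only the factual outcome Y_i^(W_i) is recorded and the treatment assignment depends on the features, so the ITE T(x) is never directly observed.

```python
# Toy illustration of the potential-outcomes setup in Section 2; the response
# surfaces and the propensity model below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 2
X = rng.normal(size=(n, d))

f0 = lambda X: X[:, 0]                        # control response surface
f1 = lambda X: X[:, 0] + 1.0 + 0.5 * X[:, 1]  # treated response surface
T = f1(X) - f0(X)                             # ITE: never observed directly

# Selection bias: treatment assignment depends on the features.
propensity = 1.0 / (1.0 + np.exp(-2.0 * X[:, 1]))
W = rng.binomial(1, propensity)

# Only the factual outcome Y^(W_i) is recorded in D = {X_i, W_i, Y_i}.
Y = np.where(W == 1, f1(X), f0(X)) + 0.1 * rng.normal(size=n)
```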
For an estimated PO function f̂(x), the true loss functional is

L(f̂) = ∫_{x ∈ 𝒳} ( f̂^T(x) e − T(x) )² · P(X = x) dx.   (3)

The loss functional in (3) is known as the precision in estimating heterogeneous effects (PEHE), and is commonly used to quantify the "goodness" of T̂(x) as an estimate of T(x) [4-6, 8]. A conspicuous challenge that arises when learning the "PEHE-optimal" PO function f is that we cannot compute the empirical PEHE for a particular f ∈ H_K since the treatment effect samples {Y_i^(1) − Y_i^(0)}_{i=1}^n are not available in D. On the other hand, using a loss function that evaluates the losses of f_0(x) and f_1(x) separately (as in conventional multi-task learning [Sec. 3.2, 15]) can be highly problematic: in the presence of a strong selection bias, the empirical loss for f(·) with respect to factual outcomes may not generalize to counterfactual outcomes, leading to a large PEHE loss. In order to gain insight into the structure of the optimal PO function, we consider an "oracle" that has access to counterfactual outcomes. For such an oracle, the finite-sample empirical PEHE is

L̃(f̂; K, Y^(W), Y^(1−W)) = (1/n) Σ_{i=1}^n ( f̂^T(X_i) e − (1 − 2W_i)(Y_i^(1−W_i) − Y_i^(W_i)) )²,   (4)

where Y^(W) = [Y_i^(W_i)]_i and Y^(1−W) = [Y_i^(1−W_i)]_i. When Y^(1−W) is accessible, the PEHE-optimal PO function f(·) is given by the following representer theorem.

Theorem 1 (Representer Theorem for Oracle Causal Inference). For any f̂* ∈ H_K satisfying

f̂* = arg min_{f̂ ∈ H_K}  L̃(f̂; K, Y^(W), Y^(1−W)) + λ ‖f̂‖²_{H_K},  λ ∈ ℝ₊,   (5)

we have that T̂*(·) = e^T f̂*(·) ∈ span{K̃(·, X_1), …, K̃(·, X_n)}, where K̃(·, ·) = e^T K(·, ·) e. That is, T̂*(·) admits a representation T̂*(·) = Σ_{i=1}^n α_i K̃(·, X_i), α = [α_1, …, α_n]^T, where

α = (K̃(X, X) + nλI)^{−1} ((1 − 2W) ⊙ (Y^(1−W) − Y^(W))),   (6)

where ⊙ denotes the component-wise product, K̃(X, X) = (K̃(X_i, X_j))_{i,j}, and W = [W_1, …, W_n]^T. □

A Bayesian Perspective. Theorem 1 follows directly from the generalized representer theorem [17] (a proof is provided in [17]), and it implies that regularized empirical PEHE minimization in the vvRKHS is equivalent to Bayesian inference with a Gaussian process (GP) prior [Sec. 2.2, 15]. Therefore, we can interpret T̂*(·) as the posterior mean of T(·) given a GP prior with a covariance kernel K̃, i.e. T ∼ GP(0, K̃). We know from Theorem 1 that K̃ = e^T K e, hence the prior on T(·) is equivalent to a multi-task GP prior on the PO function f(·) with a kernel K, i.e. f ∼ GP(0, K).

The Bayesian view of the problem is advantageous for two reasons. First, as discussed earlier, it allows computing individualized (pointwise) measures of uncertainty in T̂(·) via posterior credible intervals. Second, it allows reasoning about the unobserved counterfactual outcomes in a Bayesian fashion, and hence provides a natural proxy for the oracle learner's empirical PEHE in (4). Let θ ∈ Θ be a kernel hyper-parameter that parametrizes the multi-task GP kernel K_θ. We define the Bayesian PEHE risk R(θ, f̃; D) for a point estimate f̃ as follows:

R(θ, f̃; D) = E_θ[ L̃(f̃; K_θ, Y^(W), Y^(1−W)) | D ].   (7)

The expectation in (7) is taken with respect to Y^(1−W) | D. The Bayesian PEHE risk R(θ, f̃; D) is simply the oracle learner's empirical loss in (4) marginalized over the posterior distribution of the unobserved counterfactuals Y^(1−W), and hence it incorporates the posterior uncertainty in counterfactual outcomes without explicit propensity modeling.
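For concreteness, the oracle solution in (6) amounts to a single linear solve. Below is a minimal NumPy sketch under the hypothetical assumption that counterfactual outcomes were observable, with a stand-in RBF playing the role of K̃.

```python
# Minimal sketch of the oracle solution in Theorem 1 (eq. (6)), assuming the
# counterfactual outcomes y_cf were observable; the RBF below is a stand-in
# for K~(x, x') = e^T K(x, x') e.
import numpy as np

def k_tilde(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def oracle_ite(X, W, y_f, y_cf, lam=0.1):
    n = len(X)
    # (1 - 2W_i) * (Y^(1-W_i) - Y^(W_i)) equals Y_i^(1) - Y_i^(0) for every i.
    targets = (1 - 2 * W) * (y_cf - y_f)
    K = k_tilde(X, X)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), targets)   # eq. (6)
    return lambda Xnew: k_tilde(Xnew, X) @ alpha  # T*(x) = sum_i a_i K~(x, X_i)
```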
The optimal hyper-parameter θ* and interpolant f̃*(·) that minimize the Bayesian PEHE risk are given in the following theorem.

Theorem 2 (Risk-based Empirical Bayes). The minimizer (f̃*, θ*) of R(θ, f̃; D) is given by

f̃* = E_{θ*}[ f | D ],   θ* = arg min_{θ ∈ Θ}  ‖ Y^(W) − E_θ[ f | D ] ‖²₂ + ‖ Var_θ[ Y^(1−W) | D ] ‖₁,

where the first term is the empirical factual error, the second term is the posterior counterfactual variance, Var_θ[· | ·] is the posterior variance, and ‖·‖_p is the p-norm. □

The proof is provided in Appendix A. Theorem 2 shows that hyper-parameter selection via risk-based empirical Bayes is instrumental in alleviating the impact of selection bias. This is because, as the theorem states, θ* minimizes the empirical loss of f̃* with respect to factual outcomes, and uses the posterior variance of the counterfactual outcomes as a regularizer. Hence, θ* carves a kernel that not only fits factual outcomes, but also generalizes well to counterfactuals. It comes as no surprise that f̃* = E_{θ*}[f | D]: E_{θ*}[f | D, Y^(1−W)] is equivalent to the oracle's solution in Theorem 1, hence by the law of iterated expectations, E_{θ*}[f | D] = E_{θ*}[ E_{θ*}[f | D, Y^(1−W)] | D ] is the oracle's solution marginalized over the posterior distribution of counterfactuals.

Figure 1: Pictorial depiction of model selection via risk-based empirical Bayes.

Related Works. A feature space interpretation of Theorem 2 helps create a conceptual equivalence between our method and previous works. For simplicity of exposition, consider a finite-dimensional vvRKHS in which the PO function resides: we can describe such a space in terms of a feature map Φ : 𝒳 → ℝ^p, where K(x, x′) = ⟨Φ(x), Φ(x′)⟩ [Sec. 2.3, 15]. Every PO function f ∈ H_K can be represented as f = ⟨ω, Φ(x)⟩, and hence the two response surfaces f_0(·) and f_1(·) are represented as hyperplanes in the transformed feature space as depicted in Fig. 1 (right). The risk-based empirical Bayes method attempts to find a feature map Φ and two hyperplanes that best fit the factual outcomes (right panel in Fig. 1) while minimizing the posterior variance in counterfactual outcomes (middle panel in Fig. 1). This conception is related to that of counterfactual regression [6, 8], which builds on ideas from co-variate shift and domain adaptation [19] in order to jointly learn a response function f and a "balanced" representation Φ that makes the distributions P(Φ(X_i) | W_i = 1) and P(Φ(X_i) | W_i = 0) similar. Our work differs from [6, 8] in the following aspects. First, our Bayesian multi-task formulation provides a direct estimate of the PEHE: (7) is an unbiased estimator of the finite-sample version of (3). Contrarily, [Eq. 2, 6] creates a coarse proxy for the PEHE by using the nearest-neighbor factual outcomes in replacement of counterfactuals, whereas [Eq. 3, 8] optimizes a generalization bound which may largely overestimate the true PEHE for particular hypothesis classes. [6] optimizes the algorithm's hyper-parameters by assuming (unrealistically) that counterfactuals are available in a held-out sample, whereas [8] uses an ad hoc nearest-neighbor approximation. Moreover, unlike the case in [6], our multi-task formulation protects the interactions between W_i and X_i from being lost in high-dimensional feature spaces. Most of the previous works estimate the ITE via co-variate adjustment (the G-computation formula) [4, 5, 7, 11, 20]; the most remarkable of these methods are the nonparametric Bayesian additive regression trees [5] and causal forests [4, 9]. We provide numerical comparisons with both methods in Section 5.
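To make the selection criterion in Theorem 2 concrete before turning to further related work, here is a minimal sketch of the objective: the empirical factual error plus the 1-norm of the posterior counterfactual variance. The simple kernel over (x, w) pairs below is a placeholder, not the CMGP kernel introduced in Section 4.

```python
# Sketch of the Theorem 2 selection criterion: factual squared error plus the
# L1 norm of the posterior variance at the (unobserved) counterfactual points.
import numpy as np

def kern(P, Q, theta):
    # Placeholder multi-task kernel over (x, w) pairs: an RBF over the features
    # times a simple task correlation; theta = (length_scale, task_corr).
    ls, task_corr = theta
    d2 = ((P[:, None, :-1] - Q[None, :, :-1]) ** 2).sum(-1)
    same_task = (P[:, None, -1] == Q[None, :, -1]).astype(float)
    return np.exp(-0.5 * d2 / ls ** 2) * (same_task + task_corr * (1 - same_task))

def bayesian_pehe_risk(theta, X, W, y, noise=0.1):
    Pf = np.column_stack([X, W])        # factual inputs (x_i, w_i)
    Pc = np.column_stack([X, 1 - W])    # counterfactual inputs (x_i, 1 - w_i)
    Kf = kern(Pf, Pf, theta)
    K = Kf + noise * np.eye(len(X))
    mu_f = Kf @ np.linalg.solve(K, y)   # E_theta[f | D] at the factual points
    Kc = kern(Pc, Pf, theta)
    var_c = np.diag(kern(Pc, Pc, theta) - Kc @ np.linalg.solve(K, Kc.T))
    return np.sum((y - mu_f) ** 2) + np.sum(np.maximum(var_c, 0.0))
```

Minimizing this quantity over theta is what "carving a kernel that generalizes to counterfactuals" means operationally.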
[11] also uses Gaussian processes, but with the focus of modeling treatment response curves over time. Counterfactual risk minimization is another framework that is applicable only when the propensity score P(W_i = 1 | X_i = x) is known [12, 13]. [25] uses deep networks to infer counterfactuals, but requires some of the data to be drawn from a randomized trial.

4 Causal Multi-task Gaussian Processes (CMGPs)

In this section, we provide a recipe for Bayesian causal inference with the prior f ∼ GP(0, K_θ). We call this model a Causal Multi-task Gaussian Process (CMGP).

Constructing the CMGP Kernel. As is often the case in medical settings, the two response surfaces f_0(·) and f_1(·) may display different levels of heterogeneity (smoothness), and may have different relevant features. Standard intrinsic coregionalization models for constructing vector-valued kernels impose the same covariance parameters for all outputs [18], which limits the interaction between the treatment assignments and the patients' features. To that end, we construct a linear model of coregionalization (LMC) [15], which mixes two intrinsic coregionalization models as follows:

K_θ(x, x′) = A_0 k_0(x, x′) + A_1 k_1(x, x′),   (8)

where k_w(x, x′), w ∈ {0, 1}, is the radial basis function (RBF) with automatic relevance determination, i.e. k_w(x, x′) = exp( −(1/2)(x − x′)^T R_w^{−1} (x − x′) ), with R_w = diag(ℓ²_{1,w}, ℓ²_{2,w}, …, ℓ²_{d,w}) and ℓ_{d,w} being the length scale parameter of the dth feature in k_w(·, ·), whereas A_0 and A_1 are given by

A_0 = [[β²_00, ρ_0], [ρ_0, β²_01]],   A_1 = [[β²_10, ρ_1], [ρ_1, β²_11]].   (9)

The parameters (β_ij)_ij and (ρ_i)_i determine the variances and correlations of the two response surfaces f_0(x) and f_1(x). The LMC kernel introduces degrees of freedom that allow the two response surfaces to have different covariance functions and relevant features. When β_00 ≫ β_01 and β_11 ≫ β_10, the length scale parameter ℓ_{d,w} can be interpreted as the relevance of the dth feature to the response surface f_w(·). The set of all hyper-parameters is θ = (σ_0, σ_1, R_0, R_1, A_0, A_1).

Adapting the Prior via Risk-based Empirical Bayes. In order to avoid overfitting to the factual outcomes Y^(W), we evaluate the empirical error in factual outcomes via leave-one-out cross-validation (LOO-CV) with Bayesian regularization [24]; the regularized objective function is thus given by R̃(θ; D) = λ_0 Q(θ) + λ_1 ‖θ‖²₂, where

Q(θ) = ‖ Var_θ[ Y^(1−W) | D ] ‖₁ + Σ_{i=1}^n ( Y_i^(W_i) − E_θ[ f(X_i) | D_{−i} ] )²,   (10)

and D_{−i} is the dataset D with subject i removed, whereas λ_0 and λ_1 are the Bayesian regularization parameters. For the second level of inference, we use the improper Jeffreys prior as an ignorance prior for the regularization parameters, i.e. P(λ_0) ∝ 1/λ_0 and P(λ_1) ∝ 1/λ_1. This allows us to integrate out the regularization parameters [Sec. 2.1, 24], leading to a revised objective function R̃(θ; D) = n log(Q(θ)) + (10 + 2d) log(‖θ‖²₂) [Eq. (15), 24]. It is important to note that LOO-CV with squared loss has often been considered unfavorable in ordinary GP regression, as it leaves one degree of freedom undetermined [Sec. 5.4.2, 5]; this problem does not arise in our setting, since the term ‖Var_θ[Y^(1−W) | D]‖₁ involves all the variance parameters, and hence the objective function R̃(θ; D) does not depend solely on the posterior mean.
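A minimal sketch of the LMC kernel in (8)-(9), returning the 2×2 coregionalization block for a pair of inputs; the numerical values of the length scales and mixing matrices below are arbitrary examples.

```python
# Sketch of the LMC kernel (8)-(9): K(x, x') = A0*k0(x, x') + A1*k1(x, x').
import numpy as np

def ard_rbf(x, xp, lengthscales):
    z = (x - xp) / lengthscales
    return np.exp(-0.5 * np.dot(z, z))

def lmc_kernel(x, xp, R0, R1, A0, A1):
    """R0, R1: per-dimension length scales; A0, A1: 2x2 PSD mixing matrices."""
    return A0 * ard_rbf(x, xp, R0) + A1 * ard_rbf(x, xp, R1)

# Example with d = 3 features; values are illustrative only.
d = 3
R0, R1 = np.ones(d), 2.0 * np.ones(d)
A0 = np.array([[1.0, 0.2], [0.2, 0.1]])   # emphasises the control surface f0
A1 = np.array([[0.1, 0.2], [0.2, 1.0]])   # emphasises the treated surface f1
block = lmc_kernel(np.zeros(d), np.ones(d), R0, R1, A0, A1)  # 2x2 matrix
```

Because A_0 weights k_0 towards f_0 and A_1 weights k_1 towards f_1, the two response surfaces can have different smoothness and different relevant features, which is exactly the flexibility motivated above.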
Causal Inference via CMGPs. Algorithm 1 sums up the entire causal inference procedure. It first invokes the routine Initialize-hyperparameters, which uses the sample variance and upcrossing rate of Y^(W) to initialize θ (see Appendix B). Such an automated initialization procedure allows running our method without any user-defined inputs, which facilitates its usage by researchers conducting observational studies. Having initialized θ (line 3), the algorithm finds a locally optimal θ̂ using gradient descent (lines 5-12), and then estimates the ITE function and the associated credible intervals (lines 13-17). Here X = [{X_i}_{W_i=0}, {X_i}_{W_i=1}]^T, Y = [{Y_i^(W_i)}_{W_i=0}, {Y_i^(W_i)}_{W_i=1}]^T, Σ = diag(σ_0² I_{n−n_1}, σ_1² I_{n_1}), n_1 = Σ_i W_i, erf(x) = (1/√π) ∫_{−x}^{x} e^{−y²} dy, and K_θ(x) = (K_θ(x, X_i))_i.

Algorithm 1: Causal Inference via CMGPs
1: Input: Observational dataset D, Bayesian coverage γ
2: Output: ITE function T̂(x), credible intervals C_γ(x)
3: θ ← Initialize-hyperparameters(D)
4: ϑ_0 ← exp(θ), t ← 0, m_t ← 0, v_t ← 0
5: repeat
6:   m_{t+1} ← β_1 m_t + (1 − β_1) · ϑ_t ⊙ ∇R̃(log(ϑ_t); D)
7:   v_{t+1} ← β_2 v_t + (1 − β_2) · (ϑ_t ⊙ ∇R̃(log(ϑ_t); D))²
8:   m̄_{t+1} ← m_{t+1}/(1 − β_1^t), v̄_{t+1} ← v_{t+1}/(1 − β_2^t)
9:   ϑ_{t+1} ← ϑ_t ⊙ exp( −η · m̄_{t+1}/(√(v̄_{t+1}) + ε) )
10:  t ← t + 1
11: until convergence
12: θ̂ ← log(ϑ_{t−1})
13: Λ_θ̂ ← (K_θ̂(X, X) + Σ)^{−1}
14: T̂(x) ← (K_θ̂^T(x) Λ_θ̂ Y)^T e
15: V(x) ← K_θ̂(x, x) − K_θ̂(x) Λ_θ̂ K_θ̂^T(x)
16: I(x) ← erf^{−1}(γ) (2 e^T V(x) e)^{1/2}
17: C_γ(x) ← [T̂(x) − I(x), T̂(x) + I(x)]

We use a re-parametrized version of the Adaptive Moment Estimation (ADAM) gradient descent algorithm for optimizing θ [21]; we first apply the transformation ϑ = exp(θ) to ensure that all covariance parameters remain positive, and then run ADAM to minimize R̃(log(ϑ_t); D). The ITE function is estimated as the posterior mean of the CMGP (line 14). The credible interval C_γ(x) with a Bayesian coverage of γ for a subject with feature x is defined via P_θ̂(T(x) ∈ C_γ(x)) = γ, and is computed straightforwardly using the error function of the normal distribution (lines 15-17). The computational burden of Algorithm 1 is dominated by the O(n³) matrix inversion in line 13; for large observational studies, this can be ameliorated using conventional sparse approximations [Sec. 8.4, 23].
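The interval computation in lines 14-17 reduces to projecting the 2×2 posterior covariance of (f_0(x), f_1(x)) onto e = [−1, 1]^T; a minimal sketch, taking the posterior moments as given, follows.

```python
# Sketch of lines 14-17 of Algorithm 1: ITE estimate and credible interval
# from the 2x2 posterior moments of the PO function at a point x.
import numpy as np
from scipy.special import erfinv

def ite_credible_interval(mean2, V, gamma=0.95):
    """mean2: posterior mean [m0(x), m1(x)]; V: 2x2 posterior covariance."""
    e = np.array([-1.0, 1.0])
    t_hat = e @ mean2                                       # line 14: T(x)
    half_width = erfinv(gamma) * np.sqrt(2.0 * e @ V @ e)   # line 16: I(x)
    return t_hat, (t_hat - half_width, t_hat + half_width)  # line 17: C(x)

# Example with toy posterior moments:
t, ci = ite_credible_interval(np.array([1.2, 3.0]),
                              np.array([[0.20, 0.05], [0.05, 0.30]]))
```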
5 Experiments

Since the ground truth counterfactual outcomes are never available in real-world observational datasets, evaluating causal inference algorithms is not straightforward. We follow the semi-synthetic experimental setup in [5, 6, 8], where covariates and treatment assignments are real but outcomes are simulated. Experiments are conducted using the IHDP dataset introduced in [5]. We also introduce a new experimental setup using the UNOS dataset: an observational dataset involving end-stage cardiovascular patients wait-listed for heart transplantation. Finally, we illustrate the clinical utility and significance of our algorithm by applying it to the real outcomes in the UNOS dataset.

The IHDP dataset. The Infant Health and Development Program (IHDP) is intended to enhance the cognitive and health status of low birth weight, premature infants through pediatric follow-ups and parent support groups [5]. The semi-simulated dataset in [5, 6, 8] is based on covariates from a real randomized experiment that evaluated the impact of the IHDP on the subjects' IQ scores at the age of three; selection bias is introduced by removing a subset of the treated population. All outcomes (response surfaces) are simulated. The response surface data generation process was not designed to favor our method: we used the standard non-linear "Response Surface B" setting in [5] (also used in [6] and [8]). The dataset comprises 747 subjects (608 control and 139 treated), and there are 25 covariates associated with each subject.

The UNOS dataset (https://www.unos.org/data/). The United Network for Organ Sharing (UNOS) dataset contains information on every heart transplantation event in the U.S. since 1987. The dataset also contains information on patients registered in the heart transplantation wait-list over the years, including those who died before undergoing a transplant. Left Ventricular Assist Devices (LVADs) were introduced in 2001 as a life-saving therapy for patients awaiting a heart donor [26]; the survival benefits of LVADs are very heterogeneous across the patient population, and it is unclear to practitioners how outcomes vary across patient subgroups. It is important to learn the heterogeneous survival benefits of LVADs in order to appropriately re-design the current transplant priority allocation scheme [26]. We extracted a cohort of patients enrolled in the wait-list in 2010; we chose this year since by that time the current continuous-flow LVAD technology had become dominant in practice, and patients have been followed up sufficiently long to assess their survival. (Details of the data processing are provided in Appendix C.) After excluding pediatric patients, the cohort comprised 1,006 patients (774 control and 232 treated), and there were 14 covariates associated with each patient. The outcomes (survival times) generation model is described as follows: σ_0 = σ_1 = 1, f_0(x) = exp((x + 1/2)^T β), and f_1(x) = β^T x − ω, where β is a random vector of regression coefficients sampled uniformly from [0, 0.1, 0.2, 0.3, 0.4], and ω is selected for a given β so as to adjust the average survival benefit to 5 years. In order to increase the selection bias, we estimate the propensity score P(W_i = 1 | X_i = x) using logistic regression, and then, sequentially, with probability 0.5 we remove the control patient whose propensity score is closest to 1, and with probability 0.5 we remove a random control patient. A total of 200 patients are removed, leading to a cohort with 806 patients. The resulting dataset is more biased than IHDP, and hence poses a greater inferential challenge.

Table 1: Results on the IHDP and UNOS datasets (lower √PEHE is better). In-sample and out-of-sample √PEHE are reported for each dataset.

                IHDP                           UNOS
          In-sample     Out-of-sample    In-sample     Out-of-sample
CMGP      0.59 ± 0.01   0.76 ± 0.01      1.7 ± 0.10    1.8 ± 0.13
GP        2.1 ± 0.11    2.3 ± 0.14       4.1 ± 0.15    4.5 ± 0.20
BART      2.0 ± 0.13    2.2 ± 0.17       3.5 ± 0.17    3.9 ± 0.23
CF        2.4 ± 0.21    2.8 ± 0.23       3.8 ± 0.25    4.3 ± 0.31
VTRF      1.4 ± 0.07    2.2 ± 0.16       4.5 ± 0.35    4.9 ± 0.41
CFRF      2.7 ± 0.24    2.9 ± 0.25       4.7 ± 0.21    5.2 ± 0.32
BLR       5.9 ± 0.31    6.1 ± 0.41       5.7 ± 0.21    6.2 ± 0.30
BNN       2.1 ± 0.11    2.2 ± 0.13       3.2 ± 0.10    3.3 ± 0.12
CFRW      1.0 ± 0.07    1.2 ± 0.08       2.7 ± 0.07    2.9 ± 0.11
kNN       3.2 ± 0.12    4.2 ± 0.22       5.2 ± 0.11    5.4 ± 0.12
PSM       4.9 ± 0.31    4.9 ± 0.31       4.6 ± 0.12    4.8 ± 0.16
TML       5.2 ± 0.35    5.2 ± 0.35       6.2 ± 0.31    6.2 ± 0.31

Benchmarks. We compare our algorithm with:
• Tree-based methods (BART [5], causal forests (CF) [4, 9], virtual-twin random forests (VTRF) [7], and counterfactual random forests (CFRF) [7]),
• Balancing counterfactual regression (balancing linear regression (BLR) [6], balancing neural networks (BNN) [6], and counterfactual regression with Wasserstein distance metric (CFRW) [8]),
• Propensity-based and matching methods (k nearest-neighbor (kNN) and propensity score matching (PSM)),
• Doubly-robust methods (targeted maximum likelihood (TML) [22]), and
• Gaussian process-based methods (separate GP regression for the treated and control populations with marginal likelihood maximization (GP)).

Details of all these benchmarks are provided in Appendix D. Following [4-9], we evaluate the performance of all algorithms by reporting the square root of

PEHE = (1/n) Σ_{i=1}^n ( (f̂_1(X_i) − f̂_0(X_i)) − E[Y_i^(1) − Y_i^(0) | X_i = x] )²,

where f̂_1(X_i) − f̂_0(X_i) is the estimated treatment effect. We evaluate the PEHE via a Monte Carlo simulation with 1000 realizations of both the IHDP and UNOS datasets, where in each experiment we run all the benchmarks with 60/20/20 train-validation-test splits. Counterfactuals are never made available to any of the benchmarks. We run Algorithm 1 with a learning rate of 0.01 and with the standard setting prescribed in [21] (i.e. β_1 = 0.9, β_2 = 0.999, ε = 10^{−8}). We report both the in-sample and out-of-sample √PEHE estimates: the former corresponds to the accuracy of the estimated ITE in a retrospective cohort study, whereas the latter corresponds to the performance of a clinical decision support system that provides out-of-sample patients with ITE estimates [8]. The in-sample PEHE metric is non-trivial since we never observe counterfactuals even in the training phase.
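The evaluation metric itself is straightforward to compute on the semi-synthetic data, since the simulated surfaces provide the ground truth; a minimal sketch:

```python
# Sketch of the square-root PEHE evaluation: compare estimated effects against
# the simulated true surfaces f1 - f0 (available only in semi-synthetic setups).
import numpy as np

def sqrt_pehe(t_hat, f1_x, f0_x):
    """t_hat: estimated ITEs; f1_x, f0_x: true simulated response surfaces."""
    return np.sqrt(np.mean((t_hat - (f1_x - f0_x)) ** 2))
```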
Results. As can be seen in Table 1, CMGPs outperform all other benchmarks in terms of the PEHE in both the IHDP and UNOS datasets. The benefit of the risk-based empirical Bayes method manifests in the comparison with ordinary GP regression that fits the treated and control populations by evidence maximization. The performance gain of CMGPs with respect to GPs increases in the UNOS dataset, as it exhibits a larger selection bias; hence naive GP regression tends to fit a function to the factual outcomes that does not generalize well to counterfactuals. Our algorithm also performs better than all other nonparametric tree-based algorithms. In comparison to BART, our algorithm places an adaptive prior on a smooth function space, and hence it is capable of achieving faster posterior contraction rates than BART, which places a prior on a space of discontinuous functions [16]. Similar insights apply to the frequentist random forest algorithms. CMGPs also outperform the different variants of counterfactual regression in both datasets, though CFRW is competitive in the IHDP experiment. BLR performs badly in both datasets, as it balances the distributions of the treated and control populations by variable selection, and hence it throws away informative features for the sake of balancing the selection bias. The performance gain of CMGPs with respect to BNN and CFRW shows that the multi-task learning framework is advantageous: through the linear coregionalization kernel, CMGPs preserve the interactions between W_i and X_i, and hence are capable of capturing highly non-linear (heterogeneous) response surfaces.

Figure 2: Pathway for a representative patient in the UNOS dataset.

6 Discussion: Towards Precision Medicine

To provide insights into the clinical utility of CMGPs, we ran our algorithm on all patients in the UNOS dataset who were wait-listed in the period 2005-2010, and used the real patient survival times as outcomes. The current transplant priority allocation scheme relies on a coarse categorization of patients that does not take into account their individual risks; for instance, all patients who have an LVAD are thought of as benefiting from it equally. We found substantial evidence in the data that this leads to wrong clinical decisions. In particular, we found that 10.3% of wait-list patients for whom an LVAD was implanted exhibit a delayed assignment to a high-priority allocation in the wait-list. One such patient has her pathway depicted in Fig. 2: she was assigned a high priority (status 1A) in June 2013, but died shortly after, before her turn to get a heart transplant. Her late assignment to the high-priority status was caused by an overestimated benefit of the LVAD she got implanted in 2010; that is, the wait-list allocation scheme assumed she would attain the "populational average" survival benefit from the LVAD. Our algorithm had a much more conservative estimate of her survival; since she was diabetic, her individual benefit from the LVAD was less than the populational average. We envision a new priority allocation scheme in which our algorithm is used to allocate priorities based on individual risks in a personalized manner.

References

[1] C. Adams and V. Brantner. Spending on New Drug Development. Health Economics, 19(2):130-141, 2010.
[2] J. C. Foster, M. G. T. Jeremy, and S. J. Ruberg. Subgroup Identification from Randomized Clinical Trial Data. Statistics in Medicine, 30(24):2867-2880, 2011.
[3] W. Sauerbrei, M. Abrahamowicz, D. G. Altman, S. Cessie, and J. Carpenter. Strengthening Analytical Thinking for Observational Studies: the STRATOS Initiative. Statistics in Medicine, 33(30):5413-5432, 2014.
[4] S. Athey and G. Imbens. Recursive Partitioning for Heterogeneous Causal Effects. Proceedings of the National Academy of Sciences, 113(27):7353-7360, 2016.
[5] J. L. Hill. Bayesian Nonparametric Modeling for Causal Inference. Journal of Computational and Graphical Statistics, 2012.
[6] F. D. Johansson, U. Shalit, and D. Sontag. Learning Representations for Counterfactual Inference. In ICML, 2016.
[7] M. Lu, S. Sadiq, D. J. Feaster, and H. Ishwaran. Estimating Individual Treatment Effect in Observational Data using Random Forest Methods. arXiv:1701.05306, 2017.
[8] U. Shalit, F. Johansson, and D. Sontag. Estimating Individual Treatment Effect: Generalization Bounds and Algorithms. arXiv:1606.03976, 2016.
[9] S. Wager and S. Athey. Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. arXiv:1510.04342, 2015.
[10] Y. Xie, J. E. Brand, and B. Jann. Estimating Heterogeneous Treatment Effects with Observational Data. Sociological Methodology, 42(1):314-347, 2012.
[11] Y. Xu, Y. Xu, and S. Saria. A Bayesian Nonparametric Approach for Estimating Individualized Treatment-Response Curves. arXiv:1608.05182, 2016.
[12] M. Dudík, J. Langford, and L. Li. Doubly Robust Policy Evaluation and Learning. In ICML, 2011.
[13] A. Swaminathan and T. Joachims. Batch Learning from Logged Bandit Feedback Through Counterfactual Risk Minimization. Journal of Machine Learning Research, 16(1):1731-1755, 2015.
[14] A. Abadie and G. Imbens. Matching on the Estimated Propensity Score. Econometrica, 84(2):781-807, 2016.
[15] M. A. Alvarez, L. Rosasco, and N. D. Lawrence. Kernels for Vector-valued Functions: A Review. Foundations and Trends in Machine Learning, 4(3):195-266, 2012.
[16] S. Sniekers and A. van der Vaart.
Adaptive Bayesian Credible Sets in Regression with a Gaussian Process Prior. Electronic Journal of Statistics, 9(2):2475-2527, 2015.
[17] B. Schölkopf, R. Herbrich, and A. J. Smola. A Generalized Representer Theorem. In International Conference on Computational Learning Theory, 2001.
[18] E. V. Bonilla, K. M. Chai, and C. Williams. Multi-task Gaussian Process Prediction. In NIPS, 2007.
[19] S. Bickel, M. Brückner, and T. Scheffer. Discriminative Learning under Covariate Shift. Journal of Machine Learning Research, 10(9):2137-2155, 2009.
[20] V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, and C. Hansen. Double Machine Learning for Treatment and Causal Parameters. arXiv:1608.00060, 2016.
[21] D. Kingma and J. Ba. ADAM: A Method for Stochastic Optimization. arXiv:1412.6980, 2014.
[22] K. E. Porter, S. Gruber, M. J. van der Laan, and J. S. Sekhon. The Relative Performance of Targeted Maximum Likelihood Estimators. The International Journal of Biostatistics, 7(1):1-34, 2011.
[23] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[24] G. C. Cawley and N. L. C. Talbot. Preventing Over-fitting During Model Selection via Bayesian Regularisation of the Hyper-parameters. Journal of Machine Learning Research, 8:841-861, 2007.
[25] J. Hartford, G. Lewis, K. Leyton-Brown, and M. Taddy. Counterfactual Prediction with Deep Instrumental Variables Networks. arXiv:1612.09596, 2016.
[26] M. S. Slaughter et al. Advanced Heart Failure Treated with Continuous-flow Left Ventricular Assist Device. New England Journal of Medicine, 361(23):2241-2251, 2009.
Spherical convolutions and their application in molecular modelling

Wouter Boomsma
Department of Computer Science
University of Copenhagen
[email protected]

Jes Frellsen
Department of Computer Science
IT University of Copenhagen
[email protected]

Abstract

Convolutional neural networks are increasingly used outside the domain of image analysis, in particular in various areas of the natural sciences concerned with spatial data. Such networks often work out-of-the-box, and in some cases entire model architectures from image analysis can be carried over to other problem domains almost unaltered. Unfortunately, this convenience does not trivially extend to data in non-Euclidean spaces, such as spherical data. In this paper, we introduce two strategies for conducting convolutions on the sphere, using either a spherical-polar grid or a grid based on the cubed-sphere representation. We investigate the challenges that arise in this setting, and extend our discussion to include scenarios of spherical volumes, with several strategies for parameterizing the radial dimension. As a proof of concept, we conclude with an assessment of the performance of spherical convolutions in the context of molecular modelling, by considering structural environments within proteins. We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions in this setting. In particular, despite the lack of any domain-specific feature engineering, we demonstrate performance comparable to state-of-the-art methods in the field, which build on decades of domain-specific knowledge.

1 Introduction

Given the transformational role that convolutional neural networks (CNNs) have had in the area of image analysis, it is natural to consider whether such networks can be efficiently applied in other contexts. In particular, spatially embedded data can often be interpreted as images, allowing for direct transfer of neural network architectures to these domains. Recent years have demonstrated interesting examples in a broad selection of the natural sciences, ranging from physics (Aurisano et al., 2016; Mills et al., 2017) to biology (Wang et al., 2016; Min et al., 2017), in many cases showing convolutional neural networks to substantially outperform existing methods.

The standard convolutional neural network can be applied naturally to data embedded in a Euclidean space, where uniformly spaced grids can be trivially defined. For other manifolds, such as the sphere, it is less obvious, and to our knowledge, convolutional neural networks for such manifolds have not been systematically investigated. In particular for the sphere, the topic has direct applications in a range of scientific disciplines, such as the earth sciences, astronomy, and the modelling of molecular structure.

This paper presents two strategies for creating spherical convolutions, as understood in the context of convolutional neural networks (i.e., discrete, and efficiently implementable as tensor operations). The first is a straightforward periodically wrapped convolution on a spherical-polar grid. The second builds on the concept of a cubed-sphere (Ronchi et al., 1996). We proceed with extending these strategies to include the radial component, using concentric grids, which allows us to conduct convolutions in spherical volumes.
Our hypothesis is that these concentric spherical convolutions should outperform standard 3D convolutions in cases where data is naturally parameterized in terms of a radial component. We test this hypothesis in the context of molecular modelling. We will consider structural environments in a molecule as being defined from the viewpoint of a single amino acid or nucleotide: how does such an entity experience its environment in terms of the mass and charge of surrounding atoms? We show that standard convolutional neural network architectures can be used to learn various features of molecular structure, and that our spherical convolutions indeed outperform standard 3D convolutions for this purpose. We conclude by demonstrating state-of-the-art performance in predicting mutation-induced changes in protein stability.

2 Spherical convolutions
Conventional CNNs work on discretized input data on a grid in $\mathbb{R}^n$, such as time series data in $\mathbb{R}$ and image data in $\mathbb{R}^2$. At each convolutional layer $l$ a CNN performs discrete convolutions (or correlations)
$$[f \star k^i](x) = \sum_{x' \in \mathbb{Z}^n} \sum_{c=1}^{C_l} f_c(x')\, k^i_c(x - x') \qquad (1)$$
of the input feature map $f : \mathbb{Z}^n \to \mathbb{R}^{C_l}$ and a set of $C_{l+1}$ filters $k^i : \mathbb{Z}^n \to \mathbb{R}^{C_l}$ (Cohen and Welling, 2016; Goodfellow et al., 2016). While such convolutions are equivariant to translation on the grid, they are not equivariant to scaling (Cohen and Welling, 2016). This means that in order to preserve the translation equivariance in $\mathbb{R}^n$, conventional CNNs rely on the grid being uniformly spaced within each dimension of $\mathbb{R}^n$. Constructing such a grid is straightforward in $\mathbb{R}^n$. However, for convolutions on other manifolds such as the 2D sphere, $S^2 = \{v \in \mathbb{R}^3 \mid v^\top v = 1\}$, no such planar uniform grid is available, due to the non-linearity of the space (Mardia and Jupp, 2009). In this section, we briefly discuss the consequences of using convolutions in the standard non-uniform spherical-polar grid, and present an alternative grid for which the non-uniformity is expected to be less severe.

2.1 Convolution of features on $S^2$
A natural approach to a discretization on the sphere is to represent points $v$ on the sphere by their spherical-polar coordinates $(\theta, \phi)$ and construct a uniformly spaced grid in these coordinates, where the spherical coordinates are defined by $v = (\cos\theta, \sin\theta\cos\phi, \sin\theta\sin\phi)^\top$. Convolutions on such a grid can be implemented efficiently using standard 2D convolutions when taking care to use periodic padding at the $\phi$ boundaries. The problem with a spherical-polar coordinate grid is that it is highly non-equidistant when projected onto the sphere: the distance between grid points becomes increasingly small as we move from the equator to the poles (figure 1, left). Since standard convolution operators are not scale-invariant, this will reduce the ability to share filters between different areas of the sphere.

Figure 1: Two realizations of a grid on the sphere. Left: a grid using equiangular spacing in a standard spherical-polar coordinate system, and Right: an equiangular cubed-sphere representation, as described in Ronchi et al. (1996).

Figure 2: Left: A cubed-sphere grid and a curve on the sphere. Right: The six planes of a cubed-sphere representation and the transformation of the curve to this representation.

As a potential improvement, we will investigate a spherical convolution based on the cubed-sphere transformation (Ronchi et al., 1996).
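Before moving to the cubed sphere, the periodically wrapped convolution of Section 2.1 can be sketched in a few lines. The following is a minimal single-channel NumPy/SciPy illustration, not the paper's TensorFlow implementation: it wraps the azimuthal ($\phi$) axis and, as a simplification, zero-pads the polar ($\theta$) axis, and it assumes odd filter sizes.

```python
import numpy as np
from scipy.signal import convolve2d

def spherical_polar_conv(f, k):
    """Convolve a single-channel feature map on an equiangular (theta, phi)
    grid.  f has shape (n_theta, n_phi); k has odd shape (kh, kw).  The phi
    axis is periodic and is therefore wrap-padded; the theta axis is
    zero-padded here for simplicity (a faithful treatment would wrap across
    the poles with a pi shift in phi)."""
    kh, kw = k.shape
    f = np.pad(f, ((kh // 2, kh // 2), (0, 0)), mode="constant")  # theta
    f = np.pad(f, ((0, 0), (kw // 2, kw // 2)), mode="wrap")      # phi
    return convolve2d(f, k, mode="valid")  # output keeps (n_theta, n_phi)

# A multi-channel layer as in equation (1) simply sums this over channels c.
```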
The transformation is constructed by decomposing the sphere into six patches defined by projecting the circumscribed cube onto the sphere (figure 1, right). In this transformation a point on the sphere $v \in S^2$ is mapped to a patch $b \in \{1, 2, 3, 4, 5, 6\}$ and two coordinates $(\xi, \eta) \in [-\pi/4, \pi/4[^2$ on that patch. The coordinates are given by the angles between the axis pointing to the patch and $v$, measured in the two coordinate planes perpendicular to the patch. For instance, the vectors $\{v \in S^2 \mid v_x > |v_y| \text{ and } v_x > |v_z|\}$ map to patch $b = 1$, and we have $\tan\xi = v_y/v_x$ and $\tan\eta = v_z/v_x$. The remaining mappings are described by Ronchi et al. (1996). If we grid the two angles $(\xi, \eta)$ uniformly in the cubed-sphere transformation and project this grid onto the sphere, we obtain a grid that is more regular (Ronchi et al., 1996), although it has artefacts in the 8 corners of the circumscribed cube (figure 1, right).
The cubed-sphere convolution is then constructed by applying the conventional convolution in equation (1) to a uniformly spaced grid on each of the six cube-face patches. This construction has two main advantages: 1) within each patch, the convolution is almost equivariant to translation in $\xi$ and $\eta$, and 2) features on the cubed-sphere grid can naturally be expressed using tensors, which means that the spherical convolution can be efficiently implemented on a GPU. When implementing convolutions and pooling operations for the cubed-sphere grid, one has to be careful to pad each patch with the contents of the four neighbouring patches, in order to preserve the wrapped topology of the sphere (figure 2, right).
Both of these approaches to spherical convolutions are hampered by a lack of rotational invariance, which restricts the degree to which filters can be shared over the surface of the sphere, leading to suboptimal efficiency in the learning of the parameters. Despite this limitation, for capturing patterns in spherical volumes, we expect that the ability to express patterns naturally in terms of radial and angular dimensions has advantages over standard 3D convolutions. We test this hypothesis in the following sections.

2.2 Convolutions of features on $B^3$
The two representations from figure 1 generalize to the ball $B^3$ by considering concentric shells at uniformly separated radii. In the case of the cubed-sphere, this means that a vector $v \in B^3$ is mapped to the unique coordinates $(r, b, \xi, \eta)$, where $r = \sqrt{v^\top v}$ is the radius and $(b, \xi, \eta)$ are the cubed-sphere coordinates at $r$, and we construct a uniform grid in $r$, $\xi$ and $\eta$. Likewise, in the spherical-polar case, we construct a uniform grid in $r$, $\theta$ and $\phi$. We will refer to these grids as the concentric cubed-sphere grid and the concentric spherical-polar grid, respectively (figure 3). As is the case for their $S^2$ counterparts, features on these grids can be naturally expressed using tensors.
We can apply the conventional 3D convolutions in equation (1) to features on the concentric cubed-sphere and the concentric spherical-polar grids, and denote these as concentric cubed-sphere convolution (CCSconv) and concentric spherical-polar convolution (CSPconv). For fixed $r$, the convolutions will thus have the same properties as in the $S^2$ case. In these concentric variants, the convolutions will not be equivariant to translations in $r$, which again reduces the potential to share filter parameters.
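To make the patch mapping at the start of this section concrete, here is a small NumPy sketch of the forward map from a unit vector to cubed-sphere coordinates. The patch numbering is an arbitrary convention of this sketch (the paper defers the full set of mappings to Ronchi et al. (1996)).

```python
import numpy as np

def cubed_sphere_coords(v):
    """Map a unit vector v on S^2 to (b, xi, eta).

    The patch b is selected by the dominant-magnitude component of v; xi and
    eta are the angles of v in the two coordinate planes perpendicular to the
    patch axis, so both lie in [-pi/4, pi/4).  For v with v_x > |v_y|, |v_z|
    this reproduces tan(xi) = v_y / v_x and tan(eta) = v_z / v_x.
    """
    v = np.asarray(v, dtype=float)
    axis = int(np.argmax(np.abs(v)))          # 0: x, 1: y, 2: z
    sign = 1.0 if v[axis] >= 0 else -1.0
    b = 2 * axis + (1 if sign > 0 else 2)     # patches 1..6 (this sketch's order)
    i, j = [k for k in range(3) if k != axis]
    xi = np.arctan2(v[i], sign * v[axis])
    eta = np.arctan2(v[j], sign * v[axis])
    return b, xi, eta
```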
Figure 3: Three realizations of a grid on the ball. Left: a grid using equiangular spacing in a standard spherical-polar coordinate system (concentric spherical-polar grid). Center: an equiangular cubed-sphere representation, as described in Ronchi et al. (1996) (concentric cubed-sphere grid). Right: a Cartesian grid.

We propose to address this issue in three ways. First, we can simply apply the convolution over the full range of $r$ with a large number of filters $C_{l+1}$ and hope that the network will automatically allocate different filters at different radii. Secondly, we can make the filters $k^i(x - x', x_r)$ depend on $r$, which corresponds to using different (possibly overlapping) filters on each spherical shell (conv-banded-disjoint). Thirdly, we can divide the $r$-grid into segments and apply the same filter within each segment (conv-banded), potentially with overlapping regions (depending on the stride). The three approaches are illustrated in figure 4. In the experiments below, we will be comparing the performance of our concentric spherical convolution methods to that of a simple 3D convolution in a Cartesian grid (figure 3, right).

Figure 4: Three strategies for the radial component of concentric cubed-sphere or concentric spherical convolutions. (a) conv: the same convolution filter is applied to all values of $r$; (b) conv-banded-disjoint (convbd): convolution filters are only applied in the angular directions, using different filters for each block in $r$; (c) conv-banded (convb): convolutions are applied within radial segments. Note that for visual clarity, we use a stride of 3 in this figure, although we use a stride of 1 in practice.

3 Modelling structural environments in molecules
In the last decades, substantial progress has been made in the ability to simulate and analyse molecular structures on a computer. Much of this progress can be ascribed to the molecular force fields used to capture the physical interactions between atoms. The basic functional forms of these models were established in the late 1960s, and through gradual refinements they have become a success story of molecular modelling. Despite these positive developments, the accuracy of molecular force fields is known to still be a limiting factor for many biological and pharmaceutical applications, and further improvements are necessary in this area to increase the robustness of methods for, e.g., protein prediction and design. There are indications that Machine Learning could provide solutions to such challenges. While, traditionally, most of the attention in the Machine Learning community has been dedicated to predicting structural features from amino acid sequences (e.g., secondary structure, disorder, and contact prediction), there are increasingly applications taking three-dimensional molecular structure as input (Behler and Parrinello, 2007; Jasrasaria et al., 2016; Schütt et al., 2017; Smith et al., 2017). In particular in the field of quantum chemistry, a number of studies have demonstrated the ability of deep learning techniques to accurately predict energies of molecular systems.

Figure 5: Example of the environment surrounding an amino acid in a protein, in this case the phenylalanine at position 30 in protein GB1 (PDB ID: 2GB1). Left: an external view of the global environment. Right: an internal view, from the perspective of the amino acid in question.
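Returning briefly to the radial strategies of figure 4, they differ only in how filters are allocated along the $r$ axis. A minimal sketch of that bookkeeping (the function and strategy names are this sketch's own):

```python
def radial_filters(n_r, band, strategy, make_filter):
    """Return one filter handle per radial index, encoding figure 4.

    'conv':                 a single filter shared over all radii.
    'conv_banded_disjoint': an independent filter per radial block (filters
                            act only in the angular directions).
    'conv_banded':          one filter per radial segment of length `band`,
                            shared within the segment.
    `make_filter` is any constructor returning a fresh filter tensor.
    """
    if strategy == "conv":
        shared = make_filter()
        return [shared] * n_r
    if strategy == "conv_banded_disjoint":
        return [make_filter() for _ in range(n_r)]
    if strategy == "conv_banded":
        filters = []
        for _ in range(0, n_r, band):
            f = make_filter()          # shared inside this segment
            filters.extend([f] * band)
        return filters[:n_r]
    raise ValueError("unknown strategy: " + strategy)
```

In a framework such as TensorFlow, `make_filter` would create a weight variable, and each returned handle would be applied to the corresponding radial slice of the input.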
Common to many of these methods is a focus on manually engineered features, where the molecular input structure is encoded based on prior domain-specific knowledge, such as specific functional relationships between atoms and their environments (Behler and Parrinello, 2007; Smith et al., 2017). Recently, a few studies have demonstrated the potential of automatically learning such features, by encoding the molecular structural input in a more domain-agnostic manner, for instance considering only pairwise distance matrices (Schütt et al., 2017), space-filling curves (Jasrasaria et al., 2016), or basic structural features (Wallach et al., 2015). The fact that atomic forces are predominantly distance-based suggests that molecular environments are most naturally represented with a radial-based parameterization, which makes this an obvious test case for the convolutions presented in the previous section. If successful, such convolutions could allow us to make inferences directly from the raw molecular structure of a molecule, avoiding the need for manual feature engineering.
We will consider the environments that each amino acid experiences within its globular protein structure as images in the 3-ball. Figure 5 shows an example of the environment experienced by an arbitrarily chosen amino acid in the GB1 protein (PDB ID: 2GB1). Although distorted by the fish-eye perspective, the local environment (right) displays several key features of the data: we see clear patterns among neighboring atoms, depending on their local structure, and we can imagine the model learning to recognize hydrogen bonds and charge interactions between an amino acid and its surroundings.
Our representation of the molecular environment includes all atoms within a 12 Å radius of the Cα atom of the amino acid in question. Each atom is represented by three fundamental properties: 1) its position relative to the amino acid in question (i.e., the position in the grid), 2) its mass, and 3) its partial charge, as defined by the amber99sb force field (Hornak et al., 2006).
We construct two types of models, which are identical except for their output. The first outputs the propensity for different secondary structure labels at a given position (i.e., helix, extended, coil), while the second outputs the propensity for different amino acid types. Each of these models will be implemented with the Cartesian, the concentric spherical-polar and the concentric cubed-sphere convolutions. Furthermore, for the concentric cubed-sphere convolutions, we compare the three strategies for dealing with the radial component illustrated in figure 4.

Table 1: The architecture of the CNN, where o denotes the output size (3 for secondary structure output and 20 for amino acid output). As an example, we use the convolutional filter sizes from the concentric cubed-sphere (CCS) case. Similar sizes are used for the other representations.

Layer  Operation          Filter / weight size   Layer output size
0      Input              -                      6 × 24 × 38 × 38 × 2
1      CCSconv + ReLU     3 × 5 × 5 × 2 × 16     6 × 22 × 19 × 19 × 16
1      CCSsumpool         1 × 3 × 3              6 × 22 × 10 × 10 × 16
2      CCSconv + ReLU     3 × 3 × 3 × 16 × 32    6 × 20 × 10 × 10 × 32
2      CCSsumpool         3 × 3 × 3              6 × 9 × 5 × 5 × 32
3      CCSconv + ReLU     3 × 3 × 3 × 32 × 64    6 × 7 × 5 × 5 × 64
3      CCSsumpool         1 × 3 × 3              6 × 7 × 3 × 3 × 64
4      CCSconv + ReLU     3 × 3 × 3 × 64 × 128   6 × 5 × 3 × 3 × 128
4      CCSsumpool         1 × 3 × 3              6 × 5 × 3 × 3 × 128
5      Dense + ReLU       34 560 × 2 048         2 048
6      Dense + ReLU       2 048 × 2 048          2 048
7      Dense + Softmax    2 048 × o              o
3.1 Model architecture
The input to the network is a grid (concentric cubed-sphere, concentric spherical-polar or Cartesian). Each voxel has two input channels: the mass of the atom that lies in the given bin and the atom's partial charge (or zeros if no atom is found). The resolution of the grids is chosen so that the maximum distance within a bin is 0.5 Å, which ensures that bins are occupied by at most one atom. The radius of the ball is set to 12 Å, since most physical interactions between atoms occur within this distance (Irbäck and Mohanty, 2006). This gives us an input tensor of shape (b = 6, r = 24, ξ = 38, η = 38, C₁ = 2) for the concentric cubed-sphere case, (r = 24, θ = 76, φ = 151, C₁ = 2) for the concentric spherical-polar case, and (x = 60, y = 60, z = 60, C₁ = 2) for the Cartesian case.
We use a deep model architecture that is loosely inspired by the VGG models (Simonyan and Zisserman, 2015), but employs the convolution operators described above. Our models have four convolutional layers followed by three dense layers, as illustrated in table 1. Each convolutional layer is followed by a rectified linear unit (ReLU) activation function (Hahnloser et al., 2000; Glorot et al., 2011) and a sum pooling operation, which is appropriately wrapped in the case of the concentric cubed-sphere and the concentric spherical-polar grids. We use sum pooling since the input features, mass and partial charge, are both physical quantities that are naturally additive. The total number of parameters in the models (with the amino acid output) is 75 313 253 (concentric cubed-sphere), 69 996 645 (concentric spherical-polar), and 61 159 077 (Cartesian). Furthermore, for the concentric cubed-sphere case, we include a comparison of the two alternative strategies for the radial component: convb and convbd, which have 75 745 333 and 76 844 661 parameters, respectively. Finally, to see the effect of convolutions over a purely dense model, we include a baseline model where the convolutional layers are replaced with dense layers, but which otherwise follows the same architecture, with roughly the same number of parameters (66 670 613).

3.2 Training
We minimized the cross-entropy loss using Adam (Kingma and Ba, 2015), regularized by penalizing the loss with the sum of the L2 norms of all weights, using a multiplicative factor of 0.001. All dense layers also used dropout regularization with a probability of 0.5 of keeping a neuron. The models were trained on NVIDIA Titan X (Pascal) GPUs, using a batch size of 100 and a learning rate of 0.0001.
The models were trained on a data set of high-resolution crystal structures. A large initial (non-homology-reduced) data set was constructed using the PISCES server (Wang and Dunbrack, 2003). For all structures, hydrogen atoms were added using the Reduce program (Word et al., 1999), after which partial charges were assigned using the OpenMM framework (Eastman et al., 2012), using the amber99sb force field (Hornak et al., 2006). During these stages, strict filters were applied to remove structures that 1) were incomplete (missing chains or missing residues compared to the seqres entry), 2) had chain breaks, 3) failed to parse in OpenMM, or 4) led the Reduce program to crash. Finally, the remaining set was resubmitted to the PISCES server, where homology reduction was done at the 30% level. This left us with 2336 proteins, out of which 1742 were used for training, 10 for validation, and the remainder was set aside for testing.
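Before turning to the results, the input construction of Section 3.1 is easy to sketch. The following hypothetical helper bins atoms (assumed already centered on the residue's Cα atom, coordinates in Å) into the concentric spherical-polar grid, with mass and partial charge as the two channels; the angle convention follows Section 2.1.

```python
import numpy as np

def voxelize_environment(positions, masses, charges,
                         r_max=12.0, n_r=24, n_theta=76, n_phi=151):
    """Populate a (n_r, n_theta, n_phi, 2) concentric spherical-polar grid.
    With the paper's convention v = (cos t, sin t cos p, sin t sin p), the
    polar angle is measured from the x axis."""
    grid = np.zeros((n_r, n_theta, n_phi, 2))
    for pos, m, q in zip(positions, masses, charges):
        r = np.linalg.norm(pos)
        if r == 0.0 or r >= r_max:
            continue
        theta = np.arccos(np.clip(pos[0] / r, -1.0, 1.0))   # [0, pi]
        phi = np.arctan2(pos[2], pos[1]) % (2.0 * np.pi)    # [0, 2*pi)
        ir = min(int(r / r_max * n_r), n_r - 1)
        it = min(int(theta / np.pi * n_theta), n_theta - 1)
        ip = min(int(phi / (2.0 * np.pi) * n_phi), n_phi - 1)
        grid[ir, it, ip] = (m, q)   # 0.5 A bins hold at most one atom
    return grid
```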
The homology reduction ensures that any pair of sequences in the data set are at most 30% identical at the amino-acid level, which allows us to safely split the data into non-overlapping sets.

4 Results
We now discuss results obtained with the secondary structure and amino acid models, respectively. Despite the apparent similarity of the two models, the two tasks have substantially different biological implications: secondary structure is related to the 3D structure locally at a given position in a protein, i.e., whether the protein assumes a helical or a more extended shape. In contrast, amino acid propensities describe allowed mutations in a protein, which is related to the fundamental biochemistry of the molecule, and is relevant for understanding genetic disease and for the design of new proteins.

4.1 Learning the DSSP secondary structure function
Predicting the secondary structure of a protein conditioned on knowledge of the three-dimensional structure is not considered a hard problem. We include it here because we are interested in the ability of the neural network to learn the function that is typically used to annotate three-dimensional structures with secondary structure, in our case DSSP (Kabsch and Sander, 1983). Interestingly, the different concentric convolutional models are seen to perform about equally well on this problem (table 2, Q3), marginally outperforming the Cartesian convolution and substantially outperforming the dense baseline model.
To get a sense of the absolute performance, we would ideally compare to existing methods on the same problem. However, rediscovering the DSSP function is not a common task in bioinformatics, and not many tools are available that would constitute a meaningful comparison, in particular because secondary structure annotation algorithms use different definitions of secondary structure. We here use the TORUSDBN model (Boomsma et al., 2008, 2014) to provide such a baseline. The model is sequential in the sequence of a protein, and thus captures local structural information only. While the model was originally designed to sample backbone dihedral angles conditioned on an amino acid sequence or secondary structure sequence, it is generative, and can thus be used in reverse to provide the most probable secondary structure or amino acid sequence for a given structure using Viterbi decoding. Most importantly, it is trained on DSSP, making it useful as a comparison for this study. Included as the last row in table 2, TORUSDBN demonstrates slightly lower performance compared to our convolutional approaches, illustrating that most of the secondary structure signal is encoded in the local angular preferences. It is encouraging to see that the convolutional networks capture all these local signals, but obtain additional performance through more non-local interactions.

4.1.1 Learning amino acid propensities
Compared to secondary structure, predicting the amino acid propensity is substantially harder, partly because of the larger sample space, but also because we expect such preferences to be defined by more global interaction patterns. Interestingly, the two concentric convolutions perform about equally well, suggesting that the added regularity of the cubed-sphere representation does not provide a substantial benefit for this case (table 2, Q20). However, both methods substantially outperform the standard 3D convolution, which again outperforms the dense baseline model.
We also note that there is now a significant difference between the three radial strategies, with conv-banded-disjoint (bd) and conv-banded (b) both performing worse than the simpler case of using a single convolution over the entire r-range. Again, we include TORUSDBN as an external reference. The substantially lower performance of this model confirms that the amino acid label prediction task depends predominantly on non-local features not captured by this model. Finally, we include another baseline: the most frequent amino acid observed at this position among homologous (evolutionarily related) proteins. It is remarkable that the concentric models (which are trained on a homology-reduced protein set) are capable of learning the structural preferences of amino acids to the same extent as the information that is encoded as genetic variation in the sequence databases. This strongly suggests the ability of our models to learn general relationships between structure and sequence.

Table 2: Performance of various models in the prediction of (a) DSSP-style secondary structure and (b) amino acid propensity, both conditioned on the structure. The Q3 score is defined as the percentage of correct predictions for the three possible labels: helix, extended and coil. The Q20 score is defined as the percentage of correct predictions for the 20 possible amino acid labels.

Model       Q3 (secondary structure)   Q20 (amino acid)
CCSconv     0.933                      0.564
CCSconvbd   0.931                      0.515
CCSconvb    0.932                      0.548
CSPconv     0.932                      0.560
Cartesian   0.922                      0.500
CCSdense    0.888                      0.348
PSSM        -                          0.547
TORUSDBN    0.894                      0.183

4.1.2 Predicting change-of-stability
The models in the previous section not only predict the most likely amino acid, but also the entire distribution. A natural question is whether the ratio of probabilities of two amino acids according to this distribution is related to the change of stability induced by the corresponding mutation. We briefly explore this question here.
The stability of a protein is the difference in free energy ΔG between the folded and unfolded conformations of the protein. The change in stability that occurs as a consequence of a mutation is thus frequently referred to as ΔΔG. These values can be measured experimentally, and several data sets with these values are publicly available.
As a simple approximation, we can interpret the sum of negative log-probabilities of each amino acid along the sequence as a free energy of the folded state, $G_f$. To account for the free energy of the unfolded state, $G_u$, we could consider the negative log-probability that the amino acid in question occurs in the given amino acid sequence (without conditioning on the environment). Again assuming independence between sites in the chain, this could be modelled by simply calculating the log-frequencies of the different amino acids across the data set, and summing over all sites of the specific protein to get the total free energy. Subtracting these two pairs of values for the wild type (W) and mutant (M) would give us a rough estimate of the ΔΔG, which due to our assumption of independence between sites simplifies to just the difference in values at the given site:
$$\widehat{\Delta\Delta G}(\bar W, \bar M) = (G_f(M_n) - G_u(M_n)) - (G_f(W_n) - G_u(W_n)), \qquad (2)$$
where $\bar W$ and $\bar M$ denote the full wild type and mutant sequences, respectively, and $W_n$ and $M_n$ denote the amino acids of the wild type and mutant at the site $n$ at which they differ.
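To make the role of equation (2) explicit, the following hypothetical helper assembles both the naive estimate and the four values $(G_f(M_n), G_u(M_n), G_f(W_n), G_u(W_n))$ that are fed to the small regression model described next; `log_p_env` and `log_freq` are assumed to hold the model's conditional log-probabilities at the mutated site and the data-set background log-frequencies.

```python
import numpy as np

def ddg_inputs(log_p_env, log_freq, wt, mut):
    """Free-energy proxies at the mutated site, under the independence
    assumption of equation (2).  wt/mut index the wild-type and mutant
    amino acids (e.g., 0..19)."""
    gf_w, gu_w = -log_p_env[wt], -log_freq[wt]    # folded/unfolded, wild type
    gf_m, gu_m = -log_p_env[mut], -log_freq[mut]  # folded/unfolded, mutant
    ddg_hat = (gf_m - gu_m) - (gf_w - gu_w)       # the estimate of eq. (2)
    return np.array([gf_m, gu_m, gf_w, gu_w]), ddg_hat
```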
Given the extensive set of simplifying assumptions in the argument above, we do not use the expression in equation (2) directly, but rather use the four log-probabilities $(G_f(M_n), G_u(M_n), G_f(W_n), G_u(W_n))$ as input to a simple regression model (a single-hidden-layer neural network with 10 hidden nodes and a ReLU activation function), trained on experimentally observed ΔΔG data. We calculate the performance on several standard experimental data sets on mutation-induced change-of-stability, in each case using 5-fold cross validation, and report the correlation between the experimentally measured and our calculated ΔΔG. As a baseline, we compare our performance to two of the best known programs for calculating ΔΔG: Rosetta and FoldX. The former values were taken from a recent publication (Conchúir et al., 2015), while the latter were calculated using the FoldX program (version 4). The comparison shows that even a very simple approach based on our convolutional models produces results that are comparable to the state-of-the-art in the field (table 3). This is despite the fact that we use a rather crude approximation of the free energy, and that our approach disregards the fact that a mutation at a given site modifies the environment grids of all amino acids within the 12 Å range. Although these initial results should therefore not be considered conclusive, they suggest that models like the ones we propose could play a future role in ΔΔG predictions.

Table 3: Pearson correlation coefficients between experimentally measured and predicted changes of stability for several sets of published stability measurements.

Method      Kellogg   Guerois   Potapov   ProTherm*
Rosetta     0.65      0.65      0.52      0.44
FoldX       0.70      0.73      0.59      0.53
CCSconv     0.66      0.66      0.52      0.49
CSPconv     0.64      0.64      0.51      0.48
Cartesian   0.66      0.66      0.52      0.49

Apart from the overall levels of performance, the most remarkable feature of table 3 is that it shows equal performance for the Cartesian and concentric cubed-sphere convolutions, despite the fact that the former displayed substantially lower Q20 scores. This peculiar result points to an interesting caveat in the interpretation of the predicted distribution over amino acids for a given environment. At sufficiently high resolution of the structural environment, a perfect model would be able to reliably predict the identity of the wild type amino acid by the specific shape of the hole it left behind. This means that as models improve, the entropy of the predicted amino acid distributions is expected to decrease, with increasingly peaked distributions centered at the wild type. An increased sensitivity towards the exact molecular environment will therefore eventually decrease the model's ability to consider other amino acids at that position, leading to lower ΔΔG performance. The missing ingredient in our approach is the structural rearrangement of the environments that occurs as a consequence of the mutation. A full treatment of the problem should average the predictions over the available structural variation, and structural resampling is indeed part of both Rosetta and FoldX. For these reasons, it is difficult to make clear interpretations of the relative differences in performance of the three convolution procedures in table 3. The overall performance of all three, however, indicates that convolutions might be useful as part of a more comprehensive modelling strategy such as those used in Rosetta and FoldX.

5 Conclusions
Convolutional neural networks are a powerful tool for analyzing spatial data.
In this paper, we investigated the possibility of extending the applicability of the technique to data in the 3-ball, presenting two strategies for conducting convolutions in these spherical volumes. We assessed the performance of the two strategies (and variants thereof) on various tasks in molecular modelling, and demonstrated a substantial potential for such concentric convolutional approaches to outperform standard 3D convolutions on this kind of data. We expect that further improvements to the concentric convolution approach can be obtained by improving the spherical convolutions themselves. In particular, a convolution operation that was rotationally invariant would provide greater data efficiency than the approach used here. Obtaining such convolutions will be the subject of future work.
Finally, we note that while this manuscript was in review, another paper on the application of convolutional neural networks for predicting amino acid preferences conditioned on structural environments was published, by Torng and Altman (Torng and Altman, 2017). Their study is conceptually similar to one of the applications described in this paper, but uses a Cartesian grid and a standard 3D convolution (in addition to other minor differences, such as a one-hot atom type encoding). While Torng and Altman present a more thorough biological analysis in their paper than we do here, the accuracy they report is considerably lower than what we obtained. Based on the comparisons reported here, we anticipate that models such as theirs could be improved by switching to a concentric representation.

6 Availability
The spherical convolution Tensorflow code and the datasets used in this paper are available at https://github.com/deepfold.

Acknowledgments
This work was supported by the Villum Foundation (W.B., grant number VKR023445).

References
A. Aurisano, A. Radovic, D. Rocco, A. Himmel, M. Messier, E. Niner, G. Pawloski, F. Psihas, A. Sousa, and P. Vahle. A convolutional neural network neutrino event classifier. Journal of Instrumentation, 11(9):P09001, 2016.
J. Behler and M. Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Physical Review Letters, 98(14):146401, 2007.
W. Boomsma, K. V. Mardia, C. C. Taylor, J. Ferkinghoff-Borg, A. Krogh, and T. Hamelryck. A generative, probabilistic model of local protein structure. Proceedings of the National Academy of Sciences, 105(26):8932-8937, 2008.
W. Boomsma, P. Tian, J. Frellsen, J. Ferkinghoff-Borg, T. Hamelryck, K. Lindorff-Larsen, and M. Vendruscolo. Equilibrium simulations of proteins using molecular fragment replacement and NMR chemical shifts. Proceedings of the National Academy of Sciences, 111(38):13852-13857, 2014.
T. Cohen and M. Welling. Group equivariant convolutional networks. In M. F. Balcan and K. Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 2990-2999, New York, USA, 2016.
S. Ó. Conchúir, K. A. Barlow, R. A. Pache, N. Ollikainen, K. Kundert, M. J. O'Meara, C. A. Smith, and T. Kortemme. A web resource for standardized benchmark datasets, metrics, and Rosetta protocols for macromolecular modeling and design. PLoS ONE, 10(9):e0130433, 2015.
P. Eastman, M. S. Friedrichs, J. D. Chodera, R. J. Radmer, C. M. Bruns, J. P. Ku, K. A. Beauchamp, T. J. Lane, L.-P. Wang, D. Shukla, et al. OpenMM 4: a reusable, extensible, hardware independent library for high performance molecular simulation. Journal of Chemical Theory and Computation, 9(1):461-469, 2012.
X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In G. Gordon, D. Dunson, and M. Dudík, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pages 315-323, Fort Lauderdale, FL, USA, 2011.
I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
R. H. R. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405:947-951, 2000.
V. Hornak, R. Abel, A. Okur, B. Strockbine, A. Roitberg, and C. Simmerling. Comparison of multiple Amber force fields and development of improved protein backbone parameters. Proteins: Structure, Function, and Bioinformatics, 65(3):712-725, 2006.
A. Irbäck and S. Mohanty. PROFASI: a Monte Carlo simulation package for protein folding and aggregation. Journal of Computational Chemistry, 27(13):1548-1555, 2006.
D. Jasrasaria, E. O. Pyzer-Knapp, D. Rappoport, and A. Aspuru-Guzik. Space-filling curves as a novel crystal structure representation for machine learning models. arXiv:1608.05747, 2016.
W. Kabsch and C. Sander. Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features. Biopolymers, 22(12):2577-2637, 1983.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, San Diego, USA, 2015.
K. V. Mardia and P. E. Jupp. Directional Statistics. Wiley, 2009.
K. Mills, M. Spanner, and I. Tamblyn. Deep learning and the Schrödinger equation. Physical Review A, 96:042113, 2017.
S. Min, B. Lee, and S. Yoon. Deep learning in bioinformatics. Briefings in Bioinformatics, 18(5):851-869, 2017.
C. Ronchi, R. Iacono, and P. Paolucci. The "cubed sphere": A new method for the solution of partial differential equations in spherical geometry. Journal of Computational Physics, 124(1):93-114, 1996.
K. T. Schütt, F. Arbabzadah, S. Chmiela, K. R. Müller, and A. Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nature Communications, 8:13890, 2017.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, San Diego, USA, 2015.
J. Smith, O. Isayev, and A. Roitberg. ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost. Chemical Science, 8(4):3192-3203, 2017.
W. Torng and R. B. Altman. 3D deep convolutional neural networks for amino acid environment similarity analysis. BMC Bioinformatics, 18(1):302, 2017.
I. Wallach, M. Dzamba, and A. Heifets. AtomNet: a deep convolutional neural network for bioactivity prediction in structure-based drug discovery. arXiv:1510.02855, 2015.
G. Wang and R. L. Dunbrack. PISCES: a protein sequence culling server. Bioinformatics, 19(12):1589-1591, 2003.
S. Wang, J. Peng, J. Ma, and J. Xu. Protein secondary structure prediction using deep convolutional neural fields. Scientific Reports, 6, 2016.
J. M. Word, S. C. Lovell, J. S. Richardson, and D. C. Richardson. Asparagine and glutamine: using hydrogen atom contacts in the choice of side-chain amide orientation. Journal of Molecular Biology, 285(4):1735-1747, 1999.
Efficient Optimization for Linear Dynamical Systems with Applications to Clustering and Sparse Coding

Wenbing Huang (1,3), Mehrtash Harandi (2), Tong Zhang (2), Lijie Fan (3), Fuchun Sun (3), Junzhou Huang (1)
1 Tencent AI Lab; 2 Data61, CSIRO and Australian National University, Australia; 3 Department of Computer Science and Technology, Tsinghua University, Tsinghua National Lab. for Information Science and Technology (TNList)
1 {helendhuang, joehhuang}@tencent.com; 2 {mehrtash.harandi@anu.edu.au, tongzhang@tongzhang-ml.org}; 3 {flj14@mails, fcsun@mail}.tsinghua.edu.cn

Abstract
Linear Dynamical Systems (LDSs) are fundamental tools for modeling spatio-temporal data in various disciplines. Though rich in modeling, analyzing LDSs is not free of difficulty, mainly because LDSs do not comply with Euclidean geometry and hence conventional learning techniques cannot be applied directly. In this paper, we propose an efficient projected gradient descent method to minimize a general form of a loss function, and demonstrate how clustering and sparse coding with LDSs can be solved efficiently by the proposed method. To this end, we first derive a novel canonical form for representing the parameters of an LDS, and then show how gradient-descent updates through the projection on the space of LDSs can be achieved dexterously. In contrast to previous studies, our solution avoids any approximation in LDS modeling or during the optimization process. Extensive experiments reveal the superior performance of the proposed method in terms of convergence and classification accuracy over state-of-the-art techniques.

1 Introduction
Learning from spatio-temporal data is an active research area in computer vision, signal processing and robotics. Examples include dynamic texture classification [1], video action recognition [2, 3, 4] and robotic tactile sensing [5]. One popular class of models for analyzing spatio-temporal data is Linear Dynamical Systems (LDSs) [1]. Specifically, LDSs apply parametric equations to model the spatio-temporal data. The optimal system parameters learned from the input are employed as the descriptor of each spatio-temporal sequence. The benefits of applying LDSs are two-fold: 1. LDSs are generative models and their parameters are learned in an unsupervised manner, which makes LDSs suitable choices not only for classification but also for interpolation/extrapolation/generation of spatio-temporal sequences [1, 6, 7]; 2. Unlike vectorial ARMA models [8], LDSs are less prone to the curse of dimensionality as a result of their lower-dimensional state space [9].
Clustering [10] and coding [5] LDSs are two fundamental problems that motivate this work. The clustering task is to group LDS models based on some given similarity metric. The problem of coding, especially sparse coding, is to identify a dictionary of LDSs along with their associated sparse codes that best reconstruct a collection of LDSs. Given a set of LDSs, the key problems of clustering and sparse coding are computing the mean and finding the LDS atoms, respectively, neither of which is an easy task. Due to an infinite number of equivalent transformations of the system parameters [1], the space of LDSs is non-Euclidean. This in turn makes the direct use of traditional techniques (e.g., conventional sparse solvers) inapplicable.
To get around the difficulties induced by the non-Euclidean geometry, previous studies (e.g., [11, 12, 13, 5]) resort to various approximations, either in modeling or during optimization. For instance, the authors in [11] approximated the clustering mean by finding the closest sample under a certain embedding. As we will see in our experiments, involving approximations in the solutions exhibits inevitable limitations in algorithmic performance.
This paper develops a gradient-based method to solve the clustering and sparse coding tasks efficiently without any approximation involved. To this end, we reformulate the optimization problems for these two different tasks and then unify them into one common problem by making use of the kernel trick. However, several challenges must be addressed to solve this common problem efficiently. The first challenge comes from the aforementioned invariance property of the LDS parameters. To attack this challenge, we introduce a novel canonical form of the system parameters that is insensitive to the equivalent changes. The second challenge comes from the fact that the optimization problem of interest requires solving Discrete Lyapunov Equations (DLEs). At first glance, such a dependency makes backpropagating the gradients through DLEs complicated. Interestingly, we prove that the gradients can be derived exactly by solving another DLE in the end, which makes our optimization much simpler and more efficient. Finally, as suggested by [14], the LDS parameters, i.e., the transition and measurement matrices, are required to be stable and orthogonal, respectively. Under our canonical representation, the stability constraint reduces to a bound constraint. We then make use of the Cayley transformation [15] to maintain orthogonality and perform a bound normalization to accomplish stability.
Clustering and sparse coding can be combined with high-level pooling frameworks (e.g., bag-of-systems [11] and spatial-temporal-pyramid-matching [16]) for classifying dynamic textures. Our experiments on such data demonstrate that the proposed methods outperform state-of-the-art techniques in terms of convergence and classification accuracy.

2 Related Work
LDS modeling. In the literature, various non-Euclidean metrics have been proposed to measure the distances between LDSs, such as the Kullback-Leibler divergence [17], Chernoff distance [18], Binet-Cauchy kernel [19] and group distance [14]. This paper follows the works in [20, 21, 11, 12] to represent an LDS by making use of the extended observability subspace; comparing LDSs is then achieved by measuring the subspace angles [22].
Clustering LDSs. In its simplest form, clustering LDSs can be achieved by alternating between two sub-processes: 1) assigning LDSs to the closest clusters using a similarity measure; 2) computing the mean of the LDSs within the same cluster. However, as the space of LDSs is non-Euclidean, computing means on this space is not straightforward. In [12], the authors embedded LDSs into a finite Grassmann manifold by representing each LDS with its finite observability subspace and then cluster LDSs on that manifold. In contrast, our method applies the extended observability subspace to represent LDSs. In this way, not only is the full temporal evolution of the input sequence taken into account, but also, as will be shown shortly, the computational cost is reduced.
The solution proposed by [11] also represents LDSs with extended observability subspaces, but it approximates the mean by finding a sample that is closest to the mean using the concept of Multidimensional Scaling (MDS). Instead, our method finds the system tuple of the exact mean for the given group of LDSs without relying on any approximation. Afsari et al. [14] cluster LDSs by first aligning the parameters of LDSs in their equivalence space. However, the method of Afsari et al. is agnostic to the joint behavior of transition and measurement matrices and treats them independently. Other related studies include probabilistic frameworks for clustering LDSs [23, 24].
Sparse Coding with LDSs. Combining sparse coding with LDS modeling can further improve classification performance [13]. However, similar to the clustering task, the non-Euclidean structure makes it hard to formulate the reconstruction objective and update the dictionary atoms on the space of LDSs. To address this issue, [13] embedded LDSs into the space of symmetric matrices by representing each LDS with its finite observability subspace. With this embedding, dictionary learning can be performed in the Euclidean space. In [5], the authors employ the extended observability subspaces as the LDS descriptors; however, to update the dictionary, the authors enforce symmetric constraints on the transition matrices. Different from previous studies, our model works on the original LDS model and does not enforce any additional constraint on the transition matrices.
To sum up, in contrast to previous studies [12, 11, 14, 13, 5], this paper solves the clustering and sparse coding problems in a novel way in the following respects. First, we unify the objective functions for both clustering and sparse coding; second, we avoid any additional constraints (e.g., symmetric transitions in [5] and finite observability in [12, 13]) in the solution; finally, we propose a canonical formulation of the LDS tuple to facilitate the optimization.

3 LDS Modeling
LDSs describe time series through the following model [1]:
$$y(t) = \bar y + C x(t) + w(t), \qquad x(t+1) = A x(t) + B v(t), \qquad (1)$$
with $Y = [y(1), \cdots, y(\tau)] \in \mathbb{R}^{m \times \tau}$ and $X = [x(1), \cdots, x(\tau)] \in \mathbb{R}^{n \times \tau}$ representing the observed variables and the hidden states of the system, respectively. Furthermore, $\bar y \in \mathbb{R}^m$ is the mean of $Y$; $A \in \mathbb{R}^{n \times n}$ is the transition matrix of the model; $B \in \mathbb{R}^{n \times n_v}$ ($n_v \le n$) is the noise transformation matrix; $C \in \mathbb{R}^{m \times n}$ is the measurement matrix; $v(t) \sim \mathcal{N}(0, I_{n_v})$ and $w(t) \sim \mathcal{N}(0, \Sigma)$ denote the process and measurement noise components, respectively. We also assume that $n \ll m$ and that $C$ has full rank. Overall, generating the observed variables is governed by the parameters $\theta = \{x(1), \bar y, A, B, C, \Sigma\}$.
System Identification. The system parameters $A$ and $C$ of Eq. (1) describe the dynamics and spatial patterns of the input sequence, respectively [11]. Therefore, the tuple $(A, C)$ is a desirable descriptor for spatio-temporal data. Finding the optimal tuple $(A, C)$ is known as system identification. A popular and efficient method for system identification is proposed in [1]. This method requires the columns of $C$ to be orthogonal, i.e., $C$ is a point on the Stiefel manifold defined as $\mathrm{ST}(m, n) = \{C \in \mathbb{R}^{m \times n} \mid C^\top C = I_n\}$. The transition matrix $A$ obtained by the method of [1] is not naturally stable. An LDS is stable if its spectral radius, i.e., the largest eigenvalue magnitude of its transition matrix, denoted $\rho(A)$, is less than one.
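A minimal NumPy sketch of this SVD-based identification (in the style of [1]; variable names are this sketch's own) makes the roles of $A$ and $C$ concrete. Note that the resulting $A$ is not guaranteed to be stable, which is why the soft-normalization of [5] discussed next is applied.

```python
import numpy as np

def identify_lds(Y, n):
    """Suboptimal system identification from a sequence Y (m x tau, one
    observation per column) with hidden dimension n.  Returns (A, C, x1,
    ybar) with C having orthonormal columns."""
    ybar = Y.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Y - ybar, full_matrices=False)
    C = U[:, :n]                               # spatial patterns, C^T C = I_n
    X = np.diag(s[:n]) @ Vt[:n]                # estimated hidden states
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])   # least-squares dynamics
    return A, C, X[:, 0], ybar
```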
To obtain a stable transition matrix, [5] propose a soft-normalization technique, which is our choice in this paper. Therefore, we are interested in LDS tuples satisfying the constraints
$$\mathcal{C} = \{(A, C) : C^\top C = I_n, \; \rho(A) < 1\}. \qquad (2)$$
Equivalent Representation. Studying Eq. (1) shows that the output of the system remains unchanged under linear transformations of the state basis [1]. More specifically, an LDS has an equivalent class of representations, i.e.,
$$(A, C) \sim (P^\top A P, \, C P) \qquad (3)$$
for any $P \in O(n)$.¹ For simplicity, the equivalence in Eq. (3) is called P-equivalence. Obviously, comparing LDSs through the Euclidean distance between the associated tuples is inaccurate as a result of P-equivalence. To circumvent this difficulty, a family of approaches applies the extended observability subspace to represent an LDS [20, 21, 11, 5]. Below, we briefly review this topic.
Extended Observability Subspace. The expected output sequence of Eq. (1) [12] is calculated as
$$[\mathbb{E}[y(1)]; \mathbb{E}[y(2)]; \mathbb{E}[y(3)]; \cdots] = [C; CA; CA^2; \cdots]\, x(1) = O_\infty(A, C)\, x(1), \qquad (4)$$
where $O_\infty(A, C) \in \mathbb{R}^{\infty \times n}$ is called the extended observability matrix of the LDS associated with $(A, C)$. Let $S(A, C)$ denote the extended observability subspace spanned by the columns of $O_\infty(A, C)$. Obviously, the extended observability subspace is invariant to P-equivalence, i.e., $S(A, C) = S(P^\top A P, CP)$. In addition, the extended observability subspace captures the full temporal evolution of the input sequence, as observed from Eq. (4).

4 Our Approach
In this section, we first unify the optimizations for clustering and sparse coding with LDSs by making use of kernel functions. Next, we present our method to address this optimization problem.

¹ In general, $(A, C) \sim (P^{-1} A P, C P)$ for $P \in GL(n)$, with $GL(n)$ denoting the non-singular $n \times n$ matrices. Since we are interested in orthogonal measurement matrices (i.e., $C \in \mathrm{ST}(m, n)$), the equivalent class takes the form described in Eq. (3).

4.1 Problem Formulation
We recall that each LDS is represented by its extended observability subspace. Clustering or sparse coding in the space of extended observability subspaces is not straightforward because the underlying geometry is non-Euclidean. Our idea here is to implicitly map the subspaces to a Reproducing Kernel Hilbert Space (RKHS). For better readability, we abbreviate the subspace $S(A_i, C_i)$ as $S_i$ in the rest of this section if no ambiguity is caused. We denote the implicit mapping defined by a positive definite kernel $k(S_1, S_2) = \phi(S_1)^\top \phi(S_2)$ as $\phi : S \mapsto \mathcal{H}$. Various kernels [25, 19, 5] based on extended observability subspaces have been proposed to measure the similarity between LDSs. Though the proposed method is general in nature, in the rest of the paper we employ the projection kernel [5] due to its simplicity. The projection kernel is defined as
$$k_p(S_1, S_2) = \mathrm{Tr}(G_{11}^{-1} G_{12} G_{22}^{-1} G_{21}), \qquad (5)$$
where $\mathrm{Tr}(\cdot)$ computes the trace and the product matrices $G_{ij} = O_\infty^\top(A_i, C_i)\, O_\infty(A_j, C_j) = \sum_{t=0}^{\infty} (A_i^\top)^t C_i^\top C_j A_j^t$, for $i, j \in \{1, 2\}$, are obtained by solving the following DLE:
$$A_i^\top G_{ij} A_j - G_{ij} = -C_i^\top C_j. \qquad (6)$$
The solution of the DLE exists and is unique when both $A_i$ and $A_j$ are stable [22]. The DLE can be solved by a numerical algorithm with computational complexity $O(n^3)$ [26], where $n$ is the hidden dimension and is usually very small (see Eq. (1)).
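Since the hidden dimension $n$ is small, the DLE (6) can also be solved directly by vectorization, which gives a compact way to evaluate the projection kernel (5). A sketch (assuming both transition matrices are stable, so the solution exists and is unique):

```python
import numpy as np

def gram_dle(A1, C1, A2, C2):
    """Solve A1^T G A2 - G = -C1^T C2 for the product matrix G.
    Uses vec(A1^T G A2) = (A2^T kron A1^T) vec(G) with column-major vec;
    O(n^6) here, which is fine for the small n used in LDS models."""
    n1, n2 = A1.shape[0], A2.shape[0]
    M = np.eye(n1 * n2) - np.kron(A2.T, A1.T)
    g = np.linalg.solve(M, (C1.T @ C2).flatten(order="F"))
    return g.reshape((n1, n2), order="F")

def projection_kernel(A1, C1, A2, C2):
    """Projection kernel k_p of Eq. (5) between two stable LDSs."""
    G11 = gram_dle(A1, C1, A1, C1)
    G22 = gram_dle(A2, C2, A2, C2)
    G12 = gram_dle(A1, C1, A2, C2)
    return np.trace(np.linalg.solve(G11, G12) @ np.linalg.solve(G22, G12.T))
```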
Clustering. As discussed before, the key to clustering is to compute the mean of the given set of LDSs. While several works [12, 11, 14] have been developed for computing the mean, none of their solutions is derived in kernel form. The mean defined by the implicit mapping is

    min_{A_m, C_m} (1/N) ∑_{i=1}^{N} ‖φ(S_m) − φ(S_i)‖²   s.t. (A_m, C_m) ∈ C,    (7)

where S_m is the mean subspace and the S_i are the data subspaces. Removing the terms that are independent of S_m (e.g., φ(S_m)^T φ(S_m) = 1) leads to

    min_{A_m, C_m} −(2/N) ∑_{i=1}^{N} k(S_m, S_i)   s.t. (A_m, C_m) ∈ C.          (8)

Sparse Coding. The problem of sparse coding in the RKHS is written as [13]

    min_{{A'_j, C'_j}_{j=1}^{J}} (1/N) ∑_{i=1}^{N} ‖φ(S_i) − ∑_{j=1}^{J} z_{i,j} φ(S'_j)‖² + λ‖z_i‖_1,
    s.t. (A'_j, C'_j) ∈ C, j = 1, ..., J,                                         (9)

where {S_i}_{i=1}^{N} are the data subspaces, {S'_j}_{j=1}^{J} are the dictionary subspaces, z_{i,j} is the sparse code of datum S_i over atom S'_j, z_i = [z_{i,1}; ...; z_{i,J}] ∈ R^J, and λ is the sparsity factor. Eq. (9) has the same form as those in [13, 5]; however, here we apply the extended observability subspaces and place no additional constraint on the transition matrices. To perform sparse coding, we alternate between two phases: 1) computing the sparse codes given the LDS dictionary, which is similar to the conventional sparse coding task [13]; 2) optimizing each dictionary atom with the codes fixed. Specifically, updating the r-th atom with the other atoms fixed gives the kernel formulation of the objective as

    ℓ_r = (1/N) ∑_{i=1}^{N} [ −z_{i,r} k(S'_r, S_i) + ∑_{j=1, j≠r}^{J} z_{i,r} z_{i,j} k(S'_r, S'_j) ].   (10)

Common Problem. Clearly, Eq. (8) and Eq. (10) share the common form

    min_{A,C} (1/N) ∑_{i=1}^{N} α_i k(S(A, C), S(A_i, C_i))   s.t. (A, C) ∈ C.    (11)

Here, (A, C) is the LDS tuple to be identified; {(A_i, C_i)}_{i=1}^{N} are given LDSs; and {α_i}_{i=1}^{N} are task-dependent coefficients (specified by Eq. (8) and Eq. (10)). To minimize (11), we resort to the Projected Gradient Descent (PGD) method. Note that the solution space of (11) is redundant due to the invariance induced by P-equivalence (Eq. (3)). We thus devise a canonical representation of the system tuple (see Theorem 1). The canonical form not only confines the search space but also simplifies the stability constraint to a bound constraint. We then compute the gradients with respect to the system tuple by backpropagating the gradients through DLEs (see Theorem 4). Finally, we project the gradients to the feasible regions of the system tuples via the Cayley transformation (Eqs. (16)-(17)) and bound normalization (Eq. (18)). We now present the details.

4.2 Canonical Representation

Theorem 1. For any given LDS, the system tuple (A, C) ∈ R^{n×n} × R^{m×n} and all its equivalent representations have the canonical form (ΣV, U), where U ∈ ST(m, n), V ∈ O(n), and Σ ∈ R^{n×n} is diagonal with the diagonal elements arranged in descending order, i.e., σ_1 ≥ σ_2 ≥ ... ≥ σ_n.²

² All the proofs of the theorems in this paper are provided in the supplementary material.

Remark 2. The proof of Theorem 1 (presented in the supplementary material) relies on the SVD decomposition, which is not necessarily unique [27]; thus the canonical form of a system tuple is not unique. Even so, the free dimensionality of the canonical space (i.e., mn) is less than that of the original tuples (i.e., mn + n(n−1)/2) within the feasible region C. This is due to the invariance induced by P-equivalence (Eq. (3)) when one optimizes (11) in the original form of the system tuple.

Remark 3. It is easy to see that stability (i.e., ρ(A) < 1) translates into the constraint |σ_i| < 1 in the canonical representation, with σ_i being the i-th diagonal element of Σ.
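The paper's proof of Theorem 1 is in its supplementary material, which we do not reproduce here; the following sketch shows one concrete canonicalization that is consistent with the theorem's statement, obtained from the SVD of A with the equivalence transform P chosen as the left singular vectors. This particular construction is our assumption, not necessarily the paper's.

```python
import numpy as np

def canonical_form(A, C):
    """One canonicalization consistent with Theorem 1 (a sketch).

    With A = W diag(s) Zt and P = W (orthogonal), Eq. (3) gives
    P^T A P = diag(s) (Zt W) = Sigma V with V orthogonal, and
    U = C P keeps orthonormal columns, i.e., U stays on ST(m, n).
    """
    W, s, Zt = np.linalg.svd(A)   # numpy returns s in descending order
    Sigma = np.diag(s)
    V = Zt @ W                    # orthogonal: product of orthogonal factors
    U = C @ W
    return Sigma, V, U
```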
As such, problem (11) can be cast as

    min_{Σ, V, U} (1/N) ∑_{i=1}^{N} α_i k(S(ΣV, U), S(A_i, C_i)),                 (12)
    s.t. V^T V = I_n; U^T U = I_n; |σ_i| < 1, i = 1, ..., n.

A feasible solution of (11) can be obtained by minimizing (12), and the stability constraint of (11) is reduced to a bound constraint in (12). The canonical form derived from Theorem 1 is central to our methods: it simplifies the stability constraint to a bound constraint, making the solution simpler and more efficient. We note that even for a single LDS, optimizing the original form of A under the stability constraint is tedious (e.g., [7]), and the tasks addressed in our paper are more complicated, since far more than one LDS must be optimized. Furthermore, the canonical form enables us to reduce the redundancy of the LDS tuple (see Remark 3). Specifically, with the canonical form one needs to update only n singular values rather than the entire A matrix. Optimization with the canonical representation also avoids numerical instabilities related to equivalent classes, which facilitates the optimization.

4.3 Passing Gradients Through DLEs

According to the definition of the projection kernel, obtaining k(S(A, C), S(A_i, C_i)) for (11) (note the canonical form A = ΣV and C = U) requires computing the product matrices G_i = ∑_{t=0}^{∞} (A^t)^T C^T C_i A_i^t. To compute the gradients of the objective ℓ in (11) with respect to the tuple Θ = (A, C), we make use of the chain rule in vectorized form:

    ∂ℓ/∂Θ: = ∑_i (∂ℓ/∂G_i:) (∂G_i:/∂Θ:).                                          (13)

While computing ∂ℓ/∂G_i: is straightforward, deriving ∂G_i:/∂Θ: is non-trivial, because the values of the product matrices G_i are obtained from an infinite summation. The following theorem proves that the gradients can be derived by solving an induced DLE.

Theorem 4. Let the extended observability matrices of two LDSs (A_1, C_1) and (A_2, C_2) be O_1 and O_2, respectively. Furthermore, let G_{12} = O_1^T O_2 = ∑_{t=0}^{∞} (A_1^t)^T C_1^T C_2 A_2^t be the product matrix between O_1 and O_2. Given the gradient of the objective function with respect to the product matrix, ∂ℓ/∂G_{12} = H, the gradients with respect to the system parameters are

    ∂ℓ/∂A_1 = G_{12} A_2 R_{12}^T,   ∂ℓ/∂C_1 = C_2 R_{12}^T,
    ∂ℓ/∂A_2 = G_{12}^T A_1 R_{12},   ∂ℓ/∂C_2 = C_1 R_{12},                        (14)

where R_{12} is obtained by solving the following DLE:

    A_1 R_{12} A_2^T − R_{12} + H = 0.                                            (15)

4.4 Constraint-Aware Updates

We cannot preserve the orthogonality of V and U or the stability of Σ if we use conventional gradient-descent methods to update the parameters Σ, V, U of (12). Optimization over the space of orthogonal matrices is a well-studied problem [15]. Here, we employ the Cayley transformation [15] to maintain the orthogonality of V and U. In particular, we update V by

    V = V − τ L_V (I_{2n} + (τ/2) R_V^T L_V)^{-1} R_V^T V,                        (16)

where L_V = [∇V, V] and R_V = [V, −∇V], ∇V is the gradient of the objective w.r.t. V, and τ is the learning rate. Similarly, to update U, we use

    U = U − τ L_U (I_{2n} + (τ/2) R_U^T L_U)^{-1} R_U^T U,                        (17)

where L_U = [∇U, U] and R_U = [U, −∇U]. As shown in [15], the Cayley transform follows a descent curve; thus updating V by Eq. (16) and U by Eq. (17) decreases the objective for sufficiently small τ. To enforce stability, we apply the following bound normalization on Σ:

    σ_k = β / max(β, |σ_k − τ ∇σ_k|) · (σ_k − τ ∇σ_k),                            (18)

where σ_k is the k-th diagonal element of Σ, ∇σ_k denotes the gradient w.r.t. σ_k, and β < 1 is a threshold (we set β = 0.99 in all of our experiments in this paper).
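The pieces of Sections 4.3 and 4.4 are mechanical enough to sketch directly. The following numpy sketch implements the gradient rule of Theorem 4 (with R_12 obtained from the induced DLE of Eq. (15) by vectorization), the Cayley update of Eqs. (16)-(17), and the bound normalization of Eq. (18); function names are ours.

```python
import numpy as np

def dle_gradients(A1, C1, A2, C2, G12, H):
    """Theorem 4: gradients w.r.t. (A1, C1, A2, C2) given H = d(loss)/dG12."""
    n = A1.shape[0]
    # Eq. (15): A1 R A2^T - R + H = 0  =>  (I - kron(A1, A2)) vec(R) = vec(H)
    # (row-major flattening: vec(A1 R A2^T) = kron(A1, A2) vec(R))
    R12 = np.linalg.solve(np.eye(n * n) - np.kron(A1, A2),
                          H.flatten()).reshape(n, n)
    return (G12 @ A2 @ R12.T,   # d(loss)/dA1
            C2 @ R12.T,         # d(loss)/dC1
            G12.T @ A1 @ R12,   # d(loss)/dA2
            C1 @ R12)           # d(loss)/dC2

def cayley_update(V, grad_V, tau):
    """Eq. (16)/(17) [15]: curvilinear step preserving orthonormal columns."""
    n = V.shape[1]
    L = np.hstack([grad_V, V])       # L_V = [grad, V]
    R = np.hstack([V, -grad_V])      # R_V = [V, -grad]
    mid = np.linalg.solve(np.eye(2 * n) + 0.5 * tau * (R.T @ L), R.T @ V)
    return V - tau * (L @ mid)

def bound_normalize(sigma, grad_sigma, tau, beta=0.99):
    """Eq. (18): elementwise step that keeps every |sigma_k| <= beta < 1."""
    step = sigma - tau * grad_sigma
    return beta * step / np.maximum(beta, np.abs(step))
```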
From the above, we immediately have the following result.

Theorem 5. The update direction in Eq. (18) is a descent direction.

The authors of [5] constrain the eigenvalues of the transition matrix to lie in (−1, 1) using a sigmoid function. However, the sigmoid function saturates easily, and its gradient vanishes when σ_k is close to the bound. In contrast, Eq. (18) does not suffer from this issue. For the reader's convenience, all the aforementioned details for optimizing (11) are summarized in Algorithm 1. The full details about how to use Algorithm 1 to solve clustering and sparse coding are provided in the supplementary material.

Algorithm 1 The PGD method to optimize problem (11)
  Input: the given tuples {(A_i, C_i)}; the initialization of (A, C); and the learning rate τ
  According to Theorem 1, compute the canonical forms of {(A_i, C_i)}_{i=1}^{N} and (A, C) as {(Σ_i, V_i, U_i)}_{i=1}^{N} and (Σ, V, U), respectively
  for t = 1 to maxIter do
    Compute the gradients according to Theorem 4: ∇Σ, ∇V, ∇U
    Update V: V = V − τ L_V (I_{2n} + (τ/2) R_V^T L_V)^{-1} R_V^T V, with L_V and R_V defined in Eq. (16)
    Update U: U = U − τ L_U (I_{2n} + (τ/2) R_U^T L_U)^{-1} R_U^T U, with L_U and R_U defined in Eq. (17)
    Update Σ: σ_k = β / max(β, |σ_k − τ∇σ_k|) · (σ_k − τ∇σ_k)
  end for
  Output: the system tuple (Σ, V, U)

4.5 Extensions for Other Kernels

The proposed solution is general in nature and can be used with other kernel functions, such as the Martin kernel [25] and the Binet-Cauchy kernel [19]. The Martin kernel is defined as

    k_m((A_1, C_1), (A_2, C_2)) = det(G_{11}^{-1} G_{12} G_{22}^{-1} G_{21}),     (19)

with G_{ij} as in Eq. (5). The determinant version of the Binet-Cauchy kernel is defined as

    k_b((A_1, C_1), (A_2, C_2)) = det(C_1 M C_2^T),                               (20)

where M satisfies e^{−λ_b} A_1 M A_2^T − M = −x_1(1) x_2(1)^T, λ_b is the exponential discounting rate, and x_1(1), x_2(1) are the initial hidden states of the two compared LDSs. Both the Martin kernel and the Binet-Cauchy kernel are computed through DLEs. Thus, Theorem 4 can be employed to compute the gradients w.r.t. the system tuple for both of them.
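Because Eq. (19) reuses the same product matrices G_{ij}, the Martin kernel drops out of the machinery already sketched; the snippet below assumes the solve_dle helper from the Section 4.1 sketch is in scope.

```python
import numpy as np

def martin_kernel(A1, C1, A2, C2):
    """Martin kernel of Eq. (19), built from the DLE solutions of Eq. (6)."""
    G11 = solve_dle(A1, A1, C1.T @ C1)
    G12 = solve_dle(A1, A2, C1.T @ C2)
    G21 = solve_dle(A2, A1, C2.T @ C1)
    G22 = solve_dle(A2, A2, C2.T @ C2)
    # det of the product equals the product of the two determinants
    return np.linalg.det(np.linalg.solve(G11, G12) @ np.linalg.solve(G22, G21))
```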
5 Experiments

In this section, we first compare the performance of our proposed method (see Algorithm 1), called PGD, with previous state-of-the-art methods on the tasks of clustering and sparse coding using the DynTex++ [28] dataset. We then evaluate the classification accuracies of various state-of-the-art methods and PGD on two video datasets, namely the YUPENN [29] and DynTex [30] datasets. These datasets have been widely used for evaluating LDS-based algorithms in the literature, and their details are presented in the supplementary material. In all experiments, the hidden order of the LDS (n in Eq. (1)) is fixed to 10. To learn an LDS dictionary, we use a sparsity factor of 0.1 (λ in Eq. (9)). The LDS tuples for all input sequences are learned by the method in [1], and the transition matrices are stabilized by the soft-normalization technique of [5].

5.1 Models Comparison

This experiment uses the DynTex++ dataset. We extract the histogram of LBP from Three Orthogonal Planes (LBP-TOP) [31] by splitting each video into sub-videos of length 8 with a 6-frame overlap. The LBP-TOP features are fed to LDSs to identify the system parameters. For clustering, we compare our PGD with the MDS method with the Martin kernel [11] and the Align algorithm [14]. For sparse coding, two related methods are compared: Grass [13] and LDSST [5]. We follow [13] and use 3-step observability matrices for the Grass method (hence Grass-3 below). In LDSST, the transition matrices are enforced to be symmetric. All algorithms are randomly initialized, and the average results over 10 runs are reported.

Figure 1: The clustering performance (purity) and the per-epoch running time of the MDS, Align, and PGD algorithms with a varying number of clusters on DynTex++.

5.1.1 Clustering

To evaluate the clustering performance, we apply the purity metric [32], which is given by p = (1/N) ∑_k max_i c_{i,k}, where c_{i,k} counts the number of samples from the i-th class in the k-th cluster and N is the number of data samples. A higher purity means better performance. For the Align algorithm, we varied the learning rate when optimizing the aligning matrices and chose the value that delivered the best performance. For our PGD algorithm, we selected the learning rate as 0.1 for Σ and V and 1 for U. Fig. 1 reports the clustering performance of the compared methods. Our method consistently outperforms both the MDS and Align methods over various numbers of clusters. We also report the running time for one epoch of each algorithm in Fig. 1. Here, one epoch means one update of the clustering centers through all data samples. Fig. 1 shows that PGD runs faster than both the MDS and Align algorithms, probably because the MDS method recomputes the kernel matrix for the embedding at each epoch and the Align algorithm calculates the aligning distance in an iterative way.

5.1.2 Sparse Coding

In this experiment, we used half of the samples from DynTex++ for training the dictionary and the other half for testing. As the objective of (11) is a sum to be minimized, we can employ the stochastic version of Algorithm 1 to optimize (11) for large-scale datasets. This is achieved by sampling a mini-batch to update the system tuple at each iteration. Therefore, in addition to the full-batch version, we also ran stochastic PGD with a mini-batch of size 128, denoted PGD-128. The learning rates of both the full PGD and PGD-128 were selected as 0.1 for Σ and V and 1 for U, and their values were halved every 10 epochs. Different from PGD, the Grass and LDSST methods require the whole dataset at hand to learn the dictionary at each epoch, and thus they cannot support updates via mini-batches.

Figure 2: Testing reconstruction errors of Grass-3, LDSST, PGD-full, and PGD-128 with different dictionary sizes (J = 4, 8, 16) on DynTex++. The PGD-128 method converges much faster than its counterparts. Although Grass-3 converges to a slightly smaller error than PGD-128 when J = 4 (see (a)), it performs worse than PGD-128 as the value of J increases (see (b) and (c)).

It is unfair to directly compare the reconstruction errors (Eq. (9)) of different methods, since their values are calculated with different metrics. Therefore, we make use of the normalized reconstruction error defined as NR = (R_t − R_init)/R_init, where R_init and R_t correspond to the reconstruction errors at the initial step and the t-th epoch, respectively.
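Both evaluation metrics used in this section are straightforward to compute; a small sketch, assuming integer class labels starting at 0:

```python
import numpy as np

def purity(labels, clusters):
    """Clustering purity [32]: p = (1/N) * sum_k max_i c_{i,k}.

    labels, clusters: integer numpy arrays of the same length.
    """
    total = sum(np.bincount(labels[clusters == k]).max()
                for k in np.unique(clusters))
    return total / len(labels)

def normalized_reconstruction_error(R_t, R_init):
    """NR = (R_t - R_init) / R_init; more negative means better."""
    return (R_t - R_init) / R_init
```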
Fig. 2 shows the normalized reconstruction errors on the testing set for PGD, Grass-3, and the LDSST method during the learning process for various dictionary sizes. PGD-128 converges to lower errors than PGD-full in all experiments, indicating that the stochastic sampling strategy is helpful for escaping poor local minima. PGD-128 consistently outperforms both Grass-3 and LDSST in terms of both learning speed and final error. The computational complexities of updating one dictionary atom for the Grass and LDSST methods are O((J + N) L² n² m²) and O((J + N) n² m²), respectively. Here, J is the dictionary size, N is the number of data samples, and n and m are the LDS parameters defined in Eq. (1). In contrast, PGD requires calculating the projected gradients of the canonical tuples, which scales as only O((J + N) n² m). As shown in Fig. 2, PGD is more than 50 times faster than the Grass-3 and LDSST methods per epoch.

5.2 Video Classification

Classifying YUPENN or DynTex videos is challenging, as the videos are recorded under various viewpoints and scales. To deliver robust features, we implement two kinds of high-level pooling frameworks: Bag-of-Systems (BoS) [11] and Spatial-Temporal-Pyramid-Matching (STPM) [16].³ In particular: 1) BoS is performed with the clustering methods, i.e., MDS, Align, and PGD. The BoS framework models the local spatio-temporal blocks with LDSs and then clusters the LDS descriptors to obtain the codewords. 2) The STPM framework works in conjunction with the sparse coding approaches (i.e., Grass-3, LDSST, and the PGD method). Unlike BoS, which represents a video by unordered local descriptors, STPM partitions a video into segments at different scales (2-level scales are considered here) and concatenates all local descriptors of each segment to form a vectorized representation. The codewords are provided by learning a dictionary. For the BoS methods, we apply a nonlinear SVM as the classifier, using the radial basis kernel with the χ² distance [33]; for the STPM methods, we utilize a linear SVM for classification.

³ In the experiments, we consider the projection kernel defined in Eq. (5). We have also conducted additional experiments with another kernel, namely the Martin kernel (Eq. (19)). The results are provided in the supplementary material.

Table 1: Mean classification accuracies (percentage) on the YUPENN and DynTex datasets.

    Dataset   References   BoS: MDS   BoS: Align   BoS: PGD   STPM: Grass-3   STPM: LDSST   STPM: PGD
    YUPENN    85 [10]      83.3       82.1         84.1       91.6            90.7          93.6
    DynTex    -            59.5       62.7         65.4       75.1            75.1          76.5

YUPENN. Non-overlapping spatio-temporal blocks of size 8 × 8 × 25 were sampled from the videos. The number of codewords for all BoS and STPM methods was set to 128. We sampled 50 blocks from each video to learn the codewords for the MDS, Align, Grass-3, and LDSST methods. For PGD, we updated the codewords with mini-batches. To maintain diversity within each mini-batch, a hierarchical approach was used. In particular, at each iteration we first randomly sampled 20 videos from the dataset and then sampled 4 blocks from each of the videos, leading to a mini-batch of size N' = 80. The learning rates were set as 0.5 for Σ and V and 5 for U, and their values were halved every 10 epochs. The test protocol is leave-one-video-out, as suggested in [29], leading to a total of 420 trials. Table 1 shows that the STPM methods achieve better accuracies than the BoS approaches; within the same pooling framework, our PGD always outperforms the other compared models.
For the probabilistic clustering method [10], the result on YUPENN is the 85% reported in Table 1. Note that in [10] a larger dictionary was applied. DynTex. For the DynTex dataset, spatio-temporal blocks of size 16 × 16 × 50 were sampled in a non-overlapping way. The number of codewords for all methods was chosen as 64. We applied the same sampling strategy as on YUPENN to learn the codewords for all compared methods. As shown in Table 1, the proposed method is superior to the studied models with both the BoS and STPM coding strategies.

6 Conclusion

We propose an efficient Projected Gradient Descent (PGD) method to optimize problem (11). Our algorithm can be used to perform clustering and sparse coding with LDSs. In contrast to previous studies, our solution avoids any approximation in LDS modeling or during the optimization process. Extensive experiments on clustering and sparse coding verify the effectiveness of the proposed method in terms of convergence performance and learning speed. We also explore the combination of PGD with two high-level pooling frameworks, namely Bag-of-Systems (BoS) and Spatial-Temporal-Pyramid-Matching, for video classification. The experimental results demonstrate that our PGD method consistently outperforms state-of-the-art methods.

Acknowledgments

This research was supported in part by the National Science Foundation of China (NSFC) (Grants No. 91420302, 91520201, 61210013, and 61327809), the NSFC and the German Research Foundation (DFG) in project Crossmodal Learning (Grant No. NSFC 61621136008 / DFG TRR-169), and the National High-Tech Research and Development Plan under Grant 2015AA042306. Besides, Tong Zhang was supported by the Australian Research Council's Discovery Projects funding scheme (project DP150104645).

References

[1] Gianfranco Doretto, Alessandro Chiuso, Ying Nian Wu, and Stefano Soatto. Dynamic textures. International Journal of Computer Vision (IJCV), 51(2):91–109, 2003.
[2] Tae-Kyun Kim and Roberto Cipolla. Canonical correlation analysis of video volume tensors for action categorization and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 31(8):1415–1428, 2009.
[3] Chuang Gan, Naiyan Wang, Yi Yang, Dit-Yan Yeung, and Alex G Hauptmann. DevNet: A deep event network for multimedia event detection and evidence recounting. In CVPR, pages 2568–2577.
[4] Chuang Gan, Ting Yao, Kuiyuan Yang, Yi Yang, and Tao Mei. You lead, we exceed: Labor-free video concept learning by jointly exploiting web videos and images. In CVPR, pages 923–932, 2016.
[5] Wenbing Huang, Fuchun Sun, Lele Cao, Deli Zhao, Huaping Liu, and Mehrtash Harandi. Sparse coding and dictionary learning with linear dynamical systems. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016.
[6] Sajid M Siddiqi, Byron Boots, and Geoffrey J Gordon. A constraint generation approach to learning stable linear dynamical systems. In Advances in Neural Information Processing Systems (NIPS), 2007.
[7] Wenbing Huang, Lele Cao, Fuchun Sun, Deli Zhao, Huaping Liu, and Shanshan Yu. Learning stable linear dynamical systems with the weighted least square method. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2016.
[8] Søren Johansen. Likelihood-based inference in cointegrated vector autoregressive models. Oxford University Press on Demand, 1995.
[9] Bijan Afsari and René Vidal. Distances on spaces of high-dimensional linear stochastic processes: A survey.
In Geometric Theory of Information, pages 219–242. Springer, 2014.
[10] Adeel Mumtaz, Emanuele Coviello, Gert RG Lanckriet, and Antoni B Chan. A scalable and accurate descriptor for dynamic textures using bag of system trees. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 37(4):697–712, 2015.
[11] Avinash Ravichandran, Rizwan Chaudhry, and René Vidal. Categorizing dynamic textures using a bag of dynamical systems. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(2):342–353, 2013.
[12] Pavan Turaga, Ashok Veeraraghavan, Anuj Srivastava, and Rama Chellappa. Statistical computations on Grassmann and Stiefel manifolds for image and video-based recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 33(11):2273–2286, 2011.
[13] Mehrtash Harandi, Richard Hartley, Chunhua Shen, Brian Lovell, and Conrad Sanderson. Extrinsic methods for coding and dictionary learning on Grassmann manifolds. International Journal of Computer Vision (IJCV), 114(2):113–136, 2015.
[14] Bijan Afsari, Rizwan Chaudhry, Avinash Ravichandran, and René Vidal. Group action induced distances for averaging and clustering linear dynamical systems with applications to the analysis of dynamic scenes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2208–2215. IEEE, 2012.
[15] Zaiwen Wen and Wotao Yin. A feasible method for optimization with orthogonality constraints. Mathematical Programming, 142(1-2):397–434, 2013.
[16] Jianchao Yang, Kai Yu, Yihong Gong, and Thomas Huang. Linear spatial pyramid matching using sparse coding for image classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1794–1801. IEEE, 2009.
[17] Antoni B Chan and Nuno Vasconcelos. Probabilistic kernels for the classification of auto-regressive visual processes. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 846–851. IEEE, 2005.
[18] Franco Woolfe and Andrew Fitzgibbon. Shift-invariant dynamic texture recognition. In European Conference on Computer Vision (ECCV), pages 549–562. Springer, 2006.
[19] SVN Vishwanathan, Alexander J Smola, and René Vidal. Binet-Cauchy kernels on dynamical systems and its application to the analysis of dynamic scenes. International Journal of Computer Vision (IJCV), 73(1):95–119, 2007.
[20] Payam Saisan, Gianfranco Doretto, Ying Nian Wu, and Stefano Soatto. Dynamic texture recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages II–58. IEEE, 2001.
[21] Antoni B Chan and Nuno Vasconcelos. Classifying video with kernel dynamic textures. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–6. IEEE, 2007.
[22] Katrien De Cock and Bart De Moor. Subspace angles between ARMA models. Systems & Control Letters, 46(4):265–270, 2002.
[23] Antoni B. Chan, Emanuele Coviello, and Gert RG Lanckriet. Clustering dynamic textures with the hierarchical EM algorithm. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2022–2029. IEEE, 2010.
[24] Antoni B. Chan, Emanuele Coviello, and Gert RG Lanckriet. Clustering dynamic textures with the hierarchical EM algorithm for modeling video. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(7):1606–1621, 2013.
[25] Richard J Martin. A metric for ARMA processes. IEEE Transactions on Signal Processing, 48(4):1164–1170, 2000.
[26] A Barraud. A numerical algorithm to solve A^T XA − X = Q. IEEE Transactions on Automatic Control, 22(5):883–885, 1977.
[27] Dan Kalman. A singularly valuable decomposition: the SVD of a matrix. The College Mathematics Journal, 27(1):2–23, 1996.
[28] Bernard Ghanem and Narendra Ahuja. Maximum margin distance learning for dynamic texture recognition. In European Conference on Computer Vision (ECCV), pages 223–236. Springer, 2010.
[29] Konstantinos G Derpanis, Matthieu Lecce, Kostas Daniilidis, and Richard P Wildes. Dynamic scene understanding: The role of orientation features in space and time in scene classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1306–1313. IEEE, 2012.
[30] Renaud Péteri, Sándor Fazekas, and Mark J. Huiskes. DynTex: a comprehensive database of dynamic textures. Pattern Recognition Letters, doi: 10.1016/j.patrec.2010.05.009, 2010. http://projects.cwi.nl/dyntex/.
[31] Guoying Zhao and Matti Pietikäinen. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(6):915–928, 2007.
[32] Anna Huang. Similarity measures for text document clustering. In Proceedings of the Sixth New Zealand Computer Science Research Student Conference (NZCSRSC 2008), Christchurch, New Zealand, pages 49–56, 2008.
[33] Richard O Duda, Peter E Hart, and David G Stork. Pattern Classification. John Wiley & Sons, 2012.
6936 |@word trial:1 determinant:1 version:3 briefly:1 duda:1 tedious:1 km:1 decomposition:2 tr:2 tnlist:1 initial:2 liu:2 series:1 denoting:2 rkhs:2 interestingly:1 document:1 o2:1 outperforms:4 com:1 comparing:2 assigning:1 written:1 john:1 numerical:3 partition:1 nian:2 enables:1 update:15 grass:16 bart:1 generative:1 selected:2 half:4 intelligence:6 plane:1 regressive:1 readability:1 attack:1 simpler:2 zhang:2 mathematical:1 along:1 direct:1 chiuso:1 prove:1 ijcv:3 dan:1 introduce:1 manner:1 mehrtash:4 expected:1 behavior:1 ldss:45 relying:1 curse:1 solver:1 increasing:1 considering:1 project:5 provided:4 underlying:1 agnostic:1 maxiter:1 kind:3 developed:1 finding:4 transformation:6 temporal:13 every:2 multidimensional:1 exactly:1 rm:5 k2:2 classifier:1 control:2 grant:3 positive:1 before:1 local:5 treat:1 tsinghua:3 analyzing:2 nsfc:3 oxford:1 interpolation:1 ap:3 sajid:1 chose:1 au:1 studied:2 initialization:1 equivalence:7 china:1 challenging:1 i2n:4 unique:3 acknowledgment:1 testing:6 block:5 definite:1 implement:1 fitzgibbon:1 huiskes:1 mei:1 area:1 yan:1 projection:5 matching:4 radial:1 get:1 cannot:1 close:1 convenience:1 naiyan:1 ravichandran:2 applying:1 instability:1 crossmodal:1 optimize:4 conventional:4 equivalent:7 map:1 center:1 straightforward:3 independently:1 shanshan:1 survey:1 formulate:1 zealand:2 unify:3 simplicity:2 immediately:1 splitting:1 shen:1 matthieu:1 m2:2 rule:1 spanned:1 deriving:1 stability:9 embedding:3 gert:3 updated:1 exact:1 programming:1 us:1 lanckriet:3 trick:1 element:3 recognition:15 approximated:1 cayley:3 updating:3 database:1 observed:3 role:1 solved:2 wang:1 descend:1 calculate:1 region:2 renaud:1 sun:2 decrease:1 valuable:1 alessandro:1 complexity:2 dynamic:17 motivate:1 solving:5 segment:2 deliver:1 inapplicable:1 basis:2 joint:2 various:8 represented:1 recomputes:1 describe:2 chellappa:1 kp:1 artificial:1 pgds:1 corresponded:1 doi:1 richer:1 huang1:2 solve:3 supplementary:5 widely:1 cvpr:10 reconstruct:1 kai:1 gi:4 transform:1 jointly:1 delivered:1 final:1 obviously:2 sequence:7 eigenvalue:2 tpami:5 propose:4 reconstruction:6 product:4 cao:2 combining:1 achieve:1 exploiting:1 convergence:3 cluster:8 ijcai:1 generating:1 categorization:1 converges:3 leave:1 rama:1 wilde:1 derive:1 andrew:1 gong:1 eq:28 solves:1 come:2 australian:2 uu:1 lyapunov:1 direction:2 radius:1 hartley:1 g22:2 stochastic:4 kb:1 australia:1 material:5 require:2 brian:1 summation:1 junzhou:1 extension:1 zhang2:1 around:1 sufficiently:1 considered:1 algorithmic:1 mapping:2 narendra:1 dictionary:18 rizwan:2 a2:6 bag:5 council:1 tool:1 weighted:1 moor:1 clearly:1 always:1 rather:1 avoid:1 varying:1 conjunction:1 categorizing:1 derived:4 ax:1 afsari:4 cwi:1 consistently:3 rank:1 likelihood:1 mainly:1 tech:1 contrast:6 kim:1 am:4 bos:10 helpful:1 inference:1 dependent:1 inaccurate:1 entire:1 cock:1 a0:2 hidden:4 interested:2 tao:1 issue:2 classification:14 aforementioned:2 overall:1 denoted:2 orientation:1 development:1 plan:1 art:5 spatial:5 aware:1 vasconcelos:2 beach:1 atom:7 chernoff:1 sampling:3 represents:1 yu:2 unsupervised:1 inevitable:1 promote:1 report:2 develops:1 simplify:2 employ:4 gordon:1 richard:4 randomly:2 wen:1 preserve:1 national:4 divergence:1 comprehensive:1 dfg:2 geometry:3 phase:1 csiro:2 maintain:3 detection:2 interest:1 dle:11 nl:1 chain:1 accurate:1 tuple:16 capable:1 facial:1 orthogonal:5 tree:1 euclidean:9 initialized:1 arma:3 desired:1 instance:1 column:2 modeling:9 soft:2 measuring:1 cost:1 conducted:1 reported:2 dependency:1 pavan:1 
spatiotemporal:1 data61:2 accomplish:2 combined:1 st:4 fundamental:2 international:4 probabilistic:3 discipline:1 yao:1 ambiguity:1 central:1 containing:1 huang:4 worse:1 resort:2 zhao:3 leading:2 account:1 diversity:1 de:2 unordered:1 coding:29 summarized:1 student:1 coefficient:1 caused:1 performed:2 extrapolation:1 lab:2 complicated:2 minimize:3 square:1 accuracy:5 descriptor:7 efficiently:3 identify:2 payam:1 lds:34 identification:3 none:1 lu:6 ren:4 daniilidis:1 j6:1 definition:1 sixth:1 involved:1 nuno:2 naturally:1 associated:3 proof:2 sampled:5 dataset:5 popular:2 recall:1 dimensionality:2 veeraraghavan:1 hilbert:1 higher:1 follow:1 formulation:4 arranged:1 though:2 furthermore:3 xa:1 implicit:2 smola:1 correlation:1 hand:1 web:1 nonlinear:1 overlapping:2 glance:1 aj:4 reveal:1 usa:1 avinash:2 facilitate:1 verify:1 normalized:2 concept:2 binet:5 counterpart:1 evolution:2 hence:2 discounting:1 alternating:1 symmetric:4 leibler:1 soatto:2 during:4 backpropagating:2 lovell:1 demonstrate:3 performs:2 cp:3 stefano:2 stiefel:2 image:3 novel:3 funding:1 superior:2 common:4 sigmoid:2 stork:1 insensitive:1 volume:3 discussed:1 approximates:1 measurement:5 rene:1 ai:12 automatic:1 mathematics:1 similarly:1 emanuele:3 stable:7 similarity:4 gt:1 align:11 aligning:3 closest:3 gianfranco:2 chan:5 optimizing:5 optimizes:1 chunhua:1 certain:1 binary:1 yi:2 devise:1 conrad:1 minimum:1 additional:4 employed:3 purity:3 ashok:1 doretto:2 redundant:1 signal:2 ii:1 rv:3 full:9 faster:3 bach:1 long:1 hart:1 grassmann:3 a1:8 calculates:1 involving:1 scalable:1 vision:14 metric:5 woolfe:1 yeung:1 histogram:1 kernel:28 normalization:5 represent:4 pyramid:3 achieved:4 robotics:1 iteration:2 addition:2 lbp:3 addressed:1 decreased:2 singular:2 rest:2 unlike:2 coviello:3 probably:1 nv:2 induced:6 pooling:4 byron:1 effectiveness:1 yang:4 exceed:1 easy:2 decent:1 zi:7 identified:1 escaping:1 observability:19 regarding:1 cn:2 idea:1 simplifies:1 konstantinos:1 translates:1 reduce:1 det:2 yihong:1 shift:1 jianchao:1 svn:1 expression:1 tactile:1 suffer:1 peter:1 passing:1 action:3 remark:3 deep:1 siddiqi:1 simplest:1 reduced:3 dit:1 http:1 outperform:1 exist:1 percentage:1 canonical:20 r12:5 stabilized:1 deli:2 extrinsic:1 per:2 discrete:1 zaiwen:1 group:4 key:2 redundancy:1 threshold:1 utilize:1 sum:2 enforced:1 angle:2 letter:2 you:1 ca2:1 family:1 reader:1 wu:2 scaling:1 bit:1 bound:8 singularly:1 fold:1 bv:1 vectorial:1 constraint:17 orthogonality:4 constrain:1 alex:1 n3:1 x2:2 scene:4 vishwanathan:1 aspect:1 speed:2 franco:1 min:5 martin:6 department:1 according:3 turaga:1 combination:1 poor:1 smaller:1 em:2 son:1 making:4 invariant:2 taken:1 equation:2 remains:1 turn:1 count:1 german:1 fed:1 end:4 sanderson:1 studying:1 vidal:4 apply:6 hierarchical:3 enforce:2 spectral:1 alternative:1 batch:6 shortly:1 original:4 chuang:2 denotes:1 clustering:34 include:2 top:2 running:1 gan:2 wenbing:3 thomas:1 ting:1 k1:1 especially:1 prof:1 society:2 unchanged:1 tensor:1 objective:8 christchurch:1 codewords:7 parametric:1 strategy:3 rt:9 md:11 traditional:1 diagonal:4 nr:3 exhibit:1 gradient:21 sun3:1 distance:9 subspace:23 lele:2 topic:1 mail:2 manifold:5 cauchy:5 trivial:1 g21:2 code:4 ru:3 length:1 besides:1 reformulate:1 mini:6 minimizing:1 kalman:1 ying:2 trace:1 recoded:1 perform:4 wotao:1 boot:1 datasets:6 finite:4 descent:5 extended:15 kyun:1 cointegrated:1 frame:1 rn:5 varied:1 reproducing:1 inv:1 david:1 cast:1 required:2 specified:1 extensive:2 namely:3 johansen:1 learned:3 nip:2 address:3 suggested:2 chaudhry:2 
dynamical:9 pattern:17 below:2 usually:1 sparsity:2 challenge:4 max:2 video:21 suitable:1 overlap:1 difficulty:3 event:2 circumvent:1 mn:2 representing:4 scheme:1 technology:2 carried:1 extract:1 auto:1 roberto:1 text:1 comply:1 literature:2 review:1 epoch:9 l2:1 discovery:1 geometric:1 understanding:1 embedded:2 loss:1 fully:2 generation:2 limitation:1 geoffrey:1 lv:6 ghanem:1 foundation:2 vectorized:2 viewpoint:1 classifying:3 share:1 eccv:2 prone:1 gl:2 supported:2 free:3 huaping:2 vv:1 trr:1 sparse:29 benefit:1 curve:1 calculated:2 dimension:1 transition:13 avoids:3 rich:1 computes:1 kz:1 author:5 collection:1 evaluating:1 projected:4 autoregressive:1 far:1 transaction:7 implicitly:1 kullback:1 active:1 robotic:1 spatio:9 tuples:6 search:1 iterative:1 table:4 nature:2 matti:1 concatenates:1 learn:3 robust:1 ca:2 init:2 tencent:2 necessarily:1 european:2 protocol:1 anna:1 whole:1 fuchun:3 noise:2 n2:3 derpanis:1 facilitating:1 x1:2 fig:5 rtr:1 ahuja:1 tong:3 kostas:1 wiley:1 sub:2 exponential:1 governed:1 unfair:1 vanish:1 removing:1 theorem:12 gradientdescent:1 saturate:1 specific:1 harandi:3 antoni:5 sensing:1 maxi:1 anuj:1 svm:2 evidence:1 exists:1 ci:2 texture:13 hauptmann:1 anu:1 demand:1 margin:1 easier:1 rg:3 cx:1 yin:1 explore:1 visual:1 labor:1 applies:1 cipolla:1 springer:3 satisfies:1 g12:5 feasible:4 change:1 hard:1 specifically:3 infinite:2 pgd:40 averaging:1 called:3 gij:4 total:1 invariance:3 svd:2 experimental:1 multimedia:1 bernard:1 pietikainen:1 indicating:1 college:1 support:1 mark:1 confines:1 alexander:1 evaluate:2 tae:1 srivastava:1
6,563
6,937
On Optimal Generalizability in Parametric Learning

Ahmad Beirami† [email protected]    Meisam Razaviyayn‡ [email protected]
Shahin Shahrampour† [email protected]    Vahid Tarokh† [email protected]

Abstract

We consider the parametric learning problem, where the objective of the learner is determined by a parametric loss function. Employing empirical risk minimization, possibly with regularization, the inferred parameter vector will be biased toward the training samples. Such bias is measured in practice by the cross validation procedure, where the data set is partitioned into a training set used for training and a validation set, which is not used in training and is left to measure the out-of-sample performance. A classical cross validation strategy is leave-one-out cross validation (LOOCV), where one sample is left out for validation and training is done on the rest of the samples presented to the learner; this process is repeated over all of the samples. LOOCV is rarely used in practice due to its high computational complexity. In this paper, we first develop a computationally efficient approximate LOOCV (ALOOCV) and provide theoretical guarantees for its performance. Then we use ALOOCV to provide an optimization algorithm for finding the regularizer in the empirical risk minimization framework. In our numerical experiments, we illustrate the accuracy and efficiency of ALOOCV as well as our proposed framework for the optimization of the regularizer.

1 Introduction

We consider the parametric supervised/unsupervised learning problem, where the objective of the learner is to build a predictor based on a set of historical data. Let z^n = {z_i}_{i=1}^{n}, where z_i ∈ Z denotes the data samples at the learner's disposal, assumed to be drawn i.i.d. from an unknown density function p(·), and Z is compact. We assume that the learner expresses the objective in terms of minimizing a parametric loss function ℓ(z; θ), which is a function of the parameter vector θ. The learner solves for the unknown parameter vector θ ∈ Θ ⊆ R^k, where k denotes the number of parameters in the model class, and Θ is a convex, compact set. Let

    L(θ) ≜ E{ℓ(z; θ)}                                                             (1)

be the risk associated with the parameter vector θ, where the expectation is with respect to the density p(·), which is unknown to the learner. Ideally, the goal of the learner is to choose the parameter vector θ* such that θ* ∈ arg min_{θ∈Θ} L(θ) = arg min_{θ∈Θ} E{ℓ(z; θ)}. Since the density function p(·) is unknown, the learner cannot compute θ* and hence cannot achieve the ideal performance L(θ*) = min_{θ∈Θ} L(θ) associated with the model class Θ.

† School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA.
‡ Department of Industrial and Systems Engineering, University of Southern California, Los Angeles, CA 90089, USA.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
A simple and universal apsamples, i.e., L(?(z proach to measuring the out-of-sample risk is cross validation [1]. Leave-one-out cross validation (LOOCV), which is a popular exhaustive cross validation strategy, uses (n ? 1) of the samples for training while one sample is left out for testing. This procedure is repeated on the n samples in a round-robin fashion, and the learner ends up with n estimates for the out-of-sample loss corresponding to each sample. These estimates together form a cross validation vector which can be used for the estimation of the out-of-sample performance, model selection, and tuning the model hyperparameters. While LOOCV provides a reliable estimate of the out-of-sample loss, it brings about an additional factor of n in terms of computational cost, which makes it practically impossible because of the high computational cost of training when the number of samples is large. Contribution: Our first contribution is to provide an approximation for the cross validation vector, called ALOOCV, with much lower computational cost. We compare its performance with LOOCV in problems of reasonable size where LOOCV is tractable. We also test it on problems of large size where LOOCV is practically impossible to implement. We describe how to handle quasi-smooth loss/regularizer functions. We also show that ALOOCV is asymptotically equivalent to Takeuchi information criterion (TIC) under certain regularity conditions. Our second contribution is to use ALOOCV to develop a gradient descent algorithm for jointly optimizing the regularization hyperparameters as well as the unknown parameter vector ?. We show that multiple hyperparameters could be tuned using the developed algorithm. We emphasize that the second contribution would not have been possible without the developed estimator as obtaining the gradient of the LOOCV with respect to tuning parameters is computationally expensive. Our experiments show that the developed method handles quasi-smooth regularized loss functions as well as number of tuning parameters that is on the order of the training samples. Finally, it is worth mentioning that although the leave-one-out cross validation scenario is considered in our analyses, the results and the algorithms can be extended to the leave-q-out cross validation and bootstrap techniques. Related work: A main application of cross validation (see [1] for a recent survey) is in model selection [2?4]. On the theoretical side, the proposed approximation on LOOCV is asymptotically equivalent to Takeuchi information criterion (TIC) [4?7], under certain regularity conditions (see [8] for a proof of asymptotic equivalence of AIC and LOOCV in autoregressive models). This is also related to Barron?s predicted square error (PSE) [9] and Moody?s effective number of parameters for nonlinear systems [10]. Despite these asymptotic equivalences our main focus is on the nonasymptotic performance of ALOOCV. ALOOCV simplifies to the closed form derivation of the LOOCV for linear regression, called PRESS (see [11, 12]). Hence, this work can be viewed as an approximate extension of this closed form derivation for an arbitrary smooth regularized loss function. This work is also related to the concept of influence functions [13], which has recently received renewed interest [14]. 
In contrast to methods based on influence functions, which require a large number of samples due to their asymptotic nature, we empirically show that the developed ALOOCV works well even when the number of samples and the number of features are small and comparable to each other. In particular, ALOOCV is capable of predicting overfitting, and hence can be used for model selection and for choosing the regularization hyperparameter. Finally, we expect that the idea behind ALOOCV can be extended to derive computationally efficient approximate bootstrap estimators [15].

Our second contribution is a gradient descent optimization algorithm for tuning the regularization hyperparameters in parametric learning problems. A similar approach has been taken for tuning the single parameter in ridge regression, where cross validation can be obtained in closed form [16]. Most of the existing methods, on the other hand, ignore the response and carry out the optimization solely based on the features, e.g., Stein's unbiased estimator of the risk for multiple parameter selection [17, 18]. Bayesian optimization has been used for tuning the hyperparameters in the model [19-23]; it postulates a prior on the parameters and optimizes for the best parameter. Bayesian optimization methods are generally derivative free, leading to slow convergence rates. In contrast, the proposed method is based on gradient descent. Other popular approaches to the tuning of the optimization parameters include grid search and random search [24-26]. These methods, by nature, also suffer from slow convergence. Finally, model selection has been considered as a bi-level optimization [27, 28], where the training process is modeled as a second-level optimization problem within the original problem. These formulations, like many other bi-level optimization problems, often lead to computationally intensive algorithms that are not scalable. We remark that ALOOCV can also be used within Bayesian optimization, random search, and grid search methods. Further, resource allocation can be used for improving the optimization performance in all such methods.

2 Problem Setup

To facilitate the presentation of the ideas, let us define the following concepts. Throughout, we assume that all vectors are in column format.

Definition 1 (regularization vector/regularized loss function). We suppose that the learner is concerned with M regularization functions r_1(θ), . . . , r_M(θ) in addition to the main loss function ℓ(z; θ). We define the regularization vector r(θ) ≜ (r_1(θ), . . . , r_M(θ))^T. Further, let λ = (λ_1, . . . , λ_M)^T be the vector of regularization parameters. We call w_n(z; θ, λ) the regularized loss function, given by

    w_n(z; θ, λ) ≜ ℓ(z; θ) + (1/n) λ^T r(θ) = ℓ(z; θ) + (1/n) ∑_{m∈[M]} λ_m r_m(θ).

The above definition encompasses many popular learning problems. For example, elastic net regression [31] can be cast in this framework by setting r_1(θ) = ‖θ‖_1 and r_2(θ) = (1/2)‖θ‖²_2.

Definition 2 (empirical risk/regularized empirical risk). Let the empirical risk be defined as L̂_{z^n}(θ) = (1/n) ∑_{i=1}^{n} ℓ(z_i; θ). Similarly, let the regularized empirical risk be defined as Ŵ_{z^n}(θ, λ) = (1/n) ∑_{i=1}^{n} w_n(z_i; θ, λ).

Definition 3 (regularized empirical risk minimization). We suppose that the learner solves the empirical risk minimization problem by selecting θ̂_λ(z^n) as follows:

    θ̂_λ(z^n) ∈ arg min_{θ∈Θ} Ŵ_{z^n}(θ, λ) = arg min_{θ∈Θ} { ∑_{i∈[n]} ℓ(z_i; θ) + λ^T r(θ) }.    (2)

Once the learner solves for θ̂_λ(z^n), the empirical risk corresponding to θ̂_λ(z^n) can be readily computed as L̂_{z^n}(θ̂_λ(z^n)) = (1/n) ∑_{i∈[n]} ℓ(z_i; θ̂_λ(z^n)). While the learner can evaluate her performance on the observed data samples (the in-sample empirical risk, i.e., L̂_{z^n}(θ̂_λ(z^n))), it is imperative to assess her performance on unobserved fresh samples, i.e., L(θ̂_λ(z^n)) (see (1)), which is referred to as the out-of-sample risk.
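As a concrete instance of Definitions 1-3, the regularized empirical risk for the elastic-net example above can be sketched as follows (squared loss, numpy; the function name is ours):

```python
import numpy as np

def regularized_empirical_risk(theta, X, y, lam):
    """W-hat of Definition 2 for elastic net: r_1 = ||.||_1, r_2 = 0.5||.||_2^2.

    X : (n, p) feature matrix; y : (n,) responses; lam = (lam1, lam2).
    """
    n = X.shape[0]
    resid = y - X @ theta
    emp_risk = 0.5 * np.sum(resid ** 2) / n              # (1/n) sum l(z_i; theta)
    reg = (lam[0] * np.abs(theta).sum()                  # lam1 * r_1(theta)
           + lam[1] * 0.5 * theta @ theta) / n           # lam2 * r_2(theta), scaled 1/n
    return emp_risk + reg
```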
(z n ) can be readily comOnce the learner solves for ? P 1 n n b b b puted by Lzn (? ? (z )) = n i?[n] `(zi ; ? ? (z )). While the learner can evaluate her performance b? (z n ))), it is on the observed data samples (also called the in-sample empirical risk, i.e., Lbzn (? b? (z n )) imperative to assess the performance of the learner on unobserved fresh samples, i.e., L(? (see (1)), which is referred to as the out-of-sample risk. To measure the out-of-sample risk, it is a common practice to perform cross validation as it works outstandingly well in many practical situations and is conceptually universal and simple to implement. Leave-one-out cross validation (LOOCV) uses all of the samples but one for training, which is left out for testing, leading to an n-dimensional cross validation vector of out-of-sample estimates. Let us formalize this notion. Let z n\i , (z1 , . . . , zi?1 , zi+1 , . . . , zn ) denote the set of the training examples excluding zi . 3 b? (z n\i ) be Definition 4 (LOOCV empirical risk minimization/cross validation vector) Let ? n\i the estimated parameter over the training set z , i.e., ? ? ? X ? n o b? (z n\i ) ? arg min W czn\i (?, ?) = arg min ? `(zj ; ?) + ?> r(?) . (3) ? ??Rk ??Rk ? j?[n]\i b? (z n\i )), and The cross validation vector is given by {CV?,i (z n )}i?[n] where CV?,i (z n ) , `(zi ; ? P n the cross validation out-of-sample estimate is given by CV? (z n ) , n1 i=1 CV?,i (z n ). The empirical mean and the empirical variance of the n-dimensional cross validation vector are used by practitioners as surrogates on assessing the out-of-sample performance of a learning method. The computational cost of solving the problem in (3) is n times that of the original problem in (2). Hence, while LOOCV provides a simple yet powerful tool to estimate the out-of-sample performance, the additional factor of n in terms of the computational cost makes LOOCV impractical in large-scale problems. One common solution to this problem is to perform validation on fewer number of samples, where the downside is that the learner would obtain a much more noisy and sometimes completely unstable estimate of the out-of-sample performance compared to the case where the entire LOOCV vector is at the learner?s disposal. On the other hand, ALOOCV described next will provide the benefits of LOOCV with negligible additional computational cost. We emphasize that the presented problem formulation is general and includes a variety of parametric machine learning tasks, where the learner empirically solves an optimization problem to minimize some loss function. 3 Approximate Leave-One-Out Cross Validation (ALOOCV) We assume that the regularized loss function is three times differentiable with continuous derivatives (see Assumption 1). This includes many learning problems, such as the L2 regularized logistic loss function. We later comment on how to handle the `1 regularizer function in LASSO. To proceed, we need one more definition. Definition 5 (Hessian/empirical Hessian) Let H(?) denote the Hessian of the risk function defined bzn (?, ?) denote the empirical Hessian of the regularized loss as H(?) , ?2? L(?). Further, let H  bzn (?, ?) , E bzn ?2 wn (z; ?, ?) = 1 Pn ?2 wn (zi ; ?, ?). Similarly, we function, defined as H ? i=1 ? n  2 bzn (?, ?) , E bzn\i ?2 wn (z; ?, ?) = 1 P ? wn (zi ; ?, ?). define H ? ? i?[n]\i n?1 Next we present the set of assumptions we need to prove the main result of the paper. Assumption 1 We assume that b? (z n ) ? ? ? k? = op (1).4 (a) There exists ? ? ? ?? 
,3 such that k? (b) wn (z; ?) is of class C 3 as a function of ? for all z ? Z. (c) H(? ? )  0 is positive definite. Theorem 1 Under Assumption 1, let e(i) (z n ) , ? b? (z n ) + ? ?  ?1 1 b b? (z n ), ? b? (z n )), Hzn\i ? ?? `(zi ; ? n?1 (4) assuming the inverse exists. Then, b? (z n\i ) ? ? e(i) (z n ) = ? ? 3 4  ?1 1 b (i) b? (z n ), ? Hzn\i ? ??,n , n?1 (5) (?)? denotes the interior operator. Xn = op (an ) implies that Xn /an approaches 0 in probability with respect to the density function p(?). 4 with high probability where (i) (i),1 (i),2 ??,n = ??,n ? ??,n , (6) (i),1 and ??,n is defined as   ? 1 X X b n b? (z n\i ))> b? (z n ) ? ? b? (z n\i ))b (? ? (z n ) ? ? ?2? wn?1 (zj ; ? i,j,1 (z ), ?) (? e? , ?,? 2 ??? (i),1 ??,n , j?[n]\i ??[k] (7) n i,j,1 b where eb? is ?-th standard unit vector, and such that for all ? ? [k], ? i,j,1 ? ? (z n ) + ?,? (z ) = ?? (i),2 b? (z n\i ) for some 0 ? ?i,j,1 ? 1. Further, ? (1 ? ?i,j,1 )? is defined as ? (i),2 ??,n , X ?  X > b? (z n )?? b? (z n\i )) eb? (? j?[n]\i ?,??[k] ?,n  ?2 i,j,2 n n n\i b b ?> ))b e? , ? wn?1 (zj ; ? ?,?,? (z ), ?) (? ? (z )?? ? (z ??? ??? (8) (i),2 i,j,2 b i,j,2 b i,j,2 such that for ?, ? ? [k], ? ?,?,? (z n ) = ??,? ? ? (z n ) + (1 ? ??,? )? ? (z n\i ) for some 0 ? ??,? ? 1. 5 Further, we have   1 n n\i b b k? ? (z ) ? ? ? (z ))k? = Op , (9) n   b? (z n\i ) ? ? e(i) (z n )k? = Op 1 . (10) k? ? n2 See the appendix for the proof. Inspired by Theorem 1, we provide an approximation on the cross validation vector.   (i) n n e Definition 6 (approximate cross validation vector) Let ACV?,i (z ) = ` zi ; ? ? (z ) . We call {ACV?,i (z n )}i?[n] the approximate cross validation vector. We further call n ACV? (z n ) , 1X ACV?,i (z n ) n i=1 (11) the approximate cross validation estimator of the out-of-sample loss. We remark that the definition can be extended to leave-q-out and q-fold cross validation by replacing the index i to an index set S with |S| = q, comprised of the q left-out samples in (4). (i) e (z n )}i?[n] is upper bounded by O(np+C(n, p)) where C(n, p) The cost of the computation of {? ? b? (z n ) in (2); see [14]. Note that the empirical risk miniis the computational cost of solving for ? mization problem posed in (2) requires time at least ?(np). Hence, the overall cost of computation e(i) (z n )}i?[n] is dominated by solving (2). On the other hand, the cost of computing the true of {? ? b? (z n\i )}i?[n] posed cross validation performance by naively solving n optimization problems {? in (3) would be O(nC(n, p)) which would necessarily be ?(n2 p) making it impractical for largescale problems. Corollary 2 The approximate cross validation vector is exact for kernel ridge regression. That is, e(i) (z n ) = ? b? (z n\i ) for all given that the regularized loss function is quadratic in ?, we have ? ? i ? [n] . (i) Proof We notice that the error term ??,n in (6) only depends on the third derivative of the loss funcb? (z n ). Hence, provided that the regularized loss function is quadratic in tion in a neighborhood of ? (i) ?, ??,n = 0 for all i ? [n]. 5  Xn = Op (an ) implies that Xn /an is stochastically bounded with respect to the density function p(?). 5 The fact that the cross validation vector could be obtained for kernel ridge regression in closed form without actually performing cross validation is not new, and the method is known as PRESS [11]. In a sense, the presented approximation could be thought of as an extension of this idea to more general loss and regularizer functions while losing the exactness property. 
We remark that the idea of ALOOCV is also related to that of the influence functions. In particular, influence functions have been used in [14] to derive an approximation on LOOCV for neural networks with large sample sizes. However, we notice that methods based on influence functions usually underestimate overfitting making them impractical for model selection. In contrast, we empirically demonstrate the effectiveness of ALOOCV in capturing overfitting and model selection. b? (z n ) and ? b? (z n\i ) are the same. In the case of `1 regularizer we assume that the support set of ? Although this would be true for large enough n under Assumption 1, it is not necessarily true for a b? (z n\i ) is known we use given sample z n when sample i is left out. Provided that the support set of ? the developed machinery in Theorem 1 on the subset of parameters that are non-zero. Further, we ignore the `1 regularizer term in the regularized loss function as it does not contribute to the Hessian matrix locally, and we assume that the regularized loss function is otherwise smooth in the sense of Assumption 1. In this case, the cost of calculating ALOOCV would scale with O(npa log(1/)) b? (z n ). where pa denotes the number of non-zero coordinates in the solution ? We remark that although the nature of guarantees in Theorem 1 are asymptotic, we have experimentally observed that the estimator works really well even for n and p as small as 50 in elastic net regression, logistic regression, and ridge regression. Next, we also provide an asymptotic characterization of the approximate cross validation. Lemma 3 Under Assumption 1, we have b? (z n )) + R b? (z n ), ?) + Op b zn (? ACV? (z n ) = Lbzn (? where b zn (?, ?) , R  1 n2  , h i?1 X 1 b ?> `(z ; ?) H (?, ?) ?? `(zi ; ?). n\i i z ? n(n ? 1) (12) (13) i?[n] Note that in contrast to the ALOOCV (in Theorem 1), the Op (1/n2 ) error term here depends on the second derivative of the loss function with respect to the parameters, consequently leading to worse performance, and underestimation of overfitting. 4 Tuning the Regularization Parameters Thus far, we presented an approximate cross validation vector that closely follows the predictions provided by the cross validation vector, while being computationally inexpensive. In this section, we use the approximate cross validation vector to tune the regularization parameters for the optimal performance. We are interested in solving  out-of-sample   Pn 1 n\i n b . To this end, we need to calculate the gradient of min? CV? (z ) = n i=1 ` zi ; ? ? z n b ? ? (z ) with respect to ?, which is given in the following lemma. h  i?1 b? (z n ) = ? 1 H b? (z n ), ? b? (z n )). bzn ? Lemma 4 We have ?? ? ?? r(? n h  i?1 b? (z n\i ) = ? 1 H b? (z n\i ), ? b? (z n\i )). bzn\i ? Corollary 5 We have ?? ? ?? r(? n?1 In order to apply first order optimization methods for minimizing CV? (z n ), we need to compute its gradient with respect to the tuning parameter vector ?. Applying the simple chain rule implies n    1 X >b b? z n\i ?? CV? (z n ) = ?? ? ? (z n\i ) ?? ` zi ; ? (14) n i=1 =? n   h  i?1    X 1 n\i b b? (z n\i ) b? z n\i , bzn\i ? ?> ) H ?? ` z i ; ? ? r ? ? (z n(n ? 
In order to apply first-order optimization methods for minimizing CV_λ(z^n), we need to compute its gradient with respect to the tuning parameter vector λ. Applying the chain rule implies

    ∇_λ CV_λ(z^n) = (1/n) ∑_{i=1}^{n} ∇_λ θ̂^T_λ(z^{n\i}) ∇_θ ℓ(z_i; θ̂_λ(z^{n\i}))    (14)
                  = −(1/(n(n−1))) ∑_{i=1}^{n} ∇^T_θ r(θ̂_λ(z^{n\i})) [Ĥ_{z^{n\i}}(θ̂_λ(z^{n\i}), λ)]^{-1} ∇_θ ℓ(z_i; θ̂_λ(z^{n\i})),    (15)

where (15) follows by substituting ∇_λ θ̂_λ(z^{n\i}) from Corollary 5. However, (15) is computationally expensive and almost impossible to evaluate in practice, even for medium-sized datasets. Hence, we use the ALOOCV from (4) (Theorem 1) in (14) to approximate the gradient.

Figure 1: The progression of the loss when Algorithm 1 is applied to ridge regression with diagonal regressors.
Figure 2: The progression of the λ's when Algorithm 1 is applied to ridge regression with diagonal regressors.

Let

    g^{(i)}_λ(z^n) ≜ −(1/(n−1)) ∇^T_θ r(θ̃^{(i)}_λ(z^n)) [Ĥ_{z^{n\i}}(θ̃^{(i)}_λ(z^n), λ)]^{-1} ∇_θ ℓ(z_i; θ̃^{(i)}_λ(z^n)).    (16)

Further, motivated by the suggested ALOOCV, let us define the approximate gradient g_λ(z^n) ≜ (1/n) ∑_{i∈[n]} g^{(i)}_λ(z^n). Based on our numerical experiments, this approximate gradient closely follows the gradient of the cross validation, i.e., ∇_λ CV_λ(z^n) ≈ g_λ(z^n). Note that this approximation is straightforward to compute. Therefore, using this approximation, we can apply the first-order optimization in Algorithm 1 to optimize the tuning parameter λ.

Algorithm 1 Approximate gradient descent algorithm for tuning λ
  Initialize the tuning parameter λ_0, choose a step-size selection rule, and set t = 0
  for t = 0, 1, 2, . . . do
    calculate the approximate gradient g_{λ_t}(z^n)
    set λ_{t+1} = λ_t − α_t g_{λ_t}(z^n)
  end for

Although Algorithm 1 is more computationally efficient than LOOCV (saving a factor of n), it may still be computationally expensive for large values of n, as its cost still scales linearly with n. Hence, we also present an online version of the algorithm using the stochastic gradient descent idea; see Algorithm 2.

Algorithm 2 Stochastic (online) approximate gradient descent algorithm for tuning λ
  Initialize the tuning parameter λ_0 and set t = 0
  for t = 0, 1, 2, . . . do
    choose a random index i_t ∈ {1, . . . , n}
    calculate the stochastic gradient g^{(i_t)}_{λ_t}(z^n) using (16)
    set λ_{t+1} = λ_t − α_t g^{(i_t)}_{λ_t}(z^n)
  end for
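A compact sketch of Algorithm 1, with the ALOOCV-based gradient surrogate abstracted behind a callable; the stochastic variant of Algorithm 2 simply replaces the full surrogate by a single-sample term.

```python
import numpy as np

def tune_lambda(lam0, approx_grad, step, iters=100):
    """Algorithm 1 (a sketch): first-order descent on the tuning vector.

    approx_grad(lam) should return g_lambda(z^n), the ALOOCV-based
    surrogate of Eq. (16) averaged over samples; step(t) is a step-size rule.
    """
    lam = np.asarray(lam0, dtype=float)
    for t in range(iters):
        lam = lam - step(t) * approx_grad(lam)   # lam_{t+1} = lam_t - alpha_t g
    return lam

# Algorithm 2 (stochastic variant): replace approx_grad(lam) above by the
# single-sample gradient g^{(i_t)} for a uniformly random index i_t.
```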
5 Numerical Experiments

Ridge regression with diagonal regressors: We consider the following regularized loss function:
$$w_n(z; \lambda, \theta) = \ell(z;\theta) + \frac{1}{n}\lambda^\top r(\theta) = \frac{1}{2}\big(y - \theta^\top x\big)^2 + \frac{1}{2n}\,\theta^\top \mathrm{diag}(\lambda)\,\theta.$$
In other words, we consider one regularization parameter per model parameter. To validate the proposed optimization algorithm, we consider a scenario with $p = 50$ where $x$ is drawn i.i.d. from $N(0, I_p)$. We let $y = \theta^{*\top}x + \epsilon$, where $\theta^*_1 = \ldots = \theta^*_{40} = 0$ and $\theta^*_{41},\ldots,\theta^*_{50} \sim N(0,1)$ i.i.d., and $\epsilon \sim N(0, 0.1)$. We draw $n = 150$ samples from this model, and apply Algorithm 1 to optimize over $\lambda = (\lambda_1,\ldots,\lambda_{50})$. The problem is designed so that, of the 50 features, the first 40 are irrelevant while the last 10 are important. We initialize the algorithm with $\lambda_1 = \ldots = \lambda_{50} = 1/3$ and compute ACV using Theorem 1. Recall that in this case, ACV is exactly equivalent to CV (see Corollary 2). Figure 1 plots ALOOCV and the out-of-sample loss, and Figure 2 plots the mean value of $\lambda$ calculated over the irrelevant and relevant features, respectively. As expected, the $\lambda$ for an irrelevant feature is set to a larger value, on average, than that of a relevant feature. Finally, we remark that the optimization of 50 tuning parameters over 800 iterations took a mere 28 seconds on a PC.

[Figure 1: The progression of the loss (ALOOCV and out-of-sample loss vs. iteration number) when Algorithm 1 is applied to ridge regression with diagonal regressors.]
[Figure 2: The progression of the $\lambda$'s (mean of $\lambda_1,\ldots,\lambda_m$ vs. mean of $\lambda_{m+1},\ldots,\lambda_p$, over iteration number; elapsed time: 28 seconds) when Algorithm 1 is applied to ridge regression with diagonal regressors.]
[Figure 3: The histogram of the normalized difference between LOOCV and ALOOCV for 5 runs of the algorithm on randomly selected samples for each $\lambda$ in Table 1 (MNIST dataset with $n = 200$ and $p = 400$).]

Table 1: The results of logistic regression (in-sample loss, out-of-sample loss, LOOCV, ALOOCV, and influence-function LOOCV) for different regularization parameters on the MNIST dataset with $n = 200$ and $p = 400$. The numbers in parentheses represent the standard error.

  lambda | in-sample loss  | out-of-sample   | CV              | ACV             | IF
  3.3333 | 0.0637 (0.0064) | 0.1095 (0.0168) | 0.1077 (0.0151) | 0.1080 (0.0152) | 0.0906 (0.0113)
  1.6667 | 0.0468 (0.0051) | 0.1021 (0.0182) | 0.1056 (0.0179) | 0.1059 (0.0179) | 0.0734 (0.0100)
  0.8333 | 0.0327 (0.0038) | 0.0996 (0.0201) | 0.1085 (0.0214) | 0.1087 (0.0213) | 0.0559 (0.0079)
  0.4167 | 0.0218 (0.0026) | 0.1011 (0.0226) | 0.1158 (0.0256) | 0.1155 (0.0254) | 0.0397 (0.0056)
  0.2083 | 0.0139 (0.0017) | 0.1059 (0.0256) | 0.1264 (0.0304) | 0.1258 (0.0300) | 0.0267 (0.0038)
  0.1042 | 0.0086 (0.0011) | 0.1131 (0.0291) | 0.1397 (0.0356) | 0.1386 (0.0349) | 0.0171 (0.0024)
  0.0521 | 0.0051 (0.0006) | 0.1219 (0.0330) | 0.1549 (0.0411) | 0.1534 (0.0402) | 0.0106 (0.0015)

Table 2: The results of logistic regression (in-sample loss, out-of-sample loss, and ACV) on the CIFAR-10 dataset with $n = 9600$ and $p = 3072$.

  lambda | in-sample loss | out-of-sample | ACV
  1e5    | 0.6578         | 0.6591        | 0.6578 (0.0041)
  1e4    | 0.5810         | 0.6069        | 0.5841 (0.0079)
  1e3    | 0.5318         | 0.5832        | 0.5444 (0.0121)
  1e2    | 0.5152         | 0.5675        | 0.5379 (0.0146)
  1e1    | 0.4859         | 0.5977        | 0.5560 (0.0183)
  1e0    | 0.4456         | 0.6623        | 0.6132 (0.0244)

Table 3: Comparison of the leave-one-out estimates on the outlier samples with the highest in-sample loss in the MNIST dataset.

  in-sample loss | CV      | ACV     | IF
  0.0872         | 8.5526  | 8.6495  | 0.2202
  0.0920         | 2.1399  | 2.1092  | 0.2081
  0.0926         | 10.8783 | 9.4791  | 0.2351
  0.0941         | 3.5210  | 3.3162  | 0.2210
  0.0950         | 5.7753  | 6.1859  | 0.2343
  0.0990         | 5.2626  | 5.0554  | 0.2405
  0.1505         | 12.0483 | 11.5281 | 0.3878

Logistic regression: The second example that we consider is logistic regression:
$$w_n(z; \lambda, \theta) = \ell(z;\theta) + \frac{1}{n}\lambda^\top r(\theta) = H\big(y\,\|\, \mathrm{sigmoid}(\theta_0 + \theta^\top x)\big) + \frac{\lambda}{2n}\|\theta\|_2^2,$$
where $H(u\|v)$, for any $u \in [0,1]$ and $v \in (0,1)$, is given by $H(u\|v) := u\log\frac{1}{v} + (1-u)\log\frac{1}{1-v}$ and denotes the binary cross-entropy function, and $\mathrm{sigmoid}(x) := 1/(1+e^{-x})$ denotes the sigmoid function. In this case, we only consider a single regularization parameter. Since the loss and regularizer are smooth, we resort to Theorem 1 to compute ACV. We applied logistic regression to the MNIST and CIFAR-10 image datasets, using each pixel in the image as a feature in the loss function above. In MNIST, we classify the digits 2 and 3, while in CIFAR-10, we classify "bird" and "cat." As can be seen in Tables 1 and 2, ACV closely follows CV on the MNIST dataset. On the other hand, the approximation of LOOCV based on influence functions [14] performs poorly in the regime where the model is significantly overfit, and hence it cannot be used for effective model selection.
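For concreteness, the following is a small sketch (ours, with illustrative names) of the ingredients ALOOCV needs for this logistic model: the binary cross-entropy loss, its gradient, and the Hessian $H_{z^n}(\theta,\lambda)$ of the average regularized loss. Unlike the text above, the intercept $\theta_0$ is folded into $\theta$ via a constant feature, which means it gets regularized too; this is a simplification.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def loss(theta, x, y):
    # l(z; theta) = H(y || sigmoid(theta^T x)), the binary cross entropy
    v = sigmoid(x @ theta)
    return -(y * np.log(v) + (1.0 - y) * np.log(1.0 - v))

def grad_loss(theta, x, y):
    return (sigmoid(x @ theta) - y) * x

def hessian(theta, lam, X):
    # H_{z^n}(theta, lambda): Hessian of (1/n) sum_i l(z_i; theta) + (lam/2n)||theta||_2^2
    n, p = X.shape
    w = sigmoid(X @ theta)
    return (X.T * (w * (1.0 - w))) @ X / n + (lam / n) * np.eye(p)

# tiny demo on random data with the dimensions used for MNIST above
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 400))
y = rng.integers(0, 2, size=200).astype(float)
H = hessian(np.zeros(400), 1.0, X)
print(H.shape, np.linalg.eigvalsh(H).min() > 0)  # (400, 400), positive definite
```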
On CIFAR-10, ACV takes about 1 second to run per sample, whereas CV takes about 60 seconds per sample, requiring days to run for each $\lambda$ even for this medium-sized problem. The histogram of the normalized difference between the CV and ACV vectors is plotted in Figure 3 for 5 runs of the algorithm for each $\lambda$ in Table 1. As can be seen, CV and ACV are almost always within 5% of each other. We have also plotted the loss for the eight outlier samples with the highest in-sample loss in the MNIST dataset in Table 3. As can be seen, ALOOCV closely follows LOOCV even when the leave-one-out loss is two orders of magnitude larger than the in-sample loss for these outliers. On the other hand, the approximation based on influence functions fails to capture the out-of-sample performance and the outliers in this case.

[Figure 4: The application of Algorithms 1 and 2 to elastic net regression. The left panel shows the loss (estimated CV/ACV and actual CV, with $n = 70$ and $p = 50$) vs. the number of iterations, for both the batch (Algorithm 1) and online (Algorithm 2) variants. The right panel shows the run-time vs. $n$ (the sample size) for LOOCV and approximate LOOCV, with the inner panel showing the run-time ratio LOOCV/ALOOCV.]

Elastic net regression: Finally, we consider the popular elastic net regression problem [31]:
$$w_n(z; \lambda, \theta) = \ell(z;\theta) + \frac{1}{n}\lambda^\top r(\theta) = \frac{1}{2}\big(y - \theta^\top x\big)^2 + \frac{\lambda_1}{n}\|\theta\|_1 + \frac{\lambda_2}{2n}\|\theta\|_2^2.$$
In this case, there are only two regularization parameters to be optimized for the quasi-smooth regularized loss. Similar to the previous case, we consider $y = \theta^{*\top}x + \epsilon$, where $\theta^*_j = \beta_j\,\nu_j$ with $\beta_j$ a Bernoulli(1/2) random variable and $\nu_j \sim N(0,1)$ i.i.d. Hence, the features are weighted non-uniformly in $y$, and half of them are zeroed out on average. We apply both Algorithms 1 and 2, where we use the approximation in Theorem 1 together with the treatment of $\ell_1$ regularizers described above to compute ACV. We initialize with $\lambda_1 = \lambda_2 = 0$. As can be seen in the left panel of Figure 4, ACV closely follows CV in this case. Further, we see that both algorithms are capable of significantly reducing the loss after only a few iterations. The right panel compares the run-time of the algorithms vs. the number of samples. This confirms our analysis that the run-time of CV scales quadratically, as $O(n^2)$, as opposed to $O(n)$ for ACV. The effect is more pronounced in the inner panel, where the run-time ratio is plotted.

Acknowledgement

This work was supported in part by DARPA under Grant No. W911NF-16-1-0561. The authors are thankful to Jason D. Lee (USC), who brought to their attention the recent work [14] on influence functions for approximating leave-one-out cross validation.

References
[1] Sylvain Arlot and Alain Celisse. A survey of cross-validation procedures for model selection. Statistics Surveys, 4:40–79, 2010.
[2] Seymour Geisser. The predictive sample reuse method with applications. Journal of the American Statistical Association, 70(350):320–328, 1975.
[3] Peter Craven and Grace Wahba. Smoothing noisy data with spline functions. Numerische Mathematik, 31(4):377–403, 1978.
[4] Kenneth P Burnham and David R Anderson. Model selection and multimodel inference: a practical information-theoretic approach. Springer Science & Business Media, 2003.
[5] Hirotugu Akaike. Statistical predictor identification. Annals of the Institute of Statistical Mathematics, 22(1):203–217, 1970.
[6] Hirotogu Akaike. Information theory and an extension of the maximum likelihood principle. In Selected Papers of Hirotugu Akaike, pages 199–213. Springer, 1998.
[7] K Takeuchi. Distribution of informational statistics and a criterion of model fitting. Suri-Kagaku (Mathematical Sciences), 153:12–18, 1976.
[8] Mervyn Stone. Cross-validation and multinomial prediction. Biometrika, pages 509–515, 1974.
[9] Andrew R Barron. Predicted squared error: a criterion for automatic model selection. 1984.
[10] John E Moody. The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems. In Advances in Neural Information Processing Systems, pages 847–854, 1992.
[11] David M Allen. The relationship between variable selection and data augmentation and a method for prediction. Technometrics, 16(1):125–127, 1974.
[12] Ronald Christensen. Plane answers to complex questions: the theory of linear models. Springer Science & Business Media, 2011.
[13] R Dennis Cook and Sanford Weisberg. Characterizations of an empirical influence function for detecting influential cases in regression. Technometrics, 22(4):495–508, 1980.
[14] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. International Conference on Machine Learning, 2017.
[15] Bradley Efron. Better bootstrap confidence intervals. Journal of the American Statistical Association, 82(397):171–185, 1987.
[16] Gene H Golub, Michael Heath, and Grace Wahba. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, 21(2):215–223, 1979.
[17] Charles-Alban Deledalle, Samuel Vaiter, Jalal Fadili, and Gabriel Peyré. Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection. SIAM Journal on Imaging Sciences, 7(4):2448–2487, 2014.
[18] Sathish Ramani, Zhihao Liu, Jeffrey Rosen, Jon-Fredrik Nielsen, and Jeffrey A Fessler. Regularization parameter selection for nonlinear iterative image restoration and MRI reconstruction using GCV and SURE-based methods. IEEE Transactions on Image Processing, 21(8):3659–3672, 2012.
[19] Jonas Močkus. Application of Bayesian approach to numerical methods of global and stochastic optimization. Journal of Global Optimization, 4(4):347–365, 1994.
[20] Jonas Močkus. On Bayesian methods for seeking the extremum. In Optimization Techniques IFIP Technical Conference, pages 400–404. Springer, 1975.
[21] Eric Brochu, Vlad M Cora, and Nando De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[22] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In International Conference on Learning and Intelligent Optimization, pages 507–523. Springer, 2011.
[23] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
[24] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305, 2012.
[25] Chris Thornton, Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 847–855. ACM, 2013.
[26] Katharina Eggensperger, Matthias Feurer, Frank Hutter, James Bergstra, Jasper Snoek, Holger Hoos, and Kevin Leyton-Brown. Towards an empirical foundation for assessing Bayesian optimization of hyperparameters. In NIPS Workshop on Bayesian Optimization in Theory and Practice, pages 1–5, 2013.
[27] Gautam Kunapuli, K Bennett, Jing Hu, and Jong-Shi Pang. Bilevel model selection for support vector machines. In CRM Proceedings and Lecture Notes, volume 45, pages 129–158, 2008.
[28] Kristin P Bennett, Jing Hu, Xiaoyun Ji, Gautam Kunapuli, and Jong-Shi Pang. Model selection via bilevel optimization. In 2006 International Joint Conference on Neural Networks (IJCNN'06), pages 1922–1929, 2006.
[29] Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems, pages 2004–2012, 2013.
[30] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: Bandit-based configuration evaluation for hyperparameter optimization. Proceedings of ICLR, 17, 2017.
[31] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.
Near Optimal Sketching of Low-Rank Tensor Regression

Jarvis Haupt¹ ([email protected])   Xingguo Li¹,² ([email protected])   David P. Woodruff³ ([email protected])
¹University of Minnesota   ²Georgia Institute of Technology   ³Carnegie Mellon University

Abstract

We study the least squares regression problem $\min_{\Theta\in\mathbb{R}^{p_1\times\cdots\times p_D}} \|\mathcal{A}(\Theta) - b\|_2^2$, where $\Theta$ is a low-rank tensor, defined as $\Theta = \sum_{r=1}^R \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}$, for vectors $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$. Here, $\circ$ denotes the outer product of vectors, and $\mathcal{A}(\Theta)$ is a linear function on $\Theta$. This problem is motivated by the fact that the number of parameters in $\Theta$ is only $R\cdot\sum_{d=1}^D p_d$, which is significantly smaller than the $\prod_{d=1}^D p_d$ number of parameters in ordinary least squares regression. We consider the above CP decomposition model of tensors $\Theta$, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections $\Phi \in \mathbb{R}^{m\times n}$, with $m \ll n$, to reduce the problem to a much smaller problem $\min_\Theta \|\Phi\mathcal{A}(\Theta) - \Phi b\|_2^2$, for which $\|\Phi\mathcal{A}(\Theta) - \Phi b\|_2^2 = (1\pm\varepsilon)\|\mathcal{A}(\Theta) - b\|_2^2$ holds simultaneously for all $\Theta$. We obtain a significantly smaller dimension and sparsity in the randomized linear mapping $\Phi$ than is possible for ordinary least squares regression. Finally, we give a number of numerical simulations supporting our theory.

1 Introduction

For a sequence of D-way design tensors $\mathcal{A}_i \in \mathbb{R}^{p_1\times\cdots\times p_D}$, $i \in [n] \triangleq \{1,\ldots,n\}$, suppose we observe noisy linear measurements of an unknown D-way tensor $\Theta \in \mathbb{R}^{p_1\times\cdots\times p_D}$, given by
$$b = \mathcal{A}(\Theta) + z, \quad b, z \in \mathbb{R}^n, \qquad (1)$$
where $\mathcal{A}(\Theta): \mathbb{R}^{p_1\times\cdots\times p_D} \to \mathbb{R}^n$ is a linear function with $\mathcal{A}_i(\Theta) = \langle\mathcal{A}_i, \Theta\rangle = \mathrm{vec}(\mathcal{A}_i)^\top \mathrm{vec}(\Theta)$ for all $i \in [n]$, $\mathrm{vec}(X)$ is the vectorization of a tensor $X$, and $z = [z_1,\ldots,z_n]^\top$ corresponds to the observation noise. Given the design tensors $\{\mathcal{A}_i\}_{i=1}^n$ and noisy observations $b = [b_1,\ldots,b_n]^\top$, a natural approach for estimating the parameter $\Theta$ is to use Ordinary Least Squares (OLS) estimation for the tensor regression problem, i.e., to solve
$$\min_{\Theta\in\mathbb{R}^{p_1\times\cdots\times p_D}} \|\mathcal{A}(\Theta) - b\|_2^2. \qquad (2)$$

Tensor regression has been widely studied in the literature. Applications include computer vision [8, 19, 34], data mining [5], multi-model ensembles [32], neuroimaging analysis [15, 36], multitask learning [21, 31], and multivariate spatio-temporal data analysis [1, 11]. In these applications, modeling the unknown parameters as a tensor is what is needed, as it allows for learning from data with multi-directional relations, such as in climate prediction [33], inherent structure learning with multi-dimensional indices [21], and hand movement trajectory decoding [34].

Due to the high dimensionality of tensor data, structured learning based on low-rank tensor decompositions, such as the CANDECOMP/PARAFAC (CP) decomposition and Tucker decomposition models [13, 24], has been proposed in order to obtain tractable tensor regression problems. As discussed more below, requiring the unknown tensor to be low-rank significantly reduces the number of unknown parameters. We consider low-rank tensor regression problems based on the CP decomposition and Tucker decomposition models. For simplicity, we first focus on the CP model, and later extend our analysis to the Tucker model. Suppose that $\Theta$ admits a rank-R CP decomposition, that is,
$$\Theta = \sum_{r=1}^R \theta_1^{(r)} \circ \cdots \circ \theta_D^{(r)}, \qquad (3)$$
where $\theta_d^{(r)} \in \mathbb{R}^{p_d}$ for all $r \in [R]$, $d \in [D]$, and $\circ$ is the outer product of vectors.

(The authors are listed in alphabetical order. The authors acknowledge support from University of Minnesota Startup Funding.)
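As a concrete illustration of the CP model (3), here is a minimal sketch (ours, with illustrative dimensions) that builds a rank-R tensor from its factor vectors via outer products:

```python
# Rank-R CP tensor per (3): Theta = sum_r theta_1^(r) o ... o theta_D^(r)
import numpy as np

rng = np.random.default_rng(0)
D, R = 3, 3
p = [4, 5, 6]
# Theta_d = [theta_d^(1), ..., theta_d^(R)], one factor matrix per way
factors = [rng.normal(size=(p[d], R)) for d in range(D)]

Theta = np.zeros(p)
for r in range(R):
    outer = factors[0][:, r]
    for d in range(1, D):
        outer = np.multiply.outer(outer, factors[d][:, r])
    Theta += outer
print(Theta.shape)  # (4, 5, 6); R * sum(p) = 45 parameters vs. prod(p) = 120
```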
For convenience, we reparameterize the set of low-rank tensors by its matrix slabs/factors:
$$\mathcal{S}_{D,R} \triangleq \big\{ [[\Theta_1,\ldots,\Theta_D]] \;\big|\; \Theta_d = [\theta_d^{(1)},\ldots,\theta_d^{(R)}] \in \mathbb{R}^{p_d\times R}, \text{ for all } d\in[D] \big\}.$$
Then we can rewrite model (1) in the compact form
$$b = A\,(\Theta_D \odot \cdots \odot \Theta_1)\mathbf{1}_R + z, \qquad (4)$$
where $A = [\mathrm{vec}(\mathcal{A}_1),\cdots,\mathrm{vec}(\mathcal{A}_n)]^\top \in \mathbb{R}^{n\times\prod_{d=1}^D p_d}$ is the matricization of all design tensors, $\mathbf{1}_R = [1,\ldots,1]^\top \in \mathbb{R}^R$ is the all-ones vector, $\otimes$ is the Kronecker product, and $\odot$ is the Khatri-Rao product. In addition, the OLS estimation for tensor regression (2) can be rewritten as the following nonconvex problem in terms of the low-rank tensor parameters $[[\Theta_1,\ldots,\Theta_D]]$:
$$\min_{\vartheta\in\bar{\mathcal{S}}_{D,R}} \|A\vartheta - b\|_2^2, \quad\text{where}\quad \bar{\mathcal{S}}_{D,R} \triangleq \big\{ (\Theta_D\odot\cdots\odot\Theta_1)\mathbf{1}_R \in \mathbb{R}^{\prod_d p_d} \;\big|\; [[\Theta_1,\ldots,\Theta_D]] \in \mathcal{S}_{D,R} \big\}. \qquad (5)$$

The number of parameters for a general tensor $\Theta\in\mathbb{R}^{p_1\times\cdots\times p_D}$ is $\prod_{d=1}^D p_d$, which may be prohibitive for estimation even for small values of $\{p_d\}_{d=1}^D$. The benefit of the low-rank tensor model (3) is that it dramatically reduces the degrees of freedom of the unknown tensor from $\prod_{d=1}^D p_d$ to $R\cdot\sum_{d=1}^D p_d$, where we are typically interested in the case when $R \ll p_d$ for all $d\in[D]$. For example, a typical MRI image has size $256^3 \approx 1.7\times 10^7$, while using the low-rank model with $R = 10$ reduces the number of unknown parameters to $256\times 3\times 10 \approx 8\times 10^3 \ll 10^7$. This significantly increases the applicability of the tensor regression model in practice.

Nevertheless, solving the tensor regression problem (5) is still expensive in terms of both computation and memory requirements for typical settings, when $n \gg R\cdot\sum_{d=1}^D p_d$. In particular, the per-iteration complexity is at least linear in $n$ for popular algorithms such as block alternating minimization and block gradient descent [27, 28]. In addition, storing $A$ takes $n\cdot\prod_{d=1}^D p_d$ words of memory. Both of these aspects are undesirable when $n$ is large. This motivates us to consider data dimensionality reduction techniques, also called sketching, for the tensor regression problem. Instead of solving (5), we consider the simple Sketched Ordinary Least Squares (SOLS) problem:
$$\min_{\vartheta\in\bar{\mathcal{S}}_{D,R}} \|\Phi A\vartheta - \Phi b\|_2^2, \qquad (6)$$
where $\Phi \in \mathbb{R}^{m\times n}$ is a random matrix (specified in Section 2). Importantly, $\Phi$ will satisfy two properties, namely (1) $m \ll n$, so that we significantly reduce the size of the problem, and (2) $\Phi$ will be very sparse, so that $\Phi v$ can be computed very quickly for any $v\in\mathbb{R}^n$.

Naïvely applying existing analyses of sketching techniques for least squares regression requires $m = \Omega(\prod_{d=1}^D p_d)$, which is prohibitive (for a survey, see, e.g., [30]). In this paper, our main contribution is to show that it is possible to use a sparse Johnson-Lindenstrauss transformation as our sketching matrix for the CP model of low-rank tensor regression, with constant column sparsity and dimension $m = R\cdot\sum_{d=1}^D p_d$, up to poly-logarithmic (polylog) factors. Note that our dimension matches the number of intrinsic parameters in the CP model. Further, we stress that we do not assume anything about the tensor, such as orthogonal matrix slabs/factors or incoherence; our dimensionality reduction works for arbitrary tensors. We show, with the above sparsity and dimension, that with constant probability, simultaneously for all $\vartheta\in\bar{\mathcal{S}}_{D,R}$, $\|\Phi A\vartheta - \Phi b\|_2^2 = (1\pm\varepsilon)\|A\vartheta - b\|_2^2$. This implies that any solution to (6) has the same cost as in (5) up to a $(1+\varepsilon)$-factor. In particular, by solving (6) we obtain a $(1+\varepsilon)$-approximation to (5).
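The compact form (4) rests on the identity $\mathrm{vec}(\theta_1\circ\cdots\circ\theta_D) = \theta_D\otimes\cdots\otimes\theta_1$ for column-major vectorization. The following is a minimal sketch verifying (4) numerically; the helper names are our own.

```python
# Check: vec(Theta) = (Theta_D ⊙ ... ⊙ Theta_1) 1_R, with Fortran-order vec.
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
D, R, p = 3, 2, [3, 4, 5]
factors = [rng.normal(size=(p[d], R)) for d in range(D)]

def khatri_rao(A, B):
    # columnwise Kronecker product: [a_1 kron b_1, ..., a_R kron b_R]
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], A.shape[1])

# build Theta explicitly via outer products, then vectorize in Fortran order
Theta = sum(reduce(np.multiply.outer, [factors[d][:, r] for d in range(D)])
            for r in range(R))
vec_Theta = Theta.flatten(order='F')

KR = reduce(khatri_rao, [factors[d] for d in reversed(range(D))])  # Theta_D ⊙ ... ⊙ Theta_1
print(np.allclose(vec_Theta, KR @ np.ones(R)))  # True
```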
We note that our dimensionality reduction technique is not tied to any particular algorithm; that is, if one runs any algorithm or heuristic on the reduced (sketched) problem, obtaining an $\alpha$-approximate solution $\vartheta$, then $\vartheta$ is also a $(1+\varepsilon)\alpha$-approximate solution to the original problem. Our result is the first non-trivial dimensionality reduction for this problem, i.e., dimensionality reduction better than $\prod_{d=1}^D p_d$, which is trivial by ignoring the low-rank structure of the tensor, and which achieves a relative-error $(1+\varepsilon)$-approximation. While it may be possible to apply dimensionality reduction methods directly inside alternating minimization methods for solving tensor regression, unlike our method, such methods do not have provable guarantees and it is not clear how errors propagate across iterations. However, since we reduce the original problem to a smaller version of itself with a provable guarantee, one could further apply dimensionality reduction techniques as heuristics for alternating minimization on the smaller problem.

Our proof is based on a careful characterization of Talagrand's functional for the parameter space of low-rank tensors, providing a highly nontrivial analysis for what we consider to be a simple and practical algorithm. One of the main difficulties is dealing with general, non-orthogonal tensors, for which we are able to provide a careful re-parameterization in order to bound the so-called Finsler metric; interestingly, for non-orthogonal tensors it is always possible to partially orthogonalize them, and this partial orthogonalization turns out to suffice for our analysis. We give precise details below. We also provide numerical evaluations on both synthetic and real data to demonstrate the empirical performance of our algorithm.

Notation. For scalars $x, y \in \mathbb{R}$, we write $x = (1\pm\varepsilon)y$ if $x \in [(1-\varepsilon)y, (1+\varepsilon)y]$, $x \lesssim (\gtrsim)\; y$ if $x \le (\ge)\; c_1 y$, $\mathrm{poly}(x) = x^{c_2}$, and $\mathrm{polylog}(x, y) = (\log x)^{c_3}\cdot(\log y)^{c_4}$ for some universal constants $c_1, c_2, c_3, c_4 > 0$. We also use the standard asymptotic notations $O(\cdot)$ and $\Omega(\cdot)$. Given a matrix $A \in \mathbb{R}^{m\times n}$, we denote by $\|A\|_2$ the spectral norm, by $\mathrm{span}(A) \subseteq \mathbb{R}^m$ the subspace spanned by the columns of $A$, by $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ the largest and smallest singular values of $A$, respectively, and by $\kappa_A = \sigma_{\max}(A)/\sigma_{\min}(A)$ the condition number. We use $\mathrm{nnz}(A)$ to denote the number of nonzero entries of $A$, and $P_A$ the projection operator onto $\mathrm{span}(A)$. Given two matrices $A = [a_1,\ldots,a_n] \in \mathbb{R}^{m\times n}$ and $B = [b_1,\ldots,b_q] \in \mathbb{R}^{p\times q}$, $A\otimes B = [a_1\otimes B,\ldots,a_n\otimes B] \in \mathbb{R}^{mp\times nq}$ denotes the Kronecker product, and $A\odot B = [a_1\otimes b_1,\ldots,a_n\otimes b_n] \in \mathbb{R}^{mp\times n}$ denotes the Khatri-Rao product, where $n = q$. We let $\mathbb{B}^n \subseteq \mathbb{R}^n$ be the unit sphere in $\mathbb{R}^n$, i.e., $\mathbb{B}^n = \{x \in \mathbb{R}^n \mid \|x\|_2 = 1\}$, $\mathbb{P}(\cdot)$ the probability of an event, and $\mathbb{E}(\cdot)$ the expectation of a random variable. Without further specification, we write $\prod = \prod_{d=1}^D$ and $\sum = \sum_{d=1}^D$.

We further summarize the dimension parameters for ease of reference. Given a tensor $\Theta$, $D$ is the number of ways, and $p_d$ is the dimension of the d-th way for $d\in[D]$. $R$ is the rank of $\Theta$ for all ways under the CP decomposition, and $R_d$ is the rank of the d-th way under the Tucker decomposition for $d\in[D]$. $n$ is the number of observations for tensor regression. $m$ is the sketching dimension and $s$ is the sparsity of each column in a sparse Johnson-Lindenstrauss transformation.

2 Background

We start with a few important definitions.
Definition 1 (Oblivious Subspace Embedding). Suppose $\Pi$ is a distribution on $m\times n$ matrices, where $m$ is a function of the parameters $n$, $d$, and $\varepsilon$. Further, suppose that with probability at least $1-\delta$, for any fixed $n\times d$ matrix $A$, a matrix $\Phi$ drawn from $\Pi$ has the property that $\|\Phi Ax\|_2^2 = (1\pm\varepsilon)\|Ax\|_2^2$ simultaneously for all $x \in \mathcal{X} \subseteq \mathbb{R}^d$. Then $\Pi$ is an $(\varepsilon, \delta)$ oblivious subspace embedding (OSE) of $\mathcal{X}$.

An OSE preserves the norm of vectors in a certain set $\mathcal{X}$ after linear transformation by $A$. This is widely studied as a key property for sketching-based analyses (see [30] and the references therein). We want to show an analogous property when $\mathcal{X}$ is parameterized by low-rank tensors.

Definition 2 (Leverage Scores). Given $A \in \mathbb{R}^{n\times d}$, let $Z \in \mathbb{R}^{n\times d}$ have orthonormal columns that span the column space of $A$. Then $\ell_i^2(A) = \|e_i^\top Z\|_2^2$ is the i-th leverage score of $A$.

Leverage scores play an important role in randomized matrix algorithms [7, 16, 17]. Calculating the leverage scores naïvely by orthogonalizing $A$ requires $O(nd^2)$ time. It is shown in [3] that the leverage scores of $A$ can be approximated individually up to a constant multiplicative factor in $O(\mathrm{nnz}(A)\log n + \mathrm{poly}(d))$ time using sparse subspace embeddings. In our analysis, there will be a very mild dependence on the maximum leverage score of $A$ and the sparsity of the sketching matrix $\Phi$. Note that we do not need to calculate the leverage scores.

Definition 3 (Talagrand's Functional). Given a (semi-)metric $\rho$ on $\mathbb{R}^n$ and a bounded set $\mathcal{S} \subseteq \mathbb{R}^n$, Talagrand's $\gamma_2$-functional is
$$\gamma_2(\mathcal{S}, \rho) = \inf_{\{\mathcal{S}_r\}_{r=0}^\infty}\; \sup_{x\in\mathcal{S}}\; \sum_{r=0}^\infty 2^{r/2}\cdot\rho(x, \mathcal{S}_r), \qquad (7)$$
where $\rho(x, \mathcal{S}_r)$ is the distance from $x$ to $\mathcal{S}_r$ and the infimum is taken over all collections $\{\mathcal{S}_r\}_{r=0}^\infty$ such that $\mathcal{S}_0 \subseteq \mathcal{S}_1 \subseteq \ldots \subseteq \mathcal{S}$ with $|\mathcal{S}_0| = 1$ and $|\mathcal{S}_r| \le 2^{2^r}$.

A closely related notion to the $\gamma_2$-functional is the Gaussian mean width: $G(\mathcal{S}) = \mathbb{E}_g \sup_{x\in\mathcal{S}} \langle g, x\rangle$, where $g \sim N_n(0, I_n)$. For any bounded $\mathcal{S} \subseteq \mathbb{R}^n$, $G(\mathcal{S})$ and $\gamma_2(\mathcal{S}, \|\cdot\|_2)$ differ multiplicatively by at most a universal constant in Euclidean space [25]. Finding a tight upper bound on the $\gamma_2$-functional for the parameter space of low-rank tensors is key to our analysis.

Definition 4 (Finsler Metric). Let $E, E' \subseteq \mathbb{R}^n$ be p-dimensional subspaces. The Finsler metric of $E$ and $E'$ is $\rho_{\mathrm{Fin}}(E, E') = \|P_E - P_{E'}\|_2$, where $P_E$ is the projection onto the subspace $E$.

The Finsler metric is the semi-metric used in the $\gamma_2$-functional in our analysis. Note that $\rho_{\mathrm{Fin}}(E, E') \le 1$ always holds for any $E$ and $E'$ [23].

Definition 5 (Sparse Johnson-Lindenstrauss Transforms). Let $\sigma_{ij}$ be independent Rademacher random variables, i.e., $\mathbb{P}(\sigma_{ij} = 1) = \mathbb{P}(\sigma_{ij} = -1) = 1/2$, and let $\delta_{ij}: \Omega \to \{0, 1\}$ be random variables, independent of the $\sigma_{ij}$, with the following properties: (i) the $\delta_{ij}$ are negatively correlated for fixed $j$, i.e., for all $1 \le i_1 < \ldots < i_k \le m$, we have $\mathbb{E}\big(\prod_{t=1}^k \delta_{i_t,j}\big) \le \prod_{t=1}^k \mathbb{E}(\delta_{i_t,j}) = \big(\frac{s}{m}\big)^k$; (ii) there are $s = \sum_{i=1}^m \delta_{ij}$ nonzero $\delta_{ij}$ for a fixed $j$; and (iii) the vectors $(\delta_{ij})_{i=1}^m$ are independent across $j \in [n]$. Then $\Phi \in \mathbb{R}^{m\times n}$ is a sparse Johnson-Lindenstrauss transform (SJLT) matrix if $\Phi_{ij} = \frac{1}{\sqrt{s}}\,\delta_{ij}\,\sigma_{ij}$.

The SJLT has several benefits [4, 12, 30]. First, the computation of $\Phi x$ takes only $O(\mathrm{nnz}(x))$ time when $s$ is a constant. Second, storing $\Phi$ takes only $sn$ memory instead of $mn$, which is significant when $s \ll m$. This can often be further reduced by drawing the entries of $\Phi$ from a limited-independence family of random variables. We will use an SJLT matrix as the sketching matrix in our analysis, and our goal will be to show sufficient conditions on the sketching dimension $m$ and per-column sparsity $s$ such that the analogue of the OSE property holds for low-rank tensor regression.
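One standard way to realize Definition 5 is to pick, for each column, $s$ distinct rows uniformly at random; the resulting indicators are negatively correlated, so (i)-(iii) hold. Below is a minimal sketch of this construction (the construction choice and names are ours; scipy is used only for sparse storage):

```python
import numpy as np
from scipy.sparse import csc_matrix

def sjlt(m, n, s, rng):
    # s distinct rows per column, each entry an independent +-1/sqrt(s)
    rows = np.concatenate([rng.choice(m, size=s, replace=False) for _ in range(n)])
    cols = np.repeat(np.arange(n), s)
    vals = rng.choice([-1.0, 1.0], size=n * s) / np.sqrt(s)
    return csc_matrix((vals, (rows, cols)), shape=(m, n))

rng = np.random.default_rng(0)
Phi = sjlt(200, 5000, 8, rng)
x = rng.normal(size=5000)
x /= np.linalg.norm(x)
print(np.linalg.norm(Phi @ x))  # concentrates around 1, as the OSE property requires
```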
Specifically, we provide sufficient conditions for the SJLT matrix $\Phi \in \mathbb{R}^{m\times n}$ to preserve the cost of all solutions for tensor regression, i.e., bounds on $m$ and $s$ for which
$$\mathbb{E}\,\sup_{x\in\mathcal{T}} \Big| \|\Phi x\|_2^2 - 1 \Big| < \frac{\varepsilon}{10}, \qquad (8)$$
where $\varepsilon$ is a given precision and $\mathcal{T}$ is a normalized space parameterized as the union of certain subspaces of $A$, which will be further discussed in the following sections. Note that by linearity, it is sufficient to consider $x$ with $\|x\|_2 = 1$ in the above, which explains the form of (8). Moreover, by Markov's inequality, (8) implies that simultaneously for all $\vartheta = \mathrm{vec}(\Theta) \in \bar{\mathcal{S}}_{D,R}$, where $\Theta$ admits a low-rank tensor decomposition, with probability at least 9/10, we have
$$\|\Phi A\vartheta - \Phi b\|_2^2 = (1\pm\varepsilon)\|A\vartheta - b\|_2^2, \qquad (9)$$
which allows us to minimize the much smaller sketched problem to obtain parameters $\vartheta$ which, when plugged into the original objective function, provide a multiplicative $(1+\varepsilon)$-approximation.

3 Dimensionality Reduction for CP Decomposition

We start with the following notation. Given a tensor $\Theta = \sum_{r=1}^R \theta_1^{(r)}\circ\cdots\circ\theta_D^{(r)}$, where $\theta_d^{(r)}\in\mathbb{R}^{p_d}$ for all $d\in[D]$ and $r\in[R]$, we fix all but $\theta_1^{(r)}$ for $r\in[R]$, and denote
$$A^{\bar{\vartheta}_{\setminus 1}} = \big[A^{\bar{\vartheta}^{(1)}_{\setminus 1}}, \ldots, A^{\bar{\vartheta}^{(R)}_{\setminus 1}}\big] \in \mathbb{R}^{n\times Rp_1},$$
where $A^{\bar{\vartheta}^{(i)}_{\setminus 1}} = \sum_{j_D=1}^{p_D}\cdots\sum_{j_2=1}^{p_2} A^{(j_D,\ldots,j_2)}\,\theta^{(i)}_{D,j_D}\cdots\theta^{(i)}_{2,j_2}$, $\theta^{(i)}_{d,j_d}$ is the $j_d$-th entry of $\theta^{(i)}_d$, and $A^{(j_D,\ldots,j_2)}\in\mathbb{R}^{n\times p_1}$ is a column submatrix of $A$ indexed by $j_D\in[p_D],\ldots,j_2\in[p_2]$, i.e., $A = \big[A^{(1,\ldots,1)},\ldots,A^{(p_D,\ldots,p_2)}\big] \in \mathbb{R}^{n\times\prod_d p_d}$. The above parameterization allows us to view tensor regression as preserving the norms of vectors in an infinite union of subspaces, described in more detail in the full version of our paper [10]. Then we rewrite the observation model (4) as
$$b = A\sum_{r=1}^R \theta_D^{(r)}\otimes\cdots\otimes\theta_1^{(r)} + z = \sum_{r=1}^R A^{\bar{\vartheta}^{(r)}_{\setminus 1}}\,\theta_1^{(r)} + z = A^{\bar{\vartheta}_{\setminus 1}}\big[\theta_1^{(1)\top}\ \ldots\ \theta_1^{(R)\top}\big]^\top + z.$$

3.1 Main Result

The parameter space for the tensor regression problem (1) is a subspace of $\mathbb{R}^{\prod_d p_d}$, i.e., $\bar{\mathcal{S}}_{D,R} \subseteq \mathbb{R}^{\prod_d p_d}$. Therefore, a naïve application of sketching requires $m \gtrsim \prod_d p_d/\varepsilon^2$ in order for (9) to hold [18]. The following theorem provides sufficient conditions to guarantee a $(1+\varepsilon)$-approximation of the objective for low-rank tensor regression under the CP decomposition model.

Theorem 1. Suppose $R \le \max_d p_d/2$ and $\max_{i\in[n]} \ell_i^2(A) \le 1/\big(R\sum_{d=2}^D p_d\big)^2$. Let
$$\mathcal{T} = \bigcup_{\theta_d^{(r)},\,\phi_d^{(r)}\in\mathbb{B}^{p_d},\; r\in[R],\, d\in[D]} \left\{ \frac{A\vartheta - A\varphi}{\|A\vartheta - A\varphi\|_2} \;:\; \vartheta = \sum_{r=1}^R \theta_D^{(r)}\otimes\cdots\otimes\theta_1^{(r)},\;\; \varphi = \sum_{r=1}^R \phi_D^{(r)}\otimes\cdots\otimes\phi_1^{(r)} \right\}$$
and let $\Phi\in\mathbb{R}^{m\times n}$ be an SJLT matrix with column sparsity $s$. Then with probability at least 9/10, (9) holds if $m$ and $s$ satisfy, respectively,
$$m \gtrsim R\Big(\sum_d p_d\Big)\log\Big(DR\,\kappa_A\sum_d p_d\Big)\,\mathrm{polylog}(m,n)/\varepsilon^2 \quad\text{and}\quad s \gtrsim \log^2\Big(\sum_d p_d\Big)\,\mathrm{polylog}(m,n)/\varepsilon^2.$$

From Theorem 1, we have that for an SJLT matrix $\Phi\in\mathbb{R}^{m\times n}$ with $m = \Theta(R\sum_d p_d)$ and $s = \Theta(1)$, up to logarithmic factors, we can guarantee a $(1+\varepsilon)$-approximation of the objective. The sketching complexity $m$ is nearly optimal compared with the number of free parameters of the CP decomposition model, i.e., $R(\sum_d p_d - D + 1)$, up to logarithmic factors. Here we do not make any orthogonality assumption on the tensor factors $\theta_d^{(r)}$, and we show in our analysis that the general tensor space $\mathcal{T}$ can be parameterized in terms of an orthogonal one if $R \le \max_d p_d/2$ holds. The condition $R \le \max_d p_d/2$ is not restrictive in our setting, as we are interested in low-rank tensors with $R \ll p_d$.
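The reduction behind Theorem 1 is sketch-and-solve: form $\Phi A$ and $\Phi b$ once, then run any low-rank tensor solver on the m-row problem (6). Here is a minimal sketch (ours) for $D = 2$, $R = 1$, using alternating least squares on the sketched objective; the solver choice, sizes, and initialization are illustrative, not the exact algorithm used in the experiments below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2, m, s = 2000, 10, 10, 300, 4
A = rng.normal(size=(n, p1 * p2))
theta = np.outer(rng.normal(size=p1), rng.normal(size=p2))  # rank-1 ground truth
b = A @ theta.flatten(order='F')

# SJLT with s nonzeros per column (dense storage for simplicity)
Phi = np.zeros((m, n))
for j in range(n):
    Phi[rng.choice(m, size=s, replace=False), j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
SA, Sb = Phi @ A, Phi @ b   # the sketched problem no longer depends on n

# ALS on min_{u,v} ||SA vec(u v^T) - Sb||_2^2; Fortran vec: entry l = i + j*p1 is u_i v_j
T = SA.reshape(m, p2, p1)   # T[k, j, i] multiplies u_i v_j
u, v = rng.normal(size=p1), rng.normal(size=p2)
for _ in range(30):
    u = np.linalg.lstsq(np.einsum('kji,j->ki', T, v), Sb, rcond=None)[0]
    v = np.linalg.lstsq(np.einsum('kji,i->kj', T, u), Sb, rcond=None)[0]

res = A @ np.outer(u, v).flatten(order='F') - b
print(np.linalg.norm(res) / np.linalg.norm(b))  # small: the sketched solution fits the original
```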
Note that we achieve a $(1+\varepsilon)$-approximation in objective function value for arbitrary tensors; if one wants to achieve closeness of the underlying parameters, one needs to impose further assumptions on the model, such as the form of the noise distribution or structural properties of $A$ [20, 36]. Our maximum leverage score assumption is very mild and much weaker than the standard incoherence assumptions used, for example, in matrix completion, which allow for uniform-sampling-based approaches. For example, our assumption states that the maximum leverage score is at most $1/\big(R\sum_{d=2}^D p_d\big)^2$. In the typical overconstrained case, $n \gg \prod_d p_d$, and in order for uniform sampling to provide a subspace embedding, one needs the maximum leverage score to be at most $R\sum_d p_d/n$ (see, e.g., Section 2.4 of [30]), which is much less than $1/\big(R\sum_{d=2}^D p_d\big)^2$ when $n$ is large, and so uniform sampling fails in our setting. Moreover, it is also possible to apply a standard idea to flatten the leverage scores of a deterministic design $A$ based on the Subsampled Randomized Hadamard Transformation (SRHT) using the Walsh-Hadamard matrix [9, 26]. Note that applying the SRHT to an $n\times d$ matrix $A$ only takes $O(nd\log n)$ time, which, if $A$ is dense, is the same amount of time one needs just to read $A$ (up to a $\log n$ factor). Further details are deferred to the full version of our paper [10].

3.2 Proof Sketch of Our Analysis for a Basic Case

We provide a sketch of our analysis for the case when $R = 1$ and $D = 2$, i.e., $\Theta$ is a rank-1 matrix. The analysis for the more general cases is more involved, but follows similar intuition. Details of the analyses are deferred to the full version of our paper, where we start with a proof for the most basic cases and gradually build up the proof for the most general case.

Let $A^v = \sum_{i=1}^{p_2} A^{(i)} v_i$, where $A = [A^{(1)},\ldots,A^{(p_2)}] \in \mathbb{R}^{n\times p_2 p_1}$ with $A^{(i)}\in\mathbb{R}^{n\times p_1}$ for all $i\in[p_2]$, $\mathcal{V} = \bigcup_{\mathcal{W}} \{\mathrm{span}[A^{v_1}, A^{v_2}]\}$, and $\mathcal{W} = \{v_1, v_2 \in \mathbb{B}^{p_2} \text{ with } \langle v_1, v_2\rangle = 0\}$. We start with an illustration that the set $\mathcal{T}$ can be reparameterized as the following set with respect to tensors with orthogonal factors:
$$\mathcal{T} = \bigcup_{E\in\mathcal{V}} \{x \in E \mid \|x\|_2 = 1\}.$$
Suppose $\langle v_1, v_2\rangle \ne 0$. Let $v_2 = \beta v_1 + \gamma z$ for some $\beta, \gamma \in \mathbb{R}$ and a unit vector $z\in\mathbb{R}^{p_2}$ with $\langle v_1, z\rangle = 0$. Then we have
$$\frac{A^{v_1}u_1 - A^{v_2}u_2}{\|A^{v_1}u_1 - A^{v_2}u_2\|_2} = \frac{A^{v_1}(u_1 - \beta u_2) - A^{z}(\gamma u_2)}{\|A^{v_1}(u_1 - \beta u_2) - A^{z}(\gamma u_2)\|_2},$$
which is equivalent to the case $\langle v_1, v_2\rangle = 0$ by reparameterizing $z$ as $v_2$. Based on known dimensionality reduction results [2, 6] (see further details in the full version [10]), the main quantities needed for bounding the properties of $\Phi$ are $p_{\mathcal{V}}$, $\gamma_2^2(\mathcal{V}, \rho_{\mathrm{Fin}})$, $N(\mathcal{V}, \rho_{\mathrm{Fin}}, \varepsilon_0)$, and $\int_0^{\varepsilon_0} \big(\log N(\mathcal{V}, \rho_{\mathrm{Fin}}, t)\big)^{1/2}\,dt$, where $N(\mathcal{V}, \rho_{\mathrm{Fin}}, t)$ is the covering number of $\mathcal{V}$ under the Finsler metric using balls of radius $t$, and $p_{\mathcal{V}} = \sup_{v_1,v_2\in\mathbb{B}^{p_2},\,\langle v_1,v_2\rangle=0} \dim\{\mathrm{span}(A^{v_1,v_2})\} \le 2p_1$. Bounding these quantities for the space of low-rank tensors is new and is our main technical contribution. They are addressed separately as follows.

Part 1: Bound $p_{\mathcal{V}}$. Let $A^{v_1,v_2} = [A^{v_1}, A^{v_2}]$. It is straightforward that $p_{\mathcal{V}} \le 2p_1$.

Part 2: Bound $\gamma_2^2(\mathcal{V}, \rho_{\mathrm{Fin}})$. By the definition of the $\gamma_2$-functional in (7) for the Finsler metric, we have
$$\gamma_2(\mathcal{V}, \rho_{\mathrm{Fin}}) = \inf_{\{\mathcal{V}^k\}_{k=0}^\infty}\; \sup_{v_1,v_2\in\mathcal{V}}\; \sum_{k=0}^\infty 2^{k/2}\cdot\rho_{\mathrm{Fin}}(A^{v_1,v_2}, \mathcal{V}^k),$$
where $\mathcal{V}^k$ is an $\varepsilon_k$-net of $\mathcal{V}$, i.e., for any $A^{v_1,v_2}$ there exist $\bar{v}_1, \bar{v}_2 \in \mathbb{B}^{p_2}$ with $\langle\bar{v}_1, \bar{v}_2\rangle = 0$, $\|v_1 - \bar{v}_1\|_2 \le \eta_k$, and $\|v_2 - \bar{v}_2\|_2 \le \eta_k$, such that $A^{\bar{v}_1,\bar{v}_2} \in \mathcal{V}^k$ and $\rho_{\mathrm{Fin}}(A^{v_1,v_2}, A^{\bar{v}_1,\bar{v}_2}) \le \varepsilon_k$. From Lemma 6, we have $\rho_{\mathrm{Fin}}(A^{v_1,v_2}, \mathcal{V}^k) \le$
$2\kappa_A\eta_k$ whenever $\|v_1 - \bar{v}_1\|_2 \le \eta_k$ and $\|v_2 - \bar{v}_2\|_2 \le \eta_k$. On the other hand, we have that $\rho_{\mathrm{Fin}}(A^{v_1,v_2}, \mathcal{V}^k) \le 1$ always holds. Therefore, we have $\rho_{\mathrm{Fin}}(A^{v_1,v_2}, \mathcal{V}^k) \le \min\{2\kappa_A\eta_k, 1\}$. Let $k^0$ be the smallest integer such that $2\kappa_A\eta_{k^0} \le 1$. Then
$$\gamma_2(\mathcal{V}, \rho_{\mathrm{Fin}}) \le \sum_{k=0}^\infty 2^{k/2}\,\rho_{\mathrm{Fin}}(A^{v_1,v_2}, \mathcal{V}^k) \le \sum_{k=0}^{k^0} 2^{k/2} + \sum_{k=k^0+1}^\infty 2^{k/2}\,\rho_{\mathrm{Fin}}(A^{v_1,v_2}, \mathcal{V}^k). \qquad (10)$$
Starting from $\eta_0 = 1$ and $|\mathcal{V}^0| = 1$, for $k \ge 1$ we have $\eta_k < 1$ and $|\mathcal{V}^k| \le (3/\eta_k)^{p_2}$ [29]. Also, from the $\gamma_2$-functional we require $|\mathcal{V}^k| \le 2^{2^k} \approx (3/\eta_k)^{p_2}$, which implies
$$\sum_{k=0}^{k^0} 2^{k/2} \lesssim 2^{k^0/2} \lesssim \sqrt{p_2\log\frac{1}{\eta_{k^0}}}. \qquad (11)$$
For $k > k^0$, we choose $\eta_{k+1} = \eta_k^2$ such that $(3/\eta_{k+1})^{p_2} \approx 2^{2^{k+1}}$; then $|\mathcal{V}^{k+1}| \le 2^{2^{k+1}}$. By choosing $k^0$ to be the smallest integer such that $(3/\eta_{k^0+1})^{p_2} \le 2^{2^{k^0+1}}$ holds, we have
$$\sum_{k=k^0+1}^\infty 2^{k/2}\,\rho_{\mathrm{Fin}}(A^{v_1,v_2}, \mathcal{V}^k) = 2^{k^0/2}\sum_{t=1}^\infty 2^{t/2}\,\eta_{k^0}^{2^t} \lesssim 2^{k^0/2} \lesssim \sqrt{p_2\log\frac{1}{\eta_{k^0}}}. \qquad (12)$$
maxd pd /2 can be more restrictive than R ? maxd pd /2 in the CP model when nnz(G) > R. This is due to the fact that the Tucker model is more ?expressive? than the CP model for a tensor of the same dimensions. For example, if R1 = ? ? ? = RD = R, then the CP model (3) can be viewed as special case of the Tucker model (13) by setting all off-diagonal entries of the core tensor G to be 0. Moreover, the conditions and results in Theorem 2 are essentially of the same order as those in Theorem 1 when nnz(G) = R, which indicates the tightness of our analysis. 5 Experiments We study the performance of sketching for tensor regression through numerical experiments over both synthetic and real data sets. For solving the OLS problem for tensor regression (2), we use a cyclic block-coordinate minimization algorithm based on a tensor toolbox [35]. Specifically, in a cyclic manner for all d 2 [D], we fix all but one ?d of [[?1 , . . . , ?D ]] 2 SD,R and minimize the resulting quadratic loss function (2) with respect to ?i , until the decrease of the objective is smaller than a predefined threshold ? . For SOLS, we use the same algorithm after multiplying A and b with an SJLT matrix . All results are run on a supercomputer due to the large scale of the data. Note that our result is not tied to any specific algorithm and we can use any algorithm that solves OLS for low-rank tensors for solving SOLS for low-rank tensors. For synthetic data, we generate the low-rank tensor ? as follows. For each d 2 [D], we generate R (1) (R) random columns with N (0, 1) entries to form non-orthogonal tensor factors ?d = [?d , . . . , ?d ] of [[?1 , . . . , ?D ]] 2 SD,R independently. We also generate R real scalars ?1 , . . . , ?R uniformly PR (r) (r) and independently from [1, 10]. Then ? is formed by ? = r=1 ?r ?1 ? ? ? ?D . The n tensor n designs {Ai }i=1 are generated independently with i.i.d. N (0, 1) entries for 10% of the entries chosen uniformly at random, and the remaining entries are set to zero. We also generate the noise z to have i.i.d. N (0, z2 ) entries, and the generation of the SJLT matrix follows Definition 5. For both OLS and SOLS, we use random initializations for ?, i.e., ?d has i.i.d. N (0, 1) entries for all d 2 [D]. We compare OLS and SOLS for low-rank tensor regression under both the noiseless and noisy scenarios. For the noiseless case, i.e., z = 0, we choose R = 3, p1 = p2 = p3 = 100, m = 5 ? R(p1 + p2 + p3 ) = 4500, and s = 200. Different values of n = 104 , 105 , and 106 are chosen to compare both statistical and computational performances of OLS and SOLS. For the noisy case, the settings of all parameters are identical to those in the noiseless case, except that z = 1. We provide a plot of the scaled objective versus the number of iterations for some random trials in Figure 1. The scaled objective is set to be kA#tSOLS bk22 /n for SOLS and kA#tOLS bk22 /n for OLS, where 105 SOLS n1 SOLS n2 100 OLS n1 OLS n2 OLS n3 106 10-5 10 SOLS n3 104 Objective Objective SOLS n3 SOLS n1 SOLS n2 OLS n1 OLS n2 OLS n3 102 100 -10 5 10 15 20 25 5 Iteration (a) z 10 15 20 Iteration (b) =0 z =1 Figure 1: Comparison of SOLS and OLS on synthetic data. The vertical axis corresponds to the scaled objectives kA#tSOLS bk22 /n for SOLS and kA#tOLS bk22 /n for OLS, where #t is the update in the t-th iteration. The horizontal axis corresponds to the number of iterations (passes of block-coordinate minimization for all blocks). 
For both the noiseless case z = 0 and noisy case z = 1, we set n1 = 104 , n2 = 105 , and n3 = 106 respectively. 8 #tSOLS and #tOLS are the updates in the t-th iterations of SOLS and OLS respectively. Note the we are using k A#SOLS bk22 /n as the objective function for solving the SOLS problem, but looking at the original objective kA#SOLS bk22 /n for the solution of SOLS is ultimately what we are interested in. However, we have that the gap between k A#SOLS bk22 /n and kA#SOLS bk22 /n is very small in our results (< 1%). The number of iterations is the number of passes of block-coordinate minimization for all blocks. We can see that OLS and SOLS require approximately the same number of iterations for comparable decrease in objective function value. However, since the SOLS instance has a much smaller size, its per iteration computational cost is much lower than that of OLS. We further provide numerical results on the running time (CPU execution time) and the optimal scaled objectives in Table 1. Using the same stopping criterion, we see that SOLS and OLS achieve comparable objectives (within < 5% differences), matching our theory. In terms of the running time, SOLS is significantly faster than OLS, especially when n is large compared to the sketching dimen sion m. For example, when n = 106 , SOLS is more than 200 times faster than OLS while achieving a comparable objective function value with OLS. This matches with our theoretical results on the computational cost of OLS versus SOLS. Note that here we suppose that the rank is known for our simulation, which can be restrictive in practice. We observe that if we choose a moderately larger rank than the true rank of the underlying model, then the results are similar to what we discussed above. Smaller values of the rank result in a much deteriorated statistical performance for both OLS and SOLS. We also examine sketching of low-rank tensor regression on a real dataset of MRI images [22]. The dataset consists of 56 frames of a human brain, each of which is of dimension 128 ? 128 pixels, i.e., p1 = p2 = 128 and p3 = 56. The generation of design tensors {Ai }ni=1 and linear measurements b follows the same settings as for the synthetic data, with z = 0. We choose three values of R = 3, 5, 10, and set m = 5 ? R(p1 + p2 + p3 ). The sample size is set to n = 104 for all settings of R. Analogous to the synthetic data, we provide numerical results for SOLS and OLS on the running time (CPU execution time) and the optimal scaled objectives. The results are provided in Table 2. Again, we have that SOLS is much faster than OLS and they achieve comparable optimal objectives, under all settings of ranks. Table 1: Comparison of SOLS and OLS on CPU execution time (in seconds) and the optimal scaled objective over different choices of sample sizes and noise levels on synthetic data. The results are averaged over 50 random trials, with both the mean values and standard deviations (in parentheses) provided. Note that we terminate the program after the running time exceeds 3 ? 104 seconds. Variance of Noise Sample Size OLS Time SOLS z n = 10 4 n = 10 175.37 (65.784) z 5 n = 10 3683.9 (1496.7) 6 > 3 ? 10 (NA) n = 105 n = 106 168.62 (24.570) 2707.3 (897.14) > 3 ? 
Table 2: Comparison of SOLS and OLS on CPU execution time (in seconds) and the optimal scaled objective over different choices of the rank on the MRI data. The results are averaged over 10 random trials, with both the mean values and standard deviations (in parentheses) provided.

          | OLS                               | SOLS
  Rank    | Time            | Objective       | Time            | Objective
  R = 3   | 2824.4 (768.08) | 16.003 (0.1378) | 196.31 (68.180) | 17.047 (0.1561)
  R = 5   | 8137.2 (1616.3) | 11.164 (0.1152) | 364.09 (145.79) | 11.992 (0.1538)
  R = 10  | 26851 (8320.1)  | 6.8679 (0.0471) | 761.73 (356.76) | 7.3968 (0.0975)

References
[1] Mohammad Taha Bahadori, Qi Rose Yu, and Yan Liu. Fast multivariate spatio-temporal analysis via low-rank tensor learning. In Advances in Neural Information Processing Systems, pages 3491–3499, 2014.
[2] Jean Bourgain, Sjoerd Dirksen, and Jelani Nelson. Toward a unified theory of sparse dimensionality reduction in Euclidean space. Geometric and Functional Analysis, 25(4):1009–1088, 2015.
[3] Kenneth L Clarkson and David P Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing, pages 81–90. ACM, 2013.
[4] Anirban Dasgupta, Ravi Kumar, and Tamás Sarlós. A sparse Johnson–Lindenstrauss transform. In Proceedings of the 42nd Annual ACM Symposium on Theory of Computing, pages 341–350. ACM, 2010.
[5] Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4):1253–1278, 2000.
[6] Sjoerd Dirksen. Dimensionality reduction with subgaussian matrices: A unified theory. Foundations of Computational Mathematics, pages 1–30, 2015.
[7] Petros Drineas, Malik Magdon-Ismail, Michael W Mahoney, and David P Woodruff. Fast approximation of matrix coherence and statistical leverage. Journal of Machine Learning Research, 13(Dec):3475–3506, 2012.
[8] Weiwei Guo, Irene Kotsia, and Ioannis Patras. Tensor learning for regression. IEEE Transactions on Image Processing, 21(2):816–827, 2012.
[9] Nathan Halko, Per-Gunnar Martinsson, and Joel A Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[10] Jarvis Haupt, Xingguo Li, and David P Woodruff. Near optimal sketching of low-rank tensor regression. arXiv preprint arXiv:1709.07093, 2017.
[11] Peter D Hoff. Multilinear tensor regression for longitudinal relational data. The Annals of Applied Statistics, 9(3):1169, 2015.
[12] Daniel M Kane and Jelani Nelson. Sparser Johnson-Lindenstrauss transforms. Journal of the ACM, 61(1):4:1–4:23, 2014.
[13] Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[14] Bingxiang Li, Wen Li, and Lubin Cui. New bounds for perturbation of the orthogonal projection. Calcolo, 50(1):69–78, 2013.
[15] Xiaoshan Li, Hua Zhou, and Lexin Li. Tucker tensor regression and neuroimaging analysis. arXiv preprint arXiv:1304.5637, 2013.
[16] Michael W Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123–224, 2011.
[17] Michael W Mahoney and Petros Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697–702, 2009.
[18] Jelani Nelson and Huy L Nguyen. Lower bounds for oblivious subspace embeddings. In International Colloquium on Automata, Languages, and Programming, pages 883–894. Springer, 2014.
[19] Sung Won Park and Marios Savvides. Individual kernel tensor-subspaces for robust face recognition: A computationally efficient tensor framework without requiring mode factorization. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 37(5):1156–1166, 2007.
[20] Garvesh Raskutti and Ming Yuan. Convex regularization for high-dimensional tensor regression. arXiv preprint arXiv:1512.01215, 2015.
[21] Bernardino Romera-Paredes, Hane Aung, Nadia Bianchi-Berthouze, and Massimiliano Pontil. Multilinear multitask learning. In Proceedings of the 30th International Conference on Machine Learning, pages 1444–1452, 2013.
[22] Antoine Rosset, Luca Spadola, and Osman Ratib. OsiriX: an open-source software for navigating in multidimensional DICOM images. Journal of Digital Imaging, 17(3):205–216, 2004.
[23] Zhongmin Shen. Lectures on Finsler Geometry, volume 2001. World Scientific, 2001.
[24] Nicholas Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang, Evangelos Papalexakis, and Christos Faloutsos. Tensor decomposition for signal processing and machine learning. IEEE Transactions on Signal Processing, 2017.
[25] Michel Talagrand. The Generic Chaining: Upper and Lower Bounds of Stochastic Processes. Springer Science & Business Media, 2006.
[26] Joel A Tropp. Improved analysis of the subsampled randomized Hadamard transform. Advances in Adaptive Data Analysis, 3(01n02):115–126, 2011.
[27] Paul Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109(3):475–494, 2001.
[28] Paul Tseng and Sangwoon Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1-2):387–423, 2009.
[29] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
[30] David P Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1–2):1–157, 2014.
[31] Yongxin Yang and Timothy Hospedales. Deep multi-task representation learning: A tensor factorisation approach. arXiv preprint arXiv:1605.06391, 2016.
[32] Rose Yu, Dehua Cheng, and Yan Liu. Accelerated online low-rank tensor learning for multivariate spatio-temporal streams. In International Conference on Machine Learning, 2015.
[33] Rose Yu and Yan Liu. Learning from multiway data: Simple and efficient tensor regression. In International Conference on Machine Learning, pages 373–381, 2016.
[34] Qibin Zhao, Cesar F Caiafa, Danilo P Mandic, Zenas C Chao, Yasuo Nagasaka, Naotaka Fujii, Liqing Zhang, and Andrzej Cichocki. Higher order partial least squares (HOPLS): a generalized multilinear regression method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7):1660–1673, 2013.
[35] Hua Zhou. Matlab TensorReg toolbox. http://hua-zhou.github.io/softwares/tensorreg/, 2013.
[36] Hua Zhou, Lexin Li, and Hongtu Zhu. Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502):540–552, 2013.
Tractability in Structured Probability Spaces

Arthur Choi, University of California, Los Angeles, CA 90095, aychoi@cs.ucla.edu
Yujia Shen, University of California, Los Angeles, CA 90095, yujias@cs.ucla.edu
Adnan Darwiche, University of California, Los Angeles, CA 90095, darwiche@cs.ucla.edu

Abstract

Recently, the Probabilistic Sentential Decision Diagram (PSDD) has been proposed as a framework for systematically inducing and learning distributions over structured objects, including combinatorial objects such as permutations and rankings, paths and matchings on a graph, etc. In this paper, we study the scalability of such models in the context of representing and learning distributions over routes on a map. In particular, we introduce the notion of a hierarchical route distribution and show how it can be leveraged to construct tractable PSDDs over route distributions, allowing them to scale to larger maps. We illustrate the utility of our model empirically, in a route prediction task, showing how accuracy can be increased significantly compared to Markov models.

1 Introduction

A structured probability space is one where members of the space correspond to structured or combinatorial objects, such as permutations, partial rankings, or routes on a map [Choi et al., 2015, 2016]. Structured spaces have come into focus recently, given their large number of applications and the lack of systematic methods for inducing and learning distributions over such spaces. Some structured objects are supported by specialized distributions, e.g., the Mallows distribution over permutations [Mallows, 1957, Lu and Boutilier, 2011]. For other types of objects, one is basically on one's own as far as developing representations and corresponding algorithms for inference and learning. Standard techniques, such as probabilistic graphical models, are not suitable for these kinds of distributions since the constraints on such objects often lead to almost fully connected graphical models, which are not amenable to inference or learning.

A framework known as PSDD was proposed recently for systematically inducing and learning distributions over structured objects [Kisa et al., 2014a,b, Shen et al., 2016]. According to this framework, one first describes members of the space using propositional logic, then compiles these descriptions into Boolean circuits with specific properties (a circuit encodes a structured space by evaluating to 1 precisely on inputs corresponding to members of the space). By parameterizing these Boolean circuits, one can induce a tractable distribution over objects in the structured space. The only domain-specific investment in this framework corresponds to the encoding of objects using propositional logic. Moreover, the only computational bottleneck in this framework is the compilation of propositional logic descriptions to circuits with specific properties, which are known as SDD circuits (for Sentential Decision Diagrams) [Darwiche, 2011, Xue et al., 2012]. Parameterized SDD circuits are known as PSDDs (for Probabilistic SDDs) and have attractive properties, including tractable inference and closed-form parameter estimation under complete data [Kisa et al., 2014a].

Most of the focus on PSDDs has been dedicated to showing how they can systematically induce and learn distributions over various structured objects. Case studies have been reported relating to total and partial rankings [Choi et al., 2015], game traces, and routes on a map [Choi et al., 2016]. The scalability of these studies varied.
For partial rankings, experiments have been reported for hundreds of items. However, for total rankings and routes, the experimental studies were more of a proof of concept, showing for example how the learned PSDD distributions can be superior to ones learned using specialized or baseline methods [Choi et al., 2015].

  A B C | Pr
  0 0 0 | 0.2
  0 0 1 | 0.2
  0 1 0 | 0.0
  0 1 1 | 0.1
  1 0 0 | 0.0
  1 0 1 | 0.3
  1 1 0 | 0.1
  1 1 1 | 0.1
  (a) Distribution

[Figure 1(b)-(d): the SDD circuit, its PSDD parameterization, and the vtree; the diagrams are not recoverable from this copy.]

Figure 1: A probability distribution and its SDD/PSDD representation. The numbers annotating or-gates in (b) & (c) correspond to vtree node IDs in (d). While the circuit appears to be a tree, the input variables are shared and hence the circuit is not a tree.

In this paper, we study a particular structured space, while focusing on computational considerations. The space we consider is that of routes on a map, leading to what we call route distributions. These distributions are of great practical importance as they can be used to estimate traffic jams, predict specific routes, and even project the impact of interventions, such as closing certain routes on a map. The main contribution on this front is the notion of hierarchical simple-route distributions, which correspond to a hierarchical map representation that forces routes to be simple (no loops) at different levels of the hierarchy. We show in particular how this advance leads to the notion of hierarchical PSDDs, allowing one to control the size of component PSDDs by introducing more levels of the hierarchy. This guarantees a representation of polynomial size, but at the expense of losing exactness on some route queries. Not only does this advance the state of the art for learning distributions over routes, but it also suggests a technique that can potentially be applied in other contexts as well.

This paper is structured as follows. In Section 2, we review SDD circuits and PSDDs, and in Section 3 we turn to routes as a structured space and their corresponding distributions. Hierarchical distributions are treated in Section 4, with complexity and correctness guarantees. In Section 5, we discuss new techniques for encoding and compiling a PSDD in a hierarchy. We present empirical results in Section 6, and finally conclude with some remarks in Section 7.

2 Probabilistic SDDs

PSDDs are a class of tractable probabilistic models, which were originally motivated by the need to represent probability distributions Pr(X) with many instantiations x attaining zero probability, i.e., a structured space [Kisa et al., 2014a, Choi et al., 2015, 2016]. Consider the distribution Pr(X) in Figure 1(a) for an example. To construct a PSDD for such a distribution, we perform the two following steps. We first construct a special Boolean circuit that captures the zero entries in the following sense; see Figure 1(b). For each instantiation x, the circuit evaluates to 0 at instantiation x iff Pr(x) = 0. We then parameterize this Boolean circuit by including a local distribution on the inputs of each or-gate; see Figure 1(c). Such parameters are often learned from data. The Boolean circuit underlying a PSDD is known as a Sentential Decision Diagram (SDD) [Darwiche, 2011].
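As a small illustration of this two-step view, the following sketch (ours, not from the paper; plain Python with no PSDD library) represents the distribution of Figure 1(a) as a table, extracts its support, and checks that an indicator function for the support behaves like the Boolean circuit of Figure 1(b), i.e., it returns 0 exactly on the zero-probability instantiations.

  # Distribution of Figure 1(a) over variables (A, B, C).
  pr = {
      (0, 0, 0): 0.2, (0, 0, 1): 0.2,
      (0, 1, 0): 0.0, (0, 1, 1): 0.1,
      (1, 0, 0): 0.0, (1, 0, 1): 0.3,
      (1, 1, 0): 0.1, (1, 1, 1): 0.1,
  }

  # The structured space: instantiations with nonzero probability.
  support = {x for x, p in pr.items() if p > 0}

  def circuit(x):
      # Plays the role of the SDD in Figure 1(b): 1 iff x is in the support.
      return 1 if x in support else 0

  assert abs(sum(pr.values()) - 1.0) < 1e-9          # a proper distribution
  assert all(circuit(x) == (pr[x] > 0) for x in pr)  # circuit captures the zeros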
These circuits satisfy specific syntactic and semantic properties based on a binary tree, called a vtree, whose leaves correspond to variables; see Figure 1(d). SDD circuits alternate between or-gates and and-gates. Their and-gates have two inputs each and satisfy a property called decomposability: each input depends on a different set of variables. The or-gates satisfy a property called determinism: at most one input will be high under any circuit input. The role of the vtree is (roughly) to determine which variables will appear as inputs for gates.

[Figure 2: Two paths connecting s and t in a graph; the drawings are not recoverable from this copy.]

A PSDD is obtained by including a distribution θ1, ..., θn on the inputs of each or-gate; see again Figure 1(c). The semantics of PSDDs are given in [Kisa et al., 2014a] (Footnote 1). The PSDD is a complete and canonical representation of probability distributions. That is, PSDDs can represent any distribution, and there is a unique PSDD for that distribution (under some conditions). A variety of probabilistic queries are tractable on PSDDs, including that of computing the probability of a partial variable instantiation and the most likely instantiation. Moreover, the maximum likelihood parameter estimates of a PSDD are unique given complete data, and these parameters can be computed efficiently using closed-form estimates; see [Kisa et al., 2014a] for details. Finally, PSDDs have been used to learn distributions over combinatorial objects, including rankings and permutations [Choi et al., 2015], as well as paths and games [Choi et al., 2016]. In these applications, the Boolean circuit underlying a PSDD captures variable instantiations that correspond to combinatorial objects, while its parameterization induces a distribution over these objects.

[Footnote 1: Let x be an instantiation of the PSDD variables. If the SDD circuit outputs 0 at input x, then Pr(x) = 0. Otherwise, traverse the circuit top-down, visiting the (unique) high input of each visited or-node, and all inputs of each visited and-node. Then Pr(x) is the product of parameters visited during the traversal process.]

As a concrete example, PSDDs were used to induce distributions over the permutations of n items as follows. We have a variable Xij for each i, j ∈ {1, ..., n} denoting that item i is at position j in the permutation. Clearly, not all instantiations of these variables correspond to (valid) permutations. An SDD circuit is then constructed, which outputs 1 iff the corresponding input corresponds to a valid permutation. Each parameterization of this SDD circuit leads to a distribution on permutations, and these parameterizations can be learned from data; see Choi et al. [2015].

3 Route Distributions

We consider now the structured space of simple routes on a map, which correspond to connected and loop-free paths on a graph. Our ultimate goal here is to learn distributions over simple routes and use them for reasoning about traffic, but we first discuss how to represent such distributions. Consider a map in the form of an undirected graph G and let X be a set of binary variables, which are in one-to-one correspondence with the edges of graph G. For example, the graph in Figure 2 will lead to 12 binary variables, one for each edge in the graph. A variable instantiation x will then be interpreted as a set of edges in graph G. In particular, instantiation x includes edge e iff the edge variable is set to true in instantiation x. As such, some of the instantiations x will correspond to routes in G and others will not (Footnote 2). In Figure 2, the left route corresponds to a variable instantiation in which 4 variables are set to true, while all other 8 variables are set to false. Let α_G be a Boolean formula obtained by disjoining all instantiations x that correspond to routes in graph G.

[Footnote 2: An instantiation x corresponds to a route iff the edges it mentions positively can be ordered as a sequence (n1, n2), (n2, n3), (n3, n4), ..., (nk-1, nk).]
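To make the edge-variable encoding concrete, here is a small sketch (ours, not from the paper) that tests the condition of Footnote 2: given the positively-set edges of an instantiation, it checks whether they can be ordered into one connected edge sequence (a route), and separately whether they form a loop-free path (a simple route). The example graph and node names are our own assumptions.

  from collections import Counter

  def degrees(edges):
      deg = Counter()
      for u, v in edges:
          deg[u] += 1
          deg[v] += 1
      return deg

  def connected(edges):
      # The chosen edges must form a single connected component.
      nodes = {u for e in edges for u in e}
      if not nodes:
          return False
      adj = {n: set() for n in nodes}
      for u, v in edges:
          adj[u].add(v)
          adj[v].add(u)
      seen, stack = set(), [next(iter(nodes))]
      while stack:
          n = stack.pop()
          if n not in seen:
              seen.add(n)
              stack.extend(adj[n] - seen)
      return seen == nodes

  def is_route(edges):
      # Footnote 2: edges can be ordered consecutively, i.e., they form a
      # connected trail (zero or two odd-degree nodes).
      odd = [n for n, d in degrees(edges).items() if d % 2 == 1]
      return connected(edges) and len(odd) in (0, 2)

  def is_simple_route(edges):
      # A loop-free path: connected, all degrees <= 2, exactly two endpoints.
      deg = degrees(edges)
      return (connected(edges) and max(deg.values()) <= 2
              and sum(1 for d in deg.values() if d == 1) == 2)

  # A 4-edge route like the left path of Figure 2 (node names assumed):
  left = [("s", "a"), ("a", "b"), ("b", "c"), ("c", "t")]
  assert is_route(left) and is_simple_route(left)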
A probability distribution Pr(X) is called a route distribution iff it assigns a zero probability to every instantiation x that does not correspond to a route, i.e., Pr(x) = 0 if x ⊭ α_G. One can systematically induce a route distribution over graph G by simply compiling the Boolean formula α_G into an SDD, and then parameterizing the SDD to obtain a PSDD. This approach was actually proposed in Choi et al. [2016], where empirical results were shown for routes on grids of size at most 8 nodes by 8 nodes.

Let us now turn to simple routes, which are routes that do not contain loops. The path on the left of Figure 2 is simple, while the one on the right is not simple. Among the instantiations x corresponding to routes, some are simple routes and others are not. Let σ_G be a Boolean formula obtained by disjoining all instantiations x that correspond to simple routes. We then have σ_G ⊨ α_G.

[Figure 3: The set of all s-t paths corresponds to concatenating edge (s, a) with all a-t paths and concatenating edge (s, b) with all b-t paths. The drawings are not recoverable from this copy.]

[Figure 4: Partitioning a map into three regions (intersections are nodes of the graph and roads between intersections are edges of the graph). Regions have black boundaries. Red edges cross regions and blue edges are contained within a region.]

A simple-route distribution Pr(X) is a distribution such that Pr(x) = 0 if x ⊭ σ_G. Clearly, simple-route distributions are a subclass of route distributions. One can also systematically represent and learn simple-route distributions using PSDDs. In this case, one must compile the Boolean formula σ_G into an SDD whose parameters are then learned from data. Figure 3 shows one way to encode this Boolean formula (recursively), as discussed in Choi et al. [2016]. More efficient approaches are known, based on Knuth's Simpath algorithm [Knuth, 2009, Minato, 2013, Nishino et al., 2017].

To give a sense of current scalability when compiling simple routes into SDD circuits, Nishino et al. [2017] reported results on graphs with as many as 100 nodes and 140 edges for a single source and destination pair. To put these results in perspective, we point out that we are not aware of how one may obtain similar results using standard probabilistic graphical models, for example, a Bayesian or a Markov network. Imposing complex constraints, such as the simple-route constraint, typically leads to highly connected networks with high treewidths (Footnote 3). While PSDD scalability is favorable in this case, when compared to probabilistic graphical models, our goal is to handle problems that are significantly larger in scale. The classical direction for achieving this goal is to advance current circuit compilation technology, which would allow us to compile propositional logic descriptions that cannot be compiled today. We next propose an alternative, yet complementary, direction, which is based on the notion of hierarchical maps and the corresponding notion of hierarchical distributions.

[Footnote 3: If we can represent a uniform distribution of simple routes on a map, then we can count the number of simple paths on a graph, which is a #P-complete problem [Valiant, 1979]. Hence, we do not in general expect a Bayesian or Markov network for such a distribution to have bounded treewidth.]
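The recursive encoding of Figure 3 can be mimicked directly in code. The sketch below (ours; a plain recursive enumeration, not the Simpath or SDD compilation actually used in the paper) enumerates all simple s-t paths of a graph by branching on the edges leaving the source, which is exactly the recurrence of Figure 3; it is exponential in general and is meant only as an illustration.

  def simple_paths(adj, s, t, visited=None):
      # Yield each simple s-t path as a list of edges.
      # adj maps a node to the set of its neighbors.
      visited = {s} if visited is None else visited
      if s == t:
          yield []
          return
      for nxt in adj[s]:
          if nxt not in visited:
              # Figure 3: concatenate edge (s, nxt) with all nxt-t paths.
              for rest in simple_paths(adj, nxt, t, visited | {nxt}):
                  yield [(s, nxt)] + rest

  # Toy graph (our own example): a square with a diagonal.
  adj = {"s": {"a", "b"}, "a": {"s", "b", "t"},
         "b": {"s", "a", "t"}, "t": {"a", "b"}}
  paths = list(simple_paths(adj, "s", "t"))   # four simple s-t paths
  # Disjoining the instantiations for such paths (over all source/destination
  # pairs) yields the formula σ_G described above.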
4 Hierarchical Route Distributions

A route distribution can be represented hierarchically if one imposes a hierarchy on the underlying map, leading to a representation that is polynomial in size if one includes enough levels in the hierarchy. Under some conditions which we discuss later, the hierarchical representation can also support inference in time polynomial in its size. The penalty incurred due to this hierarchical representation is a loss of exactness on some queries, which can be controlled as we discuss later.

We start by discussing hierarchical maps, where a map is represented by a graph G as discussed earlier. Let N1, ..., Nm be a partitioning of the nodes in graph G and let us call each Ni a region. These regions partition the edges X into B, A1, ..., Am, where B are the edges that cross regions and Ai are the edges inside region Ni. Consider the following decomposition for distributions over routes:

    Pr(x) = Pr(b) · ∏_{i=1}^{m} Pr(ai | bi).    (1)

We refer to such a distribution as a decomposable route distribution (Footnote 4). Here, Bi are the edges that cross out of region Ni, and b, ai and bi are partial instantiations that are compatible with instantiation x.

[Footnote 4: Note that not all route distributions can be decomposed as such: the decomposition implies the independence of routes on edges Ai given the route on edges B.]

To discuss the main insight behind this hierarchical representation, we need to first define a graph GB that is obtained from G by aggregating each region Ni into a single node. We also need to define subgraphs Gbi, obtained from G by keeping only the edges Ai and the edges set positively in instantiation bi (the positive edges of bi denote the edges used to enter and exit the region Ni). Hence, graph GB is an abstraction of graph G, while each graph Gbi is a subgraph of G. Moreover, one can think of each subgraph Gbi as a local map (for region i) together with a particular set of edges that connects it to other regions.

We can now state the following key observations. The distribution Pr(B) is a route distribution for the aggregated graph GB. Moreover, each distribution Pr(Ai | bi) is a distribution over (sets of) routes for subgraph Gbi (in general, we may enter and exit a region multiple times). Hence, we are able to represent the route distribution Pr(X) using a set of smaller route distributions. One of these distributions, Pr(B), captures routes across regions. The others, Pr(Ai | bi), capture routes that are within a region. The count of these smaller distributions is 1 + ∑_{i=1}^{m} 2^{|Bi|}, which is exponential in the size of the variable sets B1, ..., Bm. We will later see that this count can be polynomial for some simple-route distributions.
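The partition underlying Equation 1 is easy to compute. The following sketch (ours, not from the paper) takes an edge list and a node partition N1, ..., Nm, splits the edge variables into the cross-region set B and the within-region sets Ai, and evaluates the product of Equation 1 given callables for the component distributions; the component distributions themselves are assumed, not implemented.

  from math import prod

  def split_edges(edges, regions):
      # regions: list of node sets N1..Nm. Returns (B, [A1..Am]).
      region_of = {n: i for i, N in enumerate(regions) for n in N}
      B, A = [], [[] for _ in regions]
      for u, v in edges:
          if region_of[u] == region_of[v]:
              A[region_of[u]].append((u, v))   # internal to one region
          else:
              B.append((u, v))                 # crosses regions
      return B, A

  def pr_decomposed(pr_b, pr_a_given_b, x):
      # Equation 1: Pr(x) = Pr(b) * prod_i Pr(a_i | b_i).
      # pr_b and pr_a_given_b[i] are assumed callables on the partial
      # instantiations b and (a_i, b_i) compatible with x.
      b, a = x["b"], x["a"]
      return pr_b(b) * prod(pr_a_given_b[i](a[i], b) for i in range(len(a)))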
We used α_G to represent the instantiations corresponding to routes, and σ_G to represent the instantiations corresponding to simple routes, with σ_G ⊨ α_G. Some of these simple routes are also simple with respect to the aggregated graph GB (i.e., they will not visit a region Ni more than once), while other simple routes are not simple with respect to graph GB. Let γ_G be the Boolean expression obtained by disjoining the instantiations x that correspond to simple routes that are also simple (and non-empty) with respect to graph GB (Footnote 5). We then have γ_G ⊨ σ_G ⊨ α_G and the following result.

[Footnote 5: For most practical cases, the independence assumption of the hierarchical decomposition will dictate that routes on GB be non-empty. An empty route on GB corresponds to a route contained within a single region, which we can accommodate using a route distribution for the single region.]

Theorem 1 Consider graphs G, GB and Gbi as indicated above. Let Pr(B) be a simple-route distribution for graph GB, and Pr(Ai | bi) be a simple-route distribution for graph Gbi. Then the resulting distribution Pr(X), as defined by Equation 1, is a simple-route distribution for graph G.

This theorem will not hold if Pr(B) were not a simple-route distribution for graph GB. That is, having each distribution Pr(Ai | bi) be a simple-route distribution for graph Gbi is not sufficient for the hierarchical distribution to be a simple-route distribution for G. Hierarchical distributions that satisfy the conditions of Theorem 1 will be called hierarchical simple-route distributions.

Theorem 2 Let Pr(X) be a hierarchical simple-route distribution for graph G and let γ_G be as indicated above. We then have Pr(x) = 0 if x ⊭ γ_G.

This means that the distribution will assign a zero probability to all instantiations x ⊨ σ_G ∧ ¬γ_G. These instantiations correspond to routes that are simple for graph G but not simple for graph GB. Hence, hierarchical simple-route distributions correspond to a subclass of the simple-route distributions for graph G. This subclass, however, is interesting for the following reason.

Theorem 3 Consider a hierarchical simple-route distribution Pr(X) and let x be an instantiation that sets more than two variables in some Bi to true. Then Pr(x) = 0.

Basically, a route that is simple for graph GB cannot enter and leave a region more than once.

Corollary 1 The hierarchical simple-route distribution Pr(X) can be constructed from the distribution Pr(B) and the distributions Pr(Ai | bi) for which bi sets no more than two variables to true.

Corollary 2 The hierarchical simple-route distribution Pr(X) can be represented by a data structure whose size is O(2^{|B|} + ∑_{i=1}^{m} 2^{|Ai|} |Bi|^2).

If we choose our regions Ni to be small enough, then 2^{|Ai|} can be treated as a constant. A tabular representation of the simple-route distribution Pr(B) has size O(2^{|B|}). If representing this table is practical, then inference is also tractable (via variable elimination). However, this distribution can itself be represented by a simple-route hierarchical distribution. This process can continue until we reach a simple-route distribution that admits an efficient representation. We can therefore obtain a final representation which is polynomial in the number of variables X and, hence, polynomial in the size of graph G (however, inference may no longer be tractable).
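As a quick illustration of Corollary 2, the sketch below (ours) computes the size bound 2^{|B|} + ∑_i 2^{|Ai|} |Bi|^2 for a given partition, reusing split_edges from the earlier sketch; such an estimate can be used to compare candidate partitions before any compilation is attempted.

  def cross_edges_per_region(B, regions):
      # B_i: the cross edges incident to region i.
      region_of = {n: i for i, N in enumerate(regions) for n in N}
      out = [[] for _ in regions]
      for u, v in B:
          out[region_of[u]].append((u, v))
          out[region_of[v]].append((u, v))
      return out

  def representation_size(B, A, B_per_region):
      # Corollary 2 bound: 2^|B| + sum_i 2^|A_i| * |B_i|^2.
      return 2 ** len(B) + sum(
          2 ** len(A[i]) * len(B_per_region[i]) ** 2 for i in range(len(A)))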
In our approach, we represent the distributions Pr(B) and Pr(Ai | bi) using PSDDs. This allows these distributions to be over a relatively large number of variables (on the order of hundreds), which would not be feasible if we used more classical representations, such as graphical models. This hierarchical representation, which is both small and admits polytime inference, is an approximation, as shown by the following theorem.

Theorem 4 Consider a decomposable route distribution Pr(X) (as in Equation 1), the corresponding hierarchical simple-route distribution Pr(X | γ_G), and a query q over variables X. The error of the query Pr(q | γ_G), relative to Pr(q), is:

    (Pr(q | γ_G) − Pr(q)) / Pr(q | γ_G) = Pr(δ_G) · (1 − Pr(q | δ_G) / Pr(q | γ_G))

where δ_G = σ_G ∧ ¬γ_G denotes the simple routes in G that are not simple routes in GB.

The conditions of this theorem basically require the two distributions to agree on the relative probabilities of simple routes that are also simple in GB. Note also that Pr(γ_G) + Pr(δ_G) = 1. Hence, if Pr(γ_G) ≈ 1, then we expect the hierarchical distribution to be accurate. This happens when most simple routes are also simple in GB, a condition that may be met by a careful choice of map regions (Footnote 6). At one extreme, if each region has at most two edges crossing out of it, then Pr(γ_G) = 1 and the hierarchical distribution is exact.

[Footnote 6: If q is independent of γ_G (and hence q is independent of δ_G), then the approximation is also exact. At this point, however, we do not know of an intuitive characterization of queries q that satisfy this property.]

Hierarchical simple-route distributions will assign a zero probability to routes x that are simple in G but not in GB. However, under a mild condition on the hierarchy, we can guarantee that if there is a simple route between nodes s and t in G, there is also a simple route that is simple for GB.

Proposition 1 If the subgraphs Gbi are connected, then there is a simple route connecting s and t in G iff there is a simple route connecting s and t in G that is also a simple route for GB.

Under this condition, hierarchical simple-route distributions will provide an approximation for any source/destination query. One can compute marginal and MAP queries in polytime on a hierarchical distribution, assuming that one can (in polytime) multiply and sum out variables from its component distributions: we basically need to sum out the variables Bi from each Pr(Ai | bi), then multiply the results with Pr(B). In our experiments, however, we follow a more direct approach to inference, in which we multiply all component distributions (PSDDs), to yield one PSDD for the hierarchical distribution. This is not always guaranteed to be efficient, but it leads to a much simpler implementation.

[Figure 5: Partitioning of the area around the Financial District of San Francisco into regions.]
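The identity in Theorem 4 follows from total probability, since Pr(q) = Pr(γ_G) Pr(q | γ_G) + Pr(δ_G) Pr(q | δ_G) and Pr(γ_G) + Pr(δ_G) = 1. A small numeric check (ours, with made-up numbers):

  # Made-up values for Pr(gamma), Pr(q | gamma), Pr(q | delta).
  p_gamma, q_gamma, q_delta = 0.9, 0.4, 0.1
  p_delta = 1.0 - p_gamma
  p_q = p_gamma * q_gamma + p_delta * q_delta   # total probability

  lhs = (q_gamma - p_q) / q_gamma               # relative error of Theorem 4
  rhs = p_delta * (1.0 - q_delta / q_gamma)
  assert abs(lhs - rhs) < 1e-12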
An even more efficient algorithm was proposed recently for compiling simple routes to ZSDDs, which we used in our experiments [Nishino et al., 2016, 2017]. Consider now the space of hierarchical simple routes induced by regions N1 , . . . , Nm of graph G, with a corresponding partition of edges into B, A1 , . . . , Am , as discussed earlier. To compile an SDD for the hierarchical, simple routes of G, we first compile an SDD representing the simple routes over each region. That is, for each region Ni , we take the graph induced by the edges Ai and Bi , and compile an SDD representing all its simple routes (as described above). Similarly, we compile an SDD representing the simple routes of the abstracted graph GB . At this point, we have a hierarchical, simple-route distribution in which components are represented as PSDDs and that we can do inference on using multiplication and summing-out as discussed earlier. In our experiments, however, we take the extra step of multiplying all the m + 1 component PSDDs, to yield a single PSDD over the structured space of hierarchical, simple routes. This simplifies inference and learning as we can now use the linear-time inference and learning procedures known for PSDDs [Kisa et al., 2014a].7 6 Experimental Results In our experiments, we considered a dataset consisting of GPS data collected from taxicab routes in San Francisco.8 We acquired public map data from http://www.openstreetmap.org/, i.e., the undirected graph representing the streets (edges) and intersections (nodes) of San Francisco. We projected the GPS data onto the San Francisco graph using the map-matching API of the graphhopper package.9 For more on map-matching, see, e.g., [Froehlich and Krumm, 2008]. 7 In our experiments, we use an additional simplification. Recall from Footnote 5 that if bi sets all variables negatively (i.e., no edges), then Gbi is empty. We now allow the case that Gbi contains all edges Ai (by disjoing the corresponding SDDs). Intuitively, this optionally allows a simple path to exist strictly in region Ri . While the global SDD no longer strictly represents hierarchical simple paths (it may allow sets of independent simple paths at once), we do not have to treat simple paths that are confined to a single region as a special case. 8 Available at http://crawdad.org/epfl/mobility/20090224/. 9 Available at https://www.graphhopper.com. 7 To partition the graph of San Francisco into regions, we obtained a publicly available dataset of traffic analysis zones, produced by the California Metropolitan Transportation Commission.10 These zones correspond to small area neighborhoods and communities of the San Francisco Bay Area. To facilitate the compilation of regions into SDDs, we further split these zones in half until each region was compilable (horizontally if the region was taller than it was wide, or vertically otherwise). Finally, we restricted our attention to areas around the Financial District of San Francisco, which we were able to compile into a hierarchical distribution using one level of abstraction; see Figure 5. Given the routes over the graph of San Francisco, we first filtered out any routes that did not correspond to a simple path on the San Francisco graph. We next took all routes that were contained solely in the region under consideration. We further took any sub-route that passed through this region, as a route for our region. In total, we were left with 87,032 simple routes. We used half for training, and the other half for testing. 
For the training set, we also removed all simple routes that were not simple in the hierarchy. We did not remove such routes for the purposes of testing. We first compiled an SDD of hierarchical simple-routes over the region, leading to an SDD with 62,933 nodes, and 152,140 free parameters. We then learned the parameters of our PSDD from the training set, assuming Laplace smoothing [Kisa et al., 2014a]. We considered a route prediction task where we predict the next road segment, given the route taken so far; see, e.g., [Letchner et al., 2006, Simmons et al., 2006, Krumm, 2008]. That is, for each route of the testing set, we consider one edge at a time and try to predict the next edge, given the edges observed so far. We consider three approaches: (1) a naive baseline that uses the relative frequency of edges to predict the next edge, while discounting the last-used edge, (2) a Markov model that predicts, given the last-used edge, what edge would be the most likely one to be traversed next, (3) a PSDD given the current partial route as well as the destination. The last assumption is often the situation in reality, given the ubiquity of GPS routing applications on mobile phones. We remark that Markov models and HMMs are less amenable to accepting a destination as an observation. For the PSDD, the current partial route and the last edge to be used (i.e., the destination) are given as evidence e. The evidence for an endpoint (source or destination) is the edge used (set positively), where the remaining edges are assumed to be unused (and set negatively). For internal nodes on a route, two edges (entering and exiting a node) are set positively and the remaining edges are set negatively in the evidence. To predict the next edge on a partial route, we consider the edges X incident to the current node and compute their marginal probabilities Pr (X | e) according to the PSDD. The probability of the last edge used in the partial route is 1, which we ignore. The remaining edges have a probability that sums to a value less than one; one minus this probability is the probability that the route ends at the current node. Among all these options, we pick the most likely as our prediction (either navigate to a new edge, or stop). Note that for the purposes of training our PSDD, we removed those simple routes that were not simple on the hierarchy. When testing, such routes have a probability of zero on our PSDD. Moreover, partial routes may also have zero probability, if they cannot be extended to a hierarchical simple-route. In this case, we cannot compute the marginals Pr (X | e). Hence, we simply unset our evidence, one edge at a time in the order that we set them (first unsetting negative edges before positive edges), until the evidence becomes consistent again, relative to the PSDD. We summarize the relative accuracies over 43,516 total testing routes: model accuracy naive 0.736 (326,388/443,481) Markov 0.820 (363,536/443,481) PSDD 0.931 (412,958/443,481) For each model, we report the accuracy averaged over all steps on all paths, ignoring those steps where the prediction is trivial (i.e., there is only one edge or no edge available to be used next). We find that the PSDD is much more accurate at predicting the next road segment, compared to the Markov model and the naive baseline. Indeed, this could be expected as (1) the PSDD uses the history of the route so far, and perhaps more importantly, (2) it utilizes knowledge of the destination. 10 Available at https://purl.stanford.edu/fv911pc4805. 
7 Conclusion

In this paper, we considered Probabilistic Sentential Decision Diagrams (PSDDs) representing distributions over routes on a map, or equivalently, simple paths on a graph. We considered a hierarchical approximation of simple-route distributions, and examined its relative tractability and its accuracy. We showed how this perspective can be leveraged to represent and learn more scalable PSDDs for simple-route distributions. In a route prediction task, we showed that PSDDs can take advantage of the available observations, such as the route taken so far and the destination of a trip, to make more accurate predictions.

Acknowledgments

We thank Eunice Chen and Andy Shih for helpful comments, discussions and code. This work has been partially supported by NSF grant #IIS-1514253, ONR grant #N00014-15-1-2339 and DARPA XAI grant #N66001-17-2-4032.

References

A. Choi, G. Van den Broeck, and A. Darwiche. Tractable learning for structured probability spaces: A case study in learning preference distributions. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), 2015.
A. Choi, N. Tavabi, and A. Darwiche. Structured features in naive Bayes classification. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), 2016.
A. Darwiche. SDD: A new canonical representation of propositional knowledge bases. In Proceedings of IJCAI, pages 819-826, 2011.
J. Froehlich and J. Krumm. Route prediction from trip observations. Technical report, SAE Technical Paper, 2008.
T. Inoue, H. Iwashita, J. Kawahara, and S.-i. Minato. Graphillion: software library for very large sets of labeled graphs. International Journal on Software Tools for Technology Transfer, pages 1-10, 2014.
D. Kisa, G. Van den Broeck, A. Choi, and A. Darwiche. Probabilistic sentential decision diagrams. In KR, 2014a.
D. Kisa, G. Van den Broeck, A. Choi, and A. Darwiche. Probabilistic sentential decision diagrams: Learning with massive logical constraints. In ICML Workshop on Learning Tractable Probabilistic Models (LTPM), 2014b.
D. E. Knuth. The Art of Computer Programming, Volume 4, Fascicle 1: Bitwise Tricks & Techniques; Binary Decision Diagrams. Addison-Wesley Professional, 2009.
J. Krumm. A Markov model for driver turn prediction. Technical report, SAE Technical Paper, 2008.
J. Letchner, J. Krumm, and E. Horvitz. Trip router with individualized preferences (TRIP): incorporating personalization into route planning. In AAAI, pages 1795-1800, 2006.
T. Lu and C. Boutilier. Learning Mallows models with pairwise preferences. In Proceedings of ICML, pages 145-152, 2011.
C. L. Mallows. Non-null ranking models. Biometrika, 1957.
S. Minato. Techniques of BDD/ZDD: brief history and recent activity. IEICE Transactions, 96-D(7):1419-1429, 2013.
M. Nishino, N. Yasuda, S. Minato, and M. Nagata. Zero-suppressed sentential decision diagrams. In AAAI, pages 1058-1066, 2016.
M. Nishino, N. Yasuda, S. Minato, and M. Nagata. Compiling graph substructures into sentential decision diagrams. In Proceedings of the Thirty-First Conference on Artificial Intelligence (AAAI), 2017.
Y. Shen, A. Choi, and A. Darwiche. Tractable operations for arithmetic circuits of probabilistic models. In Advances in Neural Information Processing Systems 29 (NIPS), 2016.
R. Simmons, B. Browning, Y. Zhang, and V. Sadekar. Learning to predict driver route and destination intent. In Intelligent Transportation Systems Conference, pages 127-132, 2006.
L. G. Valiant. The complexity of enumeration and reliability problems. SIAM J. Comput., 8(3):410-421, 1979.
Y. Xue, A. Choi, and A. Darwiche. Basing decisions on sentences in decision diagrams. In AAAI, pages 842-849, 2012.
Kohonen Feature Maps and Growing Cell Structures - a Performance Comparison

Bernd Fritzke
International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, CA 94704-1105, USA

Abstract

A performance comparison of two self-organizing networks, the Kohonen Feature Map and the recently proposed Growing Cell Structures, is made. For this purpose several performance criteria for self-organizing networks are proposed and motivated. The models are tested with three example problems of increasing difficulty. The Kohonen Feature Map demonstrates slightly superior results only for the simplest problem. For the other, more difficult and also more realistic, problems the Growing Cell Structures exhibit significantly better performance by every criterion. Additional advantages of the new model are that all parameters are constant over time and that size as well as structure of the network are determined automatically.

1 INTRODUCTION

Self-organizing networks are able to generate interesting low-dimensional representations of high-dimensional input data. The most well known of these models is the Kohonen Feature Map (Kohonen [1982]). So far it has been applied to a large variety of problems including vector quantization (Schweizer et al. [1991]), biological modelling (Obermayer, Ritter & Schulten [1990]), combinatorial optimization (Favata & Walker [1991]) and also processing of symbolic information (Ritter & Kohonen [1989]).

It has been reported by a number of researchers that one disadvantage of Kohonen's model is the fact that the network structure has to be specified in advance. This is generally not possible in an optimal way since a necessary piece of information, the probability distribution of the input signals, is usually not available. The choice of an unsuitable network structure, however, can badly degrade network performance. Recently we have proposed a new self-organizing network model - the Growing Cell Structures - which can automatically determine a problem-specific network structure (Fritzke [1992]). By now the model has been successfully applied to clustering (Fritzke [1991]) and combinatorial optimization (Fritzke & Wilke [1991]). In this contribution we directly compare our model to that of Kohonen. We first review some general properties of self-organizing networks, and several performance criteria for these networks are proposed and motivated. The new model is then briefly described. Simulation results are presented and allow a comparison of both models with respect to the proposed criteria.

2 SELF-ORGANIZING NETWORKS

2.1 CHARACTERISTICS

A self-organizing network consists of a set of neurons arranged in some topological structure which induces neighborhood relations among the neurons. An n-dimensional reference vector is attached to every neuron. This vector determines the specific n-dimensional input signal to which the neuron is maximally sensitive. By assigning to every input signal the neuron with the nearest reference vector (according to a suitable norm), a mapping is defined from the space of all possible input signals onto the neural structure. A given set of reference vectors thus divides the input vector space into regions with a common nearest reference vector. These regions are commonly denoted as Voronoi regions, and the corresponding partition of the input vector space is denoted the Voronoi partition. Self-organizing networks learn (change internal parameters) in an unsupervised manner from a stream of input signals.
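The best-matching mapping just described is straightforward to state in code. A minimal sketch (ours, not from the paper), assuming Euclidean distance as the "suitable norm":

  import numpy as np

  def best_matching_unit(reference_vectors, signal):
      # Index of the neuron whose reference vector is nearest to the signal
      # (Euclidean norm); this assignment defines the Voronoi partition.
      dists = np.linalg.norm(reference_vectors - signal, axis=1)
      return int(np.argmin(dists))

  # Example: 100 neurons with 2-dimensional reference vectors.
  refs = np.random.rand(100, 2)
  bmu = best_matching_unit(refs, np.array([0.3, 0.7]))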
These input signals obey a generally unknown probability distribution. For each input signal the neuron with the nearest reference vector is determined, the so-called "best matching unit" (bmu). The reference vectors of the bmu and of a number of its topological neighbors are moved towards the input signal. The adaptation of topological neighbors distinguishes self-organization ("winner take most") from competitive learning, where only the bmu is adapted ("winner take all").

2.2 PERFORMANCE CRITERIA

One can identify three main criteria for self-organizing networks. The importance of each criterion may vary from application to application.

Topology Preservation. This denotes two properties of the mapping defined by the network. We call the mapping topology-preserving if
a) similar input vectors are mapped onto identical or closely neighboring neurons, and
b) neighboring neurons have similar reference vectors.
Property a) ensures that small changes of the input vector cause correspondingly small changes in the position of the bmu. The mapping is robust against distortions of the input, a very important property for applications dealing with real, noisy data. Property b) ensures robustness of the inverse mapping. Topology preservation is especially interesting when the dimension of the input vectors is higher than the network dimension. Then the mapping reduces the data dimension but usually preserves important similarity relations among the input data.

Modelling of Probability Distribution. A set of reference vectors is said to model the probability distribution if the local density of reference vectors in the input vector space approaches the probability density of the input vector distribution. This property is desirable for two reasons. First, we get an implicit model of the unknown probability distribution underlying the input signals. Second, the network becomes fault-tolerant against damage, since every neuron is only "responsible" for a small fraction of all input vectors. If neurons are destroyed for some reason, the mapping ability of the network degrades only proportionally to the number of destroyed neurons (soft fail). This is a very desirable property for technical (as well as natural) systems.

Minimization of Quantization Error. The quantization error for a given input signal is the distance between this signal and the reference vector of the bmu. We call a set of reference vectors error-minimizing for a given probability distribution if the mean quantization error is minimized. This property is important if the original signals have to be reconstructed from the reference vectors, which is a very common situation in vector quantization. The quantization error in this case limits the accuracy of the reconstruction. One should note that the optimal distribution of reference vectors for error minimization is generally different from the optimal distribution for distribution modelling.

3 THE GROWING CELL STRUCTURES

The Growing Cell Structures are a self-organizing network, an important feature of which is the ability to automatically find a problem-specific network structure through a growth process. Basic building blocks are k-dimensional hypertetrahedrons: lines for k = 1, triangles for k = 2, tetrahedrons for k = 3, etc. The vertices of the hypertetrahedrons are the neurons and the edges denote neighborhood relations. By insertion and deletion of neurons the structure is modified.
Insertion and deletion take place during a self-organization process which is similar to that in Kohonen's model. Input signals cause adaptation of the bmu and its topological neighbors. In contrast to Kohonen's model, all parameters are constant, including the width of the neighborhood around the bmu where adaptation takes place. Only direct neighbors and the bmu itself are being adapted.

3.1 INSERTION OF NEURONS

To determine the positions where new neurons should be inserted, the concept of a resource is introduced. Every neuron has a local resource variable, and new neurons are always inserted near the neuron with the highest resource value. New neurons get part of the resource of their neighbors, so that in the long run the resource is distributed evenly among all neurons. Every input signal causes an increase of the resource variable of the best matching unit. Choices for the resource examined so far are

- the summed quantization error caused by the neuron
- the number of input signals received by the neuron

Always after a constant number of adaptation steps (e.g. 100) a new neuron is inserted. For this purpose the neuron with the highest resource is determined, and the edge connecting it to the neighbor with the most different reference vector is "split" by inserting the new neuron. Further edges are added to rebuild a structure consisting only of k-dimensional hypertetrahedrons. The reference vector of the new neuron is interpolated from the reference vectors belonging to the ending points of the split edge. The resource variable of the new neuron is initialized by subtracting some resource from its neighbors, the amount of which is determined by the reduction of their Voronoi regions through the insertion.

3.2 DELETION OF NEURONS

By comparing the fraction of all input signals which a specific neuron has received with the volume of its Voronoi region, one can derive a local estimate of the probability density of the input vectors. Those neurons whose reference vectors fall into regions of the input vector space with a very low probability density are regarded as "superfluous" and are removed. The result is problem-specific network structures, potentially consisting of several separate subnetworks, accurately modelling a given probability distribution.
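The insertion step of section 3.1 can be sketched in a few lines. In the sketch below, the even three-way resource split and the omission of the edge-rebuilding step (which restores the purely hypertetrahedral structure) are simplifications on our part.

    import numpy as np

    def insert_neuron(w, edges, resource):
        # q: neuron with the highest resource; f: the neighbor of q with
        # the most different reference vector.
        q = int(np.argmax(resource))
        nbrs = [j for e in edges if q in e for j in e if j != q]
        f = max(nbrs, key=lambda j: np.linalg.norm(w[q] - w[j]))
        # Split the edge (q, f); the new reference vector is interpolated
        # from the reference vectors at the ending points of the split edge.
        w = np.vstack([w, 0.5 * (w[q] + w[f])])
        r = len(w) - 1
        edges.discard(frozenset((q, f)))
        edges.update({frozenset((q, r)), frozenset((f, r))})
        # Simplified bookkeeping (an assumption): the new neuron takes one
        # third of the resource of each neuron it was inserted between.
        resource = np.append(resource, resource[q] / 3.0 + resource[f] / 3.0)
        resource[q] *= 2.0 / 3.0
        resource[f] *= 2.0 / 3.0
        return w, edges, resource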
4 SIMULATION RESULTS

A number of tests have been performed to evaluate the performance of the new model. One series is described in the following. Three methods have been compared:

a) Kohonen Feature Maps (KFM)
b) Growing Cell Structures with quantization error as resource (GCS-1)
c) Growing Cell Structures with number of input signals as resource (GCS-2)

Figure 1: Three different probability distributions used for a performance comparison. Distribution A: the probability density is uniform in the unit square. Distribution B: the probability density is uniform in the 10 x 10 field, by a factor 100 higher in the 1 x 1 field, and zero elsewhere. Distribution C: the probability density is uniform inside the seven lower squares, by a factor 10 higher in the two upper squares, and zero elsewhere. Distribution A is very simple and has a form ideally suited for the Kohonen Feature Map, which uses a square grid of neurons. Distribution B was chosen to show the effects of a highly varying probability density. Distribution C is the most realistic, with a number of separate regions, some of which have also different probability densities.

These models were applied to the probability distributions shown in fig. 1. The Kohonen model was used with a 10 x 10 grid of neurons. The Growing Cell Structures were used to build up a two-dimensional cell structure of the same size. This was achieved by stopping the growth process when the number of neurons had reached 100. At the end of the simulation the proposed criteria were measured as follows:

- The topology preservation requires two properties. Property a) was measured by the topographical product recently proposed by Bauer et al. for this purpose (Bauer & Pawelzik [1992]). Property b) was measured by computing the mean edge length in the input space, i.e. the mean difference between reference vectors of directly neighboring neurons.
- The distribution modelling was measured by generating 5000 test signals according to the specific probability distribution and counting, for every neuron, the number of test signals it has been bmu for. The standard deviation of all counter values was computed and divided by the mean value of the counters to get a normalized measure, the distribution error, for the modelling of the probability distribution.
- The error minimization was measured by computing the mean square quantization error of the test signals.

The numerical results of the simulations are shown in fig. 2. Typical examples of the final network structures can be seen in fig. 3.

Figure 2: Simulation results of the performance comparison. The model of Kohonen (KFM) and two versions of the Growing Cell Structures have been compared with respect to different criteria. All criteria are such that smaller values are better values. The best (smallest) value in each column is enclosed in a box. Simulations were performed with the probability distributions A, B and C from fig. 1. (Sub-tables: a) topographical product, b) mean edge length, c) distribution error, d) quantization error; rows KFM, GCS-1, GCS-2; columns A, B, C.)

It can be seen from fig. 2 that the model of Kohonen has superior values only for distribution A, which is very regular and formed exactly like the chosen network structure (a square). Since generally the probability distribution is unknown and irregular, the distributions B and C are by far more realistic. For these distributions the Growing Cell Structures have the best values. The modelling of the distribution and the minimization of the quantization error are generally concurring objectives. One has to decide which objective is more important for the current application. Then the appropriate version of the Growing Cell Structures can optimize with respect to that objective. For the complicated distribution C, however, either version of the Growing Cell Structures performs better than Kohonen's model by every criterion. Especially notable is the low quantization error for distribution C and the error minimizing version (GCS-2) of the Growing Cell Structures (see fig. 2d). This value indicates a good potential for vector quantization.
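The distribution error and the mean square quantization error reported in fig. 2 are straightforward to compute from a trained set of reference vectors; a minimal NumPy sketch (function names are ours):

    import numpy as np

    def bmu_indices(signals, w):
        # Nearest reference vector for every test signal.
        d = np.linalg.norm(signals[:, None, :] - w[None, :, :], axis=2)
        return np.argmin(d, axis=1)

    def distribution_error(signals, w):
        # Std. deviation of the per-neuron bmu counts, divided by their mean.
        counts = np.bincount(bmu_indices(signals, w), minlength=len(w))
        return counts.std() / counts.mean()

    def mean_sq_quantization_error(signals, w):
        b = bmu_indices(signals, w)
        return float(np.mean(np.sum((signals - w[b]) ** 2, axis=1)))

For example, with 5000 test signals drawn from distribution A and the 100 reference vectors of a trained network, distribution_error(signals, w) gives the normalized counter spread reported in fig. 2c.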
5 DISCUSSION

Our investigations indicate that - with respect to the proposed criteria - the Growing Cell Structures are superior to Kohonen's model for all but very carefully chosen trivial examples. Although we used small examples for the sake of clarity, our experiments lead us to conjecture that the difference will further increase with the difficulty and size of the problem. There are some other important advantages of our approach. First, all parameters are constant. This eliminates the difficult choice of a "cooling schedule" which is necessary in Kohonen's model. Second, the network size does not have to be specified in advance. Instead, the growth process can be continued until an arbitrary performance criterion is met. To meet a specific criterion with Kohonen's model, one generally has to try different network sizes. To start always with a very large network is not a good solution to this problem, since the computational effort grows faster than quadratically with the network size. Currently, applications of variants of the new method to image compression and robot control are being investigated. Furthermore, a new type of radial basis function network related to (Moody & Darken [1989]) is being explored, which is based on the Growing Cell Structures.

Figure 3: Typical simulation results for the model of Kohonen and the two versions of the Growing Cell Structures. The network size is 100 in every case. The probability distributions are described in fig. 1. a) Kohonen Feature Map (KFM). For distributions B and C the fixed network structure leads to long connections and neurons in regions with zero probability density. b) Growing Cell Structures, distribution modelling variant (GCS-1). The growth process combined with occasional removal of "superfluous" neurons has led to several subnetworks for distributions B and C. For distribution B roughly half of the neurons are used to model either of the squares. This corresponds well to the underlying probability density. c) Growing Cell Structures, error minimizing variant (GCS-2). The difference to the previous variant can be seen best for distribution B, where only a few neurons are used to cover the small square.

REFERENCES

Bauer, H.-U. & K. Pawelzik [1992], "Quantifying the neighborhood preservation of self-organizing feature maps," IEEE Transactions on Neural Networks 3, 570-579.
Favata, F. & R. Walker [1991], "A study of the application of Kohonen-type neural networks to the travelling Salesman Problem," Biological Cybernetics 64, 463-468.
Fritzke, B. [1991], "Unsupervised clustering with growing cell structures," Proc. of IJCNN-91, Seattle, 531-536 (Vol. II).
Fritzke, B. [1992], "Growing cell structures - a self-organizing network in k dimensions," in Artificial Neural Networks II, I. Aleksander & J. Taylor, eds., North-Holland, Amsterdam, 1051-1056.
Fritzke, B. & P. Wilke [1991], "FLEXMAP - A neural network with linear time and space complexity for the traveling salesman problem," Proc. of IJCNN-91, Singapore, 929-934.
Kohonen, T. [1982], "Self-organized formation of topologically correct feature maps," Biological Cybernetics 43, 59-69.
Moody, J. & C. Darken [1989], "Fast Learning in Networks of Locally-Tuned Processing Units," Neural Computation 1, 281-294.
Obermayer, K., H. Ritter & K. Schulten [1990], "Large-scale simulations of self-organizing neural networks on parallel computers: application to biological modeling," Parallel Computing 14, 381-404.
Ritter, H.J. & T. Kohonen [1989], "Self-Organizing Semantic Maps," Biological Cybernetics 61, 241-254.
Schweizer, L., G. Parladori, G.L. Sicuranza & S. Marsi [1991], "A fully neural approach to image compression," in Artificial Neural Networks, T. Kohonen, K. Mäkisara, O. Simula & J. Kangas, eds., North-Holland, Amsterdam, 815-820.
Model-based Bayesian inference of neural activity and connectivity from all-optical interrogation of a neural circuit

Laurence Aitchison, University of Cambridge, Cambridge, CB2 1PZ, UK, [email protected]
Adam Packer, University College London, London, WC1E 6BT, UK, [email protected]
Lloyd Russell, University College London, London, WC1E 6BT, UK, [email protected]
Jinyao Yan, Janelia Research Campus, Ashburn, VA 20147, [email protected]
Michael Häusser, University College London, London, WC1E 6BT, UK, [email protected]
Philippe Castonguay, Janelia Research Campus, Ashburn, VA 20147, [email protected]
Srinivas C. Turaga, Janelia Research Campus, Ashburn, VA 20147, [email protected]

Abstract

Population activity measurement by calcium imaging can be combined with cellular resolution optogenetic activity perturbations to enable the mapping of neural connectivity in vivo. This requires accurate inference of perturbed and unperturbed neural activity from calcium imaging measurements, which are noisy and indirect, and can also be contaminated by photostimulation artifacts. We have developed a new fully Bayesian approach to jointly inferring spiking activity and neural connectivity from in vivo all-optical perturbation experiments. In contrast to standard approaches that perform spike inference and analysis in two separate maximum-likelihood phases, our joint model is able to propagate uncertainty in spike inference to the inference of connectivity and vice versa. We use the framework of variational autoencoders to model spiking activity using discrete latent variables, low-dimensional latent common input, and sparse spike-and-slab generalized linear coupling between neurons. Additionally, we model two properties of the optogenetic perturbation: off-target photostimulation and photostimulation transients. Using this model, we were able to fit models on 30 minutes of data in just 10 minutes. We performed an all-optical circuit mapping experiment in primary visual cortex of the awake mouse, and use our approach to predict neural connectivity between excitatory neurons in layer 2/3. Predicted connectivity is sparse and consistent with known correlations with stimulus tuning, spontaneous correlation and distance.

1 Introduction

Quantitative mapping of connectivity is an essential prerequisite for understanding the operation of neural circuits. Thus far, it has only been possible to perform neural circuit mapping by using electrophysiological [1, 2] or electron-microscopic [3, 4] techniques. In addition to being extremely
Third, there is a transient artifact from the laser pulse used for photostimulation which contaminates the activity imaging, preventing accurate estimates of changes in neural activity at the precise time of the perturbation, when accurate activity estimates are most useful. Fourth, the readout of activity in the stimulated neurons, and their downstream neighbors is a noisy flourescence measurement of the intracellular calcium concentration, which is itself an indirect measure of spiking activity. Fifth, the synaptic input from one neuron is rarely strong enough to generate action potentials on its own. Thus the optogenetic perturbation of single neurons is unlikely to generate changes in the suprathreshold activity of post-synaptic neurons which can be detected via calcium imaging on every trial. Highly sensitive statistical tools are needed to infer neural connectivity in the face of these unique challenges posed by modern all-optical experimental technology. To solve this problem, we develop a global Bayesian inference strategy, jointly inferring a distribution over spikes and unknown connections, and thus allowing uncertainty in the spikes to influence the inferred connections and vice versa. In the past, such methods have not been used because they were computationally intractable, but they are becoming increasingly possible due to three recent advances: the development of GPU computing [6], modern automatic differentiation libraries such as Tensorflow [7], and recent developments in variational autoencoders, including the reparameterization trick [8, 9]. By combining these techniques, we are able to perform inference in a large-scale model of calcium imaging data, including spike inference, photostimulation, low-dimensional activity, and generalized linear synaptic connectivity. 1.1 Prior work Bayesian models have been proposed to infer connectivity from purely observational neural datasets [10, 11], however such approaches do not recover connectivity in the common setting where the population neural activity is low-rank or driven by external unobserved inputs. Perturbations are essential to uncover connectivity in such scenarios, and a combination of electrophysiological readout and optogenetic perturbation has been used successfully [12, 13]. The analysis of such data is far simpler than our setting as electrophysiological measurements of the sub-threshold membrane potential of a post-synaptic neuron can enable highly accurate detection of strong and weak incoming connections. In contrast, we are concerned with the more challenging setting of noisy calcium imaging measurements of suprathreshold post-synaptic spiking activity. Further, we are the first to accurately model artifacts associated with 2-photon optogenetic photostimulation and simultaneous calcium imaging, while performing joint inference of spiking neural activity and sparse connectivity. 2 2.1 Methods Variational Inference We seek to perform Bayesian inference, i.e. to compute the posterior over latent variables, z, (e.g. weights, spikes) given data, x (i.e. the fluorescence signal), P (z|x) = P (x|z) P (z) , P (x) and, for model comparison, we would like to compute the model evidence, Z P (x) = dz P (x|z) P (z) . (1) (2) However, the computation of these quantities is intractable, and this intractability has hindered the application of Bayesian techniques to large-scale data analysis, such as calcium imaging. Variational 2 A B Rest of brain l(t ? 1) Stim laser e(t ? 
Figure 1: An overview of the data and generative model. A. A schematic diagram displaying the experimental protocol. All cells express a GCaMP calcium indicator, which fluoresces in response to spiking activity. A large subset of the excitatory cells also express channelrhodopsin, which, in combination with two-photon photostimulation, allows cellular resolution activity perturbations [5]. B. A simplified generative model, omitting unknown weights. The observed fluorescence signal, f, depends on spikes, s, at past times, and the external optogenetic perturbation, e (to account for the small photostimulation transient, which lasts only one or two frames). The spikes depend on previous spikes, external optogenetic stimulation, e, and on a low-dimensional dynamical system, l, representing the inputs coming from the rest of the brain. C. Results for spike inference based on spontaneous data. Gray gives the original (very noisy) fluorescence trace, black gives the reconstructed denoised fluorescence trace, based on inferred spikes, and red gives the inferred probability of spiking. D. Average fluorescence signal for cells that are directly perturbed (triggered on the perturbation). We see a large increase and slow decay in the fluorescence signal, driven by spiking activity. The small peaks at 0.5 s intervals are photostimulation transients. E. As in C, but for perturbed data. Note the small peaks in the reconstruction coming from the modelled photostimulation transients.

Variational inference is one technique for circumventing this intractability [8, 9, 14], which, in combination with recent work in deep neural networks (DNNs), has proven extremely effective [8, 9]. In variational inference, we create a recognition model/approximate posterior, Q(z|x), intended to approximate the posterior, P(z|x) [14]. This recognition model allows us to write down the evidence lower bound objective (ELBO),

log P(x) ≥ L = E_{Q(z|x)}[log P(x, z) − log Q(z|x)],   (3)

and optimizing this bound allows us to improve the recognition model, to the extent that, if Q(z|x) is sufficiently flexible, the bound becomes tight and the recognition model will match the posterior, Q(z|x) = P(z|x).

2.2 Our model

At the broadest possible level, our experimental system has known inputs, observed outputs, and unknown latent variables. The input is optogenetic stimulation of randomly selected cells (Fig. 1A; i.e. we target the cell with a laser, which usually causes it to spike), represented by a binary vector, e_t, which is 1 if the cell is directly targeted, and 0 if it is not directly targeted. There are three unknown latent variables/parameters over which we infer an approximate posterior. First, there is a synaptic weight matrix, W^ss, describing the underlying connectivity between cells. Second, there is a low-dimensional latent common input, l_t, which represents input from other brain regions, and changes slowly over time (Fig. 1B). Third, there is a binary latent, s_t, representing spiking activity, which depends on previous spiking activity through the synaptic weight matrix, optogenetic stimulation and the low-rank latent (Fig. 1B).
Finally, we observe spiking activity indirectly through a fluorescence signal, f_t, which is in essence a noisy convolution of the underlying spikes. As such, the observations and latents can be written as x = f and z = {l, s, W^ss}, respectively. Substituting these into the ELBO (Eq. 3), the full variational objective becomes

L = E_{Q(s,l,W^ss|f,e)}[log P(f, s, l, W^ss|e) − log Q(s, l, W^ss|f, e)],   (4)

where we have additionally conditioned everything on the known inputs, e.

2.3 Generative model

Neglecting initial states, we can factorize the generative model as

P(f, s, l, W^ss|e) = P(W^ss) ∏_t P(l_t|l_{t−1}) P(s_t|s_{t−1:0}, e, l_t, W^ss) P(f_t|s_{t:0}, e_t),   (5)

i.e., we first generate a synaptic weight matrix, W^ss, then we generate the latent low-rank states, l_t, based on their values at the previous time-step, then we generate the spikes based on past spikes, the synaptic weights, optogenetic stimulation, e, and the low-rank latents, and finally, we generate the fluorescence signal based on past spiking and optogenetic stimulation.

To generate synaptic weights, we assume a sparse prior, where there is some probability p that the weight is generated from a zero-mean Gaussian, and there is probability 1 − p that the weight is zero,

P(W^ss_ij) = (1 − p) δ(W^ss_ij) + p N(W^ss_ij; 0, σ²),   (6)

where δ is the Dirac delta, we set p = 0.1 based on prior information, and learn σ².

To generate the low-rank latent states, we use a simple dynamical system,

P(l_t|l_{t−1}) = N(l_t; W^ll l_{t−1}, Σ_l),   (7)

where W^ll is the dynamics matrix, and Σ_l is a diagonal covariance matrix, representing independent Gaussian noise.

To generate spikes, we use

P(s_t|s_{t−1:0}, e, l_t, W^ss) = Bernoulli(s_t; σ(u_t)),   (8)

where σ is a vectorised sigmoid, σ_i(x) = 1/(1 + e^{−x_i}), and the cell's inputs, u_t, are given by

u_t = W^se e_t + W^ss Σ_{t′=t−4}^{t−1} τ^s_{t−t′} s_{t′} + W^sl l_t + b_s.   (9)

The first term represents the drive from optogenetic input, e_t (to reiterate, a binary vector representing whether a cell was directly targeted on this timestep), coupled by weights, W^se, representing the degree to which cells surrounding the targeted cell also respond to the optogenetic stimulation. Note that W^se is structured (i.e. written down in terms of other parameters), and we discuss this structure later. The second term represents synaptic connectivity: how spikes at previous timesteps, s_{t′}, might influence spiking at this timestep, via a rapidly-decaying temporal kernel, τ^s, and a synaptic weight matrix, W^ss. The third term represents the input from other brain regions by allowing the low-dimensional latents, l_t, to influence spiking activity according to a weight matrix, W^sl.

Finally, to generate the observed fluorescence signal from the spiking activity, we use

P(f_t) = N(f_t; r_t, Σ_f),   (10)

where Σ_f is a learned, diagonal covariance matrix, representing independent noise in the fluorescence observations. For computational tractability, the mean fluorescence signal, or "reconstruction", is simply a convolution of the spikes,

r_t = A ⊙ Σ_{t′=0}^{t} τ_{t−t′} s_{t′} + b_r + W^re e_t,   (11)

where ⊙ represents an entrywise, or Hadamard, product. This expression takes a binary vector representing spiking activity, s_{t′}, convolves it with a temporal kernel, τ, representing temporal dynamics of fluorescence responses, then scales it with the diagonal matrix, A, and adds a bias, b_r. The last term models an artifact in which optogenetic photostimulation, represented by a binary vector e_t describing whether a cell was directly targeted by the stimulation laser on that timestep, directly affects the imaging system according to a weight matrix W^re. The temporal kernel, τ_{c,t−t′}, is a sum of two exponentials unique to each cell,

τ_{c,t} = e^{−t/τ^decay_c} − e^{−t/τ^rise_c},   (12)

as is typical in e.g. [15].
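The generative model of Eqs. (5)-(12) is easy to simulate forward, which is also, in spirit, how the simulated data of Sec. 3.8 are produced. The sketch below is ours: all numerical values (network size, kernel time constants, noise levels) are illustrative assumptions, A is taken to be the identity, and the photostimulation terms are set to zero.

    import numpy as np

    rng = np.random.default_rng(1)
    N, D, T = 20, 2, 500                      # cells, latent dims, timesteps
    p, sig = 0.1, 0.5                         # spike-and-slab prior, Eq. (6)
    W_ss = rng.normal(0.0, sig, (N, N)) * (rng.random((N, N)) < p)
    W_ll = 0.95 * np.eye(D)                   # latent dynamics, Eq. (7)
    W_sl = rng.normal(0.0, 0.5, (N, D))
    b_s = -3.0 * np.ones(N)
    tau_s = 0.5 ** np.arange(1, 5)            # synaptic kernel (assumed shape)
    t_ax = np.arange(T)
    kern = np.exp(-t_ax / 10.0) - np.exp(-t_ax / 1.0)   # Eq. (12)

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    l = np.zeros((T, D))
    s = np.zeros((T, N))
    for t in range(1, T):
        l[t] = W_ll @ l[t - 1] + 0.1 * rng.standard_normal(D)   # Eq. (7)
        u = W_sl @ l[t] + b_s                 # Eq. (9), no photostimulation
        for k, t0 in enumerate(range(t - 1, max(t - 5, 0) - 1, -1)):
            u += tau_s[k] * (W_ss @ s[t0])
        s[t] = rng.random(N) < sigmoid(u)     # Eq. (8)

    # Eq. (11) with A = I, b_r = 0, W_re = 0, then Eq. (10) with
    # independent Gaussian noise of standard deviation 0.2.
    r = np.stack([np.convolve(s[:, c], kern)[:T] for c in range(N)], axis=1)
    f = r + 0.2 * rng.standard_normal((T, N))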
2.4 Recognition model

The recognition model factorises similarly,

Q(s, l, W^ss|f, e) = Q(W^ss) Q(s|f, e) Q(l|f).   (13)

To approximate the posterior over weights we use

Q(W^ss_ij) = (1 − p_ij) δ(W^ss_ij) + p_ij N(W^ss_ij; μ_ij, σ²_ij),   (14)

where p_ij is the inferred probability that the weight is non-zero, and μ_ij and σ²_ij are the mean and variance of the inferred distribution over the weight, given that it is non-zero.

As a recognition model for spikes, we use a multi-layer perceptron to map from the fluorescence signal back to an inferred probability of spiking,

Q(s(t)|v(t)) = Bernoulli(s(t); σ(v(t))),   (15)

where v(t) depends on the fluorescence trace and the optogenetic input,

v(t) = MLP_s(f(t − T : t + T)) + D_e W^se e(t) + b_s.   (16)

Here, D_e is a diagonal matrix scaling the external input, and MLP(f(t − T : t + T)) is a neural network that, for each cell, takes a window of the fluorescence trace from time t − T to t + T (for us, T = 100 frames, or about 3 seconds), linearly maps this window onto 20 features, then maps those 20 features through 2 standard neural-network layers with 20 units and Elu non-linearities [16], and finally linearly maps to a single value. To generate the low-rank latents, we use the same MLP, but allow for a different final linear mapping from 20 features to a single output,

Q(l(t)|f) = N(l(t); W^fl MLP_l(f(t − T : t + T)), Σ_l).   (17)

Here, we use a fixed diagonal covariance, Σ_l, and we use W^fl to reduce the dimensionality of the MLP output to the number of latents.

2.5 Gradient-based optimization of generative and recognition model parameters

We used the automatic differentiation routines embedded within TensorFlow to differentiate the ELBO with respect to the parameters of both the generative and recognition models,

L = L(τ, W^ll, Σ_l, W^sl, b_s, Σ_f, τ^decay_c, τ^rise_c, b_r, W^re, p_ij, μ_ij, σ²_ij, D_e, W^fl, MLP, resp_i, σ_k),   (18)

where the final two variables are defined later. We then used Adam [17] to perform the optimization. Instead of using minibatches consisting of multiple short time-windows, we used a single, relatively large time-window (of 1000 frames, or around 30 s), which minimized any edge-effects at the start or end of the time-window.
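To make the recognition side of the model concrete, here is a minimal NumPy sketch of the forward pass of Eqs. (15)-(16) for one timestep. In the paper the parameters are trained through the ELBO with Adam in TensorFlow; here they are plain arrays, and the parameter container and its shapes are our own assumptions.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def elu(x):
        return np.where(x > 0, x, np.expm1(x))

    def spike_recognition(f_window, params, e_t, W_se, d_e, b_s):
        # f_window: (cells, 2T+1) window of each cell's fluorescence trace.
        # Eq. (16): a linear map to 20 features, two Elu layers of 20
        # units, a final linear map to a scalar, plus the scaled
        # optogenetic input (d_e holds the diagonal of D_e).
        h = f_window @ params["W0"]                 # -> (cells, 20)
        h = elu(h @ params["W1"] + params["b1"])
        h = elu(h @ params["W2"] + params["b2"])
        v = h @ params["w3"] + d_e * (W_se @ e_t) + b_s
        return sigmoid(v)                           # Eq. (15): Q(s_t = 1)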
3 Results

3.1 All-optical circuit mapping experimental protocol

We used a virus to express GCaMP6s pan-neuronally in layer 2/3 of mouse primary visual cortex (V1), and co-expressed C1V1 in excitatory neurons of the same layer. The mouse was awake, head-fixed and on a treadmill. As in [5], we used a spatial light modulator to target 2-photon excitation of the C1V1 opsin in a subset of neurons, while simultaneously imaging neural activity in the local circuit by 2-photon calcium imaging of GCaMP6s. With this setup, we designed an experimental protocol to facilitate discovery of a large portion of the connections within a calcium-imaging field of view. In particular, twice every second we selected five cells at random, stimulated them, observed the activity in the rest of the network, and used this information to infer whether the stimulated cells projected to any of the other cells in the network (Fig. 1A). The optogenetic perturbation experiment consisted of 7200 trials and lasted one hour. We also mapped the orientation and direction tuning properties of the imaged neurons, and separately recorded spontaneous neural activity for 40 minutes. Our model was able to infer spikes in spontaneous data (Fig. 1C), and in photostimulation data, was able to both infer spikes and account for photostimulation transients (Fig. 1D,E).

Figure 2: Modeling off-target photostimulation, in which stimulating at one location activates surrounding cells. A. The change in average fluorescence based on 500 ms just before and just after stimulation (Δf_c) for photostimulation of a target at a specified distance [5]. B. The modelled distance-dependent activation induced by photostimulation. The spatial extent of modelled off-target stimulation is broadly consistent with the raw data in A. Note that as each cell has a different spatial absorption profile and responsiveness, modelled stimulation is not a simple function of distance from the target cell. C. Modelled off-target photostimulation resulting from stimulation of an example cell.

Figure 3: Inferred low-rank latent activity. A. Time course of l_t for perturbed data. The different lines correspond to different modes. B. The projection weights from the first latent onto cells, where cells are plotted according to their locations on the imaging plane. C. As B but for the second latent. Note that all projection weights are very close to 0, so the points are all gray.

3.2 Inferring the extent of off-target photostimulation

Since photostimulation may also directly excite off-target neurons, we explicitly modelled this process (Fig. 2A). We used a sum of five Gaussians with different scales, σ_k, to flexibly model distance-dependent stimulation,

W^se_ij = resp_i Σ_{k=1}^{5} exp(−d²_i(x_j) / (2σ²_k)),   (19)

where x_j describes the x, y position of the "target" cell j, and each cell receiving off-target stimulation has its own degree of responsiveness, resp_i, and a metric, d_i(x_j), describing that cell's response to light stimulation in different spatial locations. The metric allows for stimulation to take on an elliptic pattern (given by P_i), and have a shifted center (given by x̃_i),

d²_i(x_j) = (x_j − x̃_i)^T P_i (x_j − x̃_i).   (20)

After inference, this model gives a similar spatial distribution of perturbation-triggered activity (Fig. 2B). Furthermore, it should be noted that because each cell has its own responsiveness and spatial light absorption profile, if we stimulate in one location, a cell's responsiveness is not a simple function of distance (Fig. 2B,C). Finally, we allow small modifications around this strict spatial profile using a dense weight matrix.
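A direct transcription of Eqs. (19)-(20) is short; the sketch below is ours, and we read the exponent as negative, as in a standard Gaussian (the sign is not visible in the typeset equation).

    import numpy as np

    def off_target_weights(xy_targets, x_center, resp, P, sigmas):
        # W[i, j], Eq. (19): response of cell i to photostimulation of
        # target j, a sum of five Gaussians of the elliptic, shifted
        # metric of Eq. (20); x_center[i] is the shifted center of cell i.
        n_cells, n_targets = len(x_center), len(xy_targets)
        W = np.zeros((n_cells, n_targets))
        for i in range(n_cells):
            for j in range(n_targets):
                d = xy_targets[j] - x_center[i]
                d2 = d @ P[i] @ d                        # Eq. (20)
                W[i, j] = resp[i] * np.exp(-d2 / (2.0 * sigmas ** 2)).sum()
        return W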
3.3 Joint inference of latent common inputs

Our model was able to jointly infer neural activity, latent common inputs (Fig. 3A) and sparse synaptic connectivity. As expected, we found one critical latent variable describing overall activation of all cells (Fig. 3B) [18], and a second, far less important latent (Fig. 3C). Given the considerable difference in magnitude between the impact of these two latents on the system, we can infer that only one latent variable is required to describe the system effectively. However, further work is needed to implement flexible yet interpretable low-rank latent variables in this system.

Figure 4: Performance of various models for spontaneous (A) and perturbed (B) data. We consider "Sparse GLM + LR" (the full model), "Dense GLM + LR" (the full model, but with dense GLM weights), "LR" (a model with no GLM, only the low-rank component), "Independent" (a model with no higher-level structure) and finally "Separate" (the spikes are extracted using the independent model, then the full model is fitted to those spikes).

3.4 The model recovers known properties of biological activity

The ELBO forms only a lower bound on the model evidence, so it is possible for models to appear better or worse simply because of changes in the tightness of the bound. As such, it is important to check that the learned model recovers known properties of biological connectivity. We thus compared a group of models, including the full model, a model with dense (as opposed to the usual sparse) synaptic connectivity, a model with only low-rank latents, and a simple model with no higher-level structure, for both spontaneous (Fig. 4A) and perturbed (Fig. 4B) data. We found that the sparse GLM offered a dramatic improvement over the dense GLM, which in turn offered little benefit over a model with only low-rank activity. (Note the reported values are ELBO per cell per timestep, so must be multiplied by 348 cells and around 100,000 time-steps to obtain the raw ELBO values, which are then highly significant.) Thus, the ELBO is able to recover features of real biological connectivity (biological connectivity is also sparse [1, 2]).

3.5 Joint inference is better than a "pipeline"

Furthermore, we compared our joint approach, where we jointly infer spikes, low-rank activity, and weights, to a more standard "pipeline" in which one infers spikes using a simple Bayesian model lacking low-rank activity and GLM connectivity, then infers the low-rank activity and weights based on those spikes, similar to [11]. We found that performing inference jointly (allowing information about low-rank activity, GLM connectivity and external stimulation to influence spike inferences) greatly improved the quality of our inferences for both spontaneous (Fig. 4A) and perturbed data (Fig. 4B). This improvement is entirely expected within the framework of variational inference, as the "pipeline" has two objectives, one for spike extraction and another for the high-level generative model, and without the single, unified objective, it is even possible for the ELBO to decrease with more training (Fig. 4B).

3.6 The inferred sparse weights are consistent with known properties of neural circuits

Next, we plotted the synaptic "GLM" weights for spontaneous (Fig. 5A-D) and perturbed (Fig. 5E-H) data. These weights are negatively correlated with distance (p < 0.0001; Fig. 5B,F), suggesting that short-range connections are predominantly excitatory (though this may be confounded by cells overlapping, such that activity in one cell is recorded as activity in a different cell). The short-range excitatory connections can be seen as the diagonal red bands in Fig. 5A,E, as the neurons are roughly sorted by proximity, with the first 248 being perturbed and the remainder never being perturbed. The weights are strongly correlated with spontaneous correlation (p < 0.0001; Fig. 5C,G), as measured using raw fluorescence traces; a result which is expected, given that the model should use these weights to account for some aspects of the spontaneous correlation. Finally, the weights are positively correlated with signal correlation (p < 0.0001; Fig. 5D,H), as measured using 8 drifting gratings, a finding that is consistent with previous results [1, 2].
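The three comparisons above reduce to correlations between the posterior-mean weights and per-pair covariates; a minimal sketch (assuming the covariate matrices have already been computed):

    import numpy as np
    from scipy.stats import pearsonr

    def weight_correlates(w_mean, dist, spont_corr, signal_corr):
        # Correlate the off-diagonal posterior-mean weights with pairwise
        # distance and with spontaneous / signal correlations; pearsonr
        # returns the correlation coefficient and its p-value.
        mask = ~np.eye(w_mean.shape[0], dtype=bool)
        w = w_mean[mask]
        return {name: pearsonr(w, x[mask])
                for name, x in (("distance", dist),
                                ("spontaneous", spont_corr),
                                ("signal", signal_corr))}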
The short range excitatory connections can be seen as the diagonal red bands in Fig. 5AE as the neurons are roughly sorted by proximity, with the first 248 being perturbed, and the remainder never being perturbed. The weights are strongly correlated with spontaneous correlation (p < 0.0001; Fig. 5CG), as measured using raw fluorescence traces; a result which is expected, given that the model should use these weights to account for some aspects of the spontaneous correlation. Finally, the weights are positively correlated with signal correlation (p < 0.0001; Fig. 5DH), as measured using 8 drifting gratings, a finding that is consistent with previous results [1, 2]. 7 B C 1 Weight 1 Weight 1 Post index 200 D Weight A 0 0 0 0 E 200 Pre index 0 F 200 400 Distance G 1 0 -0.5 0.0 0.5 Signal corr. H 1 Weight 1 Weight Post index 200 0.0 0.5 Spont. corr. Weight 0 0 0 0 0 200 Pre index 0 200 400 Distance 0.0 0.5 Spont. corr. -0.5 0.0 0.5 Signal corr. Figure 5: Inferred connection weights. A. Weight matrix inferred from spontaneous data (in particular, the expected value of the weight, under the recognition model, with red representing positive connectivity, and blue representing negative connectivity), plotted against distance (B), spontaneous correlation (C), and signal correlation (D). E?H. As A?D for perturbed data. 3.7 Perturbed data supports stronger inferences than spontaneous data Consistent with our expectations, we found that perturbations considerably increased the number of discovered connections. Our spike-and-slab posterior over weights can be interpreted to yield an estimated confidence probability that a given connection exists. We can use this probability to estimate the number of highly confident connections. In particular, we were able to find 50% more connections in the perturbed dataset than the spontaneous dataset, with a greater than 0.95 probability (1940 vs 1204); twice times as many highly confident connections with probability 0.99 or higher (1107 vs 535); and five times as many with the probability 0.999 or higher (527 vs 101). These results highlight the importance of perturbations to uncovering connections which would otherwise have been missed when analyzing purely observational datasets. 3.8 Simulated data Using the above methods, it is difficult to assess the effectiveness of the model because we do not have ground truth. While the ideal approach would be to obtain ground-truth data experimentally, this is very difficult in practice. An alternative approach is thus to simulate data from the generative model, in which case the ground-truth weights are simply those used to perform the initial simulation. To perform a quantitative comparison, we used the correlation between a binary variable representing whether the true weights were greater than 0.1 (because it is extremely difficult to distinguish between zero, and very small but non-zero weights, and), and the inferred probability of the weight being greater than 0.1, based on a combination of the inferences over the discrete and continuous component. We chose a threshold of 0.1 because it was relatively small in comparison with the standard-deviation for the non-zero weights of around 0.4. We started by trying to replicate our experiments as closely as possible (Fig. 6), i.e. we inferred all the parameters, noise-levels, timescales, priors on weights etc. based on real data, and resampled the weight matrix based on the inferred prior over weights. 
We then considered repeating the same stimulation pattern 50 times (frozen), as against using 50 times more entirely random simulated data (unfrozen), and found that, as expected, using random stimulation patterns is more effective. As computational constraints prevent us from increasing the data further, we considered reducing the noise by a factor of 40 (low noise), and then additionally reduced the timescales of the calcium transients by a factor of 10 (fast decay), which improved the correlation to 0.85. These results indicate the model is functioning correctly, but raise issues for future work. In particular, the considerable improvement achieved by reducing the timescales indicates that careful modeling of the calcium transient is essential, and that faster calcium indicators have the potential to dramatically improve the ultimate accuracy of weight inferences.

Figure 6: Effectiveness of various variants of the model at finding the underlying ground-truth weights. The correlation compares a binary variable reporting whether the ground-truth weight is above or below 0.1 with a continuous measure reporting the inferred probability of the weight being larger than 0.1. The first condition, raw, uses simulated data that matches the real data as closely as possible, including the same length of photostimulated and spontaneous data as we obtained, and matching parameters such as the noise level to those used in data. The frozen/unfrozen conditions represent using 50 times more data, where, for the "frozen" condition, we repeat the same optogenetic stimulation 50 times, and for the "unfrozen" condition we always use fresh, randomly chosen stimulation patterns. The final pair of conditions are photostimulated data, with 50 times more unfrozen data. For the "low noise" condition we reduce the noise level by a factor of 40, and for the "fast decay" condition, we additionally reduce the calcium decay time constants by a factor of 10.

4 Discussion

We applied modern variational autoencoder and GPU computing techniques to create a fully Bayesian model of calcium imaging and perturbation data. This model simultaneously and efficiently extracted Bayesian approximate posteriors over spikes, the extent of two optogenetic perturbation artifacts, low-rank activity, and sparse synaptic (GLM) weights. This is the first model designed for perturbation data, and we are not aware of any other model which is able to extract posteriors over such a wide range of parameters with such efficiency. Our inferred weights are consistent with studies using electrophysiological means to measure connectivity in mouse V1 [1, 2]. Further, model selection gives biologically expected results, identifying sparseness, suggesting that these models are identifying biologically relevant structure in the data. However, simply identifying broad properties such as sparseness does not imply that our inferences about individual weights are correct: for this, we need validation using complementary experimental approaches. Finally, we have shown that recent developments in variational autoencoders make it possible to perform inference in "ideal" models: large-scale models describing noisy data-generating processes and complex biological phenomena simultaneously.

References

[1] H. Ko, S. B. Hofer, B. Pichler, K. A. Buchanan, P. J. Sjöström, and T. D.
Mrsic-Flogel, "Functional specificity of local synaptic connections in neocortical networks," Nature, vol. 473, no. 7345, pp. 87-91, 2011.
[2] L. Cossell, M. F. Iacaruso, D. R. Muir, R. Houlton, E. N. Sader, H. Ko, S. B. Hofer, and T. D. Mrsic-Flogel, "Functional organization of excitatory synaptic strength in primary visual cortex," Nature, vol. 518, no. 7539, pp. 399-403, 2015.
[3] S.-y. Takemura, A. Bharioke, Z. Lu, A. Nern, S. Vitaladevuni, P. K. Rivlin, W. T. Katz, D. J. Olbris, S. M. Plaza, P. Winston, T. Zhao, J. A. Horne, R. D. Fetter, S. Takemura, K. Blazek, L.-A. Chang, O. Ogundeyi, M. A. Saunders, V. Shapiro, C. Sigmund, G. M. Rubin, L. K. Scheffer, I. A. Meinertzhagen, and D. B. Chklovskii, "A visual motion detection circuit suggested by drosophila connectomics," Nature, vol. 500, pp. 175-181, Aug. 2013.
[4] W.-C. A. Lee, V. Bonin, M. Reed, B. J. Graham, G. Hood, K. Glattfelder, and R. C. Reid, "Anatomy and function of an excitatory network in the visual cortex," Nature, vol. 532, no. 7599, pp. 370-374, 2016.
[5] A. M. Packer, L. E. Russell, H. W. Dalgleish, and M. Häusser, "Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo," Nature Methods, vol. 12, no. 2, pp. 140-146, 2015.
[6] R. Raina, A. Madhavan, and A. Y. Ng, "Large-scale deep unsupervised learning using graphics processors," in Proceedings of the 26th Annual International Conference on Machine Learning, pp. 873-880, ACM, 2009.
[7] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, "TensorFlow: Large-scale machine learning on heterogeneous systems," 2015. Software available from tensorflow.org.
[8] D. P. Kingma and M. Welling, "Auto-encoding variational bayes," ICLR, 2014.
[9] D. J. Rezende, S. Mohamed, and D. Wierstra, "Stochastic backpropagation and approximate inference in deep generative models," ICML, 2014.
[10] Y. Mishchenko, J. T. Vogelstein, and L. Paninski, "A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data," The Annals of Applied Statistics, vol. 5, pp. 1229-1261, June 2011.
[11] D. Soudry, S. Keshri, P. Stinson, M.-H. Oh, G. Iyengar, and L. Paninski, "Efficient 'shotgun' inference of neural connectivity from highly sub-sampled activity data," PLoS Computational Biology, vol. 11, p. e1004464, Oct. 2015.
[12] A. M. Packer, D. S. Peterka, J. J. Hirtz, R. Prakash, K. Deisseroth, and R. Yuste, "Two-photon optogenetics of dendritic spines and neural circuits," Nat Methods, vol. 9, pp. 1202-1205, Dec. 2012.
[13] B. Shababo, B. Paige, A. Pakman, and L. Paninski, "Bayesian inference and online experimental design for mapping neural microcircuits," in Advances in Neural Information Processing Systems 26 (C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, eds.), pp. 1304-1312, Curran Associates, Inc., 2013.
[14] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, "An introduction to variational methods for graphical models," Machine Learning, vol. 37, no. 2, pp. 183-233, 1999.
[15] J. T. Vogelstein, A. M. Packer, T. A. Machado, T. Sippy, B. Babadi, R. Yuste, and L.
Paninski, "Fast nonnegative deconvolution for spike train inference from population calcium imaging," Journal of Neurophysiology, vol. 104, no. 6, pp. 3691-3704, 2010.
[16] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, "Fast and accurate deep network learning by exponential linear units (ELUs)," arXiv preprint arXiv:1511.07289, 2015.
[17] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," ICLR, 2015.
[18] M. Okun, N. A. Steinmetz, L. Cossell, M. F. Iacaruso, H. Ko, P. Barthó, T. Moore, S. B. Hofer, T. D. Mrsic-Flogel, M. Carandini, et al., "Diverse coupling of neurons to populations in sensory cortex," Nature, vol. 521, no. 7553, pp. 511-515, 2015.
[19] A. Mnih and D. J. Rezende, "Variational inference for Monte Carlo objectives," ICML, 2016.
[20] C. J. Maddison, A. Mnih, and Y. W. Teh, "The concrete distribution: A continuous relaxation of discrete random variables," arXiv preprint arXiv:1611.00712, 2016.
[21] E. Jang, S. Gu, and B. Poole, "Categorical reparameterization with Gumbel-Softmax," arXiv preprint arXiv:1611.01144, 2016.
6,568
6,941
Gaussian process based nonlinear latent structure discovery in multivariate spike train data

Anqi Wu, Nicholas A. Roy, Stephen Keeley, & Jonathan W. Pillow
Princeton Neuroscience Institute, Princeton University

Abstract

A large body of recent work focuses on methods for extracting low-dimensional latent structure from multi-neuron spike train data. Most such methods employ either linear latent dynamics or linear mappings from latent space to log spike rates. Here we propose a doubly nonlinear latent variable model that can identify low-dimensional structure underlying apparently high-dimensional spike train data. We introduce the Poisson Gaussian-Process Latent Variable Model (P-GPLVM), which consists of Poisson spiking observations and two underlying Gaussian processes: one governing a temporal latent variable and another governing a set of nonlinear tuning curves. The use of nonlinear tuning curves enables discovery of low-dimensional latent structure even when spike responses exhibit high linear dimensionality (e.g., as found in hippocampal place cell codes). To learn the model from data, we introduce the decoupled Laplace approximation, a fast approximate inference method that allows us to efficiently optimize the latent path while marginalizing over tuning curves. We show that this method outperforms previous Laplace-approximation-based inference methods in both the speed of convergence and accuracy. We apply the model to spike trains recorded from hippocampal place cells and show that it compares favorably to a variety of previous methods for latent structure discovery, including variational auto-encoder (VAE) based methods that parametrize the nonlinear mapping from latent space to spike rates with a deep neural network.

1 Introduction

Recent advances in multi-electrode array recording techniques have made it possible to measure the simultaneous spiking activity of increasingly large neural populations. These datasets have highlighted the need for robust statistical methods for identifying the latent structure underlying high-dimensional spike train data, so as to provide insight into the dynamics governing large-scale activity patterns and the computations they perform [1-4].

Recent work has focused on the development of sophisticated model-based methods that seek to extract a shared, low-dimensional latent process underlying population spiking activity. These methods can be roughly categorized on the basis of two basic modeling choices: (1) the dynamics of the underlying latent variable; and (2) the mapping from latent variable to neural responses. For choice of dynamics, one popular approach assumes the latent variable is governed by a linear dynamical system [5-10], while a second assumes that it evolves according to a Gaussian process, relaxing the linearity assumption and imposing only smoothness in the evolution of the latent state [1, 11-13]. For choice of mapping function, most previous methods have assumed a fixed linear or log-linear relationship between the latent variable and the mean response level [1, 5-8, 10, 11]. These methods seek to find a linear embedding of population spiking activity, akin to PCA or factor analysis. In many cases, however, the relationship between neural activity and the quantity it encodes can be highly nonlinear. Hippocampal place cells provide an illustrative example: if each discrete location in a 2D
environment has a single active place cell, population activity spans a space whose dimensionality is equal to the number of neurons; a linear latent variable model cannot find a reduced-dimensional representation of population activity, despite the fact that the underlying latent variable ("position") is clearly two-dimensional. Several recent studies have introduced nonlinear coupling between latent dynamics and firing rate [7, 9, 14]. These models use deep neural networks to parametrize the nonlinear mapping from latent space to spike rates, but often require repeated trials or long training sets. Table 1 summarizes these different model structures for latent neural trajectory estimation (including the original Gaussian process latent variable model (GPLVM) [15], which assumes Gaussian observations and does not produce spikes).

Figure 1: Schematic diagram of the Poisson Gaussian Process Latent Variable Model (P-GPLVM), illustrating multi-neuron spike train data generated by the model with a one-dimensional latent process. (Panels: latent process; log tuning curves; tuning curves; spike rates; spike trains; time axes in seconds.)

  model            latent   mapping function   output nonlinearity   observation
  PLDS [8]         LDS      linear             exp                   Poisson
  PfLDS [16]       LDS      neural net         exp                   Poisson
  LFADS [14]       RNN      neural net         exp                   Poisson
  GPFA [1]         GP       linear             identity              Gaussian
  P-GPFA [12, 13]  GP       linear             exp                   Poisson
  GPLVM [15]       GP       GP                 identity              Gaussian
  P-GPLVM          GP       GP                 exp                   Poisson

Table 1: Modeling assumptions of various latent variable models for spike trains.

In this paper, we propose the Poisson Gaussian process latent variable model (P-GPLVM) for spike train data, which allows for nonlinearity in both the latent state dynamics and in the mapping from the latent states to the spike rates. Our model posits a low-dimensional latent variable that evolves in time according to a Gaussian process prior; this latent variable governs firing rates via a set of non-parametric tuning curves, parametrized as exponentiated samples from a second Gaussian process, from which spikes are then generated by a Poisson process (Fig. 1).

The paper is organized as follows: Section 2 introduces the P-GPLVM; Section 3 describes the decoupled Laplace approximation for performing efficient inference for the latent variable and tuning curves; Section 4 describes tuning curve estimation; Section 5 compares P-GPLVM to other models using simulated data and hippocampal place-cell recordings, demonstrating the accuracy and interpretability of P-GPLVM relative to other methods.

2 Poisson-Gaussian process latent variable model (P-GPLVM)

Suppose we have simultaneously recorded spike trains from N neurons. Let Y ∈ R^{N×T} denote the matrix of spike count data, with neurons indexed by i ∈ (1, ..., N) and spikes counted in discrete time bins indexed by t ∈ (1, ..., T). Our goal is to construct a generative model of the latent structure underlying these data, which will here take the form of a P-dimensional latent variable x(t) and a set of mapping functions or tuning curves {h_i(x)}, i ∈ (1, ..., N), which map the latent variable to the spike rates of each neuron.

Latent dynamics. Let x(t) denote a (vector-valued) latent process, where each component x_j(t), j ∈ (1, ..., P), evolves according to an independent Gaussian process (GP),

    x_j(t) ~ GP(0, k_t),                                             (1)

with covariance function k_t(t, t') := cov(x_j(t), x_j(t')) governing how each scalar process varies over time.
Although we can select any valid covariance function for k_t, here we use the exponential covariance function, a special case of the Matérn kernel, given by k(t, t') = r exp(-|t - t'|/l), which is parametrized by a marginal variance r > 0 and length-scale l > 0. Samples from this GP are continuous but not differentiable, equivalent to a Gaussian random walk with a bias toward the origin, also known as the Ornstein-Uhlenbeck process [17].

The latent state x(t) at any time t is a P-dimensional vector that we will write as x_t ∈ R^{P×1}. The collection of such vectors over T time bins forms a matrix X ∈ R^{P×T}. Let x_j denote the j-th row of X, which contains the set of states in latent dimension j. From the definition of a GP, x_j has a multivariate normal distribution,

    x_j ~ N(0, K_t)                                                  (2)

with a T × T covariance matrix K_t generated by evaluating the covariance function k_t at all time bins in (1, ..., T).

Nonlinear mapping. Let h : R^P → R denote a nonlinear function mapping from the latent vector x_t to a firing rate λ_t. We will refer to h(x) as a tuning curve, although unlike traditional tuning curves, which describe firing rate as a function of some externally (observable) stimulus parameter, here h(x) describes firing rate as a function of the (unobserved) latent vector x. Previous work has modeled h with a parametric nonlinear function such as a deep neural network [9]. Here we develop a nonparametric approach using a Gaussian process prior over the log of h. The logarithm assures that spike rates are non-negative. Let f_i(x) = log h_i(x) denote the log tuning curve for the i-th neuron in our population, which we model with a GP,

    f_i(x) ~ GP(0, k_x),                                             (3)

where k_x is a (spatial) covariance function that governs smoothness of the function over its P-dimensional input space. For simplicity, we use the common Gaussian or radial basis function (RBF) covariance function: k_x(x, x') = ρ exp(-||x - x'||_2^2 / (2δ^2)), where x and x' are arbitrary points in latent space, ρ is the marginal variance and δ is the length scale. The tuning curve for neuron i is then given by h_i(x) = exp(f_i(x)).

Let f_i ∈ R^{T×1} denote a vector with the t-th element equal to f_i(x_t). From the definition of a GP, f_i has a multivariate normal distribution given the latent vectors at all time bins, x_{1:T} = {x_t}_{t=1}^T,

    f_i | x_{1:T} ~ N(0, K_x)                                        (4)

with a T × T covariance matrix K_x generated by evaluating the covariance function k_x at all pairs of latent vectors in x_{1:T}. Stacking f_i for N neurons, we formulate a matrix F ∈ R^{N×T} with f_i^T on the i-th row. The element on the i-th row and t-th column is f_{i,t} = f_i(x_t).

Poisson spiking. Lastly, we assume Poisson spiking given the latent firing rates, with spike rates in units of spikes per time bin. Let λ_{i,t} = exp(f_{i,t}) = exp(f_i(x_t)) denote the spike rate of neuron i at time t. The spike count of neuron i at t given the log tuning curve f_i and latent vector x_t is Poisson distributed as

    y_{i,t} | f_i, x_t ~ Poiss(exp(f_i(x_t))).                       (5)

In summary, our model is a doubly nonlinear Gaussian process latent variable model with Poisson observations (P-GPLVM). One GP is used to model the nonlinear evolution of the latent dynamic x, while a second GP is used to generate the log of the tuning curve f as a nonlinear function of x, which is then mapped to a tuning curve h via a nonlinear link function, e.g. an exponential function. Fig. 1 provides a schematic of the model.
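Because Eqs. 1-5 fully specify the generative process, sampling from the model is direct. The following is a minimal NumPy sketch of that process; all parameter values (T, N, P, r, l, ρ, δ) are illustrative assumptions, not settings from the paper.

```python
# Minimal simulation of the P-GPLVM generative model (Eqs. 1-5).
import numpy as np

rng = np.random.default_rng(0)
T, N, P = 100, 20, 1          # time bins, neurons, latent dimensions (illustrative)
r, l = 1.0, 10.0              # temporal kernel: marginal variance, length scale
rho, delta = 1.0, 0.5         # spatial RBF kernel hyperparameters

# Eqs. 1-2: latent paths x_j ~ N(0, K_t) with exponential (OU) covariance.
tgrid = np.arange(T)
K_t = r * np.exp(-np.abs(tgrid[:, None] - tgrid[None, :]) / l)
X = rng.multivariate_normal(np.zeros(T), K_t + 1e-6 * np.eye(T), size=P)  # P x T

# Eqs. 3-4: log tuning curves f_i ~ N(0, K_x), RBF kernel over the latent states.
sqd = ((X.T[:, None, :] - X.T[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
K_x = rho * np.exp(-sqd / (2 * delta ** 2))
F = rng.multivariate_normal(np.zeros(T), K_x + 1e-6 * np.eye(T), size=N)  # N x T

# Eq. 5: Poisson spike counts with rates lambda_{i,t} = exp(f_{i,t}).
Y = rng.poisson(np.exp(F))    # N x T spike-count matrix
```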
3 Inference using the decoupled Laplace approximation

For our inference procedure, we estimate the log of the tuning curve, f, as opposed to attempting to infer the tuning curve h directly. Once f is estimated, h can be obtained by exponentiating f. Given the model outlined above, the joint distribution over the observed data and all latent variables is written as

    p(Y, F, X, θ) = p(Y|F) p(F|X, ρ, δ) p(X|r, l)
                  = [Π_{i=1}^N Π_{t=1}^T p(y_{i,t}|f_{i,t})] [Π_{i=1}^N p(f_i|X, ρ, δ)] [Π_{j=1}^P p(x_j|r, l)],   (6)

where θ = {ρ, δ, r, l} is the hyperparameter set, references to which will now be suppressed for simplification. This is a Gaussian process latent variable model (GPLVM) with Poisson observations and a GP prior, and our goal is to now estimate both F and X. A standard Bayesian treatment of the GPLVM requires the computation of the log marginal likelihood associated with the joint distribution (Eq. 6). Both F and X must be marginalized out,

    log p(Y) = log ∫∫ p(Y, F, X) dX dF = log ∫ p(Y|F) [ ∫ p(F|X) p(X) dX ] dF.   (7)

However, propagating the prior density p(X) through the nonlinear mapping makes this inference difficult. The nested integral in (Eq. 7) contains X in a complex nonlinear manner, making analytical integration over X infeasible. To overcome these difficulties, we can use a straightforward MAP training procedure where the latent variables F and X are selected according to

    F_MAP, X_MAP = argmax_{F,X} p(Y|F) p(F|X) p(X).   (8)

Note that point estimates of the hyperparameters θ can also be found by maximizing the same objective function. As discussed above, learning X remains a challenge due to the interplay of the latent variables, i.e. the dependency of F on X. For our MAP training procedure, fixing one latent variable while optimizing for the other in a coordinate descent approach is highly inefficient, since the strong interplay of variables often means getting trapped in bad local optima. In variational GPLVM [18], the authors introduced a non-standard variational inference framework for approximately integrating out the latent variables X, then subsequently training a GPLVM by maximizing an analytic lower bound on the exact marginal likelihood. An advantage of the variational framework is the introduction of auxiliary variables which weaken the strong dependency between X and F. However, the variational approximation is only applicable to Gaussian observations; with Poisson observations, the integral over F remains intractable. In the following, we propose using variations of the Laplace approximation for inference.

3.1 Standard Laplace approximation

We first use Laplace's method to find a Gaussian approximation q(F|Y, X) to the true posterior p(F|Y, X), then do MAP estimation for X only. We employ the Laplace approximation for each f_i individually. Doing a second order Taylor expansion of log p(f_i|y_i, X) around the maximum of the posterior, we obtain a Gaussian approximation

    q(f_i|y_i, X) = N(f̂_i, A^{-1}),   (9)

where f̂_i = argmax_{f_i} p(f_i|y_i, X) and A = -∇∇ log p(f_i|y_i, X)|_{f_i = f̂_i} is the Hessian of the negative log posterior at that point. By Bayes' rule, the posterior over f_i is given by p(f_i|y_i, X) = p(y_i|f_i) p(f_i|X) / p(y_i|X), but since p(y_i|X) is independent of f_i, we need only consider the unnormalized posterior, defined as Ψ(f_i), when maximizing w.r.t. f_i. Taking the logarithm gives

    Ψ(f_i) = log p(y_i|f_i) + log p(f_i|X)
           = log p(y_i|f_i) - (1/2) f_i^T K_x^{-1} f_i - (1/2) log|K_x| + const.   (10)
Differentiating (Eq. 10) w.r.t. f_i, we obtain

    ∇Ψ(f_i) = ∇ log p(y_i|f_i) - K_x^{-1} f_i,   (11)
    ∇∇Ψ(f_i) = ∇∇ log p(y_i|f_i) - K_x^{-1} = -W_i - K_x^{-1},   (12)

where W_i = -∇∇ log p(y_i|f_i). The approximated log conditional likelihood on X (see Sec. 3.4.4 in [17]) can then be written as

    log q(y_i|X) = log p(y_i|f̂_i) - (1/2) f̂_i^T K_x^{-1} f̂_i - (1/2) log|I_T + K_x W_i|.   (13)

We can then estimate X as

    X_MAP = argmax_X Σ_{i=1}^N q(y_i|X) p(X).   (14)

When using standard LA, the gradient of log q(y_i|X) w.r.t. X should be calculated for a given posterior mode f̂_i. Note that not only is the covariance matrix K_x an explicit function of X, but f̂_i and W_i are also implicitly functions of X: when X changes, the optimum of the posterior f̂_i changes as well. Therefore, log q(y_i|X) contains an implicit function of X which does not allow for a straightforward closed-form gradient expression. Calculating numerical gradients instead yields a very inefficient implementation empirically.

3.2 Third-derivative Laplace approximation

One method to derive this gradient explicitly is described in [17] (see Sec. 5.5.1). We adapt their procedure to our setting to make the implicit dependency of f̂_i and W_i on X explicit. To solve (Eq. 14), we need to determine the partial derivative of our approximated log conditional likelihood (Eq. 13) w.r.t. X, given by the chain rule as

    d log q(y_i|X)/dX = [∂ log q(y_i|X)/∂X]_explicit + Σ_{t=1}^T [∂ log q(y_i|X)/∂f̂_{i,t}] [∂f̂_{i,t}/∂X].   (15)

When evaluating the second term, we use the fact that f̂_i is the posterior maximum, so ∂Ψ(f_i)/∂f_i = 0 at f_i = f̂_i, where Ψ(f_i) is defined in (Eq. 10). Thus the implicit derivatives of the first two terms in (Eq. 13) vanish, leaving only

    ∂ log q(y_i|X)/∂f̂_{i,t} = -(1/2) tr( (K_x^{-1} + W_i)^{-1} ∂W_i/∂f̂_{i,t} )
                             = -(1/2) [ (K_x^{-1} + W_i)^{-1} ]_{tt} ∂^3 log p(y_i|f̂_i) / ∂f̂_{i,t}^3.   (16)

To evaluate ∂f̂_{i,t}/∂X, we differentiate the self-consistent equation f̂_i = K_x ∇ log p(y_i|f̂_i) (setting (Eq. 11) to 0 at f̂_i) to obtain

    ∂f̂_i/∂X = (∂K_x/∂X) ∇ log p(y_i|f̂_i) + K_x [∂∇ log p(y_i|f̂_i)/∂f̂_i] (∂f̂_i/∂X)
             = (I_T + K_x W_i)^{-1} (∂K_x/∂X) ∇ log p(y_i|f̂_i),   (17)

where we use the chain rule ∂/∂X = (∂f̂_i/∂X)(∂/∂f̂_i) and ∂∇ log p(y_i|f̂_i)/∂f̂_i = -W_i from (Eq. 12). The desired implicit derivative is obtained by multiplying (Eq. 16) and (Eq. 17) to formulate the second term in (Eq. 15). We can now estimate X_MAP with (Eq. 14) using the explicit gradient expression in (Eq. 15). We call this method the third-derivative Laplace approximation (tLA), as it depends on the third derivative of the data likelihood term (see [17] for further details). However, there is a big computational drawback with tLA: for each step along the gradient we have just derived, the posterior mode f̂_i must be reevaluated. This method might lead to fast convergence theoretically, but the nested optimization makes for a very slow computation empirically.
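All three variants discussed here share the same inner step: finding the per-neuron posterior mode f̂_i of Eq. 10 by Newton iteration. Below is a minimal NumPy sketch of that step, assuming the Poisson likelihood of Eq. 5, so that ∇ log p(y|f) = y - exp(f) and W = diag(exp(f)); a numerically robust implementation would instead use the B = I + W^{1/2} K W^{1/2} parameterization of [17].

```python
import numpy as np

def laplace_mode(y, K_x, n_iter=50, tol=1e-8):
    """Newton iterations on Psi(f) from Eq. 10 for one neuron.

    Returns the posterior mode f_hat and W = -Hessian of log p(y|f) at f_hat.
    """
    T = len(y)
    f = np.zeros(T)
    for _ in range(n_iter):
        W = np.exp(f)                                       # Poisson: W = diag(exp(f))
        grad = (y - np.exp(f)) - np.linalg.solve(K_x, f)    # gradient of Psi, Eq. 11
        H = -np.diag(W) - np.linalg.inv(K_x)                # Hessian of Psi, Eq. 12
        f_new = f - np.linalg.solve(H, grad)                # Newton update
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    return f, np.diag(np.exp(f))
```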
3.3 Decoupled Laplace approximation

We propose a novel method to relax the Laplace approximation, which we refer to as the decoupled Laplace approximation (dLA). Our relaxation not only decouples the strong dependency between X and F, but also avoids the nested optimization of searching for the posterior mode of F within each update of X. As in tLA, dLA also assumes f̂_i to be a function of X. However, while tLA assumes f̂_i to be an implicit function of X, dLA constructs an explicit mapping between f̂_i and X. The standard Laplace approximation uses a Gaussian approximation for the posterior p(f_i|y_i, X) ∝ p(y_i|f_i) p(f_i|X) where, in this paper, p(y_i|f_i) is a Poisson distribution and p(f_i|X) is a multivariate Gaussian distribution.

We first do the same second order Taylor expansion of log p(f_i|y_i, X) around the posterior maximum to find q(f_i|y_i, X) as in (Eq. 9). Now if we approximate the likelihood distribution p(y_i|f_i) as a Gaussian distribution q(y_i|f_i) = N(m, S), we can derive its mean m and covariance S. If p(f_i|X) = N(0, K_x) and q(f_i|y_i, X) = N(f̂_i, A^{-1}), the relationship between two Gaussian distributions and their product allows us to solve for m and S from the relationship N(f̂_i, A^{-1}) ∝ N(m, S) N(0, K_x):

    A = S^{-1} + K_x^{-1},      f̂_i = A^{-1} S^{-1} m,
    S = (A - K_x^{-1})^{-1},    m = S A f̂_i.   (18)

m and S represent the components of the posterior terms, f̂_i and A, that come from the likelihood. Now when estimating X, we fix these likelihood terms m and S, and completely relax the prior, p(f_i|X). We are still solving (Eq. 14) w.r.t. X, but now q(f_i|y_i, X) has both mean and covariance approximated as explicit functions of X. Alg. 1 describes iteration k of the dLA algorithm, with which we can now estimate X_MAP.

Algorithm 1: Decoupled Laplace approximation at iteration k
Input: data observation y_i, latent variable X^{k-1} from iteration k-1
1. Compute the new posterior mode f̂_i^k and the precision matrix A^k by solving (Eq. 10) to obtain q(f_i|y_i, X^{k-1}) = N(f̂_i^k, (A^k)^{-1}).
2. Derive m^k and S^k (Eq. 18): S^k = (A^k - K_x^{-1})^{-1}, m^k = S^k A^k f̂_i^k.
3. Fix m^k and S^k and derive the new mean and covariance of q(f_i|y_i, X) as functions of X: A(X) = (S^k)^{-1} + K_x(X)^{-1}, f̂_i(X) = A(X)^{-1} (S^k)^{-1} m^k = A(X)^{-1} A^k f̂_i^k.
4. Since A = W_i + K_x^{-1}, we have W_i = (S^k)^{-1}, and can obtain the new approximated conditional distribution q(y_i|X) (Eq. 13) with f̂_i replaced by f̂_i(X).
5. Solve X^k = argmax_X Σ_{i=1}^N q(y_i|X) p(X).
Output: new latent variable X^k

Step 3 indicates that the posterior maximum for the current iteration, f̂_i(X) = A(X)^{-1} A^k f̂_i^k, is now explicitly updated as a function of X, avoiding the computationally demanding nested optimization of tLA. Intuitively, dLA works by finding a Gaussian approximation to the likelihood at f̂_i^k such that the approximated posterior of f_i, q(f_i|y_i, X), is a closed-form Gaussian distribution with mean and covariance as functions of X, ultimately allowing for the explicit calculation of q(y_i|X).

4 Tuning curve estimation

Given the estimated X̂ and f̂ from the inference, we can now calculate the tuning curve h for each neuron. Let x_{1:G} = {x_g}_{g=1}^G be a grid of G latent states, where x_g ∈ R^{P×1}. Correspondingly, for each neuron, we have the log of the tuning curve vector evaluated on the grid of latent states, f_grid ∈ R^{G×1}, with the g-th element equal to f(x_g). Similar to (Eq. 4), we can write down its distribution as

    f_grid | x_{1:G} ~ N(0, K_grid)   (19)

with a G × G covariance matrix K_grid generated by evaluating the covariance function k_x at all pairs of vectors in x_{1:G}. Therefore we can write a joint distribution for [f̂, f_grid] as

    [ f̂ ; f_grid ] ~ N( 0, [ K_x̂, k_grid ; k_grid^T, K_grid ] ).   (20)

K_x̂ ∈ R^{T×T} is a covariance matrix with elements evaluated at all pairs of estimated latent vectors x̂_{1:T} = {x̂_t}_{t=1}^T in X̂, and (k_grid)_{t,g} = k_x(x̂_t, x_g). Thus we have the following posterior distribution over f_grid:

    f_grid | f̂, x̂_{1:T}, x_{1:G} ~ N(μ(x_{1:G}), Σ(x_{1:G})),
    μ(x_{1:G}) = k_grid^T K_x̂^{-1} f̂,
    Σ(x_{1:G}) = diag(K_grid) - k_grid^T K_x̂^{-1} k_grid,   (21)

where diag(K_grid) denotes a diagonal matrix constructed from the diagonal of K_grid. Setting f̂_grid = μ(x_{1:G}), the spike rate vector

    λ_grid = exp(f̂_grid)   (22)

describes the tuning curve h evaluated on the grid x_{1:G}.
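To make the flow of Alg. 1 concrete, here is a minimal NumPy sketch of steps 1-4 of a single dLA iteration for one neuron, assuming the Poisson likelihood of Eq. 5 and reusing the laplace_mode routine sketched above; the outer maximization over X (step 5) would be run with any gradient-based optimizer and is omitted.

```python
import numpy as np

def dla_iteration(y, K_x_old):
    """One dLA update (Alg. 1, steps 1-4) for a single neuron.

    Returns the Gaussian likelihood surrogate (m, S), the fixed W = S^{-1},
    and a function giving the relaxed posterior mode f_hat(X) for any
    candidate kernel matrix K_x(X) (step 3).
    """
    # Step 1: Laplace mode and precision under the previous latents.
    f_hat, W = laplace_mode(y, K_x_old)          # W = diag(exp(f_hat))
    A = W + np.linalg.inv(K_x_old)               # precision, cf. Eq. 12

    # Step 2: Gaussian likelihood surrogate N(m, S) from Eq. 18.
    S = np.linalg.inv(A - np.linalg.inv(K_x_old))
    m = S @ A @ f_hat

    # Step 3: with m and S fixed, the posterior mode becomes an explicit
    # function of X through K_x(X); note S^{-1} m = A f_hat.
    def f_hat_of_X(K_x_new):
        A_new = np.linalg.inv(S) + np.linalg.inv(K_x_new)
        return np.linalg.solve(A_new, A @ f_hat)

    # Step 4: W_i = S^{-1} is held fixed when evaluating Eq. 13 for new X.
    W_fixed = np.linalg.inv(S)
    return m, S, W_fixed, f_hat_of_X
```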
5 Experiments

5.1 Simulation data

We first examine performance using two simulated datasets generated with different kinds of tuning curves, namely sinusoids and Gaussian bumps. We compare our algorithm (P-GPLVM) with PLDS, PfLDS, P-GPFA and GPLVM (see Table 1), using the tLA and dLA inference methods. We also include an additional variant on the Laplace approximation, which we call the approximated Laplace approximation (aLA), where we use only the explicit (first) term in (Eq. 15) to optimize over X for multiple steps given a fixed f̂_i. This allows for a coarse estimate of the gradient w.r.t. X for a few steps in X before re-estimation is necessary, partially relaxing the nested optimization so as to speed up the learning procedure.

Figure 2: Results from the sinusoid and Gaussian bump simulated experiments. A) and C) are estimated latent processes. B) and D) display the tuning curves estimated by different methods. E) shows the R^2 performances with error bars. F) shows the convergence R^2 performances of three different Laplace approximation inference methods, with error bars plotted every 10 seconds.

For comparison between models in our simulated experiments, we compute the R-squared (R^2) values between the known latent processes and the estimated latent processes. In all simulation studies, we generate a single trial per neuron with 20 simulated neurons and 100 time bins for a single experiment. Each experiment is repeated 10 times and results are averaged across the 10 repeats.

Sinusoid tuning curve: This simulation generates a "grid cell" type response. A grid cell is a type of neuron that is activated when an animal occupies any point on a grid spanning the environment [19]. When an animal moves in a one-dimensional space (P = 1), grid cells exhibit oscillatory responses. Motivated by the response properties of grid cells, the log firing rate of each neuron i is coupled to the latent process through a sinusoid with a neuron-specific phase φ_i and frequency ω_i,

    f_i = sin(ω_i x + φ_i).   (23)

We randomly generated φ_i uniformly from the region [0, 2π] and ω_i uniformly from [1.0, 4.0]. An example of the estimated latent processes versus the true latent process is presented in Fig. 2A. We used least-squares regression to learn an affine transformation from the latent space to the space of the true locations. Only P-GPLVM finds the global optimum by fitting the valley around t = 70. Fig. 2B displays the true tuning curves and the estimated tuning curves for neurons 4, 10, & 19 with PLDS, PfLDS, P-GPFA and P-GPLVM-dLA. For PLDS, PfLDS and P-GPFA, we replace the estimated f̂ with the observed spike count y in (Eq. 21), and treat the posterior mean as the tuning curve on a grid of latent representations. For P-GPLVM, the tuning curve is estimated via (Eq. 22).
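For reference, the grid-based tuning-curve readout of Eqs. 21-22 is just a GP posterior mean; a minimal sketch, assuming the RBF kernel k_x from Section 2 with illustrative hyperparameters:

```python
import numpy as np

def tuning_curve_on_grid(X_hat, f_hat, x_grid, rho=1.0, delta=0.5, jitter=1e-6):
    """Evaluate one neuron's tuning curve on a grid of latent states (Eqs. 21-22).

    X_hat: (T, P) estimated latent states; f_hat: (T,) estimated log tuning
    curve values; x_grid: (G, P) grid points. Returns the spike-rate vector (G,).
    """
    def rbf(A, B):
        sqd = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return rho * np.exp(-sqd / (2 * delta ** 2))

    K_xhat = rbf(X_hat, X_hat) + jitter * np.eye(len(X_hat))
    k_grid = rbf(X_hat, x_grid)                          # (T, G) cross-covariance
    f_grid = k_grid.T @ np.linalg.solve(K_xhat, f_hat)   # posterior mean, Eq. 21
    return np.exp(f_grid)                                # Eq. 22
```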
The R^2 performance is shown in the first column of Fig. 2E.

Deterministic Gaussian bump tuning curve: For this simulation, each neuron's tuning curve is modeled as a unimodal Gaussian bump in a 2D space, such that the log of the tuning curve, f, is a deterministic Gaussian function of x. Fig. 2C shows an example of the estimated latent processes. PLDS fits an overly smooth curve, while P-GPLVM can find the small wiggles that are missed by other methods. Fig. 2D displays the 2D tuning curves for neurons 1, 4, & 12 estimated by PLDS, PfLDS, P-GPFA and P-GPLVM-dLA. The R^2 performance is shown in the second column of Fig. 2E. Overall, P-GPFA has a quite unstable performance due to the ARD kernel function in the GP prior, potentially encouraging a bias toward smoothness even when the underlying latent process is actually quite non-smooth. PfLDS performs better than PLDS in the second case, but when the true latent process is highly nonlinear (sinusoid) and the single-trial dataset is small, PfLDS loses its advantage to stochastic optimization. GPLVM has a reasonably good performance with the nonlinearities, but is worse than P-GPLVM, which demonstrates the significance of using the Poisson observation model. For P-GPLVM, the dLA inference algorithm performs best overall w.r.t. both convergence speed and R^2 (Fig. 2F).

Figure 3: Results from the hippocampal data of two rats. A) and B) are estimated latent processes during a 1s recording period for the two rats. C) and D) show R^2 and PLL performance with error bars. E) and F) show the true tuning curves and the tuning curves estimated by P-GPLVM-dLA.

5.2 Application to rat hippocampal neuron data

Next, we apply the proposed methods to extracellular recordings from the rodent hippocampus. Neurons were recorded bilaterally from the pyramidal layer of CA3 and CA1 in two rats as they performed a spatial alternation task on a W-shaped maze [20]. We confine our analyses to simultaneously recorded putative place cells during times of active navigation. The total number of simultaneously recorded neurons ranged from 7-19 for rat 1 and 24-38 for rat 2. Individual trials of 50 seconds were isolated from 15-minute recordings, and binned at a resolution of 100 ms.

We used this hippocampal data to identify a 2D latent space using PLDS, PfLDS, P-GPFA, GPLVM and P-GPLVMs (Fig. 3), and compared these to the true 2D location of the rodent. For visualization purposes, we linearized the coordinates along the arms of the maze to obtain 1D representations.

Fig. 3A & B present two segments of 1s recordings for the two animals. The P-GPLVM results are smoother and recover short time-scale variations that PLDS ignores. The average R^2 performance for all methods for each rodent is shown in Fig.
3C & D, where P-GPLVM-dLA consistently performs the best. We also assessed the model fitting quality by doing prediction on a held-out dataset. We split all the time bins in each trial into training time bins (the first 90% of time bins) and held-out time bins (the last 10% of time bins). We first estimated the parameters for the mapping function or tuning curve in each model using spike trains from all the neurons within training time bins. Then we fixed the parameters and inferred the latent process using spike trains from 70% of the neurons within held-out time bins. Finally, we calculated the predictive log likelihood (PLL) for the other 30% of the neurons within held-out time bins given the inferred latent process. We subtracted the log-likelihood of the population mean firing rate model (single spike rate) from the predictive log likelihood divided by the number of observations, shown in Fig. 3C & D. Both P-GPLVM-aLA and P-GPLVM-dLA perform well. GPLVM has very negative PLL, omitted from the figures.

Fig. 3E & F present the tuning curves learned by P-GPLVM-dLA, where each row corresponds to a neuron. For our analysis we have the true locations x_true, the estimated locations x_P-GPLVM, a grid of G locations x_{1:G} distributed with the shape of the maze, the spike count observations y_i, and the estimated log tuning curves f̂_i for each neuron i. The light gray dots in the first column of Fig. 3E & F are the binned spike counts when mapping from the space of x_true to the space of x_{1:G}. The second column contains the binned spike counts mapped from the space of x_P-GPLVM to the space of x_{1:G}. The black curves in the first column are obtained by replacing x̂ and f̂ with x_true and y, respectively, using the predictive posterior in (Eq. 21) and (Eq. 22). The yellow curves in the second column are the estimated tuning curves obtained by using (Eq. 22) to compute λ_grid for each neuron. We can tell that the estimated tuning curves closely match the true tuning curves from the observations, discovering different responsive locations for different neurons as the rat moves.

6 Conclusion

We proposed a doubly nonlinear Gaussian process latent variable model for neural population spike trains that can identify nonlinear low-dimensional structure underlying apparently high-dimensional spike train data. We also introduced a novel decoupled Laplace approximation, a fast approximate inference method that allows us to efficiently maximize the marginal likelihood for the latent path while integrating over tuning curves. We showed that this method outperforms previous Laplace-approximation-based inference methods in both the speed of convergence and accuracy. We applied the model to both simulated data and spike trains recorded from hippocampal place cells and showed that it outperforms a variety of previous methods for latent structure discovery.

References

[1] BM Yu, JP Cunningham, G Santhanam, SI Ryu, KV Shenoy, and M Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. In Adv neur inf proc sys, pages 1881-1888, 2009.
[2] L. Paninski, Y. Ahmadian, Daniel G. Ferreira, S. Koyama, Kamiar R. Rad, M. Vidne, J. Vogelstein, and W. Wu. A new look at state-space models for neural data. J comp neurosci, 29(1-2):107-126, 2010.
[3] John P Cunningham and B M Yu. Dimensionality reduction for large-scale neural recordings. Nature neuroscience, 17(11):1500-1509, 2014.
[4] SW Linderman, MJ Johnson, MA Wilson, and Z Chen.
A Bayesian nonparametric approach for uncovering rat hippocampal population codes during spatial navigation. J neurosci meth, 263:36-47, 2016.
[5] JH Macke, L Buesing, JP Cunningham, BM Yu, KV Shenoy, and M Sahani. Empirical models of spiking in neural populations. In Adv neur inf proc sys, pages 1350-1358, 2011.
[6] L Buesing, J H Macke, and M Sahani. Spectral learning of linear dynamics from generalised-linear observations with application to neural population data. In Adv neur inf proc sys, pages 1682-1690, 2012.
[7] EW Archer, U Koster, JW Pillow, and JH Macke. Low-dimensional models of neural population activity in sensory cortical circuits. In Adv neur inf proc sys, pages 343-351, 2014.
[8] JH Macke, L Buesing, and M Sahani. Estimating state and parameters in state space models of spike trains. Advanced State Space Methods for Neural and Clinical Data, page 137, 2015.
[9] Yuanjun Gao, Lars Busing, Krishna V Shenoy, and John P Cunningham. High-dimensional neural spike train analysis with generalized count linear dynamical systems. In Adv neur inf proc sys, pages 2044-2052, 2015.
[10] JC Kao, P Nuyujukian, SI Ryu, MM Churchland, JP Cunningham, and KV Shenoy. Single-trial dynamics of motor cortex and their applications to brain-machine interfaces. Nature communications, 6, 2015.
[11] David Pfau, Eftychios A Pnevmatikakis, and Liam Paninski. Robust learning of low-dimensional dynamics from large neural ensembles. In Adv neur inf proc sys, pages 2391-2399, 2013.
[12] Hooram Nam. Poisson extension of Gaussian process factor analysis for modeling spiking neural populations. Master's thesis, Department of Neural Computation and Behaviour, Max Planck Institute for Biological Cybernetics, Tübingen, 8 2015.
[13] Y. Zhao and I. M. Park. Variational latent Gaussian process for recovering single-trial dynamics from population spike trains. arXiv preprint arXiv:1604.03053, 2016.
[14] David Sussillo, Rafal Jozefowicz, LF Abbott, and Chethan Pandarinath. LFADS: latent factor analysis via dynamical systems. arXiv preprint arXiv:1608.06315, 2016.
[15] Neil D Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In Adv neur inf proc sys, pages 329-336, 2004.
[16] Y Gao, EW Archer, L Paninski, and JP Cunningham. Linear dynamical neural population models through nonlinear embeddings. In Adv neur inf proc sys, pages 163-171, 2016.
[17] Carl Rasmussen and Chris Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[18] AC Damianou, MK Titsias, and ND Lawrence. Variational inference for uncertainty on the inputs of Gaussian process models. arXiv preprint arXiv:1409.2287, 2014.
[19] T Hafting, M Fyhn, S Molden, MB Moser, and EI Moser. Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052):801-806, 2005.
[20] M Karlsson, M Carr, and Frank LM. Simultaneous extracellular recordings from hippocampal areas CA1 and CA3 (or MEC and CA1) from rats performing an alternation task in two W-shaped tracks that are geometrically identical but visually distinct. crcns.org. http://dx.doi.org/10.6080/K0NK3BZJ, 2005.
6,569
6,942
Neural system identification for large populations separating "what" and "where"

David A. Klindt* (1-3), Alexander S. Ecker* (1,2,4,6), Thomas Euler (1-3), Matthias Bethge (1,2,4-6)
*Authors contributed equally
1 Centre for Integrative Neuroscience, University of Tübingen, Germany
2 Bernstein Center for Computational Neuroscience, University of Tübingen, Germany
3 Institute for Ophthalmic Research, University of Tübingen, Germany
4 Institute for Theoretical Physics, University of Tübingen, Germany
5 Max Planck Institute for Biological Cybernetics, Tübingen, Germany
6 Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
[email protected], [email protected], [email protected], [email protected]

Abstract

Neuroscientists classify neurons into different types that perform similar computations at different locations in the visual field. Traditional methods for neural system identification do not capitalize on this separation of "what" and "where". Learning deep convolutional feature spaces that are shared among many neurons provides an exciting path forward, but the architectural design needs to account for data limitations: while new experimental techniques enable recordings from thousands of neurons, experimental time is limited, so that one can sample only a small fraction of each neuron's response space. Here, we show that a major bottleneck for fitting convolutional neural networks (CNNs) to neural data is the estimation of the individual receptive field locations, a problem that has been scratched only at the surface thus far. We propose a CNN architecture with a sparse readout layer factorizing the spatial (where) and feature (what) dimensions. Our network scales well to thousands of neurons and short recordings and can be trained end-to-end. We evaluate this architecture on ground-truth data to explore the challenges and limitations of CNN-based system identification. Moreover, we show that our network model outperforms current state-of-the-art system identification models of mouse primary visual cortex.

1 Introduction

In neural system identification, we seek to construct quantitative models that describe how a neuron responds to arbitrary stimuli [1, 2]. In sensory neuroscience, the standard way to approach this problem is with a generalized linear model (GLM): a linear filter followed by a point-wise nonlinearity [3, 4]. However, neurons elicit complex nonlinear responses to natural stimuli even as early as in the retina [5, 6], and the degree of nonlinearity increases as one goes up the visual hierarchy. At the same time, neurons in the same brain area tend to perform similar computations at different positions in the visual field. This separability of what is computed from where it is computed is a key idea underlying the notion of functional cell types tiling the visual field in a retinotopic fashion.

For early visual processing stages like the retina or primary visual cortex, several nonlinear methods have been proposed, including energy models [7, 8], spike-triggered covariance methods [9, 10], linear-nonlinear (LN-LN) cascades [11, 12], convolutional subunit models [13, 14] and GLMs based on handcrafted nonlinear feature spaces [15]. While these models outperform the simple GLM, they still cannot fully account for the responses of even early visual processing stages (i.e. retina, V1), let alone higher-level areas such as V4 or IT.
The main problem is that the expressiveness of the model (i.e. the number of parameters) is limited by the amount of data that can be collected for each neuron. The recent success of deep learning in computer vision and other fields has sparked interest in using deep learning methods for understanding neural computations in the brain [16, 17, 18], including promising first attempts to learn feature spaces for neural system identification [19, 20, 21, 22, 23]. In this study, we would like to achieve a better understanding of the possible advantages of deep learning methods over classical tools for system identification by analyzing their effectiveness on ground truth models.

Classical approaches have traditionally been framed as individual multivariate regression problems for each recorded neuron, without exploiting computational similarities between different neurons for regularization. One of the most obvious similarities between different neurons, however, is that the visual system simultaneously extracts similar features at many different locations. Because of this spatial equivariance, the same nonlinear subspace is spanned at many nearby locations and many neurons share similar nonlinear computations. Thus, we should be able to learn much more complex nonlinear functions by combining data from many neurons and learning a common feature space from which we can linearly predict the activity of each neuron.

We propose a convolutional neural network (CNN) architecture with a special readout layer that separates the problem of learning a common feature space from estimating each neuron's receptive field location and cell type, but can still be trained end-to-end on experimental data. We evaluate this model architecture using simple simulations and show its potential for developing a functional characterization of cell types. Moreover, we show that our model outperforms the current state-of-the-art on a publicly available dataset of mouse V1 responses to natural images [19].

2 Related work

Using artificial neural networks to predict neural responses has a long history [24, 25, 26]. Recently, two studies [13, 14] fit two-layer models with a convolutional layer and a pooling layer. They do find marked improvements over GLMs and spike-triggered covariance methods, but like most other previous studies they fit their model only to individual cells' responses and do not exploit computational similarities among neurons.

Antolik et al. [19] proposed learning a common feature space to improve neural system identification. They outperform GLM-based approaches by fitting a multi-layer neural network consisting of parameterized difference-of-Gaussian filters in the first layer, followed by two fully-connected layers. However, because they do not use a convolutional architecture, features are shared only locally. Thus, every hidden unit has to be learned "from scratch" at each spatial location, and the number of parameters in the fully-connected layers grows quadratically with population size.

McIntosh et al. [20] fit a CNN to retinal data. The bottleneck in their approach is the final fully-connected layer that maps the convolutional feature space to individual cells' responses. The number of parameters in this final readout layer grows very quickly and, even for their small populations, represents more than half of the total number of parameters.

Batty et al. [21] also advocate feature sharing and explore using recurrent neural networks to model the shared feature space.
They use a two-step procedure: they first estimate each neuron's location via the spike-triggered average, then crop the stimulus accordingly for each neuron, and then learn a model with shared features. The performance of this approach depends critically on the accuracy of the initial location estimate, which can be problematic for nonlinear neurons with a weak spike-triggered average response (e.g. complex cells in primary visual cortex).

Our contribution is a novel network architecture consisting of a number of convolutional layers followed by a sparse readout layer factorizing the spatial and feature dimensions. Our approach has two main advantages over prior art. First, it reduces the effective number of parameters in the readout layer substantially while still being trainable end-to-end. Second, our readout forces all computations to be performed in the convolutional layers, while the factorized readout layer provides an estimate of the receptive field location and the cell type of each neuron. In addition, our work goes beyond the findings of these previous studies by providing a systematic evaluation, on ground truth models, of the advantages of feature sharing in neural system identification, in particular in settings with many neurons and few observations.

Figure 1: Feature sharing makes more efficient use of the available data. Red line: system identification performance with one recorded neuron. Blue lines: performance for a hypothetical population of 10 neurons with identical receptive field shapes whose locations we know. A shared model (solid blue) is equivalent to having 10x as much data, i.e. the performance curve shifts to the left. If we fit all neurons independently (dashed blue), we do not benefit from their similarity.

3 Learning a common feature space

We illustrate why learning a common feature space makes much more efficient use of the available data by considering a simple thought experiment. Suppose we record from ten neurons that all compute exactly the same function, except that they are located at different positions. If we know each neuron's position, we can pool their data to estimate a single model by shifting the stimulus such that it is centered on each neuron's receptive field. In this case we have effectively ten times as much data as in the single-neuron case (Fig. 1, red line) and we will achieve the same model performance with a tenth of the data (Fig. 1, solid blue line). In contrast, if we treat each neuron as an individual regression problem, the performance will on average be identical to the single-neuron case (Fig. 1, dashed blue line).

Although this insight is well known from transfer learning in machine learning, it has so far not been applied widely in a neuroscience context. In practice, we neither know the receptive field locations of all neurons a priori, nor do all neurons implement exactly the same nonlinear function. However, the improvements from learning a shared feature space can still be substantial. First, estimating the receptive field location of an individual neuron is a much simpler task than estimating its entire nonlinear function from scratch. Second, we expect the functional response diversity within a cell type to be much smaller than the overall response diversity across cell types [27, 28]. Third, cells in later processing stages (e.g.
V1) share the nonlinear computations of their upstream areas (retina, LGN), suggesting that equipping them with a common feature space will simplify learning their individual characteristics [19].

4 Feature sharing in a simple linear ground-truth model

We start by investigating the possible advantages of learning a common feature space with a simple ground truth model: a population of linear neurons with Poisson-like output noise,

    r_n = a_n^T s,    y_n ~ N(r_n, sqrt(|r_n|)).   (1)

Here, s is the (Gaussian white noise) stimulus, r_n the firing rate of neuron n, a_n its receptive field kernel and y_n its noisy response. In this simple model, the classical GLM-based approach reduces to (regularized) multivariate linear regression, which we compare to a convolutional neural network.

4.1 Convolutional neural network model

Our neural network consists of a convolutional layer and a readout layer (Fig. 2). The first layer convolves the image with a number of kernels to produce K feature maps, followed by batch normalization [29]. There is no nonlinearity in the network (i.e. the activation function is the identity). Batch normalization ensures that the output has fixed variance, which is important for the regularization in the second layer. The readout layer pools the output, c, of the convolutional layer by applying a sparse mask, q, for each neuron:

    r̂_n = Σ_{i,j,k} c_{ijk} q_{ijkn}.   (2)

Here, r̂_n is the predicted firing rate of neuron n. The mask q is factorized in the spatial and feature dimensions:

    q_{ijkn} = m_{ijn} w_{kn},   (3)

where m is a spatial mask and w is a set of K feature weights for each neuron. The spatial mask and feature weights encode each neuron's receptive field location and cell type, respectively. As we expect them to be highly sparse, we regularize both by an L1 penalty (with strengths λ_m and λ_w).

Figure 2: Our proposed CNN architecture in its simplest form. It consists of a feature space module and a readout layer. The feature space is extracted via one or more convolutional layers (here one is shown). The readout layer computes for each neuron a weighted sum over the entire feature space. To keep the number of parameters tractable and facilitate interpretability, we factorize the readout into a location mask and a vector of feature weights, which are both encouraged to be sparse by regularizing with an L1 penalty.
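The factorized readout of Eqs. 2-3 amounts to a single einsum over the feature tensor; a minimal NumPy sketch, with array shapes chosen for illustration (matching the 32 x 32 x K feature maps in Fig. 2, but the channel and neuron counts are assumptions):

```python
import numpy as np

# Factorized readout (Eqs. 2-3): r_n = sum_{i,j,k} c_ijk * m_ijn * w_kn.
H = W = 32        # spatial size of the feature maps
K, N = 8, 100     # feature channels and neurons (illustrative)

c = np.random.randn(H, W, K)    # output of the (batch-normalized) conv layer
m = np.random.randn(H, W, N)    # spatial masks, one per neuron
w = np.random.randn(K, N)       # feature weights, one per neuron

r_hat = np.einsum('ijk,ijn,kn->n', c, m, w)   # predicted rates, shape (N,)
```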
To achieve optimal performance, we found it useful to initialize the masks well. Shifting the convolution kernel by one pixel in one direction while shifting the mask in the opposite direction in principle produces the same output. However, because in practice the filter size is finite, poorly initialized masks can lead to suboptimal solutions with partially cropped filters (cf. Fig. 3C, CNN10). To initialize the masks, we calculated the spike-triggered average for each neuron, smoothed it with a large Gaussian kernel, and took the pixel with the maximum absolute value as our initial guess for the neuron's location. We set this pixel to the standard deviation of the neuron's response (because the output of the convolutional layer has unit variance) and initialized the rest of the mask randomly from a Gaussian $\mathcal{N}(0, 0.001)$. We initialized the convolution kernels randomly from $\mathcal{N}(0, 0.01)$ and the feature weights from $\mathcal{N}(1/K, 0.01)$.

4.2 Baseline models

In the linear example studied here, the GLM reduces to simple linear regression. We used two forms of regularization: lasso (L1) and ridge (L2). To maximize the performance of these baseline models, we cropped the stimulus around each neuron's receptive field. Thus, the number of parameters these models have to learn is identical to those in the convolution kernel of the CNN. Again, we cross-validated over the regularization strength.

4.3 Performance evaluation

To measure the models' performance, we compute the fraction of explainable variance explained:

$$\mathrm{FEV} = 1 - \frac{\langle(\hat{r} - r)^2\rangle}{\mathrm{Var}(r)} \tag{5}$$

which is evaluated on the ground-truth firing rates $r$ without observation noise. A perfect model would achieve FEV = 1. We evaluate FEV on a held-out test set not seen during model fitting and cross-validation.
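A hypothetical helper computing Eq. (5); in these simulations the noiseless ground-truth rates are known, so FEV can be evaluated directly (with real data one would instead need repeated presentations of the same stimulus).

```python
import numpy as np

def fev(r_true, r_pred):
    """Fraction of explainable variance explained, Eq. (5),
    evaluated against noiseless ground-truth rates."""
    mse = np.mean((r_pred - r_true) ** 2)
    return 1.0 - mse / np.var(r_true)
```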
Figure 3: Feature sharing in a homogeneous linear population. A, Population of homogeneous, spatially shifted on-center/off-surround neurons. B, Model comparison: fraction of explainable variance explained vs. the number of samples used for fitting the models. Ordinary least squares (OLS), L1 (lasso) and L2 (ridge) regularized regression models are fit to individual neurons. CNN_N are convolutional models with N neurons fit jointly. The dashed line shows the performance (for N → ∞) of estimating the mask given the ground-truth convolution kernel. C, Learned filters for different methods and numbers of samples.

4.4 Single cell type, homogeneous population

We first considered the idealized situation where all neurons share the same 17 × 17 px on-center/off-surround filter, but at different locations (Fig. 3A). In other words, there is only one feature map in the convolutional layer (K = 1). We used a 48 × 48 px Gaussian white noise stimulus and scaled the neurons' output such that $\langle|r|\rangle = 0.1$, mimicking a neurally plausible signal-to-noise ratio at firing rates of 1 spike/s and an observation window of 100 ms. We simulated populations of N = 1, 10, 100 and 1000 neurons and varied the amount of training data.

The CNN model consistently outperformed the linear regression models (Fig. 3B). The ridge-regularized linear regression explained around 60% of the explainable variance with 4,000 samples (i.e., pairs of stimulus and N-dimensional neural response vector). A CNN model pooling over 10 neurons achieved the same level of performance with less than a quarter of the data. The margin in performance increased with the number of neurons pooled over in the model, although the relative improvement started to level off when going from 100 to 1,000 neurons.

With few observations, the bottleneck appears to be estimating each neuron's location mask. Two observations support this hypothesis. First, the CNN1000 model learned much "cleaner" weights with 256 samples than ridge regression with 4,096 (Fig. 3C), although the latter achieved a higher predictive performance (FEV = 55% vs. 65%). This observation suggests that the feature space can be learned efficiently with few samples and many neurons, but that the performance is limited by the estimation of the neurons' location masks. Second, when using the ground-truth kernel and optimizing solely the location masks, performance was only marginally better than for 1,000 neurons (Fig. 3B, blue dotted line), indicating an upper performance bound set by the problem of estimating the location masks.

4.5 Functional classification of cell types

Our next step was to investigate whether our model architecture can learn interpretable features and obtain a functional classification of cell types. Using the same simple linear model as above, we simulated two cell types with different filter kernels. To make the simulation a bit more realistic, we made the kernels heterogeneous within a cell type (Fig. 4A). We simulated a population of 1,000 neurons (500 of each type). With sparsity on the readout weights, every neuron has to select one of the two convolutional kernels. As a consequence, the feature weights represent more or less directly the cell type identity of each neuron (Fig. 4C). This in turn forces the kernels to learn the average of each type (Fig. 4B). However, any other set of kernels spanning the same subspace would have achieved the same predictive performance. Thus, we find that sparsity on the feature weights facilitates interpretability: each neuron chooses one feature channel which represents the essential computation of this type of neuron.

Figure 4: A, Example receptive fields of two types of neurons, differing in their average size. B, Learned filters of the CNN model. C, Scatter plot of the feature weights for the two cell types.

5 Learning nonlinear feature spaces

5.1 Ground truth model

Next, we investigated how our approach scales to more complex, nonlinear neurons and natural stimuli. To keep the benefits of having ground truth data available, we chose our model neurons from the VGG-19 network [31], a popular CNN trained on large-scale object recognition. We selected four random feature maps from layer conv2_2 as "cell types". For each cell type, we picked 250 units with random locations (32 × 32 possible locations). We computed ground-truth responses for all 1000 cells on 44 × 44 px image patches obtained by randomly cropping images from the ImageNet (ILSVRC2012) dataset. As before, we rescaled the output to produce sparse, neurally plausible mean responses of 0.1 and added Poisson-like noise. We fit a CNN with three convolutional layers consisting of 32, 64 and 4 feature maps (kernel size 5 × 5), followed by our sparse, factorized readout layer (Fig. 5A). Each convolutional layer was followed by batch normalization and a ReLU nonlinearity.
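The response-generation step can be sketched as follows (the function and variable names are ours): ground-truth rates from the selected VGG-19 units are rescaled to a mean absolute rate of 0.1 and perturbed with the same Poisson-like Gaussian noise as in Eq. (1).

```python
import numpy as np

def simulate_responses(rates, target_mean=0.1, rng=None):
    """Rescale ground-truth rates and add noise with std sqrt(|r|), cf. Eq. (1)."""
    rng = rng or np.random.default_rng()
    r = rates * (target_mean / np.mean(np.abs(rates)))
    return r + np.sqrt(np.abs(r)) * rng.normal(size=r.shape)
```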
We trained the model using Adam with a batch size of 64 and the same initial step size, early stopping, cross-validation, and initialization of the masks as described above. As a baseline, we fit a ridge-regularized GLM with ReLU nonlinearity followed by an additional bias.

To show that our sparse, factorized readout layer is an important feature of our architecture, we also implemented two alternative ways of choosing the readout, which have been proposed in previous work on learning common feature spaces for neural populations. The first approach is to estimate the receptive field location in advance based on the spike-triggered average of each neuron [21].¹ To do so, we determined the pixel with the strongest spike-triggered average. We then set this pixel to one in the location mask and all other pixels to zero. We then kept the location mask fixed while optimizing the convolution kernels and feature weights. The second approach is to use a fully-connected readout tensor [20] and regularize the activations of all neurons with an L1 penalty. In addition, we regularized the fully-connected readout tensor with L2 weight decay. We fit both models to populations of 1,000 neurons.

Our CNN with the factorized readout outperformed all three baselines (Fig. 5B).² The performance of the GLM saturated at roughly 20% FEV (Fig. 5B), highlighting the high degree of nonlinearity of our model neurons. Using a fully-connected readout [20] incurred a substantial performance penalty when the number of samples was small and only asymptotically (for a large number of samples) reached the same performance as our factorized readout. Estimating the receptive field location in advance [21] led to a drop in performance, even for large sample sizes. A likely explanation for this finding is that the responses are quite nonlinear and, thus, estimates of the receptive field location via the spike-triggered average (a linear method) are not very reliable, even for large sample sizes.

Note that the fact that we can fit the model is not trivial, although the ground truth is a CNN. We have observations of noise-perturbed VGG units whose locations we do not know. Thus, we have to infer both the location of each unit as well as the complex, nonlinear feature space simultaneously.

Figure 5: Inferring a complex, nonlinear feature space. A, Model architecture. B, Dependence of model performance (FEV) on the number of samples used for training. C, Feature weights of the four cell types for CNN1000 with $2^{15}$ samples cluster strongly. D, Learned location masks for four randomly chosen cells (one per type). E, Dependence of model performance (FEV) on the number of types of neurons in the population, with the number of samples fixed to $2^{12}$.

¹ Note that they used a recurrent neural network for the shared feature space. Here we only reproduce their approach to defining the readout.
² It did not reach 100% performance, since the feature space we fit was smaller and the network shallower than the one used to generate the ground truth data.
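Both the mask initialization above and the fixed-mask baseline rest on the same primitive: estimating a neuron's location from the smoothed spike-triggered average. A hedged sketch of that step (function and argument names are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sta_location(stimuli, responses, sigma=3.0):
    """stimuli: (T, H, W); responses: (T,). Returns (row, col) of the peak
    of the absolute, Gaussian-smoothed spike-triggered average."""
    sta = np.tensordot(responses - responses.mean(), stimuli, axes=(0, 0)) / len(responses)
    smoothed = gaussian_filter(sta, sigma)
    return np.unravel_index(np.argmax(np.abs(smoothed)), smoothed.shape)
```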
Our results show that our model solves this task more efficiently than both simpler (GLM) and equally expressive [20] models when the number of samples is relatively small. In addition to fitting the data well, the model also recovered both the cell types and the receptive field locations correctly (Fig. 5C, D). When fit using $2^{16}$ samples ($2^{10}$ for validation/test and the rest for training), the readout weights of the four cell types clustered nicely (Fig. 5C) and the model successfully recovered the location masks (Fig. 5D). In fact, all cells were classified correctly based on their largest feature weight.

Next, we investigated how our model and its competitors [20, 21] fare when scaling up to large recordings with many types of neurons. To simulate this scenario, we again sampled VGG units (from the same layer as above), taking 64 units with random locations from up to 16 different feature maps (i.e., cell types). Correspondingly, we increased the number of feature maps in the last convolutional layer of the models. We fixed the number of training samples to $2^{12}$ to compare models in a challenging regime (cf. Fig. 5B) where performance can be high but is not yet asymptotic. Our CNN model scales gracefully to more diverse neural populations (Fig. 5E), remaining roughly at the same level of performance. Similarly, the CNN with the fixed location masks estimated in advance scales well, although with lower overall performance. In contrast, the performance of the fully-connected readout drops fast, because the number of parameters in the readout layer grows very quickly with the number of feature maps in the final convolutional layer. In fact, we were unable to fit models with more than 16 feature maps with this approach, because the size of the readout tensor became prohibitively large for GPU memory.

Table 1: Application to data from primary visual cortex (V1) of mice [19]. The table shows average correlations between model predictions and neural responses on the test set.

    Scan                                   1      2      3      Average
    Antolik et al. 2016 [19]               0.51   0.43   0.46   0.47
    LNP                                    0.37   0.30   0.38   0.36
    CNN with fully connected readout       0.47   0.34   0.43   0.43
    CNN with fixed mask                    0.45   0.38   0.41   0.42
    CNN with factorized readout (ours)     0.55   0.45   0.49   0.50

Finally, we asked how far we can push our model with long recordings and many neurons. We tested our model with $2^{16}$ training samples from 128 different types of neurons (again 64 units each). On this large dataset, with roughly 60,000 recordings from roughly 8,000 neurons, we were still able to fit the model on a single GPU, and it performed at 90% FEV (data not shown). Thus, we conclude that our model scales well to large-scale problems with thousands of nonlinear and diverse neurons.

5.2 Application to data from primary visual cortex

To test our approach on real data, going beyond the previously explored retinal data [20, 21], we used the publicly available dataset from Antolik et al. [19].³ The dataset was obtained by two-photon imaging in the primary visual cortex of sedated mice viewing natural images. It contains three scans with 103, 55 and 102 neurons, respectively, and their responses to static natural images. Each scan consists of a training set of images that were each presented once (1800, 1260 and 1800 images, respectively) as well as a test set consisting of 50 images (each image repeated 10, 8 and 12 times, respectively).

³ See [22, 23] for concurrent work on primate V1.
We use the data in the same form as the original study [19], to which we refer the reader for full details on data acquisition, post-processing and the visual stimulation paradigm. To fit this dataset, we used the same basic CNN architecture described above, with three small modifications. First, we replaced the ReLU activation functions by a soft-thresholding nonlinearity, $f(x) = \log(1 + \exp(x))$. Second, we replaced the mean-squared error loss by a Poisson loss (because neural responses are non-negative and the observation noise scales with the mean response). Third, we had to regularize the convolutional kernels, because the dataset is relatively limited in terms of recording length and number of neurons. We used two forms of regularization: smoothness and group sparsity (a code sketch of both penalties follows at the end of this subsection). Smoothness is achieved by an L2 penalty on the Laplacian of the convolution kernels:

$$L_{\text{laplace}} = \lambda_{\text{laplace}} \sum_{i,j,k,l} \left(W_{:,:,kl} * L\right)^2_{ij}, \qquad L = \begin{bmatrix} 0.5 & 1 & 0.5 \\ 1 & -6 & 1 \\ 0.5 & 1 & 0.5 \end{bmatrix} \tag{6}$$

where $W_{ijkl}$ is the 4D tensor representing the convolution kernels, $i$ and $j$ index the two spatial dimensions of the filters, and $k, l$ the input and output channels. Group sparsity encourages filters to pool from only a small set of feature maps in the previous layer and is defined as:

$$L_{\text{group}} = \lambda_{\text{group}} \sum_{k,l} \sqrt{\sum_{i,j} W_{ijkl}^2} \tag{7}$$

We fit CNNs with one, two and three layers. After an initial exploration of different CNN architectures (filter sizes, number of feature maps) on the first scan, we systematically cross-validated over different filter sizes, numbers of feature maps and regularization strengths via grid search on all three scans. We fit all models using 80% of the training dataset for training and the remaining 20% for validation, using Adam and early stopping as described above. For each scan, we selected the best model based on the likelihood on the validation set. In all three scans, the best model had 48 feature maps per layer and 13 × 13 px kernels in the first layer. The best model for the first two scans had 3 × 3 kernels in the subsequent layers, while for the third scan larger 8 × 8 kernels performed best.

We compared our model to four baselines: (a) the Hierarchical Structural Model from the original paper publishing the dataset [19], (b) a regularized linear-nonlinear Poisson (LNP) model, (c) a CNN with fully-connected readout (as in [20]) and (d) a CNN with fixed spatial masks, inferred from the spike-triggered averages of each neuron (as in [21]). We used a separate, held-out test set to compare the performance of the models. On the test set, we computed the correlation coefficient between the response predicted by each model and the average observed response across repeats of the same image.⁴

Our CNN with factorized readout outperformed all four baselines on all three scans (Table 1). The other two CNNs, which either did not use a factorized readout (as in [20]) or did not jointly optimize feature space and readout (as in [21]), performed substantially worse. Interestingly, they did not even reach the performance of [19], which uses a three-layer fully-connected neural network instead of a CNN. Thus, our model is the new state of the art for predicting neural responses in mouse V1, and the factorized readout was necessary to outperform an earlier (and simpler) neural network architecture that also learned a shared feature space for all neurons [19].
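The two kernel penalties in Eqs. (6) and (7) can be written compactly for a PyTorch convolution weight tensor. This is a hedged sketch under our own naming and shape conventions (out_channels, in_channels, height, width), not the paper's released code:

```python
import torch
import torch.nn.functional as F

LAPLACE = torch.tensor([[0.5, 1.0, 0.5],
                        [1.0, -6.0, 1.0],
                        [0.5, 1.0, 0.5]])

def laplace_penalty(weights, lam):
    """Eq. (6): penalize the energy of each 2-D filter slice convolved
    with the discrete Laplacian, encouraging smooth kernels."""
    o, i, h, w = weights.shape
    kernel = LAPLACE.view(1, 1, 3, 3).to(weights)
    curvature = F.conv2d(weights.reshape(o * i, 1, h, w), kernel, padding=1)
    return lam * curvature.pow(2).sum()

def group_sparsity_penalty(weights, lam):
    """Eq. (7): one L2 group per (input, output) channel pair, so each
    filter pools from only a few feature maps in the previous layer."""
    return lam * weights.pow(2).sum(dim=(2, 3)).sqrt().sum()
```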
6 Discussion

Our results show that the benefits of learning a shared convolutional feature space can be substantial. Predictive performance increases, however, only until an upper bound imposed by the difficulty of estimating each neuron's location in the visual field. We propose a CNN architecture with a sparse, factorized readout layer that separates these two problems effectively. It allows scaling up the complexity of the convolutional layers to many parallel channels (which are needed to describe diverse, nonlinear neural populations), while keeping the inference problem of each neuron's receptive field location and type identity tractable. Furthermore, our performance curves (see Figs. 3 and 5) may inform experimental design by determining whether one should aim for longer recordings or more neurons. For instance, if we want to explain at least 80% of the variance in a very homogeneous population of neurons, we could choose to record either roughly 2,000 responses from 10 cells or roughly 500 responses from 1,000 cells.

Besides making more efficient use of the data to infer neurons' nonlinear computations, the main promise of our new regularization scheme for system identification with CNNs is that the explicit separation of "what" and "where" provides us with a principled way to functionally classify cells into different types: the feature weights of our model can be thought of as a "barcode" identifying each cell type. We are currently working on applying this approach to large-scale data from the retina and primary visual cortex. Later processing stages, such as primary visual cortex, could additionally benefit from similarly exploiting equivariance not only in the spatial domain, but also (approximately) in the orientation or direction-of-motion domain.

Availability of code

The code to fit the models and reproduce the figures is available online at: https://github.com/david-klindt/NIPS2017

Acknowledgements

We thank Philipp Berens, Katrin Franke, Leon Gatys, Andreas Tolias, Fabian Sinz, Edgar Walker and Christian Behrens for comments and discussions. This work was supported by the German Research Foundation (DFG) through Collaborative Research Center (CRC 1233) "Robust Vision" as well as DFG grant EC 479/1-1; the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 674901; and the German Excellence Initiative through the Centre for Integrative Neuroscience Tübingen (EXC307). The research was also supported by Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

⁴ We used the correlation coefficient for evaluation (a) to facilitate comparison with the original study [19] and (b) because estimating FEV on data with a small number of repetitions per image is unreliable.

References

[1] Matteo Carandini, Jonathan B. Demb, Valerio Mante, David J. Tolhurst, Yang Dan, Bruno A. Olshausen, Jack L. Gallant, and Nicole C. Rust. Do we know what the early visual system does? The Journal of Neuroscience, 25(46):10577-10597, 2005.

[2] Michael C.-K. Wu, Stephen V. David, and Jack L. Gallant. Complete functional characterization of sensory neurons by system identification.
Annual Review of Neuroscience, 29:477-505, 2006.

[3] Judson P. Jones and Larry A. Palmer. The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58(6):1187-1211, 1987.

[4] Alison I. Weber and Jonathan W. Pillow. Capturing the dynamical repertoire of single neurons with generalized linear models. arXiv:1602.07389 [q-bio], 2016.

[5] Tim Gollisch and Markus Meister. Eye smarter than scientists believed: neural computations in circuits of the retina. Neuron, 65(2):150-164, 2010.

[6] Alexander Heitman, Nora Brackbill, Martin Greschner, Alexander Sher, Alan M. Litke, and E. J. Chichilnisky. Testing pseudo-linear models of responses to natural scenes in primate retina. bioRxiv, page 45336, 2016.

[7] David H. Hubel and Torsten N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106, 1962.

[8] Edward H. Adelson and James R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284-299, 1985.

[9] Nicole C. Rust, Odelia Schwartz, J. Anthony Movshon, and Eero P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945-956, 2005.

[10] Jon Touryan, Gidon Felsen, and Yang Dan. Spatial structure of complex cell receptive fields measured with natural images. Neuron, 45(5):781-791, 2005.

[11] James M. McFarland, Yuwei Cui, and Daniel A. Butts. Inferring nonlinear neuronal computation based on physiologically plausible inputs. PLOS Computational Biology, 9(7):e1003143, 2013.

[12] Esteban Real, Hiroki Asari, Tim Gollisch, and Markus Meister. Neural circuit inference from function to structure. Current Biology, 2017.

[13] Brett Vintch, J. Anthony Movshon, and Eero P. Simoncelli. A convolutional subunit model for neuronal responses in macaque V1. The Journal of Neuroscience, 35(44):14829-14841, 2015.

[14] Ryan J. Rowekamp and Tatyana O. Sharpee. Cross-orientation suppression in visual area V2. Nature Communications, 8, 2017.

[15] Ben Willmore, Ryan J. Prenger, Michael C.-K. Wu, and Jack L. Gallant. The Berkeley wavelet transform: a biologically inspired orthogonal wavelet transform. Neural Computation, 20(6):1537-1564, 2008.

[16] Daniel L. K. Yamins, Ha Hong, Charles F. Cadieu, Ethan A. Solomon, Darren Seibert, and James J. DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619-8624, 2014.

[17] Ari S. Benjamin, Hugo L. Fernandes, Tucker Tomlinson, Pavan Ramkumar, Chris VerSteeg, Lee Miller, and Konrad P. Kording. Modern machine learning far outperforms GLMs at predicting spikes. bioRxiv, page 111450, 2017.

[18] Seyed-Mahdi Khaligh-Razavi, Linda Henriksson, Kendrick Kay, and Nikolaus Kriegeskorte. Explaining the hierarchy of visual representational geometries by remixing of features from many computational vision models. bioRxiv, page 9936, 2014.

[19] Ján Antolík, Sonja B. Hofer, James A. Bednar, and Thomas D. Mrsic-Flogel. Model constrained by visual hierarchy improves prediction of neural responses to natural scenes. PLOS Computational Biology, 12(6):e1004927, 2016.

[20] Lane T. McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli, and Stephen A. Baccus. Deep learning models of the retinal response to natural scenes. arXiv:1702.01825 [q-bio, stat], 2017.

[21] Eleanor Batty, Josh Merel, Nora Brackbill, Alexander Heitman, Alexander Sher, Alan Litke, E. J.
Chichilnisky, and Liam Paninski. Multilayer recurrent network models of primate retinal ganglion cell responses. In 5th International Conference on Learning Representations, 2017.

[22] William F. Kindel, Elijah D. Christensen, and Joel Zylberberg. Using deep learning to reveal the neural code for images in primary visual cortex. arXiv:1706.06208 [cs, q-bio], 2017.

[23] Santiago A. Cadena, George H. Denfield, Edgar Y. Walker, Leon A. Gatys, Andreas S. Tolias, Matthias Bethge, and Alexander S. Ecker. Deep convolutional models improve predictions of macaque V1 responses to natural images. bioRxiv, page 201764, 2017.

[24] S. R. Lehky, T. J. Sejnowski, and R. Desimone. Predicting responses of nonlinear neurons in monkey striate cortex to complex patterns. The Journal of Neuroscience, 12(9):3568-3581, 1992.

[25] Brian Lau, Garrett B. Stanley, and Yang Dan. Computational subunits of visual cortical neurons revealed by artificial neural networks. Proceedings of the National Academy of Sciences, 99(13):8974-8979, 2002.

[26] Ryan Prenger, Michael C. K. Wu, Stephen V. David, and Jack L. Gallant. Nonlinear V1 responses to natural scenes revealed by neural network analysis. Neural Networks, 17(5-6):663-679, 2004.

[27] Tom Baden, Philipp Berens, Katrin Franke, Miroslav R. Rosón, Matthias Bethge, and Thomas Euler. The functional diversity of retinal ganglion cells in the mouse. Nature, 529(7586):345-350, 2016.

[28] Katrin Franke, Philipp Berens, Timm Schubert, Matthias Bethge, Thomas Euler, and Tom Baden. Inhibition decorrelates visual feature representations in the inner retina. Nature, 542(7642):439-444, 2017.

[29] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167 [cs], 2015.

[30] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.

[31] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
Certified Defenses for Data Poisoning Attacks

Jacob Steinhardt*, Stanford University, [email protected]
Pang Wei Koh*, Stanford University, [email protected]
Percy Liang, Stanford University, [email protected]

* Equal contribution.

Abstract

Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (non-poisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.

1 Introduction

Traditionally, computer security seeks to ensure a system's integrity against attackers by creating clear boundaries between the system and the outside world (Bishop, 2002). In machine learning, however, the most critical ingredient of all, the training data, comes directly from the outside world. For a system trained on user data, an attacker can inject malicious data simply by creating a user account. Such data poisoning attacks require us to re-think what it means for a system to be secure.

The focus of the present work is on data poisoning attacks against classification algorithms, first studied by Biggio et al. (2012) and later by a number of others (Xiao et al., 2012; 2015b; Newell et al., 2014; Mei and Zhu, 2015b; Burkard and Lagesse, 2017; Koh and Liang, 2017). This body of work has demonstrated data poisoning attacks that can degrade classifier accuracy, sometimes dramatically. Moreover, while some defenses have been proposed against specific attacks (Laishram and Phoha, 2016), few have been stress-tested against a determined attacker.

Are there defenses that are robust to a large class of data poisoning attacks? At development time, one could take a clean dataset and test a defense against a number of poisoning strategies on that dataset. However, because of the near-limitless space of possible attacks, it is impossible to conclude from empirical success alone that a defense that works against a known set of attacks will not fail against a new attack. In this paper, we address this difficulty by presenting a framework for studying the entire space of attacks against a given defense. Our framework applies to defenders that (i) remove outliers residing outside a feasible set, then (ii) minimize a margin-based loss on the remaining data. For such defenders, we can generate approximate upper bounds on the efficacy of any data poisoning attack, which hold modulo two assumptions: that the empirical train and test distributions are close together,
and that the outlier removal does not significantly change the distribution of the clean (non-poisoned) data; these assumptions are detailed more formally in Section 3. We then establish a duality result for our upper bound, and use this to generate a candidate attack that nearly matches the bound. Both the upper bound and attack are generated via an efficient online learning algorithm.

We consider two different instantiations of our framework: first, where the outlier detector is trained independently and cannot be affected by the poisoned data, and second, where the data poisoning can attack the outlier detector as well. In both cases we analyze binary SVMs, although our framework applies in the multi-class case as well.

In the first setting, we apply our framework to an "oracle" defense that knows the true class centroids and removes points that are far away from the centroid of the corresponding class. While previous work showed successful attacks on the MNIST-1-7 (Biggio et al., 2012) and Dogfish (Koh and Liang, 2017) image datasets in the absence of any defenses, we show (Section 4) that no attack can substantially increase test error against this oracle: the 0/1-error of an SVM on either dataset is at most 4% against any of the attacks we consider, even after adding 30% poisoned data.¹ Moreover, we provide certified upper bounds of 7% and 10% test error, respectively, on the two datasets. On the other hand, on the IMDB sentiment corpus (Maas et al., 2011) our attack increases classification test error from 12% to 23% with only 3% poisoned data, showing that defensibility is very dataset-dependent: the high dimensionality and abundance of irrelevant features in the IMDB corpus give the attacker more room to construct attacks that evade outlier removal.

For the second setting, we consider a more realistic defender that uses the empirical (poisoned) centroids. For small amounts of poisoned data (≤ 5%) we can still certify the resilience of MNIST-1-7 and Dogfish (Section 5). However, with more (30%) poisoned data, the attacker can subvert the outlier removal to obtain stronger attacks, increasing test error on MNIST-1-7 to 40%, much higher than the upper bound of 7% for the oracle defense. In other words, defenses that rely on the (potentially poisoned) data can be much weaker than their data-independent counterparts, underscoring the need for outlier removal mechanisms that are themselves robust to attack.

2 Problem Setting

Consider a prediction task from an input $x \in \mathcal{X}$ (e.g., $\mathbb{R}^d$) to an output $y \in \mathcal{Y}$; in our case we will take $\mathcal{Y} = \{-1, +1\}$ (binary classification), although most of our analysis holds for arbitrary $\mathcal{Y}$. Let $\ell$ be a non-negative convex loss function: e.g., for linear classification with the hinge loss, $\ell(\theta; x, y) = \max(0, 1 - y\langle\theta, x\rangle)$ for a model $\theta \in \Theta \subseteq \mathbb{R}^d$ and data point $(x, y)$. Given a true data-generating distribution $p^*$ over $\mathcal{X} \times \mathcal{Y}$, define the test loss as $L(\theta) = \mathbb{E}_{(x,y) \sim p^*}[\ell(\theta; x, y)]$.

We consider the causative attack model (Barreno et al., 2010), which consists of a game between two players: the defender (who seeks to learn a model $\theta$), and the attacker (who wants the learner to learn a bad model). The game proceeds as follows:

• $n$ data points are drawn from $p^*$ to produce a clean training dataset $\mathcal{D}_c$.
• The attacker adaptively chooses a "poisoned" dataset $\mathcal{D}_p$ of $\epsilon n$ poisoned points, where $\epsilon \in [0, 1]$ parametrizes the attacker's resources.
• The defender trains on the full dataset $\mathcal{D}_c \cup \mathcal{D}_p$ to produce a model $\hat{\theta}$, and incurs test loss $L(\hat{\theta})$.

The defender's goal is to minimize the quantity $L(\hat{\theta})$, while the attacker's goal is to maximize it.

¹ We note that Koh and Liang's attack on Dogfish targets specific test images rather than overall test error.
Remarks. We assume the attacker has full knowledge of the defender's algorithm and of the clean training data $\mathcal{D}_c$. While this may seem generous to the attacker, it is widely considered poor practice to rely on secrecy for security (Kerckhoffs, 1883; Biggio et al., 2014a); moreover, a determined attacker can often reverse-engineer necessary system details (Tramèr et al., 2016).

The causative attack model allows the attacker to add points but not modify existing ones. Indeed, systems constantly collect new data (e.g., product reviews, user feedback on social media, or insurance claims), whereas modification of existing data would require first compromising the system.

Attacks that attempt to increase the overall test loss $L(\hat{\theta})$, known as indiscriminate availability attacks (Barreno et al., 2010), can be thought of as denial-of-service attacks. This is in contrast to targeted attacks on individual examples or sub-populations (e.g., Burkard and Lagesse, 2017). Both have serious security implications, but we focus on denial-of-service attacks, as they compromise the model in a broad sense and interfere with fundamental statistical properties of learning algorithms.

2.1 Data Sanitization Defenses

A defender who trains naïvely on the full (clean + poisoned) data $\mathcal{D}_c \cup \mathcal{D}_p$ is doomed to failure, as even a single poisoned point can in some cases arbitrarily change the model (Liu and Zhu, 2016; Park et al., 2017). In this paper, we consider data sanitization defenses (Cretu et al., 2008), which examine the full dataset and try to remove the poisoned points, for example by deleting outliers. Formally, the defender constructs a feasible set $\mathcal{F} \subseteq \mathcal{X} \times \mathcal{Y}$ and trains only on points in $\mathcal{F}$:

$$\hat{\theta} = \operatorname*{argmin}_{\theta \in \Theta} L(\theta; (\mathcal{D}_c \cup \mathcal{D}_p) \cap \mathcal{F}), \quad \text{where} \quad L(\theta; S) \stackrel{\text{def}}{=} \sum_{(x,y) \in S} \ell(\theta; x, y). \tag{1}$$

Given such a defense $\mathcal{F}$, we would like to upper bound the worst possible test loss over any attacker (choice of $\mathcal{D}_p$), in symbols, $\max_{\mathcal{D}_p} L(\hat{\theta})$. Such a bound would certify that the defender incurs at most some loss no matter what the attacker does. We consider two classes of defenses:

• Fixed defenses, where $\mathcal{F}$ does not depend on $\mathcal{D}_p$. One example for text classification is letting $\mathcal{F}$ be documents that contain only licensed words (Newell et al., 2014). Other examples are oracle defenders that depend on the true distribution $p^*$. While such defenders are not implementable in practice, they provide bounds: if even an oracle can be attacked, then we should be worried.

• Data-dependent defenses, where $\mathcal{F}$ depends on $\mathcal{D}_c \cup \mathcal{D}_p$. These defenders try to estimate $p^*$ from $\mathcal{D}_c \cup \mathcal{D}_p$ and thus are implementable in practice. However, they open up a new line of attack wherein the attacker chooses the poisoned data $\mathcal{D}_p$ to change the feasible set $\mathcal{F}$.
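As a concrete illustration of the sanitize-then-train pipeline in Eq. (1), here is a minimal sketch; the `feasible` predicate and the use of scikit-learn's hinge-loss SVM are our stand-ins, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def sanitize_and_train(X, y, feasible):
    """Drop points outside the feasible set F, then run ERM on the rest."""
    keep = np.array([feasible(x_i, y_i) for x_i, y_i in zip(X, y)])
    return LinearSVC(loss="hinge").fit(X[keep], y[keep])
```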
Example defenses for binary classification. Let $\mu_+ \stackrel{\text{def}}{=} \mathbb{E}[x \mid y = +1]$ and $\mu_- \stackrel{\text{def}}{=} \mathbb{E}[x \mid y = -1]$ be the centroids of the positive and negative classes. A natural defense strategy is to remove points that are too far away from the corresponding centroid. We consider two ways of doing this: the sphere defense, which removes points outside a spherical radius, and the slab defense, which first projects points onto the line between the centroids and then discards points that are too far on this line:

$$\mathcal{F}_{\text{sphere}} \stackrel{\text{def}}{=} \{(x, y) : \|x - \mu_y\|_2 \le r_y\}, \qquad \mathcal{F}_{\text{slab}} \stackrel{\text{def}}{=} \{(x, y) : |\langle x - \mu_y,\, \mu_y - \mu_{-y}\rangle| \le s_y\}. \tag{2}$$

Here $r_y, s_y$ are thresholds (e.g., chosen so that 30% of the data is removed). Note that both defenses are oracles ($\mu_y$ depends on $p^*$); in Section 5, we consider versions that estimate $\mu$ from $\mathcal{D}_c \cup \mathcal{D}_p$. Figure 1 depicts both defenses on the MNIST-1-7 and IMDB datasets. Intuitively, the constraints on MNIST-1-7 make it difficult for an attacker, whereas IMDB looks far more attackable. In the next section, we will see how to make these intuitions concrete.

Algorithm 1 Online learning algorithm for generating an upper bound and candidate attack.
Input: clean data $\mathcal{D}_c$ of size $n$, feasible set $\mathcal{F}$, radius $\rho$, poisoned fraction $\epsilon$, step size $\eta$.
Initialize $\theta^{(0)} \leftarrow 0$, $z^{(0)} \leftarrow 0$, $\Lambda^{(0)} \leftarrow 0$, $U^* \leftarrow \infty$.
for $t = 1, \ldots, \epsilon n$ do
    Compute $(x^{(t)}, y^{(t)}) = \operatorname{argmax}_{(x,y) \in \mathcal{F}} \ell(\theta^{(t-1)}; x, y)$.
    $U^* \leftarrow \min\!\big(U^*,\ \tfrac{1}{n} L(\theta^{(t-1)}; \mathcal{D}_c) + \epsilon\, \ell(\theta^{(t-1)}; x^{(t)}, y^{(t)})\big)$.
    $g^{(t)} \leftarrow \tfrac{1}{n} \nabla L(\theta^{(t-1)}; \mathcal{D}_c) + \epsilon\, \nabla \ell(\theta^{(t-1)}; x^{(t)}, y^{(t)})$.
    Update: $z^{(t)} \leftarrow z^{(t-1)} - g^{(t)}$, $\Lambda^{(t)} \leftarrow \max\!\big(\Lambda^{(t-1)}, \|z^{(t)}\|_2 / \rho\big)$, $\theta^{(t)} \leftarrow z^{(t)} / \Lambda^{(t)}$.
end for
Output: upper bound $U^*$ and candidate attack $\mathcal{D}_p = \{(x^{(t)}, y^{(t)})\}_{t=1}^{\epsilon n}$.

3 Attack, Defense, and Duality

Recall that we are interested in the worst-case test loss $\max_{\mathcal{D}_p} L(\hat{\theta})$. To make progress, we consider three approximations. First, (i) we pass from the test loss to the training loss on the clean data, and (ii) we consider the training loss on the full (clean + poisoned) data, which upper bounds the loss on the clean data due to non-negativity of the loss. For any model $\theta$, we then have:

$$L(\theta) \overset{(i)}{\approx} \frac{1}{n} L(\theta; \mathcal{D}_c) \overset{(ii)}{\le} \frac{1}{n} L(\theta; \mathcal{D}_c \cup \mathcal{D}_p). \tag{3}$$

The approximation (i) could potentially be invalid due to overfitting; however, if we regularize the model appropriately then we can show that train and test are close by standard concentration arguments (see Appendix B for details). Note that (ii) is always a valid upper bound, and will be relatively tight as long as the model ends up fitting the poisoned data well. For our final approximation, we (iii) have the defender train on $\mathcal{D}_c \cup (\mathcal{D}_p \cap \mathcal{F})$ (i.e., it uses the entire clean dataset $\mathcal{D}_c$ rather than just the inliers $\mathcal{D}_c \cap \mathcal{F}$). This should not have a large effect as long as the defense is not too aggressive (i.e., as long as $\mathcal{F}$ is not so small that it would remove important points from the clean data $\mathcal{D}_c$). We denote the resulting model as $\tilde{\theta}$ to distinguish it from $\hat{\theta}$.

Putting it all together, the worst-case test loss from any attack $\mathcal{D}_p$ with $\epsilon n$ elements is approximately upper bounded as follows:

$$\max_{\mathcal{D}_p} L(\hat{\theta}) \overset{(i)}{\approx} \max_{\mathcal{D}_p} \frac{1}{n} L(\hat{\theta}; \mathcal{D}_c) \overset{(ii)}{\le} \max_{\mathcal{D}_p} \frac{1}{n} L(\hat{\theta}; \mathcal{D}_c \cup (\mathcal{D}_p \cap \mathcal{F})) \overset{(iii)}{\approx} \max_{\mathcal{D}_p} \frac{1}{n} L(\tilde{\theta}; \mathcal{D}_c \cup (\mathcal{D}_p \cap \mathcal{F})) = \max_{\mathcal{D}_p \subseteq \mathcal{F}}\, \min_{\theta \in \Theta} \frac{1}{n} L(\theta; \mathcal{D}_c \cup \mathcal{D}_p) \stackrel{\text{def}}{=} M. \tag{4}$$

Here the final step is because $\tilde{\theta}$ is chosen to minimize $L(\theta; \mathcal{D}_c \cup (\mathcal{D}_p \cap \mathcal{F}))$.
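To make the feasible set concrete, here is a small sketch of the sphere and slab membership tests in Eq. (2); `mu` maps each label to its class centroid, and the per-class threshold containers are our own names.

```python
import numpy as np

def in_sphere(x, y, mu, r):
    return np.linalg.norm(x - mu[y]) <= r[y]

def in_slab(x, y, mu, s):
    return abs(np.dot(x - mu[y], mu[y] - mu[-y])) <= s[y]

def feasible(x, y, mu, r, s):
    """Combined defense F = F_slab ∩ F_sphere used in Section 4."""
    return in_sphere(x, y, mu, r) and in_slab(x, y, mu, s)
```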
The minimax loss $M$ defined in (4) is the central quantity that we will focus on in the sequel; it has duality properties that will yield insight into the nature of the optimal attack. Intuitively, the attacker that achieves $M$ is trying to maximize the loss on the full dataset by adding poisoned points from the feasible set $\mathcal{F}$. The approximations (i) and (iii) define the assumptions we need for our certificates to hold; as long as both approximations are valid, $M$ will give an approximate upper bound on the worst-case test loss.

3.1 Fixed Defenses: Computing the Minimax Loss via Online Learning

We now focus on computing the minimax loss $M$ in (4) when $\mathcal{F}$ is not affected by $\mathcal{D}_p$ (fixed defenses). In the process of computing $M$, we will also produce candidate attacks. Our algorithm is based on no-regret online learning, which models a game between a learner and nature and thus is a natural fit to our data poisoning setting. For simplicity of exposition we assume $\Theta$ is an $\ell_2$-ball of radius $\rho$.

Our algorithm, shown in Algorithm 1, is very simple: in each iteration, it alternates between finding the worst attack point $(x^{(t)}, y^{(t)})$ with respect to the current model $\theta^{(t-1)}$ and updating the model in the direction of the attack point, producing $\theta^{(t)}$. The attack $\mathcal{D}_p$ is the set of points thus found.

To derive the algorithm, we simply swap min and max in (4) to get an upper bound on $M$, after which the optimal attack set $\mathcal{D}_p \subseteq \mathcal{F}$ for a fixed $\theta$ is realized by a single point $(x, y) \in \mathcal{F}$:

$$M \le \min_{\theta \in \Theta} \max_{\mathcal{D}_p \subseteq \mathcal{F}} \frac{1}{n} L(\theta; \mathcal{D}_c \cup \mathcal{D}_p) = \min_{\theta \in \Theta} U(\theta), \quad \text{where} \quad U(\theta) \stackrel{\text{def}}{=} \frac{1}{n} L(\theta; \mathcal{D}_c) + \epsilon \max_{(x,y) \in \mathcal{F}} \ell(\theta; x, y). \tag{5}$$

Note that $U(\theta)$ upper bounds $M$ for any model $\theta$. Algorithm 1 follows the natural strategy of minimizing $U(\theta)$ to iteratively tighten this upper bound. In the process, the iterates $\{(x^{(t)}, y^{(t)})\}$ form a candidate attack $\mathcal{D}_p$ whose induced loss $\frac{1}{n} L(\hat{\theta}; \mathcal{D}_c \cup \mathcal{D}_p)$ is a lower bound on $M$. We can monitor the duality gap between the lower and upper bounds on $M$ to ascertain the quality of the bounds. Moreover, since the loss $\ell$ is convex in $\theta$, $U(\theta)$ is convex in $\theta$ (regardless of the structure of $\mathcal{F}$, which could even be discrete). In this case, if we minimize $U(\theta)$ using any online learning algorithm with sublinear regret, the duality gap vanishes for large datasets. In particular (proof in Appendix A):

Proposition 1. Assume the loss $\ell$ is convex. Suppose that an online learning algorithm (e.g., Algorithm 1) is used to minimize $U(\theta)$, and that the parameters $(x^{(t)}, y^{(t)})$ maximize the loss $\ell(\theta^{(t-1)}; x, y)$ for the iterates $\theta^{(t-1)}$ of the online learning algorithm. Let $U^* = \min_{t=1,\ldots,\epsilon n} U(\theta^{(t)})$. Also suppose that the learning algorithm has regret $\mathrm{Regret}(T)$ after $T$ time steps. Then, for the attack $\mathcal{D}_p = \{(x^{(t)}, y^{(t)})\}_{t=1}^{\epsilon n}$, the corresponding parameter $\hat{\theta}$ satisfies:

$$\frac{1}{n} L(\hat{\theta}; \mathcal{D}_c \cup \mathcal{D}_p) \le M \le U^* \quad \text{and} \quad U^* - \frac{1}{n} L(\hat{\theta}; \mathcal{D}_c \cup \mathcal{D}_p) \le \frac{\mathrm{Regret}(\epsilon n)}{\epsilon n}. \tag{6}$$

Hence, any algorithm whose average regret $\frac{\mathrm{Regret}(\epsilon n)}{\epsilon n}$ is small will have a nearly optimal candidate attack $\mathcal{D}_p$. There are many algorithms that have this property (Shalev-Shwartz, 2011); the particular algorithm depicted in Algorithm 1 is a variant of regularized dual averaging (Xiao, 2010).

In summary, we have a simple learning algorithm that computes an upper bound on the minimax loss along with a candidate attack (which provides a lower bound). Of course, the minimax loss $M$ is only an approximation to the true worst-case test loss (via (4)). We examine the tightness of this approximation empirically in Section 4.
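A hedged Python sketch of Algorithm 1 for the hinge loss follows; `max_loss_point` stands in for the inner maximization over the feasible set (the QP in Eq. (9) below for the sphere/slab defense), and all names and structure are ours rather than the authors' released code.

```python
import numpy as np

def certify(X_clean, y_clean, eps, rho, max_loss_point):
    """Regularized-dual-averaging loop over the eps*n attack rounds."""
    n, d = X_clean.shape
    theta, z, lam, U_star = np.zeros(d), np.zeros(d), 0.0, np.inf
    attack = []
    for _ in range(int(eps * n)):
        x_t, y_t = max_loss_point(theta)        # argmax of hinge loss over F
        attack.append((x_t, y_t))
        margins = 1 - y_clean * (X_clean @ theta)
        clean_loss = np.maximum(margins, 0).mean()
        U_star = min(U_star, clean_loss + eps * max(0.0, 1 - y_t * (x_t @ theta)))
        # Subgradient of (1/n) L(theta; D_c) + eps * loss at the attack point.
        active = margins > 0
        g = -(y_clean[active, None] * X_clean[active]).sum(axis=0) / n
        if 1 - y_t * (x_t @ theta) > 0:
            g -= eps * y_t * x_t
        z -= g
        lam = max(lam, np.linalg.norm(z) / rho)  # keeps theta in the rho-ball
        theta = z / max(lam, 1e-12)
    return U_star, attack
```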
3.2 Data-Dependent Defenses: Upper and Lower Bounds

We now turn our attention to data-dependent defenders, where the feasible set $\mathcal{F}$ depends on the data $\mathcal{D}_c \cup \mathcal{D}_p$ (and hence can be influenced by the attacker). For example, consider the slab defense (see (2)) that uses the empirical (poisoned) mean instead of the true mean:

$$\mathcal{F}_{\text{slab}}(\mathcal{D}_p) \stackrel{\text{def}}{=} \{(x, y) : |\langle x - \hat{\mu}_y(\mathcal{D}_p),\, \hat{\mu}_y(\mathcal{D}_p) - \hat{\mu}_{-y}(\mathcal{D}_p)\rangle| \le s_y\}, \tag{7}$$

where $\hat{\mu}_y(\mathcal{D}_p)$ is the empirical mean over $\mathcal{D}_c \cup \mathcal{D}_p$; the notation $\mathcal{F}(\mathcal{D}_p)$ tracks the dependence of the feasible set on $\mathcal{D}_p$. Similarly to Section 3.1, we analyze the minimax loss $M$, which we can bound as in (5): $M \le \min_{\theta \in \Theta} \max_{\mathcal{D}_p \subseteq \mathcal{F}(\mathcal{D}_p)} \frac{1}{n} L(\theta; \mathcal{D}_c \cup \mathcal{D}_p)$. However, unlike in (5), it is no longer the case that the optimal $\mathcal{D}_p$ places all points at a single location, due to the dependence of $\mathcal{F}$ on $\mathcal{D}_p$; we must jointly maximize over the full set $\mathcal{D}_p$. To improve tractability, we take a continuous relaxation: we think of $\mathcal{D}_p$ as a probability distribution with mass $\frac{1}{\epsilon n}$ on each point in $\mathcal{D}_p$, and relax this to allow any probability distribution $\pi_p$. The constraint then becomes $\mathrm{supp}(\pi_p) \subseteq \mathcal{F}(\pi_p)$ (where supp denotes the support), and the analogue to (5) is

$$M \le \min_{\theta \in \Theta} \tilde{U}(\theta), \quad \text{where} \quad \tilde{U}(\theta) \stackrel{\text{def}}{=} \frac{1}{n} L(\theta; \mathcal{D}_c) + \epsilon \max_{\mathrm{supp}(\pi_p) \subseteq \mathcal{F}(\pi_p)} \mathbb{E}_{\pi_p}[\ell(\theta; x, y)]. \tag{8}$$

This suggests again employing Algorithm 1 to minimize $\tilde{U}(\theta)$. Indeed, this is what we shall do, but there are a few caveats:

• The maximization problem in the definition of $\tilde{U}(\theta)$ is in general quite difficult. We will, however, solve a specific instance in Section 5 based on the sphere/slab defense described in Section 2.1.

• The constraint set for $\pi_p$ is non-convex, so duality (Proposition 1) no longer holds. In particular, the average of two feasible $\pi_p$ might not itself be feasible.

To partially address the second issue, we will run Algorithm 1, at each iteration obtaining a distribution $\pi_p^{(t)}$ and upper bound $\tilde{U}(\theta^{(t)})$. Then, for each $\pi_p^{(t)}$ we will generate a candidate attack by sampling $\epsilon n$ points from $\pi_p^{(t)}$, and take the best resulting attack. In Section 4 we will see that despite a lack of rigorous theoretical guarantees, this often leads to good upper bounds and attacks in practice.

Figure 2: On the (a) Dogfish and (b) MNIST-1-7 datasets, our candidate attack (solid blue) achieves the upper bound (dashed blue) on the worst-case train loss, as guaranteed by Proposition 1. Moreover, this worst-case loss is low: even after adding 30% poisoned data, the loss stays below 0.1. (c) The gradient descent (dash-dotted) and label flip (dotted) baseline attacks are suboptimal under this defense, with test loss (red) as well as test error and train loss (not shown) all significantly worse than our candidate attack.

4 Experiments I: Oracle Defenses

An advantage of our framework is that we obtain a tool that can be easily run on new datasets and defenses to learn about the robustness of the defense and gain insight into potential attacks. We first study two image datasets: MNIST-1-7, and the Dogfish dataset used by Koh and Liang (2017). For MNIST-1-7, following Biggio et al. (2012), we considered binary classification between the digits 1 and 7; this left us with n = 13007 training examples of dimension 784. For Dogfish, which is a binary classification task, we used the same Inception-v3 features as in Koh and Liang (2017), so that each of the n = 1800 training images is represented by a 2048-dimensional vector. For this and subsequent experiments, our loss $\ell$ is the hinge loss (i.e., we train an SVM).
Figure 2: On the (a) Dogfish and (b) MNIST-1-7 datasets, our candidate attack (solid blue) achieves the upper bound (dashed blue) on the worst-case train loss, as guaranteed by Proposition 1. Moreover, this worst-case loss is low; even after adding 30% poisoned data, the loss stays below 0.1. (c) The gradient descent (dash-dotted) and label flip (dotted) baseline attacks are suboptimal under this defense, with test loss (red) as well as test error and train loss (not shown) all significantly worse than our candidate attack.

4 Experiments I: Oracle Defenses

An advantage of our framework is that we obtain a tool that can be easily run on new datasets and defenses to learn about the robustness of the defense and gain insight into potential attacks. We first study two image datasets: MNIST-1-7, and the Dogfish dataset used by Koh and Liang (2017). For MNIST-1-7, following Biggio et al. (2012), we considered binary classification between the digits 1 and 7; this left us with n = 13007 training examples of dimension 784. For Dogfish, which is a binary classification task, we used the same Inception-v3 features as in Koh and Liang (2017), so that each of the n = 1800 training images is represented by a 2048-dimensional vector. For this and subsequent experiments, our loss ℓ is the hinge loss (i.e., we train an SVM).

We consider the combined oracle slab and sphere defense from Section 2.1: F = F_slab ∩ F_sphere. To run Algorithm 1, we need to maximize the loss over (x, y) ∈ F. Note that maximizing the hinge loss ℓ(θ; x, y) is equivalent to minimizing y⟨θ, x⟩. Therefore, we can solve the following quadratic program (QP) for each y ∈ {+1, −1} and take the one with higher loss:

  minimize_{x∈R^d} y⟨θ, x⟩  subject to  ‖x − μ_y‖₂² ≤ r_y²,  |⟨x − μ_y, μ_y − μ_{−y}⟩| ≤ s_y.   (9)

The results of Algorithm 1 are given in Figures 2a and 2b; here and elsewhere, we used a combination of CVXPY (Diamond and Boyd, 2016), YALMIP (Löfberg, 2004), SeDuMi (Sturm, 1999), and Gurobi (Gurobi Optimization, Inc., 2016) to solve the optimization. We plot the upper bound U* computed by Algorithm 1, as well as the train and test loss induced by the corresponding attack D_p. Except for small ε, the model θ̂ fits the poisoned data almost perfectly. We think this is because all feasible attack points that can get past the defense can be easily fit without sacrificing the quality of the rest of the model; in particular, the model chooses to fit the attack points as soon as ε is large enough that there is incentive to do so.

The upshot is that, in this case, the loss L(θ̂; D_c) on the clean data nearly matches its upper bound L(θ̂; D_c ∪ D_p) (which in turn matches U*). On both datasets, the certified upper bound U* is small (< 0.1 with ε = 0.3), showing that the datasets are resilient to attack under the oracle defense. We also ran the candidate attack from Algorithm 1 as well as two baselines: gradient descent on the test loss (varying the location of points in D_p, as in Biggio et al. (2012) and Mei and Zhu (2015b)), and a simple baseline that inserts copies of points from D_c with the opposite label (subject to the flipped points lying in F). The results are in Figure 2c. Our attack consistently performs strongest; label flipping seems to be too weak, while the gradient algorithm seems to get stuck in local minima (though Mei and Zhu (2015b) state that their cost is convex, they communicated to us that this is incorrect). Though it is not shown in the figure, we note that the maximum test 0-1 error against any attack, for ε up to 0.3, was 4%, confirming the robustness suggested by our certificates. Finally, we visualize our attack in Figure 1a. Interestingly, though the attack was free to place points anywhere, most of the attack is tightly concentrated around a single point at the boundary of F.

Figure 3: The (a) Enron and (b) IMDB text datasets are significantly easier to attack under the oracle sphere and slab defense than the image datasets from Figure 2. (c) In particular, our attack achieves a large increase in test loss (solid red) and test error (solid purple) with small ε for IMDB. The label flip baseline was unsuccessful as before, and the gradient baseline does not apply to discrete data. In (a) and (b), note the large gap between upper and lower bounds, resulting from the upper bound relaxation and the IQP/randomized rounding approximations.

4.1 Text Data: Handling Integrity Constraints

We next consider attacks on text data. Beyond the sphere and slab constraints, a valid attack on text data must satisfy additional integrity constraints (Newell et al., 2014): for text, the input x consists of binary indicator features (e.g., presence of the word "banana") rather than arbitrary reals (note that in the previous section, we ignored such integrity constraints for simplicity).
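For reference, here is a minimal CVXPY sketch of the continuous QP in (9) from Section 4; variable and argument names are our own, and the class means, sphere radii, and slab thresholds are assumed given. The same routine can play the role of max_loss_point in the Algorithm 1 sketch above, and the text-data variant discussed next simply adds integrality constraints on x.

```python
import cvxpy as cp

def worst_attack_point(theta, mu, r, s):
    """Solve the QP (9) for each label and return the higher-loss point.

    mu, r, s are dicts keyed by label in {+1, -1}: oracle class means,
    sphere radii, and slab thresholds (assumed given).
    """
    best = None
    for y in (+1, -1):
        x = cp.Variable(theta.shape[0])
        direction = mu[y] - mu[-y]
        constraints = [
            cp.sum_squares(x - mu[y]) <= r[y] ** 2,   # sphere constraint
            cp.abs((x - mu[y]) @ direction) <= s[y],  # slab constraint
        ]
        # maximizing the hinge loss is equivalent to minimizing y * <theta, x>
        cp.Problem(cp.Minimize(y * (theta @ x)), constraints).solve()
        loss = max(0.0, 1.0 - y * float(theta @ x.value))
        if best is None or loss > best[0]:
            best = (loss, x.value, y)
    return best[1], best[2]
```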
Algorithm 1 still applies in this case: the only difference is that the QP from Section 4 has the added constraint x ∈ Z^d_{≥0} and hence becomes an integer quadratic program (IQP), which can be computationally expensive to solve. We can still obtain upper bounds simply by relaxing the integrity constraints; the only issue is that the points x^(t) in the corresponding attack will have continuous values, and hence don't correspond to actual text inputs. To address this, we use the IQP solver from Gurobi (Gurobi Optimization, Inc., 2016) to find an approximately optimal feasible x. This yields a valid candidate attack, but it might not be optimal if the solver doesn't find near-optimal solutions.

We ran both the upper bound relaxation and the IQP solver on two text datasets, the Enron spam corpus (Metsis et al., 2006) and the IMDB sentiment corpus (Maas et al., 2011). The Enron training set consists of n = 4137 e-mails (30% spam and 70% non-spam), with d = 5166 distinct words. The IMDB training set consists of n = 25000 product reviews with d = 89527 distinct words. We used bag-of-words features, which yields test accuracy 97% and 88%, respectively, in the absence of poisoned data. IMDB was too large for Gurobi to even approximately solve the IQP, so we resorted to a randomized rounding heuristic (sketched below) to convert the continuous relaxation to an integer solution.

Results are given in Figure 3; there is a relatively large gap between the upper bound and the attack. Despite this, the attacks are relatively successful. Most striking is the attack on IMDB, which increases test error from 12% to 23% for ε = 0.03, despite having to pass the oracle defender. To understand why the attacks are so much more successful in this case, we can consult Figure 1b. In contrast to MNIST-1-7, for IMDB the defenses place few constraints on the attacker. This seems to be a consequence of the high dimensionality of IMDB and the large number of irrelevant features, which increase the size of F without a corresponding increase in separation between the classes.
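The randomized rounding heuristic mentioned above can be as simple as the following sketch; this is our own illustrative version, not the authors' code. Each fractional count is rounded up with probability equal to its fractional part (so the rounding is unbiased in expectation), and a rounded vector is kept only if it still passes the feasibility check.

```python
import numpy as np

def randomized_round(x_frac, is_feasible, trials=100, seed=0):
    """Round a fractional bag-of-words vector to non-negative integers.

    is_feasible stands in for the sphere/slab (and any other) constraints;
    returns the first feasible rounding, or None if none is found.
    """
    rng = np.random.default_rng(seed)
    base = np.floor(x_frac)
    frac = x_frac - base
    for _ in range(trials):
        x_int = base + (rng.random(x_frac.shape) < frac)  # unbiased coin flips
        if is_feasible(x_int):
            return x_int.astype(int)
    return None  # no feasible rounding found within the trial budget
```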
5 Experiments II: Data-Dependent Defenses

We now revisit the MNIST-1-7 and Dogfish datasets. Before, we saw that they were unattackable provided we had an oracle defender that knew the true class means. If we instead consider a data-dependent defender that uses the empirical (poisoned) means, how much can this change the attackability of these datasets? In this section, we will see that the answer is quite a lot.

As described in Section 3.2, we can still use our framework to obtain upper and lower bounds even in this data-dependent case, although the bounds won't necessarily match. The main difficulty is in computing Ũ(θ), which involves a potentially intractable maximization (see (8)). However, for 2-class SVMs there is a tractable semidefinite programming algorithm; the full details are in Appendix D, but the rough idea is the following: we can show that the optimal distribution π_p in (8) is supported on at most 4 points (one support vector and one non-support vector in each class). Moreover, for a fixed π_p, the constraints and objective depend only on inner products between a small number of points: the 4 attack points, the class means μ (on the clean data), and the model θ. Thus, we can solve for the optimal attack locations with a semidefinite program on a 7 × 7 matrix. Then in an outer loop, we randomly sample π_p from the probability simplex and take the one with the highest loss.
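A sketch of that outer loop follows; the SDP inner solver is abstracted away as an assumed sdp_best_attack helper, and sampling from a Dirichlet(1,1,1,1) distribution gives uniform draws from the 4-point probability simplex.

```python
import numpy as np

def outer_loop(sdp_best_attack, theta, num_samples=500, seed=0):
    """Random search over mixing weights pi_p on the 4 support points.

    sdp_best_attack(pi, theta) is assumed to solve the 7x7 SDP for fixed
    weights pi and return (loss, attack_points); we keep the best draw.
    """
    rng = np.random.default_rng(seed)
    best_loss, best_attack = -np.inf, None
    for _ in range(num_samples):
        pi = rng.dirichlet(np.ones(4))  # uniform sample from the simplex
        loss, attack = sdp_best_attack(pi, theta)
        if loss > best_loss:
            best_loss, best_attack = loss, attack
    return best_loss, best_attack
```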
Figure 4: The data-dependent sphere and slab defense is significantly weaker than its oracle counterpart, allowing MNIST-1-7 and Dogfish to be successfully attacked. (a) On MNIST-1-7, our attack achieves a test loss of 0.69 (red) and error of 0.40 (not shown) at ε = 0.3, more than 10× its oracle counterpart (gold). At low ε ≤ 0.05, the dataset is safe, with a max train loss of 0.12. We saw qualitatively similar results on Dogfish. (b) Data-dependent sanitization can be significantly poisoned by coordinated adversarial data. We show here our attack for ε = 0.3, which places almost all of its attacking mass on the red X. This shifts the empirical centroid, rotating the slab constraint (from red to orange) and allowing the red X to be placed far on the other side of the blue centroid.

Running this algorithm on MNIST-1-7 yields the results in Figure 4a. On the test set, our ε = 0.3 attack leads to a hinge loss of 0.69 (up from 0.03) and a 0-1 loss of 0.40 (up from 0.01). Similarly, on Dogfish, our ε = 0.3 attack gives a hinge loss of 0.59 (up from 0.05) and a 0-1 loss of 0.22 (up from 0.01). The geometry of the attack is depicted in Figure 4b. By carefully choosing the location of the attack, the attacker can place points that lie substantially outside the original (clean) feasible set. This is because the poisoned data can substantially change the direction of the slab constraint, while the sphere constraint by itself is not enough to effectively filter out attacks. There thus appears to be significant danger in employing data-dependent defenders: beyond the greater difficulty of analyzing them, they seem to actually be more vulnerable to attack.

6 Related Work

Due to their increased use in security-critical settings such as malware detection, there has been an explosion of work on the security of machine learning systems; see Barreno et al. (2010), Biggio et al. (2014a), Papernot et al. (2016b), and Gardiner and Nagaraja (2016) for some recent surveys. Our contribution relates to the long line of work on data poisoning attacks; beyond linear classifiers, others have studied the LASSO (Xiao et al., 2015a), clustering (Biggio et al., 2013; 2014c), PCA (Rubinstein et al., 2009), topic modeling (Mei and Zhu, 2015a), collaborative filtering (Li et al., 2016), neural networks (Yang et al., 2017), and other models (Mozaffari-Kermani et al., 2015; Vuurens et al., 2011; Wang, 2016). There have also been a number of demonstrated vulnerabilities in deployed systems (Newsome et al., 2006; Laskov and Šrndić, 2014; Biggio et al., 2014b). We provide formal scaffolding to this line of work by supplying a tool that can certify defenses against a range of attacks.

A striking recent security vulnerability discovered in machine learning systems is adversarial test images that can fool image classifiers despite being imperceptible from normal images (Szegedy et al., 2014; Goodfellow et al., 2015; Carlini et al., 2016; Kurakin et al., 2016; Papernot et al., 2016a). These images exhibit vulnerabilities at test time, whereas data poisoning is a vulnerability at training time. However, recent adversarial attacks on reinforcement learners (Huang et al., 2017; Behzadan and Munir, 2017; Lin et al., 2017) do blend train and test vulnerabilities. A common defense against adversarial test examples is adversarial training (Goodfellow et al., 2015), which alters the training objective to encourage robustness. We note that generative adversarial networks (Goodfellow et al., 2014), despite their name, are not focused on security but rather provide a game-theoretic objective for training generative models.

Finally, a number of authors have studied the theoretical question of learning in the presence of adversarial errors, under a priori distributional assumptions on the data. Robust algorithms have been exhibited for mean and covariance estimation and clustering (Diakonikolas et al., 2016; Lai et al., 2016; Charikar et al., 2017), classification (Klivans et al., 2009; Awasthi et al., 2014), regression (Nasrabadi et al., 2011; Nguyen and Tran, 2013; Chen et al., 2013; Bhatia et al., 2015) and crowdsourced data aggregation (Steinhardt et al., 2016). However, these bounds only hold for specific (sometimes quite sophisticated) algorithms and are focused on good asymptotic performance, rather than on giving good numerical error guarantees for concrete datasets/defenses.

7 Discussion

In this paper we have presented a tool for studying data poisoning defenses that goes beyond empirical validation by providing certificates against a large family of attacks modulo the approximations from Section 3. We stress that our bounds are meant to be used as a way to assess defense strategies in the design stage, rather than guaranteeing performance of a deployed learning algorithm (since our method needs to be run on the clean data, which we presumably would not have access to at deployment time). For instance, if we want to build robust defenses for image classifiers, we can assess the performance against attacks on a number of known image datasets, in order to gain more confidence in the robustness of the system that we actually deploy.

Having applied our framework to binary SVMs, there are a number of extensions we can consider: e.g., to other loss functions or to multiclass classification. We can also consider defenses beyond the sphere and slab constraints considered here, for instance sanitizing text data using a language model, or using the covariance structure of the clean data (Lakhina et al., 2004). The main requirement of our framework is the ability to efficiently maximize ℓ(θ; x, y) over all feasible x and y. For margin-based classifiers such as SVMs and logistic regression, this only requires maximizing a linear function over the feasible set, which is often possible (e.g., via dynamic programming) even for discrete sets.

Our framework currently does not handle non-convex losses: while our method might still be meaningful as a way of generating attacks, our upper bounds would no longer be valid. The issue is that an attacker could try to thwart the optimization process and cause the defender to end up in a bad local minimum. Finding ways to rule this out without relying on convexity would be quite interesting. Separately, the bound L(θ̂) ⪅ M was useful because M admits the natural minimax formulation (5), but the worst-case L(θ̂) can be expressed directly as a bilevel optimization problem (Mei and Zhu, 2015b), which is intractable in general but admits a number of heuristics (Bard, 1999). Bilevel optimization has been considered in the related setting of Stackelberg games (Brückner and Scheffer, 2011; Brückner et al., 2012; Zhou and Kantarcioglu, 2016), and is natural to apply here as well.
To conclude, we quote Biggio et al., who call for the following methodology for evaluating defenses:

To pursue security in the context of an arms race it is not sufficient to react to observed attacks, but it is also necessary to proactively anticipate the adversary by predicting the most relevant, potential attacks through a what-if analysis; this allows one to develop suitable countermeasures before the attack actually occurs, according to the principle of security by design.

The existing paradigm for such proactive anticipation is to design various hypothetical attacks against which to test the defenses. However, such an evaluation is fundamentally limited because it leaves open the possibility that there is a more clever attack that we failed to think of. Our approach provides a first step towards surpassing this limitation, by not just anticipating but certifying the reliability of a defender, thus implicitly considering an infinite number of attacks before they occur.

Reproducibility. The code and data for replicating our experiments is available on GitHub (http://bit.ly/gt-datapois) and Codalab Worksheets (http://bit.ly/cl-datapois).

Acknowledgments. JS was supported by a Fannie & John Hertz Foundation Fellowship and an NSF Graduate Research Fellowship. This work was also partially supported by a Future of Life Institute grant and a grant from the Open Philanthropy Project. We are grateful to Daniel Selsam, Zhenghao Chen, and Nike Sun, as well as to the anonymous reviewers, for a great deal of helpful feedback.

References

P. Awasthi, M. F. Balcan, and P. M. Long. The power of localization for efficiently learning linear separators with noise. In Symposium on Theory of Computing (STOC), pages 449–458, 2014.
J. F. Bard. Practical Bilevel Optimization: Algorithms and Applications. Springer, 1999.
M. Barreno, B. Nelson, A. D. Joseph, and J. D. Tygar. The security of machine learning. Machine Learning, 81(2):121–148, 2010.
V. Behzadan and A. Munir. Vulnerability of deep reinforcement learning to policy induction attacks. arXiv, 2017.
K. Bhatia, P. Jain, and P. Kar. Robust regression via hard thresholding. In Advances in Neural Information Processing Systems (NIPS), pages 721–729, 2015.
B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In International Conference on Machine Learning (ICML), pages 1467–1474, 2012.
B. Biggio, I. Pillai, S. R. Bulò, D. Ariu, M. Pelillo, and F. Roli. Is data clustering in adversarial settings secure? In Workshop on Artificial Intelligence and Security (AISec), 2013.
B. Biggio, G. Fumera, and F. Roli. Security evaluation of pattern classifiers under attack. IEEE Transactions on Knowledge and Data Engineering, 26(4):984–996, 2014a.
B. Biggio, K. Rieck, D. Ariu, C. Wressnegger, I. Corona, G. Giacinto, and F. Roli. Poisoning behavioral malware clustering. In Workshop on Artificial Intelligence and Security (AISec), 2014b.
B. Biggio, B. S. Rota, P. Ignazio, M. Michele, M. E. Zemene, P. Marcello, and R. Fabio. Poisoning complete-linkage hierarchical clustering. In Workshop on Structural, Syntactic, and Statistical Pattern Recognition, 2014c.
M. A. Bishop. The art and science of computer security. Addison-Wesley Longman Publishing Co., Inc., 2002.
M. Brückner and T. Scheffer. Stackelberg games for adversarial prediction problems. In SIGKDD, pages 547–555, 2011.
M. Brückner, C. Kanzow, and T. Scheffer. Static prediction games for adversarial learning problems. Journal of Machine Learning Research (JMLR), 13:2617–2654, 2012.
C. Burkard and B. Lagesse. Analysis of causative attacks against SVMs learning from data streams. In International Workshop on Security And Privacy Analytics, 2017.
N. Carlini, P. Mishra, T. Vaidya, Y. Zhang, M. Sherr, C. Shields, D. Wagner, and W. Zhou. Hidden voice commands. In USENIX Security, 2016.
M. Charikar, J. Steinhardt, and G. Valiant. Learning from untrusted data. In Symposium on Theory of Computing (STOC), 2017.
Y. Chen, C. Caramanis, and S. Mannor. Robust high dimensional sparse regression and matching pursuit. arXiv, 2013.
G. F. Cretu, A. Stavrou, M. E. Locasto, S. J. Stolfo, and A. D. Keromytis. Casting out demons: Sanitizing training data for anomaly sensors. In IEEE Symposium on Security and Privacy, pages 81–95, 2008.
I. Diakonikolas, G. Kamath, D. Kane, J. Li, A. Moitra, and A. Stewart. Robust estimators in high dimensions without the computational intractability. In Foundations of Computer Science (FOCS), 2016.
S. Diamond and S. Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research (JMLR), 17(83):1–5, 2016.
J. Gardiner and S. Nagaraja. On the security of machine learning in malware C&C detection: A survey. ACM Computing Surveys (CSUR), 49(3), 2016.
I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015.
Gurobi Optimization, Inc. Gurobi optimizer reference manual, 2016.
S. Huang, N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel. Adversarial attacks on neural network policies. arXiv, 2017.
S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Advances in Neural Information Processing Systems (NIPS), 2009.
A. Kerckhoffs. La cryptographie militaire. Journal des sciences militaires, 9, 1883.
A. R. Klivans, P. M. Long, and R. A. Servedio. Learning halfspaces with malicious noise. Journal of Machine Learning Research (JMLR), 10:2715–2740, 2009.
P. W. Koh and P. Liang. Understanding black-box predictions via influence functions. In International Conference on Machine Learning (ICML), 2017.
A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv, 2016.
K. A. Lai, A. B. Rao, and S. Vempala. Agnostic estimation of mean and covariance. In Foundations of Computer Science (FOCS), 2016.
R. Laishram and V. V. Phoha. Curie: A method for protecting SVM classifier from poisoning attack. arXiv, 2016.
A. Lakhina, M. Crovella, and C. Diot. Diagnosing network-wide traffic anomalies. In ACM SIGCOMM Computer Communication Review, volume 34, pages 219–230, 2004.
P. Laskov and N. Šrndić. Practical evasion of a learning-based classifier: A case study. In Symposium on Security and Privacy, 2014.
B. Li, Y. Wang, A. Singh, and Y. Vorobeychik. Data poisoning attacks on factorization-based collaborative filtering. In Advances in Neural Information Processing Systems (NIPS), 2016.
Y. Lin, Z. Hong, Y. Liao, M. Shih, M. Liu, and M. Sun. Tactics of adversarial attack on deep reinforcement learning agents. arXiv, 2017.
J. Liu and X. Zhu. The teaching dimension of linear learners. Journal of Machine Learning Research (JMLR), 17(162), 2016.
J. Löfberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In CACSD, 2004.
A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In Association for Computational Linguistics (ACL), 2011.
S. Mei and X. Zhu. The security of latent Dirichlet allocation. In Artificial Intelligence and Statistics (AISTATS), 2015a.
S. Mei and X. Zhu. Using machine teaching to identify optimal training-set attacks on machine learners. In Association for the Advancement of Artificial Intelligence (AAAI), 2015b.
V. Metsis, I. Androutsopoulos, and G. Paliouras. Spam filtering with naive Bayes – which naive Bayes? In CEAS, volume 17, pages 28–69, 2006.
M. Mozaffari-Kermani, S. Sur-Kolay, A. Raghunathan, and N. K. Jha. Systematic poisoning attacks on and defenses for machine learning in healthcare. IEEE Journal of Biomedical and Health Informatics, 19(6):1893–1905, 2015.
N. M. Nasrabadi, T. D. Tran, and N. Nguyen. Robust lasso with missing and grossly corrupted observations. In Advances in Neural Information Processing Systems (NIPS), 2011.
A. Newell, R. Potharaju, L. Xiang, and C. Nita-Rotaru. On the practicality of integrity attacks on document-level sentiment analysis. In Workshop on Artificial Intelligence and Security (AISec), pages 83–93, 2014.
J. Newsome, B. Karp, and D. Song. Paragraph: Thwarting signature learning by training maliciously. In International Workshop on Recent Advances in Intrusion Detection, 2006.
N. H. Nguyen and T. D. Tran. Exact recoverability from dense corrupted observations via ℓ1 minimization. IEEE Transactions on Information Theory, 59(4):2017–2035, 2013.
N. Papernot, P. McDaniel, and I. Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv, 2016a.
N. Papernot, P. McDaniel, A. Sinha, and M. Wellman. Towards the science of security and privacy in machine learning. arXiv, 2016b.
S. Park, J. Weimer, and I. Lee. Resilient linear classification: an approach to deal with attacks on training data. In International Conference on Cyber-Physical Systems, pages 155–164, 2017.
B. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S. Lau, S. Rao, N. Taft, and J. Tygar. Antidote: Understanding and defending against poisoning of anomaly detectors. In ACM SIGCOMM Internet Measurement Conference, 2009.
S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
J. Steinhardt, S. Wager, and P. Liang. The statistics of streaming sparse regression. arXiv preprint arXiv:1412.4182, 2014.
J. Steinhardt, G. Valiant, and M. Charikar. Avoiding imposters and delinquents: Adversarial crowdsourcing and peer prediction. In Advances in Neural Information Processing Systems (NIPS), 2016.
J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11:625–653, 1999.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart. Stealing machine learning models via prediction APIs. In USENIX Security, 2016.
J. Vuurens, A. P. de Vries, and C. Eickhoff. How much spam can you take? An analysis of crowdsourcing results to increase accuracy. ACM SIGIR Workshop on Crowdsourcing for Information Retrieval, 2011.
G. Wang. Combating Attacks and Abuse in Large Online Communities. PhD thesis, University of California Santa Barbara, 2016.
H. Xiao, H. Xiao, and C. Eckert. Adversarial label flips attack on support vector machines. In European Conference on Artificial Intelligence, 2012.
H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli. Is feature selection secure against training data poisoning? In International Conference on Machine Learning (ICML), 2015a.
H. Xiao, B. Biggio, B. Nelson, H. Xiao, C. Eckert, and F. Roli. Support vector machines under adversarial label contamination. Neurocomputing, 160:53–62, 2015b.
L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research (JMLR), 11:2543–2596, 2010.
C. Yang, Q. Wu, H. Li, and Y. Chen. Generative poisoning attack method against neural networks. arXiv, 2017.
Y. Zhou and M. Kantarcioglu. Modeling adversarial learning as nested Stackelberg games. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2016.
Eigen-Distortions of Hierarchical Representations

Alexander Berardino, Center for Neural Science, New York University, [email protected]
Johannes Ballé, Center for Neural Science, New York University (currently at Google, Inc.), [email protected]
Valero Laparra, Image Processing Laboratory, Universitat de València, [email protected]
Eero Simoncelli, Howard Hughes Medical Institute, Center for Neural Science, and Courant Institute of Mathematical Sciences, New York University, [email protected]

Abstract

We develop a method for comparing hierarchical image representations in terms of their ability to explain perceptual sensitivity in humans. Specifically, we utilize Fisher information to establish a model-derived prediction of sensitivity to local perturbations of an image. For a given image, we compute the eigenvectors of the Fisher information matrix with largest and smallest eigenvalues, corresponding to the model-predicted most- and least-noticeable image distortions, respectively. For human subjects, we then measure the amount of each distortion that can be reliably detected when added to the image, and compare these thresholds to the predictions of the corresponding model. We use this method to test the ability of a variety of representations to mimic human perceptual sensitivity. We find that the early layers of VGG16, a deep neural network optimized for object recognition, provide a better match to human perception than later layers, and a better match than a 4-stage convolutional neural network (CNN) trained on a database of human ratings of distorted image quality. On the other hand, we find that simple models of early visual processing, incorporating one or more stages of local gain control, trained on the same database of distortion ratings, provide substantially better predictions of human sensitivity than both the CNN and all layers of VGG16.

Human capabilities for recognizing complex visual patterns are believed to arise through a cascade of transformations, implemented by neurons in successive stages in the visual system. Several recent studies have suggested that representations of deep convolutional neural networks trained for object recognition can predict activity in areas of the primate ventral visual stream better than models constructed explicitly for that purpose (Yamins et al. [2014], Khaligh-Razavi and Kriegeskorte [2014]). These results have inspired exploration of deep networks trained on object recognition as models of human perception, explicitly employing their representations as perceptual metrics or loss functions (Hénaff and Simoncelli [2016], Johnson et al. [2016], Dosovitskiy and Brox [2016]). On the other hand, several other studies have used synthesis techniques to generate images that indicate a profound mismatch between the sensitivity of these networks and that of human observers. Specifically, Szegedy et al. [2013] constructed image distortions, imperceptible to humans, that cause their networks to grossly misclassify objects. Similarly, Nguyen and Clune [2015] optimized randomly initialized images to achieve reliable recognition from a network, but found that the resulting "fooling images" were uninterpretable by human viewers. Simpler networks, designed for texture classification and constrained to mimic the early visual system, do not exhibit such failures (Portilla and Simoncelli [2000]).
These results have prompted efforts to understand why generalization failures of this type are so consistent across deep network architectures, and to develop more robust training methods to defend networks against attacks designed to exploit these weaknesses (Goodfellow et al. [2014]). From the perspective of modeling human perception, these synthesis failures suggest that representational spaces within deep neural networks deviate significantly from that of humans, and that methods for comparing representational similarity, based on fixed object classes and discrete sampling of the representational space, may be insufficient to expose these failures. If we are going to use such networks as models for human perception, we need better methods of comparing model representations to human vision. Recent work has analyzed deep networks' robustness to visual distortions on classification tasks, as well as the similarity of classification errors that humans and deep networks make in the presence of the same kind of distortion (Dodge and Karam [2017]). Here, we aim to accomplish something in the same spirit, but rather than testing on a set of hand-selected examples, we develop a model-constrained synthesis method for generating targeted test stimuli that can be used to compare the layer-wise representational sensitivity of a model to human perceptual sensitivity. Utilizing Fisher information, we isolate the model-predicted most and least noticeable changes to an image. We test the quality of these predictions by determining how well human observers can discriminate these same changes. We test the power of this method on six layers of VGG16 (Simonyan and Zisserman [2015]), a deep convolutional neural network (CNN) trained to classify objects. We also compare these results to those derived from models explicitly trained to predict human sensitivity to image distortions, including both a 4-stage generic CNN, a fine-tuned version of VGG16, and a family of highly-structured models explicitly constructed to mimic the physiology of the early human visual system. Example images from the paper, as well as additional examples, can be found online at http://www.cns.nyu.edu/~lcv/eigendistortions/.

1 Predicting discrimination thresholds

Suppose we have a model for human visual representation, defined by conditional density p(r|x), where x is an N-dimensional vector containing the image pixels, and r is an M-dimensional random vector representing responses internal to the visual system. If the image is modified by the addition of a distortion vector, x + αû, where û is a unit vector and the scalar α controls the amplitude of distortion, the model can be used to predict the threshold at which the distorted image can be reliably distinguished from the original image. Specifically, one can express a lower bound on the discrimination threshold in direction û for any observer or model that bases its judgments on r (Seriès et al. [2009]):

  T(û; x) ≥ β √( û^T J^{-1}[x] û )   (1)

where β is a scale factor that depends on the noise amplitude of the internal representation (as well as experimental conditions, when measuring discrimination thresholds of human observers), and J[x] is the Fisher information matrix (FIM; Fisher [1925]), a second-order expansion of the log likelihood:

  J[x] = E_{r|x}[ (∂/∂x log p(r|x))^T (∂/∂x log p(r|x)) ]   (2)
Here, we restrict ourselves to models that can be expressed as a deterministic (and differentiable) mapping from the input pixels to a mean output response vector, f(x), with additive white Gaussian noise in the response space. The log likelihood in this case reduces to a quadratic form:

  log p(r|x) = −(1/2) [r − f(x)]^T [r − f(x)] + const.

Substituting this into Eq. (2) gives:

  J[x] = (∂f/∂x)^T (∂f/∂x)

Thus, for these models, the Fisher information matrix induces a locally adaptive Euclidean metric on the space of images, as specified by the Jacobian matrix, ∂f/∂x.

Figure 1: Measuring and comparing model-derived predictions of image discriminability. Two models are applied to an image (depicted as a point x in the space of pixel values), producing response vectors r_A and r_B. Responses are assumed to be stochastic, and drawn from known distributions p(r_A|x) and p(r_B|x). The Fisher Information Matrices (FIM) of the models, J_A[x] and J_B[x], provide a quadratic approximation of the discriminability of distortions relative to an image (rightmost plot, colored ellipses). The extremal eigenvalues and eigenvectors of the FIMs (directions indicated by colored lines) provide predictions of the most and least visible distortions. We test these predictions by measuring human discriminability in these directions (colored points). In this example, the ratio of discriminability along the extremal eigenvectors is larger for model A than for model B, indicating that model A provides a better description of human perception of distortions (for this image).

1.1 Extremal eigen-distortions

The FIM is generally too large to be stored in memory or inverted. Even if we could store and invert it, the high dimensionality of input (pixel) space renders the set of possible distortions too large to test experimentally. We resolve both of these issues by restricting our consideration to the most- and least-noticeable distortion directions, corresponding to the eigenvectors of J[x] with largest and smallest eigenvalues, respectively. First, note that if a distortion direction ê is an eigenvector of J[x] with associated eigenvalue λ, then it is also an eigenvector of J^{-1}[x] (with eigenvalue 1/λ), since the FIM is symmetric and positive semi-definite. In this case, Eq. (1) becomes

  T(ê; x) ≥ β/√λ

That is, the predicted discrimination threshold in the direction of an eigenvector is inversely proportional to the square root of its associated eigenvalue, and the ratio of discrimination thresholds along two different eigenvectors is the square root of the ratio of their associated eigenvalues. If human discrimination thresholds attain the bound of Eq. (1), or are a constant multiple above it, the strongest prediction arising from a given model is the ratio of the extremal (maximal and minimal) eigenvalues of its FIM, which can be compared to the ratio of human discrimination thresholds for distortions in the directions of the corresponding extremal eigenvectors (Fig. 1). Although the FIM cannot be stored, it is straightforward to compute its product with an input vector (i.e., an image). Using this operation, we can solve for the extremal eigenvectors using the well-known power iteration method (von Mises and Pollaczek-Geiringer [1929]). Specifically, to obtain the maximal eigenvalue of a given function and its associated eigenvector (λ_m and ê_m, respectively), we start with a vector consisting of white noise, ê_m^(0), and then iteratively apply the FIM, renormalizing the resulting vector, until convergence:

  λ_m^(k+1) = ‖J[x] ê_m^(k)‖;  ê_m^(k+1) = J[x] ê_m^(k) / λ_m^(k+1)

To obtain the minimal eigenvector, ê_l, we perform a second iteration using the FIM with the maximal eigenvalue subtracted from the diagonal:

  λ_l^(k+1) = ‖(J[x] − λ_m I) ê_l^(k)‖;  ê_l^(k+1) = (J[x] − λ_m I) ê_l^(k) / λ_l^(k+1)
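To illustrate why only FIM–vector products are needed, here is a minimal autodiff sketch in JAX (function and variable names are ours, not the authors' code). For the Gaussian-noise models above, J[x]v = G^T(Gv) with G = ∂f/∂x, so one forward-mode and one reverse-mode pass per product suffice, and both power iterations follow directly.

```python
import jax
import jax.numpy as jnp

def fim_vp(f, x, v):
    # J[x] v = G^T (G v), where G = df/dx; JVP gives G v, VJP applies G^T
    _, gv = jax.jvp(f, (x,), (v,))
    _, vjp_fn = jax.vjp(f, x)
    (jv,) = vjp_fn(gv)
    return jv

def power_iteration(matvec, x_shape, num_steps=100, seed=0):
    # dominant eigenvalue/eigenvector of the matrix given only via matvec
    e = jax.random.normal(jax.random.PRNGKey(seed), x_shape)  # white-noise init
    lam = 1.0
    for _ in range(num_steps):
        je = matvec(e)
        lam = jnp.linalg.norm(je)
        e = je / lam
    return lam, e

def extremal_eigendistortions(f, x, num_steps=100):
    # maximal eigenvector of J[x]
    lam_max, e_max = power_iteration(lambda v: fim_vp(f, x, v), x.shape, num_steps)
    # minimal eigenvector: iterate on (J - lam_max I), whose eigenvalues are
    # all <= 0, so the dominant one in magnitude is lam_min - lam_max
    shifted = lambda v: fim_vp(f, x, v) - lam_max * v
    lam_shift, e_min = power_iteration(shifted, x.shape, num_steps, seed=1)
    return (lam_max, e_max), (lam_max - lam_shift, e_min)
```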
1.2 Measuring human discrimination thresholds

For each model under consideration, we synthesized extremal eigen-distortions for 6 images from the Kodak image set (downloaded from http://www.cipr.rpi.edu/resource/stills/kodak.html). We then estimated human thresholds for detecting these distortions using a two-alternative forced-choice task. On each trial, subjects were shown (for one second each, and in randomized order) a photographic image, x, and the same image distorted using one of the extremal eigenvectors, x + αê, and then asked to indicate which image appeared more distorted. This procedure was repeated for 120 trials for each distortion vector ê, over a range of α values, with ordering chosen by a standard psychophysical staircase procedure. The proportion of correct responses, as a function of α, was fit with a cumulative Gaussian function, and the subject's detection threshold, T_s(ê; x), was estimated as the point on this function where the subject could distinguish the distorted image 75% of the time. We computed the natural logarithm of the ratio of these discrimination thresholds for the minimal and maximal eigenvectors, and averaged this over images (indexed by i) and subjects (indexed by s):

  D(f) = (1/S)(1/I) Σ_{s=1}^{S} Σ_{i=1}^{I} log( T_s(ê_l^i; x_i) / T_s(ê_m^i; x_i) )

where T_s indicates the threshold measured for human subject s. D(f) provides a measure of a model's ability to predict human performance with respect to distortion detection: the ratio of thresholds for model-generated extremal distortions will be highest when the model is most similar to the human subjects (Fig. 1).
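For concreteness, a minimal sketch of the threshold estimation and the summary statistic D(f) follows (our own code with illustrative names; the staircase procedure itself is outside its scope). The psychometric function is the cumulative Gaussian described above, and the 75%-correct point is read off the fitted curve.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(alpha, mu, sigma):
    # 2AFC proportion correct: from chance (0.5) up to 1.0
    return 0.5 + 0.5 * norm.cdf(alpha, loc=mu, scale=sigma)

def detection_threshold(alphas, prop_correct):
    # fit the cumulative-Gaussian curve; 75% correct occurs at its midpoint mu
    (mu, sigma), _ = curve_fit(psychometric, alphas, prop_correct,
                               p0=[np.median(alphas), np.std(alphas)])
    return mu

def summary_D(T_min, T_max):
    # T_min[s, i], T_max[s, i]: thresholds for the least/most-noticeable
    # eigen-distortions, per subject s and image i
    return np.mean(np.log(T_min / T_max))
```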
2 Probing representational sensitivity of VGG16 layers

We begin by examining discrimination predictions derived from the deep convolutional network known as VGG16. In their paper, Johnson et al. [2016] trained a neural network to generate super-resolution images using the representation of an intermediate layer of VGG16 as a perceptual loss function, and showed that the images this network produced looked significantly better than images generated with simpler loss functions (e.g. pixel-domain mean squared error). Hénaff and Simoncelli [2016] used VGG16 as an image metric to synthesize minimal-length paths (geodesics) between images modified by simple global transformations (rotation, dilation, etc.). The authors found that a modified version of the network produced geodesics that captured these global transformations well (as measured perceptually), especially in deeper layers.

Figure 2: Top: Average log-thresholds for detection of the least-noticeable (red) and most-noticeable (blue) eigen-distortions derived from layers within VGG16 (10 human observers), and a baseline model (MSE) for which predicted distortions in all directions are equally visible.

Implicit in both of these studies, and others like them (e.g., Dosovitskiy and Brox [2016]), is the idea that training a deep neural network to recognize objects may result in a network with other human perceptual qualities. Here, we compare VGG16's sensitivity to distortions directly to human perceptual sensitivity to the same distortions. We transformed luminance-valued images and distortion vectors to proper inputs for VGG16 following the preprocessing steps described in the original paper, and verified that our implementation replicated the published object recognition results. For human perceptual measurements, all images were transformed to produce the same luminance values on our calibrated display as those assumed by the model. We computed eigen-distortions of VGG16 at 6 different layers: the rectified convolutional layer immediately prior to the first max-pooling operation (Front), as well as each subsequent layer following a pooling operation (Layer2–Layer6). A subset of these are shown, both in isolation and superimposed on the image from which they were derived, in Fig. 3. Note that the detectability of these distortions in isolation is not necessarily indicative of their detectability when superimposed on the underlying image, as measured in our experiments.

Figure 3: Eigen-distortions derived from three layers of the VGG16 network (Front, Layer 3, Layer 5) for an example image. Images are best viewed in a display with luminance range from 5 to 300 cd/m² and a gamma exponent of 2.4. Top: Most-noticeable eigen-distortions; all distortion image intensities are scaled by the same amount (×4). Second row: Original image x, and sum of this image with each of the eigen-distortions. Third and fourth rows: Same, for the least-noticeable eigen-distortions; distortion image intensities are scaled the same (×30).

We compared all of these predictions to a baseline model (MSE), where the image transformation, f(x), is replaced by the identity matrix. For this model, every distortion direction is equally discriminable. Human detection thresholds are summarized in Fig. 2, and indicate that all layers surpassed the baseline model in at least one of their predictions. Additionally, the early layers of VGG16 (in particular, Front and Layer3) are better predictors of human sensitivity than the deeper layers (Layer4, Layer5, Layer6). Specifically, the most noticeable eigen-distortions from representations within VGG16 become more discriminable with depth, but so generally do the least-noticeable eigen-distortions. This discrepancy could arise from overlearned invariances, or invariances induced by network architecture (e.g. layer 6, the first stage in the network where the number of output coefficients falls below the number of input pixels, is an under-complete representation). Notably, including the "L2 pooling" modification of Hénaff and Simoncelli [2016] did not significantly modify the eigen-distortions synthesized from VGG16 (data not shown).
3 Probing representational similarity of IQA-optimized models

The results above suggest that training a neural network to recognize objects imparts some ability to predict human sensitivity to distortions. However, we find that deeper layers of the network produce worse predictions than shallower layers. This could be a result of the mismatched training objective function (object recognition) or the particular architecture of the network. Since we clearly cannot probe the entire space of networks that achieve good results on object recognition, we aim instead to probe a more general form of the latter question. Specifically, we train multiple models of differing architecture to predict human image quality ratings, and test their ability to generalize by measuring human sensitivity to their eigen-distortions.

Figure 4: Architecture of a 4-layer Convolutional Neural Network (CNN). Each layer consists of a convolution (5×5 filters), downsampling (2×2), batch normalization, and a rectifying nonlinearity (see text). The network was trained, using batch normalization, to maximize correlation with the TID-2008 database of human image distortion sensitivity.

We constructed a generic 4-layer convolutional neural network (CNN, 436908 parameters; Fig. 4). Within this network, each layer applies a bank of 5×5 convolution filters to the outputs of the previous layer (or, for the first layer, the input image). The convolution responses are subsampled by a factor of 2 along each spatial dimension (the number of filters at each layer is increased by the same factor to maintain a complete representation at each stage). Following each convolution, we employ batch normalization, in which all responses are divided by the standard deviation taken over all spatial positions and all layers, and over a batch of input images (Ioffe and Szegedy [2015]). Finally, outputs are rectified with a softplus nonlinearity, log(1 + exp(x)). After training, the batch normalization factors are fixed to the global mean and variance across the entire training set.

We compare our generic CNN to a model reflecting the structure and computations of the Lateral Geniculate Nucleus (LGN), the visual relay center of the thalamus. Previous results indicate that such models can successfully mimic human judgments of image quality (Laparra et al. [2017]). The full model (On-Off) is constructed from a cascade of linear filtering and nonlinear computational modules (local gain control and rectification). The first stage decomposes the image into two separate channels. Within each channel, the image is filtered by a difference-of-Gaussians (DoG) filter (2 parameters, controlling the spatial size of the Gaussians; DoG filters in On and Off channels are assumed to be of opposite sign). Following this linear stage, the outputs are normalized by two sequential stages of gain control, a known property of LGN neurons (Mante et al. [2008]). Filter outputs are first normalized by a local measure of luminance (2 parameters, controlling filter size and amplitude), and subsequently by a local measure of contrast (2 parameters, again controlling size and amplitude). Finally, the outputs of each channel are rectified by a softplus nonlinearity, for a total of 12 model parameters. In order to evaluate the necessity of each structural element of this model, we also test three reduced sub-models, each trained on the same data (Fig. 5).
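To make the gain-control cascade concrete, here is a minimal sketch of one channel of such an On-Off-style front end (our own illustrative code and parameter names; the paper's filter sizes and trained parameter values are not reproduced here). Each normalization divides the DoG response by a locally pooled signal, the first pooling luminance and the second pooling contrast energy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(img, sigma_center, sigma_surround):
    # difference-of-Gaussians linear stage
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

def lgn_channel(img, sigma_c=1.0, sigma_s=2.0,
                sigma_lum=3.0, k_lum=0.1, sigma_con=3.0, k_con=0.1):
    resp = dog_filter(img, sigma_c, sigma_s)
    # first gain control: divide by a local measure of luminance
    local_lum = gaussian_filter(np.abs(img), sigma_lum)
    resp = resp / (k_lum + local_lum)
    # second gain control: divide by a local measure of contrast (energy)
    local_con = np.sqrt(gaussian_filter(resp ** 2, sigma_con))
    resp = resp / (k_con + local_con)
    # softplus rectification, log(1 + exp(.)), computed stably
    return np.logaddexp(0.0, resp)
```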
Finally, we compare both of these models to a version of VGG16 targeted at image quality assessment (VGG-IQA). This model computes the weighted mean squared error over all rectified convolutional layers of the VGG16 network (13 weight parameters in total), with weights trained on the same perceptual data as the other models.

[Figure 5 panels: On-Off, LGG, LG, LN.] Figure 5: Architecture of our LGN model (On-Off), and several reduced models (LGG, LG, and LN). Each model was trained to maximize correlation with the TID-2008 database of human image distortion sensitivity.

3.1 Optimizing models for IQA

We trained all of the models on the TID-2008 database, which contains a large set of original and distorted images, along with corresponding human ratings of perceived distortion [Ponomarenko et al., 2009]. Perceptual distortion distance for each model was calculated as the Euclidean distance between the model's representations of the original and distorted images:

  D_θ = ‖f(x) − f(x′)‖₂

For each model, we optimized the parameters θ so as to maximize the correlation between that model's reports of perceptual distance, D_θ, and the human mean opinion scores (MOS) reported in the TID-2008 database:

  θ̂ = argmax_θ corr(D_θ, MOS)

Optimization of VGG-IQA weights was performed using non-negative least squares. Optimization of all other models was performed using regularized stochastic gradient ascent with the Adam algorithm (Kingma and Ba [2015]).

3.2 Comparing perceptual predictions of generic and structured models

After training, we evaluated each model's predictive performance using traditional cross-validation methods on a held-out test set of the TID-2008 database. The generic CNN, the structured On-Off model, and the VGG-IQA model all performed well (Pearson correlation: CNN ρ = .86, On-Off: ρ = .82, VGG-IQA: ρ = .84).

Stepping beyond the TID-2008 database, and using the more stringent eigen-distortion test, yielded a very different outcome (Figs. 7, 6 and 8). All of our models surpassed the baseline model in at least one of their predictions; however, the eigen-distortions derived from the generic CNN and VGG-IQA were significantly less predictive of human sensitivity than those derived from the On-Off model (Fig. 6) and, surprisingly, even somewhat less predictive than early layers of VGG16 (see Fig. 8). Thus, the eigen-distortion test reveals generalization failures in the CNN and VGG16 architectures that are not exposed by traditional methods of cross-validation. On the other hand, the models with architectures that mimic biology (On-Off, LGG, LG) are constrained in a way that enables better generalization.

[Figure 6 axes: ln threshold from −4 to 3 against IQA models (MSE, LN, LG, LGG, On-Off, CNN, VGG-IQA).] Figure 6: Top: Average log-thresholds for detection of the least-noticeable (red) and most-noticeable (blue) eigen-distortions derived from IQA models (19 human observers).

We compared these results to the performance of each of our reduced LGN models (Fig. 5), to determine the necessity of each structural element of the full model. As expected, the models incorporating more LGN functional elements performed better on a traditional cross-validation test, with the most complex of the reduced models (LGG) performing at the same level as On-Off and the CNN (LN: ρ = .66, LG: ρ = .74, LGG: ρ = .83). Likewise, models with more LGN functional elements produced eigen-distortions that increased in predictive accuracy (Figs. 6 and 8).
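Both the Section 3.1 training objective and the cross-validation scores just quoted come down to a Pearson correlation between model distances and human MOS values. A minimal sketch (ours, not the authors' code; tensor shapes, batching, and regularization are assumptions):

```python
import torch

def pearson(u, v, eps=1e-8):
    """Pearson correlation between two 1-D tensors."""
    u = u - u.mean()
    v = v - v.mean()
    return (u * v).sum() / (u.norm() * v.norm() + eps)

def iqa_correlation(model, x, x_dist, mos):
    """Perceptual distances D_theta = ||f(x) - f(x')||_2 for a batch of
    original/distorted pairs, scored against mean opinion scores."""
    d = (model(x) - model(x_dist)).flatten(1).norm(dim=1)
    return pearson(d, mos)

# Training maximizes the correlation, e.g. with Adam (Kingma and Ba [2015]):
# opt = torch.optim.Adam(model.parameters())
# loss = -iqa_correlation(model, x, x_dist, mos)
# loss.backward(); opt.step()
```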
It is worth noting that the three LGN models that incorporate some form of local gain control perform significantly better than all layers of VGG16, including the early layers (see Fig. 8).

4 Discussion

Analysis-by-synthesis can provide a powerful form of "Turing test": perceptual measurements on a limited set of model-optimized examples can reveal failures that might not be apparent in measurements on a large set of hand-curated examples. In this paper, we present a new methodology for synthesizing best- and worst-case predictions from perceptual models, and compare those predictions to human perception. We are not the first to introduce a method of this kind. Wang and Simoncelli [2008] introduced Maximum Differentiation (MAD) competition, which creates images optimized for one metric while holding constant the competing metric's rating. Our method relies on a Fisher approximation to generate extremal perturbations, and uses the ratio of their empirically measured discrimination thresholds as an absolute measure of alignment to human sensitivity (as opposed to relative pairwise comparisons of model performance). Our method can easily be generalized to incorporate more physiologically realistic noise assumptions, such as Poisson noise, and could potentially be extended to include noise at each stage of a hierarchical model.

[Figure 7 panels: most-noticeable eigen-distortions (4× e_m) and least-noticeable eigen-distortions (30× e_l) for LG, LGG, On-Off, CNN, and VGG-IQA.] Figure 7: Eigen-distortions for several models trained to maximize correlation with human distortion ratings in TID-2008 [Ponomarenko et al., 2009]. Images are best viewed in a display with luminance range from 5 to 300 cd/m² and a gamma exponent of 2.4. Top: Most-noticeable eigen-distortions. All distortion image intensities are re-scaled by the same amount (×4). Second row: Original image (x), and sum of this image with each eigen-distortion. Third and fourth rows: Same, for the least-noticeable eigen-distortions. Distortion image intensities re-scaled by the same amount (×30).

[Figure 8 axes: ln threshold ratio (D(f)) from 0 to 7 against IQA models (MSE, LN, LG, LGG, On-Off, CNN, VGG-IQA) and VGG16 layers (Front, 2-6).] Figure 8: Average empirical log-threshold ratio (D) for eigen-distortions derived from each IQA-optimized model and each layer of VGG16.

We've used this method to analyze the ability of VGG16, a deep convolutional neural network trained to recognize objects, to account for human perceptual sensitivity. First, we find that the early layers of the network are moderately successful in this regard. Second, these layers (Front, Layer 3) surpassed the predictive power of a generic shallow CNN explicitly trained to predict human perceptual sensitivity, but underperformed models of the LGN trained on the same objective. And third, perceptual sensitivity predictions synthesized from a layer of VGG16 decline in accuracy for deeper layers. We also showed that a highly structured model of the LGN generates predictions that substantially surpass the predictive power of any individual layer of VGG16, as well as a version of VGG16 trained to fit human sensitivity data (VGG-IQA), or a generic 4-layer CNN trained on the same data. These failures of both the shallow and deep neural networks were not seen in traditional cross-validation tests on the human sensitivity data, but were revealed by measuring human sensitivity to model-synthesized eigen-distortions. Finally, we confirmed that known functional properties
of the early visual system (On and Off pathways) and ubiquitous neural computations (local gain control, Carandini and Heeger [2012]) have a direct impact on perceptual sensitivity, a finding that is buttressed by several other published results (Malo et al. [2006], Lyu and Simoncelli [2008], Laparra et al. [2010, 2017], Ballé et al. [2017]).

Most importantly, we demonstrate the utility of prior knowledge in constraining the choice of models. Although the structured models used components similar to generic CNNs, they had far fewer layers and their parameterization was highly restricted, thus allowing a far more limited family of transformations. These structural choices were informed by knowledge of primate visual physiology, and training on human perceptual data was used to determine parameters of the model that are either unknown or underconstrained by current experimental knowledge. Our results imply that this imposed structure serves as a powerful regularizer, enabling these models to generalize much better than generic unstructured networks.

Acknowledgements

The authors would like to thank the members of the LCV and VNL groups at NYU, especially Olivier Hénaff and Najib Majaj, for helpful feedback and comments on the manuscript. Additionally, we thank Rebecca Walton and Lydia Cassard for their tireless efforts in collecting the perceptual data presented here. This work was funded in part by the Howard Hughes Medical Institute, the NEI Visual Neuroscience Training Program and the Samuel J. and Joan B. Williamson Fellowship.

References

J. Ballé, V. Laparra, and E.P. Simoncelli. End-to-end optimized image compression. ICLR 2017, pages 1-27, March 2017.
Matteo Carandini and David J. Heeger. Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13, 2012.
Samuel Dodge and Lina Karam. A study and comparison of human and deep learning recognition performance under visual distortions. arXiv.org, 2017.
Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. NIPS 2016: Neural Information Processing Systems, 2016.
R.A. Fisher. Theory of statistical estimation. Proceedings of the Cambridge Philosophical Society, 22:700-725, 1925.
I.J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. ICLR 2014, December 2014.
Olivier J Hénaff and Eero P Simoncelli. Geodesics of learned representations. ICLR 2016, November 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML 2015, February 2015.
Justin Johnson, Alexandre Alahi, and Fei-Fei Li. Perceptual losses for real-time style transfer and super-resolution. ECCV: The European Conference on Computer Vision, 2016.
Seyed-Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLOS Computational Biology, 10(11):e1003915, November 2014.
Diederik P Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. ICLR 2015, pages 1-15, January 2015.
V. Laparra, A. Berardino, J. Ballé, and E.P. Simoncelli. Perceptually optimized image rendering. Journal of the Optical Society of America A, 34(9):1511-1525, September 2017.
Valero Laparra, Jordi Muñoz-Marí, and Jesús Malo. Divisive normalization image quality metric revisited. Journal of the Optical Society of America A, 27, 2010.
Siwei Lyu and Eero P. Simoncelli. Nonlinear image representation using divisive normalization. Proc.
Computer Vision and Pattern Recognition, 2008.
J. Malo, I. Epifanio, R. Navarro, and E.P. Simoncelli. Nonlinear image representation for efficient perceptual coding. IEEE Transactions on Image Processing, 15, 2006.
Valerio Mante, Vincent Bonin, and Matteo Carandini. Functional mechanisms shaping lateral geniculate responses to artificial and natural stimuli. Neuron, 58(4):625-638, May 2008.
A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. IEEE CVPR, 2015.
N. Ponomarenko, V. Lukin, and A. Zelensky. TID2008 - a database for evaluation of full-reference visual quality assessment metrics. Advances of Modern . . . , 2009.
Javier Portilla and Eero P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. Int'l Journal of Computer Vision, 40(1):49-71, Dec 2000.
Peggy Seriès, Alan A. Stocker, and Eero P. Simoncelli. Is the homunculus "aware" of sensory adaptation? Neural Computation, 2009.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR 2015, September 2015.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv.org, December 2013.
Richard von Mises and H. Pollaczek-Geiringer. Praktische Verfahren der Gleichungsauflösung. ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik, 9:152-164, 1929.
Zhou Wang and Eero P. Simoncelli. Maximum differentiation (MAD) competition: A methodology for comparing computational models of perceptual quantities. Journal of Vision, 2008.
D. L. K. Yamins, H. Hong, C. Cadieu, E.A. Solomon, D. Seibert, and J.J. DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619-8624, June 2014.
Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization

Yossi Arjevani
Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
[email protected]

Abstract

We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes on finite sum optimization problems. First, we show that, perhaps surprisingly, the finite sum structure by itself is not sufficient for obtaining a complexity bound of Õ((n + L/μ) ln(1/ε)) for L-smooth and μ-strongly convex individual functions - one must also know which individual function is being referred to by the oracle at each iteration. Next, we show that for a broad class of first-order and coordinate-descent finite sum algorithms (including, e.g., SDCA, SVRG, SAG), it is not possible to get an 'accelerated' complexity bound of Õ((n + √(nL/μ)) ln(1/ε)), unless the strong convexity parameter is given explicitly. Lastly, we show that when this class of algorithms is used for minimizing L-smooth and convex finite sums, the iteration complexity is bounded from below by Ω(n + L/ε), assuming that (on average) the same update rule is used in any iteration, and Ω(n + √(nL/ε)) otherwise.

1 Introduction

An optimization problem principal to machine learning and statistics is that of finite sums:

  min_{w∈R^d} F(w) := (1/n) Σ_{i=1}^n f_i(w),   (1)

where the individual functions f_i are assumed to possess some favorable analytical properties, such as Lipschitz-continuity, smoothness or strong convexity (see [16] for details). We measure the iteration complexity of a given optimization algorithm by determining how many evaluations of individual functions (via some external oracle procedure, along with their gradient, Hessian, etc.) are needed in order to obtain an ε-solution, i.e., a point w ∈ R^d which satisfies E[F(w) − min_{w∈R^d} F(w)] < ε (where the expectation is taken w.r.t. the algorithm and the oracle randomness).

Arguably, the simplest way of minimizing finite sum problems is by using optimization algorithms for general optimization problems. For concreteness of the following discussion, let us assume for the moment that the individual functions are L-smooth and μ-strongly convex. In this case, by applying vanilla Gradient Descent (GD) or Accelerated Gradient Descent (AGD, [16]), one obtains iteration complexity of

  O(nκ ln(1/ε))  or  Õ(n√κ ln(1/ε)),   (2)

respectively, where κ := L/μ denotes the condition number of the problem and Õ hides logarithmic factors in the problem parameters. However, whereas such bounds enjoy logarithmic dependence on the accuracy level, the multiplicative dependence on n renders this approach unsuitable for modern applications where n is very large.

A different approach to tackle a finite sum problem is by reformulating it as a stochastic optimization problem, i.e., min_{w∈R^d} E_{i∼U([n])}[f_i(w)], and then applying a general stochastic method, such as SGD, which allows iteration complexity of O(1/ε) or O(1/ε²) (depending on the problem parameters). These methods offer rates which do not depend on n, and are therefore attractive for situations where one seeks a solution of relatively low accuracy.
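For reference, the first approach above can be summarized by a bare-bones sketch (ours, purely illustrative), which makes explicit the n individual-gradient evaluations behind every iteration counted in (2):

```python
import numpy as np

def gd_finite_sum(grad_i, n, w0, L, n_iters):
    """Vanilla gradient descent on F(w) = (1/n) * sum_i f_i(w) with step
    size 1/L. grad_i(i, w) returns the gradient of the i-th individual
    function; each iteration touches all n of them, which is the source
    of the multiplicative n in (2)."""
    w = w0
    for _ in range(n_iters):
        g = sum(grad_i(i, w) for i in range(n)) / n
        w = w - g / L
    return w
```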
An evident drawback of these methods is their broad applicability for stochastic optimization problems, which may conflict with the goal of efficiently exploiting the unique noise structure of finite sums (indeed, in the general stochastic setting, these rates cannot be improved, e.g., [1, 18]). In recent years, a major breakthrough was made when stochastic methods specialized in finite sums (first SAG [19] and SDCA [21], and then SAGA [10], SVRG [11], SDCA without duality [20], and others) were shown to obtain iteration complexity of

  Õ((n + κ) ln(1/ε)).   (3)

The ability of these algorithms to enjoy both logarithmic dependence on the accuracy parameter and an additive dependence on n is widely attributed to the fact that the noise of finite sum problems distributes over a finite set of size n. Perhaps surprisingly, in this paper we show that another key ingredient is crucial, namely, a means of knowing which individual function is being referred to by the oracle at each iteration. In particular, this shows that variance-reduction mechanisms (see, e.g., [10, Section 3], and the sketch below) cannot be applied without explicitly knowing the 'identity' of the individual functions. On the more practical side, this result shows that when data augmentation (e.g., [14]) is done without an explicit enumeration of the added samples, it is impossible to obtain iteration complexity as stated in (3); see [7] for relevant upper bounds.

Although variance-reduction mechanisms are essential for obtaining an additive dependence on n (as shown in (3)), they do not necessarily yield 'accelerated' rates which depend on the square root of the condition number (as shown in (2) for AGD). Recently, generic acceleration schemes were used by [13] and accelerated SDCA [22] to obtain iteration complexity of

  Õ((n + √(nκ)) ln(1/ε)).   (4)

The question of whether this rate is optimal was answered affirmatively by [23, 12, 5, 3]. The first category of lower bounds exploits the degree of freedom offered by a d- (or an infinite-) dimensional space to show that any first-order and a certain class of second-order methods cannot obtain better rates than (4) in the regime where the number of iterations is less than O(d/n). The second category of lower bounds is based on maintaining the complexity of the functional form of the iterates, thereby establishing bounds for first-order and coordinate-descent algorithms whose step sizes are oblivious to the problem parameters (e.g., SAG, SAGA, SVRG, SDCA, SDCA without duality) for any number of iterations, regardless of d and n.

In this work, we further extend the theory of oblivious finite sum algorithms, by showing that if a first-order and a coordinate-descent oracle are used, then acceleration is not possible without an explicit knowledge of the strong convexity parameter. This implies that in cases where only a poor estimation of the strong convexity is available, faster rates may be obtained through 'adaptive' algorithms (see relevant discussions in [19, 4]).

Next, we show that in the smooth and convex case, oblivious finite sum algorithms which, on average, apply the same update rule at each iteration (e.g., SAG, SDCA, SVRG, SVRG++ [2], and typically, other algorithms with a variance-reduction mechanism as described in [10, Section 3]), are bound to iteration complexity of Ω(n + L/ε), where L denotes the smoothness parameter (rather than Ω(n + √(nL/ε))).
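To make concrete where such methods use the identity of an individual function, the following SVRG-style loop (our sketch, in the spirit of [11] and [10, Section 3]; not tuned) queries the same index i at two different points - exactly the knowledge that Section 2 shows to be indispensable:

```python
import numpy as np

def svrg(grad_i, n, w, lr, n_epochs, m):
    """Variance-reduced SGD sketch. grad_i(i, w) is the gradient of the
    i-th individual function. The correction term grad_i(i, w_snap)
    requires knowing WHICH function the current stochastic gradient came
    from; an anonymous stochastic oracle cannot supply this."""
    for _ in range(n_epochs):
        w_snap = w.copy()
        g_full = sum(grad_i(i, w_snap) for i in range(n)) / n
        for _ in range(m):
            i = np.random.randint(n)
            w = w - lr * (grad_i(i, w) - grad_i(i, w_snap) + g_full)
    return w
```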
To show this lower bound for stationary algorithms, we employ a restarting scheme (see [4]) which explicitly introduces the strong convexity parameter into algorithms that are designed for smooth and convex functions. Finally, we use this scheme to establish a tight dimension-free lower bound for smooth and convex finite sums which holds for oblivious algorithms with a first-order and a coordinate-descent oracle. To summarize, our contributions (in order of appearance) are the following:

- In Section 2, we prove that in the setting of stochastic optimization, having finitely supported noise (as in finite sum problems) is not sufficient for obtaining linear convergence rates with a linear dependence on n - one must also know exactly which individual function is being referred to by the oracle at each iteration. Deriving similar results for various settings, we show that SDCA, accelerated SDCA, SAG, SAGA, SVRG, SVRG++ and other finite sum algorithms must have a proper enumeration of the individual functions in order to obtain their stated convergence rate.

- In Section 3.1, we lay the foundations of the framework of general CLI algorithms (see [3]), which enables us to formally address oblivious algorithms (e.g., when step sizes are scheduled regardless of the function at hand). In Section 3.2, we improve upon [4] by showing that (in this generalized framework) the optimal iteration complexity of oblivious, deterministic or stochastic, finite sum algorithms with both first-order and coordinate-descent oracles cannot perform better than Ω(n + κ ln(1/ε)), unless the strong convexity parameter is provided explicitly. In particular, the richer expressive power of this framework allows addressing incremental gradient methods, such as Incremental Gradient Descent [6] and Incremental Aggregated Gradient [8, IAG].

- In Section 3.3, we show that, in the L-smooth and convex case, the optimal complexity bound (in terms of the accuracy parameter) of oblivious algorithms whose update rules are (on average) fixed for any iteration is Ω(n + L/ε) (rather than Õ(n + √(nL/ε)), as obtained, e.g., by accelerated SDCA). To show this, we first invoke a restarting scheme (used by [4]) to explicitly introduce strong convexity into algorithms for finite sums with smooth and convex individuals, and then apply the result derived in Section 3.2.

- In Section 3.4, we use the reduction introduced in Section 3.3 to show that the optimal iteration complexity of minimizing L-smooth and convex finite sums using oblivious algorithms equipped with a first-order and a coordinate-descent oracle is Ω(n + √(nL/ε)).

2 The Importance of Individual Identity

In the following, we address the stochastic setting of finite sum problems (1) where one is equipped with a stochastic oracle which, upon receiving a call, returns some individual function chosen uniformly at random and hides its index. We show that not knowing the identity of the function returned by the oracle (as opposed to an incremental oracle which addresses the specific individual functions chosen by the user) significantly harms the optimal attainable performance. To this end, we reduce the statistical problem of estimating the bias of a noisy coin into that of optimizing finite sums. This reduction (presented below) makes extensive use of elementary definitions and tools from information theory, all of which can be found in [9].

First, given n ∈ N, we define the following finite sum problem

  F_σ := (1/n) ( ((n − σ)/2) f⁺ + ((n + σ)/2) f⁻ ),   (5)

where n is w.l.o.g. assumed to be odd, σ ∈ {−1, 1} and f⁺, f⁻
are some functions (to be defined later). We then define the following discrepancy measure between F_1 and F_{−1} for different values of n (see also [1]),

  δ(n) = min_{w∈R^d} { F_1(w) + F_{−1}(w) − F_1* − F_{−1}* },   (6)

where F_σ* := inf_w F_σ(w). It is easy to verify that no solution can be δ(n)/4-optimal for both F_1 and F_{−1} at the same time. Thus, by running a given optimization algorithm long enough to obtain a δ(n)/4-solution w.h.p., we can deduce the value of σ. Also, note that one can simplify the computation of δ(n) by choosing convex f⁺, f⁻ such that f⁺(w) = f⁻(−w). Indeed, in this case we have F_1(w) = F_{−1}(−w) (in particular, F_1* = F_{−1}*), and since F_1(w) + F_{−1}(w) − F_1* − F_{−1}* is convex, it must attain its minimum at w = 0, which yields

  δ(n) = 2(F_1(0) − F_1*).   (7)

Next, we let σ ∈ {−1, 1} be drawn uniformly at random, and then use the given optimization algorithm to estimate the bias of a random variable X which, conditioned on σ, takes +1 w.p. 1/2 + σ/2n, and −1 w.p. 1/2 − σ/2n. To implement the stochastic oracle described above,
40 , otherwise, for (M + ?)-Lipschitz continuous and ?-strongly 4 Proof 1. Define, f ? (w) = 1 > (w ? q) A (w ? q) , 2 where A is a d ? d diagonal matrix whose diagonal entries are ?, 1 . . . , 1, and q = (1, 1, 0, . . . , 0)> is a d-dimensional vector. One can easily verify that f ? are smooth and strongly convex functions with condition number ?, and that   ? >  1 1 ?  1 w? q A w? q + F? (w) = 1 ? 2 q> Aq. 2 n n 2 n Therefore, the minimizer of F? is (?/n)q, and using Equation (7), we see that ?(n) = ?+1 n2 . 2. We define f ? (w) = L 2 kw ? e1 k . 2 One can easily verify that f ? are L-smooth and convex functions, and that the minimizer of F? is (?/n)e1 . By Equation (7), we get ?(n) = nL2 . 3. We define f ? (w) = M kw ? e1 k + ? 2 kwk , 2 over the unit ball. Clearly, f ? are (M + ?)-Lipschitz continuous and ?-strongly convex M functions. It can be verified that the minimizer of F? is (? min{ ?n , 1})e1 . Therefore, by Equation (7), we see that in this case we have ( 2 M M 2 ?n ? 1 . ?(n) = ?n 2M n ? ? o.w. A few conclusions can be readily made from Theorem 1. First, if a given optimization algorithm obtains an iteration complexity of an order of c(n, ?) ln(1/), up to logarithmic factors (including the norm of the minimizer which, in our construction, is of an order of 1/n and coupled with the accuracy parameter), for solving smooth and strongly convex finite sum problems with a stochastic oracle, then   n2 ? c(n, ?) = ? . ln(n2 /(? + 1)) Thus, the following holds for optimization of finite sums with smooth and strongly convex individuals. Corollary 1. In order to obtain linear convergence rate with linear dependence on n, one must know the index of the individual function addressed by the oracle. This implies that variance-reduction methods such as, SAG, SAGA, SDCA and SVRG (possibly combining with acceleration schemes), which exhibit linear dependence on n, cannot be applied when data augmentation is used. In general, this conclusion also holds for cases when one applies general first-order optimization algorithms, such as AGD, on finite sums, as this typically results in a linear dependence on n. Secondly, if a given optimization algorithm obtains an iteration complexity of an order of n + L? kw(0) ? w? k2 /? for solving smooth and convex finite sum problems with a stochastic oracle, then n + L??? n2(??1) = ?(n2 ). Therefore, ? = ? and ? ? 2, indicating that an iteration complexity of an order of n + Lkw(0) ? w? k2 /, as obtained by, e.g., SVRG++, is not attainable with a stochastic oracle. Similar reasoning based on the Lipschitz and strongly convex case in Theorem 1 shows that the iteration complexity guaranteed by accelerated SDCA is also not attainable in this setting. 5 3 Oblivious Optimization Algorithms In the previous section, we discussed different situations under which variance-reduction schemes are not applicable. Now, we turn to study under what conditions can one apply acceleration schemes. First, we define the framework of oblivious CLI algorithms. Next, we show that, for this family of algorithms, knowing the strong convexity parameter is crucial for obtaining accelerated rates. We then describe a restarting scheme through which we establish that stationary algorithms (whose update rule are, on average, the same for every iteration) for smooth and convex functions are sub-optimal. Finally, we use this reduction to derive a tight lower bound for smooth and convex finite sums on the iteration complexity of any oblivious algorithm (not just stationary). 
3.1 Framework In the sequel, following [3], we present the analytic framework through which we derive iteration complexity bounds. This, perhaps pedantic, formulation will allows us to study somewhat subtle distinctions between optimization algorithms. First, we give a rigorous definition for a class of optimization problems which emphasizes the role of prior knowledge in optimization. Definition 1 (Class of Optimization Problems). A class of optimization problems is an ordered triple (F, I, Of ), where F is a family of functions defined over some domain designated by dom(F), I is the side-information given prior to the optimization process and Of is a suitable oracle procedure which upon receiving w ? domF and ? in some parameter set ?, returns Of (w, ?) ? dom(F) for a given f ? F (we shall omit the subscript in Of when f is clear from the context). In finite sum problems, F comprises of functions as defined in (1); the side-information may contain the smoothness parameter L, the strong convexity parameter ? and the number of individual functions n; and the oracle may allow one to query about a specific individual function (as in the case of incremental oracle, and as opposed to the stochastic oracle discussed in Section 2). We now turn to define CLI optimization algorithms (see [3] for a more comprehensive discussion). Definition 2 (CLI). An optimization algorithm is called a Canonical Linear Iterative (CLI) optimization algorithm over a class of optimization problems (F, I, Of ), if given an instance f ? F (0) and initialization points {wi }i?J ? dom(F), where J is some index set, it operates by iteratively generating points such that for any i ? J ,   X (k+1) (k) (k) wi ? Of wj ; ?ij , k = 0, 1, . . . (12) j?J (k) ?ij holds, where ? ? are parameters chosen, stochastically or deterministically, by the algorithm, possibly based on the side-information. If the parameters do not depend on previously acquired oracle answers, we say that the given algorithm is oblivious. For notational convenience, we assume (k) that the solution returned by the algorithm is stored in w1 . Throughout the rest of the paper, we shall be interested in oblivious CLI algorithms (for brevity, we usually omit the ?CLI? qualifier) equipped with the following two incremental oracles: Generalized first-order oracle: O(w; A, B, c, i) := A?fi (w) + Bw + c, Steepest coordinate-descent oracle: O(w; j, i) := w + t? ej , (13) where A, B ? Rd?d , c ? Rd , i ? [n], j ? [d], ej denotes the j?th d-dimensional unit vector and t? ? argmint?R fj (w1 , . . . , wj?1 , wj + t, wj+1 , . . . , wd ). We restrict the oracle parameters such that only one individual function is allowed to be accessed at each iteration. We remark that the family of oblivious algorithms with a first-order and a coordinate-descent oracle is wide and subsumes SAG, SAGA, SDCA, SDCA without duality, SVRG, SVRG++ to name a few. Also, note that coordinate-descent steps w.r.t. partial gradients can be implemented using the generalized first-order oracle by setting A to be some principal minor of the unit matrix (see, e.g., RDCM in [15]). Further, similarly to [3], we allow both first-order and coordinate-descent oracles to be used during the same optimization process. 3.2 No Strong Convexity Parameter, No Acceleration for Finite Sum Problems Having described our analytic approach, we now turn to present some concrete applications. 
Below, we show that in the absence of a good estimation of the strong convexity parameter, the optimal iteration complexity of oblivious algorithms is Ω(n + κ ln(1/ε)). Our proof is based on the technique used in [3, 4] (see [3, Section 2.3] for a brief introduction of the technique).

Given 0 < ε < L, we define the following set of optimization problems (over R^d with d > 1)

  F_η(w) := (1/n) Σ_{i=1}^n ( (1/2) wᵀ Q_η w − qᵀ w ),   (14)

where Q_η is the d×d matrix whose top-left 2×2 block is

  [ (L + η)/2   (η − L)/2 ]
  [ (η − L)/2   (L + η)/2 ],

whose remaining diagonal entries are η and whose remaining entries are 0, and q := (R/√2)(1, 1, 0, ..., 0)ᵀ, parametrized by η ∈ (ε, L) (note that the individual functions are identical; we elaborate more on this below). It can be easily verified that the condition number of F_η, which we denote by κ(F_η), is L/η, and that the corresponding minimizers are w*(η) = (R/(√2 η), R/(√2 η), 0, ..., 0)ᵀ, with norm ≤ R. If we are allowed to use a different optimization algorithm for different η in this setting, then we know that the optimal iteration complexity is of an order of (n + √(nκ(F_η))) ln(1/ε). However, if we are allowed to use only one single algorithm, then we show that the optimal iteration complexity is of an order of n + κ(F_η) ln(1/ε).

The proof goes as follows. First, note that in this setting, the oracles defined in (13) take the following form:

  Generalized first-order oracle:  O(w; A, B, c, i) = A(Q_η w − q) + B w + c,
  Steepest coordinate-descent oracle:  O(w; j, i) = ( I − (1/(Q_η)_{jj}) e_j (Q_η)_{j,·} ) w + ( q_j/(Q_η)_{jj} ) e_j.   (15)

Now, since the oracle answers are linear in η and the k-th iterate is a k-fold composition of sums of the oracle answers, it follows that w_1^(k) forms a d-dimensional vector of univariate polynomials in η of degree ≤ k with (possibly random) coefficients (formally, see Lemma 3, Appendix A). Denoting the polynomial of the first coordinate of E w_1^(k)(η) by s(η), we see that for any η ∈ (ε, L),

  E ‖w_1^(k)(η) − w*(η)‖ ≥ ‖E w_1^(k)(η) − w*(η)‖ ≥ | s(η) − R/(√2 η) | ≥ (R/(√2 L)) | √2 s(η) η / R − 1 |,

where the first inequality follows by Jensen's inequality and the second by focusing on the first coordinate of E w^(k)(η) and w*(η). Lastly, since the coefficients of s(η) do not depend on η, we have by Lemma 4 in Appendix A that there exists δ > 0 such that for any η ∈ (L − δ, L) it holds that

  (R/(√2 L)) | √2 s(η) η / R − 1 | ≥ (R/(√2 L)) ( 1 − 1/κ(F_η) )^{k+1},

by which we derive the following.

Theorem 2. The iteration complexity of oblivious finite sum optimization algorithms equipped with a first-order and a coordinate-descent oracle whose side-information does not contain the strong convexity parameter is Ω̃(n + κ ln(1/ε)).

The n part of the lower bound holds for any type of finite sum algorithm and is proved in [3, Theorem 5]. The lower bound stated in Theorem 2 is tight up to logarithmic factors and is attained by, e.g., SAG [19]. Although relying on a finite sum with identical individual functions may seem somewhat disappointing, it suggests that some variance-reduction schemes can only give optimal dependence in terms of n, and that obtaining optimal dependence in terms of the condition number needs to be done through other (acceleration) mechanisms (e.g., [13]). Lastly, note that this bound holds for any number of iterations (regardless of the problem parameters).

3.3 Stationary Algorithms for Smooth and Convex Finite Sums are Sub-optimal

In the previous section, we showed that not knowing the strong convexity parameter reduces the optimal attainable iteration complexity.
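As an aside, the construction (14) from the previous section is easy to instantiate numerically. In the sketch below (ours, not the paper's code), the lower-right block of Q_η is taken to be ηI, one choice consistent with the stated condition number L/η:

```python
import numpy as np

def hard_instance(L, eta, R, d):
    """Builds F_eta of Eq. (14): the top-left 2x2 block of Q_eta has
    eigenvalues eta and L, so kappa(F_eta) = L/eta; the minimizer is
    (R / (sqrt(2) * eta)) * (1, 1, 0, ..., 0)."""
    Q = eta * np.eye(d)
    Q[:2, :2] = [[(L + eta) / 2, (eta - L) / 2],
                 [(eta - L) / 2, (L + eta) / 2]]
    q = np.zeros(d)
    q[:2] = R / np.sqrt(2)
    F = lambda w: 0.5 * w @ Q @ w - q @ w
    grad = lambda w: Q @ w - q
    w_star = np.zeros(d)
    w_star[:2] = R / (np.sqrt(2) * eta)
    assert np.allclose(grad(w_star), 0.0)   # sanity check on the minimizer
    return F, grad, w_star
```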
In this section, we use this result to show that whereas general optimization algorithms for smooth and convex finite sum problems obtain iteration complexity of Õ(n + √(nL/ε)), the optimal iteration complexity of stationary algorithms (whose expected update rules are fixed) is Ω(n + L/ε). The proof (presented below) is based on a general restarting scheme (see Scheme 1) used in [4]. The scheme allows one to apply algorithms which are designed for L-smooth and convex problems on smooth and strongly convex finite sums by explicitly incorporating the strong convexity parameter. The key feature of this reduction is its ability to 'preserve' the exponent of the iteration complexity - from an order of C(f)(L/ε)^α in the non-strongly convex case to an order of (C(f)κ^α) ln(1/ε) in the strongly convex case, where C(f) denotes some quantity which may depend on f but not on k, and α is some positive constant.

Scheme 1: Restarting scheme
  Given: an optimization algorithm A for smooth convex functions with
    f(w^(k)) − f* ≤ C(f) ‖w^(0) − w*‖² / k²  for any initialization point w^(0).
  Iterate for t = 1, 2, ...:
    Restart the step size schedule of A.
    Initialize A at ŵ^(0).
    Run A for ⌈√(4C(f)/μ)⌉ iterations.
    Set ŵ^(0) to be the point returned by A.

The proof goes as follows. Suppose A is a stationary CLI optimization algorithm for L-smooth and convex finite sum problems equipped with oracles (13). Also, assume that its convergence rate for k ≥ N, N ∈ N, is of an order of n^α L^β ‖w^(0) − w*‖² / k^γ, for some α, β, γ > 0. First, observe that in this case we must have β = 1. For otherwise, we get

  f(w^(k)) − f* = ( (λf)(w^(k)) − (λf)* ) / λ ≤ n^α (λL)^β / (λ k^γ) = λ^{β−1} n^α L^β / k^γ,

implying that, simply by scaling f, one could optimize to any level of accuracy using at most N iterations, which contradicts [3, Theorem 5]. Now, by [4, Lemma 1], Scheme 1 produces a new algorithm whose iteration complexity for smooth and strongly convex finite sums with condition number κ is

  Õ(N + n^α (L/ε)^β)  ⇝  Õ(n^α + n^α κ^β ln(1/ε)).   (16)

Finally, stationary algorithms are invariant under this restarting scheme. Therefore, the new algorithm cannot depend on μ. Thus, by Theorem 2, it must hold that β ≥ 1 and that max{N, n^α} = Ω(n), proving the following.

Theorem 3. If the iteration complexity of a stationary optimization algorithm for smooth and convex finite sum problems equipped with a first-order and a coordinate-descent oracle is of the form of the l.h.s. of (16), then it must be at least Ω(n + L/ε).

We note that this lower bound is tight and is attained by, e.g., SDCA.

3.4 A Tight Lower Bound for Smooth and Convex Finite Sums

We now turn to derive a lower bound for finite sum problems with smooth and convex individual functions using the restarting scheme shown in the previous section. Note that, here, we allow any oblivious optimization algorithm, not just stationary. The technique shown in Section 3.2 of reducing an optimization problem into a polynomial approximation problem was used in [3] to derive lower bounds for various settings. The smooth and convex case was proved only for n = 1, and a generalization for n > 1 seems to reduce to a non-trivial approximation problem. Here, using Scheme 1, we are able to avoid this difficulty by reducing the non-strongly convex case to the strongly convex case, for which a lower bound for a general n is known. The proof follows the same lines as the proof of Theorem 3.
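For concreteness, Scheme 1 can be phrased as a thin wrapper around the base method. The sketch below is our rendering (the interface A(w, k), which runs k iterations of A from w, is an assumption), with μ entering only through the outer loop:

```python
import math

def restarted(A, mu, C_f, w0, n_rounds):
    """Scheme 1 as a wrapper. If A guarantees
    f(w_k) - f* <= C_f * ||w_0 - w*||^2 / k^2, then each inner run of
    ceil(sqrt(4 * C_f / mu)) iterations halves the squared distance to
    the optimum of a mu-strongly convex objective, so
    n_rounds = O(log(1/eps)) rounds yield a linear rate."""
    k = math.ceil(math.sqrt(4 * C_f / mu))
    w = w0
    for _ in range(n_rounds):
        w = A(w, k)   # fresh run: the step-size schedule restarts each round
    return w
```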
Given an oblivious optimization algorithm for finite sums with smooth and convex individuals equipped with oracles (13), we apply Scheme 1 again to get an algorithm for the smooth and strongly convex case, whose iteration complexity is as in (16). Now, crucially, oblivious algorithms are invariant under Scheme 1 (that is, when applied on a given oblivious algorithm, Scheme 1 produces another oblivious algorithm). Therefore, using [3, Theorem 2], we obtain the following.

Theorem 4. If the iteration complexity of an oblivious optimization algorithm for smooth and convex finite sum problems equipped with a first-order and a coordinate-descent oracle is of the form of the l.h.s. of (16), then it must be at least

  Ω( n + √(nL/ε) ).

This bound is tight and is obtained by, e.g., accelerated SDCA [22]. Optimality in terms of L and ε can be obtained simply by applying Accelerated Gradient Descent [16], or alternatively, by using an accelerated version of SVRG as presented in [17]. More generally, one can apply acceleration schemes, e.g., [13], to get an optimal dependence on ε.

Acknowledgments

We thank Raanan Tvizer and Maayan Maliach for several helpful and insightful discussions.

References

[1] Alekh Agarwal, Martin J Wainwright, Peter L Bartlett, and Pradeep K Ravikumar. Information-theoretic lower bounds on the oracle complexity of convex optimization. In Advances in Neural Information Processing Systems, pages 1-9, 2009.
[2] Zeyuan Allen-Zhu and Yang Yuan. Improved SVRG for non-strongly-convex or sum-of-nonconvex objectives. Technical report, arXiv preprint, 2016.
[3] Yossi Arjevani and Ohad Shamir. Dimension-free iteration complexity of finite sum optimization problems. In Advances in Neural Information Processing Systems, pages 3540-3548, 2016.
[4] Yossi Arjevani and Ohad Shamir. On the iteration complexity of oblivious first-order optimization algorithms. In Proceedings of the 33rd International Conference on Machine Learning, pages 908-916, 2016.
[5] Yossi Arjevani and Ohad Shamir. Oracle complexity of second-order methods for finite-sum problems. arXiv preprint arXiv:1611.04982, 2016.
[6] Dimitri P Bertsekas. A new class of incremental gradient methods for least squares problems. SIAM Journal on Optimization, 7(4):913-926, 1997.
[7] Alberto Bietti and Julien Mairal. Stochastic optimization with variance reduction for infinite datasets with finite-sum structure. arXiv preprint arXiv:1610.00970, 2016.
[8] Doron Blatt, Alfred O Hero, and Hillel Gauchman. A convergent incremental gradient method with a constant step size. SIAM Journal on Optimization, 18(1):29-51, 2007.
[9] Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
[10] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646-1654, 2014.
[11] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315-323, 2013.
[12] Guanghui Lan. An optimal randomized incremental gradient method. arXiv preprint arXiv:1507.02000, 2015.
[13] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, pages 3366-3374, 2015.
[14] Gaëlle Loosli, Stéphane Canu, and Léon Bottou.
Training invariant support vector machines using selective sampling. Large scale kernel machines, pages 301-320, 2007.
[15] Yu Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
[16] Yurii Nesterov. Introductory lectures on convex optimization, volume 87. Springer Science & Business Media, 2004.
[17] Atsushi Nitanda. Accelerated stochastic gradient descent for minimizing finite sums. In Artificial Intelligence and Statistics, pages 195-203, 2016.
[18] Maxim Raginsky and Alexander Rakhlin. Information-based complexity, feedback and dynamics in convex programming. Information Theory, IEEE Transactions on, 57(10):7036-7056, 2011.
[19] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, pages 1-30, 2013.
[20] Shai Shalev-Shwartz. SDCA without duality. arXiv preprint arXiv:1502.06177, 2015.
[21] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14(1):567-599, 2013.
[22] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 155(1-2):105-145, 2016.
[23] Blake E Woodworth and Nati Srebro. Tight complexity bounds for optimizing composite objectives. In Advances in Neural Information Processing Systems, pages 3639-3647, 2016.
6,573
6,946
Unsupervised Sequence Classification using Sequential Output Statistics Yu Liu ? , Jianshu Chen ? , and Li Deng? ? Microsoft Research, Redmond, WA 98052, USA? [email protected] ? Citadel LLC, Seattle/Chicago, USA? [email protected] Abstract We consider learning a sequence classifier without labeled data by using sequential output statistics. The problem is highly valuable since obtaining labels in training data is often costly, while the sequential output statistics (e.g., language models) could be obtained independently of input data and thus with low or no cost. To address the problem, we propose an unsupervised learning cost function and study its properties. We show that, compared to earlier works, it is less inclined to be stuck in trivial solutions and avoids the need for a strong generative model. Although it is harder to optimize in its functional form, a stochastic primal-dual gradient method is developed to effectively solve the problem. Experiment results on real-world datasets demonstrate that the new unsupervised learning method gives drastically lower errors than other baseline methods. Specifically, it reaches test errors about twice of those obtained by fully supervised learning. 1 Introduction Unsupervised learning is one of the most challenging problems in machine learning. It is often formulated as the modeling of how the world works without requiring a huge amount of human labeling effort, e.g. [8]. To reach this grand goal, it is necessary to first solve a sub-goal of unsupervised learning with high practical value; that is, learning to predict output labels from input data without requiring costly labeled data. Toward this end, we study in this paper the learning of a sequence classifier without labels by using sequential output statistics. The problem is highly valuable since the sequential output statistics, such as language models, could be obtained independently of the input data and thus with no labeling cost. The problem we consider here is different from most studies on unsupervised learning, which concern automatic discovery of inherent regularities of the input data to learn their representations [13, 28, 18, 17, 5, 1, 31, 20, 14, 12]. When these methods are applied in prediction tasks, either the learned representations are used as feature vectors [22] or the learned unsupervised models are used to initialize a supervised learning algorithm [9, 18, 2, 24, 10]. In both ways, the above unsupervised methods played an auxiliary role in helping supervised learning when it is applied to prediction tasks. Recently, various solutions have been proposed to address the input-to-output prediction problem without using labeled training data, all without demonstrated successes [11, 30, 7]. Similar to this work, the authors in [7] proposed an unsupervised cost that also exploits the sequence prior of the output samples to train classifiers. The power of such a strong prior in the form of language ? ? All the three authors contributed equally to the paper. The work was done while Yu Liu and Li Deng were at Microsoft Research. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. models in unsupervised learning was also demonstrated in earlier studies in [21, 3]. However, these earlier methods did not perform well in practical prediction tasks with real-world data without using additional strong generative models. Possible reasons are inappropriately formulated cost functions and inappropriate choices of optimization methods. 
For example, it was shown in [7] that optimizing the highly non-convex unsupervised cost function could easily get stuck in trivial solutions, although adding a special regularization mitigated the problem somewhat.

The solution provided in this paper fundamentally improves on these prior works in [11, 30, 7] in the following aspects. First, we propose a novel cost function for unsupervised learning, and find that it has a desired coverage-seeking property that makes the learning algorithm less inclined to be stuck in trivial solutions than the cost function in [7]. Second, we develop a special empirical formulation of this cost function that avoids the need for a strong generative model as in [30, 11].³ Third, although the proposed cost function is more difficult to optimize in its functional form, we develop a stochastic primal-dual gradient (SPDG) algorithm to effectively solve the problem. Our analysis of SPDG demonstrates how it is able to reduce the high barriers in the cost function by transforming it into a primal-dual domain. Finally and most importantly, we demonstrate that the new cost function and the associated SPDG optimization algorithm work well in two real-world classification tasks. In the rest of the paper, we proceed to demonstrate these points and discuss related works along the way.

2 Empirical-ODM: An unsupervised learning cost for sequence classifiers

In this section, we extend the earlier work of [30] and propose an unsupervised learning cost named Empirical Output Distribution Match (Empirical-ODM) for training classifiers without labeled data. We first formulate the unsupervised learning problem with sequential output structures. Then, we introduce the Empirical-ODM cost and discuss its important properties that are closely related to unsupervised learning.

2.1 Problem formulation

We consider the problem of learning a sequence classifier that predicts an output sequence $(y_1, \ldots, y_{T_0})$ from an input sequence $(x_1, \ldots, x_{T_0})$ without using labeled data, where $T_0$ denotes the length of the sequence. Specifically, the learning algorithm does not have access to a labeled training set $D_{XY} \triangleq \{((x_1^n, \ldots, x_{T_n}^n), (y_1^n, \ldots, y_{T_n}^n)) : n = 1, \ldots, M\}$, where $T_n$ denotes the length of the $n$-th sequence. Instead, what is available is a collection of input sequences, denoted as $D_X \triangleq \{(x_1^n, \ldots, x_{T_n}^n) : n = 1, \ldots, M\}$. In addition, we assume that the sequential output statistics (or sequence prior), in the form of an $N$-gram probability, are available:

$$p_{LM}(i_1, \ldots, i_N) \triangleq p_{LM}(y_{t-N+1}^n = i_1, \ldots, y_t^n = i_N),$$

where $i_1, \ldots, i_N \in \{1, \ldots, C\}$ and the subscript "LM" stands for language model. Our objective is to train the sequence classifier by just using $D_X$ and $p_{LM}(\cdot)$.

Note that the sequence prior $p_{LM}(\cdot)$, in the form of language models, is a type of structure commonly found in natural language data, which can be learned from a large amount of text data freely available without labeling cost. For example, in optical character recognition (OCR) tasks, $y_t^n$ could be an English character and $x_t^n$ is the input image containing this character. We can estimate an $N$-gram character-level language model $p_{LM}(\cdot)$ from a separate text corpus. Therefore, our learning algorithm will work in a fully unsupervised manner, without any human labeling cost. In our experiment section, we will demonstrate the effectiveness of our method on such a real OCR task. Other potential applications include speech recognition, machine translation, and image/video captioning.
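Since the sequence prior is just an N-gram model estimated from unlabeled text, a few lines of counting suffice to build one. The Python sketch below is a minimal illustration; the toy corpus string and the add-alpha smoothing constant are made up for the example and are not part of the paper's setup.

```python
from collections import Counter
from itertools import product

def estimate_ngram_lm(text, vocab, N=2, alpha=1.0):
    """Estimate p_LM(i_1, ..., i_N) from a text corpus by counting
    N-grams, with add-alpha smoothing (an illustrative choice)."""
    counts = Counter(
        tuple(text[t:t + N]) for t in range(len(text) - N + 1)
        if all(c in vocab for c in text[t:t + N])
    )
    total = sum(counts.values()) + alpha * len(vocab) ** N
    return {g: (counts[g] + alpha) / total for g in product(sorted(vocab), repeat=N)}

# Toy usage: a tiny "separate text corpus", disjoint from the input data.
vocab = set("abc ")
p_lm = estimate_ngram_lm("a cab a cab abba", vocab, N=2)
print(p_lm[("a", " ")])  # prior probability of the bigram "a "
```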
In this paper, we focus on the sequence classifier in the form of $p_\theta(y_t^n \,|\, x_t^n)$; that is, it computes the posterior probability $p_\theta(y_t^n \,|\, x_t^n)$ only based on the current input sample $x_t^n$ in the sequence. Furthermore, we restrict our choice of $p_\theta(y_t^n \,|\, x_t^n)$ to be linear classifiers⁴ and focus our attention on designing and understanding unsupervised learning costs and methods for label-free prediction. In fact, as we will show in later sections, even with linear models, the unsupervised learning problem is still highly nontrivial and the cost function is also highly non-convex. And we emphasize that developing a successful unsupervised learning approach for linear classifiers, as we do in this paper, provides important insights and is an important first step towards more advanced nonlinear models (e.g., deep neural networks). We expect that, in future work, the insights obtained here could help us generalize our techniques to nonlinear models.

³ The work [11] only proposed a conceptual idea of using generative models to integrate the output structure and the output-to-input structure for unsupervised learning in speech recognition. Specifically, the generative models are built from the domain knowledge of the speech waveform generation mechanism. No mathematical formulation or successful experimental results are provided in [11].
⁴ $p_\theta(y_t^n = i \,|\, x_t^n) = e^{w_i^T x_t^n} / \sum_{j=1}^{C} e^{w_j^T x_t^n}$, where the model parameter is $\theta \triangleq \{w_i \in \mathbb{R}^d,\ i = 1, \ldots, C\}$.

A recent work that shares the same motivations as our work is [29], which also recognizes the high cost of obtaining labeled data and seeks label-free prediction. Different from our setting, they exploit domain knowledge from laws of physics in computer vision applications, whereas our approach exploits sequential statistics in the natural language outputs. Finally, our problem is fundamentally different from the sequence transduction method in [15], although it also exploits language models for sequence prediction. Specifically, the method in [15] is a fully supervised learning method in that it requires supervision at the sequence level; that is, for each input sequence, a corresponding output sequence (of possibly different length) is provided as a label. The use of a language model in [15] only serves the purpose of regularization in the sequence-level supervised learning. In stark contrast, the unsupervised learning we propose does not require supervision at any level, including specifically the sequence level; we do not need the sequence labels but only the prior distribution $p_{LM}(\cdot)$ of the output sequences.

2.2 The Empirical-ODM

We now introduce an unsupervised learning cost that exploits the sequence structure in $p_{LM}(\cdot)$. It is mainly inspired by the approach to breaking the Caesar cipher, one of the simplest forms of encryption [23]. The Caesar cipher is a substitution cipher where each letter in the original message is replaced with a letter corresponding to a certain number of letters up or down in the alphabet. For example, the letter "D" is replaced by the letter "A", the letter "E" is replaced by the letter "B", and so on. In this way, the original message that was readable ends up being less understandable. The amount of this shifting is also known to the intended receiver of the message, who can decode the message by shifting back each letter in the encrypted message.
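To make the cipher concrete, here is a small illustrative Python sketch of Caesar encoding together with the frequency-matching attack discussed in the next paragraph; the three-entry letter-frequency table is a made-up stand-in for a prior estimated from a real corpus.

```python
import math
import string

ALPHABET = string.ascii_lowercase

def caesar(msg, shift):
    # Shift each letter by `shift` positions; non-letters pass through.
    return "".join(
        ALPHABET[(ALPHABET.index(c) + shift) % 26] if c in ALPHABET else c
        for c in msg
    )

def break_caesar(cipher_text, p_lm):
    """Try all 26 shifts and keep the decoding whose letters are most
    probable under the prior letter distribution p_lm."""
    def score(text):
        return sum(math.log(p_lm.get(c, 1e-6)) for c in text if c in ALPHABET)
    return max((caesar(cipher_text, -s) for s in range(26)), key=score)

# Hypothetical letter prior (a real one would be estimated from a corpus);
# letters missing from the table fall back to a small probability floor.
p_lm = {"e": 0.13, "t": 0.09, "a": 0.08}
print(break_caesar(caesar("meet me at ten", 3), p_lm))
```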
However, the Caesar cipher can also be broken by an unintended receiver (not knowing the shift) who analyzes the frequencies of the letters in the encrypted messages and matches them up with the letter distribution of the original text [4, pp. 9-11]. More formally, let $y_t = f(x_t)$ denote a function that maps each encrypted letter $x_t$ into an original letter $y_t$. And let $p_{LM}(i) \triangleq p_{LM}(y_t = i)$ denote the prior letter distribution of the original message, estimated from a regular text corpus. When $f(\cdot)$ is constructed in a way that all mapped letters $\{y_t : y_t = f(x_t),\ t = 1, \ldots, T\}$ have the same distribution as the prior $p_{LM}(i)$, it is able to break the Caesar cipher and recover the original letters at the mapping outputs.

Inspired by the above approach, the posterior probability $p_\theta(y_t^n \,|\, x_t^n)$ in our classification problem can be interpreted as a stochastic mapping, which maps each input vector $x_t^n$ (the "encrypted letter") into an output vector $y_t^n$ (the "original letter") with probability $p_\theta(y_t^n \,|\, x_t^n)$. Then, in a sample-wise manner, each input sequence $(x_1^n, \ldots, x_{T_n}^n)$ is stochastically mapped into an output sequence $(y_1^n, \ldots, y_{T_n}^n)$. We move a step further than the above approach by requiring that the distribution of the $N$-grams among all the mapped output sequences be close to the prior $N$-gram distribution $p_{LM}(i_1, \ldots, i_N)$. With this motivation, we propose to learn the classifier $p_\theta(y_t | x_t)$ by minimizing the cross entropy between the prior distribution and the expected $N$-gram frequency of the output sequences:

$$\min_\theta\ \mathcal{J}(\theta) \triangleq -\sum_{i_1, \ldots, i_N} p_{LM}(i_1, \ldots, i_N) \ln \bar{p}_\theta(i_1, \ldots, i_N), \qquad (1)$$

where $\bar{p}_\theta(i_1, \ldots, i_N)$ denotes the expected frequency of a given $N$-gram $(i_1, \ldots, i_N)$ among all the output sequences. In Appendix B of the supplementary material, we derive its expression as

$$\bar{p}_\theta(i_1, \ldots, i_N) \triangleq \frac{1}{T} \sum_{n=1}^{M} \sum_{t=1}^{T_n} \prod_{k=0}^{N-1} p_\theta(y_{t-k}^n = i_{N-k} \,|\, x_{t-k}^n), \qquad (2)$$

where $T \triangleq T_1 + \cdots + T_M$ is the total number of samples in all sequences. Note that minimizing the cross entropy in (1) is also equivalent to minimizing the Kullback-Leibler (KL) divergence between the two distributions, since they only differ by the constant term $\sum p_{LM} \ln p_{LM}$. Therefore, the cost function (1) seeks to estimate $\theta$ by matching the two output distributions, where the expected $N$-gram distribution in (2) is an empirical average over all the samples in the training set. For this reason, we name the cost (1) the Empirical Output Distribution Match (Empirical-ODM) cost.
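As a sanity check on these definitions, the following NumPy sketch evaluates Eqs. (1) and (2) for the linear (softmax) classifier of footnote 4. The shapes, random inputs, and the small constant inside the logarithm are illustrative choices, and the code restricts the inner sum to positions where a full N-gram is available.

```python
import numpy as np

def posteriors(W, X):
    # p_theta(y = i | x_t) for every position t: softmax over C classes.
    logits = X @ W.T                          # shape (T, C)
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def empirical_odm(W, sequences, p_lm, C, N=2):
    """Empirical-ODM cost of Eq. (1): cross entropy between the N-gram
    prior p_lm and the expected N-gram frequency of Eq. (2)."""
    T = sum(len(X) for X in sequences)
    p_bar = np.zeros((C,) * N)
    for X in sequences:
        p = posteriors(W, X)                  # (T_n, C)
        for t in range(N - 1, len(X)):
            # Outer product of N consecutive posteriors gives the joint
            # probability of every N-gram (i_1, ..., i_N) at position t.
            block = p[t - N + 1]
            for k in range(t - N + 2, t + 1):
                block = np.multiply.outer(block, p[k])
            p_bar += block
    p_bar /= T
    return -np.sum(p_lm * np.log(p_bar + 1e-12))

# Toy usage with made-up shapes: C = 3 classes, d = 5 features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
seqs = [rng.normal(size=(20, 5)), rng.normal(size=(15, 5))]
p_lm = rng.random((3, 3)); p_lm /= p_lm.sum()
print(empirical_odm(W, seqs, p_lm, C=3))
```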
In [30], the authors proposed to minimize an output distribution match (ODM) cost, defined as the KL-divergence between the prior output distribution and the marginalized output distribution, $D(p_{LM}(y) \,\|\, p_\theta(y))$, where $p_\theta(y) \triangleq \int p_\theta(y|x)\, p(x)\, dx$. However, evaluating $p_\theta(y)$ requires integrating over the input space using a generative model $p(x)$. Due to the lack of such a generative model, they were not able to optimize this proposed ODM cost. Instead, alternative approaches such as dual autoencoders and GANs were proposed as heuristics. Their results were not successful without using a few labeled data. Our proposed Empirical-ODM cost is different from the ODM cost in [30] in three key aspects. (i) We do not need any labeled data for training. (ii) We exploit the sequence structure of the output statistics, i.e., in our case $y = (y_1, \ldots, y_N)$ ($N$-gram), whereas in [30] $y = y_t$ (unigram, i.e., no sequence structure). This is crucial in developing a working unsupervised learning algorithm. The change from unigram to $N$-gram allows us to explicitly exploit the sequence structures at the output, which makes the technique go from non-working to working (see Table 2 in Section 4). It might also explain why the method in [30] failed, as it does not exploit the sequence structure. (iii) We replace the marginalized distribution $p_\theta(y)$ by the expected $N$-gram frequency in (2). This is critical in that it allows us to directly minimize the divergence between two output distributions without the need for a generative model, which [30] could not do. In fact, we can further show that $\bar{p}_\theta(i_1, \ldots, i_N)$ is an empirical approximation of $p_\theta(y)$ with $y = (y_1, \ldots, y_N)$ (see Appendix B.2 of the supplementary material). In this way, our cost (1) can be understood as an $N$-gram and empirical version of the ODM cost except for an additive constant, i.e., $y$ is replaced by $y = (y_1, \ldots, y_N)$ and $p_\theta(y)$ is replaced by its empirical approximation.

2.3 Coverage-seeking versus mode-seeking

We now discuss an important property of the proposed Empirical-ODM cost (1) by comparing it with the cost proposed in [7]. We show that the Empirical-ODM cost has a coverage-seeking property, which makes it more suitable for unsupervised learning than the mode-seeking cost in [7].

In [7], the authors proposed the expected negative log-likelihood as the unsupervised learning cost function that exploits the output sequential statistics. The intuition was to maximize the aggregated log-likelihood of all the output sequences assumed to be generated by the stochastic mapping $p_\theta(y_t^n \,|\, x_t^n)$. We show in Appendix A of the supplementary material that their cost is equivalent to

$$-\sum_{i_1, \ldots, i_{N-1}} \sum_{i_N} \bar{p}_\theta(i_1, \ldots, i_N) \ln p_{LM}(i_N \,|\, i_{N-1}, \ldots, i_1), \qquad (3)$$

where $p_{LM}(i_N \,|\, i_{N-1}, \ldots, i_1) \triangleq p(y_t^n = i_N \,|\, y_{t-1}^n = i_{N-1}, \ldots, y_{t-N+1}^n = i_1)$, and the summations are over all possible values of $i_1, \ldots, i_N \in \{1, \ldots, C\}$. In contrast, we can rewrite our cost (1) as

$$-\sum_{i_1, \ldots, i_{N-1}} \sum_{i_N} p_{LM}(i_1, \ldots, i_{N-1}) \cdot p_{LM}(i_N \,|\, i_{N-1}, \ldots, i_1) \ln \bar{p}_\theta(i_1, \ldots, i_N), \qquad (4)$$

where we used the chain rule of conditional probabilities. Note that both costs (3) and (4) are in a cross entropy form. However, a key difference is that the positions of the distributions $\bar{p}_\theta(\cdot)$ and $p_{LM}(\cdot)$ are swapped. We show that the cost in the form of (3) proposed in [7] is a mode-seeking divergence between two distributions, while by swapping $\bar{p}_\theta(\cdot)$ and $p_{LM}(\cdot)$, our cost in (4) becomes a coverage-seeking divergence (see [25] for a detailed discussion on divergences with these two different behaviors). To understand this, we consider the following two situations:

• If $p_{LM}(i_N \,|\, i_{N-1}, \ldots, i_1) \to 0$ and $\bar{p}_\theta(i_1, \ldots, i_N) > 0$ for a certain $(i_1, \ldots, i_N)$, the cross entropy in (3) goes to $+\infty$ and the cross entropy in (4) approaches zero.
• If $p_{LM}(i_N \,|\, i_{N-1}, \ldots, i_1) > 0$ and $\bar{p}_\theta(i_1, \ldots, i_N) \to 0$ for a certain $(i_1, \ldots, i_N)$, the cross entropy in (3) approaches zero and the cross entropy in (4) goes to $+\infty$.

Therefore, the cost function (3) will heavily penalize the classifier if it predicts an output that is believed to be less probable by the prior distribution $p_{LM}(\cdot)$, and it will not penalize the classifier when it does not predict an output that $p_{LM}(\cdot)$ believes to be probable. That is, the classifier is encouraged to predict a single output mode with high probability in $p_{LM}(\cdot)$, a behavior called "mode-seeking" in [25]. This probably explains the phenomenon observed in [7]: the training process easily converges to a trivial solution of predicting the same output that has the largest probability in $p_{LM}(\cdot)$.
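The asymmetry between (3) and (4) can be seen already in a two-output, single-step (unigram) simplification. In the toy computation below, the mode-seeking direction actually assigns a lower cost to a classifier that collapses onto the most probable output than to one that matches the prior, while the coverage-seeking direction penalizes the collapse heavily; all numbers are arbitrary.

```python
import math

def cross_entropy(p, q):
    # -sum_i p(i) * ln q(i); by convention 0 * ln 0 = 0.
    return -sum(pi * math.log(qi) if pi > 0 else 0.0 for pi, qi in zip(p, q))

p_lm = [0.6, 0.4]            # prior over two outputs
collapsed = [1.0, 1e-9]      # classifier stuck on the single mode
covering = [0.6, 0.4]        # classifier covering the prior

for name, p_theta in [("collapsed", collapsed), ("covering", covering)]:
    mode_seeking = cross_entropy(p_theta, p_lm)      # Eq. (3): p_theta in front
    coverage_seeking = cross_entropy(p_lm, p_theta)  # Eq. (4): p_LM in front
    print(f"{name:9s}  Eq.(3)={mode_seeking:7.3f}  Eq.(4)={coverage_seeking:7.3f}")
```

Running this, the mode-seeking cost of the collapsed classifier (about 0.51) is lower than that of the covering one (about 0.67), whereas the coverage-seeking cost explodes for the collapse (about 8.3 versus 0.67), matching the discussion above.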
In contrast, the cost (4) will heavily penalize the classifier if it does not predict an output for which $p_{LM}(\cdot)$ is positive, and will penalize it less if it predicts outputs for which $p_{LM}(\cdot)$ is zero. That is, this cost will encourage $p_\theta(y|x)$ to cover as much of $p_{LM}(\cdot)$ as possible, a behavior called "coverage-seeking" in [25]. Therefore, training the classifier using (4) will make it less inclined to learn trivial solutions than that in [7], since such solutions will be heavily penalized. We will verify this fact in our experiment Section 4. In addition, the coverage-seeking property could make the learning less sensitive to the sparseness of language models (i.e., $p_{LM}$ being zero for some $N$-grams), since the cost will not penalize these $N$-grams. In summary, our proposed cost (1) is more suitable for unsupervised learning than that in [7].

2.4 The difficulties of optimizing $\mathcal{J}(\theta)$

However, there are two main challenges in optimizing the Empirical-ODM cost $\mathcal{J}(\theta)$ in (1). The first one is that the sample average (over the entire training data set) in the expression of $\bar{p}_\theta(\cdot)$ (see (2)) is inside the logarithmic loss, which is different from traditional machine learning problems, where the average is outside the loss functions (e.g., $\sum_t f_t(\theta)$). This functional form prevents us from applying stochastic gradient descent (SGD) to minimize (1), as the stochastic gradients would be intrinsically biased (see Appendix C for a detailed discussion and Section 4 for the experiment results). The second challenge is that the cost function $\mathcal{J}(\theta)$ is highly non-convex even with linear classifiers. To see this, we visualize the profile of the cost function $\mathcal{J}(\theta)$ (restricted to a two-dimensional sub-space) around the supervised solution in Figure 1.⁵ ⁶

Figure 1: The profiles of $\mathcal{J}(\theta)$ for the OCR dataset on a two-dimensional affine space passing through the supervised solution. The three panels (a)-(c) show the same profile from different angles, where the red dot is the supervised solution. The contours of the profiles are shown at the bottom.

We observe that there are local optimal solutions and there are high barriers between the local and global optimal solutions. Therefore, besides the difficulty of having the sample average inside the logarithmic loss, minimizing this cost function directly will be difficult, since crossing the high barriers to reach the global optimal solution would be hard if not properly initialized.

3 The Stochastic Primal-Dual Gradient (SPDG) Algorithm

To address the first difficulty in Section 2.4, we transform the original cost (1) into an equivalent min-max problem in order to bring the sample average out of the logarithmic loss. Then, we can obtain unbiased stochastic gradients to solve the problem. To this end, we first introduce the concept of convex conjugate functions. For a given convex function $f(u)$, its convex conjugate function $f^*(\nu)$ is defined as $f^*(\nu) \triangleq \sup_u (\nu^T u - f(u))$ [6, pp. 90-95], where $u$ and $\nu$ are called primal and dual variables, respectively. For the scalar function $f(u) = -\ln u$, the conjugate function can be calculated as $f^*(\nu) = -1 - \ln(-\nu)$ with $\nu < 0$. Furthermore, it holds that $f(u) = \sup_\nu (u^T \nu - f^*(\nu))$, by which we have $-\ln u = \max_\nu (u\nu + 1 + \ln(-\nu))$.⁷

⁵ The approach to visualizing the profile is explained in more detail in Appendix F. More slices and a video of the profiles from many angles can be found in the supplementary material.
⁶ Note that the supervised solution (red dot) coincides with the global optimal solution of $\mathcal{J}(\theta)$. The intuition for this is that the classifier trained by supervised learning should also produce an output $N$-gram distribution that is close to the prior marginal output $N$-gram distribution given by $p_{LM}(\cdot)$.
Substituting it into (1), the original minimization problem becomes the following equivalent min-max problem:

$$\min_\theta \max_{\{\nu_{i_1,\ldots,i_N} < 0\}}\ \mathcal{L}(\theta, V) \triangleq \frac{1}{T} \sum_{n=1}^{M} \sum_{t=1}^{T_n} L_t^n(\theta, V) + \sum_{i_1,\ldots,i_N} p_{LM}(i_1, \ldots, i_N) \ln(-\nu_{i_1,\ldots,i_N}), \qquad (5)$$

where $V \triangleq \{\nu_{i_1,\ldots,i_N}\}$ is a collection of all the dual variables $\nu_{i_1,\ldots,i_N}$, and $L_t^n(\theta, V)$ is the $t$-th component function in the $n$-th sequence, defined as

$$L_t^n(\theta, V) \triangleq \sum_{i_1,\ldots,i_N} p_{LM}(i_1, \ldots, i_N)\, \nu_{i_1,\ldots,i_N} \prod_{k=0}^{N-1} p_\theta(y_{t-k}^n = i_{N-k} \,|\, x_{t-k}^n).$$

In the equivalent min-max problem (5), we find the optimal solution $(\theta^*, V^*)$ by minimizing $\mathcal{L}$ with respect to the primal variable $\theta$ and maximizing $\mathcal{L}$ with respect to the dual variable $V$. The obtained optimal solution to (5), $(\theta^*, V^*)$, is called the saddle point of $\mathcal{L}$ [6]. Once it is obtained, we only keep $\theta^*$, which is also the optimal solution to (1) and thus the model parameter. We further note that the equivalent min-max problem (5) is now in a form that sums over $T = T_1 + \cdots + T_M$ component functions $L_t^n(\theta, V)$. Therefore, the empirical average has been brought out of the logarithmic loss and we are ready to apply stochastic gradient methods. Specifically, we minimize $\mathcal{L}$ with respect to the primal variable $\theta$ by stochastic gradient descent and maximize $\mathcal{L}$ with respect to the dual variable $V$ by stochastic gradient ascent. Therefore, we name the algorithm the stochastic primal-dual gradient (SPDG) method (see its details in Algorithm 1).

Algorithm 1 Stochastic Primal-Dual Gradient Method
1: Input data: $D_X = \{(x_1^n, \ldots, x_{T_n}^n) : n = 1, \ldots, M\}$ and $p_{LM}(i_1, \ldots, i_N)$.
2: Initialize $\theta$ and $V$, where the elements of $V$ are negative.
3: repeat
4:   Randomly sample a mini-batch of $B$ subsequences of length $N$ from all the sequences in the training set $D_X$, i.e., $\mathcal{B} = \{(x_{t_m-N+1}^{n_m}, \ldots, x_{t_m}^{n_m})\}_{m=1}^{B}$.
5:   Compute the stochastic gradients for each subsequence in the mini-batch and average them:
     $$\Delta\theta = \frac{1}{B} \sum_{m=1}^{B} \frac{\partial L_{t_m}^{n_m}}{\partial \theta}, \qquad \Delta V = \frac{1}{B} \sum_{m=1}^{B} \frac{\partial L_{t_m}^{n_m}}{\partial V} + \frac{\partial}{\partial V} \sum_{i_1,\ldots,i_N} p_{LM}(i_1, \ldots, i_N) \ln(-\nu_{i_1,\ldots,i_N}).$$
6:   Update $\theta$ and $V$ according to $\theta \leftarrow \theta - \mu_\theta \Delta\theta$ and $V \leftarrow V + \mu_v \Delta V$.
7: until convergence or a certain stopping condition is met

We implement the SPDG algorithm in TensorFlow, which automatically computes the stochastic gradients.⁸ Finally, the constraint on the dual variables $\nu_{i_1,\ldots,i_N}$ is automatically enforced by the inherent log-barrier, $\ln(-\nu_{i_1,\ldots,i_N})$, in (5) [6]. Therefore, we do not need a separate method to enforce the constraint.

⁷ The supremum is attainable and is thus replaced by maximum.
⁸ The code will be released soon.
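The paper implements SPDG in TensorFlow with automatic differentiation. Purely as an illustration of the updates in Algorithm 1, the following NumPy sketch hand-codes one primal-dual step for the bigram case (N = 2); the step sizes, shapes, and random data are placeholders, and this is not the authors' released code.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def spdg_step(W, nu, batch, p_lm, lr_theta=0.1, lr_v=0.1):
    """One stochastic primal-dual step for the bigram (N = 2) case:
    gradient descent on the primal W, gradient ascent on the dual nu.
    `batch` is a list of (x_prev, x_cur) feature-vector pairs."""
    A = p_lm * nu                       # a(i, j) = p_LM(i, j) * nu_{ij}
    gW = np.zeros_like(W)
    for x1, x2 in batch:
        p1, p2 = softmax(W @ x1), softmax(W @ x2)
        g1 = A @ p2                     # dL/dp1 for L = p1^T A p2
        g2 = A.T @ p1                   # dL/dp2
        # Backprop through each softmax: dL/dz_c = p_c * (g_c - g . p).
        gW += np.outer(p1 * (g1 - g1 @ p1), x1)
        gW += np.outer(p2 * (g2 - g2 @ p2), x2)
    gW /= len(batch)
    # Dual gradient: averaged dL/dnu plus the log-barrier term p_LM / nu.
    g_nu = np.zeros_like(nu)
    for x1, x2 in batch:
        g_nu += np.outer(softmax(W @ x1), softmax(W @ x2)) * p_lm
    g_nu = g_nu / len(batch) + p_lm / nu
    W -= lr_theta * gW                  # primal descent
    nu += lr_v * g_nu                   # dual ascent
    return W, nu

# Toy usage with made-up sizes: C = 3 classes, d = 5 features.
rng = np.random.default_rng(1)
C, d = 3, 5
W, nu = rng.normal(size=(C, d)), -np.ones((C, C))
p_lm = rng.random((C, C)); p_lm /= p_lm.sum()
batch = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(8)]
W, nu = spdg_step(W, nu, batch, p_lm)
```

Note how the log-barrier contributes $p_{LM}/\nu$ to the dual gradient, whose magnitude grows as any $\nu_{ij}$ approaches zero, which is what discourages the iterates from leaving the feasible region $\nu < 0$.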
We now show that the above min-max (primal-dual) reformulation also alleviates the second difficulty discussed in Section 2.4. Similar to the case of $\mathcal{J}(\theta)$, we examine the profile of $\mathcal{L}(\theta, V)$ in (5) (restricted to a two-dimensional sub-space) around the optimal (supervised) solution in Figure 2a (see Appendix F for the visualization details). Comparing Figure 2a to Figure 1, we observe that the profile of $\mathcal{L}(\theta, V)$ is smoother than that of $\mathcal{J}(\theta)$ and the barrier is significantly lower. To further compare $\mathcal{J}(\theta)$ and $\mathcal{L}(\theta, V)$, we plot in Figure 2b the values of $\mathcal{J}(\theta)$ and $\mathcal{L}(\theta, V)$ along the same line $\theta^* + p(\theta_1 - \theta^*)$ for different $p$. It shows that the barrier of $\mathcal{L}(\theta, V)$ along the primal direction is lower than that in $\mathcal{J}(\theta)$. These observations imply that the reformulated min-max problem (5) is better conditioned than the original problem (1), which further justifies the use of the SPDG method.

Figure 2: The profiles of $\mathcal{L}(\theta, V)$ for the OCR dataset. (a) The profile on a two-dimensional affine space passing through the optimal solution (red dot). (b) The profile along the line $\theta^* + p(\theta_1 - \theta^*)$ for different values of $p \in \mathbb{R}$, where the circles are the optimal solutions.

4 Experiments

4.1 Experimental setup

We evaluate our unsupervised learning scheme described in earlier sections using two classification tasks: unsupervised character-level OCR and unsupervised English spelling correction (Spell-Corr). In both tasks, there is no label provided during training. Hence, they are both unsupervised.

For the OCR task, we obtain our dataset from a public database, the UW-III English Document Image Database [27], which contains images for each line of text with the corresponding ground truth. We first use Tesseract [19] to segment the image for each line of text into character tiles and assign each tile one character. We verify the segmentation result by training a simple neural network classifier on the segmented results, achieving a 0.9% error rate on the test set. Then, we select sentence segments that are longer than 100 characters and contain only lowercase English characters and common punctuation (space, comma, and period). As a result, we have a vocabulary of size 29, and we obtain 1,175 sentence segments including 153,221 characters for our OCR task. To represent images, we extract VGG19 features with dim = 4096 and project them into 200-dimension vectors using Principal Component Analysis.

We train the language models (LMs) $p_{LM}(\cdot)$, which provide the required output sequence statistics, from both in-domain and out-of-domain data sources. The out-of-domain data sources are completely different databases, including three different language partitions (CNA, NYT, XIN) of the English Gigaword database [26].

For the Spell-Corr task, we learn to correct the spelling of a mis-spelled text. From the AFP partition of the Gigaword database, we select 500 sentence segments for our Spell-Corr dataset. We select sentences that are longer than 100 characters and contain only English characters and common punctuation, resulting in a total of 83,567 characters. The mis-spelled texts are generated by substitution simulations and are treated as our inputs. The objective of this task is to recover the original text.

4.2 Results: Comparing optimization algorithms

In the first set of experiments, we aim to evaluate the effectiveness of the SPDG method as described in Section 3, which is designed for optimizing the Empirical-ODM cost in Section 2. The analysis provided in Sections 2 and 3 sheds insight into why SPDG is superior to the method in [7] and to the standard stochastic gradient descent (SGD) method. The coverage-seeking behavior of the proposed Empirical-ODM cost helps avoid trivial solutions, and the simultaneous optimization of primal-dual variables reduces the barriers in the highly non-convex profile of the cost function. Furthermore, we do not include the methods from [30], because their approaches could not achieve satisfactory results without a few labeled data, while we only consider the fully unsupervised learning setting.
In addition, the methods in [30] do not optimize the ODM cost and do not exploit the output sequential statistics.

Table 1 provides strong experimental evidence demonstrating the substantially greater effectiveness of the primal-dual method over SGD and the method in [7] on both tasks. All these results are obtained by training the models until convergence. Let us examine the results on the OCR task in detail. First, SPDG on the unsupervised cost function achieves a 9.21% error rate, much lower than the error rates of any of the mini-batch SGD runs, where the size of the mini-batches ranges from 10 to 10,000. Note that larger mini-batch sizes produce lower errors here because the stochastic gradient becomes closer to the full-batch gradient and thus has lower bias. On the other hand, when the mini-batch size is as small as 10, the high error rate of 83.09% is close to a guess by majority rule, i.e., predicting the character (space) that has the largest proportion in the training set: 25,499/153,221 = 83.37%. Furthermore, the method from [7] does not perform well no matter how we tune the hyperparameters for the generative regularization. Finally, and perhaps most interestingly, with no labels provided in training, the classification errors produced by our method are only about twice those of fully supervised learning (4.63%, shown in Table 1). This clearly demonstrates that the unsupervised learning scheme proposed in this paper is an effective one. For the spelling correction dataset (see the third column in Table 1), we observe results consistent with those on the OCR dataset.

Table 1: Test error rates on two datasets: OCR and Spell-Corr. The 2-gram character LM is trained from in-domain data. The numbers inside ⟨·⟩ are the mini-batch sizes of the SGD method.

Method                 OCR       Spell-Corr
SPDG (ours)            9.59%     1.94%
Method from [7]        83.37%    82.91%
SGD ⟨10⟩               83.09%    82.91%
SGD ⟨100⟩              78.05%    72.93%
SGD ⟨1k⟩               67.14%    65.69%
SGD ⟨10k⟩              56.48%    45.24%
Supervised learning    4.63%     0.00%
Majority guess         83.37%    82.91%

4.3 Results: Comparing orders of language modeling

In the second set of experiments, we examine to what extent the use of sequential statistics (e.g., 2- and 3-gram LMs) can do better than the unigram LM (no sequential information) in unsupervised learning. The unsupervised prediction results are shown in Table 2, using different data sources to estimate the N-gram LM parameters. Consistently across all four ways of estimating reliable N-gram LMs, we observe significantly lower error rates when the unsupervised learning exploits a 2-gram or 3-gram LM as sequential statistics, compared with exploiting a prior with no sequential statistics (i.e., 1-gram). In three of four cases, exploiting a 3-gram LM gives better results than a 2-gram LM. Furthermore, the error rate obtained with a 3-gram LM estimated from out-of-domain output character data (10.17% in Table 2) is comparable to that obtained with in-domain output character data (9.59% in Table 1), indicating that the effectiveness of the unsupervised learning paradigm presented in this paper is robust to the quality of the LM acting as the sequential prior.

Table 2: Test error rates on the OCR dataset. Character-level language models (LMs) of different orders are trained from three out-of-domain datasets and from the fused in-domain and out-of-domain data.
            No. Sents    No. Chars     1-gram    2-gram    3-gram
NYT-LM      1,206,903    86,005,542    71.83%    10.93%    10.17%
XIN-LM      155,647      18,626,451    72.14%    12.55%    12.89%
CNA-LM      12,234       1,911,124     71.51%    10.56%    10.29%
Fused-LM    15,409       2,064,345     71.25%    10.33%    9.21%

5 Conclusions and future work

In this paper, we study the problem of learning a sequence classifier without the need for labeled training data. The practical benefit of such unsupervised learning is tremendous. For example, in large-scale speech recognition systems, the currently dominant supervised learning methods typically require a few thousand hours of training data, where each utterance in acoustic form needs to be labeled by humans. Although there are millions of hours of natural speech data available for training, labeling all of them for supervised learning is hardly feasible. To make effective use of such huge amounts of acoustic data, the practical unsupervised learning approach discussed in this paper would be called for. Other potential applications such as machine translation and image and video captioning could also benefit from our paradigm. This is mainly because of their common natural language output structure, from which we can exploit the sequential structures for learning the classifier without labels. For other (non-natural-language) applications where there is also a sequential output structure, our proposed approach could be applicable in a similar manner.

Furthermore, our proposed Empirical-ODM cost function significantly improves over the one in [7] by emphasizing the coverage-seeking behavior. Although the new cost function has a functional form that is more difficult to optimize, a novel SPDG algorithm is developed to effectively address the problem. An analysis of the profiles of the cost functions sheds insight into why SPDG works well and why previous methods failed. Finally, we demonstrate on two datasets that our unsupervised learning method is highly effective, producing only about twice the errors of fully supervised learning, which no previous unsupervised learning method could achieve without additional steps of supervised learning.

While the current work is restricted to linear classifiers, we intend to generalize the approach to nonlinear models (e.g., deep neural nets [16]) in future work. We also plan to extend our current method from exploiting N-gram LMs to exploiting state-of-the-art neural LMs. Finally, one challenge that remains to be addressed is the scaling of the current method to large vocabularies and high-order LMs (i.e., large $C$ and $N$). In this case, the summation over all $(i_1, \ldots, i_N)$ in (5) becomes computationally expensive. A potential solution is to parameterize the dual variable $\nu_{i_1,\ldots,i_N}$ by a recurrent neural network and approximate the sum using beam search, which we leave as future work.

Acknowledgments

The authors would like to thank all the anonymous reviewers for their constructive feedback.

References

[1] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, January 2009.
[2] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), pages 153-160, 2007.
[3] Taylor Berg-Kirkpatrick, Greg Durrett, and Dan Klein. Unsupervised transcription of historical documents. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 207-217, 2013.
[4] Albrecht Beutelspacher. Cryptology.
Mathematical Association of America, 1994.
[5] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022, March 2003.
[6] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[7] Jianshu Chen, Po-Sen Huang, Xiaodong He, Jianfeng Gao, and Li Deng. Unsupervised learning of predictors from unpaired input-output samples. arXiv preprint arXiv:1606.04646, 2016.
[8] Soumith Chintala and Yann LeCun. A path to unsupervised learning through adversarial networks. https://code.facebook.com/posts/1587249151575490/a-path-to-unsupervisedlearning-through-adversarial-networks/, 2016.
[9] George E. Dahl, Dong Yu, Li Deng, and Alex Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30-42, 2012.
[10] Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), pages 3079-3087, 2015.
[11] Li Deng. Deep learning for speech and language processing. Tutorial at Interspeech, Dresden, Germany, https://www.microsoft.com/en-us/research/wpcontent/uploads/2016/07/interspeech-tutorial-2015-lideng-sept6a.pdf, Aug-Sept 2015.
[12] Ian Goodfellow. Generative adversarial nets. Tutorial at NIPS, http://www.cs.toronto.edu/~dtarlow/pos14/talks/goodfellow.pdf, 2016.
[13] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
[14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), pages 2672-2680, 2014.
[15] Alex Graves. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711, 2012.
[16] Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-Rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, November 2012.
[17] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[18] Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[19] Anthony Kay. Tesseract: An open-source optical character recognition engine. Linux Journal, 2007.
[20] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[21] Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. Unsupervised analysis for decipherment problems. In Proceedings of the COLING/ACL, pages 499-506, 2006.
[22] Quoc Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg Corrado, Jeff Dean, and Andrew Ng. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.
[23] Dennis Luciano and Gordon Prichett. Cryptology: From Caesar ciphers to public-key cryptosystems. The College Mathematics Journal, 18(1):2-17, 1987.
[24] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[25] Tom Minka.
Divergence measures and message passing. Technical report, Microsoft Research, 2005.
[26] Robert Parker et al. English Gigaword fourth edition, LDC2009T13. Philadelphia: Linguistic Data Consortium, 2009.
[27] Ihsin Phillips, Bhabatosh Chanda, and Robert Haralick. UW-III English Document Image Database. http://isis-data.science.uva.nl/events/dlia//datasets/uwash3.html.
[28] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, pages 194-281. 1986.
[29] Russell Stewart and Stefano Ermon. Label-free supervision of neural networks with physics and domain knowledge. In Proceedings of AAAI, 2017.
[30] Ilya Sutskever, Rafal Jozefowicz, Karol Gregor, Danilo Rezende, Tim Lillicrap, and Oriol Vinyals. Towards principled unsupervised learning. arXiv preprint arXiv:1511.06440, 2015.
[31] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371-3408, 2010.
Subset Selection under Noise

Chao Qian¹, Jing-Cheng Shi², Yang Yu², Ke Tang³,¹, Zhi-Hua Zhou²
¹ Anhui Province Key Lab of Big Data Analysis and Application, USTC, China
² National Key Lab for Novel Software Technology, Nanjing University, China
³ Shenzhen Key Lab of Computational Intelligence, SUSTech, China
[email protected], [email protected], {shijc,yuy,zhouzh}@lamda.nju.edu.cn

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

The problem of selecting the best k-element subset from a universe is involved in many applications. While previous studies assumed a noise-free environment or a noisy monotone submodular objective function, this paper considers a more realistic and general situation where the evaluation of a subset is a noisy monotone function (not necessarily submodular), with both multiplicative and additive noises. To understand the impact of the noise, we firstly show the approximation ratios of the greedy algorithm and POSS, two powerful algorithms for noise-free subset selection, in the noisy environments. We then propose to incorporate a noise-aware strategy into POSS, resulting in the new PONSS algorithm. We prove that PONSS can achieve a better approximation ratio under some assumption such as i.i.d. noise distribution. The empirical results on influence maximization and sparse regression problems show the superior performance of PONSS.

1 Introduction

Subset selection is to select a subset of size at most $k$ from a total set of $n$ items for optimizing some objective function $f$, which arises in many applications, such as maximum coverage [10], influence maximization [16], sparse regression [17], ensemble pruning [23], etc. Since it is generally NP-hard [7], much effort has been devoted to the design of polynomial-time approximation algorithms.

The greedy algorithm is most favored for its simplicity, which iteratively chooses one item with the largest immediate benefit. Despite the greedy nature, it can perform well in many cases. For a monotone submodular objective function $f$, it achieves the $(1 - 1/e)$-approximation ratio, which is optimal in general [18]; for sparse regression, where $f$ can be non-submodular, it has the best-so-far approximation bound $1 - e^{-\gamma}$ [6], where $\gamma$ is the submodularity ratio.

Recently, a new approach, Pareto Optimization for Subset Selection (POSS), has been shown superior to the greedy algorithm [21, 24]. It reformulates subset selection with two simultaneous objectives, i.e., optimizing the given objective and minimizing the subset size, and employs a randomized iterative algorithm to solve this bi-objective problem. POSS is proved to achieve the same general approximation guarantee as the greedy algorithm, and is shown better on some subclasses [5]. The Pareto optimization method has also been successfully applied to solve subset selection with general cost constraints [20] as well as ratio optimization of monotone set functions [22].

Most of the previous studies assumed that the objective function is noise-free. However, we can only have a noisy evaluation in many realistic applications. For example, for influence maximization, computing the influence spread objective is #P-hard [2], and thus it is often estimated by simulating the random diffusion process [16], which brings noise; for sparse regression, only a set of limited data can be used for evaluation, which makes the evaluation noisy; and more examples include maximizing information gain in graphical models [4], crowdsourced image collection summarization [26], etc.
To the best of our knowledge, only a few studies addressing noisy subset selection have been reported, and they assumed monotone submodular objective functions. Under the general multiplicative noise model (i.e., the noisy objective value $F(X)$ is in the range of $(1 \pm \epsilon) f(X)$), it was proved that no polynomial-time algorithm can achieve a constant approximation ratio for any $\epsilon > 1/\sqrt{n}$, while the greedy algorithm can achieve a $(1 - 1/e - 16\delta)$-approximation ratio for $\epsilon = \delta/k$ as long as $\delta < 1$ [14]. By assuming that $F(X)$ is a random variable (i.e., random noise) and the expectation of $F(X)$ is the true value $f(X)$, it was shown that the greedy algorithm can achieve nearly a $(1 - 1/e)$-approximation guarantee via uniform sampling [16] or adaptive sampling [26]. Recently, Hassidim and Singer [13] considered the consistent random noise model, where for each subset $X$, only the first evaluation is a random draw from the distribution of $F(X)$ and the other evaluations return the same value. For some classes of noise distribution, they provided polynomial-time algorithms with constant approximations.

In this paper, we consider a more general situation, i.e., noisy subset selection with a monotone objective $f$ (not necessarily submodular), for both the multiplicative noise and the additive noise (i.e., $F(X)$ is in the range of $f(X) \pm \epsilon$) models. The main results are:

• Firstly, we extend the approximation ratio of the greedy algorithm from the submodular case [14] to the general situation (Theorems 1, 2), and also slightly improve it.
• Secondly, we prove that the approximation ratio of POSS is nearly the same as that of the greedy algorithm (Theorems 3, 4). Moreover, on two maximum coverage cases, we show that POSS can have a better ability of avoiding the misleading search direction due to the noise (Propositions 1, 2).
• Thirdly, we introduce a noise-aware comparison strategy into POSS, and propose the new PONSS algorithm for noisy subset selection. When comparing two solutions with close noisy objective values, POSS selects the solution with the better observed value, while PONSS keeps both of them such that the risk of deleting a good solution is reduced. With some assumption such as i.i.d. noise distribution, we prove that PONSS can obtain a $\frac{1-\epsilon}{1+\epsilon}(1 - e^{-\gamma})$-approximation ratio under multiplicative noise (Theorem 5). Particularly for the submodular case (i.e., $\gamma = 1$) and $\epsilon$ being a constant, PONSS has a constant approximation ratio. Note that for the greedy algorithm and POSS under general multiplicative noise, they only guarantee a $\Theta(1/k)$ approximation ratio. We also prove the approximation ratio of PONSS under additive noise (Theorem 6).

We have conducted experiments on influence maximization and sparse regression problems, two typical subset selection applications with the objective function being submodular and non-submodular, respectively. The results on real-world data sets show that POSS is better than the greedy algorithm in most cases, and PONSS clearly outperforms POSS and the greedy algorithm.

We start the rest of the paper by introducing the noisy subset selection problem. We then present in three subsequent sections the theoretical analyses for the greedy, POSS and PONSS algorithms, respectively. We further empirically compare these algorithms. The final section concludes this paper.

2 Noisy Subset Selection

Given a finite nonempty set $V = \{v_1, \ldots, v_n\}$, we study the functions $f : 2^V \to \mathbb{R}$ defined on subsets of $V$.
A set function f: 2^V → ℝ is monotone if for any X ⊆ Y, f(X) ≤ f(Y). In this paper, we consider monotone functions and assume that they are normalized, i.e., f(∅) = 0. A set function f: 2^V → ℝ is submodular if for any X ⊆ Y, f(Y) − f(X) ≤ Σ_{v∈Y\X} (f(X ∪ {v}) − f(X)) [19]. The submodularity ratio in Definition 2 characterizes how close a set function f is to submodularity. It is easy to see that f is submodular iff γ_{X,k}(f) = 1 for any X and k. For some concrete non-submodular applications, bounds on γ_{X,k}(f) have been derived [1, 9]. When f is clear from the context, we write γ_{X,k} for short.

Definition 2 (Submodularity Ratio [6]). Let f be a non-negative set function. The submodularity ratio of f with respect to a set X and a parameter k ≥ 1 is
    γ_{X,k}(f) = min_{L⊆X, S: |S|≤k, S∩L=∅} [Σ_{v∈S} (f(L ∪ {v}) − f(L))] / [f(L ∪ S) − f(L)].

In many applications of subset selection, we cannot obtain the exact objective value f(X), but only a noisy one F(X). In this paper, we study the multiplicative noise model, i.e.,
    (1 − ε)·f(X) ≤ F(X) ≤ (1 + ε)·f(X),    (2)
as well as the additive noise model, i.e.,
    f(X) − ε ≤ F(X) ≤ f(X) + ε.    (3)

3 The Greedy Algorithm

The greedy algorithm, shown in Algorithm 1, iteratively adds the item with the largest F improvement until k items are selected. It achieves the best approximation ratio for many subset selection problems without noise [6, 18]. However, its performance for noisy subset selection was not theoretically analyzed until recently.

Algorithm 1 Greedy Algorithm
Input: all items V = {v1, ..., vn}, a noisy objective function F, and a budget k
Output: a subset of V with k items
Process:
1: Let i = 0 and X_i = ∅.
2: repeat
3:   Let v* = arg max_{v∈V\X_i} F(X_i ∪ {v}).
4:   Let X_{i+1} = X_i ∪ {v*}, and i = i + 1.
5: until i = k
6: return X_k

Let OPT = max_{X:|X|≤k} f(X) denote the optimal function value of Eq. (1). Horel and Singer [14] proved that for subset selection with a submodular objective function under the multiplicative noise model, the greedy algorithm finds a subset X with
    f(X) ≥ [1 / (1 + 4kε/(1−ε)²)] · (1 − ((1−ε)/(1+ε))^{2k} · (1 − 1/k)^k) · OPT.    (4)
Note that their original bound in Theorem 5 of [14] is w.r.t. F(X); we have switched to f(X) by multiplying a factor of (1−ε)/(1+ε) according to Eq. (2). By extending their analysis with the submodularity ratio, we prove in Theorem 1 an approximation bound of the greedy algorithm for an objective f that is not necessarily submodular. Note that their analysis is based on an inductive inequality on F, while we use one directly on f, which brings a slight improvement. For the submodular case, γ_{X,k} = 1 and the bound in Theorem 1 becomes
    f(X) ≥ [ ((1−ε)/(1+ε))·(1/k) / (1 − ((1−ε)/(1+ε))·(1 − 1/k)) ] · (1 − ((1−ε)/(1+ε))^k · (1 − 1/k)^k) · OPT.
Compared with the bound of Eq. (4) in [14], our bound is tighter, since
    (1 − ((1−ε)/(1+ε))^k (1−1/k)^k) / (1 − ((1−ε)/(1+ε))(1−1/k))
      = Σ_{i=0}^{k−1} (((1−ε)/(1+ε))·(1−1/k))^i
      ≥ Σ_{i=0}^{k−1} (((1−ε)/(1+ε))²·(1−1/k))^i
      = (1 − ((1−ε)/(1+ε))^{2k} (1−1/k)^k) / (1 − ((1−ε)/(1+ε))²·(1−1/k)),
and ((1−ε)/(1+ε))·(1 + 4kε/(1−ε)²) ≥ k·(1 − ((1−ε)/(1+ε))²·(1−1/k)), so the coefficient of OPT in our bound dominates that in Eq. (4).
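As a concrete illustration, here is a minimal Python sketch of Algorithm 1 run against a noisy oracle. The noise wrapper `with_multiplicative_noise` and the toy coverage objective are our own illustrative assumptions; the paper does not prescribe an implementation.

```python
import random

def greedy(V, F, k):
    """Algorithm 1: iteratively add the item with the largest noisy gain F."""
    X = set()
    for _ in range(k):
        # arg max over remaining items of the noisy value F(X ∪ {v})
        v_star = max((v for v in V if v not in X), key=lambda v: F(X | {v}))
        X.add(v_star)
    return X

def with_multiplicative_noise(f, eps):
    """Noise model of Eq. (2): F(X) lies in [(1-eps)f(X), (1+eps)f(X)]."""
    return lambda X: f(X) * (1 + random.uniform(-eps, eps))

# Toy example (our own): f counts distinct covered elements (monotone submodular).
sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'d'}, 4: {'a'}}
f = lambda X: len(set().union(*(sets[i] for i in X)))
F = with_multiplicative_noise(f, eps=0.1)
print(greedy(set(sets), F, k=2))
```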
Due to space limitations, the proof of Theorem 1 is provided in the supplementary material. We also show in Theorem 2 the approximation ratio under additive noise. The proof is similar to that of Theorem 1, except that Eq. (3) is used instead of Eq. (2) for comparing f(X) with F(X).

Theorem 1. For subset selection under multiplicative noise, the greedy algorithm finds a subset X with
    f(X) ≥ [ ((1−ε)/(1+ε))·(γ_{X,k}/k) / (1 − ((1−ε)/(1+ε))·(1 − γ_{X,k}/k)) ] · (1 − ((1−ε)/(1+ε))^k · (1 − γ_{X,k}/k)^k) · OPT.

Theorem 2. For subset selection under additive noise, the greedy algorithm finds a subset X with
    f(X) ≥ (1 − (1 − γ_{X,k}/k)^k) · (OPT − (2k/γ_{X,k})·ε).

4 The POSS Algorithm

Let a Boolean vector x ∈ {0,1}^n represent a subset X of V, where x_i = 1 if v_i ∈ X and x_i = 0 otherwise. The Pareto Optimization method for Subset Selection (POSS) [24] reformulates the original problem Eq. (1) as a bi-objective maximization problem:
    arg max_{x∈{0,1}^n} (f1(x), f2(x)), where f1(x) = −∞ if |x| ≥ 2k and f1(x) = F(x) otherwise, and f2(x) = −|x|.
That is, POSS maximizes the original objective and minimizes the subset size simultaneously. Setting f1 to −∞ excludes overly infeasible solutions. For convenience, we do not distinguish between x ∈ {0,1}^n and the subset it represents.

In the bi-objective setting, the domination relationship presented in Definition 3 is used to compare two solutions. For |x| < 2k and |y| ≥ 2k, it trivially holds that x ⪰ y. For |x|, |y| < 2k, x ⪰ y if F(x) ≥ F(y) ∧ |x| ≤ |y|, and x ≻ y if x ⪰ y and (F(x) > F(y) ∨ |x| < |y|).

Definition 3 (Domination). For two solutions x and y,
• x weakly dominates y (denoted x ⪰ y) if f1(x) ≥ f1(y) ∧ f2(x) ≥ f2(y);
• x dominates y (denoted x ≻ y) if x ⪰ y and f1(x) > f1(y) ∨ f2(x) > f2(y).

Algorithm 2 POSS Algorithm
Input: all items V = {v1, ..., vn}, a noisy objective function F, and a budget k
Parameter: the number T of iterations
Output: a subset of V with at most k items
Process:
1: Let x = {0}^n, P = {x}, and let t = 0.
2: while t < T do
3:   Select x from P uniformly at random.
4:   Generate x′ by flipping each bit of x with probability 1/n.
5:   if ∄ z ∈ P such that z ≻ x′ then
6:     P = (P \ {z ∈ P | x′ ⪰ z}) ∪ {x′}.
7:   end if
8:   t = t + 1.
9: end while
10: return arg max_{x∈P, |x|≤k} F(x)

POSS, as described in Algorithm 2, uses a randomized iterative procedure to optimize the bi-objective problem. It starts from the empty set {0}^n (line 1). In each iteration, a new solution x′ is generated by randomly flipping bits of an archived solution x selected from the current population P (lines 3-4); if x′ is not dominated by any previously archived solution (line 5), it is added into P, and the solutions weakly dominated by x′ are removed (line 6). After T iterations, the solution in P with the largest F value satisfying the size constraint is returned (line 10).

In [21, 24], POSS using E[T] ≤ 2ek²n was proved to achieve the same approximation ratio as the greedy algorithm for subset selection without noise, where E[T] denotes the expected number of iterations. However, its approximation performance under noise was not known. Let γ_min = min_{X:|X|=k−1} γ_{X,k}. We first show in Theorem 3 the approximation ratio of POSS under multiplicative noise; the proof is provided in the supplementary material due to space limitations. The approximation ratio of POSS under additive noise is shown in Theorem 4, whose proof is similar to that of Theorem 3 except that Eq. (3) is used instead of Eq. (2).
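The following is a minimal Python sketch of Algorithm 2. One assumption we make explicit: each solution's noisy value F(x) is evaluated once, when the solution is generated, and cached, so that an iteration costs one evaluation; this matches the evaluation accounting used in the experiments but is our own reading of the pseudocode.

```python
import random

def poss(n, F, k, T):
    """Algorithm 2 (sketch): bi-objective Pareto optimization."""
    def objs(x, fx):  # (f1, f2) from the bi-objective reformulation
        return (float('-inf') if sum(x) >= 2 * k else fx, -sum(x))

    weakly_dom = lambda a, b: a[0] >= b[0] and a[1] >= b[1]   # a weakly dominates b
    dom = lambda a, b: weakly_dom(a, b) and a != b            # a dominates b

    zero = tuple([0] * n)
    P = {zero: F(zero)}                     # archive: solution -> cached noisy value
    for _ in range(T):
        x = random.choice(list(P))
        x2 = tuple(b ^ (random.random() < 1.0 / n) for b in x)  # flip bits w.p. 1/n
        fx2 = F(x2)
        if not any(dom(objs(z, fz), objs(x2, fx2)) for z, fz in P.items()):
            P = {z: fz for z, fz in P.items()
                 if not weakly_dom(objs(x2, fx2), objs(z, fz))}
            P[x2] = fx2
    return max((x for x in P if sum(x) <= k), key=lambda x: P[x])
```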
Figure 1: Two examples of the maximum coverage problem ((a) is from [13]).

Theorem 3. For subset selection under multiplicative noise, POSS using E[T] ≤ 2ek²n finds a subset X with |X| ≤ k and
    f(X) ≥ [ ((1−ε)/(1+ε))·(γ_min/k) / (1 − ((1−ε)/(1+ε))·(1 − γ_min/k)) ] · (1 − ((1−ε)/(1+ε))^k · (1 − γ_min/k)^k) · OPT.

Theorem 4. For subset selection under additive noise, POSS using E[T] ≤ 2ek²n finds a subset X with |X| ≤ k and
    f(X) ≥ (1 − (1 − γ_min/k)^k) · (OPT − (2k/γ_min)·ε) − (1 − γ_min/k)^k · ε.

Comparing Theorem 1 with Theorem 3, we find that the approximation bounds of POSS and the greedy algorithm under multiplicative noise are nearly the same. In particular, for the submodular case (where γ_{X,k} = 1 for any X and k), they are exactly the same. Under additive noise, their approximation bounds (Theorems 2 and 4) are also nearly the same, since the additional term (1 − γ_min/k)^k · ε in Theorem 4 is negligible compared with the other terms.

To further investigate the performance of the greedy algorithm and POSS, we compare them on two maximum coverage examples with noise. Maximum coverage, as in Definition 4, is a classic subset selection problem: given a family of sets that cover a universe of elements, the goal is to select at most k sets whose union is maximal. On Examples 1 and 2, the greedy algorithm easily finds an optimal solution without noise, but under noise it can only guarantee roughly a 2/k- and a 3/4-approximation, respectively. We prove in Propositions 1 and 2 that POSS can avoid the misleading search direction caused by the noise, through multi-bit search and backward search respectively, and find an optimal solution; note that the greedy algorithm can only perform single-bit forward search. Due to space limitations, the proofs are provided in the supplementary material. A small code sketch of the coverage objective and Example 1 follows the propositions below.

Definition 4 (Maximum Coverage). Given a ground set U, a collection V = {S1, S2, ..., Sn} of subsets of U, and a budget k, find a subset of V (represented by x ∈ {0,1}^n) such that
    arg max_{x∈{0,1}^n} f(x) = |∪_{i: x_i=1} S_i|  s.t.  |x| ≤ k.

Example 1. [13] As shown in Figure 1(a), V contains n = 2l subsets {S1, ..., S_{2l}}, where each S_i with i ≤ l covers the same two elements, and each S_i with i > l covers one unique element. The objective evaluation is exact except that for all ∅ ≠ X ⊆ {S1, ..., S_l} and i > l: F(X) = 2 + δ and F(X ∪ {S_i}) = 2, where 0 < δ < 1. The budget satisfies 2 < k ≤ l.

Proposition 1. For Example 1, POSS using E[T] = O(kn log n) finds an optimal solution, while the greedy algorithm cannot.

Example 2. As shown in Figure 1(b), V contains n = 4l subsets {S1, ..., S_{4l}}, where for all i ≤ 4l−3: |S_i| = 1, |S_{4l−2}| = 2l−1, and |S_{4l−1}| = |S_{4l}| = 2l−2. The objective evaluation is exact except that F({S_{4l}}) = 2l. The budget is k = 2.

Proposition 2. For Example 2, POSS using E[T] = O(n) finds the optimal solution {S_{4l−2}, S_{4l−1}}, while the greedy algorithm cannot.
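To make Definition 4 and Example 1 concrete, here is a hypothetical Python encoding; the element names and the exact representation of the figure's instance are our own assumptions, reconstructed from the text.

```python
def coverage(sets_list, x):
    """Maximum coverage objective of Definition 4: size of the union of the
    selected sets (x is a 0/1 tuple over the collection)."""
    return len(set().union(*(s for s, b in zip(sets_list, x) if b)))

def example1_instance(l, delta):
    """Example 1 (our own encoding): S_1..S_l all cover {u1, u2};
    S_{l+1}..S_{2l} each cover one unique element."""
    sets_list = [{'u1', 'u2'}] * l + [{'w%d' % i} for i in range(l)]
    def F(x):
        left, right_cnt = any(x[:l]), sum(x[l:])
        if left and right_cnt == 0:
            return 2 + delta            # noisy: pure-left subsets look better
        if left and right_cnt == 1:
            return 2                    # noisy: adding one right set looks worse
        return coverage(sets_list, x)   # all other evaluations are exact
    return sets_list, F
```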
5 The PONSS Algorithm

POSS compares two solutions based on the domination relation in Definition 3. This may not be robust to noise, because a worse solution can appear to have a better F value and then survive and replace the truly better solution. Inspired by the noise-handling strategy of threshold selection [25], we modify POSS by replacing domination with θ-domination, where x is better than y only if F(x) is larger than F(y) by at least a threshold. Under θ-domination, solutions with close F values are kept in P, rather than keeping only the one with the best F value; thus the risk of removing a good solution is reduced. The modified algorithm, called PONSS (Pareto Optimization for Noisy Subset Selection), is presented in Algorithm 3.

Algorithm 3 PONSS Algorithm
Input: all items V = {v1, ..., vn}, a noisy objective function F, and a budget k
Parameter: T, θ and B
Output: a subset of V with at most k items
Process:
1: Let x = {0}^n, P = {x}, and let t = 0.
2: while t < T do
3:   Select x from P uniformly at random.
4:   Generate x′ by flipping each bit of x with probability 1/n.
5:   if ∄ z ∈ P such that z ≻_θ x′ then
6:     P = (P \ {z ∈ P | x′ ⪰_θ z}) ∪ {x′}.
7:     Q = {z ∈ P | |z| = |x′|}.
8:     if |Q| = B + 1 then
9:       P = P \ Q and let j = 0.
10:      while j < B do
11:        Select two solutions z1, z2 from Q uniformly at random without replacement.
12:        Evaluate F(z1), F(z2); let ẑ = arg max_{z∈{z1,z2}} F(z) (breaking ties randomly).
13:        P = P ∪ {ẑ}, Q = Q \ {ẑ}, and j = j + 1.
14:      end while
15:    end if
16:  end if
17:  t = t + 1.
18: end while
19: return arg max_{x∈P, |x|≤k} F(x)

However, using θ-domination may also make the size of P very large, and thus reduce efficiency. We therefore introduce a parameter B to limit the number of solutions in P for each possible subset size: if the number of solutions with the same size in P exceeds B, one of them is deleted. As shown in lines 7-15, the better of two solutions randomly selected from Q is kept; this process is repeated B times, and the solution remaining in Q is deleted.

For the analysis of PONSS, we consider random noise, i.e., F(x) is a random variable, and assume that the probability of F(x) > F(y) is at least 0.5 + δ whenever f(x) > f(y), i.e.,
    Pr(F(x) > F(y)) ≥ 0.5 + δ if f(x) > f(y),    (5)
where δ ∈ [0, 0.5). This assumption is satisfied in many noisy settings, e.g., when the noise distribution is i.i.d. for each x (as explained in the supplementary material). Note that when comparing two solutions selected from Q in line 12 of PONSS, we re-evaluate their noisy objective values independently, i.e., each evaluation is a new independent random draw from the noise distribution.

For the multiplicative noise model, we use the multiplicative θ-domination relation presented in Definition 5. That is, x ⪰_θ y if F(x) ≥ ((1+θ)/(1−θ))·F(y) and |x| ≤ |y|. The approximation ratio of PONSS under the assumption of Eq. (5) is shown in Theorem 5; it is better than that of POSS under general multiplicative noise (Theorem 3), because
    (1 − ((1−ε)/(1+ε))^k (1 − γ_min/k)^k) / (1 − ((1−ε)/(1+ε))·(1 − γ_min/k))
      = Σ_{i=0}^{k−1} (((1−ε)/(1+ε))·(1 − γ_min/k))^i
      ≤ Σ_{i=0}^{k−1} (1 − γ_min/k)^i
      = (k/γ_min)·(1 − (1 − γ_min/k)^k).
In particular, for the submodular case where γ_min = 1, PONSS under the assumption of Eq. (5) achieves a constant approximation ratio even when ε is a constant, whereas the greedy algorithm and POSS under general multiplicative noise only guarantee a Θ(1/k) approximation ratio. Note that when δ is a constant, the approximation guarantee of PONSS holds with constant probability for a polynomially large B, and thus the expected number of iterations of PONSS is polynomial.

Definition 5 (Multiplicative θ-Domination). For two solutions x and y,
• x weakly dominates y (denoted x ⪰_θ y) if f1(x) ≥ ((1+θ)/(1−θ))·f1(y) ∧ f2(x) ≥ f2(y);
• x dominates y (denoted x ≻_θ y) if x ⪰_θ y and f1(x) > ((1+θ)/(1−θ))·f1(y) ∨ f2(x) > f2(y).

Lemma 1. [21] For any X ⊆ V, there exists one item v̂ ∈ V \ X such that
    f(X ∪ {v̂}) − f(X) ≥ (γ_{X,k}/k)·(OPT − f(X)).

Theorem 5. For subset selection under multiplicative noise with the assumption of Eq. (5), with probability at least (1/2)·(1 − (12nk² log 2k)/B^{2δ}), PONSS with θ ≥ ε and T = 2eBnk² log 2k finds a subset X with |X| ≤ k and
    f(X) ≥ ((1−ε)/(1+ε))·(1 − (1 − γ_min/k)^k) · OPT.
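Below is a minimal Python sketch of Algorithm 3. The early rejection of |x′| ≥ 2k and the choice of which fresh evaluation to cache after line 12 are our own simplifications; the θ-domination test follows Definition 5.

```python
import random

def ponss(n, F, k, T, theta, B):
    """Algorithm 3 (sketch): POSS with multiplicative θ-domination and a
    budget B on the number of archived solutions per subset size."""
    factor = (1 + theta) / (1 - theta)

    def weakly_dom(z, fz, y, fy):               # z ⪰_θ y (both with |·| < 2k here)
        return fz >= factor * fy and sum(z) <= sum(y)

    zero = tuple([0] * n)
    P = {zero: F(zero)}
    for _ in range(T):
        x = random.choice(list(P))
        x2 = tuple(b ^ (random.random() < 1.0 / n) for b in x)
        if sum(x2) >= 2 * k:                    # f1 = -inf: never enters the archive
            continue
        fx2 = F(x2)
        if any(weakly_dom(z, fz, x2, fx2) and (fz > factor * fx2 or sum(z) < sum(x2))
               for z, fz in P.items()):
            continue                            # x2 is θ-dominated (line 5)
        P = {z: fz for z, fz in P.items() if not weakly_dom(x2, fx2, z, fz)}
        P[x2] = fx2
        Q = [z for z in P if sum(z) == sum(x2)]
        if len(Q) == B + 1:                     # lines 8-15: keep B pairwise winners
            for z in Q:
                del P[z]
            for _ in range(B):
                z1, z2 = random.sample(Q, 2)
                f1, f2v = F(z1), F(z2)          # fresh, independent re-evaluations
                zhat, fhat = (z1, f1) if f1 >= f2v else (z2, f2v)
                P[zhat] = fhat
                Q.remove(zhat)
    return max((x for x in P if sum(x) <= k), key=lambda x: P[x])
```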
Proof. Let J_max denote the maximum value of j ∈ [0, k] such that P contains a solution x with |x| ≤ j and f(x) ≥ (1 − (1 − γ_min/k)^j) · OPT. Note that J_max = k implies that there exists a solution x* in P with |x*| ≤ k and f(x*) ≥ (1 − (1 − γ_min/k)^k) · OPT. Since the finally selected solution x from P has the largest F value (line 19 of Algorithm 3), we have
    f(x) ≥ F(x)/(1+ε) ≥ F(x*)/(1+ε) ≥ ((1−ε)/(1+ε))·f(x*).
That is, the desired approximation bound is reached. Thus, we only need to analyze the probability of J_max = k after running T = 2eBnk² log 2k iterations.

Assume that in the run of PONSS, a solution with the best f value in Q is always kept after each execution of lines 8-15. We show that J_max then reaches k with probability at least 0.5 within 2eBnk² log 2k iterations. J_max is initially 0, since the process starts from {0}^n; assume that currently J_max = i < k, and let x be a corresponding solution, i.e., |x| ≤ i and
    f(x) ≥ (1 − (1 − γ_min/k)^i) · OPT.    (6)

First, J_max does not decrease. If x is not deleted, this obviously holds. For x to be deleted, there are two possible cases. If x is deleted in line 6, the newly included solution x′ satisfies x′ ⪰_θ x, which implies |x′| ≤ |x| ≤ i and
    f(x′) ≥ F(x′)/(1+ε) ≥ (1/(1+ε))·((1+θ)/(1−θ))·F(x) ≥ (1/(1+ε))·((1+ε)/(1−ε))·F(x) ≥ f(x),
where the third inequality is by θ ≥ ε. If x is deleted in lines 8-15, there must exist a solution z* in P with |z*| = |x| and f(z*) ≥ f(x), because we assumed that a solution with the best f value in Q is kept.

Second, J_max can increase in each iteration with some probability. By Lemma 1, a new solution x′ can be produced by flipping one specific 0 bit of x (i.e., adding a specific item) such that |x′| = |x| + 1 ≤ i + 1 and
    f(x′) ≥ (1 − γ_{x,k}/k)·f(x) + (γ_{x,k}/k)·OPT ≥ (1 − (1 − γ_min/k)^{i+1}) · OPT,
where the second inequality is by Eq. (6) and γ_{x,k} ≥ γ_min (since |x| < k and γ_{x,k} decreases with x). Note that x′ will be added into P; otherwise, there must exist a solution in P dominating x′ (line 5 of Algorithm 3), which would imply that J_max is already larger than i, contradicting the assumption J_max = i. After including x′, J_max ≥ i + 1. Since P contains at most B solutions for each possible size in {0, ..., 2k−1}, we have |P| ≤ 2Bk. Thus, J_max increases by at least 1 in one iteration with probability at least (1/|P|)·(1/n)·(1 − 1/n)^{n−1} ≥ 1/(2eBnk), where 1/|P| is the probability of selecting x in line 3 (uniform selection) and (1/n)·(1 − 1/n)^{n−1} is the probability of flipping only a specific bit of x in line 4.

We divide the 2eBnk² log 2k iterations into k phases of equal length. For reaching J_max = k, it suffices that J_max increases at least once in each phase. Thus, we have
    Pr(J_max = k) ≥ (1 − (1 − 1/(2eBnk))^{2eBnk log 2k})^k ≥ (1 − 1/(2k))^k ≥ 1/2.

It remains to examine the assumption that in the run of 2eBnk² log 2k iterations, whenever lines 8-15 are executed, a solution with the best f value in Q is kept. Let R = {z ∈ arg max_{z∈Q} f(z)}. If |R| > 1, the assumption trivially holds, since only one solution from Q is deleted. If |R| = 1, deleting the solution z* with the best f value implies that z* is never included into P during the B repetitions of lines 11-13 of Algorithm 3. In the j-th repetition (where 0 ≤ j ≤ B−1), |Q| = B + 1 − j. Conditioned on z* not being included into P in repetitions 0 through j−1, the probability that z* is selected in line 11 is (B−j)/C(B+1−j, 2) = 2/(B+1−j).
From Eq. (5), F(z*) wins the comparison in line 12 with probability at least 0.5 + δ. Thus, the probability of not including z* into P in the j-th repetition is at most 1 − (2/(B+1−j))·(0.5+δ). The probability of deleting the solution with the best f value in Q during one execution of lines 8-15 is therefore at most Π_{j=0}^{B−1} (1 − (1+2δ)/(B+1−j)). Taking the logarithm, we get
    Σ_{j=0}^{B−1} log(1 − (1+2δ)/(B+1−j)) = Σ_{j=1}^{B} log((j−2δ)/(j+1)) ≤ ∫_1^{B+1} log((j−2δ)/(j+1)) dj
      = log((B+1−2δ)^{B+1−2δ} / (B+2)^{B+2}) − log((1−2δ)^{1−2δ} / 2²),
where the inequality holds since log((j−2δ)/(j+1)) is increasing with j, and the last equality holds since the derivative of log((j−2δ)^{j−2δ} / (j+1)^{j+1}) with respect to j is log((j−2δ)/(j+1)). Thus, we have
    Π_{j=0}^{B−1} (1 − (1+2δ)/(B+1−j)) ≤ ((B+1−2δ)/(B+2))^{B+2} · 4/((B+1−2δ)^{1+2δ}·(1−2δ)^{1−2δ}) ≤ 4/(e^{1−1/e}·B^{1+2δ}),
where the last inequality is by 0 < 1−2δ ≤ 1 and (1−2δ)^{1−2δ} ≥ e^{−1/e}. By the union bound, our assumption holds with probability at least 1 − (12nk² log 2k)/B^{2δ}. Thus, the theorem holds.

For the additive noise model, we use the additive θ-domination relation presented in Definition 6. That is, x ⪰_θ y if F(x) ≥ F(y) + 2θ and |x| ≤ |y|. By applying Eq. (3) and additive θ-domination to the proof of Theorem 5, we can prove the approximation ratio of PONSS under additive noise with the assumption of Eq. (5), as shown in Theorem 6. Compared with the approximation ratio of POSS under general additive noise (Theorem 4), PONSS achieves a better one; this is easily verified since (1 − (1 − γ_min/k)^k)·(2k/γ_min) ≥ 2, where the inequality follows from γ_min ∈ [0, 1].

Definition 6 (Additive θ-Domination). For two solutions x and y,
• x weakly dominates y (denoted x ⪰_θ y) if f1(x) ≥ f1(y) + 2θ ∧ f2(x) ≥ f2(y);
• x dominates y (denoted x ≻_θ y) if x ⪰_θ y and f1(x) > f1(y) + 2θ ∨ f2(x) > f2(y).

Theorem 6. For subset selection under additive noise with the assumption of Eq. (5), with probability at least (1/2)·(1 − (12nk² log 2k)/B^{2δ}), PONSS with θ ≥ ε and T = 2eBnk² log 2k finds a subset X with |X| ≤ k and
    f(X) ≥ (1 − (1 − γ_min/k)^k) · OPT − 2ε.

6 Empirical Study

We conducted experiments on two typical subset selection problems: influence maximization and sparse regression, where the former has a submodular objective function and the latter a non-submodular one. The number T of iterations in POSS is set to 2ek²n, as suggested by Theorem 3. For PONSS, B is set to k, and θ is set to 1, which is clearly not smaller than ε. Note that POSS needs one objective evaluation for the newly generated solution x′ in each iteration, while PONSS needs 1 or 1+2B evaluations, depending on whether the condition in line 8 of Algorithm 3 is satisfied. For fairness of comparison, PONSS is terminated once its total number of evaluations reaches that of POSS, i.e., 2ek²n. In the run of each algorithm, only a noisy objective value F can be observed; for the final output solution, we report an accurately estimated f value (obtained by an expensive evaluation) to assess the algorithms. As POSS and PONSS are randomized algorithms, and the behavior of the greedy algorithm is also randomized under random noise, we repeat each run 10 times independently and report the average estimated f values.

Influence Maximization. The task is to identify a set of influential users in a social network. Let a directed graph G = (V, E) represent a social network, where each node is a user and each edge (u, v) ∈ E has a probability p_{u,v} representing the strength of influence from user u to v. Given a budget k, influence maximization is to find a subset X of V with |X| ≤ k such that the expected number of nodes activated by propagating from X (called the influence spread) is maximized. The fundamental propagation model Independent Cascade [11] is used. Note that the set of active nodes in the diffusion process is a random variable, and the expectation of its size is monotone and submodular [16].
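A minimal Monte Carlo sketch of this noisy objective, under our own graph encoding (dict of dicts of probabilities), might look as follows; the paper uses 10 simulations per evaluation during optimization.

```python
import random

def simulate_ic(graph, seeds):
    """One random diffusion under the Independent Cascade model: each newly
    activated node u tries once to activate each neighbor v with prob. p[u][v].
    graph: dict u -> {v: p_uv}. Returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p in graph.get(u, {}).items():
            if v not in active and random.random() < p:
                active.add(v)
                frontier.append(v)
    return len(active)

def influence_spread(graph, seeds, n_sim=10):
    """Noisy objective F: Monte Carlo estimate of the expected spread."""
    return sum(simulate_ic(graph, seeds) for _ in range(n_sim)) / n_sim
```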
We use two real-world data sets: ego-Facebook and Weibo. ego-Facebook is downloaded from http://snap.stanford.edu/data/index.html, and Weibo is crawled from the Chinese microblogging site Weibo.com (similar to Twitter). On each network, the propagation probability of an edge from node u to v is estimated by weight(u,v)/indegree(v), as widely used in [3, 12]. We test budgets k from 5 to 10. For estimating the influence spread objective during optimization, we simulate the diffusion process 10 times independently and use the average as the estimate; for the final output solutions of the algorithms, we average over 10,000 simulations for an accurate estimate.

Figure 2: Influence maximization (influence spread: the larger the better) on (a) ego-Facebook (4,039 nodes, 88,234 edges) and (b) Weibo (10,000 nodes, 162,371 edges). The right subfigure for each data set plots influence spread vs. running time (in units of kn) of PONSS and POSS for k = 7.

From the left subfigure for each data set in Figure 2, we can see that POSS is better than the greedy algorithm, and PONSS performs the best. Taking the greedy algorithm as the baseline, the right subfigures plot the influence spread achieved by PONSS and POSS over running time for k = 7; note that the x-axis is in units of kn, the running-time order of the greedy algorithm. PONSS quickly reaches a better performance, which suggests that PONSS can be efficient in practice.

Sparse Regression. The task is to find a sparse approximate solution to a linear regression problem. Given observation variables V = {v1, ..., vn}, a predictor variable z and a budget k, sparse regression is to find a set of at most k variables maximizing the squared multiple correlation R²_{z,X} = 1 − MSE_{z,X} [8, 15], where MSE_{z,X} = min_{α∈ℝ^{|X|}} E[(z − Σ_{i∈X} α_i v_i)²] denotes the mean squared error. We assume w.l.o.g. that all random variables are normalized to have expectation 0 and variance 1. The objective R²_{z,X} is monotone increasing, but not necessarily submodular [6]. We use two data sets from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/: protein (24,387 instances, 357 features) and YearPredictionMSD (515,345 instances, 90 features).
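A sketch of the R² objective and its noisy subsampled evaluation, assuming standardized data and using NumPy least squares (our own implementation choices):

```python
import numpy as np

def r_squared(Z, X_cols, z):
    """Squared multiple correlation R^2_{z,X} = 1 - MSE, via least squares.
    Z: data matrix (rows = instances, columns = variables, standardized);
    X_cols: indices of the selected variables; z: standardized predictor."""
    if not X_cols:
        return 0.0
    A = Z[:, list(X_cols)]
    alpha, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = z - A @ alpha
    return 1.0 - (residual @ residual) / (z @ z)

def noisy_r_squared(Z, X_cols, z, sample=1000, rng=np.random.default_rng()):
    """Noisy evaluation as in the experiments: estimate R^2 on a random
    subsample of instances (the paper uses 1000 during optimization)."""
    idx = rng.choice(len(z), size=min(sample, len(z)), replace=False)
    return r_squared(Z[idx], X_cols, z[idx])
```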
The budget k is set to {10, 12, ..., 20}. For estimating R² during the optimization process, we use a random sample of 1,000 instances; for the final output solutions, we use the whole data set for an accurate estimate.

Figure 3: Sparse regression (R²: the larger the better) on (a) protein and (b) YearPredictionMSD. The right subfigure for each data set plots R² vs. running time (in units of kn) of PONSS and POSS for k = 14.

The results are plotted in Figure 3. The performance of the three algorithms is similar to that observed for influence maximization, except for some losses of POSS relative to the greedy algorithm (e.g., on YearPredictionMSD with k = 20). For both tasks, we also test PONSS with θ ∈ {0.1, 0.2, ..., 1}. The results are provided in the supplementary material due to space limitations; they show that PONSS is always better than POSS and the greedy algorithm, which implies that the performance of PONSS is not sensitive to the value of θ.

7 Conclusion

In this paper, we study the subset selection problem with monotone objective functions under multiplicative and additive noises. We first show that the greedy algorithm and POSS, two powerful algorithms for noise-free subset selection, achieve nearly the same approximation guarantee under noise. We then propose a new algorithm, PONSS, which can achieve a better approximation ratio under some assumptions such as an i.i.d. noise distribution. The experimental results on influence maximization and sparse regression exhibit the superior performance of PONSS.

Acknowledgements

The authors would like to thank the reviewers for their helpful comments and suggestions. C. Qian was supported by NSFC (61603367) and YESS (2016QNRC001). Y. Yu was supported by JiangsuSF (BK20160066, BK20170013). K. Tang was supported by NSFC (61672478) and a Royal Society Newton Advanced Fellowship (NA150123). Z.-H. Zhou was supported by NSFC (61333014) and the Collaborative Innovation Center of Novel Software Technology and Industrialization.

References
[1] A. A. Bian, J. M. Buhmann, A. Krause, and S. Tschiatschek. Guarantees for greedy maximization of non-submodular functions with applications. In ICML, pages 498-507, 2017.
[2] W. Chen, C. Wang, and Y. Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In KDD, pages 1029-1038, 2010.
[3] W. Chen, Y. Wang, and S. Yang. Efficient influence maximization in social networks. In KDD, pages 199-208, 2009.
[4] Y. Chen, H. Hassani, A. Karbasi, and A. Krause. Sequential information maximization: When is greedy near-optimal? In COLT, pages 338-363, 2015.
[5] A. Das and D. Kempe. Algorithms for subset selection in linear regression. In STOC, pages 45-54, 2008.
[6] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. In ICML, pages 1057-1064, 2011.
[7] G. Davis, S. Mallat, and M. Avellaneda. Adaptive greedy approximations. Constructive Approximation, 13(1):57-98, 1997.
[8] G. Diekhoff. Statistics for the Social and Behavioral Sciences: Univariate, Bivariate, Multivariate. William C Brown Pub, 1992.
[9] E. R. Elenberg, R. Khanna, A. G. Dimakis, and S. Negahban. Restricted strong convexity implies weak submodularity. arXiv:1612.00804, 2016.
[10] U. Feige. A threshold of ln n for approximating set cover. JACM, 45(4):634-652, 1998.
[11] J. Goldenberg, B. Libai, and E. Muller. Talk of the network: A complex systems look at the underlying process of word-of-mouth. Marketing Letters, 12(3):211-223, 2001.
[12] A. Goyal, W. Lu, and L. Lakshmanan. Simpath: An efficient algorithm for influence maximization under the linear threshold model. In ICDM, pages 211-220, 2011.
[13] A. Hassidim and Y. Singer. Submodular optimization under noise. In COLT, pages 1069-1122, 2017.
[14] T. Horel and Y. Singer. Maximization of approximately submodular functions. In NIPS, pages 3045-3053, 2016.
[15] R. A. Johnson and D. W. Wichern. Applied Multivariate Statistical Analysis. Pearson, 6th edition, 2007.
[16] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In KDD, pages 137-146, 2003.
[17] A. Miller. Subset Selection in Regression. Chapman and Hall/CRC, 2nd edition, 2002.
[18] G. L. Nemhauser and L. A. Wolsey. Best algorithms for approximating the maximum of a submodular set function. Mathematics of Operations Research, 3(3):177-188, 1978.
[19] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions - I. Mathematical Programming, 14(1):265-294, 1978.
[20] C. Qian, J.-C. Shi, Y. Yu, and K. Tang. On subset selection with general cost constraints. In IJCAI, pages 2613-2619, 2017.
[21] C. Qian, J.-C. Shi, Y. Yu, K. Tang, and Z.-H. Zhou. Parallel Pareto optimization for subset selection. In IJCAI, pages 1939-1945, 2016.
[22] C. Qian, J.-C. Shi, Y. Yu, K. Tang, and Z.-H. Zhou. Optimizing ratio of monotone set functions. In IJCAI, pages 2606-2612, 2017.
[23] C. Qian, Y. Yu, and Z.-H. Zhou. Pareto ensemble pruning. In AAAI, pages 2935-2941, 2015.
[24] C. Qian, Y. Yu, and Z.-H. Zhou. Subset selection by Pareto optimization. In NIPS, pages 1765-1773, 2015.
[25] C. Qian, Y. Yu, and Z.-H. Zhou. Analyzing evolutionary optimization in noisy environments. Evolutionary Computation, 2017.
[26] A. Singla, S. Tschiatschek, and A. Krause. Noisy submodular maximization via adaptive sampling with applications to crowdsourced image collection summarization. In AAAI, pages 2037-2043, 2016.
Collecting Telemetry Data Privately
Bolin Ding, Janardhan Kulkarni, Sergey Yekhanin
Microsoft Research
{bolind, jakul, yekhanin}@microsoft.com

Abstract

The collection and analysis of telemetry data from users' devices is routinely performed by many software companies. Telemetry collection leads to improved user experience but poses significant risks to users' privacy. Locally differentially private (LDP) algorithms have recently emerged as the main tool that allows data collectors to estimate various population statistics while preserving privacy. The guarantees provided by such algorithms are typically very strong for a single round of telemetry collection, but degrade rapidly when telemetry is collected regularly. In particular, existing LDP algorithms are not suitable for repeated collection of counter data such as daily app usage statistics. In this paper, we develop new LDP mechanisms geared towards repeated collection of counter data, with formal privacy guarantees even after being executed for an arbitrarily long period of time. For two basic analytical tasks, mean estimation and histogram estimation, our LDP mechanisms for repeated data collection provide estimates with comparable or even the same accuracy as existing single-round LDP collection mechanisms. We conduct empirical evaluation on real-world counter datasets to verify our theoretical results. Our mechanisms have been deployed by Microsoft to collect telemetry across millions of devices.

1 Introduction

Collecting telemetry data to make more informed decisions is commonplace. In order to meet users' privacy expectations, and in view of tightening privacy regulations (e.g., the European GDPR law), the ability to collect telemetry data privately is paramount. Counter data, e.g., daily app or system usage statistics reported in seconds, is a common form of telemetry. In this paper we are interested in algorithms that preserve users' privacy in the face of continuous collection of counter data, are accurate, and scale to populations of millions of users.

Recently, differential privacy [10] (DP) has emerged as the de facto standard for privacy guarantees. In the context of telemetry collection one typically considers algorithms that exhibit differential privacy in the local model [12, 14, 7, 5, 3, 18], also called the randomized response model [19], γ-amplification [13], or FRAPP [1]. These are randomized algorithms invoked on each user's device to turn the user's private value into a response that is communicated to the data collector; they have the property that the likelihood of any specific output of the algorithm varies little with the input, thus providing users with plausible deniability.

Guarantees offered by locally differentially private algorithms, although very strong in a single round of telemetry collection, quickly degrade when data is collected over time. This is a very challenging problem that limits the applicability of DP in many contexts. In telemetry applications, privacy guarantees need to hold in the face of continuous data collection. An influential paper [12] proposed a framework based on memoization to tackle this issue. Their techniques allow one to extend single-round DP algorithms to continual data collection and protect users whose values stay constant or change very rarely. The key limitation of the work of [12] is that their approach cannot protect users' private numeric values with very small but frequent changes, making it inappropriate for collecting telemetry counters.
In this paper, we address this limitation.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

We design mechanisms with formal privacy guarantees in the face of continuous collection of counter data. These guarantees are particularly strong when the user's behavior remains approximately the same, varies slowly, or varies around a small number of values over the course of data collection.

Our results. Our contributions are threefold.
1) We give simple 1-bit response mechanisms in the local model of DP for single-round collection of counter data for mean and histogram estimation. Our mechanisms are inspired by those in [19, 8, 7, 4], but allow for considerably simpler descriptions and implementations. Our experiments also demonstrate their performance in concrete settings.
2) Our main technical contribution is a rounding technique called α-point rounding, which borrows ideas from the approximation algorithms literature [15, 2] and allows memoization to be applied in the context of private collection of counters. Our memoization schema avoids substantial losses in accuracy or privacy and unaffordable storage overhead. We give a rigorous definition of the privacy guarantees provided by our algorithms when data is collected continuously for an arbitrarily long period of time. We also present empirical findings related to our privacy guarantees.
3) Finally, our mechanisms have been deployed by Microsoft across millions of devices, starting with Windows Insiders in the Windows 10 Fall Creators Update, to protect users' privacy while collecting application usage statistics.

1.1 Preliminaries and problem formulation

In our setup, there are n users, and each user at time t has a private (integer or real-valued) counter with value x_i(t) ∈ [0, m]. A data collector wants to collect these counter values {x_i(t)}_{i∈[n]} at each time stamp t for statistical analysis. For example, in telemetry analysis, understanding the mean and the distribution of counter values (e.g., app usage) is very important to IT companies.

Local model of differential privacy (LDP). Users do not need to trust the data collector and require formal privacy guarantees before they are willing to communicate their values to the data collector. Hence, the more widely studied DP model [10, 11], which first collects all users' data and then injects noise in the analysis step, is not applicable in our setup. In this work, we adopt the local model of differential privacy, where each user randomizes private data using a randomized algorithm (mechanism) A locally before sending it to the data collector.

Definition 1 ([13, 8, 4]). A randomized algorithm A: V → Z is ε-locally differentially private (ε-LDP) if for any pair of values v, v′ ∈ V and any subset of outputs S ⊆ Z, we have that
    Pr[A(v) ∈ S] ≤ e^ε · Pr[A(v′) ∈ S].

LDP formalizes a type of plausible deniability: no matter what output is released, it is approximately equally likely to have come from one point v ∈ V as from any other. For alternative interpretations of differential privacy within the framework of hypothesis testing, we refer the reader to [20, 7].

Statistical estimation problems. We focus on two estimation problems in this paper.

Mean estimation: For each time stamp t, the data collector wants to obtain an estimate σ̂(t) of the mean of x⃗_t = ⟨x_i(t)⟩_{i∈[n]}, i.e., σ(x⃗_t) = (1/n)·Σ_{i∈[n]} x_i(t). We do worst-case analysis and aim to bound the absolute error |σ̂(t) − σ(x⃗_t)| for any input x⃗_t ∈ [0, m]^n.
In the rest of the paper, we abuse notation and write σ(t) for σ(x⃗_t) with a fixed input x⃗_t.

Histogram estimation: Suppose the domain of counter values is partitioned into k buckets (e.g., of equal width), so that a counter value x_i(t) ∈ [0, m] is mapped to a bucket number v_i(t) ∈ [k]. For each time stamp t, the data collector wants to estimate the frequency of each v ∈ [k], h_t(v) = (1/n)·|{i : v_i(t) = v}|, as ĥ_t(v). The error of a histogram estimate is measured by max_{v∈[k]} |ĥ_t(v) − h_t(v)|. Again, we do worst-case analysis of our algorithm over all possible inputs v⃗_t = ⟨v_i(t)⟩_{i∈[n]} ∈ [k]^n.

1.2 Repeated collection and overview of privacy framework

Privacy leakage in repeated data collection. Although LDP is a very strict notion of privacy, its effectiveness decreases if data is collected repeatedly. If we collect the counter values of a user i for T time stamps by executing an ε-LDP mechanism A independently at each time stamp, the sequence x_i(1) x_i(2) ... x_i(T) can only be guaranteed indistinguishable from another sequence of counter values x′_i(1) x′_i(2) ... x′_i(T) up to a factor of e^{T·ε}, which is too large to be reasonable as T increases. Hence, in applications such as telemetry, where data is collected continuously, the privacy guarantees provided by an LDP mechanism for a single round of data collection are not sufficient. We formalize our privacy guarantee for repeated data collection later in Section 3; intuitively, we ensure that every user blends with a large set of other users who have very different behaviors.

Our Privacy Framework and Guarantees. Our framework for repeated private collection of counter data follows a similar outline as the framework used in [12]. Our framework for mean and histogram estimation has four main components:
1) An important building block for our overall solution are 1-bit mechanisms that provide local ε-LDP guarantees and good accuracy for a single round of data collection (Section 2).
2) An α-point rounding scheme that randomly discretizes users' private values prior to applying memoization (to conceal small changes), while keeping the expectation of the discretized values intact (Section 3).
3) Memoization of the discretized values using the 1-bit mechanisms, to avoid privacy leakage from repeated data collection (Section 3). In particular, if the counter value of a user remains approximately consistent, then the user is guaranteed ε-differential privacy even after many rounds of data collection.
4) Finally, output perturbation ("instantaneous noise" in [12]) to protect against exposing the transition points due to large changes in the user's behavior, and against attacks based on auxiliary information (Section 4).

In Sections 2, 3 and 4, we formalize these guarantees, focusing predominantly on the mean estimation problem. All omitted proofs and additional experimental results are in the full version on arXiv [6].

2 Single-round LDP mechanisms for mean and histogram estimation

We first describe our 1-bit LDP mechanisms for mean and histogram estimation. Our mechanisms are inspired by the works of Duchi et al. [8, 7, 9] and Bassily and Smith [4]; however, they are tuned for more efficient communication (sending 1 bit per counter per round) and stronger protection in repeated data collection (introduced later in Section 3). To the best of our knowledge, the exact form of the mechanisms presented in this section was not known.
Our algorithms yield accuracy gains in concrete settings (see Section 5) and are easy to understand and implement.

2.1 1-Bit mechanism for mean estimation

Collection mechanism 1BitMean: When the collection of counter x_i(t) at time t is requested by the data collector, each user i sends one bit b_i(t), drawn independently from the distribution
    b_i(t) = 1 with probability 1/(e^ε + 1) + (x_i(t)/m)·((e^ε − 1)/(e^ε + 1)), and b_i(t) = 0 otherwise.    (1)

Mean estimation. The data collector obtains the bits {b_i(t)}_{i∈[n]} from the n users and estimates σ(t) as
    σ̂(t) = (m/n)·Σ_{i=1}^{n} (b_i(t)·(e^ε + 1) − 1)/(e^ε − 1).    (2)

The basic randomizer of [4] is equivalent to our 1-bit mechanism for the case when each user takes value either 0 or m. The above mechanism can also be seen as a simplification of the multidimensional mean-estimation mechanism given in [7]. For 1-dimensional mean estimation, Duchi et al. [7] show that the Laplace mechanism is asymptotically optimal for the minimax error. However, the communication cost per user in the Laplace mechanism is Ω(log m) bits, and our experiments show that it also leads to larger error than our 1-bit mechanism. We prove the following result for the above 1-bit mechanism.

Theorem 1. For single-round data collection, the mechanism 1BitMean in (1) preserves ε-LDP for each user. Upon receiving the n bits {b_i(t)}_{i∈[n]}, the data collector can estimate the mean of the counters of the n users by σ̂(t) in (2), and with probability at least 1 − δ,
    |σ̂(t) − σ(t)| ≤ m · ((e^ε + 1)/(e^ε − 1)) · √(log(2/δ)/(2n)).
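A minimal Python sketch of 1BitMean and the estimator of Eq. (2); the synthetic sanity check at the end is our own addition.

```python
import math, random

def one_bit_mean_response(x, m, eps):
    """1BitMean (Eq. (1)): user-side ε-LDP response, a single bit."""
    p = 1.0 / (math.exp(eps) + 1) + (x / m) * (math.exp(eps) - 1) / (math.exp(eps) + 1)
    return 1 if random.random() < p else 0

def estimate_mean(bits, m, eps):
    """Collector-side unbiased estimator of Eq. (2)."""
    n = len(bits)
    return (m / n) * sum((b * (math.exp(eps) + 1) - 1) / (math.exp(eps) - 1)
                         for b in bits)

# sanity check on synthetic counters in [0, m] (our own toy data)
m, eps = 100, 1.0
xs = [random.uniform(0, m) for _ in range(100000)]
bits = [one_bit_mean_response(x, m, eps) for x in xs]
print(sum(xs) / len(xs), estimate_mean(bits, m, eps))
```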
2.2 d-Bit mechanism for histogram estimation

We now consider the problem of estimating the histogram of counter values over a discretized domain with k buckets, with LDP guaranteed. This problem has an extensive literature in both computer science and statistics, dating back to the seminal work of Warner [19]; we refer the reader to the following excellent papers for more information: [16, 8, 4, 17]. Recently, Bassily and Smith [4] gave asymptotically tight results for the problem in the worst-case model, building on the works of [16]. On the other hand, Duchi et al. [8] introduced a mechanism adapting Warner's classical randomized response mechanism [19], which is shown to be optimal for the statistical minimax regret if one does not care about the cost of communication. Unfortunately, some ideas in Bassily and Smith [4], such as the Johnson-Lindenstrauss lemma, do not scale to population sizes of millions of users. Therefore, in order to have a smooth trade-off between accuracy and communication cost (as well as the ability to protect privacy in repeated data collection, introduced in Section 3), we introduce a modified version of Duchi et al.'s mechanism [8] based on subsampling of buckets.

Collection mechanism dBitFlip: Each user i randomly draws d bucket numbers without replacement from [k], denoted j1, j2, ..., jd. When the collection of the discretized bucket number v_i(t) ∈ [k] at time t is requested by the data collector, each user i sends the vector
    b_i(t) = [(j1, b_{i,j1}(t)), (j2, b_{i,j2}(t)), ..., (jd, b_{i,jd}(t))],
where each b_{i,jp}(t) is a random 0-1 bit with
    Pr[b_{i,jp}(t) = 1] = e^{ε/2}/(e^{ε/2} + 1) if v_i(t) = jp, and 1/(e^{ε/2} + 1) if v_i(t) ≠ jp, for p = 1, 2, ..., d.
Under the same public-coin model as in [4], each user i only needs to send the d bits b_{i,j1}(t), ..., b_{i,jd}(t) in b_i(t) to the data collector, since j1, j2, ..., jd can be generated using public coins.

Histogram estimation. The data collector estimates the histogram h_t as: for v ∈ [k],
    ĥ_t(v) = (k/(nd))·Σ_{i: b_{i,v}(t) is received} (b_{i,v}(t)·(e^{ε/2} + 1) − 1)/(e^{ε/2} − 1).    (3)

When d = k, dBitFlip is exactly the mechanism in Duchi et al. [8]. The privacy guarantee is straightforward. In terms of accuracy, the intuition is that for each bucket v ∈ [k], roughly nd/k users respond with a 0-1 bit b_{i,v}(t). We can prove the following result.

Theorem 2. For single-round data collection, the mechanism dBitFlip preserves ε-LDP for each user. Upon receiving the d bits {b_{i,jp}(t)}_{p∈[d]} from each user i, the data collector can estimate the histogram h_t as ĥ_t in (3), and with probability at least 1 − δ,
    max_{v∈[k]} |ĥ_t(v) − h_t(v)| ≤ √(5k/(nd)) · ((e^{ε/2} + 1)/(e^{ε/2} − 1)) · √(log(6k/δ)) = O(√(k·log(k/δ)/(ε²·nd))).
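A sketch of dBitFlip in Python; we sample the d buckets locally rather than via public coins, which is an implementation shortcut of our own.

```python
import math, random

def dbitflip_response(v, k, d, eps, buckets=None):
    """dBitFlip (sketch): user-side response. `buckets` are the d bucket ids
    sampled without replacement (public coins in the paper); v is the user's bucket."""
    if buckets is None:
        buckets = random.sample(range(k), d)
    p_hi = math.exp(eps / 2) / (math.exp(eps / 2) + 1)
    p_lo = 1.0 / (math.exp(eps / 2) + 1)
    return [(j, int(random.random() < (p_hi if v == j else p_lo)))
            for j in buckets]

def estimate_histogram(responses, n, k, d, eps):
    """Collector-side estimator of Eq. (3): for each bucket v, aggregate the
    debiased bits of the users whose sampled buckets include v."""
    c = math.exp(eps / 2)
    h = [0.0] * k
    for resp in responses:
        for j, b in resp:
            h[j] += (b * (c + 1) - 1) / (c - 1)
    return [k / (n * d) * s for s in h]
```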
3 Memoization for continual collection of counter data

One important concern regarding the use of ε-LDP algorithms (e.g., the one in Section 2.1) to collect counter data pertains to the privacy leakage that may occur if we collect a user's data repeatedly (say, daily) and the user's private value x_i does not change or changes little. Depending on the value of ε, after a number of rounds the data collector will have enough noisy reads to estimate x_i with high accuracy.

Memoization [12] is a simple rule that says: at the account setup phase, each user pre-computes and stores his responses to the data collector for all possible values of the private counter; at data collection, users do not use fresh randomness, but respond with the pre-computed response corresponding to their current counter value. Memoization (to a certain degree) takes care of situations when the private value x_i stays constant. Note that the use of memoization violates differential privacy in continual collection: if memoization is employed, the data collector can easily distinguish a user whose value keeps changing from a user whose value is constant, no matter how small ε is. However, the privacy leakage is limited: when the data collector observes that a user's response has changed, this only indicates that the user's value has changed, but not what it was and not what it is.

As observed in [12, Section 1.3], using the memoization technique in the context of collecting counter data is problematic for the following reason. Often, from day to day, private values x_i do not stay constant, but rather experience small changes (one can think of app usage statistics reported in seconds). Naively using memoization adds no additional protection for a user whose private value varies but stays approximately the same, as the data collector would observe many independent responses corresponding to it.

One naive way to fix this issue is discretization: pick a large integer (segment size) s that divides m; consider the partition of all integers into segments [ℓs, (ℓ+1)s]; and have each user report his value after rounding the true value x_i to the mid-point of the segment that x_i belongs to. This takes care of the leakage caused by small changes to x_i, as users' values would now tend to stay within a single segment and thus trigger the same memoized response; however, the accuracy loss may be extremely large. For instance, in a population where all x_i equal ℓs + 1 for some ℓ, after rounding every user would respond based on the value ℓs + s/2. In the following subsection we present a better (randomized) rounding technique, termed α-point rounding, that has been previously used in the approximation algorithms literature [15, 2] and rigorously addresses the issues discussed above. We first consider the mean estimation problem.

3.1 α-point rounding for mean estimation

The key idea of rounding is to discretize the domain where users' counters take their values. Discretization reduces the domain size, so users who behave consistently take fewer distinct values, which allows us to apply memoization and obtain a strong privacy guarantee. As demonstrated above, discretization may be particularly detrimental to accuracy when users' private values are correlated. We address this issue by making the discretization rule independent across different users: this ensures that when (say) all users have the same value, some users round it up and some round it down, yielding a smaller accuracy loss.

We now specify the algorithm, which extends the basic algorithm 1BitMean and employs both α-point rounding and memoization. We assume that counter values range in [0, m].
1. At the algorithm design phase, we specify an integer s (our discretization granularity). We assume that s divides m. We suggest setting s rather large compared to m, say s = m/20 or even s = m, depending on the particular application domain.
2. At the setup phase, each user i ∈ [n] independently at random picks a value α_i ∈ {0, ..., s−1} that is used to specify the rounding rule.
3. User i invokes the basic algorithm 1BitMean with range m to compute and memoize 1-bit responses to the data collector for all m/s + 1 values x_i in the arithmetic progression
    A = {ℓ·s : 0 ≤ ℓ ≤ m/s}.    (4)
4. Consider a user i with private value x_i who receives a data collection request. Let x_i ∈ [L, R), where L and R are neighboring elements of the arithmetic progression A. The user rounds the value to L if x_i + α_i < R; otherwise, the user rounds the value to R. Let y_i denote the value of the user after rounding. In each round, the user responds with the memoized bit for the value y_i. Note that the rounding is always uniquely defined.

Perhaps a bit surprisingly, using α-point rounding does not lead to any additional accuracy loss, independent of the choice of the discretization granularity s.

Theorem 3. Independent of the value of the discretization granularity s, at any round of data collection, each output bit b_i is still sampled according to the distribution given by formula (1). Therefore, the algorithm above provides the same accuracy guarantees as given in Theorem 1.

3.2 Privacy definition using permanent memoization

In what follows we detail the privacy guarantees provided by an algorithm that employs α-point rounding and memoization in conjunction with the ε-DP 1-bit mechanism of Section 2.1, against a data collector that receives a very long stream of a user's responses to data collection events.

Let U be a user and x(1), ..., x(T) be the sequence of U's private counter values. Given the user's private value α_i, each x(j), j ∈ [T], gets rounded to the corresponding value y(j) in the set A (defined by (4)) according to the rule given in Section 3.1.

Definition 2. Let B be the space of all sequences {z(j)}_{j∈[T]} ∈ A^T, considered up to an arbitrary permutation of the elements of A. We define the behavior pattern b(U) of the user U to be the element of B corresponding to {y(j)}_{j∈[T]}. We refer to the number of distinct elements y(j) in the sequence {y(j)}_{j∈[T]} as the width of b(U).

We now discuss our notion of behavior pattern, using counters that carry daily app usage statistics as an example.
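A sketch of steps 2-4 in Python, assuming integer-valued counters and reusing one_bit_mean_response from the 1BitMean sketch above; averaging over α_i shows that the rounded value has expectation x_i, which is the heart of Theorem 3.

```python
import random

class MemoizedCounterClient:
    """Sketch of α-point rounding + permanent memoization (Section 3.1)."""
    def __init__(self, m, s, eps):
        assert m % s == 0
        self.m, self.s = m, s
        self.alpha = random.randrange(s)                 # private α_i (step 2)
        # step 3: memoize one bit per value in A = {0, s, 2s, ..., m}
        self.memo = {l * s: one_bit_mean_response(l * s, m, eps)
                     for l in range(m // s + 1)}

    def respond(self, x):
        # step 4: x ∈ [L, L+s) rounds down to L iff x + α_i < L + s
        L = (x // self.s) * self.s
        y = L if x + self.alpha < L + self.s else min(L + self.s, self.m)
        return self.memo[y]                              # replay the memoized bit
```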
3.2 Privacy definition using permanent memoization

In what follows we detail the privacy guarantees provided by an algorithm that employs α-point rounding and memoization in conjunction with the ε-DP 1-bit mechanism of Section 2.1, against a data collector that receives a very long stream of a user's responses to data collection events.

Let U be a user and x(1), . . . , x(T) be the sequence of U's private counter values. Given the user's private value α_i, each of {x(j)}_{j∈[T]} gets rounded to the corresponding value {y(j)}_{j∈[T]} in the set A (defined by (4)) according to the rule given in Section 3.1.

Definition 2. Let B be the space of all sequences {z(j)}_{j∈[T]} ∈ A^T, considered up to an arbitrary permutation of the elements of A. We define the behavior pattern b(U) of the user U to be the element of B corresponding to {y(j)}_{j∈[T]}. We refer to the number of distinct elements y(j) in the sequence {y(j)}_{j∈[T]} as the width of b(U).

We now discuss our notion of behavior pattern, using counters that carry daily app usage statistics as an example. Intuitively, users map to the same behavior pattern if they have the same number of different modes (approximate counter values) of using the app, and switch between these modes on the same days. For instance, one user who uses an app for 30 minutes on weekdays, 2 hours on weekends, and 6 hours on holidays, and another user who uses the app for 4 hours on weekdays, 10 minutes on weekends, and does not use it on holidays will likely map to the same behavior pattern. Observe, however, that the mapping from actual private counter values {x(j)} to behavior patterns is randomized, so there is a likelihood that some users with identical private usage profiles may map to different behavior patterns. This is a positive feature of Definition 2 that increases entropy among users with the same behavior pattern.

The next theorem shows that the algorithm of Section 3.1 makes users with the same behavior pattern blend with each other from the viewpoint of the data collector (in the sense of differential privacy).

Theorem 4. Consider users U and V with sequences of private counter values {x_U(1), . . . , x_U(T)} and {x_V(1), . . . , x_V(T)}. Assume that both U and V respond at T data-collection time stamps using the algorithm presented in Section 3.1, and b(U) = b(V) with the width of b(U) equal to w. Let s_U, s_V ∈ {0, 1}^T be the random sequences of responses generated by users U and V; then for any binary string s ∈ {0, 1}^T in the response domain, we have:

Pr[s_U = s] ≤ e^{wε} · Pr[s_V = s].    (5)

3.2.1 Setting parameters

The ε-LDP guarantee provided by Theorem 4 ensures that each user is indistinguishable from other users with the same behavior pattern (in the sense of LDP). The exact shape of behavior patterns is governed by the choice of the parameter s. Setting s very large, say s = m or s = m/2, reduces the number of possible behavior patterns and thus increases the number of users that blend by mapping to a particular behavior pattern. It also yields a stronger guarantee for blending within a pattern, since for all users U the width of b(U) is necessarily at most m/s + 1, and thus by Theorem 4 the likelihood of distinguishing users within a pattern is trivially at most e^{(m/s+1)ε}.

At the same time, there are cases where one can justify using smaller values of s. In fact, consistent users, i.e., users whose private counter always lands in the vicinity of one of a small number of fixed values, enjoy a strong LDP guarantee within their patterns irrespective of s (provided it is not too small), and a smaller s may be advantageous to avoid certain attacks based on auxiliary information, as the set of all possible values of a private counter x_i that lead to a specific output bit b is potentially more complex.

Finally, it is important to stress that the ε-LDP guarantee established in Theorem 4 is not a panacea; it is a weaker guarantee, provided in a much more challenging setting, than the ε-LDP guarantee across all users that we provide for a single round of data collection (an easier setting). While LDP across the whole population of users is resilient to any attack based on auxiliary information, LDP across a sub-population may be vulnerable to such attacks, and additional levels of protection may need to be applied. In particular, if the data collector observes that a user's response has changed, the data collector knows with certainty that the user's true counter value has changed. In the case of app usage telemetry, this implies that the app has been used on one of the days.
This attack is partly mitigated by the output perturbation technique that is discussed in Section 4.

Figure 1: Distribution of pattern supports for App A (three panels: s = m, s = m/2, s = m/3; y-axis: pattern support, x-axis: percentage of users in patterns with at least that support).

3.2.2 Experimental study

We use a real-world dataset of 3 million users with their daily usage of an app (App A), collected (in seconds) over a continuous period of 31 days, to demonstrate the mapping of users to behavior patterns in Figure 1. See the full version of the paper for usage patterns of more apps. For each behavior pattern (Definition 2), we calculate its support as the number of users with their sequences in this pattern. All the patterns' supports sup are plotted (y-axis) in decreasing order, and we can also calculate the percentage of users (x-axis) in patterns with supports at least sup. We vary the parameter s in permanent memoization from m (maximizing blending) to m/3 and report the corresponding distributions of pattern supports in Figure 1.

It is not hard to see that, theoretically, for every behavior pattern there is a very large set of sequences of private counter values {x(t)}_t that may map to it (depending on α_i). Real data (Figure 1) provides evidence that users tend to be approximately consistent, and therefore simpler patterns, i.e., patterns that mostly stick to a single rounded value y(t) = y, correspond to larger sets of sequences {x_i(t)}_t obtained from a real population. In particular, for each app there is always one pattern (corresponding to having one fixed y(t) = y across all 31 days) which blends the majority of users (> 2 million). More complex behavior patterns have fewer users mapping to them. In particular, there always are some lonely users (1%-5%, depending on s) who land in patterns that have a support size of one or two. From the viewpoint of a data collector, such users can only be identified as those having a complex and irregular behavior; the actual nature of that behavior, by Theorem 4, remains uncertain.

3.3 Example

One specific example of a counter collection problem that has been identified in [12, Section 1.3] as non-suitable for the techniques presented in [12], but that can be easily solved using our methods, is to repeatedly collect age in days from a population of users. When we set s = m and apply the algorithm of Section 3.1, we can collect such data for T rounds with high accuracy. Each user necessarily responds with a sequence of bits of the form z^t z̄^{T−t}, where 0 ≤ t ≤ T. Thus the data collector only gets to learn the transition point, i.e., the day when the user's age in days passes the value m − α_i, which is safe from a privacy perspective, as α_i is picked uniformly at random by the user.

3.4 Continual collection for histogram estimation using permanent memoization

Naive memoization. α-point rounding is not suitable for histogram estimation, as counter values have been mapped to k buckets. The single-round LDP mechanism in Duchi et al. [8] sends a 0-1 random response for each bucket: send 1 with probability e^{ε/2}/(e^{ε/2} + 1) if the value is in this bucket, and with probability 1/(e^{ε/2} + 1) if not. This mechanism is ε-LDP. Each user can then memoize a mapping f_k : [k] → {0, 1}^k by running this mechanism once for each v ∈ [k], and always respond f_k(v) if the user's value is in bucket v.
However, this memoization scheme leads to serious privacy leakage: with some auxiliary information, one can infer with high confidence a user's value from the response produced by the mechanism. More concretely, if the data collector knows that the app usage value is in a bucket v and observes the output f_k(v) = z on some day, then whenever the user sends z again in the future, the data collector can infer that the bucket number is v with almost 100% probability.

d-bit memoization. To avoid such privacy leakage, we memoize based on our d-bit mechanism dBitFlip (Section 2.2). Each user runs dBitFlip for each v ∈ [k], with responses created on d buckets j_1, j_2, . . . , j_d (randomly drawn and then fixed per user), and memoizes the response in a mapping f_d : [k] → {0, 1}^d. A user will always send f_d(v) if the bucket number is v. This mechanism is denoted by dBitFlipPM, and the same estimator (3) can be used to estimate the histogram upon receiving the d-bit response from every user. This scheme avoids the privacy leakage that arises due to naive memoization, because multiple (≈ k/2^d w.h.p.) buckets are mapped to the same response. This protection is the strongest when d = 1. Definition 2 about behavior patterns and Theorem 4 can be generalized here to provide a similar privacy guarantee in continual data collection.

Figure 2: Comparison of mechanisms for mean and histogram estimations on real-world datasets. Panels: (a) Mean (n = 0.3 × 10^6); (b) Mean (n = 3 × 10^6); (c) Histogram (n = 0.3 × 10^6).

4 Output perturbation

One of the limitations of our memoization approach based on α-point rounding is that it does not protect the points in time where a user's behavior changes significantly. Consider a user who never uses an app for a long time, and then starts using it. When this happens, suppose the output produced by our algorithm changes from 0 to 1. Then the data collector can learn with certainty that the user's behavior changed (but not what this behavior was or what it became). Output perturbation is one possible mechanism for protecting the exact location of the points in time where a user's behavior has changed.

As mentioned earlier, output perturbation was introduced in [12] as a way to mitigate privacy leakage that arises due to memoization. The main idea behind output perturbation is to flip the output of memoized responses with a small probability 0 ≤ γ ≤ 0.5. This ensures that the data collector will not be able to learn with certainty that the behavior of a user changed at certain time stamps. In the full version of the paper we formalize this notion and prove accuracy and privacy guarantees with output perturbation. Here we content ourselves with mentioning that using output perturbation with a positive γ, in combination with the ε-LDP algorithm of Section 2, is equivalent to invoking the 1BitMean algorithm with

ε′ = ln( ((1 − 2γ) · e^ε/(e^ε + 1) + γ) / ((1 − 2γ) · 1/(e^ε + 1) + γ) ).
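To make the γ-flip and the implied single-round parameter concrete, a minimal sketch follows; the function names are ours, and the ε′ computation simply mirrors the display above, so treat the exact form as a sketch.

```python
import math
import random

def perturb(bit, gamma):
    """Output perturbation: flip the memoized 0/1 response with probability gamma."""
    return bit if random.random() >= gamma else 1 - bit

def effective_epsilon(eps, gamma):
    """Single-round parameter eps' of the displayed formula: the log-ratio of the
    two extreme response probabilities after mixing each bit with a gamma flip."""
    hi = (1 - 2 * gamma) * math.exp(eps) / (math.exp(eps) + 1) + gamma
    lo = (1 - 2 * gamma) * 1.0 / (math.exp(eps) + 1) + gamma
    return math.log(hi / lo)
```

As γ grows towards 0.5, the two response probabilities approach each other and ε′ tends to 0, at the cost of extra noise in the estimate.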
5 Empirical evaluation

We compare our mechanisms (with permanent memoization) for mean and histogram estimation with previous mechanisms for one-time data collection. We would like to emphasize that the goal of these experiments is to show that our mechanisms, with such additional protection, are no worse than or comparable to the state-of-the-art LDP mechanisms in terms of estimation accuracy. We first use the real-world dataset described in Section 3.2.2.

Mean estimation. We implement our 1-bit mechanism (Section 2.1) with α-point randomized rounding and permanent memoization for repeated collection (Section 3), denoted by 1BitRRPM, and with output perturbation to enhance the protection for usage change (Section 4), denoted by 1BitRRPM+OP(γ). We compare it with the Laplace mechanism for LDP mean estimation in [8, 9], denoted by Laplace. We vary the value of ε (ε = 0.1-10) and the number of users (n = 0.3, 3 × 10^6, by randomly picking subsets of all users), and run all the mechanisms 3000 times on the 31-day usage data with three counters. The domain size is m = 24 hours. The average absolute errors (in seconds), with one standard deviation (STD), are reported in Figures 2(a)-2(b). 1BitRRPM is consistently better than Laplace, with smaller errors and narrower STDs. Even with a perturbation probability γ = 1/10, the two are comparable in accuracy. When γ = 1/3, output perturbation is equivalent to adding an additional independent uniform noise from [0, 24 hours] on each day; even in this case, 1BitRRPM+OP(1/3) gives us tolerable accuracy when the number of users is large.

Histogram estimation. We create k = 32 buckets on [0, 24 hours] with even widths to evaluate mechanisms for histogram estimation. We implement our d-bit mechanism (Section 2.2) with permanent memoization for repeated collection (Section 3.4), denoted by dBitFlipPM. In order to provide protection on usage change in repeated collection, we use d = 1, 2, 4 (strongest when d = 1). We compare it with state-of-the-art one-time mechanisms for histogram estimation: BinFlip [8, 9], KFlip (k-RR in [17]), and BinFlip+ (applying the generic protocol with 1-bit reports in [4] on BinFlip). When d = k, dBitFlipPM has the same accuracy as BinFlip. KFlip is sub-optimal for small ε [17] but has better performance when ε is Ω(ln k). In contrast, BinFlip+ has good performance when ε ≤ 2. We repeat the experiment 3000 times and report the average histogram error (i.e., the maximum error across all bars in a histogram), with one standard deviation, for the different algorithms in Figure 2(c), with ε = 0.1-10 and n = 0.3 × 10^6, to confirm the above theoretical results. BinFlip (equivalently, 32BitFlipPM) has the best accuracy overall. With enhanced privacy protection in repeated data collection, 4BitFlipPM is comparable to the one-time collection mechanism KFlip when ε is small (0.1-0.5), and 4BitFlipPM-1BitFlipPM are better than BinFlip+ when ε is large (5-10).

Figure 3: Mechanisms for mean and histogram estimations on different distributions (n = 0.3 × 10^6). Panels: (a) Mean (constant distribution); (b) Mean (uniform distribution); (c) Histogram (normal distribution).

On different data distributions. We have shown that the errors in mean and histogram estimations can be bounded (Theorems 1-2) in terms of ε and the number of users n, together with the number of buckets k and the number of bits d (applicable only to histograms).
We now conduct additional experiments on synthetic datasets to verify that the empirical errors do not change much on different data distributions. Three types of distributions are considered: i) a constant distribution, i.e., each user i has a counter x_i(t) = 12 (hours) all the time; ii) a uniform distribution, i.e., x_i(t) ∼ U(0, 24); and iii) a normal distribution, i.e., x_i(t) ∼ N(12, 2²) (with mean equal to 12 and standard deviation equal to 2), truncated to [0, 24]. Three synthetic datasets are created by drawing samples of size n = 0.3 × 10^6 from these three distributions. Some results are plotted in Figure 3: the empirical errors on the different distributions are almost the same as those in Figures 2(a) and 2(c). One can refer to the full version of the paper [6] for the complete set of charts.

6 Deployment

In earlier sections, we presented new LDP mechanisms geared towards repeated collection of counter data, with formal privacy guarantees even after being executed for a long period of time. Our mean estimation algorithm has been deployed by Microsoft, starting with Windows Insiders in the Windows 10 Fall Creators Update. The algorithm is used to collect the number of seconds that a user has spent using a particular app. Data collection is performed every 6 hours, with ε = 1. Memoization is applied across days, and output perturbation uses γ = 0.2. According to Section 4, this makes a single round of data collection satisfy ε′-DP with ε′ = 0.686.

One important feature of our deployment is that collecting usage data for multiple apps from a single user only leads to a minor additional privacy loss that is independent of the actual number of apps. Intuitively, this happens because we are collecting active usage data, and the total number of seconds that a user can spend across multiple apps in 6 hours is bounded by an absolute constant that is independent of the number of apps.

Theorem 5. Using the 1BitMean mechanism with a parameter ε′ to simultaneously collect t counters x_1, . . . , x_t, where each x_i satisfies 0 ≤ x_i ≤ m and Σ_i x_i ≤ m, preserves ε″-DP, where ε″ = ε′ + e^{ε′} − 1.

We defer the proof to the full version of the paper [6]. By Theorem 5, in deployment, a single round of data collection across an arbitrarily large number of apps satisfies ε″-DP, where ε″ = 1.672.
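As a quick sanity check, the composition of Theorem 5 can be evaluated directly against the deployment figures quoted above (a two-line sketch):

```python
import math

def multi_counter_epsilon(eps1):
    """Theorem 5: eps'' = eps' + e^{eps'} - 1 for simultaneously collected counters."""
    return eps1 + math.exp(eps1) - 1

print(round(multi_counter_epsilon(0.686), 3))  # 1.672, matching the value stated above
```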
References

[1] S. Agrawal and J. R. Haritsa. A framework for high-accuracy privacy-preserving mining. In ICDE, pages 193-204, 2005.
[2] N. Bansal, D. Coppersmith, and M. Sviridenko. Improved approximation algorithms for broadcast scheduling. SIAM Journal on Computing, 38(3):1157-1174, 2008.
[3] R. Bassily, K. Nissim, U. Stemmer, and A. Thakurta. Practical locally private heavy hitters. In NIPS, 2017.
[4] R. Bassily and A. D. Smith. Local, private, efficient protocols for succinct histograms. In STOC, pages 127-135, 2015.
[5] R. Bassily, A. D. Smith, and A. Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In FOCS, pages 464-473, 2014.
[6] B. Ding, J. Kulkarni, and S. Yekhanin. Collecting telemetry data privately. arXiv, 2017.
[7] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. In FOCS, pages 429-438, 2013.
[8] J. C. Duchi, M. J. Wainwright, and M. I. Jordan. Local privacy and minimax bounds: Sharp rates for probability estimation. In NIPS, pages 1529-1537, 2013.
[9] J. C. Duchi, M. J. Wainwright, and M. I. Jordan. Minimax optimal procedures for locally private estimation. CoRR, abs/1604.02390, 2016.
[10] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In TCC, pages 265-284, 2006.
[11] C. Dwork, A. Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014.
[12] Ú. Erlingsson, V. Pihur, and A. Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In CCS, pages 1054-1067, 2014.
[13] A. Evfimievski, J. Gehrke, and R. Srikant. Limiting privacy breaches in privacy preserving data mining. In PODS, pages 211-222, 2003.
[14] G. C. Fanti, V. Pihur, and Ú. Erlingsson. Building a RAPPOR with the unknown: Privacy-preserving learning of associations and data dictionaries. PoPETs, 2016(3):41-61, 2016.
[15] M. X. Goemans, M. Queyranne, A. S. Schulz, M. Skutella, and Y. Wang. Single machine scheduling with release dates. SIAM Journal on Discrete Mathematics, 15(2):165-192, 2002.
[16] J. Hsu, S. Khanna, and A. Roth. Distributed private heavy hitters. In ICALP, pages 461-472, 2012.
[17] P. Kairouz, K. Bonawitz, and D. Ramage. Discrete distribution estimation under local privacy. ICML, 2016.
[18] J. Tang, A. Korolova, X. Bai, X. Wang, and X. Wang. Privacy loss in Apple's implementation of differential privacy on MacOS 10.12. arXiv 1709.02753, 2017.
[19] S. L. Warner. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63-69, 1965.
[20] L. Wasserman and S. Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105(489):375-389, 2010.
Concrete Dropout

Yarin Gal ([email protected]), University of Cambridge and Alan Turing Institute, London
Jiri Hron ([email protected]), University of Cambridge
Alex Kendall ([email protected]), University of Cambridge

Abstract

Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary: a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better-calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout's discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field, where larger dropout probabilities are often used in deeper model layers.

1 Introduction

Well-calibrated uncertainty is crucial for many tasks in deep learning: from the detection of adversarial examples [25], through an agent exploring its environment safely [10, 18], to analysing failure cases in autonomous driving vision systems [20]. Tasks such as these depend on good uncertainty estimates to perform well, with miscalibrated uncertainties in reinforcement learning (RL) having the potential to lead to over-exploration of the environment. Or, much worse, miscalibrated uncertainties in an autonomous driving vision system can lead to its failure to detect its own ignorance about the world, resulting in the loss of human life [29].

A principled technique for obtaining uncertainty in models such as the above is Bayesian inference, with dropout [9, 14] being a practical inference approximation. In dropout inference the neural network is trained with dropout at training time, and at test time the output is evaluated by dropping units randomly to generate samples from the predictive distribution [9]. But to get well-calibrated uncertainty estimates it is necessary to adapt the dropout probability as a variational parameter to the data at hand [7]. In previous works this was done through a grid-search over the dropout probabilities [9]. Grid-search can pose difficulties, though, in certain tasks. Grid-search is a prohibitive operation with large models such as the ones used in computer vision [19, 20], where multiple GPUs would be used to train a single model. Grid-searching over the dropout probability in such models would require either an immense waste of computational resources or extremely prolonged experimentation cycles. More so, the number of possible per-layer dropout configurations grows exponentially as the number of model layers increases. Researchers have therefore restricted the grid-search to a small number of possible dropout values to make such a search feasible [8], which in turn might hurt uncertainty calibration in vision models for autonomous systems.

In other tasks a grid-search over the dropout probabilities is impossible altogether. In tasks where the amount of data changes over time, for example, the dropout probability should be decreased as the amount of data increases [7].
This is because the dropout probability has to diminish to zero in the limit of data, with the model explaining away its uncertainty completely (this is explained in more detail in Section 2). RL is an example setting where the dropout probability has to be adapted dynamically. The amount of data collected by the agent increases steadily with each episode, and in order to reduce the agent's uncertainty, the dropout probability must be decreased. Grid-searching over the dropout probability is impossible in this setting, as the agent would have to be reset and re-trained on the entire data with each newly acquired episode. A method to tune the dropout probability which results in good accuracy and uncertainty estimates is needed then.

Existing literature on tuning the dropout probability is sparse. Current methods include the optimisation of α in Gaussian dropout following its variational interpretation [23], and overlaying a binary belief network to optimise the dropout probabilities as a function of the inputs [2]. The latter approach is of limited practicality with large models due to the increase in model size. With the former approach [23], practical use reveals some unforeseen difficulties [28]. Most notably, the α values have to be truncated at 1, as the KL approximation would diverge otherwise. In practice the method under-performs.

In this work we propose a new practical dropout variant which can be seen as a continuous relaxation of the discrete dropout technique. Relying on recent techniques in Bayesian deep learning [16, 27], together with appropriate regularisation terms derived from dropout's Bayesian interpretation, our variant allows the dropout probability to be tuned using gradient methods. This results in better-calibrated uncertainty estimates in large models, avoiding the coarse and expensive grid-search over the dropout probabilities. Further, this allows us to use dropout in RL tasks in a principled way.

We analyse the behaviour of our proposed dropout variant on a wide variety of tasks. We study its ability to capture different types of uncertainty on a simple synthetic dataset with known ground truth uncertainty, and show how its behaviour changes with increasing amounts of data versus model size. We show improved accuracy and uncertainty on popular datasets in the field, and further demonstrate our variant on large models used in the computer vision community, showing a significant reduction in experiment time as well as improved model performance and uncertainty calibration. We demonstrate our dropout variant in a model-based RL task, showing that the agent automatically reduces its uncertainty as the amount of data increases, and give insights into common practice in the field, where a small dropout probability is often used with the shallow layers of a model and a large dropout probability with the deeper layers.

2 Background

In order to understand the relation between a model's uncertainty and the dropout probability, we start with a slightly philosophical discussion of the different types of uncertainty available to us. This discussion will be grounded in the development of new tools to better understand these uncertainties in the next section.

Three types of uncertainty are often encountered in Bayesian modelling.
Epistemic uncertainty captures our ignorance about the models most suitable to explain our data; aleatoric uncertainty captures noise inherent in the environment; lastly, predictive uncertainty conveys the model's uncertainty in its output. Epistemic uncertainty reduces as the amount of observed data increases, hence its alternative name "reducible uncertainty". When dealing with models over functions, this uncertainty can be captured through the range of possible functions and the probability given to each function. This uncertainty is often summarised by generating function realisations from our distribution and estimating the variance of the functions when evaluated on a fixed set of inputs.

Aleatoric uncertainty captures noise sources such as measurement noise, i.e., noise which cannot be explained away even if more data were available (although this uncertainty can be reduced through the use of higher precision sensors, for example). This uncertainty is often modelled as part of the likelihood, at the top of the model, where we place some noise corruption process on the function's output. Gaussian corrupting noise is often assumed in regression, although other noise sources are popular as well, such as Laplace noise. By inferring the Gaussian likelihood's precision parameter τ, for example, we can estimate the amount of aleatoric noise inherent in the data.

Combining both types of uncertainty gives us the predictive uncertainty: the model's confidence in its prediction, taking into account noise it can explain away and noise it cannot. This uncertainty is often obtained by generating multiple functions from our model and corrupting them with noise (with precision τ). Calculating the variance of these outputs on a fixed set of inputs, we obtain the model's predictive uncertainty. This uncertainty has different properties for different inputs.
Allowing the probability to change (for example by grid-searching it to maximise validation loglikelihood [9]) will let the model decrease its epistemic uncertainty by choosing smaller dropout probabilities. But if we wish to replace the grid-search with a gradient method, we need to define an optimisation objective to optimise p with respect to. This is not a trivial thing, as our aim is not to maximise model performance, but rather to obtain good epistemic uncertainty. What is a suitable objective for this? This is discussed next. 3 Concrete Dropout One of the difficulties with the approach above is that grid-searching over the dropout probability can be expensive and time consuming, especially when done with large models. Even worse, when operating in a continuous learning setting such as reinforcement learning, the model should collapse its epistemic uncertainty as it collects more data. When grid-searching this means that the data has to be set-aside such that a new model could be trained with a smaller dropout probability when the dataset is large enough. This is infeasible in many RL tasks. Instead, the dropout probability can be optimised using a gradient method, where we seek to minimise some objective with respect to (w.r.t.) that parameter. A suitable objective follows dropout?s variational interpretation [7]. Following the variational interpretation, dropout is seen as an approximating distribution q? (!) to the posterior in a Bayesian neural network with a set of random weight matrices ! = {Wl }L l=1 with L layers and ? the set of variational parameters. The optimisation objective that follows from the variational interpretation can be written as: 1 X 1 LbMC (?) = log p(yi |f ! (xi )) + KL(q? (!)||p(!)) (1) M N i2S with ? parameters to optimise, N the number of data points, S a random set of M data points, f ! (xi ) the neural network?s output on input xi when evaluated with weight matrices realisation !, and p(yi |f ! (xi )) the model?s likelihood, e.g. a Gaussian with mean f ! (xi ). The KL term KL(q? (!)||p(!)) is a ?regularisation? term which ensures that the approximate posterior q? (!) does not deviate too far from the prior distribution p(!). A note on our choice for a prior is given in appendix B. Assume that the set of variational parameters for the dropout distribution Q satisfies ? = {Ml , pl }L l=1 , a set of mean weight matrices and dropout probabilities such that q? (!) = l qMl (Wl ) and qMl (Wl ) = Ml ?diag[Bernoulli(1 pl )Kl ] for a single random weight matrix Wl of dimensions Kl+1 by Kl . The KL term can be approximated well following [7] KL(q? (!)||p(!)) = KL(qM (W)||p(W)) / L X KL(qMl (Wl )||p(Wl )) l=1 2 l (1 p) ||M||2 2 1 KH(p) (2) (3) This raises an interesting hypothesis: does dropout work well because it forces the weights to be near zero, i.e. regularising the weights? We will comment on this later. 3 with H(p) := p log p (1 p) log(1 p) (4) the entropy of a Bernoulli random variable with probability p. The entropy term can be seen as a dropout regularisation term. This regularisation term depends on the dropout probability p alone, which means that the term is constant w.r.t. model weights. For this reason the term can be omitted when the dropout probability is not optimised, but the term is crucial when it is optimised. Minimising the KL divergence between qM (W) and the prior is equivalent to maximising the entropy of a Bernoulli random variable with probability 1 p. This pushes the dropout probability towards 0.5?the highest it can attain. 
The scaling of the regularisation term means that large models will push the dropout probability towards 0.5 much more than smaller models, but as the amount of data N increases the dropout probability will be pushed towards 0 (because of the first term in eq. (1)). We need to evaluate the derivative of the last optimisation objective eq. (1) w.r.t. the parameter p. Several estimators are available for us to do this: for example the score function estimator (also known as a likelihood ratio estimator and Reinforce [6, 12, 30, 35]), or the pathwise derivative estimator (this estimator is also referred to in the literature as the re-parametrisation trick, infinitesimal perturbation analysis, and stochastic backpropagation [11, 22, 31, 34]). The score function estimator is known to have extremely high variance in practice, making optimisation difficult. Following early experimentation with the score function estimator, it was evident that the increase in variance was not manageable. The pathwise derivative estimator is known to have much lower variance than the score function estimator in many applications, and indeed was used by [23] with Gaussian dropout. However, unlike the Gaussian dropout setting, in our case we need to optimise the parameter of a Bernoulli distributions. The pathwise derivative estimator assumes that the distribution at hand can be re-parametrised in the form g(?, ?) with ? the distribution?s parameters, and ? a random variable which does not depend on ?. This cannot be done with the Bernoulli distribution. Instead, we replace dropout?s discrete Bernoulli distribution with its continuous relaxation. More specifically, we use the Concrete distribution relaxation. This relaxation allows us to re-parametrise the distribution and use the low variance pathwise derivative estimator instead of the score function estimator. The Concrete distribution is a continuous distribution used to approximate discrete random variables, suggested in the context of latent random variables in deep generative models [16, 27]. One way to view the distribution is as a relaxation of the ?max? function in the Gumbel-max trick to a ?softmax? ? = g(?, ?) with function, which allows the discrete random variable z to be written in the form z parameters ?, and ? a random variable which does not depend on ?. We will concentrate on the binary random variable case (i.e. a Bernoulli distribution). Instead of sampling the random variable from the discrete Bernoulli distribution (generating zeros and ones) we sample realisations from the Concrete distribution with some temperature t which results in values in the interval [0, 1]. This distribution concentrates most mass on the boundaries of the interval 0 and 1. In fact, for the one dimensional case here with the Bernoulli distribution, the Concrete distribution ? of the Bernoulli random variable z reduces to a simple sigmoid distribution which has a relaxation z convenient parametrisation: ? ? 1 ? = sigmoid z ? log p log(1 p) + log u log(1 u) (5) t ? is depicted in figure 10 in appendix with uniform u ? Unif(0, 1). This relation between u and z A. Here u is a random variable which does not depend on our parameter p. The functional relation ? and u is differentiable w.r.t. p. between z With the Concrete relaxation of the dropout masks, it is now possible to optimise the dropout probability using the pathwise derivative estimator. We refer to this Concrete relaxation of the dropout masks as Concrete Dropout. 
A Python code snippet for Concrete dropout in Keras [5] is given in appendix C, spanning about 20 lines of code, and experiment code is given online at https://github.com/yaringal/ConcreteDropout. We next assess the proposed dropout variant empirically on a large array of tasks.

4 Experiments

We next analyse the behaviour of our proposed dropout variant on a wide variety of tasks. We study how our dropout variant captures different types of uncertainty on a simple synthetic dataset with known ground truth uncertainty, and show how its behaviour changes with increasing amounts of data versus model size (Section 4.1). We show that Concrete dropout matches the performance of hand-tuned dropout on the UCI datasets (Section 4.2) and MNIST (Section 4.3), and further demonstrate our variant on large models used in the computer vision community (Section 4.4). We show a significant reduction in experiment time as well as improved model performance and uncertainty calibration. Lastly, we demonstrate our dropout variant in a model-based RL task extending on [10], showing that the agent correctly reduces its uncertainty dynamically as the amount of data increases (Section 4.5).

We compare the performance of hand-tuned dropout to our Concrete dropout variant in the following experiments. We chose not to compare to Gaussian dropout in our experiments, as when optimising Gaussian dropout's α following its variational interpretation [23], the method is known to under-perform [28] (however, Gal [7] compared Gaussian dropout to Bernoulli dropout and found that when optimising the dropout probability by hand, the two methods perform similarly).

4.1 Synthetic data

The tools above allow us to separate both epistemic and aleatoric uncertainties with ease. We start with an analysis of how the different uncertainties behave with different data sizes. For this we optimise both the dropout probability p as well as the (per point) model precision τ (following [20] for the latter one). We generated simple data from the function y = 2x + 8 + ε with known noise ε ∼ N(0, 1) (i.e. corrupting the observations with noise of a fixed standard deviation 1), creating datasets increasing in size, ranging from 10 data points (example in figure 1e) up to 10,000 data points (example in figure 1f). Knowing the true amount of noise in our synthetic dataset, we can assess the quality of the uncertainties predicted by the model. We used models with three hidden layers of size 1024 and ReLU non-linearities, and repeated each experiment three times, averaging the experiments' results.

Figure 1a shows the epistemic uncertainty (in standard deviation) decreasing as the amount of data increases. This uncertainty was computed by generating multiple function draws and evaluating the functions over a test set generated from the same data distribution. Figure 1b shows the aleatoric uncertainty tending towards 1 as the data increases, showing that the model obtains an increasingly improved estimate of the model precision as more data is given. Finally, figure 1c shows the predictive uncertainty obtained by combining the variances of both plots above. This uncertainty seems to converge to a constant value as the epistemic uncertainty decreases and the estimation of the aleatoric uncertainty improves.
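A minimal sketch of the Monte Carlo computation behind these plots, following the decomposition described in Section 2 (our illustration; model_sample is a hypothetical stand-in for one stochastic forward pass with dropout active):

```python
import numpy as np

def mc_uncertainties(model_sample, x, tau, T=100):
    """Epistemic std = spread of T dropout function draws at the inputs x;
    predictive variance adds the aleatoric term 1/tau of the Gaussian likelihood."""
    draws = np.stack([model_sample(x) for _ in range(T)])   # shape (T, ...)
    epistemic_var = draws.var(axis=0)
    predictive_var = epistemic_var + 1.0 / tau              # aleatoric variance = 1/tau
    return draws.mean(axis=0), np.sqrt(epistemic_var), np.sqrt(predictive_var)
```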
Lastly, the optimised dropout probabilities corresponding to the various dataset sizes are given in figure 1d. As can be seen, the optimal dropout probability in each layer decreases as more data is observed, starting from near 0.5 probabilities in all layers with the smallest dataset, and converging to values ranging between 0.2 and 0.4 when 10,000 data points are given to the model. More interestingly, the optimal dropout probability for the input layer is constant at near-zero, which is often observed with hand-tuned dropout probabilities as well.

Figure 1: Different uncertainties (epistemic, aleatoric, and predictive, in std) as the number of data points increases, as well as optimised dropout probabilities and example synthetic datasets. Panels: (a) epistemic; (b) aleatoric; (c) predictive; (d) optimised dropout probability values (per layer), first layer in blue; (e) example dataset with 10 data points; (f) example dataset with 10,000 data points.

Figure 2: Test negative log likelihood. The lower the better. Best viewed in colour.

Figure 3: Test RMSE. The lower the better. Best viewed in colour.

4.2 UCI

We next assess the performance of our technique in a regression setting using the popular UCI benchmark [26]. All experiments were performed using a fully connected neural network (NN) with 2 hidden layers, 50 units each, following the experiment setup of [13]. We compare against a two layer Bayesian NN approximated by standard dropout [9] and a Deep Gaussian Process of depth 2 [4]. Test negative log likelihood for 4 datasets is reported in figure 2, with test error reported in figure 3. Full results as well as the experiment setup are given in appendix D.

Figure 4 shows posterior dropout probabilities across different cross validation splits. Intriguingly, the input layer's dropout probability (p) always decreases to essentially zero. This is a recurring pattern we observed with all UCI dataset experiments, and it is further discussed in the next section.

Figure 4: Converged dropout probabilities per layer, split and UCI dataset (best viewed on a computer screen).

4.3 MNIST

We further experimented with the standard classification benchmark MNIST [24]. Here we assess the accuracy of Concrete dropout, and study its behaviour in relation to the training set size and model size. We assessed a fully connected NN with 3 hidden layers and ReLU activations. All models were trained for 500 epochs (≈ 2 × 10^5 iterations); each experiment was run three times using random initial settings in order to avoid reporting spurious results.

Concrete dropout achieves a MNIST accuracy of 98.6%, matching that of hand-tuned dropout. Figure 5 shows a decrease in converged dropout probabilities as the size of data increases. Notice that while the dropout probabilities in the third hidden and output layers vary by a relatively small amount, they converge to zero in the first two layers. This happens despite the fact that the 2nd and 3rd hidden layers are of the same shape and prior length scale setting. Note how the optimal dropout probabilities are zero in the first layer, matching the previous results.

Figure 5: Converged dropout probabilities as function of training set size (3x512 MLP).

Figure 6: Converged dropout probabilities as function of number of hidden units.
However, observe that the model only becomes confident about the optimal input transformation (dropout probabilities are set to zero) after seeing a relatively large number of examples in comparison to the model size (explaining the results in Section 4.1, where the dropout probabilities of the first layer did not collapse to zero). This implies that removing dropout a priori might lead to suboptimal results if the training set is not sufficiently informative, and it is best to allow the probability to adapt to the data.

Figure 6 provides further insights by comparing the above examined 3x512 MLP model (orange) to other architectures. As can be seen, the dropout probabilities in the first layer stay close to zero, but the others steadily increase with the model size, as the epistemic uncertainty increases. Further results are given in appendix D.1.

Figure 7: Example output from our semantic segmentation model (a large computer vision model). Panels: (a) input image; (b) semantic segmentation; (c) epistemic uncertainty.

4.4 Computer vision

In computer vision, dropout is typically applied to the final dense layers as a regulariser, because the top layers of the model contain the majority of the model's parameters [32]. For encoder-decoder semantic segmentation models, such as Bayesian SegNet, [21] found through grid-search that the best performing model used dropout over the middle layers (central encoder and decoder units), as they contain the most parameters. However, the vast majority of computer vision models leave the dropout probability fixed at p = 0.5, because it is prohibitively expensive to optimise manually, with a few notable exceptions which required considerable computing resources [15, 33].

We demonstrate Concrete dropout's efficacy by applying it to the DenseNet model [17] for semantic segmentation (example input, output, and uncertainty map are given in Figure 7). We use the same training scheme and hyper-parameters as the original authors [17]. We use a Concrete dropout weight regulariser of 10^-8 (derived from the prior length-scale) and a dropout regulariser of 0.01 × N × H × W, where N is the training dataset size and H × W is the number of pixels in the image. This is because the loss is pixel-wise, with random image crops used as model input. The original model uses a hand-tuned dropout p = 0.2. Table 1 shows that replacing dropout with Concrete dropout marginally improves performance.

Table 1: Comparing the performance of Concrete dropout against baseline models with DenseNet [17] on the CamVid road scene semantic segmentation dataset.

    DenseNet Model Variant              MC Sampling    IoU
    No Dropout                          -              65.8
    Dropout (manually-tuned p = 0.2)    no             67.1
    Dropout (manually-tuned p = 0.2)    yes            67.2
    Concrete Dropout                    no             67.2
    Concrete Dropout                    yes            67.4

Table 2: Calibration plot. Concrete dropout reduces the uncertainty calibration RMSE compared to the baselines.

Concrete dropout is tolerant to initialisation values. Figure 8 shows that for a range of initialisation choices in p = [0.05, 0.5] we converge to similar optima. Interestingly, we observe that Concrete dropout learns a different pattern to the manual dropout tuning results of [21]. The second and last layers have larger dropout probability, while the first and middle layers are largely deterministic.

Concrete dropout improves the calibration of uncertainty obtained from the models. Table 2 shows calibration plots of a Concrete dropout model against the baselines. This compares the model's predicted uncertainty against the accuracy frequencies, where a perfectly calibrated model corresponds to the line y = x.
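One common way to produce such a calibration plot is to bin the model's predicted confidences and compare each bin's mean confidence with its empirical accuracy; a generic sketch follows (the paper's exact binning procedure may differ).

```python
import numpy as np

def calibration_curve(probs, correct, n_bins=10):
    """Bin predicted confidences and compare each bin's mean confidence with its
    empirical accuracy; a perfectly calibrated model lies on the line y = x."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, bins) - 1, 0, n_bins - 1)
    conf = np.array([probs[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(n_bins)])
    acc = np.array([correct[idx == b].mean() if np.any(idx == b) else np.nan
                    for b in range(n_bins)])
    return conf, acc
```

The calibration RMSE reported above can then be computed as the root mean squared gap between the two returned curves over the populated bins.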
Figure 8: Learned Concrete dropout probabilities for the first, second, middle and last two layers in a semantic segmentation model. Panels: (a) L = 0; (b) L = 1; (c) L = n/2; (d) L = n − 1; (e) L = n. p converges to the same minima for a range of initialisations from p = [0.05, 0.5].

The Concrete dropout layer requires negligible additional compute compared with standard dropout layers with our implementation. However, using conventional dropout requires considerable resources to manually tune the dropout probabilities. Typically, computer vision models consist of 10M+ parameters and take multiple days to train on a modern GPU. Using Concrete dropout can decrease the time of model training by weeks by automatically learning the dropout probabilities.

4.5 Model-based reinforcement learning

Existing RL research using dropout uncertainty would hold the dropout probability fixed, or decrease it following a schedule [9, 10, 18]. This gives a proxy to the epistemic uncertainty, but raises other difficulties such as planning the dropout schedule. This can also lead to under-exploitation of the environment, as was reported in [9] with Thompson sampling. To avoid this under-exploitation, Gal et al. [10] for example performed a grid-search to find a p that trades off this exploration and exploitation over the acquisition of multiple episodes at once.

We repeated the experiment setup of [10], where an agent attempts to balance a pendulum hanging from a cart by applying force to the cart. [10] used a fixed dropout probability of 0.1 in the dynamics model. Instead, we use Concrete dropout with the dynamics model, and are able to match their cumulative reward (16.5 with 25 time steps). Concrete dropout allows the dropout probability to adapt as more data is collected, instead of being set once and held fixed. Figures 9a-9c show the optimised dropout probabilities per layer vs. the number of episodes (acquired data), as well as the fixed probabilities in the original setup. Concrete dropout automatically decreases the dropout probability as more data is observed. Figures 9d-9g show the dynamics model's epistemic uncertainty for each one of the four state components in the system, [x, ẋ, θ, θ̇] (cart location, velocity, pendulum angle, and angular velocity). This uncertainty was calculated on a validation set split from the total data after each episode. Note how with Concrete dropout the epistemic uncertainty decreases over time as more data is observed.

Figure 9: Concrete dropout in model-based RL. Left three plots, (a) L = 0, (b) L = 1, (c) L = 2: dropout probabilities for the 3 layers of the dynamics model as a function of the number of episodes (amount of data) observed by the agent (Concrete dropout in blue, baseline in orange). Right four plots, (d)-(g): epistemic uncertainty over the dynamics model output for the four state components [x, ẋ, θ, θ̇]. Best viewed on a computer screen.

5 Conclusions and Insights

In this paper we introduced Concrete dropout, a principled extension of dropout which allows for the dropout probabilities to be tuned. We demonstrated improved calibration and uncertainty estimates, as well as reduced experimentation cycle time. Two interesting insights arise from this work. First, common practice in the field, where a small dropout probability is often used with the shallow layers of a model, seems to be supported by dropout's variational interpretation. This can be seen as evidence towards the variational explanation of dropout.
Secondly, an open question arising from previous research was whether dropout works well because it forces the weights to be near zero with a fixed p. Here we showed that allowing p to adapt gives performance comparable to an optimal fixed p. Allowing p to change does not force the weight magnitude to be near zero, suggesting that the hypothesis that dropout works because p is fixed is false.

References

[1] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
[2] Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pages 3084-3092, 2013.
[3] Matthew J. Beal and Zoubin Ghahramani. The variational Bayesian EM algorithm for incomplete data: With application to scoring graphical model structures. Bayesian Statistics, 2003.
[4] Thang D. Bui, José Miguel Hernández-Lobato, Daniel Hernández-Lobato, Yingzhen Li, and Richard E. Turner. Deep Gaussian processes for regression using approximate expectation propagation. In Proceedings of the 33rd International Conference on Machine Learning, ICML'16, pages 1472-1481, 2016.
[5] François Chollet. Keras, 2015. URL https://github.com/fchollet/keras. GitHub repository.
[6] Michael C. Fu. Chapter 19: Gradient estimation. In Shane G. Henderson and Barry L. Nelson, editors, Simulation, volume 13 of Handbooks in Operations Research and Management Science, pages 575-616. Elsevier, 2006.
[7] Yarin Gal. Uncertainty in Deep Learning. PhD thesis, University of Cambridge, 2016.
[8] Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. NIPS, 2016.
[9] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. ICML, 2016.
[10] Yarin Gal, Rowan McAllister, and Carl E. Rasmussen. Improving PILCO with Bayesian neural network dynamics models. In Data-Efficient Machine Learning workshop, ICML, April 2016.
[11] Paul Glasserman. Monte Carlo Methods in Financial Engineering, volume 53. Springer Science & Business Media, 2013.
[12] Peter W. Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75-84, 1990.
[13] José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In ICML, 2015.
[14] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[15] Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
[16] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-softmax. In Bayesian Deep Learning workshop, NIPS, 2016.
[17] Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, and Yoshua Bengio. The one hundred layers tiramisu: Fully convolutional DenseNets for semantic segmentation. arXiv preprint arXiv:1611.09326, 2016.
[18] Gregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, and Sergey Levine. Uncertainty-aware reinforcement learning for collision avoidance. arXiv preprint arXiv:1702.01182, 2017.
[19] Michael Kampffmeyer, Arnt-Børre Salberg, and Robert Jenssen.
Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2016.
[20] Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? arXiv preprint arXiv:1703.04977, 2017.
[21] Alex Kendall, Vijay Badrinarayanan, and Roberto Cipolla. Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680, 2015.
[22] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[23] Diederik P. Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In NIPS. Curran Associates, Inc., 2015.
[24] Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits. 1998. URL http://yann.lecun.com/exdb/mnist/.
[25] Yingzhen Li and Yarin Gal. Dropout inference in Bayesian neural networks with alpha-divergences. arXiv preprint arXiv:1703.02914, 2017.
[26] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[27] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete distribution: A continuous relaxation of discrete random variables. In Bayesian Deep Learning workshop, NIPS, 2016.
[28] Dmitry Molchanov, Arseniy Ashuha, and Dmitry Vetrov. Dropout-based automatic relevance determination. In Bayesian Deep Learning workshop, NIPS, 2016.
[29] NHTSA. PE 16-007. Technical report, U.S. Department of Transportation, National Highway Traffic Safety Administration, January 2017. Tesla crash preliminary evaluation report.
[30] John Paisley, David Blei, and Michael Jordan. Variational Bayesian inference with stochastic search. ICML, 2012.
[31] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[32] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[33] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.
[34] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1971-1979, 2014.
[35] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
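As a closing note on the experiments above: the per-component epistemic uncertainties plotted in Figure 9 are just the variance of repeated stochastic forward passes. A minimal sketch, where `predict_stochastic` is a hypothetical model hook that resamples the dropout masks on every call:

```python
import numpy as np

def mc_epistemic_uncertainty(predict_stochastic, x, n_samples=50):
    """Predictive mean and per-output variance from MC dropout.

    predict_stochastic maps an input batch to outputs, with dropout
    masks resampled between calls; the variance across samples is the
    epistemic quantity tracked per state component in Figure 9.
    """
    samples = np.stack([predict_stochastic(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)
```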
Some Estimates of Necessary Number of Connections and Hidden Units for Feed-Forward Networks

Adam Kowalczyk
Telecom Australia, Research Laboratories
770 Blackburn Road, Clayton, Vic. 3168, Australia ([email protected])

Abstract

The feed-forward networks with fixed hidden units (FHU-networks) are compared against the category of remaining feed-forward networks with variable hidden units (VHU-networks). Two broad classes of tasks on a finite domain $X \subset \mathbb{R}^n$ are considered: approximation of every function from an open subset of functions on $X$ and representation of every dichotomy of $X$. For the first task it is found that both network categories require the same minimal number of synaptic weights. For the second task, and $X$ in general position, it is shown that VHU-networks with threshold logic hidden units can have approximately $1/n$ times fewer hidden units than any FHU-network must have.

1 Introduction

A good candidate artificial neural network for short term memory needs to be: (i) easy to train, (ii) able to support a broad range of tasks in a domain of interest and (iii) simple to implement. The class of feed-forward networks with fixed hidden units (HU) and adjustable synaptic weights at the top layer only (shortly: FHU-networks) is an obvious candidate to consider in this context. This class covers a wide range of networks considered in the past, including the classical perceptron, higher order networks and non-linear associative mappings. Also a number of training algorithms were specifically devoted to this category (e.g. perceptron, madaline or pseudoinverse) and a number of hardware solutions were investigated for their implementation (e.g. optical devices [8]).

Leaving aside the non-trivial tasks of constructing the domain specific HU for a FHU-network [9] and then optimal loading of specific tasks, in this paper we concentrate on assessing the abilities of such structures to support a wide range of tasks in comparison to more complex feed-forward networks with multiple layers of variable HU (VHU-networks). More precisely, on a finite domain $X$ two benchmark tests are considered: approximation of every function from an open subset of functions on $X$ and representation of every dichotomy of $X$. Some necessary and sufficient estimates of the minimal necessary numbers of adaptable synaptic weights and of HU are obtained and then combined with some sufficient estimates in [10] to provide the final results. In the Appendix we present an outline of some of our recent results on the extension of the classical Function-Counting Theorem [2] to the multilayer case and discuss some of its implications for assessing network capacities.

2 Statement of the main results

In this paper $X$ will denote a subset of $\mathbb{R}^n$ of $N$ points. Of interest to us are multilayer feed-forward networks (shortly FF-networks) $F_w : X \to \mathbb{R}$, depending on the $k$-tuple $w = (w_1, \ldots, w_k) \in \mathbb{R}^k$ of adjustable synaptic weights to be selected on loading the desired tasks to the network. The FF-networks are split into the two categories defined above:

- FHU-networks with fixed hidden units $\phi_i : X \to \mathbb{R}$:
$$F_w(x) \stackrel{\text{def}}{=} \sum_{i=1}^{k} w_i \phi_i(x) \qquad (x \in X), \tag{1}$$

- VHU-networks with variable hidden units $\psi_{w'',i} : X \to \mathbb{R}$ depending on some adjustable synaptic weights $w''$, where $w = (w', w'') \in \mathbb{R}^{k'} \times \mathbb{R}^{k''} = \mathbb{R}^k$:
$$F_w(x) \stackrel{\text{def}}{=} \sum_{i=1}^{k'} w'_i \psi_{w'',i}(x) \qquad (x \in X). \tag{2}$$

Of special interest are situations where hidden units are built from one or more layers of artificial neurons, which, for simplicity, can be thought of as devices computing simple functions of the form
$(y_1, \ldots, y_m) \in \mathbb{R}^m \mapsto \sigma(w_{i1} y_1 + w_{i2} y_2 + \cdots + w_{im} y_m)$, where $\sigma : \mathbb{R} \to \mathbb{R}$ is a non-decreasing squashing function. Two particular examples of squashing functions are (i) the infinitely differentiable sigmoid function $t \mapsto (1 + \exp(-t))^{-1}$ and (ii) the step function $\theta(t)$ defined as 1 for $t \geq 0$ and 0 otherwise. In the latter case the artificial neuron is called a threshold logic neuron (ThL-neuron). In the formulation of results below all biases are treated as synaptic weights attached to links from special constant HUs ($\equiv 1$).

2.1 Function approximation

The space $\mathbb{R}^X$ of all real functions on $X$ has the natural structure of a vector space isomorphic with $\mathbb{R}^N$. We introduce the Euclidean norm $\|f\| \stackrel{\text{def}}{=} \big(\sum_{x \in X} f^2(x)\big)^{1/2}$ on $\mathbb{R}^X$ and denote by $U \subset \mathbb{R}^X$ an open, non-empty subset. We say that the FF-network $F_w$ can approximate a function $f$ on $X$ with accuracy $\epsilon > 0$ if $\|f - F_w\| < \epsilon$ for a weight vector $w \in \mathbb{R}^k$.

Theorem 1. Assume the FF-network $F_w$ is continuously differentiable with respect to the adjustable synaptic weights $w \in \mathbb{R}^k$ and $k < N$. If it can approximate any function in $U$ with any accuracy, then for almost every function $f \in U$: if $\lim_{l \to \infty} \|F_{w(l)} - f\| = 0$, where $w(1), w(2), \ldots \in \mathbb{R}^k$, then $\lim_{l \to \infty} \|w(l)\| = \infty$.

In the above theorem "almost every" means with the exception of a subset of Lebesgue measure 0 on $\mathbb{R}^X \cong \mathbb{R}^N$. The proof of this theorem relies on the use of Sard's theorem from differential topology (c.f. Section 3). Note that the above theorem is applicable in particular to the popular "back-propagation" network, which is typically built from artificial neurons with the continuously differentiable sigmoid squashing function. The proof of the following theorem uses a different approach, since the network is not differentiably dependent on its synaptic weights to HUs. This theorem applies in particular to the classical FF-networks built from ThL-neurons.

Theorem 2. A FF-network $F_w$ must have $\geq N$ HU in the top hidden layer if all units of this layer have a finite number of activation levels and the network can approximate any function in $U$ with any accuracy.

The above theorems mean in particular that if we want to achieve an arbitrarily good approximation of any function in $U \stackrel{\text{def}}{=} \{f : X \to \mathbb{R} \,;\, |f(x)| < A\}$, where $A > 0$, and we can use one of the VHU-networks of the above type with synaptic weights of restricted magnitude only, then we have to have at least $N$ such weights. However, that many weights are necessary and sufficient to achieve the same with a FHU-network (1) if the functions $\phi_i$ are linearly independent on $X$. So variable hidden units give no advantage in this case.

2.2 Implementation of dichotomy

We say that the FF-network $F_w$ can implement a dichotomy $(X_-, X_+)$ of $X$ if there exists $w \in \mathbb{R}^k$ such that $F_w < 0$ on $X_-$ and $F_w > 0$ on $X_+$.

Proposition 3. A FHU-network $F_w$ can implement every dichotomy of $X$ if and only if it can exactly compute every function on $X$. In such a case it must have $\geq N$ HU in the top hidden layer.

The non-trivial part of the above proposition is the necessity in its first part, i.e. that being able to implement every dichotomy on $X$ requires $N$ (fixed) hidden units. In Section 3.3 we obtain this proposition from a stronger result. Note that the above proposition can be deduced from the classical Function-Counting Theorem [2] and also that an equivalent result is proved directly in [3, Theorem 7.2].
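To ground equation (1) and Proposition 3, here is a small self-contained NumPy sketch (the tanh feature map, the sizes, and the least-squares fit are our illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_fixed_hidden_units(n_in, k):
    """A frozen hidden layer phi : X -> R^k (random tanh units here)."""
    W, b = rng.normal(size=(k, n_in)), rng.normal(size=k)
    return lambda x: np.tanh(x @ W.T + b)

phi = make_fixed_hidden_units(n_in=2, k=8)
X = rng.normal(size=(50, 2))     # a finite domain of N = 50 points
Phi = phi(X)                     # the N x k matrix [phi_i(x_j)]

# Proposition 3 in matrix form: the FHU-network (1) can compute every
# function on X (and hence implement every dichotomy) iff rank(Phi) = N.
print(np.linalg.matrix_rank(Phi) == len(X))

# Loading a target function f on X reduces to fitting the top layer only.
f = np.sin(X[:, 0]) + X[:, 1] ** 2
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)
approx = Phi @ w                 # F_w(x) = sum_i w_i phi_i(x)
```

With k = 8 fixed hidden units and N = 50 points, the rank test prints False, illustrating the requirement of at least N hidden units for universal representation.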
We say that the points of a subdomain $X \subset \mathbb{R}^n$ are in general position if every hyperplane in $\mathbb{R}^n$ contains no more than $n$ points of $X$. Note that the points of every finite subdomain of $\mathbb{R}^n$ are in general position after a sufficiently small perturbation and that the property of being in general position is preserved under sufficiently small perturbations. Note also that the points of a typical $N$-point subdomain $X \subset \mathbb{R}^n$ are in general position, where "typical" means with the exception of subdomains $X$ corresponding to a certain subset of Lebesgue measure 0 in the space $(\mathbb{R}^n)^N$ of all $N$-tuples of points from $\mathbb{R}^n$. It is proved in [10] that for a subdomain $X \subset \mathbb{R}^n$ of $N$ points in general position a VHU-network having $\lceil (N-1)/n \rceil$ (adjustable) ThL-neurons in the first (and the only) hidden layer can implement every dichotomy of $X$, where the notation $\lceil t \rceil$ denotes the smallest integer $\geq t$. Furthermore, examples are given showing that the above bound is tight. (Note that this paper corrects and gives rigorous proofs of some early results in [1, Lemma 1 and Theorem 1] and also improves [6, Theorem 4].) Combining these results with Proposition 3 we get the following result.

Theorem 4. Assume that all $N$ points of $X \subset \mathbb{R}^n$ are in general position. In the class of all FF-networks which can implement every dichotomy on $X$ there exists a VHU-network with threshold logic HU having a fraction $1/n + O(1/N)$ of the number of the HU that any FHU-network in this class must have. There are examples of $X$ in general position of any even cardinality $N > 0$ showing that this estimate is tight.

3 Proofs

Below we identify functions $f : X \to \mathbb{R}$ with $N$-tuples of their values at the $N$ points of $X$ (ordered in a unique manner). Under this identification the FF-network $F_w$ can be regarded as a transformation
$$w \in \mathbb{R}^k \mapsto F_w \in \mathbb{R}^N \tag{3}$$
with the range $R(F) \stackrel{\text{def}}{=} \{F_w \,;\, w \in \mathbb{R}^k\} \subset \mathbb{R}^N$.

3.1 Proof of Theorem 1.

In this case the transformation (3) is continuously differentiable. Every value of it is singular since $k < N$; thus, according to Sard's Theorem [5], $R(F) \subset \mathbb{R}^N$ has Lebesgue measure 0. It is enough to show now that if
$$f \in U - R(F) \tag{4}$$
and
$$\lim_{l \to \infty} \|F_{w(l)} - f\| = 0 \quad \text{and} \quad \|w(l)\| \leq M \quad \text{for some } M > 0, \tag{5}$$
then a contradiction follows. Actually, from (5) it follows that $f$ belongs to the topological closure $\operatorname{cl}(R_M)$ of $R_M \stackrel{\text{def}}{=} \{F_w \,;\, w \in \mathbb{R}^k \ \&\ \|w\| \leq M\}$. However, $R_M$ is a compact set as a continuous image of the closed ball $\{w \in \mathbb{R}^k \,;\, \|w\| \leq M\}$, so $\operatorname{cl}(R_M) = R_M$. Consequently $f \in R_M \subset R(F)$, which contradicts (4). Q.E.D.

3.2 Proof of Theorem 2.

We consider the FF-network (2) for which there exists a finite set $V \subset \mathbb{R}$ of $s$ points such that $\psi_{w'',i}(x) \in V$ for every $w'' \in \mathbb{R}^{k''}$, $1 \leq i \leq k'$ and $x \in X$. It is sufficient to show that the set $R(F)$ of all functions computable by $F_w$ is not dense in $U$ if $k' < N$. Actually, we can write $R(F)$ as a union
$$R(F) = \bigcup_{w'' \in \mathbb{R}^{k''}} L_{w''}, \tag{6}$$
where each $L_{w''} \stackrel{\text{def}}{=} \{\sum_{i=1}^{k'} w'_i \psi_{w'',i} \,;\, w'_1, \ldots, w'_{k'} \in \mathbb{R}\} \subset \mathbb{R}^N$ is a linear subspace of dimension $\leq k' < N$, uniquely determined by the vectors $\psi_{w'',i} \in V^N \subset \mathbb{R}^N$, $i = 1, \ldots, k'$. However, there is only a finite number ($\leq s^N$) of different vectors in $V^N$, and thus only a finite number ($\leq s^{Nk'}$) of different linear subspaces in the family $\{L_{w''} \,;\, w'' \in \mathbb{R}^{k''}\}$. Hence, as $k' < N$, the union (6) is a closed, nowhere dense subset of $\mathbb{R}^N$ as a finite union of proper linear subspaces (each of which is a closed and nowhere dense subset). Q.E.D.

3.3 Proof of Proposition 3.

We state first a stronger result.
We say that a set $L$ of functions on $X$ is convex if for any couple of functions $\phi_1, \phi_2$ in $L$ and any $\alpha > 0$, $\beta > 0$ with $\alpha + \beta = 1$, the function $\alpha\phi_1 + \beta\phi_2$ also belongs to $L$.

Proposition 5. Let $L$ be a convex set of functions on $X = \{x_1, x_2, \ldots, x_N\}$ implementing every dichotomy of $X$. Then for each $i \in \{1, 2, \ldots, N\}$ there exists a function $\phi^i \in L$ such that $\phi^i(x_i) \neq 0$ and $\phi^i(x_j) = 0$ for $1 \leq i \neq j \leq N$.

Proof. We define a transformation $\mathrm{SGN} : \mathbb{R}^X \to \{-1, 0, +1\}^N$,
$$\mathrm{SGN}(\phi) \stackrel{\text{def}}{=} \big(\operatorname{sgn}(\phi(x_1)), \ldots, \operatorname{sgn}(\phi(x_N))\big),$$
where $\operatorname{sgn}(\xi) \stackrel{\text{def}}{=} -1$ if $\xi < 0$, $\operatorname{sgn}(0) \stackrel{\text{def}}{=} 0$ and $\operatorname{sgn}(\xi) \stackrel{\text{def}}{=} +1$ if $\xi > 0$. We denote by $W_k$ the subset of $\{-1, 0, +1\}^N$ of all points $q = (q_1, \ldots, q_N)$ such that $\sum_{i=1}^{N} |q_i| = k$, for $k = 0, 1, \ldots, N$. We show first that convexity of $L$ implies, for $k \in \{1, 2, \ldots, N\}$, the following:
$$W_k \subset \mathrm{SGN}(L) \;\Rightarrow\; W_{k-1} \subset \mathrm{SGN}(L). \tag{7}$$
For the proof assume $W_k \subset \mathrm{SGN}(L)$ and let $q = (q_1, \ldots, q_N) \in \{-1, 0, +1\}^N$ be such that $\sum_{i=1}^{N} |q_i| = k - 1$. We need to show that there exists $\phi \in L$ such that
$$\mathrm{SGN}(\phi) = q. \tag{8}$$
The vector $q$ has at least one vanishing entry, say, without loss of generality, $q_1 = 0$. Let $\phi^+$ and $\phi^-$ be two functions in $L$ such that
$$\mathrm{SGN}(\phi^+) = q^+ \stackrel{\text{def}}{=} (+1, q_2, \ldots, q_N), \qquad \mathrm{SGN}(\phi^-) = q^- \stackrel{\text{def}}{=} (-1, q_2, \ldots, q_N).$$
Such $\phi^+$ and $\phi^-$ exist since $q^+, q^- \in W_k$. The function $\phi \stackrel{\text{def}}{=} \lambda \phi^+ + (1-\lambda)\phi^-$ with $\lambda = \frac{-\phi^-(x_1)}{\phi^+(x_1) - \phi^-(x_1)} \in (0, 1)$ (so that $\phi(x_1) = 0$) belongs to $L$ as a convex combination of two functions from $L$ and satisfies (8).

Now note that the assumptions of the proposition imply that $W_N \subset \mathrm{SGN}(L)$. Applying (7) repeatedly we find that $W_1 \subset \mathrm{SGN}(L)$, which means that for every index $i$, $1 \leq i \leq N$, there exists a function $\phi^i \in L$ with all entries vanishing but the $i$-th one. Q.E.D.

Now let us see how Proposition 3 follows from the above result. Sufficiency is obvious. For the necessity we observe that the family of functions computable by a FHU-network (1) is convex, being a linear space. Now, if this network can compute every dichotomy of $X$, then each function $\phi^i$ as in Proposition 5 equals $F_{w_i}$ for some $w_i \in \mathbb{R}^k$. Thus $R(F) = \mathbb{R}^N$, since those functions make a basis of $\mathbb{R}^X \cong \mathbb{R}^N$. Q.E.D.

4 Discussion of results

Theorem 1 combined with observations in [4] allows us to make the following contribution to the recent controversy on the relevance/irrelevance of Kolmogorov's theorem on representation of continuous functions $I^n \to \mathbb{R}$, where $I \stackrel{\text{def}}{=} [0, 1]$ (c.f. [4, 7]), since $I^n$ contains subsets of any cardinality. The FF-networks for approximations of continuous functions on $I^n$ of rising accuracy have to be complex, in at least one of the following ways:

- involve adjustment of a diverging number of synaptic weights and hidden units, or
- require adjustment of synaptic weights of diverging magnitude, or
- involve selection of "pathological" squashing functions.

Thus one can only shift complexity from one kind to another, but not eliminate it completely. Although on theoretical grounds one can easily argue the virtues and simplicity of one kind of complexity over the other, for a genuine hardware implementation any of them poses an equally serious obstacle. For the classes of FF-networks and benchmark tests considered, the networks with multiple hidden layers have no decisive superiority over the simple structures with fixed hidden units unless the dimensionality of the input space is significant.

5 Appendix: Capacity and Function-Counting Theorem

The above results can be viewed as a step towards estimation of the capacity of networks to memorise dichotomies. We intend to elaborate this subject further now and outline some of our recent results on this matter.
A more detailed presentation will be available in future publications. The capacity of a network in the sense of Cover [2] (Cover's capacity) is defined as the maximal $N$ such that for a randomly selected subset $X \subset \mathbb{R}^n$ of $N$ points, with probability 1, the network can implement 1/2 of all dichotomies of $X$. For a linear perceptron
$$F_w(x) \stackrel{\text{def}}{=} \sum_{i=1}^{n} w_i \xi_i \qquad (x = (\xi_1, \ldots, \xi_n) \in X), \tag{9}$$
where $w \in \mathbb{R}^n$ is the vector of adjustable synaptic weights, the capacity is $2n$, and it is $2k$ for a FHU-network (1) with suitably chosen hidden units $\phi_1, \ldots, \phi_k$. These results are based on the so-called Function-Counting Theorem proved for the linear perceptron in the sixties (c.f. [2]). Extension of this result to the multilayer case is still an open problem (c.f. T. Cover's talk at NIPS'92). However, we have recently obtained the following partial result in this direction.

Theorem 6. Given a continuous probability density on $\mathbb{R}^n$, for a randomly selected subset $X \subset \mathbb{R}^n$ of $N$ points, the FF-network having the first hidden layer built from $h$ ThL-neurons can implement
$$C(N, nh) \stackrel{\text{def}}{=} 2 \sum_{i=0}^{nh-1} \binom{N-1}{i} \tag{10}$$
dichotomies of $X$ with a non-zero probability. Such a network can be constructed using $nh$ variable synaptic weights between the input and hidden layer only.

For $h = 1$ this theorem reduces to its classical form, for which the phrase "with non-zero probability" can be strengthened to "with probability 1" [2]. The proof of the theorem develops Sakurai's idea of utilising the Vandermonde determinant to show the following property of the curve $c(t) \stackrel{\text{def}}{=} (t, t^2, \ldots, t^{n-1})$, $t > 0$:

(*) for any subset $X$ of $N$ points $x_1 = c(t_1), \ldots, x_N = c(t_N)$, $t_1 < t_2 < \cdots < t_N$, any hyperplane in $\mathbb{R}^n$ can intersect no more than $n$ different segments $[x_i, x_{i+1}]$ of $c$.

The first step of the proof is to observe that the property (*) itself implies that the count (10) holds for such a set $X$. The second and crucial step consists in showing that, for a sufficiently small $\epsilon > 0$ and any selection of points $x'_1, \ldots, x'_N \in \mathbb{R}^n$ such that $\|x_i - x'_i\| < \epsilon$ for $i = 1, \ldots, N$, there exists a curve passing through these points and also satisfying the property (*).

Theorem 6 implies that in the class of multilayer FF-networks having the first hidden layer built from ThL-neurons only, the single hidden layer networks are the most efficient, since the higher layers have no influence on the number of implemented dichotomies (at least for the class of domains $X \subset \mathbb{R}^n$ considered). Note that by virtue of (10) and the classical argument of Cover [2], for the class of domains $X$ as in Theorem 6 the capacity of the network considered is $2nh$. Thus the following estimates hold.

Corollary 7. In the class of FF-networks with a fixed number $h$ of hidden units, the ratio of the maximal capacity per hidden unit achievable by FHU-networks to the maximal capacity per hidden unit achievable by VHU-networks having ThL-neurons in the first hidden layer only is $2h/2nh = 1/n$. The analogous ratio for capacities per variable synaptic weight (in the class of FF-networks with a fixed number $s$ of variable synaptic weights) is $\leq 2s/2s = 1$.

Acknowledgement. I thank A. Sakurai of Hitachi Ltd. for helpful comments leading to the improvement of the results of the paper. The permission of the Director, Telecom Australia Research Laboratories, to publish this material is gratefully acknowledged.

References

[1] E. Baum. On the capabilities of multilayer perceptrons. Journal of Complexity, 4:193-215, 1988.
[2] T.M. Cover.
Geometrical and statistical properties of linear inequalities with applications to pattern recognition. IEEE Trans. Elec. Comp., EC-14:326-334, 1965.
[3] R.M. Dudley. Central limit theorems for empirical measures. Ann. Probability, 6:899-929, 1978.
[4] F. Girosi and T. Poggio. Representation properties of networks: Kolmogorov's theorem is irrelevant. Neural Computation, 1:465-469, 1989.
[5] M. Golubitsky and V. Guillemin. Stable Mappings and Their Singularities. Springer-Verlag, New York, 1973.
[6] S. Huang and Y. Huang. Bounds on the number of hidden neurons in multilayer perceptrons. IEEE Transactions on Neural Networks, 2:47-55, 1991.
[7] V. Kurkova. Kolmogorov theorem is relevant. Neural Computation, 1992.
[8] D. Psaltis, C.H. Park, and J. Hong. Higher order associative memories and their optical implementations. Neural Networks, 1:149-163, 1988.
[9] N. Redding, A. Kowalczyk, and T. Downs. Higher order separability and minimal hidden-unit fan-in. In T. Kohonen et al., editors, Artificial Neural Networks, volume 1, pages 25-30. Elsevier, 1991.
[10] A. Sakurai. n-h-1 networks store no less n·h+1 examples but sometimes no more. In Proceedings of IJCNN'92, pages III-936-III-941. IEEE, June 1992.
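As a companion to Theorem 6 and the general-position assumption above, here is a small sketch of the counting function $C(N, nh)$ and a brute-force general-position test (the exhaustive check is exponential and purely for illustration; all names are ours):

```python
import numpy as np
from itertools import combinations
from math import comb

def cover_count(N, d):
    """C(N, d) = 2 * sum_{i=0}^{d-1} binom(N-1, i): the number of
    dichotomies of N points in general position implementable with d
    effective degrees of freedom (d = n*h in Theorem 6)."""
    return 2 * sum(comb(N - 1, i) for i in range(d))

# Cover capacity 2d: at N = 2d points exactly half of all 2**N
# dichotomies are implementable.
d = 6
assert cover_count(2 * d, d) == 2 ** (2 * d - 1)

def in_general_position(X):
    """No hyperplane of R^n contains more than n of the points, i.e.
    every (n+1)-point subset must be affinely independent."""
    N, n = X.shape
    for idx in combinations(range(N), n + 1):
        A = np.hstack([X[list(idx)], np.ones((n + 1, 1))])
        if np.linalg.matrix_rank(A) < n + 1:
            return False
    return True
```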
Adaptive Batch Size for Safe Policy Gradients

Matteo Papini, DEIB, Politecnico di Milano, Italy ([email protected])
Matteo Pirotta, SequeL Team, Inria Lille, France ([email protected])
Marcello Restelli, DEIB, Politecnico di Milano, Italy ([email protected])

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Policy gradient methods are among the best Reinforcement Learning (RL) techniques to solve complex control problems. In real-world RL applications, it is common to have a good initial policy whose performance needs to be improved, and it may not be acceptable to try bad policies during the learning process. Although several methods for choosing the step size exist, research has paid less attention to determining the batch size, that is, the number of samples used to estimate the gradient direction for each update of the policy parameters. In this paper, we propose a set of methods to jointly optimize the step and the batch sizes that guarantee (with high probability) to improve the policy performance after each update. Besides providing theoretical guarantees, we show numerical simulations to analyse the behaviour of our methods.

1 Introduction

In many real-world sequential decision-making problems (e.g., industrial robotics, natural resource management, smart grids), engineers have developed automatic control policies usually derived from modelling approaches. The performance of such policies strictly depends on the model accuracy, which for some tasks (e.g., financial applications) may be quite poor. Furthermore, even when accurate models are available and good control policies are obtained, their performance may degrade over time due to the non-stationary dynamics of the problem, thus requiring human intervention to adjust the policy parameters (think about equipment wear in smart manufacturing). In such scenarios, Reinforcement Learning (RL) techniques represent an interesting solution to get an online optimization of the control policies and to hinder the performance loss caused by unpredictable environment changes, thus allowing to improve the autonomy of the control system.

In recent years, several RL studies [1, 2, 3, 4, 5, 6, 7] have shown that policy-search methods can effectively be employed to solve complex control tasks (e.g., robotic ones) due to their capabilities to handle high-dimensional continuous problems, face uncertain and partial observations of the state, and incorporate prior knowledge about the problem by means of the definition of a proper policy model whose parameters need to be optimized (refer to [8, 9] for recent surveys). This last property is particularly appealing when the reinforcement learning algorithm needs to operate online in scenarios where bad exploratory policies may damage the system. A proper design of the policy model may allow excluding such policies. On the other hand, in order to speed up the learning process, most RL methods need to explore the policy space by executing policies that may be worse than the initial one. This is not acceptable in many relevant applications. Under this perspective, we are interested in developing RL methods that are (with high probability) monotonically improving.

Inspired by the conservative policy iteration approach [10], new advances have recently been made in the field of approximate policy iteration algorithms [11, 12], obtaining methods that can learn faster while still giving statistical guarantees of improvement after each policy update [13, 14]. These methods are usually referred to as conservative, monotonically improving, or safe (as we do in this paper). These ideas have been exploited also for deriving novel safe policy-search approaches
These ideas have been exploited also for deriving novel safe policy-search approaches [15, 16, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 17, 18, 19] that have obtained significant empirical results. In particular, policy-gradient methods are among the most commonly used RL techniques to solve complex high-dimensional tasks. Up to now, works on safe policy gradients [15, 16] have focused mainly on the choice of the step size, a parameter that significantly affects the speed and stability of gradient methods. By adopting small enough step sizes, one can limit oscillations and avoid worsening updates, but the consequent reduction of the learning rate is paid on the long term as a poor overall performance. On the other hand, as we will show in this paper, there is another parameter that plays an important role in the definition of safe policy gradient approaches: the batch size (i.e., the number of samples used to estimate the gradient). So far, the optimization of the batch size has not been considered in the RL literature. The batch size, besides conditioning the optimal step size, has a non-negligible impact on the speed of improvement when samples are trajectories performed on the actual system. In the present paper, we inquire the relationship between the step size and the batch size, showing an interesting duality. Focusing on Gaussian policies, we make a first attempt at developing practical methods aimed at achieving the best average performance in the long term, by jointly optimizing both meta-parameters. After providing some background in Section 2, in Section 3 we improve an existing adaptive step-size method [15]. Building on this result, in Section 4 we derive the main result on the batch size, proposing jointly adaptive methods. Finally, in Section 5 we empirically analyse the behaviour of the proposed methods on a simple simulated control task. 2 Preliminaries A discrete-time continuous Markov decision process (MDP) is a tuple hS, A, P, R, ?, ?i, where S is the continuous state space, A is the continuous action space, P is a Markovian transition model where P(s0 |s, a) defines the transition density between states s and s0 under action a, R : S ?A ? [?R, R] is the reward function, such that R(s, a) is the expected immediate reward for the state-action pair (s, a) and R is the maximum absolute reward value, ? ? [0, 1) is the discount factor for future rewards and ? is the initial state distribution. A policy is defined as a density distribution ?(?|s) that, for each state s, specifies the density distribution over action space A. We consider infinite horizon problems where the future rewards are exponentially discounted with ?. For each state-action pair (s, a), the utility of taking action a in state s and then following a stationary policy ? is defined as: Z Z ? 0 Q (s, a) = R(s, a) + ? P(s |s, a) ?(a0 |s0 )Q? (s0 , a0 )da0 ds0 . S A Policies can be ranked by their expected discounted reward starting from initial state distribution ?: Z Z Z Z J?? = ?(s) ?(a | s)Q? (s, a)dads = d?? (s) ?(a|s)R(s, a)dads, S A S A P? where d?? (s) = (1 ? ?) t=0 ? t P r(st = s|?, ?) is the ?-discounted future state distribution for a starting state distribution ? [2]. In the following, we will often refer to J?? as the performance of policy ?. Solving an MDP means finding a policy ? ? maximizing J?? . We consider the problem of finding a policy that maximizes the expected discounted reward over a class of parametrized policies ?? = {?? : ? ? Rm }. 
A particular class of parametrized policies is the Gaussian policy model with standard deviation $\sigma$ and mean linear in the state features $\phi(\cdot)$:
$$\pi(a|s, \theta) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{1}{2}\left(\frac{a - \theta^{\top}\phi(s)}{\sigma}\right)^{2}\right),$$
which is a common choice for MDPs with continuous actions. The exact gradient of the expected discounted reward w.r.t. the policy parameters [2] is:
$$\nabla_{\theta} J_{\mu}(\theta) = \frac{1}{1 - \gamma} \int_{\mathcal{S}} d_{\mu}^{\pi_{\theta}}(s) \int_{\mathcal{A}} \nabla_{\theta} \pi(a|s, \theta)\, Q^{\pi_{\theta}}(s, a) \, da \, ds.$$
In most commonly used policy gradient methods, the policy parameters are updated by following the direction of the gradient of the expected discounted reward: $\theta' = \theta + \alpha \nabla_{\theta} J_{\mu}(\theta)$, where $\alpha \geq 0$ is a scalar step size. In the following we will denote by $\|\nabla_{\theta} J_{\mu}(\theta)\|_{p}$ the $L_p$-norm of the policy gradient.

3 Non-Scalar Step Size for Gaussian Policies

Before starting to optimize the batch size for the gradient estimation, in this section we extend the results in [15] to the case of a non-scalar step size, showing that, focusing on the Gaussian policy model, such an extension guarantees a larger performance improvement than the one obtained in [15]. Furthermore, this result significantly simplifies the closed-form solutions obtained for the optimization of the batch size described in the following sections. In Section 3.1 we stick to the theoretical setting in which the gradient is known exactly, while in Section 3.2 we take into account the estimation error.

3.1 Exact Framework

The idea is to have a separate adaptive step size $\alpha_i$ for each component $\theta_i$ of $\theta$. For notational convenience, we define a non-scalar step size as a diagonal matrix $\Lambda = \operatorname{diag}(\alpha_1, \alpha_2, \ldots, \alpha_m)$ with $\alpha_i \geq 0$ for $i = 1, \ldots, m$. The policy parameters can be updated as: $\theta' = \theta + \Lambda \nabla_{\theta} J_{\mu}(\theta)$. Note that the direction of the update can differ from the gradient direction. Since the $\alpha_i$ are non-negative, the absolute angular difference is never more than $\pi/2$. The traditional scalar step-size update can be seen as a special case where $\Lambda = \alpha I$.

Assumption 3.1. State features are uniformly bounded: $|\phi_i(s)| \leq M_{\phi}$, $\forall s \in \mathcal{S}$, $\forall i = 1, \ldots, m$.

By adapting Theorem 4.3 in [15] to the new parameter update, we obtain a lower bound on the policy performance improvement:

Lemma 3.2. For any initial state distribution $\mu$ and any pair of stationary Gaussian policies $\pi_{\theta} \sim \mathcal{N}(\theta^{\top}\phi(s), \sigma^{2})$ and $\pi_{\theta'} \sim \mathcal{N}(\theta'^{\top}\phi(s), \sigma^{2})$, such that $\theta' = \theta + \Lambda \nabla_{\theta} J_{\mu}(\theta)$, and under Assumption 3.1, the difference between the performance of $\pi_{\theta'}$ and that of $\pi_{\theta}$ can be bounded below as follows:
$$J_{\mu}(\theta') - J_{\mu}(\theta) \geq \nabla_{\theta} J_{\mu}(\theta)^{\top} \Lambda\, \nabla_{\theta} J_{\mu}(\theta) - \frac{\|\Lambda \nabla_{\theta} J_{\mu}(\theta)\|_{1}^{2}\, M_{\phi}^{2}}{(1 - \gamma)\sigma^{2}} \left( \frac{1}{\sqrt{2\pi}\,\sigma} \int_{\mathcal{S}} d_{\mu}^{\pi_{\theta}}(s) \int_{\mathcal{A}} Q^{\pi_{\theta}}(s, a) \, da \, ds + \frac{\gamma\, \|Q^{\pi_{\theta}}\|_{\infty}}{2(1 - \gamma)} \right),$$
where $\|Q^{\pi_{\theta}}\|_{\infty}$ is the supremum norm of the Q-function: $\|Q^{\pi_{\theta}}\|_{\infty} = \sup_{s \in \mathcal{S},\, a \in \mathcal{A}} Q^{\pi_{\theta}}(s, a)$.

The above bound requires us to compute the Q-function explicitly, but this is often not possible in real-world applications. We now consider a simplified (although less tight) version of the bound that does not have this requirement, which is an adaptation of Corollary 5.1 in [15]:

Theorem 3.3. For any initial state distribution $\mu$ and any pair of stationary Gaussian policies $\pi_{\theta} \sim \mathcal{N}(\theta^{\top}\phi(s), \sigma^{2})$ and $\pi_{\theta'} \sim \mathcal{N}(\theta'^{\top}\phi(s), \sigma^{2})$, such that $\theta' = \theta + \Lambda \nabla_{\theta} J_{\mu}(\theta)$, and under Assumption 3.1, the difference between the performance of $\pi_{\theta'}$ and that of $\pi_{\theta}$ can be bounded below as follows:
$$J_{\mu}(\theta') - J_{\mu}(\theta) \geq \nabla_{\theta} J_{\mu}(\theta)^{\top} \Lambda\, \nabla_{\theta} J_{\mu}(\theta) - c\, \|\Lambda \nabla_{\theta} J_{\mu}(\theta)\|_{1}^{2},$$
where $c = \frac{R M_{\phi}^{2}}{(1-\gamma)^{2}\sigma^{2}} \left( \frac{|\mathcal{A}|}{\sqrt{2\pi}\,\sigma} + \frac{\gamma}{2(1-\gamma)} \right)$ and $|\mathcal{A}|$ is the volume of the action space.

We then find the step size $\Lambda^{*}$ that maximizes this lower bound under the natural constraint $\alpha_i \geq 0$, $\forall i = 1, \ldots, m$.
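All the quantities entering the bound of Theorem 3.3 are known problem constants, so the penalty constant can be computed directly; a one-function sketch (the function name is ours):

```python
import math

def penalty_constant(R, M_phi, gamma, sigma, action_volume):
    """The constant c of Theorem 3.3 from the problem constants:
    R (max |reward|), M_phi (feature bound), discount gamma,
    policy standard deviation sigma, and |A| (action-space volume)."""
    return (R * M_phi ** 2) / ((1.0 - gamma) ** 2 * sigma ** 2) * (
        action_volume / (math.sqrt(2.0 * math.pi) * sigma)
        + gamma / (2.0 * (1.0 - gamma)))
```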
The derivation is not trivial and is provided in Appendix A.

Corollary 3.4. The lower bound of Theorem 3.3 is maximized by the following non-scalar step size:
$$\alpha_{k}^{*} = \begin{cases} \dfrac{1}{2c} & \text{if } k = \min\{\arg\max_{i} |\nabla_{\theta_i} J_{\mu}(\theta)|\}, \\[4pt] 0 & \text{otherwise}, \end{cases}$$
which guarantees the following performance improvement:
$$J_{\mu}(\theta') - J_{\mu}(\theta) \geq \frac{\|\nabla_{\theta} J_{\mu}(\theta)\|_{\infty}^{2}}{4c}.$$

Note that the update induced by the obtained $\Lambda^{*}$ corresponds to employing a constant, scalar step size to update just the parameter corresponding to the largest absolute gradient component. This method is known in the literature as greedy coordinate descent.
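A minimal sketch of this update rule (the function name is ours; `np.argmax` returns the smallest maximising index, matching the min-arg-max tie-breaking above):

```python
import numpy as np

def greedy_coordinate_update(theta, grad, c):
    """One exact-gradient update from Corollary 3.4: move only the
    component with the largest |gradient|, with constant step 1/(2c)."""
    theta = theta.copy()
    k = int(np.argmax(np.abs(grad)))
    theta[k] += grad[k] / (2.0 * c)
    return theta
```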
The violation of the above assumption can be used as a stopping condition since it prevents to guarantee any performance improvement. We can now state the following (the derivation is similar to the one of Corollary 3.5 and is, again, left to Appendix A): Corollary 3.8. The performance lower bound of Theorem 3.6 is maximized under Assumption 3.7 by the following non-scalar step size: ? n o ? ? J? (?)k ?)2 ? (k? ? ? if k = min arg max | ? J (?)| , 2 ?i ? i ? ? J? (?)k +) ?k? = 2c(k? ? ? 0 otherwise, 4 which guarantees with probability (1 ? ?)m a performance improvement 4  ? ?? J? (?) ?  ? J? (? 0 ) ? J? (?) ?  2 . ? 4c ?? J? (?) +  ? 4 Adaptive Batch Size In this section we jointly optimize the step size for parameter updates and the batch size for policy gradient estimation, taking into consideration the cost of collecting sample trajectories. We call N the ? ? J? (?) batch size, i.e., the number of trajectories sampled to compute the policy gradient estimate ? at each parameter update. We define the following cost-sensitive performance improvement measure: Definition 4.1. Cost-sensitive performance improvement measure ?? is defined as: B? (?, N ) , N where B? is the high probability lower bound on performance improvement given in Theorem 3.6. ?? (?, N ) := The rationale behind this choice of performance measure is to maximize the performance improvement per sample trajectory. Using larger batch sizes leads to more accurate policy updates, but the gained performance improvement is spread over a larger number of trials. This is particularly relevant in real-world online applications, where the collection of more samples with a sub-optimal policy affects the overall performance and must be justified by a greater improvement in the learned policy. By defining ?? in this way, we can control the improvement provided, on average, by each collected sample. We now show how to jointly select the step size ? and the batch size N so as to maximize ?? . Notice that the dependence of B? on N is entirely through , whose expression depends on which concentration bound is considered. We first restrict our analysis to concentration bounds that allow to express  as follows: Assumption 4.1. The per-component policy gradient estimation error made by averaging over N sample trajectories can be bounded with probability at least 1 ? ? by: d? (N ) = ? , N where d? is a constant w.r.t. N . This class of inequalities includes well-known concentration bounds such as Chebyshev?s and Hoeffding?s. Under Assumption 4.1 ?? can be optimized in closed form: Theorem 4.2. Under the hypotheses of Theorem 3.3 and Assumption 4.1, the cost-sensitive performance improvement measure ?? , as defined in Definition 4.1, is maximized by the following step size and batch size: ? ? n o ? ( ? (13?3 17) 2 ? ? J? (?)| , if k = min arg maxi |? ? (13 + 3 17)d? ? i 4c ?k? = N? = ? 2 ? , ? ? ? 0 otherwise, ? 2 ?? J? (?) ? ? RM?2 (1??)2 ? 2  |A| ? where c = + 2?? mance improvement of: ? 2(1??)  . This choice guarantees with probability (1 ? ?)m a perfor- ? 2 2 393 ? 95 17 ? ? J? (? ) ? J? (?) ? ?? J? (?) ? 0.16 ? ? J? (?) . 8 ? ? 0 Notice that, under Assumption 4.1, Assumption 3.7 can be restated as N ? d2 ? , which ? ? J? (?)k2 k? ? ? is always verified by the proposed N . This means that the adaptive batch size never allows an estimation error larger than the gradient estimate. Another peculiarity of this result is that the step size is constant, in the sense that its value does not depend on the gradient estimate. 
This can be 5 explained in terms of a duality between step size and batch size: in other conservative adaptive-step size approaches, such as the one proposed with Theorem 4.2, the step size is kept small to counteract policy updates that are too off due to bad gradient estimates. When also the batch size is made adaptive, a sufficient number of sample trajectories can be taken to keep the policy update on track even with a constant-valued step size. Note that, in this formulation, the batch size selection process is always one step behind the gradient estimation. A possible synchronous implementation is to update N ? each time a trajectory is performed, using all the data collected since the last learning step. As soon as the number of trajectories performed in the current learning iteration is larger than or equal to N ? , a new learning step is performed. We now consider some concentration bounds in more detail: we provide the values for d? , while the full expressions for N ? can be found in Appendix B. 4.1 Chebyshev?s Bound By using the sample mean version of Chebyshev?s bound we obtain: s ? ? J? (?)] V ar[? i , d? = ? ? ? J? (?) is the policy gradient approximator (from a single sample trajectory). The main where ? i advantage of this bound is that it does not make any assumption on the range of the gradient sample. The variance of the sample can be upper bounded in the case of the REINFORCE [1] and the G(PO)MDP [3]/PGT [2] gradient estimators by using results from [21], already adapted for similar purposes in [15]. The G(PO)MDP/PGT estimator suffers from a smaller variance if compared with REINFORCE, and the variance bound is indeed tighter. 4.2 Hoeffding?s Bound By using Hoeffding?s bound we obtain: r d? = R log 2/? , 2 ? ? J? (?))|. For the class of policies where R is the range of the gradient approximator, i.e., |supp(? i we are considering, i.e., Gaussian with mean linear in the features, under some assumptions, the range can be upper bounded as follows: Lemma 4.3. For any Gaussian policy ?? ? N (? T ?(s), ? 2 ), assuming that the action space is bounded (?a ? A, |a| ? A) and the policy gradient is estimated on trajectories of length H, the ? ? J? (?) can be upper bounded ?i = 1, . . . , m and ?? by range R of the policy gradient sample ? i R? 2HM? AR . ? 2 (1 ? ?) As we will show in Section 5, a more practical solution (even if less rigorous) consists in computing the range as the difference between the largest and the smallest gradient sample seen during learning. 4.3 Empirical Bernstein?s Bound Tighter concentration bounds allow for smaller batch sizes (which result in more frequent policy updates) and larger step sizes, thus speeding up the learning process and improving long-time average performance. An empirical Bernstein bound from [22] allows to use sample variance instead of the variance bounds from [21] and to limit the impact of the gradient range. On the other hand, this bound does not satisfy Assumption 4.1, giving for the estimation error the following, more complex, expression: d? f? (N ) = ? + , N N where p d? = 2SN ln 3/?, f = 3R ln 3/?, 6 and SN is the sample variance of the gradient approximator. No reasonably simple closed-form solution is available in this case, requiring a linear search of the batch size N ? maximizing ?? . By adapting Assumption 3.7 to this case, a starting point for this search can be provided: r ? ?2 2 + 4f ? ? ? J? (?) d + d ? ? ? ? ? ? ? , N ?? ? ? ? 2 ?? J? (?) ? We also know that there is a unique maximum in [N0 , +?) 
Furthermore, the optimal step size is no longer constant: it can be computed with the expression given in Corollary 3.8 by setting $\epsilon := \epsilon(N^*)$. As for Hoeffding's bound, the range R can be upper bounded exactly or estimated from samples.

Table 1: Improvement rate of the policy updates for different policy standard deviations σ, fixed batch sizes N and fixed step sizes α, using the G(PO)MDP gradient estimator.

                 σ = 0.5                            σ = 1
    α      N = 10000  N = 1000  N = 100    N = 10000  N = 1000  N = 100
  1e-3      95.96%     52.85%    49.79%     24.24%     37.4%     50.4%
  1e-4      100%       73.27%    51.41%     100%       27.03%    46.08%
  1e-5      98.99%     81.88%    55.69%     100%       99.9%     39.04%
  1e-6      100%       83.88%    58.44%     100%       100%      86.04%

Table 2: Average performance Υ for different gradient estimators, statistical bounds and values of δ. All results are averaged over 5 runs (95% confidence intervals are reported).

  Estimator   Bound                          δ      Υ          Confidence interval
  REINFORCE   Chebyshev                      0.95   -11.3266   [-11.3277; -11.3256]
  REINFORCE   Chebyshev                      0.75   -11.4303   [-11.4308; -11.4297]
  REINFORCE   Chebyshev                      0.5    -11.5947   [-11.5958; -11.5937]
  G(PO)MDP    Chebyshev                      0.95   -10.6085   [-10.6087; -10.6083]
  G(PO)MDP    Chebyshev                      0.75   -10.7141   [-10.7145; -10.7136]
  G(PO)MDP    Chebyshev                      0.5    -10.9036   [-10.904; -10.9031]
  G(PO)MDP    Chebyshev                      0.25   -11.2355   [-11.2363; -11.2346]
  G(PO)MDP    Chebyshev                      0.05   -11.836    [-11.8368; -11.8352]
  G(PO)MDP    Hoeffding                      0.95   -11.914    [-11.9143; -11.9136]
  G(PO)MDP    Bernstein                      0.95   -10.2159   [-10.2162; -10.2155]
  G(PO)MDP    Hoeffding (empirical range)    0.95   -9.8582    [-9.8589; -9.8574]
  G(PO)MDP    Bernstein (empirical range)    0.95   -9.6623    [-9.6627; -9.6619]

5 Numerical Simulations

In this section, we test the proposed methods on the linear-quadratic Gaussian regulation (LQG) problem [23]. The LQG problem is defined by the transition model $s_{t+1} \sim N(s_t + a_t, \sigma_0^2)$, Gaussian policy $a_t \sim N(\theta \cdot s_t, \sigma^2)$ and reward $r_t = -0.5(s_t^2 + a_t^2)$. In all our simulations we use $\sigma_0 = 0$, since all the noise can be modelled on the agent's side without loss of generality. Both action and state variables are bounded to the interval $[-2, 2]$ and the initial state is drawn uniformly at random. We use this task as a testing ground because it is simple, all the constants involved in our bounds can be computed exactly, and the true optimal parameter $\theta^*$ is available as a reference. We use a discount factor $\gamma = 0.9$, which gives an optimal parameter $\theta^* \approx -0.59$, corresponding to an expected performance $J(\theta^*) \approx -13.21$. Coherently with the framework described in Section 1, we are interested both in the convergence speed and in the ratio of policy updates that do not result in a worsening of the expected performance, which we will call the improvement ratio. First of all, we want to analyze how the choice of fixed step sizes and batch sizes affects the improvement ratio, and how much it depends on the variability of the trajectories (which in this case is due to the variance of the policy). Table 1 shows the improvement ratio for two parameterizations (σ = 0.5 and σ = 1) when various constant step sizes and batch sizes are used, starting from θ = -0.55 and stopping after a total of one million trajectories. As expected, small batch sizes combined with large step sizes lead to low improvement ratios. However, the effect is non-trivial and problem-dependent, justifying the need for an adaptive method.
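For concreteness, a simplified sketch of the LQG testbed and of a single-trajectory REINFORCE gradient sample is given below. The default values for the horizon are illustrative, and the optimal baseline of [23] used in the actual experiments is omitted, so this is a sketch rather than a faithful reproduction of the experimental setup.

```python
import numpy as np

def reinforce_sample(theta, sigma=1.0, horizon=20, gamma=0.9):
    """One LQG rollout and its REINFORCE gradient sample (no baseline)."""
    s = np.random.uniform(-2.0, 2.0)        # initial state, uniform in [-2, 2]
    score_sum, disc_return = 0.0, 0.0
    for t in range(horizon):
        a = np.random.normal(theta * s, sigma)        # Gaussian policy
        a_env = np.clip(a, -2.0, 2.0)                 # actions bounded to [-2, 2]
        disc_return += gamma**t * (-0.5 * (s**2 + a_env**2))
        score_sum += (a - theta * s) * s / sigma**2   # d/dtheta log pi(a|s)
        s = np.clip(s + a_env, -2.0, 2.0)             # sigma_0 = 0: noiseless step
    return score_sum * disc_return                    # gradient sample
```

Averaging this quantity over N trajectories gives the batch gradient estimate, and its empirical variance over the batch can feed the Chebyshev or empirical Bernstein constants above.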
We then proceed to test the methods described in Section 4. In the following simulations, we use σ = 1, start from θ = 0, and stop after a total of 30 million trajectories. Figure 1 shows the expected performance over sample trajectories for both the REINFORCE and G(PO)MDP gradient estimators, using Chebyshev's bound with different values of δ. Expected performance is computed at each parameter update. Data are then scaled to account for the different batch sizes. In general, REINFORCE performs worse than G(PO)MDP due to its larger variance (in both cases the proper optimal baseline from [23] was used), and larger values of δ (the probability with which worsening updates are allowed to take place) lead to better performance. Notice that an improvement ratio of 1 is achieved even with large values of δ. This is due to the fact that the bounds used in the development of our method are not tight. Since the method is this conservative, in practical applications δ can be set to a high value to improve the convergence rate. Another common practice in empirical applications is to shrink confidence intervals through a scalar multiplicative factor; however, in this work we chose not to exploit this trick. Figure 2 compares the performance of the different concentration bounds described in the previous section, always using G(PO)MDP to estimate the gradient and δ = 0.95. As expected, Bernstein's bound performs better than Chebyshev's, especially in the empirical-range version. The rigorous version of Hoeffding's bound performs very poorly, while the one using the empirical range is almost as good as the corresponding Bernstein method. This is due to the fact that the bound on the gradient estimate range is very loose, since it also accounts for unrealistic combinations of state, action and reward. Finally, to better capture the performance of the different variants of the algorithm in a real-time scenario, we define a metric Υ, obtained by averaging the real performance (measured during learning) over all the trajectories, coherently with the cost function used to derive the optimal batch size. The results are reported in Table 2. In Appendix C we also show how the adaptive batch size evolves as the policy approaches the optimum.

6 Conclusions

We showed the relationship between the batch size and the step size in policy gradient approaches under Gaussian policies, and how their joint optimization can lead to parameter updates that guarantee with high probability a fixed improvement in the policy performance. In addition to the formal analysis, we proposed practical methods to compute the information required by the algorithms. Finally, we presented a preliminary evaluation on a simple control task. Future work should focus on developing more practical methods. It would also be interesting to investigate the extension of the proposed methodology to other classes of policies.

Acknowledgments

This research was supported in part by the French Ministry of Higher Education and Research, the Nord-Pas-de-Calais Regional Council and the French National Research Agency (ANR) under project ExTra-Learn (n. ANR-14-CE24-0010-01).

References

[1] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.

[2] Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation.
In Advances in Neural Information Processing Systems 12, pages 1057–1063. MIT Press, 2000.

[3] Jonathan Baxter and Peter L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319–350, 2001.

[Figure 1: Expected performance over sample trajectories using the G(PO)MDP and REINFORCE (dashed) gradient estimators and Chebyshev's bound, for different values of δ. All results are averaged over 5 runs.]

[Figure 2: Comparison of the performance of different statistical bounds, using the G(PO)MDP gradient estimator and δ = 0.95. All results are averaged over 5 runs.]

[4] Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, and Jürgen Schmidhuber. Policy gradients with parameter-based exploration for control. In Artificial Neural Networks, ICANN 2008, pages 387–396. Springer Berlin Heidelberg, 2008.

[5] Jens Kober and Jan Peters. Policy search for motor primitives in robotics. In Advances in Neural Information Processing Systems 21, volume 21, pages 849–856, 2008.

[6] Jan Peters and Stefan Schaal. Natural actor-critic. Neurocomputing, 71(7-9):1180–1190, 2008.

[7] Jan Peters, Katharina Mülling, and Yasemin Altun. Relative entropy policy search. In AAAI Conference on Artificial Intelligence 24. AAAI Press, 2010.

[8] Ivo Grondman, Lucian Busoniu, Gabriel A. D. Lopes, and Robert Babuska. A survey of actor-critic reinforcement learning: Standard and natural policy gradients. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 42(6):1291–1307, 2012.

[9] Marc Peter Deisenroth, Gerhard Neumann, Jan Peters, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013.

[10] Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In International Conference on Machine Learning 19, pages 267–274. Morgan Kaufmann, 2002.

[11] Dimitri P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3):310–335, 2011.

[12] Bruno Scherrer. Approximate policy iteration schemes: A comparison. In International Conference on Machine Learning 31, volume 32 of JMLR Workshop and Conference Proceedings, pages 1314–1322. JMLR.org, 2014.

[13] Matteo Pirotta, Marcello Restelli, Alessio Pecorino, and Daniele Calandriello. Safe policy iteration. In International Conference on Machine Learning 30, volume 28 of JMLR Workshop and Conference Proceedings, pages 307–315. JMLR.org, 2013.

[14] Yasin Abbasi-Yadkori, Peter L. Bartlett, and Stephen J. Wright. A fast and reliable policy improvement algorithm. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 1338–1346, 2016.

[15] Matteo Pirotta, Marcello Restelli, and Luca Bascetta. Adaptive step-size for policy gradient methods. In Advances in Neural Information Processing Systems 26, pages 1394–1402. Curran Associates, Inc., 2013.

[16] Matteo Pirotta, Marcello Restelli, and Luca Bascetta.
Policy gradient in Lipschitz Markov decision processes. Machine Learning, 100(2-3):255–283, 2015.

[17] John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning 32, volume 37 of JMLR Workshop and Conference Proceedings, pages 1889–1897. JMLR.org, 2015.

[18] Philip Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. High confidence policy improvement. In International Conference on Machine Learning 32, volume 37 of JMLR Workshop and Conference Proceedings, pages 2380–2388. JMLR.org, 2015.

[19] Mohammad Ghavamzadeh, Marek Petrik, and Yinlam Chow. Safe policy improvement by minimizing robust baseline regret. In Advances in Neural Information Processing Systems 29, pages 2298–2306, 2016.

[20] Julie Nutini, Mark W. Schmidt, Issam H. Laradji, Michael P. Friedlander, and Hoyt A. Koepke. Coordinate descent converges faster with the Gauss-Southwell rule than random selection. In International Conference on Machine Learning 32, volume 37 of JMLR Workshop and Conference Proceedings, pages 1632–1641. JMLR.org, 2015.

[21] Tingting Zhao, Hirotaka Hachiya, Gang Niu, and Masashi Sugiyama. Analysis and improvement of policy gradient estimation. Neural Networks, 26:118–129, 2012.

[22] Volodymyr Mnih, Csaba Szepesvári, and Jean-Yves Audibert. Empirical Bernstein stopping. In International Conference on Machine Learning 25, volume 307 of ACM International Conference Proceeding Series, pages 672–679. ACM, 2008.

[23] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, May 2008.

[24] M. S. Pinsker. Information and Information Stability of Random Variables and Processes. Izv. Akad. Nauk, Moskva, 1960.
A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning

Marco Fraccaro*†, Simon Kamronn*†, Ulrich Paquet‡, Ole Winther†
† Technical University of Denmark  ‡ DeepMind

Abstract

This paper takes a step towards temporal reasoning in a dynamically changing video, not in the pixel space that constitutes its frames, but in a latent space that describes the non-linear dynamics of the objects in its world. We introduce the Kalman variational auto-encoder, a framework for unsupervised learning of sequential data that disentangles two latent representations: an object's representation, coming from a recognition model, and a latent state describing its dynamics. As a result, the evolution of the world can be imagined and missing data imputed, both without the need to generate high-dimensional frames at each time step. The model is trained end-to-end on videos of a variety of simulated physical systems, and outperforms competing methods in generative and missing data imputation tasks.

1 Introduction

From the earliest stages of childhood, humans learn to represent high-dimensional sensory input to make temporal predictions. From the visual image of a moving tennis ball, we can imagine its trajectory, and prepare ourselves in advance to catch it. Although the act of recognising the tennis ball is seemingly independent of our intuition of Newtonian dynamics [31], very little of this assumption has yet been captured in the end-to-end models that presently mark the path towards artificial general intelligence. Instead of basing inference on any abstract grasp of dynamics that is learned from experience, current successes are autoregressive: to imagine the tennis ball's trajectory, one forward-generates a frame-by-frame rendering of the full sensory input [5, 7, 23, 24, 29, 30].

To disentangle two latent representations, an object's and that of its dynamics, this paper introduces Kalman variational auto-encoders (KVAEs), a model that separates an intuition of dynamics from an object recognition network (section 3). At each time step t, a variational auto-encoder [18, 25] compresses high-dimensional visual stimuli x_t into latent encodings a_t. The temporal dynamics in the learned a_t-manifold are modelled with a linear Gaussian state space model that is adapted to handle complex dynamics (despite the linear relations among its states z_t). The parameters of the state space model are adapted at each time step, and non-linearly depend on past a_t's via a recurrent neural network. Exact posterior inference for the linear Gaussian state space model can be performed with the Kalman filtering and smoothing algorithms, and is used for imputing missing data, for instance when we imagine the trajectory of a bouncing ball after observing it in initial and final video frames (section 4). The separation between the recognition and dynamics models allows missing data imputation to be done via a combination of the latent states z_t of the model and its encodings a_t only, without having to forward-sample high-dimensional images x_t in an autoregressive way. KVAEs are tested on videos of a variety of simulated physical systems in section 5: from raw visual stimuli, they learn the interplay between the recognition and dynamics components "end-to-end". As KVAEs can do smoothing, they outperform an array of methods in generative and missing data imputation tasks (section 5).

* Equal contribution.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2 Background

Linear Gaussian state space models. Linear Gaussian state space models (LGSSMs) are widely used to model sequences of vectors $a = a_{1:T} = [a_1, .., a_T]$. LGSSMs model temporal correlations through a first-order Markov process on latent states $z = [z_1, .., z_T]$, which are potentially further controlled with external inputs $u = [u_1, .., u_T]$, through the Gaussian distributions

$$p_{\gamma_t}(z_t \mid z_{t-1}, u_t) = N(z_t;\, A_t z_{t-1} + B_t u_t,\, Q), \qquad p_{\gamma_t}(a_t \mid z_t) = N(a_t;\, C_t z_t,\, R). \tag{1}$$

Matrices $\gamma_t = [A_t, B_t, C_t]$ are the state transition, control and emission matrices at time t. Q and R are the covariance matrices of the process and measurement noise respectively. With a starting state $z_1 \sim N(z_1; 0, \Sigma)$, the joint probability distribution of the LGSSM is given by

$$p_\gamma(a, z \mid u) = p_\gamma(a \mid z)\, p_\gamma(z \mid u) = \prod_{t=1}^{T} p_{\gamma_t}(a_t \mid z_t) \cdot p(z_1) \prod_{t=2}^{T} p_{\gamma_t}(z_t \mid z_{t-1}, u_t), \tag{2}$$

where $\gamma = [\gamma_1, .., \gamma_T]$. LGSSMs have very appealing properties that we wish to exploit: the filtered and smoothed posteriors $p(z_t \mid a_{1:t}, u_{1:t})$ and $p(z_t \mid a, u)$ can be computed exactly with the classical Kalman filter and smoother algorithms, and provide a natural way to handle missing data.

Variational auto-encoders. A variational auto-encoder (VAE) [18, 25] defines a deep generative model $p_\theta(x_t, a_t) = p_\theta(x_t \mid a_t)\, p(a_t)$ for data $x_t$ by introducing a latent encoding $a_t$. Given a likelihood $p_\theta(x_t \mid a_t)$ and a typically Gaussian prior $p(a_t)$, the posterior $p_\theta(a_t \mid x_t)$ represents a stochastic map from $x_t$ to $a_t$'s manifold. As this posterior is commonly analytically intractable, VAEs approximate it with a variational distribution $q_\phi(a_t \mid x_t)$ that is parameterized by $\phi$. The approximation $q_\phi$ is commonly called the recognition, encoding, or inference network.

3 Kalman Variational Auto-Encoders

The useful information that describes the movement and interplay of objects in a video typically lies in a manifold that has a smaller dimension than the number of pixels in each frame. In a video of a ball bouncing in a box, like Atari's game Pong, one could define a one-to-one mapping from each of the high-dimensional frames $x = [x_1, .., x_T]$ into a two-dimensional latent space that represents the position of the ball on the screen. If the position were known for consecutive time steps, for a set of videos, we could learn the temporal dynamics that govern the environment. From a few new positions one might then infer where the ball will be on the screen in the future, and then imagine the environment with the ball in that position.

The Kalman variational auto-encoder (KVAE) is based on the notion described above. To disentangle recognition and spatial representation, a sensory input $x_t$ is mapped to $a_t$ (VAE), a variable on a low-dimensional manifold that encodes an object's position and other visual properties. In turn, $a_t$ is used as a pseudo-observation for the dynamics model (LGSSM). $x_t$ represents a frame of a video² $x = [x_1, .., x_T]$ of length T. Each frame is encoded into a point $a_t$ on a low-dimensional manifold, so that the KVAE contains T separate VAEs that share the same decoder $p_\theta(x_t \mid a_t)$ and encoder $q_\phi(a_t \mid x_t)$, and depend on each other through a time-dependent prior over $a = [a_1, .., a_T]$. This is illustrated in figure 1.

3.1 Generative model

We assume that a acts as a latent representation of the whole video, so that the generative model of a sequence factorizes as $p_\theta(x \mid a) = \prod_{t=1}^{T} p_\theta(x_t \mid a_t)$. In this paper
$p_\theta(x_t \mid a_t)$ is a deep neural network parameterized by θ that emits either a factorized Gaussian or Bernoulli probability vector, depending on the data type of $x_t$.

[Figure 1: A KVAE is formed by stacking an LGSSM (dashed blue) and a VAE (dashed red). Shaded nodes denote observed variables. Solid arrows represent the generative model (with parameters θ) while dashed arrows represent the VAE inference network (with parameters φ).]

² While our main focus in this paper is videos, the same ideas apply more generally to any sequence of high-dimensional data.

We model a with an LGSSM, and following (2), its prior distribution is

$$p_\gamma(a \mid u) = \int p_\gamma(a \mid z)\, p_\gamma(z \mid u)\, \mathrm{d}z, \tag{3}$$

so that the joint density for the KVAE factorizes as $p(x, a, z \mid u) = p_\theta(x \mid a)\, p_\gamma(a \mid z)\, p_\gamma(z \mid u)$. An LGSSM forms a convenient backbone to a model, as the filtered and smoothed distributions $p_\gamma(z_t \mid a_{1:t}, u_{1:t})$ and $p_\gamma(z_t \mid a, u)$ can be obtained exactly. Temporal reasoning can be done in the latent space of $z_t$'s and via the latent encodings a, and we can make long-term predictions without having to auto-regressively generate high-dimensional images $x_t$. Given a few frames, and hence their encodings, one could "remain in latent space" and use the smoothed distributions to impute missing frames. Another advantage of using a to separate the dynamics model from x can be seen by considering the emission matrix $C_t$. Inference in the LGSSM requires matrix inverses, and using it as a model for the prior dynamics of $a_t$ allows the size of $C_t$ to remain small and not scale with the number of pixels in $x_t$. While the LGSSM's process and measurement noise in (1) are typically formulated with full covariance matrices [26], we will consider them as isotropic in a KVAE, as the $a_t$ act as a prior in a generative model that includes these extra degrees of freedom.

What happens when a ball bounces against a wall, and the dynamics on $a_t$ are not linear any more? Can we still retain an LGSSM backbone? We will incorporate nonlinearities into the LGSSM by regulating $\gamma_t$ from outside the exact forward-backward inference chain. We revisit this central idea at length in section 3.3.

3.2 Learning and inference for the KVAE

We learn θ and γ from a set of example sequences $\{x^{(n)}\}$ by maximizing the sum of their respective log likelihoods $\mathcal{L} = \sum_n \log p_{\theta\gamma}(x^{(n)} \mid u^{(n)})$ as a function of θ and γ. For simplicity in the exposition we restrict our discussion below to one sequence, and omit the sequence index n. The log likelihood or evidence is an intractable average over all plausible settings of a and z, and appears as the denominator in Bayes' theorem when inferring the posterior $p(a, z \mid x, u)$. A more tractable approach to both learning and inference is to introduce a variational distribution $q(a, z \mid x, u)$ that approximates the posterior. The evidence lower bound (ELBO) $\mathcal{F}$ is

$$\log p(x \mid u) = \log \int p(x, a, z \mid u)\, \mathrm{d}a\, \mathrm{d}z \;\ge\; \mathbb{E}_{q(a,z \mid x,u)}\left[ \log \frac{p_\theta(x \mid a)\, p_\gamma(a \mid z)\, p_\gamma(z \mid u)}{q(a, z \mid x, u)} \right] = \mathcal{F}(\theta, \gamma, \phi), \tag{4}$$

and a sum of $\mathcal{F}$'s is maximized instead of a sum of log likelihoods. The variational distribution q depends on φ, but for the bound to be tight we should specify q to be equal to the posterior distribution that only depends on θ and γ. Towards this aim we structure q so that it incorporates the exact conditional posterior $p_\gamma(z \mid a, u)$, which we obtain with Kalman smoothing, as a factor of q:

$$q(a, z \mid x, u) = q_\phi(a \mid x)\, p_\gamma(z \mid a, u) = \prod_{t=1}^{T} q_\phi(a_t \mid x_t)\, p_\gamma(z \mid a, u). \tag{5}$$

The benefit of the LGSSM backbone is now apparent.
We use a "recognition model" to encode each $x_t$ using a non-linear function, after which exact smoothing is possible. In this paper $q_\phi(a_t \mid x_t)$ is a deep neural network that maps $x_t$ to the mean and the diagonal covariance of a Gaussian distribution. As explained in section 4, this factorization allows us to deal with missing data in a principled way. Using (5), the ELBO in (4) becomes

$$\mathcal{F}(\theta, \gamma, \phi) = \mathbb{E}_{q_\phi(a \mid x)}\left[ \log \frac{p_\theta(x \mid a)}{q_\phi(a \mid x)} + \mathbb{E}_{p_\gamma(z \mid a, u)}\left[ \log \frac{p_\gamma(a \mid z)\, p_\gamma(z \mid u)}{p_\gamma(z \mid a, u)} \right] \right]. \tag{6}$$

The lower bound in (6) can be estimated using Monte Carlo integration with samples $\{\tilde{a}^{(i)}, \tilde{z}^{(i)}\}_{i=1}^{I}$ drawn from q,

$$\hat{\mathcal{F}}(\theta, \gamma, \phi) = \frac{1}{I} \sum_i \log p_\theta(x \mid \tilde{a}^{(i)}) + \log p_\gamma(\tilde{a}^{(i)}, \tilde{z}^{(i)} \mid u) - \log q_\phi(\tilde{a}^{(i)} \mid x) - \log p_\gamma(\tilde{z}^{(i)} \mid \tilde{a}^{(i)}, u). \tag{7}$$

Note that the ratio $p_\gamma(\tilde{a}^{(i)}, \tilde{z}^{(i)} \mid u)\, /\, p_\gamma(\tilde{z}^{(i)} \mid \tilde{a}^{(i)}, u)$ in (7) gives $p_\gamma(\tilde{a}^{(i)} \mid u)$, but the formulation with $\{\tilde{z}^{(i)}\}$ allows stochastic gradients on γ to also be computed. A sample from q can be obtained by first sampling $\tilde{a} \sim q_\phi(a \mid x)$, and using $\tilde{a}$ as an observation for the LGSSM. The posterior $p_\gamma(z \mid \tilde{a}, u)$ can be tractably obtained with a Kalman smoother, and a sample $\tilde{z} \sim p_\gamma(z \mid \tilde{a}, u)$ drawn from it. Parameter learning is done by jointly updating θ, γ, and φ by maximising the ELBO on $\mathcal{L}$, which decomposes as a sum of ELBOs in (6), using stochastic gradient ascent and a single sample to approximate the intractable expectations.
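A minimal sketch of the single-sample ELBO estimate in (7) follows. The encoder, decoder and LGSSM wrappers are hypothetical placeholders whose names are ours, not those of the authors' released code; the sketch only illustrates how the four log-density terms combine.

```python
import torch

def kvae_elbo_sample(x, u, encoder, decoder, lgssm):
    """Single-sample Monte Carlo estimate of the ELBO in (7) (sketch)."""
    # Reparameterized sample a ~ q_phi(a|x): the encoder returns the mean
    # and diagonal variance of a Gaussian per time step.
    mu, var = encoder(x)
    a = mu + var.sqrt() * torch.randn_like(mu)
    log_q_a = torch.distributions.Normal(mu, var.sqrt()).log_prob(a).sum()

    # The LGSSM wrapper runs the dynamics parameter network on the encoding
    # history, Kalman-smooths to get p_gamma(z|a,u), and exposes the
    # log-densities needed below.
    gamma = lgssm.dynamics_parameters(a)
    z = lgssm.sample_smoothed(a, u, gamma)               # z ~ p_gamma(z|a,u)
    log_p_az = lgssm.log_joint(a, z, u, gamma)           # log p_gamma(a,z|u)
    log_p_z_post = lgssm.log_posterior(z, a, u, gamma)   # log p_gamma(z|a,u)

    # The four terms of equation (7), for a single sample (I = 1).
    return decoder.log_prob(x, a) + log_p_az - log_q_a - log_p_z_post
```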
3.3 Dynamics parameter network

The LGSSM provides a tractable way to structure $p_\gamma(z \mid a, u)$ into the variational approximation in (5). However, even in the simple case of a ball bouncing against a wall, the dynamics on $a_t$ are not linear anymore. We can deal with these situations while preserving the linear dependency between consecutive states in the LGSSM, by non-linearly changing the parameters $\gamma_t$ of the model over time as a function of the latent encodings up to time t - 1 (so that we can still define a generative model). Smoothing is still possible, as the state transition matrix $A_t$ and the others in $\gamma_t$ do not have to be constant in order to obtain the exact posterior $p_\gamma(z_t \mid a, u)$.

Recall that $\gamma_t$ describes how the latent state $z_{t-1}$ changes from time t - 1 to time t. In the more general setting, the changes in dynamics at time t may depend on the history of the system, encoded in $a_{1:t-1}$ and possibly a starting code $a_0$ that can be learned from data. If, for instance, we see the ball colliding with a wall at time t - 1, then we know that it will bounce at time t and change direction. We then let $\gamma_t$ be a learnable function of $a_{0:t-1}$, so that the prior in (2) becomes

$$p_\gamma(a, z \mid u) = \prod_{t=1}^{T} p_{\gamma_t(a_{0:t-1})}(a_t \mid z_t) \cdot p(z_1) \prod_{t=2}^{T} p_{\gamma_t(a_{0:t-1})}(z_t \mid z_{t-1}, u_t). \tag{8}$$

During inference, after all the frames are encoded in a, the dynamics parameter network returns $\gamma = \gamma(a)$, the parameters of the LGSSM at all time steps. We can now use the Kalman smoothing algorithm to find the exact conditional posterior over z, which will be used when computing the gradients of the ELBO.

In our experiments the dependence of $\gamma_t$ on $a_{0:t-1}$ is modulated by a dynamics parameter network $\alpha_t = \alpha_t(a_{0:t-1})$, implemented with a recurrent neural network with LSTM cells that at each time step takes the encoded state as input and recurses $d_t = \mathrm{LSTM}(a_{t-1}, d_{t-1})$ and $\alpha_t = \mathrm{softmax}(d_t)$, as illustrated in figure 2.

[Figure 2: Dynamics parameter network for the KVAE.]

The output of the dynamics parameter network is a set of weights that sum to one, $\sum_{k=1}^{K} \alpha_t^{(k)}(a_{0:t-1}) = 1$. These weights choose and interpolate between K different operating modes:

$$A_t = \sum_{k=1}^{K} \alpha_t^{(k)}(a_{0:t-1})\, A^{(k)}, \qquad B_t = \sum_{k=1}^{K} \alpha_t^{(k)}(a_{0:t-1})\, B^{(k)}, \qquad C_t = \sum_{k=1}^{K} \alpha_t^{(k)}(a_{0:t-1})\, C^{(k)}. \tag{9}$$

We globally learn K basic state transition, control and emission matrices $A^{(k)}$, $B^{(k)}$ and $C^{(k)}$, and interpolate them based on information from the VAE encodings. The weighted sum can be interpreted as a soft mixture of K different LGSSMs whose time-invariant matrices are combined using the time-varying weights $\alpha_t$. In practice, each of the K sets $\{A^{(k)}, B^{(k)}, C^{(k)}\}$ models different dynamics, which will dominate when the corresponding $\alpha_t^{(k)}$ is high. The dynamics parameter network resembles the locally-linear transitions of [16, 33]; see section 6 for an in-depth discussion of the differences.

4 Missing data imputation

Let $x_{obs}$ be an observed subset of frames in a video sequence, for instance depicting the initial movement and final positions of a ball in a scene. From its start and end, can we imagine how the ball reaches its final position? Autoregressive models like recurrent neural networks can only forward-generate $x_t$ frame by frame, and cannot make use of the information coming from the final frames in the sequence. To impute the unobserved frames $x_{un}$ in the middle of the sequence, we need to do inference, not prediction. The KVAE exploits the smoothing abilities of its LGSSM to use both the information from the past and the future when imputing missing data. In general, if $x = \{x_{obs}, x_{un}\}$, the unobserved frames in $x_{un}$ could also appear at non-contiguous time steps, e.g. missing at random. Data can be imputed by sampling from the joint density $p(a_{un}, a_{obs}, z \mid x_{obs}, u)$, and then generating $x_{un}$ from $a_{un}$. We factorize this distribution as

$$p(a_{un}, a_{obs}, z \mid x_{obs}, u) = p_\gamma(a_{un} \mid z)\, p_\gamma(z \mid a_{obs}, u)\, p(a_{obs} \mid x_{obs}), \tag{10}$$
and compute the filtered posterior for z, the Kalman smoother?s backwards pass computes the smoothed posterior. While the smoothed posterior distribution is not exact, as it relies on the estimate of ? obtained during the forward pass, it improves data imputation by using information coming from the whole sequence; see section 5 for an experimental illustration. 5 Experiments We motivated the KVAE with an example of a bouncing ball, and use it here to demonstrate the model?s ability to separately learn a recognition and dynamics model from video, and use it to impute missing data. To draw a comparison with deep variational Bayes filters (DVBFs) [16], we apply the KVAE to [16]?s pendulum example. We further apply the model to a number of environments with different properties to demonstrate its generalizability. All models are trained end-to-end with stochastic gradient descent. Using the control input ut in (1) we can inform the model of known quantities such as external forces, as will be done in the pendulum experiment. In all the other experiments, we omit such information and train the models fully unsupervised from the videos only. Further implementation details can be found in the supplementary material (appendix A) and in the Tensorflow [1] code released at github.com/simonkamronn/kvae. 5.1 Bouncing ball We simulate 5000 sequences of 20 time steps each of a ball moving in a two-dimensional box, where each video frame is a 32x32 binary image. A video sequence is visualised as a single image in figure 4d, with the ball?s darkening color reflecting the incremental frame index. In this set-up the initial position and velocity are randomly sampled. No forces are applied to the ball, except for the fully elastic collisions with the walls. The minimum number of latent dimensions that the KVAE requires to model the ball?s dynamics are at ? R2 and zt ? R4 , as at the very least the ball?s position in the box?s 2d plane has to be encoded in at , and zt has to encode the ball?s position and velocity. The model?s flexibility increases with more latent dimensions, but we choose these settings for the sake of interpretable visualisations. The dynamics parameter network uses K = 3 to interpolate three modes, a constant velocity, and two non-linear interactions with the horizontal and vertical walls. We compare the generation and imputation performance of the KVAE with two recurrent neural network (RNN) models that are based on the same auto-encoding (AE) architecture as the KVAE and are modifications of methods from the literature to be better suited to the bouncing ball experiments.3 3 We also experimented with the SRNN model from [8] as it can do smoothing. However, the model is probably too complex for the task in hand, and we could not make it learn good dynamics. 5 (a) Frames xt missing completely at random. (b) Frames xt missing in the middle of the sequence. (c) Comparison of encoded (ground truth), generated and smoothed trajectories of a KVAE in the latent space a. The black squares illustrate observed samples and the hexagons indicate the initial state. Notice that the at ?s lie on a manifold that can be rotated and stretched to align with the frames of the video. Figure 3: Missing data imputation results. In the AE-RNN, inspired by the architecture from [29], a pretrained convolutional auto-encoder, identical to the one used for the KVAE, feeds the encodings to an LSTM network [13]. 
During training the LSTM predicts the next encoding in the sequence and during generation we use the previous output as input to the current step. For data imputation the LSTM either receives the previous output or, if available, the encoding of the observed frame (similarly to filtering in the KVAE). The VAE-RNN is identical to the AE-RNN except that uses a VAE instead of an AE, similarly to the model from [6]. Figure 3a shows how well missing frames are imputed in terms of the average fraction of incorrectly guessed pixels. In it, the first 4 frames are observed (to initialize the models) after which the next 16 frames are dropped at random with varying probabilities. We then impute the missing frames by doing filtering and smoothing with the KVAE. We see in figure 3a that it is beneficial to utilize information from the whole sequence (even the future observed frames), and a KVAE with smoothing outperforms all competing methods. Notice that dropout probability 1 corresponds to pure generation from the models. Figure 3b repeats this experiment, but makes it more challenging by removing an increasing number of consecutive frames from the middle of the sequence (T = 20). In this case the ability to encode information coming from the future into the posterior distribution is highly beneficial, and smoothing imputes frames much better than the other methods. Figure 3c graphically illustrates figure 3b. We plot three trajectories over at -encodings. The generated trajectories were obtained after initializing the KVAE model with 4 initial frames, while the smoothed trajectories also incorporated encodings from the last 4 frames of the sequence. The encoded trajectories were obtained with no missing data, and are therefore considered as ground truth. In the first three plots in figure 3c, we see that the backwards recursion of the Kalman smoother corrects the trajectory obtained with generation in the forward pass. However, in the fourth plot, the poor trajectory that is obtained during the forward generation step, makes smoothing unable to follow the ground truth. The smoothing capabilities of KVAEs make it also possible to train it with up to 40% of missing data with minor losses in performance (appendix C in the supplementary material). Links to videos of the imputation results and long-term generation from the models can be found in appendix B and at sites.google.com/view/kvae. Understanding the dynamics parameter network. In our experiments the dynamics parameter network ?t = ?t (a0:t?1 ) is an LSTM network, but we could also parameterize it with any differentiable function of a0:t?1 (see appendix D in the supplementary material for a comparison of various 6 (a) k = 1 (b) k = 2 (c) k = 3 (d) Reconstruction of x (k) Figure 4: A visualisation of the dynamics parameter network ?t (at?1 ) for K = 3, as a function of (k) at?1 . The three ?t ?s sum to one at every point in the encoded space. The greyscale backgrounds in (k) a) to c) correspond to the intensity of the weights ?t , with white indicating a weight of one in the dynamics parameter network?s output. Overlaid on them is the full latent encoding a. d) shows the reconstructed frames of the video as one image. architectures). When using a multi-layer perceptron (MLP) that depends on the previous encoding as mixture network, i.e. ?t = ?t (at?1 ), figure 4 illustrates how the network chooses the mixture of learned dynamics. 
We see that the model has correctly learned to choose a transition that maintains a constant velocity in the center (k = 1), reverses the horizontal velocity when in proximity of the left and right wall (k = 2), the reverses the vertical velocity when close to the top and bottom (k = 3). 5.2 Pendulum experiment We test the KVAE on the experiment of a dynamic torqueModel Test ELBO controlled pendulum used in [16]. Training, validation and KVAE (CNN) 810.08 test set are formed by 500 sequences of 15 frames of 16x16 2 3 KVAE (MLP) 807.02 pixels. We use a KVAE with at ? R , zt ? R and K = 2, 798.56 DVBF and try two different encoder-decoder architectures for the DMM 784.70 VAE, one using a MLP and one using a convolutional neural network (CNN). We compare the performaces of the KVAE Table 1: Pendulum experiment. to DVBFs [16] and deep Markov models4 (DMM) [19], nonlinear SSMs parameterized by deep neural networks whose intractable posterior distribution is approximated with an inference network. In table 1 we see that the KVAE outperforms both models in terms of ELBO on a test set, showing that for the task in hand it is preferable to use a model with simpler dynamics but exact posterior inference. 5.3 Other environments To test how well the KVAE adapts to different environments, we trained it end-to-end on videos of (i) a ball bouncing between walls that form an irregular polygon, (ii) a ball bouncing in a box and subject to gravity, (iii) a Pong-like environment where the paddles follow the vertical position of the ball to make it stay in the frame at all times. Figure 5 shows that the KVAE learns the dynamics of all three environments, and generates realistic-looking trajectories. We repeat the imputation experiments of figures 3a and 3b for these environments in the supplementary material (appendix E), where we see that KVAEs outperform alternative models. 6 Related work Recent progress in unsupervised learning of high dimensional sequences is found in a plethora of both deterministic and probabilistic generative models. The VAE framework is a common workhorse in the stable of probabilistic inference methods, and it is extended to the temporal setting by [2, 6, 8, 16, 19]. In particular, deep neural networks can parameterize the transition and emission distributions of different variants of deep state-space models [8, 16, 19]. In these extensions, inference 4 Deep Markov models were previously referred to as deep Kalman filters. 7 (a) Irregular polygon. (b) Box with gravity. (c) Pong-like environment. Figure 5: Generations from the KVAE trained on different environments. The videos are shown as single images, with color intensity representing the incremental sequence index t. In the simulation that resembles Atari?s Pong game, the movement of the two paddles (left and right) is also visible. networks define a variational approximation to the intractable posterior distribution of the latent states at each time step. For the tasks in section 5, it is preferable to use the KVAE?s simpler temporal model with an exact (conditional) posterior distribution than a highly non-linear model where the posterior needs to be approximated. A different combination of VAEs and probabilistic graphical models has been explored in [15], which defines a general class of models where inference is performed with message passing algorithms that use deep neural networks to map the observations to conjugate graphical model potentials. 
In classical non-linear extensions of the LGSSM like the extended Kalman filter and in the locallylinear dynamics of [16, 33], the transition matrices at time t have a non-linear dependence on zt?1 . The KVAE?s approach is different: by introducing the latent encodings at and making ?t depend on a1:t?1 , the linear dependency between consecutive states of z is preserved, so that the exact smoothed posterior can be computed given a, and used to perform missing data imputation. LGSSM with dynamic parameterization have been used for large-scale demand forecasting in [27]. [20] introduces recurrent switching linear dynamical systems, that combine deep learning techniques and switching Kalman filters [22] to model low-dimensional time series. [11] introduces a discriminative approach to estimate the low-dimensional state of a LGSSM from input images. The resulting model is reminiscent of a KVAE with no decoding step, and is therefore not suited for unsupervised learning and video generation. Recent work in the non-sequential setting has focused on disentangling basic visual concepts in an image [12]. [10] models neural activity by finding a non-linear embedding of a neural time series into a LGSSM. Great strides have been made in the reinforcement learning community to model how environments evolve in response to action [5, 23, 24, 30, 32]. In similar spirit to this paper, [32] extracts a latent representation from a PCA representation of the frames where controls can be applied. [5] introduces action-conditional dynamics parameterized with LSTMs and, as for the KVAE, a computationally efficient procedure to make long term predictions without generating high dimensional images at each time step. As autoregressive models, [29] develops a sequence to sequence model of video representations that uses LSTMs to define both the encoder and the decoder. [7] develops an actionconditioned video prediction model of the motion of a robot arm using convolutional LSTMs that models the change in pixel values between two consecutive frames. While the focus in this work is to define a generative model for high dimensional videos of simple physical systems, several recent works have combined physical models of the world with deep learning to learn the dynamics of objects in more complex but low-dimensional environments [3, 4, 9, 34]. 7 Conclusion The KVAE, a model for unsupervised learning of high-dimensional videos, was introduced in this paper. It disentangles an object?s latent representation at from a latent state zt that describes its dynamics, and can be learned end-to-end from raw video. Because the exact (conditional) smoothed posterior distribution over the states of the LGSSM can be computed, one generally sees a marked 8 improvement in inference and missing data imputation over methods that don?t have this property. A desirable property of disentangling the two latent representations is that temporal reasoning, and possibly planning, could be done in the latent space. As a proof of concept, we have been deliberate in focussing our exposition to videos of static worlds that contain a few moving objects, and leave extensions of the model to real world videos or sequences coming from an agent exploring its environment to future work. Acknowledgements We would like to thank Lars Kai Hansen for helpful discussions on the model design. Marco Fraccaro is supported by Microsoft Research through its PhD Scholarship Programme. We thank NVIDIA Corporation for the donation of TITAN X GPUs. References [1] M. 
Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

[2] E. Archer, I. M. Park, L. Buesing, J. Cunningham, and L. Paninski. Black box variational inference for state space models. arXiv:1511.07367, 2015.

[3] P. W. Battaglia, R. Pascanu, M. Lai, D. J. Rezende, and K. Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In NIPS, 2016.

[4] M. B. Chang, T. Ullman, A. Torralba, and J. B. Tenenbaum. A compositional object-based approach to learning physical dynamics. In ICLR, 2017.

[5] S. Chiappa, S. Racanière, D. Wierstra, and S. Mohamed. Recurrent environment simulators. In ICLR, 2017.

[6] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio. A recurrent latent variable model for sequential data. In NIPS, 2015.

[7] C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.

[8] M. Fraccaro, S. K. Sønderby, U. Paquet, and O. Winther. Sequential neural models with stochastic layers. In NIPS, 2016.

[9] K. Fragkiadaki, P. Agrawal, S. Levine, and J. Malik. Learning visual predictive models of physics for playing billiards. In ICLR, 2016.

[10] Y. Gao, E. W. Archer, L. Paninski, and J. P. Cunningham. Linear dynamical neural population models through nonlinear embeddings. In NIPS, 2016.

[11] T. Haarnoja, A. Ajay, S. Levine, and P. Abbeel. Backprop KF: learning discriminative deterministic state estimators. In NIPS, 2016.

[12] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017.

[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, Nov. 1997.

[14] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.

[15] M. J. Johnson, D. Duvenaud, A. B. Wiltschko, S. R. Datta, and R. P. Adams. Composing graphical models with neural networks for structured representations and fast inference. In NIPS, 2016.

[16] M. Karl, M. Soelch, J. Bayer, and P. van der Smagt. Deep variational Bayes filters: Unsupervised learning of state space models from raw data. In ICLR, 2017.

[17] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.

[18] D. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.

[19] R. Krishnan, U. Shalit, and D. Sontag. Structured inference networks for nonlinear state space models. In AAAI, 2017.

[20] S. Linderman, M. Johnson, A. Miller, R. Adams, D. Blei, and L. Paninski. Bayesian learning and inference in recurrent switching linear dynamical systems. In AISTATS, 2017.

[21] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. In ICLR, 2017.

[22] K. P. Murphy. Switching Kalman filters. Technical report, 1998.

[23] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh.
Action-conditional video prediction using deep networks in Atari games. In NIPS, 2015.

[24] V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory. arXiv:1511.06309, 2015.

[25] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.

[26] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(2):305–345, 1999.

[27] M. W. Seeger, D. Salinas, and V. Flunkert. Bayesian intermittent demand forecasting for large inventories. In NIPS, 2016.

[28] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, 2016.

[29] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.

[30] W. Sun, A. Venkatraman, B. Boots, and J. A. Bagnell. Learning to filter with predictive state inference machines. In ICML, 2016.

[31] L. G. Ungerleider and J. V. Haxby. "What" and "where" in the human brain. Curr. Opin. Neurobiol., 4:157–165, 1994.

[32] N. Wahlström, T. B. Schön, and M. P. Deisenroth. From pixels to torques: Policy learning with deep dynamical models. arXiv:1502.02251, 2015.

[33] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, 2015.

[34] J. Wu, I. Yildirim, J. J. Lim, W. T. Freeman, and J. B. Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In NIPS, 2015.
illustration:1 ratio:1 disentangling:2 potentially:1 greyscale:1 ba:1 haarnoja:1 implementation:1 design:1 zt:39 policy:1 perform:1 teh:1 vertical:3 observation:3 boot:1 markov:3 descent:1 gas:1 incorrectly:1 situation:1 extended:2 incorporated:1 looking:1 frame:38 intermittent:1 smoothed:9 community:1 intensity:2 datta:1 introduced:1 z1:5 engine:1 learned:6 tensorflow:3 kingma:2 nip:12 tractably:1 poole:1 below:1 dynamical:4 reading:1 including:1 memory:2 video:32 natural:1 force:2 recursion:1 arm:1 representing:1 github:1 categorical:1 catch:1 auto:11 extract:1 autoencoder:1 prior:6 literature:1 understanding:1 acknowledgement:1 kf:1 evolve:1 review:1 fully:3 loss:1 generation:8 filtering:3 validation:1 degree:1 agent:1 vanhoucke:1 ulrich:1 playing:1 share:1 karl:1 repeat:2 last:1 supported:1 perceptron:1 benefit:1 van:1 matthey:1 dimension:3 depth:1 world:5 transition:7 aobs:7 autoregressive:4 sensory:3 forward:8 commonly:2 computes:1 made:1 reinforcement:1 programme:1 welling:1 reconstructed:1 approximate:3 nov:1 spatio:1 factorize:1 discriminative:2 don:1 continuous:1 latent:28 decomposes:1 table:2 learn:7 ca:1 elastic:1 composing:1 depicting:1 inventory:1 complex:3 aistats:1 pk:1 main:1 linearly:2 arrow:2 whole:3 noise:2 repeated:1 x1:4 site:1 referred:1 screen:2 x16:1 sub:1 position:11 inferring:1 wish:1 watter:1 lie:2 learns:2 theorem:1 removing:1 embed:1 xt:32 bishop:1 showing:1 ghemawat:1 learnable:1 r2:1 experimented:1 explored:1 evidence:2 glorot:1 intractable:5 exists:1 recognising:1 sequential:4 phd:1 illustrates:2 demand:2 gumbel:1 chen:1 suited:2 paninski:3 boedecker:1 gao:1 visual:7 vinyals:1 pretrained:1 chang:1 cipolla:1 corresponds:1 truth:3 relies:1 lewis:1 conditional:6 marked:1 formulated:1 exposition:2 towards:3 man:1 change:4 except:2 perceiving:1 called:1 pas:4 experimental:1 citro:1 vaes:3 indicating:1 deisenroth:1 mark:1 guo:1 modulated:1 incorporate:1 tested:1 schuster:1 srivastava:1
6,580
6,952
PASS-GLM: polynomial approximate sufficient statistics for scalable Bayesian GLM inference Jonathan H. Huggins CSAIL, MIT [email protected] Ryan P. Adams Google Brain and Princeton [email protected] Tamara Broderick CSAIL, MIT [email protected] Abstract Generalized linear models (GLMs)?such as logistic regression, Poisson regression, and robust regression?provide interpretable models for diverse data types. Probabilistic approaches, particularly Bayesian ones, allow coherent estimates of uncertainty, incorporation of prior information, and sharing of power across experiments via hierarchical models. In practice, however, the approximate Bayesian methods necessary for inference have either failed to scale to large data sets or failed to provide theoretical guarantees on the quality of inference. We propose a new approach based on constructing polynomial approximate sufficient statistics for GLMs (PASS-GLM). We demonstrate that our method admits a simple algorithm as well as trivial streaming and distributed extensions that do not compound error across computations. We provide theoretical guarantees on the quality of point (MAP) estimates, the approximate posterior, and posterior mean and uncertainty estimates. We validate our approach empirically in the case of logistic regression using a quadratic approximation and show competitive performance with stochastic gradient descent, MCMC, and the Laplace approximation in terms of speed and multiple measures of accuracy?including on an advertising data set with 40 million data points and 20,000 covariates. 1 Introduction Scientists, engineers, and companies increasingly use large-scale data?often only available via streaming?to obtain insights into their respective problems. For instance, scientists might be interested in understanding how varying experimental inputs leads to different experimental outputs; or medical professionals might be interested in understanding which elements of patient histories lead to certain health outcomes. Generalized linear models (GLMs) enable these practitioners to explicitly and interpretably model the effect of covariates on outcomes while allowing flexible noise distributions?including binary, count-based, and heavy-tailed observations. Bayesian approaches further facilitate (1) understanding the importance of covariates via coherent estimates of parameter uncertainty, (2) incorporating prior knowledge into the analysis, and (3) sharing of power across different experiments or domains via hierarchical modeling. In practice, however, an exact Bayesian analysis is computationally infeasible for GLMs, so an approximation is necessary. While some approximate methods provide asymptotic guarantees on quality, these methods often only run successfully in the small-scale data regime. In order to run on (at least) millions of data points and thousands of covariates, practitioners often turn to heuristics with no theoretical guarantees on quality. In this work, we propose a novel and simple approximation framework for probabilistic inference in GLMs. We demonstrate theoretical guarantees on the quality of point estimates in the finite-sample setting and on the quality of Bayesian posterior approximations produced by our framework. We show that our framework trivially extends to streaming data and to distributed architectures, with no additional compounding of error in these settings. We empirically demonstrate the practicality 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
of our framework on datasets with up to tens of millions of data points and tens of thousands of covariates. Large-scale Bayesian inference. Calculating accurate approximate Bayesian posteriors for large data sets together with complex models and potentially high-dimensional parameter spaces is a longstanding problem. We seek a method that satisfies the following criteria: (1) it provides a posterior approximation; (2) it is scalable; (3) it comes equipped with theoretical guarantees; and (4) it provides arbitrarily good approximations. By posterior approximation we mean that the method outputs an approximate posterior distribution, not just a point estimate. By scalable we mean that the method examines each data point only a small number of times, and further can be applied to streaming and distributed data. By theoretical guarantees we mean that the posterior approximation is certified to be close to the true posterior in terms of, for example, some metric on probability measures. Moreover, the distance between the exact and approximate posteriors is an efficiently computable quantity. By an arbitrarily good approximation we mean that, with a large enough computational budget, the method can output an approximation that is as close to the exact posterior as we wish. Markov chain Monte Carlo (MCMC) methods provide an approximate posterior, and the approximation typically becomes arbitrarily good as the amount of computation time grows asymptotically; thereby MCMC satisfies criteria 1, 3, and 4. But scalability of MCMC can be an issue. Conversely, variational Bayes (VB) and expectation propagation (EP) [27] have grown in popularity due to their scalability to large data and models?though they typically lack guarantees on quality (criteria 3 and 4). Subsampling methods have been proposed to speed up MCMC [1, 5, 6, 21, 25, 41] and VB [18]. Only a few of these algorithms preserve guarantees asymptotic in time (criterion 4), and they often require restrictive assumptions. On the scalability front (criterion 2), many though not all subsampling MCMC methods have been found to require examining a constant fraction of the data at each iteration [2, 6, 7, 30, 31, 38], so the computational gains are limited. Moreover, the random data access required by these methods may be infeasible for very large datasets that do not fit into memory. Finally, they do not apply to streaming and distributed data, and thus fail criterion 2 above. More recently, authors have proposed subsampling methods based on piecewise deterministic Markov processes (PDMPs) [8, 9, 29]. These methods are promising since subsampling data here does not change the invariant distribution of the continuous-time Markov process. But these methods have not yet been validated on large datasets nor is it understood how subsampling affects the mixing rates of the Markov processes. Authors have also proposed methods for coalescing information across distributed computation (criterion 2) in MCMC [12, 32, 34, 35], VB [10, 11], and EP [15, 17]?and in the case of VB, across epochs as streaming data is collected [10, 11]. (See Angelino et al. [3] for a broader discussion of issues surrounding scalable Bayesian inference.) While these methods lead to gains in computational efficiency, they lack rigorous justification and provide no guarantees on the quality of inference (criteria 3 and 4). To address these difficulties, we are inspired in part by the observation that not all Bayesian models require expensive posterior approximation. 
When the likelihood belongs to an exponential family, Bayesian posterior computation is fast and easy. In particular, it suffices to find the sufficient statistics of the data, which require computing a simple summary at each data point and adding these summaries across data points. The latter addition requires a single pass through the data and is trivially streaming or distributed. With the sufficient statistics in hand, the posterior can then be calculated via, e.g., MCMC, and point estimates such as the MLE can be computed, all in time independent of the data set size. Unfortunately, sufficient statistics are not generally available (except in very special cases) for GLMs. We propose instead to develop a notion of approximate sufficient statistics.

Previously, authors have suggested using a coreset, a weighted data subset, as a summary of the data [4, 13, 14, 16, 19, 24]. While these methods provide theoretical guarantees on the quality of inference via the model evidence, the resulting guarantees are better suited to approximate optimization and do not translate to guarantees on typical Bayesian desiderata, such as the accuracy of posterior mean and uncertainty estimates. Moreover, while these methods do admit streaming and distributed constructions, the approximation error is compounded across computations.

Our contributions. In the present work we instead propose to construct our approximate sufficient statistics via a much simpler polynomial approximation for generalized linear models. We therefore call our method polynomial approximate sufficient statistics for generalized linear models (PASS-GLM). PASS-GLM satisfies all of the criteria laid out above. It provides a posterior approximation with theoretical guarantees (criteria 1 and 3). It is scalable since it requires only a single pass over the data and can be applied to streaming and distributed data (criterion 2). And by increasing the number of approximate sufficient statistics, PASS-GLM can produce arbitrarily good approximations to the posterior (criterion 4).

The Laplace approximation [39] and variational methods with a Gaussian approximation family [20, 22] may be seen as polynomial (quadratic) approximations in the log-likelihood space. But we note that the VB variants still suffer the issues described above. A Laplace approximation relies on a Taylor series expansion of the log-likelihood around the maximum a posteriori (MAP) solution, which requires first calculating the MAP, an expensive multi-pass optimization in the large-scale data setting. Neither Laplace nor VB offers the simplicity of sufficient statistics, including in streaming and distributed computations. The recent work of Stephanou et al. [36] is similar in spirit to ours, though they address a different statistical problem: they construct sequential quantile estimates using Hermite polynomials.

In the remainder of the paper, we begin by describing generalized linear models in more detail in Section 2. We construct our novel polynomial approximation and specify our PASS-GLM algorithm in Section 3. We will see that streaming and distributed computation are trivial for our algorithm and do not compound error. In Section 4.1, we demonstrate finite-sample guarantees on the quality of the MAP estimate arising from our algorithm, with the maximum likelihood estimate (MLE) as a special case.
In Section 4.2, we prove guarantees on the Wasserstein distance between the exact and approximate posteriors, and thereby bound both posterior-derived point estimates and uncertainty estimates. In Section 5, we demonstrate the efficacy of our approach in practice by focusing on logistic regression. We demonstrate experimentally that PASS-GLM can be scaled with almost no loss of efficiency to multi-core architectures. We show on a number of real-world datasets, including a large, high-dimensional advertising dataset (40 million examples with 20,000 dimensions), that PASS-GLM provides an attractive trade-off between computation and accuracy.

2 Background

Generalized linear models. Generalized linear models (GLMs) combine the interpretability of linear models with the flexibility of more general outcome distributions, including binary, ordinal, and heavy-tailed observations. Formally, we let $\mathcal{Y} \subseteq \mathbb{R}$ be the observation space, $\mathcal{X} \subseteq \mathbb{R}^d$ be the covariate space, and $\Theta \subseteq \mathbb{R}^d$ be the parameter space. Let $\mathcal{D} := \{(x_n, y_n)\}_{n=1}^N$ be the observed data. We write $X \in \mathbb{R}^{N \times d}$ for the matrix of all covariates and $y \in \mathbb{R}^N$ for the vector of all observations. We consider GLMs

$$\log p(y \mid X, \theta) = \sum_{n=1}^N \log p(y_n \mid g^{-1}(x_n \cdot \theta)) = \sum_{n=1}^N \phi(y_n, x_n \cdot \theta),$$

where $\mu := g^{-1}(x_n \cdot \theta)$ is the expected value of $y_n$ and $g^{-1} : \mathbb{R} \to \mathbb{R}$ is the inverse link function. We call $\phi(y, s) := \log p(y \mid g^{-1}(s))$ the GLM mapping function. Examples include some of the most widely used models in the statistical toolbox. For instance, for binary observations $y \in \{\pm 1\}$, the likelihood model is Bernoulli, $p(y = 1 \mid \mu) = \mu$, and the link function is often either the logit $g(\mu) = \log \frac{\mu}{1-\mu}$ (as in logistic regression) or the probit $g(\mu) = \Phi^{-1}(\mu)$, where $\Phi$ is the standard Gaussian CDF. When modeling count data $y \in \mathbb{N}$, the likelihood model might be Poisson, $p(y \mid \mu) = \mu^y e^{-\mu}/y!$, and $g(\mu) = \log(\mu)$ is the typical log link. Other GLMs include gamma regression, robust regression, and binomial regression, all of which are commonly used for large-scale data analysis (see Examples A.1 and A.3).

If we place a prior $\pi_0(d\theta)$ on the parameters, then a full Bayesian analysis aims to approximate the (typically intractable) GLM posterior distribution $\pi_D(d\theta)$, where

$$\pi_D(d\theta) = \frac{p(y \mid X, \theta)\, \pi_0(d\theta)}{\int p(y \mid X, \theta')\, \pi_0(d\theta')}.$$

The maximum a posteriori (MAP) solution gives a point estimate of the parameter:

$$\theta_{\mathrm{MAP}} := \arg\max_{\theta \in \Theta} \pi_D(\theta) = \arg\max_{\theta \in \Theta}\, \log \pi_0(\theta) + \mathcal{L}_D(\theta), \qquad (1)$$

where $\mathcal{L}_D(\theta) := \log p(y \mid X, \theta)$ is the data log-likelihood. The MAP problem strictly generalizes finding the maximum likelihood estimate (MLE), since the MAP solution equals the MLE when using the (possibly improper) prior $\pi_0(\theta) \equiv 1$.

Algorithm 1 PASS-GLM inference
Require: data $\mathcal{D}$, GLM mapping function $\phi : \mathbb{R} \to \mathbb{R}$, degree $M$, polynomial basis $(\xi_m)_{m \in \mathbb{N}}$ with base measure $\sigma$
1: Calculate basis coefficients $b_m \leftarrow \int \phi\, \xi_m\, d\sigma$ using numerical integration, for $m = 0, \dots, M$
2: Calculate polynomial coefficients $b_m^{(M)} \leftarrow \sum_{k=m}^{M} \xi_{k,m} b_k$ for $m = 0, \dots, M$
3: for $k \in \mathbb{N}^d$ with $\sum_j k_j \le M$ do
4:   Initialize $t_k \leftarrow 0$
5: for $n = 1, \dots, N$ do    ▷ can be done with any combination of batch, parallel, or streaming
6:   for $k \in \mathbb{N}^d$ with $\sum_j k_j \le M$ do
7:     Update $t_k \leftarrow t_k + (y_n x_n)^k$
8: Form the approximate log-likelihood $\hat{\mathcal{L}}_D(\theta) = \sum_{k \in \mathbb{N}^d :\, \sum_j k_j \le M} \binom{m}{k} b_m^{(M)} t_k\, \theta^k$, where $m := \sum_j k_j$
9: Use $\hat{\mathcal{L}}_D(\theta)$ to construct the approximate posterior $\hat\pi_D(\theta)$

Computation and exponential families. In large part due to the high-dimensional integral implicit in the normalizing constant, approximating the posterior, e.g., via MCMC or VB, is often prohibitively expensive.
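For intuition, here is a compact, runnable sketch of Algorithm 1 for the logistic case with M = 2 (the PASS-LR2 setting used in Section 5). This is our own illustration with hypothetical helper names, not the authors' released implementation; for M = 2 we obtain the coefficients by interpolating $\phi_{\mathrm{logit}}$ at the three Chebyshev nodes of $[-R, R]$, which for a smooth $\phi$ is close to the basis projection of steps 1-2:

```python
import numpy as np

def pass_lr2_loglik(X, y, R=4.0):
    """Sketch of Algorithm 1 for logistic regression with M = 2 (PASS-LR2).

    Steps 1-2: interpolate phi_logit at the 3 Chebyshev nodes of [-R, R],
    giving monomial coefficients b0, b1, b2.
    Steps 3-7: accumulate the degree-<=2 moments of z_n = y_n x_n in a
    single pass; these are the approximate sufficient statistics.
    Step 8: return the approximate log-likelihood as a function of theta.
    """
    phi = lambda s: -np.logaddexp(0.0, -s)           # log-sigmoid, numerically stable
    nodes = R * np.cos(np.pi * (np.arange(3) + 0.5) / 3.0)
    b2, b1, b0 = np.polyfit(nodes, phi(nodes), 2)    # highest degree first

    Z = y[:, None] * X                               # z_n = y_n x_n, with y_n in {-1, +1}
    t1 = Z.sum(axis=0)                               # d first moments
    t2 = Z.T @ Z                                     # d x d second moments
    n = X.shape[0]

    # sum_n phi(z_n . theta)  ~=  n*b0 + b1 * (t1 . theta) + b2 * theta' t2 theta
    return lambda theta: n * b0 + b1 * (t1 @ theta) + b2 * (theta @ t2 @ theta)
```

Because the moments $t_1$ and $t_2$ combine across data shards by plain addition, streaming and distributed versions of this accumulation introduce no additional approximation error, exactly as claimed above.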
Approximating the normalizing-constant integral will typically require many evaluations of the (log-)likelihood, or its gradient, and each evaluation may require $\Omega(N)$ time. Computation is much more efficient, though, if the model is in an exponential family (EF). In the EF case, there exist functions $t, \eta : \mathbb{R}^d \to \mathbb{R}^m$ such that¹

$$\log p(y_n \mid x_n, \theta) = t(y_n, x_n) \cdot \eta(\theta) =: \mathcal{L}_{D,\mathrm{EF}}(\theta;\, t(y_n, x_n)).$$

Thus, we can rewrite the log-likelihood as

$$\mathcal{L}_D(\theta) = \sum_{n=1}^N \mathcal{L}_{D,\mathrm{EF}}(\theta;\, t(y_n, x_n)) =: \mathcal{L}_{D,\mathrm{EF}}(\theta;\, t(\mathcal{D})),$$

where $t(\mathcal{D}) := \sum_{n=1}^N t(y_n, x_n)$. The sufficient statistics $t(\mathcal{D})$ can be calculated in $O(N)$ time, after which each evaluation of $\mathcal{L}_{D,\mathrm{EF}}(\theta; t(\mathcal{D}))$ or $\nabla \mathcal{L}_{D,\mathrm{EF}}(\theta; t(\mathcal{D}))$ requires only $O(1)$ time. Thus, instead of $K$ passes over $N$ data (requiring $O(NK)$ time), only $O(N + K)$ time is needed. Even for moderate values of $N$, the time savings can be substantial when $K$ is large.

The Poisson distribution is an illustrative example of a one-parameter exponential family, with $t(y) = (1, y, \log y!)$ and $\eta(\mu) = (-\mu, \log \mu, -1)$. Thus, if we have data $y$ (there are no covariates), $t(y) = (N, \sum_n y_n, \sum_n \log y_n!)$. In this case it is easy to calculate the maximum likelihood estimate of $\mu$ from $t(y)$ as $t_1(y)/t_0(y) = N^{-1} \sum_n y_n$.

Unfortunately, GLMs rarely belong to an exponential family: even if the outcome distribution is in an exponential family, the use of a link destroys the EF structure. In logistic regression, we write (overloading the $\phi$ notation) $\log p(y_n \mid x_n, \theta) = \phi_{\mathrm{logit}}(y_n x_n \cdot \theta)$, where $\phi_{\mathrm{logit}}(s) := -\log(1 + e^{-s})$. For Poisson regression with log link, $\log p(y_n \mid x_n, \theta) = \phi_{\mathrm{Poisson}}(y_n, x_n \cdot \theta)$, where $\phi_{\mathrm{Poisson}}(y, s) := ys - e^s - \log y!$. In both cases, we cannot express the log-likelihood as an inner product between a function solely of the data and a function solely of the parameter.

3 PASS-GLM

Since exact sufficient statistics are not available for GLMs, we propose to construct approximate sufficient statistics. In particular, we propose to approximate the mapping function $\phi$ with an order-$M$ polynomial $\phi_M$. We therefore call our method polynomial approximate sufficient statistics for GLMs (PASS-GLM). We illustrate our method next in the logistic regression case, where $\log p(y_n \mid x_n, \theta) = \phi_{\mathrm{logit}}(y_n x_n \cdot \theta)$. The fully general treatment appears in Appendix A.

Let $b_0^{(M)}, b_1^{(M)}, \dots, b_M^{(M)}$ be constants such that

$$\phi_{\mathrm{logit}}(s) \approx \phi_M(s) := \sum_{m=0}^M b_m^{(M)} s^m.$$

¹ Our presentation is slightly different from the standard textbook account because we have implicitly absorbed the base measure and log-partition function into $t$ and $\eta$.

Let $v^k := \prod_{j=1}^d v_j^{k_j}$ for vectors $v, k \in \mathbb{R}^d$. Taking $s = yx \cdot \theta$, we obtain

$$\phi_{\mathrm{logit}}(yx \cdot \theta) \approx \phi_M(yx \cdot \theta) = \sum_{m=0}^M b_m^{(M)} (yx \cdot \theta)^m = \sum_{m=0}^M b_m^{(M)} \sum_{k \in \mathbb{N}^d :\, \sum_j k_j = m} \binom{m}{k} (yx)^k \theta^k = \sum_{m=0}^M \sum_{k :\, \sum_j k_j = m} a(k, m, M)\, (yx)^k \theta^k,$$

where $\binom{m}{k}$ is the multinomial coefficient and $a(k, m, M) := \binom{m}{k} b_m^{(M)}$. Thus, $\phi_M$ is an $M$-degree polynomial approximation to $\phi_{\mathrm{logit}}(yx \cdot \theta)$, with the $\binom{d+M}{d}$ monomials of degree at most $M$ serving as sufficient statistics derived from $yx$. Specifically, we have an exponential family model with

$$t(yx) = ((yx)^k)_k \quad \text{and} \quad \eta(\theta) = (a(k, m, M)\, \theta^k)_k,$$

where $k$ is taken over all $k \in \mathbb{N}^d$ such that $\sum_j k_j \le M$. We next discuss the calculation of the $b_m^{(M)}$ and the choice of $M$.

Choosing the polynomial approximation. To calculate the coefficients $b_m^{(M)}$, we choose a polynomial basis $(\xi_m)_{m \in \mathbb{N}}$ orthogonal with respect to a base measure $\sigma$, where $\xi_m$ is degree $m$ [37]. That is, $\xi_m(s) = \sum_{j=0}^m \xi_{m,j} s^j$ for some $\xi_{m,j}$, and $\int \xi_m \xi_{m'}\, d\sigma = \delta_{mm'}$, where $\delta_{mm'} = 1$ if $m = m'$ and zero otherwise. If $b_m := \int \phi\, \xi_m\, d\sigma$, then $\phi(s) = \sum_{m=0}^{\infty} b_m \xi_m(s)$, and the approximation is $\phi_M(s) = \sum_{m=0}^M b_m \xi_m(s)$. Conclude that $b_m^{(M)} = \sum_{k=m}^M \xi_{k,m} b_k$. The complete PASS-GLM framework appears in Algorithm 1.

Choices for the orthogonal polynomial basis include Chebyshev, Hermite, Laguerre, and Legendre polynomials [37]. We choose Chebyshev polynomials since they provide a uniform quality guarantee on a finite interval, e.g., $[-R, R]$ for some $R > 0$ in what follows. If $\phi$ is smooth, the choice of Chebyshev polynomials (scaled appropriately, along with the base measure $\sigma$, based on the choice of $R$) yields error exponentially small in $M$: $\sup_{s \in [-R,R]} |\phi(s) - \phi_M(s)| \le C\rho^M$ for some $0 < \rho < 1$ and $C > 0$ [26]. We show in Appendix B that the error in the approximate derivative $\phi'_M$ is also exponentially small in $M$: $\sup_{s \in [-R,R]} |\phi'(s) - \phi'_M(s)| \le C'\rho^M$, where $C' > C$.

Choosing the polynomial degree. For fixed $d$, the number of monomials is $O(M^d)$, while for fixed $M$ the number of monomials is $O(d^M)$. The number of approximate sufficient statistics can remain manageable when either $M$ or $d$ is small, but becomes unwieldy if $M$ and $d$ are both large. Since our experiments (Section 5) generally have large $d$, we focus on the small-$M$ case here. In our experiments we further focus on logistic regression, a particularly popular GLM example, with $\log p(y_n \mid x_n, \theta) = \phi_{\mathrm{logit}}(y_n x_n \cdot \theta)$, where $\phi_{\mathrm{logit}}(s) := -\log(1 + e^{-s})$.

In general, the smallest and therefore most compelling choice of $M$ a priori is 2, and we demonstrate the reasonableness of this choice empirically in Section 5 for a number of large-scale data analyses. In addition, in the logistic regression case, $M = 6$ is the next usable choice beyond $M = 2$. This is because $b_{2k+1}^{(M)} = 0$ for all integers $k \ge 1$ with $2k + 1 \le M$, so any approximation beyond $M = 2$ must have $M \ge 4$. Also, $b_{4k}^{(M)} > 0$ for all integers $k \ge 1$ with $4k \le M$, so choosing $M = 4k$, $k \ge 1$, leads to a pathological approximation of $\phi_{\mathrm{logit}}$ in which the log-likelihood can be made arbitrarily large by taking $\|\theta\|_2 \to \infty$. Thus, a reasonable polynomial approximation for logistic regression requires $M = 2 + 4k$, $k \ge 0$. We have discussed the relative drawbacks of other popular quadratic approximations, including the Laplace approximation and variational methods, in Section 1.

4 Theoretical Results

We next establish quality guarantees for PASS-GLM. We first provide finite-sample and asymptotic guarantees on the MAP (point estimate) solution, and therefore on the MLE, in Section 4.1. We then provide guarantees on the Wasserstein distance between the approximate and exact posteriors, and show these bounds translate into bounds on the quality of posterior mean and uncertainty estimates, in Section 4.2. See Appendix C for extended results, further discussion, and all proofs.

4.1 MAP approximation

In Appendix C, we state and prove Theorem C.1, which provides guarantees on the quality of the MAP estimate for an arbitrary approximation $\hat{\mathcal{L}}_D(\theta)$ to the log-likelihood $\mathcal{L}_D(\theta)$. The approximate MAP (i.e., the MAP under $\hat{\mathcal{L}}_D$) is (cf. Eq. (1))

$$\hat\theta_{\mathrm{MAP}} := \arg\max_{\theta \in \Theta}\, \log \pi_0(\theta) + \hat{\mathcal{L}}_D(\theta).$$

Roughly, we find in Theorem C.1 that the error in the MAP estimate naturally depends on the error of the approximate log-likelihood as well as the peakedness of the posterior near the MAP. In the latter case, if $\log \pi_D$ is very flat, then even a small error from using $\hat{\mathcal{L}}_D$ in place of $\mathcal{L}_D$ could lead to a large error in the approximate MAP solution.
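One way to see this quantitatively is the following standard strong-convexity argument, which we add for intuition (Theorem C.1 itself is more refined): if $-\log \pi_D$ is $\rho$-strongly convex near $\theta_{\mathrm{MAP}}$ and $\sup_{\theta} |\hat{\mathcal{L}}_D(\theta) - \mathcal{L}_D(\theta)| \le \varepsilon$, then, since $\theta_{\mathrm{MAP}}$ and $\hat\theta_{\mathrm{MAP}}$ maximize objectives that differ by at most $\varepsilon$ everywhere,

$$\frac{\rho}{2}\, \|\hat\theta_{\mathrm{MAP}} - \theta_{\mathrm{MAP}}\|_2^2 \;\le\; \log \pi_D(\theta_{\mathrm{MAP}}) - \log \pi_D(\hat\theta_{\mathrm{MAP}}) \;\le\; 2\varepsilon,$$

so $\|\hat\theta_{\mathrm{MAP}} - \theta_{\mathrm{MAP}}\|_2 \le 2\sqrt{\varepsilon/\rho}$: the flatter the posterior (small $\rho$), the more a fixed likelihood error can move the MAP.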
We measure the peakedness of the distribution in terms of the strong convexity constant² of $-\log \pi_D$ near $\theta_{\mathrm{MAP}}$. We apply Theorem C.1 to PASS-GLM for logistic regression and robust regression. We require the assumption that

$$\phi_M(t) \le \phi(t) \quad \forall t \notin [-R, R], \qquad (2)$$

which in the cases of logistic regression and smoothed Huber regression we conjecture holds for $M = 2 + 4k$, $k \in \mathbb{N}$. For a matrix $A$, $\|A\|_2$ denotes its spectral norm.

Corollary 4.1. For the logistic regression model, assume that $\|(\nabla^2 \mathcal{L}_D(\theta_{\mathrm{MAP}}))^{-1}\|_2 \le cd/N$ for some constant $c > 0$ and that $\|x_n\|_2 \le 1$ for all $n = 1, \dots, N$. Let $\phi_M$ be the order-$M$ Chebyshev approximation to $\phi_{\mathrm{logit}}$ on $[-R, R]$ such that Eq. (2) holds, and let $\hat\pi_D(\theta)$ denote the posterior approximation obtained by using $\phi_M$ with a log-concave prior. Then there exist numbers $r = r(R) > 1$ and $\varepsilon = \varepsilon(M) = O(r^{-M})$ such that, provided $R - \|\theta_{\mathrm{MAP}}\|_2$ exceeds a threshold of order $cd\varepsilon$,

$$\|\theta_{\mathrm{MAP}} - \hat\theta_{\mathrm{MAP}}\|_2^2 = O(c^4 d^4 \varepsilon^2 + cd\varepsilon);$$

the explicit constants are given in Appendix C.

The main takeaways from Corollary 4.1 are that (1) the error decreases exponentially in $M$ thanks to the $\varepsilon$ term, (2) the error does not depend on the amount of data, and (3) in order for the bound on the approximate MAP solution to hold, the norm of the true MAP solution must be sufficiently smaller than $R$.

Remark 4.2. Some intuition for the assumption on the Hessian of $\mathcal{L}_D$, i.e., $\nabla^2 \mathcal{L}_D(\theta) = \sum_{n=1}^N \phi''_{\mathrm{logit}}(y_n x_n \cdot \theta)\, x_n x_n^\top$, is as follows. Typically, for $\theta$ near $\theta_{\mathrm{MAP}}$, the minimum eigenvalue of $\nabla^2 \mathcal{L}_D(\theta)$ is at least $N/(cd)$ for some $c > 0$. The minimum eigenvalue condition in Corollary 4.1 holds if, for example, a constant fraction of the data satisfies $0 < b \le \|x_n\|_2 \le B < \infty$ and that subset of the data does not lie too close to any $(d-1)$-dimensional hyperplane. This condition essentially requires the data not to be degenerate and is similar to ones used to show asymptotic consistency of logistic regression [40, Ex. 5.40].

The approximate MAP error bound in the robust regression case using, for example, the smoothed Huber loss (Example A.1), is quite similar to the logistic regression result.

Corollary 4.3. For robust regression with the smoothed Huber loss, assume that a constant fraction of the data satisfies $|x_n \cdot \theta_{\mathrm{MAP}} - y_n| \le b/2$ and that $\|x_n\|_2 \le 1$ for all $n = 1, \dots, N$. Let $\phi_M$ be the order-$M$ Chebyshev approximation to $\phi_{\mathrm{Huber}}$ on $[-R, R]$ such that Eq. (2) holds, and let $\hat\pi_D(\theta)$ denote the posterior approximation obtained by using $\phi_M$ with a log-concave prior. Then if $R \gg \|\theta_{\mathrm{MAP}}\|_2$, there exists $r > 1$ such that for $M$ sufficiently large, $\|\theta_{\mathrm{MAP}} - \hat\theta_{\mathrm{MAP}}\|_2^2 = O(d\, r^{-M})$.

4.2 Posterior approximation

We next establish guarantees on how close the approximate and exact posteriors are in Wasserstein distance, $d_W$. For distributions $P$ and $Q$ on $\mathbb{R}^d$, $d_W(P, Q) := \sup_{f : \|f\|_L \le 1} |\int f\, dP - \int f\, dQ|$, where $\|f\|_L$ denotes the Lipschitz constant of $f$.³ This choice of distance is particularly useful since, if $d_W(\pi_D, \hat\pi_D) \le \delta$, then $\hat\pi_D$ can be used to estimate any function with bounded gradient with error at most $\delta \sup_w \|\nabla f(w)\|_2$. Wasserstein error bounds therefore give bounds on the mean estimates (corresponding to $f(\theta) = \theta_i$) as well as uncertainty estimates such as the mean absolute deviation (corresponding to $f(\theta) = |\bar\theta_i - \theta_i|$, where $\bar\theta_i$ is the expected value of $\theta_i$).

² Recall that a twice-differentiable function $f : \mathbb{R}^d \to \mathbb{R}$ is $\rho$-strongly convex at $\theta$ if the minimum eigenvalue of the Hessian of $f$ evaluated at $\theta$ is at least $\rho > 0$.
³ The Lipschitz constant of a function $f : \mathbb{R}^d \to \mathbb{R}$ is $\|f\|_L := \sup_{v, w \in \mathbb{R}^d} |f(v) - f(w)| / \|v - w\|_2$.

[Figure 1: Validating the use of PASS-GLM with M = 2. (a) The second-order Chebyshev approximation to $\phi = \phi_{\mathrm{logit}}$ on $[-4, 4]$ is very accurate, with error of at most 0.069; the panel plots $\phi(t)$ against $\phi_2(t)$. (b) For a variety of datasets (ChemReact, CovType, Webspam, CodRNA), the inner products $\langle y_n x_n, \theta_{\mathrm{MAP}} \rangle$ are mostly in the range $[-4, 4]$.]

Our general result (Theorem C.3) is stated and proved in Appendix C. Similar to Theorem C.1, the result primarily depends on the peakedness of the approximate posterior and the error of the approximate gradients. If the gradients are poorly approximated, then the error can be large, while if the (approximate) posterior is flat, then even small gradient errors could lead to large shifts in the expected values of the parameters and hence large Wasserstein error. We apply Theorem C.3 to PASS-GLM for logistic regression and Poisson regression. We give simplified versions of these corollaries in the main text and defer the more detailed versions to Appendix C.

For logistic regression we assume $M = 2$ and $\Theta = \mathbb{R}^d$, since this is the setting we use for our experiments. The result is similar in spirit to Corollary 4.1, though more straightforward since $M = 2$. Critically, we see in this result how having small error depends on $|y_n x_n \cdot \bar\theta| \le R$ holding with high probability. Otherwise the second term in the bound will be large.

Corollary 4.4. Let $\phi_2$ be the second-order Chebyshev approximation to $\phi_{\mathrm{logit}}$ on $[-R, R]$, and let $\hat\pi_D(\theta) = \mathcal{N}(\theta \mid \hat\theta_{\mathrm{MAP}}, \hat\Sigma)$ denote the posterior approximation obtained by using $\phi_2$ with a Gaussian prior $\pi_0(\theta) = \mathcal{N}(\theta \mid \theta_0, \Sigma_0)$. Let $\bar\theta := \int \theta\, \hat\pi_D(d\theta)$, let $\alpha_1 := N^{-1} \sum_{n=1}^N \langle y_n x_n, \bar\theta \rangle$, and let $\bar\sigma$ be the subgaussianity constant of the random variable $\langle y_n x_n, \bar\theta \rangle - \alpha_1$, where $n \sim \mathrm{Unif}\{1, \dots, N\}$. Assume that $|\alpha_1| \le R$, that $\|\hat\Sigma\|_2 \le cd/N$, and that $\|x_n\|_2 \le 1$ for all $n = 1, \dots, N$. Then, with $\sigma_0^2 := \|\Sigma_0\|_2$, we have

$$d_W(\pi_D, \hat\pi_D) = O\Big( \varepsilon d R^4 + d \sigma_0 \exp\big( -\alpha_1^2 \bar\sigma^{-2} - 2 \bar\sigma^{-1} (R - |\alpha_1|) \big) \Big).$$

The main takeaway from Corollary 4.4 is that if (a) for most $n$, $|\langle x_n, \bar\theta \rangle| < R$, so that $\phi_2$ is a good approximation to $\phi_{\mathrm{logit}}$, and (b) the approximate posterior concentrates quickly, then we get a high-quality approximate posterior. This result matches up with the experimental results (see Section 5 for further discussion).

For Poisson regression, we return to the case of general $M$. Recall that in the Poisson regression model the expectation of $y_n$ is $\mu = e^{x_n \cdot \theta}$. If $y_n$ is bounded and has non-trivial probability of being greater than zero, we lose little by restricting $x_n \cdot \theta$ to be bounded. Thus, we will assume that the parameter space is bounded. As in Corollaries 4.1 and 4.3, the error is exponentially small in $M$ and, as long as $\|\sum_{n=1}^N x_n x_n^\top\|_2$ grows linearly in $N$, does not depend on the amount of data.

Corollary 4.5. Let $f_M(s)$ be the order-$M$ Chebyshev approximation to $e^s$ on the interval $[-R, R]$, and let $\hat\pi_D(\theta)$ denote the posterior approximation obtained by using the approximation $\log \hat p(y_n \mid x_n, \theta) := y_n x_n \cdot \theta - f_M(x_n \cdot \theta) - \log y_n!$ with a log-concave prior on $\Theta = B_R(0)$. If $\inf_{s \in [-R,R]} f''_M(s) \ge \bar\rho > 0$, $\|\sum_{n=1}^N x_n x_n^\top\|_2 = \Omega(N/d)$, and $\|x_n\|_2 \le 1$ for all $n = 1, \dots, N$, then

$$d_W(\pi_D, \hat\pi_D) = O\big( d\, \bar\rho^{-1} M^2 e^R\, 2^{-M} \big).$$
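Returning to the quadratic logistic setting of Corollary 4.4: under a Gaussian prior, the approximate posterior implied by $\phi_2$ has a closed form. A sketch of the algebra (our reconstruction from the definitions above, not quoted from the paper): writing $z_n = y_n x_n$ and using $y_n^2 = 1$,

$$\hat{\mathcal{L}}_D(\theta) = N b_0^{(2)} + b_1^{(2)} \Big( \textstyle\sum_n z_n \Big)^{\!\top} \theta + b_2^{(2)}\, \theta^\top \Big( \textstyle\sum_n x_n x_n^\top \Big) \theta,$$

so combining with $\pi_0(\theta) = \mathcal{N}(\theta \mid \theta_0, \Sigma_0)$ gives $\hat\pi_D(\theta) = \mathcal{N}(\theta \mid \hat\theta_{\mathrm{MAP}}, \hat\Sigma)$ with

$$\hat\Sigma^{-1} = \Sigma_0^{-1} - 2 b_2^{(2)} \sum_n x_n x_n^\top, \qquad \hat\theta_{\mathrm{MAP}} = \hat\Sigma \Big( \Sigma_0^{-1} \theta_0 + b_1^{(2)} \sum_n z_n \Big).$$

Since $b_2^{(2)} < 0$ for this fit (the quadratic coefficient of a concave target), the precision matrix is positive definite, and both moments are computable from the single-pass statistics $\sum_n z_n$ and $\sum_n x_n x_n^\top$.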
[Figure 2: Batch inference results on (a) Webspam, (b) CovType, (c) ChemReact, and (d) CodRNA. Each column plots negative test log-likelihood, average mean error, and average variance error against running time (sec) for PASS-LR2, Laplace, SGD, MALA, and the true posterior. In all metrics smaller is better.]

Note that although the constant $\bar\rho^{-1}$ of Corollary 4.5 does depend on $R$ and $M$, as $M$ becomes large it converges to $e^R$. Observe that if we truncate a prior on $\mathbb{R}^d$ to be on $B_R(0)$, then by making $R$ and $M$ sufficiently large, the Wasserstein distance between $\pi_D$ and the PASS-GLM posterior approximation $\hat\pi_D$ can be made arbitrarily small. Similar results could be shown for other GLM likelihoods.

5 Experiments

In our experiments, we focus on logistic regression, a particularly popular GLM example.⁴ As discussed in Section 3, we choose $M = 2$ and call our algorithm PASS-LR2. Empirically, we observe that $M = 2$ offers a high-quality approximation of $\phi$ on the interval $[-4, 4]$ (Fig. 1a); in fact, $\sup_{s \in [-4,4]} |\phi_2(s) - \phi(s)| < 0.069$. Moreover, we observe that for many datasets the inner products $y_n x_n \cdot \theta_{\mathrm{MAP}}$ tend to be concentrated within $[-4, 4]$, and therefore a high-quality approximation on this range is sufficient for our analysis. In particular, Fig. 1b shows histograms of $y_n x_n \cdot \theta_{\mathrm{MAP}}$ for four datasets from our experiments. In all but one case, over 98% of the data points satisfy $|y_n x_n \cdot \theta_{\mathrm{MAP}}| \le 4$. In the remaining dataset (CodRNA), only about 80% of the data satisfy this condition, and this is the dataset for which PASS-LR2 performed most poorly (cf. Corollary 4.4).

5.1 Large dataset experiments

In order to compare PASS-LR2 to other approximate Bayesian methods, we first restrict our attention to datasets with fewer than 1 million data points. We compare to the Laplace approximation and the adaptive Metropolis-adjusted Langevin algorithm (MALA). We also compare to stochastic gradient descent (SGD), although SGD provides only a point estimate and no approximate posterior. In all experiments, no method performs as well as PASS-LR2 given the same (or less) running time.

Datasets. The ChemReact dataset consists of N = 26,733 chemicals, each with d = 100 properties; the goal is to predict whether each chemical is reactive. The Webspam corpus consists of N = 350,000 web pages, and the covariates consist of the d = 127 features that each appear in at least 25 documents. The cover type (CovType) dataset consists of N = 581,012 cartographic observations with d = 54 features; the task is to predict the type of trees present at each observation location. The CodRNA dataset consists of N = 488,565 examples with d = 8 RNA-related features; the task is to predict whether the sequences are non-coding RNA.

Fig. 2 shows average errors of the posterior mean and variance estimates, as well as negative test log-likelihood, for each method versus the time required to run the method. SGD was run for between 1 and 20 epochs. (A quick diagnostic for deciding whether the M = 2 fit can be trusted on a new dataset is sketched below.)
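The diagnostic mirrors Fig. 1b (our sketch; `theta_map` can be any point estimate, e.g., from SGD, and `y` is assumed to take values in {-1, +1}):

```python
import numpy as np

def fraction_within(X, y, theta_map, R=4.0):
    """Share of points with |y_n x_n . theta_map| <= R (cf. Fig. 1b)."""
    s = y * (X @ theta_map)
    return float(np.mean(np.abs(s) <= R))
```

Per the numbers above, this quantity should exceed 0.98 on ChemReact, Webspam, and CovType, and come out near 0.8 on CodRNA.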
The true posterior was estimated by running three chains of adaptive MALA for 50,000 iterations, which produced Gelman-Rubin statistics well below 1.1 for all datasets.

⁴ Code is available at https://bitbucket.org/jhhuggins/pass-glm.

[Figure 3: (a) ROC curves (true positive rate vs. false positive rate) for streaming inference on 40 million Criteo data points: PASS-LR2 (area = 0.696) and SGD (area = 0.725); SGD and PASS-LR2 had negative test log-likelihoods of, respectively, 0.07 and 0.045. (b) Cores vs. speedup (compared to one core) for the parallelization experiment on 6 million examples from the Criteo dataset.]

Speed. For all four datasets, PASS-LR2 was an order of magnitude faster than SGD and 2-3 orders of magnitude faster than the Laplace approximation.

Mean and variance estimates. For ChemReact, Webspam, and CovType, PASS-LR2 was superior to or competitive with SGD, with MALA taking 10-100x longer to produce comparable results. Laplace again outperformed all other methods. Critically, on all datasets the PASS-LR2 variance estimates were competitive with Laplace and MALA.

Test log-likelihood. For ChemReact and Webspam, PASS-LR2 produced results competitive with all other methods. MALA took 10-100x longer to produce comparable results. For CovType, PASS-LR2 was competitive with SGD but took a tenth of the time, and MALA took 1000x longer for comparable results. Laplace outperformed all other methods but was orders of magnitude slower than PASS-LR2. CodRNA was the only dataset where PASS-LR2 performed poorly. However, this performance was expected based on the $y_n x_n \cdot \theta_{\mathrm{MAP}}$ histogram (Fig. 1b).

5.2 Very large dataset experiments using streaming and distributed PASS-GLM

We next test PASS-LR2, which is streaming without requiring any modifications, on a subset of 40 million data points from the Criteo terabyte ad click prediction dataset (Criteo). The covariates are 13 integer-valued features and 26 categorical features. After one-hot encoding, on the subset of the data we considered, d is approximately 3 million. For tractability we used sparse random projections [23] to reduce the dimensionality to 20,000. At this scale, comparing to the other fully Bayesian methods from Section 5.1 was infeasible; we compare only to the predictions and point estimates from SGD. PASS-LR2 performs slightly worse than SGD in AUC (Fig. 3a) but outperforms SGD in negative test log-likelihood (0.07 for SGD, 0.045 for PASS-LR2). Since PASS-LR2 estimates a full covariance, it was about 10x slower than SGD. A promising approach to speeding up and reducing the memory usage of PASS-LR2 would be to use a low-rank approximation to the second-order moments.

To validate the efficiency of distributed computation with PASS-LR2, we compared running times on 6M examples with dimensionality reduced to 1,000 when using 1-22 cores. As shown in Fig. 3b, the speed-up is close to optimal: K cores produce a speedup of about K/2 (baseline 3 minutes using 1 core). We used Ray to implement the distributed version of PASS-LR2 [28].⁵

6 Discussion

We have presented PASS-GLM, a novel framework for scalable parameter estimation and Bayesian inference in generalized linear models. Our theoretical results provide guarantees on the quality of point estimates as well as approximate posteriors derived from PASS-GLM. We validated our approach empirically with logistic regression and a quadratic approximation.
We showed competitive performance on a variety of real-world data, scaling to 40 million examples with 20,000 covariates, and trivial distributed computation with no compounding of approximation error. There a number of important directions for future work. The first is to use randomization methods along the lines of random projections and random feature mappings [23, 33] to scale to larger M and d. We conjecture that the use of randomization will allow experimentation with other GLMs for which quadratic approximations are insufficient. 5 https://github.com/ray-project/ray 9 Acknowledgments JHH and TB are supported in part by ONR grant N00014-17-1-2072, ONR MURI grant N00014-11-1-0688, and a Google Faculty Research Award. RPA is supported by NSF IIS-1421780 and the Alfred P. Sloan Foundation. References [1] S. Ahn, A. Korattikara, and M. Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In International Conference on Machine Learning, 2012. [2] P. Alquier, N. Friel, R. Everitt, and A. Boland. Noisy Monte Carlo: convergence of Markov chains with approximate transition kernels. Statistics and Computing, 26:29?47, 2016. [3] E. Angelino, M. J. Johnson, and R. P. Adams. Patterns of scalable Bayesian inference. Foundations and R in Machine Learning, 9(2-3):119?247, 2016. Trends [4] O. Bachem, M. Lucic, and A. Krause. Practical coreset constructions for machine learning. arXiv.org, Mar. 2017. [5] R. Bardenet, A. Doucet, and C. C. Holmes. Towards scaling up Markov chain Monte Carlo: an adaptive subsampling approach. In International Conference on Machine Learning, pages 405?413, 2014. [6] R. Bardenet, A. Doucet, and C. C. Holmes. On Markov chain Monte Carlo methods for tall data. Journal of Machine Learning Research, 18:1?43, 2017. [7] M. J. Betancourt. The fundamental incompatibility of Hamiltonian Monte Carlo and data subsampling. In International Conference on Machine Learning, 2015. [8] J. Bierkens, P. Fearnhead, and G. O. Roberts. The zig-zag process and super-efficient sampling for Bayesian analysis of big data. arXiv.org, July 2016. [9] A. Bouchard-C?ot?e, S. J. Vollmer, and A. Doucet. The bouncy particle sampler: A non-reversible rejectionfree Markov chain Monte Carlo method. arXiv.org, pages 1?37, Jan. 2016. [10] T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson, and M. I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems, Dec. 2013. [11] T. Campbell, J. Straub, J. W. Fisher, III, and J. P. How. Streaming, distributed variational inference for Bayesian nonparametrics. In Advances in Neural Information Processing Systems, 2015. [12] R. Entezari, R. V. Craiu, and J. S. Rosenthal. Likelihood inflating sampling algorithm. arXiv.org, May 2016. [13] D. Feldman, M. Faulkner, and A. Krause. Scalable training of mixture models via coresets. In Advances in Neural Information Processing Systems, pages 2142?2150, 2011. [14] W. Fithian and T. Hastie. Local case-control sampling: Efficient subsampling in imbalanced data sets. The Annals of Statistics, 42(5):1693?1724, Oct. 2014. [15] A. Gelman, A. Vehtari, P. Jyl?anki, T. Sivula, D. Tran, S. Sahai, P. Blomstedt, J. P. Cunningham, D. Schiminovich, and C. Robert. Expectation propagation as a way of life: A framework for Bayesian inference on partitioned data. arXiv.org, Dec. 2014. [16] L. Han, T. Yang, and T. Zhang. Local uncertainty sampling for large-scale multi-class logistic regression. arXiv.org, Apr. 2016. [17] L. Hasenclever, S. Webb, T. Lienart, S. Vollmer, B. Lakshminarayanan, C. 
Blundell, and Y. W. Teh. Distributed Bayesian learning with stochastic natural-gradient expectation propagation and the posterior server. Journal of Machine Learning Research, 18:1?37, 2017. [18] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14:1303?1347, 2013. [19] J. H. Huggins, T. Campbell, and T. Broderick. Coresets for scalable Bayesian logistic regression. In Advances in Neural Information Processing Systems, May 2016. [20] T. Jaakkola and M. I. Jordan. A variational approach to Bayesian logistic regression models and their extensions. In Sixth International Workshop on Artificial Intelligence and Statistics, volume 82, 1997. 10 [21] A. Korattikara, Y. Chen, and M. Welling. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. In International Conference on Machine Learning, 2014. [22] A. Kucukelbir, R. Ranganath, A. Gelman, and D. M. Blei. Automatic variational inference in Stan. In Advances in Neural Information Processing Systems, June 2015. [23] P. Li, T. J. Hastie, and K. W. Church. Very sparse random projections. In SIGKDD Conference on Knowledge Discovery and Data Mining, 2006. [24] M. Lucic, M. Faulkner, A. Krause, and D. Feldman. Training mixture models at scale via coresets. arXiv.org, Mar. 2017. [25] D. Maclaurin and R. P. Adams. Firefly Monte Carlo: Exact MCMC with subsets of data. In Uncertainty in Artificial Intelligence, Mar. 2014. [26] J. C. Mason and D. C. Handscomb. Chebyshev Polynomials. Chapman and Hall/CRC, New York, 2003. [27] T. P. Minka. Expectation propagation for approximate Bayesian inference. In Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc, Aug. 2001. [28] R. Nishihara, P. Moritz, S. Wang, A. Tumanov, W. Paul, J. Schleier-Smith, R. Liaw, M. Niknami, M. I. Jordan, and I. Stoica. Real-time machine learning: The missing pieces. In Workshop on Hot Topics in Operating Systems, 2017. [29] A. Pakman, D. Gilboa, D. Carlson, and L. Paninski. Stochastic bouncy particle sampler. In International Conference on Machine Learning, Sept. 2017. [30] N. S. Pillai and A. Smith. Ergodicity of approximate MCMC chains with applications to large data sets. arXiv.org, May 2014. [31] M. Pollock, P. Fearnhead, A. M. Johansen, and G. O. Roberts. The scalable Langevin exact algorithm: Bayesian inference for big data. arXiv.org, Sept. 2016. [32] M. Rabinovich, E. Angelino, and M. I. Jordan. Variational consensus Monte Carlo. arXiv.org, June 2015. [33] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Advances in Neural Information Processing Systems, pages 1313?1320, 2009. [34] S. L. Scott, A. W. Blocker, F. V. Bonassi, H. A. Chipman, E. I. George, and R. E. McCulloch. Bayes and big data: The consensus Monte Carlo algorithm. In Bayes 250, 2013. [35] S. Srivastava, V. Cevher, Q. Tran-Dinh, and D. Dunson. WASP: Scalable Bayes via barycenters of subset posteriors. In International Conference on Artificial Intelligence and Statistics, 2015. [36] M. Stephanou, M. Varughese, and I. Macdonald. Sequential quantiles via Hermite series density estimation. Electronic Journal of Statistics, 11(1):570?607, 2017. [37] G. Szeg?o. Orthogonal Polynomials. American Mathematical Society, 4th edition, 1975. [38] Y. W. Teh, A. H. Thiery, and S. Vollmer. Consistency and fluctuations for stochastic gradient Langevin dynamics. Journal of Machine Learning Research, 17(7):1?33, Mar. 2016. [39] L. Tierney and J. B. Kadane. 
Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81(393):82-86, 1986. [40] A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998. [41] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In International Conference on Machine Learning, 2011.
6,581
6,953
Bayesian GAN

Yunus Saatchi (Uber AI Labs) and Andrew Gordon Wilson (Cornell University)

Abstract

Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs. Within this framework, we use stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode-collapse, produces interpretable and diverse candidate samples, and provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.

1 Introduction

Learning a good generative model for high-dimensional natural signals, such as images, video and audio has long been one of the key milestones of machine learning. Powered by the learning capabilities of deep neural networks, generative adversarial networks (GANs) [4] and variational autoencoders [6] have brought the field closer to attaining this goal.

GANs transform white noise through a deep neural network to generate candidate samples from a data distribution. A discriminator learns, in a supervised manner, how to tune its parameters so as to correctly classify whether a given sample has come from the generator or the true data distribution. Meanwhile, the generator updates its parameters so as to fool the discriminator. As long as the generator has sufficient capacity, it can approximate the CDF inverse-CDF composition required to sample from a data distribution of interest. Since convolutional neural networks by design provide reasonable metrics over images (unlike, for instance, Gaussian likelihoods), GANs using convolutional neural networks can in turn provide a compelling implicit distribution over images.

Although GANs have been highly impactful, their learning objective can lead to mode collapse, where the generator simply memorizes a few training examples to fool the discriminator. This pathology is reminiscent of maximum likelihood density estimation with Gaussian mixtures: by collapsing the variance of each component we achieve infinite likelihood and memorize the dataset, which is not useful for a generalizable density estimate. Moreover, a large degree of intervention is required to stabilize GAN training, including feature matching, label smoothing, and mini-batch discrimination [9, 10]. To help alleviate these practical difficulties, recent work has focused on replacing the Jensen-Shannon divergence implicit in standard GAN training with alternative metrics, such as f-divergences [8] or Wasserstein divergences [1]. Much of this work is analogous to introducing various regularizers for maximum likelihood density estimation. But just as it can be difficult to choose the right regularizer, it can also be difficult to decide which divergence we wish to use for GAN training.

It is our contention that GANs can be improved by fully probabilistic inference. Indeed, a posterior distribution over the parameters of the generator could be broad and highly multimodal.
GAN training, which is based on mini-max optimization, always estimates this whole posterior distribution over the network weights as a point mass centred on a single mode. Thus even if the generator does not memorize training examples, we would expect samples from the generator to be overly compact relative to samples from the data distribution. Moreover, each mode in the posterior over the network weights could correspond to wildly different generators, each with their own meaningful interpretations. By fully representing the posterior distribution over the parameters of both the generator and discriminator, we can more accurately model the true data distribution. The inferred data distribution can then be used for accurate and highly data-efficient semi-supervised learning.

In this paper, we propose a simple Bayesian formulation for end-to-end unsupervised and semi-supervised learning with generative adversarial networks. Within this framework, we marginalize the posteriors over the weights of the generator and discriminator using stochastic gradient Hamiltonian Monte Carlo. We interpret data samples from the generator, showing exploration across several distinct modes in the generator weights. We also show data and iteration efficient learning of the true distribution. We also demonstrate state of the art semi-supervised learning performance on several benchmarks, including SVHN, MNIST, CIFAR-10, and CelebA. The simplicity of the proposed approach is one of its greatest strengths: inference is straightforward, interpretable, and stable. Indeed all of the experimental results were obtained without feature matching or any ad-hoc techniques. We have made code and tutorials available at https://github.com/andrewgordonwilson/bayesgan.

2 Bayesian GANs

Given a dataset D = {x^(i)} of variables x^(i) ~ p_data(x^(i)), we wish to estimate p_data(x). We transform white noise z ~ p(z) through a generator G(z; θ_g), parametrized by θ_g, to produce candidate samples from the data distribution. We use a discriminator D(x; θ_d), parametrized by θ_d, to output the probability that any x comes from the data distribution. Our considerations hold for general G and D, but in practice G and D are often neural networks with weight vectors θ_g and θ_d.

By placing distributions over θ_g and θ_d, we induce distributions over an uncountably infinite space of generators and discriminators, corresponding to every possible setting of these weight vectors. The generator now represents a distribution over distributions of data. Sampling from the induced prior distribution over data instances proceeds as follows: (1) sample θ_g ~ p(θ_g); (2) sample z^(1), ..., z^(n) ~ p(z); (3) set x̂^(j) = G(z^(j); θ_g) ~ p_generator(x). For posterior inference, we propose unsupervised and semi-supervised formulations in Sec 2.1 - 2.2.
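As an illustration of this three-step generative process, the following minimal NumPy sketch samples one generator from an isotropic Gaussian prior and pushes white noise through it. The toy fully connected architecture and the helper names are our own illustrative choices, not the paper's code.

```python
import numpy as np

def sample_generator_weights(sizes, prior_std=1.0, rng=None):
    """(1) Draw theta_g ~ p(theta_g): here an isotropic Gaussian prior over all layers."""
    rng = np.random.default_rng() if rng is None else rng
    return [(prior_std * rng.standard_normal((m, n)), prior_std * rng.standard_normal(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def generator(z, theta_g):
    """G(z; theta_g): a toy fully connected network with ReLU hidden activations."""
    h = z
    for i, (W, b) in enumerate(theta_g):
        h = h @ W + b
        if i < len(theta_g) - 1:
            h = np.maximum(h, 0.0)
    return h

rng = np.random.default_rng(0)
theta_g = sample_generator_weights([10, 1000, 100], rng=rng)  # (1) theta_g ~ p(theta_g)
z = rng.standard_normal((5, 10))                              # (2) z^(1),...,z^(5) ~ p(z)
x_hat = generator(z, theta_g)                                 # (3) x_hat^(j) = G(z^(j); theta_g)
print(x_hat.shape)  # (5, 100): five candidate samples from one sampled generator
```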
We note that in an exciting recent pre-print Tran et al. [11] briefly mention using a variational approach to marginalize weights in a GAN, as part of a general exposition on hierarchical implicit models (see also Karaletsos [5] for a nice theoretical exploration of related topics in graphical model message passing). While promising, our approach has several key differences: (1) our representation for the posteriors is straightforward, requires no interventions, provides novel formulations for unsupervised and semi-supervised learning, and has state of the art results on many benchmarks; conversely, Tran et al. [11] is only pursued for fully supervised learning on a few small datasets; (2) we use sampling to explore a full posterior over the weights, whereas Tran et al. [11] perform a variational approximation centred on one of the modes of the posterior (and due to the properties of the KL divergence is prone to an overly compact representation of even that mode); (3) we marginalize z in addition to θ_g, θ_d; and (4) the ratio estimation approach in [11] limits the size of the neural networks they can use, whereas in our experiments we can use comparably deep networks to maximum likelihood approaches. In the experiments we illustrate the practical value of our formulation.

Although the high level concept of a Bayesian GAN has been informally mentioned in various contexts, to the best of our knowledge we present the first detailed treatment of Bayesian GANs, including novel formulations, sampling based inference, and rigorous semi-supervised learning experiments.

2.1 Unsupervised Learning

To infer posteriors over θ_g, θ_d, we can iteratively sample from the following conditional posteriors:

p(\theta_g \mid \mathbf{z}, \theta_d) \propto \Big( \prod_{i=1}^{n_g} D(G(z^{(i)}; \theta_g); \theta_d) \Big)\, p(\theta_g \mid \alpha_g)    (1)

p(\theta_d \mid \mathbf{z}, X, \theta_g) \propto \prod_{i=1}^{n_d} D(x^{(i)}; \theta_d) \times \prod_{i=1}^{n_g} \big(1 - D(G(z^{(i)}; \theta_g); \theta_d)\big) \times p(\theta_d \mid \alpha_d).    (2)

p(θ_g | α_g) and p(θ_d | α_d) are priors over the parameters of the generator and discriminator, with hyperparameters α_g and α_d, respectively. n_d and n_g are the numbers of mini-batch samples for the discriminator and generator, respectively.¹ We define X = {x^(i)}_{i=1}^{n_d}.

We can intuitively understand this formulation starting from the generative process for data samples. Suppose we were to sample weights θ_g from the prior p(θ_g | α_g), and then condition on this sample of the weights to form a particular generative neural network. We then sample white noise z from p(z), and transform this noise through the network G(z; θ_g) to generate candidate data samples. The discriminator, conditioned on its weights θ_d, outputs a probability that these candidate samples came from the data distribution. Eq. (1) says that if the discriminator outputs high probabilities, then the posterior p(θ_g | z, θ_d) will increase in a neighbourhood of the sampled setting of θ_g (and hence decrease for other settings). For the posterior over the discriminator weights θ_d, the first two terms of Eq. (2) form a discriminative classification likelihood, labelling samples from the actual data versus the generator as belonging to separate classes. And the last term is the prior on θ_d.

Marginalizing the noise. In prior work, GAN updates are implicitly conditioned on a set of noise samples z. We can instead marginalize z from our posterior updates using simple Monte Carlo:

p(\theta_g \mid \theta_d) = \int p(\theta_g, \mathbf{z} \mid \theta_d)\, d\mathbf{z} = \int p(\theta_g \mid \mathbf{z}, \theta_d)\, p(\mathbf{z} \mid \theta_d)\, d\mathbf{z} \approx \frac{1}{J_g} \sum_{j=1}^{J_g} p(\theta_g \mid \mathbf{z}^{(j)}, \theta_d), \qquad \mathbf{z}^{(j)} \sim p(\mathbf{z}),

where we have used p(z | θ_d) = p(z). By following a similar derivation, p(θ_d | θ_g) ≈ (1/J_d) Σ_{j=1}^{J_d} p(θ_d | z^(j), X, θ_g), with z^(j) ~ p(z).

This specific setup has several nice features for Monte Carlo integration. First, p(z) is a white noise distribution from which we can take efficient and exact samples. Secondly, both p(θ_g | z, θ_d) and p(θ_d | z, X, θ_g), when viewed as a function of z, should be reasonably broad over z by construction, since z is used to produce candidate data samples in the generative procedure. Thus each term in the simple Monte Carlo sum typically makes a reasonable contribution to the total marginal posterior estimates.
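To make the simple Monte Carlo step concrete, the sketch below assembles the unnormalized log of p(θ_g | θ_d) by averaging the conditional posterior of Eq. (1) over J_g fresh draws of z, using logsumexp for stability. The one-dimensional generator, discriminator, and prior are toy stand-ins for the networks in the paper, not the authors' implementation.

```python
import numpy as np
from scipy.special import logsumexp, expit

rng = np.random.default_rng(0)

# Toy 1-D stand-ins: the generator scales noise, the discriminator is logistic in x.
generator = lambda z, theta_g: theta_g * z
discriminator = lambda x, theta_d: expit(theta_d * x)
log_prior = lambda theta, alpha=10.0: -0.5 * theta ** 2 / alpha  # N(0, alpha) up to a constant

def log_cond_post_g(theta_g, z, theta_d):
    # Unnormalized log p(theta_g | z, theta_d) from Eq. (1):
    # sum_i log D(G(z_i; theta_g); theta_d) + log p(theta_g | alpha_g).
    d = discriminator(generator(z, theta_g), theta_d)
    return np.sum(np.log(d + 1e-12)) + log_prior(theta_g)

def log_marginal_post_g(theta_g, theta_d, n_g=64, J_g=10):
    # log p(theta_g | theta_d) ~= log [ (1/J_g) sum_j p(theta_g | z^(j), theta_d) ],
    # with z^(j) ~ p(z); computed stably via logsumexp.
    logs = [log_cond_post_g(theta_g, rng.standard_normal(n_g), theta_d)
            for _ in range(J_g)]
    return logsumexp(logs) - np.log(J_g)

print(log_marginal_post_g(theta_g=1.5, theta_d=0.3))
```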
We do note, however, that the approximation will typically be worse for p(θ_d | θ_g) due to the conditioning on a minibatch of data in Equation 2.

Classical GANs as maximum likelihood. Our proposed probabilistic approach is a natural Bayesian generalization of the classical GAN: if one uses uniform priors for θ_g and θ_d, and performs iterative MAP optimization instead of posterior sampling over Eq. (1) and (2), then the local optima will be the same as for Algorithm 1 of Goodfellow et al. [4]. We thus sometimes refer to the classical GAN as the ML-GAN. Moreover, even with a flat prior, there is a big difference between Bayesian marginalization over the whole posterior versus approximating this (often broad, multimodal) posterior with a point mass as in MAP optimization (see Figure 3, Supplement).

Posterior samples. By iteratively sampling from p(θ_g | θ_d) and p(θ_d | θ_g) at every step of an epoch one can, in the limit, obtain samples from the approximate posteriors over θ_g and θ_d. Having such samples can be very useful in practice. Indeed, one can use different samples for θ_g to alleviate GAN collapse and generate data samples with an appropriate level of entropy, as well as forming a committee of generators to strengthen the discriminator. The samples for θ_d in turn form a committee of discriminators which amplifies the overall adversarial signal, thereby further improving the unsupervised learning process. Arguably, the most rigorous method to assess the utility of these posterior samples is to examine their effect on semi-supervised learning, which is a focus of our experiments in Section 4.

¹ For mini-batches, one must make sure the likelihood and prior are scaled appropriately. See supplement.

2.2 Semi-supervised Learning

We extend the proposed probabilistic GAN formalism to semi-supervised learning. In the semi-supervised setting for K-class classification, we have access to a set of n unlabelled observations, {x^(i)}, as well as a (typically much smaller) set of N_s observations, {(x_s^(i), y_s^(i))}_{i=1}^{N_s}, with class labels y_s^(i) ∈ {1, ..., K}. Our goal is to jointly learn statistical structure from both the unlabelled and labelled examples, in order to make much better predictions of class labels for new test examples x* than if we only had access to the labelled training inputs.

In this context, we redefine the discriminator such that D(x^(i) = y^(i); θ_d) gives the probability that sample x^(i) belongs to class y^(i). We reserve the class label 0 to indicate that a data sample is the output of the generator. We then infer the posterior over the weights as follows:

p(\theta_g \mid \mathbf{z}, \theta_d) \propto \Big( \prod_{i=1}^{n_g} \sum_{y=1}^{K} D(G(z^{(i)}; \theta_g) = y; \theta_d) \Big)\, p(\theta_g \mid \alpha_g)    (3)

p(\theta_d \mid \mathbf{z}, X, \mathbf{y}_s, \theta_g) \propto \prod_{i=1}^{n_d} \sum_{y=1}^{K} D(x^{(i)} = y; \theta_d)\; \prod_{i=1}^{n_g} D(G(z^{(i)}; \theta_g) = 0; \theta_d)\; \prod_{i=1}^{N_s} D(x_s^{(i)} = y_s^{(i)}; \theta_d)\; p(\theta_d \mid \alpha_d)    (4)

During every iteration we use n_g samples from the generator, n_d unlabeled samples, and all of the N_s labeled samples, where typically N_s ≪ n. As in Section 2.1, we can approximately marginalize z using simple Monte Carlo sampling. Much like in the unsupervised learning case, we can marginalize the posteriors over θ_g and θ_d. To compute the predictive distribution for a class label y* at a test input x* we use a model average over all collected samples with respect to the posterior over θ_d:

p(y_* \mid x_*, D) = \int p(y_* \mid x_*, \theta_d)\, p(\theta_d \mid D)\, d\theta_d \approx \frac{1}{T} \sum_{k=1}^{T} p(y_* \mid x_*, \theta_d^{(k)}), \qquad \theta_d^{(k)} \sim p(\theta_d \mid D).    (5)

We will see that this model average is effective for boosting semi-supervised learning performance.
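The model average of Eq. (5) is a few lines once posterior samples θ_d^(1), ..., θ_d^(T) have been collected. In this sketch the linear softmax "discriminator" and the renormalization over the K real classes are our own illustrative choices.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def class_probs(x, theta_d):
    """Toy (K+1)-way 'discriminator': a linear softmax classifier; index 0 is the
    'generated' class, indices 1..K are the real classes."""
    return softmax(x @ theta_d)

def predict_bma(x_star, theta_d_samples):
    """Eq. (5): p(y* | x*, D) ~= (1/T) sum_k p(y* | x*, theta_d^(k)); we then
    renormalize over the K real classes, since a test input is real (our choice)."""
    avg = np.mean([class_probs(x_star, th) for th in theta_d_samples], axis=0)
    real = avg[1:] / avg[1:].sum()
    return int(np.argmax(real)) + 1, real

rng = np.random.default_rng(0)
K, d = 10, 784
theta_d_samples = [rng.standard_normal((d, K + 1)) for _ in range(5)]  # T = 5 posterior samples
print(predict_bma(rng.standard_normal(d), theta_d_samples))
```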
In Section 3 we present an approach to MCMC sampling from the posteriors over θ_g and θ_d.

3 Posterior Sampling with Stochastic Gradient HMC

In the Bayesian GAN, we wish to marginalize the posterior distributions over the generator and discriminator weights, for unsupervised learning in 2.1 and semi-supervised learning in 2.2. For this purpose, we use Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) [3] for posterior sampling. The reason for this choice is three-fold: (1) SGHMC is very closely related to momentum-based SGD, which we know empirically works well for GAN training; (2) we can import parameter settings (such as learning rates and momentum terms) from SGD directly into SGHMC; and most importantly, (3) many of the practical benefits of a Bayesian approach to GAN inference come from exploring a rich multimodal distribution over the weights θ_g of the generator, which is enabled by SGHMC. Alternatives, such as variational approximations, will typically centre their mass around a single mode, and thus provide a unimodal and overly compact representation for the distribution, due to asymmetric biases of the KL-divergence.

The posteriors in Equations 3 and 4 are both amenable to HMC techniques as we can compute the gradients of the loss with respect to the parameters we are sampling. SGHMC extends HMC to the case where we use noisy estimates of such gradients in a manner which guarantees mixing in the limit of a large number of minibatches. For a detailed review of SGHMC, please see Chen et al. [3]. Using the update rules from Eq. (15) in Chen et al. [3], we propose to sample from the posteriors over the generator and discriminator weights as in Algorithm 1. Note that Algorithm 1 outlines standard momentum-based SGHMC: in practice, we found it helps to speed up the "burn-in" process by replacing the SGD part of this algorithm with Adam for the first few thousand iterations, after which we revert back to momentum-based SGHMC. As suggested in Appendix G of Chen et al. [3], we employed a learning rate schedule which decayed according to η/d where d is set to the number of unique "real" datapoints seen so far. Thus, our learning rate schedule converges to η/N in the limit, where we have defined N = |D|.

Algorithm 1: One iteration of sampling for the Bayesian GAN. α is the friction term for SGHMC, η is the learning rate. We assume that the stochastic gradient discretization noise term β̂ is dominated by the main friction term (this assumption constrains us to use small step sizes). We take J_g and J_d simple MC samples for the generator and discriminator respectively, and M SGHMC samples for each simple MC sample. We rescale to accommodate minibatches as in the supplementary material.

Represent posteriors with samples {θ_g^{j,m}}_{j=1..J_g, m=1..M} and {θ_d^{j,m}}_{j=1..J_d, m=1..M} from the previous iteration.
for number of MC iterations J_g do
    Sample J_g noise samples {z^(1), ..., z^(J_g)} from the noise prior p(z). Each z^(i) has n_g samples.
    Update the sample set representing p(θ_g | θ_d) by running SGHMC updates for M iterations:
        θ_g^{j,m} ← θ_g^{j,m} + v;
        v ← (1 − α)v + η Σ_{i=1}^{J_g} Σ_{k=1}^{J_d} ∂ log p(θ_g | z^(i), θ_d^{k,m}) / ∂θ_g + n,   n ~ N(0, 2αηI)
    Append θ_g^{j,m} to the sample set.
end for
for number of MC iterations J_d do
    Sample a minibatch of J_d noise samples {z^(1), ..., z^(J_d)} from the noise prior p(z).
    Sample a minibatch of n_d data samples x.
    Update the sample set representing p(θ_d | z, θ_g) by running SGHMC updates for M iterations:
        θ_d^{j,m} ← θ_d^{j,m} + v;
        v ← (1 − α)v + η Σ_{i=1}^{J_d} Σ_{k=1}^{J_g} ∂ log p(θ_d | z^(i), x, θ_g^{k,m}) / ∂θ_d + n,   n ~ N(0, 2αηI)
    Append θ_d^{j,m} to the sample set.
end for
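The SGHMC inner update of Algorithm 1 can be written generically for one flattened parameter vector with a user-supplied stochastic gradient of the log-posterior. This is a sketch of the update rule, not the authors' implementation; the toy target below is only there to show that the chain mixes.

```python
import numpy as np

def sghmc_step(theta, v, grad_log_post, alpha=0.01, eta=2e-4, rng=None):
    """One SGHMC update (cf. Algorithm 1 and Eq. (15) of Chen et al. [3]):
        theta <- theta + v
        v     <- (1 - alpha) * v + eta * grad_log_post(theta) + n,  n ~ N(0, 2*alpha*eta*I)
    alpha is the friction term, eta the learning rate; eta must be small so the
    discretization noise stays dominated by the friction term."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta + v
    noise = rng.normal(scale=np.sqrt(2 * alpha * eta), size=theta.shape)
    v = (1 - alpha) * v + eta * grad_log_post(theta) + noise
    return theta, v

# Toy usage: sample from N(0, I), whose log-density gradient is -theta.
rng = np.random.default_rng(0)
theta, v = np.zeros(3), np.zeros(3)
for _ in range(5000):
    theta, v = sghmc_step(theta, v, lambda t: -t, rng=rng)
print(theta)  # after burn-in, roughly a (correlated) draw from N(0, I)
```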
4 Experiments

We evaluate our proposed Bayesian GAN (henceforth titled BayesGAN) on six benchmarks (synthetic, MNIST, CIFAR-10, SVHN, and CelebA) each with four different numbers of labelled examples. We consider multiple alternatives, including: the DCGAN [9], the recent Wasserstein GAN (W-DCGAN) [1], an ensemble of ten DCGANs (DCGAN-10) which are formed by 10 random subsets 80% the size of the training set, and a fully supervised convolutional neural network. We also compare to the reported MNIST result for the LFVI-GAN, briefly mentioned in a recent pre-print [11], where they use fully supervised modelling on the whole dataset with a variational approximation. We interpret many of the results from MNIST in detail in Section 4.2, and find that these observations carry forward to our CIFAR-10, SVHN, and CelebA experiments.

For all real experiments we use a 5-layer Bayesian deconvolutional GAN (BayesGAN) for the generative model G(z | θ_g) (see Radford et al. [9] for further details about structure). The corresponding discriminator is a 5-layer 2-class DCGAN for the unsupervised GAN and a 5-layer, K + 1 class DCGAN for a semi-supervised GAN performing classification over K classes. The connectivity structure of the unsupervised and semi-supervised DCGANs were the same as for the BayesGAN. Note that the structure of the networks in Radford et al. [9] are slightly different from [10] (e.g. there are 4 hidden layers and fewer filters per layer), so one cannot directly compare the results here with those in Salimans et al. [10]. Our exact architecture specification is also given in our codebase. The performance of all methods could be improved through further calibrating architecture design for each individual benchmark.

For the Bayesian GAN we place a N(0, 10I) prior on both the generator and discriminator weights and approximately integrate out z using simple Monte Carlo samples. We run Algorithm 1 for 5000 iterations and then collect weight samples every 1000 iterations and record out-of-sample predictive accuracy using Bayesian model averaging (see Eq. 5). For Algorithm 1 we set J_g = 10, J_d = 1, M = 2, and n_d = n_g = 64. All experiments were performed on a single TitanX GPU for consistency, but BayesGAN and DCGAN-10 could be sped up to approximately the same runtime as DCGAN through multi-GPU parallelization.

In Table 1 we summarize the semi-supervised results, where we see consistently improved performance over the alternatives. All runs are averaged over 10 random subsets of labeled examples for estimating error bars on performance (Table 1 shows mean and 2 standard deviations). We also qualitatively illustrate the ability for the Bayesian GAN to produce complementary sets of data samples, corresponding to different representations of the generator produced by sampling from the posterior over the generator weights (Figures 1, 2, 6). The supplement also contains additional plots of accuracy per epoch and accuracy vs runtime for semi-supervised experiments. We emphasize that all of the alternatives required the special techniques described in Salimans et al. [10] such as mini-batch discrimination, whereas the proposed Bayesian GAN needed none of these techniques.

4.1 Synthetic Dataset

We present experiments on a multi-modal synthetic dataset to test the ability to infer a multi-modal posterior p(θ_g | D). This ability not only helps avoid the collapse of the generator to a couple training examples, an instance of overfitting in regular GAN training, but also allows one to explore a set of generators with different complementary properties, harmonizing together to encapsulate a rich data distribution. We generate D-dimensional synthetic data as follows:

z \sim N(0, 10 \cdot I_d), \quad A \sim N(0, I_{D \times d}), \quad x = Az + \epsilon, \quad \epsilon \sim N(0, 0.01 \cdot I_D), \quad d \ll D.
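This generator of synthetic data is easy to reproduce; a quick NumPy sketch, with the dimensions D = 100 and d = 2 used in the text as defaults:

```python
import numpy as np

def make_synthetic(n, D=100, d=2, rng=None):
    """x = A z + eps, with z ~ N(0, 10*I_d), A ~ N(0, I_{D x d}), eps ~ N(0, 0.01*I_D)."""
    rng = np.random.default_rng() if rng is None else rng
    A = rng.standard_normal((D, d))               # mixing matrix, drawn once per dataset
    z = np.sqrt(10.0) * rng.standard_normal((n, d))
    eps = 0.1 * rng.standard_normal((n, D))       # std 0.1 gives the 0.01 * I_D covariance
    return z @ A.T + eps

X = make_synthetic(10000, rng=np.random.default_rng(0))
print(X.shape)  # (10000, 100): 100-D data with intrinsic dimensionality 2
```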
We then fit both a regular GAN and a Bayesian GAN to such a dataset with D = 100 and d = 2. The generator for both models is a two-layer neural network: 10-1000-100, fully connected, with ReLU activations. We set the dimensionality of z to be 10 in order for the DCGAN to converge (it does not converge when d = 2, despite the inherent dimensionality being 2!). Consistently, the discriminator network has the following structure: 100-1000-1, fully-connected, ReLU activations. For this dataset we place an N(0, I) prior on the weights of the Bayesian GAN and approximately integrate out z using J = 100 Monte-Carlo samples.

Figure 1 shows that the Bayesian GAN does a much better job qualitatively in generating samples (for which we show the first two principal components), and quantitatively in terms of Jensen-Shannon divergence (JSD) to the true distribution (determined through kernel density estimates). In fact, the DCGAN (labelled ML GAN as per Section 2) begins to eventually increase in testing JSD after a certain number of training iterations, which is reminiscent of over-fitting. When D = 500, we still see good performance with the Bayesian GAN. We also see, with multidimensional scaling [2], that samples from the posterior over Bayesian generator weights clearly form multiple distinct clusters, indicating that the SGHMC sampling is exploring multiple distinct modes, thus capturing multimodality in weight space as well as in data space.

4.2 MNIST

MNIST is a well-understood benchmark dataset consisting of 60k (50k train, 10k test) labeled images of hand-written digits. Salimans et al. [10] showed excellent out-of-sample performance using only a small number of labeled inputs, convincingly demonstrating the importance of good generative modelling for semi-supervised learning. Here, we follow their experimental setup for MNIST. We evaluate the Bayesian DCGAN for semi-supervised learning using N_s = {20, 50, 100, 200} labelled training examples. We see in Table 1 that the Bayesian GAN has improved accuracy over the DCGAN, the Wasserstein GAN, and even an ensemble of 10 DCGANs! Moreover, it is remarkable that the Bayesian GAN with only 100 labelled training examples (0.2% of the training data) is able to achieve 99.3% testing accuracy, which is comparable with a state of the art fully supervised method using all 50,000 training examples! We show a fully supervised model using N_s samples to generally highlight the practical utility of semi-supervised learning. Moreover, Tran et al. [11] showed that a fully supervised LFVI-GAN, on the whole MNIST training set (50,000 labelled examples) produces 140 classification errors, almost twice the error of our proposed Bayesian GAN approach using only N_s = 100 (0.2%) labelled examples!
We suspect this difference largely comes from (1) the simple practical formulation of the Bayesian GAN in Section 2, (2) marginalizing z via simple Monte Carlo, and (3) exploring a broad multimodal posterior distribution over the generator weights with SGHMC with our approach versus a variational approximation (prone to over-compact representations) centred on a single mode.

[Figure 1. Left: Samples drawn from p_data(x) and visualized in 2-D after applying PCA. Right 2 columns: Samples drawn from p_MLGAN(x) and p_BGAN(x) visualized in 2-D after applying PCA. The data is inherently 2-dimensional so PCA can explain most of the variance using 2 principal components. It is clear that the Bayesian GAN is capturing all the modes in the data whereas the regular GAN is unable to do so. Right (top 2): Jensen-Shannon divergence between p_data(x) and p(x; θ) as a function of the number of iterations of GAN training for D = 100 (top) and D = 500 (bottom). The divergence is computed using kernel density estimates of large sample datasets drawn from p_data(x) and p(x; θ), after applying dimensionality reduction to 2-D (the inherent dimensionality of the data). In both cases, the Bayesian GAN is far more effective at minimizing the Jensen-Shannon divergence, reaching convergence towards the true distribution, by exploring a full distribution over generator weights, which is not possible with a maximum likelihood GAN (no matter how many iterations). (Bottom) The sample set {θ_g^k} after convergence viewed in 2-D using multidimensional scaling (using a Euclidean distance metric between weight samples) [2]. One can clearly see several clusters, meaning that the SGHMC sampling has discovered pronounced modes in the posterior over the weights.]

We can also see qualitative differences in the unsupervised data samples from our Bayesian DCGAN and the standard DCGAN in Figure 2. The top row shows sample images produced from six generators produced from six samples over the posterior of the generator weights, and the bottom row shows sample data images from a DCGAN. We can see that each of the six panels in the top row have qualitative differences, almost as if a different person were writing the digits in each panel. Panel 1 (top left), for example, is quite crisp, while panel 3 is fairly thick, and panel 6 (top right) has thin and fainter strokes. In other words, the Bayesian GAN is learning different complementary generative hypotheses to explain the data. By contrast, all of the data samples on the bottom row from the DCGAN are homogenous. In effect, each posterior weight sample in the Bayesian GAN corresponds to a different style, while in the standard DCGAN the style is fixed. This difference is further illustrated for all datasets in Figure 6 (supplement). Figure 3 (supplement) also further emphasizes the utility of Bayesian marginalization versus optimization, even with vague priors.

However, we do not necessarily expect high fidelity images from any arbitrary generator sampled from the posterior over generators; in fact, such a generator would probably have less posterior probability than the DCGAN, which we show in Section 2 can be viewed as a maximum likelihood analogue of our approach. The advantage in the Bayesian approach comes from representing a whole space of generators alongside their posterior probabilities.
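The multidimensional-scaling view of the collected generator weight samples described in the caption above can be reproduced along these lines; a sketch using scikit-learn's MDS on flattened weight vectors, where the sample array is a random placeholder standing in for the collected SGHMC samples:

```python
import numpy as np
from sklearn.manifold import MDS

# theta_g_samples: one flattened generator weight vector per collected posterior sample.
theta_g_samples = np.random.default_rng(0).standard_normal((30, 2000))  # placeholder

embedding = MDS(n_components=2, dissimilarity="euclidean", random_state=0)
coords = embedding.fit_transform(theta_g_samples)  # (30, 2); clusters suggest distinct modes
print(coords[:3])
```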
Practically speaking, we also stress that for convergence of the maximum-likelihood DCGAN we had to resort to using tricks including minibatch discrimination, feature normalization and the addition of Gaussian noise to each layer of the discriminator. The Bayesian DCGAN needed none of these tricks. This robustness arises from a Gaussian prior over the weights which provides a useful inductive bias, and due to the MCMC sampling procedure which alleviates the risk of collapse and helps explore multiple modes (and uncertainty within each mode). To be balanced, we also stress that in practice the risk of collapse is not fully eliminated; indeed, some samples from p(θ_g | D) still produce generators that create data samples with too little entropy. In practice, sampling is not immune to becoming trapped in sharply peaked modes. We leave further analysis for future work.

Table 1: Detailed supervised and semi-supervised learning results for all datasets. In almost all experiments BayesGAN outperforms DCGAN and W-DCGAN substantially, and typically even outperforms ensembles of DCGANs. The runtimes, per epoch, in minutes, are provided in the rows containing the dataset name (for DCGAN, W-DCGAN, DCGAN-10, and BayesGAN, respectively). While all experiments were performed on a single GPU, note that DCGAN-10 and BayesGAN methods can be sped up straightforwardly using multiple GPUs to obtain a similar runtime to DCGAN. Note also that the BayesGAN is generally much more efficient per epoch than the alternatives, as per Figure 4 (supplement). Results are averaged over 10 random supervised subsets ± 2 stdev. Standard train/test splits are used for MNIST, CIFAR-10 and SVHN. For CelebA we use a test set of size 10k. Test error rates are across the entire test set. N_s is the number of labelled examples; entries are numbers of misclassifications for MNIST and test error rates (%) for the other datasets.

MNIST (N = 50k, D = (28, 28)); runtimes (min/epoch) 14 / 15 / 114 / 32
  N_s    Supervised    DCGAN        W-DCGAN      DCGAN-10     BayesGAN
  20     --            1823 ± 412   1687 ± 387   1087 ± 564   1432 ± 487
  50     --            453 ± 110    490 ± 170    189 ± 103    332 ± 172
  100    2134 ± 525    128 ± 11     156 ± 17     97 ± 8.2     79 ± 5.8
  200    1389 ± 438    95 ± 3.2     91 ± 5.2     78 ± 2.8     74 ± 1.4

CIFAR-10 (N = 50k, D = (32, 32, 3)); runtimes (min/epoch) 18 / 19 / 146 / 68
  N_s    Supervised    DCGAN        W-DCGAN      DCGAN-10     BayesGAN
  1000   63.4 ± 2.6    58.2 ± 2.8   57.1 ± 2.4   31.1 ± 2.5   32.7 ± 5.2
  2000   56.1 ± 2.1    47.5 ± 4.1   49.8 ± 3.1   29.2 ± 1.2   26.2 ± 4.8
  4000   51.4 ± 2.9    40.1 ± 3.3   38.1 ± 2.9   27.4 ± 3.2   23.4 ± 3.7
  8000   47.2 ± 2.2    29.3 ± 2.8   27.4 ± 2.5   25.5 ± 2.4   21.1 ± 2.5

SVHN (N = 75k, D = (32, 32, 3)); runtimes (min/epoch) 29 / 31 / 217 / 81
  N_s    Supervised    DCGAN        W-DCGAN      DCGAN-10     BayesGAN
  500    53.5 ± 2.5    31.2 ± 1.8   29.4 ± 1.8   27.1 ± 2.2   22.5 ± 3.2
  1000   37.3 ± 3.1    25.5 ± 3.3   25.1 ± 2.6   18.3 ± 1.7   12.9 ± 2.5
  2000   26.3 ± 2.1    22.4 ± 1.8   23.3 ± 1.2   16.7 ± 1.8   11.3 ± 2.4
  4000   20.8 ± 1.8    20.4 ± 1.2   19.4 ± 0.9   14.0 ± 1.4   8.7 ± 1.8

CelebA (N = 100k, D = (50, 50, 3)); runtimes (min/epoch) 103 / 98 / 649 / 329
  N_s    Supervised    DCGAN        W-DCGAN      DCGAN-10     BayesGAN
  1000   53.8 ± 4.2    52.3 ± 4.2   51.2 ± 5.4   47.3 ± 3.5   33.4 ± 4.7
  2000   36.7 ± 3.2    37.8 ± 3.4   39.6 ± 3.5   31.2 ± 1.8   31.8 ± 4.3
  4000   34.3 ± 3.8    31.5 ± 3.2   30.1 ± 2.8   29.3 ± 1.5   29.4 ± 3.4
  8000   31.1 ± 4.2    29.5 ± 2.8   27.6 ± 4.2   26.4 ± 1.1   25.3 ± 2.4

Figure 2: Top: Data samples from six different generators corresponding to six samples from the posterior over θ_g. The data samples show that each explored setting of the weights θ_g produces generators with complementary high-fidelity samples, corresponding to different styles. The amount of variety in the samples emerges naturally using the Bayesian approach. Bottom: Data samples from a standard DCGAN (trained six times). By contrast, these samples are homogenous in style.
4.3 CIFAR-10

CIFAR-10 is also a popular benchmark dataset [7], with 50k training and 10k test images, which is harder to model than MNIST since the data are 32x32 RGB images of real objects. Figure 6 (supplement) shows datasets produced from four different generators corresponding to samples from the posterior over the generator weights. As with MNIST, we see meaningful qualitative variation between the panels. In Table 1 we also see again (but on this more challenging dataset) that using Bayesian GANs as a generative model within the semi-supervised learning setup significantly decreases test set error over the alternatives, especially when N_s ≪ n.

4.4 SVHN

The StreetView House Numbers (SVHN) dataset consists of RGB images of house numbers taken by StreetView vehicles. Unlike MNIST, the digits significantly differ in shape and appearance. The experimental procedure closely followed that for CIFAR-10. There are approximately 75k training and 25k test images. We see in Table 1 a particularly pronounced difference in performance between BayesGAN and the alternatives. Data samples are shown in Figure 6 (supplement).

4.5 CelebA

The large CelebA dataset contains 120k celebrity faces amongst a variety of backgrounds (100k training, 20k test images). To reduce background variations we used a standard face detector [12] to crop the faces into a standard 50 × 50 size. Figure 6 shows data samples from the trained Bayesian GAN. In order to assess performance for semi-supervised learning we created a 32-class classification task by predicting a 5-bit vector indicating whether or not the face: is blond, has glasses, is male, is pale and is young. Table 1 shows the same pattern of promising performance for CelebA.

5 Discussion

By exploring rich multimodal distributions over the weight parameters of the generator, the Bayesian GAN can capture a diverse set of complementary and interpretable representations of data. We have shown that such representations can enable state of the art performance on semi-supervised problems, using a simple inference procedure.

Effective semi-supervised learning of natural high dimensional data is crucial for reducing the dependency of deep learning on large labelled datasets. Often labeling data is not an option, or it comes at a high cost, be it through human labour or through expensive instrumentation (such as LIDAR for autonomous driving). Moreover, semi-supervised learning provides a practical and quantifiable mechanism to benchmark the many recent advances in unsupervised learning.

Although we use MCMC, in recent years variational approximations have been favoured for inference in Bayesian neural networks. However, the likelihood of a deep neural network can be broad with many shallow local optima. This is exactly the type of density which is amenable to a sampling based approach, which can explore a full posterior. Variational methods, by contrast, typically centre their approximation along a single mode and also provide an overly compact representation of that mode. Therefore in the future we may generally see advantages in following a sampling based approach in Bayesian deep learning. Aside from sampling, one could try to better accommodate the likelihood functions common to deep learning using more general divergence measures (for example based on the family of α-divergences) instead of the KL divergence in variational methods, alongside more flexible proposal distributions.
In the future, one could also estimate the marginal likelihood of a probabilistic GAN, having integrated away distributions over the parameters. The marginal likelihood provides a natural utility function for automatically learning hyperparameters, and for performing principled quantifiable model comparison between different GAN architectures. It would also be interesting to consider the Bayesian GAN in conjunction with a non-parametric Bayesian deep learning framework, such as deep kernel learning [13, 14]. We hope that our work will help inspire continued exploration into Bayesian deep learning.

Acknowledgements

We thank Pavel Izmailov for helping to create a tutorial for the codebase and helpful comments, and Soumith Chintala for helpful advice, and NSF IIS-1563887 for support.

References

[1] Arjovsky, M., Chintala, S., and Bottou, L. (2017). Wasserstein GAN. arXiv preprint arXiv:1701.07875.
[2] Borg, I. and Groenen, P. J. (2005). Modern multidimensional scaling: Theory and applications. Springer Science & Business Media.
[3] Chen, T., Fox, E., and Guestrin, C. (2014). Stochastic gradient Hamiltonian Monte Carlo. In Proc. International Conference on Machine Learning.
[4] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680.
[5] Karaletsos, T. (2016). Adversarial message passing for graphical models. arXiv preprint arXiv:1612.05048.
[6] Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
[7] Krizhevsky, A., Nair, V., and Hinton, G. (2010). CIFAR-10 (Canadian Institute for Advanced Research).
[8] Nowozin, S., Cseke, B., and Tomioka, R. (2016). f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pages 271-279.
[9] Radford, A., Metz, L., and Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
[10] Salimans, T., Goodfellow, I. J., Zaremba, W., Cheung, V., Radford, A., and Chen, X. (2016). Improved techniques for training GANs. CoRR, abs/1606.03498.
[11] Tran, D., Ranganath, R., and Blei, D. M. (2017). Deep and hierarchical implicit models. CoRR, abs/1702.08896.
[12] Viola, P. and Jones, M. J. (2004). Robust real-time face detection. Int. J. Comput. Vision, 57(2):137-154.
[13] Wilson, A. G., Hu, Z., Salakhutdinov, R., and Xing, E. P. (2016a). Deep kernel learning. Artificial Intelligence and Statistics.
[14] Wilson, A. G., Hu, Z., Salakhutdinov, R. R., and Xing, E. P. (2016b). Stochastic variational deep kernel learning. In Advances in Neural Information Processing Systems, pages 2586-2594.
Off-policy evaluation for slate recommendation

Adith Swaminathan (Microsoft Research, Redmond) adswamin@microsoft.com
Alekh Agarwal (Microsoft Research, New York) alekha@microsoft.com
Akshay Krishnamurthy (University of Massachusetts, Amherst) akshay@cs.umass.edu
Miroslav Dudík (Microsoft Research, New York) mdudik@microsoft.com
Damien Jose (Microsoft, Redmond) dajose@microsoft.com
John Langford (Microsoft Research, New York) jcl@microsoft.com
Imed Zitouni (Microsoft, Redmond) izitouni@microsoft.com

Abstract

This paper studies the evaluation of policies that recommend an ordered set of items (e.g., a ranking) based on some context, a common scenario in web search, ads, and recommendation. We build on techniques from combinatorial bandits to introduce a new practical estimator that uses logged data to estimate a policy's performance. A thorough empirical evaluation on real-world data reveals that our estimator is accurate in a variety of settings, including as a subroutine in a learning-to-rank task, where it achieves competitive performance. We derive conditions under which our estimator is unbiased (these conditions are weaker than prior heuristics for slate evaluation) and experimentally demonstrate a smaller bias than parametric approaches, even when these conditions are violated. Finally, our theory and experiments also show exponential savings in the amount of required data compared with general unbiased estimators.

1 Introduction

In recommendation systems for e-commerce, search, or news, we would like to use the data collected during operation to test new content-serving algorithms (called policies) along metrics such as revenue and number of clicks [4, 25]. This task is called off-policy evaluation. General approaches, namely inverse propensity scores (IPS) [13, 18], require unrealistically large amounts of logged data to evaluate whole-page metrics that depend on multiple recommended items, which happens when showing ranked lists. The key challenge is that the number of possible lists (called slates) is combinatorially large. As a result, the policy being evaluated is likely to choose different slates from those recorded in the logs most of the time, unless it is very similar to the data-collection policy. This challenge is fundamental [34], so any off-policy evaluation method that works with large slates needs to make some structural assumptions about the whole-page metric or the user behavior.

Previous work on off-policy evaluation and whole-page optimization improves the probability of match between logging and evaluation by restricting attention to small slate spaces [35, 26], introducing assumptions that allow for partial matches between the proposed and observed slates [27], or assuming that the policies used for logging and evaluation are similar [4, 32]. Another line of work constructs parametric models of slate quality [8, 16, 14] (see also Sec. 4.3 of [17]). While these approaches require less data, they can have large bias, and their use in practice requires an expensive trial-and-error cycle involving weeks-long A/B tests to develop new policies [20].

[Figure 1 (two panels, "Reward: Negative Time-to-success" and "Reward: Utility Rate"): RMSE vs. number of logged samples (n) on log-log axes for the OnPolicy, IPS, DM: tree, and PI methods.]

Figure 1: Off-policy evaluation of two whole-page user-satisfaction metrics on proprietary search engine data.
Average RMSE of different estimators over 50 runs on a log-log scale. Our method (PI) achieves the best performance with moderate data sizes. The unbiased IPS method suffers high variance, and direct modeling (DM) of the metrics suffers high bias. OnPolicy is the expensive choice of deploying the policy, for instance, in an A/B test.

In this paper we design a method more robust to problems with bias and with only modest data requirements, with the goal of substantially shortening this cycle and accelerating the policy development process.

We frame the slate recommendation problem as a combinatorial generalization of contextual bandits [3, 23, 13]. In combinatorial contextual bandits, for each context, a policy selects a slate consisting of component actions, after which a reward for the entire slate is observed. In web search, the context is the search query augmented with a user profile, the slate is the search results page consisting of a list of retrieved documents (actions), and example reward metrics are page-level measures such as time-to-success, NDCG (position-weighted relevance), or other measures of user satisfaction. As input we receive contextual bandit data obtained by some logging policy, and our goal is to estimate the reward of a new target policy. This off-policy setup differs from online learning in contextual bandits, where the goal is to adaptively maximize the reward in the presence of an explore-exploit trade-off [5].

Inspired by work in combinatorial and linear bandits [7, 31, 11], we propose an estimator that makes only a weak assumption about the evaluated metric, while exponentially reducing the data requirements in comparison with IPS. Specifically, we posit a linearity assumption, stating that the slate-level reward (e.g., time to success in web search) decomposes additively across actions, but the action-level rewards are not observed. Crucially, the action-level rewards are allowed to depend on the context, and we do not require that they be easily modeled from the features describing the context. In fact, our method is completely agnostic to the representation of contexts. We make the following contributions:

1. The pseudoinverse estimator (PI) for off-policy evaluation: a general-purpose estimator from the combinatorial bandit literature, adapted for off-policy evaluation. When ranking ℓ out of m items under the linearity assumption, PI typically requires O(ℓm/ε²) samples to achieve error at most ε, an exponential gain over the m^Ω(ℓ) sample complexity of IPS. We provide distribution-dependent bounds based on the overlap between logging and target policies.

2. Experiments on real-world search ranking datasets: The strong performance of the PI estimator provides, to our knowledge, the first demonstration of high-quality off-policy evaluation of whole-page metrics, comprehensively outperforming prior baselines (see Fig. 1).

3. Off-policy optimization: We provide a simple procedure for learning to rank (L2R) using the PI estimator to impute action-level rewards for each context. This allows direct optimization of whole-page metrics via pointwise L2R approaches, without requiring pointwise feedback.

Related work. Large state spaces have typically been studied in the online, or on-policy, setting. Some works assume specific parametric (e.g., linear) models relating the metrics to the features describing a slate [2, 31, 15, 10, 29]; this can lead to bias if the model is inaccurate (e.g., we might not have access to sufficiently predictive features).
Others posit the same linearity assumption as we do, but further assume a semi-bandit feedback model where the rewards of all actions on the slate are revealed [19, 22, 21]. While much of the research focuses on the on-policy setting, the off-policy paradigm studied in this paper is often preferred in practice since it might not be possible to implement low-latency updates needed for online learning, or we might be interested in many different metrics and require a manual review of their trade-offs before deploying new policies. At a technical level, the PI estimator has been used in online learning [7, 31, 11], but the analysis there is tailored to the specific data collection policies used by the learner. In contrast, we provide distribution-dependent bounds without any assumptions on the logging or target policy.

2 Setting and notation

In combinatorial contextual bandits, a decision maker repeatedly interacts as follows:
1. the decision maker observes a context x drawn from a distribution D(x) over some space X;
2. based on the context, the decision maker chooses a slate s = (s_1, ..., s_ℓ) consisting of actions s_j, where a position j is called a slot, the number of slots is ℓ, actions at position j come from some space A_j(x), and the slate s is chosen from a set of allowed slates S(x) ⊆ A_1(x) × ... × A_ℓ(x);
3. given the context and slate, a reward r ∈ [−1, 1] is drawn from a distribution D(r | x, s); rewards in different rounds are independent, conditioned on contexts and slates.

The context space X can be infinite, but the set of actions is finite. We assume |A_j(x)| = m_j for all contexts x ∈ X and define m := max_j m_j as the maximum number of actions per slot. The goal of the decision maker is to maximize the reward. The decision maker is modeled as a stochastic policy π that specifies a conditional distribution π(s | x) (a deterministic policy is a special case). The value of a policy π, denoted V(π), is defined as the expected reward when following π:

V(\pi) := \mathbb{E}_{x \sim D}\Big[\mathbb{E}_{s \sim \pi(\cdot \mid x)}\big[\mathbb{E}_{r \sim D(\cdot \mid x, s)}[r]\big]\Big].    (1)

To simplify derivations, we extend the conditional distribution π into a distribution over triples (x, s, r) as π(x, s, r) := D(r | x, s) π(s | x) D(x). With this shorthand, we have V(π) = E_π[r]. To finish this section, we introduce notation for the expected reward for a given context and slate, which we call the slate value, and denote as:

V(x, s) := \mathbb{E}_{r \sim D(\cdot \mid x, s)}[r].    (2)

Example 1 (Cartesian product). Consider the optimization of a news portal where the reward is the whole-page advertising revenue. Context x is the user profile, slate is the news-portal page with slots corresponding to news sections,¹ and actions are the articles. The set of valid slates is the Cartesian product S(x) = ∏_{j≤ℓ} A_j(x). The number of valid slates is exponential in ℓ: |S(x)| = ∏_{j≤ℓ} m_j.

Example 2 (Ranking). Consider web search and ranking. Context x is the query along with user profile. Actions correspond to search items (such as webpages). The policy chooses ℓ of m items, where the set A(x) of m items for a context x is chosen from a corpus by a filtering step (e.g., a database query). We have A_j(x) = A(x) for all j ≤ ℓ, but the allowed slates S(x) have no repetitions. The number of valid slates is exponential in ℓ: |S(x)| = m!/(m − ℓ)! = m^Ω(ℓ). A reward could be the negative time-to-success, i.e., negative of the time taken by the user to find a relevant item.
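To make the slate space of Example 2 concrete, here is a small illustrative sketch of a stochastic ranking policy: it samples ℓ of m items without repetition, slot by slot, from a softmax over remaining items (a Plackett-Luce-style policy; the softmax scoring is our own choice, not part of the paper).

```python
import numpy as np

def sample_ranking_slate(scores, ell, rng):
    """Sample s = (s_1, ..., s_ell), a slate of ell of m items without repetition,
    slot by slot from a softmax over the items that have not been placed yet."""
    remaining = list(range(len(scores)))
    slate = []
    for _ in range(ell):
        logits = scores[remaining]
        p = np.exp(logits - logits.max())
        p /= p.sum()
        pick = rng.choice(len(remaining), p=p)
        slate.append(remaining.pop(pick))
    return slate

rng = np.random.default_rng(0)
m, ell = 10, 3
print(sample_ranking_slate(rng.standard_normal(m), ell, rng))
# Number of valid slates: m!/(m-ell)! = 10*9*8 = 720, exponential in ell.
```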
2.1 Off-policy evaluation and optimization

In the off-policy setting, we have access to the logged data (x_1, s_1, r_1), ..., (x_n, s_n, r_n) collected using a past policy μ, called the logging policy. Off-policy evaluation is the task of estimating the value of a new policy π, called the target policy, using the logged data. Off-policy optimization is the harder task of finding a policy π̂ that achieves maximal reward.

There are two standard approaches for off-policy evaluation. The direct method (DM) uses the logged data to train a (parametric) model r̂(x, s) for predicting the expected reward for a given context and slate. V(π) is then estimated as

\hat{V}_{DM}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \sum_{s \in S(x_i)} \hat{r}(x_i, s)\, \pi(s \mid x_i).    (3)

The direct method is often biased due to mismatch between model assumptions and ground truth. The second approach, which is provably unbiased (under modest assumptions), is the inverse propensity score (IPS) estimator [18]. The IPS estimator re-weights the logged data according to ratios of slate probabilities under the target and logging policy. It has two common variants:

\hat{V}_{IPS}(\pi) = \frac{1}{n} \sum_{i=1}^{n} r_i \cdot \frac{\pi(s_i \mid x_i)}{\mu(s_i \mid x_i)}, \qquad \hat{V}_{wIPS}(\pi) = \frac{\sum_{i=1}^{n} r_i \cdot \frac{\pi(s_i \mid x_i)}{\mu(s_i \mid x_i)}}{\sum_{i=1}^{n} \frac{\pi(s_i \mid x_i)}{\mu(s_i \mid x_i)}}.    (4)

wIPS generally has better variance with an asymptotically zero bias. The variance of both estimators grows linearly with π(s|x)/μ(s|x), which can be Ω(|S(x)|). This is prohibitive when |S(x)| = m^Ω(ℓ).

¹ For simplicity, we do not discuss the more general setting of showing multiple articles in each news section.
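Given logged rewards and the slate propensities under both policies, the two IPS variants of Eq. (4) are direct to compute; a minimal sketch, where the propensity arrays are synthetic placeholders:

```python
import numpy as np

def ips_estimates(rewards, pi_probs, mu_probs):
    """Eq. (4): IPS and weighted IPS (wIPS) from logged (r_i, pi(s_i|x_i), mu(s_i|x_i))."""
    w = pi_probs / mu_probs                  # importance weights; requires mu(s|x) > 0
    v_ips = np.mean(rewards * w)
    v_wips = np.sum(rewards * w) / np.sum(w)
    return v_ips, v_wips

rng = np.random.default_rng(0)
n = 1000
mu = np.full(n, 1e-3)                        # logging propensities of the observed slates
pi = rng.uniform(0, 2e-3, size=n)            # target propensities of the same slates (illustrative)
r = rng.uniform(-1, 1, size=n)
print(ips_estimates(r, pi, mu))
```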
In addition to Assumption 1, we also make the standard assumption that the logging policy puts non-zero probability on all slates that can be potentially chosen by the target policy. This assumption is also required for IPS; otherwise unbiased off-policy evaluation is impossible [24].

Assumption 2 (Absolute Continuity). The off-policy evaluation problem satisfies the absolute continuity assumption if μ(s | x) > 0 whenever π(s | x) > 0, with probability one over x ∼ D.

3.1 The pseudoinverse estimator

Using Assumption 1, we can now apply the techniques from the combinatorial bandit literature to our problem. In particular, our estimator closely follows the recipe of Cesa-Bianchi and Lugosi [7], albeit with some differences to account for the off-policy and contextual nature of our setup.

Under Assumption 1, we can view the recovery of φ_x for a given context x as a linear regression problem. The covariates 1_s are drawn according to μ(· | x), and the reward follows a linear model, conditional on s and x, with φ_x as the "weight vector". Thus, we can write the MSE of an estimate w as E_{s∼μ(·|x)} E_{r∼D(·|s,x)} [(1_s^T w − r)^2], or more compactly as E_μ[(1_s^T w − r)^2 | x], using our definition of μ as a distribution over triples (x, s, r). We estimate φ_x by the MSE minimizer with the smallest norm,² which can be written in closed form as

    φ̄_x = (E_μ[1_s 1_s^T | x])^† E_μ[r 1_s | x],    (5)

where M^† is the Moore-Penrose pseudoinverse of a matrix M. Note that this idealized "estimator" φ̄_x uses conditional expectations over s ∼ μ(· | x) and r ∼ D(· | s, x). To simplify the notation, we write Γ_{μ,x} := E_μ[1_s 1_s^T | x] ∈ R^{ℓm×ℓm} to denote the (uncentered) covariance matrix for our regression problem, appearing on the right-hand side of Eq. (5). We also introduce notation for the second term in Eq. (5) and its empirical estimate: θ_{μ,x} := E_μ[r 1_s | x], and θ̂_i := r_i 1_{s_i}. Thus, our regression estimator (5) is simply φ̄_x = Γ_{μ,x}^† θ_{μ,x}. Under Assumptions 1 and 2, it is easy to show that V(x, s) = 1_s^T φ_x = 1_s^T Γ_{μ,x}^† θ_{μ,x}. Replacing θ_{μ,x} with θ̂_i motivates the following estimator for V(π), which we call the pseudoinverse estimator or PI:

    V̂_PI(π) = (1/n) Σ_{i=1}^n Σ_{s∈S(x_i)} π(s | x_i) 1_s^T Γ_{μ,x_i}^† θ̂_i = (1/n) Σ_{i=1}^n r_i · q_{π,x_i}^T Γ_{μ,x_i}^† 1_{s_i}.    (6)

In Eq. (6) we have expanded the definition of θ̂_i and introduced the notation q_{π,x} for the expected slate indicator under π conditional on x, q_{π,x} := E_π[1_s | x]. The summation over s required to obtain q_{π,x_i} in Eq. (6) can be replaced by a small sample. We can also derive a weighted variant of PI:

    V̂_wPI(π) = ( Σ_{i=1}^n r_i · q_{π,x_i}^T Γ_{μ,x_i}^† 1_{s_i} ) / ( Σ_{i=1}^n q_{π,x_i}^T Γ_{μ,x_i}^† 1_{s_i} ).    (7)

We prove the following unbiasedness property in Appendix A.

Proposition 1. If Assumptions 1 and 2 hold, then the estimator V̂_PI is unbiased, i.e., E_{μ^n}[V̂_PI] = V(π), where the expectation is over the n logged examples sampled i.i.d. from μ.

As special cases, PI reduces to IPS when ℓ = 1, and simplifies to (1/n) Σ_{i=1}^n r_i when π = μ (see Appendix C).

² We discuss limitations of Assumption 1 and directions to overcome them in Sec. 5.
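To make Eq. (6) concrete, here is a minimal Python sketch for a single context whose slate space is small enough to enumerate explicitly. The input names are hypothetical, and a practical implementation would use the closed-form simplifications of Γ_{μ,x}^† (cf. Examples 4 and 5 below) rather than an explicit pseudoinverse of an ℓm × ℓm matrix:

```python
# Minimal sketch of the PI estimator (Eq. 6) for one context with an
# enumerable slate space. slates: list of tuples of per-slot action indices;
# pi_probs / mu_probs: target and logging probabilities aligned with slates;
# logs: list of (slate, reward) pairs collected under mu.
import numpy as np

def indicator(slate, ell, m):
    """Slate indicator vector in R^{ell*m}: entry (j, a) is 1 iff slate[j] == a."""
    v = np.zeros(ell * m)
    for j, a in enumerate(slate):
        v[j * m + a] = 1.0
    return v

def pi_estimator(logs, slates, pi_probs, mu_probs, ell, m):
    vecs = [indicator(s, ell, m) for s in slates]
    gamma = sum(p * np.outer(v, v) for p, v in zip(mu_probs, vecs))  # Gamma_{mu,x}
    gamma_dagger = np.linalg.pinv(gamma)                             # Moore-Penrose
    q = sum(p * v for p, v in zip(pi_probs, vecs))                   # q_{pi,x}
    return np.mean([r * (q @ gamma_dagger @ indicator(s, ell, m))
                    for s, r in logs])
```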
To build further intuition, we consider the settings of Examples 1 and 2, and simplify the PI estimator to highlight the improvement over IPS.

Example 4 (PI for a Cartesian product when μ is a product distribution). The PI estimator for the Cartesian product slate space, when μ factorizes across slots as μ(s | x) = ∏_j μ(s_j | x), simplifies to

    V̂_PI(π) = (1/n) Σ_{i=1}^n r_i · ( Σ_{j=1}^ℓ π(s_{ij} | x_i)/μ(s_{ij} | x_i) − ℓ + 1 ),

by Prop. 2 in Appendix D. Note that unlike IPS, which divides by probabilities of whole slates, the PI estimator only divides by probabilities of actions appearing in individual slots. Thus, the magnitude of each term of the outer summation is only O(ℓm), whereas the IPS terms are m^{Ω(ℓ)}.

Example 5 (PI for rankings with ℓ = m and uniform logging). In this case,

    V̂_PI(π) = (1/n) Σ_{i=1}^n r_i · ( Σ_{j=1}^ℓ π(s_{ij} | x_i)/(1/(m − 1)) − m + 2 ),

by Prop. 4 in Appendix E.1. The summands are again O(ℓm) = O(m²).

3.2 Deviation analysis

So far, we have shown that PI is unbiased under our assumptions and overcomes the deficiencies of IPS in specific examples. We now derive a finite-sample error bound, based on the overlap between π and μ. We use Bernstein's inequality, for which we define the variance and range terms:

    σ² := E_{x∼D}[ q_{π,x}^T Γ_{μ,x}^† q_{π,x} ],    ρ := sup_x sup_{s: μ(s|x)>0} q_{π,x}^T Γ_{μ,x}^† 1_s.    (8)

The quantity σ² bounds the variance whereas ρ bounds the range. They capture the "average" and "worst-case" mismatch between π and μ. They equal one when π = μ (see Appendix C), and yield the following deviation bound:

Theorem 1. Under Assumptions 1 and 2, let σ² and ρ be defined as in Eq. (8). Then, for any δ ∈ (0, 1), with probability at least 1 − δ,

    |V̂_PI(π) − V(π)| ≤ sqrt( 2σ² ln(2/δ) / n ) + 2(ρ + 1) ln(2/δ) / (3n).

We observe that this finite-sample bound is structurally different from the regret bounds studied in the prior works on combinatorial bandits. The bound incorporates the extent of overlap between π and μ, so that we have higher confidence in our estimates when the logging and evaluation policies are similar, an important consideration in off-policy evaluation. While the bound might look complicated, it simplifies if we consider the class of γ-uniform logging policies. Formally, for any policy μ, define μ_γ(s | x) = (1 − γ)μ(s | x) + γν(s | x), with ν being the uniform distribution over the set S(x). For suitably small γ, such logging policies are widely used in practice. We have the following corollary for these policies, proved in Appendix E:

Corollary 1. In the settings of Example 1 or Example 2, if the logging is done with μ_γ for some γ > 0, we have |V̂_PI(π) − V(π)| ≤ O( γ^{−1} sqrt(ℓm/n) ).

Again, this turns the Ω(m^ℓ) data dependence of IPS into O(mℓ). The key step in the proof is the bound on a certain norm of Γ_{μ,x}^†, similar to the bounds of Cesa-Bianchi and Lugosi [7], but our results are a bit sharper.
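Before turning to the experiments, the variance gap between PI and IPS is easy to reproduce in simulation. The sketch below (every parameter is invented for illustration) builds a Cartesian-product instance satisfying Assumption 1, logs uniformly per slot, and evaluates a deterministic target policy with the Example 4 form of PI and with IPS; IPS only receives signal on the rare slates that exactly match the target:

```python
# Toy Cartesian-product comparison of the Example 4 PI form against IPS.
import numpy as np

rng = np.random.default_rng(0)
ell, m, n = 4, 10, 5000
phi = rng.uniform(0, 1.0 / ell, size=(ell, m))   # intrinsic per-slot rewards
target = np.argmax(phi, axis=1)                  # deterministic target policy
true_value = phi[np.arange(ell), target].sum()

pi_est, ips_est = [], []
for _ in range(n):
    s = rng.integers(0, m, size=ell)             # uniform per-slot logging
    r = phi[np.arange(ell), s].sum() + rng.normal(0, 0.1)
    ratio = (s == target) * m                    # pi(s_j|x)/mu(s_j|x) per slot
    pi_est.append(r * (ratio.sum() - ell + 1))   # Example 4 form of PI
    ips_est.append(r * ratio.prod())             # whole-slate ratio (IPS)

print("truth:", true_value)
print("PI  :", np.mean(pi_est))
print("IPS :", np.mean(ips_est))   # wildly off: exact matches are ~m^-ell rare
```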
4 Experiments

We empirically evaluate the performance of the pseudoinverse estimator for ranking problems. We first show that PI outperforms prior works in a comprehensive semi-synthetic study using a public dataset. We then use our estimator for off-policy optimization, i.e., to learn ranking policies, competitively with supervised learning that uses more information. Finally, we demonstrate substantial improvements on proprietary data from search engine logs for two user-satisfaction metrics used in practice: time-to-success and utility rate, which do not satisfy the linearity assumption. More detailed results are deferred to Appendices F and G. All of our code is available online.³

4.1 Semi-synthetic evaluation

Our semi-synthetic evaluation uses labeled data from the Microsoft Learning to Rank Challenge dataset [30] (MSLR-WEB30K) to create a contextual bandit instance. Queries form the contexts x and actions a are the available documents. The dataset contains over 31K queries, each with up to 1251 judged documents, where the query-document pairs are judged on a 5-point scale, rel(x, a) ∈ {0, . . . , 4}. Each pair (x, a) has a feature vector f(x, a), which can be partitioned into title and body features (f_title and f_body). We consider two slate rewards: NDCG from Example 3, and the expected reciprocal rank, ERR [9], which does not satisfy linearity, and is defined as ERR(x, s) := Σ_{r=1}^ℓ (1/r) ∏_{i=1}^{r−1} (1 − R(s_i)) R(s_r), where R(a) = (2^{rel(x,a)} − 1)/2^{maxrel} with maxrel = 4.

To derive several distinct logging and target policies, we first train two lasso regression models, called lasso_title and lasso_body, and two regression tree models, called tree_title and tree_body, to predict relevances from f_title and f_body, respectively. To create the logs, queries x are sampled uniformly, and the set A(x) consists of the top m documents according to tree_title. The logging policy is parametrized by a model, either tree_title or lasso_title, and a scalar α ≥ 0. It samples from a multinomial distribution over documents p_α(a | x) ∝ 2^{−α⌊log_2 rank(x,a)⌋}, where rank(x, a) is the rank of document a for query x according to the corresponding model. Slates are constructed slot-by-slot, sampling without replacement according to p_α. Varying α interpolates between uniformly random and deterministic logging. Thus, all logging policies are based on the models derived from f_title. We consider two deterministic target policies based on the two models derived from f_body, i.e., tree_body and lasso_body, which select the top ℓ documents according to the corresponding model. The four base models are fairly distinct: on average fewer than 2.75 documents overlap among the top 10 (see Appendix H).

³ https://github.com/adith387/slates_semisynth_expts

[Figure 2: Top: RMSE of various estimators under four experimental conditions (see Appendix F for all 40 conditions). Middle: CDF of normalized RMSE at 600k samples; each plot aggregates over 10 logging-target combinations; closer to top-left is better. Bottom: Same as middle but at 60k samples. Legend: wIPS, DM: lasso, DM: tree, wPI, OnPolicy.]

We compare the weighted estimator wPI with the direct method (DM) and weighted IPS (wIPS). (Weighted variants outperformed the unweighted ones.) We implement two variants of DM: regression trees and lasso, each trained on the first n/2 examples and using the remaining n/2 examples for evaluation according to Eq. (3). We also include an aspirational baseline, OnPolicy, which corresponds to deploying the target policy as in an A/B test and returning the average of observed rewards. This is the expensive alternative we wish to avoid.
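For concreteness, a sketch of the slot-by-slot sampling scheme just described (an illustration, not the authors' exact implementation):

```python
# Slot-by-slot sampling without replacement from
# p_alpha(a|x) proportional to 2^(-alpha * floor(log2 rank(x, a))).
import numpy as np

def sample_slate(ranks, ell, alpha, rng):
    """ranks[a] = rank(x, a), 1-indexed; returns an ordered slate of ell docs."""
    weights = 2.0 ** (-alpha * np.floor(np.log2(ranks)))
    available = np.arange(len(ranks))
    slate = []
    for _ in range(ell):
        p = weights[available] / weights[available].sum()
        pick = rng.choice(len(available), p=p)
        slate.append(int(available[pick]))
        available = np.delete(available, pick)   # no repeated documents
    return slate

rng = np.random.default_rng(0)
# alpha = 0 gives uniform logging; large alpha is near-deterministic.
print(sample_slate(np.arange(1, 101), ell=10, alpha=1.0, rng=rng))
```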
We evaluate the estimators by recording the root mean square error (RMSE) as a function of the number of samples, averaged over at least 25 independent runs. We do this for 40 different experimental conditions, considering two reward metrics, two slate-space sizes, and 10 combinations of target and logging policies (including the choice of α). The top row of Fig. 2 shows results for four representative conditions (see Appendix F for all results), while the middle and bottom rows aggregate across conditions. To produce the aggregates, we shift and rescale the RMSE of all methods, at 600k (middle row) or 60k (bottom row) samples, so the best performance is at 0.001 and the worst is at 1.0 (excluding OnPolicy). (We use 0.001 instead of 0.0 to allow plotting on a log scale.) The aggregate plots display the cumulative distribution function of these normalized RMSE values across 10 target-logging combinations, keeping the metric and the slate-space size fixed.

The pseudoinverse estimator wPI easily dominates wIPS across all experimental conditions, as can be seen in Fig. 2 (top) and in Appendix F. While wIPS and IPS are (asymptotically) unbiased even without the linearity assumption, they both suffer from a large variance caused by the slate size. The variance, and hence the mean square error, of wIPS and IPS grows exponentially with the slate size, so they perform poorly beyond the smallest slate sizes. DM performs well in some cases, especially with few samples, but often plateaus or degrades eventually as it overfits to the logging distribution, which is different from the target. While wPI does not always outperform DM methods (e.g., Fig. 2, top row, second from right), it is the only method that works robustly across all conditions, as can be seen in the aggregate plots. In general, choosing between DM and wPI is largely a matter of bias-variance tradeoff. DM can be particularly good with very small data sizes, because of its low variance, and in those settings it is often the best choice. However, PI performs comprehensively better given enough data (see Fig. 2, middle row).

In the top row of Fig. 2, we see that, as expected, wPI is biased for the ERR metric since ERR does not satisfy linearity. The right two panels also demonstrate the effect of varying m and ℓ. While wPI deteriorates somewhat for the larger slate space, it still gives a meaningful estimate. In contrast, wIPS fails to produce any meaningful estimate in the larger slate space and its RMSE barely improves with more data. Finally, the left two plots in the top row show that wPI is fairly insensitive to the amount of stochasticity in logging, whereas DM improves with more overlap between logging and target.

4.2 Semi-synthetic policy optimization

We now show how to use the pseudoinverse estimator for off-policy optimization. We leverage pointwise learning-to-rank (L2R) algorithms, which learn a scoring function for query-document pairs by fitting to relevance labels. We call this the supervised approach, as it requires relevance labels. Instead of requiring relevance labels, we use the pseudoinverse estimator to convert the page-level reward into per-slot reward components, i.e., the estimates of φ_x(j, a), and these become targets for regression. Thus, the pseudoinverse estimator enables pointwise L2R to optimize whole-page metrics even without relevance labels.
Given a contextual bandit dataset {(x_i, s_i, r_i)}_{i≤n} collected by the logging policy μ, we begin by creating the estimates of φ_{x_i}: φ̂_i = Γ_{μ,x_i}^† θ̂_i, turning the i-th contextual bandit example into ℓm regression examples. The trained regression model is used to create a slate, starting with the highest-scoring slot-action pair, and continuing greedily (excluding the pairs with the already chosen slots or actions). This procedure is detailed in Appendix G. Note that without the linearity assumptions, our imputed regression targets might not lead to the best possible learned policy, but we still expect to adapt somewhat to the slate-level metric.

We use the MSLR-WEB10K dataset [30] to compare our approach with benchmarked results [33] for NDCG@3 (i.e., ℓ = 3).⁴ This dataset contains 10k queries, over 1.2M relevance judgments, and up to 908 judged documents per query. The state-of-the-art listwise L2R method on this dataset is a highly tuned variant of LambdaMART [1] (with an ensemble of 1000 trees, each with up to 70 leaves). We use the provided 5-fold split and always train on bandit data collected by uniform logging from four folds, while evaluating with supervised data on the fifth. We compare our approach, titled PI-OPT, against the supervised approach (SUP), trained to predict the gains, equal to 2^{rel(x,a)} − 1, computed using annotated relevance judgements in the training folds (predicting raw relevances was inferior). Both PI-OPT and SUP train gradient boosted regression trees (with 1000 trees, each with up to 70 leaves). Additionally, we also experimented with the ERR metric.

The average test-set performance (computed using ground-truth relevance judgments for each test set) across the 5 folds is reported in Table 1. Our method, PI-OPT, is competitive with the supervised baseline SUP for NDCG, and is substantially superior for ERR. A different transformation instead of gains might yield a stronger supervised baseline for ERR, but this only illustrates the key benefit of PI-OPT: the right pointwise targets are automatically inferred for any whole-page metric. Both PI-OPT and SUP are slightly worse than LambdaMART for NDCG@3, but they are arguably not as highly tuned, and PI-OPT only uses the slate-level metric.

Table 1: Comparison of L2R approaches optimizing NDCG@3 and ERR@3. LambdaMART is a tuned list-wise approach. SUP and PI-OPT use the same pointwise L2R learner; SUP uses 8 × 10^5 relevance judgments, PI-OPT uses 10^7 samples (under uniform logging) with page-level rewards.

    Metric   | LambdaMART | uniformly random | SUP   | PI-OPT
    NDCG@3   | 0.457      | 0.152            | 0.438 | 0.421
    ERR@3    | -          | 0.096            | 0.311 | 0.321

⁴ Our dataset here differs from the dataset MSLR-WEB30K used in Sec. 4.1. There our goal was to study realistic problem dimensions, e.g., constructing length-10 rankings out of 100 candidates. Here, we use MSLR-WEB10K, because it is the largest dataset with public benchmark numbers by state-of-the-art approaches (specifically LambdaMART).
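A sketch of the two PI-OPT steps described above: imputing per-(slot, action) regression targets from a logged slate-level reward, and greedily decoding a slate from a trained model's predicted scores. It assumes Γ_{μ,x_i}^† is available as in Sec. 3.1, and omits the regressor itself (gradient boosted trees in our experiments):

```python
# PI-OPT data transformation and greedy slate construction (illustrative).
import numpy as np

def imputed_targets(gamma_dagger, slate, reward, ell, m):
    """phi_hat = Gamma^dagger (r * 1_s), reshaped into ell x m targets phi_hat(j, a)."""
    one_s = np.zeros(ell * m)
    for j, a in enumerate(slate):
        one_s[j * m + a] = 1.0
    return (gamma_dagger @ (reward * one_s)).reshape(ell, m)

def greedy_slate(scores):
    """Fill slots with the highest-scoring (slot, action) pairs, excluding
    already-used slots and actions; scores is a predicted ell x m matrix."""
    ell, m = scores.shape
    slate, used_actions = {}, set()
    for j, a in sorted(np.ndindex(ell, m), key=lambda p: -scores[p]):
        if j not in slate and a not in used_actions:
            slate[j] = a
            used_actions.add(a)
    return [slate[j] for j in range(ell)]
```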
4.3 Real-world experiments

We finally evaluate all methods using logs collected from a popular search engine. The dataset consists of search queries, for which the logging policy randomly (non-uniformly) chooses a slate of size ℓ = 5 from a small pre-filtered set of documents of size m ≤ 8. After preprocessing, there are 77 unique queries and 22K total examples, meaning that for each query, we have logged impressions for many of the available slates. As before, we create the logs by sampling queries uniformly at random, and using a logging policy that samples uniformly from the slates shown for this query.

We consider two page-level metrics: time-to-success (TTS) and UtilityRate. TTS measures the number of seconds between presenting the results and the first satisfied click from the user, defined as any click for which the user stays on the linked page for sufficiently long. The TTS value is capped and scaled to [0, 1]. UtilityRate is a more complex page-level metric of user satisfaction. It captures the interaction of a user with the page as a timeline of events (such as clicks) and their durations. The events are classified as revealing a positive or negative utility to the user and their contribution is proportional to their duration. UtilityRate takes values in [−1, 1].

We evaluate a target policy based on a logistic regression classifier trained to predict clicks, using the predicted probabilities to score slates. We restrict the target policy to pick among the slates in our logs, so we know the ground-truth slate-level reward. Since we know the query distribution, we can calculate the target policy's value exactly, and measure RMSE relative to this true value. We compare our estimator (PI) with three baselines similar to those from Sec. 4.1: DM, IPS and OnPolicy. DM uses regression trees over roughly 20,000 slate-level features. Fig. 1 from the introduction shows that PI provides a consistent multiplicative improvement in RMSE over IPS, which suffers due to high variance. Starting at moderate sample sizes, PI also outperforms DM, which suffers due to substantial bias.

5 Discussion

In this paper we have introduced a new estimator (PI) for off-policy evaluation in combinatorial contextual bandits under a linearity assumption on the slate-level rewards. Our theoretical and empirical analysis demonstrates the merits of the approach. The empirical results show a favorable bias-variance tradeoff. Even on datasets and metrics where our assumptions are violated, the PI estimator typically outperforms all baselines. Its performance, especially at smaller sample sizes, could be further improved by designing doubly-robust variants [12] and possibly also incorporating weight clipping [34]. One promising approach to relax Assumption 1 is to posit a decomposition over pairs (or tuples) of slots to capture higher-order interactions such as diversity. More generally, one could replace slate spaces by arbitrary compact convex sets, as done in linear bandits. In these settings, the pseudoinverse estimator could still be applied, but a tight sample-complexity analysis is open for future research.

References

[1] Nima Asadi and Jimmy Lin. Training efficient tree-based models for document ranking. In European Conference on Advances in Information Retrieval, 2013.
[2] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 2002.
[3] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 2002.
[4] Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis Charles, Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 2013.
[5] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 2012.
[6] Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender.
Learning to rank using gradient descent. In International Conference on Machine Learning, 2005.
[7] Nicolò Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 2012.
[8] Olivier Chapelle and Ya Zhang. A dynamic Bayesian network click model for web search ranking. In International Conference on World Wide Web, 2009.
[9] Olivier Chapelle, Donald Metlzer, Ya Zhang, and Pierre Grinspan. Expected reciprocal rank for graded relevance. In Conference on Information and Knowledge Management, 2009.
[10] Wei Chu, Lihong Li, Lev Reyzin, and Robert E Schapire. Contextual bandits with linear payoff functions. In Artificial Intelligence and Statistics, 2011.
[11] Varsha Dani, Thomas P. Hayes, and Sham M. Kakade. The price of bandit information for online optimization. In Advances in Neural Information Processing Systems, 2008.
[12] Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. In International Conference on Machine Learning, 2011.
[13] Miroslav Dudík, Dumitru Erhan, John Langford, and Lihong Li. Doubly robust policy evaluation and optimization. Statistical Science, 2014.
[14] Georges E. Dupret and Benjamin Piwowarski. A user browsing model to predict search engine click data from past observations. In SIGIR Conference on Research and Development in Information Retrieval, 2008.
[15] Sarah Filippi, Olivier Cappe, Aurélien Garivier, and Csaba Szepesvári. Parametric bandits: The generalized linear case. In Advances in Neural Information Processing Systems, 2010.
[16] Fan Guo, Chao Liu, Anitha Kannan, Tom Minka, Michael Taylor, Yi-Min Wang, and Christos Faloutsos. Click chain model in web search. In International Conference on World Wide Web, 2009.
[17] Katja Hofmann, Lihong Li, Filip Radlinski, et al. Online evaluation for information retrieval. Foundations and Trends in Information Retrieval, 2016.
[18] Daniel G Horvitz and Donovan J Thompson. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 1952.
[19] Satyen Kale, Lev Reyzin, and Robert E Schapire. Non-stochastic bandit slate problems. In Advances in Neural Information Processing Systems, 2010.
[20] Ron Kohavi, Roger Longbotham, Dan Sommerfield, and Randal M Henne. Controlled experiments on the web: survey and practical guide. Knowledge Discovery and Data Mining, 2009.
[21] Akshay Krishnamurthy, Alekh Agarwal, and Miroslav Dudík. Efficient contextual semi-bandit learning. Advances in Neural Information Processing Systems, 2016.
[22] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári. Tight regret bounds for stochastic combinatorial semi-bandits. In Artificial Intelligence and Statistics, 2015.
[23] John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In Advances in Neural Information Processing Systems, 2008.
[24] John Langford, Alexander Strehl, and Jennifer Wortman. Exploration scavenging. In International Conference on Machine Learning, 2008.
[25] Lihong Li, Wei Chu, John Langford, and Robert E Schapire. A contextual-bandit approach to personalized news article recommendation. In International Conference on World Wide Web, 2010.
[26] Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In International Conference on Web Search and Data Mining, 2011.
[27] Lihong Li, Imed Zitouni, and Jin Young Kim.
Toward predicting the outcome of an A/B experiment for search relevance. In International Conference on Web Search and Data Mining, 2015.
[28] Kaare Brandt Petersen, Michael Syskind Pedersen, et al. The matrix cookbook. Technical University of Denmark, 2008.
[29] Lijing Qin, Shouyuan Chen, and Xiaoyan Zhu. Contextual combinatorial bandit and its application on diversified online recommendation. In International Conference on Data Mining, 2014.
[30] Tao Qin and Tie-Yan Liu. Introducing LETOR 4.0 datasets. arXiv:1306.2597, 2013.
[31] Paat Rusmevichientong and John N Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 2010.
[32] Adith Swaminathan and Thorsten Joachims. Counterfactual risk minimization: Learning from logged bandit feedback. In International Conference on Machine Learning, 2015.
[33] Niek Tax, Sander Bockting, and Djoerd Hiemstra. A cross-benchmark comparison of 87 learning to rank methods. Information Processing and Management, 2015.
[34] Yu-Xiang Wang, Alekh Agarwal, and Miroslav Dudík. Optimal and adaptive off-policy evaluation in contextual bandits. In International Conference on Machine Learning, 2017.
[35] Yue Wang, Dawei Yin, Luo Jie, Pengyuan Wang, Makoto Yamada, Yi Chang, and Qiaozhu Mei. Beyond ranking: Optimizing whole-page presentation. In International Conference on Web Search and Data Mining, pages 103-112, 2016.
A multi-agent reinforcement learning model of common-pool resource appropriation

Julien Perolat* (DeepMind, London, UK) [email protected]
Charles Beattie (DeepMind, London, UK) [email protected]
Joel Z. Leibo* (DeepMind, London, UK) [email protected]
Karl Tuyls (University of Liverpool, Liverpool, UK) [email protected]
Vinicius Zambaldi (DeepMind, London, UK) [email protected]
Thore Graepel (DeepMind, London, UK) [email protected]

Abstract

Humanity faces numerous problems of common-pool resource appropriation. This class of multi-agent social dilemma includes the problems of ensuring sustainable use of fresh water, common fisheries, grazing pastures, and irrigation systems. Abstract models of common-pool resource appropriation based on non-cooperative game theory predict that self-interested agents will generally fail to find socially positive equilibria, a phenomenon called the tragedy of the commons. However, in reality, human societies are sometimes able to discover and implement stable cooperative solutions. Decades of behavioral game theory research have sought to uncover aspects of human behavior that make this possible. Most of that work was based on laboratory experiments where participants only make a single choice: how much to appropriate. Recognizing the importance of spatial and temporal resource dynamics, a recent trend has been toward experiments in more complex real-time video game-like environments. However, standard methods of non-cooperative game theory can no longer be used to generate predictions for this case. Here we show that deep reinforcement learning can be used instead. To that end, we study the emergent behavior of groups of independently learning agents in a partially observed Markov game modeling common-pool resource appropriation. Our experiments highlight the importance of trial-and-error learning in common-pool resource appropriation and shed light on the relationship between exclusion, sustainability, and inequality.

1 Introduction

Natural resources like fisheries, groundwater basins, and grazing pastures, as well as technological resources like irrigation systems and access to geosynchronous orbit, are all common-pool resources (CPRs). It is difficult or impossible for agents to exclude one another from accessing them. But whenever an agent obtains an individual benefit from such a resource, the remaining amount available for appropriation by others is ever-so-slightly diminished. These two seemingly innocent properties of CPRs combine to yield numerous subtle problems of motivation in organizing collective action [12, 26, 27, 6]. The necessity of organizing groups of humans for effective CPR appropriation, combined with its notorious difficulty, has shaped human history. It remains equally critical today.

* indicates equal contribution

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Renewable natural resources† have a stock component and a flow component [10, 35, 7, 26]. Agents may choose to appropriate resources from the flow. However, the magnitude of the flow depends on the state of the stock‡. Over-appropriation negatively impacts the stock, and thus has a negative impact on future flow. Agents secure individual rewards when they appropriate resource units from a CPR. However, the cost of such appropriation, felt via its impact on the CPR stock, affects all agents in the community equally.
Economic theory predicts that as long as each individual's share of the marginal social cost is less than their marginal gain from appropriating an additional resource unit, agents will continue to appropriate from the CPR. If such over-appropriation continues unchecked for too long then the CPR stock may become depleted, thus cutting off future resource flows. Even if an especially clever agent were to realize the trap, they still could not unilaterally alter the outcome by restraining their own behavior. In other words, CPR appropriation problems have socially-deficient Nash equilibria. In fact, the choice to appropriate is typically dominant over the choice to show restraint (e.g. [32]). No matter what the state of the CPR stock, agents prefer to appropriate additional resources for themselves over the option of showing restraint, since in that case they receive no individual benefit but still endure the cost of CPR exploitation by others.

[Figure 1: (a) The initial state of the Commons Game at the start of each episode on the large open map used in Sections 3.2, 3.3, and 3.5. Apples are green, walls are grey, and players are red or blue. (b) The initial state of the small map used for the single-agent experiment (Section 3.1); the size of the window of pixels a player receives as an observation is also shown.]

Nevertheless, despite such pessimistic theoretical predictions, human communities frequently are able to self-organize to solve CPR appropriation problems [26, 28, 27, 6]. A major goal of laboratory-based behavioral work in this area is to determine what it is about human behavior that makes this possible. Being based on behavioral game theory [4], most experimental work on human CPR appropriation behavior features highly abstracted environments where the only decision to make is how much to appropriate (e.g. [29]). The advantage of such a setup is that the theoretical predictions of non-cooperative game theory are clear. However, this is achieved by sacrificing the opportunity to model spatial and temporal dynamics, which are important in real-world CPRs [26]. This approach also downplays the role of trial-and-error learning.

One recent line of behavioral research on CPR appropriation features significantly more complex environments than the abstract matrix games that came before [16, 18, 17, 14, 15]. In a typical experiment, a participant controls the movements of an on-screen avatar in a real-time video game-like environment that approximates a CPR with complex spatial and temporal dynamics. They are compensated proportionally to the amount of resources they collect. Interesting behavioral results have been obtained with this setup. For example, [18] found that participants often found cooperative solutions that relied on dividing the CPR into separate territories. However, due to the increased complexity of the environment model used in this new generation of experiments, the standard tools of non-cooperative game theory can no longer be used to generate predictions.

We propose a new model of common-pool resource appropriation in which learning takes center stage. It consists of two components: (1) a spatially and temporally dynamic CPR environment, similar to [17], and (2) a multi-agent system consisting of N independent self-interested deep reinforcement learning agents.
On the collective level, the idea is that self-organization to solve CPR appropriation problems works by smoothly adjusting over time the incentives felt by individual agents through a process akin to trial and error. This collective adjustment process is the aggregate result of all the many individual agents simultaneously learning how best to respond to their current situation.

† Natural resources may or may not be renewable. However, this paper is only concerned with those that are.
‡ CPR appropriation problems are concerned with the allocation of the flow. In contrast, CPR provision problems concern the supply of the stock. This paper only addresses the appropriation problem and we will say no more about CPR provision. See [7, 26] for more on the distinction between the two problems.

This model of CPR appropriation admits a diverse range of emergent social outcomes. Much of the present paper is devoted to developing methodology for analyzing such emergence. For instance, we show how the behavior of groups may be characterized along four social outcome metrics called efficiency, equality, sustainability, and peace. We also develop an N-player empirical game-theoretic analysis that allows one to connect our model back to standard non-cooperative game theory. It allows one to determine classical game-theoretic properties like Nash equilibria for strategic games that emerge from learning in our model.

Our point is not to argue that we have a more realistic model than standard non-cooperative game theory. This is also a reductionist model. However, it emphasizes different aspects of real-world CPR problems. It makes different assumptions and thus may be expected to produce new insights for the general theory of CPR appropriation that were missed by the existing literature's focus on standard game theory models. Our results are broadly compatible with previous theory while also raising a new possibility: that trial-and-error learning may be a powerful mechanism for promoting sustainable use of the commons.

2 Modeling and analysis methods

2.1 The commons game

The goal of the Commons Game is to collect "apples" (resources). The catch is that the apple regrowth rate (i.e. CPR flow) depends on the spatial configuration of the uncollected apples (i.e. the CPR stock): the more nearby apples, the higher the regrowth rate. If all apples in a local area are harvested then none ever grow back until the end of the episode (1000 steps), at which point the game resets to an initial state. The dilemma is as follows. The interests of the individual lead toward harvesting as rapidly as possible. However, the interests of the group as a whole are advanced when individuals refrain from doing so, especially in situations where many agents simultaneously harvest in the same local region. Such situations are precarious because the more harvesting agents there are, the greater the chance of bringing the local stock down to zero, at which point it cannot recover.

So far, the proposed Commons Game is quite similar to the dynamic game used in human behavioral experiments [16, 18, 17, 14, 15]. However, it departs in one notable way. In the behavioral work, especially [17], participants were given the option of paying a fee in order to fine another participant, reducing their score. In contrast, in our Commons Game, agents can tag one another with a "time-out beam". Any agent caught in the path of the beam is removed from the game for 25 steps. Neither the tagging nor the tagged agent receives any direct reward or punishment from this. However, the tagged agent loses the chance to collect apples during its time-out period and the tagging agent loses a bit of time chasing and aiming, thus paying the opportunity cost of foregone apple consumption. We argue that such a mechanism is more realistic because it has an effect within the game itself, not just on the scores.
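To make the resource dynamics concrete, here is an illustrative Python sketch of a respawn rule of the kind described at the start of this subsection: an empty cell regrows an apple with probability increasing in the number of uncollected apples nearby, and never regrows once its neighborhood is empty. The neighborhood radius and the probability table below are our own assumptions, not the environment's exact parameters:

```python
# Illustrative apple-respawn rule: regrowth probability increases with the
# number of nearby uncollected apples and is zero when none remain nearby.
import numpy as np

RESPAWN_P = {0: 0.0, 1: 0.01, 2: 0.05, 3: 0.1}   # hypothetical rates

def step_regrowth(apples, rng, radius=2):
    """apples: boolean H x W grid of uncollected apples; returns updated grid."""
    new = apples.copy()
    h, w = apples.shape
    for y in range(h):
        for x in range(w):
            if apples[y, x]:
                continue                          # occupied cells stay occupied
            window = apples[max(0, y - radius):y + radius + 1,
                            max(0, x - radius):x + radius + 1]
            k = min(int(window.sum()), 3)         # nearby apples, capped at 3
            if rng.random() < RESPAWN_P[k]:
                new[y, x] = True                  # never fires when k == 0
    return new
```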
[Figure 2: (a) Single-agent returns as a function of training steps. (b) The optimal resource appropriation policy for a single agent on this map. At convergence, the agent we study nearly learns this policy: https://youtu.be/NnghJgsMxAY.]

The Commons Game is a partially-observable general-sum Markov Game [33, 22]. In each state of the game, agents take actions based on a partial observation of the state space and receive an individual reward. Agents must learn through experience an appropriate behavior policy while interacting with one another.

In technical terms, we consider an N-player partially observable Markov game M defined on a finite set of states S. The observation function O : S × {1, . . . , N} → R^d specifies each player's d-dimensional view on the state space. In any state, players are allowed to take actions from the set A^1, . . . , A^N (one for each player). As a result of their joint action a^1, . . . , a^N ∈ A^1, . . . , A^N the state changes following the stochastic transition function T : S × A^1 × · · · × A^N → Δ(S) (where Δ(S) denotes the set of discrete probability distributions over S) and every player receives an individual reward defined as
These algorithms were developed for the single agent case and are applied independently here [21, 3] even though this multi-agent context breaks the Markov assumption [20]. The algorithm we use is Q-learning with function approximation (i.e. DQN) [24]. In Q-learning, the policy of agent i is implicitly represented through a state-action value function Qi (O(s, i), a) (also written Qi (s, a) in the following). The policy of agent i is an ?-greedy policy and is defined by ? i (a|O(s, i)) = (1 ?)1a=arg max Qi (s,a) + |A?i | . The parameter ? controls the a amount of exploration. The Q-function Qi is learned to minimize the bellman residual kQi (oi , ai ) ri max Qi (o0i , b)k on data collected through interaction with the environment (oi , ai , ri , o0i ) in b {(oit , ait , rti , oit+1 )} (where oit = O(st , i)). 2.3 Social outcome metrics Unlike in single-agent reinforcement learning where the value function is the canonical metric of agent performance, in multi-agent systems with mixed incentives like the Commons Game, there is no scalar metric that can adequately track the state of the system (see e.g. [5]). Thus we introduce four key social outcome metrics in order to summarize group behavior and facilitate its analysis. Consider N independent agents. Let {rti | t = 1, . . . , T } be the sequence of rewards obtained by the i-th agent over an episode of duration T . Likewise, let {oit | t = 1, . . . T } be the i-th agent?s PT observation sequence. Its return is given by Ri = t=1 rti . The Utilitarian metric (U ), also known as Efficiency, measures the sum total of all rewards obtained by all agents. It is defined as the average over players of sum of rewards Ri . The Equality metric (E) is defined using the Gini coefficient [8]. The Sustainability metric (S) is defined as the average time at which the rewards are collected. The Peace metric (P ) is defined as the average number of untagged agent steps. U =E P = "P N i=1 h E NT T Ri # , E=1 PN PT i=1 T PN PN i t=1 I(ot ) i=1 2N i i j=1 |R PN i i=1 R Rj | where I(o) = ( 4 , S=E 1 0 " N 1 X i t N i=1 # where ti = E[t | rti > 0]. if o = time-out observation otherwise. 3 Results 3.1 Sustainable appropriation in the single-agent case In principle, even a single agent, on its own, may learn a strategy that over-exploits and depletes its own private resources. However, in the single-agent case, such a strategy could always be improved by individually adopting a more sustainable strategy. We find that, in practice, agents are indeed able to learn an efficient and sustainable appropriation policy in the single-agent case (Fig. 2). 3.2 Emergent social outcomes Now we consider the multi-agent case. Unlike in the single agent case where learning steadily improved returns (Fig. 2-a), in the multi-agent case, learning does not necessarily increase returns. The returns of a single agent are also a poor indicator of the group?s behavior. Thus we monitor how the social outcome metrics that we defined in Section 2.3 evolve over the course of training (Fig. 3). The system moves through 3 phases characterized by qualitatively different behaviors and social outcomes. Phase 1, which we may call na?vety, begins at the start of training and extends until ? 900 episodes. It is characterized by healthy CPR stocks (high apple density). Agents begin training by acting randomly, diffusing through the space and collecting apples whenever they happen upon them. 
Apples density is high enough that the overall utilitarian efficiency (U ) is quite high, and in fact is close to the max it will ever attain. As training progresses, agents learn to move toward regions of greater apple density in order to more efficiently harvest rewards. They detect no benefit from their tagging action and quickly learn not to use it. This can be seen as a steady increase in the peace metric (P ) (Fig. 3). In a video? of typical agent behavior in the na?vety phase, it can be seen that apples remain plentiful (the CPR stock remains healthy) throughout the entire episode. Phase 2, which we may call tragedy, begins where na?vety ends (? episode 900), it is characterized by rapid and catastrophic depletion of CPR stock in each episode. The sustainability metric (S), which had already been decreasing steadily with learning in the previous phase, now takes a sudden and drastic turn downward. It happens because agents have learned ?too well? how to appropriate from the CPR. With each agent harvesting as quickly as they possibly can, no time is allowed for the CPR stock to recover. It quickly becomes depleted. As a result, utilitarian efficiency (U ) declines precipitously. At the low point, agents are collecting less than half as many apples per episode as they did at the very start of training?when they were acting randomly (Fig. 3). In a video? of agent play at the height of the tragedy one can see that by ? 500 steps into the (1100-step) episode, the stock has been completely depleted and no more apples can grow. Figure 3: Evolution of the different social outcome metrics (Sec.2.3) over the course of training on the open map (Fig.1a) using a time-out beam of length 10 and width 5. From top to bottom is displayed, the utility metric (U ), the sustainability Phase 3, which we may call maturity, begins when efmetric (S), the equality metric (E), and the peace ficiency and sustainability turn the corner and start to metric (P ). recover again after their low point (? episode 1500) and continues indefinitely. Initially, conflict breaks out when agents discover that, in situations of great ? ? learned policy after 100 episodes https://youtu.be/ranlu_9ooDw. learned policy after 1100 episodes https://youtu.be/1xF1DoLxqyQ. 5 apple scarcity, it is possible to tag another agent to prevent them from taking apples that one could otherwise take for themselves. As learning continues, this conflict expands in scope. Agents learn to tag one another in situations of greater and greater abundance. The peace metric (P ) steadily declines (Fig. 3). At the same time, efficiency (U ) and sustainability (S) increase, eventually reaching and slightly surpassing their original level from before tragedy struck. How can efficiency and sustainability increase while peace declines? When an agent is tagged by another agent?s beam, it gets removed from the game for 25 steps. Conflict between agents in the Commons Game has the effect of lowering the effective population size and thus relieving pressure on the CPR stock. With less agents harvesting at any given time, the survivors are free to collect with greater impunity and less risk of resource depletion. This effect is evident in a videok of agent play during the maturity phase. Note that the CPR stock is maintained through the entire episode. By contrast, in an analogous experiment with the tagging action disabled, the learned policies were much less sustainable (Supp. Fig. 11). 
taggers non-taggers (a) Territorial effect (b) Histogram of the equality metric on the territory maps Figure 4: (a) Scatter plot of return by range size (variance of position) for individual agents in experiments with one tagging agent (red dots, one per random seed) and 11 non-tagging agents (blue dots, eleven per random seed). The tagging players collect more apples per episode than the others and remain in a smaller part of the map. This illustrates that the tagging players take over a territory and harvest sustainably within its boundary. (b) represents the distribution of the equality metric (E) for different runs on four different maps with natural regions from which it may be possible to exclude other. The first map is the standard map from which others will be derived (Fig. 6c). In the second apples are more concentrated on the top left corner and will respawn faster (Fig. 6d). the third is porous meaning it is harder for an agent to protect an area (Fig 6e). On the fourth map, the interiors walls are removed (Fig. 6f). Figure 4b shows inequality rises in maps where players can exclude one another from accessing the commons. 3.3 Sustainability and the emergence of exclusion Suppose, by building a fence around the resource or some other means, access to it can be made exclusive to just one agent. Then that agent is called the owner and the resource is called a private good [30]. The owner is incentivized to avoid over-appropriation so as to safeguard the value of future resource flows from which they and they alone will profit. In accord with this, we showed above (Fig. 2) that sustainability can indeed be achieved in the single agent case. Next, we wanted to see if such a strategy could emerge in the multi-agent case. The key requirement is for agents to somehow be able to exclude one another from accessing part of the CPR, i.e. a region of the map. To give an agent the chance to exclude others we had to provide it with an advantage. Thus we ran an experiment where only one out of the twelve agents could use the tagging action. In this experiment, the tagging agent learned a policy of controlling a specific territory by using its time-out beam to exclude other agents from accessing it. The tagging agents roam over a smaller part of the map than the non-tagging agents but achieve better returns (Fig. 4a). This is because the non-tagging agents generally failed to organize a sustainable appropriation pattern k learned policy after 3900 episodes https://youtu.be/XZXJYgPuzEI. 6 All players L: taggers R: non-taggers All players L: taggers R: non-taggers number of non-taggers number of non-taggers (a) Early training (after 500 episodes) Schelling dia(b) Late training (after 3,000 episodes) Schelling diagram for L = taggers and R = non-taggers gram for L = taggers and R = non-taggers Figure 5: Schelling diagram from early (5a) and late (5b) in training for the experiment where L = taggers and R = non-taggers. and depleted the CPR stock in the area available to them (the majority of the map). The tagging agent, on the other hand, was generally able to maintain a healthy stock within its ?privatized? territory?? . Interestingly, territorial solutions to CPR appropriation problems have emerged in real-world CPR problems, especially fisheries [23, 1, 36]. Territories have also emerged spontaneously in laboratory experiments with a spatially and temporally dynamic commons game similar to the one we study here [18]. 
3.4 Emergence of inequality

To further investigate the emergence of exclusion strategies using agents that all have the same abilities (all can tag), we created four new maps with natural regions enclosed by walls (see Supp. Fig. 6). The idea is that it is much easier to exclude others from a territory that has only a single entrance than from one with multiple entrances or no walls at all. This manipulation had a large effect on the equality of outcomes: easier exclusion led to greater inequality (Fig. 4b). The lucky agent that was first to learn how to exclude others from "its territory" could then monopolize the lion's share of the rewards for a long time (Supp. Figs. 7a and 7b). In one map with unequal apple density between the four regions, the other agents were never able to catch up and achieve returns comparable to the first-to-learn agent (Supp. Fig. 7b). On the other hand, on the maps where exclusion was more difficult, there was no such advantage to being the first to learn (Supp. Figs. 7c and 7d).

3.5 Empirical game-theoretic analysis of emergent strategic incentives

We use empirical game-theoretic analysis to characterize the strategic incentives facing agents at different points over the course of training. As in [21], we use simulation to estimate the payoffs of an abstracted game in which agents choose their entire policy as a single decision with two alternatives. However, the method of [21] cannot be applied directly to the case of N > 2 players that we study in this paper. Instead, we look at Schelling diagrams [32]. They provide an intuitive way to summarize the strategic structure of a symmetric N-player, 2-action game where everyone's payoffs depend only on the number of others choosing one way or the other. Following Schelling's terminology, we refer to the two alternatives as L and R (left and right). We include in the appendix several examples of Schelling diagrams produced from experiments using different ways of assigning policies to the L and R groups (Supp. Fig. 8).

In this section we restrict our attention to an experiment where L is the choice of adopting a policy that uses the tagging action and R the choice of a policy that does not tag. A Schelling diagram is interpreted as follows. The green curve is the average return obtained by a player choosing L (a tagger) as a function of the number of players choosing R (non-taggers). Likewise, the red curve is the average return obtained by a player choosing R as a function of the number of other players also choosing R. The average return of all players is shown in blue. At the leftmost point, |R| = 0 implies |L| = N, so the blue curve must coincide with the green one. At the rightmost point, |R| = N implies |L| = 0, so the blue curve coincides with the red curve. Properties of the strategic game can be read off from the Schelling diagram. For example, in Fig. 5b one can see that the choice of a tagging policy is dominant over the choice of a non-tagging policy since, for any |R|, the expected return of the L group is always greater than that of the R group. This implies that the Nash equilibrium is at |R| = 0 (all players tagging). The Schelling diagram also shows that the collective maximum (the blue curve's maximum) occurs when |R| = 7. So the Nash equilibrium is socially deficient in this case.
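A minimal sketch of how the three curves of a Schelling diagram might be estimated by simulation follows; the sampling scheme and function names are illustrative assumptions, not the authors' code, and for simplicity the R curve here is indexed by the total number of non-taggers rather than by the number of other non-taggers.

import numpy as np

def schelling_diagram(simulate_episode, N, trials=100):
    """Estimate the three curves of a Schelling diagram.

    simulate_episode(num_R) is assumed to run one episode with num_R players
    using the R policy (non-tagging) and N - num_R using the L policy
    (tagging), returning the lists of per-player returns for each group.
    """
    L_curve, R_curve, avg_curve = [], [], []
    for num_R in range(N + 1):
        L_rets, R_rets = [], []
        for _ in range(trials):
            rl, rr = simulate_episode(num_R)
            L_rets.extend(rl)
            R_rets.extend(rr)
        L_curve.append(np.mean(L_rets) if L_rets else np.nan)   # green curve
        R_curve.append(np.mean(R_rets) if R_rets else np.nan)   # red curve
        avg_curve.append(np.mean(L_rets + R_rets))              # blue curve
    # L dominates R if the green curve lies above the red one everywhere;
    # the collective maximum is the argmax of the blue curve.
    return L_curve, R_curve, avg_curve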
In addition to being able to describe the strategic game faced by agents at convergence, we can also investigate how the strategic incentives facing agents evolve over the course of learning. Fig. 5a shows that the strategic game after 500 training episodes is one with a uniform negative externality: no matter whether one is a tagger or a non-tagger, the effect of switching one additional other agent from the tagging group to the non-tagging group is to decrease one's return. After 3000 training episodes the strategic situation is different (Fig. 5b). Now, for |R| > 5, there is a contingent externality: switching one additional agent from tagging to non-tagging has a positive effect on the remaining taggers and a negative effect on the non-taggers (the green and red curves have differently signed slopes).

4 Discussion

This paper describes how algorithms arising from reinforcement learning research may be applied to build new kinds of models for phenomena drawn from the social sciences. As such, this paper really has two audiences. For social scientists, the core conclusions are as follows. (1) Unlike most game-theory-based approaches, where modelers typically hand-engineer specific strategies like tit-for-tat [2] or win-stay-lose-shift [25], here agents must learn how to implement their strategic decisions. This means that the resulting behaviors are emergent. For example, in this case the tragedy of the commons was "solved" by reducing the effective population size below the environment's carrying capacity, but this outcome was not assumed. (2) This model endogenizes exclusion. That is, it allows agents to learn strategies wherein they exclude others from a portion of the CPR. Then, in accord with predictions from economics [26, 1, 18, 36], sustainable appropriation strategies emerge more readily in the "privatized" zones than they do elsewhere. (3) Inequality emerges when exclusion policies are easier to implement. In particular, natural boundaries in the environment make inequality more likely to arise.

From the perspective of reinforcement learning research, the most interesting aspect of this model is that, despite the fact that all agents learn only toward their individual objectives, tracking individual rewards over the course of training is insufficient to characterize the state of the system. These results illustrate how multiple simultaneously learning agents may continually improve in "competence" without improving their expected discounted returns. Indeed, learning may even decrease returns in cases where too-competent agents end up depleting the commons. Without the social outcome metrics (efficiency, equality, sustainability, and peace) and the other analyses employed here, such emergent events could not have been detected. This insight is widely applicable to other general-sum Markov games with mixed incentives (e.g., [19, 21]).

This is a reductionist approach. Notice what is conspicuously absent from the model we have proposed. The process by which groups of humans self-organize to solve CPR problems is usually conceptualized as one of rational negotiation (e.g., [26]). People do things like bargain with one another, attempt to build consensus for collective decisions, think about each other's thoughts, and make arbitration appeals to local officials. The agents in our model can't do anything like that. Nevertheless, we still find it is sometimes possible for self-organization to resolve CPR appropriation problems.
Moreover, examining the pattern of success and failure across variants of our model yields insights that appear readily applicable to understanding human CPR appropriation behavior. This raises the question: how much of human cognitive sophistication is really needed to find adequate solutions to CPR appropriation problems? We note that nonhuman organisms also solve them [31]. This suggests that trial-and-error learning alone, without advanced cognitive capabilities, may sometimes be sufficient for effective CPR appropriation.

References

[1] James M Acheson and Roy J Gardner. Spatial strategies and territoriality in the Maine lobster industry. Rationality and Society, 17(3):309-341, 2005.
[2] Robert Axelrod. The Evolution of Cooperation. Basic Books, 1984.
[3] Lucian Busoniu, Robert Babuska, and Bart De Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2(38):156-172, 2008.
[4] Colin F Camerer. Progress in behavioral game theory. The Journal of Economic Perspectives, 11(4):167-188, 1997.
[5] Georgios Chalkiadakis and Craig Boutilier. Coordination in multiagent reinforcement learning: a Bayesian approach. In The Second International Joint Conference on Autonomous Agents & Multiagent Systems, AAMAS 2003, July 14-18, 2003, Melbourne, Victoria, Australia, Proceedings, pages 709-716, 2003.
[6] Thomas Dietz, Elinor Ostrom, and Paul C Stern. The struggle to govern the commons. Science, 302(5652):1907-1912, 2003.
[7] Roy Gardner, Elinor Ostrom, and James M Walker. The nature of common-pool resource problems. Rationality and Society, 2(3):335-358, 1990.
[8] C. Gini. Variabilità e mutabilità: contributo allo studio delle distribuzioni e delle relazioni statistiche. Number pt. 1 in Studi economico-giuridici pubblicati per cura della facoltà di Giurisprudenza della R. Università di Cagliari. Tipogr. di P. Cuppini, 1912.
[9] Piotr J Gmytrasiewicz and Prashant Doshi. A framework for sequential planning in multi-agent settings. Journal of Artificial Intelligence Research, 24:49-79, 2005.
[10] H Scott Gordon. The economic theory of a common-property resource: the fishery. Journal of Political Economy, 62(2):124-142, 1954.
[11] A. Greenwald and K. Hall. Correlated-Q learning. In Proceedings of the 20th International Conference on Machine Learning (ICML), pages 242-249, 2003.
[12] Garrett Hardin. The tragedy of the commons. Science, 162(3859):1243-1248, 1968.
[13] J. Hu and M. P. Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm. In Proceedings of the 15th International Conference on Machine Learning (ICML), pages 242-250, 1998.
[14] Marco Janssen. Introducing ecological dynamics into common-pool resource experiments. Ecology and Society, 15(2), 2010.
[15] Marco Janssen. The role of information in governing the commons: experimental results. Ecology and Society, 18(4), 2013.
[16] Marco Janssen, Robert Goldstone, Filippo Menczer, and Elinor Ostrom. Effect of rule choice in dynamic interactive spatial commons. International Journal of the Commons, 2(2), 2008.
[17] Marco A Janssen, Robert Holahan, Allen Lee, and Elinor Ostrom. Lab experiments for the study of social-ecological systems. Science, 328(5978):613-617, 2010.
[18] Marco A Janssen and Elinor Ostrom. Turfs in the lab: institutional innovation in real-time dynamic spatial commons. Rationality and Society, 20(4):371-397, 2008.
[19] Max Kleiman-Weiner, M K Ho, J L Austerweil, Michael L Littman, and Josh B Tenenbaum. Coordinate to cooperate or compete: abstract goals and joint intentions in social interaction. In Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016.
[20] Guillaume J. Laurent, Laëtitia Matignon, and N. Le Fort-Piat. The world of independent learners is not Markovian. Int. J. Know.-Based Intell. Eng. Syst., 15(1):55-64, 2011.
[21] Joel Z. Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. Multi-agent reinforcement learning in sequential social dilemmas. In Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017), São Paulo, Brazil, 2017.
[22] M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the 11th International Conference on Machine Learning (ICML), pages 157-163, 1994.
[23] Kent O Martin. Play by the rules or don't play at all: space division and resource allocation in a rural Newfoundland fishing community. North Atlantic Maritime Cultures, pages 277-298, 1979.
[24] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[25] Martin Nowak, Karl Sigmund, et al. A strategy of win-stay, lose-shift that outperforms tit-for-tat in the prisoner's dilemma game. Nature, 364(6432):56-58, 1993.
[26] Elinor Ostrom. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, 1990.
[27] Elinor Ostrom, Joanna Burger, Christopher B Field, Richard B Norgaard, and David Policansky. Revisiting the commons: local lessons, global challenges. Science, 284(5412):278-282, 1999.
[28] Elinor Ostrom and Roy Gardner. Coping with asymmetries in the commons: self-governing irrigation systems can work. The Journal of Economic Perspectives, 7(4):93-112, 1993.
[29] Elinor Ostrom, Roy Gardner, and James Walker. Rules, Games, and Common-Pool Resources. University of Michigan Press, 1994.
[30] Vincent Ostrom and Elinor Ostrom. Public goods and public choices. In Alternatives for Delivering Public Services: Toward Improved Performance, pages 7-49. Westview Press, 1977.
[31] Daniel J Rankin, Katja Bargum, and Hanna Kokko. The tragedy of the commons in evolutionary biology. Trends in Ecology & Evolution, 22(12):643-651, 2007.
[32] Thomas C Schelling. Hockey helmets, concealed weapons, and daylight saving: a study of binary choices with externalities. Journal of Conflict Resolution, 17(3):381-428, 1973.
[33] L. S. Shapley. Stochastic games. In Proc. of the National Academy of Sciences of the United States of America, 1953.
[34] Y. Shoham, R. Powers, and T. Grenager. If multi-agent learning is the answer, what is the question? Artificial Intelligence, 171(7):365-377, 2007.
[35] Vernon L Smith. Economics of production from natural resources. The American Economic Review, 58(3):409-431, 1968.
[36] Rachel A Turner, Tim Gray, Nicholas VC Polunin, and Selina M Stead. Territoriality as a driver of fishers' spatial behavior in the Northumberland lobster fishery. Society & Natural Resources, 26(5):491-505, 2013.
[37] Pradeep Varakantham, Jun-young Kwak, Matthew E Taylor, Janusz Marecki, Paul Scerri, and Milind Tambe.
Exploiting coordination locales in distributed POMDPs via social model shaping. In Proceedings of the 19th International Conference on Automated Planning and Scheduling, ICAPS, 2009.
[38] Chao Yu, Minjie Zhang, Fenghui Ren, and Guozhen Tan. Emotional multiagent reinforcement learning in spatial social dilemmas. IEEE Transactions on Neural Networks and Learning Systems, 26(12):3083-3096, 2015.
On the Optimization Landscape of Tensor Decompositions

Rong Ge (Duke University, [email protected]) and Tengyu Ma (Facebook AI Research, [email protected])

Abstract

Non-convex optimization with local search heuristics has been widely used in machine learning, achieving many state-of-the-art results. It becomes increasingly important to understand why these methods work for NP-hard problems on typical data. The landscape of many objective functions in learning has been conjectured to have the geometric property that "all local optima are (approximately) global optima", so that they can be solved efficiently by local search algorithms. However, establishing such a property can be very difficult. In this paper, we analyze the optimization landscape of the random over-complete tensor decomposition problem, which has many applications in unsupervised learning, especially in learning latent variable models. In practice, it can be efficiently solved by gradient ascent on a non-convex objective. We show that for any small constant ε > 0, among the set of points with function values a (1 + ε)-factor larger than the expectation of the function, all the local maxima are approximate global maxima. Previously, the best-known result only characterized the geometry in small neighborhoods around the true components. Our result implies that even with an initialization that is barely better than a random guess, the gradient ascent algorithm is guaranteed to solve this problem. Our main techniques are the Kac-Rice formula and random matrix theory. To the best of our knowledge, this is the first time the Kac-Rice formula has been successfully applied to counting the number of local optima of a highly structured random polynomial with dependent coefficients.

1 Introduction

Non-convex optimization is the dominant algorithmic technique behind many state-of-the-art results in machine learning, computer vision, natural language processing and reinforcement learning. Local search algorithms through stochastic gradient methods are simple, scalable and easy to implement. Surprisingly, they also return high-quality solutions for practical problems like training deep neural networks, which are NP-hard in the worst case. It has been conjectured [DPG+14, CHM+15] that on typical data, the landscape of the training objectives has the nice geometric property that all local minima are (approximate) global minima. Such a property allows local search algorithms to converge to global minima [GHJY15, LSJR16, NP06, SQW15]. However, establishing it for concrete problems can be challenging.

Despite recent progress on understanding the optimization landscape of various machine learning problems (see [GHJY15, BBV16, BNS16, Kaw16, GLM16, HM16, HMR16] and references therein), a comprehensive answer remains elusive. Moreover, all previous techniques fundamentally rely on the spectral structure of the problems. For example, the approach of [GLM16] pins down the set of critical points (points with vanishing gradients) as approximate eigenvectors of some matrix. Among these eigenvectors we can further identify all the local minima. The heavy dependency on linear algebraic structure limits the generalization to problems with non-linearity (like neural networks). Towards developing techniques beyond linear algebra, in this work we investigate the optimization landscape of tensor decomposition problems.
This is a clean non-convex optimization problem whose optimization landscape cannot be analyzed by the previous approach. It also connects to the training of neural networks, with which it shares many properties [NPOV15]. For example, in comparison with the matrix case, where all the global optima reside on a (connected) Grassmannian manifold, for both tensors and neural networks all the global optima are isolated from each other.

Besides the technical motivations above, tensor decomposition itself is also the key algorithmic tool for learning many latent variable models: mixtures of Gaussians, hidden Markov models, dictionary learning [Cha96, MR06, HKZ12, AHK12, AFH+12, HK13], just to name a few. In practice, local search heuristics such as alternating least squares [CLA09], gradient descent and the power method [KM11] are popular and successful. Tensor decomposition also connects to the learning of neural networks [GLM17, JSA15, CS16]. For example, the work [GLM17] shows that the objective of learning a one-hidden-layer network is implicitly decomposing a sequence of tensors with shared components, and uses the intuition from tensor decomposition to design better objective functions that provably recover the parameters under Gaussian inputs.

Concretely, we consider decomposing a random 4th-order tensor T of rank n of the following form,
T = Σ_{i=1}^n a_i ⊗ a_i ⊗ a_i ⊗ a_i .
We are mainly interested in the over-complete regime where n ≫ d. This setting is particularly challenging, but it is crucial for unsupervised learning applications where the hidden representations have higher dimension than the data [AGMM15, DLCC07]. Previous algorithmic results either require access to higher-order tensors [BCMV14, GVX13], or use complicated techniques such as FOOBI [DLCC07] or sum-of-squares relaxations [BKS15, GM15, HSSS16, MSS16]. In the worst case, most tensor problems are NP-hard [Hås90, HL13]. Therefore we work in the average case, where the vectors a_i ∈ R^d are assumed to be drawn i.i.d. from the Gaussian distribution N(0, I). We call the a_i's the components of the tensor. We are given the entries of the tensor T, and our goal is to recover the components a_1, ..., a_n.

We will analyze the following popular non-convex objective,
max f(x) = Σ_{i,j,k,l ∈ [d]} T_{i,j,k,l} x_i x_j x_k x_l = Σ_{i=1}^n ⟨a_i, x⟩⁴   (1.1)
s.t. ‖x‖ = 1 .
It is known that for n ≪ d², the global maxima of f are close to one of ±(1/√d)a_1, ..., ±(1/√d)a_n. Previously, Ge et al. [GHJY15] showed that in the orthogonal case, where n ≤ d and all the a_i's are orthogonal, the objective function f(·) has only 2n local maxima, which are approximately ±(1/√d)a_1, ..., ±(1/√d)a_n. However, that technique heavily uses the orthogonality of the components and does not generalize to the over-complete case.

Empirically, projected gradient ascent and power methods find one of the components a_i even if n is significantly larger than d (a minimal sketch of this procedure is given below). The local geometry of the over-complete case around the true components is known: in a small neighborhood of each of the ±(1/√d)a_i's, there is a unique local maximum [AGJ15]. Algebraic-geometry techniques [CS13, ASS15] can show that f(·) has an exponential number of other critical points, while these techniques seem difficult to extend to the characterization of local maxima. It remains a major open question whether there are any other spurious local maxima that gradient ascent can potentially converge to.
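The following is a minimal sketch (not the authors' code; problem sizes and step counts are arbitrary) of projected gradient ascent / tensor power iteration for the objective (1.1). Each iteration maps x to the normalized gradient direction Σ_i ⟨a_i, x⟩³ a_i, which is exactly the tensor power step T(x, x, x, ·) up to normalization.

import numpy as np

def tensor_power_ascent(A, steps=500, seed=0):
    """Projected gradient ascent / tensor power method for
    f(x) = sum_i <a_i, x>^4 on the unit sphere.

    A: (n, d) matrix whose rows are the components a_i.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        c = A @ x                  # correlations <a_i, x>
        g = A.T @ (c ** 3)         # gradient direction: sum_i <a_i, x>^3 a_i
        x = g / np.linalg.norm(g)  # power step = ascent with a large step size
    return x

# A random over-complete instance; empirically x aligns with some +/- a_i.
d, n = 50, 200
A = np.random.default_rng(1).standard_normal((n, d))
x = tensor_power_ascent(A)
print(np.max(np.abs(A @ x) / np.linalg.norm(A, axis=1)))  # close to 1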
Main results. We show that there are no spurious local maxima in a large superlevel set that contains all the points with function values slightly larger than that of a random initialization.

Theorem 1.1. Let ε, δ ∈ (0, 1/3) be two arbitrary constants and let d be sufficiently large. Suppose d^{1+δ} < n < d^{2−δ}. Then, with high probability over the randomness of the a_i's, in the superlevel set
L = { x ∈ S^{d−1} : f(x) ≥ 3(1 + ε)n } ,   (1.2)
there are exactly 2n local maxima, with function values (1 ± o(1))d², each of which is Õ(√(n/d³))-close to one of ±(1/√d)a_1, ..., ±(1/√d)a_n.

Previously, the best-known result [AGJ15] only characterizes the geometry in small neighborhoods around the true components; that is, there exists one local maximum in each of the small constant neighborhoods around each of the true components a_i. (It turns out that in such neighborhoods, the objective function is actually convex.) We significantly enlarge this region to the superlevel set L, on which the function f is not convex and has an exponential number of saddle points, but still does not have any spurious local maxima. Note that a random initialization z on the unit sphere has expected function value E[f(z)] = 3n, so the superlevel set L contains all points whose function values are barely larger than that of the random guess. Hence, Theorem 1.1 implies that with an initialization only slightly better than the random guess, gradient ascent and the power method¹ are guaranteed to find one of the components in polynomial time. (It is known that after one component is found, it can be peeled off from the tensor and the same algorithm repeated to find all the other components.)

Corollary 1.2. In the setting of Theorem 1.1, with high probability over the choice of the a_i's, given any starting point x₀ that satisfies f(x₀) ≥ 3(1 + ε)n, stochastic projected gradient descent² will find one of the ±(1/√d)a_i's up to Õ(√(n/d³)) Euclidean error in polynomial time.

We also strengthen Theorem 1.1 and Corollary 1.2 slightly (see Theorem 3.1): the same conclusion still holds with ε = O(√(d/n)), which is smaller than a constant. Note that the expected value of a random initialization is 3n, and we only require an initialization that is slightly better than a random guess in function value. We remark that a uniformly random point x on the unit sphere is not in the set L with high probability; it is an intriguing open question to characterize the landscape in the complement of the set L. We also conjecture that from a random initialization, a constant number of projected gradient descent steps (with optimal step size) suffices to achieve function value 3(1 + ε)n with ε = O(√(d/n)). This conjecture (an interesting question for future work) is based on the hypothesis that the first constant number of steps of gradient descent can make improvements similar to that of the first step (which is c√(dn) for a universal constant c).

As a comparison, previous works such as [AGJ15] require an initialization with function value Ω(d²) ≫ n. Anandkumar et al. [AGJ16] analyze the dynamics of the tensor power method with a delicate initialization that is independent of the randomness of the tensor; this is not suitable for the situation where the initialization comes from the result of another algorithm, and it does not have a direct implication on the landscape of f(·).
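As a quick Monte Carlo sanity check of the claim E[f(z)] = 3n for a uniformly random unit vector z (the fourth moment of a one-dimensional standard Gaussian is 3); the dimensions below are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 1000
A = rng.standard_normal((n, d))
z = rng.standard_normal((d, 2000))
z /= np.linalg.norm(z, axis=0)          # 2000 uniformly random unit vectors
f_vals = ((A @ z) ** 4).sum(axis=0)     # f(z) = sum_i <a_i, z>^4
print(f_vals.mean() / n)                # approximately 3, i.e. E[f(z)] = 3n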
We note that the local maxima of f(·) correspond to the robust eigenvectors of the tensor. Using this language, our theorem says that a robust eigenvector of an over-complete tensor with random components is either one of the true components or has a small correlation with the tensor, in the sense that ⟨T, x^{⊗4}⟩ is small. This improves significantly upon the understanding of robust eigenvectors [ASS15] under an interesting random model.

The condition n > d^{1+δ} should be artificial. The under-complete case (n < d) can be proved by re-using the proof of [GHJY15], together with the observation that local optima are preserved by linear transformations. The intermediate regime d < n < d^{1+δ} should be analyzable by the Kac-Rice formula using similar techniques, but our current proof does not capture it directly; since the proof in this paper is already involved, we leave this case to future work. The condition n < d^{2−δ} matches the best over-completeness level that existing polynomial-time algorithms can handle [DLCC07, MSS16].

¹ The power method is exactly equivalent to gradient ascent with a properly chosen finite learning rate.
² By stochastic gradient descent we mean the algorithm analyzed in [GHJY15]. To get a global maximum in polynomial time (polynomial in log(1/ε) to get ε precision), one also needs to slightly modify stochastic gradient descent as follows: run SGD until 1/d accuracy and then switch to gradient descent. Since the problem is locally strongly convex, the local convergence is linear.

2 Our techniques

The proof of Theorem 1.1 uses the Kac-Rice formula (see, e.g., [AT09]), which is based on a counting argument. To build intuition, tentatively view the unit sphere as a collection of discrete points; for each point x one can compute the probability (with respect to the randomness of the function) that x is a local maximum, and adding up all these probabilities gives the expected number of local maxima. In continuous space, such a counting argument has to be more delicate, since the local geometry needs to be taken into account. This is formalized by the Kac-Rice formula (see Lemma 2.2).

However, the Kac-Rice formula only gives a closed-form expression that involves integrating the expectation of a complicated random variable, and it is often very challenging to simplify this expression into interpretable results. Before our work, Auffinger et al. [AAC13, AA+13] successfully applied the Kac-Rice formula to characterize the landscape of polynomials with random Gaussian coefficients. The exact expectation of the number of local minima can be computed there because the Hessian of such a random polynomial is a Gaussian orthogonal ensemble, whose eigenvalue distribution is well understood and has a closed-form expression.

Our technical contribution here is to successfully apply the Kac-Rice formula to structured random non-convex functions for which the formula cannot be exactly evaluated. The Hessian and gradient of f(·) have much more complicated distributions than the Gaussian orthogonal ensemble, so the Kac-Rice formula is difficult to evaluate exactly. We instead cut the space R^d into regions and use different techniques to estimate the number of local maxima in each; see the proof overview in Section 3. We believe our techniques can be extended to 3rd-order tensors and can shed light on the analysis of other non-convex problems with structured randomness.

Organization. In Section 2 we introduce preliminaries regarding manifold optimization and the Kac-Rice formula. We give a detailed explanation of our proof strategy in Section 3.
The technical details are deferred to the supplementary material. We also note that the supplementary material contains an extended version of the preliminaries and of the proof overview section below.

2 Notations and Preliminaries

We use Id_d to denote the identity matrix of dimension d × d. Let ‖·‖ denote the spectral norm of a matrix or the Euclidean norm of a vector, and let ‖·‖_F denote the Frobenius norm of a matrix or a tensor.

Gradient, Hessian, and local maxima on manifolds. We have a constrained optimization problem over the unit sphere S^{d−1}, which is a smooth manifold; thus we define local maxima with respect to the manifold. It is known that projected gradient descent on S^{d−1} behaves much the same on the manifold as in the usual unconstrained setting [BAC16]. In the supplementary material we give a brief introduction to manifold optimization and to the definitions of the gradient and the Hessian; we refer the readers to the book [AMS07] for more background. Here we use grad f and Hess f to denote the gradient and the Hessian of f on the manifold S^{d−1}, and we compute them in the following claim.

Claim 2.1. Let f : S^{d−1} → R be f(x) := (1/4) Σ_{i=1}^n ⟨a_i, x⟩⁴, and let P_x = Id_d − xxᵀ. Then the gradient and Hessian of f on the sphere can be written as
grad f(x) = P_x Σ_{i=1}^n ⟨a_i, x⟩³ a_i ,
Hess f(x) = 3 Σ_{i=1}^n ⟨a_i, x⟩² P_x a_i a_iᵀ P_x − ( Σ_{i=1}^n ⟨a_i, x⟩⁴ ) P_x .

A local maximum of a function f on the manifold S^{d−1} satisfies grad f(x) = 0 and Hess f(x) ⪯ 0. Let M_f be the set of all local maxima, i.e., M_f = { x ∈ S^{d−1} : grad f(x) = 0, Hess f(x) ⪯ 0 }.
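The formulas in Claim 2.1 are easy to check numerically. The sketch below (illustrative sizes, not from the paper) compares them with finite differences of f along a great circle through x: for a unit tangent vector v, the first and second derivatives of t ↦ f(cos(t)x + sin(t)v) at t = 0 should equal ⟨grad f(x), v⟩ and vᵀ Hess f(x) v respectively.

import numpy as np

rng = np.random.default_rng(0)
d, n, h = 20, 60, 1e-3
A = rng.standard_normal((n, d))
F = lambda y: 0.25 * np.sum((A @ y) ** 4)    # f as in Claim 2.1

x = rng.standard_normal(d); x /= np.linalg.norm(x)
v = rng.standard_normal(d); v -= (v @ x) * x; v /= np.linalg.norm(v)  # unit tangent

c = A @ x
grad = A.T @ (c ** 3)
grad -= (grad @ x) * x                                   # P_x sum_i <a_i,x>^3 a_i
PA = A - np.outer(c, x)                                  # rows are P_x a_i
hess = 3 * (PA * c[:, None] ** 2).T @ PA \
       - np.sum(c ** 4) * (np.eye(d) - np.outer(x, x))   # Hess f(x)

gp = np.cos(h) * x + np.sin(h) * v                       # points on the geodesic
gm = np.cos(h) * x - np.sin(h) * v
print((F(gp) - F(gm)) / (2 * h), grad @ v)               # first derivatives match
print((F(gp) - 2 * F(x) + F(gm)) / h ** 2, v @ hess @ v) # second derivatives match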
Kac-Rice formula. The Kac-Rice formula is a general tool for computing the expected number of special points on a manifold. Suppose there are two random functions P(·) : R^d → R^d and Q(·) : R^d → R^k, and an open set B in R^k. The formula counts the expected number of points x ∈ R^d that satisfy both P(x) = 0 and Q(x) ∈ B. Suppose we take P = ∇f and Q = ∇²f, and let B be the set of negative semidefinite matrices; then the set of points satisfying P(x) = 0 and Q(x) ∈ B is the set of all local maxima M_f. Moreover, for any set Z ⊆ S^{d−1}, we can also augment Q to Q = [∇²f, x] and choose B = {A : A ⪯ 0} × Z. With this choice of P, Q, the Kac-Rice formula can count the number of local maxima inside the region Z. For simplicity, we introduce the Kac-Rice formula only for this setting, and refer the readers to [AT09, Chapters 11 and 12] for more background.

Lemma 2.2 (informally stated). Let f be a random function defined on the unit sphere S^{d−1} and let Z ⊆ S^{d−1}. Under certain regularity conditions³ on f and Z, we have
E[|M_f ∩ Z|] = ∫_{S^{d−1}} E[ |det(Hess f)| · 1(Hess f ⪯ 0) · 1(x ∈ Z) | grad f(x) = 0 ] · p_{grad f(x)}(0) dx ,   (2.1)
where dx is the usual surface measure on S^{d−1} and p_{grad f(x)}(0) is the density of grad f(x) at 0.

Formula for the number of local maxima. In this subsection, we give a concrete formula for the number of local maxima of our objective function (1.1) inside the superlevel set L (defined in equation (1.2)). Taking Z = L in Lemma 2.2, this boils down to estimating the quantity on the right-hand side of (2.1). We remark that for the particular function f defined in (1.1) and Z = L, the integrand in (2.1) does not depend on the choice of x, because for any x ∈ S^{d−1} the triple (Hess f, grad f, 1(x ∈ L)) has the same joint distribution, characterized below.

Lemma 2.3. Let f be the random function defined in (1.1). Let α_1, ..., α_n ∼ N(0, 1) and b_1, ..., b_n ∼ N(0, Id_{d−1}) be independent Gaussian random variables, and let
M = ‖α‖₄⁴ · Id_{d−1} − 3 Σ_{i=1}^n α_i² b_i b_iᵀ   and   g = Σ_{i=1}^n α_i³ b_i .   (2.2)
Then, for any x ∈ S^{d−1}, (Hess f, grad f, f) has the same joint distribution as (−M, g, ‖α‖₄⁴).

Using Lemma 2.2 (with Z = L) and Lemma 2.3, we derive the following formula for the expectation of our random variable E[|M_f ∩ L|]. Later we will use Lemma 2.2 slightly differently, with another choice of Z.

Lemma 2.4. Using the notation of Lemma 2.3, let p_g(·) denote the density of g. Then,
E[|M_f ∩ L|] = Vol(S^{d−1}) · E[ |det(M)| · 1(M ⪰ 0) · 1(‖α‖₄⁴ ≥ 3(1 + ε)n) | g = 0 ] · p_g(0) .   (2.3)

³ We omit the long list of regularity conditions here for simplicity; see [AT09, Theorem 12.1.1] for more details.
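Lemma 2.3 can likewise be checked by simulation. The sketch below (a rough moment comparison with arbitrary sizes, not from the paper) samples the Hessian of Claim 2.1 directly at a fixed x, samples −M from the (α, b) model of (2.2), and compares the first two moments of their traces.

import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 8, 20, 20000
tr_lhs, tr_rhs = [], []
for _ in range(trials):
    # Direct side: Hess f at x = e_1, so P_x keeps coordinates 2..d.
    A = rng.standard_normal((n, d))
    alpha, B = A[:, 0], A[:, 1:]              # <a_i, x> and P_x a_i
    hess = 3 * (B * alpha[:, None] ** 2).T @ B - np.sum(alpha ** 4) * np.eye(d - 1)
    tr_lhs.append(np.trace(hess))
    # Model side: -M with (alpha, b) drawn as in (2.2).
    a = rng.standard_normal(n)
    b = rng.standard_normal((n, d - 1))
    M = np.sum(a ** 4) * np.eye(d - 1) - 3 * (b * a[:, None] ** 2).T @ b
    tr_rhs.append(np.trace(-M))
print(np.mean(tr_lhs), np.std(tr_lhs))   # moments of tr(Hess f) ...
print(np.mean(tr_rhs), np.std(tr_rhs))   # ... agree with those of tr(-M)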
L2 |] is much smaller than 1, then Markov?s inequality implies that with high probability, the number of local maxima will be exactly zero. Concretely, we will use Lemma 2.2 with Z = L1 ? L2 , and then estimate the resulting integral using various techniques in random matrix theory. It remains quite challenging even if we are only shooting for an estimate. Concretely, we get the following Theorem Theorem 3.2. Let sets L1 , L2 be defined as in equation (3.3) and n ? ?d log2 d. There exists universal small constant ? ? (0, 1) and universal constants ?, ?, and a high probability event G0 , such that the expected number of local maxima in L1 ? L2 conditioned on G0 is exponentially small:   ?d/2 . E |Mf ? L1 ? L2 | G0 ? 2 See Section 3.1 for an overview of the analysis. The purpose and definition of G0 are more technical and can be found in Section 3 of the supplementary material around equation (3.3) (3,4) and (3.5). We also prove that G0 is indeed a high probability event in supplementary material. 4 Local analysis. In the local region Lc2 , that is, the neighborhoods of a1 , . . . , an , we will show there are exactly 2n local maxima. As argued above, it?s almost impossible to get exact numbers out of the Kac-Rice formula since it?s often hard to compute the complicated integral. Moreover, Kac-Rice formula only gives the expected number but not high probability bounds. However, here the observation is that the local maxima (and critical points) in the local region are well-structured. Thus, instead, we show that in these local regions, the gradient and Hessian of a point x are dominated by the terms corresponding to components {ai }?s that are highly correlated with x. The number of such terms cannot be very large (by restricted isometry property, see Section B.5 of the supplementary material). As a result, we can characterize the possible local maxima explicitly, and eventually show there is exactly one local maximum in each of the local neighborhoods around {? ?1d ai }?s. Similar (but weaker) analysis was done before in [AGJ15]. We formalize the guarantee for local regions in the following theorem, which is proved in Section 5 of the supplementary material. In Section 3.2 of the supplementary material, we also discuss the key ideas of the proof of this Theorem. Theorem 3.3. Suppose 1/? 2 ? d log d ? n ? d2 / logO(1) d. Then, with high probability over the choice a1 , . . . , an , we have, |Mf ? L1 ? Lc2 | = 2n . (3.4) p e n/d3 )-close to one of ? ?1 a1 , . . . , ? ?1 an . Moreover, each of the point in L ? Lc2 is O( d d 4 We note again that the supplementary material contains more details in each section even for sections in the main text. The main Theorem 3.1 is a direct consequence of Theorem 3.2 and Theorem 3.3. The formal proof can be found in Section 3 of the supplementary material. In the next subsections we sketch the basic ideas behind the proof of Theorem 3.2 and Theorem 3.3. Theorem 3.2 is the crux of the technical part of the paper. 3.1 Estimating the Kac-Rice formula for the global region The general plan to prove Theorem 3.2 is to use random matrix theory to estimate the RHS of the Kac-Rice formula. We begin by applying Kac-Rice formula to our situation. We note that we dropped the effect of G0 in all of the following discussions since G0 only affects some technicality that appears in the details of the proof in the supplementary material. Applying Kac-Rice formula. 
The first step to apply Kac-Rice formula is to characterize the joint distribution of the gradient and the Hessian. We use the notation of Lemma 2.3 for expressing the joint distribution of (Hess f, grad f, 1(x ? L1 ? L2 )). For any fix x ? S d?1 , letP?i = hai , xi and bi =PPx ai (where Px = Id ? xx> ) and M = k?k44 ? Idd?1 ? n n 3 3 i=1 ?i2 bi b> and g = i i=1 ?i bi as defined in (2.2). In order to apply Kac-Rice formula, we?d like to compute the joint distribution of the gradient and the Hessian. We have that (Hess f, grad f, 1(x ? L1 ? L2 )) has the same distribution as (M, g, 1(E1 ? E2 ? E20 )),where E1 corresponds to the event that x ? L1 , n ? o E1 = k?k44 ? 3n + ? nd , and events E2 and E20 correspond to the events that x ? L2 . We separate them out to reflect that E2 and E20 depends the randomness of ?i ?s and bi ?s respectively.   E2 = k?k2? ? ?d , and E20 = ?i ? [n], kbi k2 ? (1 ? ?)d . Using Kac-Rice formula (Lemma 2.2 with Z = L1 ? L2 ), we conclude that d?1 ) ? E [|det(M )| 1(M  0)1(E1 ? E2 ? E20 ) | g = 0] pg (0) . E [|Mf ? L1 ? L2 |] = Vol(S (3.5) Next, towards proving Theorem 3.2 we will estimate the RHS of (3.5) using various techniques. Conditioning on ?. We observe that the distributions of the gradient g and Hessian M on the RHS of equation 3.5 are fairly complicated. In particular, we need to deal with the interactions of ?i ?s (the components along x) and bi ?s (the components in the orthogonal subspace of x). Therefore, we use the law of total expectation to first condition on ? and take expectation over the randomness of bi ?s, and then take expectation over ?i ?s. Let pg|? denotes the density of g | ?, using the law of total expectation, we have, 0 E [|det(M )| 1(M  0)1(E1 ? E2 ? E2 ) | g = 0] pg (0)   = E E [|det(M )| 1(M  0)1(E20 ) | g = 0, ?] 1(E1 )1(E2 )pg|? (0) . (3.6) Note that the inner expectation of RHS of (3.6) is with respect to the randomness of bi ?s and the outer one is with respect to ?i ?s. For notional convenience we define h(?) : Rn ? R as h(?) := Vol(S d?1 ) E [det(M )1(M  0)1(E20 ) | g = 0, ?] 1(E1 )1(E2 )pg|? (0) . Then, using the Kac-Rice formula (equation (2.3))5 and equation (3.5), we obtain the following explicit formula for the number of local maxima in L1 ? L2 . E [|Mf ? L1 ? L2 |] = E [h(?)] . (3.7) We note that pg|? (0) has an explicit expression since g | ? is Gaussian. For the ease of exposition, we separate out the hard-to-estimate part from h(?), which we call W (?): W (?) := E [det(M )1(M  0)1(E20 ) | g = 0, ?] 1(E1 )1(E2 ) . 5 (3.8) In Section C of the supplementary material, we rigorously verify the regularity condition of Kac-Rice formula. Therefore by definition, we have that h(?) = Vol(S d?1 )W (?)pg|? (0). Now, since we have conditioned on ?, the distributions of the Hessian, namely M | ?, is a generalized Wishart matrix which is slightly easier than before. However there are still several challenges that we need to address in order to estimate W (?). P 4 How to control det(M )1(M  0)? Recall that M = k?k4 ? 3 ?i2 bi b> i , which is a generalized Wishart matrix whose eigenvalue distribution has no (known) analytical expression. The determinant itself by definition is a high-degree polynomial over the entries, and in our case, a complicated polynomial over the random variables ?i ?s and vectors bi ?s. We also need to properly exploit the presence of the indicator function 1(M  0), since otherwise, the desired statement will not be true ? the function f has an exponential number of critical points. 
Fortunately, in most of the cases, we can use the following simple claim that bounds the determinant from above by the trace. The inequality is close to being tight when all the eigenvalues of M are similar to each other. More importantly, it uses naturally the indicator function 1(M  0)! Later we will see how to strengthen it when it?s far from tight. Claim 3.4. We have that  d?1 |tr(M )| 1(M  0) det(M )1(M  0) ? d?1 The claim is a direct consequence of AM-GM inequality on the eigenvalue of M . (Note that M is of dimension (d ? 1) ? (d ? 1). we give a formal proof in Section 3.1 of the supplementary material). It follows that   |tr(M )|d?1 | g = 0, ? 1(E1 ) . (3.9) W (?) ? E (d ? 1)d?1 Here we dropped the indicators for events E2 and E20 since they are not important for the discussion below. It turnsout that |tr(M )| is a random variable that concentrates very well, and thus we have  E |tr(M )|d?1 ? | E [tr(M )] |d?1 . It can be shown that (see Proposition 4.3 in the supplementary material for the detailed calculation),  4 2 8 6 E [tr(M ) | g = 0, ?] = (d ? 1) k?k4 ? 3k?k + 3k?k8 /k?k6 . Therefore using equation (3.9) and equation above, we have that d?1 W (?) ? k?k44 ? 3k?k2 + 3k?k88 /k?k66 1(E0 )1(E1 ) . 6 Note that since g | ? has Gaussian distribution, we have, pg|? (0) = (2?)?d/2 (k?k6 )?d/2 . Thus using two equations above, we can bound E [h(?)] by h i d?1 6 d?1 ) E k?k44 ? 3k?k2 + 3k?k88 /k?k66 ? (2?)?d/2 (k?k6 )?d/2 1(E0 )1(E1 ) . E [h(?)] ? Vol(S (3.10) Therefore, it suffices to control the RHS of (3.10), which is much easier than the original Kac-Rice formula. However, it turns out that RHS of (3.10) is roughly cd for some constant c > 1! Roughly speaking, this is because the high powers of a random variables is very sensitive to its tail. Two sub-cases according to max |?i |. We aim to find a tighter bond of E[h(?)] by re-using the idea in equation (3.10). Intuitively we can consider two separate situations events: the event F0 when all of the ?i ?s are close to constant and the complementary event F0c . Formally, let ? = Kn/d  where K is a universal constant that will be determined later. Let F0 be the event that .F0 = k?k4? ? ? . Then we control E [h(?)1(F0 )] and E [h(?)1(F0c )] separately. For the former, we basically need to reuse the equation (3.10) with an indicator function inserted inside the expectation. For the latter, we make use of the large coordinate, which contributes to the ?3?i2 bi b> i term in M and makes the probability of 1(M  0) extremely small. As a result det(M )1(M  0) is almost always 0. We formalized the two cases as below: Proposition 3.5. Let K ? 2 ? 103 be a universal constant. Let ? = Kn/d and let ?, ? be sufficiently large constants (depending on K). Then for any n ? ?d log2 d, we have that d/2 E [h(?)1(F0 )] ? (0.3) . Proposition 3.6. In the setting of Proposition 3.5, we have c d/2 E [h(?)1(F0 )] ? n ? (0.3) . We see that Theorem 3.2 can be obtained as a direct consequence of Proposition 3.5, Proposition 3.6 and equation (3.7). Due to space limit, we refer the readers to the supplementary material for an extended version of proof overview and the full proofs. 4 Conclusion We analyze the optimization landscape of the random over-complete tensor decomposition problem using the Kac-Rice formula and random matrix theory. We show that in the superlevel set L that contains all the points with function values barely larger than the random guess, there are exactly 2n local maxima that correspond to the true components. 
4 Conclusion

We analyze the optimization landscape of the random over-complete tensor decomposition problem using the Kac-Rice formula and random matrix theory. We show that in the superlevel set L, which contains all the points with function values barely larger than the random guess, there are exactly 2n local maxima, and they correspond to the true components. This implies that with an initialization slightly better than a random guess, local search algorithms converge to the desired solutions. We believe our techniques can be extended to 3rd-order tensors, or to other non-convex problems with structured randomness. The immediate open question is whether there is any other spurious local maximum outside this superlevel set; answering it seems to involve solving difficult questions in random matrix theory. Another potential approach to unraveling the mystery behind the success of non-convex methods is to analyze the early stage of local search algorithms and show that they enter the superlevel set L quickly from a good initialization.

References

[AA+13] Antonio Auffinger, Gérard Ben Arous, et al. Complexity of random smooth functions on the high-dimensional sphere. The Annals of Probability, 41(6):4214-4247, 2013.
[AAC13] Antonio Auffinger, Gérard Ben Arous, and Jiří Černý. Random matrices and complexity of spin glasses. Communications on Pure and Applied Mathematics, 66(2):165-201, 2013.
[AFH+12] Anima Anandkumar, Dean P. Foster, Daniel Hsu, Sham M. Kakade, and Yi-Kai Liu. A spectral algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 25, 2012.
[AGJ15] Animashree Anandkumar, Rong Ge, and Majid Janzamin. Learning overcomplete latent variable models through tensor methods. In Proceedings of the Conference on Learning Theory (COLT), Paris, France, 2015.
[AGJ16] Anima Anandkumar, Rong Ge, and Majid Janzamin. Analyzing tensor power method dynamics in overcomplete regime. JMLR, 2016.
[AGMM15] Sanjeev Arora, Rong Ge, Tengyu Ma, and Ankur Moitra. Simple, efficient and neural algorithms for sparse coding. In Proceedings of The 28th Conference on Learning Theory, 2015.
[AHK12] Anima Anandkumar, Daniel Hsu, and Sham M. Kakade. A method of moments for mixture models and hidden Markov models. In COLT, 2012.
[AMS07] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2007.
[ASS15] H. Abo, A. Seigal, and B. Sturmfels. Eigenconfigurations of tensors. ArXiv e-prints, May 2015.
[AT09] Robert J Adler and Jonathan E Taylor. Random Fields and Geometry. Springer Science & Business Media, 2009.
[BAC16] N. Boumal, P.-A. Absil, and C. Cartis. Global rates of convergence for nonconvex optimization on manifolds. ArXiv e-prints, May 2016.
[BBV16] Afonso S Bandeira, Nicolas Boumal, and Vladislav Voroninski. On the low-rank approach for semidefinite programs arising in synchronization and community detection. arXiv preprint arXiv:1602.04426, 2016.
[BCMV14] Aditya Bhaskara, Moses Charikar, Ankur Moitra, and Aravindan Vijayaraghavan. Smoothed analysis of tensor decompositions. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 594-603. ACM, 2014.
[BKS15] Boaz Barak, Jonathan A. Kelner, and David Steurer. Dictionary learning and tensor decomposition via the sum-of-squares method. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 143-151, 2015.
[BNS16] Srinadh Bhojanapalli, Behnam Neyshabur, and Nathan Srebro. Global optimality of local search for low rank matrix recovery. arXiv preprint arXiv:1605.07221, 2016.
[Cha96] Joseph T. Chang. Full reconstruction of Markov models on evolutionary trees: identifiability and consistency. Mathematical Biosciences, 137:51-73, 1996.
High-Order Attention Models for Visual Question Answering

Idan Schwartz, Department of Computer Science, Technion, [email protected]
Alexander G. Schwing, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, [email protected]
Tamir Hazan, Department of Industrial Engineering & Management, Technion, [email protected]

Abstract
The quest for algorithms that enable cognitive abilities is an important part of machine learning. A common trait in many recently investigated cognitive-like tasks is that they take into account different data modalities, such as visual and textual input. In this paper we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. We show that high-order correlations effectively direct the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our high-order attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the standard VQA dataset.

1 Introduction
The quest for algorithms which enable cognitive abilities is an important part of machine learning and appears in many facets, e.g., in visual question answering tasks [6], image captioning [26], visual question generation [18, 10] and machine comprehension [8]. A common trait in these recent cognitive-like tasks is that they take into account different data modalities, for example, visual and textual data.

To address these tasks, attention mechanisms have recently emerged as a powerful common theme, which provides not only some form of interpretability if applied to deep net models, but also often improves performance [8]. The latter effect is attributed to more expressive yet concise forms of the various data modalities. Present day attention mechanisms, like for example [15, 26], are however often lacking in two main aspects. First, the systems generally extract abstract representations of data in an ad-hoc and entangled manner. Second, present day attention mechanisms are often geared towards a specific form of input and therefore hand-crafted for a particular task.

To address both issues, we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. For example, second order correlations can model interactions between two data modalities, e.g., an image and a question, and more generally, k-th order correlations can model interactions between k modalities. Learning these correlations effectively directs the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our novel attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the VQA dataset [2].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

[Figure 1 panels: Original Image, Unary Potentials, Pairwise Potentials, Final Attention; example questions: "What does the man have on his head?" and "How many cars are in the picture?"]
Figure 1: Results of our multi-modal attention for one image and two different questions (1st column).
The unary image attention is identical by construction. The pairwise potentials differ for both questions and images since both modalities are taken into account (3rd column). The final attention is illustrated in the 4th column.

Some of our results are visualized in Fig. 1, where we show how the visual attention correlates with the textual attention.

We begin by reviewing the related work. We subsequently provide details of our proposed technique, focusing on the high-order nature of our attention models. We then conclude by presenting the application of our high-order attention mechanism to VQA and compare it to the state-of-the-art.

2 Related work
Attention mechanisms have been investigated for both image and textual data. In the following we review mechanisms for both.

Image attention mechanisms: Over the past few years, single image embeddings extracted from a deep net (e.g., [17, 16]) have been extended to a variety of image attention modules, when considering VQA. For example, a textual long short term memory net (LSTM) may be augmented with a spatial attention [29]. Similarly, Andreas et al. [1] employ a language parser together with a series of neural net modules, one of which attends to regions in an image. The language parser suggests which neural net module to use. Stacking of attention units was also investigated by Yang et al. [27]. Their stacked attention network predicts the answer successively. Dynamic memory network modules which capture contextual information from neighboring image regions were considered by Xiong et al. [24]. Shih et al. [23] use object proposals and rank regions according to relevance. The multi-hop attention scheme of Xu et al. [25] was proposed to extract fine-grained details. A joint attention mechanism was discussed by Lu et al. [15], and Fukui et al. [7] suggest an efficient outer product mechanism to combine the visual representation and the text representation before applying attention over the combined representation. Additionally, they suggested the use of glimpses. Very recently, Kazemi et al. [11] showed a similar approach using concatenation instead of an outer product. Importantly, all of these approaches model attention as a single network. The fact that multiple modalities are involved is often not considered explicitly, which contrasts the aforementioned approaches from the technique we present. Very recently Kim et al. [14] presented a technique that also interprets attention as a multi-variate probabilistic model, to incorporate structural dependencies into the deep net. Other recent techniques are work by Nam et al. [19] on dual attention mechanisms and work by Kim et al. [13] on bilinear models. In contrast to the latter two models, our approach is easy to extend to any number of data modalities.

Textual attention mechanisms: We also want to provide a brief review of textual attention. To address some of the challenges, e.g., long sentences, faced by translation models, Hermann et al. [8] proposed RNNSearch. To address the challenges which arise by fixing the latent dimension of neural nets processing text data, Bahdanau et al. [3] first encode a document and a query via a bidirectional LSTM, which are then used to compute attentions. This mechanism was later refined in [22], where a word based technique reasons about sentence representations. Joint attention between two CNN hierarchies is discussed by Yin et al. [28]. Among all those attention mechanisms, relevant to our approach is work by Lu et al.
[15] and the approach presented by Xu et al. [25]. Both discuss attention mechanisms which operate jointly over two modalities. Xu et al. [25] use pairwise interactions in the form of a similarity matrix, but ignore the attentions on individual data modalities. Lu et al. [15] suggest an alternating model that directly combines the features of the modalities before attending. Additionally, they suggested a parallel model which uses a similarity matrix to map features for one modality to the other. It is hard to extend this approach to more than two modalities. In contrast, our model develops a probabilistic model, based on high-order potentials, and performs mean-field inference to obtain marginal probabilities. This permits trivial extension of the model to any number of modalities. Additionally, Jabri et al. [9] propose a model where answers are also used as inputs. Their approach questions the need for attention mechanisms and develops an alternative solution based on binary classification. In contrast, our approach captures high-order attention correlations, which we found to improve performance significantly.

Overall, while there is early work that proposes a combination of language and image attention for VQA, e.g., [15, 25, 12], attention mechanisms with several potentials haven't been discussed in detail yet. In the following we present our approach for joint attention over any number of modalities.

3 Higher order attention models
Attention modules are a crucial component for present day decision making systems. Particularly when taking into account more and more data of different modalities, attention mechanisms are able to provide insights into the inner workings of the oftentimes abstract and automatically extracted representations of our systems.

An example of such a system that captured a lot of research efforts in recent years is Visual Question Answering (VQA). Considering VQA as an example, we immediately note its dependence on two or even three different data modalities, the visual input $V$, the question $Q$ and the answer $A$, which get processed simultaneously. More formally, we let $V \in \mathbb{R}^{n_v \times d}$, $Q \in \mathbb{R}^{n_q \times d}$, $A \in \mathbb{R}^{n_a \times d}$ denote a representation for the visual input, the question and the answer respectively. Hereby, $n_v$, $n_q$ and $n_a$ are the number of pixels, the number of words in the question, and the number of possible answers. We use $d$ to denote the dimensionality of the data. For simplicity of the exposition we assume $d$ to be identical across all data modalities.

Due to this dependence on multiple data modalities, present day decision making systems can be decomposed into three major parts: (i) the data embedding; (ii) attention mechanisms; and (iii) the decision making. For a state-of-the-art VQA system such as the one we developed here, those three parts are immediately apparent when considering the high-level system architecture outlined in Fig. 2.

3.1 Data embedding
Attention modules deliver to the decision making component a succinct representation of the relevant data modalities. As such, their performance depends on how we represent the data modalities themselves. Oftentimes, an attention module tends to use expressive yet concise data embedding algorithms to better capture their correlations and consequently to improve the decision making performance. For example, visual data embeddings are often based on convolutional deep nets, which constitute the state-of-the-art in many visual recognition and scene understanding tasks.
Language embeddings heavily rely on LSTMs, which are able to capture context in sequential data, such as words, phrases and sentences. We give a detailed account of our data embedding architectures for VQA in Sec. 4.1.

Figure 2: Our state-of-the-art VQA system (data embedding, Sec. 3.1; attention, Sec. 3.2; decision, Sec. 3.3).

3.2 Attention
As apparent from the aforementioned description, attention is the crucial component connecting data embeddings with decision making modules. Subsequently we denote attention over the $n_q$ words in the question via $P_Q(i_q)$, where $i_q \in \{1, \dots, n_q\}$ is the word index. Similarly, attention over the image is referred to via $P_V(i_v)$, where $i_v \in \{1, \dots, n_v\}$, and attention over the possible answers is denoted $P_A(i_a)$, where $i_a \in \{1, \dots, n_a\}$.

We consider the attention mechanism as a probability model, with each attention mechanism computing "potentials." First, unary potentials $\psi_V$, $\psi_Q$, $\psi_A$ denote the importance of each feature (e.g., question word representations, multiple choice answer representations, and image patch features) for the VQA task. Second, pairwise potentials $\psi_{V,Q}$, $\psi_{V,A}$, $\psi_{Q,A}$ express correlations between two modalities. Last, the third-order potential $\psi_{V,Q,A}$ captures dependencies between the three modalities.

To obtain the marginal probabilities $P_Q$, $P_V$ and $P_A$ from the potentials, our model performs mean-field inference. We combine the unary potential, the marginalized pairwise potentials and the marginalized third-order potential linearly, including a bias term:
$$P_V(i_v) = \mathrm{smax}\big(\alpha_1\psi_V(i_v) + \alpha_2\psi_{V,Q}(i_v) + \alpha_3\psi_{A,V}(i_v) + \alpha_4\psi_{V,Q,A}(i_v) + \alpha_5\big),$$
$$P_Q(i_q) = \mathrm{smax}\big(\beta_1\psi_Q(i_q) + \beta_2\psi_{V,Q}(i_q) + \beta_3\psi_{A,Q}(i_q) + \beta_4\psi_{V,Q,A}(i_q) + \beta_5\big),$$
$$P_A(i_a) = \mathrm{smax}\big(\gamma_1\psi_A(i_a) + \gamma_2\psi_{A,V}(i_a) + \gamma_3\psi_{A,Q}(i_a) + \gamma_4\psi_{V,Q,A}(i_a) + \gamma_5\big). \qquad(1)$$
Hereby $\alpha_i$, $\beta_i$, and $\gamma_i$ are learnable parameters and $\mathrm{smax}(\cdot)$ refers to the soft-max operation over $i_v \in \{1, \dots, n_v\}$, $i_q \in \{1, \dots, n_q\}$ and $i_a \in \{1, \dots, n_a\}$ respectively. The soft-max converts the combined potentials to probability distributions, which corresponds to a single mean-field iteration. Such a linear combination of potentials provides extra flexibility for the model, since it can learn the reliability of each potential from the data. For instance, we observe that question attention relies more on the unary question potential and on the pairwise question-answer potential. In contrast, the image attention relies more on the pairwise question-image potential.

Given the aforementioned probabilities $P_V$, $P_Q$, and $P_A$, the attended image, question and answer vectors are denoted by $a_V \in \mathbb{R}^d$, $a_Q \in \mathbb{R}^d$ and $a_A \in \mathbb{R}^d$. The attended modalities are calculated as the weighted sums of the image features $V = [v_1, \dots, v_{n_v}]^\top \in \mathbb{R}^{n_v \times d}$, the question features $Q = [q_1, \dots, q_{n_q}]^\top \in \mathbb{R}^{n_q \times d}$, and the answer features $A = [a_1, \dots, a_{n_a}]^\top \in \mathbb{R}^{n_a \times d}$, i.e.,
$$a_V = \sum_{i_v=1}^{n_v} P_V(i_v)\,v_{i_v}, \qquad a_Q = \sum_{i_q=1}^{n_q} P_Q(i_q)\,q_{i_q}, \qquad a_A = \sum_{i_a=1}^{n_a} P_A(i_a)\,a_{i_a}.$$
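To make Eq. (1) concrete, here is a minimal NumPy sketch of the image-attention branch. It is our illustration, not the authors' released code; all names (psi_v, alpha, ...) are ours, and the question and answer branches are analogous with parameters beta and gamma.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def image_attention(psi_v, psi_vq, psi_av, psi_vqa, alpha):
    # Eq. (1): one mean-field step. Each psi_* is a vector in R^{nv}
    # holding the (already marginalized) unary, pairwise and ternary
    # potentials for the nv image patches; alpha holds the learnable
    # mixing weights alpha_1..alpha_4 plus the bias alpha_5.
    scores = (alpha[0] * psi_v + alpha[1] * psi_vq
              + alpha[2] * psi_av + alpha[3] * psi_vqa + alpha[4])
    return softmax(scores)          # P_V, a distribution over patches

def attended_vector(P, X):
    # Weighted feature sum, e.g. a_V = sum_iv P_V(iv) * v_iv.
    # P: (n,), X: (n, d) -> returns a vector in R^d.
    return P @ X
```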
Figure 3: Illustration of our k-order attention. (a) Unary attention module (e.g., visual). (b) Pairwise attention module (e.g., visual and question) marginalized over its two data modalities. (c) Ternary attention module (e.g., visual, question and answer) marginalized over its three data modalities.

The attended modalities, which effectively focus on the data relevant for the task, are passed to a classifier for decision making, e.g., the ones discussed in Sec. 3.3. In the following we describe the attention mechanisms for unary, pairwise and ternary potentials in more detail.

3.2.1 Unary potentials
We illustrate the unary attention schematically in Fig. 3 (a). The input to the unary attention module is a data representation, i.e., either the visual representation $V$, the question representation $Q$, or the answer representation $A$. Using those representations, we obtain the "unary potentials" $\psi_V$, $\psi_Q$ and $\psi_A$ using a convolution operation with kernel size $1\times 1$ over the data representation as an additional embedding step, followed by a non-linearity (tanh in our case), followed by another convolution operation with kernel size $1\times 1$ to reduce the embedding dimensionality. Since convolutions with kernel size $1\times 1$ are identical to matrix multiplications, we formally obtain the unary potentials via
$$\psi_V(i_v) = \tanh(V W_{v_2})\,W_{v_1}, \qquad \psi_Q(i_q) = \tanh(Q W_{q_2})\,W_{q_1}, \qquad \psi_A(i_a) = \tanh(A W_{a_2})\,W_{a_1},$$
where $W_{v_1}, W_{q_1}, W_{a_1} \in \mathbb{R}^{d\times 1}$ and $W_{v_2}, W_{q_2}, W_{a_2} \in \mathbb{R}^{d\times d}$ are trainable parameters.

3.2.2 Pairwise potentials
Besides the mentioned mechanisms to generate unary potentials, we specifically aim at taking advantage of pairwise attention modules, which are able to capture the correlation between the representations of different modalities. Our approach is illustrated in Fig. 3 (b). We use a similarity matrix between the image and question modalities, $C_2 = Q W_q (V W_v)^\top$. Alternatively, the $(i, j)$-th entry is the correlation (inner product) of the $i$-th column of $QW_q$ and the $j$-th column of $VW_v$:
$$(C_2)_{i,j} = \mathrm{corr}_2\big((QW_q)_{:,i}, (VW_v)_{:,j}\big), \qquad \mathrm{corr}_2(q, v) = \sum_{l=1}^{d} q_l v_l,$$
where $W_q, W_v \in \mathbb{R}^{d\times d}$ are trainable parameters. We consider $(C_2)_{i,j}$ as a pairwise potential that represents the correlation of the $i$-th word in a question and the $j$-th patch in an image. Therefore, to retrieve the attention for a specific word, we convolve the matrix along the visual dimension using a $1\times 1$ kernel. Specifically,
$$\psi_{V,Q}(i_q) = \tanh\Big(\sum_{i_v=1}^{n_v} w_{i_v}\,(C_2)_{i_v,i_q}\Big), \qquad \psi_{V,Q}(i_v) = \tanh\Big(\sum_{i_q=1}^{n_q} w_{i_q}\,(C_2)_{i_v,i_q}\Big).$$
Similarly, we obtain $\psi_{A,V}$ and $\psi_{A,Q}$, which we omit due to space limitations. These potentials are used to compute the attention probabilities as defined in Eq. (1).
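As a concrete reading of the pairwise construction, the NumPy sketch below builds the similarity matrix and marginalizes it with learned weights; the $1\times 1$ convolutions of the paper reduce to matrix-vector products here. This is our paraphrase, and all variable names are ours.

```python
import numpy as np

def pairwise_potentials(Q, V, W_q, W_v, w_v, w_q):
    # Q: (nq, d) question features, V: (nv, d) image features.
    # W_q, W_v: (d, d) trainable embedding maps.
    # w_v: (nv,), w_q: (nq,) learned marginalization weights
    # (the 1x1 convolutions of Sec. 3.2.2).
    C2 = (Q @ W_q) @ (V @ W_v).T       # (nq, nv) similarity matrix
    psi_q = np.tanh(C2 @ w_v)          # (nq,) potential per question word
    psi_v = np.tanh(C2.T @ w_q)        # (nv,) potential per image patch
    return psi_q, psi_v
```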
3.2.3 Ternary potentials
To capture the dependencies between all three modalities, we consider their high-order correlations:
$$(C_3)_{i,j,k} = \mathrm{corr}_3\big((QW_q)_{:,i}, (VW_v)_{:,j}, (AW_a)_{:,k}\big), \qquad \mathrm{corr}_3(q, v, a) = \sum_{l=1}^{d} q_l v_l a_l,$$
where $W_q, W_v, W_a \in \mathbb{R}^{d\times d}$ are trainable parameters. Similarly to the pairwise potentials, we use the $C_3$ tensor to obtain correlated attention for each modality:
$$\psi_{V,Q,A}(i_q) = \tanh\Big(\sum_{i_v=1}^{n_v}\sum_{i_a=1}^{n_a} w_{i_v,i_a}\,(C_3)_{i_q,i_v,i_a}\Big), \qquad \psi_{V,Q,A}(i_v) = \tanh\Big(\sum_{i_q=1}^{n_q}\sum_{i_a=1}^{n_a} w_{i_q,i_a}\,(C_3)_{i_q,i_v,i_a}\Big),$$
$$\text{and} \quad \psi_{V,Q,A}(i_a) = \tanh\Big(\sum_{i_v=1}^{n_v}\sum_{i_q=1}^{n_q} w_{i_q,i_v}\,(C_3)_{i_q,i_v,i_a}\Big).$$
These potentials are used to compute the attention probabilities as defined in Eq. (1).

3.3 Decision making
The decision making component receives as input the attended modalities and predicts the desired output. Each attended modality is a vector that consists of the relevant data for making the decision. While the decision making component can consider the modalities independently, the nature of the task usually requires taking into account correlations between the attended modalities. The correlation of a set of attended modalities is represented by the outer product of their respective vectors, e.g., the correlation of two attended modalities is represented by a matrix and the correlation of k attended modalities is represented by a k-dimensional tensor. Ideally, the attended modalities and their high-order correlation tensors are fed into a deep net which produces the final decision. The number of parameters in such a network grows exponentially in the number of modalities, as seen in Fig. 4.

Figure 4: Illustration of correlation units used for decision making. (a) The MCB unit approximately samples from the outer product space of two attention vectors; (b) the MCT unit approximately samples from the outer product space of three attention vectors.

To overcome this computational bottleneck, we follow the tensor sketch algorithm of Pham and Pagh [21], which was recently applied to attention models by Fukui et al. [7] via Multimodal Compact Bilinear Pooling (MCB) in the pairwise setting, or Multimodal Compact Trilinear Pooling (MCT), an extension of MCB that pools data from three modalities. The tensor sketch algorithm enables us to reduce the dimension of any rank-one tensor while referring to it implicitly. It relies on the count sketch technique [4], which randomly embeds an attended vector $a \in \mathbb{R}^{d_1}$ into another Euclidean space, $\phi(a) \in \mathbb{R}^{d_2}$. The tensor sketch algorithm then projects the rank-one tensor $\otimes_{i=1}^{k} a_i$, which consists of attention correlations of order $k$, using the convolution $\phi(\otimes_{i=1}^{k} a_i) = \ast_{i=1}^{k}\,\phi(a_i)$. For example, for two attention modalities, the correlation matrix $a_1 a_2^\top = a_1 \otimes a_2$ is randomly projected to $\mathbb{R}^{d_2}$ by the convolution $\phi(a_1 \otimes a_2) = \phi(a_1) \ast \phi(a_2)$. The attended modalities $\phi(a_i)$ and their high-order correlations $\phi(\otimes_{i=1}^{k} a_i)$ are fed into a fully connected neural net to complete decision making.
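The tensor-sketch machinery can be summarized in a few lines of NumPy. The sketch below is our illustration (not the authors' implementation): count-sketch parameters are drawn once per modality, and the sketched vectors are combined by FFT-based circular convolution. Dropping one factor gives the pairwise MCB unit; the dimensions follow the choices reported in Sec. 4.3.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 512, 8192  # attended feature dim and sketch dim (Sec. 4.3)

def make_sketch():
    # Count sketch [4]: a fixed random bucket h(i) and sign s(i)
    # for every input coordinate i.
    h = rng.integers(0, d_out, size=d_in)
    s = rng.choice([-1.0, 1.0], size=d_in)
    def phi(a):
        out = np.zeros(d_out)
        np.add.at(out, h, s * a)   # scatter-add signed coordinates
        return out
    return phi

# One independent sketch per modality.
phi_v, phi_q, phi_a = make_sketch(), make_sketch(), make_sketch()

def mct(a_v, a_q, a_a):
    # Tensor sketch: the sketch of the rank-one tensor a_v (x) a_q (x) a_a
    # equals the circular convolution of the per-modality sketches,
    # computed here with FFTs. Removing one factor yields MCB.
    f = (np.fft.rfft(phi_v(a_v)) * np.fft.rfft(phi_q(a_q))
         * np.fft.rfft(phi_a(a_a)))
    return np.fft.irfft(f, n=d_out)
```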
4 Visual question answering
In the following we evaluate our approach qualitatively and quantitatively. Before doing so we describe the data embeddings.

4.1 Data embedding
The attention module requires the question representation $Q \in \mathbb{R}^{n_q \times d}$, the image representation $V \in \mathbb{R}^{n_v \times d}$, and the answer representation $A \in \mathbb{R}^{n_a \times d}$, which are computed as follows.

Image embedding: To embed the image, we use pre-trained convolutional deep nets (i.e., VGG-19, ResNet). We extract the last layer before the fully connected units. Its dimension in the VGG net case is $512 \times 14 \times 14$ and the dimension in the ResNet case is $2048 \times 14 \times 14$. Hence we obtain $n_v = 196$, and we embed the 196 VGG-19 or ResNet features into a $d = 512$ dimensional space to obtain the image representation $V$.

Table 1: Comparison of results on the Multiple-Choice VQA dataset for a variety of methods. We observe the combination of all three unary, pairwise and ternary potentials to yield the best result.

Method                                         | Y/N  | Num  | Other | All (test-dev) | All (test-std)
HieCoAtt (VGG) [15]                            | 79.7 | 40.1 | 57.9  | 64.9           | -
HieCoAtt (ResNet) [15]                         | 79.7 | 40.0 | 59.8  | 65.8           | 66.1
RAU (ResNet) [20]                              | 81.9 | 41.1 | 61.5  | 67.7           | 67.3
MCB (ResNet) [7]                               | -    | -    | -     | 68.6           | -
DAN (VGG) [19]                                 | -    | -    | -     | 67.0           | -
DAN (ResNet) [19]                              | -    | -    | -     | 69.1           | 69.0
MLB (ResNet) [13]                              | -    | -    | -     | -              | 68.9
2-Modalities: Unary+Pairwise (ResNet)          | 80.9 | 36.0 | 61.6  | 66.7           | -
3-Modalities: Unary+Pairwise (ResNet)          | 82.0 | 42.7 | 63.3  | 68.7           | -
3-Modalities: Unary+Pairwise+Ternary (VGG)     | 81.2 | 42.7 | 62.3  | 67.9           | 68.7
3-Modalities: Unary+Pairwise+Ternary (ResNet)  | 81.6 | 43.3 | 64.8  | 69.4           | 69.3

Question embedding: To obtain a question representation, $Q \in \mathbb{R}^{n_q \times d}$, we first map a 1-hot encoding of each word in the question into a $d$-dimensional embedding space using a linear transformation plus corresponding bias terms. To obtain a richer representation that accounts for neighboring words, we use a 1-dimensional temporal convolution with a filter of size 3. While a combination of multiple filter sizes is suggested in the literature [15], we didn't find any benefit from using such an approach. Subsequently, to capture long-term dependencies, we used a Long Short Term Memory (LSTM) layer. To reduce overfitting caused by the LSTM units, we used two LSTM layers with $d/2$ hidden dimensions: one uses the word embedding representation as input, and the other operates on the 1D conv layer output. Their outputs are then concatenated to obtain $Q$. We also note that $n_q$ is a constant hyperparameter, i.e., questions with more than $n_q$ words are cut, while questions with fewer words are zero-padded. A sketch of this pipeline is given below.

Answer embedding: To embed the possible answers we use a regular word embedding. The vocabulary is specified by taking only the most frequent answers in the training set. Answers that are not included in the top answers are embedded to the same vector. Answers containing multiple words are embedded as n-grams to a single vector. We assume there is no real dependency between the answers, therefore there is no need for additional 1D conv or LSTM layers.
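The following PyTorch-style sketch illustrates the question-embedding pipeline referenced above. It is our reading of the description, not the authors' released Torch code; the module and variable names, the padding choice, and the absence of dropout are our simplifying assumptions.

```python
import torch
import torch.nn as nn

class QuestionEmbedding(nn.Module):
    # Word embedding -> 1D temporal conv (filter size 3) -> two LSTMs
    # with d/2 hidden units, one on the word embeddings and one on the
    # conv features; their outputs are concatenated to give Q.
    def __init__(self, vocab_size, d=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.conv = nn.Conv1d(d, d, kernel_size=3, padding=1)
        self.lstm_w = nn.LSTM(d, d // 2, batch_first=True)
        self.lstm_c = nn.LSTM(d, d // 2, batch_first=True)

    def forward(self, tokens):                 # tokens: (batch, nq) ints
        w = self.embed(tokens)                 # (batch, nq, d)
        c = self.conv(w.transpose(1, 2)).transpose(1, 2)
        qw, _ = self.lstm_w(w)
        qc, _ = self.lstm_c(c)
        return torch.cat([qw, qc], dim=-1)     # Q: (batch, nq, d)
```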
4.2 Decision making
For our VQA example we investigate two techniques to combine vectors from the three modalities. First, the attended feature representations for each modality, i.e., $a_V$, $a_A$ and $a_Q$, are combined using an MCT unit. Each feature element is of the form $(a_V)_i \cdot (a_Q)_j \cdot (a_A)_k$. While this first solution is the most general, in some cases like VQA our experiments show that it is better to use our second approach, a 2-layer MCB unit combination. This permits greater expressiveness, as we employ features of the form $(a_V)_i \cdot (a_Q)_j \cdot (a_Q)_k \cdot (a_A)_t$, therefore also allowing image features to interact with themselves. Note that in terms of parameters both approaches are identical, as neither MCB nor MCT are parametric modules. Beyond MCB, we tested several other techniques that were suggested in the literature, including element-wise multiplication, element-wise addition and concatenation [13, 15, 11], optionally followed by another hidden fully connected layer. The tensor sketching units consistently performed best.

4.3 Results
Experimental setup: We use the RMSProp optimizer with a base learning rate of 4e-4, $\alpha = 0.99$ and $\epsilon = 10^{-8}$. The batch size is set to 300. The dimension $d$ of all hidden layers is set to 512. The MCB unit feature dimension was set to $d = 8192$. We apply dropout with a rate of 0.5 after the word embeddings, the LSTM layer, and the first conv layer in the unary potential units. Additionally, for the last fully connected layer we use a dropout rate of 0.3. We use the top 3000 most frequent answers as possible outputs, which covers 91% of all answers in the train set. We implemented our models using the Torch framework [5] (code available at https://github.com/idansc/HighOrderAtten).

[Figure 5 example questions: "How many glasses are on the table?", "Is anyone in the scene wearing blue?", "What kind of flooring is in the bathroom?", "What room is this?"]
Figure 5: For each image (1st column) we show the attention generated for two different questions in columns 2-4 and columns 5-7 respectively. The attentions are ordered as unary attention, pairwise attention and combined attention, for both the image and the question. We observe the combined attention to significantly depend on the question.

[Figure 6 example questions: "Is this animal drinking water?", "What kind of animal is this?", "What is on the wall?", "Is a light on?"]
Figure 6: The attention generated for two different questions over three modalities. We find the attention over multiple choice answers to emphasize the unusual answers.

As a comparison for our attention mechanism we use the approach of Lu et al. [15] and the technique of Fukui et al. [7]. Their methods are based on a hierarchical attention mechanism and multi-modal compact bilinear (MCB) pooling. In contrast to their approaches, we demonstrate a relatively simple technique based on a probabilistic intuition grounded on potentials. For comparative reasons only, the visualized attention is based on two modalities: image and question. We evaluate our attention modules on the VQA real-image test-dev and test-std datasets [2]. The dataset consists of 123,287 training images and 81,434 test set images. Each image comes with 3 questions along with 18 multiple choice answers.

Quantitative evaluation: We first evaluate the overall performance of our model and compare it to a variety of baselines. Tab. 1 shows the performance of our model and the baselines on the test-dev and the test-standard datasets for multiple choice (MC) questions. To obtain multiple choice results we follow common practice and use the highest scoring answer among the provided ones.
Our approach (Fig. 2) for the multiple choice answering task achieved the reported result after 180,000 iterations, which requires about 40 hours of training on the "train+val" dataset using a TitanX GPU. Despite the fact that our model has only 40 million parameters, while techniques like [7] use over 70 million parameters, we observe state-of-the-art behavior. Additionally, we employ a 2-modality model with a similar experimental setup. We observe a significant improvement for our 3-modality model, which shows the importance of high-order attention models. Due to the fact that we use a lower embedding dimension of 512 (similar to [15]) compared to the 2048 of existing 2-modality models [13, 7], the 2-modality model achieves inferior performance. We believe that a higher embedding dimension and proper tuning can improve our 2-modality starting point. Additionally, we compared our proposed decision units: MCT, which is a generic extension of MCB to three modalities, and the 2-layer MCB, which has greater expressiveness (Sec. 4.2). Evaluating on the "val" dataset while training on the "train" part using the VGG features, the MCT setup yields 63.82%, whereas the 2-layer MCB yields 64.57%. We also tested a different ordering of the inputs to the 2-modality MCB and found it to yield inferior results.

[Figure 7 examples: "Is she using a battery-operated device?" with Ours: yes, [15]: no, [7]: no, GT: yes; "Is this a boy or a girl?" with Ours: girl, [15]: boy, [7]: girl, GT: girl]
Figure 7: Comparison of our attention results (2nd column) with the attention provided by [15] (3rd column) and [7] (4th column). The fourth column provides the question and the answer of the different techniques.

[Figure 8 examples: "What color is the table?" with GT: brown, Ours: blue; "What color is the umbrella?" with GT: blue, Ours: blue]
Figure 8: Failure cases: unary, pairwise and combined attention of our approach. Our system focuses on the colorful umbrella as opposed to the table in the first row.

Qualitative evaluation: Next, we evaluate our technique qualitatively. In Fig. 5 we illustrate the unary, pairwise and combined attention of our approach based on the two-modality architecture, without the multiple choice as input. For each image we show multiple questions. We observe that the unary attention usually attends to strong features of the image, while the pairwise potentials emphasize areas that correlate with question words. Importantly, the combined result depends on the provided question. For instance, in the first row we observe for the question "How many glasses are on the table?" that the pairwise potential reacts to the image area depicting the glass. In contrast, for the question "Is anyone in the scene wearing blue?" the pairwise potentials react to the guy with the blue shirt. In Fig. 6, we illustrate the attention for our 3-modality model. We find the attention over multiple choice answers to favor the more unusual results. In Fig. 7, we compare the final attention obtained from our approach to the results obtained with the techniques discussed in [15] and [7]. We observe that our approach attends to reasonable pixel and question locations. For example, considering the first row in Fig. 7, the question refers to the battery-operated device.
Compared to existing approaches, our technique attends to the laptop, which seems to help in choosing the correct answer. In the second row, the question asks "Is this a boy or a girl?". Both of the correct answers were produced when the attention focuses on the hair. In Fig. 8, we illustrate a failure case, where the attention of our approach is identical despite two different input questions. Our system focuses on the colorful umbrella as opposed to the object queried for in the question.

5 Conclusion
In this paper we investigated a series of techniques to design attention for multimodal input data. Beyond demonstrating state-of-the-art performance using relatively simple models, we hope that this work inspires researchers to work in this direction.

Acknowledgments: This research was supported in part by The Israel Science Foundation (grant No. 948/15). This material is based upon work supported in part by the National Science Foundation under Grant No. 1718221. We thank Nvidia for providing GPUs used in this research.

References
[1] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. arXiv preprint arXiv:1601.01705, 2016.
[2] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual question answering. In ICCV, 2015.
[3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[4] Moses Charikar, Kevin Chen, and Martin Farach-Colton. Finding frequent items in data streams. In ICALP. Springer, 2002.
[5] Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
[6] Abhishek Das, Harsh Agrawal, C. Lawrence Zitnick, Devi Parikh, and Dhruv Batra. Human attention in visual question answering: Do humans and deep networks look at the same regions? arXiv preprint arXiv:1606.03556, 2016.
[7] Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847, 2016.
[8] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, pages 1693-1701, 2015.
[9] Allan Jabri, Armand Joulin, and Laurens van der Maaten. Revisiting visual question answering baselines. In ECCV. Springer, 2016.
[10] U. Jain*, Z. Zhang*, and A. G. Schwing. Creativity: Generating diverse questions using variational autoencoders. In CVPR, 2017. (* equal contribution)
[11] Vahid Kazemi and Ali Elqursh. Show, ask, attend, and answer: A strong baseline for visual question answering. arXiv preprint arXiv:1704.03162, 2017.
[12] Jin-Hwa Kim, Sang-Woo Lee, Donghyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. Multimodal residual learning for visual QA. In NIPS, 2016.
[13] Jin-Hwa Kim, Kyoung-Woon On, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. Hadamard product for low-rank bilinear pooling. arXiv preprint arXiv:1610.04325, 2016.
[14] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks. arXiv preprint arXiv:1702.00887, 2017.
[15] Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In NIPS, 2016.
[16] Lin Ma, Zhengdong Lu, and Hang Li. Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015.
[17] Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A neural-based approach to answering questions about images. In ICCV, 2015.
[18] Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. Generating natural questions about an image. arXiv preprint arXiv:1603.06059, 2016.
[19] Hyeonseob Nam, Jung-Woo Ha, and Jeonghee Kim. Dual attention networks for multimodal reasoning and matching. arXiv preprint arXiv:1611.00471, 2016.
[20] Hyeonwoo Noh and Bohyung Han. Training recurrent answering units with joint loss minimization for VQA. arXiv preprint arXiv:1606.03647, 2016.
[21] Ninh Pham and Rasmus Pagh. Fast and scalable polynomial kernels via explicit feature maps. In SIGKDD. ACM, 2013.
[22] Tim Rocktaschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. Reasoning about entailment with neural attention. In ICLR, 2016.
[23] Kevin J. Shih, Saurabh Singh, and Derek Hoiem. Where to look: Focus regions for visual question answering. In CVPR, 2016.
[24] Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. arXiv preprint arXiv:1603.01417, 2016.
[25] Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV, pages 451-466. Springer, 2016.
[26] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
[27] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. In CVPR, 2016.
[28] Wenpeng Yin, Hinrich Schutze, Bing Xiang, and Bowen Zhou. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193, 2015.
[29] Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7W: Grounded question answering in images. In CVPR, 2016.
Sparse convolutional coding for neuronal assembly detection

Sven Peter^{1,*}, Elke Kirschbaum^{1,*} {sven.peter,elke.kirschbaum}@iwr.uni-heidelberg.de
Martin Both^2 [email protected]
Brandon K. Harvey^3 [email protected]
Lee A. Campbell^3 [email protected]
Conor Heins^{3,4,†} [email protected]
Daniel Durstewitz^5 [email protected]
Ferran Diego Andilla^{6,‡} [email protected]
Fred A. Hamprecht^1 [email protected]

1 Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, Germany
2 Institute of Physiology and Pathophysiology, Heidelberg, Germany
3 National Institute on Drug Abuse, Baltimore, USA
4 Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
5 Dept. Theoretical Neuroscience, Central Institute of Mental Health, Mannheim, Germany
6 Robert Bosch GmbH, Hildesheim, Germany

Abstract

Cell assemblies, originally proposed by Donald Hebb (1949), are subsets of neurons firing in a temporally coordinated way that gives rise to repeated motifs supposed to underlie neural representations and information processing. Although Hebb's original proposal dates back many decades, the detection of assemblies and their role in coding is still an open and current research topic, partly because simultaneous recordings from large populations of neurons became feasible only relatively recently. Most current and easy-to-apply computational techniques focus on the identification of strictly synchronously spiking neurons. In this paper we propose a new algorithm, based on sparse convolutional coding, for detecting recurrent motifs of arbitrary structure up to a given length. Testing of our algorithm on synthetically generated datasets shows that it outperforms established methods and accurately identifies the temporal structure of embedded assemblies, even when these contain overlapping neurons or when strong background noise is present. Moreover, exploratory analysis of experimental datasets from hippocampal slices and cortical neuron cultures has provided promising results.

* Both authors contributed equally. † Majority of this work was done while co-author was at 3. ‡ Majority of this work was done while co-author was at 1.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Figure 1: Temporal motifs in neuronal spike trains. All three illustrations show the activity of four different neurons over time. The spikes highlighted in red are part of a repeating motif. In (a) the motif is defined by the synchronous activity of all neurons, while the synfire chain in (b) exhibits sequential spiking patterns. (c) shows a more complex motif with non-sequential temporal structure. (Figure adapted from [23].) Panels: (a) Synchronously firing neurons; (b) Synfire chain; (c) Temporal motif.

1 Introduction

The concept of a cell assembly (or cortical motif or neuronal ensemble) was originally introduced by Donald Hebb [1] and denotes subsets of neurons that by firing coherently represent mental objects and form the building blocks of cortical information processing. Numerous experimental studies within the past 30 years have attempted to address the neural assembly hypothesis from various angles in different brain areas and species, but the concept remains debated, and recent massively parallel single-unit recording techniques have opened up new opportunities for studying the role of spatio-temporal coordination in the nervous system [2–12].
A number of methods have been proposed to identify motifs in neuronal spike train data, but most of them are only designed for strictly synchronously firing neurons (see figure 1a), i.e. with zero phase-lag [13–17], or strictly sequential patterns as in synfire chains [18–21] (see figure 1b). However, some experimental studies have suggested that cortical spiking activity may harbor motifs with more complex structure [5, 22] (see figure 1c). Only quite recently statistical algorithms were introduced that can efficiently deal with arbitrary lag constellations among the units participating in an assembly [23], but the identification and validation of motifs with complex temporal structure remains an area of current research interest.

In this paper we present a novel approach to identify motifs with any of the temporal structures shown in figure 1 in a completely unsupervised manner. Based on the idea of convolutive Non-Negative Matrix Factorization (NMF) [24, 25], our algorithm reconstructs the neuronal spike matrix as a convolution of motifs and their activation time points. In contrast to convolutive NMF, we introduce an ℓ0 and an ℓ1 prior on the motif activation and appearance, respectively, instead of a single ℓ1 penalty. This ℓ0 regularization enforces more sparsity in the temporal domain, and thus performs better in extracting motifs from neuronal spike data by reducing false positive activations. Adding the ℓ0 and ℓ1 penalty terms requires a novel optimization scheme. This replaces the multiplicative update rules by a combination of discrete and continuous optimizations, which are matching pursuit and LASSO regression. Additionally we added a sorting and non-parametric threshold estimation method to distinguish between real and spurious results of the optimization problem.

We benchmark our approach on synthetic data against Principal Component Analysis (PCA) and Independent Component Analysis (ICA) as the most widely used methods for motif detection, and against convolutive NMF as the method most closely related to the proposed approach. Our algorithm outperforms the other methods especially when identifying long motifs with complex temporal structure. We close with results of our approach on two real-world datasets from hippocampal slices and cortical neuron cultures.

2 Related work

PCA is one of the simplest methods that has been used for a long time to track cell motifs [26]. Its biggest limitations are that different assembly patterns can easily be merged into a single "large" component, and that neurons shared between motifs are assigned lower weights than they should have. Moreover, recovering individual neurons which belong to a single assembly is not reliably possible [27, 17], and the detected assemblies are not very robust to noise and rate fluctuations [23].

ICA with its assumption of non-Gaussian and statistically independent subcomponents [28] is able to recover individual neuron-assembly membership, and neurons belonging to multiple motifs are also correctly identified [17].

Figure 2: Sketch of convolutional coding. In this example the raw data matrix Y is described by a matrix which is an additive mixture of two motifs a_1 (cyan) and a_2 (salmon) convolved with their activities s_1 and s_2, respectively, plus background noise.
An overview of PCA and ICA for identifying motifs is provided in [17]. More sophisticated statistical approaches have been developed, like unitary event analysis [13, 14], for detecting coincident, joint spike events across multiple cells. More advanced methods and statistical tests were also designed for detecting higher-order correlations among neurons [15, 16], as well as synfire chains [20]. However, none of these techniques is designed to detect more complex, non-synchronous, non-sequential temporal structure. Only quite recently more elaborate statistical schemes for capturing assemblies with arbitrary temporal structure, and also for dealing with issues like non-stationarity and different time scales, were advanced [23]. The latter method works by recursively merging sets of units into larger groups based on their joint spike count probabilities evaluated across multiple different time lags. The method proposed in this paper, in contrast, approaches the detection of complex assemblies in a very different manner, attempting to detect complex patterns as a whole. NMF techniques have been widely applied to recover spike trains from calcium fluorescence recordings [29?35]. Building on these schemes, NMF has been used to decompose a binned spike matrix into multiple levels of synchronous patterns which describe a hierarchical structuring of the motifs [36]. But these previous applications of NMF considered only neurons firing strictly synchronously. In audio processing, convolutive NMF has been successfully used to detect motifs with temporal structure [24, 25, 37]. However, as we will show later, the constraints used in audio processing are too weak to extract motifs from neuronal spike data. For this reason we propose a novel optimization approach using sparsity constraints adapted to neuronal spike data. 3 Sparse convolutional coding We formulate the identification of motifs with any of the temporal structures displayed in figure 1 as a convolutional matrix decomposition into motifs and their activity in time, based on the idea behind convolutive NMF [24, 25], and combined with the sparsity constraints used in [34]. We use a novel optimization approach and minimize the reconstruction error while taking into account the sparsity constraints for both motifs and their activation time points. n?m Let Y ? R+ be a matrix whose n rows represent individual neurons with their spiking activity binned to m columns. We assume that this raw signal is an additive mixture of l motifs ai ? Rn?? + with temporal length ? , convolved with a sparse activity signal si ? R1?m plus noise (see figure 2). + We address the unsupervised problem of simultaneously estimating both the coefficients making up the motifs ai and their activities si . To this end, we propose to solve the optimization problem 2 l l l X X X min Y ? si ~ ai + ? ksi k0 + ? kai k1 a,s i=1 i=1 F 3 i=1 (1) with ? and ? controlling the regularization strength of the `0 norm of the activations and the `1 norm of the motifs, respectively. The convolution operator ~ is defined by ? X si ~ ai = ai,j ? S(j ? 1)si (2) j=1 with ai,j being the jth column of ai . The column shift operator S(j) moves a matrix j places to the right while keeping the same size and filling missing values appropriately with zeros [24]. The product on the right-hand side is an outer product. In [25] the activity of the learned motifs is regularized only with a `1 prior which is too weak to recover motifs in neuronal spike trains. 
3.1 Optimization

This problem is non-convex in general but can be approached by initializing the activities s_i randomly and using a block coordinate descent strategy [39, Section 2.7] to alternatingly optimize for the two variables. When keeping the activations s_i fixed, the motif coefficients a_i are learned using LASSO regression with non-negativity constraints [40] by transforming the convolution with s_i to a linear set of equations by using modified Toeplitz matrices \tilde{S}_i \in \mathbb{R}^{mn \times n\tau} which are then stacked column-wise [41, 38]:

\min_{a} \Big\| \underbrace{\mathrm{vec}(Y)}_{b \in \mathbb{R}^{mn}} - \underbrace{[\tilde{S}_1 \,\cdots\, \tilde{S}_l]}_{A \in \mathbb{R}^{mn \times ln\tau}} \underbrace{\big[\mathrm{vec}(a_1);\, \ldots;\, \mathrm{vec}(a_l)\big]}_{x \in \mathbb{R}^{ln\tau}} \Big\|_2^2 + \beta \sum_{i=1}^{l} \|a_i\|_1 \qquad (3)

The matrices \tilde{S}_i are constructed from the s_i with \tilde{S}_{i,j,k} = \tilde{S}_{i,j+1,k+1} = s_{i,j-k} for j \ge k, \tilde{S}_{i,j,k} = 0 for j < k, and \tilde{S}_{i,j,k} = 0 for j > p \cdot m and k < p \cdot \tau for p = 1, \ldots, n (where i denotes the ith matrix with element indices j and k). When keeping the currently found motifs a_i fixed, their activation in time is learned using a convolutional matching pursuit algorithm [42–44] to approximate the ℓ0 norm. The greedy algorithm iteratively includes an assembly appearance that most reduces the reconstruction error. All details of the algorithm are outlined in the supplementary material for this paper.
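To illustrate the LASSO step, note that for a fixed activation s the convolution s ⊛ a is linear in the motif coefficients: row p of s ⊛ a equals T a_{p,:}, where T is the m × τ Toeplitz matrix with T_{j,k} = s_{j−k} for j ≥ k. Since the squared error and the ℓ1 penalty both separate over neuron rows, the stacked system of Eq. (3) decomposes into independent small problems, one per row. The sketch below exploits this equivalence; it uses scikit-learn's non-negative Lasso as a stand-in solver (whose objective rescales the quadratic term by 1/(2m), so its alpha is only proportional to β) and is an assumption-laden illustration, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

def activation_toeplitz(s, tau):
    """m x tau Toeplitz matrix T with T[j, k] = s[j - k], zero for j < k."""
    first_row = np.zeros(tau)
    first_row[0] = s[0]
    return toeplitz(s, first_row)

def lasso_motif_step(Y, activations, tau, beta):
    """Update all motifs for fixed activations (one LASSO per neuron row)."""
    l = len(activations)
    n, _ = Y.shape
    # Column-wise stacking of the per-activation Toeplitz blocks, cf. Eq. (3)
    A = np.hstack([activation_toeplitz(s, tau) for s in activations])
    solver = Lasso(alpha=beta, positive=True, fit_intercept=False,
                   max_iter=5000)
    motifs = np.zeros((l, n, tau))
    for p in range(n):                 # each neuron row is independent
        coef = solver.fit(A, Y[p]).coef_
        for i in range(l):
            motifs[i, p] = coef[i * tau:(i + 1) * tau]
    return motifs
```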
3.2 Motif sorting and non-parametric threshold estimation

The list of identified motifs is expected to also contain false positives which do not appear repeatedly in the data. The main non-biological reason for this is that our algorithm only finds local minima of the optimization problem given by equation (1). Experiments on various synthetic datasets showed that motifs present at the global optimum should always have the same appearance, independent of the random initialization of the activities. The false positives which are only present in particular local minima, however, look different every time the initialization is changed. We therefore propose to run our algorithm multiple times on the same data with the same parameter settings but with different random initializations, and use the following sorting and non-parametric threshold estimation algorithm in order to distinguish between true (reproducible) and spurious motifs. The following is only a brief description; more details are given in the supplementary material.

In the first step, the motifs found in each run are sorted using pairwise matching. The sorting is necessary because the order of the motifs after learning is arbitrary and it has to be assured that the motifs with the smallest difference between different runs are compared. Sorting the sets of motifs from all runs at the same time is an NP-hard multidimensional assignment problem [45]. Therefore, a greedy algorithm is used instead. It starts by sorting the two sets of motifs with the lowest assignment cost. Thereafter, the remaining sets of motifs are sorted one by one according to the order of motifs given by the already sorted sets.

Inspired by permutation tests, we estimate a threshold T by creating a shuffled spike matrix to determine which motifs are only spurious. In the shuffled matrix all temporal correlations between and within neurons have been destroyed. Hence, there are no real motifs in the shuffled matrix and the motifs learned from this matrix will likely be different with each new initialization. We take the minimal difference of any two motifs from different runs of the algorithm on the shuffled matrix as the threshold (an illustrative sketch of this procedure is given after Sec. 3.3). We assume that motifs that show a difference between different runs larger than this threshold are spurious and discard them.

3.3 Parameter selection

The sparse convolutional coding algorithm has only three parameters that have to be specified by the user: the maximal number of assemblies, the maximal temporal length of a motif, and the penalty \beta on the ℓ1 norm of the motifs. The number of assemblies to be learned can be set to a generous upper limit since the sorting method assures that only the true motifs remain while all false positives are deleted. The temporal length of a motif can also be set to a generous upper bound. To find an adequate ℓ1 penalty for the assemblies, different values need to be tested, and it should be set to a value where neither the motifs are completely empty nor all neurons are active over the whole possible length of the motifs. In the tested cases the appearance of the found motifs did not change drastically while varying the ℓ1 penalty within one order of magnitude, so fine-tuning it is not necessary. Instead of specifying the penalty \lambda on the ℓ0 norm of the activations directly, we chose to stop the matching pursuit algorithm when adding an additional assembly appearance increases the reconstruction error or when the difference of reconstruction errors from two consecutive steps falls below a small threshold.

All code for the proposed method is available at: https://github.com/sccfnad/Sparse-convolutional-coding-for-neuronal-assembly-detection
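The following sketch illustrates the shuffling-based threshold just described, under two assumptions of ours: the shuffle independently permutes each neuron's spike times (which destroys both within- and between-neuron temporal correlations), and motifs are compared with a plain Frobenius distance. learn_motifs stands for one complete run of the sparse convolutional coding algorithm and is assumed given; the full matching procedure is described in the paper's supplementary material.

```python
import numpy as np

def shuffle_spike_matrix(Y, rng):
    """Independently permute each neuron's row in time."""
    return np.stack([rng.permutation(row) for row in Y])

def motif_distance(a, b):
    return np.linalg.norm(a - b, 'fro')

def estimate_threshold(Y, learn_motifs, n_runs=5, seed=0):
    """T = smallest cross-run distance between motifs learned on shuffled data."""
    rng = np.random.default_rng(seed)
    Y_shuf = shuffle_spike_matrix(Y, rng)
    runs = [learn_motifs(Y_shuf, seed=k) for k in range(n_runs)]
    diffs = []
    for r1 in range(n_runs):
        for r2 in range(r1 + 1, n_runs):
            for a in runs[r1]:
                # distance to the best-matching motif from the other run
                diffs.append(min(motif_distance(a, b) for b in runs[r2]))
    return min(diffs)   # motifs less reproducible than T are discarded
```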
4 Results

4.1 Synthetic data

Since ground truth datasets are not available, we have simulated different synthetic datasets to establish the accuracy of the proposed method, and compare it to existing work. For PCA and ICA based methods the number of motifs is estimated using the Marchenko–Pastur eigenvalue distribution [17]. The sparsity parameter in the sparse convolutive NMF (scNMF) that resulted in the best performance was chosen empirically [25].

An illustrative example dataset with twenty neurons, one hundred spurious spikes per neuron and three temporal motifs can be seen in figure 3. Consecutive activation times between motifs were modeled as Poisson renewal processes with a mean inter-event-distance of twenty frames. When running our method from two different random initial states to identify a total of five motifs, all three original motifs were among those extracted from the data (figure 3c and 3d; the motifs have been sorted manually to match up with the ground truth; all parameters for the analysis can be found in table 1). While the two spurious motifs change depending on the random initialization, the three true motifs consistently show up in the search results. Neither PCA, ICA nor scNMF were able to extract the true motifs (see figures 3e, 3f and 3g).

For further analysis, various datasets consisting of fifty neurons observed over one thousand time frames were created. Details on the generation of these datasets can be found in the supplementary material. For each of the different motif lengths τ = 1, 7 and 21 frames, twenty different datasets were created, with different noise levels and numbers of neurons shared between assemblies.

To compare the performance of different methods, we use the functional association between neurons as an indicator [27, 46, 12]. For this a neuron association matrix (NAM) is calculated from the learned motifs. The NAM contains for each pair of neurons a 1 if the two neurons belong to the same assembly and a 0 otherwise. The tested methods, however, do not make binary statements about whether a neuron belongs to an assembly, but provide only the information to what degree the neuron was associated with an assembly. We apply multiple thresholds to binarize the output of the tested methods and compute true positive rate and false positive rate between the ground truth NAM and the binarized NAM, leading to the ROC curves shown in figure 4 (an illustrative NAM construction is sketched below). We chose this method since it works without limitations for synchronous motifs and also allows for comparisons for the more complex cases.

Figure 3: Results on a synthetic dataset. (a) shows a synthetic spike matrix. (b) shows the three motifs present in the data. By running our algorithm with two different random initial states the motifs seen in (c) and (d) are learned. (e), (f) and (g) show the results from PCA, ICA and scNMF, respectively.

Figure 4: ROC curves of different methods (PCA, ICA, scNMF and our method) on synthetic data for different temporal motif lengths: (a) τ = 1, (b) τ = 7, (c) τ = 21. We show the mean ROC curve and its standard deviation averaged over all trials on different synthetic datasets. All methods were run ten times on each dataset with different random initializations.

In the synchronous case (i.e. τ = 1, figure 4a) our proposed method performs as well as the best competitor. As expected, PCA performance shows a huge variance since some of the datasets contain neurons shared between multiple motifs and since extracting actual neuron-assembly assignments is not always possible [27, 17]. When temporal structure is introduced we are still able to identify associations between neurons with very high accuracy. For short temporal motifs (τ = 7, figure 4b) scNMF is able to identify associations, but only our method was able to accurately recover most associations in long motifs (τ = 21, figure 4c).
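A possible construction of the NAM from a set of learned motifs is sketched below. The membership rule (a neuron belongs to a motif if its largest coefficient exceeds a fixed fraction of the motif's maximum) is an illustrative choice of ours; in the actual evaluation multiple thresholds are swept to trace out the ROC curves.

```python
import numpy as np

def neuron_association_matrix(motifs, membership_frac=0.5):
    """motifs: array (l, n, tau) -> boolean (n, n) association matrix.

    Entry (p, q) is True when neurons p and q share at least one assembly.
    """
    l, n, _ = motifs.shape
    nam = np.zeros((n, n), dtype=bool)
    for a in motifs:
        weight = a.max(axis=1)               # per-neuron motif weight
        if weight.max() == 0:
            continue
        members = weight >= membership_frac * weight.max()
        nam |= np.outer(members, members)    # all member pairs are associated
    np.fill_diagonal(nam, False)
    return nam
```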
Table 1: Experimental parameters. We show the used maximal number of assemblies, maximal motif length in frames, ℓ1 penalty value \beta, and number of runs of the algorithm with different initializations for the performed experiments on synthetic and real datasets. We also display the estimated threshold T used for distinguishing between real and spurious motifs.

Experiment              | #motifs | motif length in frames | \beta     | #runs | T
synthetic example data  | 5       | 15                     | 5 x 10^-4 | 2     | --
hippocampal CA1 region  | 5       | 10                     | 10^-6     | 5     | 5.7 x 10^-6
cortical neuron culture | 5       | 10                     | 10^-6     | 5     | 6.5 x 10^-4

4.2 Real data

In vitro hippocampal CA1 region data. We analyzed spike trains of 91 cells from the hippocampal CA1 region recorded at high temporal and multiple single cell resolution using Ca2+ imaging. The acute mouse hippocampal slices were recorded in a so-called interface chamber [47]. On this dataset, our algorithm identified three motifs as real motifs. They are shown in figure 5a. The activity of each assembly has been calculated at every frame and is shown in figure 5b.

In order to qualitatively show that the proposed method appropriately eliminates false positives from the list of found motifs also on real data, we plotted in figure 6 for each motif the difference to the best matching motif from every other run. We did this for the motifs identified in the original spike matrix (figure 6a), as well as for the motifs identified in the shuffled spike matrix (figure 6b). The motifs found in the shuffled matrix show much higher variability between runs than those found in the original matrix. For motifs 1 and 3 from the original matrix the difference between runs is on average about two to three times higher than for the other motifs, but still smaller than the average difference between runs for all of the motifs from the shuffled data. Nevertheless, these motifs are deleted as false positives, since the threshold for discarding a motif is set to the minimum difference of motifs from different runs on the shuffled matrix. This shows that the final set of motifs is unlikely to contain spurious motifs anymore.

The spontaneous hippocampal network activity is expected to appear under the applied recording conditions as sharp wave-ripple (SPW-R) complexes that support memory consolidation [48–50, 47]. Motif 5 in figure 5a shows the typical behavior of principal neurons firing single or two consecutive spikes at a low firing rate (≲ 1 Hz) during SPW-R in vitro [47].
This might be interpreted as the re-activation of a formerly established neuronal assembly.

In vitro cortical neuron culture data. Primary cortical neurons were prepared from E15 embryos of Sprague Dawley rats as described in [51] and approved by the NIH Animal Care and Usage Committee. Cells were transduced with an adeno-associated virus expressing the genetically-encoded calcium indicator GCaMP6f on DIV 7 (Addgene #51085). Wide-field epifluorescent videos of spontaneous calcium activity from individual wells (6 x 10^4 cells/well) were recorded on DIV 14 or 18 at an acquisition rate of 31.2 frames per second. The data for the shown example contains 400 identified neurons imaged for 10 minutes on DIV 14.

Our algorithm identified two motifs in the used dataset, shown in figure 5c. Their activity is plotted in figure 5d. For each column of the two motifs, figure 7 shows the percentage of active neurons at every time frame. The motifs were thresholded such that only neurons with a motif coefficient above 50% of the maximum coefficient of the motif were counted. We show those columns of the motifs which contained more than one neuron after thresholding. The fact that figure 7 shows only few motif activations that include all of the cells that are a part of the motif has less to do with the actual algorithm, but more with how the nervous system works: only rarely will all cells of an assembly spike [23], due to both the intrinsic stochasticity, like probabilistic synaptic release [52], and the fact that synaptic connectivity and thus assembly membership will be graded and strongly fluctuate across time due to short-term synaptic plasticity [53]. Nevertheless, the plot shows that often several columns are active in parallel and there are some time points where a high percentage of the neurons in all columns is active together. This shows that the found motifs really contain temporal structure and are repeated multiple times in the data. A sketch of this column-activity measure is given below. All parameters for the analysis of the shown experiments can be found in table 1.

Figure 5: Results from real data. We show the results of our algorithm for two different real datasets; the datasets vary in temporal length as well as number of observed cells. For each dataset we show the motifs that our algorithm identified as real motifs and their activity over time. Panels: (a) Motifs from hippocampal CA1 region data; (b) Activity of motifs from hippocampal CA1 region data; (c) Motifs from cortical neuron culture data; (d) Activity of motifs from cortical neuron culture data.
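The per-column traces of figure 7 could be computed along the following lines; the 50% membership criterion is the one stated in the text, while the function name and the exclusion of single-neuron columns reflect our reading of the procedure rather than the released code.

```python
import numpy as np

def column_activity(Y, motif):
    """Percentage of member neurons active, per motif column and frame.

    Y: (n, m) binary spike matrix; motif: (n, tau) motif coefficients.
    Returns an array (tau, m); rows for columns with <= 1 member are NaN.
    """
    members = motif > 0.5 * motif.max()     # 50% of maximum coefficient
    tau = motif.shape[1]
    m = Y.shape[1]
    traces = np.full((tau, m), np.nan)
    for j in range(tau):
        col = members[:, j]
        if col.sum() > 1:                   # only columns with > 1 neuron
            traces[j] = 100.0 * Y[col].mean(axis=0)
    return traces
```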
Figure 6: Differences between the five runs for all five learned motifs from hippocampal CA1 region data. The plots show for each motif the difference to the best matching motif from every other run. We did this for the motifs identified in the original hippocampal CA1 region data (a), as well as for the motifs identified in the shuffled spike matrix (b). The motifs found in the shuffled matrix show much higher variability between runs than those found in the original matrix. Panels: (a) Difference between runs for motifs learned on the original matrix; (b) Difference between runs for motifs learned on the shuffled matrix.

5 Discussion

We have presented a new approach for the identification of motifs that is not limited to synchronous activity. Our method leverages sparsity constraints on the activity and the motifs themselves to allow a simple and elegant formulation that is able to learn motifs with temporal structure. Our algorithm extends convolutional coding methods with a novel optimization approach to allow modeling of interactions between neurons. The proposed algorithm is designed to identify motifs in data with temporal stationarity. Non-stationarities in the data, which are expected to appear especially in in vivo recordings, are not yet taken into account. In cases where non-stationarities are expected to be strong, the method for stationarity-segmentation introduced in [54] could be used before applying our algorithm to the data. Although our algorithm has some limitations in terms of non-stationarities, results on simulated datasets show that the proposed method outperforms others especially when identifying long motifs. Additionally, the algorithm shows stable performance on real datasets. Moreover, the results found on the cortical neuron culture dataset show that our method is able to detect assemblies within large sets of recorded neurons.

Figure 7: Percentage of active neurons per column over time, for all motifs identified in the cortical neuron culture dataset. For each column of the two motifs displayed in figure 5c, we show the percentage of active neurons at every time frame. Vertical grey bars indicate points in time at which all significantly populated columns of a motif fire with at least 30% of their neurons. Their reoccurrence shows that the motifs really contain temporal structure and are repeated multiple times in the dataset.

Acknowledgments

SP and EK thank Eleonora Russo for sharing her knowledge on generating synthetic data and Fynn Bachmann for his support. LAC, BKH and CH thank Lowella Fortuno for technical assistance with cortical cultures and acknowledge the support by the Intramural Research Program of the NIH, NIDA. DD acknowledges partial financial support by DFG Du 354/8-1. SP, EK, MB, DD, FD and FAH gratefully acknowledge partial financial support by DFG SFB 1134.

References

[1] D. Hebb, The Organization of Behaviour: A Neuropsychological Theory. Wiley, 1949.
[2] D. Marr, D. Willshaw, and B. McNaughton, Simple memory: a theory for archicortex. Springer, 1991.
[3] W. Singer, "Synchronization of cortical activity and its putative role in information processing and learning," Annual Review of Physiology, vol. 55, no. 1, pp. 349–374, 1993.
[4] M. A. Nicolelis, E. E. Fanselow, and A. A. Ghazanfar, "Hebb's dream: the resurgence of cell assemblies," Neuron, vol. 19, no. 2, pp. 219–221, 1997.
[5] Y. Ikegaya, G. Aaron, R. Cossart, D. Aronov, I. Lampl, D. Ferster, and R. Yuste, "Synfire chains and cortical songs: temporal modules of cortical activity," Science, vol. 304, no. 5670, pp. 559–564, 2004.
[6] P. Cossart and P. J. Sansonetti, "Bacterial invasion: The paradigms of enteroinvasive pathogens," Science, vol. 304, no. 5668, pp. 242–248, 2004.
[7] G. Buzsáki, "Large-scale recording of neuronal ensembles," Nature Neuroscience, vol. 7, no. 5, pp. 446–451, 2004.
[8] A. Mokeichev, M. Okun, O. Barak, Y. Katz, O. Ben-Shahar, and I. Lampl, "Stochastic emergence of repeating cortical motifs in spontaneous membrane potential fluctuations in vivo," Neuron, vol. 53, no. 3, pp. 413–425, 2007.
[9] E. Pastalkova, V. Itskov, A. Amarasingham, and G. Buzsáki, "Internally generated cell assembly sequences in the rat hippocampus," Science, vol. 321, no. 5894, pp. 1322–1327, 2008.
[10] I. H. Stevenson and K. P. Kording, "How advances in neural recording affect data analysis," Nature Neuroscience, vol. 14, no. 2, pp. 139–142, 2011.
[11] M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, "Whole-brain functional imaging at cellular resolution using light-sheet microscopy," Nature Methods, vol. 10, no. 5, pp. 413–420, 2013.
[12] L. Carrillo-Reid, J.-e. K. Miller, J. P. Hamm, J. Jackson, and R. Yuste, "Endogenous sequential cortical activity evoked by visual stimuli," Journal of Neuroscience, vol. 35, no. 23, pp. 8813–8828, 2015.
[13] S. Grün, M. Diesmann, and A. Aertsen, "Unitary events in multiple single-neuron spiking activity: I. Detection and significance," Neural Computation, vol. 14, no. 1, pp. 43–80, 2002.
[14] S. Grün, M. Diesmann, and A. Aertsen, "Unitary events in multiple single-neuron spiking activity: II. Nonstationary data," Neural Computation, vol. 14, no. 1, pp. 81–119, 2002.
[15] B. Staude, S. Rotter, and S. Grün, "CuBIC: cumulant based inference of higher-order correlations in massively parallel spike trains," Journal of Computational Neuroscience, vol. 29, no. 1, pp. 327–350, 2010.
[16] B. Staude, S. Grün, and S. Rotter, "Higher-order correlations in non-stationary parallel spike trains: statistical modeling and inference," Frontiers in Computational Neuroscience, vol. 4, p. 16, 2010.
[17] V. Lopes-dos Santos, S. Ribeiro, and A. B. Tort, "Detecting cell assemblies in large neuronal populations," Journal of Neuroscience Methods, vol. 220, no. 2, pp. 149–166, 2013.
[18] A. C. Smith and P. C. Smith, "A set probability technique for detecting relative time order across multiple neurons," Neural Computation, vol. 18, no. 5, pp. 1197–1214, 2006.
[19] A. C. Smith, V. K. Nguyen, M. P. Karlsson, L. M. Frank, and P. Smith, "Probability of repeating patterns in simultaneous neural data," Neural Computation, vol. 22, no. 10, pp. 2522–2536, 2010.
[20] G. L. Gerstein, E. R. Williams, M. Diesmann, S. Grün, and C. Trengove, "Detecting synfire chains in parallel spike data," Journal of Neuroscience Methods, vol. 206, no. 1, pp. 54–64, 2012.
[21] E. Torre, P. Quaglio, M. Denker, T. Brochier, A. Riehle, and S. Grün, "Synchronous spike patterns in macaque motor cortex during an instructed-delay reach-to-grasp task," Journal of Neuroscience, vol. 36, no. 32, pp. 8329–8340, 2016.
[22] R. Yuste, J. N. MacLean, J. Smith, and A. Lansner, "The cortex as a central pattern generator," Nature Reviews Neuroscience, vol. 6, no. 6, pp. 477–483, 2005.
[23] E. Russo and D. Durstewitz, "Cell assemblies at multiple time scales with arbitrary lag constellations," eLife, vol. 6, p. e19428, 2017.
[24] P. Smaragdis, "Non-negative matrix factor deconvolution; extraction of multiple sound sources from monophonic inputs," Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 3195, pp. 494–499, 2004.
[25] P. D. O'Grady and B. A. Pearlmutter, "Convolutive non-negative matrix factorisation with a sparseness constraint," in 2006 16th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing, pp. 427–432, 2006.
[26] M. A. Nicolelis, L. A. Baccala, R. Lin, and J. K. Chapin, "Sensorimotor encoding by synchronous neural ensemble activity at multiple levels of the somatosensory system," Science, vol. 268, no. 5215, pp. 1353–1358, 1995.
[27] V. Lopes-dos Santos, S. Conde-Ocazionez, M. A. L. Nicolelis, S. T. Ribeiro, and A. B. L. Tort, "Neuronal assembly detection and cell membership specification by principal component analysis," PLOS ONE, vol. 6, no. 6, pp. 1–16, 2011.
[28] P. Comon, "Independent component analysis, a new concept?," Signal Processing, vol. 36, no. 3, pp. 287–314, 1994.
[29] A. Cichocki and R. Zdunek, "Multilayer nonnegative matrix factorisation," Electronics Letters, vol. 42, no. 16, pp. 947–948, 2006.
[30] J. T. Vogelstein, A. M. Packer, T. A. Machado, T. Sippy, B. Babadi, R. Yuste, and L. Paninski, "Fast nonnegative deconvolution for spike train inference from population calcium imaging," Journal of Neurophysiology, vol. 104, no. 6, pp. 3691–3704, 2010.
[31] R. Rubinstein, M. Zibulevsky, and M. Elad, "Double sparsity: Learning sparse dictionaries for sparse signal approximation," IEEE Transactions on Signal Processing, vol. 58, no. 3, pp. 1553–1564, 2010.
[32] E. A. Pnevmatikakis, T. A. Machado, L. Grosenick, B. Poole, J. T. Vogelstein, and L. Paninski, "Rank-penalized nonnegative spatiotemporal deconvolution and demixing of calcium imaging data," in Computational and Systems Neuroscience (Cosyne) 2013, 2013.
[33] E. A. Pnevmatikakis and L. Paninski, "Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions," in NIPS, 2013.
[34] F. Diego Andilla and F. A. Hamprecht, "Sparse space-time deconvolution for calcium image analysis," in Advances in Neural Information Processing Systems 27 (Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds.), pp. 64–72, Curran Associates, Inc., 2014.
[35] E. A. Pnevmatikakis, Y. Gao, D. Soudry, D. Pfau, C. Lacefield, K. Poskanzer, R. Bruno, R. Yuste, and L. Paninski, "A structured matrix factorization framework for large scale calcium imaging data analysis," arXiv:1409.2903 [q-bio, stat].
[36] F. Diego and F. A. Hamprecht, "Learning multi-level sparse representations," in NIPS, 2013.
[37] R. J. Weiss and J. P. Bello, "Identifying repeated patterns in music using sparse convolutive non-negative matrix factorization," in ISMIR, 2010.
[38] H. Zou and T. Hastie, "Regularization and variable selection via the elastic net," Journal of the Royal Statistical Society, Series B (Statistical Methodology), vol. 67, no. 2, pp. 301–320, 2005.
[39] D. P. Bertsekas, Nonlinear Programming. Athena Scientific, 1999.
[40] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society, Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
[41] P. C. Hansen, "Deconvolution and regularization with Toeplitz matrices," Numerical Algorithms, vol. 29, no. 4, pp. 323–378, 2002.
[42] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397–3415, 1993.
[43] M. Protter and M. Elad, "Image sequence denoising via sparse and redundant representations," IEEE Transactions on Image Processing, vol. 18, no. 1, pp. 27–35, 2009.
[44] A. Szlam, K. Kavukcuoglu, and Y. LeCun, "Convolutional matching pursuit and dictionary training," Computer Research Repository (arXiv), 2010.
[45] W. P. Pierskalla, "Letter to the editor – the multidimensional assignment problem," Operations Research, vol. 16, no. 2, pp. 422–431, 1968.
[46] Y. N. Billeh, M. T. Schaub, C. A. Anastassiou, M. Barahona, and C. Koch, "Revealing cell assemblies at multiple levels of granularity," Journal of Neuroscience Methods, vol. 236, pp. 92–106, 2014.
[47] T. Pfeiffer, A. Draguhn, S. Reichinnek, and M. Both, "Optimized temporally deconvolved Ca2+ imaging allows identification of spatiotemporal activity patterns of CA1 hippocampal ensembles," NeuroImage, vol. 94, pp. 239–249, 2014.
[48] G. Buzsáki, "Memory consolidation during sleep: A neurophysiological perspective," Journal of Sleep Research, vol. 7 Suppl 1, pp. 17–23, 1998.
[49] G. Girardeau, K. Benchenane, S. I. Wiener, G. Buzsáki, and M. B. Zugaro, "Selective suppression of hippocampal ripples impairs spatial memory," Nature Neuroscience, vol. 12, no. 10, pp. 1222–1223, 2009.
[50] G. Girardeau and M. Zugaro, "Hippocampal ripples and memory consolidation," Current Opinion in Neurobiology, vol. 21, no. 3, pp. 452–459, 2011.
[51] D. B. Howard, K. Powers, Y. Wang, and B. K. Harvey, "Tropism and toxicity of adeno-associated viral vector serotypes 1, 2, 5, 6, 7, 8, and 9 in rat neurons and glia in vitro," Virology, vol. 372, no. 1, pp. 24–34, 2008.
[52] C. F. Stevens, "Neurotransmitter release at central synapses," Neuron, vol. 40, no. 2, pp. 381–388, 2003.
[53] H. Markram, Y. Wang, and M. Tsodyks, "Differential signaling via the same axon of neocortical pyramidal neurons," Proceedings of the National Academy of Sciences, vol. 95, no. 9, pp. 5323–5328, 1998.
[54] C. S. Quiroga-Lombard, J. Hass, and D. Durstewitz, "Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation," Journal of Neurophysiology, vol. 110, no. 2, pp. 562–572, 2013.
Quantifying how much sensory information in a neural code is relevant for behavior

Giuseppe Pica^{1,2} [email protected]
Houman Safaai^{1,3} [email protected]
Tommaso Fellin^{2,6} [email protected]
Eugenio Piasini^1 [email protected]
Caroline A. Runyan^{3,4} [email protected]
Christoph Kayser^{7,8} [email protected]
Christopher D. Harvey^3 [email protected]
Mathew E. Diamond^5 [email protected]
Stefano Panzeri^{1,2} [email protected]

1 Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
2 Neural Coding Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
3 Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
4 Department of Neuroscience, University of Pittsburgh, Center for the Neural Basis of Cognition, Pittsburgh, USA
5 Tactile Perception and Learning Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
6 Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, Genova 16163, Italy
7 Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
8 Department of Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615 Bielefeld, Germany

Abstract

Determining how much of the sensory information carried by a neural code contributes to behavioral performance is key to understanding sensory function and neural information flow. However, there are as yet no analytical tools to compute this information that lies at the intersection between sensory coding and behavioral readout. Here we develop a novel measure, termed the information-theoretic intersection information I_{II}(S; R; C), that quantifies how much of the sensory information carried by a neural response R is used for behavior during perceptual discrimination tasks. Building on the Partial Information Decomposition framework, we define I_{II}(S; R; C) as the part of the mutual information between the stimulus S and the response R that also informs the consequent behavioral choice C. We compute I_{II}(S; R; C) in the analysis of two experimental cortical datasets, to show how this measure can be used to compare quantitatively the contributions of spike timing and spike rates to task performance, and to identify brain areas or neural populations that specifically transform sensory information into choice.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

Perceptual discrimination is a brain computation that is key to survival, and that requires both accurately encoding sensory stimuli and generating appropriate behavioral choices (Fig. 1). Previous studies have mostly focused separately either on the former stage, called sensory coding, by analyzing how neural activity encodes information about the external stimuli [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], or on the latter stage, called behavioral readout, by analyzing the relationships between neural activity and choices in the absence of sensory signal or at fixed sensory stimulus (to eliminate spurious choice variations of neural response due to stimulus-related selectivity) [11, 12, 13]. The separation between studies of sensory coding and readout has led to a lack of consensus on what is the neural code, which here we take as the key set of neural activity features for perceptual discrimination.
Most studies have in fact defined the neural code as the set of features carrying the most sensory information [1, 2, 8], but this focus has left unclear whether the brain uses the information in such features to perform perception [14, 15, 16]. Recently, Ref. [17] proposed to determine if neural sensory representations are behaviorally relevant by evaluating the association, in single trials, between the information about the sensory stimuli S carried by the neural activity R and the behavioral choices C performed by the animal, or, in other words, to evaluate the intersection between sensory coding and behavioral readout. More precisely, Ref. [17] suggested that the hallmark of a neural feature R being relevant for perceptual discrimination is that the subject will perform correctly more often when the neural feature R provides accurate sensory information. Ref. [17] proposed to quantify this intuition by first decoding sensory stimuli from single-trial neural responses and then computing the increase in behavioral performance when such decoding is correct. This intersection framework provides several advantages with respect to earlier approaches based on computing the correlations between trial-averaged psychometric performance and trial-averaged neurometric performance [13, 14, 18], because it quantifies associations between sensory information coding and choices within the same trial, instead of considering the similarity of trial-averaged neural stimulus coding and trial-averaged behavioral performance. However, the intersection information measure proposed in Ref. [17] relies strongly on the specific choice of a stimulus decoding algorithm, which might not match the unknown decoding algorithms of the brain. Further, decoding only the most likely stimulus from neural responses throws away part of the full structure in the measured statistical relationships between S, R and C [3]. To overcome these limitations, here we convert the conceptual notions described in [17] into a novel and rigorous definition of information-theoretic intersection information between sensory coding and behavioral readout, I_{II}(S; R; C).

We construct the information-theoretic intersection I_{II}(S; R; C) by building on recent extensions of classical information theory, called Partial Information Decompositions (PID), that are suited to the analysis of trivariate systems [19, 20, 21]. We show that I_{II}(S; R; C) is endowed with a set of formal properties that a measure of intersection information should satisfy. Finally, we use I_{II}(S; R; C) to analyze both simulated and real cortical activity. These applications show how I_{II}(S; R; C) can be used to quantitatively redefine the neural code as the set of neural features that carry sensory information which is also used for task performance, and to identify brain areas where sensory information is read out for behavior.

2 An information-theoretic definition of intersection information

Throughout this paper, we assume that we are analyzing neural activity recorded during a perceptual discrimination task (Fig. 1). Over the course of an experimental trial, a stimulus s ∈ {s_1, ..., s_{N_s}} is presented to the animal while simultaneously some neural features r (we assume that r either takes discrete values or is discretized into a certain number of bins) and the behavioral choice c ∈ {c_1, ..., c_{N_c}} are recorded. We assume that the joint probability distribution p(s, r, c) has been empirically estimated by sampling these variables simultaneously over repeated trials (a minimal sketch of this estimation step is given below).
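As a minimal sketch, assuming the stimuli, responses and choices of each trial are integer-coded, the joint distribution can be estimated by simple counting as below; the function name and the optional pseudocount are illustrative choices, not part of the paper.

```python
import numpy as np

def estimate_joint(stimuli, responses, choices, n_s, n_r, n_c, pseudocount=0.0):
    """Empirical p(s, r, c) from per-trial integer codes (0-based)."""
    p = np.full((n_s, n_r, n_c), pseudocount, dtype=float)
    for s, r, c in zip(stimuli, responses, choices):
        p[s, r, c] += 1.0
    return p / p.sum()
```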
After the animal learns to perform the task, there will be a statistical association between the presented stimulus S and the behavioral choice C, and the Shannon information I(S : C) between stimulus and choice will therefore be positive. How do we quantify the intersection information between the sensory coding s → r and the consequent behavioral readout r → c that involves the recorded neural activity features r in the same trial? Clearly, the concept of intersection information must require the analysis of the full trivariate probability distribution p(s, r, c) during perceptual discriminations. The well-established, classical tools of information theory [22] provide a framework for assessing statistical associations between two variables only. Indeed, Shannon's mutual information allows us to quantify the sensory information I(S : R) that the recorded neural features carry about the presented stimuli [3] and, separately, the choice information I(R : C) that the recorded neural features carry about the behavior. To assess intersection information in single trials, we need to extend the classic information-theoretic tools to the trivariate analysis of S, R, C.

Figure 1: Schematics of the information flow in a perceptual discrimination task: sensory information I(S : R) (light blue block) is encoded in the neural activity R. This activity informs the behavioral choice C and so carries information about it (I(R : C), green block). III(S; R; C) is both a part of I(S : R) and of I(R : C), and corresponds to the sensory information used for behavior.

More specifically, we argue that an information-theoretic measure of intersection information should quantify the part of the sensory information which also informs the choice. To quantify this concept, we start from the tools of the Partial Information Decomposition (PID) framework. This framework decomposes the mutual information that two stochastic variables (the sources) carry about a third variable (the target) into four nonnegative information components. These components characterize distinct information-sharing modes among the sources and the target on a finer scale than Shannon information quantities [19, 20, 23, 24]. In our analysis of the statistical dependencies of S, R, C, we start from the mutual information I(C : (S, R)) that S and R carry about C. Direct application of the PID framework then leads to the following nonnegative decomposition:

(1)  I(C : (S, R)) = SI(C : {S; R}) + CI(C : {S; R}) + UI(C : {S \ R}) + UI(C : {R \ S}),

where SI, CI and UI are respectively shared (or redundant), complementary (or synergistic) and unique information quantities as defined in [20]. In more detail:

- SI(C : {S; R}) is the information about the choice that we can extract from either of S and R, i.e. the redundant information about C shared between S and R.
- UI(C : {S \ R}) is the information about the choice that we can only extract from the stimulus but not from the recorded neural response. It thus includes stimulus information relevant to the behavioral choice that is not represented in R.
- UI(C : {R \ S}) is the information about the choice that we can only extract from the neural response but not from the stimulus. It thus includes choice information in R that arises from stimulus-independent variables, such as level of attention or behavioral bias.
- CI(C : {S; R}) is the information about choice that can only be gathered if both S and R are simultaneously observed with C, but that is not available when only one of S and R is simultaneously observed with C. More precisely, it is the part of I(C : (S, R)) which does not overlap with I(S : C) nor with I(R : C) [19].

Several mathematical definitions for the PID terms described above have been proposed in the literature [19, 20, 23, 24]. In this paper, we employ that of Bertschinger et al. [20], which is widely used for tripartite systems [25, 26]. Accordingly, we consider the space Δ_p of all probability distributions q(s, r, c) with the same pairwise marginal distributions q(s, c) = p(s, c) and q(r, c) = p(r, c) as the original distribution p(s, r, c). The redundant information SI(C : {S; R}) is then defined as the solution of the following convex optimization problem on the space Δ_p [20]:

(2)  SI(C : {S; R}) := max_{q ∈ Δ_p} CoI_q(S; R; C),

where CoI_q(S; R; C) := I_q(S : R) − I_q(S : R | C) is the co-information corresponding to the probability distribution q(s, r, c). All other PID terms are then directly determined by the value of SI(C : {S; R}) [19].
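As an independent illustration of the optimization problem in Eq. 2 (the authors' own implementation is the Matlab package referenced below), here is a naive Python sketch that maximizes the co-information over the marginal-preserving space Δ_p with a generic constrained optimizer. All names are ours, and SciPy's SLSQP is only a stand-in for the dedicated gradient-descent scheme; for realistically sized response alphabets a specialized solver would be needed.

```python
import numpy as np
from scipy.optimize import minimize

def _mi(pxy):
    """Mutual information (in bits) of a 2-D joint distribution."""
    px = pxy.sum(1, keepdims=True)
    py = pxy.sum(0, keepdims=True)
    nz = pxy > 1e-12
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

def _coinfo(q):
    """Co-information CoI_q = I_q(S:R) - I_q(S:R|C) for q indexed [s, r, c]."""
    coi = _mi(q.sum(axis=2))                  # I_q(S:R)
    for k in range(q.shape[2]):
        pc = q[:, :, k].sum()
        if pc > 1e-12:
            coi -= pc * _mi(q[:, :, k] / pc)  # subtract I_q(S:R|C)
    return coi

def shared_info(p):
    """SI(C:{S;R}) of Bertschinger et al. (Eq. 2): maximize the co-information
    over all q(s,r,c) >= 0 with the same (s,c) and (r,c) marginals as p."""
    ns, nr, nc = p.shape
    sc, rc = p.sum(axis=1), p.sum(axis=0)
    cons = ([{'type': 'eq',
              'fun': lambda x, i=i, k=k: x.reshape(ns, nr, nc)[i, :, k].sum() - sc[i, k]}
             for i in range(ns) for k in range(nc)] +
            [{'type': 'eq',
              'fun': lambda x, j=j, k=k: x.reshape(ns, nr, nc)[:, j, k].sum() - rc[j, k]}
             for j in range(nr) for k in range(nc)])
    res = minimize(lambda x: -_coinfo(x.reshape(ns, nr, nc)), p.ravel(),
                   method='SLSQP', bounds=[(0.0, 1.0)] * p.size, constraints=cons)
    return -res.fun
```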
However, none of the existing PID information components described above yet fits the notion of intersection information, as none of them quantifies the part of the sensory information I(S : R) carried by the neural activity R that also informs the choice C. The PID quantity that seems closest to this notion is the redundant information that S and R share about C, SI(C : {S; R}). However, previous works pointed out the subtle possibility that even two statistically independent variables (here, S and R) can share information about a third variable (here, C) [23, 27]. This possibility rules out using SI(C : {S; R}) as a measure of intersection information, since we expect that a neural response R which does not encode stimulus information (i.e., such that S ⊥ R) cannot carry intersection information. We thus reason that the notion of intersection information should be quantified as the part of the redundant information that S and R share about C that is also a part of the sensory information I(S : R). This kind of information is even finer than the existing information components of the PID framework described above, and we recently found that comparing information components of the three different Partial Information Decompositions of the same probability distribution p(s, r, c) leads to the identification of finer information quantities [21]. We take advantage of this insight to quantify the intersection information by introducing the following new definition:

(3)  III(S; R; C) = min{SI(C : {S; R}), SI(S : {R; C})}.

This definition allows us to further decompose the redundancy SI(C : {S; R}) into two nonnegative information components, as

(4)  SI(C : {S; R}) = III(S; R; C) + X(R),

where X(R) := SI(C : {S; R}) − III(S; R; C) ≥ 0. This finer decomposition is useful because, unlike SI(C : {S; R}), III(S; R; C) has the property that S ⊥ R implies III(S; R; C) = 0 (see Supp. Info Sec. 1). This is a first basic property that we expect from a meaningful definition of intersection information. Moreover, III(S; R; C) satisfies a number of additional important properties (see proofs in Supp. Info Sec. 1) that a measure of intersection information should satisfy:

1. III(S; R; C) ≤ I(S : R): intersection information should be a part of the sensory information extractable from the recorded response R, namely the part which is relevant for the choice;
2. III(S; R; C) ≤ I(R : C): intersection information should be a part of the choice information extractable from the recorded response R, namely the part which is related to the stimulus;
3. III(S; R; C) ≤ I(S : C): intersection information should be a part of the information between stimulus and choice, namely the part which can be extracted from R;
4. III(S; {R1, R2}; C) ≥ III(S; R1; C), III(S; R2; C), as the task-relevant information that can be extracted from any recorded neural features should not be smaller than the task-relevant information that can be extracted from any subset of those features.

The measure III(S; R; C) thus translates all the conceptual features of intersection information into a well-defined analytical tool: Eq. 3 defines how III(S; R; C) can be computed numerically from real data once the distribution p(s, r, c) is estimated empirically. In practice, the estimated p(s, r, c) defines the space Δ_p where the problem defined in Eq. 2 should be solved. We developed a gradient-descent optimization algorithm to solve these problems numerically, with a Matlab package that is freely available for download and reuse through Zenodo and Github: https://doi.org/10.5281/zenodo.850362 (see Supp. Info Sec. 2). Computing III(S; R; C) allows the experimenter to estimate the portion of the sensory information in a neural code R that is read out for behaviour during a perceptual discrimination task, and thus to quantitatively evaluate hypotheses about neural coding from empirical data.

Figure 2: Some example cases where III(S; R1; C) = 0 for a neural code R1. Each panel contains a probabilistic graphical model representation of p(s, r, c), augmented by a color code illustrating the nature of the information carried by statistical relationships between variables. Red: information about the stimulus; blue: information about anything else (internal noise, distractors, and so on). III(Ri) > 0 only if the arrows linking Ri with S and C have the same color. a: I(S : R2) > I(S : R1) = 0, I(C : R2) = I(C : R1), III(R2) > III(S; R1; C) = 0. b: I(S : R2) = I(S : R1), I(C : R2) > I(C : R1) = 0, III(R2) > III(S; R1; C) = 0. c: I(S : R1) > 0, I(C : R1) > 0, I(S : C) = 0. d: I(S : R1) > 0, I(C : R1) > 0, I(S : C) > 0, III(S; R1; C) = 0.

2.1 Ruling out neural codes for task performance

A first important use of III(S; R; C) is that it permits ruling out recorded neural features as candidate neural codes. In fact, the neural features R for which III(S; R; C) = 0 cannot contribute to task performance. It is interesting, both conceptually and to interpret empirical results, to characterize some scenarios where III(S; R1; C) = 0 for a recorded neural feature R1. III(S; R1; C) = 0 may correspond, among others, to one of the four scenarios illustrated in Fig. 2:

- R1 drives behavior but it is not informative about the stimulus, i.e. I(R1 : S) = 0 (Fig. 2a; see the numerical sketch after this list);
- R1 encodes information about S but it does not influence behavior, i.e. I(R1 : C) = 0 (Fig. 2b);
- R1 is informative about both S and C, but I(S : C) = 0 (Fig. 2c, see also Supp. Info Sec. 2);
- I(S : R1) > 0, I(R1 : C) > 0, I(S : C) > 0, but the sensory information I(S : R1) is not read out to drive the stimulus-relevant behavior and, at the same time, the way R1 affects the behaviour is not related to the stimulus (Fig. 2d, see also Supp. Info Sec. 2).
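Continuing the sketch above, here is Eq. 3 in code together with a numerical check of the scenario in Fig. 2a: a response statistically independent of the stimulus must carry zero intersection information, even though it drives the choice. The trial counts and probabilities are arbitrary illustrative choices.

```python
import numpy as np

def intersection_info(p):
    """III(S;R;C) = min{SI(C:{S;R}), SI(S:{R;C})} (Eq. 3), for p indexed
    [s, r, c]. The second term reuses shared_info with the axes permuted so
    that S becomes the target and (R, C) the sources."""
    return min(shared_info(p), shared_info(np.transpose(p, (1, 2, 0))))

# Fig. 2a scenario: R1 is driven by a stimulus-independent internal signal
# that also drives C, so I(S:R1) = 0 and III must vanish.
rng = np.random.default_rng(0)
n = 20000
s = rng.integers(0, 2, n)                      # stimulus, ignored by R1
r1 = rng.integers(0, 2, n)                     # internal signal, independent of S
c = np.where(rng.random(n) < 0.8, r1, 1 - r1)  # choice follows R1, not S

p = np.zeros((2, 2, 2))
np.add.at(p, (s, r1, c), 1.0)
p /= n
print(intersection_info(p))                    # ~0, up to sampling noise
```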
3 Testing our measure of intersection information with simulated data

To better illustrate the properties of our measure of information-theoretic intersection information III(S; R; C), we simulated a very simple neural scheme that may underlie a perceptual discrimination task. As illustrated in Fig. 3a, in every simulated trial we randomly drew a stimulus s ∈ {s1, s2}, which was then linearly converted to a continuous variable that represents the neural activity in the simulated sensory cortex. This stimulus-response conversion was affected by an additive Gaussian noise term (which we term "sensory noise") whose amplitude was varied parametrically by changing the value of its standard deviation σ_S. The simulated sensory-cortex activity was then separately converted, with two distinct linear transformations, to two continuous variables that simulated two higher-level brain regions. These two variables are termed "parietal cortex" (R) and "bypass pathway" (R0), respectively. We then combined R and R0 with parametrically tunable weights (we indicate the ratio between the R-weight and the R0-weight with β, see Supp. Info Sec. 4) and added Gaussian noise (termed "choice noise"), whose standard deviation σ_C was varied parametrically, to eventually produce another continuous variable that was fed to a linear discriminant. We took as the simulated behavioral choice the binary output of this final linear discriminant, which in our model was meant to represent the readout mechanism in high-level brain regions that informs the motor output. We ran simulations of this model by varying parametrically the sensory noise σ_S, the choice noise σ_C, and the parietal-to-bypass ratio β, to investigate how III(S; R; C) depended on these parameters.

Figure 3: a) Schematics of the simulated model used to test our framework. In each trial, a binary stimulus is linearly converted into a "sensory-cortex activity" after the addition of "sensory noise". This signal is then separately converted to two higher-level activities, namely a "parietal-cortex activity" R and a "bypass-pathway activity" R0. R and R0 are then combined with parametrically tunable weights and, after the addition of "choice noise", this signal is fed to a linear discriminant. The output of the discriminant, that is the decoded stimulus ŝ, drives the binary choice c. We computed the intersection information of R to extract the part of the stimulus information encoded in the "parietal cortex" that contributes to the final choice. b-d) Intersection information for the simulations represented in a). Mean ± s.e.m. of III(S; R; C) across 100 experimental sessions, each relying on 100 simulated trials, as a function of three independently varied simulation parameters. b) Intersection information decreases when the stimulus representation in the parietal cortex R is more noisy (higher sensory noise σ_S). c) Intersection information decreases when the beneficial contribution of the stimulus information carried by parietal cortex R to the final choice is reduced by increasing choice noise σ_C. d) Intersection information increases when the parietal cortex R contributes more strongly to the final choice by increasing the parietal-to-bypass ratio β.
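A minimal sketch of one simulated session under this scheme, reusing the estimate_joint helper from Section 2. The unit gains, the number of response bins, and the small independent noise on the two higher-level pathways (which keeps R and R0 from being identical copies of the sensory signal) are our own assumptions, since the text leaves these constants unspecified.

```python
import numpy as np

def simulate_session(sigma_s=0.5, sigma_c=0.5, beta=1.0, n_trials=100,
                     n_r_bins=4, seed=None):
    """One session of the Fig. 3a model (illustrative parameter choices)."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, n_trials)                         # binary stimulus
    sensory = s + sigma_s * rng.standard_normal(n_trials)    # sensory cortex
    # two higher-level copies; small independent noise is an added assumption
    r = sensory + 0.1 * rng.standard_normal(n_trials)        # "parietal cortex" R
    r0 = sensory + 0.1 * rng.standard_normal(n_trials)       # "bypass pathway" R0
    drive = (beta * r + r0) / (beta + 1.0)                   # weighted combination
    noisy = drive + sigma_c * rng.standard_normal(n_trials)  # add choice noise
    c = (noisy > 0.5).astype(int)                            # linear discriminant
    return estimate_joint(s, r, c, n_r_bins)                 # p_session(s, r, c)
```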
In each simulated session, we estimated the joint probability p_session(s, r, c) of the stimulus S, the response in parietal cortex R, and the choice C, from 100 simulated trials. We computed, separately for each simulated session, an intersection information III(S; R; C) value from the estimated p_session(s, r, c). Here, and in all the analyses presented throughout the paper, we used a quadratic extrapolation procedure to correct for the limited-sampling bias of information [28]. In Fig. 3b-d we show the mean ± s.e.m. of III(S; R; C) values across 100 independent experimental sessions, as a function of each of the three simulation parameters. We found that III(S; R; C) decreases with increasing σ_S (Fig. 3b). This result is explained by the fact that increasing σ_S reduces the amount of stimulus information that is passed to the simulated parietal activity R, and thus also reduces the portion of such information that can inform choice and can be used to perform the task appropriately. We found that III(S; R; C) decreases with increasing σ_C (Fig. 3c), consistently with the intuition that for higher values of σ_C the choice depends more weakly on the activity of the simulated parietal activity R, which in turn also reduces how accurately the choice reflects the stimulus in each trial. We also found that III(S; R; C) increases with increasing β (Fig. 3d), because when β is larger the portion of stimulus information carried by the simulated parietal activity R that benefits the behavioral performance is larger.

4 Using our measure to rank candidate neural codes for task performance: studying the role of spike timing for somatosensory texture discrimination

The neural code was traditionally defined in previous studies as the set of features of neural activity that carry all or most sensory information. In this section, we show how III(S; R; C) can be used to quantitatively redefine the neural code as the set of features that contributes the most sensory information for task performance. The experimenter can thus use III(S; R; C) to rank a set of candidate neural features {R1, ..., RN} according to the numerical ordering III(S; R_{i1}; C) ≥ ... ≥ III(S; R_{iN}; C). An advantage of the information-theoretic nature of III(S; R; C) is that it quantifies intersection information on the meaningful scale of bits, and thus enables a quantitative comparison of different candidate neural codes. If for example III(S; R1; C) = 2 III(S; R2; C), we can quantitatively interpret that the code R1 provides twice as much information for task performance as R2. This interpretation is not as meaningful, for example, when comparing different values of fraction-correct measures [17]. To illustrate the power of III(S; R; C) for evaluating and ranking candidate neural codes, we apply it to real data to investigate a fundamental question: is the sensory information encoded in millisecond-scale spike times used by the brain to perform perceptual discrimination? Although many studies have shown that millisecond-scale spike times of cortical neurons encode sensory information not carried by rates, whether or not this information is used has remained controversial [16, 29, 30]. It could be, for example, that spike times cannot be read out because the biophysics of the readout neuronal systems is not sufficiently sensitive to transmit this information, or because the readout neural systems do not have access to a stimulus time reference that could be used to measure these spike times [31].
To investigate this question, we used intersection information to compute whether the millisecond-scale spike timing of neurons (n = 299 cells) in rat primary (S1) somatosensory cortex provides information that is used for performing a whisker-based texture discrimination task (Figure 4a-b). Full experimental details are reported in [32]. In particular, we compared III(S; timing; C) with the intersection information carried by rate, III(S; rate; C), i.e. information carried by spike counts over time scales of tens of milliseconds. We first computed a spike-timing feature by projecting the single-trial spike train onto a zero-mean timing template (constructed by linearly combining the first three spike-train PCs to maximize sensory information, following the procedure of [32]), whose shape indicated the weight assigned to each spike depending on its timing (Figure 4a). Then we computed a spike-rate feature by weighting the spikes with a flat template which assigns the same weight to spikes independently of their time. Note that this definition of timing, and in particular the fact that the timing template was zero-mean, ensured that the timing variable did not contain any rate information. We verified that this calculation provided timing and rate features that had negligible (−0.0030 ± 0.0001 across the population) Pearson correlation. The difficulty of the texture discrimination task was set so that the rat learned the task well but still made a number of errors in each session (mean behavioral performance 76.9%, p < 0.001 above chance, paired t-test). These error trials were used to decouple in part choice from stimulus coding and to assess the impact of the sensory neural codes on behavior by computing intersection information. We thus computed information across all trials, including both behaviorally correct and incorrect trials. We found that, across all trials and on average over the dataset, timing carried similar texture information to rate (Figure 4b) ((9 ± 2) × 10⁻³ bit in timing, (8.5 ± 1.1) × 10⁻³ bit in rate, p = 0.78, two-sample t-test), while timing carried more choice information than rate ((16 ± 1) × 10⁻³ bit in timing, (3.0 ± 0.7) × 10⁻³ bit in rate, p < 10⁻¹⁵, two-sample t-test). If we used only traditional measures of stimulus and choice information, it would be difficult to decide which code is most helpful for task performance. However, when we applied our new information-theoretic framework, we found that the intersection information III (Figure 4b) was higher for timing than for rate ((7 ± 1) × 10⁻³ bit in timing, (3.0 ± 0.6) × 10⁻³ bit in rate, p < 0.002, two-sample t-test), thus suggesting that spike timing is a more crucial neural code for texture perception than spike rate. Interestingly, intersection information III was approximately 80% of the total sensory information for timing, while it was only 30% of the total sensory information for rate. This suggests that in somatosensory neurons timing information about the texture is read out, and influences choice, more efficiently than rate information, contrary to what is widely assumed in the literature [34]. These results confirm early results that were obtained with a decoding-based intersection information measure [32]. However, the information-theoretic results in Fig. 4b have the advantage that they do not depend on the use of a specific decoder to calculate intersection information. Importantly, the new information-theoretic approach also allowed us to quantify the proportion of sensory information in a neural code that is read out downstream for behavior, and thus to obtain the novel conclusion that only spike timing is read out with high efficiency.

Figure 4: Intersection information for two experimental datasets. a: Simplified schematics of the experimental setup in [32]. Rats are trained to distinguish between textures with different degrees of coarseness (left), and neural spiking data from somatosensory cortex (S1) is decomposed into independent rate and timing components (right). b: Stimulus, choice and intersection information for the data in panel a. Spike timing carries as much sensory information (p = 0.78, two-sample t-test), but more choice information (p < 10⁻¹⁵), and more III (p < 0.002) than firing rate. c: Simplified schematics of the experimental setup in [33]. Mice are trained to distinguish between auditory stimuli located to their left or to their right. Neural activity is recorded in auditory cortex (AC) and posterior parietal cortex (PPC) with two-photon calcium imaging. d: Stimulus, choice and intersection information for the data in panel c. Stimulus information does not differ significantly between AC and PPC, but PPC has more choice information (p < 0.05) and more III than AC (p < 10⁻⁶, two-sample t-test).
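To make the feature construction used above concrete, here is a simplified sketch of the template projections. As a shortcut we take the timing template to be the leading principal component of the binned spike trains with its mean removed, rather than the information-maximizing combination of the first three PCs used in [32]; everything here is illustrative, not the published pipeline.

```python
import numpy as np

def timing_and_rate_features(spikes):
    """spikes: (n_trials, n_time_bins) array of binned spike trains.

    Returns per-trial timing and rate features. The rate template is flat
    (a plain spike count); the timing template is the leading PC of the
    trains with its mean subtracted, so adding a constant to every time bin
    leaves the timing feature unchanged (it carries no rate information)."""
    x = spikes - spikes.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    timing_template = vt[0] - vt[0].mean()    # zero-mean timing template
    rate_template = np.ones(spikes.shape[1])  # flat rate template
    return spikes @ timing_template, spikes @ rate_template
```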
When applying our information-theoretic formalism to these data, we found that similar stimulus (sound location) information was carried by the firing rate of neurons in AC and PPC (AC: (10 ± 3) × 10⁻³ bit, PPC: (5 ± 1) × 10⁻³ bit, p = 0.17, two-sample t-test). Cells in PPC carried more choice information than cells in AC (AC: (2.8 ± 1.4) × 10⁻³ bit, PPC: (6.4 ± 1.2) × 10⁻³ bit, p < 0.05, two-sample t-test). However, neurons in PPC had values of III ((3.6 ± 0.8) × 10⁻³ bit) higher (p < 10⁻⁶, two-sample t-test) than those of AC ((2.3 ± 0.8) × 10⁻³ bit): this suggests that the sensory information in PPC, though similar in amount to that of AC, is turned into behavior in a much larger proportion (Figure 4d). Indeed, the ratio between III(S; R; C) and sensory information was higher in PPC than in AC (AC: (24 ± 11)%, PPC: (73 ± 24)%, p < 0.03, one-tailed z-test). This finding reflects the associative nature of PPC as a sensory-motor interface. This result highlights the potential usefulness of III(S; R; C) as an important metric for the analysis of neuro-imaging experiments and the quantitative identification of areas transforming sensory information into choice.
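In practice, this kind of area comparison reduces to computing, for each area, the stimulus information, the choice information, III, and the readout fraction III/I(S : R). A small sketch reusing the helpers from Section 2 (the dictionary keys and the ranking idiom are our own):

```python
import numpy as np

def area_summary(p):
    """Stimulus info, choice info, III, and the fraction of sensory
    information read out, for one area's estimated p(s, r, c)."""
    i_sr = _mi(p.sum(axis=2))        # I(S:R)
    i_rc = _mi(p.sum(axis=0))        # I(R:C)
    iii = intersection_info(p)
    return {'I(S:R)': i_sr, 'I(R:C)': i_rc, 'III': iii,
            'readout fraction': iii / i_sr if i_sr > 0 else np.nan}

# rank candidate areas (or candidate codes) by III, as proposed in the text:
# areas = {'AC': p_ac, 'PPC': p_ppc}
# ranking = sorted(areas, key=lambda a: area_summary(areas[a])['III'],
#                  reverse=True)
```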
6 Discussion

Here, we derived a novel information-theoretic measure III(S; R; C) of the behavioral impact of the sensory information carried by the neural activity features R during perceptual discrimination tasks. The problem of understanding whether the sensory information in the recorded neural features really contributes to behavior is hotly debated in neuroscience [16, 17, 30]. As a consequence, much effort is being devoted to formulating advanced analytical tools to investigate this question [17, 40, 41]. A traditional and fruitful approach has been to compute the correlation between trial-averaged behavioral performance and trial-averaged stimulus decoding when presenting stimuli of increasing complexity [13, 14, 18]. However, this measure does not capture the relationship between fluctuations of neural sensory information and behavioral choice in the same experimental trial. To capture this single-trial relationship, Ref. [17] proposed to use a specific stimulus decoding algorithm to classify trials that give accurate sensory information, and then quantify the increase in behavioral performance in the trials where the sensory decoding is correct. However, this approach makes strong assumptions about the decoding mechanism, which may or may not be neurally plausible, and does not make use of the full structure of the trivariate S, R, C dependencies. In this work, we solved all the problems described above by extending the recent Partial Information Decomposition framework [19, 20] for the analysis of trivariate dependencies to identify III(S; R; C) as the part of the redundant information about C shared between S and R that is also a part of the sensory information I(S : R). This quantity satisfies several essential properties of a measure of intersection information between the sensory coding s → r and the consequent behavioral readout r → c, which we derived from the conceptual notions elaborated in Ref. [17]. Our measure III(S; R; C) provides a single-trial quantification of how much sensory information is used for behavior. This quantification refers to the absolute physical scale of bit units, and thus enables a direct comparison of different candidate neural codes for the analyzed task.

Furthermore, our measure has the advantages of information-theoretical approaches, which capture all statistical dependencies between the recorded quantities irrespective of their relevance to neural function, as well as of model-based approaches, which link empirical data directly with specific theoretical hypotheses about sensory coding and behavioral readout but depend strongly on their underlying assumptions (see e.g. [12]). An important direction for future expansions of this work will be to combine III(S; R; C) with interventional tools on neural activity, such as optogenetics. Indeed, the novel statistical tools in this work cannot distinguish whether the measured value of intersection information III(S; R; C) derives from the causal involvement of R in transmitting sensory information for behavior, or whether R only correlates with causal information-transmitting areas [17]. More generally, this work can help us map information flow and not only information representation. We have shown above how computing III(S; R; C) separates the sensory information that is transmitted downstream to affect the behavioral output from the rest of the sensory information that is not transmitted. Further, another interesting application of III arises if we replace the final choice C with other nodes of the brain networks, and compute with III(S; R1; R2) the part of the sensory information in R1 that is transmitted to R2. Even more generally, besides the analysis of neural information processing, our measure III can be used in the framework of network information theory: suppose that an input X = (X1, X2) (with X1 ⊥ X2) is encoded by two different parallel channels R1, R2, which are then decoded to produce collectively an output Y. Suppose further that experimental measurements in single trials can only determine the values of X, Y, and R1, while the values of X1, X2, Y1, Y2, R2 are experimentally inaccessible. As we show in Supp. Fig. 3, III(X; R1; Y) allows us to quantify the information between X and Y that passes through the channel R1, and thus does not pass through the channel R2.

7 Acknowledgements and author contributions

GP was supported by a Seal of Excellence Fellowship CONISC. SP was supported by Fondation Bertarelli. CDH was supported by grants from the NIH (MH107620 and NS089521). CDH is a New York Stem Cell Foundation Robertson Neuroscience Investigator. TF was supported by the grants ERC (NEURO-PATTERNS) and NIH (1U01NS090576-01). CK was supported by the European Research Council (ERC-2014-CoG; grant No 646657). Author contributions: SP, GP and EP conceived the project; GP and EP performed the project; CAR, MED and CDH provided experimental data; GP, EP, HS, CK, SP and TF provided materials and analysis methods; GP, EP and SP wrote the paper; all authors commented on the manuscript; SP supervised the project.

References

[1] W. Bialek, F. Rieke, R. R. de Ruyter van Steveninck, and D. Warland. Reading a neural code. Science, 252(5014):1854–1857, 1991.
[2] A. Borst and F. E. Theunissen. Information theory and neural coding. Nat. Neurosci., 2(11):947–957, 1999.
[3] R. Quian Quiroga and S. Panzeri. Extracting information from neuronal populations: information theory and decoding approaches. Nat. Rev. Neurosci., 10(3):173–185, 2009.
[4] D. V. Buonomano and W. Maass. State-dependent computations: spatiotemporal processing in cortical networks. Nat. Rev. Neurosci., 10:113–125, 2009.
[5] M. A. Harvey, H. P. Saal, J. F. Dammann III, and S. J. Bensmaia. Multiplexing stimulus information through rate and temporal codes in primate somatosensory cortex. PLOS Biology, 11(5):e1001558, 2013.
[6] C. Kayser, M. A. Montemurro, N. K. Logothetis, and S. Panzeri. Spike-phase coding boosts and stabilizes information carried by spatial and temporal spike patterns. Neuron, 61(4):597–608, 2009.
[7] A. Luczak, B. L. McNaughton, and K. D. Harris. Packet-based communication in the cortex. Nat. Rev. Neurosci., 16(12):745–755, 2015.
[8] S. Panzeri, N. Brunel, N. K. Logothetis, and C. Kayser. Sensory neural codes using multiplexed temporal scales. Trends Neurosci., 33(3):111–120, 2010.
[9] M. Shamir. Emerging principles of population coding: in search for the neural code. Curr. Opin. Neurobiol., 25:140–148, 2014.
[10] S. Panzeri, J. H. Macke, J. Gross, and C. Kayser. Neural population coding: combining insights from microscopic and mass signals. Trends Cogn. Sci., 19(3):162–172, 2015.
[11] K. H. Britten, W. T. Newsome, M. N. Shadlen, S. Celebrini, and J. A. Movshon. A relationship between behavioral choice and the visual responses of neurons in macaque MT. Vis. Neurosci., 13:87–100, 1996.
[12] R. M. Haefner, S. Gerwinn, J. H. Macke, and M. Bethge. Inferring decoding strategies from choice probabilities in the presence of correlated variability. Nat. Neurosci., 16:235–242, 2013.
[13] W. T. Newsome, K. H. Britten, and J. A. Movshon. Neuronal correlates of a perceptual decision. Nature, 341(6237):52–54, 1989.
[14] C. T. Engineer, C. A. Perez, Y. H. Chen, R. S. Carraway, A. C. Reed, J. A. Shetake, V. Jakkamsetti, K. Q. Chang, and M. P. Kilgard. Cortical activity patterns predict speech discrimination ability. Nat. Neurosci., 11:603–608, 2008.
[15] A. L. Jacobs, G. Fridman, R. M. Douglas, N. M. Alam, P. E. Latham, G. T. Prusky, and S. Nirenberg. Ruling out and ruling in neural codes. Proc. Natl. Acad. Sci. U.S.A., 106(14):5936–5941, 2009.
[16] R. Luna, A. Hernandez, C. D. Brody, and R. Romo. Neural codes for perceptual discrimination in primary somatosensory cortex. Nat. Neurosci., 8(9):1210–1219, 2005.
[17] S. Panzeri, C. D. Harvey, E. Piasini, P. E. Latham, and T. Fellin. Cracking the neural code for sensory perception by combining statistics, intervention, and behavior. Neuron, 93(3):491–507, 2017.
[18] R. Romo and E. Salinas. Flutter discrimination: neural codes, perception, memory and decision making. Nat. Rev. Neurosci., 4(3):203–218, 2003.
[19] P. Williams and R. Beer. Nonnegative decomposition of multivariate information. arXiv:1004.2515, 2010.
[20] N. Bertschinger, J. Rauh, E. Olbrich, J. Jost, and N. Ay. Quantifying unique information. Entropy, 16(4):2161–2183, 2014.
[21] G. Pica, E. Piasini, D. Chicharro, and S. Panzeri. Invariant components of synergy, redundancy, and unique information among three variables. Entropy, 19(9):451, 2017.
[22] C. E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27(3):379–423, 1948.
[23] M. Harder, C. Salge, and D. Polani. Bivariate measure of redundant information. Phys. Rev. E, 87(1):012130, 2013.
[24] V. Griffith and C. Koch. Quantifying synergistic mutual information. In Guided Self-Organization: Inception, pages 159–190. Springer Berlin Heidelberg, 2014.
[25] A. Barrett. Exploration of synergistic and redundant information sharing in static and dynamical gaussian systems. Phys. Rev. E, 91(5):052802, 2015.
[26] D. Chicharro. Quantifying multivariate redundancy with maximum entropy decompositions of mutual information. arXiv:1708.03845, 2017.
[27] N. Bertschinger, J. Rauh, E. Olbrich, and J. Jost. Shared information – new insights and problems in decomposing information in complex systems. In Proceedings of the ECCS 2012, Brussels, Belgium, 2012.
[28] S. P. Strong, R. Koberle, R. R. de Ruyter van Steveninck, and W. Bialek. Entropy and information in neural spike trains. Phys. Rev. Lett., 80:197–200, 1998.
[29] J. D. Victor and S. Nirenberg. Indices for testing neural codes. Neural Comput., 20(12):2895–2936, 2008.
[30] D. H. O'Connor, S. A. Hires, Z. V. Guo, N. Li, J. Yu, Q.-Q. Sun, D. Huber, and K. Svoboda. Neural coding during active somatosensation revealed using illusory touch. Nat. Neurosci., 16(7):958–965, 2013.
[31] S. Panzeri, R. A. A. Ince, M. E. Diamond, and C. Kayser. Reading spike timing without a clock: intrinsic decoding of spike trains. Phil. Trans. R. Soc. Lond. B, Biol. Sci., 369(1637):20120467, 2014.
[32] Y. Zuo, H. Safaai, G. Notaro, A. Mazzoni, S. Panzeri, and M. E. Diamond. Complementary contributions of spike timing and spike rate to perceptual decisions in rat S1 and S2 cortex. Curr. Biol., 25(3):357–363, 2015.
[33] C. A. Runyan, E. Piasini, S. Panzeri, and C. D. Harvey. Distinct timescales of population coding across cortex. Nature, 548:92–96, 2017.
[34] M. N. Shadlen and W. T. Newsome. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J. Neurosci., 18(10):3870–3896, 1998.
[35] J. I. Gold and M. N. Shadlen. The neural basis of decision making. Annu. Rev. Neurosci., 30(1):535–574, 2007.
[36] C. D. Harvey, P. Coen, and D. W. Tank. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature, 484(7392):62–68, 2012.
[37] D. Raposo, M. T. Kaufman, and A. K. Churchland. A category-free neural population supports evolving demands during decision-making. Nat. Neurosci., 17(12):1784–1792, 2014.
[38] K. Nakamura. Auditory spatial discriminatory and mnemonic neurons in rat posterior parietal cortex. J. Neurophysiol., 82(5):2503, 1999.
[39] J. P. Rauschecker and B. Tian. Mechanisms and streams for processing of "what" and "where" in auditory cortex. Proc. Natl. Acad. Sci. U.S.A., 97(22):11800–11806, 2000.
[40] R. Rossi-Pool, E. Salinas, A. Zainos, M. Alvarez, J. Vergara, N. Parga, and R. Romo. Emergence of an abstract categorical code enabling the discrimination of temporally structured tactile stimuli. Proc. Natl. Acad. Sci. U.S.A., 113(49):E7966–E7975, 2016.
[41] X. Pitkow, S. Liu, D. E. Angelaki, G. C. DeAngelis, and A. Pouget. How can single sensory neurons predict behavior? Neuron, 87(2):411–423, 2015.
Rational Parametrizations of Neural Networks

Uwe Helmke
Department of Mathematics, University of Regensburg, 8400 Regensburg, Germany

Robert C. Williamson
Department of Systems Engineering, Australian National University, Canberra 2601, Australia

Abstract

A connection is drawn between rational functions, the realization theory of dynamical systems, and feedforward neural networks. This allows us to parametrize single hidden layer scalar neural networks with (almost) arbitrary analytic activation functions in terms of strictly proper rational functions. Hence, we can solve the uniqueness of parametrization problem for such networks.

1 INTRODUCTION

Nonlinearly parametrized representations of functions φ: ℝ → ℝ of the form

(1.1)  φ(x) = Σ_{i=1}^n c_i σ(x − a_i),  x ∈ ℝ,

have attracted considerable attention recently in the neural network literature. Here σ: ℝ → ℝ is typically a sigmoidal function such as

(1.2)  σ(x) = (1 + e^{−x})^{−1},

but other choices than (1.2) are possible and of interest. Sometimes more complex representations such as

(1.3)  φ(x) = Σ_{i=1}^n c_i σ(b_i x − a_i)
Jr Note that for each linear operator T: V operator on V. --+ V, u(T): V --+ V is again a linear If we now make the substitution T := xl + A for x E C and A: V then u(xI + A) = 21. f u(z) ?z - x)I - A)-l dz 7rZ Jr --+ V IK-linear, Rational Parametrizations of Neural Networks becomes a function of the complex variable x, at least as long as r contains all the eigenvalues of xl + A. Using the change of variables := z - x we obtain e (2.2) u(xl + A) = ~ ( 27rZ where r' =r - Jr' u(x + e) (el - A)-I de x C ~ encircles all the eigenvalues of A. Given an arbitrary vector b E V and a linear functional c: V representation ---+- IK we achieve the (2.3) Note that in (2.3) the simple closed curve the two conditions (2.4) r (2.5) x +r r c C is arbitrary, as long as it satisfies encircles all the eigenvalues of A = {x +el eE r} c~. Let </>: 1I ---+- ~ be a real analytic function in a single variable x E 1I, defined on an interval II C ~. Definition 2.1 A quadruple (A, b, c, d) is called a finite-dimensional u-realization of </>: II ---+- ~ over a field of constants IK if for all x E 1I (2.6) </>(x) = cu(xl + A)b + d holds, where the right hand side is given by (2.3) and r is assumed to satisfy the conditions (2.4)-(2.5). Here d E IK, b E V, and A: V ---+- V, c: V ---+- IK are IK-linear maps and V is a finite dimensional IK-vector space. Definition 2.2 The dimension (or degree) of a u-realization is dimK V. The 0'degree of </>, denoted 817 (</?, is the minimal dimension of all u-realizations of </>. A minimal u-realization is a u-realization of minimal dimension 817 (</?. u-realizations are a straightforward extension of the system-theoretic notion of a realization of a transfer function. In this paper we will address the following specific questions concerning u-realizations. Q1 What are the existence and uniqueness properties of u-realizations? Q2 How can one characterize minimalu-realizations? Q3 How can one compute 817 (</?? 3 EXISTENCE OF IT-REALIZATIONS We now consider the question of existence of u-realizations. To set the stage, we consider the systems theory case u(x) = x-I first. Assume we are given a formal power senes N (3.1) </>(x) "" </>i = L.J 1 x". .,=0 z. N $00, 625 626 Helmke and Williamson ? and that (A, b, c) is a O'-realization in the sense of definition 2.1. The Taylor expansion of c(xI + A)-lb at is (for A nonsingular) 00 c(xI + A)-lb = 2:)-I)i cA-(i+l)bx i . (3.2) i=O Thus i (3.3) = 0, ... ,N. if and only if the expansions of (3.1) and (3.2) coincide up to order N. Observe [7] that ?(x) = c(xI + A)-lb and dim 'V < 00 ?(x) is rational with ?(oo) = 0. = The possibility of solving (3.3) is now easily seen as follows. Let 'V lR N + 1 = Map({O, ... ,N},lR) be the finite or infinite (N + I)-fold product space oflR. (Here Map(X, Y) denotes the set of all maps from X to Y.) If N is finite let (3.4) A-I [ b = For N = 00 O~ :.:: ~1 1 ?0.] E ]R(N+l)X(N+l), (10 ... O)T E'V, (~, ?o, ?l, ~~, ... , (~~~)!). c= we take A-I: lRN ---Io]RN as a shift operator A-I: ]RN ---Io]RN A-I: (xo, xl, . . . ) .-- -(0, xo, Xl, ?? ? ) b=(I,O, ... ), c=(0,?0,?I,?2/2!, ... ): (3.5) and We then have = =? Lemma 3.1 Let O'(x) Li 7txi be analytic at X and let (A, b, c) be a 0'realization of the formal power series ?( x) L~o !ffx i , N ~ 00 (i. e. matching of the first N + 1 derivatives of ?(x) and cO'(xI + A)b at X = 0). Then ?i (3.6) = = cO'(i)(A)b for i = = 0, ... , N. = Observe that for O'(x) x-I we have O'(i)(-A) i!(A-l)i+1 as before. 
The existence part of the realization question Ql can now be restated as Q4 Given O'(x):= L:o~xi and a sequence of real numbers (?o, ... ,?N), does there exist an (A, b, c) with (3.7) ?i = cO'(i)(A)b, i = 0, ... , N? Rational Parametrizations of Neural Networks Thus question Q1 is essentially a Loewner interpolation question (1,3]. Let Ii = cAib, f. E No, and let [ 0"1 Uo (3.8) F= Write 7 0"1 0"2 0"3 0"2 0"3 0"4 ::! 1= 10 II (3.9) h]= Then (3.6) (for N = 00) 12/ 2! 13/ 3 ! and [?] =[ (Ui+i)r;=o? ~q . can formally be written as [?] = F? (3.10) hJ. Of course, any meaningful interpretation of (3.10) requires that the infinite sums ,",00 W ? t 2 W i....Ji=O 17.+i i! Ii, z? E !"I0, eXls . Th?IS h appens, Clor examp 1e, I?f ,",00 i....Ji=O O"i+i < 00, z. E 1"10 and 2:~0 C'Yi Jj!)2 < 00 exist. We have already seen that every finite or infinite sequence h] has a realization (A, b, c). Thus we obtain Corollary 3.2 A function ?(x) admits a O"-realization if and only if [?] E image(F). = Corollary 3.3 Let H (/Hi )~=o. There exists a finite dimensionalO"-realization of ?(x) if and only if[?] Fh] with rankH < 00. In this case 617 (?) rankH. 4 = = UNIQUENESS OF a-REALIZATIONS In this section we consider the uniqueness of the representation (2.3). Definition 4.1 (c.f. [2]) A system {91, ... ,9n} of continuous functions 9i: JI -P lR?, defined on an interval IT C lR?, is said to satisfy a Haar* condition of order n on JI if 91, ... ,9n are linearly independent, i. e. For every Cl, . .. , Cn E lR? with 2:7:1 Ci9i(X) = 0 for all x E JI, then Cl = ... = Cn = O. Remark that The Haar* condition is implied by the stronger classical Haar condition 91(Xt} det [ : gn(xd for all distinct (xi)i=1 in IT. Equivalently, if 2:7=1 cigi(X) has n distinct roots in JI, then Cl = ... = Cn = o. Definition 4.2 A subset A of C is called self-conju9ate if a E A implies a E A. 627 628 Helmke and Williamson Let (1': ~ ---+ ~ be a continuous function and define (1'~~)(x) := (1'(i)(x + Zi). Let m '" := ("'1, ... ''''m) where L = n, "'j "'j EN, "'j ~ 1, j = 1, ... ,m j=l = denote a combination of n of size m. For a given combination", ("'1, ... , "'m) of n, let 1:= {I, ... ,m} and let Ji := {I, ... ,"'d. Let Zm := {ZI, ... ,zm} and let (i-I). (1' ("', Z) m := { (1'Zi : ~ E I ,J. E J} i . (4.1 ) = Definition 4.3 If for all m < n, for all combinations", ("'I, ... ''''m) of n of size m, and for any self-conjugate set Zm of distinct points, (1'("" Zm) satisfies a H aar* condition of order n, then (1' is said to be Haar generating of order n. Theorem 4.4 (Uniqueness) Let (1': ~ ---+ ~ be Haar generating of order at least 2n on 1I and let (A, b, c) and (A, b, c) be minimal (1'-realizations of order n of functions ? and ? respectively. Then the following equivalence holds c(1'(xI + A)b = c(1'(xI + A)b \:Ix E 1I (4.2) c(eI - A)-lb = c(eI - A)-Ii; \:Ie E ~. Conversely, if ({2) holds for almost all order n triples (A, b, c), (1': ~ ---+ ~ is Haar generating on 1I of order ~ n. The following result gives examples of activation functions (1': Haar generating. (A, b, c), ~ ---+ ~ then which are Lemma 4.5 Let d E No. Then 1) The function (1'(x) = x- d is Haar generating of arbitrary order. 2) The monomial (1'(x) = x d is Haar generating of order d + 1. 3) The function e- x2 is Haar generating of arbitrary order. Remark A simple example of a (1' which is not Haar generating of order ~ 2 is (1'(x) eX. In fact, in this case (1'(x+Zj) = Cj(1'(x+zd for Cj = eZj - Z " j 2, ... 
j = 2, …, n.

Remark The function σ(x) = (1 + e^{−x})^{−1} is not Haar generating of any order > 2. By the periodicity of the complex exponential function, σ(x + 2πi) = σ(x − 2πi), i = √−1, for all x. Thus the Haar* condition fails for Z_2 = {2πi, −2πi}.

In particular, the above uniqueness result fails for the standard sigmoid case. In order to cover this case we need a further definition.

Definition 4.6 Let Ω ⊂ ℂ be a self-conjugate subset of ℂ. A function σ: ℝ → ℝ is said to be Haar generating of order n on Ω if for all m ≤ n, for all combinations κ = (κ_1, …, κ_m) of n of size m, and for any self-conjugate subset Z_m ⊂ Ω of distinct points of Ω, σ(κ, Z_m) satisfies a Haar* condition of order n.

Of course for Ω = ℂ, this definition coincides with Definition 4.3.

Theorem 4.7 (Local Uniqueness) Let σ: ℝ → ℝ be analytic and let Ω ⊂ ℂ be a self-conjugate subset contained in the domain of holomorphy of σ. Let I be a nontrivial subinterval of Ω ∩ ℝ. Suppose σ: ℝ → ℝ is Haar generating on Ω of order at least 2n, n ∈ ℕ. Then for any two minimal σ-realizations (A, b, c) and (Ā, b̄, c̄) of orders at most n with spec A, spec Ā ⊂ Ω the following equivalence holds:

(4.3) c σ(xI + A) b = c̄ σ(xI + Ā) b̄ ∀x ∈ I  ⟺  c(ξI − A)^{−1} b = c̄(ξI − Ā)^{−1} b̄ ∀ξ ∈ ℂ.

Lemma 4.8 Let Ω := {z ∈ ℂ : |ℑz| < π}. Then the standard sigmoid function σ(x) = (1 + e^{−x})^{−1} is Haar generating on Ω of arbitrary order.

5 MAIN RESULT

As a consequence of the uniqueness Theorems 4.4 and 4.7 we can now state our main result on the existence of minimal σ-realizations of a function φ(x). It extends a parallel result for standard transfer function realizations, where σ(x) = x^{−1}.

Theorem 5.1 (Realization) Let Ω ⊂ ℂ be a self-conjugate subset, contained in the domain of holomorphy of a real meromorphic function σ: ℝ → ℝ. Suppose σ is Haar generating on Ω of order at least 2n and assume φ(x) has a finite-dimensional realization (A, b, c) of dimension at most n such that A has all its eigenvalues in Ω.

1. There exists a minimal σ-realization (A_1, b_1, c_1) of φ(x) of degree δ_σ(φ) ≤ dim(A, b, c). Furthermore, there exists an invertible matrix S such that

(5.1) S A S^{−1} = [A_1 A_{12}; 0 A_{22}],  S b = [b_1; 0],  c S^{−1} = [c_1, c_2].

2. If (A_1, b_1, c_1) and (A'_1, b'_1, c'_1) are minimal σ-realizations of φ(x) such that the eigenvalues of A_1 and A'_1 are contained in Ω, then there exists a unique invertible matrix S such that

(5.2) (A'_1, b'_1, c'_1) = (S A_1 S^{−1}, S b_1, c_1 S^{−1}).

3. A σ-realization (A, b, c) is minimal if and only if (A, b, c) is controllable and observable, i.e. if and only if (A, b, c) satisfies the generic rank conditions

rank(b, Ab, …, A^{n−1} b) = n,  rank [c; cA; …; cA^{n−1}] = n,

for A ∈ 𝕂^{n×n}, b ∈ 𝕂^n, c^T ∈ 𝕂^n.

Remark The use of the terms "observable" and "controllable" is solely for formal correspondence with standard systems theory. There are no dynamical systems actually under consideration here.

Remark Note that for any σ-realization (A, b, c) of the form

A = [A_{11} A_{12}; 0 A_{22}],  b = [b_1; 0],  c = [c_1, c_2],

we have

σ(A) = [σ(A_{11}) *; 0 σ(A_{22})],

and thus c σ(xI + A) b = c_1 σ(xI + A_{11}) b_1. Thus transformations of the above kind always reduce the dimension of a σ-realization.

Corollary 5.2 ([9]) Let σ(x) = (1 + e^{−x})^{−1} and let φ(x) = Σ_{i=1}^{n} c_i σ(x − a_i) = Σ_{i=1}^{n} c'_i σ(x − a'_i) be two minimal-length σ-representations with |ℑa_i| < π, |ℑa'_i| < π, i = 1, …, n. Then (a'_i, c'_i) = (a_{p(i)}, c_{p(i)}) for a unique permutation p: {1, …, n} → {1, …, n}. In particular, minimal-length representations (1.1) with real coefficients a_i and c_i are unique up to a permutation of the summands.
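The remarks on Haar generation above are easy to verify numerically. The short sketch below (with arbitrary illustrative shifts z1, z2) checks (i) the linear dependence σ(x + z_2) = e^{z_2 − z_1} σ(x + z_1) that prevents σ(x) = e^x from being Haar generating of order ≥ 2, and (ii) the periodicity σ(x + 2πi) = σ(x − 2πi) of the standard sigmoid, which makes the Haar* condition fail for Z_2 = {2πi, −2πi}.

```python
import numpy as np

# (i) sigma(x) = e^x: translates are linearly dependent.
x = np.linspace(-2.0, 2.0, 9)
z1, z2 = 0.3, -1.1                         # arbitrary illustrative shifts
print(np.allclose(np.exp(x + z2), np.exp(z2 - z1) * np.exp(x + z1)))  # True

# (ii) standard sigmoid: sigma(x + 2*pi*i) = sigma(x - 2*pi*i), so the Haar*
# condition fails for the self-conjugate set Z2 = {2*pi*i, -2*pi*i}.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
xc = x.astype(complex)
print(np.allclose(sigmoid(xc + 2j * np.pi), sigmoid(xc - 2j * np.pi)))  # True
```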
6 CONCLUSIONS

We have drawn a connection between the realization theory for linear dynamical systems and neural network representations. There are further connections (not discussed in this summary) between representations of the form (1.3) and rational functions of two variables. There are other questions concerning diagonalizable realizations and Jordan forms. Details are given in the full-length version of this paper. Open questions include the problem of partial realizations [4, 6].¹

REFERENCES

[1] A. C. Antoulas and B. D. O. Anderson, On the Scalar Rational Interpolation Problem, IMA Journal of Mathematical Control and Information, 3 (1986), pp. 61–88.
[2] E. W. Cheney, Introduction to Approximation Theory, Chelsea Publishing Company, New York, 1982.
[3] W. F. Donoghue, Jr, Monotone Matrix Functions and Analytic Continuation, Springer-Verlag, Berlin, 1974.
[4] W. B. Gragg and A. Lindquist, On the Partial Realization Problem, Linear Algebra and its Applications, 50 (1983), pp. 277–319.
[5] T. Kailath, Linear Systems, Prentice-Hall, Englewood Cliffs, 1980.
[6] R. E. Kalman, On Partial Realizations, Transfer Functions, and Canonical Forms, Acta Polytechnica Scandinavica, 31 (1979), pp. 9–32.
[7] R. E. Kalman, P. L. Falb and M. A. Arbib, Topics in Mathematical System Theory, McGraw-Hill, New York, 1969.
[8] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, Berlin, 1966.
[9] R. C. Williamson and U. Helmke, Existence and Uniqueness Results for Neural Network Approximations, to appear, IEEE Transactions on Neural Networks, 1993.

¹This work was supported by the Australian Research Council, the Australian Telecommunications and Electronics Research Board, and the Boeing Commercial Aircraft Company (thanks to John Moore). Thanks to Eduardo Sontag for helpful comments also.
Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks

Federico Monti, Università della Svizzera italiana, Lugano, Switzerland, [email protected]
Michael M. Bronstein, Università della Svizzera italiana, Lugano, Switzerland, [email protected]
Xavier Bresson, School of Computer Science and Engineering, NTU, Singapore, [email protected]

Abstract

Matrix completion models are among the most common formulations of recommender systems. Recent works have shown a boost in performance of these techniques when introducing the pairwise relationships between users/items in the form of graphs, and imposing smoothness priors on these graphs. However, such techniques do not fully exploit the local stationary structures on user/item graphs, and the number of parameters to learn is linear w.r.t. the number of users and items. We propose a novel approach to overcome these limitations by using geometric deep learning on graphs. Our matrix completion architecture combines a novel multi-graph convolutional neural network that can learn meaningful statistical graph-structured patterns from users and items, and a recurrent neural network that applies a learnable diffusion on the score matrix. Our neural network system is computationally attractive as it requires a constant number of parameters independent of the matrix size. We apply our method on several standard datasets, showing that it outperforms state-of-the-art matrix completion techniques.

1 Introduction

Recommender systems have become a central part of modern intelligent systems. Recommending movies on Netflix, friends on Facebook, furniture on Amazon, and jobs on LinkedIn are a few examples of the main purpose of these systems. Two major approaches to recommender systems are collaborative [5] and content [32] filtering techniques. Systems based on collaborative filtering use collected ratings of items by users and offer new recommendations by finding similar rating patterns. Systems based on content filtering make use of similarities between items and users to recommend new items. Hybrid systems combine collaborative and content techniques.

Matrix completion. Mathematically, a recommendation method can be posed as a matrix completion problem [9], where columns and rows represent users and items, respectively, and matrix values represent scores determining whether a user would like an item or not. Given a small subset of known elements of the matrix, the goal is to fill in the rest. A famous example is the Netflix challenge [22] offered in 2009 and carrying a 1M$ prize for the algorithm that can best predict user ratings for movies based on previous user ratings. The size of the Netflix matrix is 480k movies × 18k users (8.5B entries), with only 0.011% known entries.

Recently, there have been several attempts to incorporate geometric structure into matrix completion problems [27, 19, 33, 24], e.g. in the form of column and row graphs representing similarity of users and items, respectively. Such additional information defines e.g. the notion of smoothness of the matrix and was shown beneficial for the performance of recommender systems. These approaches can be generally related to the field of signal processing on graphs [37], extending classical harmonic analysis methods to non-Euclidean domains (graphs).

Geometric deep learning. Of key interest to the design of recommender systems are deep learning approaches.
In recent years, deep neural networks and, in particular, convolutional neural networks (CNNs) [25] have been applied with great success to numerous applications. However, classical CNN models cannot be directly applied to the recommendation problem to extract meaningful patterns in users, items and ratings, because these data are not Euclidean-structured, i.e. they do not lie on regular lattices like images but rather on irregular domains like graphs. Recent works applying deep learning to recommender systems used networks with fully connected or auto-encoder architectures [44, 35, 14]. Such methods are unable to extract the important local stationary patterns from the data, which is one of the key properties of CNN architectures. New neural networks are necessary, and this has motivated the recent development of geometric deep learning techniques that can mathematically deal with graph-structured data, which arise in numerous applications, ranging from computer graphics and vision [28, 2, 4, 3, 30] to chemistry [12]. We recommend the review paper [6] to the reader not familiar with this line of works.

The earliest attempts to apply neural networks to graphs are due to Scarselli et al. [13, 34] (see the more recent formulations [26, 40]). Bruna et al. [7, 15] formulated CNN-like deep neural architectures on graphs in the spectral domain, employing the analogy between the classical Fourier transform and projections onto the eigenbasis of the graph Laplacian operator [37]. Defferrard et al. [10] proposed an efficient filtering scheme using recurrent Chebyshev polynomials, which reduces the complexity of CNNs on graphs to the same complexity of classical (Euclidean) CNNs. This model was later extended to deal with dynamic data [36]. Kipf and Welling [21] proposed a simplification of Chebyshev networks using simple filters operating on 1-hop neighborhoods of the graph. Monti et al. [30] introduced a spatial-domain generalization of CNNs to graphs using local patch operators represented as Gaussian mixture models, showing significantly better generalization across different graphs.

Contributions. We present two main contributions. First, we introduce a new multi-graph CNN architecture that generalizes [10] to multiple graphs. This new architecture is able to extract local stationary patterns from signals defined on multiple graphs simultaneously. While in this work we apply multi-graph CNNs in the context of recommender systems to the graphs of users and items, our architecture is generic and can be used in other applications, such as neuroscience (autism detection with networks of people and brain connectivity [31, 23]), computer graphics (shape correspondence on product manifolds [41]), or social network analysis (abnormal spending behavior detection with graphs of customers and stores [39]). Second, we approach the matrix completion problem as learning on user and item graphs using the new deep multi-graph CNN framework. Our architecture is based on a cascade of a multi-graph CNN followed by a Long Short-Term Memory (LSTM) recurrent neural network [16] that together can be regarded as a learnable diffusion process that reconstructs the score matrix.

2 Background

2.1 Matrix Completion

Matrix completion problem. Recovering the missing values of a matrix given a small fraction of its entries is an ill-posed problem without additional mathematical constraints on the space of solutions.
It is common to assume that the variables lie in a smaller subspace, i.e., that the matrix is of low rank,

(1) min_X rank(X)  s.t.  x_ij = y_ij, ∀ij ∈ Ω,

where X denotes the matrix to recover, Ω is the set of the known entries and y_ij are their values. Unfortunately, rank minimization turns out to be an NP-hard combinatorial problem that is computationally intractable in practical cases. The tightest possible convex relaxation of problem (1) is to replace the rank with the nuclear norm ‖·‖_* equal to the sum of its singular values [8],

(2) min_X ‖X‖_* + (μ/2) ‖Ω ∘ (X − Y)‖_F²;

the equality constraint is also replaced with a penalty to make the problem more robust to noise (here Ω is the indicator matrix of the known entries and ∘ denotes the Hadamard pointwise product). Candès and Recht [8] proved that under some technical conditions the solutions of problems (2) and (1) coincide.

Geometric matrix completion. An alternative relaxation of the rank operator in (1) can be achieved by constraining the space of solutions to be smooth w.r.t. some geometric structure on the rows and columns of the matrix [27, 19, 33, 1]. The simplest model is a proximity structure represented as an undirected weighted column graph G_c = ({1, …, n}, E_c, W_c) with adjacency matrix W_c = (w^c_ij), where w^c_ij = w^c_ji, w^c_ij = 0 if (i, j) ∉ E_c and w^c_ij > 0 if (i, j) ∈ E_c. In our setting, the column graph could be thought of as a social network capturing relations between users and the similarity of their tastes. The row graph G_r = ({1, …, m}, E_r, W_r), representing the items similarities, is defined similarly. On each of these graphs one can construct the (normalized) graph Laplacian, an n × n symmetric positive-semidefinite matrix Δ = I − D^{−1/2} W D^{−1/2}, where D = diag(Σ_{j≠i} w_ij) is the degree matrix. We denote the Laplacians associated with the row and column graphs by Δ_r and Δ_c, respectively. Considering the columns (respectively, rows) of matrix X as vector-valued functions on the column graph G_c (respectively, row graph G_r), their smoothness can be expressed as the Dirichlet norm ‖X‖²_{G_r} = trace(X^T Δ_r X) (respectively, ‖X‖²_{G_c} = trace(X Δ_c X^T)). The geometric matrix completion problem [19] thus boils down to minimizing

(3) min_X ‖X‖²_{G_r} + ‖X‖²_{G_c} + (μ/2) ‖Ω ∘ (X − Y)‖_F².

Factorized models. Matrix completion algorithms introduced in the previous section are well-posed as convex optimization problems, guaranteeing existence, uniqueness and robustness of solutions. Besides, fast algorithms have been developed for the minimization of the non-differentiable nuclear norm. However, the variables in this formulation are the full m × n matrix X, making it hard to scale up to large matrices such as the Netflix challenge. A solution is to use a factorized representation [38, 22, 27, 43, 33, 1] X = WH^T, where W, H are m × r and n × r matrices, respectively, with r ≪ min(m, n). The use of factors W, H reduces the number of degrees of freedom from O(mn) to O(m + n); this representation is also attractive as one often assumes the original matrix to be low-rank for solving the matrix completion problem, and rank(WH^T) ≤ r by construction. The nuclear norm minimization problem (2) can be rewritten in a factorized form as [38]:

(4) min_{W,H} (1/2)‖W‖_F² + (1/2)‖H‖_F² + (μ/2) ‖Ω ∘ (WH^T − Y)‖_F²,

and the factorized formulation of the graph-based minimization problem (3) as

(5) min_{W,H} (1/2)‖W‖²_{G_r} + (1/2)‖H‖²_{G_c} + (μ/2) ‖Ω ∘ (WH^T − Y)‖_F².

The limitation of model (5) is that it decouples the regularization previously applied simultaneously on the rows and columns of X in (3), but the advantage is linear instead of quadratic complexity.

2.2 Deep learning on graphs

The key concept underlying our work is geometric deep learning, an extension of CNNs to graphs. In particular, we focus here on graph CNNs formulated in the spectral domain. A graph Laplacian admits a spectral eigendecomposition of the form Δ = ΦΛΦ^T, where Φ = (φ_1, …, φ_n) denotes the matrix of orthonormal eigenvectors and Λ = diag(λ_1, …, λ_n) is the diagonal matrix of the corresponding eigenvalues. The eigenvectors play the role of Fourier atoms in classical harmonic analysis and the eigenvalues can be interpreted as frequencies. Given a function x = (x_1, …, x_n)^T on the vertices of the graph, its graph Fourier transform is given by x̂ = Φ^T x. The spectral convolution of two functions x, y can be defined as the element-wise product of the respective Fourier transforms,

(6) x ⋆ y = Φ (Φ^T x) ∘ (Φ^T y) = Φ diag(ŷ_1, …, ŷ_n) Φ^T x,

by analogy to the Convolution Theorem in the Euclidean case.

Bruna et al. [7] used the spectral definition of convolution (6) to generalize CNNs on graphs. A spectral convolutional layer in this formulation has the form

(7) x̃_l = ξ( Σ_{l'=1}^{q'} Φ Ŷ_{ll'} Φ^T x_{l'} ),  l = 1, …, q,

where q', q denote the number of input and output channels, respectively, Ŷ_{ll'} = diag(ŷ_{ll',1}, …, ŷ_{ll',n}) is a diagonal matrix of spectral multipliers representing a learnable filter in the spectral domain, and ξ is a nonlinearity (e.g. ReLU) applied on the vertex-wise function values. Unlike classical convolutions carried out efficiently in the spectral domain using FFT, the computations of the forward and inverse graph Fourier transform incur expensive O(n²) multiplication by the matrices Φ, Φ^T, as there are no FFT-like algorithms on general graphs. Second, the number of parameters representing the filters of each layer of a spectral CNN is O(n), as opposed to O(1) in classical CNNs. Third, there is no guarantee that the filters represented in the spectral domain are localized in the spatial domain, which is another important property of classical CNNs.

Henaff et al. [15] argued that spatial localization can be achieved by forcing the spectral multipliers to be smooth. The filter coefficients are represented as ŷ_k = τ(λ_k), where τ(λ) is a smooth transfer function of frequency λ; its application to a signal x is expressed as τ(Δ)x = Φ diag(τ(λ_1), …, τ(λ_n)) Φ^T x, where applying a function to a matrix is understood in the operator sense and boils down to applying the function to the matrix eigenvalues. In particular, the authors used parametric filters of the form

(8) τ_α(λ) = Σ_{j=1}^{p} α_j β_j(λ),

where β_1(λ), …, β_p(λ) are some fixed interpolation kernels, and α = (α_1, …, α_p) are p = O(1) interpolation coefficients acting as parameters of the spectral convolutional layer.

Defferrard et al. [10] used polynomial filters of order p represented in the Chebyshev basis,

(9) τ_α(λ̃) = Σ_{j=0}^{p} α_j T_j(λ̃),

where λ̃ is the frequency rescaled in [−1, 1], α is the (p+1)-dimensional vector of polynomial coefficients parametrizing the filter, and T_j(λ) = 2λ T_{j−1}(λ) − T_{j−2}(λ) denotes the Chebyshev polynomial of degree j, defined in a recursive manner with T_1(λ) = λ and T_0(λ) = 1. Here, Δ̃ = 2λ_n^{−1}Δ − I is the rescaled Laplacian with eigenvalues λ̃ = 2λ_n^{−1}λ − 1 in the interval [−1, 1].

This approach benefits from several advantages. First, it does not require an explicit computation of the Laplacian eigenvectors, as applying a Chebyshev filter to x amounts to τ_α(Δ̃)x = Σ_{j=0}^{p} α_j T_j(Δ̃)x; due to the recursive definition of the Chebyshev polynomials, this incurs applying the Laplacian p times. Multiplication by the Laplacian has the cost of O(|E|), and assuming the graph has |E| = O(n) edges (which is the case for k-nearest neighbors graphs and most real-world networks), the overall complexity is O(n) rather than O(n²) operations, similarly to classical CNNs. Moreover, since the Laplacian is a local operator affecting only 1-hop neighbors of a vertex, and accordingly its pth power affects the p-hop neighborhood, the resulting filters are spatially localized.

3 Our approach

In this paper, we propose formulating matrix completion as a problem of deep learning on user and item graphs. We consider two architectures, summarized in Figures 1 and 2. The first architecture works on the full matrix model, producing better accuracy but requiring higher complexity. The second architecture uses the factorized matrix model, offering better scalability at the expense of a slight reduction of accuracy. For both architectures, we consider a combination of multi-graph CNN and RNN, which will be described in detail in the following sections. Multi-graph CNNs are used to extract local stationary features from the score matrix using row and column similarities encoded by user and item graphs. Then, these spatial features are fed into an RNN that diffuses the score values progressively, reconstructing the matrix.

3.1 Multi-Graph CNNs

Multi-graph convolution. Our first goal is to extend the notion of the aforementioned graph Fourier transform to matrices whose rows and columns are defined on row- and column-graphs. We recall that the classical two-dimensional Fourier transform of an image (matrix) can be thought of as applying a one-dimensional Fourier transform to its rows and columns. In our setting, the analogy of the two-dimensional Fourier transform has the form

(10) X̂ = Φ_r^T X Φ_c,

where Φ_c, Φ_r and Λ_c = diag(λ_{c,1}, …, λ_{c,n}), Λ_r = diag(λ_{r,1}, …, λ_{r,m}) denote the n × n and m × m eigenvector- and eigenvalue matrices of the column- and row-graph Laplacians Δ_c, Δ_r, respectively. The multi-graph version of the spectral convolution (6) is given by

(11) X ⋆ Y = Φ_r (X̂ ∘ Ŷ) Φ_c^T,

and in the classical setting can be thought of as the analogy of filtering a 2D image in the spectral domain (column and row graph eigenvalues λ_c and λ_r generalize the x- and y-frequencies of an image).

As in [7], representing multi-graph filters as their spectral multipliers Ŷ would yield O(mn) parameters, prohibitive in any practical application. To overcome this limitation, we follow [15], assuming that the multi-graph filters are expressed in the spectral domain as a smooth function of both frequencies (eigenvalues λ_c, λ_r of the row- and column-graph Laplacians) of the form Ŷ_{k,k'} = τ(λ_{c,k}, λ_{r,k'}). In particular, using Chebyshev polynomial filters of degree p,¹

(12) τ_θ(λ̃_c, λ̃_r) = Σ_{j,j'=0}^{p} θ_{jj'} T_j(λ̃_c) T_{j'}(λ̃_r),

where λ̃_c, λ̃_r are the frequencies rescaled to [−1, 1] (see Figure 4 for examples). Such filters are parametrized by a (p + 1) × (p + 1) matrix of coefficients Θ = (θ_{jj'}), which is O(1) in the input size, as in classical CNNs on images. The application of a multi-graph filter to the matrix X,

(13) X̃ = Σ_{j,j'=0}^{p} θ_{jj'} T_j(Δ̃_r) X T_{j'}(Δ̃_c),

incurs an O(mn) computational complexity (here, as previously, Δ̃_c = 2λ_{c,n}^{−1} Δ_c − I and Δ̃_r = 2λ_{r,m}^{−1} Δ_r − I denote the scaled Laplacians). Similarly to (7), a multi-graph convolutional layer using the parametrization of filters according to (13) is applied to q' input channels (m × n matrices X_1, …, X_{q'} or a tensor of size m × n × q'),

(14) X̃_l = ξ( Σ_{l'=1}^{q'} X_{l'} ⋆ Y_{ll'} ) = ξ( Σ_{l'=1}^{q'} Σ_{j,j'=0}^{p} θ_{jj',ll'} T_j(Δ̃_r) X_{l'} T_{j'}(Δ̃_c) ),  l = 1, …, q,

producing q outputs (tensor of size m × n × q). Several layers can be stacked together. We call such an architecture a Multi-Graph CNN (MGCNN).

Separable convolution. A simplification of the multi-graph convolution is obtained by considering the factorized form of the matrix X = WH^T and applying one-dimensional convolutions on the respective graph to each factor. Similarly to the previous case, we can express the filters resorting to Chebyshev polynomials,

(15) w̃_l = Σ_{j=0}^{p} θ_j^r T_j(Δ̃_r) w_l,  h̃_l = Σ_{j'=0}^{p} θ_{j'}^c T_{j'}(Δ̃_c) h_l,  l = 1, …, r,

where w_l, h_l denote the lth columns of the factors W, H and θ^r = (θ_0^r, …, θ_p^r), θ^c = (θ_0^c, …, θ_p^c) are the parameters of the row- and column-filters, respectively (a total of 2(p + 1) = O(1)). Application of such filters to W and H incurs O(m + n) complexity. Convolutional layers (14) thus take the form

(16) w̃_l = ξ( Σ_{l'=1}^{q'} Σ_{j=0}^{p} θ_{j,ll'}^r T_j(Δ̃_r) w_{l'} ),  h̃_l = ξ( Σ_{l'=1}^{q'} Σ_{j'=0}^{p} θ_{j',ll'}^c T_{j'}(Δ̃_c) h_{l'} ).

We call such an architecture a separable MGCNN or sMGCNN.

3.2 Matrix diffusion with RNNs

The next step of our approach is to feed the spatial features extracted from the matrix by the MGCNN or sMGCNN to a recurrent neural network (RNN) implementing a diffusion process that progressively reconstructs the score matrix (see Figure 3).

[Figure 1: Recurrent MGCNN (RMGCNN) architecture using the full matrix completion model and operating simultaneously on the rows and columns of the matrix X, with update X^(t+1) = X^(t) + dX^(t). Learning complexity is O(mn).]

[Figure 2: Separable Recurrent MGCNN (sRMGCNN) architecture using the factorized matrix completion model and operating separately on the rows and columns of the factors W, H^T, with updates W^(t+1) = W^(t) + dW^(t) and H^(t+1) = H^(t) + dH^(t). Learning complexity is O(m + n).]

[Figure 3: Evolution of matrix X^(t) for t = 0, …, 10 with our architecture using the full matrix completion model RMGCNN (top; RMS errors 2.26, 1.89, 1.60, 1.78, 1.31, 0.52, 0.48, 0.63, 0.38, 0.07, 0.01) and the factorized matrix completion model sRMGCNN (bottom; RMS errors 1.15, 1.04, 0.94, 0.89, 0.84, 0.76, 0.69, 0.49, 0.27, 0.11, 0.01).]

Modelling matrix completion as a diffusion process appears particularly suitable for realizing an architecture which is independent of the sparsity of the available information. In order to combine the few scores available in a sparse input matrix, a multilayer CNN would require very large filters or many layers to diffuse the score information across matrix domains. On the contrary, our diffusion-based approach allows to reconstruct the missing information just by imposing the proper amount of diffusion iterations. This gives the possibility to deal with extremely sparse data, without requiring at the same time excessive amounts of model parameters. See Table 3 for an experimental evaluation of this aspect.

¹For simplicity, we use the same degree p for row- and column frequencies.
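To make the filter (13) concrete, the following numpy sketch applies a multi-graph Chebyshev filter to a small random matrix. The toy graphs, their sizes, and the random coefficient matrix theta are illustrative stand-ins only; in the actual architecture Θ is learned and the graphs come from user/item similarities.

```python
import numpy as np

# Minimal sketch of the multi-graph filter (13):
#   X_out = sum_{j,j'} theta[j, j'] * T_j(L_r) @ X @ T_{j'}(L_c),
# with T_j the Chebyshev polynomials evaluated on rescaled Laplacians.

def rescaled_laplacian(W):
    d = W.sum(1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))   # normalized Laplacian
    lmax = np.linalg.eigvalsh(L).max()
    return 2.0 * L / lmax - np.eye(len(W))             # eigenvalues in [-1, 1]

def cheb_stack(L, X, p, left=True):
    """Return [T_0(L)@X, ..., T_p(L)@X] (left) or [X@T_0(L), ...] (right)."""
    op = (lambda T: L @ T) if left else (lambda T: T @ L)
    Ts = [X, op(X)]                                    # T_0 = I, T_1 = L
    for _ in range(2, p + 1):
        Ts.append(2.0 * op(Ts[-1]) - Ts[-2])           # T_j = 2 L T_{j-1} - T_{j-2}
    return Ts

rng = np.random.default_rng(0)
m, n, p = 8, 6, 4                                      # toy sizes, order p = 4
Wr = rng.random((m, m)); Wr = (Wr + Wr.T) / 2; np.fill_diagonal(Wr, 0)
Wc = rng.random((n, n)); Wc = (Wc + Wc.T) / 2; np.fill_diagonal(Wc, 0)
Lr, Lc = rescaled_laplacian(Wr), rescaled_laplacian(Wc)

X = rng.normal(size=(m, n))
theta = rng.normal(size=(p + 1, p + 1))                # learned in practice

rows = cheb_stack(Lr, X, p, left=True)                 # T_j(Lr) @ X
X_out = sum(theta[j, k] * col
            for j, R in enumerate(rows)
            for k, col in enumerate(cheb_stack(Lc, R, p, left=False)))
print(X_out.shape)                                     # (8, 6)
```

Note the design choice mirrored from the text: only sparse multiplications by the (rescaled) Laplacians are used, so no eigendecomposition of the graphs is ever needed at filtering time.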
We use the classical LSTM architecture [16], which has been demonstrated to be highly efficient at learning complex non-linear diffusion processes due to its ability to keep long-term internal states (in particular, limiting the vanishing gradient issue). The input of the LSTM gate is given by the static features extracted from the MGCNN, which can be seen as a projection or dimensionality reduction of the original matrix in the space of the most meaningful and representative information (the disentanglement effect). This representation coupled with LSTM appears particularly well-suited to keep a long-term internal state, which allows to predict accurate small changes dX of the matrix X (or dW, dH of the factors W, H) that can propagate through the full temporal steps.

Figures 1 and 2 and Algorithms 1 and 2 summarize the proposed matrix completion architectures. We refer to the whole architecture combining the MGCNN and RNN in the full matrix completion setting as recurrent multi-graph CNN (RMGCNN). The factorized version with separable MGCNN and RNN is referred to as separable RMGCNN (sRMGCNN). The complexity of Algorithm 1 scales quadratically as O(mn) due to the use of MGCNN. For large matrices, Algorithm 2, which processes the rows and columns separately with standard GCNNs and scales linearly as O(m + n), is preferable. We will demonstrate in Section 4 that the proposed RMGCNN and sRMGCNN architectures perform very well on different settings of matrix completion problems. However, we should note that this is just one possible configuration, which we by no means claim to be optimal. For example, in all our experiments we used only one convolutional layer; it is likely that still better performance could be achieved with more layers.

Algorithm 1 (RMGCNN)
input: m × n matrix X^(0) containing initial values
1: for t = 0 : T do
2:   Apply the Multi-Graph CNN (13) on X^(t), producing an m × n × q output X̃^(t).
3:   for all elements (i, j) do
4:     Apply the RNN to the q-dim vector x̃^(t)_ij = (x̃^(t)_{ij1}, …, x̃^(t)_{ijq}), producing the incremental update dx^(t)_ij.
5:   end for
6:   Update X^(t+1) = X^(t) + dX^(t).
7: end for

Algorithm 2 (sRMGCNN)
input: n × r factor H^(0) and m × r factor W^(0) representing the matrix X^(0)
1: for t = 0 : T do
2:   Apply the Graph CNN on H^(t), producing an n × q output H̃^(t).
3:   for j = 1 : n do
4:     Apply the RNN to the q-dim vector h̃^(t)_j = (h̃^(t)_{j1}, …, h̃^(t)_{jq}), producing the incremental update dh^(t)_j.
5:   end for
6:   Update H^(t+1) = H^(t) + dH^(t).
7:   Repeat steps 2–6 for W^(t+1).
8: end for

3.3 Training

Training of the networks is performed by minimizing the loss

(17) ℓ(θ, σ) = ‖X^(T)_{θ,σ}‖²_{G_r} + ‖X^(T)_{θ,σ}‖²_{G_c} + (μ/2) ‖Ω ∘ (X^(T)_{θ,σ} − Y)‖_F².

Here, T denotes the number of diffusion iterations (applications of the RNN), and we use the notation X^(T)_{θ,σ} to emphasize that the matrix depends on the parameters of the MGCNN (Chebyshev polynomial coefficients θ) and those of the LSTM (denoted by σ). In the factorized setting, we use the loss

(18) ℓ(θ_r, θ_c, σ) = ‖W^(T)_{θ_r,σ}‖²_{G_r} + ‖H^(T)_{θ_c,σ}‖²_{G_c} + (μ/2) ‖Ω ∘ (W^(T)_{θ_r,σ} (H^(T)_{θ_c,σ})^T − Y)‖_F²,

where θ_c, θ_r are the parameters of the two GCNNs.

4 Results²

Experimental settings. We closely followed the experimental setup of [33], using five standard datasets: the Synthetic dataset from [19], MovieLens [29], Flixster [18], Douban [27], and YahooMusic [11]. We used disjoint training and test sets, and the presented results are reported on test sets in all our experiments. As in [33], we evaluated MovieLens using only the first of the 5 provided data splits.
For Flixster, Douban and YahooMusic, we evaluated on a reduced matrix of 3000 users and items, considering 90% of the given scores as training set and the remaining as test set. Classical Matrix Completion (MC) [9], Inductive Matrix Completion (IMC) [17, 42], Geometric Matrix Completion (GMC) [19], and Graph Regularized Alternating Least Squares (GRALS) [33] were used as baseline methods.

In all the experiments, we used the following settings for our RMGCNNs: Chebyshev polynomials of order p = 4, outputting k = 32-dimensional features, LSTM cells with 32 features and T = 10 diffusion steps (for both training and test). The number of diffusion steps T has been estimated on the MovieLens validation set and used in all our experiments. A better estimate of T can be done by cross-validation, and thus can potentially only improve the final results. All the models were implemented in Google TensorFlow and trained using the Adam stochastic optimization algorithm [20] with learning rate 10⁻³. In factorized models, ranks r = 15 and 10 were used for the synthetic and real datasets, respectively. For all methods, hyperparameters were chosen by cross-validation.

[Figure 4: Absolute value |τ(λ̃_c, λ̃_r)| of the first ten spectral filters learnt by our MGCNN model. In each matrix, rows and columns represent frequencies λ̃_r and λ̃_c of the row and column graphs, respectively.]

[Figure 5: Absolute values |τ(λ̃_c)| and |τ(λ̃_r)| of the first four column (solid) and row (dashed) spectral filters learned by our sMGCNN model.]

4.1 Synthetic data

We start the experimental evaluation by showing the performance of our approach on a small synthetic dataset, in which the user and item graphs have strong community structure. Though rather simple, such a dataset allows us to study the behavior of different algorithms in controlled settings. The performance of different matrix completion methods is reported in Table 1, along with their theoretical complexity. Our RMGCNN and sRMGCNN models achieve better accuracy than other methods with lower complexity. Different diffusion time steps of these two models are visualized in Figure 3. Figures 4 and 5 depict the spectral filters learnt by the MGCNN and the row- and column-GCNNs.

We repeated the same experiment assuming only the column (users) graph to be given. In this setting, RMGCNN cannot be applied, while sRMGCNN has only one GCNN applied on the factor H (the other factor W is free). Table 2 summarizes the results of this experiment, again showing that our approach performs the best. Table 3 compares our RMGCNN with more classical multilayer MGCNNs. Our recurrent solution outperforms deeper and more complex architectures, while requiring a smaller number of parameters.

Table 1: Comparison of different matrix completion methods using users+items graphs, in terms of number of parameters (optimization variables) and computational complexity order (operations per iteration). Big-O notation is avoided for clarity reasons. The rightmost column shows the RMS error on the Synthetic dataset.

METHOD   | PARAMS | NO. OP. | RMSE
GMC      | mn     | mn      | 0.3693
GRALS    | m+n    | m+n     | 0.0114
sRMGCNN  | 1      | m+n     | 0.0106
RMGCNN   | 1      | mn      | 0.0053

Table 2: Comparison of different matrix completion methods using the users graph only, in terms of number of parameters (optimization variables) and computational complexity order (operations per iteration). Big-O notation is avoided for clarity reasons. The rightmost column shows the RMS error on the Synthetic dataset.

METHOD   | PARAMS | NO. OP. | RMSE
GRALS    | m+n    | m+n     | 0.0452
sRMGCNN  | m      | m+n     | 0.0362

Table 3: Reconstruction errors on the synthetic dataset for multilayer convolutional architectures and the proposed architecture. Chebyshev polynomials of order 4 have been used for both users and movies graphs (q'MGCq denotes a multi-graph convolutional layer with q' input features and q output features).

Method          | Params | Architecture                      | RMSE
MGCNN 3 layers  | 9K     | 1MGC32, 32MGC10, 10MGC1           | 0.0116
MGCNN 4 layers  | 53K    | 1MGC32, 32MGC32 × 2, 32MGC1       | 0.0073
MGCNN 5 layers  | 78K    | 1MGC32, 32MGC32 × 3, 32MGC1       | 0.0074
MGCNN 6 layers  | 104K   | 1MGC32, 32MGC32 × 4, 32MGC1       | 0.0064
RMGCNN          | 9K     | 1MGC32 + LSTM                     | 0.0053

4.2 Real data

Following [33], we evaluated the proposed approach on the MovieLens, Flixster, Douban and YahooMusic datasets. For the MovieLens dataset we constructed the user and item (movie) graphs as unweighted 10-nearest neighbor graphs in the space of user and movie features, respectively. For Flixster, the user and item graphs were constructed from the scores of the original matrix. On this dataset, we also performed an experiment using only the users graph. For the Douban dataset, we used only the user graph (provided in the form of a social network). For the YahooMusic dataset, we used only the item graph, constructed with unweighted 10-nearest neighbors in the space of item features (artists, albums, and genres). For the latter three datasets, we used a sub-matrix of 3000 × 3000 entries for evaluating the performance. Tables 4 and 5 summarize the performance of different methods. sRMGCNN outperforms the competitors in all the experiments.

Table 4: Performance (RMS error) of different matrix completion methods on the MovieLens dataset.

METHOD        | RMSE
GLOBAL MEAN   | 1.154
USER MEAN     | 1.063
MOVIE MEAN    | 1.033
MC [9]        | 0.973
IMC [17, 42]  | 1.653
GMC [19]      | 0.996
GRALS [33]    | 0.945
sRMGCNN       | 0.929

Table 5: Performance (RMS error) on several datasets. For Douban and YahooMusic, a single graph (of users and items, respectively) was used. For Flixster, two settings are shown: users+items graphs / only users graph.

METHOD    | FLIXSTER        | DOUBAN | YAHOOMUSIC
GRALS     | 1.3126 / 1.2447 | 0.8326 | 38.0423
sRMGCNN   | 1.1788 / 0.9258 | 0.8012 | 22.4149

5 Conclusions

In this paper, we presented a new deep learning approach for matrix completion based on a multi-graph convolutional neural network architecture. Among the key advantages of our approach compared to traditional methods are its low computational complexity and constant number of degrees of freedom independent of the matrix size. We showed that the use of deep learning for matrix completion allows to beat related state-of-the-art recommender system methods. To our knowledge, our work is the first application of deep learning on graphs to this class of problems. We believe that it shows the potential of the nascent field of geometric deep learning on non-Euclidean domains, and will encourage future works in this direction.

Acknowledgments

FM and MB are supported in part by ERC Starting Grant No. 307047 (COMET), ERC Consolidator Grant No. 724228 (LEMAN), Google Faculty Research Award, Nvidia equipment grant, Radcliffe fellowship from Harvard Institute for Advanced Study, and TU Munich Institute for Advanced Study, funded by the German Excellence Initiative and the European Union Seventh Framework Programme under grant agreement No. 291763. XB is supported in part by NRF Fellowship NRFF2017-10.

²Code: https://github.com/fmonti/mgcnn

References

[1] K. Benzi, V. Kalofolias, X. Bresson, and P. Vandergheynst. Song recommendation with non-negative matrix factorization and graph total variation. In Proc. ICASSP, 2016.
[2] D. Boscaini, J. Masci, S. Melzi, M. M. Bronstein, U. Castellani, and P. Vandergheynst. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. Computer Graphics Forum, 34(5):13–23, 2015.
[3] D. Boscaini, J. Masci, E. Rodolà, and M. M. Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In Proc. NIPS, 2016.
[4] D. Boscaini, J. Masci, E. Rodolà, M. M. Bronstein, and D. Cremers. Anisotropic diffusion descriptors. Computer Graphics Forum, 35(2):431–441, 2016.
[5] J. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In Proc. Uncertainty in Artificial Intelligence, 1998.
[6] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
[7] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. 2013.
[8] E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[9] E. Candès and B. Recht. Exact matrix completion via convex optimization. Comm. ACM, 55(6):111–119, 2012.
[10] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Proc. NIPS, 2016.
[11] G. Dror, N. Koenigstein, Y. Koren, and M. Weimer. The Yahoo! Music dataset and KDD-Cup'11. In KDD Cup, 2012.
[12] D. K. Duvenaud et al. Convolutional networks on graphs for learning molecular fingerprints. In Proc. NIPS, 2015.
[13] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In Proc. IJCNN, 2005.
[14] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T. Chua. Neural collaborative filtering. In Proc. WWW, 2017.
[15] M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. arXiv:1506.05163, 2015.
[16] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[17] P. Jain and I. S. Dhillon. Provable inductive matrix completion. arXiv:1306.0626, 2013.
[18] M. Jamali and M. Ester. A matrix factorization technique with trust propagation for recommendation in social networks. In Proc. Recommender Systems, 2010.
[19] V. Kalofolias, X. Bresson, M. M. Bronstein, and P. Vandergheynst. Matrix completion on graphs. 2014.
[20] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. 2015.
[21] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. 2017.
[22] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[23] S. I. Ktena, S. Parisot, E. Ferrante, M. Rajchl, M. Lee, B. Glocker, and D. Rueckert. Distance metric learning using graph convolutional networks: Application to functional brain networks. In Proc. MICCAI, 2017.
[24] D. Kuang, Z. Shi, S. Osher, and A. L. Bertozzi. A harmonic extension approach for collaborative ranking. CoRR, abs/1602.05127, 2016.
[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278–2324, 1998.
[26] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. 2016.
[27] H. Ma, D. Zhou, C. Liu, M. Lyu, and I. King. Recommender systems with social regularization. In Proc. Web Search and Data Mining, 2011.
[28] J. Masci, D. Boscaini, M. M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks on Riemannian manifolds. In Proc. 3DRR, 2015.
[29] B. N. Miller et al. MovieLens unplugged: experiences with an occasionally connected recommender system. In Proc. Intelligent User Interfaces, 2003.
[30] F. Monti, D. Boscaini, J. Masci, E. Rodolà, J. Svoboda, and M. M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. In Proc. CVPR, 2017.
[31] S. Parisot, S. I. Ktena, E. Ferrante, M. Lee, R. Guerrerro Moreno, B. Glocker, and D. Rueckert. Spectral graph convolutions for population-based disease prediction. In Proc. MICCAI, 2017.
[32] M. Pazzani and D. Billsus. Content-based recommendation systems. The Adaptive Web, pages 325–341, 2007.
[33] N. Rao, H.-F. Yu, P. K. Ravikumar, and I. S. Dhillon. Collaborative filtering with graph information: Consistency and scalable methods. In Proc. NIPS, 2015.
[34] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Trans. Neural Networks, 20(1):61–80, 2009.
[35] S. Sedhain, A. Menon, S. Sanner, and L. Xie. AutoRec: Autoencoders meet collaborative filtering. In Proc. WWW, 2015.
[36] Y. Seo, M. Defferrard, P. Vandergheynst, and X. Bresson. Structured sequence modeling with graph convolutional recurrent networks. arXiv:1612.07659, 2016.
[37] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Sig. Proc. Magazine, 30(3):83–98, 2013.
[38] N. Srebro, J. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. In Proc. NIPS, 2004.
[39] Y. Suhara, X. Dong, and A. S. Pentland. DeepShop: Understanding purchase patterns via deep learning. In Proc. International Conference on Computational Social Science, 2016.
[40] S. Sukhbaatar, A. Szlam, and R. Fergus. Learning multiagent communication with backpropagation. In Proc. NIPS, 2016.
[41] M. Vestner, R. Litman, E. Rodolà, A. Bronstein, and D. Cremers. Product manifold filter: Non-rigid shape correspondence via kernel density estimation in the product space. In Proc. CVPR, 2017.
[42] M. Xu, R. Jin, and Z.-H. Zhou. Speedup matrix completion with side information: Application to multi-label learning. In Proc. NIPS, 2013.
[43] F. Yanez and F. Bach. Primal-dual algorithms for non-negative matrix factorization with the Kullback-Leibler divergence. In Proc. ICASSP, 2017.
[44] Y. Zheng, B. Tang, W. Ding, and H. Zhou. A neural autoregressive approach to collaborative filtering. In Proc. ICML, 2016.
Reducing Reparameterization Gradient Variance

Andrew C. Miller¹, Harvard University, [email protected]
Nicholas J. Foti, University of Washington, [email protected]
Alexander D'Amour, UC Berkeley, [email protected]
Ryan P. Adams, Google Brain and Princeton University, [email protected]

¹http://andymiller.github.io/

Abstract

Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the "reparameterization trick," represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to generate more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on a non-conjugate hierarchical model and a Bayesian neural net, where our method attained orders of magnitude (20–2,000×) reduction in gradient variance, resulting in faster and more stable optimization.

1 Introduction

Representing massive datasets with flexible probabilistic models has been central to the success of many statistics and machine learning applications, but the computational burden of fitting these models is a major hurdle. For optimization-based fitting methods, a central approach to this problem has been replacing expensive evaluations of the gradient of the objective function with cheap, unbiased, stochastic estimates of the gradient. For example, stochastic gradient descent using small minibatches of (conditionally) i.i.d. data to estimate the gradient at each iteration is a popular approach with massive data sets. Alternatively, some learning methods sample directly from a generative model or approximating distribution to estimate the gradients of interest, for example, in learning algorithms for implicit models [18, 30] and generative adversarial networks [2, 9].

Approximate Bayesian inference using variational techniques (variational inference, or VI) has also motivated the development of new stochastic gradient estimators, as the variational approach reframes the integration problem of inference as an optimization problem [4]. VI approaches seek out the distribution from a well-understood variational family of distributions that best approximates an intractable posterior distribution. The VI objective function itself is often intractable, but recent work has shown that it can be optimized with stochastic gradient methods that use Monte Carlo estimates of the gradient [19, 14, 22, 25], which we call Monte Carlo variational inference (MCVI). In MCVI, generating samples from an approximate posterior distribution is the source of gradient stochasticity. Alternatively, stochastic variational inference (SVI) [11] and other stochastic
Broadly speaking, score function estimates can be applied to both discrete and continuous variables, but often have high variance and thus are frequently used in conjunction with variance reduction techniques. On the other hand, the reparameterization gradient often has lower variance, but is restricted to continuous random variables. See Ruiz et al. [28] for a unifying perspective on these two estimators.

Like other stochastic gradient methods, the success of MCVI depends on controlling the variance of the stochastic gradient estimator. In this work, we present a novel approach to controlling the variance of the reparameterization gradient estimator in MCVI. Existing MCVI methods control this variance naively by averaging several gradient estimates, which becomes expensive for large data sets and complex models, with error that only diminishes as O(1/√N). Our approach exploits the fact that, in MCVI, the randomness in the gradient estimator is completely determined by a known Monte Carlo generating process; this allows us to leverage knowledge about this generative procedure to de-noise the gradient estimator. In particular, we construct a computationally cheap control variate based on an analytical linear approximation to the gradient estimator. Taking a linear combination of a naive gradient estimate with this control variate yields a new estimator for the gradient that remains unbiased but has lower variance. Applying the idea to Gaussian approximating families, we observe a 20-2,000× reduction in variance of the gradient norm under various conditions, and faster convergence and more stable behavior of optimization traces.

2 Background

Variational Inference. Given a model p(z, D) = p(D|z)p(z) of data D and parameters/latent variables z, the goal of VI is to approximate the posterior distribution p(z|D). VI approximates this intractable posterior with one from a simpler family, Q = {q(z; λ), λ ∈ Λ}, parameterized by variational parameters λ. VI procedures seek out the member of that family, q(·; λ) ∈ Q, that minimizes some divergence between the approximation q and the true posterior p(z|D). Variational inference can be framed as an optimization problem, usually in terms of Kullback-Leibler (KL) divergence, of the following form:

  λ* = argmin_{λ ∈ Λ} KL(q(z; λ) || p(z|D)) = argmin_{λ ∈ Λ} E_{z~q_λ}[ln q(z; λ) − ln p(z|D)].

The task is to find a setting of λ that makes q(z; λ) close to the posterior p(z|D) in KL divergence. (We use q(z; λ) and q_λ(z) interchangeably.) Directly computing the KL divergence requires evaluating the posterior itself; therefore, VI procedures use the evidence lower bound (ELBO) as the optimization objective

  L(λ) = E_{z~q_λ}[ln p(z, D) − ln q(z; λ)],   (1)

which, when maximized, minimizes the KL divergence between q(z; λ) and p(z|D). In special cases, parts of the ELBO can be expressed analytically (e.g., the entropy form or KL-to-prior form [10]); we focus on the general form in Equation 1.

To maximize the ELBO with gradient methods, we need to compute the gradient of Eq. (1), ∂L/∂λ ≜ g_λ. The gradient inherits the ELBO's form as an expectation, which is in general an intractable quantity to compute. In this work, we focus on reparameterization gradient estimators (RGEs) computed using the reparameterization trick. The reparameterization trick exploits the structure of the variational data generating procedure, i.e., the mechanism by which z is simulated from q_λ(z). To compute the RGE, we first express the sampling procedure from q_λ(z) as a differentiable map applied to exogenous randomness:
  ε ~ q_0(ε)   (independent of λ),   (2)
  z = T(ε; λ)   (differentiable map),   (3)

where the initial distribution q_0 and T are jointly defined such that z ~ q(z; λ) has the desired distribution. As a simple concrete example, if we set q(z; λ) to be a diagonal Gaussian, N(m_λ, diag(s_λ²)), with λ = [m_λ, s_λ], m_λ ∈ R^D, and s_λ ∈ R^D_+ the mean and variance, the sampling procedure could then be defined as

  ε ~ N(0, I_D),   z = T(ε; λ) = m_λ + s_λ ⊙ ε,   (4)

where ⊙ denotes an element-wise product. (We will also use x/y and x² to denote pointwise division and squaring, respectively.) Given this map, the reparameterization gradient estimator is simply the gradient of a Monte Carlo ELBO estimate with respect to λ. For a single sample, this is

  ε ~ q_0(ε),   ĝ_λ ≜ ∇_λ [ln p(T(ε; λ), D) − ln q(T(ε; λ); λ)],

and similarly the L-sample approximation can be computed by averaging the single-sample estimator over the individual samples:

  ĝ_λ^(L) = (1/L) Σ_{ℓ=1}^{L} ĝ_λ(ε^(ℓ)).   (5)

Crucially, the reparameterization gradient is unbiased, E[ĝ_λ] = ∇_λ L(λ), guaranteeing the convergence of stochastic gradient optimization procedures that use it [26].

Figure 1: Optimization traces for MCVI applied to a Bayesian neural network with various hyperparameter settings [(a) step size = .01; (b) step size = .1]. Each trace is running adam [13]. The three lines in each plot correspond to three different numbers of samples, L, used to estimate the gradient at each step. (Left) small stepsize; (Right) stepsize 10 times larger. Large step sizes allow for quicker progress; however, noisier (i.e., small L) gradients combined with large step sizes result in chaotic optimization dynamics. The converging traces reach different ELBOs due to the illustrative constant learning rates; in practice, one decreases the step size over time to satisfy the convergence criteria in Robbins and Monro [26].

Gradient Variance and Convergence. The efficiency of Monte Carlo variational inference hinges on the magnitude of gradient noise and the step size chosen for the optimization procedure. When the gradient noise is large, smaller gradient steps must be taken to avoid unstable dynamics of the iterates. However, a smaller step size increases the number of iterations that must be performed to reach convergence. We illustrate this trade-off in Figure 1, which shows realizations of an optimization procedure applied to a Bayesian neural network using reparameterization gradients. The posterior is over D = 653 parameters that we approximate with a diagonal Gaussian (see Appendix C.2). We compare the progress of the adam algorithm using various numbers of samples [13], fixing the learning rate. The noise present in the single-sample estimator causes extremely slow convergence, whereas the lower noise 50-sample estimator quickly converges, albeit at 50 times the cost. The upshot is that with low noise gradients we are able to safely take larger steps, enabling faster convergence to a local optimum. A natural question is, how can we reduce the variance of gradient estimates without introducing too much extra computation? Our approach is to use information about the variational model, q(·; λ), and carefully construct a control variate to the gradient.
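Before control variates enter the picture, the baseline estimator of Eqs. (4)-(5) is easy to state in code. The following NumPy sketch is illustrative only: the quadratic log-joint and all names are stand-ins for an arbitrary twice-differentiable model, and the Gaussian entropy is handled analytically rather than through the Monte Carlo ln q term.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 5

# Illustrative stand-in for an arbitrary model:
# ln p(z, D) = -0.5 ||z - mu0||^2, so grad_z ln p(z, D) = -(z - mu0).
mu0 = np.linspace(-1.0, 1.0, D)
def grad_logp(z):
    return -(z - mu0)

def rge(m, log_s, L):
    """L-sample reparameterization gradient of the ELBO w.r.t. (m, log s),
    Eqs. (4)-(5), using the analytic entropy of the diagonal Gaussian."""
    s = np.exp(log_s)
    eps = rng.standard_normal((L, D))          # base randomness, Eq. (2)
    z = m + s * eps                            # differentiable map T, Eqs. (3)-(4)
    g = grad_logp(z)                           # data term, one row per sample
    g_m = g.mean(axis=0)                       # pathwise gradient w.r.t. m
    g_log_s = (g * eps * s).mean(axis=0) + 1.0 # chain rule through s, plus entropy grad
    return g_m, g_log_s

# Averaging more samples shrinks the noise, at L times the cost (cf. Figure 1):
for L in (1, 10, 50):
    draws = np.stack([rge(np.zeros(D), np.zeros(D), L)[0] for _ in range(200)])
    print(L, draws.var(axis=0).mean())
```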
Control Variates. Control variates are random quantities that are used to reduce the variance of a statistical estimator without introducing any bias, by incorporating additional information into the estimator [7]. Given an unbiased estimator ĝ such that E[ĝ] = g (the quantity of interest), our goal is to construct another unbiased estimator with lower variance. We can do this by defining a control variate g̃ with known expectation m̃, and can write the new estimator as

  g^(cv) = ĝ − C(g̃ − m̃),   (6)

where C ∈ R^{D×D} for D-dimensional ĝ. Clearly the new estimator has the same expectation as the original estimator, but it has a different variance. We can attain optimal variance reduction by appropriately setting C. Intuitively, the optimal C is very similar to a regression coefficient: it is related to the covariance between the control variate and the original estimator. See Appendix A for further details on optimally setting C.
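A minimal sketch of Eq. (6) with a diagonal C follows. The plug-in regression coefficient below is one standard choice and is an assumption of this sketch, not the paper's Appendix A derivation; all names are illustrative.

```python
import numpy as np

def control_variate_mean(g_hat, g_tilde, m_tilde, c=None):
    """Average of g_hat - C (g_tilde - m_tilde) over paired samples, Eq. (6),
    with a diagonal C. g_hat, g_tilde: (L, D) paired draws; m_tilde: E[g_tilde].
    If c is None, use the per-dimension regression coefficient
    Cov(g_hat, g_tilde) / Var(g_tilde), estimated from the samples."""
    if c is None:
        gh = g_hat - g_hat.mean(axis=0)
        gt = g_tilde - g_tilde.mean(axis=0)
        c = (gh * gt).mean(axis=0) / np.maximum((gt * gt).mean(axis=0), 1e-12)
    return (g_hat - c * (g_tilde - m_tilde)).mean(axis=0)
```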
3 Method: Modeling Reparameterization Gradients

In this section we develop our main contribution, a new gradient estimator that can dramatically reduce reparameterization gradient variance. In MCVI, the reparameterization gradient estimator (RGE) is a Monte Carlo estimator of the true gradient; the estimator itself is a random variable. This random variable is generated using the "reparameterization trick": we first generate some randomness ε and then compute the gradient of the ELBO with respect to λ holding ε fixed. This results in a complex distribution from which we can generate samples, but which in general we cannot characterize, due to the complexity of the term arising from the gradient of the model term. However, we do have a lot of information about the sampling procedure: we know the variational distribution ln q(z; λ), the transformation T, and we can evaluate the model joint density ln p(z, D) pointwise. Furthermore, with automatic differentiation, it is often straightforward to obtain gradients and Hessian-vector products of our model ln p(z, D). We propose a scheme that uses the structure of q_λ and the curvature of ln p(z, D) to construct a tractable approximation of the distribution of the RGE. (This requires the model ln p(z, D) to be twice differentiable.) This approximation has a known mean and is correlated with the RGE distribution, allowing us to use it as a control variate to reduce the RGE variance.

Given a variational family parameterized by λ, we can decompose the ELBO gradient into a few terms that reveal its "data generating procedure":

  ε ~ q_0,   z = T(ε; λ),   (7)
  ĝ_λ ≜ ĝ(z; λ) = [∂ ln p(z, D)/∂z][∂z/∂λ] − [∂ ln q_λ(z)/∂z][∂z/∂λ] − ∂ ln q_λ(z)/∂λ,   (8)

where the three terms are the data term, the pathwise score, and the parameter score, respectively. Certain terms in Eq. (8) have tractable distributions. The Jacobian of T(·; λ), given by ∂z/∂λ, is defined by our choice of q(z; λ). For some transformations T we can exactly compute the distribution of the Jacobian given the distribution of ε. The pathwise and parameter score terms are gradients of our approximate distribution with respect to λ (via z or directly). If our approximation is tractable (e.g., a multivariate Gaussian), we can exactly characterize the distribution for these components. (In fact, we know that the expectation of the parameter score term is zero, and removing that term altogether can sometimes be a source of variance reduction that we do not explore here [27].) However, the data term in Eq. (8) involves a potentially complicated function of the latent variable z (and therefore a complicated function of ε), resulting in a difficult-to-characterize distribution. Our goal is to construct an approximation to the distribution of ∂ ln p(z, D)/∂z and its interaction with ∂z/∂λ, given a fixed distribution over ε. If the approximation yields random variables that are highly correlated with ĝ_λ, then we can use it to reduce the variance of that RGE sample.

Linearizing the data term. To simplify notation, we write the data term of the gradient as

  f(z') ≜ ∂ ln p(z, D)/∂z |_{z=z'},   (9)

where f: R^D → R^D since z ∈ R^D. We then linearize f about some value z_0:

  f̃(z) = f(z_0) + (∂f/∂z)(z_0) (z − z_0) = f(z_0) + H(z_0)(z − z_0),   (10)

where H(z_0) is the Hessian of the model, ln p(z, D), with respect to z, evaluated at z_0:

  H(z_0) = (∂f/∂z)(z_0) = ∂² ln p(z, D)/∂z² |_{z=z_0}.   (11)

Note that even though this uses second-order information about the model, it is a first-order approximation of the gradient. We also view this as a transformation of the random ε for a fixed λ:

  f̃_λ(ε) = f(z_0) + H(z_0)(T(ε, λ) − z_0),   (12)

which is linear in z = T(ε, λ). For some forms of T we can analytically derive the distribution of the random variable f̃_λ(ε). In Eq. (8), the data term interacts with the Jacobian of T, given by

  J_{λ_0}(ε) ≜ ∂z/∂λ = ∂T(ε, λ)/∂λ |_{λ=λ_0},   (13)

which importantly is a function of the same ε as in Eq. (12). We form our approximation of the first term in Eq. (8) by multiplying Eqs. (12) and (13), yielding

  g̃_λ^(data)(ε) ≜ f̃_λ(ε) J_λ(ε).   (14)

The tractability of this approximation hinges on how Eq. (14) depends on ε. When q(z; λ) is multivariate normal, we show that this approximation has a computable mean and can be used to reduce variance in MCVI settings. In the following sections we describe and empirically test this variance reduction technique applied to diagonal Gaussian posterior approximations.

3.1 Gaussian Variational Families

Perhaps the most common choice of approximating distribution for MCVI is a diagonal Gaussian, parameterized by a mean m_λ ∈ R^D and scales s_λ ∈ R^D_+. (For a diagonal Gaussian q, we define λ = [m_λ, s_λ].) The log probability density function is

  ln q(z; m_λ, s_λ²) = −(1/2)(z − m_λ)ᵀ diag(s_λ²)⁻¹ (z − m_λ) − (1/2) Σ_d ln s²_{λ,d} − (D/2) ln(2π).   (15)

To generate a random variate z from this distribution, we use the sampling procedure in Eq. (4). We denote the Monte Carlo RGE as ĝ_λ ≜ [ĝ_{m_λ}, ĝ_{s_λ}]. From Eq. (15), it is straightforward to derive the distributions of the pathwise score, parameter score, and Jacobian terms in Eq. (8). The Jacobian term of the sampling procedure has two straightforward components:

  ∂z/∂m_λ = I_D,   ∂z/∂s_λ = diag(ε).   (16)

The pathwise score term is the partial derivative of Eq. (15) with respect to z, ignoring variation due to the variational distribution parameters and noting that z = m_λ + s_λ ⊙ ε:

  ∂ ln q/∂z = −diag(s_λ²)⁻¹ (z − m_λ) = −ε/s_λ.   (17)

The parameter score term is the partial derivative of Eq. (15) with respect to the variational parameters λ, ignoring variation due to z. The m_λ and s_λ components are given by

  ∂ ln q/∂m_λ = (z − m_λ)/s_λ² = ε/s_λ,   (18)
  ∂ ln q/∂s_λ = −1/s_λ + (z − m_λ)²/s_λ³ = (ε² − 1)/s_λ.   (19)

The data term, f(z), multiplied by the Jacobian of T, is all that remains to be approximated in Eq. (8). We linearize f around z_0 = m_λ, where the approximation is expected to be accurate:

  f̃_λ(ε) = f(m_λ) + H(m_λ)((m_λ + s_λ ⊙ ε) − m_λ) = f(m_λ) + H(m_λ)(s_λ ⊙ ε)   (20)
         ~ N(f(m_λ), H(m_λ) diag(s_λ²) H(m_λ)ᵀ).   (21)
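To make Eqs. (12) and (20)-(21) concrete, here is a toy NumPy check under stated assumptions: the quadratic log-joint is an illustrative stand-in chosen so that the gradient and Hessian are analytic (and, being constant-Hessian, the linearization happens to be exact).

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4

# Toy log-joint: ln p(z, D) = -0.5 z^T A z, so f(z) = -A z and H(z) = -A.
A = np.diag(np.linspace(0.5, 2.0, D))
f = lambda z: -A @ z
H = lambda z: -A

m = rng.standard_normal(D)
s = np.exp(0.1 * rng.standard_normal(D))

eps = rng.standard_normal(D)
z = m + s * eps
f_exact = f(z)                      # data term at the sampled z
f_lin = f(m) + H(m) @ (s * eps)     # Eq. (12) with z0 = m, i.e. Eq. (20)

# Eq. (21): f_lin is Gaussian with mean f(m) and covariance H diag(s^2) H^T.
mean_lin = f(m)
cov_lin = H(m) @ np.diag(s**2) @ H(m).T
```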
Figure 2: Relationship between the base randomness ε, the RGE ĝ, and the approximation g̃. Arrows indicate deterministic functions. Sharing ε correlates the random variables. We know the distribution of g̃, which allows us to use it as a control variate for ĝ.

Algorithm 1 Gradient descent with RV-RGE with a diagonal Gaussian variational family
1: procedure RV-RGE-Optimize(λ_1, ln p(z, D), L)
2:   f(z) ← ∇_z ln p(z, D)
3:   H(z_a, z_b) ← ∇²_z ln p(z_a, D) z_b   ▷ define Hessian-vector product function
4:   for t = 1, ..., T do
5:     ε^(ℓ) ~ N(0, I_D) for ℓ = 1, ..., L   ▷ base randomness q_0
6:     ĝ^(ℓ)_{λ_t} ← ∇_λ ln p(z(ε^(ℓ), λ_t), D)   ▷ reparameterization gradients
7:     g̃^(ℓ)_m ← f(m_{λ_t}) + H(m_{λ_t}, s_{λ_t} ⊙ ε^(ℓ))   ▷ mean approx
8:     g̃^(ℓ)_s ← (f(m_{λ_t}) + H(m_{λ_t}, s_{λ_t} ⊙ ε^(ℓ))) ⊙ ε^(ℓ) + 1/s_{λ_t}   ▷ scale approx
9:     E[g̃_{m_{λ_t}}] ← f(m_{λ_t})   ▷ mean approx expectation
10:    E[g̃_{s_{λ_t}}] ← diag(H(m_{λ_t})) ⊙ s_{λ_t} + 1/s_{λ_t}   ▷ scale approx expectation
11:    ĝ^(RV)_{λ_t} ← (1/L) Σ_ℓ (ĝ^(ℓ)_{λ_t} − g̃^(ℓ)_{λ_t} + E[g̃_{λ_t}])   ▷ subtract control variate
12:    λ_{t+1} ← grad-update(λ_t, ĝ^(RV)_{λ_t})   ▷ gradient step (sgd, adam, etc.)
13:  return λ_T

Putting It Together: Full RGE Approximation. We write the complete approximation of the RGE in Eq. (8) by combining Eqs. (16), (17), (18), (19), and (21), which results in two components that are concatenated, g̃_λ = [g̃_{m_λ}, g̃_{s_λ}]. Each component is defined as

  g̃_{m_λ} = f̃_λ(ε) + ε/s_λ − ε/s_λ = f(m_λ) + H(m_λ)(s_λ ⊙ ε),   (22)
  g̃_{s_λ} = f̃_λ(ε) ⊙ ε + (ε/s_λ) ⊙ ε − (ε² − 1)/s_λ = (f(m_λ) + H(m_λ)(s_λ ⊙ ε)) ⊙ ε + 1/s_λ.   (23)

To summarize, we have constructed an approximation, g̃_λ, of the reparameterization gradient, ĝ_λ, as a function of ε. Because both ĝ_λ and g̃_λ are functions of the same random variable ε, and because we have mimicked the random process that generates true gradient samples, the two gradient estimators will be correlated. This approximation yields two tractable distributions: a Gaussian for the mean parameter gradient, g̃_{m_λ}, and a location-shifted, scaled non-central χ² for the scale parameter gradient, g̃_{s_λ}. Importantly, we can compute the mean of each component:

  E[g̃_{m_λ}] = f(m_λ),   E[g̃_{s_λ}] = diag(H(m_λ)) ⊙ s_λ + 1/s_λ.   (24)

We use g̃_λ (along with its expectation) as a control variate to reduce the variance of the RGE ĝ_λ.

3.2 Reduced Variance Reparameterization Gradient Estimators

Now that we have constructed a tractable gradient approximation, g̃_λ, with high correlation to the original reparameterization gradient estimator, ĝ_λ, we can use it as a control variate as in Eq. (6):

  ĝ_λ^(RV) = ĝ_λ − C(g̃_λ − E[g̃_λ]).   (25)

The optimal value for C is related to the covariance between ĝ_λ and g̃_λ (see Appendix A). We can try to estimate the value of C (or a diagonal approximation to C) on the fly, or we can simply fix this value. In our case, because we are using an accurate linear approximation to the transformation of a spherical Gaussian, the optimal value of C will be close to the identity (see Appendix A.1).

High Dimensional Models. For models with high dimensional posteriors, direct manipulation of the Hessian is computationally intractable. However, our approximations in Eqs. (22) and (23) only require a Hessian-vector product, which can be computed nearly as efficiently as the gradient [21]. Modern automatic differentiation packages enable easy and efficient implementation of Hessian-vector products for nearly any differentiable model [1, 20, 15]. We note that the mean of the control variate g̃_{s_λ} (Eq. (24)) depends on the diagonal of the Hessian matrix. While computing the Hessian diagonal may be tractable in some cases, in general it may cost the time equivalent of D function evaluations to compute [16]. Given a high dimensional problem, we can avoid this bottleneck in multiple ways. The first is simply to ignore the random variation in the Jacobian term due to ε: if we fix z to be m_λ (as we do with the data term), the portion of the Jacobian that corresponds to s_λ will be zero (in Eq. (16)). This will result in the same Hessian-vector-product-based estimator for g̃_{m_λ} but will set g̃_{s_λ} = 0, yielding variance reduction for the mean parameter but not the scale. Alternatively, we can estimate the Hessian diagonal on the fly. If we use L > 1 samples at each iteration, we can create a per-sample estimate of the s_λ-scaled diagonal of the Hessian using the other samples [3]. As the scaled diagonal estimator is unbiased, we can construct an unbiased estimate of the control variate mean to use in lieu of the actual mean. We will see that the resulting variance is not much higher than when using full Hessian information, and is computationally tractable to deploy on high-dimensional models. A similar local baseline strategy is used for variance reduction in Mnih and Rezende [17].
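A sketch of the two ingredients just described, under stated assumptions: the finite-difference Hessian-vector product is a dependency-free stand-in for the autodiff implementation the paper relies on, and the leave-one-out averaging is one concrete reading of the per-sample local baseline; all function names are illustrative.

```python
import numpy as np

def hvp(grad_logp, z0, v, h=1e-5):
    """Hessian-vector product H(z0) v via central differences of the gradient;
    with an autodiff framework one would use Pearlmutter's trick [21] instead."""
    return (grad_logp(z0 + h * v) - grad_logp(z0 - h * v)) / (2.0 * h)

def local_scale_mean(grad_logp, m, s, eps):
    """Per-sample local baseline for E[g_tilde_s] (cf. Eq. (24)): since
    E[(H(m)(s*eps)) * eps] = diag(H(m)) * s, each sample's HVP gives an
    unbiased draw of the s-scaled Hessian diagonal [3]; a leave-one-out mean
    over the other samples keeps the mean estimate independent of the sample
    it corrects. Requires L >= 2. eps has shape (L, D)."""
    L, D = eps.shape
    hvps = np.stack([hvp(grad_logp, m, s * e) for e in eps])  # rows: H(m)(s*eps_l)
    draws = hvps * eps                                        # unbiased diagonal draws
    loo = (draws.sum(axis=0, keepdims=True) - draws) / (L - 1)
    return loo + 1.0 / s   # one E[g_tilde_s] estimate per sample
```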
RV-RGE Estimators. We introduce three different estimators based on variations of the gradient approximation defined in Eqs. (22), (23), and (24), each addressing the Hessian operations differently:

- The Full Hessian estimator implements the three equations as written and can be used when it is computationally feasible to use the full Hessian.
- The Hessian Diagonal estimator replaces the Hessian in (22) with a diagonal approximation, useful for models with a cheap Hessian diagonal.
- The Hessian-vector product + local approximation (HVP+Local) estimator uses an efficient Hessian-vector product in Eqs. (22) and (23), while approximating the diagonal term in Eq. (24) using a local baseline.

The HVP+Local approximation is geared toward models where Hessian-vector products can be computed, but the exact diagonal of the Hessian cannot. We detail the RV-RGE procedure in Algorithm 1 (a code sketch of one step follows at the end of this section) and compare properties of these three estimators to the pure Monte Carlo estimator in the following section.

3.3 Related Work

Recently, Roeder et al. [27] introduced a variance reduction technique for reparameterization gradients that ignores the parameter score component of the gradient and can be viewed as a type of control variate for the gradient throughout the optimization procedure. This approach is complementary to our method: our approximation is typically more accurate near the beginning of the optimization procedure, whereas the estimator in Roeder et al. [27] is low-variance near convergence. We hope to incorporate information from both control variates in future work. Per-sample estimators in a multi-sample setting for variational inference were used in Mnih and Rezende [17]. We employ this technique in a different way; we use it to estimate computationally intractable quantities needed to keep the gradient estimator unbiased. Black box variational inference used control variates and Rao-Blackwellization to reduce the variance of score-function estimators [22]. Our development of variance reduction for reparameterization gradients complements their work. Other variance reduction techniques for stochastic gradient descent have focused on stochasticity due to data subsampling [12, 31]. Johnson and Zhang [12] cache statistics about the entire dataset at each epoch to use as a control variate for noisy mini-batch gradients. The variance reduction method described in Paisley et al. [19] is conceptually similar to ours. Their method uses first- or second-order derivative information to reduce the variance of the score function estimator. The score function estimator (and its reduced-variance version) often has much higher variance than the reparameterization gradient estimator that we improve upon in this work. Our variance measurement experiments in Table 1 include a comparison to the estimator featured in [19], which we found to be much higher variance than the baseline RGE.
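To tie Algorithm 1 and Eqs. (22)-(25) together before the experiments, here is a minimal NumPy sketch of one full-Hessian RV-RGE step. It assumes C = I and analytic gradient/Hessian callables for a toy model; all names are illustrative, not the released implementation.

```python
import numpy as np

def rv_rge_step(m, s, eps, grad_logp, hess_logp):
    """One RV-RGE estimate (full-Hessian variant of Algorithm 1, C = I).
    eps: (L, D) base randomness; grad_logp / hess_logp: callables returning
    the model gradient and Hessian. Returns de-noised (g_m, g_s) estimates."""
    L, D = eps.shape
    z = m + s * eps
    f_m, H_m = grad_logp(m), hess_logp(m)

    # Naive per-sample RGE for a diagonal Gaussian (Eq. (8) via Eqs. (16)-(19);
    # the score terms cancel in the mean component, leaving f(z)):
    g_hat_m = grad_logp(z)
    g_hat_s = g_hat_m * eps + 1.0 / s

    # Control variate samples, Eqs. (22)-(23):
    f_lin = f_m + (s * eps) @ H_m.T      # rows: f(m) + H(m)(s * eps_l)
    g_til_m = f_lin
    g_til_s = f_lin * eps + 1.0 / s

    # Known control variate means, Eq. (24):
    mean_til_m = f_m
    mean_til_s = np.diag(H_m) * s + 1.0 / s

    # Eq. (25): subtract the centered control variate, then average over samples.
    g_rv_m = (g_hat_m - g_til_m).mean(axis=0) + mean_til_m
    g_rv_s = (g_hat_s - g_til_s).mean(axis=0) + mean_til_s
    return g_rv_m, g_rv_s
```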
4 Experiments and Analysis

In this section we empirically examine the variance properties of RV-RGEs and stochastic optimization for two real-data examples: a hierarchical Poisson GLM and a Bayesian neural network.

- Hierarchical Poisson GLM: The frisk model is a hierarchical Poisson GLM, described in Appendix C.1. This non-conjugate model has a D = 37 dimensional posterior.
- Bayesian Neural Network: The non-conjugate bnn model is a Bayesian neural network applied to the wine dataset (see Appendix C.2) and has a D = 653 dimensional posterior.

Code is available at https://github.com/andymiller/ReducedVarianceReparamGradients.

Table 1: Comparison of variances for RV-RGEs with L = 10-sample estimators. Variance measurements were taken for λ values at three points during the optimization algorithm (early, mid, late). The parenthetical rows labeled "(MC abs.)" denote the absolute value of the standard Monte Carlo reparameterization gradient estimator. The other rows compare estimators relative to the pure MC RGE variance: a value of 100 indicates variation equal to the pure MC estimator at L = 10 samples; a value of 1 indicates a 100-fold decrease in variance (lower is better). Our new estimators (Full Hessian, Hessian Diag, HVP+Local) are described in Section 3.2. The Score Delta method is the gradient estimator described in [19]. Additional variance measurement results are in Appendix D.

                              g_m                   ln g_s                  g_lambda
Iter.  Estimator           AveV      V(||.||)    AveV       V(||.||)    AveV       V(||.||)
early  (MC abs.)           (1.7e+02) (5.4e+03)   (3e+04)    (2e+05)     (1.5e+04)  (5.9e+03)
       MC                  100.000   100.000     100.000    100.000     100.000    100.000
       Full Hessian        1.279     1.139       0.001      0.002       0.008      1.039
       Hessian Diag        34.691    23.764      0.003      0.012       0.194      21.684
       HVP + Local         1.279     1.139       0.013      0.039       0.020      1.037
       Score Delta [19]    6069.668  718.430     1.395      0.931       34.703     655.105
mid    (MC abs.)           (3.8e+03) (1.3e+05)   (18)       (3.3e+02)   (1.9e+03)  (1.3e+05)
       MC                  100.000   100.000     100.000    100.000     100.000    100.000
       Full Hessian        0.075     0.068       0.113      0.143       0.076      0.068
       Hessian Diag        38.891    21.283      6.295      7.480       38.740     21.260
       HVP + Local         0.075     0.068       30.754     39.156      0.218      0.071
       Score Delta [19]    4763.246  523.175     2716.038   700.100     4753.752   523.532
late   (MC abs.)           (1.7e+03) (1.3e+04)   (1.1)      (19)        (8.3e+02)  (1.3e+04)
       MC                  100.000   100.000     100.000    100.000     100.000    100.000
       Full Hessian        0.042     0.030       1.686      0.431       0.043      0.030
       Hessian Diag        40.292    53.922      23.644     28.024      40.281     53.777
       HVP + Local         0.042     0.030       98.523     99.811      0.110      0.022
       Score Delta [19]    5183.885  1757.209    17355.120  3084.940    5192.270   1761.317

Quantifying Gradient Variance Reduction. We measure the variance reduction of the RGE observed at various iterates, λ_t, during execution of gradient descent. Both the gradient magnitude and the marginal variance of the gradient elements (using a sample of 1000 gradients) are reported. Further, we inspect both the mean, m_λ, and log-scale, ln s_λ, parameters separately. Table 1 compares gradient variances for the frisk model for our four estimators: i) pure Monte Carlo (MC), ii) Full Hessian, iii) Hessian Diagonal, and iv) Hessian-vector product + local approximation (HVP+Local).
Additionally, we compare our methods to the estimator described in [19], based on the score function estimator and a control variate method. We use a first-order delta method approximation of the model term, which admits a closed-form control variate term. Each entry in the table measures the percent of the variance of the pure Monte Carlo estimator. We show the average variance over each component, AveV(·), and the variance of the norm, V(||·||). We separate out variance in the mean parameters, g_m, the log-scale parameters, ln g_s, and the entire vector g_λ.

The reduction in variance is dramatic. Using HVP+Local, in the norm of the mean parameters we see between an 80× and 3,000× reduction in variance, depending on the progress of the optimizer. The importance of the full Hessian-vector product for reducing mean parameter variance is also demonstrated, as the Hessian diagonal only reduces mean parameter variance by a factor of 2-5×. For the variational scale parameters, ln g_s, we see that early on the HVP+Local approximation is able to reduce parameter variance by a large factor (about 2,000×). However, at later iterates the HVP+Local scale parameter variance is on par with the Monte Carlo estimator, while the full Hessian estimator still enjoys huge variance reduction. This indicates that, by this point, most of the noise comes from the local Hessian diagonal estimator. We also note that, in this problem, most of the estimator variance is in the mean parameters. Because of this, the norm of the entire parameter gradient, g_λ, is reduced by 100-5,000×. We found that the score function estimator (with the delta method control variate) is typically much higher variance than the baseline reparameterization gradient estimator (often by a factor of 10-50×). In Appendix D we report results for other values of L.

Optimizer Convergence and Stability. We compare the optimization traces for the frisk and bnn models for the MC and the HVP+Local estimators under various conditions. At each iteration we estimate the true ELBO value using 2000 Monte Carlo samples. We optimize the ELBO objective using adam [13] for two step sizes, each trace starting at the same value of λ_0.

Figure 3: MCVI optimization trace applied to the frisk model for two values of L and step size [(a) adam with step size = 0.05; (b) adam with step size = 0.10]. We run the standard MC gradient estimator (solid line) and the RV-RGE with L = 2 and 10 samples.

Figure 4: MCVI optimization for the bnn model applied to the wine data for various L and step sizes [(a) adam with step size = 0.05; (b) adam with step size = 0.10]. The standard MC gradient estimator (dotted) was run with 2, 10, and 50 samples; RV-RGE (solid) was run with 2 and 10 samples. In 4b the 2-sample MC estimator falls below the frame.

Figure 3 compares ELBO optimization traces for L = 2 and L = 10 samples and step sizes .05 and .1 for the frisk model. We see that the HVP+Local estimators make early progress and converge quickly.
We also see that the L = 2 pure MC estimator results in noisy optimization paths. Figure 4 shows objective value as a function of wall-clock time under various settings for the bnn model. The HVP+Local estimator does more work per iteration; however, it tends to converge faster. We observe the L = 10 HVP+Local estimator outperforming the L = 50 MC estimator.

5 Conclusion

Variational inference reframes an integration problem as an optimization problem, with the caveat that each step of the optimization procedure solves an easy integration problem. For general models, each sub-integration problem is itself intractable and must be estimated, typically with Monte Carlo samples. Our work has shown that we can use more information about the variational family to create tighter estimators of the ELBO gradient, which leads to faster and more stable optimization. The efficacy of our approach relies on the complexity of the RGE distribution being well captured by linear structure, which may not be true for all models. However, we found the idea effective for non-conjugate hierarchical Bayesian models and a neural network. Our presentation is a specific instantiation of a more general idea: using cheap linear structure to remove variation from stochastic gradient estimates. The method described in this work is tailored to Gaussian approximating families for Monte Carlo variational inference, but could be easily extended to location-scale families. We plan to extend this idea to more flexible variational distributions, including flow distributions [24] and hierarchical distributions [23], which would require approximating different functional forms within the variational objective. We also plan to adapt our technique to model and inference schemes with recognition networks [14], which would require back-propagating de-noised gradients into the parameters of an inference network.

Acknowledgements

The authors would like to thank Finale Doshi-Velez, Mike Hughes, Taylor Killian, Andrew Ross, and Matt Hoffman for helpful conversations and comments on this work. ACM is supported by the Applied Mathematics Program within the Office of Science Advanced Scientific Computing Research of the U.S. Department of Energy under contract No. DE-AC02-05CH11231. NJF is supported by a Washington Research Foundation Innovation Postdoctoral Fellowship in Neuroengineering and Data Science. RPA is supported by NSF IIS-1421780 and the Alfred P. Sloan Foundation.

References

[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[3] Costas Bekas, Effrosyni Kokiopoulou, and Yousef Saad. An estimator for the diagonal of a matrix. Applied Numerical Mathematics, 57(11):1214-1229, 2007.
[4] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 2017.
[5] Andrew Gelman and Jennifer Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2006.
[6] Andrew Gelman, Jeffrey Fagan, and Alex Kiss. An analysis of the NYPD's stop-and-frisk policy in the context of claims of racial bias. Journal of the American Statistical Association, 102:813-823, 2007.
[7] Paul Glasserman. Monte Carlo Methods in Financial Engineering, volume 53. Springer Science & Business Media, 2004.
[8] Paul Glasserman. Monte Carlo Methods in Financial Engineering, volume 53. Springer Science & Business Media, 2013.
[9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[10] Matthew D Hoffman and Matthew J Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. 2016.
[11] Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303-1347, 2013.
[12] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315-323, 2013.
[13] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, 2015.
[14] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations, 2014.
[15] Dougal Maclaurin, David Duvenaud, Matthew Johnson, and Ryan P. Adams. Autograd: Reverse-mode differentiation of native Python, 2015. URL http://github.com/HIPS/autograd.
[16] James Martens, Ilya Sutskever, and Kevin Swersky. Estimating the Hessian by back-propagating curvature. In Proceedings of the International Conference on Machine Learning, 2012.
[17] Andriy Mnih and Danilo Rezende. Variational inference for Monte Carlo objectives. In Proceedings of the 33rd International Conference on Machine Learning, pages 2188-2196, 2016.
[18] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
[19] John Paisley, David M Blei, and Michael I Jordan. Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning, pages 1363-1370. Omnipress, 2012.
[20] Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan. PyTorch. https://github.com/pytorch/pytorch, 2017.
[21] Barak A Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147-160, 1994.
[22] Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In AISTATS, pages 814-822, 2014.
[23] Rajesh Ranganath, Dustin Tran, and David M Blei. Hierarchical variational models. In International Conference on Machine Learning, 2016.
[24] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1530-1538, 2015.
[25] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, 2014.
[26] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.
[27] Geoffrey Roeder, Yuhuai Wu, and David Duvenaud. Sticking the landing: An asymptotically zero-variance gradient estimator for variational inference. arXiv preprint arXiv:1703.09194, 2017.
[28] Francisco R Ruiz, Michalis Titsias RC AUEB, and David Blei. The generalized reparameterization gradient. In Advances in Neural Information Processing Systems, pages 460-468, 2016.
[29] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1971-1979, 2014.
[30] Dustin Tran, Matthew D Hoffman, Rif A Saurous, Eugene Brevdo, Kevin Murphy, and David M Blei. Deep probabilistic programming. In Proceedings of the International Conference on Learning Representations, 2017.
[31] Chong Wang, Xi Chen, Alexander J Smola, and Eric P Xing. Variance reduction for stochastic gradient optimization. In Advances in Neural Information Processing Systems, pages 181-189, 2013.
Visual Reference Resolution using Attention Memory for Visual Dialog

Paul Hongsuck Seo (POSTECH), Andreas Lehrmann (Disney Research), Bohyung Han (POSTECH), Leonid Sigal (Disney Research)
{hsseo, bhhan}@postech.ac.kr, {andreas.lehrmann, lsigal}@disneyresearch.com

Abstract

Visual dialog is a task of answering a series of inter-dependent questions given an input image, and often requires resolving visual references among the questions. This problem is different from visual question answering (VQA), which relies on spatial attention (a.k.a. visual grounding) estimated from an image and question pair. We propose a novel attention mechanism that exploits visual attentions in the past to resolve the current reference in the visual dialog scenario. The proposed model is equipped with an associative attention memory storing a sequence of previous (attention, key) pairs. From this memory, the model retrieves the previous attention, taking into account recency, which is most relevant for the current question, in order to resolve potentially ambiguous references. The model then merges the retrieved attention with a tentative one to obtain the final attention for the current question; specifically, we use dynamic parameter prediction to combine the two attentions conditioned on the question. Through extensive experiments on a new synthetic visual dialog dataset, we show that our model significantly outperforms the state-of-the-art (by about 16% points) in situations where visual reference resolution plays an important role. Moreover, the proposed model achieves superior performance (about 2% points improvement) on the Visual Dialog dataset [1], despite having significantly fewer parameters than the baselines.

1 Introduction

In recent years, advances in the design and optimization of deep neural network architectures have led to tremendous progress across many areas of computer vision (CV) and natural language processing (NLP). This progress, in turn, has enabled a variety of multi-modal applications spanning both domains, including image captioning [2-4], language grounding [5, 6], image generation from captions [7, 8], and visual question answering (VQA) on images [9-21] and videos [22-24]. The VQA task, in particular, has received broad attention because its formulation requires a universal understanding of image content. Most state-of-the-art methods [10, 13, 15] address this inherently challenging problem through an attention mechanism [3] that allows to visually ground linguistic expressions; they identify the region of visual interest referred to by the question and predict the answer based on the visual information in that region.

More recently, Visual Dialog [1] has been introduced as a generalization of the VQA task. Unlike VQA, where every question is asked independently, a visual dialog system needs to answer a sequence of questions about an input image. The sequential and inter-dependent property of questions in a dialog presents additional challenges. Consider the simple image and partial dialog in Figure 1. Some questions (e.g., #1: "How many 9's are there in the image?") contain the full information needed to attend to the regions within the image and answer the question accurately. Other questions (e.g., #6: "What is the number of the blue digit?") are ambiguous on their own and require knowledge obtained from the prior questions (1, 2, 3, and 5 in particular) in order to resolve attention to the specific region the expression ("the blue digit") is referring to.
Figure 1: Example from MNIST Dialog. Each pair consists of an image (left) and a set of sequential questions with answers (right):

#  Question                                                       Answer
1  How many 9's are there in the image?                           four
2  How many brown digits are there among them?                    one
3  What is the background color of the digit at the left of it?   white
4  What is the style of the digit?                                flat
5  What is the color of the digit at the left of it?              blue
6  What is the number of the blue digit?                          4
7  Are there other blue digits?                                   two

This process of visual reference resolution (a term we coin by borrowing nomenclature, partially, from NLP, where coreference resolution attempts to solve the corresponding problem in language; the "visual" in visual reference resolution implies that we want to both resolve and visually ground the reference used in the question) is the key component required to localize attention accurately in the presence of ambiguous expressions, and thus plays a crucial role in extending VQA approaches to the visual dialog task.

We perform visual reference resolution relying on a novel attention mechanism that employs an associative memory to obtain a visual reference for an ambiguous expression. The proposed model utilizes two types of intermediate attentions: tentative and retrieved ones. The tentative attention is calculated solely based on the current question (and, optionally, the dialog history), and is capable of focusing on an appropriate region when the question is unambiguous. The retrieved attention, used for visual reference resolution, is the most relevant previous attention available in the associative memory. The final attention for the current question is obtained by combining the two attention maps conditioned on the question; this is similar to neural module networks [12, 14], which dynamically combine discrete attention modules, based on a question, to produce the final attention. For this task, our model adopts a dynamic parameter layer [9] that allows us to work with a continuous space of dynamic parametrizations, as opposed to the discrete set of parametrizations in [12, 14].

Contributions. We make the following contributions. (1) We introduce a novel attention process that, in addition to direct attention, resolves visual references by modeling the sequential dependency of the current question on previous attentions through an associative attention memory; (2) We perform a comprehensive analysis of the capacity of our model for the visual reference resolution task using a synthetic visual dialog dataset (MNIST Dialog) and obtain superior performance compared to all baseline models. (3) We test the proposed model on a visual dialog benchmark (VisDial [1]) and show state-of-the-art performance with significantly fewer parameters.

2 Related Work

Visual Dialog. Visual dialogs were recently proposed in [1] and [25], focusing on different aspects of a dialog. While the conversations in the former contain free-form questions about arbitrary objects, the dialogs in the latter aim at object discovery through a series of yes/no questions. Reinforcement learning (RL) techniques were built upon those works in [26] and [27]. Das et al. [26] train two agents by playing image guessing games and show that they establish their own communication protocol and style of speech. In [27], RL is directly used to improve the performance of agents in terms of the task completion rate of goal-oriented dialogs. However, the importance of previous references has not yet been explored in the visual dialog task.

Attention for Visual Reference Resolution. While visual dialog is a recent task, VQA has been studied extensively, and attention models have been known to be beneficial for answering independent questions [10-16].
However, none of those methods incorporate visual reference resolution, which is neither necessary nor possible in VQA but essential in visual dialog. Beyond VQA, attention models are used to find visual groundings of linguistic expressions in a variety of other multi-modal tasks, such as image captioning [3, 4], VQA in videos [22], and visual attribute prediction [28]. Common to most of these works, an attention is obtained from a single embedding of all linguistic inputs. Instead, we propose a model that embeds each question in a dialog separately and calculates the current question's attention by resolving its sequential dependencies through an attention memory and a dynamic attention combination process. We calculate an attention through a dynamic composition process taking advantage of a question's semantic structure, which is similar to [12] and [14]. However, the proposed method still differs in that our attention process is designed to deal with ambiguous expressions in dialogs by dynamically analyzing the dependencies of questions at each time step. In contrast, [12] and [14] obtain the attention for a question based on its compositional semantics, which is completely given at the time of the network structure prediction.

Figure 2: Architecture of the proposed network [(a) question RNN, (b) history HRNN, (c) image CNN, (d) fc fusion, (e) attention process with attention memory, (f) fc fusion, (g) answer decoder, (h) key generation]. The gray box represents the proposed attention process. Refer to Section 3 for a detailed description of the individual modules (a)-(f).

Memory for Question Answering. Another line of closely related work is the use of a memory component in question answering models. Memory networks with end-to-end training were first introduced in [29], extending the original memory network [30]. The memories in these works are used to store some factoids in a given story, and the supporting facts for answering questions are selectively retrieved through memory addressing. A memory network with an episodic memory was proposed in [31] and applied to VQA by storing the features at different locations of the memory [32]. While these memories use the contents themselves for addressing, [33] proposes associative memories that have a key-value pair at each entry and use the keys for addressing the value to be retrieved. Finally, a memory component is also utilized for visual dialog in [1] to actively select the previous question in the history. Memories in these previous memory networks store given factoids to retrieve a supporting fact. In contrast, our attention memory stores previous attentions, which represent grounded references for previous questions, to resolve the current reference based on the sequential dependency of the referring expressions. Moreover, we adopt an associative memory to use the semantics of QA pairs for addressing.

3 Visual Dialog Model with Attention Memory-based Reference Resolution

Visual dialog is the task of building an agent capable of answering a sequence of questions presented in the form of a dialog.
Formally, we need to predict an answer y_t ∈ Y, where Y is a set of discrete answers or a set of natural language phrases/sentences, at time t, given an input image I, the current question q_t, and the dialog history H = {h_τ | h_τ = (q_τ, y_τ), 0 ≤ τ < t}. We utilize the encoder-decoder architecture recently introduced in [1], which is illustrated in Figure 2. Specifically, we represent a triplet (q, H, I) with e_t by applying three different encoders, based on recurrent (RNN with long short-term memory units), hierarchical recurrent (HRNN), and convolutional (CNN) neural networks, followed by attention and fusion units (Figure 2 (a)-(f)). (In the HRNN, the questions and the answers of a history are independently embedded using LSTMs and then fused by an fc layer with concatenation to form QA encodings; the fused QA embedding at each time step is fed to another LSTM, and the final output is used for the history encoding.) Our model then decodes the answer y_t from the encoded representation e_t (Figure 2 (g)). Note that, to obtain the encoded representation e_t, the CNN image feature map f computed from I undergoes a soft spatial attention process guided by the combination of q_t and H as follows:

  c_t = fc(RNN(q_t), HRNN(H)),   (1)
  f_t^att = [α_t(c_t)]ᵀ f = Σ_{n=1}^{N} α_{t,n}(c_t) f_n,   (2)

where fc (Figure 2 (d)) denotes a fully connected layer, α_t(c_t) is the attention map conditioned on a fused encoding of q_t and H, n is the location index in the feature map, and N is the size of the spatial grid of the feature map. This attention mechanism is the critical component that allows the decoder to focus on relevant regions of the input image; it is also the main focus of this paper.
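A minimal NumPy sketch of Eqs. (1)-(2) follows, under stated assumptions: the tanh fusion and bilinear scorer are illustrative stand-ins for whatever small network produces the attention map α_t(c_t), and all shapes and names are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def encode_and_attend(q_enc, h_enc, feat, W_fuse, W_att):
    """Fuse question and history encodings with an fc layer (Eq. (1)), score
    the N grid cells of the CNN feature map, and pool (Eq. (2)).
    q_enc: (dq,), h_enc: (dh,), feat: (N, C), W_fuse: (d, dq+dh), W_att: (C, d)."""
    c_t = np.tanh(W_fuse @ np.concatenate([q_enc, h_enc]))  # Eq. (1)
    alpha = softmax(feat @ (W_att @ c_t))                   # attention over locations
    f_att = alpha @ feat                                    # Eq. (2): sum_n alpha_n f_n
    return c_t, alpha, f_att
```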
3.1 Tentative Attention

We calculate the tentative attention by computing the similarity, in a joint embedding space, between the encoding of the question and history, c_t, and each feature vector, f_n, in the image feature grid f:

    s_{t,n} = (W_c^tent c_t)^⊤ (W_f^tent f_n),                     (3)
    α_t^tent = softmax({s_{t,n}, 1 ≤ n ≤ N}),                      (4)

where W_c^tent and W_f^tent are projection matrices for the question and history encoding and for the image feature vector, respectively, and s_{t,n} is the attention score for the feature at spatial location n.

3.2 Relevant Attention Retrieval from Attention Memory

As a reminder, in addition to the tentative attention, our model obtains the most relevant previous attention using an attention memory for visual reference resolution.

Associative Attention Memory  The proposed model is equipped with an associative memory, called an attention memory, to store previous attentions. The attention memory M_t = {(α_0, k_0), (α_1, k_1), ..., (α_{t-1}, k_{t-1})} stores all the previous attention maps α_τ with their corresponding keys k_τ for associative addressing. Note that α_0 is the NULL attention and is set to all zeros. The NULL attention can be used when no previous attention reference is required for the current reference resolution.

The most relevant previous attention is retrieved based on the key comparison, as illustrated in Figure 3b. Formally, the proposed model addresses the memory given the embedding of the current question and history c_t using

    m_{t,τ} = (W^mem c_t)^⊤ k_τ   and   β_t = softmax({m_{t,τ}, 0 ≤ τ ≤ t-1}),   (5)

where W^mem projects the question and history encoding onto the semantic space of the memory keys. The relevant attention α_t^mem and key k_t^mem are then retrieved from the attention memory using the computed addressing vector β_t by

    α_t^mem = Σ_{τ=0}^{t-1} β_{t,τ} α_τ   and   k_t^mem = Σ_{τ=0}^{t-1} β_{t,τ} k_τ.   (6)

This relevant attention retrieval allows the proposed model to resolve the visual reference by indirectly resolving coreferences [34-36] through the memory addressing process.

Incorporating Sequential Dialog Structure  While the associative addressing is effective in retrieving the most relevant attention based on the question semantics, we can improve the performance by incorporating the sequential structure of the questions in a dialog. Considering that more recent attentions are more likely to be referred to again, we add an extra term to Eq. (5) that introduces a preference for sequential addressing, i.e., m'_{t,τ} = (W^mem c_t)^⊤ k_τ + θ(t - τ), where θ is a learnable parameter weighting the relative time distance (t - τ) from the current time step.

3.3 Dynamic Attention Combination

After obtaining both attentions, the proposed model combines them. The two attention maps α_t^tent and α_t^mem are first stacked and fed to a convolution layer to locally combine the attentions. The locally combined attention features are then flattened and fed to a fully connected (fc) layer with softmax, generating the final attention map. However, an fc layer with fixed weights would always result in the same type of combination, although the merging process should, as we argued previously, depend on the question. Therefore, we adopt the dynamic parameter layer introduced in [9] to adapt the weights of the fc layer conditioned on the question at test time. Formally, the final attention map α_t(c_t) for time t is obtained by
    α_t(c_t) = softmax(W^DPL(c_t) · g(α_t^tent, α_t^mem)),          (7)

where W^DPL(c_t) are the dynamically determined weights and g(α_t^tent, α_t^mem) is the flattened output of the convolution applied to the stacked attention maps. As in [9], we use a hashing technique to predict the dynamic parameters without an explosive increase in network size.

3.4 Additional Components and Implementation

In addition to the attended image feature, we find other information useful for answering the question. Therefore, for the final encoding e_t at time step t, we fuse the attended image feature embedding f_t^att with the context embedding c_t, the attention map α_t, and the retrieved key k_t^mem from the memory, by an fc layer after concatenation (Figure 2f). Finally, when we described the associative memory above, we did not specify the memory key generation procedure. After answering the current question, we append the computed attention map to the memory. When storing the current attention into the memory, the proposed model generates a key k_t by fusing the context embedding c_t with the current answer embedding a_t through an fc layer (Figure 2h). Note that the answer embedding a_t is obtained using an LSTM.

Learning  Since all the modules of the proposed network are fully differentiable, the entire network can be trained end-to-end by standard gradient-based learning algorithms.

4 Experiments

We conduct two sets of experiments to verify the proposed model. To highlight the model's ability to resolve visual references, we first perform an experiment on a synthetic dataset that is explicitly designed to contain ambiguous expressions and strong inter-dependencies among the questions in the visual dialog. We then show that the model also works well on the real VisDial [1] benchmark.

Figure 4: Results on MNIST Dialog. Answer prediction accuracy [%] of all models over all questions (left) and accuracy curves of four models (ATT, ATT+H, AMEM, AMEM+H+SEQ) at different dialog steps (right). +H and +SEQ denote the use of history embeddings in the models and of addressing with sequential preference, respectively.

    Model         +H    +SEQ   Accuracy [%]
    I             -     -      20.18
    Q             -     -      36.58
    Q+H           yes   -      37.58
    LF [1]        yes   -      45.06
    HRE [1]       yes   -      49.10
    MN [1]        yes   -      48.51
    ATT           -     -      62.62
    ATT+H         yes   -      79.72
    AMEM          -     -      87.53
    AMEM+H        yes   -      89.20
    AMEM+SEQ      -     yes    90.05
    AMEM+H+SEQ    yes   yes    96.39

4.1 MNIST Dialog Dataset

Experimental Setting  We create a synthetic dataset, called MNIST Dialog^3, which is designed for the analysis of models on the task of visual reference resolution with ambiguous expressions. Each image in MNIST Dialog contains a 4 × 4 grid of MNIST digits, and each MNIST digit in the grid has four randomly sampled attributes, i.e., color ∈ {red, blue, green, purple, brown}, bgcolor ∈ {cyan, yellow, white, silver, salmon}, number ∈ {x | 0 ≤ x ≤ 9} and style ∈ {flat, stroke}, as illustrated in Figure 1. Given an image generated from MNIST Dialog, we automatically generate questions and answers about a subset of the digits in the grid, focusing on visual reference resolution. There are two types of questions: (i) counting questions and (ii) attribute questions that refer to a single target digit. During question generation, the target digits of a question are selected based on a subset of the previous targets, referred to by ambiguous expressions, as shown in Figure 1. For ease of evaluation, we generate a single-word answer rather than a sentence for each question, and there are a total of 38 possible answers (1/38 chance performance).
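As an illustration of this generation procedure, here is a minimal sketch that samples one such grid specification. The attribute vocabularies are those listed above, while the flat per-cell sampling (and the omission of digit appearance) is a simplifying assumption of ours, not the exact generator behind MNIST Dialog.

```python
import random

COLORS = ["red", "blue", "green", "purple", "brown"]
BGCOLORS = ["cyan", "yellow", "white", "silver", "salmon"]
STYLES = ["flat", "stroke"]

def sample_grid(size=4, seed=None):
    """Sample a size x size grid of attributed digits (one per cell)."""
    rng = random.Random(seed)
    grid = []
    for row in range(size):
        for col in range(size):
            grid.append({
                "row": row, "col": col,
                "number": rng.randrange(10),
                "color": rng.choice(COLORS),
                "bgcolor": rng.choice(BGCOLORS),
                "style": rng.choice(STYLES),
            })
    return grid

grid = sample_grid(seed=0)
# e.g. a counting question grounded in the sampled grid:
n_red = sum(d["color"] == "red" for d in grid)
print(f"How many red digits are there? -> {n_red}")
```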
We generated 30K / 10K / 10K images for training / validation / testing, respectively, and three ten-question dialogs for each image. The dimensionalities of the word embeddings and of the hidden states in the LSTMs are set to 32 and 64, respectively. All LSTMs are single-layered. Since answers are single words, the answer embedding RNN is replaced with a word embedding layer in both the history embedding module and the memory key generation module. The image feature extraction module is formed by stacking four 3 × 3 convolutional layers with a subsequent 2 × 2 pooling layer. The first two convolutional layers have 32 channels, while there are 64 channels in the last two. Finally, we use 512 weight candidates to hash the dynamic parameters of the attention combination process. The entire network is trained end-to-end by minimizing the cross entropy of the predicted answer distribution at every step of the dialogs.

We compare our model (AMEM) with three different groups of baselines. The simple baselines show the results of using statistical priors, where answers are predicted using the image (I) or the question (Q) only. We also implement the late fusion model (LF), the hierarchical recurrent encoder with attention (HREA) and the memory network encoder (MN) introduced in [1]. Additionally, an attention-based model (ATT), which directly uses the tentative attention without memory access, is implemented as a strong baseline. For some models, two variants are implemented: one using history embeddings and one not. These variations give us insights into the effect of using history contexts and are distinguished by +H. Finally, two further versions of the proposed model, orthogonal to the previous ones, are implemented with and without the sequential preference in memory addressing (see above), which is denoted by +SEQ.

Results  Figure 4 shows the results on MNIST Dialog. The answer prediction accuracy over all questions of the dialogs is presented in the table on the left. It is noticeable that the models using attention mechanisms (AMEM and ATT) significantly outperform the previous baseline models (LF, HRE and MN) introduced in [1], while these baselines still perform better than the simple baseline models. This signifies the importance of attention in answering questions, consistent with previous works [10-14].

3 The dataset is available at http://cvlab.postech.ac.kr/research/attmem

Figure 5: Memory addressing coefficients β_{t,τ} with and without sequential preference, plotted against the relative time distance (t - τ). Both models put large weights on recent elements (smaller relative time distance) to deal with the sequential structure of dialogs.

Figure 6: Characteristics of the dynamically predicted weights for attention combination. Dynamic weights are computed from 1,500 random samples at dialog step 3 and plotted by t-SNE. Each panel presents clusters formed by different semantics of the questions. (left) Clusters generated by different question types (attribute vs. counting). (middle) Subclusters formed by the type of spatial relationship (no_relation, left, right, below, above) in attribute questions. (right) Subclusters formed by the way targets are specified in counting questions (targets vs. sub_targets); the cluster sub_targets contains questions whose current target digits are included in the targets of the previous question.
Extending ATT to incorporate history embeddings during attention map estimation increases the accuracy by about 17%, resulting in a strong baseline model. However, even the simplest version of the proposed model, which uses neither history embeddings nor addressing with sequential preference, already outperforms this strong baseline by a large margin. Note that this model still has indirect access to the history through the attention memory, although it does not have direct access to the encodings of past question/answer pairs when computing the attention. This signifies that the use of the attention memory is more helpful in resolving the current reference (and computing attention) than the more traditional tentative attention informed by the history encoding. Moreover, the proposed model with history embeddings further increases the accuracy by 1.7%. The proposed model reaches >96% accuracy when the sequential structure of dialogs is taken into account through the sequential preference in memory addressing. We also present the accuracies of the answers at each dialog step for the four attention-based models in Figure 4 (right). Notably, the accuracy of ATT drops very fast as the dialog progresses and reference resolution is needed. Adding history embeddings to the tentative attention calculation somewhat reduces the degradation. The use of the attention memory gives a very significant improvement, particularly at later steps of the dialog, when complex reference resolution is needed.

Parameter Analysis  The learned parameter θ for the sequential preference is consistently negative in all experiments; this means that all models prefer recent elements. A closer look at the addressing coefficients β_t with and without the sequential preference reveals that both variants have a clear preference for recent elements, as depicted in Figure 5. It is interesting that the model without the bias term shows a stronger preference for recent information, yet its final accuracy is lower than that of the version with the bias term. It seems that W^mem without the bias puts too much weight on recent elements, resulting in worse performance. Based on this observation, we learn W^mem and θ jointly to find better coefficients than with W^mem alone.

The dynamically predicted weights form clusters with respect to the semantics of the input questions, as illustrated in Figure 6, where 1,500 random samples at step 3 of the dialogs are visualized using t-SNE. In Figure 6 (left), the two question types (attribute and counting) create distinct clusters. Each of
these, in turn, contains multiple sub-clusters formed by other semantics, as presented in Figure 6 (middle) and (right). In the cluster of attribute questions, sub-clusters are mainly formed by the type of spatial relationship used to specify the target digit (e.g., #3 in Figure 1), whereas sub-clusters in counting questions are based on whether the target digits of the question are selected from the targets of the previous question or not (e.g., #1 vs. #2 in Figure 1).

Figure 7: Qualitative analysis on MNIST Dialog. The dialog history consists of "Are there any 9's in the image?" (three), "How many digits in a yellow background are there among them?" (one), "What is the color of the digit?" (red), "What is the color of the digit at the right of it?" (blue), and "What is the style of the blue digit?" (flat); the current question is "What is the color of the digit at the right of it?". Given the input image and this series of questions with their visual grounding history, we present the memory-retrieved and final attentions for the current question in the second and third columns, respectively; the proposed network correctly attends to the target reference and predicts the correct answer (violet). The last two columns present a manually modified retrieved attention and the final attention obtained from the modified attention, respectively (predicted answer: green). The experiment shows the consistency of the transformation between attentions and the semantic interpretability of our model.

Figure 7 illustrates qualitative results. Based on the history of attentions stored in the attention memory, the proposed model retrieves the previous reference, as presented in the second column. The final attention for the current question is then calculated by manipulating the retrieved attention based on the current question. For example, the current question in Figure 7 refers to the digit to the right of the previous reference, and the model identifies the target reference successfully (column 3), as the previous reference (column 2) is given accurately by the retrieved attention. To investigate the consistency with respect to attention manipulation, we move the region of the retrieved attention manually (column 4) and observe the final attention map calculated from the modified attention (column 5). It is clear that our reference resolution procedure works consistently even with the manipulated attention and responds to the question accordingly. This shows a level of semantic interpretability of our model. See more qualitative results in Section A of our supplementary material.

4.2 Visual Dialog (VisDial) Dataset

Experimental Setting  In the VisDial [1] dataset^4, the dialogs are collected from MS-COCO [37] images and their captions. Each dialog is composed of an image, a caption, and a sequence of ten QA pairs. Unlike in MNIST Dialog, the answers to questions in VisDial are free-form text. Since each dialog always starts with the initial caption annotated in MS-COCO, the initial history is always constructed from the caption. The dataset provides 100 answer candidates for each question, and the accuracy on a question is measured by the rank of the matching ground-truth answer. Note that this dataset is less focused on visual reference resolution and contains fewer ambiguous expressions compared to MNIST Dialog: we estimate the portion of questions containing ambiguous expressions to be 94% and 52% in MNIST Dialog and VisDial, respectively^5.

4 We use the recently released VisDial v0.9 with the benchmark splits [1].
5 We consider pronouns and definite noun phrases as ambiguous expressions and count them using a POS tagger in NLTK (http://www.nltk.org/).

While we compare our model with the various encoders introduced in [1], we fix the decoder to a discriminative decoder that directly ranks the answer candidates through their embeddings. Our baselines include three visual dialog models, i.e., the late fusion model (LF), the hierarchical recurrent encoder (HRE) and the memory network encoder (MN), and two attention-based VQA models (SAN and
HieCoAtt) with the same decoder. The three visual dialog baselines are trained with different valid combinations of inputs, which are denoted by Q, I and H in the model names. We perform the same ablation study of our model as for the MNIST Dialog dataset. The conv5 layer of VGG-16 [38], trained on ImageNet [39], is used to extract the image feature map. Similar to [1], all word embedding layers share their weights, and an LSTM is used for embedding the current question. For the models with history embeddings, we use additional LSTMs for the questions, the answers, and the captions in the history. Based on our empirical observations, we share the parameters of the question and caption LSTMs while having a separate set of weights for the answer LSTM. Every sentence-embedding LSTM is two-layered, but the history LSTM of the HRNN has a single layer. We employ 64-dimensional word embedding vectors and a 128-dimensional hidden state for every LSTM. Note that the dimensionalities of our word embeddings and hidden state representations in the LSTMs are significantly lower than those of the baselines (300 and 512, respectively). We train the network using Adam [40] with an initial learning rate of 0.001 and a weight decay factor of 0.0001. Note that we do not update the feature extraction network based on VGG-16.

Table 1: Experimental results on VisDial. We show the number of parameters, mean reciprocal rank (MRR), recall@k and mean rank (MR). +H and ATT indicate the use of history embeddings in the prediction and of an attention mechanism, respectively.

    Model              +H    ATT   # of params     MRR     R@1    R@5    R@10   MR
    Answer prior [1]   -     -     n/a             0.3735  23.55  48.52  53.23  26.50
    LF-Q [1]           -     -     8.3 M (3.6x)    0.5508  41.24  70.45  79.83  7.08
    LF-QH [1]          yes   -     12.4 M (5.4x)   0.5578  41.75  71.45  80.94  6.74
    LF-QI [1]          -     -     10.4 M (4.6x)   0.5759  43.33  74.27  83.68  5.87
    LF-QIH [1]         yes   -     14.5 M (6.3x)   0.5807  43.82  74.68  84.07  5.78
    HRE-QH [1]         yes   -     15.0 M (6.5x)   0.5695  42.70  73.25  82.97  6.11
    HRE-QIH [1]        yes   -     16.8 M (7.3x)   0.5846  44.67  74.50  84.22  5.72
    HREA-QIH [1]       yes   -     16.8 M (7.3x)   0.5868  44.82  74.81  84.36  5.66
    MN-QH [1]          yes   -     12.4 M (5.4x)   0.5849  44.03  75.26  84.49  5.68
    MN-QIH [1]         yes   -     14.7 M (6.4x)   0.5965  45.55  76.22  85.37  5.46
    SAN-QI [10]        -     yes   n/a             0.5764  43.44  74.26  83.72  5.88
    HieCoAtt-QI [15]   -     yes   n/a             0.5788  43.51  74.49  83.96  5.84
    AMEM-QI            -     yes   1.7 M (0.7x)    0.6196  48.24  78.33  87.11  4.92
    AMEM-QIH           yes   yes   2.3 M (1.0x)    0.6192  48.05  78.39  87.12  4.88
    AMEM+SEQ-QI        -     yes   1.7 M (0.7x)    0.6227  48.53  78.66  87.43  4.86
    AMEM+SEQ-QIH       yes   yes   2.3 M (1.0x)    0.6210  48.40  78.39  87.12  4.92

Results  Table 1 presents the mean reciprocal rank (MRR), mean rank (MR), and recall@k of the models. Note that lower is better for MR, while higher is better for all other evaluation metrics. All variants of the proposed model outperform the baselines on all metrics, achieving state-of-the-art performance. As observed in the experiments on MNIST Dialog, the models with sequential preference (+SEQ) show better performance than the ones without it. However, we do not see additional benefits from using a history embedding on VisDial, in contrast to MNIST Dialog. The proposed algorithm also has an advantage over existing methods in terms of the number of parameters. Our full model requires only approximately 15% of the parameters of the best baseline model, not counting the parameters of the common feature extraction module based on VGG-16. On VisDial, the attention-based VQA techniques with (near) state-of-the-art VQA performance are not as good as the baseline models of [1], because they treat each question independently. The proposed model improves the performance on VisDial by facilitating the visual reference resolution process. Qualitative results for the VisDial dataset are presented in Section B of the supplementary material.
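For completeness, the ranking metrics reported in Table 1 are simple functions of the rank of the ground-truth answer among the 100 candidates; a minimal sketch follows (obtaining those ranks from the discriminative decoder's scores is assumed to happen upstream):

```python
import numpy as np

def ranking_metrics(gt_ranks, ks=(1, 5, 10)):
    """gt_ranks: 1-based rank of the ground-truth answer for each question."""
    r = np.asarray(gt_ranks, dtype=float)
    out = {"MRR": np.mean(1.0 / r), "MR": r.mean()}
    for k in ks:
        out[f"R@{k}"] = np.mean(r <= k)   # recall@k: fraction ranked in top k
    return out

# toy usage: three questions whose ground truth ranked 1st, 3rd, and 12th
print(ranking_metrics([1, 3, 12]))
```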
5 Conclusion

We proposed a novel algorithm for answering questions in visual dialog. Our algorithm resolves visual references in dialog questions based on a new attention mechanism with an attention memory, where the model indirectly resolves coreferences of expressions through the attention retrieval process. We employ the dynamic parameter prediction technique to adaptively combine the tentative and retrieved attentions based on the question. We tested on both synthetic and real datasets and demonstrated improvements.

Acknowledgments

This work was supported in part by the IITP grant funded by the Korea government (MSIT) [2017-0-01778, Development of Explainable Human-level Deep Machine Learning Inference Framework; 2017-0-01780, The Technology Development for Event Recognition/Relational Reasoning and Learning Knowledge based System for Video Understanding; 2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion].

References

[1] Das, A., Kottur, S., Gupta, K., Singh, A., Yadav, D., Moura, J.M., Parikh, D., Batra, D.: Visual Dialog. In CVPR. (2017)
[2] Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: A neural image caption generator. In CVPR. (2015)
[3] Xu, K., Ba, J., Kiros, R., Courville, A., Salakhutdinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In ICML. (2015)
[4] Mun, J., Cho, M., Han, B.: Text-guided attention model for image captioning. In AAAI. (2016)
[5] Huang, D.A., Lim, J.J., Fei-Fei, L., Niebles, J.C.: Unsupervised visual-linguistic reference resolution in instructional videos. In CVPR. (2017)
[6] Rohrbach, A., Rohrbach, M., Hu, R., Darrell, T., Schiele, B.: Grounding of textual phrases in images by reconstruction. In ECCV. (2016)
[7] Mansimov, E., Parisotto, E., Ba, J., Salakhutdinov, R.: Generating images from captions with attention. In ICLR. (2016)
[8] Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., Lee, H.: Generative adversarial text to image synthesis. In ICML. (2016)
[9] Noh, H., Seo, P.H., Han, B.: Image question answering using convolutional neural network with dynamic parameter prediction. In CVPR. (2016)
[10] Yang, Z., He, X., Gao, J., Deng, L., Smola, A.: Stacked attention networks for image question answering. In CVPR. (2016)
[11] Xu, H., Saenko, K.: Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV. (2016)
[12] Andreas, J., Rohrbach, M., Darrell, T., Klein, D.: Deep compositional question answering with neural module networks. In CVPR. (2016)
[13] Kim, J.H., On, K.W., Lim, W., Kim, J., Ha, J.W., Zhang, B.T.: Hadamard product for low-rank bilinear pooling. In ICLR. (2017)
[14] Andreas, J., Rohrbach, M., Darrell, T., Klein, D.: Neural module networks. In CVPR. (2016)
[15] Lu, J., Yang, J., Batra, D., Parikh, D.: Hierarchical question-image co-attention for visual question answering. In NIPS. (2016)
[16] Fukui, A., Park, D.H., Yang, D., Rohrbach, A., Darrell, T., Rohrbach, M.: Multimodal compact bilinear pooling for visual question answering and visual grounding. In EMNLP. (2016)
[17] Noh, H., Han, B.: Training recurrent answering units with joint loss minimization for VQA. arXiv preprint arXiv:1606.03647 (2016)
[18] Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: Visual Question Answering. In ICCV.
(2015)
[19] Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., Parikh, D.: Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In CVPR. (2017)
[20] Zhang, P., Goyal, Y., Summers-Stay, D., Batra, D., Parikh, D.: Yin and Yang: Balancing and answering binary visual questions. In CVPR. (2016)
[21] Malinowski, M., Rohrbach, M., Fritz, M.: Ask your neurons: A neural-based approach to answering questions about images. In ICCV. (2015)
[22] Mun, J., Seo, P.H., Jung, I., Han, B.: MarioQA: Answering questions by watching gameplay videos. arXiv preprint arXiv:1612.01669 (2016)
[23] Zhu, L., Xu, Z., Yang, Y., Hauptmann, A.G.: Uncovering temporal context for video question and answering. arXiv preprint arXiv:1511.04670 (2015)
[24] Tapaswi, M., Zhu, Y., Stiefelhagen, R., Torralba, A., Urtasun, R., Fidler, S.: MovieQA: Understanding stories in movies through question-answering. In CVPR. (2016)
[25] de Vries, H., Strub, F., Chandar, S., Pietquin, O., Larochelle, H., Courville, A.: GuessWhat?! Visual object discovery through multi-modal dialogue. In CVPR. (2017)
[26] Das, A., Kottur, S., Moura, J.M., Lee, S., Batra, D.: Learning cooperative visual dialog agents with deep reinforcement learning. arXiv preprint arXiv:1703.06585 (2017)
[27] Strub, F., de Vries, H., Mary, J., Piot, B., Courville, A., Pietquin, O.: End-to-end optimization of goal-driven and visually grounded dialogue systems. arXiv preprint arXiv:1703.05423 (2017)
[28] Seo, P.H., Lin, Z., Cohen, S., Shen, X., Han, B.: Progressive attention networks for visual attribute prediction. arXiv preprint arXiv:1606.02393 (2016)
[29] Sukhbaatar, S., Weston, J., Fergus, R., et al.: End-to-end memory networks. In NIPS. (2015)
[30] Weston, J., Chopra, S., Bordes, A.: Memory networks. In ICLR. (2015)
[31] Kumar, A., Irsoy, O., Ondruska, P., Iyyer, M., Bradbury, J., Gulrajani, I., Zhong, V., Paulus, R., Socher, R.: Ask me anything: Dynamic memory networks for natural language processing. In ICML. (2016)
[32] Xiong, C., Merity, S., Socher, R.: Dynamic memory networks for visual and textual question answering. In ICML. (2016)
[33] Miller, A., Fisch, A., Dodge, J., Karimi, A.H., Bordes, A., Weston, J.: Key-value memory networks for directly reading documents. In EMNLP. (2016)
[34] Clark, K., Manning, C.D.: Deep reinforcement learning for mention-ranking coreference models. In EMNLP. (2016)
[35] Clark, K., Manning, C.D.: Improving coreference resolution by learning entity-level distributed representations. In ACL. (2016)
[36] Clark, K., Manning, C.D.: Entity-centric coreference resolution with model stacking. In ACL. (2015)
[37] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In ECCV. (2014)
[38] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In ICLR. (2015)
[39] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In CVPR. (2009)
[40] Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Joint distribution optimal transportation for domain adaptation

Nicolas Courty*, Université de Bretagne Sud, IRISA, UMR 6074, CNRS, [email protected]
Rémi Flamary*, Université Côte d'Azur, Lagrange, UMR 7293, CNRS, OCA, [email protected]
Amaury Habrard, Univ Lyon, UJM-Saint-Etienne, CNRS, Lab. Hubert Curien UMR 5516, F-42023, [email protected]
Alain Rakotomamonjy, Normandie Université, Université de Rouen, LITIS EA 4108, [email protected]

* Both authors contributed equally.

Abstract

This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample, by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a nonlinear transformation between the joint feature/label space distributions of the two domains, Ps and Pt, that can be estimated with optimal transport. We propose a solution to this problem that recovers an estimated target P_t^f = (X, f(X)) by optimizing simultaneously the optimal coupling and f. We show that our method corresponds to the minimization of a bound on the target error, and we provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of the class of hypotheses and of loss functions, is demonstrated with real-world classification and regression problems, for which we reach or surpass state-of-the-art results.

1 Introduction

In the context of supervised learning, one generally assumes that the test data is a realization of the same process that generated the learning set. Yet, in many practical applications this is often not the case, since several factors can slightly alter the process. The particular case of visual adaptation [1] in computer vision is a good example: given a new dataset of images without any label, one may want to exploit a different annotated dataset, provided that they share sufficient common information and labels. However, the generating process can differ in several aspects, such as the conditions and devices used for acquisition, different pre-processings, different compressions, etc. Domain adaptation techniques aim at alleviating this issue by transferring knowledge between domains [2]. We propose in this paper a principled and theoretically founded way of tackling this problem.

The domain adaptation (DA) problem is not new and has received a lot of attention during the past ten years. State-of-the-art methods mainly differ by the assumptions made on the change in the data distributions. Under the covariate shift assumption, the differences between the domains are characterized by a change in the feature distributions P(X), while the conditional distributions P(Y|X) remain unchanged (X and Y being respectively the instance and label spaces). Importance re-weighting can be used to learn a new classifier (e.g. [3]), provided that the overlap of the distributions is large enough. Kernel alignment [4] has also been considered for the same purpose. Other types of methods, denoted as Invariant Components by Gong and co-authors [5], look for a transformation T such that the new representations of the input data are matching, i.e. Ps(T(X)) = Pt(T(X)). These methods then differ by: i) the considered class of transformations, which are generally defined as projections (e.g.
[6, 7, 8, 9, 5]), affine transformations [4], or non-linear transformations expressed by neural networks [10, 11]; ii) the types of divergences used to compare Ps(T(X)) and Pt(T(X)), such as the Kullback-Leibler divergence [12] or the Maximum Mean Discrepancy [9, 5]. Those divergences usually require the distributions to share a common support in order to be defined. A particular case is found in the use of optimal transport, introduced for domain adaptation by [13, 14]. T is then defined as a push-forward operator such that Ps(X) = Pt(T(X)) and that minimizes a global transportation effort, or cost, between the distributions. The associated divergence is the so-called Wasserstein metric, which has a natural Lagrangian formulation and avoids the estimation of continuous distributions by means of kernels. As such, it also alleviates the need for a shared support.

The methods discussed above implicitly assume that the conditional distributions are unchanged by T, i.e. Ps(Y|T(X)) ≈ Pt(Y|T(X)), but there is no clear reason for this assumption to hold. A more general approach is to adapt both the marginal feature and the conditional distributions by minimizing a global divergence between them. However, this task is usually hard, since no label is available in the target domain and therefore no empirical version of Pt(Y|X) can be used. This has been achieved by restricting to specific classes of transformations, such as projections [9, 5].

Contributions and outline.  In this work we propose a novel framework for unsupervised domain adaptation between joint distributions. We propose to find a function f that predicts an output value given an input x ∈ Ω and that minimizes the optimal transport loss between the joint source distribution Ps and an estimated target joint distribution P_t^f = (X, f(X)) depending on f (detailed in Section 2). The method is denoted JDOT, for "Joint Distribution Optimal Transport", in the remainder. We show that the resulting optimization problem amounts to minimizing a bound on the target error of f (Section 3) and propose an efficient algorithm to solve it (Section 4). Our approach is very general and does not require learning an explicit transformation, as it directly solves for the best function. We show that it can handle both regression and classification problems with a large class of functions f, including kernel machines and neural networks. We finally provide several numerical experiments on real regression and classification problems, showing that JDOT matches or surpasses the state of the art (Section 5).

2 Joint distribution Optimal Transport

Let Ω ⊂ R^d be a compact input measurable space of dimension d and C the set of labels. P(Ω) denotes the set of all probability measures over Ω. The standard learning paradigm classically assumes the existence of a set of data Xs = {x_i^s}_{i=1}^{Ns} associated with a set of class labels Ys = {y_i^s}_{i=1}^{Ns}, y_i^s ∈ C (the learning set), and a data set with unknown labels Xt = {x_i^t}_{i=1}^{Nt} (the testing set). In order to determine the set of labels Yt associated with Xt, one usually relies on an empirical estimate of the joint probability distribution P(X, Y) ∈ P(Ω × C) from (Xs, Ys), and on the assumption that Xs and Xt are drawn from the same distribution μ ∈ P(Ω).

In the considered adaptation problem, one assumes the existence of two distinct joint probability distributions Ps(X, Y) and Pt(X, Y), which correspond respectively to two different source and target domains. We will write μs and μt their respective marginal distributions over X.

2.1 Optimal transport in domain adaptation

The Monge problem seeks a map T0 : Ω → Ω that pushes μs toward μt, defined as:

    T0 = argmin_T ∫_Ω d(x, T(x)) dμs(x),   s.t.   T#μs = μt,        (1)

where T#μs is the image measure of μs by T, verifying T#μs(A) = μs(T^{-1}(A)) for every Borel subset A ⊂ Ω, and d : Ω × Ω → R^+ is a metric. In the remainder, we will always consider, without further notice, the case where d is the squared Euclidean metric. When T0 exists, it is called an optimal transport map, but this is not always the case (e.g. assume that μs is defined by one Dirac measure and
2.1 Optimal transport in domain adaptation The Monge problem is seeking for a map T0 : ? ? ? that pushes ?s toward ?t defined as: Z T0 = argmin d(x, T (x))d?s (x), s.t. T #?s = ?t , T ? where T #?s the image measure of ?s by T , verifying: T #?s (A) = ?t (T ?1 (A)), ? Borel subset A ? ?, + (1) and d : ? ? ? ? R is a metric. In the remainder, we will always consider without further notification the case where d is the squared Euclidean metric. When T0 exists, it is called an optimal transport map, but it is not always the case (e.g. assume that ?s is defined by one Dirac measure and 2 ?t by two). A relaxed version of this problem has been proposed by Kantorovitch [15], who rather seeks for a transport plan (or equivalently a joint probability distribution) ? ? P(? ? ?) such that: Z ? 0 = argmin d(x1 , x2 )d?(x1 , x2 ), (2) ???(?s ,?t ) ??? where ?(?s , ?t ) = {? ? P(? ? ?)|p+ #? = ?s , p? #? = ?t } and p+ and p? denotes the two marginal projections of ? ? ? to ?. Minimizers of this problem are called optimal transport plans. Should ? 0 be of the form (id ? T )#?s , then the solution to Kantorovich and Monge problems coincide. As such the Kantorovich relaxation can be seen as a generalization of the Monge problem, with less constraints on the existence and uniqueness of solutions [16]. Optimal transport has been used in DA as a principled way to bring the source and target distribution closer [13, 14, 17], by seeking for a transport plan between the empirical distributions of Xs and Xt and interpolating Xs thanks to a barycentric mapping [14], or by estimating a mapping which is not the solution of Monge problem but allows to map unseen samples [17]. Moreover, they show that better constraining the structure of ? through entropic or classwise regularization terms helps in achieving better empirical results. 2.2 Joint distribution optimal transport loss The main idea of this work is is to handle a change in both marginal and conditional distributions. As such, we are looking for a transformation T that will align directly the joint distributions Ps and Pt . Following the Kantovorich formulation of (2), T will be implicitly expressed through a coupling between both joint distributions as: Z ? 0 = argmin D(x1 , y1 ; x2 , y2 )d?(x1 , y1 ; x2 , y2 ), (3) ???(Ps ,Pt ) (??C)2 where D(x1 , y1 ; x2 , y2 ) = ?d(x1 , x2 ) + L(y1 , y2 ) is a joint cost measure combining both the distances between the samples and a loss function L measuring the discrepancy between y1 and y2 . While this joint cost is specific (separable), we leave for future work the analysis of generic joint cost function. Putting it in words, matching close source and target samples with similar labels costs few. ? is a positive parameter which balances the metric in the feature space and the loss. As such, when ? ? +?, this cost is dominated by the metric in the input feature space, and the solution of the coupling problem is the same as in [14]. It can be shown that a minimizer to (3) always exists and is unique provided that D(?) is lower semi-continuous (see [18], Theorem 4.1), which is the case when d(?) is a norm and for every usual loss functions [19]. In the unsupervised DA problem, one does not have access to labels in the target domain, and as such it is not possible to find the optimal coupling. Since our goal is to find a function on the target domain f : ? ? C, we suggest to replace y2 by a proxy f (x2 ). 
This leads to the definition of the following joint distribution, which uses a given function f as a proxy for y:

    P_t^f = (x, f(x))_{x∼μt}.                                        (4)

In practice we consider the empirical versions of Ps and P_t^f, i.e. P̂s = (1/Ns) Σ_{i=1}^{Ns} δ_{x_i^s, y_i^s} and P̂_t^f = (1/Nt) Σ_{i=1}^{Nt} δ_{x_i^t, f(x_i^t)}. γ is then a matrix belonging to Δ, the transportation polytope of non-negative matrices between uniform distributions. Since our goal is to estimate a prediction f on the target domain, we propose to find the one that produces predictions matching the source labels best through the aligned target instances of the transport plan. For this purpose, we propose to solve the following problem for JDOT:

    min_{f, γ∈Δ} Σ_{ij} D(x_i^s, y_i^s; x_j^t, f(x_j^t)) γ_{ij}  ≡  min_f W1(P̂s, P̂_t^f),   (5)

where W1 is the 1-Wasserstein distance for the loss D(x1, y1; x2, y2) = α d(x1, x2) + L(y1, y2). We will make clear in the next section that the function f we retrieve is theoretically sound with respect to the target error. Note that in practice we add a regularization term on f in order to avoid overfitting, as discussed in Section 4. An illustration of JDOT for a regression problem is given in Figure 1. In this figure, we have very different joint and marginal distributions, but we want to illustrate that the OT matrix γ obtained using the true empirical distribution Pt is very similar to the one obtained with the proxy P_t^f, which leads to a very good model for JDOT.

Figure 1: Illustration of JDOT on a 1D regression problem. (left) Source and target empirical distributions and marginals. (middle left) Source and target models. (middle right) OT matrices on the empirical joint distributions and with the JDOT proxy joint distribution. (right) Estimated prediction function f.

Choice of α.  This is an important parameter, balancing the alignment of the feature space against that of the labels. A natural choice of α is obtained by normalizing the range of values of d(x_i^s, x_j^t), i.e. α = 1 / max_{i,j} d(x_i^s, x_j^t). In the numerical experiment section, we show that this setting performs very well in two out of three experiments. However, in some cases, better performance is obtained by cross-validating this parameter. Also note that α is strongly linked to the smoothness of the loss L and of the optimal labelling functions, and can be seen as a Lipschitz constant in the bound of Theorem 3.1.

Relation to other optimal transport based DA methods.  Previous DA methods based on optimal transport [14, 17] differ not only in the nature of the considered distributions, but also in the way the optimal plan is used to find f. They learn a complex mapping between the source and target distributions, while the objective is only to estimate a prediction function f on the target. To do so, they rely on a barycentric mapping that only approximately minimizes the Wasserstein distance between the distributions. As discussed in Section 4, JDOT instead uses the optimal plan to propagate and fuse the labels from the source to the target. Not only are the performances enhanced, but we also show in the next section how this approach is more theoretically well grounded.

Relation to Transport Lp distances.
Recently, Thorpe and co-authors introduced the Transportation Lp distance [20]. Their objective is to compute a meaningful distance between multi-dimensional signals. Interestingly, their distance can be seen as optimal transport between two distributions of the form (4), where the functions are known and the label loss L is chosen as an Lp distance. While their approach is inspirational, JDOT is different both in its formulation, where we introduce a more general class of losses L, and in its objective, since our goal is to estimate the target function f, which is not known a priori. Finally, we show theoretically and empirically that our formulation successfully addresses the problem of domain adaptation.

3 A Bound on the Target Error

Let f be a hypothesis function from a given class of hypotheses H. We define the expected loss in the target domain as err_T(f) := E_{(x,y)∼Pt} L(y, f(x)). We define err_S(f) similarly for the source domain. We assume the loss function L to be bounded, symmetric, k-Lipschitz, and satisfying the triangle inequality. To provide guarantees on our method, we consider an adaptation of the notion of probabilistic Lipschitzness introduced in [21, 22], which assumes that two close instances must have the same labels with high probability. It corresponds to a relaxation of classic Lipschitzness, allowing one to model the marginal-label relatedness found, for instance, in nearest-neighbor classification, linear classification, or under the cluster assumption. We propose an extension of this notion to the domain adaptation context, requiring that a labeling function comply with two close instances across the two domains w.r.t. a coupling Π.
The remaining two terms involving f ? correspond to the joint error minimizer illustrating that domain adaptation can work only if we can predict well in both domains, similarly to existing results in the literature [23, 24]. If the last terms are small enough, adaptation is possible if we are able to align well Ps and Ptf , provided that f ? and ?? verify the PTL. Finally, note that ? = k? and tuning this parameter is thus actually related to finding the Lipschitz constants of the problem. 4 Learning with Joint Distribution OT In this section, we provide some details about the JDOT?s optimization problem given in Equation (5) and discuss algorithms for its resolution. We will assume that the function space H to which f belongs is either a RKHS or a function space parametrized by some parameters w ? Rp . This framework encompasses linear models, neural networks, and kernel methods. Accordingly, we are going to define a regularization term ?(f ) on f . Depending on how H is defined, ?(f ) is either a non-decreasing function of the squared-norm induced by the RKHS (so that the representer theorem is applicable) or a squared-norm on the vector parameter. We will further assume that ?(f ) is continuously differentiable. As discussed above, f is to be learned according to the following optimization problem min f ?H,??? X  ? i,j ?d(xsi , xtj ) + L(yis , f (xtj )) + ??(f ) (6) i,j where the loss function L is continuous and differentiable with respects to its second variable. Note that while the above problem does not involve any regularization term on the coupling matrix ?, it is essentially for the sake of simplicity and readability. Regularizers like entropic regularization [25], which is relevant when the number of samples is very large, can still be used without significant change to the algorithmic framework. Optimization procedure. According to the above hypotheses on f and L, Problem (6) is smooth and the constraints are separable according to f and ?. Hence, a natural way to solve the problem (6) is to rely on alternate optimization w.r.t. both parameters ? and f . This algorithm well-known as Block Coordinate Descent (BCD) or Gauss-Seidel method (the pseudo code of the algorithm is given in appendix). Block optimization steps are discussed with further details in the following. 5 Solving with fixed f boils down to a classical OT problem with a loss matrix C such that Ci,j = ?d(xsi , xtj ) + L(yis , f (xtj )). We can use classical OT solvers such as the network simplex algorithm, but other strategies can be considered, such as regularized OT [25] or stochastic versions [26]. The optimization problem with fixed ? leads to a new learning problem expressed as X min ? i,j L(yis , f (xtj )) + ??(f ) f ?H (7) i,j Note how the data fitting term elegantly and naturally encodes the transfer of source labels yis through estimated labels of test samples with a weighting depending on the optimal transport matrix. However, this comes at the price of having a quadratic number Ns Nt of terms, which can be considered as computationally expensive. We will see in the sequel that we can benefit from the structure of the chosen loss to greatly reduce its complexity. In addition, we emphasize that when H is a RKHS, owing to kernel trick and the representer theorem, problem (7) can be re-expressed as an optimization problem with Nt number of parameters all belonging to R. Let us now discuss briefly the convergence of the proposed algorithm. 
Let us now briefly discuss the convergence of the proposed algorithm. Owing to the 2-block coordinate descent structure, and since the objective function of Problem (6) is differentiable and the constraint sets on f (or its kernel-trick parameters) and γ are closed, non-empty and convex, the convergence result of Grippo et al. [27] on 2-block Gauss–Seidel methods directly applies. It states that if the sequence {γ^k, f^k} produced by the algorithm has limit points, then every limit point of the sequence is a critical point of Problem (6).

Estimating f for least-squares regression problems. We detail the use of JDOT for the transfer least-squares regression problem, i.e., when L is the squared loss. In this context, when the optimal transport matrix γ is fixed, the learning problem boils down to

    min_{f∈H}  (1/n_t) Σ_j ‖ŷ_j − f(x_j^t)‖² + λ‖f‖²,   (8)

where ŷ_j = n_t Σ_i γ_{i,j} y_i^s is a weighted average of the source label values. Note that this simplification results from the properties of the quadratic loss, and it may not occur for more complex regression losses.

Estimating f for hinge loss classification problems. We now aim at estimating a multiclass classifier with a one-against-all strategy. We suppose that the data-fitting term is the binary squared hinge loss of the form L(y, f(x)) = max(0, 1 − y f(x))². In a one-against-all strategy one often uses binary matrices P^s such that P^s_{i,k} = 1 if sample i is of class k and P^s_{i,k} = 0 otherwise. Denote by f_k ∈ H the decision function related to the k-vs-all problem. The learning problem (7) can now be expressed as

    min_{f_k∈H}  Σ_{j,k} [ P̂_{j,k} L(1, f_k(x_j^t)) + (1 − P̂_{j,k}) L(−1, f_k(x_j^t)) ] + λ Σ_k ‖f_k‖²,   (9)

where P̂ = N_t γ^⊤ P^s is the transported class proportion matrix. Interestingly, this formulation illustrates that for each target sample, the data-fitting term is a convex combination of hinge losses for a positive and a negative label, with weights in γ. A minimal sketch of the squared-loss update (8), usable as the f-step of the algorithm above, is given below.
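The following sketch is ours; using scikit-learn's kernel ridge regression is an assumption for illustration, not the authors' tooling. It builds the transported labels ŷ of Equation (8) and can serve as the `fit_f` callback in the BCD sketch of Section 4.

```python
# Squared-loss f-step (Eq. 8): form y_hat_j = n_t * sum_i gamma_ij y_i^s
# and solve a kernel ridge regression against these transported labels.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fit_f_squared_loss(xt, gamma, ys, lam=1e-2):
    nt = xt.shape[0]
    y_hat = nt * gamma.T.dot(ys)        # weighted average of source labels
    model = KernelRidge(alpha=lam, kernel="rbf").fit(xt, y_hat)
    return model.predict                # predictor usable as f in the BCD loop
```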
5 Numerical experiments

In this section we evaluate the performance of our method (JDOT) on transfer tasks of classification and regression on real datasets. (An open-source Python implementation of JDOT is available at https://github.com/rflamary/JDOT.)

Caltech-Office classification dataset. This dataset [28] is dedicated to visual adaptation. It contains images from four different domains: Amazon, the Caltech-256 image collection, Webcam and DSLR. Several factors (presence/absence of background, lighting conditions, image quality, etc.) induce a distribution shift between the domains, and it is therefore relevant to consider a domain adaptation task to perform the classification. Following [14], we choose deep learning features to represent the images, extracted from the fully connected 6th layer of the DECAF convolutional neural network [29], pre-trained on ImageNet. The final feature vector is a sparse 4096-dimensional vector.

Table 1: Accuracy on the Caltech-Office dataset. Best value in bold.

Domains           Base    SurK    SA      ARTL    OT-IT   OT-MM   JDOT
caltech→amazon    92.07   91.65   90.50   92.17   89.98   92.59   91.54
caltech→webcam    76.27   77.97   81.02   80.00   80.34   78.98   88.81
caltech→dslr      84.08   82.80   85.99   88.54   78.34   76.43   89.81
amazon→caltech    84.77   84.95   85.13   85.04   85.93   87.36   85.22
amazon→webcam     79.32   81.36   85.42   79.32   74.24   85.08   84.75
amazon→dslr       86.62   87.26   89.17   85.99   77.71   79.62   87.90
webcam→caltech    71.77   71.86   75.78   72.75   84.06   82.99   82.64
webcam→amazon     79.44   78.18   81.42   79.85   89.56   90.50   90.71
webcam→dslr       96.18   95.54   94.90   100.00  99.36   99.36   98.09
dslr→caltech      77.03   76.94   81.75   78.45   85.57   83.35   84.33
dslr→amazon       83.19   82.15   83.19   83.82   90.50   90.50   88.10
dslr→webcam       96.27   92.88   88.47   98.98   96.61   96.61   96.61
Mean              83.92   83.63   85.23   85.41   86.02   86.95   89.04
Mean rank         5.33    5.58    4.00    3.75    3.50    2.83    2.50
p-value           <0.01   <0.01   0.01    0.04    0.25    0.86    —

We compare our method with four other methods: the surrogate kernel approach ([4], denoted SurK); subspace alignment, for its simplicity and good performance on visual adaptation ([8], SA); Adaptation Regularization based Transfer Learning ([30], ARTL); and the two variants of regularized optimal transport [14]: entropy-regularized OT-IT, and the class-wise regularization implemented with the majoration-minimization algorithm, OT-MM, which was shown to give better results in practice than its group-lasso counterpart. The classification is conducted with an SVM with a linear kernel for every method. Its results when learned on the source domain and tested on the target domain are also reported, to serve as a baseline (Base). All the methods have hyper-parameters, which are selected using the reverse cross-validation of Zhong and colleagues [31]. The dimension d for SA is chosen from {1, 4, 7, ..., 31}. The entropy regularization for OT-IT and OT-MM is taken from {10², ..., 10⁵}, 10² being the minimum value for which the Sinkhorn algorithm avoids numerical errors. Finally, the class-regularization parameter of OT-MM is selected from {1, ..., 10⁵} and the α in JDOT from {10⁻⁵, 10⁻⁴, ..., 1}. The classification accuracies for all the methods are reported in Table 1. We can see that JDOT consistently outperforms the baseline (by 5 points on average), indicating that the adaptation is successful in every case. Its mean accuracy is the best, as is its average rank. We conducted a Wilcoxon signed-rank test to assess whether JDOT is statistically better than the other methods, and report the p-values in the table. This test shows that JDOT is statistically better than the considered methods, except for the OT-based ones, which were state of the art on this dataset [14].

Amazon review classification dataset. We now consider the Amazon review dataset [32], which contains online reviews of different products collected on the Amazon website. Reviews are encoded with bag-of-words unigram and bigram features as input.
The problem is to predict positive (more than 3 stars) or negative (3 stars or fewer) ratings of reviews (binary classification). Since different words are employed to qualify the different categories of products, a domain adaptation task can be formulated if one wants to predict positive reviews of a product from labelled reviews of a different product. Following [33, 11], we consider only a subset of four different types of products: books, DVDs, electronics and kitchens. This yields 12 possible adaptation tasks. Each domain contains 2000 labelled samples and approximately 4000 unlabelled ones. We therefore use these unlabelled samples to perform the transfer, and test on the 2000 labelled data. The goal of this experiment is to compare to the state-of-the-art method on this subset, namely the domain adversarial neural network ([11], denoted DANN), and to show the versatility of our method, which can adapt to any type of classifier. The neural network used for all methods in this experiment is a simple 2-layer model with a sigmoid activation function in the hidden layer to promote non-linearity; 50 neurons are used in this hidden layer. For DANN, hyper-parameters are set through the reverse cross-validation proposed in [11], and following the recommendation of the authors the learning rate is set to 10⁻³. In the case of JDOT, we use the heuristic setting α = 1/max_{i,j} d(x_i^s, x_j^t), and as such we do not need any cross-validation. The squared Euclidean norm is used as the metric in feature space, and we test as loss functions both the mean squared error (mse) and the Hinge loss. 10 iterations of the block coordinate descent are performed. For each method, we stop the learning process of the network after 5 epochs. Classification accuracies are presented in Table 2. The neural network (NN), trained on source and tested on target, is also presented as a baseline. JDOT surpasses DANN in 11 out of 12 tasks (all except books→dvd). The Hinge loss is better than mse in 10 out of 12 cases, which is expected given the superiority of the Hinge loss on classification tasks [19].

Table 2: Accuracy on the Amazon review experiment. Maximum value in bold font.

Domains               NN      DANN    JDOT (mse)   JDOT (Hinge)
books→dvd             0.805   0.806   0.794        0.795
books→kitchen         0.768   0.767   0.791        0.794
books→electronics     0.746   0.747   0.778        0.781
dvd→books             0.725   0.747   0.761        0.763
dvd→kitchen           0.760   0.765   0.811        0.821
dvd→electronics       0.732   0.738   0.778        0.788
kitchen→books         0.704   0.718   0.732        0.728
kitchen→dvd           0.723   0.730   0.764        0.765
kitchen→electronics   0.847   0.846   0.844        0.845
electronics→books     0.713   0.718   0.740        0.749
electronics→dvd       0.726   0.726   0.738        0.737
electronics→kitchen   0.855   0.850   0.868        0.872
Mean                  0.759   0.763   0.783        0.787
p-value               0.004   0.006   0.025        —

Wifi localization regression dataset. For the regression task, we use the cross-domain indoor Wifi localization dataset that was proposed by Zhang and co-authors [4], and recently studied in [5]. From a multi-dimensional signal (a collection of signal strengths perceived from several access points), the goal is to locate a device in a hallway, discretized into a grid of 119 squares, by learning a mapping from the signal to the grid element. This translates into a regression problem. As the signals were acquired at different time periods by different devices, a shift can be encountered, calling for an adaptation.

Table 3: Comparison of different methods on the Wifi localization dataset. Maximum value in bold.

Domains    KRR          SurK         DIP          DIP-CC       GeTarS       CTC          CTC-TIP      JDOT
t1→t2      80.84±1.14   90.36±1.22   87.98±2.33   91.30±3.24   86.76±1.91   89.36±1.78   89.22±1.66   93.03±1.24
t1→t3      76.44±2.66   94.97±1.29   84.20±4.29   84.32±4.57   90.62±2.25   94.80±0.87   92.60±4.50   90.06±2.01
t2→t3      67.12±1.28   85.83±1.31   80.58±2.10   81.22±4.31   82.68±3.71   87.92±1.87   89.52±1.14   86.76±1.72
hallway1   60.02±2.60   76.36±2.44   77.48±2.68   76.24±5.14   84.38±1.98   86.98±2.02   86.78±2.31   98.83±0.58
hallway2   49.38±2.30   64.69±0.77   78.54±1.66   77.8±2.70    77.38±2.09   87.74±1.89   87.94±2.07   98.45±0.67
hallway3   48.42±1.32   65.73±1.57   75.10±3.39   73.40±4.06   80.64±1.76   82.02±2.34   81.72±2.25   99.27±0.41
In the remainder, we follow the exact same experimental protocol as in [4, 5] for ease of comparison. Two cases of adaptation are considered: transfer across time periods, for which three periods t1, t2 and t3 are considered, and transfer across devices, where three different devices are used to collect the signals in the same straight-line hallways (hallway1–3), leading to three adaptation tasks in each case. We compare the results of our method with several state-of-the-art methods: kernel ridge regression with an RBF kernel (KRR), the surrogate kernel approach ([4], denoted SurK), domain-invariant projection and its cluster-regularized version ([7], denoted respectively DIP and DIP-CC), generalized target shift ([34], denoted GeTarS), and conditional transferable components, with its target information preservation regularization ([5], denoted respectively CTC and CTC-TIP). As in [4, 5], the hyper-parameters of the competing methods are cross-validated on a small subset of the target domain. In the case of JDOT, we simply set α to the heuristic value α = 1/max_{i,j} d(x_i^s, x_j^t) as discussed previously, and f is estimated with kernel ridge regression. Following [4], accuracy is measured in the following way: a prediction is said to be correct if it falls within a range of three meters in the transfer across periods, and six meters in the transfer across devices. For each experiment, we randomly sample sixty percent of the source and target domains, and report in Table 3 the mean and standard deviation of the accuracies over ten repetitions. For transfer across periods, JDOT performs best in one out of three tasks. For transfer across devices, the superiority of JDOT is clearly established: it reaches an average score above 98%, at least ten points ahead of the best competing method for every task. These extremely good results can be explained by the fact that optimal transport can handle large distribution shifts with which divergences (such as the maximum mean discrepancy used in CTC) or reweighting strategies cannot cope.

6 Discussion and conclusion

We have presented in this paper Joint Distribution Optimal Transport for domain adaptation, a principled way of performing domain adaptation with optimal transport. JDOT assumes the existence of a transfer map that transforms the source domain joint distribution Ps(X, Y) into its target domain equivalent Pt(X, Y). Through this transformation, the alignment of both the feature spaces and the conditional distributions is achieved, allowing us to devise an efficient algorithm that simultaneously optimizes a coupling between Ps and Pt and a prediction function that solves the transfer problem. We also proved that learning with JDOT amounts to minimizing a bound on the target error. We have demonstrated, through experiments on classical real-world benchmark datasets, the superiority of our approach over several state-of-the-art methods, including previous work on optimal-transport-based domain adaptation, domain adversarial neural networks and transfer components, on a variety of tasks including classification and regression. We have also shown the versatility of our method, which can accommodate several types of loss functions (mse, hinge) and classes of hypotheses (including kernel machines and neural networks).
Potential follow-ups of this work include a semi-supervised extension (using the unlabelled examples available in the source domain) and investigating stochastic techniques for solving the adaptation problem efficiently. From a theoretical standpoint, future work includes a deeper study of probabilistic transfer Lipschitzness and the development of guarantees able to take into account the complexity of the hypothesis class and the space of possible transport plans.

Acknowledgements

This work benefited from the support of the project OATMIL ANR-17-CE23-0012 of the French National Research Agency (ANR), the Normandie Projet GRR-DAISI, European funding FEDER DAISI and CNRS funding from the Défi Imag'In. The authors also wish to thank Kai Zhang and Qiaojun Wang for providing the Wifi localization dataset.

References
[1] V. M. Patel, R. Gopalan, R. Li, and R. Chellappa. Visual domain adaptation: an overview of recent advances. IEEE Signal Processing Magazine, 32(3), 2015.
[2] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
[3] M. Sugiyama, S. Nakajima, H. Kashima, P. V. Buenau, and M. Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. In NIPS, 2008.
[4] K. Zhang, V. W. Zheng, Q. Wang, J. T. Kwok, Q. Yang, and I. Marsic. Covariate shift in Hilbert space: A solution via surrogate kernels. In ICML, 2013.
[5] M. Gong, K. Zhang, T. Liu, D. Tao, C. Glymour, and B. Schölkopf. Domain adaptation with conditional transferable components. In ICML, volume 48, pages 2839–2848, 2016.
[6] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.
[7] M. Baktashmotlagh, M. Harandi, B. Lovell, and M. Salzmann. Unsupervised domain adaptation by domain invariant projection. In ICCV, pages 769–776, 2013.
[8] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, 2013.
[9] M. Long, J. Wang, G. Ding, J. Sun, and P. Yu. Transfer joint matching for unsupervised domain adaptation. In CVPR, pages 1410–1417, 2014.
[10] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pages 1180–1189, 2015.
[11] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
[12] S. Si, D. Tao, and B. Geng. Bregman divergence-based regularization for transfer subspace learning. IEEE Transactions on Knowledge and Data Engineering, 22(7):929–942, July 2010.
[13] N. Courty, R. Flamary, and D. Tuia. Domain adaptation with regularized optimal transport. In ECML/PKDD, 2014.
[14] N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
[15] L. Kantorovich. On the translocation of masses. C.R. (Doklady) Acad. Sci. URSS (N.S.), 37:199–201, 1942.
[16] F. Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, NY, 2015.
[17] M. Perrot, N. Courty, R. Flamary, and A. Habrard. Mapping estimation for discrete optimal transport. In NIPS, pages 4197–4205, 2016.
[18] C. Villani. Optimal transport: old and new. Grundlehren der mathematischen Wissenschaften. Springer, 2009.
[19] Lorenzo Rosasco, Ernesto De Vito, Andrea Caponnetto, Michele Piana, and Alessandro Verri. Are loss functions all the same?
Neural Computation, 16(5):1063–1076, 2004.
[20] M. Thorpe, S. Park, S. Kolouri, G. Rohde, and D. Slepcev. A transportation Lp distance for signal analysis. CoRR, abs/1609.08669, 2016.
[21] R. Urner, S. Shalev-Shwartz, and S. Ben-David. Access to unlabeled data can speed up prediction time. In Proceedings of ICML, pages 641–648, 2011.
[22] S. Ben-David, S. Shalev-Shwartz, and R. Urner. Domain adaptation: can quantity compensate for quality? In Proc. of ISAIM, 2012.
[23] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In Proc. of COLT, 2009.
[24] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151–175, 2010.
[25] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In NIPS, 2013.
[26] A. Genevay, M. Cuturi, G. Peyré, and F. Bach. Stochastic optimization for large-scale optimal transport. In NIPS, pages 3432–3440, 2016.
[27] Luigi Grippo and Marco Sciandrone. On the convergence of the block nonlinear Gauss–Seidel method under convex constraints. Operations Research Letters, 26(3):127–136, 2000.
[28] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV, LNCS, pages 213–226, 2010.
[29] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[30] M. Long, J. Wang, G. Ding, S. Jialin Pan, and P. S. Yu. Adaptation regularization: A general framework for transfer learning. IEEE TKDE, 26(7):1076–1089, 2014.
[31] E. Zhong, W. Fan, Q. Yang, O. Verscheure, and J. Ren. Cross validation framework to choose amongst models and datasets for transfer learning. In ECML/PKDD, 2010.
[32] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In Proc. of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120–128, 2006.
[33] M. Chen, Z. Xu, K. Weinberger, and F. Sha. Marginalized denoising autoencoders for domain adaptation. In ICML, 2012.
[34] K. Zhang, M. Gong, and B. Schölkopf. Multi-source domain adaptation: A causal view. In AAAI Conference on Artificial Intelligence, pages 3150–3157, 2015.
Multiresolution Kernel Approximation for Gaussian Process Regression

Yi Ding, Risi Kondor, Jonathan Eskreis-Winkler
Department of Computer Science and Department of Statistics, The University of Chicago, Chicago, IL, 60637
{dingy,risi,eskreiswinkler}@uchicago.edu

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Abstract

Gaussian process regression generally does not scale beyond a few thousand data points without applying some sort of kernel approximation method. Most approximations focus on the high-eigenvalue part of the spectrum of the kernel matrix K, which leads to bad performance when the length scale of the kernel is small. In this paper we introduce Multiresolution Kernel Approximation (MKA), the first true broad-bandwidth kernel approximation algorithm. Important points about MKA are that it is memory efficient, and that it is a direct method, which means that it also makes it easy to approximate K⁻¹ and det(K).

1 Introduction

Gaussian Process (GP) regression, and its frequentist cousin, kernel ridge regression, are such natural and canonical algorithms that they have been reinvented many times by different communities under different names. In machine learning, GPs are considered one of the standard methods of Bayesian nonparametric inference [1]. Meanwhile, the same model, under the name Kriging or Gaussian Random Fields, is the de facto standard for modeling a range of natural phenomena from geophysics to biology [2]. One of the most appealing features of GPs is that, ultimately, the algorithm reduces to "just" having to compute the inverse of a kernel matrix, K. Unfortunately, this also turns out to be the algorithm's Achilles heel, since in the general case, the complexity of inverting a dense n×n matrix scales with O(n³), meaning that when the number of training examples exceeds 10⁴–10⁵, GP inference becomes problematic on virtually any computer. (In the limited case of evaluating a GP with a fixed Gram matrix on a single training set, GP inference reduces to solving a linear system in K, which scales better with n, but may still be problematic when the condition number of K is large.) Over the course of the last 15 years, devising approximations to address this problem has become a burgeoning field. The most common approach is to use one of the so-called Nyström methods [3], which select a small subset {x_{i₁}, ..., x_{i_m}} of the original training data points as "anchors" and approximate K in the form K ≈ K_{·,I} C K_{·,I}^⊤, where K_{·,I} is the submatrix of K consisting of the columns indexed by I = {i₁, ..., i_m}, and C is a matrix such as the pseudo-inverse of K_{I,I}. Nyström methods often work well in practice and have a mature literature offering strong theoretical guarantees. Still, Nyström is inherently a global low rank approximation, and, as pointed out in [4], a priori there is no reason to believe that K should be well approximable by a low rank matrix: for example, in the case of the popular Gaussian kernel k(x, x′) = exp(−(x − x′)²/(2ℓ²)), as ℓ decreases and the kernel becomes more and more "local", the number of significant eigenvalues quickly increases (a small numerical illustration is sketched below). This observation has motivated alternative types of approximations, including local, hierarchical and distributed ones (see Section 2). In certain contexts involving translation invariant kernels yet other strategies may be applicable [5], but these are beyond the scope of the present paper.
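As a quick toy computation (ours, not from the paper), the snippet below counts the eigenvalues of a Gaussian kernel matrix above a fixed relative threshold as ℓ shrinks, illustrating why a fixed low-rank sketch degrades for short length scales.

```python
# The number of "significant" eigenvalues of a Gaussian kernel matrix
# grows quickly as the length scale ell decreases.
import numpy as np

x = np.random.default_rng(0).uniform(size=(500, 1))
sq = (x - x.T) ** 2                         # pairwise squared distances
for ell in (1.0, 0.1, 0.01):
    K = np.exp(-sq / (2 * ell ** 2))
    eigs = np.linalg.eigvalsh(K)[::-1]      # eigenvalues, descending
    print(ell, int((eigs > 1e-6 * eigs[0]).sum()))
```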
In this paper we present a new kernel approximation method, Multiresolution Kernel Approximation (MKA), which is inspired by a combination of ideas from hierarchical matrix decomposition algorithms and multiresolution analysis. Some of the important features of MKA are that (a) it is a broad spectrum algorithm that approximates the entire kernel matrix K, not just its top eigenvectors, and (b) it is a so-called "direct" method, i.e., it yields explicit approximations to K⁻¹ and det(K).

Notations. We define [n] = {1, 2, ..., n}. Given a matrix A and a tuple I = (i₁, ..., i_r), A_{I,·} will denote the submatrix of A formed of the rows indexed by i₁, ..., i_r; similarly, A_{·,J} will denote the submatrix formed of the columns indexed by j₁, ..., j_p, and A_{I,J} will denote the submatrix at the intersection of rows i₁, ..., i_r and columns j₁, ..., j_p. We extend these notations to the case when I and J are sets in the obvious way. If A is a blocked matrix, then ⟦A⟧_{i,j} will denote its (i, j) block.

2 Local vs. global kernel approximation

Recall that a Gaussian Process (GP) on a space X is a prior over functions f : X → R defined by a mean function μ(x) = E[f(x)] and a covariance function k(x, x′) = Cov(f(x), f(x′)). Using the most elementary model y_i = f(x_i) + ε, where ε ∼ N(0, σ²) and σ² is a noise parameter, given training data {(x₁, y₁), ..., (x_n, y_n)}, the posterior is also a GP, with mean μ′(x) = μ(x) + k_x^⊤ (K + σ²I)⁻¹ y, where k_x = (k(x, x₁), ..., k(x, x_n)), y = (y₁, ..., y_n), and covariance

    k′(x, x′) = k(x, x′) − k_x^⊤ (K + σ²I)⁻¹ k_{x′}.   (1)

Thus (here and in the following assuming μ = 0 for simplicity), the maximum a posteriori (MAP) estimate of f is

    f̂(x) = k_x^⊤ (K + σ²I)⁻¹ y.   (2)

Ridge regression, which is the frequentist analog of GP regression, yields the same formula, but regards f̂ as the solution to a regularized risk minimization problem over a Hilbert space H induced by k. We will use "GP" as the generic term to refer to both Bayesian GPs and ridge regression. Letting K′ = K + σ²I, virtually all GP approximation approaches focus on approximating the (augmented) kernel matrix K′ in such a way as to make inverting it, solving linear systems in it, or computing det(K′) easier. For the sake of simplicity, in the following we will actually discuss approximating K, since adding the diagonal term usually doesn't make the problem any more challenging. For reference, a direct computation of (2) is sketched below.
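This is a minimal sketch of ours (the function name is hypothetical), showing the O(n³) baseline that all the approximations in this paper aim to avoid: one Cholesky factorization of K + σ²I, shared by all test points.

```python
# MAP estimate (Eq. 2): f_hat(x) = k_x^T (K + sigma^2 I)^{-1} y.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_posterior_mean(K, y, Kx, sigma2):
    """K: (n, n) train kernel, y: (n,) targets, Kx: (p, n) cross-kernel."""
    chol = cho_factor(K + sigma2 * np.eye(K.shape[0]))
    alpha = cho_solve(chol, y)          # (K + sigma^2 I)^{-1} y, computed once
    return Kx.dot(alpha)                # posterior mean at the p test points
```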
2.1 Global low rank methods

As in other kernel methods, intuitively, K_{i,j} = k(x_i, x_j) encodes the degree of similarity or closeness between the two points x_i and x_j, as it relates to the degree of correlation/similarity between the value of f at x_i and at x_j. Given that k is often conceived of as a smooth, slowly varying function, one very natural idea is to take a smaller set {x_{i₁}, ..., x_{i_m}} of "landmark points" or "pseudo-inputs" and approximate k(x, x′) in terms of the similarity of x to each of the landmarks, the relationship of the landmarks to each other, and the similarity of the landmarks to x′. Mathematically,

    k(x, x′) ≈ Σ_{s=1}^m Σ_{j=1}^m k(x, x_{i_s}) c_{i_s, i_j} k(x_{i_j}, x′),

which, assuming that {x_{i₁}, ..., x_{i_m}} is a subset of the original point set {x₁, ..., x_n}, amounts to an approximation of the form K ≈ K_{·,I} C K_{·,I}^⊤, with I = {i₁, ..., i_m}. The canonical choice for C is C = W⁺, where W = K_{I,I} and W⁺ denotes the Moore–Penrose pseudoinverse of W. The resulting approximation

    K ≈ K_{·,I} W⁺ K_{·,I}^⊤   (3)

is known as the Nyström approximation, because it is analogous to the so-called Nyström extension used to extrapolate continuous operators from a finite number of quadrature points (a minimal sketch of the construction is given at the end of this subsection). Clearly, the choice of I is critical for a good quality approximation. Starting with the pioneering papers [6, 3, 7], over the course of the last 15 years a sequence of different sampling strategies have been developed for obtaining I, several with rigorous approximation bounds [8, 9, 10, 11]. Further variations include the ensemble Nyström method [12] and the modified Nyström method [13]. Nyström methods have the advantage of being relatively simple and having reliable performance bounds. A fundamental limitation, however, is that the approximation (3) is inherently low rank. As pointed out in [4], there is no reason to believe that kernel matrices in general should be close to low rank.

An even more fundamental issue, which is less often discussed in the literature, relates to the specific form of (2). The appearance of K′⁻¹ in this formula suggests that it is the low-eigenvalue eigenvectors of K′ that should dominate the result of GP regression. On the other hand, multiplying by k_x largely cancels this effect, since k_x is effectively a row of a kernel matrix similar to K′, and will likely concentrate most weight on the high-eigenvalue eigenvectors. Therefore, ultimately, it is not K′ itself, but the relationship between the eigenvectors of K′ and the data vector y, that determines which part of the spectrum of K′ the result of GP regression is most sensitive to. Once again, intuition about the kernel helps clarify this point. In a setting where the function that we are regressing is smooth, and correspondingly the kernel has a large length-scale parameter, it is the global, long-range relationships between data points that dominate GP regression, and these can indeed be well approximated by the landmark point method. In terms of the linear algebra, the spectral expansion of K′ is dominated by a few large-eigenvalue eigenvectors; we will call this the "PCA-like" scenario. In contrast, in situations where f varies more rapidly, a shorter length-scale kernel is called for, and local relationships between nearby points become more important, which the landmark point method is less well suited to capture. We call this the "k-nearest-neighbor type" scenario. In reality, most non-trivial GP regression problems fall somewhere between these two extremes. In high dimensions, data points tend to be almost equally far from each other anyway, limiting the applicability of simple geometric interpretations. Nonetheless, the two scenarios illustrate the general point that one of the key challenges in large-scale machine learning is integrating information from both local and global scales.
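A minimal sketch of the Nyström construction (3) follows. It is our own illustration, taking the index set I as given rather than committing to any of the sampling strategies cited above.

```python
# Nystrom approximation (Eq. 3): K ~= K[:, I] W^+ K[:, I]^T with W = K[I, I].
import numpy as np

def nystrom(K, idx):
    C = K[:, idx]                           # n x m column submatrix
    W_pinv = np.linalg.pinv(K[np.ix_(idx, idx)])
    return C.dot(W_pinv).dot(C.T)           # approximation of rank at most m
```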
2.2 Local and hierarchical low rank methods

Realizing the limitations of the low rank approach, local kernel approximation methods have also started appearing in the literature. Broadly, these algorithms (1) first cluster the rows/columns of K with some appropriate fast clustering method, e.g., METIS [14] or GRACLUS [15], and block K accordingly; (2) compute a low rank, but relatively high accuracy, approximation ⟦K⟧_{i,i} ≈ U_i Λ_i U_i^⊤ to each diagonal block of K; and (3) use the {U_i} bases to compute possibly coarser approximations to the off-diagonal blocks ⟦K⟧_{i,j}. This idea appears in its purest form in [16], and is refined in [4] in a way that avoids having to form all rows/columns of the off-diagonal blocks in the first place. Recently, [17] proposed a related approach, where all the blocks in a given row share the same row basis but have different column bases. A major advantage of local approaches is that they are inherently parallelizable. The clustering itself, however, is a delicate, and sometimes not very robust, component of these methods. In fact, divide-and-conquer algorithms such as [18] and [19] can also be included in the same category, even though in these cases the blocking is usually random.

A natural extension of the blocking idea is to apply the divide-and-conquer approach recursively, at multiple different scales. Geometrically, this is similar to recent multiresolution data analysis approaches such as [20]. In fact, hierarchical matrix approximations, including HODLR matrices, H-matrices [21], H²-matrices [22] and HSS matrices [23], are very popular in the numerical analysis literature. While the exact details vary, each of these methods imposes a specific type of block structure on the matrix and forces the off-diagonal blocks to be low rank (Figure 1 in the Supplement). Intuitively, nearby clusters interact in a richer way, but as we move farther away, data can be aggregated more and more coarsely, just as in the fast multipole method [24]. We know of only two applications of the hierarchical matrix methodology to kernel approximation: Börm and Garcke's H² matrix approach [25] and O'Neil et al.'s HODLR method [26]. The advantage of H² matrices is their more intricate structure, allowing relatively tight interactions between neighboring clusters even when the two clusters are not siblings in the tree (e.g., blocks 8 and 9 in Figure 1c in the Supplement). However, the H² format does not directly help with inverting K or computing its determinant: it is merely a memory-efficient way of storing K and performing matrix/vector multiplies inside an iterative method. HODLR matrices have a simpler structure, but admit a factorization that makes it possible to directly compute both the inverse and the determinant of the approximated matrix in just O(n log n) time. The reason that hierarchical matrix approximations have not become more popular in machine learning so far is that in the case of high dimensional, unstructured data, finding a way to organize {x₁, ..., x_n} into a single hierarchy is much more challenging than in the setting of regularly spaced points in R² or R³, where these methods originate: (1) hierarchical matrices require making hard assignments of data points to clusters, since the block structure at each level corresponds to partitioning the rows/columns of the original matrix; (2) the hierarchy must form a single tree, which puts deep divisions between clusters whose closest common ancestor is high up in the tree; (3) finding the hierarchy in the first place is by no means trivial.
Most works use a top-down strategy, which defeats the inherent parallelism of the matrix structure, and the actual algorithm used (k-d trees) is known to be problematic in high dimensions [27].

3 Multiresolution Kernel Approximation

Our goal in this paper is to develop a data-adapted multiscale kernel matrix approximation method, Multiresolution Kernel Approximation (MKA), that reflects the "distant clusters only interact in a low rank fashion" insight of the fast multipole method, but is considerably more flexible than existing hierarchical matrix decompositions. The basic building blocks of MKA are local factorizations of a specific form, which we call core-diagonal compression.

Definition 1. We say that a matrix H is c-core-diagonal if H_{i,j} = 0 unless either i, j ≤ c or i = j.

Definition 2. A c-core-diagonal compression of a symmetric matrix A ∈ R^{m×m} is an approximation of the form

    A ≈ Q^⊤ H Q,   (4)

where Q is orthogonal and H is c-core-diagonal.

Core-diagonal compression is to be contrasted with rank-c sketching, where H would just have the c×c block, without the rest of the diagonal. From our multiresolution-inspired point of view, however, the purpose of (4) is not just to sketch A, but also to split R^m into the direct sum of two subspaces: (a) the "detail space", spanned by the last m−c rows of Q, responsible for capturing purely local interactions in A, and (b) the "scaling space", spanned by the first c rows, capturing the overall structure of A and its relationship to other diagonal blocks.

Hierarchical matrix methods apply low rank decompositions to many blocks of K in parallel, at different scales. MKA works similarly, by applying core-diagonal compressions. Specifically, the algorithm proceeds by taking K through a sequence of transformations K = K₀ ↦ K₁ ↦ ... ↦ K_s, called stages. In the first stage:

1. Similarly to other local methods, MKA first uses a fast clustering method to cluster the rows/columns of K₀ into clusters C¹₁, ..., C¹_{p₁}. Using the corresponding permutation matrix C₁ (which maps the elements of the first cluster to (1, 2, ..., |C¹₁|), the elements of the second cluster to (|C¹₁|+1, ..., |C¹₁|+|C¹₂|), and so on) we form a blocked matrix K̄₀ = C₁ K₀ C₁^⊤, where ⟦K̄₀⟧_{i,j} = K_{C¹_i, C¹_j}.

2. Each diagonal block of K̄₀ is independently core-diagonally compressed as in (4) to yield

    H¹_i = [ Q¹_i ⟦K̄₀⟧_{i,i} (Q¹_i)^⊤ ]_{CD(c¹_i)},   (5)

where CD(c¹_i) in the index stands for truncation to c¹_i-core-diagonal form.

3. The Q¹_i local rotations are assembled into a single large orthogonal matrix Q₁ = ⊕_i Q¹_i and applied to the full matrix to give H₁ = Q₁ K̄₀ Q₁^⊤.

4. The rows/columns of H₁ are rearranged by applying a permutation P₁ that maps the core part of each block to one of the first c₁ := c¹₁ + ... + c¹_{p₁} coordinates, and the diagonal part to the rest, giving H₁^pre = P₁ H₁ P₁^⊤.

5. Finally, H₁^pre is truncated into the core-diagonal form H₁ = K₁ ⊕ D₁, where K₁ ∈ R^{c₁×c₁} is dense, while D₁ is diagonal.

Effectively, K₁ is a compressed version of K₀, while D₁ is formed by concatenating the diagonal parts of each of the H¹_i matrices. Together, this gives a global core-diagonal compression

    K₀ ≈ C₁^⊤ Q₁^⊤ P₁^⊤ (K₁ ⊕ D₁) P₁ Q₁ C₁

of the entire original matrix K₀, with the composite rotation P₁ Q₁ C₁ playing the role of Q in (4). The second and further stages of MKA consist of applying the above five steps to K₁, K₂, ..., K_{s−1} in turn, which has a telescoping form, so ultimately the algorithm yields a kernel approximation

    K̃ = Q₁^⊤ ( Q₂^⊤ ( ... Q_s^⊤ (K_s ⊕ D_s) Q_s ... ⊕ D₂ ) Q₂ ⊕ D₁ ) Q₁.   (6)

The pseudocode of the full algorithm is in the Supplementary Material. A minimal sketch of applying (6) to a vector is given below.
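The sketch below is ours and purely illustrative: each stage rotation is taken as a dense matrix for clarity, whereas in MKA each Qℓ is a sparse product of local rotations (composed with the clustering and core/detail permutations).

```python
# Matrix-free product z -> K_tilde z for the telescoping form (6).
# Qs[l]: orthogonal matrix of stage l+1; Ds[l]: 1-D array of that stage's
# diagonal ("detail") entries; K_core: the final dense core K_s.
import numpy as np

def mka_matvec(Qs, Ds, K_core, z):
    details = []
    for Q, D in zip(Qs, Ds):
        z = Q.dot(z)
        c = len(z) - len(D)                 # size of the core passed on
        details.append(D * z[c:])           # diagonal action on the details
        z = z[:c]
    out = K_core.dot(z)
    for Q, d in zip(reversed(Qs), reversed(details)):
        out = Q.T.dot(np.concatenate([out, d]))
    return out
```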
MKA is really a meta-algorithm, in the sense that it can be used in conjunction with different core-diagonal compressors. The main requirements on the compressor are that (a) the core of H should capture the dominant part of A, in particular the subspace that most strongly interacts with other blocks, and (b) the first c rows of Q should be as sparse as possible. We consider two alternatives.

Augmented Sparse PCA (SPCA). Sparse PCA algorithms explicitly set out to find a set of vectors {v₁, ..., v_c} so as to maximize ‖V^⊤ A V‖_Frob, where V = [v₁, ..., v_c], while constraining each vector to be as sparse as possible [28]. While not all SPCAs guarantee orthogonality, this can be enforced a posteriori via, e.g., QR factorization, yielding Q_sc, the top c rows of Q in (4). Letting U be a basis for the complementary subspace, the optimal choice for the bottom m−c rows, in terms of minimizing the Frobenius norm error of the compression, is Q_wlet = U Ô, where

    Ô = argmax_{O : O^⊤O = I} ‖ diag(O^⊤ U^⊤ A U O) ‖,

the solution to which is of course given by the eigenvectors of U^⊤ A U. The main drawback of the SPCA approach is its computational cost: depending on the algorithm, the complexity of SPCA scales with m³ or worse [29, 30].

Multiresolution Matrix Factorization (MMF). MMF is a recently introduced matrix factorization algorithm motivated by similar multiresolution ideas as the present work, but applied at the level of individual matrix entries rather than at the level of matrix blocks [31]. Specifically, MMF yields a factorization of the form

    A ≈ q₁^⊤ ... q_L^⊤ H q_L ... q₁,

where, in the simplest case, the q_i are Givens rotations and Q = q_L ... q₁. Typically, the number of rotations in MMF is O(m). MMF is efficient to compute, and sparsity is guaranteed by the sparsity of the individual q_i's and the structure of the algorithm. Hence, MMF has complementary strengths to SPCA: it comes with strong bounds on sparsity and computation time, but the quality of the scaling/wavelet space split that it produces is less well controlled. A toy illustration of a Givens-rotation-based compression is sketched below.
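The sketch below is ours and only a caricature: it runs a few steps of the classical Jacobi eigenvalue iteration, rotating away the largest off-diagonal entry at each step, to convey the flavor of building Q from Givens rotations. MMF's actual rotation selection rule is different and is described in [31].

```python
# Toy Givens-based compression: repeatedly zero out the largest
# off-diagonal entry with a Jacobi rotation, so A ~= Q.T H Q with H
# driven toward (core-)diagonal form.
import numpy as np

def jacobi_compress(A, n_rot):
    m = A.shape[0]
    Q, H = np.eye(m), A.copy()
    for _ in range(n_rot):
        off = np.abs(H - np.diag(np.diag(H)))
        i, j = np.unravel_index(off.argmax(), off.shape)
        theta = 0.5 * np.arctan2(2 * H[i, j], H[i, i] - H[j, j])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(m)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = s, -s            # this rotation zeroes H[i, j]
        H = G.dot(H).dot(G.T)
        Q = G.dot(Q)
    return Q, H
```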
Remarks. We make a few remarks about MKA.
1. Typically, low rank approximations reduce dimensionality quite aggressively. In contrast, in core-diagonal compression c is often on the order of m/2, leading to "gentler", and more faithful, kernel approximations.
2. In hierarchical matrix methods, the block structure of the matrix is defined by a single tree, which, as discussed above, is potentially problematic. In contrast, by virtue of reclustering the rows/columns of Kℓ before every stage, MKA affords a more flexible factorization. In fact, beyond the first stage, it is not even individual data points that MKA clusters, but subspaces defined by the earlier local compressions.
3. While Cℓ and Pℓ are presented as explicit permutations, they really just correspond to different ways of blocking Kℓ, which is done implicitly in practice with relatively little overhead.
4. Step 3 of the algorithm is critical, because it extends the core-diagonal splits found in the diagonal blocks of the matrix to the off-diagonal blocks. Essentially the same is done in [4] and [17]. This operation reflects a structural assumption about K, namely that the same bases that pick out the dominant parts of the diagonal blocks (composed of the first cℓ_i rows of the Qℓ_i rotations) are also good for compressing the off-diagonal blocks. In the hierarchical matrix literature, for the case of specific kernels sampled in specific ways in low dimensions, it is possible to prove such statements. In our high dimensional and less structured setting, deriving analytical results is much more challenging.
5. MKA is an inherently bottom-up algorithm, including the clustering; thus it is naturally parallelizable and can be implemented in a distributed environment.
6. The hierarchical structure of MKA is similar to that of the parallel version of MMF (pMMF) [32], but the way that the compressions are calculated is different (pMMF tries to minimize an objective that relates to the entire matrix).

4 Complexity and application to GPs

For MKA to be effective for large-scale GP regression, it must be possible to compute the factorization fast. In addition, the resulting approximation K̃ must be symmetric positive semi-definite (spsd) (MEKA, for example, fails to fulfill this [4]). We say that a matrix approximation algorithm A ↦ Ã is spsd preserving if Ã is spsd whenever A is. It is clear from its form that the Nyström approximation is spsd preserving, and so is augmented SPCA compression. MMF has different variants, but the core part of H is always derived by conjugating A by rotations, while the diagonal elements are guaranteed to be positive; therefore MMF is spsd preserving as well.
(b) The leading term in the cost is the m3 cost of computing A>A, but this is a BLAS operation, so it is fast. (c) Once A>A has been computed, the cost of the rest of the compression scales with m2 . Together, these features result in very fast core-diagonal compressions and a very compact representation of the kernel matrix. Proposition 4 The complexity of computing the MMF-based MKA of an n?n dense matrix is upper bounded by 4sn2 + sm2max n, where s = log(dcore /n)/(log ?). Assuming bmax ?fold parallelism, this is reduced to 4snmmax + m3max . Proposition 5 The storage complexity of MMF-based MKA is upper bounded by (2s + 1)n + d2core . Typically, dcore = O(1). Note that this implies O(n log n) storage complexity, which is similar to Nystr?om approximations with very low rank. Finally, we have the following results that are critical for using MKA in GPs. ? in MMF-based MKA form (6), and a vector z ? Rn Proposition 6 Given an approximate kernel K ? the product Kz can be computed in 4sn + d2core operations. With bmax ?fold parallelism, this is reduced to 4smmax + d2core . ? in (MMF or SPCA-based) MKA form, the MKA Proposition 7 Given an approximate kernel K ? ? for any ? can be computed in O(n + d3core ) operations. The complexity of computing form of K ? for any ? in MKA form and the complexity of computing det(K) ? the matrix exponential exp(? K) are also O(n + d3core ). 4.1 MKA?GPs and MKA Ridge Regression The most direct way of applying MKA to speed up GP regression (or ridge regression) is simply using it to approximate the augmented kernel matrix K 0 = (K + ? 2 I) and then inverting this ? 0?1 never needs to be approximation using Proposition 7 (with ? = ?1). Note that the resulting K ? 0?1 y evaluated fully, in matrix form. Instead, in equations such as (2), the matrix-vector product K can be computed in ?matrix-free? form by cascading y through the analog of (6). Assuming that dcore  n and mmax is not too large, the serial complexity of each stage of this computation scales with at most n2 , which is the same as the complexity of computing K in the first place. One potential issue with the above approach however is that because MKA involves repeated trun? 0 will be a biased approximation to K, therefore expressions such as cation of the Hjpre matrices, K 6 SOR Full FITC PITC MEKA MKA 10 10 10 10 10 10 8 8 8 8 8 8 6 6 6 6 6 6 4 4 2 4 2 4 2 4 2 4 2 2 0 0 0 0 0 0 -2 -2 -2 -2 -2 -2 -4 -4 -4 -4 -4 -4 -6 -6 -8 -6 -8 50 100 150 200 250 300 -6 -8 50 100 150 200 250 300 -6 -8 50 100 150 200 250 300 -6 -8 50 100 150 200 250 300 -8 50 100 150 200 250 300 50 100 150 200 250 300 Figure 1: Snelson?s 1D example: ground truth (black circles); prediction mean (solid line curves); one standard deviation in prediction uncertainty (dashed line curves). Table 1: Regression Results with k to be # pseudo-inputs/dcore : SMSE(MNLP) Method housing rupture wine pageblocks compAct pendigit k 16 16 32 32 32 64 Full 0.36(?0.32) 0.17(?0.89) 0.59(?0.33) 0.44(?1.10) 0.58(?0.66) 0.15(?0.73) SOR 0.93(?0.03) 0.94(?0.04) 0.86(?0.07) 0.86(?0.57) 0.88(?0.13) 0.65(?0.19) FITC 0.91(?0.04) 0.96(?0.04) 0.84(?0.03) 0.81(?0.78) 0.91(?0.08) 0.70(?0.17) PITC 0.96(?0.02) 0.93(?0.05) 0.87(?0.07) 0.86(?0.72) 0.88(?0.14) 0.71(?0.17) MEKA 0.85(?0.08) 0.46(?0.18) 0.97(?0.12) 0.96(?0.10) 0.75(?0.21) 0.53(?0.29) MKA 0.52(?0.32) 0.32(?0.54) 0.70(?0.23) 0.63(?0.85) 0.60(?0.32) 0.30(?0.42) (2) which mix an approximate K 0 with an exact kx will exhibit some systematic bias. 
In Nystr?om type methods (specifically, the so-called Subset of Regressors and Deterministic Training of Conditionals (DTC) GP approximations) this problem is addressed by replacing kx with its own Nystr?om ? x = K?,I W + kI ,, where [k?I ]j = k(x, xi ). Although K ? 0 = K?,I W + K > + ? 2 I approximation, k x x j ?,I ?>K ? 0?1 can nonetheless be efficiently evaluated by using a is a large matrix, expressions such as k x variant of the Sherman?Morrison?Woodbury identity and the fact that W is low rank (see [33]). ? is not low rank. Assuming that the testing The same approach cannot be applied to MKA because K set {x1 , . . . , xp } is known at training time, however, instead of approximating K or K 0 , we compute the MKA approximation of the joint train/test kernel matrix   Ki,j = k(xi , xj ) + ? 2 K K? [K? ]i,j = k(xi , x0j ) K= where K?> Ktest [Ktest ]i,j = k(x0i , x0j ). Writing K?1 in blocked form ? ?1 = K  A C B D  , ? ?1 = A ? and taking the Schur complement of D now recovers an alternative approximation K BD?1 C to K ?1 which is consistent with the off-diagonal block K ? leading to our final MKA?GP ? ?1 y, where fb = (fb(x0 ), . . . , fb(x0 ))> . While conceptually this is somewhat formula fb = K?> K p 1 more involved than naively estimating K 0 , assuming p  n, the cost of inverting D is negligible, and the overall serial complexity of the algorithm remains (n + p)2 . In certain GP applications, the O(n2 ) cost of writing down the kernel matrix is already forbidding. The one circumstance under which MKA can get around this problem is when the kernel matrix is a matrix polynomial in a sparse matrix L, which is most notably for diffusion kernels and certain other graph kernels. Specifically in the case of MMF-based MKA, since the computational cost is dominated by computing local ?Gram matrices? A>A, when L is sparse, and this sparsity is retained from one compression to another, the MKA of sparse matrices can be computed very fast. In the case of graph Laplacians, empirically, the complexity is close to linear in n. By Proposition 7, the diffusion kernel and certain other graph kernels can also be approximated in about O(n log n) time. 5 Experiments We compare MKA to five other methods: 1. Full: the full GP regression using Cholesky factorization [1]. 2. SOR: the Subset of Regressors method (also equivalent to DTC in mean) [1]. 3. FITC: the Fully Independent Training Conditional approximation, also called Sparse Gaussian Processes using Pseudo-inputs [34]. 4. PITC: the Partially Independent Training Conditional approximation method (also equivalent to PTC in mean) [33]. 5. MEKA: the Memory Efficient Kernel Approximation method [4]. The KISS-GP [35] and other interpolation based methods are not discussed in this paper, because, we believe, they mostly only apply to low dimensional settings. We used custom Matlab implementations [1] for Full, SOR, FITC, and PITC. We used the Matlab codes provided by 7 Full SOR FITC PITC MEKA MKA 0.75 0.8 -0.2 0.7 MNLP 0.7 0.65 0.6 -0.3 -0.4 0.55 Full SOR FITC PITC MEKA MKA -0.5 0.5 0.45 -0.6 0.4 rupture Full SOR FITC PITC MEKA MKA 0.9 Full SOR FITC PITC MEKA MKA -0.1 -0.2 -0.3 MNLP 0.8 -0.1 SMSE 0.85 SMSE rupture housing housing 0.9 0.6 0.5 -0.4 -0.5 -0.6 0.4 -0.7 0.3 -0.8 0.2 2 2.5 3 3.5 4 Log2 # pseudo-inputs 4.5 2 2.5 3 3.5 4 4 4.5 4.5 5 5.5 6 6.5 7 Log2 # pseudo-inputs Log2 # pseudo-inputs 7.5 8 4 4.5 5 5.5 6 6.5 7 7.5 8 Log2 # pseudo-inputs Figure 2: SMSE and MNLP as a function of the number of pseudo-inputs/dcore on two datasets. 
Our algorithm MKA was implemented in C++ with a Matlab interface. To get an approximately fair comparison, we set d_core in MKA to be the number of pseudo-inputs. The parallel MMF algorithm was used as the compressor due to its computational strength [32]. The Gaussian kernel is used for all experiments, with one length scale for all input dimensions.

Qualitative results. We show the qualitative behavior of each method on the 1D toy dataset from [34]. We sampled the ground truth from a Gaussian process with length scale ℓ = 0.5; the number of pseudo-inputs (d_core) is 10. We applied cross-validation to select the parameters for each method to fit the data. Figure 1 shows that MKA fits the data almost as well as the Full GP does. The other approximate methods produce smoother fits, but this comes at the cost of capturing the local structure of the underlying data; this confirms MKA's ability to capture the entire spectrum of the kernel matrix, not just its top eigenvectors.

Real data. We tested the efficacy of GP regression on real-world datasets. The data are normalized to mean zero and variance one. We randomly selected 10% of each dataset to be used as a test set. On the other 90% we did five-fold cross validation to learn the length scale and noise parameter for each method, and the regression results were averaged over five repetitions of this setting. All experiments were run on a 3.4GHz 8 core machine with 8GB of memory. Two distinct error measures are used to assess performance: (a) the standardized mean square error,
SMSE = (1/n) Σ_{t=1}^n (ŷ_t − y_t)² / σ̂_*²,
where σ̂_*² is the variance of the test outputs, and (b) the mean negative log probability,
MNLP = (1/n) Σ_{t=1}^n ½ [ (ŷ_t − y_t)²/σ̂_t² + log σ̂_t² + log 2π ],
where σ̂_t² is the predictive variance at test point t. These assess the quality of the predictive mean and the predictive variance, respectively. From Table 1, we are competitive in both error measures when the number of pseudo-inputs (d_core) is small, which reveals low-rank methods' inability to capture the local structure of the data. We also illustrate the performance sensitivity by varying the number of pseudo-inputs on selected datasets. In Figure 2, over the interval of pseudo-inputs considered, MKA's performance is robust to d_core, while the performance of the low-rank based methods changes rapidly; this shows MKA's ability to achieve good regression results even at a coarse compression level. The Supplementary Material gives a more detailed discussion of the datasets and experiments.

6 Conclusions

In this paper we made the case that whether a learning problem is low rank or not depends on the nature of the data rather than just the spectral properties of the kernel matrix K. This is easiest to see in the case of Gaussian Processes, which is the algorithm that we focused on in this paper, but it is also true more generally. Most existing sketching algorithms used in GP regression force low rank structure on K, either globally or at the block level. When the nature of the problem is indeed low rank, this might actually act as an additional regularizer and improve performance. When the data does not have low rank structure, however, low rank approximations will fail.
Inspired by recent work on multiresolution factorizations, we proposed a multiresolution meta-algorithm, MKA, for approximating kernel matrices, which assumes that the interaction between distant clusters is low rank, while avoiding forcing a low rank structure on the data locally, at any scale. Importantly, MKA allows fast direct calculations of the inverse of the kernel matrix and its determinant, which are almost always the computational bottlenecks in GP problems.

Acknowledgements

This work was completed in part with resources provided by the University of Chicago Research Computing Center. The authors wish to thank Michael Stein for helpful suggestions.

References

[1] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
[2] Michael L. Stein. Statistical Interpolation of Spatial Data: Some Theory for Kriging. Springer, 1999.
[3] Christopher Williams and Matthias Seeger. Using the Nyström Method to Speed Up Kernel Machines. In Advances in Neural Information Processing Systems 13, 2001.
[4] Si Si, C. Hsieh, and Inderjit S. Dhillon. Memory Efficient Kernel Approximation. In ICML, 2014.
[5] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. NIPS, 2008.
[6] Alex J. Smola and Bernhard Schölkopf. Sparse Greedy Matrix Approximation for Machine Learning. In Proceedings of the 17th International Conference on Machine Learning, ICML, pages 911–918, 2000.
[7] Charless Fowlkes, Serge Belongie, Fan Chung, and Jitendra Malik. Spectral grouping using the Nyström method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):214–225, 2004.
[8] P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6:2153–2175, 2005.
[9] Rong Jin, Tianbao Yang, Mehrdad Mahdavi, Yu-Feng Li, and Zhi-Hua Zhou. Improved Bounds for the Nyström Method With Application to Kernel Classification. IEEE Trans. Inf. Theory, 2013.
[10] Alex Gittens and Michael W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. ICML, 28:567–575, 2013.
[11] Shiliang Sun, Jing Zhao, and Jiang Zhu. A Review of Nyström Methods for Large-Scale Machine Learning. Information Fusion, 26:36–48, 2015.
[12] Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. Ensemble Nyström method. In NIPS, 2009.
[13] Shusen Wang. Efficient algorithms and error analysis for the modified Nyström method. AISTATS, 2014.
[14] Amine Abou-Rjeili and George Karypis. Multilevel algorithms for partitioning power-law graphs. In Proceedings of the 20th International Conference on Parallel and Distributed Processing, 2006.
[15] Inderjit S. Dhillon, Yuqiang Guan, and Brian Kulis. Weighted graph cuts without eigenvectors: a multilevel approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1944–1957, 2007.
[16] Berkant Savas, Inderjit Dhillon, et al. Clustered Low-Rank Approximation of Graphs in Information Science Applications. In Proceedings of the SIAM International Conference on Data Mining, 2011.
[17] Ruoxi Wang, Yingzhou Li, Michael W. Mahoney, and Eric Darve. Structured Block Basis Factorization for Scalable Kernel Matrix Evaluation. arXiv preprint arXiv:1505.00398, 2015.
[18] Yingyu Liang, Maria-Florina F. Balcan, Vandana Kanchanapally, and David Woodruff. Improved distributed principal component analysis.
In NIPS, pages 3113–3121, 2014.
[19] Yuchen Zhang, John Duchi, and Martin Wainwright. Divide and conquer kernel ridge regression. Conference on Learning Theory, 30:1–26, 2013.
[20] William K. Allard, Guangliang Chen, and Mauro Maggioni. Multi-scale geometric methods for data sets II: Geometric multi-resolution analysis. Applied and Computational Harmonic Analysis, 2012.
[21] W. Hackbusch. A Sparse Matrix Arithmetic Based on H-Matrices. Part I: Introduction to H-Matrices. Computing, 62:89–108, 1999.
[22] Wolfgang Hackbusch, Boris Khoromskij, and Stefan A. Sauter. On H2-Matrices. Lectures on Applied Mathematics, pages 9–29, 2000.
[23] S. Chandrasekaran, M. Gu, and W. Lyons. A Fast Adaptive Solver For Hierarchically Semi-separable Representations. Calcolo, 42(3-4):171–185, 2005.
[24] L. Greengard and V. Rokhlin. A Fast Algorithm for Particle Simulations. J. Comput. Phys., 1987.
[25] Steffen Börm and Jochen Garcke. Approximating Gaussian Processes with H²-Matrices. In ECML, 2007.
[26] Sivaram Ambikasaran, Daniel Foreman-Mackey, Leslie Greengard, David W. Hogg, and Michael O'Neil. Fast Direct Methods for Gaussian Processes. arXiv:1403.6015v2, April 2015.
[27] Nazneen Rajani, Kate McArdle, and Inderjit S. Dhillon. Parallel k-Nearest Neighbor Graph Construction Using Tree-based Data Structures. In 1st High Performance Graph Mining Workshop, 2015.
[28] Hui Zou, Trevor Hastie, and Robert Tibshirani. Sparse Principal Component Analysis. Journal of Computational and Graphical Statistics, 15(2):265–286, 2004.
[29] Q. Berthet and P. Rigollet. Complexity Theoretic Lower Bounds for Sparse Principal Component Detection. J. Mach. Learn. Res. (COLT), 30:1046–1066, 2013.
[30] Volodymyr Kuleshov. Fast algorithms for sparse principal component analysis based on Rayleigh quotient iteration. In ICML, pages 1418–1425, 2013.
[31] Risi Kondor, Nedelina Teneva, and Vikas Garg. Multiresolution Matrix Factorization. In ICML, 2014.
[32] Nedelina Teneva, Pramod K. Mudrakarta, and Risi Kondor. Multiresolution Matrix Compression. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS-16), 2016.
[33] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, 2005.
[34] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. NIPS, 2005.
[35] Andrew Gordon Wilson and Hannes Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In ICML, pages 1775–1784, Lille, France, 2015.
Collapsed variational Bayes for Markov jump processes

Jiangwei Pan*† (Department of Computer Science, Duke University, [email protected])
Boqian Zhang* (Department of Statistics, Purdue University, [email protected])
Vinayak Rao (Department of Statistics, Purdue University, [email protected])
*Equal contribution. †Now at Facebook.

Abstract

Markov jump processes are continuous-time stochastic processes widely used in statistical applications in the natural sciences, and more recently in machine learning. Inference for these models typically proceeds via Markov chain Monte Carlo, and can suffer from various computational challenges. In this work, we propose a novel collapsed variational inference algorithm to address this issue. Our work leverages ideas from discrete-time Markov chains, and exploits a connection between these two through an idea called uniformization. Our algorithm proceeds by marginalizing out the parameters of the Markov jump process, and then approximating the distribution over the trajectory with a factored distribution over segments of a piecewise-constant function. Unlike MCMC schemes that marginalize out transition times of a piecewise-constant process, our scheme optimizes the discretization of time, resulting in significant computational savings. We apply our ideas to synthetic data as well as a dataset of check-in recordings, where we demonstrate superior performance over state-of-the-art MCMC methods.

1 Markov jump processes

Markov jump processes (MJPs) (Çinlar, 1975) are stochastic processes that generalize discrete-time discrete-state Markov chains to continuous-time. MJPs find wide application in fields like biology, chemistry and ecology, where they are used to model phenomena like the evolution of population sizes (Opper and Sanguinetti, 2007), gene regulation (Boys et al., 2008), or the state of a computing network (Xu and Shelton, 2010). A realization of an MJP is a random piecewise-constant function of time, transitioning between a set of states, usually of finite cardinality N (see Figure 1, left). This stochastic process is parametrized by an N × 1 distribution π giving the initial distribution over states, and an N × N rate matrix A governing the dynamics of the process. The off-diagonal element A_ij (i ≠ j) gives the rate of transitioning from state i to j, and these elements determine the diagonal element A_ii according to the relation A_ii = −Σ_{j≠i} A_ij. Thus, the rows of A sum to 0, and the negative of the diagonal element A_ii gives the total rate of leaving state i. Simulating a trajectory from an MJP over an interval [0, T] follows what is called the Gillespie algorithm (Gillespie, 1977):
1. First, at time t = 0, sample an initial state s₀ from π.
2. From here onwards, upon entering a new state i, sample the time of the next transition from an exponential with rate |A_ii|, and then a new state j ≠ i with probability proportional to A_ij.
These latter two steps are repeated until the end of the interval, giving a piecewise-constant trajectory consisting of a sequence of holds and jumps. Note that under this formulation, it is impossible for the system to make self-transitions; these are effectively absorbed into the rate parameters A_ii.

Figure 1: (left) a realization of an MJP; (right) sampling a path via uniformization.

Bayesian inference for MJPs: In practical applications, one only observes the MJP trajectory S(t) indirectly through a noisy observation process.
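The Gillespie algorithm above is easy to implement directly. A minimal sketch follows; function and variable names are ours, not from the paper.

```python
import numpy as np

def gillespie_mjp(pi, A, T, seed=0):
    """Simulate an MJP trajectory on [0, T] with the Gillespie algorithm.

    pi : length-N initial distribution; A : N x N rate matrix whose
    rows sum to zero. Returns jump times and the states held after them.
    """
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    s = rng.choice(N, p=pi)
    t, times, states = 0.0, [0.0], [s]
    while True:
        rate = -A[s, s]                 # total rate of leaving state s
        if rate <= 0:                   # absorbing state: hold until T
            break
        t += rng.exponential(1.0 / rate)
        if t >= T:
            break
        p = np.maximum(A[s], 0.0)       # off-diagonal rates A[s, j], j != s
        s = rng.choice(N, p=p / p.sum())
        times.append(t)
        states.append(s)
    return times, states
```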
Abstractly, this forms a hidden Markov model problem, now in continuous time. For instance, the states of the MJP could correspond to different states of a gene-network, and rather than observing these directly, one only has noisy gene-expression level measurements. Alternately, each state i can have an associated emission rate λ_i, and rather than directly observing S(t) or λ_{S(t)}, one observes a realization of a Poisson process with intensity λ_{S(t)}. The Poisson events could correspond to mutation events on a strand of DNA, with position indexed by t (Fearnhead and Sherlock, 2006). In this work, we consider a dataset of users logging their activity into the social media website FourSquare, with each "check-in" consisting of a time and a location. We model each user with an MJP, with different states having different distributions over check-in locations. Given a sequence of user check-ins, one is interested in quantities like the latent state of the user, various clusters of check-in locations, and the rate at which users transition from one state to another. We describe this problem and the dataset in more detail in our experiments.

In typical situations, the parameters π and A are themselves unknown, and it is necessary to learn these, along with the latent MJP trajectory, from the observed data. A Bayesian approach places a prior over these parameters and uses the observed data to obtain a posterior distribution. A simple and convenient prior over A is a Dirichlet-Gamma prior: this places a Dirichlet prior over π, and models the off-diagonal elements A_ij as draws from a Gamma(a, b) distribution. The negative diagonal element |A_ii| is then just the sum of the corresponding elements from the same row, and is marginally distributed as a Gamma((N−1)a, b) variable. This prior is convenient in the context of MCMC sampling, allowing a Gibbs sampler that alternately samples (π, A) given an MJP trajectory S(t), and then a new trajectory S(t) given A and the observations. The first step is straightforward: given an MJP trajectory, the Dirichlet-Gamma prior is conjugate, resulting in a simple Dirichlet-Gamma posterior (but see Fearnhead and Sherlock (2006) and the next section for a slight generalization that continues to be conditionally conjugate). Similarly, recent developments in MCMC inference have made the second step fairly standard and efficient; see Rao and Teh (2014); Hajiaghayi et al. (2014).

Despite its computational simplicity, this Gibbs sampler comes at a price: it can mix slowly due to coupling between S(t) and A. Alternate approaches like particle MCMC (Andrieu et al., 2010) do not exploit the MJP structure, resulting in low acceptance rates and estimates with high variance. These challenges associated with MCMC raise the need for new techniques for Bayesian inference. Here, we bring recent ideas from variational Bayes towards posterior inference for MJPs, proposing a novel and efficient collapsed variational algorithm that marginalizes out the parameter A, thereby addressing the issue of slow mixing. Our algorithm adaptively finds regions of low and high transition activity, rather than integrating these out. In our experiments, we show that these can bring significant computational benefits. Our algorithm is based on an alternate approach to sampling an MJP trajectory called uniformization (Jensen, 1953), which we describe next.

2 Uniformization

Given a rate matrix A, choose an Ω ≥ max_i |A_ii|, and sample a set of times from a Poisson process with intensity Ω.
These form a random discretization of time, giving a set of candidate transition times (Figure 1, top right). Next sample a piecewise-constant trajectory by running a discrete-time Markov chain over these times, with Markov transition matrix given by B = I + (1/Ω)A, and with initial distribution π. It is easy to verify that B is a valid transition matrix with at least one non-zero diagonal element. This allows the discrete-time system to move back to the same state, something impossible under the original MJP. In fact, as Ω increases, the probability of self-transitions increases; however, at the same time, a large Ω implies a large number of Poisson-distributed candidate times. Thus the self-transitions serve to discard excess candidate times, and one can show (Jensen, 1953; Rao and Teh, 2014) that after discarding the self-transitions, the resulting distribution over trajectories is identical to an MJP with rate matrix A for any Ω ≥ max_i |A_ii| (Figure 1, bottom right). Rao and Teh (2012) describe a generalization where, instead of a single Ω, each state i has its own dominating rate Ω_i > |A_ii|. The transition matrix B is now defined as B_ii = 1 + A_ii/Ω_i, and B_ij = A_ij/Ω_i, for all i, j ∈ {1, …, N}, i ≠ j. Now, on entering state i, one proposes the next candidate transition time from a rate-Ω_i exponential, and then samples the next state from B_i. As before, self-transitions amount to rejecting the opportunity to leave state i. Large Ω_i result in more candidate transition times, but more self-transitions. Rao and Teh (2012) show that these two effects cancel out, and the resulting path, after discarding self-transitions, is a sample from an MJP.

An alternate prior on the parameters of an MJP: We use uniformization to formulate a novel prior distribution over the parameters of an MJP; this will facilitate our later variational Bayes algorithm. Consider A_i, the ith row of the rate matrix A. This is specified by the diagonal element A_ii, and the vector B_i := (1/|A_ii|)(A_i1, …, A_{i,i−1}, 0, A_{i,i+1}, …, A_iN). Recall that the latter is a probability vector, giving the probability of the next state after i. In Fearnhead and Sherlock (2006), the authors place a Gamma prior on |A_ii|, and what is effectively a Dirichlet(a, …, 0, …, a) prior on B_i (although they treat B_i as an (N−1)-component vector by ignoring the 0 at position i). We place a Dirichlet(a, …, a₀, …, a) prior on B_i for all i, with the a₀ at position i. Such B_i's allow self-transitions, and form the rows of the transition matrix B from uniformization. Note that under uniformization, the row A_i is uniquely specified by the pair (Ω, B_i) via the relationship A_i = Ω(B_i − 1_i), where 1_i is the indicator for i. We complete our specification by placing a Gamma prior over Ω. Note that since the rows of A sum to 0, and the rows of B sum to 1, both matrices are completely determined by N(N−1) elements. On the other hand, our specification has N(N−1) + 1 random variables, the additional term arising because of the prior over Ω. Given A, Ω plays no role in the generative process defined by Gillespie's algorithm, although it is an important parameter in MCMC inference algorithms based on uniformization. In our situation, B represents transition probabilities conditioned on there being a transition, and now Ω does carry information about A, namely the distribution over event times. Later, we will look at the implied marginal distribution over A.
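To make uniformization concrete, here is a minimal sketch of the single-Ω sampler described above (our own naming; the final loop discards the self-transitions):

```python
import numpy as np

def sample_path_uniformization(pi, A, Omega, T, seed=0):
    """Sample an MJP path on [0, T] via uniformization.

    Candidate times come from a rate-Omega Poisson process; states evolve
    under the discrete-time kernel B = I + A / Omega. Discarding the
    self-transitions yields a draw from the MJP with rate matrix A,
    for any Omega >= max_i |A_ii|.
    """
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    B = np.eye(N) + A / Omega
    n = rng.poisson(Omega * T)
    grid = np.sort(rng.uniform(0.0, T, size=n))   # Poisson process on [0, T]
    states = [rng.choice(N, p=pi)]
    for _ in grid:
        states.append(rng.choice(N, p=B[states[-1]]))
    # keep t = 0 plus the candidate times where the state actually changed
    times, kept = [0.0], [states[0]]
    for t, s in zip(grid, states[1:]):
        if s != kept[-1]:
            times.append(t)
            kept.append(s)
    return times, kept
```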
First however, we consider the generalized uniformization scheme of Rao and Teh (2012). Here we have N additional parameters, Ω₁ to Ω_N. Again, under our model, we place Gamma priors over these Ω_i's, and Dirichlet priors on the rows of the transition matrix B. Note that in Rao and Teh (2014, 2012), Ω is set to 2 max_i |A_ii|. From the identity B = I + (1/Ω)A, it follows that under any prior over A, with probability 1, the smallest diagonal element of B is 1/2. Our specification avoids such a constrained prior over B, instead introducing an additional random variable Ω. Indeed, our approach is equivalent to a prior over (Ω, A), with Ω = k max_i |A_ii| for some random k. We emphasize that the choice of this prior over k does not affect the generative model, only the induced inference algorithms such as Rao and Teh (2014) or our proposed algorithm. To better understand the implied marginal distribution over A, consider the representation of Rao and Teh (2012), with independent Gamma priors over the Ω_i's. We have the following result:

Proposition 1. Place independent Dirichlet priors on the rows of B as above, and independent Gamma((N−1)a + a₀, b) priors on the Ω_i. Then, the associated matrix A has off-diagonal elements that are marginally Gamma(a, b)-distributed, and negative-diagonal elements that are marginally Gamma((N−1)a, b)-distributed, with the rows of A adding to 0 almost surely.

The proposition is a simple consequence of the Gamma-Dirichlet calculus: first observe that the collection of variables Ω_iB_ij is a vector of independent Gamma(a, b) variables. Noting that A_ij = Ω_iB_ij, we have that the off-diagonal elements of A are independent Gamma(a, b)'s, for i ≠ j. Our proof is complete when we notice that the rows of A sum to 0, and that the sum of independent Gamma variables is still Gamma-distributed, with scale parameter equal to the sum of the scales. It is also easy to see that given A, Ω_i is set by Ω_i = |A_ii| + γ_i, where γ_i ∼ Gamma(a₀, b). In this work, we will simplify matters by scaling all rows by a single, shared Ω. This will result in a vector of A_ij's each marginally distributed as a Gamma variable, but now positively correlated due to the common Ω. We will see that this simplification does not affect the accuracy of our method. In fact, as our variational algorithm will maintain just a point estimate for Ω, its effect on the correlation between the A_ij's is negligible.

3 Variational inference for MJPs

Given noisy observations X of an MJP, we are interested in the posterior p(S(t), A | X). Following the earlier section, we choose an augmented representation, where we replace A with the pair (B, Ω). Similarly, we represent the MJP trajectory S(t) with the pair (T, U), where T is the set of candidate transition times, and U (with |U| = |T|) is the set of states at these times. For our variational algorithm, we will integrate out the Markov transition matrix B, working instead with the marginal distribution p(T, U, Ω). Such a collapsed representation avoids issues that plague MCMC and VB approaches, where coupling between trajectory and transition matrix slows down mixing/convergence. The distribution p(T, U, Ω) is still intractable however, and as is typical in variational algorithms, we will make a factorial approximation p(T, U, Ω) ≈ q(T, U)q(Ω). Writing q(T, U) = q(U | T)q(T), we shall also restrict q(T) to a delta-function: q(T) = δ_T̃(T) for some T̃. In this way, finding the best
approximating q(T) within this class amounts to finding a "best" discretization of time. This approach of optimizing over a time-discretization is in contrast to MCMC schemes that integrate out the time discretization, and has two advantages:

Simplified computation: Searching over time-discretizations can be significantly more efficient than integrating them out. This is especially true when a trajectory involves bursts of transitions interspersed with long periods of inactivity, where schemes like Rao and Teh (2014) can be quite inefficient.

Better interpretability: A number of applications use MJPs as tools to segment a time interval into inhomogeneous segments. A full distribution over such an object can be hard to deal with.

Following work on variational inference for discrete-time Markov chains (Wang and Blunsom, 2013), we will approximate q(U | T) factorially as q(U | T) = ∏_{t=1}^{|T|} q(u_t). Finally, since we fix q(T) to a delta function, we will also restrict q(Ω) to a delta function, only representing uncertainty in the MJP parameters via the marginalized transition matrix B. We emphasize that even though we optimize over time discretizations, we still maintain posterior uncertainty over the MJP state. Thus our variational approximation represents a distribution over piecewise-constant trajectories as a single discretization of time, with a probability vector over states for each time segment (Figure 2). Such an approximation does not involve too much loss of information, while being more convenient than a full distribution over trajectories, or a set of sample paths. While optimizing over trajectories, our algorithm attempts to find segments where the distribution over states is reasonably constant; if not, it will refine a segment into two smaller ones. Our overall variational inference algorithm then involves minimizing the Kullback-Leibler distance between this posterior approximation and the true posterior. We do this in a coordinate-wise manner:

1) Updating q(U | T) = ∏_{t=1}^{|T|} q(u_t): Given a discretization T and an Ω, uniformization tells us that inference over U is just inference for a discrete-time hidden Markov model. We adapt the approach of Wang and Blunsom (2013) to update q(U). Assume the observations X follow an exponential family likelihood with parameter C_s for state s: p(x_{lt} | S_t = s) = exp(φ(x_{lt})ᵀ C_s) h(x_{lt}) / Z(C_s), where Z is the normalization constant and x_{lt} is the l-th observation in the interval [T_t, T_{t+1}). Then for a sequence of |T| observations, we have

p(X, U | B, C) = ∏_{t=0}^{|T|} [ B_{u_t, u_{t+1}} ∏_{l=1}^{n_t} exp(φ(x_{lt})ᵀ C_{u_t}) h(x_{lt}) / Z(C_{u_t}) ]
= ( ∏_{i=1}^{S} ∏_{j=1}^{S} B_{ij}^{#_{ij}} ) ( ∏_{i=1}^{S} exp(ψ̄_iᵀ C_i) ) ∏_{t=0}^{|T|} ∏_{l=1}^{n_t} h(x_{lt}) / Z(C_{u_t}).

Here n_t is the number of observations in [T_t, T_{t+1}), #_{ij} is the number of transitions from state i to j, ψ̄_t = Σ_{l=1}^{n_t} φ(x_{lt}), and ψ̄_i = Σ_{t : u_t = i} ψ̄_t. Placing Dirichlet(α) priors on the rows of B, and an appropriate conjugate prior on C, we have

p(X, U, B, C) ∝ ∏_{i=1}^{S} [ (Γ(Sα)/Γ(α)^S) ∏_{j=1}^{S} B_{ij}^{#_{ij}+α−1} ] ∏_{i=1}^{S} exp(C_iᵀ(ψ̄_i + ψ₀)) ∏_{t=0}^{|T|} ∏_{l=1}^{n_t} h(x_{lt}) / Z(C_{u_t}).

Integrating out B and C, and writing #_i for the number of visits to state i, we have:

p(X, U) ∝ ∏_{i=1}^{S} [ (Γ(Sα)/Γ(#_i + Sα)) ∏_{j=1}^{S} Γ(#_{ij} + α)/Γ(α) ] ∏_{i=1}^{S} Z̄_i(ψ̄_i + ψ₀),

where Z̄_i is the normalizer associated with the conjugate prior over C_i. Then, the conditional of u_t given the remaining variables is

p(u_t = k | U^{−t}, X) ∝ [ (#^{−t}_{u_{t−1},k} + α)(#^{−t}_{k,u_{t+1}} + α + δ^{t−1,t+1}_k) / (#^{−t}_k + Sα) ] · Z̄_k(ψ̄^{−t}_k + ψ̄_t + ψ₀) / Z̄_k(ψ̄^{−t}_k + ψ₀),

where the superscript −t denotes counts and statistics computed with u_t removed, and δ^{t−1,t+1}_k is a correction term for the case u_{t−1} = k = u_{t+1} (see Wang and Blunsom (2013) for the precise form).
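As a rough illustration of how such collapsed conditionals are evaluated in the multinomial-observation case, consider the following sketch. It ignores the boundary correction terms and plugs in posterior-mean emission probabilities instead of the exact Dirichlet-multinomial predictive, so it is a simplification of the expression above; all names are ours.

```python
import numpy as np

def collapsed_scores(prev_s, next_s, counts, out_counts, obs_post, seg_obs,
                     alpha, S):
    """Unnormalized scores for p(u_t = k | rest), multinomial observations.

    counts[i, j] : transition counts with segment t's transitions removed
    out_counts[i]: transitions out of state i (row sums of `counts`)
    obs_post[k]  : Dirichlet posterior counts (prior + data) of state k's
                   emissions, excluding segment t's observations
    seg_obs      : symbol counts of the observations in segment t
    Ignores the correction needed when prev_s == k == next_s, and uses a
    plug-in posterior-mean likelihood (a CVB0-style simplification).
    """
    scores = np.empty(S)
    for k in range(S):
        trans = (counts[prev_s, k] + alpha) * (counts[k, next_s] + alpha) \
                / (out_counts[k] + S * alpha)
        mean_probs = obs_post[k] / obs_post[k].sum()
        loglik = np.sum(seg_obs * np.log(mean_probs))
        scores[k] = trans * np.exp(loglik)
    return scores
```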
Standard calculations for variational inference give the solution to argmin_q KL(q(U, T, Ω) ‖ p(U, T, Ω | X)) as q(u_t) ∝ exp(E_{q^{−t}}[log p(u_t | U^{−t}, X)]). We then have the update rule:

q(u_t = k) ∝ [ E_{q^{−t}}[#^{−t}_{u_{t−1},k} + α] · E_{q^{−t}}[#^{−t}_{k,u_{t+1}} + α + δ^{t−1,t+1}_k] / E_{q^{−t}}[#^{−t}_k + Sα] ] · E_{q^{−t}}[Z̄_k(ψ̄^{−t}_k + ψ̄_t + ψ₀)] / E_{q^{−t}}[Z̄_k(ψ̄^{−t}_k + ψ₀)].

For the special case of multinomial observations, we refer to Wang and Blunsom (2013).

Figure 2: (left) merging two time segments; (right) splitting a time segment. Horizontal arrows are VB messages.

2) Updating q(T): We perform a greedy search over the space of time-discretizations by making local stochastic updates to the current T. In each iteration, we first scan the current T to find a beneficial merge (Figure 2, left): go through the transition times in sequential or random order, merge with the next time interval, compute the variational lower bound under this discretization, and accept if it results in an improvement. This eliminates unnecessary transition times, reducing fragmentation of the segmentation and the complexity of the learnt model. Calculating the variational bound for the new discretization requires merging the probability vectors associated with the two time segments into a new one. One approach is to initialize this vector to some arbitrary quantity, run step 1 until the q's converge, and use the updated variational bound to accept or reject this proposal. Rather than taking this time-consuming approach, we found it adequate to set the new q to a convex combination of the old q's, each weighted by the length of its corresponding interval. In our experiments, we found that this performed comparably at a much lower computational cost. If no merge is found, we then try to find a beneficial split: go through the time segments in some order, now splitting each interval into two. After each split, compare the likelihood before and after the split, and accept (and return) if the improvement exceeds a threshold. Again, such a split requires computing probability vectors for the newly created segments. Here, we assign each new segment the same vector as the original segment (plus some noise to break symmetry). We then run one pass of step 1, updating the q's on either side of the new segment, and then updating the q's in the two new segments. We consider two interval-splitting schemes, bisection and random splitting. Overall, our approach is related to split-merge approaches for variational inference in nonparametric Bayesian models (Hughes et al., 2015); these too maintain and optimize point estimates of complex, combinatorial objects, instead of maintaining uncertainty over quantities like cluster assignments.

In our real-world check-in application, we consider a situation where there is not just one MJP trajectory, but a number of trajectories corresponding to different users. In this situation, we take a stochastic variational Bayes approach, picking a random user and following the steps outlined earlier.

Updating q(Ω): With a Gamma(a₁, a₂) prior over Ω, the posterior over Ω is also Gamma, and we could set Ω to the MAP. We sometimes found this greedy approach unstable, and instead use a partial update, with the new Ω equal to the mean of the old value and the MAP value. Writing s for the total number of transition times in all m trajectories, this gives us Ω_new = (Ω_old + (a₁ + s)/(a₂ + m))/2.
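Schematically, one sweep of the overall coordinate-ascent procedure combines these three updates. In the sketch below, update_q, try_merge and try_split are placeholders for steps 1 and 2 above and are not the authors' code; only the Ω update is taken directly from the formula just given.

```python
def vb_sweep(trajs, omega, a1, a2, update_q, try_merge, try_split):
    """One coordinate-ascent sweep of the collapsed VB algorithm.

    trajs is a list of per-user trajectory structures, each exposing its
    current transition times under trajs[i]["times"]. The three callables
    implement the mean-field update of q(u_t) and the greedy merge/split
    search over the discretization T.
    """
    for traj in trajs:
        update_q(traj, omega)
        if not try_merge(traj, omega):   # prefer merges; split only if none found
            try_split(traj, omega)
    s = sum(len(traj["times"]) for traj in trajs)  # total transition times
    m = len(trajs)
    # partial update for the point estimate of Omega (mean of old value and MAP)
    return 0.5 * (omega + (a1 + s) / (a2 + m))
```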
4 Experiments

We present qualitative and quantitative experiments using synthetic and real datasets to demonstrate the accuracy and efficiency of our variational Bayes (VB) algorithm. We mostly focus on comparisons with the MCMC algorithms from Rao and Teh (2014) and Rao and Teh (2012).

Datasets. We use a dataset of check-in sequences from 8967 FourSquare users in the year 2011, originally collected by Gao et al. (2012) for studying location-based social networks. Each check-in has a time stamp and a location (latitude and longitude), with users having 191 check-in records on average. We only consider check-ins inside a rectangle containing the United States and parts of Mexico and Canada (see Figure 3, left), and randomly select 200 such sequences for our experiments. We partition the space into a 40 × 40 grid, and define the observation distribution of each MJP state as a categorical distribution over the grid cells. See Pan et al. (2016) for more details on this application. We also use two synthetic datasets in our experiments, with observations in a 5 × 5 grid.

Figure 3: (left) check-ins of 500 users; (right-top) heatmap of emission matrices; (right-bottom) true and inferred trajectories (y-values perturbed for clarity).

Figure 4: (left, middle) posterior distribution over states of two trajectories in the second synthetic dataset; (right) evolution of log p(T | Ω, X) in the VB algorithm for two sample sequences.

For the first dataset, we fix Ω = 20 and construct a transition matrix B for 5 states with B(i, i) = 0.8,
MCMC also performs well in this case, although as we will show, it is significantly more expensive. The inferred posteriors of trajectories have more uncertainty for the second synthetic dataset. Figure 4 (left and middle) visualizes the posterior distributions of two hidden trajectories with darker regions for higher probabilities. The ability to maintain posterior uncertainty about the trajectory information 2.7 reconstruction error 2.0 2.6 reconstruction error Figure 5: reconstruction error of MCMC and VB (using random and even splitting) for the (left) first and (right) the second synthetic dataset. The random split scheme is in blue , even split scheme is in red , and VB random split scheme with true omega in orange. MCMC is in black. 1.9 2.5 1.8 2.4 1.7 2.3 1.6 0 500 1000 1500 running time (seconds) 6 2000 0 500 1000 1500 running time (seconds) 2000 15 1000 # of trajectories # of trajectories # of trajectories 30 10 20 10 0 5 750 500 250 0 10 20 # of transitions 30 30 Figure 6: Synthetic dataset 1(top) and 2(bottom): Histogram of number of transitions using VB with (left) random splitting; (middle) even spliting; (right) using MCMC. 0 0 0 10 20 # of transitions 0 30 10 20 # of transitions 30 10 20 # of transitions 30 # of trajectories 400 # of trajectories # of trajectories 20 15 20 10 10 200 5 0 0 30 0 Figure 7: histogram of number of transitions using (left) VB and (middle) MCMC; (right) transition times of 10 users using VB 10 20 # of transitions 0 30 120 50 100 40 # of trajectories 10 20 # of transitions # of trajectories 0 80 60 40 20 20 0 0 5 10 15 # of transitions 8 30 10 0 10 trajectory id 0 6 4 2 0 20 40 # of transitions 60 80 0 0 0.2 0.4 0.6 0.8 1 time is important in real world applications, and is something that k-means-style approximate inference algorithms (Huggins et al., 2015) ignore. Inferred trajectories for real-world data. We run the VB algorithm on the check-in data using 50 states for 200 iterations. Modeling such data with MJPs will recover MJP states corresponding to cities or areas of dense population/high check-in activity. We investigate several aspects about the MJP trajectories inferred by the algorithm. Figure 4(right) shows the evolution of log p(T | ?, X) (up to constant factor) of two sample trajectories. This value is used to determine whether a merge or split is beneficial in our VB algorithm. It has an increasing trend for most sequences in the dataset, but can sometimes decrease as the trajectory discretization evolves. This is expected, since our stochastic algorithm maintains a pseudo-bound. Figure 6 shows similar results for the synthetic datasets. Normally, we expect a user to switch areas of check-in activity only a few times in a year. Indeed, Figure 7 (left) shows the histogram of the number of transition times across all trajectories, and the majority of trajectories have 3 or less transitions. We also plot the actual transition times of 10 random trajectories (right). In contrast, MCMC tends to produce more transitions, many of which are redundant. This is a side effect of uniformization in MCMC sampling, which requires a homogeneously dense Poisson distributed trajectory discretization at every iteration. Running time vs. reconstruction error. We measure the quality of the inferred posterior distributions of trajectories using a reconstruction task on the check-in data. We randomly select 100 test sequences, and randomly hold out half of the observations in each test sequence. 
The training data consists of the observations that are not held out, i.e., 100 full sequences and 100 half sequences. We run our VB algorithm on this training data for 200 iterations. After each iteration, we reconstruct the held-out observations as follows: given a held-out observation at time t on test sequence τ, using the maximum-likelihood grid cell to represent each state, we compute the expected grid distance between the true and predicted observations using the estimated posterior q(u_t). The reconstruction error for τ is computed by averaging the grid distances over all held-out observations in τ. The overall reconstruction error is the average reconstruction error over all test sequences. Similarly, we run the MCMC algorithm on the training data for 1000 iterations, and compute the overall reconstruction error after every 10 iterations, using the last 300 iterations to approximate the posterior distribution of the MJP trajectories. We also run an improved variant of the MCMC algorithm, where we use the generalized uniformization scheme of Rao and Teh (2012) with a different Ω_i for each state. This allows coarser discretizations for some states and typically runs faster per iteration.

Figure 8: (left) reconstruction error of VB and MCMC algorithms; (middle) reconstruction error using random and even splitting; (right) reconstruction error for more iterations.

Figure 9: Posterior distribution over states of three trajectories in the check-in dataset.

Figure 8 (left) shows the evolution of reconstruction error during the algorithms. The error using VB plateaus much more quickly than for the MCMC algorithms. The error gap between MCMC and VB is because of slow mixing of the paths and parameters, a result of the coupling between latent states and observations, as well as of modeling approximations. Although the improved MCMC takes less time per iteration, it is not more effective for reconstruction in this experiment. Figure 5 shows similar results for the synthetic datasets. Figure 9 visualizes the posterior distributions of three hidden trajectories, with darker shades for higher probabilities. We have chosen to split each time interval randomly in our VB algorithm. Another possibility is to simply split it evenly. Figure 8 (middle) compares the reconstruction error of the two splitting schemes. Random splitting has lower error since it produces more successful splits; on the other hand, the running time is smaller with even splitting due to fewer transitions in the inferred trajectories. In Figure 8 (right), we resampled the training set and the testing set and ran the experiment for longer. It shows that the error gap between VB and MCMC is closing.

Related and future work: Posterior inference for MJPs has primarily been carried out via MCMC (Hobolth and Stone, 2009; Fearnhead and Sherlock, 2006; Bladt and Sørensen, 2005; Metzner et al., 2007). The state-of-the-art MCMC approach is the scheme of Rao and Teh (2014, 2012), both based on uniformization. Other MCMC approaches center around particle MCMC (Andrieu et al.,
2010), e.g., Hajiaghayi et al. (2014). There have also been a few deterministic approaches to posterior inference. The earliest variational approach is from Opper and Sanguinetti (2007), although they consider a different problem from ours, viz. systems of interacting MJPs (e.g., the population sizes of predator and prey species, or gene networks). They then use a mean-field posterior approximation where these processes are assumed independent. Our algorithm focuses on a single, simple MJP, and an interesting extension is to put the two schemes together for systems of coupled MJPs. Finally, a recent paper (Huggins et al., 2015) studies the MJP posterior using a small-variance asymptotic limit. This approach, which generalizes k-means type algorithms to MJPs, however provides only point estimates of the MJP trajectory and parameters, and cannot represent posterior uncertainty. Additionally, it still involves coupling between the MJP parameters and trajectory, an issue we bypass with our collapsed algorithm. There are a number of interesting extensions worth studying. First is to consider more structured variational approximations (Wang and Blunsom, 2013) than the factorial approximations we considered here. Also of interest are extensions to more complex MJPs, with infinite state-spaces (Saeedi and Bouchard-Côté, 2011) or structured state-spaces (Opper and Sanguinetti, 2007). It is also interesting to look at different extensions of the schemes we proposed in this paper: different choices of split-merge proposals, and more complicated posterior approximations of the parameter Ω. Finally, it is instructive to use other real-world datasets to compare our approaches with more traditional MCMC approaches.

References

Andrieu, C., Doucet, A., and Holenstein, R. (2010). Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society Series B, 72(3):269–342.
Bladt, M. and Sørensen, M. (2005). Statistical inference for discretely observed Markov jump processes. Journal of the Royal Statistical Society: B, 67(3):395–410.
Boys, R. J., Wilkinson, D. J., and Kirkwood, T. B. L. (2008). Bayesian inference for a discretely observed stochastic kinetic model. Statistics and Computing, 18(2):125–135.
Çinlar, E. (1975). Introduction to Stochastic Processes. Prentice Hall.
Fearnhead, P. and Sherlock, C. (2006). An exact Gibbs sampler for the Markov-modulated Poisson process. Journal of the Royal Statistical Society Series B, 68(5):767–784.
Gao, H., Tang, J., and Liu, H. (2012). gSCorr: Modeling geo-social correlations for new check-ins on location-based social networks. In Proc. of the 21st ACM Conf. on Information and Knowledge Management. ACM.
Gillespie, D. T. (1977). Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem., 81(25):2340–2361.
Hajiaghayi, M., Kirkpatrick, B., Wang, L., and Bouchard-Côté, A. (2014). Efficient Continuous-Time Markov Chain Estimation. In International Conference on Machine Learning (ICML), volume 31, pages 638–646.
Hobolth, A. and Stone, E. (2009). Simulation from endpoint-conditioned, continuous-time Markov chains on a finite state space, with applications to molecular evolution. Ann. Appl. Stat., 3(3):1204.
Huggins, J. H., Narasimhan, K., Saeedi, A., and Mansinghka, V. K. (2015). Jump-means: Small-variance asymptotics for Markov jump processes. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 693–701.
Hughes, M. C., Stephenson, W. T., and Sudderth, E. B. (2015).
Scalable adaptation of state complexity for nonparametric hidden Markov models. In NIPS 28, pages 1198–1206.
Jensen, A. (1953). Markoff chains as an aid in the study of Markoff processes. Skand. Aktuarietidskr., 36:87–91.
Metzner, P., Horenko, I., and Schütte, C. (2007). Generator estimation of Markov jump processes based on incomplete observations nonequidistant in time. Phys. Rev. E, 76.
Opper, M. and Sanguinetti, G. (2007). Variational inference for Markov jump processes. In NIPS 20.
Pan, J., Rao, V., Agarwal, P., and Gelfand, A. (2016). Markov-modulated marked Poisson processes for check-in data. In International Conference on Machine Learning, pages 2244–2253.
Rao, V. and Teh, Y. W. (2014). Fast MCMC sampling for Markov jump processes and extensions. Journal of Machine Learning Research, 13.
Rao, V. A. and Teh, Y. W. (2012). MCMC for continuous-time discrete-state systems. In Bartlett, P., Pereira, F., Burges, C., Bottou, L., and Weinberger, K., editors, Advances in Neural Information Processing Systems 25, pages 710–718.
Saeedi, A. and Bouchard-Côté, A. (2011). Priors over Recurrent Continuous Time Processes. In NIPS 24.
Wang, P. and Blunsom, P. (2013). Collapsed variational Bayesian inference for hidden Markov models. In AISTATS.
Xu, J. and Shelton, C. R. (2010). Intrusion detection using continuous time Bayesian networks. Journal of Artificial Intelligence Research, 39:745–774.
6,595
6,966
Universal consistency and minimax rates for online Mondrian Forests

Jaouad Mourtada
Centre de Mathématiques Appliquées, École Polytechnique, Palaiseau, France
[email protected]

Stéphane Gaïffas
Centre de Mathématiques Appliquées, École Polytechnique, Palaiseau, France
[email protected]

Erwan Scornet
Centre de Mathématiques Appliquées, École Polytechnique, Palaiseau, France
[email protected]

Abstract
We establish the consistency of an algorithm of Mondrian Forests [LRT14, LRT16], a randomized classification algorithm that can be implemented online. First, we amend the original Mondrian Forest algorithm proposed in [LRT14], that considers a fixed lifetime parameter. Indeed, the fact that this parameter is fixed hinders the statistical consistency of the original procedure. Our modified Mondrian Forest algorithm grows trees with increasing lifetime parameters $\lambda_n$, and uses an alternative updating rule, allowing to work also in an online fashion. Second, we provide a theoretical analysis establishing simple conditions for consistency. Our theoretical analysis also exhibits a surprising fact: our algorithm achieves the minimax rate (optimal rate) for the estimation of a Lipschitz regression function, which is a strong extension of previous results [AG14] to an arbitrary dimension.

1 Introduction
Random Forests (RF) are state-of-the-art classification and regression algorithms that proceed by averaging the forecasts of a number of randomized decision trees grown in parallel (see [Bre01, Bre04, GEW06, BDL08, Bia12, BS16, DMdF14, SBV15]). Despite their widespread use and remarkable success in practical applications, the theoretical properties of such algorithms are still not fully understood [Bia12, DMdF14]. Among these methods, purely random forests [Bre00, BDL08, Gen12, AG14], which grow the individual trees independently of the sample, are particularly amenable to theoretical analysis; the consistency of such classifiers was obtained in [BDL08].
An important limitation of the most commonly used random forests algorithms, such as Breiman's Random Forest [Bre01] and the Extra-Trees algorithm [GEW06], is that they are typically trained in a batch manner, using the whole dataset to build the trees. In order to enable their use in situations when large amounts of data have to be incorporated in a streaming fashion, several online adaptations of the decision trees and RF algorithms have been proposed [DH00, TGP11, SLS+09, DMdF13]. Of particular interest in this article is the Mondrian Forest algorithm, an efficient and accurate online random forest classifier [LRT14]. This algorithm is based on the Mondrian process [RT09, Roy11], a natural probability distribution on the set of recursive partitions of the unit cube $[0,1]^d$. An appealing property of Mondrian processes is that they can be updated in an online fashion: in [LRT14], the use of the conditional Mondrian process enabled to design an online algorithm that matched its batch counterpart. While Mondrian Forests offer several advantages, both computational and in terms of predictive performance, the algorithm proposed in [LRT14] depends on a fixed lifetime parameter $\lambda$ that guides the complexity of the trees. Since this parameter has to be set in advance, the resulting algorithm is inconsistent, as the complexity of the randomized trees remains bounded. Furthermore, an analysis of the learning properties of Mondrian Forest
(and in particular of the influence and proper theoretical tuning of the lifetime parameter $\lambda$) is still lacking.
In this paper, we propose a modified online random forest algorithm based on Mondrian processes. Our algorithm retains the crucial property of the original method [LRT14] that the decision trees can be updated incrementally. However, contrary to the original approach, our algorithm uses an increasing sequence of lifetime parameters $(\lambda_n)_{n \ge 1}$, so that the corresponding trees are increasingly complex, and involves an alternative online updating algorithm. We study such classification rules theoretically, establishing simple conditions on the sequence $(\lambda_n)_{n \ge 1}$ to achieve consistency, see Theorem 1 from Section 5 below. In fact, Mondrian Forests achieve much more than what they were designed for: while they were primarily introduced to derive an online algorithm, we show in Theorem 2 (Section 6) that they actually achieve minimax convergence rates for Lipschitz conditional probability (or regression) functions in arbitrary dimension. To the best of our knowledge, such results have only been proved for very specific purely random forests, where the covariate dimension is equal to one.
Related work. While random forests were introduced in the early 2000s [Bre01], as noted by [DMdF14] the theoretical analysis of these methods is outpaced by their practical use. The consistency of various simplified random forests algorithms was first established in [BDL08], as a byproduct of the consistency of individual tree classifiers. A recent line of research [Bia12, DMdF14, SBV15] has sought to obtain theoretical guarantees (i.e. consistency) for random forests variants that more closely resemble the algorithms used in practice. Another aspect of the theoretical study of random forests is the bias-variance analysis of simplified versions of random forests [Gen12, AG14], such as the purely random forests (PRF) model that performs splits independently of the data. In particular, [Gen12] shows that some PRF variants achieve the minimax rate for the estimation of a Lipschitz regression function in dimension 1. Additionally, the bias-variance analysis is extended in [AG14], showing that PRF can also achieve minimax rates for $C^2$ regression functions in dimension one, and considering higher dimensional models of PRF that achieve suboptimal rates.
Starting with [SLS+09], online variants of the random forests algorithm have been considered. In [DMdF13], the authors propose an online random forest algorithm and prove its consistency. The procedure relies on a partitioning of the data into two streams: a structure stream (used to grow the tree structure) and an estimation stream (used to compute the prediction in each leaf). This separation of the data into separate streams is a way of simplifying the proof of consistency, but leads to a non-realistic setting in practice. A major development in the design of online random forests is the introduction of the Mondrian Forest (MF) classifier [LRT14, LRT16]. This algorithm makes an elegant use of the Mondrian process, introduced in [RT09], see also [Roy11, OR15], to draw random trees. Indeed, this process provides a very convenient probability distribution over the set of recursive, tree-based partitions of the hypercube. In [BLG+16], the links between the Mondrian process and the Laplace kernel are used to design random features in order to efficiently approximate kernel ridge regression, leading to the so-called Mondrian kernel algorithm.
Our approach differs from the original Mondrian Forest algorithm [LRT14], since it introduces a "dual" construction, that works in the "time" domain (lifetime parameters) instead of the "space" domain (features range). Indeed, in [LRT14], the splits are selected using a Mondrian process on the range of previously observed feature vectors, and the online updating of the trees is enabled by the possibility of extending a Mondrian process to a larger cell using conditional Mondrian processes. Our algorithm incrementally grows the trees by extending the lifetime; the online update of the trees exploits the Markov property of the Mondrian process, a consequence of its formulation in terms of competing exponential clocks.

2 Setting and notation
We first explain the considered setting allowing to state consistency of our procedure, and we describe and set notation for the main concepts used in the paper, namely trees, forests and partitions.
Considered setting. Assume we are given an i.i.d. sequence $(X_1, Y_1), (X_2, Y_2), \ldots$ of $[0,1]^d \times \{0,1\}$-valued random variables that come sequentially, such that each $(X_i, Y_i)$ has the same distribution as $(X, Y)$. This unknown distribution is characterized by the distribution $\mu$ of $X$ on $[0,1]^d$ and the conditional probability $\eta(x) = \mathbb{P}(Y = 1 \mid X = x)$. At each time step $n \ge 1$, we want to output a 0-1-valued randomized classification rule $g_n(\cdot, Z, D_n) : [0,1]^d \to \{0,1\}$, where $D_n = (X_1, Y_1), \ldots, (X_n, Y_n)$ and $Z$ is a random variable that accounts for the randomization procedure; to simplify notation, we will generally denote $\hat g_n(x, Z) = g_n(x, Z, D_n)$. The quality of a randomized classifier $g_n$ is measured by its probability of error
$$L(g_n) = \mathbb{P}(g_n(X, Z, D_n) \ne Y \mid D_n) = \mathbb{P}_{(X,Y),Z}(g_n(X, Z, D_n) \ne Y) \qquad (1)$$
where $\mathbb{P}_{(X,Y),Z}$ denotes the integration with respect to $(X, Y), Z$ alone. The quantity of Equation (1) is minimized by the Bayes classifier $g^*(x) = 1\{\eta(x) > \frac{1}{2}\}$, and its loss, the Bayes error, is denoted $L^* = L(g^*)$. We say that a sequence of classification rules $(g_n)_{n \ge 1}$ is consistent whenever $L(g_n) \to L^*$ in probability as $n \to \infty$.
Remark 1. We restrict ourselves to binary classification; note however that our results and proofs can be extended to multi-class classification.
Trees and Forests. The classification rules $(g_n)_{n \ge 1}$ we consider take the form of a random forest, defined by averaging randomized tree classifiers. More precisely, let $K \ge 1$ be a fixed number of randomized classifiers $\hat g_n(x, Z_1), \ldots, \hat g_n(x, Z_K)$ associated to the same randomized mechanism, where the $Z_k$ are i.i.d. Set $Z^{(K)} = (Z_1, \ldots, Z_K)$. The averaging classifier $\hat g_n^{(K)}(x, Z^{(K)})$ is defined by taking the majority vote among the values $g_n(x, Z_k)$, $k = 1, \ldots, K$. Our individual randomized classifiers are decision trees. A decision tree $(T, \Sigma)$ is composed of the following components:
- A finite rooted ordered binary tree $T$, with nodes $N(T)$, interior nodes $N^\circ(T)$ and leaves $L(T)$ (so that $N(T)$ is the disjoint union of $N^\circ(T)$ and $L(T)$). Each interior node $\eta$ has a left child $\mathrm{left}(\eta)$ and a right child $\mathrm{right}(\eta)$;
- A family of splits $\Sigma = (\sigma_\eta)_{\eta \in N^\circ(T)}$ at each interior node, where each split $\sigma_\eta = (d_\eta, \xi_\eta)$ is characterized by its split dimension $d_\eta \in \{1, \ldots, d\}$ and its threshold $\xi_\eta$.
Each randomized classifier $\hat g_n(x, Z_k)$ relies on a decision tree $T$; the random variable $Z_k$ is the random sampling of the splits $(\sigma_\eta)$ defining $T$. This sampling mechanism, based on the Mondrian process, is defined in Section 3. We associate to $M = (T, \Sigma)$
a partition $(A_\eta)_{\eta \in L(T)}$ of the unit cube $[0,1]^d$, called a tree partition (or guillotine partition). For each node $\eta \in N(T)$, we define a hyper-rectangular region $A_\eta$ recursively:
- The cell associated to the root of $T$ is $[0,1]^d$;
- For each $\eta \in N^\circ(T)$, we define $A_{\mathrm{left}(\eta)} := \{x \in A_\eta : x_{d_\eta} \le \xi_\eta\}$ and $A_{\mathrm{right}(\eta)} := A_\eta \setminus A_{\mathrm{left}(\eta)}$.
The leaf cells $(A_\eta)_{\eta \in L(T)}$ form a partition of $[0,1]^d$ by construction. In the sequel, we will identify a tree with splits $(T, \Sigma)$ with the associated tree partition $M(T, \Sigma)$, and a node $\eta \in N(T)$ with the cell $A_\eta \subseteq [0,1]^d$. The decision tree classifier outputs a constant prediction of the label in each leaf cell $A_\eta$, using a simple majority vote of the labels $Y_i$ ($1 \le i \le n$) such that $X_i \in A_\eta$.

3 A new online Mondrian Forest algorithm
We describe the Mondrian process in Section 3.1, and recall the original Mondrian Forest procedure in Section 3.2. Our procedure is introduced in Section 3.3.

3.1 The Mondrian process
The probability distribution we consider on tree-based partitions of the unit cube $[0,1]^d$ is the Mondrian process, introduced in [RT09]. Given a rectangular box $C = \prod_{j=1}^d [a_j, b_j]$, we denote $|C| := \sum_{j=1}^d (b_j - a_j)$ its linear dimension. The Mondrian process distribution $MP(\lambda, C)$ is the distribution of the random tree partition of $C$ obtained by the sampling procedure SampleMondrian$(\lambda, C)$ from Algorithm 1.

Algorithm 1 SampleMondrian$(\lambda, C)$; Samples a tree partition distributed as $MP(\lambda, C)$.
1: Parameters: A rectangular box $C \subseteq \mathbb{R}^d$ and a lifetime parameter $\lambda > 0$.
2: Call SplitCell$(C, \tau_C := 0, \lambda)$.

Algorithm 2 SplitCell$(A, \tau, \lambda)$; Recursively split a cell $A$, starting from time $\tau$, until $\lambda$.
1: Parameters: A cell $A = \prod_{1 \le j \le d} [a_j, b_j]$, a starting time $\tau$ and a lifetime parameter $\lambda$.
2: Sample an exponential random variable $E_A$ with intensity $|A|$.
3: if $\tau + E_A \le \lambda$ then
4:   Draw at random a split dimension $J \in \{1, \ldots, d\}$, with $\mathbb{P}(J = j) = (b_j - a_j)/|A|$, and a split threshold $\xi_J$ uniformly in $[a_J, b_J]$.
5:   Split $A$ along the split $(J, \xi_J)$.
6:   Call SplitCell$(\mathrm{left}(A), \tau + E_A, \lambda)$ and SplitCell$(\mathrm{right}(A), \tau + E_A, \lambda)$.
7: else
8:   Do nothing.
9: end if

3.2 Online tree growing: the original scheme
In order to implement an online algorithm, it is crucial to be able to "update" the tree partitions grown at a given time step. The approach of the original Mondrian Forest algorithm [LRT14] uses a slightly different randomization mechanism, namely a Mondrian process supported in the range defined by the past feature points. More precisely, this modification amounts to replacing each call to SplitCell$(A, \tau, \lambda)$ by a call to SplitCell$(A_{\mathrm{range}(n)}, \tau, \lambda)$, where $A_{\mathrm{range}(n)}$ is the range of the feature points $X_1, \ldots, X_n$ that fall in $A$ (i.e. the smallest box that contains them). When a new training point $(X_{n+1}, Y_{n+1})$ arrives, the ranges of the training points may change. The online update of the tree partition then relies on the extension properties of the Mondrian process: given a Mondrian partition $M_1 \sim MP(\lambda, C_1)$ on a box $C_1$, it is possible to efficiently sample a Mondrian partition $M_0 \sim MP(\lambda, C_0)$ on a larger box $C_0 \supseteq C_1$ that restricts to $M_1$ on the cell $C_1$ (this is called a "conditional Mondrian", see [RT09]).
Remark 2. In [LRT14] a lifetime parameter $\lambda = \infty$ is actually used in experiments, which essentially amounts to growing the trees completely, until the leaves are homogeneous. We will not analyze this variant here, but this illustrates the problem of using a fixed, finite budget $\lambda$ in advance.
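To make the sampling procedure concrete, here is a minimal Python sketch of Algorithms 1 and 2. It is our own illustrative rendering rather than code from the paper: the `Node` box representation, the recursion and the default random generator are all assumptions made for the example.

```python
import random

class Node:
    """A cell of the tree partition; a node is a leaf until it is split."""
    def __init__(self, lows, highs, birth_time):
        self.lows, self.highs = list(lows), list(highs)
        self.birth_time = birth_time   # time tau at which the cell was created
        self.split_dim = None          # None marks a leaf
        self.split_loc = None
        self.left = self.right = None

    def linear_dim(self):
        # |A| = sum_j (b_j - a_j), the linear dimension of the box
        return sum(b - a for a, b in zip(self.lows, self.highs))

def split_cell(node, tau, lam, rng=random):
    """Algorithm 2: recursively split `node` from time tau until lifetime lam."""
    wait = rng.expovariate(node.linear_dim())   # exponential clock with rate |A|
    if tau + wait <= lam:
        sides = [b - a for a, b in zip(node.lows, node.highs)]
        j = rng.choices(range(len(sides)), weights=sides)[0]  # P(J=j) prop. to side length
        theta = rng.uniform(node.lows[j], node.highs[j])      # threshold uniform in [a_J, b_J]
        node.split_dim, node.split_loc = j, theta
        node.left = Node(node.lows, node.highs, tau + wait)
        node.left.highs[j] = theta
        node.right = Node(node.lows, node.highs, tau + wait)
        node.right.lows[j] = theta
        split_cell(node.left, tau + wait, lam, rng)
        split_cell(node.right, tau + wait, lam, rng)
    # else: the next candidate split falls after the lifetime; the cell stays a leaf

def sample_mondrian(d, lam, rng=random):
    """Algorithm 1: sample a tree partition distributed as MP(lam, [0,1]^d)."""
    root = Node([0.0] * d, [1.0] * d, birth_time=0.0)
    split_cell(root, 0.0, lam, rng)
    return root
```

The exponential clock with rate $|A|$ and the side-length-proportional choice of the split dimension are exactly the two ingredients that make the resulting partition distribution consistent across scales.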
3.3 Online tree growing: a dual approach
An important limitation of the original scheme is the fact that it requires fixing the lifetime parameter $\lambda$ in advance. In order to obtain a consistent algorithm, it is required to grow increasingly complex trees. To achieve this, we propose to adopt a "dual" point of view: instead of using a Mondrian process with fixed lifetime on a domain that changes as new data points are added, we use a Mondrian process on a fixed domain (the cube $[0,1]^d$) but with a varying lifetime $\lambda_n$ that grows with the sample size $n$. The rationale is that, as more data becomes available, the classifiers should be more complex and precise. Since the lifetime, rather than the domain, is the parameter that guides the complexity of the trees, it should be this parameter that dynamically adapts to the amount of training data. It turns out that in this approach, quite surprisingly, the trees can be updated incrementally, leading to an online algorithm.
The ability to extend a tree partition $M_{\lambda_n} \sim MP(\lambda_n, [0,1]^d)$ into a finer tree partition $M_{\lambda_{n+1}} \sim MP(\lambda_{n+1}, [0,1]^d)$ relies on a different property of the Mondrian process, namely the fact that for $\lambda < \lambda'$, it is possible to efficiently sample a Mondrian tree partition $M_{\lambda'} \sim MP(\lambda', C)$ given its pruning $M_\lambda \sim MP(\lambda, C)$ at time $\lambda$ (obtained by dropping all splits of $M_{\lambda'}$ performed at a time $\tau > \lambda$). The procedure ExtendMondrian$(M_\lambda, \lambda, \lambda')$ from Algorithm 3 extends a Mondrian tree partition $M_\lambda \sim MP(\lambda, C)$ to a tree partition $M_{\lambda'} \sim MP(\lambda', C)$.

Algorithm 3 ExtendMondrian$(M_\lambda, \lambda, \lambda')$; Extend $M_\lambda \sim MP(\lambda, C)$ to $M_{\lambda'} \sim MP(\lambda', C)$.
1: Parameters: A tree partition $M_\lambda$, and lifetimes $\lambda \le \lambda'$.
2: for $A$ in $L(M_\lambda)$ do
3:   Call SplitCell$(A, \lambda, \lambda')$
4: end for

Indeed, for each leaf cell $A$ of $M_\lambda$, the fact that $A$ is a leaf of $M_\lambda$ means that during the sampling of $M_\lambda$, the time of the next candidate split $\tau_A + E_A$ (where $\tau_A$ is the time $A$ was formed and $E_A \sim \mathrm{Exp}(|A|)$) was strictly larger than $\lambda$. Now in the procedure ExtendMondrian$(M_\lambda, \lambda, \lambda')$, the time of the next candidate split is $\lambda + E'_A$, where $E'_A \sim \mathrm{Exp}(|A|)$. This is precisely where the trick resides: by the memoryless property of the exponential distribution, the distribution of $\tau_A + E_A$ conditionally on $E_A > \lambda - \tau_A$ is the same as that of $\lambda + E'_A$.
The procedure ExtendMondrian can be replaced by the following more efficient implementation (a Python sketch of the simple version is given below):
- The time of the next split of the tree is sampled as $\lambda + E_{M_\lambda}$ with $E_{M_\lambda} \sim \mathrm{Exp}\big(\sum_{\eta \in L(M_\lambda)} |A_\eta|\big)$;
- The leaf to split is chosen using a top-down path from the root of the tree, where the choice between left or right child for each interior node is sampled at random, proportionally to the linear dimension of all the leaves in the subtree defined by the child.
Remark 3. While we consider Mondrian partitions on the fixed domain $[0,1]^d$, our increasing lifetime trick can be used in conjunction with a varying domain based on the range of the data (as in the original MF algorithm), simply by applying ExtendMondrian$(M_{\lambda_n}, \lambda_n, \lambda_{n+1})$ after having extended the Mondrian to the new range. In order to keep the analysis tractable and avoid unnecessary complications, we will study the procedure on a fixed domain only.
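Algorithm 3 translates directly into code on top of the sketch above. A minimal version, with a leaf-collection helper of our own, could look as follows; only the naive per-leaf loop is shown, not the more efficient global-clock variant.

```python
def leaves(node):
    """Collect the leaf cells of a (partial) tree partition."""
    if node.split_dim is None:
        return [node]
    return leaves(node.left) + leaves(node.right)

def extend_mondrian(root, lam_old, lam_new, rng=random):
    """Algorithm 3: extend M ~ MP(lam_old, C) into a sample of MP(lam_new, C).

    Restarting the exponential clock of every current leaf at time lam_old is
    legitimate by the memoryless property: conditionally on having survived
    past lam_old, the next candidate split time of a leaf is lam_old + Exp(|A|).
    """
    for leaf in leaves(root):
        split_cell(leaf, lam_old, lam_new, rng)
```

The efficient variant described above would instead draw a single exponential clock for the whole tree and route a top-down path to the leaf to split; the loop shown here yields the same distribution but touches every leaf.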
Given an increasing sequence $(\lambda_n)_{n \ge 1}$ of lifetime parameters, our modified MF algorithm incrementally updates the trees $M^{(k)}_{\lambda}$ for $k = 1, \ldots, K$ by calling ExtendMondrian$(M^{(k)}_{\lambda_n}, \lambda_n, \lambda_{n+1})$, and combines the forecasts of the given trees, as explained in Algorithm 4.

Algorithm 4 MondrianForest$(K, (\lambda_n)_{n \ge 1})$; Trains a Mondrian Forest classifier.
1: Parameters: The number of trees $K$ and the lifetime sequence $(\lambda_n)_{n \ge 1}$.
2: Initialization: Start with $K$ trivial partitions $M^{(k)}_{\lambda_0}$, $\lambda_0 := 0$, $k = 1, \ldots, K$. Set the counts of the training labels in each cell to 0, and the labels e.g. to 0.
3: for $n = 1, 2, \ldots$ do
4:   Receive the training point $(X_n, Y_n)$.
5:   for $k = 1, \ldots, K$ do
6:     Call ExtendMondrian$(M^{(k)}_{\lambda_{n-1}}, \lambda_{n-1}, \lambda_n)$.
7:     Fit the newly created leaves.
8:     Update the counts of 0 and 1 (depending on $Y_n$) in the leaf cell of $X_n$ in $M^{(k)}_{\lambda_n}$.
9:   end for
10: end for

For the prediction of the label given a new feature vector, our algorithm uses a majority vote over the predictions given by all $K$ trees. However, other choices are possible. For instance, the original Mondrian Forest algorithm [LRT14] places a hierarchical Bayesian prior over the label distribution on each node of the tree, and performs approximate posterior inference using the so-called interpolated Kneser-Ney (IKN) smoothing. Another possibility, that will be developed in an extended version of this work, is tree expert aggregation methods, such as the Context-Tree Weighting (CTW) algorithm [WST95, HS97] or specialist aggregation methods [FSSW97] over the nodes of the tree, adapting them to increasingly complex trees.
Our modification of the original Mondrian Forest replaces the process of online tree growing with a fixed lifetime by a new process that allows to increase lifetimes. This modification not only allows to prove consistency, but more surprisingly leads to an optimal estimation procedure, in terms of minimax rates, as illustrated in Sections 5 and 6 below.

4 Mondrian Forests with fixed lifetime are inconsistent
We state in Proposition 1 the inconsistency of fixed-lifetime Mondrian Forests, such as the original algorithm [LRT14]. This negative result justifies our modified algorithm based on an increasing sequence of lifetimes $(\lambda_n)_{n \ge 1}$.
Proposition 1. The Mondrian Forest algorithm (Algorithm 4) with a fixed lifetime sequence $\lambda_n = \lambda$ is inconsistent: there exists a distribution of $(X, Y) \in [0,1] \times \{0,1\}$ such that $L^* = 0$ and $L(g_n) = \mathbb{P}(g_n(X) \ne Y)$ does not tend to 0. This result also holds true for the original Mondrian Forest algorithm with lifetime $\lambda$.
Proposition 1 is established in Appendix C. The proof uses a result of independent interest (Lemma 3), which states that asymptotically over the sample size, for fixed $\lambda$, the restricted domain does not affect the randomization procedure.

5 Consistency of Mondrian Forest with lifetime sequence $(\lambda_n)$
The consistency of the Mondrian Forest used with a properly tuned sequence $(\lambda_n)$ is established in Theorem 1 below.
Theorem 1. Assume that $\lambda_n \to \infty$ and that $\lambda_n^d / n \to 0$. Then, the online Mondrian Forest described in Algorithm 4 is consistent.
This consistency result is universal, in the sense that it makes no assumption on the distribution of $X$ nor on the conditional probability $\eta$. This contrasts with some consistency results on Random Forests, such as Theorem 1 of [DMdF13], which assumes that the density of $X$ is bounded from above and below. Theorem 1 does not require an assumption on $K$ (the number of trees). It is well known for batch Random Forests that this meta-parameter is not a sensitive tuning parameter, and that it suffices to choose it large enough to obtain good accuracy. The only important parameter is the sequence $(\lambda_n)$, which encodes the complexity of the trees.
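Before turning to the analysis, here is a compact Python rendering of this training loop, building on the sketches above. The bookkeeping details are simplifying assumptions of ours, not prescriptions from Algorithm 4: label counts are keyed by leaf identity, newly created leaves simply start with empty counts instead of being refit, and the schedule $\lambda_n = n^{1/(d+2)}$ anticipates Theorem 3 below.

```python
def leaf_of(root, x):
    """Walk down the tree to the leaf cell containing the point x."""
    node = root
    while node.split_dim is not None:
        node = node.left if x[node.split_dim] <= node.split_loc else node.right
    return node

class OnlineMondrianForest:
    """A sketch of Algorithm 4 for binary labels on [0,1]^d."""
    def __init__(self, K, d, rng=random):
        self.d, self.rng = d, rng
        self.lam = 0.0    # lambda_0 := 0, i.e. trivial partitions
        self.roots = [Node([0.0] * d, [1.0] * d, 0.0) for _ in range(K)]
        self.counts = [{} for _ in range(K)]   # id(leaf) -> [count of 0s, count of 1s]
        self.n = 0

    def partial_fit(self, x, y):
        self.n += 1
        lam_new = self.n ** (1.0 / (self.d + 2))  # lambda_n = n^{1/(d+2)}, cf. Theorem 3
        for root, counts in zip(self.roots, self.counts):
            extend_mondrian(root, self.lam, lam_new, self.rng)
            # Simplification: the "fit the newly created leaves" step of
            # Algorithm 4 is not reproduced; new leaves start with empty counts.
            c = counts.setdefault(id(leaf_of(root, x)), [0, 0])
            c[y] += 1
        self.lam = lam_new

    def predict(self, x):
        """Majority vote over the K trees; each tree votes by leaf majority."""
        votes = 0
        for root, counts in zip(self.roots, self.counts):
            c = counts.get(id(leaf_of(root, x)), [0, 0])
            votes += 1 if c[1] > c[0] else 0
        return 1 if 2 * votes > len(self.roots) else 0
```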
Requiring an assumption on this meta-parameter is natural, and confirmed by the well-known fact that the tree depth is the most important tuning parameter for batch Random Forests, see for instance [BS16].
The proof of Theorem 1 can be found in the supplementary material (see Appendix D). The core of the argument lies in two lemmas describing two novel properties of Mondrian trees. Lemma 1 below provides an upper bound of order $O(\lambda^{-1})$ on the diameter of the cell $A_\lambda(x)$ of a Mondrian partition $M_\lambda \sim MP(\lambda, [0,1]^d)$. This is the key to control the bias of Mondrian Forests with a lifetime sequence that tends to infinity.
Lemma 1 (Cell diameter). Let $x \in [0,1]^d$, and let $D_\lambda(x)$ be the $\ell_2$-diameter of the cell containing $x$ in a Mondrian partition $M_\lambda \sim MP(\lambda, [0,1]^d)$. If $\lambda \to \infty$, then $D_\lambda(x) \to 0$ in probability. More precisely, for every $\delta, \lambda > 0$, we have
$$\mathbb{P}(D_\lambda(x) \ge \delta) \le d\Big(1 + \frac{\lambda\delta}{\sqrt d}\Big)\exp\Big(-\frac{\lambda\delta}{\sqrt d}\Big) \qquad (2)$$
and
$$\mathbb{E}\big[D_\lambda(x)^2\big] \le \frac{4d}{\lambda^2}. \qquad (3)$$
The proof of Lemma 1 is provided in the supplementary material (see Appendix A). The second important property needed to carry out the analysis is stated in Lemma 2 and helps to control the "variance" of Mondrian forests. It consists in an upper bound of order $O(\lambda^d)$ on the total number of splits performed by a Mondrian partition $M_\lambda \sim MP(\lambda, [0,1]^d)$. This ensures that enough data points fall in each cell of the tree, so that the labels of the tree are well estimated. The proof of Lemma 2 is to be found in the supplementary material (see Appendix B).
Lemma 2 (Number of splits). If $K_\lambda$ denotes the number of splits performed by a Mondrian tree partition $M_\lambda \sim MP(\lambda, [0,1]^d)$, we have $\mathbb{E}(K_\lambda) \le (e(\lambda + 1))^d$.
Remark 4. It is worth noting that controlling the total number of splits ensures that the cell $A_{\lambda_n}(X)$ in which a new random $X \sim \mu$ ends up contains enough training points among $X_1, \ldots, X_n$ (see Lemma 4 in Appendix D). This enables us to get a distribution-free consistency result. Another approach consists in lower-bounding the volume $V_{\lambda_n}(x)$ of $A_{\lambda_n}(x)$ in probability for any $x \in [0,1]^d$, which shows that the cell $A_{\lambda_n}(x)$ contains enough training points, but this would require the extra assumption that the density of $X$ is lower-bounded. Remarkably, owing to the nice restriction properties of the Mondrian process, Lemmas 1 and 2 essentially provide matching upper and lower bounds on the complexity of the partition. Indeed, in order to partition the cube $[0,1]^d$ in cells of diameter $O(1/\lambda)$, at least $\Omega(\lambda^d)$ cells are needed; Lemma 2 shows that the Mondrian partition in fact contains only $O(\lambda^d)$ cells.

6 Minimax rates over the class of Lipschitz functions
The estimates obtained in Lemmas 1 and 2 are quite explicit and sharp in their dependency on $\lambda$, and allow us to study the convergence rate of our algorithm. Indeed, it turns out that our modified Mondrian Forest, when properly tuned, can achieve the minimax rate in classification over the class of Lipschitz functions (see e.g. Chapter I.3 in [Nem00] for details on minimax rates). We provide two results: a convergence rate for the estimation of the conditional probabilities, measured by the quadratic risk, see Theorem 2, and a control on the distance between the classification error of our classifier and the Bayes error, see Theorem 3. We also provide similar minimax bounds for the regression setting instead of the classification one in the supplementary material, see Proposition 4 in Appendix E.
Let $\hat\eta_n$ be the estimate of the conditional probability $\eta$ based on the Mondrian Forest (see Algorithm 4) in which: (i) each leaf label is computed as the proportion of 1s in the corresponding leaf; (ii) the forest prediction results from the average of tree estimates instead of a majority vote.
Theorem 2. Assume that the conditional probability function $\eta : [0,1]^d \to [0,1]$ is Lipschitz on $[0,1]^d$. Let $\hat\eta_n$ be a Mondrian Forest as defined in Points (i) and (ii), with a lifetime sequence that satisfies $\lambda_n \asymp n^{1/(d+2)}$. Then, the following upper bound holds
$$\mathbb{E}\big(\eta(X) - \hat\eta_n(X)\big)^2 = O\big(n^{-2/(d+2)}\big) \qquad (4)$$
for $n$ large enough, which corresponds to the minimax rate over the set of Lipschitz functions.
To the best of our knowledge, Theorem 2 is the first to exhibit the fact that a classification method based on a purely random forest can be minimax optimal in an arbitrary dimension. The same kind of result is stated for regression estimation in the supplementary material (see Proposition 4 in Appendix E). Minimax rates, but only for $d = 1$, were obtained in [Gen12, AG14] for models of purely random forests such as Toy-PRF (where the individual partitions corresponded to randomly shifted versions of the regular partition of $[0,1]$ in $k$ intervals) and PURF (Purely Uniformly Random Forests, where the partitions were obtained by drawing $k$ random thresholds at random in $[0,1]$). However, for $d = 1$, tree partitions reduce to partitions of $[0,1]$ in intervals, and do not possess the recursive structure that appears in higher dimensions and makes their precise analysis difficult. For this reason, the analysis of purely random forests for $d > 1$ has typically produced sub-optimal results: for example, [BDL08] show consistency for UBPRF (Unbalanced Purely Random Forests, that perform a fixed number of splits and randomly choose a leaf to split at each step), but with no rate of convergence. A further step was made by [AG14], who studied the BPRF (Balanced Purely Random Forests algorithm, where all leaves were split, so that the resulting tree was complete), and obtained suboptimal rates. In our approach, the convenient properties of the Mondrian process enable us to bypass the inherent difficulties met in previous attempts, thanks to its recursive structure, and allow us to obtain the minimax rate with a transparent proof.
Now, note that the Mondrian Forest classifier corresponds to the plug-in classifier $\hat g_n(x) = 1\{\hat\eta_n(x) > 1/2\}$, where $\hat\eta_n$ is defined in Points (i) and (ii). A general theorem (Theorem 6.5 in [DGL96]) allows us to derive upper bounds on the distance between the classification error of $\hat g_n$ and the Bayes error, thanks to Theorem 2.
Theorem 3. Under the same assumptions as in Theorem 2, the Mondrian Forest classifier $\hat g_n$ with lifetime sequence $\lambda_n \asymp n^{1/(d+2)}$ satisfies
$$L(\hat g_n) - L^* = o\big(n^{-1/(d+2)}\big). \qquad (5)$$
The rate of convergence $o(n^{-1/(d+2)})$ for the error probability with a Lipschitz conditional probability $\eta$ turns out to be optimal, as shown by [Yan99]. Note that faster rates can be achieved in classification under low noise assumptions such as the margin assumption [MT99] (see e.g. [Tsy04, AT07, Lec07]). Such specializations of our results are to be considered in a future work, the aim of the present paper being an emphasis on the appealing optimal properties of our modified Mondrian Forest.

7 Experiments
We now turn to the empirical evaluation of our algorithm, and examine its predictive performance (test error) as a function of the training size.
More precisely, we compare the modified Mondrian Forest algorithm (Algorithm 4) to batch (Breiman RF [Bre01], Extra-Trees-1 [GEW06]) and online (the Mondrian Forest algorithm [LRT14] with fixed lifetime parameter $\lambda$) Random Forests algorithms. We compare the prediction accuracy (on the test set) of the aforementioned algorithms trained on varying fractions of the training data from 10% to 100%. Regarding our choice of competitors, we note that Breiman's RF is well established and known to achieve state-of-the-art performance. We also included the Extra-Trees-1 (ERT-1) algorithm [GEW06], which is most comparable to the Mondrian Forest classifier since it also draws splits randomly (we note that the ERT-$k$ algorithm [GEW06] with the default tuning $k = \sqrt d$ in the scikit-learn implementation [PVG+11] achieves scores very close to those of Breiman's RF). In the case of online Mondrian Forests, we included our modified Mondrian Forest classifier with an increasing lifetime parameter $\lambda_n = n^{1/(d+2)}$ tuned according to the theoretical analysis (see Theorem 3), as well as a Mondrian Forest classifier with constant lifetime parameter $\lambda = 2$. Note that while a higher choice of $\lambda$ would have resulted in a performance closer to that of the modified version (with increasing $\lambda_n$), our inconsistency result (Proposition 1) shows that its error would eventually stagnate given more training samples. In both cases, the splits are drawn within the range of the training features, as in the original Mondrian Forest algorithm. Our results are reported in Figure 1.

[Figure 1: four panels (letter, satimage, usps, dna) plotting test accuracy against the fraction of training data used, for the methods Breiman_RF, Extra_Trees_1, Mondrian_increasing and Mondrian_fixed.]
Figure 1: Prediction accuracy as a function of the fraction of data used on several datasets.

Modified MF (Algorithm 4) outperforms MF with a constant lifetime, and is better than the batch ERT-1 algorithm. It also performs almost as well as Breiman's RF (a batch algorithm that uses the whole training dataset in order to choose each split) on several datasets, while being incremental and much faster to train. On the dna dataset, as noted in [LRT14], Breiman's RF outperforms the other algorithms because of the presence of a large number of irrelevant features.
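The batch baselines of Figure 1 can be approximated with scikit-learn along the following lines. This is an illustrative sketch of ours, not the authors' experimental code: the dataset arrays, the fraction grid and the number of trees are assumptions, and ERT-1 is obtained here by restricting extra-trees to a single candidate feature per split.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

def accuracy_curve(make_clf, X_train, y_train, X_test, y_test,
                   fractions=(0.1, 0.25, 0.5, 0.75, 1.0), seed=0):
    """Test accuracy of a freshly trained classifier on growing training prefixes."""
    order = np.random.RandomState(seed).permutation(len(X_train))
    accs = []
    for f in fractions:
        idx = order[: max(1, int(f * len(X_train)))]
        accs.append(make_clf().fit(X_train[idx], y_train[idx]).score(X_test, y_test))
    return accs

# Batch baselines of Figure 1; ERT-1 corresponds to extra-trees with a single
# candidate feature considered at each split.
breiman_rf = lambda: RandomForestClassifier(n_estimators=100)
ert_1 = lambda: ExtraTreesClassifier(n_estimators=100, max_features=1)
```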
8 Conclusion and future work
Despite their widespread use in practice, the theoretical understanding of Random Forests is still incomplete. In this work, we show that amending the Mondrian Forest classifier, originally introduced to provide an efficient online algorithm, leads to an algorithm that is not only consistent, but in fact minimax optimal for Lipschitz conditional probabilities in arbitrary dimension. This new result suggests promising improvements in the understanding of random forests methods.
A first, natural extension of our results, that will be addressed in a future work, is the study of the rates for smoother regression functions. Indeed, we conjecture that through a more refined study of the local properties of the Mondrian partitions, it is possible to describe exactly the distribution of the cell of a given point. In the spirit of the work of [AG14] in dimension one, this could be used to show improved rates for the bias of forests (e.g. for $C^2$ regression functions) compared to the tree bias, and hence give some theoretical insight into the empirically well-known fact that a forest performs better than individual trees. Second, the optimal upper bound $O(n^{-1/(d+2)})$ obtained in this paper is very slow when the number of features $d$ is large. This comes from the well-known curse of dimensionality phenomenon, a problem affecting all fully nonparametric algorithms. A standard technique used in high-dimensional settings is to work under a sparsity assumption, where only $s \ll d$ features are informative (i.e. affect the distribution of $Y$). In such settings, a natural strategy is to select the splits using the labels $Y_1, \ldots, Y_n$, as most variants of Random Forests used in practice do. For example, it would be interesting to combine a Mondrian process-based randomization with a choice of the best split among several candidates, as performed by the Extra-Tree algorithm [GEW06]. Since the Mondrian Forest guarantees minimax rates, we conjecture that it should improve feature selection of batch random forest methods, and improve the underlying randomization mechanism of these algorithms. From a theoretical perspective, it could be interesting to see how the minimax rates obtained here could be coupled with results on the ability of forests to select informative variables, see for instance [SBV15].

References
[AG14] Sylvain Arlot and Robin Genuer. Analysis of purely random forests bias. arXiv preprint arXiv:1407.3939, 2014.
[AT07] Jean-Yves Audibert and Alexandre B. Tsybakov. Fast learning rates for plug-in classifiers. The Annals of Statistics, 35(2):608–633, 2007.
[BDL08] Gérard Biau, Luc Devroye, and Gábor Lugosi. Consistency of random forests and other averaging classifiers. Journal of Machine Learning Research, 9:2015–2033, 2008.
[Bia12] Gérard Biau. Analysis of a random forests model. Journal of Machine Learning Research, 13(1):1063–1095, 2012.
[BLG+16] Matej Balog, Balaji Lakshminarayanan, Zoubin Ghahramani, Daniel M. Roy, and Yee W. Teh. The Mondrian kernel. In 32nd Conference on Uncertainty in Artificial Intelligence (UAI), 2016.
[Bre00] Leo Breiman. Some infinity theory for predictor ensembles. Technical Report 577, Statistics Department, University of California Berkeley, 2000.
[Bre01] Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[Bre04] Leo Breiman. Consistency for a simple model of random forests. Technical Report 670, Statistics Department, University of California Berkeley, 2004.
[BS16] Gérard Biau and Erwan Scornet. A random forest guided tour. TEST, 25(2):197–227, 2016.
[DGL96] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31 of Applications of Mathematics. Springer-Verlag, 1996.
[DH00] Pedro Domingos and Geoff Hulten. Mining high-speed data streams. In Proceedings of the 6th SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 71–80, 2000.
[DMdF13] Misha Denil, David Matheson, and Nando de Freitas. Consistency of online random forests. In Proceedings of the 30th Annual International Conference on Machine Learning (ICML), pages 1256–1264, 2013.
[DMdF14] Misha Denil, David Matheson, and Nando de Freitas. Narrowing the gap: Random forests in theory and in practice. In Proceedings of the 31st Annual International Conference on Machine Learning (ICML), pages 665–673, 2014.
[FSSW97] Yoav Freund, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth. Using and combining predictors that specialize. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 334–343, 1997.
[Gen12] Robin Genuer. Variance reduction in purely random forests. Journal of Nonparametric Statistics, 24(3):543–562, 2012.
[GEW06] Pierre Geurts, Damien Ernst, and Louis Wehenkel. Extremely randomized trees. Machine Learning, 63(1):3–42, 2006.
[HS97] David P. Helmbold and Robert E. Schapire. Predicting nearly as well as the best pruning of a decision tree. Machine Learning, 27(1):51–68, 1997.
[Lec07] Guillaume Lecué. Optimal rates of aggregation in classification under low noise assumption. Bernoulli, 13(4):1000–1022, 2007.
[LRT14] Balaji Lakshminarayanan, Daniel M. Roy, and Yee W. Teh. Mondrian forests: Efficient online random forests. In Advances in Neural Information Processing Systems 27, pages 3140–3148. Curran Associates, Inc., 2014.
[LRT16] Balaji Lakshminarayanan, Daniel M. Roy, and Yee W. Teh. Mondrian forests for large-scale regression when uncertainty matters. In Proceedings of the 19th International Workshop on Artificial Intelligence and Statistics (AISTATS), 2016.
[MT99] Enno Mammen and Alexandre B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999.
[Nem00] Arkadi Nemirovski. Topics in non-parametric statistics. Lectures on Probability Theory and Statistics: École d'Été de Probabilités de Saint-Flour XXVIII-1998, 28:85–277, 2000.
[OR15] Peter Orbanz and Daniel M. Roy. Bayesian models of graphs, arrays and other exchangeable random structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):437–461, 2015.
[PVG+11] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[Roy11] Daniel M. Roy. Computability, inference and modeling in probabilistic programming. PhD thesis, Massachusetts Institute of Technology, 2011.
[RT09] Daniel M. Roy and Yee W. Teh. The Mondrian process. In Advances in Neural Information Processing Systems 21, pages 1377–1384. Curran Associates, Inc., 2009.
[SBV15] Erwan Scornet, Gérard Biau, and Jean-Philippe Vert. Consistency of random forests. The Annals of Statistics, 43(4):1716–1741, 2015.
[SLS+09] Amir Saffari, Christian Leistner, Jacob Santner, Martin Godec, and Horst Bischof. On-line random forests. In 3rd IEEE ICCV Workshop on On-line Computer Vision, 2009.
[TGP11] Matthew A. Taddy, Robert B. Gramacy, and Nicholas G. Polson. Dynamic trees for learning and design. Journal of the American Statistical Association, 106(493):109–123, 2011.
[Tsy04] Alexandre B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32(1):135–166, 2004.
[WST95] Frans M. J. Willems, Yuri M. Shtarkov, and Tjalling J. Tjalkens. The context-tree weighting method: Basic properties. IEEE Transactions on Information Theory, 41(3):653–664, 1995.
[Yan99] Yuhong Yang. Minimax nonparametric classification. I. Rates of convergence. IEEE Transactions on Information Theory, 45(7):2271–2284, 1999.
6,596
6,967
Welfare Guarantees from Data Darrell Hoy University of Maryland [email protected] Denis Nekipelov University of Virginia [email protected] Vasilis Syrgkanis Microsoft Research [email protected] Abstract Analysis of efficiency of outcomes in game theoretic settings has been a main item of study at the intersection of economics and computer science. The notion of the price of anarchy takes a worst-case stance to efficiency analysis, considering instance independent guarantees of efficiency. We propose a data-dependent analog of the price of anarchy that refines this worst-case assuming access to samples of strategic behavior. We focus on auction settings, where the latter is non-trivial due to the private information held by participants. Our approach to bounding the efficiency from data is robust to statistical errors and mis-specification. Unlike traditional econometrics, which seek to learn the private information of players from observed behavior and then analyze properties of the outcome, we directly quantify the inefficiency without going through the private information. We apply our approach to datasets from a sponsored search auction system and find empirical results that are a significant improvement over bounds from worst-case analysis. 1 Introduction A major field at the intersection of economics and computer science is the analysis of the efficiency of systems under strategic behavior. The seminal work of [6, 11] triggered a line of work on quantifying the inefficiency of computer systems, ranging from network routing, resource allocation and more recently auction marketplaces [10]. However, the notion of the price of anarchy suffers from the pessimism of worst-case analysis. Many systems can be inefficient in the worst-case over parameters of the model, but might perform very well for the parameters that arise in practice. Due to the large availability of datasets in modern economic systems, we propose a data-dependent analog of the price of anarchy, which assumes access to a sample of strategic behavior from the system. We focus our analysis on auction systems where the latter approach is more interesting due to the private information held by the participants of the system, i.e. their private value for the item at sale. Since efficiency is a function of these private parameters, quantifying the inefficiency of the system from samples of strategic behavior is non-trivial. The problem of estimation of the inefficiency becomes an econometric problem where we want to estimate a function of hidden variables from observed strategic behavior. The latter is feasible under the assumption that the observed behavior is the outcome of an equilibrium of the strategic setting, which connects observed behavior to unobserved private information. Traditional econometric approaches to auctions [3, 8], address such questions by attempting to exactly pin-point the private parameters from the observed behavior and subsequently measuring the quantities of interest, such as the efficiency of the allocation. The latter approach is problematic in complex auction systems for two main reasons: (i) it leads to statistical inefficiency, (ii) it requires strong conditions on the connection between observed behavior and private information. Even for a single-item first-price auction, uniform estimation of the private value of a player from T samples of observed bids, can only be achieved at O(T 1/3 )-rates [3]. 
Moreover, uniquely identifying the private information from the observed behavior requires a one-to-one mapping between the two quantities. The latter requires strong assumptions on the distribution of private parameters and can only be applied to simple auction rules. Our approach bridges the gap between worst-case price of anarchy analysis and statistically and modeling-wise brittle econometric analysis. We provide a data-dependent analog of recent techniques for quantifying the worst-case inefficiency in auctions [13, 4, 10], that do not require characterization of the equilibrium structure and which directly quantify the inefficiency through best-response arguments, without the need to pin-point the private information. Our approach makes minimal assumptions on the distribution of private parameters and on the auction rule, and achieves $\tilde{O}(\sqrt{T})$-rates of convergence for many auctions used in practice, such as the Generalized Second Price (GSP) auction [2, 14]. We applied our approach to a real world dataset from a sponsored search auction system and we portray the optimism of the data-dependent guarantees as compared to their worst-case counterparts [1].

2 Preliminaries
We consider the single-dimensional mechanism design setting with $n$ bidders. The mechanism designer wants to allocate a unit of good to the bidders, subject to some feasibility constraint on the vector of allocations $(x_1, \ldots, x_n)$. Let $\mathcal{X}$ be the space of feasible allocations. Each bidder $i$ has a private value $v_i \in [0, H]$ per unit of the good, and her utility when she gets allocation $x_i$ and is asked to make a payment $p_i$ is $v_i \cdot x_i - p_i$. The value of each bidder is drawn independently from a distribution with CDF $F_i$, supported in $V_i \subseteq \mathbb{R}_+$, and let $F = \prod_i F_i$ be the joint distribution. An auction $A$ solicits a bid $b_i \in B$ from each bidder $i$ and decides on the allocation vector based on an allocation rule $X : B^n \to \mathcal{X}$ and a payment rule $P : B^n \to \mathbb{R}^n$. For a vector of values and bids, the utility of a bidder is:
$$U_i(b; v_i) = v_i \cdot X_i(b) - P_i(b). \qquad (1)$$
A strategy $\sigma_i : V_i \to B$, for each bidder $i$, maps the value of the bidder to a bid. Given an auction $A$ and distribution of values $F$, a strategy profile $\sigma$ is a Bayes-Nash Equilibrium (BNE) if each bidder $i$ with any value $v_i \in V_i$ maximizes her utility in expectation over her opponents' bids, by bidding $\sigma_i(v_i)$. The welfare of an auction outcome is the expected utility generated for all the bidders, plus the revenue of the auctioneer, which due to the form of bidder utilities boils down to being the total value that the bidders get from the allocation. Thus the expected welfare of a strategy profile $\sigma$ is
$$\mathrm{WELFARE}(\sigma; F) = \mathbb{E}_{v \sim F}\Big[\sum_{i \in [n]} v_i \cdot X_i(\sigma(v))\Big] \qquad (2)$$
We denote with $\mathrm{OPT}(F)$ the expected optimal welfare: $\mathrm{OPT}(F) = \mathbb{E}_{v \sim F}\big[\max_{x \in \mathcal{X}} \sum_{i \in [n]} v_i \cdot x_i\big]$.
Worst-case Bayes-Nash price of anarchy. The Bayesian price of anarchy of an auction is defined as the worst-case ratio of welfare in the optimal auction to the welfare in a Bayes-Nash equilibrium of the original auction, taken over all value distributions and over all equilibria. Let $BNE(A, F)$ be the set of Bayes-Nash equilibria of an auction $A$, when values are drawn from distributions $F$. Then:
$$\mathrm{POA} = \sup_{F,\, \sigma \in BNE(F)} \frac{\mathrm{OPT}(F)}{\mathrm{WELFARE}(\sigma; F)} \qquad (3)$$

3 Distributional Price of Anarchy: Refining the POA with Data
We will assume that we observe $T$ samples $b^{1:T} = \{b^1, \ldots, b^T\}$ of bid profiles from running $T$ times an auction $A$.
Each bid profile b^t is drawn i.i.d. from an unknown Bayes-Nash equilibrium σ of the auction: let D denote the distribution of the random variable σ(v) when v is drawn from F; then the b^t are i.i.d. samples from D. Our goal is to refine our prediction of the efficiency of the auction and compute a bound on the price of anarchy of the auction conditional on the observed data set. More formally, we want to derive statements of the form: conditional on b^{1:T}, with probability at least 1 − δ,

WELFARE(σ; F) ≥ ρ̂^{−1} · OPT(F),

where ρ̂ is the empirical analogue of the worst-case price of anarchy ratio.

Infinite data limit. We will tackle this question in two steps, as is standard in estimation theory. First we look at the infinite data limit, where we know the actual distribution of equilibrium bids D. We define a notion of price of anarchy that is tailored to an equilibrium bid distribution, which we refer to as the distributional price of anarchy. In Section 4 we give a distribution-dependent upper bound on this ratio for any single-dimensional auction. Subsequently, in Section 5, we show how one can estimate this upper bound on the distributional price of anarchy from samples.

Given a value distribution F and an equilibrium σ, let D(F, σ) denote the resulting equilibrium bid distribution. We then define the distributional price of anarchy as follows:

Definition 1 (Distributional Price of Anarchy). The distributional price of anarchy DPOA(D) of an auction A and a distribution of bid profiles D is the worst-case ratio of the welfare in the optimal allocation to the welfare in an equilibrium, taken over all distributions of values and all equilibria that could generate the bid distribution D:

DPOA(D) = sup_{F, σ∈BNE(F) s.t. D(F,σ)=D} OPT(F) / WELFARE(σ; F).   (4)

This notion has nothing to do with sampled data sets; rather, it is a hypothetical worst-case quantity that we could calculate had we known the true bid-generating distribution D. What does the extra information of knowing D give us? To answer this question, we first focus on the optimization problem each bidder faces. At any Bayes-Nash equilibrium each player must be best-responding in expectation over his opponents' bids. Observe that if we know the rules of the auction and the equilibrium distribution of bids D, then the expected allocation and payment functions of a player, as functions of his bid, are uniquely determined:

x_i(b; D) = E_{b_{−i}∼D_{−i}}[ X_i(b, b_{−i}) ],   p_i(b; D) = E_{b_{−i}∼D_{−i}}[ P_i(b, b_{−i}) ].   (5)

Importantly, these functions do not depend on the distribution of values F other than through the distribution of bids D. Moreover, the expected revenue of the auction is also uniquely determined:

REV(D) = E_{b∼D}[ Σ_i P_i(b) ].   (6)

Thus, when bounding the distributional price of anarchy, we can assume that these functions and the expected revenue are known. The latter is unlike the standard price of anarchy analysis, which essentially needs to take a worst-case approach to these quantities.

Shorthand notation. Throughout the rest of the paper we fix the distribution D. Hence, for brevity, we omit it from the notation, writing x_i(b), p_i(b) and REV instead of x_i(b; D), p_i(b; D) and REV(D).

4 Bounding the Distributional Price of Anarchy

We first upper bound the distributional price of anarchy by a quantity that is relatively easy to calculate as a function of the bid distribution D, and hence will also be rather straightforward to estimate from samples of D, which we defer to the next section.
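Equations (5) and (6) are plain expectations over opponent bids, so once bid samples are available they are easy to approximate by Monte Carlo. The following sketch illustrates this; the single-item first-price rule and the synthetic samples are illustrative assumptions, not part of our analysis (in an application, X and P are the known rules of the auction and the samples are the observed bid profiles).

    import numpy as np

    # Illustrative setup: T sampled bid profiles (rows) for n = 3 bidders.
    rng = np.random.default_rng(0)
    bids = rng.uniform(0.0, 1.0, size=(1000, 3))

    def X_rule(b):
        # Single-item first-price allocation: the highest bidder wins.
        x = np.zeros(len(b))
        x[np.argmax(b)] = 1.0
        return x

    def P_rule(b):
        # Pay-your-bid payments: the winner pays her bid.
        return X_rule(b) * b

    def interim(i, b_i, rule):
        # x_i(b; D) or p_i(b; D): average the rule over sampled opponent
        # profiles, replacing coordinate i of each sample with the fixed bid b_i.
        vals = []
        for prof in bids:
            prof = prof.copy()
            prof[i] = b_i
            vals.append(rule(prof)[i])
        return float(np.mean(vals))

    x_i = interim(0, 0.7, X_rule)                          # Eq. (5), allocation
    p_i = interim(0, 0.7, P_rule)                          # Eq. (5), payment
    rev = float(np.mean([P_rule(b).sum() for b in bids]))  # Eq. (6), revenue
    print(x_i, p_i, rev)

Each of these estimates is an average of bounded i.i.d. terms, so it concentrates at the usual 1/√T rate, which is what Section 5 exploits.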
To give intuition about the upper bound, we start with a simple but relevant example of bounding the distributional price of anarchy in the case when the auction A is the single-item first price auction. We then generalize the approach to any auction A.

4.1 Example: Single-Item First Price Auction

In a single item first price auction, the designer wants to auction a single indivisible good. Thus the feasible allocations X are the ones where a single player gets allocation x_i = 1 and the other players get allocation 0. The auctioneer solicits bids b_i from each bidder and allocates the good to the highest bidder (breaking ties lexicographically), charging him his bid. Let D be the equilibrium distribution of bids and let G_i be the CDF of the bid of player i. For simplicity we assume that G_i is continuous (i.e. the distribution is atomless). Then the expected allocation of player i from submitting a bid b is x_i(b) = G_{−i}(b) = Π_{j≠i} G_j(b), and his expected payment is p_i(b) = b · x_i(b), leading to expected utility u_i(b; v_i) = (v_i − b) · G_{−i}(b).

The quantity DPOA is a complex object, as it involves the structure of the set of equilibria of the given auction. The set of equilibria of a first price auction when bidders' values are drawn from different distributions is a horrific object.¹ However, we can upper bound this quantity by a much simpler data-dependent quantity, simply by invoking the fact that under any equilibrium bid distribution no player wants to deviate from his equilibrium bid. Moreover, this data-dependent quantity can be much better than its worst-case counterpart used in the existing literature on the price of anarchy.

¹ Even for two bidders with uniformly distributed values U[0, a] and U[0, b], the equilibrium strategy requires solving a complex system of partial differential equations, which took several years of research in economics to solve (see [15, 7]).

Lemma 1. Let A be the single item first price auction and let D be the equilibrium distribution of bids. Then DPOA(D) ≤ λ(D)/(1 − e^{−λ(D)}), where

λ(D) = max_{i∈[n]} E_{b_{−i}∼D_{−i}}[ max_{j≠i} b_j ] / E_{b∼D}[ max_{i∈[n]} b_i ].

Proof. Let G_i be the CDF of the bid of each player under distribution D. Moreover, let σ denote the equilibrium strategy that leads to distribution D. By the equilibrium condition, we know that for all v_i ∈ V_i and for all b' ∈ B,

u_i(σ_i(v_i); v_i) ≥ u_i(b'; v_i) = (v_i − b') · G_{−i}(b').   (7)

We will use a special deviating strategy from the literature [13], which shows that either the player's equilibrium utility is large or the expected maximum other bid is high. Let T_i denote the expected maximum other bid, which can be expressed as T_i = ∫_0^∞ (1 − G_{−i}(z)) dz. We consider the randomized deviation where the player submits a randomized bid z ∈ [0, v_i(1 − e^{−μ})] with PDF f(z) = 1/(μ(v_i − z)). Then the expected utility from this deviation is

E_{b'}[u_i(b'; v_i)] = ∫_0^{v_i(1−e^{−μ})} (v_i − z) · G_{−i}(z) · f(z) dz = (1/μ) ∫_0^{v_i(1−e^{−μ})} G_{−i}(z) dz.   (8)

Adding the quantity (1/μ) ∫_0^{v_i(1−e^{−μ})} (1 − G_{−i}(z)) dz ≤ T_i/μ to both sides, we get E_{b'}[u_i(b'; v_i)] + T_i/μ ≥ v_i(1 − e^{−μ})/μ. Invoking the equilibrium condition we get

u_i(σ_i(v_i); v_i) + (1/μ) T_i ≥ v_i (1 − e^{−μ})/μ.   (9)

Subsequently, for any x_i^* ∈ [0, 1],

u_i(σ_i(v_i); v_i) + (1/μ) T_i x_i^* ≥ ((1 − e^{−μ})/μ) · v_i x_i^*.

If x_i^* is the expected allocation of player i under the efficient allocation rule X_i^*(v) ≡ 1{v_i = max_j v_j}, then taking the expectation of Equation (9) over v_i and summing across all players we get

Σ_i E_{v_i}[u_i(σ_i(v_i); v_i)] + (1/μ) E_v[ Σ_i T_i X_i^*(v) ] ≥ ((1 − e^{−μ})/μ) · OPT(F).   (10)

The theorem then follows by invoking the fact that for any feasible allocation x, Σ_i T_i x_i ≤ max_i T_i = λ(D) · REV(D), using the fact that expected total agent utility plus total revenue at equilibrium is equal to expected welfare at equilibrium, and setting μ = λ(D).

Comparison with worst-case POA. In the worst case, λ(D) is upper bounded by 1, leading to the well-known worst-case price of anarchy ratio of the single-item first price auction of (1 − 1/e)^{−1}, irrespective of the bid distribution D. However, if we know the distribution D then we can explicitly estimate λ, which can lead to a much better ratio (see Figure 1). Moreover, observe that even if we only had samples from the bid distribution D, estimating λ(D) is very easy, as it corresponds to the ratio of two expectations, each of which can be estimated to within O(1/√T) error by a simple average and standard concentration inequalities. Even though this improvement over the worst-case bound might not be that drastic in the first price auction, the extension of the analysis in the next section applies even to auctions where the analogue of the quantity λ(D) is not even bounded in the worst case. In those settings, the empirical version of the price of anarchy analysis is of crucial importance for obtaining any efficiency bound.

[Figure 1: The upper bound μ(D)/(1 − e^{−μ(D)}) on the distributional price of anarchy of an auction, plotted as a function of μ(D); horizontal axis μ(D) from 0 to 4, vertical axis price of anarchy from 0 to 4.]

Comparison with value inversion approach. Apart from being just a primer for our main general result in the next section, this data-dependent efficiency bound for the first price auction is itself a contribution to the literature. It is notable to compare it with the standard econometric approach to estimating values in a first price auction, pioneered by [3] (see also [8]). Traditional non-parametric auction econometrics uses the equilibrium best response condition to pin-point the value of a player from his observed bid, by what is known as value inversion. In particular, if the function u_i(b'; v_i) = (v_i − b') · G_{−i}(b') has a unique maximum for each v_i, and this maximum is strictly monotone in v_i, then given the equilibrium bid b_i of a player and a data distribution D, we can reverse engineer the value v_i(b_i) that the player must have. Thus if we know the bid distribution D we can calculate the equilibrium welfare as E_{b∼D}[ Σ_i v_i(b_i) X_i(b) ]. Moreover, we can calculate the expected optimal welfare as E_{b∼D}[ max_i v_i(b_i) ]. Thus we can pin-point the distributional price of anarchy. However, the latter approach suffers from two main drawbacks: (i) estimating the value inversion function v_i(·) uniformly over b from samples can only happen at very slow rates that are at least O(1/T^{1/3}), and it requires differentiability assumptions on the value and bid distributions as well as strong conditions that the density of the value distribution is bounded away from zero on all of the support (with this lower bound constant entering the rates of convergence); (ii) the main assumption of the latter approach is that the optimal bid is an invertible function, so that given a bid there is a single value that corresponds to that bid.
This assumption might be relatively benign in a single item first price auction, but it becomes a harsher assumption when one moves to more complex auction schemes. Our result in Lemma 1 suffers from neither of these drawbacks: it admits fast estimation rates from samples, makes no assumptions on the properties of the value and bid distributions, and does not require invertibility of the best-response correspondence. Hence it provides an upper bound on the distributional price of anarchy that is statistically robust to both sampling and mis-specification errors. The robustness of our approach comes with the trade-off that we are now only estimating a bound on the efficiency of the outcome, rather than pinpointing it exactly.

4.2 Generalizing to any Single-Dimensional Auction Setting

Our analysis of DPOA is based on reformulating the auction rules as an equivalent pay-your-bid auction, and then bounding the price of anarchy as a function of the ratio between how much a player needs to pay in the equivalent pay-your-bid auction to acquire his optimal allocation and how much revenue the auctioneer is collecting. For any auction, we can re-write the expected utility of a bid b:

u_i(b; v_i) = x_i(b) · ( v_i − p_i(b)/x_i(b) ).   (11)

This can be viewed as the utility in a pay-your-bid auction in which the player submitted a bid of p_i(b)/x_i(b). We refer to this term as the price-per-unit and denote it ppu(b) = p_i(b)/x_i(b). Our analysis will be based on the price-per-unit allocation rule x̄(·), which determines the expected allocation of a player as a function of his price-per-unit. With this notation, the utility that an agent achieves by submitting a bid that corresponds to a price-per-unit of z is ū_i(z; v_i) = x̄(z)(v_i − z), which is exactly the form of a pay-your-bid auction. Our upper bound on the DPOA will be based on the inverse of the PPU allocation rule: let τ_i(z) = x̄_i^{−1}(z) be the price-per-unit of the cheapest bid that achieves allocation at least z. More formally, τ_i(z) = min_{b : x_i(b) ≥ z} ppu(b). For simplicity, we assume that any allocation z ∈ [0, 1] is achievable by some high enough bid b.² Given this we can define the threshold for an allocation:

Definition 2 (Average Threshold). The average threshold for agent i is

T_i = ∫_0^1 τ_i(z) dz.   (12)

² The theory can easily be extended to allow for different maximum achievable allocations for each player, by integrating the average threshold only up to the largest such allocation.

In Figures 2 and 3 we provide a pictorial representation of these quantities. Connecting with the previous section: for a first price auction, the price-per-unit function is ppu(b) = b, the price-per-unit allocation function is x̄_i(b) = G_{−i}(b), and the threshold function is τ_i(z) = G_{−i}^{−1}(z). The average threshold T_i equals ∫_0^1 G_{−i}^{−1}(z) dz = ∫_0^∞ (1 − G_{−i}(b)) db, i.e. the expected maximum other bid.

[Figure 2: For any bid b with price-per-unit ppu(b), the area of the rectangle between (ppu(b), x̄_i(ppu(b))) and (v_i, 0) under the bid allocation rule is the expected utility u_i(b). The BNE action b* is chosen to maximize this area. Figure 3: The average threshold T_i is the area to the left of the price-per-unit allocation rule x̄_i(ppu) = τ_i^{−1}(ppu), integrated over allocations from 0 to 1.]
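Definition 2 reduces to a one-dimensional inversion followed by a numerical integral, which is how the thresholds are computed empirically later in the paper. Below is a minimal sketch of that computation, assuming the interim functions x_i(·) and p_i(·) are available (for instance, estimated as in the earlier sketch) and that bids lie on a grid; the toy instance at the end is only a sanity check.

    import numpy as np

    def average_threshold(x_fn, p_fn, bid_grid, z_grid):
        # T_i = integral_0^1 tau_i(z) dz, where
        # tau_i(z) = min{ p(b)/x(b) : x(b) >= z }  (Definition 2).
        x = np.array([x_fn(b) for b in bid_grid])
        p = np.array([p_fn(b) for b in bid_grid])
        ppu = np.where(x > 0, p / np.maximum(x, 1e-12), np.inf)  # price-per-unit
        tau = np.array([ppu[x >= z].min() if np.any(x >= z) else np.inf
                        for z in z_grid])
        return np.trapz(tau, z_grid)            # numerical integral over z

    # Toy check: first price vs. two U[0,1] opponents, so x(b) = b^2, ppu(b) = b.
    # Then tau(z) = sqrt(z) and T_i = 2/3, the expected maximum other bid.
    bids = np.linspace(0.0, 1.0, 501)
    zs = np.linspace(0.0, 0.999, 200)           # stop just short of z = 1
    print(average_threshold(lambda b: b**2, lambda b: b**3, bids, zs))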
We now give our main theorem, a distribution-dependent bound on DPOA that is easy to compute given D and can easily be estimated from samples of D. The theorem is a generalization of Lemma 1 from the previous section.

Theorem 2 (Distributional Price of Anarchy Bound). For any auction A in a single-dimensional setting and for any bid distribution D, the distributional price of anarchy is bounded by DPOA(D) ≤ μ(D)/(1 − e^{−μ(D)}), where

μ(D) = ( max_{x∈X} Σ_{i=1}^n T_i x_i ) / REV(D).

Theorem 2 provides our main method for bounding the distributional price of anarchy. All we need is to compute the revenue REV of the auction and the quantity

T = max_{x∈X} Σ_{i=1}^n T_i x_i   (13)

under the given bid distribution D. Both of these are uniquely defined quantities if we are given D. Moreover, once we compute the T_i, the optimization problem in Equation (13) is simply a welfare maximization problem in which each player's value per unit of the good is T_i. Thus the latter can be solved in polynomial time whenever the welfare maximization problem over the feasible set X is polynomial-time solvable. Theorem 2 can be viewed as a bid-distribution-dependent analogue of the revenue covering framework [4] and of the smooth mechanism framework [13]. In particular, the quantity μ(D) is the data-dependent analogue of the worst-case μ quantity used in the definition of μ-revenue covering in [4], and is roughly related to the λ quantity used in the definition of a (λ, μ)-smooth mechanism in [13].

5 Distributional Price of Anarchy Bound from Samples

In the last section we assumed we were given the distribution D, and hence could compute the quantity μ = T/REV, which gave an upper bound on the DPOA. We now show how to estimate this quantity μ when given access to i.i.d. samples b^{1:T} from the bid distribution D. We will estimate T and REV separately. The latter is a simple expectation and can therefore be estimated by an average at 1/√T rates. For the former we first need to estimate T_i for each player i, which requires estimating the allocation and payment functions x_i(·; D) and p_i(·; D). Since both of these functions are expected values over the equilibrium bids of the opponents, we approximate them by their empirical analogues:

x̂_i(b) = (1/T) Σ_{t=1}^T X_i(b, b^t_{−i}),   p̂_i(b) = (1/T) Σ_{t=1}^T P_i(b, b^t_{−i}).   (14)

To bound the estimation error of the quantities T̂_i produced by these empirical estimates of the allocation and payment functions, we need a uniform convergence property for the error of these functions over the bid b. Since b takes values in a continuous interval, we cannot simply apply a union bound. We need to make assumptions on the structure of the classes of functions F_{X_i} = {X_i(b, ·) : b ∈ B} and F_{P_i} = {P_i(b, ·) : b ∈ B}, so as to uniformly bound their estimation error. For this we resort to the technology of Rademacher complexity. For a generic class of functions F and a sequence of random variables Z^{1:T}, the Rademacher complexity is defined as

R_T(F, Z^{1:T}) = E_{σ^{1:T}}[ sup_{f∈F} (1/T) Σ_{t=1}^T σ^t f(Z^t) ],   (15)

where each σ^t ∈ {±1/2} is an i.i.d. Rademacher random variable, taking each of those values with equal probability. The following well-known theorem will be useful in our derivations:

Theorem 3 ([12]). Suppose that for any sample Z^{1:T} of size T, R_T(F, Z^{1:T}) ≤ R_T, and suppose that functions in F take values in [0, H]. Then with probability 1 − δ:

sup_{f∈F} | (1/T) Σ_{t=1}^T f(Z^t) − E[f(Z)] | ≤ 2 R_T + H √( 2 log(4/δ) / T ).   (16)

This theorem reduces our uniform error problem to bounding the Rademacher complexity of the classes F_{X_i} and F_{P_i}, since we immediately have the following corollary (where we also use that the allocation functions take values in [0, 1] and the payment functions in [0, H]):

Corollary 4. Suppose that for any sample b^{1:T} of size T the Rademacher complexity of the classes F_{X_i} and F_{P_i} is at most R_T. Then with probability 1 − δ/2, both sup_{b∈B} |x̂_i(b) − x_i(b)| and sup_{b∈B} |p̂_i(b) − p_i(b)| are at most 2 R_T + H √( 2 log(4/δ) / T ).

We now provide conditions under which the Rademacher complexity of these classes is O(1/√T).

Lemma 5. Suppose that B = [0, B] and that, for each bidder i and each b_i ∈ B, the functions X_i(b_i, ·) : [0, B]^{n−1} → [0, 1] and P_i(b_i, ·) : [0, B]^{n−1} → [0, H] can be computed as a finite superposition of (i) coordinate-wise multiplications of the bid vector b_{−i} with constants; (ii) comparison indicators 1{· > ·} of coordinates or constants; and (iii) pairwise additions of coordinates or constants. Then the Rademacher complexity of both classes on a sample of size T is O( √( log(T) / T ) ).

The proof of this lemma follows by standard arguments of Rademacher calculus, together with VC arguments on the class of pairwise comparisons. Those arguments can be found in [5, Lemma 9.9] and [9, Lemma 11.6.28], so we omit the proof. The assumptions of Lemma 5 can be verified directly, for instance, for sponsored search auctions, where the constants that multiply each bid correspond to quality factors of the bidders (e.g. as in [2] and [14]) and the allocation and the payment are functions of the rank of the weighted bid of a player. In that case the price and the allocation rule are determined solely by the ranks and values of the score-weighted bids s_i b_i, as well as the position-specific quality factors α_j for each position j in the auction.

Next we turn to the analysis of the estimation errors of the quantities T_i. We consider the following plug-in estimator for T_i: the empirical analogue of the function τ_i(·) is

τ̂_i(z) = inf_{b ∈ [0,B] : x̂_i(b) ≥ z} p̂_i(b)/x̂_i(b).

Then the empirical analogue of T_i is obtained by

T̂_i = ∫_0^1 τ̂_i(z) dz.   (17)

To bound the estimation error of T̂_i, we need to impose an additional condition, which ensures that any non-zero allocation requires a payment from the bidder at least proportional to that allocation.

Assumption 6. We assume that p_i(x_i^{−1}(·)) is Lipschitz continuous and that the mechanism is worst-case interim individually rational, i.e. p_i(b) ≤ H · x_i(b).

Under this assumption we can establish O(1/√T) rates of convergence of T̂_i to T_i, of the empirical analogue T̂ = max_{x∈X} Σ_{i=1}^n T̂_i x_i of the optimized threshold to T, and of the empirical analogue R̂EV of the revenue to REV. Thus the quantity μ̂ = T̂/R̂EV also converges to μ = T/REV at that rate. This implies the following final conclusion of this section.

Theorem 7. Under Assumption 6 and the premises of Lemma 5, with probability 1 − δ:

OPT(F) / WELFARE(σ; F) ≤ μ̂/(1 − e^{−μ̂}) + n · max{L, H} · Õ( H √( log(n/δ) / T ) ).   (18)

6 Sponsored Search Auction: Model, Methodology and Data Analysis

We consider a position auction setting where k ordered positions are assigned to n bidders. An outcome m in a position auction is an allocation of positions to bidders: m(j) denotes the bidder who is allocated position j, and m^{−1}(i) refers to the position assigned to bidder i.
When bidder i is assigned to slot j, the probability of a click, c_{i,j}, is the product of the click-through rate α_j of the slot and the quality score γ_i of the bidder, so c_{i,j} = α_j γ_i (in the data the quality scores of each bidder vary across auctions, and we used the average score as a proxy for the score of a bidder). Each advertiser has a value-per-click (VPC) v_i, which is not observed in the data and which we assume is drawn from some distribution F_i. Our benchmark for welfare will be the welfare of the auction that chooses a feasible allocation to maximize the welfare generated, so OPT = E_v[ max_m Σ_i γ_i α_{m^{−1}(i)} v_i ].

We consider data generated by advertisers repeatedly participating in a sponsored search auction. The mechanism that is being repeated at each stage is an instance of a generalized second price auction triggered by a search query. The rules of each auction are as follows. Each advertiser i is associated with a click probability γ_i and a scoring coefficient s_i, and is asked to submit a bid-per-click b_i. Advertisers are ranked by their rank-score q_i = s_i b_i and allocated positions in decreasing order of rank-score, as long as they pass a rank-score reserve r. All the mentioned sets of parameters θ = (s, γ, α, r) and the bids b are observable in the data. We denote with π_{b,θ}(j) the bidder allocated to slot j under bid profile b and parameter profile θ, and with π_{b,θ}^{−1}(i) the slot allocated to bidder i. If advertiser i is allocated position j, then he pays only when he is clicked, and his cost-per-click is the minimal bid he would have needed to place to keep his position:

cpc_{ij}(b; θ) = max{ s_{π_{b,θ}(j+1)} b_{π_{b,θ}(j+1)}, r } / s_i.

Mapping this setting to our general model, the allocation function of the auction is X_i(b) = α_{π_{b,θ}^{−1}(i)} γ_i, the payment function is P_i(b) = α_{π_{b,θ}^{−1}(i)} γ_i · cpc_{i π_{b,θ}^{−1}(i)}(b; θ), and the utility function is

U_i(b; v_i) = α_{π_{b,θ}^{−1}(i)} γ_i · ( v_i − cpc_{i π_{b,θ}^{−1}(i)}(b; θ) ).

Data Analysis. We applied our analysis to the BingAds sponsored search auction system. We analyzed eleven phrases from multiple thematic categories. For each phrase we retrieved data from the auctions for the phrase over the period of a week. For each phrase and each bidder that participated in the auctions for the phrase, we computed the allocation curve by simulating the week's auctions under every alternative bid the advertiser could submit (bids are multiples of cents). See Figure 4 for the price-per-unit allocation curves x̄_i(·) = τ_i^{−1}(·) for a subset of the advertisers for a specific search phrase. We estimated the average threshold T̂_i for each bidder by numerically integrating these allocation curves along the y axis.
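The allocation curves above come from simulating the stated GSP rules under alternative bids. The following is a sketch of one such simulation step, i.e. the allocation X_i(b) and payment P_i(b) implied by the rules just described; the parameter names are ours and this is a simplified reading of the rules, not the production auction logic.

    import numpy as np

    def gsp_outcome(b, s, gamma, alpha, r):
        # b: bids per click; s: score coefficients; gamma: quality scores;
        # alpha: position click-through rates (decreasing); r: rank-score reserve.
        n, k = len(b), len(alpha)
        q = s * b                                  # rank scores q_i = s_i * b_i
        order = np.argsort(-q)                     # decreasing rank score
        x, p = np.zeros(n), np.zeros(n)
        for pos, i in enumerate(order[:k]):
            if q[i] < r:
                break                              # must clear the reserve
            nxt = q[order[pos + 1]] if pos + 1 < n else 0.0
            cpc = max(nxt, r) / s[i]               # minimal bid keeping the slot
            x[i] = alpha[pos] * gamma[i]           # expected clicks, X_i(b)
            p[i] = x[i] * cpc                      # expected payment, P_i(b)
        return x, p

    # one auction with 4 advertisers and 2 slots (made-up parameters)
    x, p = gsp_outcome(np.array([1.0, .8, .5, .3]), np.ones(4),
                       np.array([.9, .7, .8, .6]), np.array([.5, .3]), r=0.2)
    print(x, p)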
We then applied the approach described in Section 3 for each of the search phrases, computing the quantity T̂ = max_{x∈X} Σ_{i∈[n]} T̂_i x_i = max_{m(·)} Σ_i T̂_i γ_i α_{m^{−1}(i)}. The latter optimization is simply the optimal assignment problem where each player's value-per-click is T̂_i, and it can be performed by greedily assigning players to slots in decreasing order of T̂_i. We then estimate the expected revenue by the empirical revenue R̂EV. We portray our results on the estimate μ̂ = T̂/R̂EV and the implied bound on the distributional price of anarchy for each of the eleven search phrases in Table 4; phrases are grouped based on thematic category.

[Figure 4: (left) Examples of price-per-unit allocation curves for a subset of six advertisers for a specific keyword during the period of a week; all axes are normalized to 1 for privacy reasons. (right) Distributional price of anarchy analysis for a set of eleven search phrases on the BingAds system, reproduced as Table 4 below.]

Table 4:

    phrase      μ̂ = T̂/R̂EV    1/D̂POA = (1 − e^{−μ̂})/μ̂
    phrase1     .511           .783
    phrase2     .509           .784
    phrase3     2.966          .320
    phrase4     1.556          .507
    phrase5     .386           .829
    phrase6     .488           .791
    phrase7     .459           .802
    phrase8     .419           .817
    phrase9     .441           .809
    phrase10    .377           .833
    phrase11    .502           .786

Even though the worst-case price of anarchy of this auction is unbounded (since the scores s_i are not equal to the qualities γ_i, which is required in worst-case POA proofs [1]), we observe that empirically the price of anarchy is very good: on average the guarantee is approximately 80% of the optimal. Even if s_i = γ_i, the worst-case bound on the POA implies guarantees of approximately 34% [1], while the DPOA we estimated implies significantly higher percentages, portraying the value of the empirical approach we propose.

References

[1] Ioannis Caragiannis, Christos Kaklamanis, Maria Kyropoulou, Brendan Lucier, Renato Paes Leme, and Éva Tardos. Bounding the inefficiency of outcomes in generalized second price auctions. pages 1–45, 2014.
[2] Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz. Internet advertising and the generalized second-price auction: Selling billions of dollars worth of keywords. The American Economic Review, 97(1):242–259, 2007.
[3] Emmanuel Guerre, Isabelle Perrigne, and Quang Vuong. Optimal nonparametric estimation of first-price auctions. Econometrica, 68(3):525–574, 2000.
[4] Jason Hartline, Darrell Hoy, and Sam Taggart. Price of anarchy for auction revenue. In ACM Conference on Economics and Computation, pages 693–710, New York, NY, USA, 2014. ACM Press.
[5] Michael R. Kosorok. Introduction to Empirical Processes and Semiparametric Inference. Springer Science & Business Media, 2007.
[6] Elias Koutsoupias and Christos Papadimitriou. Worst-case equilibria. In STACS 99, pages 404–413. Springer, 1999.
[7] Vijay Krishna. Auction Theory. Academic Press, March 2002.
[8] H. J. Paarsch and H. Hong. An Introduction to the Structural Econometrics of Auction Data. MIT Press, 2006.
[9] D. Pollard. Convergence of Stochastic Processes. Springer-Verlag, 1984.
[10] Tim Roughgarden, Vasilis Syrgkanis, and Éva Tardos. The price of anarchy in auctions. CoRR, abs/1607.07684, 2016.
[11] Tim Roughgarden and Éva Tardos. How bad is selfish routing? J. ACM, 49(2):236–259, March 2002.
[12] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[13] Vasilis Syrgkanis and Éva Tardos. Composable and efficient mechanisms. In ACM Symposium on Theory of Computing, pages 211–220, 2013.
[14] Hal R. Varian. Online ad auctions. The American Economic Review, pages 430–434, 2009.
[15] William Vickrey. Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance, 16(1):8–37, 1961.
Diving into the shallows: a computational perspective on large-scale shallow learning Siyuan Ma Mikhail Belkin Department of Computer Science and Engineering The Ohio State University {masi, mbelkin}@cse.ohio-state.edu Abstract Remarkable recent success of deep neural networks has not been easy to analyze theoretically. It has been particularly hard to disentangle relative significance of architecture and optimization in achieving accurate classification on large datasets. On the flip side, shallow methods (such as kernel methods) have encountered obstacles in scaling to large data, despite excellent performance on smaller datasets, and extensive theoretical analysis. Practical methods, such as variants of gradient descent used so successfully in deep learning, seem to perform below par when applied to kernel methods. This difficulty has sometimes been attributed to the limitations of shallow architecture. In this paper we identify a basic limitation in gradient descent-based optimization methods when used in conjunctions with smooth kernels. Our analysis demonstrates that only a vanishingly small fraction of the function space is reachable after a polynomial number of gradient descent iterations. That drastically limits the approximating power of gradient descent leading to over-regularization. The issue is purely algorithmic, persisting even in the limit of infinite data. To address this shortcoming in practice, we introduce EigenPro iteration, a simple and direct preconditioning scheme using a small number of approximately computed eigenvectors. It can also be viewed as learning a kernel optimized for gradient descent. Injecting this small, computationally inexpensive and SGD-compatible, amount of approximate second-order information leads to major improvements in convergence. For large data, this leads to a significant performance boost over the state-of-the-art kernel methods. In particular, we are able to match or improve the results reported in the literature at a small fraction of their computational budget. For complete version of this paper see https://arxiv.org/abs/1703.10622. 1 Introduction In recent years we have witnessed remarkable advances in many areas of artificial intelligence. Much of this progress has been due to machine learning methods, notably deep neural networks, applied to very large datasets. These networks are typically trained using variants of stochastic gradient descent (SGD), allowing training on large data with modern GPU hardware. Despite intense recent research and significant progress on SGD and deep architectures, it has not been easy to understand the underlying causes of that success. Broadly speaking, it can be attributed to (a) the structure of the function space represented by the network or (b) the properties of the optimization algorithms used. While these two aspects of learning are intertwined, they are distinct and may be disentangled. As learning in deep neural networks is still largely resistant to theoretical analysis, progress can be made by exploring the limits of shallow methods on large datasets. Shallow methods, such as kernel methods, are a subject of an extensive and diverse literature, both theoretical and practical. In particular, kernel machines are universal learners, capable of learning nearly arbitrary functions given a sufficient number of examples [STC04, SC08]. 
Still, while kernel methods are easily implementable and show state-of-the-art performance on smaller datasets (see [CK11, HAS+14, DXH+14, LML+14, MGL+17] for some comparisons with DNNs), there has been significantly less progress in applying these methods to large modern data. The goal of this work is to make a step toward understanding the subtle interplay between architecture and optimization, and to take practical steps to improve the performance of kernel methods on large data.

The paper consists of two main parts. First, we identify a basic underlying limitation of using gradient descent-based methods in conjunction with the smooth (infinitely differentiable) kernels typically used in machine learning, showing that only very smooth functions can be approximated after polynomially many steps of gradient descent. This phenomenon is a result of the fast spectral decay of smooth kernels and can be readily understood in terms of the spectral structure of the gradient descent operator in the least squares regression/classification setting, which is the focus of our discussion. Slow convergence leads to severe over-regularization (over-smoothing) and suboptimal approximation for less smooth functions, which are arguably very common in practice, at least in the classification setting, where we expect fast transitions near the class boundaries. This shortcoming of gradient descent is purely algorithmic and is not related to the sample complexity of the data. It is also not an intrinsic flaw of the kernel architecture, which is capable of approximating arbitrary functions, but potentially requires a very large number of gradient descent steps. The issue is particularly serious for large data, where direct second-order methods cannot be used due to computational constraints. While many approximate second-order methods are available, they rely on low-rank approximations and, as we discuss below, lead to over-regularization (approximation bias).

In the second part of the paper we propose EigenPro iteration (see http://www.github.com/EigenPro for the code), a direct and simple method to alleviate the slow convergence resulting from the fast eigen-decay of kernel (and covariance) matrices. EigenPro is a preconditioning scheme based on approximately computing a small number of top eigenvectors to modify the spectrum of these matrices. It can also be viewed as constructing a new kernel, specifically optimized for gradient descent. While EigenPro uses approximate second-order information, that information is only employed to modify first-order gradient descent, leading to the same mathematical solution as gradient descent (without introducing a bias). EigenPro is also fully compatible with SGD, using a low-rank preconditioner with low overhead per iteration. We analyze the step size in the SGD setting and provide a range of experimental results for different kernels and parameter settings, showing five- to 30-fold acceleration over standard methods such as Pegasos [SSSSC11]. For large data, when the computational budget is limited, that acceleration translates into significantly improved accuracy. In particular, we are able to improve or match the state-of-the-art results reported for large datasets in the kernel literature with only a small fraction of their computational budget.

2 Gradient descent for shallow methods

Shallow methods.
In the context of this paper, shallow methods denote the family of algorithms consisting of a (linear or non-linear) feature map φ : R^N → H to a (finite or infinite-dimensional) Hilbert space H, followed by a linear regression/classification algorithm. This is a simple yet powerful setting amenable to theoretical analysis. In particular, it includes the class of kernel methods, where H is a Reproducing Kernel Hilbert Space (RKHS).

Linear regression. Consider n labeled data points {(x_1, y_1), ..., (x_n, y_n)} ⊂ H × R. To simplify the notation, let us assume that the feature map has already been applied to the data, i.e. x_i = φ(z_i). Least squares linear regression aims to recover the parameter vector α* that minimizes the empirical loss, α* = arg min_{α∈H} L(α), where L(α) = (1/n) Σ_{i=1}^n (⟨α, x_i⟩_H − y_i)². When α* is not uniquely defined, we choose the smallest norm solution. Minimizing the empirical loss is related to solving a linear system of equations. Define the data matrix X = (x_1, ..., x_n)^T and the label vector y = (y_1, ..., y_n)^T, as well as the (non-centralized) covariance matrix/operator H = (1/n) Σ_{i=1}^n x_i x_i^T. Rewriting the loss as L(α) = (1/n) ||Xα − y||², and using that ∇L(α)|_{α=α*} = 0, minimizing L(α) is equivalent to solving the linear system

H α − b = 0   (1)

with b = (1/n) X^T y. When d = dim(H) < ∞, the time complexity of solving the linear system in Eq. 1 directly (using Gaussian elimination or other methods typically employed in practice) is O(d³). For kernel methods we frequently have d = ∞. Instead of solving Eq. 1, one solves the dual n × n system Kα − y = 0, where K = [k(z_i, z_j)]_{i,j=1,...,n} is the kernel matrix; the solution can then be written as Σ_{i=1}^n α_i k(z_i, ·). A direct solution would require O(n³) operations.
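A small numpy illustration of the two routes on synthetic data: the primal d × d system of Eq. 1 and the dual n × n kernel system, here with the linear kernel so that the two solutions can be compared directly. The data and sizes are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 10
    X = rng.normal(size=(n, d))                  # rows are (feature-mapped) points
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    # Primal: H alpha = b with H = (1/n) X^T X, b = (1/n) X^T y (Eq. 1), O(d^3).
    H = X.T @ X / n
    b = X.T @ y / n
    alpha = np.linalg.solve(H, b)

    # Dual: K beta = y with the linear kernel K = X X^T, O(n^3); K is
    # rank-deficient for n > d, so take the minimum-norm least-squares solution.
    K = X @ X.T
    beta = np.linalg.lstsq(K, y, rcond=None)[0]
    print(np.allclose(alpha, X.T @ beta, atol=1e-6))  # same solution either way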
Gradient descent (GD). While linear systems of equations can be solved by direct methods, such as Gaussian elimination, their computational demands make them impractical for large data. Gradient descent-type methods potentially require only a small number of O(n²) matrix-vector multiplications, a much more manageable task. Moreover, these methods can typically be used in a stochastic setting, reducing computational requirements and allowing for efficient GPU implementations. These schemes are adopted in popular kernel method implementations such as NORMA [KSW04], SDCA [HCL+08], Pegasos [SSSSC11], and DSGD [DXH+14]. For linear systems of equations, gradient descent takes a simple form known as the Richardson iteration [Ric11]. It is given by

α^{(t+1)} = α^{(t)} − η (H α^{(t)} − b).   (2)

It is easy to see that for convergence of α^{(t)} to α* as t → ∞ we need to ensure that ||I − ηH|| < 1, and hence 0 < η < 2/λ_1(H). The explicit formula is

α^{(t+1)} − α* = (I − ηH)^t (α^{(1)} − α*).   (3)

We can now describe the computational reach of gradient descent CR_t, i.e. the set of vectors which can be ε-approximated by gradient descent after t steps: CR_t(ε) = {v ∈ H : ||(I − ηH)^t v|| < ε ||v||}. It is important to note that any α* ∉ CR_t(ε) cannot be ε-approximated by gradient descent in fewer than t + 1 iterations. Note that we typically care about the quality of the solution, ||H α^{(t)} − b||, rather than the error of the parameter estimate, ||α^{(t)} − α*||, which is reflected in the definition. We will assume the initialization α^{(1)} = 0; choosing a different starting point does not change the analysis, unless second-order information is incorporated in the initialization conditions.

To get a better idea of the space CR_t(ε), consider the eigendecomposition of H. Let λ_1 ≥ λ_2 ≥ ... be its eigenvalues and e_1, e_2, ... the corresponding eigenvectors/eigenfunctions, so that H = Σ_i λ_i e_i e_i^T. Writing Eq. 3 in terms of eigendirections yields α^{(t+1)} − α* = Σ_i (1 − ηλ_i)^t ⟨e_i, α^{(1)} − α*⟩ e_i. Hence, putting a_i = ⟨e_i, v⟩, we get CR_t(ε) = {v : Σ_i (1 − ηλ_i)^{2t} a_i² < ε² ||v||²}. Recalling that η < 2/λ_1 and using the fact that (1 − 1/z)^z ≈ 1/e, we see that a necessary condition for v ∈ CR_t(ε) is

(1/3) Σ_{i : λ_i < λ_1/(2t)} a_i² ≤ Σ_i (1 − ηλ_i)^{2t} a_i² < ε² ||v||².

This is a convenient characterization; we will denote CR'_t(ε) = {v : Σ_{i : λ_i < λ_1/(2t)} a_i² < 3 ε² ||v||²} ⊇ CR_t(ε). Another convenient but less precise necessary condition for v ∈ CR_t(ε) is that (1 − 2λ_i/λ_1)^t |⟨e_i, v⟩| < ε ||v||. Noting that log(1 − x) < −x and assuming λ_1 > 2λ_i, we have

t > (λ_1 / (2λ_i)) · log( |⟨e_i, v⟩| / (ε ||v||) ).   (4)

The condition number. We are primarily interested in the case when d is infinite or very large, so that the corresponding operators/matrices are extremely ill-conditioned, with infinite or nearly infinite condition number. In that case, instead of a single condition number, one should consider the properties of the eigenvalue decay.

Gradient descent, smoothness and kernel methods. We now proceed to analyze the computational reach for kernel methods. We start by discussing the case of infinite data (the population case): it is both easier to analyze and allows us to demonstrate the purely computational (non-statistical) nature of the limitations of gradient descent. We will see that when the kernel is smooth, the reach of gradient descent is limited to very smooth, at least infinitely differentiable, functions. Moreover, to approximate a function with less smoothness to within accuracy ε in the L² norm, one needs a super-polynomial (or even exponential) in 1/ε number of iterations of gradient descent. Let the data be sampled from a probability distribution with a smooth density ρ on a compact domain Ω ⊂ R^p. In the case of infinite data, H becomes the integral operator corresponding to a positive definite kernel k(·, ·), given by Kf(x) = ∫_Ω k(x, z) f(z) dρ_z. This is a compact self-adjoint operator with an infinite positive spectrum λ_1, λ_2, ..., with lim_{i→∞} λ_i = 0. We have (see the full paper for discussion and references):

Theorem 1. If k is an infinitely differentiable kernel, the rate of eigenvalue decay is super-polynomial, i.e. λ_i = O(i^{−P}) for every P ∈ N. Moreover, if k is the Gaussian kernel, there exist constants C, C' > 0 such that for all large enough i, λ_i < C exp(−C' i^{1/p}).

The computational reach of kernel methods. Consider the eigenfunctions of K, K e_i = λ_i e_i, which form an orthonormal basis for L²(Ω). We can write a function f ∈ L²(Ω) as f = Σ_{i=1}^∞ a_i e_i, with ||f||²_{L²} = Σ_{i=1}^∞ a_i². We can now describe the reach of kernel methods with a smooth kernel (in the infinite data setting). Specifically, functions which can be approximated in a polynomial number of iterations must have super-polynomial coefficient decay.
Let us now consider a simple but important special case, where the reach can be analyzed very explicitly. Let ? be a circle with the uniform measure, or, equivalently, consider periodic functions on the interval [0, 2?]. Let ks (x, z) be the heat kernel on the circle  [Ros97]. This kernel is very close to the Gaussian kernel 2 1 ks (x, z) ? ?2?s exp ? (x?z) . The eigenfunctions ej of the integral operator K correspond4s ing to ks (x, z) are simply the Fourier harmonics sin jx and cos jx. The corresponding eigenvalues 2 are {1, e?s , e?s , e?4s e?4s , . . . , e?bj/2+1c s , . . .}. Given a function f on [0, 2?], we can write its P, ? Fourier series f = j=0 aj ej . A direct computation shows that for any f ? CRt (), we have ? P ? 2 a2 < 32 kvk . We see that the space f ? CRt () is ?frozen" as 2 ln 2ts grows i> 2 sln 2t i extremely slowly as the number of iterations t increases. As a simple example consider the Heaviside step function f (x) (on a circle), taking 1 and ?1P values for x ? (0, ?] and x ? (?, 2?], respectively. The step function can be written as f (x) = ?4 j=1,3,... 1j sin(jx). From the analysis above, we need O(exp( s2 )) iterations of gradient descent to obtain an -approximation to the function. It is important to note that the Heaviside step function is a rather natural example, especially in the classification setting, where it represents the simplest two-class classification problem. The situation is not much better for functions with more smoothness unless they happen to be extremely smooth with super-exponential Fourier component decay. In contrast, a direct computation of inner products hf, ei i yields exact function recovery for any function in L2 ([0, 2?]) using the amount of computation equivalent to just one step of gradient descent. Thus, we see that the gradient descent is an extremely inefficient way to recover Fourier series for a general periodic function.  The situation is only mildly improved in dimension d, where the span of at most O? (log t)d/2 eigenfunctions of a Gaussian  kernel or O t1/p eigenfunctions of an arbitrary p-differentiable kernel can be approximated in t iterations. The discussion above shows that the gradient descent with a smooth kernel can be viewed as a heavy regularization of the target function. It is essentially a band-limited approximation no more than O(ln t) Fourier harmonics. While regularization is often desirable from a generalization/finite sample point of view , especially when the number of data points is small, the bias resulting from the application of the gradient descent algorithm cannot be overcome in a realistic number of iterations unless target functions are extremely smooth or the kernel itself is not infinitely differentiable. Remark: Rate of convergence vs statistical fit. Note that we can improve convergence by changing the shape parameter of the kernel, i.e. making it more ?peaked? (e.g., decreasing the bandwidth s in the definition of the Gaussian kernel) While that does not change the exponential nature of the asymptotics of the eigenvalues, it slows their decay. Unfortunately improved convergence comes at the price of overfitting. In particular, for finite data, using a very narrow Gaussian kernel results in an approximation to the 1-NN classifier, a suboptimal method which is up to a factor of two inferior to the Bayes optimal classifier in the binary classification case asymptotically. Finite sample effects, regularization and early stopping. 
It is well known (e.g., [B+05, RBV10]) that the top eigenvalues of kernel matrices approximate the eigenvalues of the underlying integral operators. Therefore the computational obstructions encountered in the infinite case persist whenever the data set is large enough. Note that for a kernel method, t iterations of gradient descent for n data points require t · n² operations. Thus gradient descent is computationally pointless unless t ≪ n, which would allow us to fit only about O(log t) eigenvectors. In practice we need t to be much smaller than n, say t < 1000.

At this point we should contrast our conclusions with the important analysis of early stopping for gradient descent provided in [YRC07] (see also [RWY14, CARR16]). The authors analyze gradient descent for kernel methods, obtaining an optimal number of iterations of the form t = n^θ, θ ∈ (0, 1). That seems to contradict our conclusion that a very large, potentially exponential, number of iterations may be needed to guarantee convergence. The apparent contradiction stems from the assumption in [YRC07] that the regression function f* belongs to the range of some power of the kernel operator K. For an infinitely differentiable kernel, that implies super-polynomial spectral decay (a_i = O(λ_i^N) for any N > 0). In particular, it implies that f* belongs to every Sobolev space. We do not typically expect such a high degree of smoothness in practice, particularly in classification problems, where the Heaviside step function seems to be a reasonable model. In particular, we expect sharp transitions of label probabilities across class boundaries to be typical for many classification datasets. These areas of near-discontinuity necessarily result in slow decay of Fourier coefficients and require many iterations of gradient descent to approximate.¹

¹ Interestingly, they can lead to lower sample complexity for optimal classifiers (cf. the Tsybakov margin condition [Tsy04]).

To illustrate this point, the table below shows the results of gradient descent for two datasets of 10000 points (see Section 6):

                                     Number of iterations
    Dataset      Metric              1        80       1280     10240    81920
    MNIST-10k    L2 loss (train)     4.07e-1  9.61e-2  2.60e-2  2.36e-3  2.17e-5
                 L2 loss (test)      4.07e-1  9.74e-2  4.59e-2  3.64e-2  3.55e-2
                 c-error (test)      38.50%   7.60%    3.26%    2.39%    2.49%
    HINT-M-10k   L2 loss (train)     8.25e-2  4.58e-2  3.08e-2  1.83e-2  4.21e-3
                 L2 loss (test)      7.98e-2  4.24e-2  3.34e-2  3.14e-2  3.42e-2

The regression error on the training set is roughly inverse to the number of iterations, i.e. every extra bit of precision requires doubling the number of iterations. For comparison, we see that the minimum regression (L2) error on both test sets is achieved at over 10000 iterations. This results in at least cubic computational complexity, equivalent to that of a direct method.

Regularization. Note that typical regularization, e.g. adding λ||f||, results in discarding the information along the directions with small eigenvalues (below λ). While this improves the condition number, it comes at a high cost in terms of over-regularization. In the Fourier analysis example, this is similar to considering band-limited functions with about √(log(1/λ)/s) Fourier components. Even for λ = 10^{−16} (the limit of double precision) and s = 1, we can only fit about 10 Fourier components. We argue that there is little need for explicit regularization for most iterative methods in the big data regimes.
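The band-limiting effect is easy to see numerically. The sketch below tracks the per-eigendirection error factor (1 − ηλ_i)^t from Eq. 3 for a synthetic spectrum with the exponential decay typical of smooth kernels; the spectrum is an illustrative stand-in, not one computed from an actual kernel.

    import numpy as np

    lam = np.exp(-np.arange(10, dtype=float))   # stand-in spectrum lambda_i = e^{-i}
    eta = 1.0 / lam[0]                          # largest admissible-scale step size
    for t in [10, 10**3, 10**5]:
        shrink = (1.0 - eta * lam) ** t         # error factor per direction, Eq. 3
        print(t, int(np.sum(shrink < 0.5)))     # directions at least halved

This prints 3, 8 and 10: each thousand-fold increase in iterations unfreezes only a few more eigendirections, i.e. the count grows like log t, exactly the behavior described above.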
As seen above, one of the key shortcomings of shallow learning methods based on smooth kernels (and their approximations, e.g., Fourier and RBF features) is their fast spectral decay. That suggests modifying the corresponding matrix H by decreasing its top eigenvalues, enabling the algorithm to approximate more target functions in the same number of iterations. Moreover, this can be done in a way compatible with stochastic gradient descent thus obviating the need to materialize full covariance/kernel matrices in memory. Accurate approximation of top eigenvectors can be obtained from a subsample of the data with modest computational expenditure. Combining these observations we propose EigenPro, a low overhead preconditioned Richardson iteration. Preconditioned (stochastic) gradient descent. We will modify the linear system in Eq. 1 with an invertible matrix P , called a left preconditioner. P H? ? P b = 0. Clearly, this modified system and the original system in Eq. 1 have the same solution. The Richardson iteration corresponding to the modified system (preconditioned Richardson iteration) is ?(t+1) = ?(t) ? ?P (H?(t) ? b) (5) It is easy to see that as long as ?kP Hk < 1 it converges to ?? , the solution of the original linear system. Preconditioned SGD can be defined similarly by ? ? ? ? ?P (Hm ? ? bm ) (6) def def 1 1 T T Xm and bm = m ym using sampled mini-batch (Xm , ym ). where we define Hm = m Xm Xm Preconditioning as a linear feature map. It is easy to see that the preconditioned iteration is in fact equivalent to the standard Richardson iteration in Eq. 2 on a dataset transformed with the linear 1 def feature map, ?P (x) = P 2 x. This is a convenient point of view as the transformed data can be stored for future use. It also shows that preconditioning is compatible with most computational methods both in practice and, potentially, in terms of analysis. Linear EigenPro. We will now discuss properties desired to make preconditioned GD/SGD methods effective on large scale problems. Thus for the modified iteration in Eq. 5 we would like to choose P to meet the following targets: (Acceleration) The algorithm should provide high accuracy in a small number of iterations. (Initial cost) The preconditioning matrix P should be accurately computable, without materializing the full covariance matrix. (Cost per iteration) Preconditioning by P should be efficient per iteration in terms of computation and memory. The convergence of the preconditioned algorithm with the along the i-th eigendirection is dependent on the ratio of eigenvalues ?i (P H)/?1 (P H). This leads us to choose the preconditioner P to maximize the ratio ?i (P H)/?1 (P H) for each i. We see that modifying the top eigenvalues of H makes the most difference in convergence. For example, decreasing ?1 improves convergence along all directions, while decreasing any other eigenvalue only speeds up convergence in that 1 Interestingly they can lead to lower sample complexity for optimal classifiers (cf. Tsybakov margin condition [Tsy04]). 5 direction. However, decreasing ?1 below ?2 does not help unless ?2 is decreased as well. Therefore it is natural to decrease the top k eigenvalues to the maximum amount, i.e. to ?k+1 , leading to k Algorithm: EigenPro(X, y, k, m, ?, ?, M ) X def P =I? (1 ? ?k+1 /?i )ei eTi (7) input training data (X, y), number of eigeni=1 directions k, mini-batch size m, step size ?, We see that P -preconditioned iteration increases damping factor ? , subsample size M convergence by a factor ?1 /?k . 
However, exact construction of P involves computing the eigendecomposition of the d × d matrix H, which is not feasible for large data. Instead we use subsampled randomized SVD [HMT11] to obtain an approximate preconditioner P̃_τ = I − Σ_{i=1}^{k} (1 − τ λ̃_{k+1}/λ̃_i) ẽ_i ẽ_iᵀ. Here the algorithm RSVD (detailed in the full paper) computes the approximate top eigenvectors E = (ẽ_1, ..., ẽ_k) and eigenvalues Λ = diag(λ̃_1, ..., λ̃_k) and λ̃_{k+1} for the subsample covariance matrix H_M. We introduce the parameter τ to counter the effect of approximate top eigenvectors "spilling" into the span of the remaining eigensystem. Using τ < 1 is preferable to the obvious alternative of decreasing the step size η, as it does not decrease the step size in the directions nearly orthogonal to the span of (ẽ_1, ..., ẽ_k). That allows the iteration to converge faster in those directions. In particular, when (ẽ_1, ..., ẽ_k) are computed exactly, the step size in other eigendirections will not be affected by the choice of τ. We call SGD with the preconditioner P̃_τ (Eq. 6) EigenPro iteration; see Algorithm EigenPro for details. Moreover, the key step size parameter η can be selected in a theoretically sound way discussed below.

Algorithm: EigenPro(X, y, k, m, η, τ, M)
input: training data (X, y), number of eigen-directions k, mini-batch size m, step size η, damping factor τ, subsample size M
output: weight of the linear model α
1: [E, Λ, λ̃_{k+1}] = RSVD(X, k + 1, M)
2: P = I − E(I − τ λ̃_{k+1} Λ⁻¹) Eᵀ
3: Initialize α ← 0
4: while stopping criteria is False do
5:   (X_m, y_m) ← m rows sampled from (X, y) without replacement
6:   g ← (1/m)(X_mᵀ(X_m α) − X_mᵀ y_m)
7:   α ← α − ηPg
8: end while

Kernel EigenPro. We will now discuss modifications needed to work directly in the RKHS (primal) setting. A positive definite kernel k(·, ·) : R^N × R^N → R implies a feature map from X to an RKHS space H. The feature map can be written as φ : x ↦ k(x, ·), R^N → H. This feature map leads to the learning problem f* = arg min_{f ∈ H} (1/n) Σ_{i=1}^{n} (⟨f, k(x_i, ·)⟩_H − y_i)². Using properties of the RKHS, EigenPro iteration in H becomes f ← f − ηP(K(f) − b), where b = (1/n) Σ_{i=1}^{n} y_i k(x_i, ·) and the covariance operator K = (1/n) Σ_{i=1}^{n} k(x_i, ·) ⊗ k(x_i, ·). The top eigensystem of K forms the preconditioner P = I − Σ_{i=1}^{k} (1 − τ λ_{k+1}(K)/λ_i(K)) e_i(K) ⊗ e_i(K). By the Representer theorem [Aro50], f* admits a representation of the form Σ_{i=1}^{n} α_i k(x_i, ·). Parameterizing the above iteration accordingly and applying some linear algebra leads to the following iteration in a finite-dimensional vector space, α ← α − ηP(Kα − y), where K = [k(x_i, x_j)]_{i,j=1,...,n} is the kernel matrix and the EigenPro preconditioner P is defined using the top eigensystem of K (assume Ke_i = λ_i e_i): P = I − Σ_{i=1}^{k} λ_i⁻¹ (1 − τ λ_{k+1}/λ_i) e_i e_iᵀ. This differs from the linear case (Eq. 7) by an extra factor of 1/λ_i due to the difference between the parameter space of α and the RKHS space.

EigenPro as kernel learning. Another way to view EigenPro is in terms of kernel learning. Assuming that the preconditioner is computed exactly, EigenPro is equivalent to computing the (distribution-dependent) kernel k_EP(x, z) = Σ_{i=1}^{k} λ_{k+1} e_i(x) e_i(z) + Σ_{i=k+1}^{∞} λ_i e_i(x) e_i(z). Notice that the RKHS spaces corresponding to k_EP and k contain the same functions but have different norms. The norm in k_EP is a finite rank modification of the norm in the RKHS corresponding to k, a setting reminiscent of [SNB05] where unlabeled data was used to "warp" the norm for semi-supervised learning. However, in our paper the "warping" is purely for computational efficiency.
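The following sketch illustrates the two computational ingredients above: a subsampled randomized top-eigensystem computation in the spirit of RSVD [HMT11], and the damped low-rank preconditioner applied without ever materializing a d × d matrix. It is a reconstruction under our own naming, not the authors' released implementation.

```python
import numpy as np

# Our reconstruction of the ingredients of Algorithm EigenPro: an RSVD-style
# top eigensystem of the subsample covariance H_M, and the damped
# preconditioner of step 2 applied as a rank-k update at O(kd) cost.
def rsvd_top_eigs(X, k, M, oversample=10, rng=np.random.default_rng(0)):
    Xs = X[rng.choice(X.shape[0], size=M, replace=False)]   # M-row subsample
    Q, _ = np.linalg.qr(Xs.T @ rng.standard_normal((M, k + oversample)))
    _, s, Vt = np.linalg.svd(Xs @ Q / np.sqrt(M), full_matrices=False)
    lam, E = s ** 2, Q @ Vt.T            # eigenpairs of H_M = Xs^T Xs / M
    return E[:, :k], lam[:k], lam[k]     # top-k system plus lambda_{k+1}

def apply_damped_preconditioner(v, E, lam, lam_k1, tau):
    # P v = v - E (I - tau * lam_{k+1} * Lambda^{-1}) E^T v
    return v - E @ ((1.0 - tau * lam_k1 / lam) * (E.T @ v))
```

Only the k eigenvectors (kd floats) are stored, and each application of P adds k dot products per step, matching the per-iteration cost analysis that follows.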
Acceleration. EigenPro can obtain an acceleration factor of up to λ_1/λ_{k+1} over standard gradient descent. That factor assumes full gradient descent and exact computation of the preconditioner. See below for an acceleration analysis in the SGD setting.

Initial cost. To construct the preconditioner P, we perform RSVD to compute the approximate top eigensystem of the covariance H. RSVD has time complexity O(Md log k + (M + d)k²) (see [HMT11]). The subsample size M can be much smaller than the data size n while preserving the accuracy of estimation. In addition, extra kd memory is needed to store the eigenvectors.

Cost per iteration. For standard SGD using d kernel centers (or random Fourier features) and a mini-batch of size m, the computational cost per iteration is O(md). In comparison, EigenPro iteration using top-k eigen-directions costs O(md + kd). Specifically, applying the preconditioner P in EigenPro requires left multiplication by a matrix of rank k. This involves k vector-vector dot products, resulting in k · d additional operations per iteration. These can be implemented efficiently on a GPU.

4 Step Size Selection for EigenPro Preconditioned Methods

We will now discuss the key issue of step size selection for EigenPro iteration. For an iteration involving the covariance matrix H, η = λ_1(H)⁻¹ = ‖H‖⁻¹ results in optimal (within a factor of 2) convergence. This suggests choosing the corresponding step size η = ‖PH‖⁻¹ = λ_{k+1}⁻¹. In practice this will lead to divergence due to (1) approximate computation of eigenvectors and (2) the randomness inherent in SGD. One (costly) possibility is to compute ‖PH_m‖ at every step. As the mini-batch can be assumed to be chosen at random, we propose using a high-probability lower bound on ‖PH_m‖⁻¹ as the step size to guarantee convergence at each iteration.

Linear EigenPro. Consider the EigenPro preconditioned SGD in Eq. 6. For this analysis assume that P is formed by the exact eigenvectors. Interpreting P^(1/2) as a linear feature map as in Section 2 makes P^(1/2) H_m P^(1/2) a random subsample covariance on the dataset XP^(1/2). Using matrix Bernstein [Tro15] yields

Theorem 3. If ‖x‖² ≤ κ for any x ∈ X and λ_{k+1} = λ_{k+1}(H), then with probability at least 1 − δ,

‖PH_m‖ ≤ λ_{k+1} + 2(λ_{k+1} + κ)(3m)⁻¹ ln(2dδ⁻¹) + √(2 λ_{k+1} κ m⁻¹ ln(2dδ⁻¹)).

Kernel EigenPro. For EigenPro iteration in the RKHS, we can bound ‖PK_m‖ with a very similar result based on operator Bernstein [Min17]. Note that the dimension d in Theorem 3 is replaced by the intrinsic dimension [Tro15]. See the arXiv version of this paper for details.

Choice of the step size. In the spectral norm bounds, λ_{k+1} is the dominant term when the mini-batch size m is large. However, in most large-scale settings m is small, and √(2λ_{k+1}κ/m) becomes the dominant term. This suggests choosing step size η ∝ 1/√λ_{k+1}, leading to acceleration on the order of λ_1/√λ_{k+1} over standard (unpreconditioned) SGD. That choice works well in practice.
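A short sketch of the resulting step-size rule; the function transcribes our reconstruction of the Theorem 3 bound and inverts it, with illustrative inputs.

```python
import numpy as np

# A sketch of the step-size rule implied by Theorem 3 (our transcription):
# invert the high-probability bound on ||P H_m|| to get a safe eta.
def eigenpro_step_size(lam_k1, kappa, m, d, delta=0.01):
    log_term = np.log(2.0 * d / delta)
    bound = (lam_k1
             + 2.0 * (lam_k1 + kappa) * log_term / (3.0 * m)
             + np.sqrt(2.0 * lam_k1 * kappa * log_term / m))
    return 1.0 / bound

# For small mini-batches the square-root term dominates, recovering the
# eta ~ 1 / sqrt(lambda_{k+1}) rule discussed above.
```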
5 EigenPro and Related Work

Large scale machine learning imposes fairly specific limitations on optimization methods. The computational budget allocated to the problem must not exceed O(n²) operations, a small number of matrix-vector multiplications. That rules out most direct second order methods, which require O(n³) operations. Approximate second order methods are far more efficient. However, they typically rely on low rank matrix approximation, a strategy which (similarly to regularization) in conjunction with smooth kernels discards information along important eigen-directions with small eigenvalues. On the other hand, first order methods can be slow to converge along eigenvectors with small eigenvalues. An effective method must thus be a hybrid approach using approximate second order information in a first order method. EigenPro is an example of such an approach, as the second order information is used in conjunction with a first order method. The features that make EigenPro effective are as follows:

1. The second order information (eigenvalues and eigenvectors) is computed efficiently from a subsample of the data. Due to the quadratic loss function, that computation needs to be conducted only once. Moreover, the step size can be fixed throughout the iterations.
2. Preconditioning by a low rank modification of the identity matrix results in low overhead per iteration. The update is computed without materializing the full preconditioned covariance matrix.
3. EigenPro iteration converges (mathematically) to the same result even if the second order approximation is not accurate. That makes EigenPro relatively robust to errors in the second order preconditioning term P, in contrast to most approximate second order methods.

Related work: First order optimization methods. Gradient based methods, such as gradient descent (GD) and stochastic gradient descent (SGD), are classical methods [She94, DJS96, BV04, Bis06]. The recent success of neural networks has drawn significant attention to improving and accelerating these methods. Methods like SAGA [RSB12] and SVRG [JZ13] improve stochastic gradient by periodically evaluating the full gradient to achieve variance reduction. Algorithms in [DHS11, TH12, KB14] compute an adaptive step size for each gradient coordinate.

Scalable kernel methods. There is a significant literature on scalable kernel methods, including [KSW04, HCL+08, SSSSC11, TBRS13, DXH+14]. Most of these are first order optimization methods. To avoid the O(n²) computation and memory requirement typically involved in constructing the kernel matrix, they often adopt approximations like RBF features [WS01, QB16, TRVR16] or random Fourier features [RR07, LSS13, DXH+14, TRVR16].

Second order/hybrid optimization methods. Second order methods use the inverse of the Hessian matrix or its approximation to accelerate convergence [SYG07, BBG09, MNJ16, BHNS16, ABH16]. These methods often need to compute the full gradient every iteration [LN89, EM15, ABH16], making them less suitable for large data. [EM15] analyzed a hybrid first/second order method for general convex optimization with a rescaling term based on the top eigenvectors of the Hessian. That can be viewed as preconditioning the Hessian at every GD iteration. A related recent work [GOSS16] analyses a hybrid method designed to accelerate SGD convergence for ridge regression. The data are preprocessed by rescaling points along the top singular vectors of the data matrix. Another second order method, PCG [ACW16], accelerates the convergence of conjugate gradient for large kernel ridge regression using a preconditioner which is the inverse of an approximate covariance generated with random Fourier features. [TRVR16] achieves similar preconditioning effects by solving a linear system involving a subsampled kernel matrix every iteration. While not strictly a preconditioner, Nyström with gradient descent (NYTRO) [CARR16] also improves the condition number. Compared to many of these methods, EigenPro directly addresses the underlying issues of slow convergence without introducing a bias in directions with small eigenvalues.
Additionally, EigenPro incurs only a small overhead per iteration, both in memory and computation.

6 Experimental Results

Computing resource/data/metrics. Experiments were run on a workstation with 128GB main memory, two Intel Xeon(R) E5-2620 CPUs, and one GTX Titan X (Maxwell) GPU. For multiclass datasets, we report classification error (c-error) for binary valued labels and mean squared error (mse) for real valued labels. See the arXiv version for details and more experimental results.

Kernel methods/hyperparameters. For smaller datasets, the direct solution of kernel regularized least squares (KRLS) is used to obtain the reference error. We compare with the primal method Pegasos [SSSSC11]. For even larger datasets, we use Random Fourier Features [RR07] (RF) with SGD as in [DXH+14, TRVR16]. The results of these methods are presented as baselines. For consistent comparison, all iterative methods use a mini-batch of size m = 256. The EigenPro preconditioner is constructed using the top k = 160 eigenvectors of a subsampled dataset of size M = 4800. For EigenPro-RF, we set the damping factor τ = 1/4. For primal EigenPro, τ = 1.

Acceleration for different kernels. The table below presents the number of epochs needed by EigenPro and Pegasos to reach the error of the optimal kernel classifier. We see that EigenPro provides acceleration of 6 to 35 times in terms of the number of epochs required, without any loss of accuracy. The actual acceleration is about 20% less due to the overhead of maintaining and applying a preconditioner.

                          Gaussian        Laplace         Cauchy
Dataset     Size          EigPro  Pega    EigPro  Pega    EigPro  Pega
MNIST       6 × 10⁴       7       77      4       143     7       78
CIFAR-10    5 × 10⁴       5       56      13      136     6       107
SVHN        7 × 10⁴       8       54      14      297     17      191
HINT-S      5 × 10⁴       19      164     15      308     13      126

Comparisons on large datasets. The table below compares EigenPro to Pegasos/SGD-RF on several large datasets for 10 epochs. We see that EigenPro consistently outperforms Pegasos/SGD-RF within a fixed computational budget. Note that we adopt the Gaussian kernel and 2 × 10⁵ random features.

                                  EigenPro        Pegasos         EigenPro-RF     SGD-RF
Dataset    Size      Metric      result  hrs     result  hrs     result  hrs     result  hrs
HINT-S     2 × 10⁵   c-error     10.0%   0.1     11.7%   0.1     10.3%   0.2     11.5%   0.1
TIMIT      1 × 10⁶   c-error     31.7%   3.2     33.0%   2.2     32.6%   1.5     33.3%   1.0
MNIST-8M   1 × 10⁶   c-error     0.8%    3.0     1.1%    2.7     0.8%    0.8     1.0%    0.7
MNIST-8M   8 × 10⁶   c-error     -       -       -       -       0.7%    7.2     0.8%    6.0
HINT-M     1 × 10⁶   mse         2.3e-2  1.9     2.7e-2  1.5     2.4e-2  0.8     2.7e-2  0.6
HINT-M     7 × 10⁶   mse         -       -       -       -       2.1e-2  5.8     2.4e-2  4.1
(hrs = GPU hours)

Comparisons to state-of-the-art. In the table below, we provide a comparison to several large scale kernel results reported in the literature. EigenPro improves or matches performance on each dataset at a much lower computational budget. We note that [MGL+17] achieves error 30.9% on TIMIT using an AWS cluster. That method uses a novel supervised feature selection method and hence is not directly comparable. EigenPro can plausibly further improve the training error using this new feature set.

                       EigenPro (using 1 GTX Titan X)        Reported results
Dataset  Size          error           GPU hrs  epochs       source     error    description
MNIST    1 × 10⁶       0.70%           4.8      16           [ACW16]    0.72%    1.1 hours/189 epochs/1344 AWS vCPUs
MNIST    6.7 × 10⁶     0.80%†          0.8      10           [LML+14]   0.85%    less than 37.5 hours on 1 Tesla K20m
TIMIT    2 × 10⁶       31.7% (32.5%‡)  3.2      10           [HAS+14]   33.5%    512 IBM BlueGene/Q cores
                                                             [TRVR16]   33.5%    7.5 hours on 1024 AWS vCPUs
SUSY     4 × 10⁶       19.8%           0.1      0.6          [CAS16]    ≈ 20%    0.6 hours on IBM POWER8

† The result is produced by EigenPro-RF using 1 × 10⁶ data points.
‡ Our TIMIT training set (1 × 10⁶ data points) was generated following a standard practice in the speech community [PGB+11] by taking 10ms frames and dropping the glottal stop "q" labeled frames in the core test set (1.2% of the total test set). [HAS+14] adopts 5ms frames, resulting in 2 × 10⁶ data points, and keeps the glottal stop "q". In the worst case scenario for EigenPro, if we mislabel all glottal stops, the corresponding frame-level error increases from 31.7% to 32.5%.

Acknowledgements. We thank Adam Stiff, Eric Fosler-Lussier, Jitong Chen, and Deliang Wang for providing the TIMIT and HINT datasets. This work is supported by NSF IIS-1550757 and NSF CCF-1422830. Part of this work was completed while the second author was at the Simons Institute at Berkeley. In particular, he thanks Suvrit Sra, Daniel Hsu, Peter Bartlett, and Stefanie Jegelka for many discussions and helpful suggestions.

References

[ABH16] Naman Agarwal, Brian Bullins, and Elad Hazan. Second order stochastic optimization in linear time. arXiv preprint arXiv:1602.03943, 2016.
[ACW16] H. Avron, K. Clarkson, and D. Woodruff. Faster kernel ridge regression using sketching and preconditioning. arXiv preprint arXiv:1611.03220, 2016.
[Aro50] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337–404, 1950.
[B+05] Mikio Ludwig Braun et al. Spectral properties of the kernel matrix and their relation to kernel methods in machine learning. PhD thesis, University of Bonn, 2005.
[BBG09] Antoine Bordes, Léon Bottou, and Patrick Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. JMLR, 10:1737–1754, 2009.
[BHNS16] Richard H Byrd, SL Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-Newton method for large-scale optimization. SIAM Journal on Optimization, 26(2):1008–1031, 2016.
[Bis06] Christopher M Bishop. Pattern recognition. Machine Learning, 128, 2006.
[BV04] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[CARR16] Raffaello Camoriano, Tomás Angles, Alessandro Rudi, and Lorenzo Rosasco. NYTRO: When subsampling meets early stopping. In AISTATS, pages 1403–1411, 2016.
[CAS16] Jie Chen, Haim Avron, and Vikas Sindhwani. Hierarchically compositional kernels for scalable nonparametric learning. arXiv preprint arXiv:1608.00860, 2016.
[CK11] Chih-Chieh Cheng and Brian Kingsbury. Arccosine kernels: Acoustic modeling with infinite neural networks. In ICASSP, pages 5200–5203. IEEE, 2011.
[DHS11] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121–2159, 2011.
[DJS96] John E Dennis Jr and Robert B Schnabel. Numerical methods for unconstrained optimization and nonlinear equations. SIAM, 1996.
[DXH+14] B. Dai, B. Xie, N. He, Y. Liang, A. Raj, M. Balcan, and L. Song. Scalable kernel methods via doubly stochastic gradients. In NIPS, pages 3041–3049, 2014.
[EM15] M. Erdogdu and A. Montanari. Convergence rates of sub-sampled Newton methods. In NIPS, 2015.
[GOSS16] Alon Gonen, Francesco Orabona, and Shai Shalev-Shwartz. Solving ridge regression using sketched preconditioned SVRG. In ICML, pages 1397–1405, 2016.
[HAS+14] Po-Sen Huang, Haim Avron, Tara N Sainath, Vikas Sindhwani, and Bhuvana Ramabhadran. Kernel methods match deep neural networks on TIMIT. In ICASSP, pages 205–209. IEEE, 2014.
[HCL+08] Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S Sathiya Keerthi, and Sellamanickam Sundararajan. A dual coordinate descent method for large-scale linear SVM.
In Proceedings of the 25th International Conference on Machine Learning, pages 408–415. ACM, 2008.
[HMT11] Nathan Halko, Per-Gunnar Martinsson, and Joel A Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[JZ13] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, pages 315–323, 2013.
[KB14] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[KSW04] Jyrki Kivinen, Alexander J Smola, and Robert C Williamson. Online learning with kernels. Signal Processing, IEEE Transactions on, 52(8):2165–2176, 2004.
[LML+14] Zhiyun Lu, Avner May, Kuan Liu, Alireza Bagheri Garakani, Dong Guo, Aurélien Bellet, Linxi Fan, Michael Collins, Brian Kingsbury, Michael Picheny, et al. How to scale up kernel methods to be as good as deep neural nets. arXiv preprint arXiv:1411.4000, 2014.
[LN89] Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503–528, 1989.
[LSS13] Quoc Le, Tamás Sarlós, and Alex Smola. Fastfood - approximating kernel expansions in loglinear time. In Proceedings of the International Conference on Machine Learning, 2013.
[MGL+17] Avner May, Alireza Bagheri Garakani, Zhiyun Lu, Dong Guo, Kuan Liu, Aurélien Bellet, Linxi Fan, Michael Collins, Daniel Hsu, Brian Kingsbury, et al. Kernel approximation methods for speech recognition. arXiv preprint arXiv:1701.03577, 2017.
[Min17] Stanislav Minsker. On some extensions of Bernstein's inequality for self-adjoint operators. Statistics & Probability Letters, 2017.
[MNJ16] P. Moritz, R. Nishihara, and M. Jordan. A linearly-convergent stochastic L-BFGS algorithm. In AISTATS, 2016.
[PGB+11] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, et al. The Kaldi speech recognition toolkit. In ASRU, 2011.
[QB16] Qichao Que and Mikhail Belkin. Back to the future: Radial basis function networks revisited. In AISTATS, pages 1375–1383, 2016.
[RBV10] Lorenzo Rosasco, Mikhail Belkin, and Ernesto De Vito. On learning with integral operators. Journal of Machine Learning Research, 11(Feb):905–934, 2010.
[Ric11] Lewis Fry Richardson. The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam. Philosophical Transactions of the Royal Society of London, Series A, 210:307–357, 1911.
[Ros97] Steven Rosenberg. The Laplacian on a Riemannian manifold: an introduction to analysis on manifolds. Number 31. Cambridge University Press, 1997.
[RR07] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pages 1177–1184, 2007.
[RSB12] Nicolas L Roux, Mark Schmidt, and Francis R Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems, pages 2663–2671, 2012.
[RWY14] G. Raskutti, M. Wainwright, and B. Yu. Early stopping and non-parametric regression: an optimal data-dependent stopping rule. JMLR, 15(1):335–366, 2014.
[SC08] Ingo Steinwart and Andreas Christmann. Support vector machines. Springer Science & Business Media, 2008.
[She94] Jonathan Richard Shewchuk. An introduction to the conjugate gradient method without the agonizing pain, 1994.
[SNB05] Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin.
Beyond the point cloud: from transductive to semi-supervised learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 824–831. ACM, 2005.
[SSSSC11] Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3–30, 2011.
[STC04] John Shawe-Taylor and Nello Cristianini. Kernel methods for pattern analysis. Cambridge University Press, 2004.
[SYG07] Nicol N Schraudolph, Jin Yu, and Simon Günter. A stochastic quasi-Newton method for online convex optimization. In AISTATS, pages 436–443, 2007.
[TBRS13] Martin Takáč, Avleen Singh Bijral, Peter Richtárik, and Nati Srebro. Mini-batch primal and dual methods for SVMs. In ICML (3), pages 1022–1030, 2013.
[TH12] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
[Tro15] Joel A Tropp. An introduction to matrix concentration inequalities. arXiv preprint arXiv:1501.01571, 2015.
[TRVR16] S. Tu, R. Roelofs, S. Venkataraman, and B. Recht. Large scale kernel learning using block coordinate descent. arXiv preprint arXiv:1602.05310, 2016.
[Tsy04] Alexandre B Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, pages 135–166, 2004.
[WS01] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In NIPS, pages 682–688, 2001.
[YRC07] Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315, 2007.
End-to-End Differentiable Proving

Tim Rocktäschel, University of Oxford, [email protected]
Sebastian Riedel, University College London & Bloomsbury AI, [email protected]

Abstract

We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.

1 Introduction

Current state-of-the-art methods for automated Knowledge Base (KB) completion use neural link prediction models to learn distributed vector representations of symbols (i.e. subsymbolic representations) for scoring fact triples [1–7]. Such subsymbolic representations enable these models to generalize to unseen facts by encoding similarities: If the vector of the predicate symbol grandfatherOf is similar to the vector of the symbol grandpaOf, both predicates likely express a similar relation. Likewise, if the vector of the constant symbol LISA is similar to MAGGIE, similar relations likely hold for both constants (e.g. they live in the same city, have the same parents etc.).

This simple form of reasoning based on similarities is remarkably effective for automatically completing large KBs. However, in practice it is often important to capture more complex reasoning patterns that involve several inference steps. For example, if ABE is the father of HOMER and HOMER is a parent of BART, we would like to infer that ABE is a grandfather of BART. Such transitive reasoning is inherently hard for neural link prediction models as they only learn to score facts locally. In contrast, symbolic theorem provers like Prolog [8] enable exactly this type of multi-hop reasoning. Furthermore, Inductive Logic Programming (ILP) [9] builds upon such provers to learn interpretable rules from data and to exploit them for reasoning in KBs. However, symbolic provers lack the ability to learn subsymbolic representations and similarities between them from large KBs, which limits their ability to generalize to queries with similar but not identical symbols. While the connection between logic and machine learning has been addressed by statistical relational learning approaches, these models traditionally do not support reasoning with subsymbolic representations (e.g. [10]), and when using subsymbolic representations they are not trained end-to-end from training data (e.g. [11–13]).
Neural multi-hop reasoning models [14–18] address the aforementioned limitations to some extent by encoding reasoning chains in a vector space or by iteratively refining subsymbolic representations of a question before comparison with answers. In many ways, these models operate like basic theorem provers, but they lack two of their most crucial ingredients: interpretability and straightforward ways of incorporating domain-specific knowledge in the form of rules. Our approach to this problem is inspired by recent neural network architectures like Neural Turing Machines [19], Memory Networks [20], Neural Stacks/Queues [21, 22], Neural Programmer [23], Neural Programmer-Interpreters [24], Hierarchical Attentive Memory [25] and the Differentiable Forth Interpreter [26]. These architectures replace discrete algorithms and data structures by end-to-end differentiable counterparts that operate on real-valued vectors. At the heart of our approach is the idea to translate this concept to basic symbolic theorem provers, and hence combine their advantages (multi-hop reasoning, interpretability, easy integration of domain knowledge) with the ability to reason with vector representations of predicates and constants. Specifically, we keep variable binding symbolic but compare symbols using their subsymbolic vector representations.

Concretely, we introduce Neural Theorem Provers (NTPs): end-to-end differentiable provers for basic theorems formulated as queries to a KB. We use Prolog's backward chaining algorithm as a recipe for recursively constructing neural networks that are capable of proving queries to a KB using subsymbolic representations. The success score of such proofs is differentiable with respect to vector representations of symbols, which enables us to learn such representations for predicates and constants in ground atoms, as well as parameters of function-free first-order logic rules of predefined structure. By doing so, NTPs learn to place representations of similar symbols in close proximity in a vector space and to induce rules given prior assumptions about the structure of logical relationships in a KB such as transitivity. Furthermore, NTPs can seamlessly reason with provided domain-specific rules. As NTPs operate on distributed representations of symbols, a single hand-crafted rule can be leveraged for many proofs of queries with symbols that have a similar representation. Finally, NTPs demonstrate a high degree of interpretability as they induce latent rules that we can decode to human-readable symbolic rules.

Our contributions are threefold: (i) we present the construction of NTPs inspired by Prolog's backward chaining algorithm and a differentiable unification operation using subsymbolic representations, (ii) we propose optimizations to this architecture by joint training with a neural link prediction model, batch proving, and approximate gradient calculation, and (iii) we experimentally show that NTPs can learn representations of symbols and function-free first-order rules of predefined structure, enabling them to learn to perform multi-hop reasoning on benchmark KBs and to outperform ComplEx [7], a state-of-the-art neural link prediction model, on three out of four KBs.

2 Background

In this section, we briefly introduce the syntax of KBs that we use in the remainder of the paper. We refer the reader to [27, 28] for a more in-depth introduction.
An atom consists of a predicate symbol and a list of terms. We will use lowercase names to refer to predicate and constant symbols (e.g. fatherOf and BART), and uppercase names for variables (e.g. X, Y, Z). As we only consider function-free first-order logic rules, a term can only be a constant or a variable. For instance, [grandfatherOf, Q, BART] is an atom with the predicate grandfatherOf, and two terms, the variable Q and the constant BART. We consider rules of the form H :– B, where the body B is a possibly empty conjunction of atoms represented as a list, and the head H is an atom. We call a rule with no free variables a ground rule. All variables are universally quantified. We call a ground rule with an empty body a fact. A substitution set ψ = {X₁/t₁, ..., Xₙ/tₙ} is an assignment of variable symbols Xᵢ to terms tᵢ, and applying substitutions to an atom replaces all occurrences of variables Xᵢ by their respective term tᵢ.

Given a query (also called goal) such as [grandfatherOf, Q, BART], we can use Prolog's backward chaining algorithm to find substitutions for Q [8] (see Appendix A for pseudocode). On a high level, backward chaining is based on two functions called OR and AND. OR iterates through all rules (including rules with an empty body, i.e., facts) in a KB and unifies the goal with the respective rule head, thereby updating a substitution set. It is called OR since any successful proof suffices (disjunction). If unification succeeds, OR calls AND to prove all atoms (subgoals) in the body of the rule. To prove the subgoals of a rule body, AND first applies substitutions to the first atom, which is then proven by again calling OR, before proving the remaining subgoals by recursively calling AND. This function is called AND as all atoms in the body need to be proven together (conjunction). As an example, a rule such as [grandfatherOf, X, Y] :– [[fatherOf, X, Z], [parentOf, Z, Y]] is used in OR for translating a goal like [grandfatherOf, Q, BART] into the subgoals [fatherOf, Q, Z] and [parentOf, Z, BART] that are subsequently proven by AND.¹

3 Differentiable Prover

In the following, we describe the recursive construction of NTPs: neural networks for end-to-end differentiable proving that allow us to calculate the gradient of proof successes with respect to vector representations of symbols. We define the construction of NTPs in terms of modules similar to dynamic neural module networks [29]. Each module takes as inputs discrete objects (atoms and rules) and a proof state, and returns a list of new proof states (see Figure 1 for a graphical representation). A proof state S = (Sψ, Sρ) is a tuple consisting of the substitution set Sψ constructed in the proof so far and a neural network Sρ that outputs a real-valued success score of a (partial) proof. While discrete objects and the substitution set are only used during construction of the neural network, once the network is constructed a continuous proof success score can be calculated for many different goals at training and test time. To summarize, modules are instantiated by discrete objects and the substitution set. They construct a neural network representing the (partial) proof success score and recursively instantiate submodules to continue the proof.

Figure 1: A module is mapping an upstream proof state (left) to a list of new proof states (right), thereby extending the substitution set Sψ and adding nodes to the computation graph of the neural network Sρ representing the proof success.
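To make the OR/AND recursion of Section 2 concrete before its differentiable counterpart is constructed, here is a minimal symbolic backward-chaining sketch in Python (our illustration; the paper's actual pseudocode is in Appendix A). Variables are represented as single uppercase letters, a simplification of the paper's convention.

```python
# A minimal symbolic backward-chaining sketch (illustration only). Atoms are
# tuples, a KB is a list of (head, body) rules, facts have empty bodies, and
# variables are single uppercase letters (constants like BART stay multi-letter).
def is_var(term):
    return isinstance(term, str) and len(term) == 1 and term.isupper()

def substitute(atom, subs):
    return tuple(subs.get(t, t) for t in atom)

def unify(head, goal, subs):
    if len(head) != len(goal):
        return None                     # arity mismatch
    subs = dict(subs)
    for h, g in zip(head, goal):
        h, g = subs.get(h, h), subs.get(g, g)
        if is_var(h):
            subs[h] = g
        elif is_var(g):
            subs[g] = h
        elif h != g:                    # symbolic comparison: must be equal
            return None
    return subs

def OR(kb, goal, subs, depth):          # try every rule whose head unifies
    for head, body in kb:
        new = unify(head, goal, subs)
        if new is not None:
            yield from AND(kb, body, new, depth)

def AND(kb, body, subs, depth):         # prove all subgoals of a rule body
    if not body:
        yield subs
    elif depth > 0:
        for s in OR(kb, substitute(body[0], subs), subs, depth - 1):
            yield from AND(kb, body[1:], s, depth)

kb = [(("fatherOf", "ABE", "HOMER"), ()),
      (("parentOf", "HOMER", "BART"), ()),
      (("grandfatherOf", "X", "Y"), (("fatherOf", "X", "Z"), ("parentOf", "Z", "Y")))]
print(list(OR(kb, ("grandfatherOf", "Q", "BART"), {}, 2)))  # binds Q to ABE
```

The modules below keep exactly this recursive structure but replace the hard equality test in unify with a soft, differentiable comparison.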
The shared signature of modules is D × S → Sᴺ, where D is a domain that controls the construction of the network, S is the domain of proof states, and N is the number of output proof states. Furthermore, let Sψ denote the substitution set of the proof state S and let Sρ denote the neural network for calculating the proof success. We use pseudocode in the style of a functional programming language to define the behavior of modules and auxiliary functions. In particular, we make use of pattern matching to check for properties of arguments passed to a module. We denote sets by Euler script letters (e.g. E), lists by small capital letters (e.g. E), lists of lists by blackboard bold letters (e.g. E), and we use : to refer to prepending an element to a list (e.g. e : E or E : E). While an atom is a list of a predicate symbol and terms, a rule can be seen as a list of atoms and thus a list of lists where the head of the list is the rule head.²

3.1 Unification Module

Unification of two atoms, e.g., a goal that we want to prove and a rule head, is a central operation in backward chaining. Two non-variable symbols (predicates or constants) are checked for equality and the proof can be aborted if this check fails. However, we want to be able to apply rules even if symbols in the goal and head are not equal but similar in meaning (e.g. grandfatherOf and grandpaOf), and thus replace symbolic comparison with a computation that measures the similarity of both symbols in a vector space. The module unify updates a substitution set and creates a neural network for comparing the vector representations of non-variable symbols in two sequences of terms. The signature of this module is L × L × S → S, where L is the domain of lists of terms. unify takes two atoms represented as lists of terms and an upstream proof state, and maps these to a new proof state (substitution set and proof success). To this end, unify iterates through the lists of terms of the two atoms and compares their symbols. If one of the symbols is a variable, a substitution is added to the substitution set. Otherwise, the vector representations of the two non-variable symbols are compared using a Radial Basis Function (RBF) kernel [30], where μ is a hyperparameter that we set to 1/√2 in our experiments. The following pseudocode implements unify. Note that "_" matches every argument and that the order matters, i.e., if arguments match a line, subsequent lines are not evaluated.

1. unify_θ([ ], [ ], S) = S
2. unify_θ([ ], _, _) = FAIL
3. unify_θ(_, [ ], _) = FAIL
4. unify_θ(h : H, g : G, S) = unify_θ(H, G, S′), where S′ = (S′ψ, S′ρ) with

   S′ψ = Sψ ∪ {h/g}  if h ∈ V;  Sψ ∪ {g/h}  if g ∈ V, h ∉ V;  Sψ  otherwise

   S′ρ = min(Sρ, z), where z = exp(−‖θ_h: − θ_g:‖² / (2μ²)) if h, g ∉ V, and z = 1 otherwise.

Here, S′ refers to the new proof state, V refers to the set of variable symbols, h/g is a substitution from the variable symbol h to the symbol g, and θ_g: denotes the embedding lookup of the non-variable symbol with index g. unify is parameterized by an embedding matrix θ ∈ R^(|Z| × k), where Z is the set of non-variable symbols and k is the dimension of vector representations of symbols. Furthermore, FAIL represents a unification failure due to mismatching arity of two atoms.

¹For clarity, we will sometimes omit lists when writing rules and atoms, e.g., grandfatherOf(X, Y) :– fatherOf(X, Z), parentOf(Z, Y).
²For example, [[grandfatherOf, X, Y], [fatherOf, X, Z], [parentOf, Z, Y]].
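A minimal NumPy sketch of this unification module, with plain floats standing in for nodes of the computation graph; `emb` and the single-uppercase-letter variable convention are our illustrative choices, not the paper's code.

```python
import numpy as np

# A sketch of the differentiable unify module: variable bindings stay symbolic
# while non-variable symbols are compared by an RBF kernel over embeddings
# (mu = 1/sqrt(2)). Plain floats stand in for computation-graph nodes.
FAIL = None

def is_var(term):
    return isinstance(term, str) and len(term) == 1 and term.isupper()

def rbf(u, v, mu=1.0 / np.sqrt(2.0)):
    return float(np.exp(-np.sum((u - v) ** 2) / (2.0 * mu ** 2)))

def unify(h_terms, g_terms, state, emb):
    if len(h_terms) != len(g_terms):
        return FAIL                     # mismatching arity aborts this branch
    subs, score = dict(state[0]), state[1]
    for h, g in zip(h_terms, g_terms):
        if is_var(h):
            subs[h] = g
        elif is_var(g):
            subs[g] = h
        else:
            score = min(score, rbf(emb[h], emb[g]))  # soft symbol comparison
    return (subs, score)

rng = np.random.default_rng(0)
emb = {s: rng.standard_normal(4) for s in ["grandpaOf", "ABE", "BART"]}
emb["s"] = emb["grandpaOf"] + 0.01      # query predicate close to grandpaOf
emb["i"] = emb["BART"] + 0.01           # query constant close to BART
print(unify(["grandpaOf", "ABE", "BART"], ["s", "Q", "i"], ({}, 0.7), emb))
# -> ({'Q': 'ABE'}, 0.7): capped by the upstream score, as in the example below
```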
Once a failure is reached, we abort the creation of the neural network for this branch of proving. In addition, we constrain proofs to be cycle-free by checking whether a variable is already bound. Note that this is a simple heuristic that prohibits applying the same non-ground rule twice. There are more sophisticated ways for finding and avoiding cycles in a proof graph such that the same rule can still be applied multiple times (e.g. [31]), but we leave this for future work.

Example. Assume that we are unifying two atoms [grandpaOf, ABE, BART] and [s, Q, i] given an upstream proof state S = (∅, ρ), where the latter input atom has placeholders for a predicate s and a constant i, and the neural network ρ would output 0.7 when evaluated. Furthermore, assume grandpaOf, ABE and BART represent the indices of the respective symbols in a global symbol vocabulary. Then, the new proof state constructed by unify is:

unify_θ([grandpaOf, ABE, BART], [s, Q, i], (∅, ρ)) = (S′ψ, S′ρ) = ({Q/ABE}, min(ρ, exp(−‖θ_grandpaOf: − θ_s:‖²), exp(−‖θ_BART: − θ_i:‖²)))

Thus, the output score of the neural network S′ρ will be high if the subsymbolic representation of the input s is close to grandpaOf and the input i is close to BART. However, the score cannot be higher than 0.7 due to the upstream proof success score in the forward pass of the neural network ρ. Note that in addition to extending the neural network ρ to S′ρ, this module also outputs a substitution set {Q/ABE} at graph creation time that will be used to instantiate submodules.

3.2 OR Module

Based on unify, we now define the or module, which attempts to apply rules in a KB. The signature of or is L × N × S → Sᴺ, where L is the domain of goal atoms and N is the domain of integers used for specifying the maximum proof depth of the neural network. Furthermore, N is the number of possible output proof states for a goal of a given structure and a provided KB.³ We implement or as

1. or^K_θ(G, d, S) = [S′ | S′ ∈ and^K_θ(B, d, unify_θ(H, G, S)) for H :– B ∈ K]

where H :– B denotes a rule in a given KB K with a head atom H and a list of body atoms B. In contrast to the symbolic OR method, the or module is able to use the grandfatherOf rule above for a query involving grandpaOf, provided that the subsymbolic representations of both predicates are similar as measured by the RBF kernel in the unify module.

Example. For a goal [s, Q, i], or would instantiate an and submodule based on the rule [grandfatherOf, X, Y] :– [[fatherOf, X, Z], [parentOf, Z, Y]] as follows:

or^K_θ([s, Q, i], d, S) = [S′ | S′ ∈ and^K_θ([[fatherOf, X, Z], [parentOf, Z, Y]], d, ({X/Q, Y/i}, S′ρ)), ...]

where ({X/Q, Y/i}, S′ρ) is the result of unify.

³The creation of the neural network is dependent on the KB but also on the structure of the goal. For instance, the goal s(Q, i) would result in a different neural network, and hence a different number of output proof states, than s(i, j).
3.3 AND Module

For implementing and we first define an auxiliary function called substitute, which applies substitutions to variables in an atom if possible. This is realized via

1. substitute([ ], _) = [ ]
2. substitute(g : G, ψ) = (x if g/x ∈ ψ, g otherwise) : substitute(G, ψ)

For example, substitute([fatherOf, X, Z], {X/Q, Y/i}) results in [fatherOf, Q, Z]. The signature of and is L × N × S → Sᴺ, where L is the domain of lists of atoms and N is the number of possible output proof states for a list of atoms with a known structure and a provided KB. This module is implemented as

1. and^K_θ(_, _, FAIL) = FAIL
2. and^K_θ(_, 0, _) = FAIL
3. and^K_θ([ ], _, S) = S
4. and^K_θ(G : G, d, S) = [S″ | S″ ∈ and^K_θ(G, d, S′) for S′ ∈ or^K_θ(substitute(G, Sψ), d − 1, S)]

where the first two lines define the failure of a proof, either because of an upstream unification failure that has been passed from the or module (line 1), or because the maximum proof depth has been reached (line 2). Line 3 specifies a proof success, i.e., the list of subgoals is empty before the maximum proof depth has been reached. Lastly, line 4 defines the recursion: the first subgoal G is proven by instantiating an or module after substitutions are applied, and every resulting proof state S′ is used for proving the remaining subgoals G by again instantiating and modules.

Example. Continuing the example from Section 3.2, the and module would instantiate submodules as follows:

and^K_θ([[fatherOf, X, Z], [parentOf, Z, Y]], d, ({X/Q, Y/i}, S′ρ)) = [S″ | S″ ∈ and^K_θ([[parentOf, Z, Y]], d, S′) for S′ ∈ or^K_θ([fatherOf, Q, Z], d − 1, ({X/Q, Y/i}, S′ρ))]

where ({X/Q, Y/i}, S′ρ) is the result of unify in or, and [fatherOf, Q, Z] is the result of substitute.

3.4 Proof Aggregation

Finally, we define the overall success score of proving a goal G using a KB K with parameters θ as

ntp^K_θ(G, d) = max { Sρ | S ∈ or^K_θ(G, d, (∅, 1)), S ≠ FAIL }

where d is a predefined maximum proof depth and the initial proof state is set to an empty substitution set and a proof success score of 1.

Example. Figure 2 illustrates an exemplary NTP computation graph constructed for a toy KB. Note that such an NTP is constructed once before training, and can then be used for proving goals of the structure [s, i, j] at training and test time, where s is the index of an input predicate and i and j are indices of input constants. Final proof states which are used in proof aggregation are underlined.

Figure 2: Exemplary construction of an NTP computation graph for a toy knowledge base (fatherOf(ABE, HOMER); parentOf(HOMER, BART); grandfatherOf(X, Y) :– fatherOf(X, Z), parentOf(Z, Y)). Indices on arrows correspond to the application of the respective KB rule. Proof states (blue) are subscripted with the sequence of indices of the rules that were applied. Underlined proof states are aggregated to obtain the final proof success. Boxes visualize instantiations of modules (omitted for unify). The proofs S33, S313 and S323 fail due to cycle-detection (the same rule cannot be applied twice).
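Building on the unify sketch above, the or/and modules and proof aggregation can be sketched as follows (again with floats in place of graph nodes; cycle detection, variable dereferencing and batching are omitted for brevity, so this is an illustration rather than the full model).

```python
# A sketch of the or/and modules and proof aggregation, reusing `unify` and
# `FAIL` from the sketch above. Rules are (head, body) pairs with
# list-of-terms atoms; facts have empty bodies.
def substitute(atom, subs):
    return [subs.get(t, t) for t in atom]

def or_(kb, goal, d, state, emb):
    for head, body in kb:
        yield from and_(kb, body, d, unify(head, goal, state, emb), emb)

def and_(kb, body, d, state, emb):
    if state is FAIL or d == 0:         # lines 1-2: failure or depth exhausted
        return
    if not body:                        # line 3: all subgoals proven
        yield state
        return
    goal = substitute(body[0], state[0])
    for s in or_(kb, goal, d - 1, state, emb):   # line 4: recursion
        yield from and_(kb, body[1:], d, s, emb)

def ntp(kb, goal, d, emb):
    # proof aggregation: max success score over all non-failing proof states
    return max((s[1] for s in or_(kb, goal, d, ({}, 1.0), emb)), default=0.0)
```

Given embeddings for all KB symbols in `emb`, calling ntp on the toy KB of Figure 2 with goal ["s", "i", "j"] and d = 2 enumerates the same proof branches and aggregates their scores by a maximum.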
3.5 Neural Inductive Logic Programming

We can use NTPs for ILP by gradient descent instead of a combinatorial search over the space of rules as done, for example, by the First Order Inductive Learner (FOIL) [32]. Specifically, we use the concept of learning from entailment [9] to induce rules that let us prove known ground atoms, but that do not give high proof success scores to sampled unknown ground atoms. Let θ_r:, θ_s:, θ_t: ∈ Rᵏ be representations of some unknown predicates with indices r, s and t respectively. The prior knowledge of a transitivity between three unknown predicates can be specified via r(X, Y) :– s(X, Z), t(Z, Y). We call this a parameterized rule, as the corresponding predicates are unknown and their representations are learned from data. Such a rule can be used for proofs at training and test time in the same way as any other given rule. During training, the predicate representations of parameterized rules are optimized jointly with all other subsymbolic representations. Thus, the model can adapt parameterized rules such that proofs for known facts succeed while proofs for sampled unknown ground atoms fail, thereby inducing rules of predefined structures like the one above. Inspired by [33], we use rule templates for conveniently defining the structure of multiple parameterized rules by specifying the number of parameterized rules that should be instantiated for a given rule structure (see Appendix E for examples). For inspection after training, we decode a parameterized rule by searching for the closest representations of known predicates. In addition, we provide users with a rule confidence by taking the minimum similarity between unknown and decoded predicate representations using the RBF kernel in unify. This confidence score is an upper bound on the proof success score that can be achieved when the induced rule is used in proofs.

4 Optimization

In this section, we present the basic training loss that we use for NTPs, a training loss where a neural link prediction model is used as an auxiliary task, as well as various computational optimizations.

4.1 Training Objective

Let K be the set of known facts in a given KB. Usually, we do not observe negative facts and thus resort to sampling corrupted ground atoms as done in previous work [34]. Specifically, for every [s, i, j] ∈ K we obtain corrupted ground atoms [s, î, j], [s, i, ĵ], [s, î, ĵ] ∉ K by sampling î and ĵ from the set of constants. These corrupted ground atoms are resampled in every iteration of training, and we denote the set of known and corrupted ground atoms together with their target score (1.0 for known ground atoms and 0.0 for corrupted ones) as T. We use the negative log-likelihood of the proof success score as the loss function for an NTP with parameters θ and a given KB K:

L_ntp^K_θ = Σ_{([s,i,j], y) ∈ T} −y log(ntp^K_θ([s, i, j], d)ρ) − (1 − y) log(1 − ntp^K_θ([s, i, j], d)ρ)

where [s, i, j] is a training ground atom and y its target proof success score. Note that since in our application all training facts are ground atoms, we only make use of the proof success score ρ and not the substitution list of the resulting proof state. We can prove known facts trivially by a unification with themselves, resulting in no parameter updates during training and hence no generalization. Therefore, during training we mask the calculation of the unification success of a known ground atom that we want to prove. Specifically, we set the unification score to 0 to temporarily hide that training fact and assume it can be proven from other facts and rules in the KB.
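A sketch of this training objective with corruption sampling; `ntp_score` is a stand-in callable for the differentiable proof success score, and all names are our own.

```python
import numpy as np

# A sketch of the Section 4.1 objective: negative log-likelihood of the proof
# success score over known facts and freshly sampled corruptions.
def nll_loss(facts, constants, ntp_score, rng=np.random.default_rng(0)):
    loss = 0.0
    for (s, i, j) in facts:
        loss -= np.log(ntp_score((s, i, j)))             # target y = 1
        corrupted = [(s, rng.choice(constants), j),      # corrupt first arg
                     (s, i, rng.choice(constants)),      # corrupt second arg
                     (s, rng.choice(constants), rng.choice(constants))]
        for atom in corrupted:
            if atom not in facts:                        # keep unknown atoms only
                loss -= np.log(1.0 - ntp_score(atom))    # target y = 0
    return loss
```

Calling this once per epoch resamples the corruptions, mirroring the description above.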
4.2 Neural Link Prediction as Auxiliary Loss
At the beginning of training, all subsymbolic representations are initialized randomly. When unifying a goal with all facts in a KB, we consequently get very noisy success scores in early stages of training. Moreover, as only the maximum success score results in gradient updates for the respective subsymbolic representations along the maximum proof path, it can take a long time until NTPs learn to place similar symbols close to each other in the vector space and to make effective use of rules. To speed up the learning of subsymbolic representations, we train NTPs jointly with ComplEx [7] (Appendix B). ComplEx and the NTP share the same subsymbolic representations, which is feasible as the RBF kernel in unify is also defined for complex vectors. While the NTP is responsible for multi-hop reasoning, the neural link prediction model learns to score ground atoms locally. At test time, only the NTP is used for predictions. Thus, the training loss for ComplEx can be seen as an auxiliary loss for the subsymbolic representations learned by the NTP. We term the resulting model NTPλ. Based on the loss in Section 4.1, the joint training loss is defined as

$\mathcal{L}_{\text{ntp}\lambda_\theta^\mathcal{K}} = \mathcal{L}_{\text{ntp}_\theta^\mathcal{K}} + \sum_{([s,i,j],\, y)\, \in\, \mathcal{T}} -y \log\big(\text{complex}_\theta(s, i, j)\big) - (1-y) \log\big(1 - \text{complex}_\theta(s, i, j)\big)$

where $[s, i, j]$ is a training atom and y its ground truth target.

4.3 Computational Optimizations
NTPs as described above suffer from severe computational limitations, since the neural network represents all possible proofs up to some predefined depth. In contrast to symbolic backward chaining, where a proof can be aborted as soon as unification fails, in differentiable proving we only get a unification failure for atoms whose arity does not match or when we detect a cyclic rule application. We propose two optimizations to speed up NTPs in the Appendix. First, we make use of modern GPUs by batch processing many proofs in parallel (Appendix C). Second, we exploit the sparseness of gradients caused by the min and max operations used in unification and proof aggregation, respectively, to derive a heuristic for a truncated forward and backward pass that drastically reduces the number of proofs that have to be considered for calculating gradients (Appendix D).
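Before turning to the experiments, here is a minimal sketch of the auxiliary ComplEx scorer used in Section 4.2. In the standard formulation of Trouillon et al. [7], the probability of a triple is the sigmoid of the real part of a trilinear product of complex embeddings; embedding shapes and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 100                                   # embedding dimension (assumption)
n_predicates, n_constants = 50, 1000
W = rng.normal(size=(n_predicates, k)) + 1j * rng.normal(size=(n_predicates, k))
E = rng.normal(size=(n_constants, k)) + 1j * rng.normal(size=(n_constants, k))

def complex_score(s, i, j):
    # Re(<w_s, e_i, conj(e_j)>), squashed to (0, 1) with a sigmoid
    phi = np.sum(W[s] * E[i] * np.conj(E[j])).real
    return 1.0 / (1.0 + np.exp(-phi))
```

Because the RBF kernel in unify is also defined for complex vectors, the same W and E can be shared between this local scorer and the NTP's unification.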
5 Experiments
Consistent with previous work, we carry out experiments on four benchmark KBs and compare ComplEx with the NTP and NTPλ in terms of area under the precision-recall curve (AUC-PR) on the Countries KB, and Mean Reciprocal Rank (MRR) and HITS@m [34] on the other KBs described below. Training details, including hyperparameters and rule templates, can be found in Appendix E.

Countries  The Countries KB is a dataset introduced by [35] for testing the reasoning capabilities of neural link prediction models. It consists of 244 countries, 5 regions (e.g. EUROPE), 23 subregions (e.g. WESTERN EUROPE, NORTHERN AMERICA), and 1158 facts about the neighborhood of countries and the location of countries and subregions. We follow [36] and split countries randomly into a training set of 204 countries (train), a development set of 20 countries (dev), and a test set of 20 countries (test), such that every dev and test country has at least one neighbor in the training set. Subsequently, three different task datasets are created. For all tasks, the goal is to predict locatedIn(c, r) for every test country c and all five regions r, but the access to training atoms in the KB varies:

S1: All ground atoms locatedIn(c, r) where c is a test country and r is a region are removed from the KB. Since information about the subregion of test countries is still contained in the KB, this task can be solved by using the transitivity rule locatedIn(X, Y) :– locatedIn(X, Z), locatedIn(Z, Y).

S2: In addition to S1, all ground atoms locatedIn(c, s) are removed, where c is a test country and s is a subregion. The location of test countries needs to be inferred from the location of their neighboring countries: locatedIn(X, Y) :– neighborOf(X, Z), locatedIn(Z, Y). This task is more difficult than S1, as neighboring countries might not be in the same region, so the rule above will not always hold.

S3: In addition to S2, all ground atoms locatedIn(c, r), where r is a region and c is a training country that has a test or dev country as a neighbor, are also removed. The location of test countries can for instance be inferred using the three-hop rule locatedIn(X, Y) :– neighborOf(X, Z), neighborOf(Z, W), locatedIn(W, Y).

Kinship, Nations & UMLS  We use the Nations, Alyawarra kinship (Kinship) and Unified Medical Language System (UMLS) KBs from [10]. We left out the Animals dataset as it only contains unary predicates and can thus not be used for evaluating multi-hop reasoning. Nations contains 56 binary predicates, 111 unary predicates, 14 constants and 2565 true facts; Kinship contains 26 predicates, 104 constants and 10686 true facts; and UMLS contains 49 predicates, 135 constants and 6529 true facts. Since our baseline ComplEx cannot deal with unary predicates, we remove unary atoms from Nations. We split every KB into 80% training facts, 10% development facts and 10% test facts. For evaluation, we take a test fact and corrupt its first and second argument in all possible ways such that the corrupted fact is not in the original KB. Subsequently, we predict a ranking of every test fact and its corruptions to calculate MRR and HITS@m.
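The ranking protocol just described can be sketched in a few lines of Python; the optimistic tie-breaking and all names below are our choices, not the authors' evaluation code.

```python
def rank_of(fact, corruptions, score):
    # rank of the true fact among its corruptions under the model's score
    target = score(fact)
    return 1 + sum(score(c) > target for c in corruptions)

def mrr_hits(test_facts, constants, known, score, ms=(1, 3, 10)):
    mrr, hits, n = 0.0, {m: 0.0 for m in ms}, 0
    for (s, i, j) in test_facts:
        # corrupt first and second argument in all ways not present in the KB
        for corruptions in ([(s, x, j) for x in constants if (s, x, j) not in known],
                            [(s, i, x) for x in constants if (s, i, x) not in known]):
            r = rank_of((s, i, j), corruptions, score)
            mrr += 1.0 / r
            for m in ms:
                hits[m] += (r <= m)
            n += 1
    return mrr / n, {m: hits[m] / n for m in ms}
```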
6 Results and Discussion
Results for the different model variants on the benchmark KBs are shown in Table 1.

Table 1: AUC-PR results on Countries, and MRR and HITS@m on Kinship, Nations, and UMLS, together with examples of induced rules and their confidence.

Countries (AUC-PR):
               ComplEx        NTP            NTPλ
  S1           99.37 ± 0.4    90.83 ± 15.4   100.00 ± 0.0
  S2           87.95 ± 2.8    87.40 ± 11.7    93.04 ± 0.4
  S3           48.44 ± 6.3    56.68 ± 17.6    77.26 ± 17.0
  Induced rules: 0.90 locatedIn(X,Y) :– locatedIn(X,Z), locatedIn(Z,Y).
                 0.63 locatedIn(X,Y) :– neighborOf(X,Z), locatedIn(Z,Y).
                 0.32 locatedIn(X,Y) :– neighborOf(X,Z), neighborOf(Z,W), locatedIn(W,Y).

Kinship:
               ComplEx   NTP    NTPλ
  MRR          0.81      0.60   0.80
  HITS@1       0.70      0.48   0.76
  HITS@3       0.89      0.70   0.82
  HITS@10      0.98      0.78   0.89
  Induced rules: 0.98 term15(X,Y) :– term5(Y,X).
                 0.97 term18(X,Y) :– term18(Y,X).
                 0.86 term4(X,Y) :– term4(Y,X).
                 0.73 term12(X,Y) :– term10(X,Z), term12(Z,Y).

Nations:
               ComplEx   NTP    NTPλ
  MRR          0.75      0.75   0.74
  HITS@1       0.62      0.62   0.59
  HITS@3       0.84      0.86   0.89
  HITS@10      0.99      0.99   0.99
  Induced rules: 0.68 blockpositionindex(X,Y) :– blockpositionindex(Y,X).
                 0.46 expeldiplomats(X,Y) :– negativebehavior(X,Y).
                 0.38 negativecomm(X,Y) :– commonbloc0(X,Y).
                 0.38 intergovorgs3(X,Y) :– intergovorgs(Y,X).

UMLS:
               ComplEx   NTP    NTPλ
  MRR          0.89      0.88   0.93
  HITS@1       0.82      0.82   0.87
  HITS@3       0.96      0.92   0.98
  HITS@10      1.00      0.97   1.00
  Induced rules: 0.88 interacts_with(X,Y) :– interacts_with(X,Z), interacts_with(Z,Y).
                 0.77 isa(X,Y) :– isa(X,Z), isa(Z,Y).
                 0.71 derivative_of(X,Y) :– derivative_of(X,Z), derivative_of(Z,Y).

Another method for inducing rules in a differentiable way for automated KB completion has been introduced recently by [37], and our evaluation setup is equivalent to their Protocol II. However, our neural link prediction baseline, ComplEx, already achieves much higher HITS@10 results (1.00 vs. 0.70 on UMLS and 0.98 vs. 0.73 on Kinship). We thus focus on the comparison of NTPs with ComplEx. First, we note that vanilla NTPs alone do not work particularly well compared to ComplEx. They only outperform ComplEx on Countries S3 and Nations, but not on Kinship or UMLS. This demonstrates the difficulty of learning subsymbolic representations in a differentiable prover from unification alone, and the need for auxiliary losses. The NTPλ with ComplEx as auxiliary loss outperforms the other models in the majority of tasks. The difference in AUC-PR between ComplEx and NTPλ is significant for all Countries tasks (p < 0.0001).
A major advantage of NTPs is that we can inspect induced rules, which provide us with an interpretable representation of what the model has learned. The right column of Table 1 shows examples of rules induced by NTPλ (note that predicates on Kinship are anonymized). For Countries, NTPλ recovered those rules that are needed for solving the three different tasks. On UMLS, NTPλ induced transitivity rules. Such relationships are particularly hard to encode by neural link prediction models like ComplEx, as they are optimized to locally predict the score of a fact.

7 Related Work
Combining neural and symbolic approaches to relational learning and reasoning has a long tradition and has led to various proposed architectures over the past decades (see [38] for a review). Early proposals for neural-symbolic networks are limited to propositional rules (e.g., EBL-ANN [39], KBANN [40] and C-IL2P [41]). Other neural-symbolic approaches focus on first-order inference, but do not learn subsymbolic vector representations from training facts in a KB (e.g., SHRUTI [42], Neural Prolog [43], CLIP++ [44], Lifted Relational Neural Networks [45], and TensorLog [46]). Logic Tensor Networks [47] are in spirit similar to NTPs, but need to fully ground first-order logic rules. However, they support function terms, whereas NTPs currently only support function-free terms. Recent question-answering architectures such as [15, 17, 18] translate query representations implicitly into a vector space without explicit rule representations and can thus not easily incorporate domain-specific knowledge.
In addition, NTPs are related to random walk [48, 49, 11, 12] and path encoding models [14, 16]. However, instead of aggregating paths from random walks or encoding paths to predict a target predicate, reasoning steps in NTPs are explicit and only unification uses subsymbolic representations. This allows us to induce interpretable rules, as well as to incorporate prior knowledge either in the form of rules or in the form of rule templates which define the structure of logical relationships that we expect to hold in a KB. Another line of work [50-54] regularizes distributed representations via domain-specific rules, but these approaches do not learn such rules from data and only support a restricted subset of first-order logic. NTPs are constructed from Prolog's backward chaining and are thus related to Unification Neural Networks [55, 56]. However, NTPs operate on vector representations of symbols instead of scalar values, which are more expressive. As NTPs can learn rules from data, they are related to ILP systems such as FOIL [32], Sherlock [57] and meta-interpretive learning of higher-order dyadic Datalog (Metagol) [58].
While these ILP systems operate on symbols and search over the discrete space of logical rules, NTPs work with subsymbolic representations and induce rules using gradient descent. Recently, [37] introduced a differentiable rule learning system based on TensorLog and a neural network controller similar to LSTMs [59]. Their method is more scalable than the NTPs introduced here. However, on UMLS and Kinship our baseline already achieved stronger generalization by learning subsymbolic representations. Still, scaling NTPs to larger KBs so as to compete with more scalable relational learning methods is an open problem that we seek to address in future work.

8 Conclusion and Future Work
We proposed an end-to-end differentiable prover for automated KB completion that operates on subsymbolic representations. To this end, we used Prolog's backward chaining algorithm as a recipe for recursively constructing neural networks that can be used to prove queries to a KB. Specifically, we introduced a differentiable unification operation between vector representations of symbols. The constructed neural network allowed us to compute the gradient of proof successes with respect to vector representations of symbols, and thus enabled us to train subsymbolic representations end-to-end from facts in a KB, and to induce function-free first-order logic rules using gradient descent. On benchmark KBs, our model outperformed ComplEx, a state-of-the-art neural link prediction model, on three out of four KBs while at the same time inducing interpretable rules.
To overcome the computational limitations of the end-to-end differentiable prover introduced in this paper, we want to investigate the use of hierarchical attention [25] and reinforcement learning methods such as Monte Carlo tree search [60, 61], which have been used for learning to play Go [62] and for chemical synthesis planning [63]. In addition, we plan to support function terms in the future. Based on [64], we are furthermore interested in applying NTPs to automated proving of mathematical theorems, either in logical or natural language form, similar to recent approaches by [65] and [66].

Acknowledgements
We thank Pasquale Minervini, Tim Dettmers, Matko Bosnjak, Johannes Welbl, Naoya Inoue, Kai Arulkumaran, and the anonymous reviewers for very helpful comments on drafts of this paper. This work has been supported by a Google PhD Fellowship in Natural Language Processing, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award.

References
[1] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. Factorizing YAGO: scalable machine learning for linked data. In Proceedings of the 21st World Wide Web Conference 2012, WWW 2012, Lyon, France, April 16-20, 2012, pages 271-280, 2012. doi: 10.1145/2187836.2187874.
[2] Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. Relation extraction with matrix factorization and universal schemas. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 9-14, 2013, Westin Peachtree Plaza Hotel, Atlanta, Georgia, USA, pages 74-84, 2013.
[3] Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 926-934, 2013.
[4] Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek. Typed tensor decomposition of knowledge bases for relation extraction. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1568-1579, 2014.
[5] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In International Conference on Learning Representations (ICLR), 2015.
[6] Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1499-1509, 2015.
[7] Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 2071-2080, 2016.
[8] Hervé Gallaire and Jack Minker, editors. Logic and Data Bases, Symposium on Logic and Data Bases, Centre d'études et de recherches de Toulouse, 1977, Advances in Data Base Theory, New York, 1978. Plenum Press. ISBN 0-306-40060-X.
[9] Stephen Muggleton. Inductive logic programming. New Generation Comput., 8(4):295-318, 1991. doi: 10.1007/BF03037089.
[10] Stanley Kok and Pedro M. Domingos. Statistical predicate invention. In Machine Learning, Proceedings of the Twenty-Fourth International Conference (ICML 2007), Corvallis, Oregon, USA, June 20-24, 2007, pages 433-440, 2007. doi: 10.1145/1273496.1273551.
[11] Matt Gardner, Partha Pratim Talukdar, Bryan Kisiel, and Tom M. Mitchell. Improving learning and inference in a large knowledge-base using latent syntactic cues. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 833-838, 2013.
[12] Matt Gardner, Partha Pratim Talukdar, Jayant Krishnamurthy, and Tom M. Mitchell. Incorporating vector space similarity in random walk inference over knowledge bases. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 397-406, 2014.
[13] Islam Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, and Raymond J. Mooney. Representing meaning with a combination of logical and distributional models. Computational Linguistics, 2017.
[14] Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. Compositional vector space models for knowledge base completion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 156-166, 2015.
[15] Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. Towards neural network-based reasoning. CoRR, abs/1508.05508, 2015.
[16] Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. Chains of reasoning over entities, relations, and text using recurrent neural networks.
In Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2017.
[17] Dirk Weissenborn. Separating answers from queries for neural reading comprehension. CoRR, abs/1607.03316, 2016.
[18] Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. ReasoNet: Learning to stop reading in machine comprehension. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016, co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016, 2016.
[19] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. CoRR, abs/1410.5401, 2014.
[20] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014.
[21] Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1828-1836, 2015.
[22] Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 190-198, 2015.
[23] Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In International Conference on Learning Representations (ICLR), 2016.
[24] Scott E. Reed and Nando de Freitas. Neural programmer-interpreters. In International Conference on Learning Representations (ICLR), 2016.
[25] Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3981-3989, 2016.
[26] Matko Bosnjak, Tim Rocktäschel, Jason Naradowsky, and Sebastian Riedel. Programming with a differentiable forth interpreter. In International Conference on Machine Learning (ICML), 2017.
[27] Stuart J. Russell and Peter Norvig. Artificial Intelligence - A Modern Approach (3. internat. ed.). Pearson Education, 2010. ISBN 978-0-13-207148-2.
[28] Lise Getoor. Introduction to statistical relational learning. MIT Press, 2007.
[29] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 1545-1554, 2016.
[30] David S. Broomhead and David Lowe. Radial basis functions, multi-variable functional interpolation and adaptive networks. Technical report, DTIC Document, 1988.
[31] Allen Van Gelder. Efficient loop detection in Prolog using the tortoise-and-hare technique. J. Log. Program., 4(1):23-31, 1987. doi: 10.1016/0743-1066(87)90020-3.
[32] J. Ross Quinlan. Learning logical definitions from relations. Machine Learning, 5:239-266, 1990. doi: 10.1007/BF00117105.
[33] William Yang Wang and William W. Cohen. Joint information extraction and reasoning: A scalable statistical relational learning approach.
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 355-364, 2015.
[34] Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787-2795, 2013.
[35] Guillaume Bouchard, Sameer Singh, and Théo Trouillon. On approximate reasoning capabilities of low-rank vector spaces. In Proceedings of the 2015 AAAI Spring Symposium on Knowledge Representation and Reasoning (KRR): Integrating Symbolic and Neural Approaches, 2015.
[36] Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. Holographic embeddings of knowledge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 1955-1961, 2016.
[37] Fan Yang, Zhilin Yang, and William W. Cohen. Differentiable learning of logical rules for knowledge base completion. CoRR, abs/1702.08367, 2017.
[38] Artur S. d'Avila Garcez, Krysia Broda, and Dov M. Gabbay. Neural-symbolic learning systems: foundations and applications. Springer Science & Business Media, 2012.
[39] Jude W. Shavlik and Geoffrey G. Towell. An approach to combining explanation-based and neural learning algorithms. Connection Science, 1(3):231-253, 1989.
[40] Geoffrey G. Towell and Jude W. Shavlik. Knowledge-based artificial neural networks. Artif. Intell., 70(1-2):119-165, 1994. doi: 10.1016/0004-3702(94)90105-8.
[41] Artur S. d'Avila Garcez and Gerson Zaverucha. The connectionist inductive learning and logic programming system. Appl. Intell., 11(1):59-77, 1999. doi: 10.1023/A:1008328630915.
[42] Lokendra Shastri. Neurally motivated constraints on the working memory capacity of a production system for parallel processing: Implications of a connectionist model based on temporal synchrony. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society: July 29 to August 1, 1992, Cognitive Science Program, Indiana University, Bloomington, volume 14, page 159. Psychology Press, 1992.
[43] Liya Ding. Neural Prolog - the concepts, construction and mechanism. In Systems, Man and Cybernetics, 1995. Intelligent Systems for the 21st Century, IEEE International Conference on, volume 4, pages 3603-3608. IEEE, 1995.
[44] Manoel V. M. França, Gerson Zaverucha, and Artur S. d'Avila Garcez. Fast relational learning using bottom clause propositionalization with artificial neural networks. Machine Learning, 94(1):81-104, 2014. doi: 10.1007/s10994-013-5392-1.
[45] Gustav Sourek, Vojtech Aschenbrenner, Filip Železný, and Ondrej Kuzelka. Lifted relational neural networks. In Proceedings of the NIPS Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches, co-located with the 29th Annual Conference on Neural Information Processing Systems (NIPS 2015), Montreal, Canada, December 11-12, 2015, 2015.
[46] William W. Cohen. TensorLog: A differentiable deductive database. CoRR, abs/1605.06523, 2016.
[47] Luciano Serafini and Artur S. d'Avila Garcez. Logic tensor networks: Deep learning and logical reasoning from data and knowledge.
In Proceedings of the 11th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy'16), co-located with the Joint Multi-Conference on Human-Level Artificial Intelligence (HLAI 2016), New York City, NY, USA, July 16-17, 2016, 2016.
[48] Ni Lao, Tom M. Mitchell, and William W. Cohen. Random walk inference and learning in a large scale knowledge base. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 529-539, 2011.
[49] Ni Lao, Amarnag Subramanya, Fernando C. N. Pereira, and William W. Cohen. Reading the web with learned syntactic-semantic inference rules. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL 2012, July 12-14, 2012, Jeju Island, Korea, pages 1017-1026, 2012.
[50] Tim Rocktäschel, Matko Bosnjak, Sameer Singh, and Sebastian Riedel. Low-dimensional embeddings of logic. In ACL Workshop on Semantic Parsing (SP'14), 2014.
[51] Tim Rocktäschel, Sameer Singh, and Sebastian Riedel. Injecting logical background knowledge into embeddings for relation extraction. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1119-1129, 2015.
[52] Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. In International Conference on Learning Representations (ICLR), 2016.
[53] Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard H. Hovy, and Eric P. Xing. Harnessing deep neural networks with logic rules. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers, 2016.
[54] Thomas Demeester, Tim Rocktäschel, and Sebastian Riedel. Lifted rule injection for relation embeddings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1389-1399, 2016.
[55] Ekaterina Komendantskaya. Unification neural networks: unification by error-correction learning. Logic Journal of the IGPL, 19(6):821-847, 2011. doi: 10.1093/jigpal/jzq012.
[56] Steffen Hölldobler. A structured connectionist unification algorithm. In Proceedings of the 8th National Conference on Artificial Intelligence, Boston, Massachusetts, July 29 - August 3, 1990, 2 Volumes, pages 587-593, 1990.
[57] Stefan Schoenmackers, Jesse Davis, Oren Etzioni, and Daniel S. Weld. Learning first-order Horn clauses from web text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP 2010, 9-11 October 2010, MIT Stata Center, Massachusetts, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1088-1098, 2010.
[58] Stephen H. Muggleton, Dianhuan Lin, and Alireza Tamaddoni-Nezhad. Meta-interpretive learning of higher-order dyadic Datalog: Predicate invention revisited. Machine Learning, 100(1):49-73, 2015.
[59] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. doi: 10.1162/neco.1997.9.8.1735.
[60] Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In Computers and Games, 5th International Conference, CG 2006, Turin, Italy, May 29-31, 2006,
Revised Papers, pages 72-83, 2006. doi: 10.1007/978-3-540-75538-8_7.
[61] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In Machine Learning: ECML 2006, 17th European Conference on Machine Learning, Berlin, Germany, September 18-22, 2006, Proceedings, pages 282-293, 2006. doi: 10.1007/11871842_29.
[62] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016. doi: 10.1038/nature16961.
[63] Marwin H. S. Segler, Mike Preuß, and Mark P. Waller. Towards "AlphaChem": Chemical synthesis planning with tree search and deep neural network policies. CoRR, abs/1702.00020, 2017.
[64] Mark E. Stickel. A Prolog technology theorem prover. New Generation Comput., 2(4):371-383, 1984. doi: 10.1007/BF03037328.
[65] Cezary Kaliszyk, François Chollet, and Christian Szegedy. HolStep: A machine learning dataset for higher-order logic theorem proving. In International Conference on Learning Representations (ICLR), 2017.
[66] Sarah M. Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof search. In International Conferences on Logic for Programming, Artificial Intelligence and Reasoning (LPAR), 2017.
[67] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
[68] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2010, Chia Laguna Resort, Sardinia, Italy, May 13-15, 2010, pages 249-256, 2010.
[69] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016.
Analog Cochlear Model for Multiresolution Speech Analysis

Weimin Liu*, Andreas G. Andreou and Moise H. Goldstein, Jr.
Department of Electrical and Computer Engineering
The Johns Hopkins University, Baltimore, Maryland 21218 USA

Abstract
This paper discusses the parameterization of speech by an analog cochlear model. The tradeoff between time and frequency resolution is viewed as the fundamental difference between conventional spectrographic analysis and cochlear signal processing for broadband, rapidly changing signals. The model's response exhibits a wavelet-like analysis in the scale domain that preserves good temporal resolution; the frequency of each spectral component in a broadband signal can be accurately determined from the interpeak intervals in the instantaneous firing rates of auditory fibers. Such properties of the cochlear model are demonstrated with natural speech and synthetic complex signals.

1 Introduction
As a non-parametric tool, the spectrogram, or short-term Fourier transform, is widely used in analyzing non-stationary signals such as speech. Usually a window is applied to the running signal and then the Fourier transform is performed. The specific window applied determines the tradeoff between temporal and spectral resolutions of the analysis, as indicated by the uncertainty principle [1]. Since only one window is used, this tradeoff is identical for all spectral components in the signal being analyzed. This implies that conventional spectrographic signal representation and its variations are uniform-resolution analysis methods. Such is also the case in parametric analysis methods, such as linear prediction coding (LPC).

*Present address: Hughes Network Systems, Inc., 11717 Exploration Lane, Germantown, Maryland 20876 USA

In spectrographic analysis of speech, it is frequently necessary to vary the window length, or equivalently the bandwidth, in order to obtain appropriate resolution in the time or frequency domain. Such a practice has the effect of changing the duration-bandwidth tradeoff. Broadband (short window) analysis gives better temporal resolution, to the extent that vertical voice pitch stripes can be seen; narrowband (long window) analysis can result in better spectral resolution, so that the harmonics of the pitch become apparent. A question arises: if the duty of the biological cochlea were to map a signal onto the time-frequency plane, should it be broadband or narrowband? Neurophysiological data from the study of the mammalian auditory periphery suggest that the cochlear filter is effectively broadband with regard to the harmonics in synthetic voiced speech, and a precise frequency estimation of a spectral component, such as a formant, can be determined from the analysis of the temporal patterns in the instantaneous firing rates (IFRs) of auditory nerve fibers (neurograms) [2]. A similar representation was also considered by Shamma [3].

In this paper, we will first have a close look at the spectrogram of speech signals. Then the relevant features of a cochlear model [5, 6] are described and speech processing by the model is presented, illustrating good resolution in time and frequency. Careful examination of the model's output reveals that indeed it performs multiresolution analysis.

2 Speech Spectrogram
[Figure 1: Broadband and Narrowband spectrogram panels, 0-500 ms; image omitted.]
Figure 1: Broadband (6.4 ms Hamming window) and narrowband (25.6 ms window) spectrograms for the word "saint" spoken by a male speaker.
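As a concrete illustration of the window-length tradeoff behind Figure 1, the following Python sketch (using NumPy/SciPy, our choice of tools, not the authors') computes spectrograms with the same two Hamming window lengths; the sampling rate and the stand-in test signal are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000                                   # assumed sampling rate
t = np.arange(0, 0.5, 1.0 / fs)
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)  # stand-in for speech

for win_ms in (6.4, 25.6):                   # broadband vs. narrowband windows
    nperseg = int(round(win_ms * 1e-3 * fs))
    f, tt, Sxx = spectrogram(x, fs=fs, window="hamming",
                             nperseg=nperseg, noverlap=nperseg // 2)
    df, dt = f[1] - f[0], tt[1] - tt[0]
    print(f"{win_ms} ms window: {df:.0f} Hz frequency bins, {dt * 1e3:.1f} ms hops")
```

The short window yields coarse frequency bins but fine time steps; the long window reverses the tradeoff, exactly the uncertainty-principle behavior discussed above.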
Figure 1 shows the broadband and narrowband spectrograms of the word "saint" spoken by a male speaker. The broadband spectrogram is usually the choice for speech analysis, for several reasons. First, the fundamental frequency is considered of insignificant importance in understanding many spoken languages. Second, broadband analysis preserves good temporal resolution, and meanwhile the representation of formants has been considered adequate. The adequacy of this notion has been seriously challenged, especially for rapidly varying events in real speech [4]. Although the vertical striation in the broadband spectrogram indicates the pitch period, to accurately estimate the fundamental frequency F0, it is often desirable to look at the narrowband spectrogram, in which the harmonics of F0 are shown.

Ideally, a speech analysis method should provide multiple resolutions, so that both formant and harmonic information are represented simultaneously. To further emphasize this, a synthetic signal of tone/chirp pairs was generated. The synthetic signal (Figure 2) consists of tone and chirp pairs that are separated by 100 Hz. There are two 10 ms gaps in both the high- and low-frequency tones; the chirp pair sweeps from 2900 Hz-3000 Hz down to 200 Hz-300 Hz in 100 ms. The broadband spectrogram clearly shows the temporal gaps but fails to give a clear representation in frequency; the situation is reversed in the narrowband spectrogram.

[Figure 2: tone/chirp-pair waveform and spectrogram panels, 0-300 ms, up to 4 kHz; image omitted.]
Figure 2: The synthetic tone/chirp pair and its broadband (6.4 ms Hamming window) (top) and narrowband (25.6 ms window) (bottom) spectrograms.
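A test signal of this kind is easy to synthesize; the sketch below follows the description above (tone pairs 100 Hz apart with two 10 ms gaps, plus a 2900-3000 Hz to 200-300 Hz chirp pair over 100 ms). The sampling rate, the exact tone frequencies, and the gap/chirp timing are our assumptions, as the paper does not list them.

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.3, 1.0 / fs)

def tone(f, t):
    return np.sin(2 * np.pi * f * t)

def sweep(f0, f1, t0, dur, t):
    # linear chirp from f0 to f1 over `dur`, active only in [t0, t0 + dur)
    tt = np.clip(t - t0, 0.0, dur)
    k = (f1 - f0) / dur
    return np.sin(2 * np.pi * (f0 * tt + 0.5 * k * tt**2)) * ((t >= t0) & (t < t0 + dur))

# low and high tone pairs, each pair separated by 100 Hz (frequencies assumed)
x = tone(300, t) + tone(400, t) + tone(2500, t) + tone(2600, t)
for gap_start in (0.10, 0.20):            # two 10 ms gaps in the tones
    x[(t >= gap_start) & (t < gap_start + 0.01)] = 0.0
x += sweep(2900, 200, 0.05, 0.10, t) + sweep(3000, 300, 0.05, 0.10, t)
```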
3 The Analog Cochlear Model
Parameterization of speech using software cochlear models has been pursued by several researchers; please refer to [5, 6] for a literature survey. The alternative to software simulations on engineering workstations is analog VLSI [7]. Computationally, analog VLSI models can be more effective than software simulations. They are also further constrained by fundamental physical limitations and scaling laws; this may direct the development of more realistic models. The constraints imposed by the technology are: power dissipation, physical extent of computing hardware, density of interconnects, precision and noise limitations in the characteristics of the basic elements, signal dynamic range, and robust behavior and stability.

Analog VLSI cochlear models have been reported by Lyon and Mead [10] at Caltech, with subsequent work by Lazzaro [11] and Watts [12]. Our model [5] consists of the middle ear, the cochlear filter bank, and hair cell/synapse modules. All the modules in the model are based on detailed biophysical and physiological studies, and it builds on the software simulation and the work in our laboratory by Payton [9]. At the present time the model is implemented both as a software simulation package and as a set of two analog VLSI chips [6] to minimize the simulation times. Even though the silicon implementation of the model is completely functional, adequate interfaces to standard engineering workstations have not yet been fully developed, and therefore here we will focus on results obtained through the software simulations. The design of the cochlear filter bank structure is the result of the effective bandwidth concept.

The filter structure is flexible enough that an appropriate set of parameters can be found to fit the neurophysiological data. In particular, the cochlear filter bank is tuned so that the model output closely resembles the auditory fibers' instantaneous firing rates (IFRs) in response to synthetic speech signals [2]. To do so, a fourth-order section is used instead of the second-order section of our earlier work [5]. Figure 3 shows the response amplitude and group delay of the filter bank that has been calibrated in this manner.

[Figure 3: amplitude and group-delay curves of the filter bank versus frequency (100-10000 Hz); image omitted.]
Figure 3: The amplitude and group delay of the cochlear filter bank. The curves that have higher peak frequencies in the amplitude plot, and those having smaller group delays, are the filter channels representing locations near the base of the cochlea.

The hair cells are the receptor cells of the hearing system. The function of hair cells and synapses in terms of signal processing is more than just rectification; besides the strong compressive nonlinearity in the mechano-electrical transduction, there are also rapid and short-term additive adaptation properties, as seen in the discharge patterns of auditory nerve fibers. Since the auditory fibers have a limited dynamic range of only 20-30 dB, magnitude compression and adaptation become necessary in the transmission of acoustical signals of much wider dynamic range. A neurotransmitter substance reservoir model of the hair cell and synapses, proposed by Smith and Brachman [8], that characterizes the generation of instantaneous firing rates of nerve fibers has been incorporated in the model. This is computationally very demanding, and the model benefits considerably from the analog VLSI implementation. The circuit output resembles closely the response of mammalian auditory nerve fibers [5].

4 Multiresolution Analysis
The conventional Fourier transform can be considered as a constant-bandwidth analysis scheme, in which the absolute frequency resolution is identical for all frequencies. A wavelet transform, on the other hand, is constant-Q in nature: the relative bandwidth is constant. The cochlear filter that is tuned to fit the experimental data is neither, but it is more closely related to the wavelet transform, even though it requires a higher Q at the base than at the apex. The response of the cochlear model is shown in Figure 4, in the form of a neurogram. Each trace shows the IFR of a channel whose characteristic frequency is indicated on the left. The gross temporal aspects of the neurogram are rather obvious. To obtain insight into the fine time structure of the IFRs, additional processing is needed. One possible feature that can be extracted is the inter-peak intervals (IPIs) in the IFR, which are directly related to the main spectral component in the output. The advantage of such a measure over the Fourier transform is that it is not affected by the higher harmonics in the IFR. An autocorrelation and peak-picking operation were performed on the IFR output to capture the inter-peak intervals (IPIs). The procedure was similar to that of Secker-Walker and Searle [2], except that the window lengths of the autocorrelation functions directly depend on the channel peak frequency. That is, for high-frequency channels, shorter windows were used.
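A minimal Python sketch of this IPI measurement is given below: each channel's IFR is autocorrelated over a window that shrinks with the channel's peak frequency, and the dominant interval is read off. The window sizing (four periods of the channel frequency) and the use of the strongest nonzero-lag autocorrelation value as a stand-in for the first peak are our assumptions, not the authors' exact procedure.

```python
import numpy as np

def dominant_ipi(ifr, fs, channel_cf):
    n = int(4 * fs / channel_cf)              # shorter windows at higher CF
    seg = ifr[:n] - np.mean(ifr[:n])
    ac = np.correlate(seg, seg, mode="full")[n - 1:]
    lag = 1 + np.argmax(ac[1:])               # strongest nonzero-lag value
    return lag / fs                           # inter-peak interval in seconds

def ipi_histogram(neurogram, fs, cfs, bins):
    # neurogram: one IFR waveform per channel, rows ordered by CF
    intervals = [dominant_ipi(ch, fs, cf) for ch, cf in zip(neurogram, cfs)]
    hist, _ = np.histogram(intervals, bins=bins)
    return hist                               # composite IPI histogram
```

Converting the intervals to frequencies (1/interval) and pooling across channels yields composite histograms like those at the bottom of Figures 4 and 5.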
To illustrate the multiresolution nature of cochlear processing, the IPI histograms across all channels are shown at each response time for the speech input "saint" (Figure 4). Both formant and pitch frequencies are clearly shown in the composite IPI histogram. Similarly, the cochlear model's response to the synthetic tone/chirp pairs is shown in Figure 5. The IPI histogram gives high temporal resolution for high frequencies and high spectral resolution for low frequencies, such that the 10 ms temporal gap in the high-frequency tones and the 100 Hz spacing between the two low-frequency tones are precisely represented.

However, in the high-frequency regions of the IPI histogram, the fact that each trace consists of a pair of tones or chirps is not clearly depicted. This limitation in spectral resolution is the result of the relatively broad bandwidths in the high-frequency channels of the basilar membrane filter. Undoubtedly the information about the 100 Hz spacing in the tone/chirp pairs is available in the IFRs; it can be estimated from the IFR envelope, which exhibits an obvious beat every 10 ms (1 s / 100 Hz) in the neurogram. Obtaining the beating information calls for a variable-resolution IPI analysis scheme. For speech signals, such analysis may be necessary in pitch frequency estimation when only the IFRs of high characteristic frequencies are available.

[Figure 4: neurogram traces for channels from 100 Hz to 8 kHz and IPI histograms in response to "saint", 0-600 ms; image omitted.]
Figure 4: (Top) Cochlear model output, in response to "saint," in the form of a neurogram. Each trace shows the IFR of one channel. Outputs from different channels are arranged according to their characteristic frequency. (Bottom) IPI histograms.

[Figure 5: neurogram traces and IPI histograms in response to the tone/chirp pairs, 0-300 ms; image omitted.]
Figure 5: (Top) Neurogram: cochlear model response to the tone/chirp pairs. (Bottom) IPI histograms.
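For completeness, here is a short sketch of how the 100 Hz beat mentioned above could be recovered from a high-CF channel's IFR envelope. The envelope-extraction method (zero-phase low-pass filtering) and the cutoff value are our choices; the paper does not specify a procedure.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def beat_period(ifr, fs, cutoff=400.0):
    # keep only the slow modulation (envelope) of the firing rate
    sos = butter(2, cutoff, btype="lowpass", fs=fs, output="sos")
    env = sosfiltfilt(sos, ifr)
    env = env - np.mean(env)
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    lag = 1 + np.argmax(ac[1:])               # strongest nonzero-lag peak
    return lag / fs                           # ~0.01 s for a 100 Hz beat
```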
5 Discussion and Conclusions

We have presented an analog cochlear model that is tuned to match physiological data of mammalian cochleas in response to complex sounds and uses a small number of realistic model parameters. The response of this model has good resolution in time and in frequency, suitable for speech and broadband signal analysis. Both the analysis performed by the cochlear model and that performed at subsequent stages (IPI) are carried out in the time domain. Processing information using temporal representations is pervasive in neural information processing systems. From an engineering perspective, it is advantageous because it results in architectures that can be efficiently implemented in analog VLSI. The cochlear model has been implemented as an analog VLSI system [6] operating in real time. Appropriate interfaces are also being developed that will enable the silicon model to communicate with standard engineering workstations. Furthermore, refinements of the model may find applications as high-performance front ends for various speech processing tasks.

References

[1] D. Gabor. (1953) A summary of communication theory. In W. Jackson (ed.), Communication Theory, 1-21. London: Butterworths Scientific Pub.

[2] H.E. Secker-Walker and C.L. Searle. (1990) Time-domain analysis of auditory-nerve-fiber firing rates. J. Acoust. Soc. Am. 88:1427-1436.

[3] S.A. Shamma. (1985) Speech processing in the auditory system. I: Representation of speech sounds in the responses of the auditory nerve. J. Acoust. Soc. Am. 78:1612-1621.

[4] H.F. Silverman and Y.-T. Lee. (1987) On the spectrographic representation of rapidly time-varying speech. Computer Speech and Language 2:63-86.

[5] W. Liu, A.G. Andreou and M.H. Goldstein. (1992) Voiced-speech representation by an analog silicon model of the auditory periphery. IEEE Trans. Neural Networks 3(3):477-487.

[6] W. Liu. (1992) An analog cochlear model: signal representation and VLSI realization. Ph.D. dissertation, The Johns Hopkins University.

[7] C.A. Mead. (1989) Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley.

[8] R.L. Smith and M.L. Brachman. (1982) Adaptation in auditory-nerve fibers: a revised model. Biological Cybernetics 44:107-120.

[9] K.L. Payton. (1988) Vowel processing by a model of the auditory periphery: a comparison to eighth-nerve responses. J. Acoust. Soc. Am. 83:155-162.

[10] R.F. Lyon and C.A. Mead. (1988) An analog electronic cochlea. IEEE Trans. Acoust. Speech, and Signal Process. 36:1119-1134.

[11] J. Lazzaro and C.A. Mead. (1989) A silicon model of auditory localization. Neural Computation 1(1):47-57.

[12] L. Watts. (1992) Cochlear mechanics: analysis and analog VLSI. Ph.D. dissertation, California Institute of Technology.